DevFest 2025 Notes: Developer Evolution in the Age of AI Agents

Foreword

I recently attended DevFest 2025, and this year’s theme undoubtedly revolved around AI Agents and Agentic Workflows. From simple Prompt Engineering to giving AI “hands and feet” to execute tasks, the entire paradigm of software development is undergoing a massive shift.

The agenda was packed, ranging from the practical experiences of independent developers to deep dives into Google’s official toolchain (Gemini CLI, ADK), and extending to enterprise-grade application architectures. I’ve compiled key notes from each session, recording the essence of this feast of knowledge, hoping to give friends who couldn’t attend a glimpse into the era of AI Agents.


1. A Geek’s Romance: Building a Ticketing System for Your Own Conference with AI

Speaker: Amos (Ka-Kien Long)

This session left a deep impression on me. Amos shared how he started from “a geek’s romance” and used AI to assist himself in building a fully functional ticketing and sales system, ezBundle.

The key takeaway is that while AI is powerful, it is ultimately an extension of the engineer’s will—fundamentals are king.

Why Build a Ticketing System from Scratch?

Although excellent platforms like K or A already exist, the speaker kept running into pain points they could not solve:

  • Lack of Customization Flexibility: Existing platforms are difficult to customize for specific details.
  • Cash Flow Needs: Organizing events often requires paying high venue fees in advance (e.g., the venue fee for WebConf 2025 was up to a million TWD). Existing platforms usually disburse funds only after the event ends, creating huge cash flow pressure for organizers.
  • “Solving your own problem might solve others’ problems too”: The goal of ezBundle was not to replace major platforms, but to solve the actual problems encountered when organizing his own events.

Real Productivity of AI Collaboration

The ezBundle system shows that AI-written code can be production-ready:

  • Astonishing Results from a One-Man Army: This project was completed by a single engineer (no PM, no designer) working evenings and weekends.
  • Maximized Efficiency: The entire project consists of about 290,000 lines of code. The speaker estimated that he personally wrote less than 1% of the code.
  • Operational Track Record: Since its launch on 2025/9/1, membership has exceeded 1,000, and transaction volume is approaching NT$ 2.9 million.
  • Extensive Functionality: The system handles not only ticketing but also sales of physical goods and digital content, and integrates multiple payment gateways (NewebPay, ECPay, LinePay), QR Code ticket verification, surveys, and check-in functions.

How to Make AI “Behave”: Development Workflow in Practice

The biggest challenge in using AI-assisted development is quality control—avoiding the creation of “Tech Garbage.”

(A) Leverage CLI and Pipes

  • AI Should Not Be Held Hostage by Editors: CLI (Command-Line Interface) tools allow AI to operate on servers and provide great flexibility.
  • Pipe Concept: Chain different AI models together, where the output of the previous model becomes the input for the next. For example: First ask Gemini to identify potential issues, then ask Claude to provide fix suggestions and modify the code.
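To make the pipe idea concrete, here is a minimal sketch, assuming the gemini and claude CLIs are installed and accept a non-interactive prompt via -p while reading extra context from stdin (flag names can differ between versions); the script name and the file path are hypothetical.

```python
# review_pipeline.py -- hypothetical sketch: chain two CLI models like a shell pipe.
import subprocess

def run(cmd: list[str], stdin_text: str) -> str:
    """Run a CLI tool, feed it text on stdin, and return its stdout."""
    result = subprocess.run(cmd, input=stdin_text, capture_output=True, text=True, check=True)
    return result.stdout

# Hypothetical file under review.
source = open("app/models/order.rb", encoding="utf-8").read()

# Step 1: ask Gemini to identify potential issues.
issues = run(["gemini", "-p", "List potential bugs and risks in this code."], source)

# Step 2: feed Gemini's findings to Claude and ask for concrete fix suggestions.
fixes = run(["claude", "-p", "Given these review findings, propose minimal patches."],
            issues + "\n\n" + source)

print(fixes)
```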

(B) Master Specifications and Design

  • Avoid Vibe Coding: Wishing is not the same as building something usable. To get stable, predictable results from AI, you must give it a clear, concrete basis to work from.
  • Few-Shot/One-Shot Examples: To keep the visual style consistent, use one-shot or few-shot prompting to give the AI reference examples to “copy” (e.g., ViewComponent previews), so the output follows the Design Guidelines.
  • TDD (Test-Driven Development): This is key to ensuring core functions (like checkout) don’t break. Even code written by AI must pass test verification.
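ezBundle itself is a Rails project (the ViewComponent previews hint at that), but the TDD guardrail is language-agnostic. A minimal pytest sketch with an entirely hypothetical checkout() function shows the shape: the tests come first, and any AI-generated change must keep them green. (The function is inlined here only to keep the sketch self-contained; in practice the tests import the real implementation.)

```python
# test_checkout.py -- hypothetical pytest sketch of a TDD guardrail around checkout logic.
# checkout() and its rules are illustrative, not ezBundle's real implementation.
import pytest

def checkout(unit_price: int, quantity: int, coupon_discount: int = 0) -> int:
    """Return the amount to charge; reject invalid quantities, never go below zero."""
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    return max(unit_price * quantity - coupon_discount, 0)

def test_total_is_price_times_quantity():
    assert checkout(800, 2) == 1600

def test_coupon_cannot_push_total_below_zero():
    assert checkout(800, 1, coupon_discount=1000) == 0

def test_rejects_non_positive_quantity():
    with pytest.raises(ValueError):
        checkout(800, 0)
```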

(C) Maintain Strict Control

  • AI is a Collaborator, Not a Decision Maker: Engineers must still Code Review the output from AI.
  • Control Git Permissions: Never let AI operate Git on your behalf. You can use auto-commit tooling to have AI draft commit messages, but final control (including permission to install packages) must stay in your hands.
  • Error Monitoring: Use tools like Sentry for monitoring to send immediate notifications upon errors. This is much more timely than discovering issues through user reports.
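On the error-monitoring point, this is roughly what the Sentry wiring looks like; a minimal sketch using the Python sentry-sdk with a placeholder DSN (a Rails app like ezBundle would use the Ruby SDK, but the pattern is the same):

```python
# monitoring.py -- minimal Sentry setup sketch; the DSN is a placeholder.
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    traces_sample_rate=0.1,   # sample 10% of transactions for performance tracing
    send_default_pii=False,   # don't send personally identifiable information
)

# Unhandled exceptions are reported automatically; handled ones can be sent explicitly.
try:
    1 / 0
except ZeroDivisionError as exc:
    sentry_sdk.capture_exception(exc)
```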

Conclusion: AI is a Capability Amplifier

Engineers will not be replaced by AI, but their roles will change.

  • AI is an extension of the driver’s will: As long as the engineer’s own technical skills are strong enough, AI can amplify your capabilities many times over.
  • Hone your fundamentals: AI amplifies your good habits and your bad habits. In the AI era, fundamentals become even more important; “the human must be fiercer than the car.”
  • Role Transformation: Engineers will shift from “Code Producers” to “Code Reviewers” and “Architects + Product Thinkers.”

2. Developing Agentic Applications with Gemini CLI and ADK

Speaker: Jimmy Liao (AI GDE, CTO & Co-Founder)

The core of this session was how to leverage Google’s Agentic tech stack to solve pain points enterprises face when adopting AI, providing a practical path from rapid validation to formal deployment.

Three Major Pain Points of Enterprise AI Adoption 💥

  1. Data Security: Sensitive data cannot be uploaded to external APIs; requires on-premise deployment or solutions with high data control.
  2. Tool Integration: Internal systems (like Jira, GitLab, internal APIs) operate in silos, lacking a unified integration standard.
  3. Developer Experience (DevX): Multiple coexisting AI interfaces (CLI, Web UI, API) create chaos in day-to-day development.

Core Tech Stack

To solve the above pain points, the solution is based on three core components:

| Component | Positioning / Role | Notes |
| --- | --- | --- |
| Gemini CLI | Rapid validation & iteration | Google’s official command-line tool; supports the latest Gemini models with a free tier (50 req/min). Suited to rapid trial and error in Phase 1. |
| MCP (Model Context Protocol) | Standardized tool integration | Acts like a “USB protocol” between LLMs and external tools; a tool implemented once can be called by multiple Clients. |
| Google ADK (Agent Development Kit) | Agent automation & orchestration | Designed for Agent-first architectures; natively supports MCP; excels at automatic orchestration of multi-step tools and context memory. |

Demo & Benefit Highlights ✨

  • FastMCP Taiwan Stock Lookup: Used FastMCP (only about 50 lines of code) to quickly wrap the Stock Exchange API into an MCP Server, cutting a manual query from 5-10 minutes to under 5 seconds (a minimal sketch follows this list).
  • Enterprise Document Compliance Check: Implemented a Document MCP to automatically scan documents for sensitive data (like PII), reducing manual auditing from 3-5 days to 5 seconds.
  • Google ADK Investment Analysis: Combined multiple MCP Tools, performing automatic reasoning and complete investment analysis via an ADK Agent, with astonishing efficiency gains.
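For a sense of scale, this is roughly what a ~50-line FastMCP server looks like. A minimal sketch assuming the standalone fastmcp package; the tool name, the TWSE endpoint, and the response handling are illustrative stand-ins, not the speaker’s actual demo code:

```python
# stock_mcp.py -- minimal FastMCP sketch of a Taiwan stock lookup tool.
# The endpoint URL and response handling are illustrative, not the demo's real code.
import requests
from fastmcp import FastMCP

mcp = FastMCP("taiwan-stock")

@mcp.tool()
def get_stock_quote(symbol: str) -> dict:
    """Look up the latest quote for a TWSE-listed symbol, e.g. '2330'."""
    url = "https://mis.twse.com.tw/stock/api/getStockInfo.jsp"  # illustrative endpoint
    resp = requests.get(url, params={"ex_ch": f"tse_{symbol}.tw", "json": "1"}, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Serves the tool over stdio so MCP clients (Gemini CLI, ADK, etc.) can call it.
    mcp.run()
```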

Practical Analogy

The entire enterprise Agentic development process is like building a corporate super toolbox:

  • MCP is like the unified power socket specification (USB protocol), ensuring all your company’s tools (GitLab, Jira) can connect with AI models in a standardized way.
  • Google ADK acts as a senior technician; it can automatically decide which tools to pick up based on your voice commands and complete complex tasks precisely in order.

3. ADK/A2A/MCP Practical Analysis

Speaker: Edward Chuang

This session dove deep into architectural design in the Agent era. The core composition of an Agent includes a Runtime (the execution environment), a Model (the brain), and Tools (its hands and feet).

ADK: Google’s Agent Development Framework

ADK (Agent Development Kit) is an open-source framework launched by Google, supporting Single Agent and Multi-Agent (M-Agent) architectures.

Types of Tools: The Agent decides which tool to call based on the user’s request (Prompt) and its own instructions (a minimal ADK sketch follows the list):

  1. Function Tool: User-defined functions.
  2. Built-in Tool: ADK’s built-in tools (like Google Search).
  3. Third-Party Tool / MCP Tool: Integrating external services, like GitHub API.
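To make the Function Tool case concrete, here is a minimal ADK sketch, assuming the google-adk Python package; the tool, agent name, and model string are illustrative examples only:

```python
# order_agent.py -- minimal Google ADK sketch of a Function Tool on a single agent.
# Assumes the google-adk package; the function, agent name, and model are illustrative.
from google.adk.agents import Agent

def get_order_status(order_id: str) -> dict:
    """Function Tool: look up an order in an (imaginary) internal system."""
    return {"order_id": order_id, "status": "shipped"}  # stubbed result

root_agent = Agent(
    name="order_assistant",
    model="gemini-2.0-flash",  # example model name
    instruction="Answer questions about orders; call tools instead of guessing.",
    tools=[get_order_status],  # plain Python function registered as a Function Tool
)

# Built-in tools (e.g. `from google.adk.tools import google_search`) and MCP tools
# are wired up through the same `tools` list, typically on a separate agent.
```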

🧠 Agent Architecture Patterns

1. Single Agent: One Agent is responsible for judging and invoking the multiple tools it possesses. Pros: simple structure, low latency. Cons: difficult to handle complex workflows.

2. Multi-Agent: Distributes tasks to multiple Sub-Agents via a Coordinator/Root Agent.

  • Dispatcher / Coordinator: Root Agent decides the flow; high modularity.
  • Sequential: Pre-defined fixed order (A -> B). Predictable results, but latency stacks up (see the sketch after this list).
  • Parallel: Multiple Agents execute simultaneously. High efficiency, but increased complexity and harder debugging.
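A minimal sketch of the Sequential pattern in ADK, again assuming the google-adk package; the two sub-agents and their instructions are hypothetical:

```python
# sequential_pipeline.py -- minimal ADK sketch of the Sequential multi-agent pattern.
# Assumes the google-adk package; agents and instructions are illustrative only.
from google.adk.agents import LlmAgent, SequentialAgent

researcher = LlmAgent(
    name="researcher",
    model="gemini-2.0-flash",
    instruction="Collect the key facts needed to answer the user's question.",
)

writer = LlmAgent(
    name="writer",
    model="gemini-2.0-flash",
    instruction="Turn the collected facts into a short, structured answer.",
)

# A -> B: runs `researcher` first, then `writer`, within the same session.
root_agent = SequentialAgent(name="research_then_write", sub_agents=[researcher, writer])
```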

A2A (Agent-to-Agent) Protocol: This is the “business card” between Agents, allowing one Agent to discover and understand the capabilities and communication methods of other Agents.


Challenges and Advice

The main challenges of Agent systems lie in Predictability and Stability. It is recommended to start with the simplest architecture (Single Agent) and not over-engineer complex M-Agent Patterns from the start.


4. Building No-Code AI Agents with Gemini CLI

Speaker: Will

Who says playing with AI Agents requires writing SDK code and wiring up APIs? This session demonstrated how Gemini CLI can turn building complex AI applications into a business-design task rather than a coding one.

Core Positioning of Gemini CLI

Gemini CLI allows users to interact directly with models via the terminal without writing code.

  • Simplify Development: Drastically lowers the technical barrier, allowing focus on Agent design.
  • Rapid Prototyping: Build interactive prototypes in minutes.

Agent Practice Philosophy: Tools, Persona, Testing

Three elements of building an AI Agent:

  1. Register External Tools: To give an Agent “superpowers,” the core lies in Tool Calling. Register local scripts (like shell scripts) as tools callable by Gemini CLI. Philosophy: follow the Single Responsibility Principle, outsourcing complex execution details to independent CLI tools so the LLM can focus on logic control.

  2. Persona Setting and Prompt Optimization: High-quality Agents rely on precise System Prompts (a sample context file follows this list):
    • Persona Definition
    • Task Instructions
    • Behavioral Boundaries (Constraints)
  3. Testing and Iteration: Don’t try to accomplish everything with one super-long prompt. Proceed in stages, breaking complex processes down into multiple single-function nodes.
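To illustrate points 1 and 2, Gemini CLI picks up project-level instructions from a GEMINI.md context file. A minimal, entirely hypothetical example for a support Agent might look like this (the section names and the lookup_order tool are just one way to structure it):

```markdown
# GEMINI.md -- hypothetical system prompt for a ticketing support Agent

## Persona
You are "Ticket Helper," a support assistant for a conference ticketing site.

## Task Instructions
- Answer questions about orders, refunds, and check-in.
- For order lookups, call the registered `lookup_order` tool instead of guessing.

## Behavioral Boundaries (Constraints)
- Never reveal other attendees' personal data.
- If a request falls outside ticketing, say so and stop.
```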

5. Building Human-AI Agent Team Collaboration Models with Google ADK and Gemini CLI

Speaker: Simon Liu

This session demonstrated a collaboration model for Human and Agent Teams. The architectural design follows the principle: “Large models do big things, small models do small things.”

Architecture Planning

  • Gemini CLI / Main Agent: The main coordinator, receiving user commands and performing macro planning.
  • Google ADK Agents / Sub Agents: Specialized sub-agents executing specific tasks.
  • Gemini CLI Extension (adk-agent-extension): The bridge connecting the two. The user issues commands via CLI, and the Main Agent calls remote ADK Agents via the Extension.

Real Application Cases

  • HR Agent: Assists HR in organizing employee onboarding processes, with multiple sub-agents like Hiring, Salary, and Performance working together.
  • Database Agent: Accurately retrieves data from databases (DuckDB/PostgreSQL). Users ask about data trends in natural language, and the Agent automatically executes the SQL and returns the results (see the sketch after this list).
  • PM Agent: Writes PRDs and collaborates with a drawing Agent (Nano extension) to generate architecture diagrams.
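As a sketch of how the Database Agent’s tool side might look in ADK, assuming the google-adk and duckdb packages; the database file, query flow, and agent wiring are hypothetical:

```python
# db_agent.py -- hypothetical sketch of a Database Agent tool that runs SQL on DuckDB.
import duckdb
from google.adk.agents import Agent

def run_sql(query: str) -> dict:
    """Execute a read-only SQL query against a local DuckDB file and return the rows."""
    con = duckdb.connect("analytics.duckdb", read_only=True)  # hypothetical database file
    try:
        cursor = con.execute(query)
        columns = [desc[0] for desc in cursor.description]
        rows = [dict(zip(columns, row)) for row in cursor.fetchall()]
        return {"columns": columns, "rows": rows}
    finally:
        con.close()

db_agent = Agent(
    name="database_agent",
    model="gemini-2.0-flash",  # example model name
    instruction="Translate the user's question into SQL, call run_sql, and summarize the rows.",
    tools=[run_sql],
)
```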

6. The MCP Workshop: Turning APIs into AI-Ready Tools

Speaker: Tamas

This Workshop was very interesting. To wake everyone up at the start, Tamas led the whole room in some exercise. He asked everyone to raise their hands and clap fast. Just as everyone was seriously following along, he suddenly took out his phone, turned around, and took a selfie with everyone—a spontaneous move that was quite funny and instantly warmed up the atmosphere.

After the relaxed opening, Tamas guided everyone step by step, from the basics to the finer points, through a hands-on tutorial on building a standard MCP Server.

(The content was mainly hands-on, focusing on experiencing how to transform existing APIs into MCP Servers so AI can understand and use them.)
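In the spirit of the workshop, the core move is a thin MCP wrapper around an API you already have; a minimal FastMCP sketch follows, where the endpoint, parameters, and server name are hypothetical placeholders. The interesting part is that the type hints and docstring become the tool schema the model reads, so naming and descriptions matter as much as the HTTP call itself.

```python
# api_to_tool.py -- sketch of the workshop's core move: wrap an existing REST API as an MCP tool.
# The endpoint, parameters, and server name below are hypothetical placeholders.
import requests
from fastmcp import FastMCP

mcp = FastMCP("internal-api")

@mcp.tool()
def search_orders(customer_email: str, limit: int = 10) -> dict:
    """Search recent orders for a customer by email (maps to an existing internal endpoint)."""
    # The type hints and this docstring become the tool description the model sees.
    resp = requests.get(
        "https://api.example.internal/orders",  # hypothetical existing API
        params={"email": customer_email, "limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    mcp.run()
```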


Conclusion

After attending DevFest 2025, my deepest feeling is: The role of the engineer is evolving rapidly.

We are no longer just “Coders,” but are transforming into “AI Coordinators” and “Architects.” Tools like Gemini CLI, Google ADK, and the MCP protocol are paving the way for this new era.

The future development model will be: Humans define goals, AI Agent teams collaborate to execute. Our core value will depend on how we design these Agents, how we define their communication methods, and how we ensure the quality and safety of their output.

This is an exciting era; let’s embrace the change and dance with AI!



