Introduction
If you've been following the AI agent ecosystem, you've likely encountered two similar-sounding protocols: A2UI (Agent-to-User Interface) from Google and AG-UI (Agent-User Interaction Protocol) from CopilotKit. Despite the naming similarities, they solve fundamentally different problems.
The good news? They're not competitors — they're complementary layers in the emerging agentic application stack. This guide explains what each protocol does and how they work together.
TL;DR
- A2UI defines what UI the agent wants to display (the payload)
- AG-UI defines how agents and frontends communicate (the transport)
- They work together: AG-UI can use A2UI as the data format for rendering UI
- CopilotKit is a launch partner with Google for A2UI
Quick Comparison
| Aspect | A2UI | AG-UI |
|---|---|---|
| Purpose | Generative UI specification | Runtime communication protocol |
| Answers | "What UI should render?" | "How do agent and app talk?" |
| Output Format | JSON UI blueprints | JSON event stream |
| Transport | Agnostic (needs a transport layer) | HTTP / Server-Sent Events (SSE) |
| Primary Backer | Google | CopilotKit |
| Focus | UI rendering security & portability | Real-time bi-directional sync |
What is A2UI?
A2UI is a declarative generative UI specification. When an agent wants to show UI to a user, it doesn't output HTML or JavaScript — it outputs an A2UI response: a JSON payload describing components, their properties, and a data model.
The client application reads this description and maps each component to its own native widgets. This means the same A2UI response renders natively on web (React, Angular, Lit), mobile (Flutter, SwiftUI, Jetpack Compose), and desktop.
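To make that concrete, here is a rough sketch of an A2UI-style payload as a TypeScript object. The field names (root, components, dataModel, the path references) are simplified for illustration and may not match the official A2UI schema exactly; the point is the shape of the idea, not the exact syntax.

```ts
// Illustrative only: field names are simplified and may not match the official
// A2UI schema. The key property is that this is a flat, declarative description
// of components plus a data model, with no executable code anywhere in it.
const agentUiResponse = {
  root: "bookingCard",
  components: [
    { id: "bookingCard", type: "Card", children: ["headline", "dateField", "confirmButton"] },
    { id: "headline", type: "Text", properties: { text: { path: "/booking/headline" } } },
    { id: "dateField", type: "TextField", properties: { label: "Date", value: { path: "/booking/date" } } },
    { id: "confirmButton", type: "Button", properties: { label: "Confirm", action: "confirmBooking" } },
  ],
  dataModel: {
    booking: { headline: "Confirm your reservation", date: "2025-06-01" },
  },
};
```

The client walks this description, swaps in its own widget for each declared type, and binds values out of the data model.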
A2UI Key Features
- Security first: Declarative data format, not executable code. The client maintains a catalog of trusted components (Card, Button, TextField). The agent can only reference types in this catalog, so there is no arbitrary script execution (see the sketch after this list).
- Portability: One agent response works everywhere. Same JSON renders across all platforms.
- Native rendering: UI matches the host application's look and feel.
- LLM-optimized: Flat JSON structure is easy for models to generate correctly.
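The catalog idea boils down to a simple lookup table on the client. The sketch below is hypothetical (the registry shape and the tiny render functions are ours, not part of the spec); the important property is that anything outside the catalog is dropped rather than executed.

```ts
// Hypothetical catalog: component type -> trusted render function.
// The registry shape and these minimal renderers are illustrative, not spec-defined.
type UiNode = { id: string; type: string; properties?: Record<string, unknown> };

const catalog: Record<string, (node: UiNode) => string> = {
  Text: (node) => `<p>${String(node.properties?.text ?? "")}</p>`,
  Button: (node) => `<button>${String(node.properties?.label ?? "")}</button>`,
};

function render(node: UiNode): string {
  const renderFn = catalog[node.type];
  // Unknown types are simply skipped: the agent can reference components,
  // never define or execute them.
  return renderFn ? renderFn(node) : "";
}
```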
What is AG-UI?
AG-UI is the Agent-User Interaction Protocol: a bi-directional runtime connection between your user-facing application and any agentic backend. It doesn't dictate what the UI looks like; it ensures your agent and UI can communicate in real time.
AG-UI streams a sequence of JSON events over standard HTTP or Server-Sent Events (SSE). These events include messages, tool calls, state patches, and lifecycle signals — keeping the frontend and agent backend perfectly synchronized.
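To give a rough picture of what such a stream carries, the sketch below models a few events as TypeScript objects. The exact event names and fields are defined by the AG-UI spec, so treat these as approximations of the categories above: messages, tool calls, state patches, and lifecycle signals.

```ts
// Approximate event shapes; the authoritative names and fields live in the AG-UI spec.
type AgentEvent =
  | { type: "RUN_STARTED"; threadId: string; runId: string }
  | { type: "TEXT_MESSAGE_CONTENT"; messageId: string; delta: string }
  | { type: "TOOL_CALL_START"; toolCallId: string; toolName: string }
  | { type: "STATE_DELTA"; delta: unknown } // e.g. a JSON Patch against shared state
  | { type: "RUN_FINISHED"; threadId: string; runId: string };

// One agent run might stream something like this over SSE:
const exampleRun: AgentEvent[] = [
  { type: "RUN_STARTED", threadId: "thread-1", runId: "run-1" },
  { type: "TEXT_MESSAGE_CONTENT", messageId: "msg-1", delta: "Searching for flights..." },
  { type: "TOOL_CALL_START", toolCallId: "call-1", toolName: "searchFlights" },
  { type: "STATE_DELTA", delta: [{ op: "replace", path: "/results/count", value: 3 }] },
  { type: "RUN_FINISHED", threadId: "thread-1", runId: "run-1" },
];
```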
AG-UI Key Features
- Real-time streaming: Server-Sent Events for low-latency updates (see the consumer sketch after this list).
- Standardized events: Messages, tool calls, state sync, lifecycle signals.
- Framework agnostic: Works with LangGraph, CrewAI, Google ADK, Microsoft Agent Framework, AWS Strands, and more.
- Human-in-the-loop: Built-in support for approvals and interventions.
- Thread management: Handles conversation state and history.
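On the frontend, consuming that stream can be as simple as the hand-rolled loop below. It uses the browser's standard EventSource purely for illustration; the endpoint URL, event names, and field names are assumptions, and a real integration would normally go through an AG-UI client SDK rather than raw SSE handling.

```ts
// Hand-rolled SSE consumer, for illustration only. Endpoint and field names are assumed.
const source = new EventSource("/agent/run?threadId=demo-thread");

source.onmessage = (event) => {
  const parsed = JSON.parse(event.data);
  switch (parsed.type) {
    case "TEXT_MESSAGE_CONTENT":
      appendToChat(parsed.delta); // stream assistant text as it arrives
      break;
    case "STATE_DELTA":
      applyStatePatch(parsed.delta); // keep frontend state in sync with the agent
      break;
    case "RUN_FINISHED":
      source.close(); // lifecycle signal: this run is done
      break;
  }
};

// Placeholder handlers so the sketch stands on its own.
function appendToChat(text: string) { console.log("assistant:", text); }
function applyStatePatch(patch: unknown) { console.log("state patch:", patch); }
```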
How They Work Together
Think of the agentic stack in layers:
- MCP — Agent ↔ Tool protocol (backend actions)
- AG-UI — Agent ↔ Frontend protocol (runtime communication)
- A2UI — UI payload specification (what to render)
In practice, an agent might:
- Use MCP to call external tools and fetch data
- Generate an A2UI payload describing the UI to show
- Stream that payload to the frontend via AG-UI (sketched below)
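Put together, the agent side of that three-step flow might look like the hypothetical sketch below. The names (fetchFlightsViaMcp, the UI_PAYLOAD event type, the payload fields) are illustrative, not mandated by either protocol; the sketch just shows the division of labor.

```ts
// Hypothetical end-to-end sketch showing the division of labor, not a canonical API.
async function* runAgent(query: string) {
  yield { type: "RUN_STARTED" };

  // 1. MCP: call an external tool to fetch data (stubbed below).
  const flights = await fetchFlightsViaMcp(query);

  // 2. A2UI: describe the UI declaratively. No HTML, no scripts, just data.
  const uiPayload = {
    root: "resultsCard",
    components: [
      { id: "resultsCard", type: "Card", children: ["summary"] },
      { id: "summary", type: "Text", properties: { text: { path: "/summary" } } },
    ],
    dataModel: { summary: `${flights.length} flights found` },
  };

  // 3. AG-UI: stream the payload to the frontend as an event.
  yield { type: "UI_PAYLOAD", payload: uiPayload };
  yield { type: "RUN_FINISHED" };
}

// Stub standing in for a real MCP tool call.
async function fetchFlightsViaMcp(query: string): Promise<Array<{ id: string }>> {
  return [{ id: "UA-101" }, { id: "DL-202" }];
}
```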
CopilotKit fully supports A2UI as a launch partner with Google. When your host application connects over AG-UI, A2UI can serve as the data format for rendering responses from both the host agent and remote agents.
When to Use Each
You need A2UI when...
- Your agent needs to generate dynamic UI components
- Security is critical — no arbitrary code execution
- You want cross-platform UI from a single response
- Native look-and-feel matters
You need AG-UI when...
- Building real-time agent interfaces
- You need streaming responses and state sync
- Working with multiple agent frameworks
- Implementing human-in-the-loop workflows
You need both when...
- Building production agentic applications with dynamic UI
- You want the full stack: real-time communication + secure generative UI
- Creating cross-platform experiences from a single agent backend
Where They Fit in the Ecosystem
The agentic protocol landscape is converging around complementary standards:
| Layer | Protocol | Purpose |
|---|---|---|
| Tools & Context | MCP | Agent ↔ external tools/data |
| Agent Communication | A2A | Agent ↔ Agent coordination |
| User Interaction | AG-UI | Agent ↔ Frontend runtime |
| UI Generation | A2UI | Declarative UI specification |
The future is composable. These protocols aren't competing for the same space — they're building blocks that work together to create the next generation of AI-powered applications.
Getting Started
Ready to build with both protocols?
- A2UI: Start with the official A2UI documentation and our Getting Started guide.
- AG-UI: Check out the AG-UI documentation and CopilotKit's integration guides.
- Together: CopilotKit's A2UI + AG-UI guide shows how to use both protocols in a single application.
Conclusion
A2UI and AG-UI aren't competing protocols — they're complementary layers in the agentic stack:
- A2UI is the what: a secure, portable specification for agent-generated UI
- AG-UI is the how: a real-time protocol connecting agents to frontends
Together, they enable developers to build secure, cross-platform, real-time agentic applications. As the ecosystem matures, expect these protocols to become the standard foundation for AI-powered interfaces.