What is A2UI?
A2UI (Agent-to-User Interface) is an open protocol developed by Google that enables AI agents to generate secure, native user interfaces across multiple platforms. Released on December 15, 2025, under the Apache 2.0 license, A2UI represents a significant step forward in how AI agents can interact with users.
Unlike traditional approaches where AI agents return plain text or markdown, A2UI allows agents to create rich, interactive interfaces that feel native to each platform — whether that's a web browser, iOS app, Android app, or desktop application.
The Problem A2UI Solves
As AI agents become more sophisticated, they increasingly need to present complex information and gather structured input from users. Consider these scenarios:
- A travel agent needs to show flight options and collect booking details
- A customer service bot needs to display order information and process returns
- A coding assistant needs to present code changes for review
Current solutions typically fall into two categories, each with significant drawbacks:
Plain text/Markdown: Limited expressiveness, no interactivity, poor user experience for complex data.
Iframe sandboxes: Security concerns, inconsistent styling, poor integration with the host application, and accessibility challenges.
A2UI solves these problems with a "Native-First" approach: agents send declarative JSON blueprints, and the client renders them using its own native components.
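To make "declarative blueprint" concrete, one way to model a single node is as a small typed record. This is only an illustrative sketch; the `A2uiNode` and `A2uiMessage` names are assumptions based on the examples later in this article, not the official schema.

```typescript
// Sketch of one node in a declarative A2UI blueprint (illustrative only).
// Components reference each other by id rather than nesting, and carry
// plain data in `props`; no executable code is transmitted.
interface A2uiNode {
  id: string;                      // unique identifier, e.g. "booking-form"
  type: string;                    // catalog component name, e.g. "a2ui.Button"
  props?: Record<string, unknown>; // plain data describing the component
  children?: string[];             // ids of child nodes, not inline objects
}

// A full UI message is just a flat list of such nodes.
type A2uiMessage = A2uiNode[];
```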
Core Principles
1. Security First
A2UI's most important design decision is using declarative data instead of executable code. When an agent sends a UI specification, it's just JSON data describing what components to render — not JavaScript or any other code that could be executed.
Furthermore, agents can only request components from a pre-approved catalog defined by the client application. This means:
- No arbitrary code execution
- No access to APIs or resources beyond what's explicitly allowed
- No UI injection attacks
- Complete control by the client over what can be rendered
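As a rough sketch of how a client might enforce that catalog before rendering anything, the check below rejects unknown component types and dangling child references. The catalog contents, type shape, and function name are illustrative assumptions, not part of the A2UI specification.

```typescript
// Illustrative only: refuse any message that references a component type
// the host application has not explicitly allowed.
type Node = { id: string; type: string; props?: Record<string, unknown>; children?: string[] };

// Pre-approved catalog defined by the client, never by the agent.
const CATALOG = new Set([
  "a2ui.Container",
  "a2ui.Text",
  "a2ui.Form",
  "a2ui.TextInput",
  "a2ui.Button",
]);

function validateMessage(nodes: Node[]): string[] {
  const errors: string[] = [];
  const ids = new Set(nodes.map((n) => n.id));
  for (const node of nodes) {
    // Unknown component types are rejected outright; nothing is executed.
    if (!CATALOG.has(node.type)) {
      errors.push(`Unknown component type: ${node.type}`);
    }
    // Children must refer to nodes that actually exist in the message.
    for (const child of node.children ?? []) {
      if (!ids.has(child)) {
        errors.push(`Node "${node.id}" references missing child "${child}"`);
      }
    }
  }
  return errors;
}
```

Only after a check like this passes would the renderer instantiate anything, which is what keeps arbitrary execution and UI injection off the table.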
2. LLM Friendly
A2UI is designed from the ground up to work well with Large Language Models. The protocol uses a flat adjacency-list structure that allows LLMs to generate UI incrementally:
```json
[
  { "id": "root", "type": "a2ui.Container", "children": ["header", "content"] },
  { "id": "header", "type": "a2ui.Text", "props": { "text": "Welcome" } },
  { "id": "content", "type": "a2ui.Form", "children": ["input", "button"] },
  { "id": "input", "type": "a2ui.TextInput", "props": { "placeholder": "Name" } },
  { "id": "button", "type": "a2ui.Button", "props": { "label": "Submit" } }
]
```

This flat structure (compared to deeply nested JSON) is easier for LLMs to generate correctly and supports streaming JSONL output — meaning the UI can start rendering before the agent has finished generating the complete response.
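Because each component is a self-contained object, an agent can emit one object per line (JSONL) and the client can begin rendering as soon as the first lines arrive. The sketch below assumes the stream has already been split into lines; the function and type names are illustrative, not a renderer API.

```typescript
// Sketch: consume a JSONL stream of components and hand each one to the
// renderer as soon as its line is complete (names are illustrative).
type Node = { id: string; type: string; props?: Record<string, unknown>; children?: string[] };

async function renderStream(
  lines: AsyncIterable<string>,       // stream already split on "\n" upstream
  onComponent: (node: Node) => void,  // supplied by the host renderer
): Promise<void> {
  for await (const line of lines) {
    if (line.trim() === "") continue; // skip blank lines between objects
    // Each non-empty line is one complete component object, so it can be
    // parsed and rendered before later components are even generated.
    onComponent(JSON.parse(line) as Node);
  }
}
```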
3. Cross Platform
The same A2UI message can be rendered on any platform that has a compatible renderer. Currently available renderers include:
- Lit Web Components — For web applications
- Angular — For Angular-based web apps
- Flutter — For cross-platform mobile and desktop apps
React and SwiftUI renderers are on the roadmap for 2026.
How A2UI Works
The A2UI flow is straightforward:
- An AI agent generates a JSON message describing the UI it wants to display
- The client receives this message and validates it against its component catalog
- The client renders the UI using its native component library
- User interactions are captured and can be sent back to the agent
Here's a simple example of an A2UI component:
```json
{
  "type": "a2ui.Form",
  "id": "booking-form",
  "children": ["name-field", "date-field", "submit-btn"],
  "props": {
    "title": "Book a Meeting"
  }
}
```

This JSON describes a form with a title, containing three child components (referenced by ID). The client would look up each child ID and render the appropriate native component.
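Continuing the flow described above, steps 3 and 4 might look roughly like this on the client: resolve each child id against the flat node list, hand each node to a native widget, and wire interaction handlers that report back to the agent. The event shape and helper names here are assumptions for illustration, not the protocol's wire format.

```typescript
type Node = { id: string; type: string; props?: Record<string, unknown>; children?: string[] };
type AgentEvent = { sourceId: string; kind: string };

// Illustrative only: walk the flat node list by id, "render" each node, and
// register handlers so user interactions can be reported back to the agent.
function renderNode(
  id: string,
  byId: Map<string, Node>,
  notifyAgent: (event: AgentEvent) => void,
): void {
  const node = byId.get(id);
  if (!node) return; // unknown child id: skip, or surface a validation error

  // A real renderer would map node.type to a native widget here
  // ("a2ui.Button" -> platform button, "a2ui.TextInput" -> text field, ...).
  console.log(`render ${node.type} (${node.id})`, node.props ?? {});

  if (node.type === "a2ui.Button") {
    // The handler runs when the user actually clicks, sending a structured
    // event (assumed shape) back to the agent.
    registerClickHandler(node.id, () => notifyAgent({ sourceId: node.id, kind: "clicked" }));
  }

  (node.children ?? []).forEach((child) => renderNode(child, byId, notifyAgent));
}

// Stand-in for attaching a handler to whatever native widget was created.
function registerClickHandler(nodeId: string, handler: () => void): void {
  console.log(`handler registered for ${nodeId}`);
  void handler;
}
```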
Current Ecosystem Status
A2UI is currently at version 0.8 (Public Preview). Google has indicated that the protocol is still evolving and developers should expect changes before the stable v1.0 release, targeted for Q4 2026.
Key milestones on the roadmap:
- Q1 2026: React renderer release
- Q2 2026: SwiftUI and Jetpack Compose renderers
- Q4 2026: Stable v1.0 release
Getting Started
Ready to dive deeper? Check out our tutorial on building your first A2UI component, or visit the official A2UI documentation.
If you're trying to decide between A2UI and other protocols like MCP Apps, read our comparison guide.