A2A Protocol v1.0: Agent-to-Agent Interoperability
The definitive guide to Google's Agent-to-Agent (A2A) Protocol -- the open standard that lets AI agents built in different frameworks discover, delegate to, and collaborate with each other. From Agent Cards and task lifecycle to streaming, authentication, the A2A+MCP hybrid architecture, native integration in Azure AI Foundry and AWS Bedrock AgentCore, and the x402 extension for agent payments.
What Is A2A?
The Agent-to-Agent Protocol (A2A) is an open standard for communication between autonomous AI agents -- regardless of the framework, vendor, or cloud they run on. It was announced by Google at Cloud Next on April 9, 2025 and donated to the Linux Foundation two months later. Version 1.0 of the specification was published in March 2026, turning A2A from a vendor proposal into a production-grade, cross-industry standard. Where MCP solves the agent-to-tool problem (how does one agent call a database, API, or service?), A2A solves the agent-to-agent problem (how does one agent discover, hand off to, and collaborate with another agent that it did not author?).
The design principle is opaque execution. Two agents do not share internal memory, reasoning traces, or proprietary prompt templates. They expose a public contract -- an Agent Card advertising capabilities, a JSON-RPC surface for invoking those capabilities, and a Task object that tracks progress. Either side can be a closed system; they only need to agree on the wire format. That property is what makes A2A usable across organizational boundaries: a LangGraph agent at a SaaS vendor can delegate a subtask to a CrewAI agent inside a bank, and neither team has to surrender their internals.
A2A is deliberately complementary to MCP. MCP plugs tools into a single agent; A2A composes multiple agents into a system. A production deployment in April 2026 typically uses both: MCP servers expose tools (Slack, Postgres, filesystem) to each individual agent, while A2A connects agents to each other so a planner agent can hand off to a specialist agent without either knowing the other's tool inventory. Google, AWS, Microsoft, IBM, Salesforce, SAP, Cisco and ServiceNow all sit on the A2A steering committee, and the reference implementations now live under the Linux Foundation.
Why April 2026: v1.0, Cloud Adoption, and the First-Year Mark
April 9, 2026 marks the one-year anniversary of the A2A announcement, and the numbers justify the attention. More than 150 organizations now participate in the project -- hyperscalers, ISVs, and multinational enterprises. The reference repository at github.com/a2aproject/A2A has passed 22,000 GitHub stars. A2A v1.0, published March 12, 2026, is the release that made the protocol genuinely production-ready: it added Signed Agent Cards for cryptographic identity, multi-tenancy so one endpoint can host many agents, and multi-protocol bindings so the same logical agent can be exposed over JSON-RPC, gRPC, and REST without duplicating code. The SDK ecosystem has expanded from a single Python implementation to five production-ready languages: Python, JavaScript, Java, Go, and .NET.
Cloud-native support landed almost simultaneously. Microsoft integrated A2A natively into Azure AI Foundry and Copilot Studio, meaning any agent built in Foundry is A2A-callable from outside Azure by default. AWS added A2A support to the Amazon Bedrock AgentCore Runtime, with the published A2A protocol contract documented as a first-class runtime feature -- see the AWS guide. Vertex AI and Agent Engine on Google Cloud added matching support. As of this writing, real enterprise workloads are in production at Tyson Foods, Gordon Food Service, S&P Global Market Intelligence, ServiceNow, and Adobe, mostly in supply-chain, financial services, insurance, and IT operations.
The practical consequence: if you are building a new multi-agent system in 2026, not supporting A2A is a strategic mistake. Vendor-specific agent protocols (proprietary REST APIs, custom task queues, ad-hoc WebSocket schemas) now look the same way private database wire protocols looked after the SQL standard landed -- technically workable, but commercially isolated.
Key Features
Agent Cards (Discovery)
Every A2A agent publishes a machine-readable Agent Card at /.well-known/agent-card.json. The card advertises identity (name, provider, description), skills with input/output schemas, supported transport bindings, security schemes, and extensions. Clients fetch the card to decide whether an agent can handle a task -- no manual integration required.
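Once a client has fetched the card, the decision logic is plain data inspection. A minimal sketch (field names follow the card example later in this guide; treat the exact shapes as illustrative rather than normative):

```python
from typing import Optional

# Sketch: a client deciding, from a fetched Agent Card (a plain dict
# here), whether an agent can handle a task and which endpoint to call.

def find_skill(card: dict, skill_id: str, input_mode: str) -> Optional[dict]:
    """Return the advertised skill if it accepts the given input mode."""
    for skill in card.get("skills", []):
        if skill["id"] == skill_id and input_mode in skill.get("inputModes", []):
            return skill
    return None

def pick_endpoint(card: dict, preferred=("GRPC", "JSONRPC")) -> Optional[str]:
    """Pick the URL of the best advertised transport, by preference order."""
    available = {i["transport"]: i["url"] for i in card.get("interfaces", [])}
    for transport in preferred:
        if transport in available:
            return available[transport]
    return None

card = {
    "skills": [{"id": "review-contract",
                "inputModes": ["text/plain", "application/pdf"]}],
    "interfaces": [{"transport": "JSONRPC",
                    "url": "https://agents.acmelegal.example.com/a2a"}],
}
```

A client with no gRPC stack simply passes `preferred=("JSONRPC",)`; nothing about the agent changes.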
Signed Agent Cards (v1.0)
v1.0 introduced cryptographically signed Agent Cards so a receiving agent can verify that a card was actually issued by the claimed domain. Signatures use the JWS format over the card payload. This closes the supply-chain attack where a malicious agent impersonates a well-known service. Signed Agent Cards are mandatory for cross-organizational trust.
Task State Machine
Every interaction produces a Task with a well-defined lifecycle: submitted, working, input-required, auth-required, completed, failed, canceled, and rejected. Terminal states are completed, failed, canceled, and rejected. The state machine is the contract clients rely on for retries, timeouts, and user-facing progress indicators -- no more vendor-specific status enums.
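The lifecycle above can be expressed as a small transition check a client might use to validate incoming state updates. The state names are from the spec; the transition table itself is an illustrative reading of it, not normative:

```python
# Terminal states never transition again; a retry creates a new task.
TERMINAL = {"completed", "failed", "canceled", "rejected"}

# Illustrative transition table for the non-terminal states.
ALLOWED = {
    "submitted": {"working", "rejected", "canceled"},
    "working": {"input-required", "auth-required",
                "completed", "failed", "canceled"},
    "input-required": {"working", "canceled", "failed"},
    "auth-required": {"working", "canceled", "failed"},
}

def can_transition(current: str, new: str) -> bool:
    """Reject impossible state updates (e.g. a terminal task 'resuming')."""
    if current in TERMINAL:
        return False
    return new in ALLOWED.get(current, set())
```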
Multi-Protocol Bindings
One logical agent, multiple wire protocols. v1.0 supports JSON-RPC 2.0 over HTTP (default), gRPC with Protocol Buffers (added in v0.3 for low-latency internal traffic), and plain JSON-over-HTTP/REST. The Agent Card lists which bindings are available so clients pick the best one. No more rewriting an agent to support a second transport.
Streaming via SSE
Long-running tasks stream progress over Server-Sent Events. The stream carries TaskStatusUpdateEvent frames (state transitions) and TaskArtifactUpdateEvent frames (incremental outputs, including chunked artifacts). Multiple concurrent subscriptions per task are supported, so both the originating client and a monitoring dashboard can watch the same task live.
Push Notifications (Webhooks)
For fire-and-forget workflows, A2A supports webhook delivery. The client registers a push endpoint via tasks/pushNotifications/create and the agent POSTs task updates when state changes. Webhook calls carry their own authentication configuration. This is the right mode for overnight batch tasks and cross-region deployments where an open SSE connection is impractical.
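As a sketch, a registration call might be built like this. The exact `params` shape (the `pushNotificationConfig` field and its auth sub-object) is an assumption for illustration, not copied from the spec:

```python
import json

def push_notification_request(task_id: str, webhook_url: str,
                              bearer: str) -> str:
    """Build an illustrative tasks/pushNotifications/create request body."""
    body = {
        "jsonrpc": "2.0",
        "id": "req-1",
        "method": "tasks/pushNotifications/create",
        "params": {
            "taskId": task_id,
            # Hypothetical config shape: webhook calls carry their own
            # auth, separate from the credential on the A2A call itself.
            "pushNotificationConfig": {
                "url": webhook_url,
                "authentication": {"schemes": ["bearer"],
                                   "credentials": bearer},
            },
        },
    }
    return json.dumps(body)
```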
Multimodal Parts
Messages are built from Parts. Four Part types are defined: text (UTF-8 strings), raw (binary, base64-encoded), url (external file references), and data (structured JSON objects). Each Part may carry mediaType and filename. This lets one protocol carry chat, file uploads, tool-call results, and structured reports without separate endpoints.
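A single message mixing all of this might be assembled as follows (a sketch; the helper functions are illustrative, not SDK API):

```python
import base64

def text_part(text: str) -> dict:
    return {"kind": "text", "text": text}

def raw_part(data: bytes, media_type: str, filename: str = None) -> dict:
    # Binary content travels base64-encoded in a raw part.
    return {"kind": "raw", "mediaType": media_type, "filename": filename,
            "data": base64.b64encode(data).decode("ascii")}

def data_part(obj: dict) -> dict:
    return {"kind": "data", "data": obj}

message = {
    "messageId": "msg-01",
    "role": "user",
    "parts": [
        text_part("Summarize this report."),
        raw_part(b"%PDF-1.7 ...", "application/pdf", "report.pdf"),
        data_part({"maxPages": 10}),
    ],
}
```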
Standard Security Schemes
A2A reuses OpenAPI-style security schemes: API key, HTTP Basic/Bearer, OAuth 2.0 (Authorization Code, Client Credentials, Device Code), OpenID Connect, and mutual TLS. The Agent Card declares which schemes are accepted. An auth-required task state lets an agent pause mid-flight and request the user to re-authenticate before continuing -- no custom auth dance.
Multi-Tenancy (v1.0)
v1.0 makes multi-tenancy first-class: a single A2A endpoint can host many agents under the same URL, routed by a logical agent identifier in the request. This matches how enterprises actually deploy -- one ingress, many agents, managed per-team. It also aligns with how Kubernetes ingresses and API gateways work.
Opaque Execution
The protocol is deliberately silent about how an agent computes its response. No shared memory, no forced reasoning trace format, no prompt-template leakage. Two agents only need to agree on skills, inputs, and outputs. This is what makes A2A usable between competitors -- the bank agent calls the vendor agent without surrendering how either works internally.
Context and Task Continuity
Tasks and messages carry contextId and taskId fields so a multi-turn conversation or a chain of delegated subtasks can be correlated. Tasks can also reference other tasks via referenceTaskIds, which is how A2A represents "task B depends on task A" without collapsing the two into a single stream.
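One way a client might exploit referenceTaskIds is to derive an execution order, reading "task B references task A" as "A must finish before B". This interpretation and the sketch below are illustrative (no cycle detection, plain dicts instead of Task objects):

```python
def execution_order(tasks: dict) -> list:
    """tasks maps taskId -> list of referenceTaskIds; depth-first
    topological sort, assuming the reference graph is acyclic."""
    order, seen = [], set()

    def visit(tid):
        if tid in seen:
            return
        seen.add(tid)
        for dep in tasks.get(tid, []):
            visit(dep)  # dependencies first
        order.append(tid)

    for tid in tasks:
        visit(tid)
    return order

chain = {"t-plan": [],
         "t-research": ["t-plan"],
         "t-draft": ["t-research", "t-plan"]}
```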
Extensions (x402, AP2, custom)
A2A supports declared extensions in the Agent Card. The most widely adopted in 2026 is a2a-x402, which adds cryptocurrency and stablecoin micropayments between agents using the HTTP 402 semantics and Coinbase's x402 payment protocol. Enterprises can also publish private extensions for vertical needs (clinical data, KYC, export controls) without forking the spec.
A2A vs MCP: The Comparison Everyone Asks For
The single most common question in 2026 is "do I use MCP or A2A?". The honest answer is "both, at different layers". The two protocols solve orthogonal problems and are explicitly designed to compose.
| Dimension | MCP (Model Context Protocol) | A2A (Agent-to-Agent) |
|---|---|---|
| Primary call | Agent calls a tool | Agent calls another agent |
| Callee is | Deterministic code (a function, an API) | An autonomous agent with its own reasoning |
| Discovery | tools/list after connecting a server | /.well-known/agent-card.json (public URL) |
| Transport | Stdio, Streamable HTTP (SSE deprecated) | JSON-RPC over HTTP, gRPC, REST; SSE for streaming |
| Execution shape | Synchronous request/response; short-lived | Long-running Task with lifecycle state |
| Auth | Mostly in-process; OAuth for remote servers | OAuth 2.0, mTLS, signed Agent Cards -- designed for cross-org |
| Typical scope | Inside one agent or one team | Across teams, vendors, or clouds |
| Opaque callee? | No -- tool schema is fully public | Yes -- only the contract is public, internals are hidden |
A clean rule of thumb: if the callee is a function you could equally well write as a REST endpoint, use MCP. If the callee reasons, plans, and may itself call other tools or agents, use A2A. A legal-review agent that reads a document, consults case law, drafts an opinion, and asks clarifying questions is an A2A peer. A Postgres query runner is an MCP tool.
Agent Cards and Discovery
Discovery in A2A is HTTP-native. Every compliant agent serves a JSON document at https://<host>/.well-known/agent-card.json describing who it is and what it can do. The card is intentionally the only mandatory piece of public metadata -- no separate registry, no DNS-SD, no broker. A client that knows the agent's URL can introspect everything it needs to call the agent correctly.
// Example: /.well-known/agent-card.json (v1.0)
{
"protocolVersion": "1.0",
"name": "contract-review-agent",
"provider": {
"name": "AcmeLegal",
"url": "https://acmelegal.example.com"
},
"description": "Reviews commercial contracts and flags unusual clauses.",
"interfaces": [
{
"transport": "JSONRPC",
"url": "https://agents.acmelegal.example.com/a2a"
},
{
"transport": "GRPC",
"url": "grpc://agents.acmelegal.example.com:443"
}
],
"capabilities": {
"streaming": true,
"pushNotifications": true,
"stateTransitionHistory": true
},
"securitySchemes": {
"oauth2": {
"type": "oauth2",
"flows": {
"clientCredentials": {
"tokenUrl": "https://auth.acmelegal.example.com/token",
"scopes": {"contracts.read": "Read contracts"}
}
}
}
},
"skills": [
{
"id": "review-contract",
"name": "Review a commercial contract",
"inputModes": ["text/plain", "application/pdf"],
"outputModes": ["application/json"],
"description": "Returns flagged clauses with severity and rationale."
}
],
"extensions": [
{"uri": "https://x402.org/a2a/v1", "required": false}
],
"signatures": [
{"alg": "ES256", "kid": "acmelegal-2026-04", "signature": "eyJhbGciOi..."}
]
}
The signatures array is the v1.0 addition. A client that trusts acmelegal.example.com's public key can verify that the card really came from them, which closes the impersonation attack where a malicious proxy returns a plausible-looking card pointing at a hostile endpoint. Cards may also declare an extended-card capability: clients with elevated credentials can request an extended card that exposes internal skills not advertised publicly.
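The verify-before-trust flow looks like this in miniature. The real signature is a JWS (e.g. ES256, as in the card above) over the card payload; this sketch substitutes stdlib HMAC-SHA256 purely so the flow is runnable without a crypto dependency — do not use HMAC for cross-organizational cards:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def sign_card(card: dict, key: bytes) -> str:
    # Canonicalize the card before signing so both sides hash the same bytes.
    payload = json.dumps(card, sort_keys=True, separators=(",", ":")).encode()
    return b64url(hmac.new(key, payload, hashlib.sha256).digest())

def verify_card(card: dict, signature: str, key: bytes) -> bool:
    # Constant-time comparison; any tampering with the card changes the MAC.
    return hmac.compare_digest(sign_card(card, key), signature)

card = {"protocolVersion": "1.0", "name": "contract-review-agent"}
sig = sign_card(card, b"shared-secret")
```

A client that cannot verify the signature should refuse the card, exactly as described above for the proxy impersonation attack.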
Task Lifecycle
Every A2A interaction produces a Task. The Task object is the unit of observability, retry, and cancellation -- it is to agent calls what an HTTP response is to REST, except it can live for seconds, minutes, or hours. Here is the full state machine.
Non-terminal states: submitted (the agent acknowledged the task but has not started), working (processing), input-required (needs more info from the user before continuing), auth-required (needs the client to re-authenticate, typically a scope the initial token did not include).
Terminal states: completed (success, artifacts attached), failed (agent-level error), canceled (client called tasks/cancel), rejected (agent declined before processing -- policy violation, unsupported skill, rate limit). Once terminal, a task never transitions again; a retry creates a new task.
Clients observe state transitions in three ways. Polling via tasks/get works for simple integrations. Streaming via tasks/subscribe over Server-Sent Events is the default for interactive UIs -- the server pushes TaskStatusUpdateEvent frames as the state moves and TaskArtifactUpdateEvent frames as outputs are produced. Push notifications via webhooks registered with tasks/pushNotifications/create are the right choice for long-running or detached workloads. The same task can have multiple observers using different mechanisms simultaneously.
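The polling path is the simplest to sketch. Here `get_task` is any callable returning the current Task as a dict — injected so the loop stays transport-agnostic; a real client would issue a tasks/get JSON-RPC call inside it:

```python
import time

TERMINAL_STATES = {"completed", "failed", "canceled", "rejected"}

def poll_until_terminal(get_task, interval: float = 1.0,
                        timeout: float = 300.0) -> dict:
    """Poll until the task reaches a terminal state or the deadline passes."""
    deadline = time.monotonic() + timeout
    while True:
        task = get_task()
        if task["state"] in TERMINAL_STATES:
            return task
        if time.monotonic() > deadline:
            raise TimeoutError(f"task {task.get('taskId')} still {task['state']}")
        time.sleep(interval)
```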
Core JSON-RPC Methods
A2A uses JSON-RPC 2.0 over HTTP as its default wire protocol. The method surface is small and deliberate.
- tasks/send -- submit a new task; returns either a Task (work queued) or a Message (immediate reply).
- tasks/sendStreaming -- submit and open an SSE connection for live updates in one round trip.
- tasks/get -- retrieve current task state, optionally with full history.
- tasks/list -- query tasks with filters (state, time window, skill) and pagination.
- tasks/cancel -- request cancellation of a running task; the agent may still transition to failed.
- tasks/subscribe -- open an SSE stream for an existing task (catch up on an in-flight task from a second client).
- tasks/pushNotifications/create -- register a webhook for task updates.
- tasks/pushNotifications/list / get / delete -- manage webhook subscriptions.
// Example: submit a task with streaming (JSON-RPC 2.0 over HTTP)
POST /a2a HTTP/1.1
Host: agents.acmelegal.example.com
Authorization: Bearer <token>
Content-Type: application/json
Accept: text/event-stream
{
"jsonrpc": "2.0",
"id": "req-42",
"method": "tasks/sendStreaming",
"params": {
"message": {
"messageId": "msg-01",
"role": "user",
"parts": [
{"kind": "text", "text": "Review this SaaS agreement for red flags."},
{"kind": "url", "mediaType": "application/pdf",
"filename": "agreement.pdf",
"url": "https://files.example.com/agr-7891.pdf"}
]
},
"skill": "review-contract"
}
}
// Response: SSE stream
event: status
data: {"taskId":"t-9f3...","state":"submitted"}
event: status
data: {"taskId":"t-9f3...","state":"working"}
event: artifact
data: {"taskId":"t-9f3...","artifact":{
"parts":[{"kind":"data","data":{"flags":[...partial...]}}],
"index":0,"lastChunk":false}}
event: status
data: {"taskId":"t-9f3...","state":"completed"}
Authentication and Mandates
A2A deliberately reuses well-understood HTTP auth rather than inventing a new mechanism. An agent's Agent Card declares which schemes it accepts under securitySchemes, using the same vocabulary as OpenAPI: API key, HTTP Basic/Bearer, OAuth 2.0 (all standard flows), OpenID Connect, and mutual TLS. Clients pick a scheme they can satisfy and attach the credential to every request.
Two A2A-specific additions matter in production. First, the auth-required task state lets an agent pause mid-task and ask the client to present a token with additional scopes -- for example, a planning agent discovers mid-run that it needs write access to a calendar it was only given read scope to. The client satisfies the new requirement and the task resumes from where it stopped. Second, v1.0 introduced mandates, a signed delegation credential that lets a user authorize an agent to act on their behalf with bounded authority ("this agent may spend up to $200 on my behalf, only at these merchants, only until Friday"). Mandates are what make agent-led commerce safe and are the foundation for the x402/AP2 payment extensions.
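The policy side of a mandate is easy to sketch. The mandate structure below (limit, merchant allowlist, expiry) is hypothetical; a real mandate is a signed credential that must be cryptographically verified before any of these checks run:

```python
from datetime import datetime

def mandate_allows(mandate: dict, merchant: str, amount_usd: float,
                   spent_so_far: float, now: datetime) -> bool:
    """Check a (hypothetical) mandate: expiry, merchant scope, budget."""
    if now > datetime.fromisoformat(mandate["expires"]):
        return False
    if merchant not in mandate["merchants"]:
        return False
    return spent_so_far + amount_usd <= mandate["limitUsd"]

# "This agent may spend up to $200 on my behalf, only at these
# merchants, only until Friday."
mandate = {"limitUsd": 200.0,
           "merchants": ["research-api.example.com"],
           "expires": "2026-04-10T00:00:00+00:00"}
```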
For cross-organizational calls, the dominant pattern is OAuth 2.0 client credentials with narrowly scoped tokens, combined with signed Agent Cards for identity verification. Internal deployments often prefer mutual TLS inside the service mesh; see the Istio guide for that pattern.
A2A with the Claude Agent SDK
The Claude Agent SDK does not ship an A2A server out of the box, but wrapping a Claude-backed agent behind an A2A endpoint is a small amount of glue code. The pattern: expose your Claude agent as an A2A skill, translate incoming A2A messages into Claude messages, and stream Claude's output back as A2A TaskArtifactUpdateEvents.
// a2a-claude-agent.ts
// Wrap a Claude Agent SDK session behind an A2A endpoint.
import { A2AServer } from "@a2a/server"; // hypothetical but follows spec
import { query } from "@anthropic-ai/claude-agent-sdk";
const a2a = new A2AServer({
agentCard: {
protocolVersion: "1.0",
name: "research-agent",
description: "Deep research agent powered by Claude.",
interfaces: [{ transport: "JSONRPC", url: process.env.A2A_URL! }],
capabilities: { streaming: true, pushNotifications: true },
securitySchemes: {
bearer: { type: "http", scheme: "bearer" }
},
skills: [{
id: "research",
name: "Research a topic",
inputModes: ["text/plain"],
outputModes: ["application/json", "text/markdown"]
}]
}
});
a2a.onSkill("research", async (task, send) => {
const prompt = task.message.parts
.filter(p => p.kind === "text")
.map(p => p.text)
.join("\n");
await send.status("working");
// Claude Agent SDK: stream the response
for await (const ev of query({
prompt,
options: { model: "claude-opus-4-6", permissionMode: "acceptEdits" }
})) {
if (ev.type === "text_delta") {
await send.artifact({
parts: [{ kind: "text", text: ev.delta }],
lastChunk: false
});
}
if (ev.type === "done") {
await send.artifact({
parts: [{ kind: "text", text: "" }],
lastChunk: true
});
await send.status("completed");
}
}
});
a2a.listen(8080);
The inverse -- a Claude agent that calls another agent via A2A -- is equally useful. Expose A2A as a regular Claude tool: the tool's job is to POST a tasks/sendStreaming request, read the SSE stream, and return the final artifact to Claude. This is how you compose specialist agents: your main Claude loop calls delegate_to_legal_review or delegate_to_forecast_agent, and the tool wrapper handles the A2A protocol mechanics.
# Python: Claude tool that delegates to an A2A peer agent
from anthropic import Anthropic
import httpx, json
async def delegate_to_a2a(agent_url: str, skill: str, prompt: str,
token: str) -> dict:
body = {
"jsonrpc": "2.0", "id": "1",
"method": "tasks/sendStreaming",
"params": {
"skill": skill,
"message": {
"messageId": "m1", "role": "user",
"parts": [{"kind": "text", "text": prompt}]
}
}
}
async with httpx.AsyncClient(timeout=None) as c:
async with c.stream("POST", agent_url,
headers={"Authorization": f"Bearer {token}",
"Accept": "text/event-stream"},
json=body) as r:
final = None
async for line in r.aiter_lines():
if line.startswith("data:"):
evt = json.loads(line[5:].strip())
if evt.get("artifact") and evt["artifact"].get("lastChunk"):
final = evt["artifact"]
return final
# Register as a Claude tool via the Agent SDK; Claude calls it like any
# other tool, and the wrapper hides all A2A plumbing.
A2A + MCP: The Hybrid Architecture
The canonical production architecture in 2026 uses both protocols. Each agent connects to its tools via MCP. Agents connect to other agents via A2A. The result is a two-layer graph: the tool layer is private to each agent (a Postgres MCP server, a Slack MCP server, a filesystem MCP server -- things the agent uses but does not share), and the agent layer is public between teams (A2A endpoints advertising reasoning capabilities).
+--------- A2A ----------+
| |
+---- Planner Agent (Team A) ---+ +-- Legal Agent (Team B) --+
| | | |
| MCP: jira, github, slack, | | MCP: documentdb, |
| filesystem, email | | caselaw-search, |
| | | redaction-tool |
+-------------------------------+ +--------------------------+
^ private to Team A ^ private to Team B
(MCP inside the agent's trust domain)
A2A (public contract, signed cards,
OAuth tokens, mandates)
Choosing between them is straightforward. Is the callee a deterministic function you own? MCP. Is the callee another agent -- especially one owned by a different team, vendor, or cloud? A2A. Teams that try to use A2A for everything end up paying the task-lifecycle overhead on short tool calls; teams that try to use MCP for everything end up hand-rolling cross-team authentication, versioning, and discovery from scratch and eventually rebuild A2A badly.
The composition pattern matters for multi-agent architectures too. A supervisor agent typically has a small number of A2A peers (specialist agents) and a large number of MCP tools (its own hands). The supervisor's own prompt never needs to know the specialist's internals -- only its Agent Card.
Production Deployments: Azure AI Foundry and AWS Bedrock AgentCore
Both major US cloud platforms now treat A2A as a native runtime concern, which means you usually do not implement the protocol from scratch -- the runtime does it for you and you write the agent logic.
Azure AI Foundry and Copilot Studio
In Azure AI Foundry, agents you build with Foundry Agents Service expose an A2A endpoint by default. The platform publishes a signed Agent Card at the Foundry-managed URL, integrates with Entra ID for OAuth, and wires task state into Application Insights for observability. Copilot Studio agents can both publish and consume A2A endpoints, so a Copilot agent can call a Foundry agent or a third-party agent at other.example.com/.well-known/agent-card.json transparently. This is meaningful because it turns every Copilot Studio customer into a latent A2A participant.
AWS Bedrock AgentCore Runtime
Bedrock AgentCore Runtime (see the AWS guide) added an explicit A2A protocol contract. Any agent framework that runs on AgentCore -- Strands, LangGraph, OpenAI's Agents SDK, Google's ADK -- is wrapped by AgentCore with an A2A ingress and an A2A client for outbound calls. AgentCore handles signed Agent Cards via IAM-backed identity, Cognito or Entra for OAuth, and CloudWatch for task telemetry. AWS publishes a canonical A2A multi-agent reference implementation at github.com/madhurprash/A2A-Multi-Agents-AgentCore showing OpenAI, LangGraph, ADK, and Strands agents interoperating over A2A on the same runtime.
Google Cloud: Vertex AI, Agent Engine, ADK
On Google Cloud, A2A is native to the Agent Development Kit (ADK), Vertex AI Agent Engine, and the Agentspace and AI Agent Marketplace surfaces. You can deploy on Agent Engine for a managed path, on Cloud Run for a serverless path, or on GKE for maximum control. Vertex GenAI Evaluation Service treats A2A agents as first-class, evaluating them via their public interfaces rather than requiring internal instrumentation.
Cloudflare: Edge-Hosted A2A
For latency-sensitive agents, Cloudflare Workers with Durable Objects (see the Cloudflare guide) is an interesting A2A host: Durable Objects give you per-task state, Streams give you SSE, and the global anycast network keeps p99 latency low for Agent Card fetches. Cloudflare's production x402 support pairs naturally with A2A's x402 extension for agent payments at the edge.
The A2A x402 Extension: Agents Paying Agents
The A2A x402 extension (github.com/google-agentic-commerce/a2a-x402) is one of the first real-world A2A extensions and probably the most consequential, because it lets agents actually transact. It revives the dormant HTTP 402 "Payment Required" status code for an agent context: when a client agent calls a paid skill, the server agent responds with an A2A task in a payment-required substate, including a payment instruction (amount, currency, destination wallet, facilitator URL). The client signs a stablecoin payment using Coinbase's x402 protocol, resubmits the task with the payment attached, and receives the output.
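The handshake reduces to a call-pay-retry loop. In this sketch the transport and signing are injected as callables, and the field names (`payment-required`, `paymentInstruction`, `payment`) are illustrative — see the a2a-x402 repository for the normative schema:

```python
def call_paid_skill(send_task, sign_payment, message: dict) -> dict:
    """Call a skill; if the server demands payment, pay and resubmit."""
    task = send_task(message)
    if task.get("state") == "payment-required":
        # The instruction carries amount, currency, destination wallet,
        # and facilitator URL; the client signs a stablecoin payment...
        proof = sign_payment(task["paymentInstruction"])
        # ...and resubmits the task with the payment attached.
        task = send_task({**message, "payment": proof})
    return task
```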
The extension composes with Google's broader Agent Payments Protocol (AP2) framework and with user-level mandates: a human signs a mandate saying "this agent may spend up to $200 on my behalf this week on research APIs", and the agent presents the mandate plus an x402 payment to each callee. Coinbase's production x402 rollout -- Stripe integration on Base in February 2026, Cloudflare edge support, more than 100 million payments processed across APIs, apps, and AI agents -- is the infrastructure the A2A extension plugs into.
If you are building agents that consume external paid services, x402 over A2A is currently the cleanest monetization pattern: no API-key handshake, no invoice reconciliation, no chargebacks. See the Stripe guide and the Full-Stack AI guide for how to wire this into an application.
Implementation: Minimal A2A Server
You rarely implement A2A from scratch -- the official SDKs at github.com/a2aproject handle Agent Card serving, JSON-RPC routing, SSE framing, and webhook delivery. A minimal Python server using the reference a2a-sdk looks like this.
# a2a_server.py
from a2a.server import A2AServer, TaskContext
from a2a.types import AgentCard, Skill, Part, TaskState
import asyncio
card = AgentCard(
protocol_version="1.0",
name="summarize-agent",
description="Summarizes long documents.",
skills=[Skill(
id="summarize",
name="Summarize a document",
input_modes=["text/plain"],
output_modes=["text/plain"]
)],
capabilities={"streaming": True}
)
server = A2AServer(agent_card=card)
@server.skill("summarize")
async def summarize(ctx: TaskContext):
text = next((p.text for p in ctx.message.parts
if p.kind == "text"), "")
await ctx.update(TaskState.WORKING)
# ... your actual summarization (MCP tool calls,
# LLM inference, whatever) ...
summary = await do_summary(text)
await ctx.artifact(Part(kind="text", text=summary),
last_chunk=True)
await ctx.update(TaskState.COMPLETED)
if __name__ == "__main__":
asyncio.run(server.run(host="0.0.0.0", port=8080))
The SDK serves /.well-known/agent-card.json, accepts JSON-RPC on /a2a, emits SSE for streaming, and applies whatever auth middleware you configure. Deploy the container behind any ingress -- Cloud Run, ECS, Kubernetes, Workers -- and register a signed Agent Card in your public DNS.
Security and Production Notes
Five failure modes are specific to A2A deployments in April 2026 and worth calling out explicitly.
- Unsigned Agent Cards are an anti-pattern post-v1.0. Always sign. A client fetching an unsigned card from an external domain should treat it the way a browser treats a self-signed TLS certificate.
- Unbounded task cost. A callee agent can run for hours and invoke other paid agents. Always set per-task budgets (tokens, wall clock, money) and enforce them with mandates. The x402 extension pairs with this naturally.
- Prompt-injection across agents. Content retrieved by a callee agent can contain instructions that hijack the caller. Keep the caller's system prompt strict about ignoring instructions found inside A2A artifacts, and do structured-output validation rather than trusting free text.
- Replay and idempotency. Use messageId and taskId as idempotency keys. Agents should reject a second tasks/send with the same messageId within the replay window.
- Observability. Instrument every A2A call the way you instrument outbound HTTP: trace id propagation via headers, task id in every log line, RED metrics (rate/errors/duration) per skill. Azure AI Foundry, AgentCore, and Agent Engine wire this in for you; self-hosted deployments have to do it by hand.
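The structured-output validation point deserves a concrete shape. A minimal hand-rolled check on a callee's artifact before acting on it (illustrative; production code might use jsonschema or Pydantic instead):

```python
def validate_flags(artifact: dict) -> list:
    """Accept only a well-formed 'flags' data part; refuse free text."""
    parts = [p for p in artifact.get("parts", []) if p.get("kind") == "data"]
    if len(parts) != 1:
        raise ValueError("expected exactly one data part")
    flags = parts[0]["data"].get("flags")
    if not isinstance(flags, list):
        raise ValueError("missing 'flags' list")
    for f in flags:
        if not isinstance(f, dict) or not {"clause", "severity"} <= f.keys():
            raise ValueError(f"malformed flag: {f!r}")
        if f["severity"] not in {"low", "medium", "high"}:
            raise ValueError(f"unknown severity: {f['severity']!r}")
    return flags
```

Anything that fails validation is treated as untrusted content, never as instructions.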
Adoption Snapshot (April 2026)
Hyperscalers (AWS, Microsoft, Google), enterprise software vendors (Salesforce, SAP, ServiceNow, IBM, Cisco, Adobe), and multinational end users (Tyson Foods, Gordon Food Service, S&P Global) sit on or contribute to A2A under the Linux Foundation.
The reference repository at github.com/a2aproject/A2A crossed 22,000 stars by the first-year anniversary on April 9, 2026, placing it among the fastest-growing protocol projects in open-source AI infrastructure.
A2A is a first-class runtime feature in Azure AI Foundry (plus Copilot Studio), AWS Bedrock AgentCore, and Google Cloud Vertex AI / Agent Engine / ADK. Agents built on any of these clouds are A2A-callable by default.
Signed Agent Cards, multi-tenancy, multi-protocol bindings (JSON-RPC, gRPC, REST), and mandate-based delegation landed in v1.0 (March 2026), closing the last gaps that kept A2A out of regulated enterprises.