Why ACP Changes the Way We Build with Agents

From tool-calling to agent coordination

[Diagram: agents communicating through the ACP protocol]

For the past year, MCP (Model Context Protocol) has been the connective tissue of the AI tooling world. It gave language models a standardized way to reach out and touch things: query a database, call an API, read a file, search the web. The model calls, the tool responds, the model continues. Clean. Simple. Useful.

But as teams start building systems where agents need to coordinate with other agents, MCP's limitations become apparent. That's what ACP (Agent Communication Protocol) addresses, and the architectural difference changes how you build.


What MCP Actually Does

MCP operates on a client-server model. The AI model is the client. MCP servers expose three primitive types: tools (functions to call), resources (data to read), and prompts (templated instructions). When a model needs something, it reaches out synchronously to a server, gets a response, and incorporates it into its reasoning.
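Under the hood, an MCP tool call is a JSON-RPC 2.0 request/response pair. A minimal sketch of the request envelope (the `tools/call` method and `params` shape follow the MCP spec; the tool itself, `query_orders`, is a made-up example):

```python
import json

# Sketch of the JSON-RPC 2.0 envelope MCP uses for a tool call.
# "tools/call" and the params shape come from the MCP spec;
# the tool name and arguments are hypothetical.
def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

request = make_tool_call(1, "query_orders", {"customer_id": "c-42"})
parsed = json.loads(request)
```

The model sends this to a server, blocks for the result, and folds it into its next reasoning step.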

This works well when the intelligence lives in one place. The model thinks, decides what it needs, fetches it, and keeps thinking. Context is centralized. Control is singular. There's one agent, and it has access to many tools.

The limitation shows up when you want multiple agents working together. MCP doesn't have a concept of an agent delegating to another agent. You can wire up a tool that internally calls another model, but that's an implementation hack, not a protocol feature. The orchestration logic, error handling, routing, and context passing all fall on you to build from scratch. Every team reinvents it differently, and most get it wrong the first few times.


What ACP Actually Does

ACP, developed through the BeeAI project, treats agents as first-class citizens in a network. Where MCP says "here are tools a model can call," ACP says "here are agents that can communicate with each other."

Technically, ACP is REST-based. No new transport layer to learn, just HTTP. Agents expose a standardized interface. Communication supports streaming via Server-Sent Events (SSE), so agents can send partial results while still working instead of blocking the caller. Messages are multimodal by default: text, files, and structured data travel together.
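The streaming side is plain SSE: `data:` lines, with a blank line terminating each event. A minimal consumer sketch (the event framing is standard Server-Sent Events; the payloads are invented):

```python
# Minimal sketch of consuming an SSE stream from an agent.
# The framing (data: lines, blank-line delimiters) is standard
# Server-Sent Events; the payload contents are hypothetical.
def parse_sse(raw: str):
    """Yield the data payload of each complete SSE event."""
    data_lines = []
    for line in raw.splitlines():
        if line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and data_lines:
            yield "\n".join(data_lines)  # blank line ends the event
            data_lines = []

stream = "data: partial result 1\n\ndata: partial result 2\n\n"
events = list(parse_sse(stream))
```

Each yielded event is a partial result the caller can act on while the agent keeps working.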

The most consequential feature is async-first design. An orchestrator can hand a task to a specialist agent and move on. When the specialist finishes (or has an intermediate update), it pushes a response. This isn't just a performance optimization. It's what makes long-running, parallel, multi-agent workflows actually work instead of timing out or blocking.
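The hand-off pattern looks roughly like this (a sketch using `asyncio`; the specialist is simulated with a sleep where ACP would make an HTTP request):

```python
import asyncio

# Sketch of the async hand-off: delegate, keep working, collect later.
# The specialist agent is mocked; in ACP it would be a REST call.
async def specialist(task: str) -> str:
    await asyncio.sleep(0.01)  # stand-in for a long-running agent
    return f"done: {task}"

async def orchestrate() -> str:
    pending = asyncio.create_task(specialist("summarize account"))
    # ... the orchestrator is free to route other work here ...
    return await pending  # collect when the specialist reports back

result = asyncio.run(orchestrate())
```

The key point is that `orchestrate` never blocks on the specialist until it actually needs the answer.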

ACP also defines agent discovery. Agents register themselves with metadata: what they do, what inputs they accept, what outputs they produce. Orchestrators look up agents dynamically instead of hardcoding them. Your agent network becomes something you can compose and reconfigure at runtime.
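The discovery idea can be sketched with an in-memory registry (ACP does registration and lookup over REST; the dict shape, tags, and endpoints here are purely illustrative):

```python
# Hypothetical in-memory registry illustrating discovery by metadata.
# Agent names, tags, and endpoints are invented for the example.
registry = [
    {"name": "account-history", "tags": {"crm", "lookup"},
     "endpoint": "http://agents/history"},
    {"name": "contract-terms", "tags": {"legal", "lookup"},
     "endpoint": "http://agents/contracts"},
]

def find_agents(tag: str) -> list[str]:
    """Return the names of registered agents matching a capability tag."""
    return [a["name"] for a in registry if tag in a["tags"]]

matches = find_agents("legal")
```

An orchestrator queries by capability instead of hardcoding which agent handles what.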


Applying It in Practice

The practical mental model shift is this: with MCP, you design around a central intelligence with peripheral tools. With ACP, you design around a network of specialized agents, each capable of reasoning, with an orchestrator handling routing and coordination.

A concrete example: say you're building a workflow that takes a customer support ticket, researches the account history, checks contract terms, and drafts a response. With MCP and a single model, that model handles everything sequentially, accumulating context as it goes. With ACP, you can have a specialized account-history agent, a contract-interpretation agent, and a response-drafting agent, all invoked by an orchestrator and running in parallel where their inputs allow it. The orchestrator synthesizes their outputs.
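The fan-out in that ticket workflow can be sketched as follows (the three specialist agents are mocked functions; in ACP each would be a REST call to a registered service):

```python
import asyncio

# Sketch of the ticket workflow: two independent specialists run in
# parallel, then a drafting step consumes both. All agents are mocked.
async def account_history(ticket: str) -> str:
    return f"history for {ticket}"

async def contract_terms(ticket: str) -> str:
    return f"terms for {ticket}"

async def handle_ticket(ticket: str) -> str:
    # History and contract lookups have no mutual dependency, so fan out.
    history, terms = await asyncio.gather(
        account_history(ticket), contract_terms(ticket)
    )
    # Drafting depends on both outputs, so it runs after.
    return f"draft using [{history}] and [{terms}]"

draft = asyncio.run(handle_ticket("T-1001"))
```

The dependency structure, not the prompt, decides what runs in parallel.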

Why does this matter? Specialists outperform generalists on narrow tasks. Parallel execution beats sequential. And each agent can be updated, replaced, or scaled independently. Change the contract-interpretation agent without touching the rest of the system.

The setup looks like this in practice:

  1. Define your agents as ACP-compliant services. Each one has a REST endpoint, handles a defined input schema, and returns a structured output. If your agent does work that takes time, it uses SSE to stream progress.
  2. Register agents in a registry. This makes them discoverable to orchestrators and to each other. Tags and metadata let an orchestrator find the right specialist without hardcoded routing logic.
  3. Build an orchestrator that speaks ACP. The orchestrator receives a task, queries the registry for relevant agents, fans out work, and assembles results. The orchestrator itself is an ACP agent, so it can be composed into higher-level workflows.
  4. Handle context explicitly. This is where most implementations run into friction. When you hand a task from one agent to another, the receiving agent needs enough context to do its job without re-deriving everything from scratch. The message envelope carries this, but you need to be deliberate about what you include.
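Step 4 is worth making concrete. A hypothetical envelope that carries a task plus only the context the next agent needs (field names are illustrative, not part of the ACP spec):

```python
from dataclasses import dataclass, field

# Hypothetical message envelope: the task plus the minimum context
# the receiving agent needs. Field names are invented for the sketch.
@dataclass
class Envelope:
    task: str
    context: dict = field(default_factory=dict)

    def forward(self, next_task: str, keep: set[str]) -> "Envelope":
        """Hand off to the next agent, carrying only relevant context."""
        trimmed = {k: v for k, v in self.context.items() if k in keep}
        return Envelope(task=next_task, context=trimmed)

msg = Envelope("draft reply", {"account_id": "a-9", "raw_ticket_html": "<...>"})
handoff = msg.forward("check contract", keep={"account_id"})
```

Being explicit about `keep` is the deliberate part: everything you forward costs payload size, and everything you drop the next agent must live without.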

What It Unlocks

Three things open up when you adopt ACP as your agent communication layer.

Genuine specialization. You can make one agent excellent at one task without affecting anything else. A legal-reasoning agent tuned on contract language. A code-review agent trained on your internal standards. These exist as independent services that any workflow can call.

Composable pipelines. Complex workflows stop being monolithic prompts and become networks of agents with defined interfaces. Adding a new step means registering a new agent and updating the orchestrator's routing logic, not rewriting a giant system prompt.

Context as a first-class concern. When a task crosses an agent boundary, context needs to travel with it. This is the hardest part. Agents don't share memory; each one starts cold. The quality of your context-passing architecture determines whether your agent network behaves like a coherent system or a collection of isolated components that occasionally talk past each other.

This is what makes shared context infrastructure critical at the ACP layer. An agent network that maintains relevant context across boundaries, without bloating every message payload, is what separates agent systems that work in demos from agent systems that work in production.

MCP gave the model a way to reach out and grab things. ACP gives agents a way to work together. The latter is the architecture that scales.


Brief is building the context layer for multi-agent systems. With `brief ask`, coding agents consult product context before they build, so the orchestrator doesn't have to stuff every message with background. When your agents need shared memory without payload bloat, that's the problem we're solving.
