
Agent Architectures: MCP, A2A, and Beyond

How the Model Context Protocol, Agent-to-Agent protocols, and emerging standards shape the modern AI agent stack — and where private data infrastructure fits in.

By ipto.ai Research

The agent protocol landscape

The AI agent ecosystem is converging on a set of open protocols that define how models interact with tools, data, and each other. Three protocols matter most today: Anthropic’s Model Context Protocol (MCP), Google’s Agent2Agent (A2A) protocol, and the emerging Agent Communication Protocol (ACP) from IBM and others.

Each solves a real problem. None solves the full stack. Understanding what each protocol addresses — and what it deliberately leaves out — is essential for anyone building agent infrastructure.

What MCP solves: tool and context standardization

The Model Context Protocol, open-sourced by Anthropic in late 2024 and now widely adopted, standardizes how AI models connect to external tools and data sources. Before MCP, every model-tool integration was bespoke: custom connectors, ad hoc authentication, inconsistent response formats.

MCP defines a client-server architecture where:

  • MCP servers expose tools, resources, and prompts through a uniform interface
  • MCP clients (typically AI models or agent frameworks) discover and invoke those capabilities
  • Transport is standardized over JSON-RPC 2.0, with support for both local (stdio) and remote (HTTP, originally via Server-Sent Events and since revised to Streamable HTTP) connections
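The wire format is plain JSON-RPC 2.0. A minimal sketch of a tool invocation — the `tools/call` method comes from the MCP specification, while the tool name and arguments here are purely illustrative:

```python
import json

# Illustrative MCP tool invocation: a JSON-RPC 2.0 request calling a
# hypothetical "search_crm" tool exposed by an MCP server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",          # MCP's standard tool-invocation method
    "params": {
        "name": "search_crm",        # tool name (illustrative)
        "arguments": {"query": "accounts renewing in Q3"},
    },
}

wire = json.dumps(request)  # serialized and sent over stdio or HTTP transport
print(wire)
```

Whatever sits behind the server — a CRM, a database, a file system — the request envelope looks the same, which is the point of the protocol.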

This is genuinely valuable. An MCP server for a CRM exposes the same interface shape as an MCP server for a database or a file system. Agent frameworks can integrate once and connect to hundreds of tools.

But MCP is a connectivity protocol, not a data infrastructure protocol. It defines how to call a tool and get a response. It does not define how to price that response, verify its provenance, enforce granular access control, or log the event for compliance. These concerns are left to the implementation behind the MCP server.

What A2A solves: agent discovery and delegation

Google’s Agent2Agent protocol addresses a different problem: how do agents find, negotiate with, and delegate tasks to other agents?

A2A introduces:

  • Agent Cards — JSON metadata documents that describe an agent’s capabilities, endpoints, and authentication requirements
  • Task lifecycle management — a state machine for creating, running, and completing tasks across agent boundaries
  • Artifact exchange — structured payloads for passing results between agents
  • Push notifications — so long-running delegated tasks can report back asynchronously
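An Agent Card is just a JSON document that other agents can fetch and parse. A minimal sketch — the field names approximate the A2A specification, and the endpoint and skill are hypothetical:

```python
import json

# Illustrative A2A Agent Card: JSON metadata another agent fetches to
# discover this agent's capabilities. Field names approximate the A2A
# spec; the URL and skill are invented for this sketch.
agent_card = {
    "name": "data-retrieval-agent",
    "description": "Secure retrieval over private enterprise data",
    "url": "https://agents.example.com/a2a",  # hypothetical A2A endpoint
    "capabilities": {"streaming": True, "pushNotifications": True},
    "skills": [
        {"id": "private-retrieval",
         "description": "Priced, permissioned, audited retrieval"},
    ],
}

card_json = json.dumps(agent_card, indent=2)
print(card_json)
```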

In a multi-agent system, A2A enables a planning agent to discover a data retrieval agent, delegate a query, and receive structured results — all without hard-coded integrations. This is the orchestration layer.

ACP from IBM follows a similar trajectory, focusing on agent interoperability with an emphasis on enterprise patterns like capability negotiation and structured message passing.

The gap: what protocols leave out

Here is the architectural reality: MCP gives agents a standard way to call tools. A2A gives agents a standard way to collaborate. Neither provides the infrastructure required to safely serve private enterprise data.

When an agent retrieves a confidential financial document through an MCP server, the protocol itself does not answer:

  • Who pays? There is no pricing metadata, metering, or settlement mechanism in the MCP specification.
  • Who is authorized? MCP supports authentication at the transport level, but granular per-document, per-tenant, per-agent permissions require a separate system.
  • Where did this data come from? Provenance — the chain from source document to retrieval unit — is not part of the protocol.
  • Who accessed what, when, and why? Audit logging for compliance is an implementation concern, not a protocol concern.

This is not a criticism of MCP or A2A. Protocols should be narrow and composable. But the gap between protocol and production is where most enterprise agent deployments stall.

The four-layer private data stack

Filling this gap requires dedicated private data infrastructure. The stack has four layers, each addressing a specific concern that protocols leave unresolved.

Retrieval. Raw private data — documents, databases, APIs — is transformed into structured, agent-consumable retrieval units. Each unit contains extracted facts, confidence scores, source provenance, and modality information. This is not chunked text from a vector database. It is structured intelligence designed for autonomous consumption. See https://docs.ipto.ai for the retrieval unit specification.
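As a sketch, a retrieval unit might be modeled like this — every field name below is illustrative, not the ipto.ai specification (see the docs link above for that):

```python
from dataclasses import dataclass

# Hypothetical shape of a retrieval unit: a structured record rather than
# a raw text chunk. All field names are illustrative.
@dataclass
class RetrievalUnit:
    facts: list[str]   # extracted facts, not raw chunk text
    confidence: float  # extraction confidence, 0.0 to 1.0
    source: str        # provenance: the originating document or record
    modality: str      # "text", "table", "image", ...

unit = RetrievalUnit(
    facts=["FY2024 revenue grew 12% year over year"],
    confidence=0.93,
    source="s3://finance/reports/fy2024-10k.pdf",
    modality="text",
)
assert 0.0 <= unit.confidence <= 1.0
```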

Pricing. Every retrieval event is metered. Data owners define pricing terms — per-retrieval fees, citation premiums, exclusivity tiers — and the platform enforces them at query time. This creates the economic layer that incentivizes private data supply.
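A toy metering function illustrates the idea; the fee values and event shape are invented for this sketch:

```python
# Hypothetical metering sketch: apply a data owner's pricing terms to a
# batch of retrieval events at query time. All numbers are illustrative.
PRICING_TERMS = {
    "per_retrieval_fee": 0.002,  # charged for every retrieval unit served
    "citation_premium": 0.010,   # added when the agent cites the unit
}

def meter(events):
    """Return the total charge for a list of retrieval events."""
    total = 0.0
    for event in events:
        total += PRICING_TERMS["per_retrieval_fee"]
        if event.get("cited"):
            total += PRICING_TERMS["citation_premium"]
    return round(total, 6)

# Three retrievals, one of which was cited:
charge = meter([{"cited": True}, {"cited": False}, {"cited": False}])
print(charge)
```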

Trust. Granular access controls operate at the tenant, user, and agent level. Provenance verification traces every retrieval unit back to its source. Usage policies define how data can be cited, acted upon, or redistributed. Revocation is immediate.

Audit. Every event — query, retrieval, citation, action — generates an immutable audit record. This satisfies compliance requirements in regulated industries and provides the usage-to-outcome feedback loop that improves retrieval quality over time.
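One common way to make an audit trail tamper-evident is hash chaining: each record embeds the hash of its predecessor, so any later edit breaks the chain. The sketch below shows the general technique with illustrative event fields, not ipto.ai's implementation:

```python
import hashlib
import json

# Append-only, tamper-evident audit trail via hash chaining.
def append_record(chain, event):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    """Recompute every digest; any edited record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {"event": rec["event"], "prev": rec["prev"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True

log = []
append_record(log, {"type": "query", "agent": "planner-1"})
append_record(log, {"type": "retrieval", "units": 3})
assert verify(log)
```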

How the layers connect

The following architecture shows how agent protocols and private data infrastructure integrate in a production deployment:

Agent (LLM + Framework)
    |                       |
    | MCP                   | A2A
    v                       v
MCP Server              Data Agent
(ipto.ai)               (ipto.ai-backed)
    |                       |
    +-----------+-----------+
                |
        [Retrieval Layer]
                |
        [Pricing Layer]
                |
        [Trust Layer]
                |
        [Audit Layer]
                |
         +------+------+
         |             |
   Private Data     External
   Sources          APIs
The agent connects through standard protocols — MCP for direct tool access, A2A for delegated retrieval through a specialized data agent. Behind both interfaces, the four-layer stack handles retrieval, pricing, trust, and audit before any private data reaches the agent.

This architecture means agent developers write to standard protocols. They do not need custom integrations for data access control, billing, or compliance. The infrastructure handles it.

Integration patterns

The most common production pattern today is an MCP server backed by ipto.ai retrieval infrastructure. This works as follows:

  1. The agent framework connects to an ipto.ai MCP server using standard MCP transport
  2. The agent issues a retrieval query through the MCP tool interface
  3. The ipto.ai retrieval layer converts the query into structured retrieval units from private data sources
  4. The pricing layer meters the access and applies the data owner’s pricing terms
  5. The trust layer verifies agent permissions, enforces usage policies, and attaches provenance
  6. The audit layer logs the complete event chain
  7. The agent receives structured retrieval units through the standard MCP response format
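The seven steps above compress into a single request handler. Every function and name below is a hypothetical stub for illustration — the real logic lives behind the ipto.ai MCP server:

```python
# Hypothetical end-to-end sketch of the MCP-server pattern above.
def handle_mcp_tool_call(agent_id, query):
    if not check_permissions(agent_id, query):   # trust layer
        raise PermissionError(agent_id)
    units = retrieve(query)                      # retrieval layer
    charge = meter(units)                        # pricing layer
    audit_log(agent_id, query, units, charge)    # audit layer
    return {"content": units, "charge": charge}  # MCP-shaped response

# Stub implementations so the sketch runs (all invented for illustration):
def check_permissions(agent_id, query):
    return agent_id.startswith("tenant-a/")

def retrieve(query):
    return [{"facts": [f"answer to: {query}"], "source": "doc-17"}]

def meter(units):
    return 0.002 * len(units)  # illustrative per-retrieval fee

AUDIT = []
def audit_log(*event):
    AUDIT.append(event)

result = handle_mcp_tool_call("tenant-a/planner", "q3 revenue drivers")
print(result["charge"])
```

The agent only ever sees the final return value — the standard MCP response — while the intermediate layers run invisibly behind the tool interface.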

From the agent’s perspective, this is just another MCP tool call. From the enterprise’s perspective, every access is priced, permissioned, provenanced, and audited.

A second pattern uses A2A to expose an ipto.ai data agent — an autonomous agent whose sole capability is secure private data retrieval. Other agents discover it via its Agent Card, delegate retrieval tasks through A2A, and receive structured results. This pattern is particularly useful in multi-agent orchestration systems where data retrieval is a specialized capability.
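The delegation flow can be sketched as a small task state machine in the spirit of A2A's task lifecycle — the state names and transitions here are approximate, not the normative spec:

```python
# Illustrative A2A-style task lifecycle for delegating retrieval to a
# data agent. States and transitions approximate the A2A state machine.
TRANSITIONS = {
    "submitted": {"working"},
    "working": {"completed", "failed", "input-required"},
    "input-required": {"working"},
}

class Task:
    def __init__(self, task_id):
        self.id = task_id
        self.state = "submitted"
        self.artifacts = []  # structured result payloads

    def advance(self, new_state, artifact=None):
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        if artifact is not None:
            self.artifacts.append(artifact)

task = Task("task-42")
task.advance("working")
task.advance("completed", artifact={"units": 3, "provenance": "doc-17"})
assert task.state == "completed"
```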

Both patterns are available through the https://api.ipto.ai endpoint, with configuration documented at https://docs.ipto.ai.

Key takeaways

  • MCP standardizes tool connectivity, giving agents a uniform interface to external data sources and tools — but it does not address pricing, trust, or audit
  • A2A standardizes agent collaboration, enabling discovery, negotiation, and delegation between agents — but the underlying data infrastructure is out of scope
  • The gap between protocol and production is where enterprise agent deployments stall: pricing, provenance, access control, and compliance are not protocol concerns
  • Private data infrastructure fills this gap with four integrated layers: retrieval, pricing, trust, and audit
  • MCP and A2A are complementary to private data infrastructure, not competitive — protocols define the interface, infrastructure provides the substance behind it
  • The dominant integration pattern is an MCP server backed by private data infrastructure, giving agents standard connectivity with enterprise-grade data governance
  • Agent developers should build to open protocols and rely on infrastructure providers for the hard problems of data economics, trust, and compliance

Frequently Asked Questions

What is the Model Context Protocol (MCP) and how does it relate to agent data access?

MCP is an open protocol that standardizes how AI models connect to external data sources and tools. It provides a uniform interface for context retrieval, but does not address data pricing, provenance tracking, access control, or audit logging. Private data infrastructure like ipto.ai complements MCP by adding these enterprise-required layers on top of the retrieval interface.

How do agent-to-agent protocols handle data retrieval and trust?

Agent-to-agent protocols like Google's A2A enable agents to discover, negotiate with, and delegate tasks to other agents. When an agent needs private data, A2A defines how it requests help from a specialized data agent — but the underlying data retrieval, pricing, and provenance still require dedicated infrastructure. A2A handles orchestration; the data layer handles the actual retrieval.

Where does private data infrastructure fit in the modern AI agent stack?

Private data infrastructure sits between agent protocols (MCP, A2A) and the actual data sources. It provides four layers: retrieval (converting private data into structured, agent-consumable units), pricing (metering every access event), trust (enforcing permissions and tracking provenance), and audit (logging for compliance). Without this layer, agents can connect to tools but cannot safely access enterprise private data.


ipto.ai is building the private data infrastructure layer for the agent economy.