A2A + KCP

March 8, 2026 · LinkedIn



Every enterprise AI strategy eventually hits the same question: how do agents share knowledge without leaking what they shouldn't?

Two protocols are emerging. People keep framing them as competitors. They're not. They solve different problems.

A2A is the front door. It answers: who is this agent and how do I call it? KCP is the filing cabinet. It answers: what knowledge does this agent have, and who may access each piece?

A2A handles transport-layer auth — can you talk to this agent at all? KCP handles knowledge-access auth — now that you're inside, can you read this specific file? One controls entry. The other controls access to what's behind the door.
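The two layers can be sketched as separate checks. This is a minimal illustration, not code from either spec — the class and method names (TwoLayerAuthDemo, transportAllows, knowledgeAllows) and the token/role values are mine:

```java
import java.util.Map;
import java.util.Set;

public class TwoLayerAuthDemo {
    // Layer 1 (A2A-style): may this caller talk to the agent at all?
    static boolean transportAllows(String bearerToken, Set<String> validTokens) {
        return validTokens.contains(bearerToken);
    }

    // Layer 2 (KCP-style): may this caller read this specific knowledge unit?
    static boolean knowledgeAllows(String callerRole, String unitId,
                                   Map<String, Set<String>> policy) {
        return policy.getOrDefault(unitId, Set.of()).contains(callerRole);
    }

    public static void main(String[] args) {
        Set<String> validTokens = Set.of("tok-123");
        Map<String, Set<String>> policy = Map.of(
            "public-guidelines", Set.of("researcher", "clinician"),
            "patient-cohort",    Set.of("irb-approved"));

        // Entry is granted once; access to each unit is decided separately.
        System.out.println(transportAllows("tok-123", validTokens));                    // true
        System.out.println(knowledgeAllows("researcher", "public-guidelines", policy)); // true
        System.out.println(knowledgeAllows("researcher", "patient-cohort", policy));    // false
    }
}
```

Passing the first check says nothing about the second — which is the whole point of keeping them in different protocols.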

Neither is missing anything. They describe different things.

I wrote a blog post explaining this. Then I built a runnable simulator to make the claim concrete.

Four scenarios. Clinical research, energy metering, legal delegation chains, financial AML. A Java simulation where an orchestrator discovers a research agent via A2A, authenticates via OAuth2, then the agent evaluates per-unit KCP policy. Public guidelines load immediately. Trial protocols require token validation. Patient cohort data triggers a human-in-the-loop gate and a W3C audit entry.
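The per-unit decision logic in the clinical scenario reduces to a tier-to-action mapping. A sketch, with tier and decision names of my own invention (the actual simulator's types may differ):

```java
import java.util.Map;

public class SensitivityTierDemo {
    // Illustrative tiers mirroring the clinical scenario; not KCP spec vocabulary.
    enum Tier { PUBLIC, CONTROLLED, RESTRICTED }
    enum Decision { LOAD, VALIDATE_TOKEN, HUMAN_GATE_AND_AUDIT }

    static Decision decide(Tier tier) {
        return switch (tier) {
            case PUBLIC     -> Decision.LOAD;                 // guidelines load immediately
            case CONTROLLED -> Decision.VALIDATE_TOKEN;       // trial protocols require token validation
            case RESTRICTED -> Decision.HUMAN_GATE_AND_AUDIT; // cohort data: human gate + audit entry
        };
    }

    public static void main(String[] args) {
        Map<String, Tier> units = Map.of(
            "public-guidelines", Tier.PUBLIC,
            "trial-protocol-07", Tier.CONTROLLED,
            "patient-cohort-A",  Tier.RESTRICTED);
        units.forEach((id, tier) -> System.out.println(id + " -> " + decide(tier)));
    }
}
```

The gate for RESTRICTED units is where the human-in-the-loop pause and the audit write would hang off in a real implementation.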

150 tests. All passing.

The interesting part wasn't building it. It was what the tests forced me to confront: the spec had gaps that writing about it had missed. Capability attenuation was declarative but not mechanically enforced. Delegation depth counting had no normative definition.
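Mechanical enforcement of attenuation is a subset check at delegation time; depth counting is an increment with a cap. A minimal sketch — the Grant record, the subset rule, and the MAX_DEPTH value are all my assumptions, precisely because the spec doesn't pin them down:

```java
import java.util.Set;

public class AttenuationDemo {
    // Grant = a capability set plus how many delegation hops produced it.
    record Grant(Set<String> capabilities, int depth) {}

    static final int MAX_DEPTH = 3; // assumed limit; the spec gives no normative definition

    // Mechanical attenuation: a delegate may narrow, never widen, the parent grant.
    static Grant delegate(Grant parent, Set<String> requested) {
        if (parent.depth() + 1 > MAX_DEPTH)
            throw new IllegalStateException("delegation chain too deep");
        if (!parent.capabilities().containsAll(requested))
            throw new IllegalArgumentException("attenuation violation: capability widening");
        return new Grant(Set.copyOf(requested), parent.depth() + 1);
    }

    public static void main(String[] args) {
        Grant root = new Grant(Set.of("read:protocols", "read:guidelines"), 0);
        Grant narrowed = delegate(root, Set.of("read:guidelines")); // OK: strict subset
        System.out.println(narrowed.capabilities() + " at depth " + narrowed.depth());
        try {
            delegate(narrowed, Set.of("read:cohorts")); // rejected: not in parent grant
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Whether depth counts hops, agents, or re-delegations by the same agent is exactly the kind of question a declarative spec can leave open and a test suite cannot.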

Building the thing found what writing about it couldn't.

Blog post and simulator: https://lnkd.in/eFFWfRhS
