
March 2026

The KCP Ecosystem: How Five Tools Turn Claude Code Into a Persistent Intelligence Platform

The Problem

Every session with Claude Code starts from zero.

Every AI session starts from zero — the Start-From-Zero Loop

You open a new session, and the model has no idea what you were doing yesterday. Which services are running. What you decided about the database schema last Thursday. Why you chose the library you chose. You re-explain it. Claude asks clarifying questions you answered two sessions ago. You paste the same background context you always paste. Then the work begins.

And when the work does begin, there's a different problem: output flooding the context window. Run mvn package and you get 400 lines of Maven lifecycle noise. Run terraform plan and the diff buries the actual changes in scaffolding. Run kubectl get pods cluster-wide and you've spent 8,000 tokens on status rows you didn't need.
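The kind of noise filtering described above can be sketched in a few lines. This is an illustrative sketch, not KCP's actual implementation; the patterns shown are examples of Maven and Terraform boilerplate, not the filters the manifests ship.

```python
import re

# Illustrative noise patterns for trimming verbose CLI output before it
# reaches the model's context window. These regexes are examples only.
NOISE_PATTERNS = [
    re.compile(r"^\[INFO\] Downloading"),    # Maven dependency chatter
    re.compile(r"^\[INFO\] Downloaded from"),
    re.compile(r"^Refreshing state\.\.\."),  # Terraform scaffolding
]

def strip_noise(output: str, patterns=NOISE_PATTERNS) -> str:
    """Keep only the lines that match none of the noise patterns."""
    kept = [line for line in output.splitlines()
            if not any(p.search(line) for p in patterns)]
    return "\n".join(kept)

raw = "[INFO] Downloading from central: ...\nBUILD SUCCESS"
print(strip_noise(raw))  # only "BUILD SUCCESS" survives
```

Even a crude filter like this recovers most of the tokens a build tool wastes; the hard part, which the manifests encode per tool, is knowing which lines are safe to drop.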

Context flooding destroys working memory — 33.7% of a 200K context is recovery overhead

The context window is your working memory. Filling it with boilerplate and re-explaining the same setup repeatedly is waste — not just inconvenient, but structurally limiting. A 200K token context sounds vast until a third of it is recovery overhead.

What's missing is infrastructure. Not smarter prompting. Not longer context. Infrastructure — a persistent layer that handles memory, filters noise, and gives the model the right knowledge at the right moment without you having to manage it manually.

That infrastructure is KCP.

kcp-dashboard: Observability for the KCP Ecosystem

The KCP toolchain has been running in the background for weeks. kcp-commands injects manifests before Bash calls. kcp-memory indexes sessions and tool events. Events accumulate in ~/.kcp/usage.db and ~/.kcp/memory.db. The machinery works. But until today, the only way to know whether it was working well was to grep through databases and trust the numbers.
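The grep-through-the-databases workflow looks roughly like this. The table and column names below are assumptions for illustration; the real schema of ~/.kcp/usage.db is not documented in this post, and an in-memory database stands in for the real file.

```python
import sqlite3

# Hypothetical sketch of inspecting KCP event data by hand. The schema
# here (an "events" table with command/guided columns) is assumed, not
# the actual kcp-memory schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (command TEXT, guided INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [("kubectl", 1), ("kubectl", 1), ("mvn", 0)])

# Raw counts per command: numbers you can read, but no trend, no
# context, nothing that tells you whether guidance is improving.
rows = conn.execute(
    "SELECT command, SUM(guided), COUNT(*) FROM events GROUP BY command"
).fetchall()
for command, guided, total in rows:
    print(f"{command}: {guided}/{total} calls guided")
```

This is exactly the gap the dashboard closes: the queries are easy, but the interpretation (trends, outliers, gaps) is what the raw tables never show you.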

Trust is not observability. You cannot improve what you cannot see.

Today we are releasing kcp-dashboard v0.22.0 -- a terminal UI that reads both KCP databases and shows you what the guidance layer is actually doing: which commands are guided, how often manifests leave the agent needing more help, what sessions look like, and where the gaps are.

The Faster Pencil

AI does not remove the hard part of any job. It moves it — and makes it harder to ignore.

Based on a conversation between two software developers, March 2026.


Two developers were talking late one night about what AI had actually changed in their work. They had both been using it for years. They were good at it. And what they kept coming back to was something that surprised them: the more capable the tool got, the more it demanded of them — not less.

This essay is built on that conversation. But the idea they landed on has nothing to do with software. It applies to any job where thinking is the work.

Every Agent That Queries a Knowledge Manifest Reinvents Filtering

Your agent has a task, a token budget, and a manifest with 200 knowledge units. Which units should it actually read? Every team answers this question differently — custom audience filters, ad-hoc staleness checks, bespoke capability gates. The logic works, but none of it interoperates. Swap one tool for another and you rewrite the glue.
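The ad-hoc filtering every team reinvents tends to collapse into the same three gates. Here is a minimal sketch of that pattern; the field names (audience, updated_days_ago, tokens) are hypothetical and do not reflect the RFC-0014 schema.

```python
from dataclasses import dataclass

# A hypothetical knowledge unit. Real manifests have richer metadata;
# these fields exist only to illustrate the three common gates.
@dataclass
class Unit:
    name: str
    audience: str
    updated_days_ago: int
    tokens: int

def select(units, audience, max_age_days, token_budget):
    """Audience gate + staleness check + token budget, in one pass.
    Freshest units are considered first."""
    chosen, spent = [], 0
    for u in sorted(units, key=lambda u: u.updated_days_ago):
        if u.audience != audience or u.updated_days_ago > max_age_days:
            continue
        if spent + u.tokens > token_budget:
            break
        chosen.append(u)
        spent += u.tokens
    return chosen

units = [Unit("deploy-guide", "agent", 3, 400),
         Unit("style-guide", "human", 1, 300),
         Unit("old-runbook", "agent", 90, 200)]
picked = select(units, audience="agent", max_age_days=30, token_budget=1000)
print([u.name for u in picked])  # ['deploy-guide']
```

Every team writes some version of this function; the point of standardising the query is that this glue stops being bespoke and starts travelling with the manifest.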

KCP v0.14 standardises the query. RFC-0014 standardises composition. Together, they solve the two problems that make knowledge manifests painful at scale.

Peter Naur Was Right in 1985, and AI Just Proved It

In 1985, the Danish computer scientist Peter Naur published a short paper called "Programming as Theory Building." His argument was simple and radical: a program is not its source code. A program is a theory — a coherent mental model of what the system does, how its parts relate to each other, and why it was built the way it was. The source code is a byproduct of that theory. A trace. Not the thing itself.

The Manifest Quality Feedback Loop

kcp-commands ships 291 manifests. Each one is a bet: that the flags we chose are the ones the agent will actually need, that the output filter is tight enough, that the preferred invocations match real usage. Some of those bets pay off. Some do not.

Until now there was no way to know which. A manifest for kubectl apply could be steering the agent into the wrong flags on every invocation, and we would never see it unless we happened to watch the session in real time. At 291 manifests and hundreds of tool calls per day, that does not scale.

Today we are shipping two small releases that close that gap: kcp-commands v0.15.0 and kcp-memory v0.7.0. Together they create a feedback loop from agent behaviour back to manifest quality -- not by guessing, but by measuring what actually happened.

From Instrumentation to Infrastructure

AI agents like Claude Code run dozens of CLI commands per session, orchestrating complex multi-step workflows. Without structured knowledge of each tool, the agent guesses flags, calls --help to discover syntax, or retries when the first attempt fails. Each mistake compounds: a wrong flag in step 3 can invalidate everything that follows.

kcp-commands solves this with manifests -- YAML files that encode exactly what an agent needs to use a CLI tool correctly: key flags, preferred invocations, output patterns to strip. The daemon injects the right manifest before each Bash call, turning an uninformed first attempt into a guided one.
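To make the three kinds of knowledge concrete, here is a hypothetical manifest shown as a Python dict rather than YAML. The field names and the kubectl content are invented for illustration; the actual manifest schema is not documented in this post.

```python
# Hypothetical shape of a kcp-commands manifest. Field names and values
# are assumptions, chosen only to illustrate the three ingredients the
# post describes: key flags, preferred invocations, output filtering.
manifest = {
    "tool": "kubectl",
    "key_flags": {
        "-n": "target a namespace instead of scanning cluster-wide",
        "-o": "choose output format (json, yaml, wide)",
    },
    "preferred_invocations": [
        "kubectl get pods -n <namespace>",
        "kubectl apply -f <file> --dry-run=server",
    ],
    # Output patterns to strip before results reach the context window.
    "strip_patterns": [r"^Warning: ", r"^NAME\s+READY\s+STATUS"],
}

def render_guidance(m):
    """Turn a manifest into a short preamble for injection before a call."""
    lines = [f"Tool: {m['tool']}"]
    lines += [f"  {flag}: {why}" for flag, why in m["key_flags"].items()]
    lines += [f"  prefer: {inv}" for inv in m["preferred_invocations"]]
    return "\n".join(lines)

print(render_guidance(manifest))
```

The injected preamble is small, a few hundred tokens at most, which is the trade the system makes: a little context spent up front to avoid a failed first attempt and a --help round trip later.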

kcp-memory adds the second dimension: episodic memory. Every session is indexed. Every tool call is logged. The agent can search what it did last week, recover the reasoning from a delegated subagent, and see which manifests are actually working in practice.

Together they make Claude Code measurably smarter: 33% of the context window recovered, --help calls eliminated, and an agent that learns from its own history instead of starting from zero every session.

The latest addition closes the loop: the tools now learn from their own performance. Every Bash call produces an outcome signal. kcp-memory tracks retry rates, help-followup rates, and error rates per manifest -- surfacing which ones are guiding the agent well and which ones are steering it wrong. The highest-failure manifests have already been rewritten based on the data. The infrastructure measures its own effectiveness and improves.
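The per-manifest metrics can be sketched as a simple aggregation over outcome events. The event fields here (manifest, retried, help_followup, errored) are assumed for illustration; they are not the real kcp-memory schema.

```python
from collections import defaultdict

def manifest_scores(events):
    """Aggregate per-call outcome signals into per-manifest failure rates."""
    stats = defaultdict(lambda: {"calls": 0, "retried": 0,
                                 "help": 0, "errored": 0})
    for e in events:
        s = stats[e["manifest"]]
        s["calls"] += 1
        s["retried"] += e["retried"]
        s["help"] += e["help_followup"]
        s["errored"] += e["errored"]
    return {
        name: {
            "retry_rate": s["retried"] / s["calls"],
            "help_rate": s["help"] / s["calls"],
            "error_rate": s["errored"] / s["calls"],
        }
        for name, s in stats.items()
    }

events = [
    {"manifest": "kubectl-apply", "retried": 1, "help_followup": 0, "errored": 1},
    {"manifest": "kubectl-apply", "retried": 0, "help_followup": 1, "errored": 0},
]
print(manifest_scores(events)["kubectl-apply"])
# {'retry_rate': 0.5, 'help_rate': 0.5, 'error_rate': 0.5}
```

Ranking manifests by these rates is what turns hundreds of daily tool calls into a prioritised rewrite queue: the highest-failure manifests surface first, and the fix can be checked against the same numbers.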

kcp-commands v0.9.0 and kcp-memory v0.4.0 were passive observers. They watched what Claude did, logged it, made it searchable. Useful, but limited. The tools had no opinions about their own data.

The work since then -- through kcp-commands v0.18.0 and kcp-memory v0.18.0 today -- has been about a different question: what happens when the tools know what to ignore, can measure their own quality, and maintain themselves?

From Capable to Trustworthy: How KCP Evolved from Discovery to Governance

AI agents are getting remarkably good at doing things. They read code, traverse APIs, generate summaries, and execute multi-step plans across sprawling codebases. What they are still bad at is knowing what they should not do.

Today, an agent dropped into a new repository does the equivalent of walking into a library and reading every book on every shelf before deciding which one is relevant. This is expensive, slow, and -- in environments where some shelves contain confidential material -- genuinely dangerous.

The Human in the Loop — at Design Time

Tim O'Reilly posted something this week about craftsmanship in the AI age. The question he was circling: how do you maintain quality standards when agents are doing the work?

The default answer in the industry is: keep the human in the loop. For every meaningful decision, have a human review before proceeding.

That model contains a fatal flaw.