Add knowledge.yaml to Your Project in Five Minutes
A practical walkthrough of the KCP adoption gradient — from the minimum viable manifest to a full knowledge graph. No theory. Just the steps.
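As a sketch of what a minimum viable manifest might look like (the keys below are illustrative assumptions on my part, not a published KCP schema):

```yaml
# knowledge.yaml: a hypothetical minimal manifest.
# All keys here are illustrative assumptions, not a published KCP schema.
name: my-project
description: Short summary an agent reads before anything else
docs:
  - path: docs/architecture.md
    description: System overview and component boundaries
  - path: docs/decisions/
    description: Architecture decision records
```

The idea of the adoption gradient is that you start with something this small and only grow it as the project's knowledge grows.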
The debate is usually framed as "RAG or knowledge graphs?" The answer is neither, and both. Most teams pick one retrieval approach and stop. The interesting question is which layer they are missing, and what blind spot that creates.
The previous post introduced KCP and why llms.txt does not scale to production agent deployments. This post covers what happens when you connect a knowledge.yaml manifest to a live MCP server, and why the combination changes how agents behave.
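Before a server can expose a manifest, it has to check that the file is usable. A minimal sketch of that validation step, assuming a hypothetical schema with `name`, `version`, and `docs` keys (none of this is a published KCP interface):

```python
# Sketch of manifest validation for a hypothetical knowledge.yaml schema.
# The required keys (name, version, docs) are illustrative assumptions.

REQUIRED_KEYS = {"name", "version", "docs"}

def validate_manifest(manifest: dict) -> list[str]:
    """Return a list of problems; an empty list means the manifest looks usable."""
    problems = []
    # Report any missing top-level keys in a stable order.
    for key in sorted(REQUIRED_KEYS - manifest.keys()):
        problems.append(f"missing required key: {key}")
    docs = manifest.get("docs", [])
    if not isinstance(docs, list):
        problems.append("docs must be a list of path/description entries")
    else:
        # Each docs entry needs at least a path for the agent to fetch.
        for i, entry in enumerate(docs):
            if not isinstance(entry, dict) or "path" not in entry:
                problems.append(f"docs[{i}] needs at least a path")
    return problems

if __name__ == "__main__":
    manifest = {
        "name": "synthesis",
        "version": 1,
        "docs": [{"path": "docs/architecture.md", "description": "system overview"}],
    }
    print(validate_manifest(manifest))  # → []
```

A live server would run a check like this at startup and refuse to serve a manifest with problems, rather than letting an agent discover the breakage mid-session.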
Every mature engineering team graphs their code. Almost no one graphs their knowledge. The asymmetry is strange — and costly.
A real engineering session where Claude Code with Opus diagnosed 4 bugs, wrote 23 tests, and took a knowledge graph from zero virtual links to 11,777 — including one mistake and its recovery.
The agent answered the ROI metrics question with zero tool calls. It reported the indexing speed, the search latency, the file count, the retrieval time improvement, the test count. All correct. Every number accurate.
Then it said the metrics were validated on February 19, 2026.
The actual date was February 17.
Earlier today I published a post about Synthesis and why knowledge infrastructure is the layer the AI agent ecosystem is missing. Several people responded with a version of the same question: "We use llms.txt — isn't that enough?"
It depends on what you are trying to do. And I think the answer is worth a dedicated post.