Writing

The Comprehension Bottleneck: Why AI Made Creating Easy But Understanding Harder

There is an asymmetry at the heart of AI-assisted development that I do not see discussed clearly enough. Production speed has accelerated dramatically. A competent developer with Claude Code can now generate code at 10 to 66 times the traditional rate. This is real and verified. I have the commit logs and the timelines to prove it. But comprehension speed has not accelerated at the same rate. Reading code, understanding architecture, finding the right file in a 700-file codebase -- these are roughly where they were before AI arrived.

From What to Why: When AI Reveals Questions You Didn't Ask

For most of my career, analysis meant asking a question and getting an answer. How many deployments last quarter? Which modules have the most open defects? What is the test coverage of the payment service? The tools were built for this. You formulated a query, you ran it, you got a number. The number was correct. And the quality of your insight was entirely bounded by the quality of your question.

I did not think of this as a limitation. It was just how analysis worked. You got better at it by learning to ask better questions. Thirty years of architecture experience is, in large part, thirty years of learning which questions to ask and in what order. The senior architect's advantage was not access to better data. It was knowing which query to run.

That model is breaking. Not because the tools got faster at answering questions, but because a new class of tooling -- AI-augmented, temporally aware, relationship-tracking -- does something structurally different. It does not just answer your question. It tells you what you should have asked instead.

Five Architecture Patterns for AI Agents That Actually Work

Most writing about AI agents is aspirational. Autonomous systems that plan, reason, and execute complex workflows end-to-end. The vision is compelling. The reality, after building and running agents in production across multiple projects, is more mundane and more useful. The patterns that survive contact with real workloads are not the clever ones. They are the simple ones that fail in predictable ways.

What follows are five architectural decisions that made the difference between agents that reliably complete tasks and agents that confidently fail. None of them are universal. Each has a specific context where it works and a specific context where it does not. I have learned both sides, sometimes expensively.

Why Temporal Matters: The Time Capsule Graph

Most systems I have built over thirty years answer one question well: what is the current state? A database query returns the latest row. A service responds with the live configuration. A dashboard shows what is happening right now. Current state is the default, and it is sufficient most of the time.

Some systems go further. They add history. An audit table, an event log, a change data capture stream. Now you can answer: what was the state at time T? Useful for compliance, useful for debugging. But still a limited question, because history stored as a sequence of snapshots tells you what changed without telling you how those changes relate to each other.

The questions I keep running into -- the ones that matter most -- are different. How did we get here? In what order? What does that trajectory mean? Those questions require something most architectures are not built to answer.

Three Decades of Architecture: What AI Actually Changes (And What Doesn't)

I have been writing software and designing systems since 1994. That is thirty-two years. Long enough to have watched several waves arrive with the promise that everything was about to change, and long enough to have noticed that the pattern of arrival is remarkably consistent. Breathless proclamation. A period of confusion as people try to apply old practices to new technology. Then a gradual, quieter recognition of what actually changed and what did not.

The More AI, The More Control

The fear is intuitive and sounds right: the more you delegate to AI, the less you understand your codebase, the less you control what ships. You become a passenger in your own project. Every prompt you type is a piece of agency you surrender.

I have thirty years of shipping software. I have watched entire teams lose control of codebases they wrote themselves, without any AI involved. And I have watched my own control over a codebase increase as I delegated more to AI. The intuition is wrong. But it is wrong in a specific way, and understanding that specificity matters.

Subscription Economics and the AI Development Workflow

The most important decision in AI-assisted development has nothing to do with models, prompts, or methodology. It is the billing model. Per-token API pricing and flat-rate subscriptions produce fundamentally different rational behaviors, and most teams do not realize they are optimizing for their invoice instead of their output.

I discovered this by accident. Building lib-pcb over eleven days -- 197,831 lines of Java, 7,461 tests, eight format parsers -- involved an intensity of AI interaction that would have been economically irrational under per-token pricing. A back-of-the-envelope estimate puts the API cost for that project somewhere around $100,000 at standard rates. On a flat subscription, the marginal cost of every additional iteration, every regenerated test suite, every discarded alternative approach was zero.
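For readers who want to sanity-check the order of magnitude, here is the shape of that back-of-the-envelope calculation. Every number below is an assumption for illustration -- the token volumes are guesses, not measurements from the lib-pcb project, and the per-million-token rates are merely in the ballpark of published frontier-model list prices.

```python
# All figures are assumptions for illustration, not measured project data.
input_tokens = 4_000_000_000    # assumed cumulative prompt/context tokens over 11 days
output_tokens = 500_000_000     # assumed cumulative generated tokens
price_in_per_m = 15.0           # assumed $ per 1M input tokens (frontier-model ballpark)
price_out_per_m = 75.0          # assumed $ per 1M output tokens

cost = (input_tokens / 1e6) * price_in_per_m + (output_tokens / 1e6) * price_out_per_m
print(f"Estimated per-token API cost: ${cost:,.0f}")
```

Under assumptions like these the estimate lands near $100,000 -- while on a flat subscription the marginal cost of the same token volume is zero, which is the whole point.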

That difference shaped everything.

Documentation That Writes Itself (No, Really)

Yes, I know. "Self-writing documentation" is the perpetual motion machine of software engineering. Every generation of tooling has promised it. Javadoc would generate your API reference. README generators would scaffold your project descriptions. Wiki pages would capture institutional knowledge. Sprint retrospectives would produce living documents. None of it worked. The documentation was either generated and useless, or useful and never written.

So when I say that skill-driven development produces documentation as a natural byproduct of building software, I understand why the reasonable response is skepticism. I would be skeptical too. But after 75 skill files emerged during an 11-day build of lib-pcb, I have to describe what actually happened, because it was not what any previous "auto-documentation" approach looked like.