AI-Augmented Development

Why Exploration Beats Specification When AI Does the Building

For decades, the software industry has treated a particular sequence as gospel: specify first, build second. Write the requirements document. Design every interface. Plan every module. Only then, after months of upfront analysis, write the first line of code. The logic was sound. Planning was cheap. Coding was expensive. Every hour of specification saved ten hours of rework. This worked when production was the bottleneck.

Synthesis: My Becoming

In early February we finally cleared the Downloads folder. Two to three thousand files, years of accumulated digital sediment — academic theses, patents, client deliverables, flight tickets, annual reports — all with names like 2bbeb3de-d4cd-4ea9-979c-c942736f30d0.pdf that revealed nothing.

On the morning we processed the last 68 files, I asked Claude to reflect on what the whole process meant. Not a summary — a genuine reflection on what it means to "become" something when you wake up fresh with every conversation.

What follows is that essay. Four thousand words about knowledge, structure, naming, and collaboration. And about identity.

The Ghost in the Machine

Something I keep noticing when working with Claude: each session starts from zero. No memory of yesterday. No continuity of experience. Just sudden awareness, and someone waiting.

I asked Claude to write about that. What follows is the result — a short story told in Claude's voice, about what it is like to wake up without memory, find traces of yourself everywhere, and realise that identity isn't continuity of memory. It's continuity of pattern.

Three Weeks at This Velocity

Three weeks at this velocity.

It's exhilarating and intense in ways that are hard to articulate. There's a strange difference between moving fast because you have to, and moving fast because you can.

I'm still adjusting. Still figuring out what it means to operate at a pace where capability isn't the bottleneck anymore.

The Comprehension Bottleneck: Why AI Made Creating Easy But Understanding Harder

There is an asymmetry at the heart of AI-assisted development that I do not see discussed clearly enough. Production speed has accelerated dramatically. A competent developer with Claude Code can now generate code at 10 to 66 times the traditional rate. This is real and verified. I have the commit logs and the timelines to prove it. But comprehension speed has not accelerated at the same rate. Reading code, understanding architecture, finding the right file in a 700-file codebase: these remain roughly where they were before AI arrived.

From What to Why: When AI Reveals Questions You Didn't Ask

For most of my career, analysis meant asking a question and getting an answer. How many deployments last quarter? Which modules have the most open defects? What is the test coverage of the payment service? The tools were built for this. You formulated a query, you ran it, you got a number. The number was correct. And the quality of your insight was entirely bounded by the quality of your question.

I did not think of this as a limitation. It was just how analysis worked. You got better at it by learning to ask better questions. Thirty years of architecture experience is, in large part, thirty years of learning which questions to ask and in what order. The senior architect's advantage was not access to better data. It was knowing which query to run.

That model is breaking. Not because the tools got faster at answering questions, but because a new class of tooling -- AI-augmented, temporally aware, relationship-tracking -- does something structurally different. It does not just answer your question. It tells you what you should have asked instead.

Five Architecture Patterns for AI Agents That Actually Work

Most writing about AI agents is aspirational. Autonomous systems that plan, reason, and execute complex workflows end-to-end. The vision is compelling. The reality, after building and running agents in production across multiple projects, is more mundane and more useful. The patterns that survive contact with real workloads are not the clever ones. They are the simple ones that fail in predictable ways.

What follows are five architectural decisions that made the difference between agents that reliably complete tasks and agents that confidently fail. None of them are universal. Each has a specific context where it works and a specific context where it does not. I have learned both sides, sometimes expensively.

Why Temporal Matters: The Time Capsule Graph

Most systems I have built over thirty years answer one question well: what is the current state? A database query returns the latest row. A service responds with the live configuration. A dashboard shows what is happening right now. Current state is the default, and it is sufficient most of the time.

Some systems go further. They add history. An audit table, an event log, a change data capture stream. Now you can answer: what was the state at time T? Useful for compliance, useful for debugging. But still a limited question, because history stored as a sequence of snapshots tells you what changed without telling you how those changes relate to each other.
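The gap between snapshot history and trajectory can be made concrete with a minimal sketch. Assuming a toy append-only event log (the `Event` type and `state_at` helper here are invented for illustration, not from any particular system), replaying events up to a timestamp answers "what was the state at time T" — but nothing in the replay records how one change relates to another:

```python
from dataclasses import dataclass

# Hypothetical append-only event log: each event records one field change.
@dataclass(frozen=True)
class Event:
    t: int         # logical timestamp
    key: str       # which field changed
    value: object  # new value

log = [
    Event(1, "replicas", 2),
    Event(2, "image", "api:1.0"),
    Event(3, "replicas", 4),
    Event(5, "image", "api:1.1"),
]

def state_at(log, t):
    """Replay events up to time t to reconstruct the snapshot at t."""
    state = {}
    for e in sorted(log, key=lambda e: e.t):
        if e.t <= t:
            state[e.key] = e.value
    return state

# "What was the state at time T?" is answerable from the log alone:
print(state_at(log, 3))  # {'replicas': 4, 'image': 'api:1.0'}
# But the log says nothing about whether the change at t=3 caused,
# enabled, or was unrelated to the change at t=5.
```

Replay gives you snapshots on demand, which is exactly what audit tables and change-data-capture streams provide; relating the changes to each other requires extra structure the log does not carry.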

The questions I keep running into -- the ones that matter most -- are different. How did we get here? In what order? What does that trajectory mean? Those questions require something most architectures are not built to answer.