
AI-Augmented Development

Three Decades of Architecture: What AI Actually Changes (And What Doesn't)

I have been writing software and designing systems since 1994. That is thirty-two years. Long enough to have watched several waves arrive with the promise that everything was about to change, and long enough to have noticed that the pattern of arrival is remarkably consistent. Breathless proclamation. A period of confusion as people try to apply old practices to new technology. Then a gradual, quieter recognition of what actually changed and what did not.

The More AI, The More Control

The fear is intuitive and sounds right: the more you delegate to AI, the less you understand your codebase, the less you control what ships. You become a passenger in your own project. Every prompt you type is a piece of agency you surrender.

I have thirty years of shipping software. I have watched entire teams lose control of codebases they wrote themselves, without any AI involved. And I have watched my own control over a codebase increase as I delegated more to AI. The intuition is wrong. But it is wrong in a specific way, and understanding that specificity matters.

Subscription Economics and the AI Development Workflow

The most important decision in AI-assisted development has nothing to do with models, prompts, or methodology. It is the billing model. Per-token API pricing and flat-rate subscriptions make fundamentally different behaviors rational, and most teams do not realize they are optimizing for their invoice instead of their output.

I discovered this by accident. Building lib-pcb over eleven days -- 197,831 lines of Java, 7,461 tests, eight format parsers -- involved an intensity of AI interaction that would have been economically irrational under per-token pricing. A back-of-the-envelope estimate puts the API cost for that project somewhere around $100,000 at standard rates. On a flat subscription, the marginal cost of every additional iteration, every regenerated test suite, every discarded alternative approach was zero.

That difference shaped everything.
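The shape of that back-of-the-envelope estimate is worth making explicit. The rates and token counts below are illustrative assumptions, not actual usage figures from the lib-pcb build; the point is only that an agentic loop which re-sends a large context on every turn multiplies per-token costs into six figures, while on a flat subscription the same loop costs nothing at the margin.

```python
# Back-of-the-envelope arithmetic for per-token vs. flat-rate pricing.
# Rates and token counts are illustrative assumptions, not measured usage.

INPUT_RATE = 15.0   # assumed $/1M input tokens (flagship-model tier)
OUTPUT_RATE = 75.0  # assumed $/1M output tokens

def api_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at per-token rates."""
    return input_tokens / 1e6 * INPUT_RATE + output_tokens / 1e6 * OUTPUT_RATE

# An agentic coding loop re-sends a large context on every turn.
# Suppose each turn carries ~100k tokens of context and returns ~5k:
per_turn = api_cost(100_000, 5_000)   # $1.875 per turn
print(f"per turn:  ${per_turn:.3f}")

# At a few thousand turns per day for eleven days, the projected bill
# reaches six figures -- while the flat-rate marginal cost stays at $0.
turns = 11 * 5_000
print(f"projected: ${turns * per_turn:,.0f}")
```

The exact totals depend entirely on the assumed turn count and context size; the asymmetry between the two billing models does not.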

Documentation That Writes Itself (No, Really)

Yes, I know. "Self-writing documentation" is the perpetual motion machine of software engineering. Every generation of tooling has promised it. Javadoc would generate your API reference. README generators would scaffold your project descriptions. Wiki pages would capture institutional knowledge. Sprint retrospectives would produce living documents. None of it worked. The documentation was either generated and useless, or useful and never written.

So when I say that skill-driven development produces documentation as a natural byproduct of building software, I understand why the reasonable response is skepticism. I would be skeptical too. But after 75 skill files emerged during the eleven-day build of lib-pcb, I have to describe what actually happened, because it was not what any previous "auto-documentation" approach looked like.

The Cost of Iteration Collapsed. Now What?

For most of my thirty years in software, iteration has been expensive. Not in theory. In practice, in the way that shapes every decision a team makes. When changing a core data structure takes two weeks of careful refactoring across dozens of files, you do not change the data structure on a hunch. You analyze. You write a proposal. You get approval. You schedule it for the next sprint, or the one after that. The cost of being wrong is measured in weeks, and so the entire machinery of software engineering orients itself around not being wrong.

That cost has collapsed. Not gradually. Not by half. By orders of magnitude. And I am not sure we have reckoned with what that means for the way we work.

The Testing Discipline: 25% to 93%

Unit tests passed. Every one of them. Green across the board.

And then we ran the parser against real legacy Gerber files — files from actual PCB designs, exported by real design tools used by real engineers over the last twenty years — and the success rate was 25%.

Three out of four failed.
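The fix for that gap between green unit tests and a 25% real-world success rate is a corpus harness: run the parser over a directory of actual legacy files and measure the pass rate directly, rather than trusting hand-written fixtures. A minimal sketch follows; the parser is passed in as a plain callable because lib-pcb's actual API is not shown here, so `parse` stands in for whatever entry point the library exposes.

```python
# Sketch of a corpus regression harness: attempt to parse every real-world
# file in a directory and report the fraction that succeed. The `parse`
# callable is a placeholder for the library's actual parser entry point.
from pathlib import Path

def corpus_pass_rate(corpus_dir: str, parse) -> float:
    """Return the fraction of *.gbr files in corpus_dir that parse cleanly."""
    files = sorted(Path(corpus_dir).glob("*.gbr"))
    if not files:
        return 0.0
    ok = 0
    for f in files:
        try:
            parse(f.read_text(errors="replace"))
            ok += 1
        except Exception:
            pass  # count as a failure; a real harness would log the traceback
    return ok / len(files)
```

Tracking this single number over time is what turns "the unit tests pass" into "three out of four legacy files no longer fail."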

Strategic Delegation: When Developers Become Architects

For thirty years I have broken work into tasks. Decompose the feature into subtasks, estimate the hours, write the code, move the ticket. The unit of progress was the line of code. The measure of a good day was how much I shipped. That loop was so deeply embedded in how I worked that I did not notice it was a loop. It was just what development meant.

Then I started delegating implementation to AI, and the loop broke. Not gradually. In about a week.

Months to Days

The first reaction is always disbelief.

"That's not possible." Or: "That only works for trivial problems." Or the politer version: "That must be very rough code."

So here are the numbers. Not estimates. Actuals.