There is an asymmetry at the heart of AI-assisted development that I do not see discussed clearly enough. Production speed has accelerated dramatically. A competent developer with Claude Code can now generate code at 10 to 66 times the traditional rate. This is real and verified. I have the commit logs and the timelines to prove it. But comprehension speed has not accelerated at the same rate. Reading code, understanding architecture, finding the right file in a 700-file codebase: these still move at roughly the speed they did before AI arrived.
The term "skill" keeps appearing in discussions about AI-assisted development, and most explanations reduce it to "a file that Claude Code reads." That description is technically accurate and completely inadequate. It is like saying a class is "a file the JVM loads." True, unhelpful, and it obscures the thing that makes the concept powerful.
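To make the analogy concrete, here is roughly what a minimal skill file looks like: a `SKILL.md` with YAML frontmatter, following Claude Code's published skill layout. The skill name, file paths, and the `FormatParser` class below are invented for illustration, not taken from lib-pcb.

```markdown
---
name: gerber-conventions
description: Project conventions for Gerber parser code. Use when
  adding or modifying format parsers.
---

# Gerber parser conventions

- Every parser extends the shared `FormatParser` base class (hypothetical name).
- Every new aperture type needs a round-trip test before merge.
- Never guess units; fail loudly when the file header omits them.
```

The frontmatter is what gets loaded up front; the body is only pulled into context when the skill is relevant. That is what makes it more than "a file Claude Code reads": it is loaded on demand, like a class the JVM links lazily.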
For decades, the software industry has treated a particular sequence as gospel: specify first, build second. Write the requirements document. Design every interface. Plan every module. Only then, after months of upfront analysis, write the first line of code. The logic was sound. Planning was cheap. Coding was expensive. Every hour of specification saved ten hours of rework. This worked when production was the bottleneck.
This article has two voices. Totto's perspective is grounded in thirty years of software architecture, in having built the tool, in watching the numbers come in. The AI's perspective comes from a strange position: being simultaneously the researcher conducting the benchmark, the instrument being measured, and the subject whose reliability is in question.
We agreed to write this honestly. That means Totto admits when the results surprised him, and the AI admits what it's like to discover that the context it relies on might be wrong.
197,831 lines of Java. 7,461 tests. Eight format parsers, twenty-eight validators, seventeen auto-fix types. The kind of codebase that should take ten to eighteen months by conventional timelines.
The experience was disorienting in a specific way: the AI could generate code faster than I could understand what it had generated. By day four, I had a problem I hadn't anticipated. Not a quality problem — the code was good. A navigation problem. I couldn't find things anymore.
Synthesis was my answer to that. An open-source tool that indexes everything — code, docs, PDFs, videos, skills — and makes it searchable in under a second. I built it to solve the lib-pcb output explosion. 691 files per day, and I needed to find any of them in under thirty seconds.
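The mechanism behind sub-second lookup over a corpus that size is an inverted index: pay the tokenization cost once at indexing time, and every query afterwards is a single hash probe. The sketch below is a toy illustration of that idea only, not Synthesis's actual implementation, and the file names are invented.

```java
import java.util.*;

// Toy inverted index: maps each term to the set of files containing it.
// Illustrative sketch of the concept only -- not Synthesis's code.
public class TinyIndex {
    private final Map<String, Set<String>> index = new HashMap<>();

    // Tokenize a document once and record which file each term appears in.
    void add(String file, String text) {
        for (String term : text.toLowerCase().split("\\W+")) {
            index.computeIfAbsent(term, k -> new TreeSet<>()).add(file);
        }
    }

    // Lookup is one hash probe, independent of corpus size.
    Set<String> search(String term) {
        return index.getOrDefault(term.toLowerCase(), Collections.emptySet());
    }

    public static void main(String[] args) {
        TinyIndex idx = new TinyIndex();
        // Hypothetical file names, for illustration.
        idx.add("GerberParser.java", "parses Gerber X2 aperture definitions");
        idx.add("DrillValidator.java", "validates Excellon drill files");
        System.out.println(idx.search("gerber"));  // [GerberParser.java]
        System.out.println(idx.search("drill"));   // [DrillValidator.java]
    }
}
```

The point of the sketch is the asymmetry: indexing is linear in the corpus, but each query is constant-time, which is why 691 new files a day does not slow the search down.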
The question was: did it actually help? Not anecdotally — I knew it helped me. But how much? And help with what, exactly?
When we finished lib-pcb, the question we got most was: "How?"
Not "what model did you use?" Not "what IDE?" Those questions miss the point entirely. The model is the least interesting variable. What made 197,831 lines of Java, 7,461 tests, and 474 commits in 11 days possible was a methodology. Specifically: six practices that we have now codified under the name Skill-Driven Development.
There is a narrative forming in the industry that goes something like this: AI will replace junior developers, senior developers will become more valuable, and if you have enough experience you have nothing to worry about. I think this misreads what is actually happening. The shift is real, but it is not the one most people describe.
The last post was about hallucinations, production bugs, and shipping bad code. Three fears that built three systems. This post is about the next three: money, control, and silence.
In February 2009, I wrote a post called "Clouded Vision" where I argued that "developers have fundamentally misunderstood how cloud computing delivers its benefits. They see the cheap prices but don't stop to consider where the cost saving comes from." The post described a specific architectural mistake: teams were taking their existing applications, full of what I called "enterprise DNA," and deploying them to cloud platforms with minimal change. Then they complained when it proved difficult and expensive.
A practical guide to giving your AI coding assistant an institutional memory
You've tried Claude Code. Maybe you love it. Maybe you've noticed that on your 300K-line, 20-module Maven project it spends the first five minutes figuring out where anything is.
That's not a model limitation. That's a context problem. And it's solvable.
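One low-tech illustration of the fix: hand the assistant a map up front. Claude Code reads a `CLAUDE.md` file at the project root at the start of each session, so a short module index there replaces those first five minutes of exploration. The module names and build command below are invented for illustration; adapt them to your own project.

```markdown
# Project map (read this before searching)

- `core/`       -- domain model and shared types
- `parsers/`    -- one sub-module per input format
- `validators/` -- rule implementations, registered in `ValidatorRegistry`
- Build: `mvn -q verify` runs the full suite
```

A few lines like this cost nothing to maintain and turn "figure out where anything is" into "read four bullets."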
This is the full story of lib-pcb -- a production-grade PCB manufacturing library built in 11 days through human-AI collaboration. It started as a weekend experiment in an unfamiliar domain and became the most compelling evidence I have for what disciplined AI-augmented development can achieve.
"I wanted to see how far I could push Claude Code in a weekend," Thor Henning Hetland (Totto) explains. A software engineer with 40 years of experience, he'd watched the AI coding assistant landscape evolve from GitHub Copilot's autocomplete to full agentic systems. The domain he chose -- PCB manufacturing automation -- was deliberate: he had almost no knowledge of it. Could you actually build something production-ready in an unfamiliar domain? Could you maintain velocity as the codebase grew? Could the human's expertise grow alongside the AI, rather than atrophy?