Simon Willison Warns of Cognitive Debt From AI-Written Code, Proposes Interactive Explanations
When developers lose track of how agent-written code works, the resulting cognitive debt can be worse than technical debt
Willison argues that while some AI-generated code is simple enough to understand at a glance — fetching data from a database and outputting JSON, for example — the core logic of applications often becomes a black box that developers can no longer confidently reason about.
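The kind of glanceable code he means might look like the following minimal sketch. It is a hypothetical illustration, not code from Willison's post: it assumes a SQLite database with a `users` table and simply serializes query results to JSON, logic a reviewer can verify in seconds.

```python
import json
import sqlite3

def users_as_json(conn: sqlite3.Connection) -> str:
    """Fetch rows from a hypothetical `users` table and return them as JSON.

    An example of AI-generated code that is simple enough to audit
    at a glance: one query, one serialization step, no hidden state.
    """
    conn.row_factory = sqlite3.Row  # rows become dict-like, keyed by column name
    rows = conn.execute("SELECT id, name FROM users").fetchall()
    return json.dumps([dict(row) for row in rows])
```

Code at this level of directness accrues little cognitive debt; the problem Willison describes starts where the logic stops being this transparent.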
His proposed solution: interactive explanations. Rather than simply reading through generated code, Willison advocates building visual, interactive tools that demonstrate how the code works. He illustrates the concept by exploring how word cloud algorithms function, turning a curiosity sparked by Max Woolf's AI coding experiments into a hands-on learning experience.
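The core of a word cloud algorithm can be sketched in a few lines: count word frequencies, then scale each word's display size by its relative frequency. This is a naive illustration of the general technique, not Willison's interactive tool; the tokenizer, the linear scaling, and the size bounds are all assumptions, and real layouts add stop-word filtering and collision-free placement.

```python
import re
from collections import Counter

def word_cloud_sizes(text: str, min_size: int = 10, max_size: int = 48) -> dict[str, int]:
    """Map each word to a font size proportional to its frequency.

    Naive sketch: lowercase tokenization, linear scaling between
    min_size and max_size relative to the most frequent word.
    """
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    if not counts:
        return {}
    top = max(counts.values())
    return {
        word: min_size + round((count / top) * (max_size - min_size))
        for word, count in counts.items()
    }
```

An interactive explanation in Willison's sense would let the developer vary the input text and scaling parameters and watch the sizes change, rebuilding intuition that reading static code alone does not provide.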
The approach connects to his broader concept of "asynchronous research projects" — using AI agents to explore technical topics in depth, then distilling the results into explanations that rebuild the developer's understanding. It's a framework for maintaining human agency in an era where much of the actual code is written by machines.
Analysis
Why This Matters
As AI coding tools become ubiquitous, the gap between "code that works" and "code developers understand" is widening. Willison's framing of cognitive debt gives the industry a vocabulary for a problem many developers feel but few articulate.
Background
Willison has been one of the most thoughtful voices on practical AI use in software development, consistently publishing real-world workflows rather than theoretical frameworks.
What to Watch
Whether the concept of cognitive debt gains traction alongside technical debt in engineering culture, and whether tooling emerges to help developers maintain understanding of AI-generated codebases.