Whose Code Is It Anyway?
A story about messy code, clean code, and AI.
A year ago, though I didn't know it at the time, I vibe-coded my first word puzzle game, Scramble. It worked. I shipped it. I posted it on LinkedIn and moved on.
The anniversary seemed like a good time to revisit it — see how it held up, maybe finally make those improvements I'd promised myself.
What I found was textbook vibe code (before the term even existed): no documentation, duplicate logic for mobile and desktop, no tests, no security, and a cursor rule that starts with "You are a Senior Front-End Developer..."
Fixing it meant doing everything developers love to hate: understanding legacy code with no docs, writing the documentation that should have existed, eliminating duplication, writing tests just to understand what the code does, and refactoring toward something cleaner and maintainable.
Technical debt is the number one cause of developer frustration: 62% of developers cite it as a problem, twice the share of any other issue.
Which is exactly why I didn't do it myself. I handed it to AI.
Within a day, the code had gone from "it works, don't touch it" to "here's how it works, feel free to update it." Check out Scramble V2.
I genuinely loved the experience. Which made me wonder: why are so many developers still resistant to AI?
The Five Things Developers Hate (That AI Handles Beautifully)
Let me be specific about what I handed off.
1. Understanding Legacy Code Without Documentation
The original Scramble had zero explanation of how it worked. The game logic was tangled with UI concerns. State management was... creative.
AI didn't complain. It read the code, inferred the intent, and explained it back to me. Then it generated documentation that would have taken me hours to write — and I would have resented every minute.
2. Writing Documentation
Developers spend more than 30 minutes a day searching for solutions to technical problems, often because documentation is missing or outdated. Everyone knows docs are important. Almost no one wants to write them.
AI wrote my CLAUDE.md, README, and inline comments. Not because it enjoys documentation, but because it doesn't have preferences. It just does the work.
3. Eliminating Duplicate Code
Scramble had separate implementations for mobile and desktop — similar logic, slightly different, maintained in parallel. The kind of thing that happens when you're shipping fast and promise yourself you'll clean it up later.
AI consolidated it into a single responsive implementation. No emotional attachment to the original code. No "but I spent hours on this" resistance.
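The consolidation pattern is worth sketching. This is a minimal, hypothetical illustration (the names `layoutFor` and `tilePosition` are mine, not from the original Scramble code): layout differences live in one small config function, and the game logic is written once for both viewports.

```typescript
// Hypothetical sketch of the consolidation: instead of parallel mobile and
// desktop implementations, the differences collapse into a single layout
// config, and the shared logic is written once.

type Viewport = "mobile" | "desktop";

interface TileLayout {
  columns: number;
  tileSize: number; // tile width/height in px
}

// One source of truth for what actually differs between viewports.
function layoutFor(viewport: Viewport): TileLayout {
  return viewport === "mobile"
    ? { columns: 4, tileSize: 56 }
    : { columns: 8, tileSize: 72 };
}

// The positioning logic exists exactly once and works for both layouts.
function tilePosition(index: number, layout: TileLayout) {
  return {
    x: (index % layout.columns) * layout.tileSize,
    y: Math.floor(index / layout.columns) * layout.tileSize,
  };
}
```

The point isn't the specific numbers; it's that a change to the grid logic now happens in one place instead of two slightly divergent copies.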
4. Writing Unit Tests (Especially for Existing Behaviour)
Writing tests for code you didn't write — or code you wrote so long ago it might as well be someone else's — is tedious. You're not building anything new. You're just trying to understand what already exists.
AI generated 22+ test cases that captured the game's word detection, chain reactions, and scoring logic. Tests I would have procrastinated on indefinitely.
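These were characterization tests: they pin down what the existing code does, right or wrong, so a refactor can be verified against it. A minimal sketch of the idea, with a made-up `scoreWord` function standing in for Scramble's actual scoring rules:

```typescript
// Illustrative only: scoreWord and its rules are a stand-in, not Scramble's
// real scoring logic. The pattern is what matters.

function scoreWord(word: string, chainDepth: number): number {
  const base = word.length * 10;     // longer words score more
  const multiplier = 1 + chainDepth; // chain reactions multiply the score
  return base * multiplier;
}

// A characterisation test asserts current behaviour before any refactor,
// so regressions show up immediately.
function testScoring(): void {
  if (scoreWord("cat", 0) !== 30) throw new Error("base scoring changed");
  if (scoreWord("cat", 2) !== 90) throw new Error("chain multiplier changed");
}
testScoring();
```

Writing dozens of these by hand for year-old code is exactly the kind of work nobody volunteers for.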
5. Refactoring to Clean Architecture
The original code was a monolith. Game logic, UI, state — all intertwined.
AI separated it into pure engine functions (no React dependencies, fully testable) and a clean state management layer. The kind of refactoring that's "important but not urgent" — which means it never happens.
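The shape of that split, sketched with hypothetical names (`GameState`, `submitWord` are illustrative, not the actual Scramble API): the engine is a pure function that imports nothing from React, and the UI becomes a thin wrapper around it.

```typescript
// Engine layer: pure, framework-free, trivially testable.

interface GameState {
  score: number;
  found: string[];
}

// Same input, same output, no side effects, no React dependency.
function submitWord(state: GameState, word: string): GameState {
  if (state.found.includes(word)) return state; // no double-counting
  return {
    score: state.score + word.length * 10, // scoring rule is illustrative
    found: [...state.found, word],
  };
}

// The UI layer then becomes a thin shell, e.g. in a React component:
//   const [state, dispatch] = useReducer(submitWord, initialState);
```

Because the engine has no React in it, those 22+ test cases can run against it directly, with no rendering or mocking involved.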
The Mindset Shift
Thirty years ago, I was a developer. I remember what it felt like to write code, debug it, and finally get it working; success felt earned. But times have changed, and so has my mindset.
For many developers, code is the finite output — the thing they're paid to produce. Every line represents thought, debugging, refinement. Throwing it away feels like waste.
However, I've started treating code as disposable. Rewritable. What matters is: Does it work? Does the user get what they need? Can I describe what I want clearly enough to rebuild it?
The economics of software have changed. When code generation is cheap, you stop optimising the code — and start optimising for outcomes.
The Question No One's Asking
Here's the thing that's been bothering me.
Think about everything we've been taught makes code "good": abstraction, modularity, meaningful variable names, DRY, KISS, the Single Responsibility Principle, small focused functions.
These aren't arbitrary preferences. They emerged from decades of experience about what makes code survivable over time. But look at why they exist.
Robert C. Martin's Clean Code — the book that shaped a generation of developers — is explicit about this: "Programming is not about telling the computer what to do. Programming is the art of telling another human what the computer should do."
Every principle flows from this assumption. Meaningful names exist because humans need to understand what a variable represents. Modularity exists because humans can only hold so much context in their heads. Comments exist to explain intent to other humans.
These are maintenance strategies. They assume code will be read, understood, and modified by people.
But if code is regenerated whenever we need changes, who are we writing clean code for?
The AI doesn't need your variable names to be meaningful. It doesn't care if you repeated yourself. It won't get confused by a 500-line function.
Clean code was never about the code — it was about communication. AI reads your code — it just doesn't care if it's elegant.
Will code even need to be maintained by people?
Where This Breaks Down (For Now)
I want to be honest about the limitations — at least as of December 2025.
Scramble is a small project — a few thousand lines of code, contained scope. The kind of problem AI handles well.
Large, complex codebases are a different story. Context limits mean AI can only process a limited window of code at once — in a massive monorepo, that's like trying to understand a novel by reading one paragraph at a time. Every session starts cold, with no memory of what came before. And 66% of developers report spending more time fixing AI-generated code that looks correct but isn't — the "almost right" problem is a real issue.
But the tools are improving fast. Agents now maintain their own notes and to-do lists across sessions. Models ask clarifying questions rather than assuming. Planning modes break complex work into steps before diving in. Every few months brings meaningful progress.
For small, well-scoped problems — AI is already excellent. For enterprise-scale systems with decades of accumulated complexity — we're still in early days. But the trajectory is clear.
What This Means
As for me? I got a year's worth of overdue maintenance done in a day. The code is cleaner, better documented, and properly tested. I loved the process.
Getting here meant unlearning what I thought I knew. Our assumptions about what's valuable — craft, ownership, elegant code, maintenance — were formed in a different era. Those assumptions need recalibrating. The world has changed. And so must we.
If you're on that journey too — I hope you found my experience useful.