Building Codebases for AI-Assisted Development
AI has become part of the development workflow for almost every team. Most teams now generate a considerable amount of production code with AI agents. Some do it successfully; others don't. The teams that fail are mostly the ones that miss a simple fact: AI agents are only as good as the context they are given.
If your codebase is in poor shape and you do not have strong engineers on your team, you are far more likely to generate garbage.
Foundations Matter
I’ve already written about the importance of engineering fundamentals in the AI era here, and it bears repeating.
None of this is new. What has changed is the consequence of ignoring these fundamentals. A messy codebase used to slow developers down; now it misleads AI too. Technical debt used to compound gradually; today it compounds with almost every AI-assisted autocomplete.
Designing with AI-Agents in Mind
One of the repositories my team built at Codewalla recently was an end-to-end automation suite: Playwright with TypeScript, running against QA and staging environments. Automation is a reliable stress test for consistent AI code generation. If selectors aren’t consistent, or responsibilities aren’t clearly separated, those inconsistencies spread quickly. AI generating tests in such a codebase will amplify whatever chaos exists. So the team started by codifying the rules.
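As a sketch of what "consistent selectors and clearly separated responsibilities" can look like in a Playwright/TypeScript suite (the `LoginPage` class and the `data-testid` values below are illustrative, not taken from the actual repository):

```typescript
// Minimal page-object sketch with centralized selectors.
// Assumption: a hypothetical login screen; names are illustrative.

// Narrow interface so this sketch stays self-contained; in a real suite
// this would be Playwright's Page type from '@playwright/test'.
interface PageLike {
  fill(selector: string, value: string): Promise<void>;
  click(selector: string): Promise<void>;
}

export class LoginPage {
  // Single source of truth: AI-generated tests reuse these selectors
  // instead of inventing new, slightly different ones.
  static readonly selectors = {
    email: '[data-testid="login-email"]',
    password: '[data-testid="login-password"]',
    submit: '[data-testid="login-submit"]',
  } as const;

  constructor(private readonly page: PageLike) {}

  // One responsibility per method: test files never touch raw selectors.
  async login(email: string, password: string): Promise<void> {
    await this.page.fill(LoginPage.selectors.email, email);
    await this.page.fill(LoginPage.selectors.password, password);
    await this.page.click(LoginPage.selectors.submit);
  }
}
```

A spec then reads `await new LoginPage(page).login(user, pass)`, and any selector change happens in exactly one place, which is precisely the kind of structure AI code generation can follow instead of fight.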
1. Make Standards Explicit
In many teams, coding conventions and best practices live in senior engineers’ heads. Let’s face it: engineers have made a living out of that for decades, so there is always a temptation to leave things as they are. Today, however, it is essential to make those conventions part of the codebase itself.
We encode these conventions into an AI_RULES.md file at the root of the repository. This ensures that anyone, including junior developers and AI agents, follows the same best practices.
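The exact contents depend on the team and the repository; a hypothetical sketch of what such a file might contain (all section names and rules below are illustrative):

```markdown
# AI_RULES.md (illustrative sketch)

## Selectors
- Always use data-testid attributes; never CSS classes or XPath.
- Add new selectors to the relevant page object, not inline in tests.

## Structure
- One page object per screen under src/pages/.
- Specs live under tests/ and import page objects only.

## Style
- TypeScript strict mode; no `any`.
- Every test must be tagged for its target environment (QA or staging).
```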
2. Add Sample Prompts to the Repository
Now that we have defined the AI rules, how do we enforce them, or at least make it obvious to new developers that they must use them? And how do we do this across different tools like Cursor, Windsurf, GitHub Copilot, and so on?
We create a .prompts/ directory for repeatable workflows. Each file describes how a task should be performed within the boundaries of this repository. Below is a sample prompt.
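The original sample is not reproduced here; a hypothetical .prompts/ file for one workflow might look like this (file name and steps are illustrative):

```markdown
<!-- .prompts/add-sanity-test.md (hypothetical) -->
# Task: Add a sanity test

1. Read AI_RULES.md at the repo root before writing any code.
2. Reuse an existing page object from src/pages/; do not write raw selectors.
3. Place the new spec under tests/sanity/ and tag it @sanity.
4. Run the suite against the QA environment and report the results.
```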
3. Ground AI in the Real System using tools / MCPs
Every configuration needed for AI assistants is committed directly to the repository.
This specific setup uses:
Before generating code, AI agents can:
Code generation is now grounded in the real system.
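Concrete file names and server choices vary by tool; as one hypothetical example, a committed MCP configuration for Cursor (conventionally at .cursor/mcp.json) could register a Playwright MCP server so the agent can inspect the running app before writing tests:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

Because the file is committed, every developer and every AI session gets the same grounding without per-machine setup.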
4. The all-important README
Like any other repository, this one needs a good README. But we add extra context to the README specifically to help AI-assisted programming.
We added a “Recommended Workspace Setup” section that specifies exactly how the system should be used:
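The section itself is specific to the repository; a sketch of its general shape (tool names, scripts, and versions below are illustrative, not the actual contents):

```markdown
## Recommended Workspace Setup (illustrative)

- Node 20+; run `npm ci`, then `npx playwright install`.
- Open the repo in an AI-native IDE (e.g. Cursor) so AI_RULES.md
  and the .prompts/ directory are picked up automatically.
- Copy .env.example to .env and point it at the QA environment.
- Verify the setup by running the sanity suite.
```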
AI coding assistants perform better when they see the system as a whole. Instead of relying on knowledge fragmented across the development team, we document the ideal setup so that every contributor and every AI session starts from the same foundation.
This repository structure reinforces boundaries and communicates architectural intent. When AI generates a sanity test, it knows where it belongs.
The Side Effect
Getting a repository to work on a dev’s local machine used to take most of a day. It usually required a walkthrough from a senior engineer to explain folder structure, conventions, and unwritten assumptions.
With this setup, developers were able to clone the repository, open it in Cursor (or any AI-native IDE of their choice), let the IDE follow the instructions, and run the code. A developer can start contributing in under an hour. More importantly, they do not need a senior engineer to assist with setup or explain implicit logic.
The repository explains itself, which is a direct consequence of removing ambiguity. When a codebase is designed properly, it reduces coordination overhead as well.
Bottom Line
If AI is part of your development workflow, your codebases need the harness to keep garbage code from being generated. AI will amplify whatever it finds. If the structure is disciplined, it generates production-ready code. If it’s messy, it scales the mess.