Maximizing Developer Value: Using AI Across the Entire Code Lifecycle
There’s now a wealth of information and tutorials floating around about how to go from 0 to 1 with AI, or even vibe code personalized apps and prototypes. But what comes next, and how do we use AI effectively across every part of our development lifecycle?
Understanding the Code Completion Curve
The feature, or code completion, curve tends to look something like this when you compare the number of lines of code you write against what actually ends up shipping once the work is complete:
What happens at each part of the graph above roughly maps to how we can think about using AI for different ‘tasks’ and ‘roles’ in the coding lifecycle: positioning AI for success, and also setting us developers up to maximize our value in the cycle.
Let’s talk about the first part of this graph, where we’re writing code to get a feature or requirement working, or what I’m calling the Generate zone.
The 'Generate' Zone: Using AI as a Creative Foil
Interestingly (or not), most of the AI examples I find online these days are heavily weighted toward this zone (I see you, vibe coders). If you’ve attempted an AI experiment, this is probably where you started, and you had either success, mixed success, or no success.
This also happens to be the zone where we, as humans, are most effective. Creativity isn’t bred from next-token prediction; it comes from building business context around the applications we create and operate, and designing creative ways to implement a requirement. If we just try to put AI in this box, no one ends up happy except the 30-minute app creators.
As we shift from generation in a 0-to-1 scenario to incremental features, AI can also help us onboard to code, learn it, and make changes. Here, we assert control over which AI tools we use, leveraging AI for efficiency and quality, yet without giving up control over what we learn. It’s important to draw that distinction so we reach for the right AI components.
You’ve also generated the most code you’ll have to maintain at the moment you finally get the thing working for the first time, which leads us into the second, or ‘Refactor’, zone.
The 'Refactor' Zone: Optimizing Code with AI Assistance
Here, in the refactor-and-optimize zone, we apply our knowledge of coding principles, pain from the past, and other knowledge about things like performance to slowly reduce the code we built, refactor it based on good coding practices, and make it ready to ‘ship’. There’s an art and a science to this, but we’re using what we’ve generated as the input, and our standards and observations to reduce and refactor.
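To make the Refactor zone concrete, here’s a minimal, hypothetical sketch in Python (the function names and data are invented for illustration, not from any real codebase): the kind of verbose first-pass code the Generate zone often produces, and the tighter version we might refactor it into, with or without AI assistance, once it works.

```python
def total_active_spend_v1(orders):
    # First pass out of the Generate zone: correct, but verbose,
    # with nested conditionals and more lines to maintain.
    total = 0
    for order in orders:
        if order["status"] == "active":
            if order["amount"] > 0:
                total = total + order["amount"]
    return total


def total_active_spend_v2(orders):
    # Refactor zone: same behavior, expressed as a single
    # comprehension that states the rule directly.
    return sum(
        o["amount"]
        for o in orders
        if o["status"] == "active" and o["amount"] > 0
    )


orders = [
    {"status": "active", "amount": 40},
    {"status": "cancelled", "amount": 15},
    {"status": "active", "amount": 10},
]

# Both versions agree, which is the point of a safe refactor:
# behavior stays fixed while the line count and complexity shrink.
assert total_active_spend_v1(orders) == total_active_spend_v2(orders) == 50
```

The key design point is that the refactor uses the working first pass as its input and a behavioral check (here, a simple assertion) as its guardrail, which is exactly the kind of constrained, goal-driven task that suits an AI assistant.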
The blue zone also happens to be the next ‘underserved’ area I think about more and more for AI. While we need to provide a lot of input here, as developers we can provide specs, build observability platforms, and set goals for code refactoring, tasks that tend to lean on an existing knowledge base more than pure generation does. Which leads us to the final, or ‘move on’, zone.
The 'Move On' Zone: Knowing When Good Enough is Best
I call this the ‘move on’ zone, but you could also call it the danger zone of development. If things are going well enough, we need to know when to stop optimizing for optimization’s sake and move on, avoiding the refactoring loop that can happen as we add features and then continually revisit ‘good enough’ code.
Why These Zones Matter for Effective AI Integration
For starters, when we use AI, it’s important to recognize the benefit that AI and LLMs can provide us while not losing the creativity and systems thinking that humans can, in fact, provide. In all cases, it’s about the systems around AI that we need to create and maintain in order for the LLM, and us, to build good software.
Practical Recommendations for Using AI in Each Zone
If you’re getting started, here are a few recommendations as you think about the ‘zones’ and using AI in each one. While not all-inclusive, hopefully these provide a few nuggets, or places where you can use LLMs or your existing systems to improve LLM performance.
Generate
Refactor
Move on
Share your own experiences with AI in the code lifecycle in the comments below!
This week, a look at what happened in my mailbox last week:
Let’s address another comment from one of my initial posts about AI usefulness:
YES! It can be a bit daunting to think about AI, especially in the context of the current market, where you hear about ‘AI taking jobs’ and all these other things (look at the annual reports for the full story on these companies and don’t get sucked into the AI-ness of it).
In one of my first roles out of college, I was a process analyst, and we value-stream mapped a number of client activities. Value stream mapping is a great way to identify steps in a process that aren’t fun or don’t add value, and to see where AI can potentially remove that work.
Another way to help remove the drudgery is to consider the end goals of any process and simply include them as part of a prompt to thinking models. This may surface new ways to think about a problem and remove the toil, or identify further toil that can be automated or changed.
Ultimately, the goal is to lean into what humans do best with AI being one of the most capable tools we have available today.
My experience has been similar. When I was going from prompt to fully-deployed Azure instances via TF, once the agent had gotten the TF created and validated, I started using prompts to refactor:

- Update all instances to communicate over PrivateLink connections where supported
- Extract any remaining values out into parameters files, especially resource group names if not already done
- etc., etc.

What impressed me was how much of it was already following best practice, and my guidance only needed to be gentle. My hunch is that, like everything AI these days, this will only get better and require less guidance. Because at the end of the day, I've always been applying these types of patterns and processes to my code; I just had to do both the thinking and the writing. It'll figure out how to do both eventually, I believe.
Excellent write up. My takeaway, right or wrong, is that these zones imply singular focus on a task at hand as we then move on to the next problem. Two things there:

1. This has me thinking about… what if we have an entire application that’s just ‘move on’? Do we naturally drift backwards into refactor as we scale?
2. Agentic code assistants have the capacity to help you and wreck you at the same time. They are amazing at getting something working and also amazing at introducing junk and clutter. The reason I bring that up is mostly that the assistance becomes less modular and we race to the “move on…” phase at a much faster and more expansive scale, which creates more danger zones.

Really enjoyed it, man. Thank you.