AI Assisted Coding - Longer Term View

This is the fourth article in my series on AI-assisted coding. I have now built two AI-assisted SaaS applications hosted natively on Amazon AWS. For one I used Gemini 2.5 and then Gemini 3; for the other I am using Codex based on ChatGPT 5.2. Both have advanced to the point where the application is in use by non-technical users and delivering value, and both projects were vastly accelerated by AI-assisted coding techniques.

Takeaways:

  • The agents are getting better at a faster rate
  • I need to do less and less coding, and more and more detailed specification
  • Agents are getting better at troubleshooting errors and error messages
  • Complex tasks still benefit from detailed architectural knowledge
  • The context window is still a limiting factor

My feeling at this stage of my projects is that I rarely have to "code" in the traditional sense. I usually need to help refine ideas, help troubleshoot errors, and align the overall application to be coherent and follow best practices in areas like security, information architecture, and components. I am pleasantly surprised that the AI assistants I have used are relatively up to speed on which packages to use for certain tasks. From time to time, the AIs still recommend outdated approaches, like old versions of the AWS client libraries or obsolete Node.js frameworks.

If you are working on a well-understood stack like TypeScript/Node.js, Java, or Python, I think each of the major AI-assisted models will leapfrog the others in producing better and better code. I feel like the effort of switching assistants may not be worthwhile (nor would the costs). At this point in time, I would give an edge to Claude Code on everything, with Codex and Gemini 3 not too far behind. I have not yet worked with Grok, Qwen, or other LLMs on coding tasks. All the people who play this like a horse race are probably spending too much time on tool selection. Picking complementary engines would make sense: one for requirements, maybe one for code generation, possibly alternating which one generates test cases or integrations. As with code stacks, the benefits of one over the other will blur as the competition drives improvement.

Speaking of the context window, a large codebase or a very long coding/debugging session will usually fill the context window and require conversation compression. Additionally, restarts of the IDE (I use VS Code) require "saving" of the conversation. The compression and saving of context is still not where I would like it to be. I find myself repeating the same corrections to the AI assistant ("please compile before deploying to AWS to avoid unnecessary pipeline errors"; why can't it remember that?). I think there will be a time when large streams of low-density information, like debugging output, will no longer be conflated with high-density information from the developer. As these areas get solved, the context window problem will be less impactful.

During my most recent project, I have become more aware of AI frameworks and have leveraged SpecKit, which is open source. I like the way the framework approaches the problem. It uses a universal constitution that ties together the entire project with high-level directives. It then has additional Markdown files that provide context at the project level. Each feature is built as a specification, which goes through agent-driven clarification, planning, and tasking steps before reaching implementation. This approach yields something that allows the agent to code large swaths of the feature "hands off" from me. I then review, test manually, and enhance unit and integration testing. The results are good and pretty robust: I can spend a morning specifying, planning, and coding a feature, and by afternoon I am testing it and committing it to the repo. These are large, complex features, and the productivity is amazing.
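To make the workflow above concrete, here is a sketch of the artifact trail this kind of spec-driven framework leaves behind for a single feature. The file and directory names are my own illustrative assumptions, not necessarily SpecKit's exact conventions:

```
.specify/
  memory/
    constitution.md      # the universal, project-wide directives every feature must honor
specs/
  some-feature/          # one directory per feature (hypothetical name)
    spec.md              # the feature specification, refined through agent-driven clarification
    plan.md              # the technical plan derived from spec.md and the constitution
    tasks.md             # the ordered task list the agent implements "hands off"
```

Because each step writes a reviewable file, the developer's checkpoints (review, manual testing, added tests) map onto concrete artifacts rather than a scrolling chat history.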

All is not a bed of roses; the whole package of applications is still tricky to deliver. All AI applications "look and feel" the same, so more time needs to be spent on how to reliably specify a set of UI guidelines, icons, and layouts that are consistently applied. Complex environments like the AWS reference architectures still require thoughtful input and planning, handling multiple environments needs to evolve, and applying best practices in areas like package selection and application security still requires human guidance.

Ongoing weaknesses in the AI assisted coding process that I see are:

  • UI design and coherence - any company that can build an AI agent to translate Figma designs into exact implementations will make a lot of $$
  • Architectural understanding - the AIs still need a lot of structure, but within boundaries adhere to the paradigm well
  • Security - I think there is still a lot of work to be done here on best practices for everything from endpoint security, to secrets management, to package selection and testing

I will watch the space, because I think there is a lot of value in using the frameworks to get around the context window limitations of dealing with large codebases, lots of files, or long chat histories. Frameworks also help enforce consistency, leave a "paper trail" of artifacts, and provide a workflow that aligns with requirements-driven development. I am aware that a lot of technology leaders are moving on from some of the open-source frameworks to roll their own, using system prompts to give differing "roles" to the agents taking on tasks. This space will evolve quickly and may provide teams with a better way of leveraging AI development across multiple developers on a project.

My next installment will focus on the impact these tools will have in our industry of software engineering.

