Working with GitHub Copilot as a coding agent
Some time ago I started thinking about building an MVP for a personal project. But just estimating how much time it would take to build all the features I had in mind was a big obstacle before even getting started. Then things changed in the past few months: I lost my job for the second time in less than a year, and for developers AI is becoming more than a buzzword day by day. It is now a tool that can enhance your work. But if it is used without control, a project can degrade into work slop, turning into a feature factory. These are my thoughts on how I have been using it and the impact it is having on my development workflow.

When to use AI in a project

It is now viable to bring AI into a project as pair programming and, in the best cases, as coding agents, for greenfield projects, feature development, and project modernisation. These are the scenarios my workflow is based on.

Main considerations

  • Building context around our project:
      • Project documentation
      • Specifications
  • AI is good at identifying patterns to generate content, but it can make assumptions when something is not specified.
  • Technical stack decisions and pipeline limitations
  • Working with the AI tool

Context

It is often said that the LLM should have enough context to generate content that accurately matches our prompts.

But it is not viable to force the model to analyse the whole project every time we request content or information. So it makes sense to centralise project information, giving these tools enough resources to work with us. These options can help you get up to speed when building context:

  • Context folders: a specific directory (or set of directories) within the agent's working environment that holds data, documentation, past conversations, specifications, or other files the agent can access. It is a larger context source and can act like a memory tool.

  • AGENTS.md file: a markdown file in the repository that gives coding agents instructions about the project, such as build and test commands, conventions, and structure.

  • Context engineering: in short, being mindful about the right amount of context that helps our AI tool perform actions or respond to prompts.

  • Memory tool
  • Mask, don't remove: the model needs to understand the current state of the project and why our decisions changed that state.
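
As an illustration of the AGENTS.md idea above, here is a minimal sketch. The project details (stack, commands, folder names) are hypothetical; adapt them to your own repository:

```markdown
# AGENTS.md

## Project overview
Inventory MVP: a REST API built with Node.js and PostgreSQL.

## Build and test
- `npm install` to install dependencies
- `npm test` to run the test suite; all tests must pass before opening a PR

## Conventions
- TypeScript strict mode; avoid `any`
- One feature per pull request; reference the issue number in the title

## Context folders
- `docs/specs/` holds feature specifications
- `docs/decisions/` holds architecture decision records
```

Keeping this file short and factual works better than long prose: the agent reads it on every task, so it doubles as the project's memory.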

Project documentation

It serves a double purpose in our work:

  • Help us structure the project: after iterating multiple times over the same requirements, I came to these conclusions:

Any loose threads can be costly. Our expertise plays a big role at this phase, because each time we ask a coding agent to develop something, it could generate something that is not what we actually needed, something too difficult to fix, or an inaccurate feature, forcing us to rerun the coding agent and assume a new cost.
Formal practices like Disciplined Agile Delivery, or any other agile practice, can help us give structure to our project.

  • Help AI models understand and work better with our project.

Sizing the scope of our project improves the outcome and makes it closer to our actual needs.
It can also work as documentation and as a memory tool for our project.

Technical considerations

  • Stack selection: after analysing most of the project's needs, select the technical stack, assessing capabilities and restrictions, before assigning development tasks to a coding agent. If you already know the technology, you will be able to guide the coding agent when a bottleneck occurs.
  • Pipeline limitations: if tasks are going to be delegated to the GitHub cloud agent, you need to be aware of its environment setup to avoid disparity between your local environment and the coding agent's. Otherwise it will start adapting the code base to the environment's capabilities.
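
GitHub documents a mechanism for customising the Copilot coding agent's cloud environment: a workflow file named `copilot-setup-steps.yml` that pre-installs dependencies before the agent starts. A hedged sketch, assuming a Node.js project (the Node version and install command are illustrative):

```yaml
# .github/workflows/copilot-setup-steps.yml
# Pre-provisions the Copilot coding agent's environment so it matches
# the local setup, instead of letting the agent adapt the code base
# to whatever happens to be installed.
name: "Copilot Setup Steps"
on: workflow_dispatch
jobs:
  copilot-setup-steps:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
```

Check GitHub's current documentation for the exact job name and options, since this feature is still evolving.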

Developing tasks with GitHub Copilot

  • Copilot: it is really nice if you have the time to pair program with chat, beyond the Copilot essentials. The addition of chat modes is really helpful for defining customised agents that perform specific tasks, beyond just selecting a specific LLM for each one.

  • Running pipelines: in my case this was my first approach: create well-defined stories in a GitHub project and assign them to the Copilot coding agent, and as soon as a task is completed, Copilot creates a pull request. It worked well, but sometimes you need to update part of the context or interact with what has been developed. And the coding agent's running time is one hour, so if a requirement is too big it will not get completed.
  • Spec-Kit: in the middle of my proof of concept about developing with coding agents, I came across this tool. It helps a lot! No kidding, it is a nice approach to specifying, planning, tasking, and implementing requirements for different coding agents beyond GitHub Copilot.
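
To make the chat modes idea above concrete: in VS Code a custom chat mode is defined in a `.chatmode.md` file with a small front-matter header. A sketch under the assumption of a review-only agent (the description, tool names, and instructions are illustrative, not a canonical list):

```markdown
---
description: Review-only mode that inspects code without editing files
tools: ['codebase', 'search']
---
You are a code reviewer for this project. Analyse the selected code for
bugs, missing tests, and deviations from the project conventions in
AGENTS.md. Do not propose file edits; respond with findings only.
```

Pairing a mode like this with a specific model gives you a reusable, scoped agent instead of re-typing the same instructions in every chat.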

Conclusions

  • Context is key: engineer your context to let the LLM understand your project.
  • Depending on the time you have to develop your requirement, select the development flow of a coding agent. This does not take multi-agent architectures into account.
  • Know your tech stack before assigning all work tasks to a coding agent, and know how these technologies work, in case the model starts hallucinating or the project stops working and the fix is a small one. It is better to rely on your experience than to let the agent run in circles and consume resources in vain.
  • Check the model and its training cutoff date before selecting your tech stack if you want to use a coding agent.
