Pat's Claude Code handbook
Getting a powerful LLM and shovelling some coins into it is only part of the job. What you need is method. Even better, a text file with prompts you can copy and paste over and over again.
(By way of introduction: a coding agent is a chatbot that can make changes to files on your computer, usually within a particular directory that you point it at. Claude Code is one such agent; there are others. Feel free to reach out with questions.)
Greenfields (new application)
Start with this:
I would like to make a (describe app). Please think about all the dimensions of this request and generate a list of all the clarifying questions you should ask me before beginning work on the task.
Asking for clarifying questions is a good tactic with any LLM, because your request has zillions of unstated dimensions and the model fills each one with a generic default based on training-set averages. Asking "pre-questions" broadens things out from the training set's assumption that you are trying to write a ReactJS note-taking app.
The key word "think" takes extra time and money, but kicks things up a notch from "generate plausible text" to "use the available context to avoid saying something silly."
To answer the agent's questions, write your responses (at least to the ones you think are important) one after another in a big prompt, ending with one of the instructions below.
For simpler apps your instruction is:
Generate CLAUDE.md and DESIGN.md for the desired app.
And then once you are happy with DESIGN.md:
write the app
For more complex apps:
Generate CLAUDE.md, DESIGN.md and TASKS.md (a list of tasks along with unique ID, title, status and notes).
Then your go-to prompt is:
Read TASKS.md and implement the next task, updating the status in TASKS.md and other files as required.
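For the sake of illustration, a TASKS.md under this scheme might look something like the following. The exact layout is whatever the agent chooses; this table format, and the task names in it, are assumptions:

```markdown
# TASKS

| ID    | Title                   | Status      | Notes                      |
|-------|-------------------------|-------------|----------------------------|
| T-001 | Set up project skeleton | done        |                            |
| T-002 | Implement data model    | in progress | Blocked on schema decision |
| T-003 | Add CLI entry point     | todo        |                            |
```

The "implement the next task" prompt works because the agent can scan the status column, pick the first unfinished task, and update the row when it is done.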
As far as I can tell, there's nothing special about any of these markdown files except perhaps CLAUDE.md (if you are using Claude). Sometimes when I started a new session, it was necessary to explicitly prompt the agent to look at certain files, but if they were listed in CLAUDE.md then I didn't need to. (But, non-determinism, so who knows.)
Generating/rewriting code
Sometimes the model writes me some code and I don't like it, but I can't put my finger on why. If I say this to the model then it will probably try to please me, and I don't want that. A good solution is to ask for alternatives:
What are the different styles in which the code for (describe task) can be written?
You can then say something like:
Rewrite the app to use the (whatever it called the option you liked) style.
There is a general principle here about finding common language. If the model calls it X, and you want it, use that word X. Ask questions to find out what words the model knows.
Fixing bugs
The bug-fixing cycle: identify a defect, diagnose the cause, implement a fix, and validate that the defect is gone.
I use a text file called BUGS.md because I don't know how to set up a Jira integration:
Create a file called BUGS.md to store a record of bugs (including solved bugs). Each bug should have a unique number to identify it, a title, a status, and any brief notes required about the bug. The first bug is: (describe unwanted behaviour here).
Please make any necessary changes to CLAUDE.md so that you will know how to use the BUGS.md file in future sessions.
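What the agent generates will vary, but a BUGS.md in the shape described above might look something like this (the format and the sample bugs are invented for illustration; the statuses follow the "testing fix" convention used below):

```markdown
# BUGS

| #  | Title                      | Status      | Notes                          |
|----|----------------------------|-------------|--------------------------------|
| 1  | Crash on empty input       | resolved    | Added guard in the parser      |
| 2  | Timestamps off by one hour | testing fix | Suspected DST handling         |
```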
Then because my brain has been rotted, I don't even look at this text file, I ask the agent:
List unsolved bugs
I copy-paste the following prompts as required:
Please record a new bug: (describe unwanted behaviour here).
What is the likely cause of bug #_, and how can we fix it? Think about it.
The agent will often jump to conclusions about the cause of the bug, implementing a non-fix that adds cruft to your codebase. I find the following prompts useful to bring some discipline to the process:
Please search the web to see if this is a known issue with a pattern for fixing it.
Is there any way to instrument the current code to validate that we have correctly diagnosed the problem, before implementing a fix?
In this scenario the agent would add print statements to the codebase and predict the output that we would see if its diagnosis of the bug was correct. The output of print statements is language, and it's a large language model, so it's good at predicting that kind of thing. You can paste the debug output back into the agent, and it will know whether its hypothesis was proven or disproven.
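To make the instrumentation idea concrete, here is an invented example; the function, the bug, and the print statements are all hypothetical. The agent suspects a stale cache, adds prints, and predicts the output we should see if its diagnosis is right:

```python
# Hypothetical bug: renamed users keep their old name.
# Diagnosis to validate: the cache is never invalidated.
cache = {}

def get_user_name(user_id, fetch):
    # If the diagnosis is correct, a second lookup for the same user
    # will print "cache HIT" and return the stale value, even after
    # the underlying data source has changed.
    if user_id in cache:
        print(f"cache HIT for {user_id}: {cache[user_id]!r}")
        return cache[user_id]
    name = fetch(user_id)
    print(f"cache MISS for {user_id}, fetched {name!r}")
    cache[user_id] = name
    return name

# Reproduce the bug and compare against the predicted output:
print(get_user_name(1, lambda _: "Alice"))   # MISS, returns "Alice"
print(get_user_name(1, lambda _: "Alicia"))  # HIT, still "Alice": diagnosis confirmed
```

You would paste the actual printed output back into the agent; if it matches the prediction, the diagnosis stands, and only then is a fix worth implementing.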
Once I am happy with the plan:
Please implement this fix and commit to git. Do not mark the bug as resolved. Mark the bug as "testing fix", and git commit. I will do a final test before asking you to mark the bug resolved.
This verbiage is because it grinds my gears when the agent says "Yay us, we fixed the bug ✅✅✅🎉🎊✨🌟🙌😄🥳💖🎶💃🕺" and marks the bug as resolved, and I haven't even tested it yet.
If the fix was a dud:
Bug #_ is still happening. Please revert the last git commit and think about some other explanations for bug #_.
If the agent has added a lot of debug statements, I need them taken out before my final round of testing:
Bug #_ appears to be fixed. Please remove all the debugging statements, mark the bug as "testing fix", and git commit. I will do a final test before asking you to mark the bug resolved.
Finally:
Mark this bug as resolved and git commit
Adding features
I would like to keep track of feature requests in the repo. What are our options for doing that?
(I accepted Claude's suggestion of a FEATURES.md file.)
New feature request: (describe feature). What are our options for implementing this feature, and what clarifying questions should I answer before we decide on an approach? Think about it and search the web if required.
(Given options 1-4...)
Document option 1 in FEATURES.md so that we can implement it another time.
Or, if I want to proceed now:
Go with option 1. (Additional requirements here.) Git commit and mark the feature as "testing"; I will tell you when to mark it complete.
A note on Git usage
You may have noticed that the prompts above bake git into the loop: the agent commits at every checkpoint (fix implemented, debug statements removed, bug resolved), which means a dud fix is one revert away and nothing is lost when the agent goes off the rails.
Conclusion
Coding agents have truly unlimited capacity to waste your time if you are not careful -- sort of like the pokies or doom-scrolling -- but with the right nudges they can be a very effective tool.