Pat's Claude Code handbook
A panorama of the birth of a new universe where stars have bits of JavaScript wafting around them, and a gigantic octopus reaches out to everybody.

Getting a powerful LLM and shovelling some coins into it is only part of the job. What you need is method. Even better, a text file with prompts you can copy and paste over and over again.

(By way of introduction, a coding agent is a chatbot that can make changes to files on your computer, usually within a particular directory that you point it at. Claude Code is one such agent. There are others. Feel free to reach out with questions.)

Greenfields (new application)

Start with this:

I would like to make a (describe app). Please think about all the dimensions of this request and generate a list of all the clarifying questions you should ask me before beginning work on the task.

Asking for clarifying questions is a good tactic with any LLM, because the model has zillions of dimensions and all of them have generic defaults based on averages. So if you ask "pre questions" then you can broaden things out from the training set's assumption that you are trying to write a ReactJS note-taking app.

The key word "think" takes extra time and money, but kicks things up a notch from "generate plausible text" to "use the available context to avoid saying something silly."

To answer the agent's questions, you can write the answers (at least to the ones you think are important) one after another in a big prompt, ending with one of the instructions below.

For simpler apps your instruction is:

Generate CLAUDE.md and DESIGN.md for the desired app.

And then once you are happy with DESIGN.md:

write the app

For more complex apps:

Generate CLAUDE.md, DESIGN.md and TASKS.md (a list of tasks along with unique ID, title, status and notes).
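
For illustration, a generated TASKS.md might end up looking something like this (a sketch only; the layout and task names are invented, and the agent will pick its own format):

    # TASKS.md
    - T1 | Set up project scaffolding | status: done | notes: -
    - T2 | Implement data model | status: in progress | notes: waiting on schema decision
    - T3 | Build the list view | status: todo | notes: -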

Then your go-to prompt is:

Read TASKS.md and implement the next task, updating the status in TASKS.md and other files as required.

As far as I can tell, there's nothing special about any of these markdown files except perhaps CLAUDE.md (if you are using Claude). Sometimes when I started a new session, it was necessary to explicitly prompt the agent to look at certain files, but if they were listed in CLAUDE.md then I didn't need to. (But, non-determinism, so who knows.)
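
In practice that just means CLAUDE.md mentions the other files, so the agent reads them at the start of a session. A minimal hand-rolled version might look like this (contents invented for illustration; normally the agent writes it):

    # CLAUDE.md
    This project is a (describe app).
    Key files:
    - DESIGN.md - architecture and design decisions
    - TASKS.md - task list (unique ID, title, status, notes); keep statuses up to date
    Read these files before starting work in a new session.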

Generating/rewriting code

Sometimes the model writes me some code and I don't like it, but I can't put my finger on why. If I say this to the model then it will probably try to please me, and I don't want that. A good solution is to ask for alternatives:

What are the different styles in which the code for (describe task) can be written?

You can then say something like:

Rewrite the app to use the (whatever it called the option you liked) style.

There is a general principle here about finding common language. If the model calls it X, and you want it, use that word X. Ask questions to find out what words the model knows.

Fixing bugs

The bug-fixing cycle involves identifying a defect, identifying the cause, implementing a fix and validating that the defect is now gone.

I use a text file called BUGS.md because I don't know how to set up a Jira integration:

Create a file called BUGS.md to store a record of bugs (including solved bugs). Each bug should have a unique number to identify it, a title, a status, and any brief notes required about the bug. The first bug is: (describe unwanted behaviour here).
Please make any necessary changes to CLAUDE.md so that you will know how to use the BUGS.md file in future sessions.
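
The resulting BUGS.md entries might look something like this (a sketch; the exact layout is whatever the agent decides):

    # BUGS.md
    ## Bug #1 - (short title for the unwanted behaviour)
    Status: open
    Notes: steps to reproduce, suspected cause, related commits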

Then, because my brain has been rotted, I don't even look at this text file; I just ask the agent:

List unsolved bugs

I copy-paste the following prompts as required:

Please record a new bug: (describe unwanted behaviour here).
What is the likely cause of bug #_, and how can we fix it? Think about it.

The agent will often jump to conclusions about the cause of the bug, implementing a non-fix that adds cruft to your codebase. I find the following prompts useful to bring some discipline to the process:

Please search the web to see if this is a known issue with a pattern for fixing it.
Is there any way to instrument the current code to validate that we have correctly diagnosed the problem, before implementing a fix?

In this scenario the agent would add print statements to the codebase and predict the output that we would see if its diagnosis of the bug was correct. The output of print statements is language, and it's a large language model, so it's good at predicting that kind of thing. You can paste the debug output back into the agent, and it will know whether its hypothesis was proven or disproven.
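
To make that concrete, here is the kind of instrumentation I mean, sketched in JavaScript around an invented bug: suppose the agent's diagnosis of bug #3 is that notes created offline have no createdAt value, which breaks sorting. (The file, function and field names are hypothetical.)

    // sortNotes.js (hypothetical)
    function sortNotes(notes) {
      // DEBUG bug #3: if the diagnosis is right, notes created offline
      // should print "createdAt: undefined" below.
      for (const note of notes) {
        console.log(`DEBUG bug#3 ${note.id} createdAt: ${note.createdAt}`);
      }
      return notes.sort(
        (a, b) => new Date(a.createdAt).getTime() - new Date(b.createdAt).getTime()
      );
    }

The agent predicts the debug lines it expects to see if it is right; you run the app, paste the real output back, and it can tell whether its prediction held.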

Once I am happy with the plan:

Please implement this fix. Do not mark the bug as resolved; mark it as "testing fix", and git commit. I will do a final test before asking you to mark the bug resolved.

This verbiage is because it grinds my gears when the agent says "Yay us, we fixed the bug ✅✅✅🎉🎊✨🌟🙌😄🥳💖🎶💃🕺" and marks the bug as resolved, and I haven't even tested it yet.

If the fix was a dud:

Bug #_ is still happening. Please revert the last git commit and think about some other explanations for bug #_.

If the agent has added a lot of debug statements, I need them taken out before my final round of testing:

Bug #_ appears to be fixed. Please remove all the debugging statements, mark the bug as "testing fix", and git commit. I will do a final test before asking you to mark the bug resolved.

Finally:

Mark this bug as resolved and git commit

Adding features

I would like to keep track of feature requests in the repo. What are our options for doing that?

(I accepted Claude's suggestion of a FEATURES.md file.)

New feature request: (describe feature). What are our options for implementing this feature, and what clarifying questions should I answer before we decide on an approach? Think about it and search the web if required.

(Suppose the agent comes back with options 1-4...)

Document option 1 in FEATURES.md so that we can implement it another time.

Or, if I want to proceed now:

Go with option 1. (Additional requirements here.) Git commit and mark the feature as "testing"; I will tell you when to mark it complete.

A note on Git usage

  • I ask the model to create a git commit before I test any code, so that I have an unambiguous way to refer to the version of the code that I tested. #devops
  • By making lots of commits, I don't have to worry about "do I want the agent to actually make a change or just talk to me about it", since I can revert anything (see the commands below).
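
For reference, the git mechanics behind those two points are nothing exotic; when a fix turns out to be a dud, the revert prompt boils down to something like this (commit hashes and messages are whatever the agent produced):

    git log --oneline -5        # find the commit I actually tested
    git revert --no-edit HEAD   # back out the last commit (the dud fix)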

Conclusion

Coding agents have truly unlimited capacity to waste your time if you are not careful -- sort of like the pokies or doom-scrolling -- but with the right nudges they can be a very effective tool.
