Rules for Humans: Key Principles for Developing with AI (Claude Code and Cursor)

I've been developing with AI since March 2023. I started with GitHub Copilot, moved to Cursor in September 2024, and added Claude Code in June 2025. I continue to work with both Claude Code and Cursor as I collaborate with teams running on each.

It has been an incredibly fun journey. The pace has been frantic. I've had my fair share of panic over whether I could keep up with it all. I have spent almost every day—weekends included—building and experimenting.

I thought I would take some time to reflect on the overall experience and capture some of the wisdom I've gained in a few key principles. Some of the terms used might be Claude Code-focused, but they should all apply equally to Cursor and other platforms. The tooling has started to converge on similar constructs.

I've labeled these principles "Rules for Humans." The main reason: I am very confident that before the end of the year—maybe in the next major model release—agents will become better at coding than us. Like a good teacher, we have to recognize when the student becomes the new master in a particular area, and back out gracefully.

These tips are for builders who are hands-on in the tools.


Build Tools for AI

Structure the tooling and commands you use to lint, build, test, and analyze code coverage so that they are optimized for an agent. Take the time to create scripts that optimize the output from these tools to be token-efficient. The more laser-focused your tooling output, the faster AI will drive to a solution for you.

I've built a custom Jest reporter that filters output down to just the failures and code coverage gaps. Use the agent to build this. Maybe have a session where you two talk about what your agent needs. Script development is a good place for vibe coding.
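
For illustration, here is a minimal sketch of a failures-only reporter using Jest's documented custom-reporter hook, onRunComplete. It is a simplified stand-in, not the exact reporter described above, and the file name and trimming limit are placeholders.

```js
// failures-only-reporter.js (placeholder name). Register it under "reporters"
// in jest.config.js so the agent sees only failures instead of the full run log.
class FailuresOnlyReporter {
  // Jest calls this once at the end of the run with the aggregated results.
  onRunComplete(_contexts, results) {
    if (results.numFailedTests === 0) {
      console.log('All tests passed.');
      return;
    }
    for (const suite of results.testResults) {
      for (const test of suite.testResults) {
        if (test.status !== 'failed') continue;
        // One locator line, then a trimmed failure message to keep output token-efficient.
        console.log(`${suite.testFilePath} :: ${test.fullName}`);
        for (const message of test.failureMessages) {
          console.log(message.split('\n').slice(0, 6).join('\n'));
        }
      }
    }
  }
}

module.exports = FailuresOnlyReporter;
```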

Skills and Commands from Repetition

Build skills and commands where you have repetition. This is where you perfect your flow with AI. I spend at least 15% of my time every day tweaking and evolving how I work with agents. Every PR has changes to my commands and rules.

If you are not making changes here, you're stagnating and becoming a problem.

To be honest, I have stopped making the changes myself. Lately, they are all made by my agents as part of a continual improvement loop we baked into our commands a while back.
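
To make the shape concrete: in Claude Code, custom slash commands are just markdown files under .claude/commands/, so a repetitive flow and its improvement loop can be captured in something like this (hypothetical file name and steps):

```markdown
<!-- .claude/commands/pre-pr.md (hypothetical), invoked as /pre-pr -->
Run the pre-PR flow:
1. Run lint, build, and the failures-only test reporter; fix anything that fails.
2. Append any mistakes made in this session to the project's lessons-learned file.
3. Propose concise updates to our commands and rules based on those lessons and summarize them for review.
```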

Rules from Mistakes

Build rules where the agent encounters mistakes. Do not use rules as a way of outlining every aspect of how development is done in your repo. This is like working with a know-it-all who is constantly telling you things you already know.

Have the agent draft the rules it needs to avoid mistakes that it made. Remind it to keep the rules as concise as possible without losing meaning. Be careful of it becoming an academic scholar and writing a full dissertation. It'll get drunk on context and start spewing out worthless slop when the rules become too verbose.

Tune Regularly

Tune your skills, rules, and CLAUDE.md files multiple times per day—every session where there are mistakes. Get the agent to do it. It's good at it.

My commands all generate, update, and read lessons-learned.md files. I have a separate process for converting the lessons learned into command, skill, and rule updates. The agents are now performing all of this. My commands and skills run the lessons-learned loop during the session. I usually run the command and rule updates right before submitting a PR. It's the Agile equivalent of a retrospective using the transcripts.

In Claude Code, I have a pre-compact hook that saves the full transcript so it can be analyzed.
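
A minimal sketch of what that hook can run (not the exact script, and the output directory is a placeholder): Claude Code hooks receive a JSON payload on stdin that includes the transcript path, so the script only has to copy that file somewhere durable. It gets registered as the command for the PreCompact event in .claude/settings.json.

```js
#!/usr/bin/env node
// save-transcript.js (placeholder name), registered as the command for the
// PreCompact hook event in .claude/settings.json.
// Claude Code sends hook input as JSON on stdin, including transcript_path.
const fs = require('fs');
const path = require('path');

let input = '';
process.stdin.on('data', (chunk) => (input += chunk));
process.stdin.on('end', () => {
  const { transcript_path } = JSON.parse(input);
  const outDir = path.join(process.cwd(), '.claude', 'transcripts'); // placeholder location
  fs.mkdirSync(outDir, { recursive: true });
  const stamp = new Date().toISOString().replace(/[:.]/g, '-');
  // Keep the original file name so the saved copy is easy to trace back to its session.
  fs.copyFileSync(transcript_path, path.join(outDir, `${stamp}-${path.basename(transcript_path)}`));
});
```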

Get Out of the Way

Lose your ego. While you might have constructed a workflow and system you are proud of, be prepared to tear it all down. Get suspicious if you haven't torn anything down after a major LLM upgrade.

Agents will soon be better at all of this than we are. Eventually, you and your systems will be the problem. Constantly question whether that moment has already arrived.

Use Your Experience with the Agent, Not Your Career Experience

Tune by looking back at real experiences you have had with the agent, not forward at expected outcomes based on your pre-agent career experience.

Your lifelong development experience is critically important in one way: detecting bullshit. You've never worked with anyone who can be so convincing AND so completely wrong at the same time as an agent.

Experiment and Observe

Try different approaches to verify you understand what is effective. Instrument your workflows. Pay attention to task durations and token usage, but optimize for your own productivity and balance the effort against the reward. There is no point in optimizing token usage if you aren't hitting your usage limit. Tune for speed if you feel unproductive waiting for the agent to finish something, especially if you suspect you could do it faster yourself.

I have switched back and forth between Claude Code and Cursor. There is something seductive about Composer 1 and working at that speed. I build all of my commands and rules in a /docs folder and have my .claude and .cursor files just reference them. This lets me switch between Claude Code and Cursor with little to no friction. When I burn up my 5-hour window on Claude Code, I jump over to Cursor and continue.
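
Concretely, the layout looks something like this (directory and file names here are illustrative):

```
docs/
  commands/
    pre-pr.md          # the real command content lives here
  rules/
    testing.md         # the real rule content lives here
.claude/
  commands/
    pre-pr.md          # thin wrapper that points at docs/commands/pre-pr.md
.cursor/
  rules/
    testing.mdc        # thin wrapper that points at docs/rules/testing.md
```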

I'm not recommending everyone try this. I'm just saying that speed can be powerful. It keeps you in the loop and focused when you are doing something that requires lots of iterations, like tweaking and mocking up a UI.

