I Coded From the Bleachers. Here's the Claude Code Setup That Made It Possible.

I watched my son's basketball game Sunday without checking my laptop once.

Three PRs reviewed. Two bugs fixed. A Slack thread resolved.

Just me, courtside, on my phone.

Boris Cherny, the guy who built Claude Code, dropped a thread this week on 15 features most people haven't touched. 2.2M views. Here's my honest take, organized by what actually matters.


1. Your laptop doesn't need to be open.

The mobile + remote control stack is the most under-appreciated shift in how engineers (or, in my case, a PM) actually work.

I write probably 30% of my code from the iOS app: not for complex logic, but for dispatch. The commands: --teleport pulls a cloud session down to your local terminal; /remote-control steers a local session from anywhere. Boris keeps remote control enabled by default in /config.

The setup I'd recommend: keep a cheap cloud VM warm, point Claude at it, and your workstation is now your phone. Or just leave your Mac running at home. Either way, I assign work at halftime. I check in on the drive home. The agents run the whole time.


2. Verification is the multiplier. Most teams are skipping the hard part.

The single most important thing: give Claude a way to verify its own work, and you 2-3x output quality. But these features only work if you've built the right test harness first.
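The shape of that loop, independent of any Claude Code specifics, is generate, verify, retry. A minimal sketch, where the `propose_fix` stub stands in for the model call (everything here is illustrative, not Claude Code's API):

```python
# Illustrative sketch: an agent loop only compounds in quality when each
# iteration ends with a verification step the agent can read.

def propose_fix(attempt: int) -> str:
    # Stub "agent": returns a broken patch first, then a correct one.
    return "return a + b" if attempt > 0 else "return a - b"

def verify(patch: str) -> bool:
    # The verification harness: run the candidate against a known case.
    namespace = {}
    exec(f"def add(a, b):\n    {patch}", namespace)
    return namespace["add"](2, 3) == 5

def closed_loop(max_attempts: int = 3) -> str:
    for attempt in range(max_attempts):
        patch = propose_fix(attempt)
        if verify(patch):
            return patch  # verified: the agent closed its own loop
    raise RuntimeError("no verified patch within budget")

print(closed_loop())  # the second attempt passes verification
```

Without the `verify` step, the first (broken) patch would ship. That's the whole argument for building the harness before the automation.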

What that actually requires:

Real user journey test cases, not just unit or regression tests: full end-to-end flows that simulate what a real user does. Vague acceptance criteria break everything downstream. Personally, I've found Claude Code always takes the shortest path if no strict journey tests are defined.

The right environment to validate in. We use the Chrome extension every time a frontend feature is involved: asking someone to build a website without letting them use a browser produces a bad website. Give Claude a browser and it iterates until it looks right. For other domains: give the agent a game engine if you're building a game. A simulator. Whatever the real runtime is. The agent needs to run it, interact with it, and close the loop itself.

Hooks make it deterministic. Route permission prompts to Slack or WhatsApp. Poke Claude to keep going when it stalls. Log every bash command. Once your test harness is solid and hooks are in place — then /loop, /schedule, and autonomous PR babysitting work at the quality bar you actually need. Not before.
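As one illustration of the "log every bash command" idea, here's a minimal hook-style script. I'm assuming a JSON payload arrives on stdin with a `tool_input.command` field, which is roughly how Claude Code's PreToolUse hooks pass tool details; check the hooks documentation for the exact schema before relying on it:

```python
# Hedged sketch of a "log every bash command" hook. The stdin JSON shape
# is an assumption; verify it against the Claude Code hooks docs.
import json
import time

def log_command(payload: str, log_path: str = "bash_audit.log") -> str:
    event = json.loads(payload)
    cmd = event.get("tool_input", {}).get("command", "<unknown>")
    line = f"{time.strftime('%Y-%m-%dT%H:%M:%S')} {cmd}"
    with open(log_path, "a") as fh:
        fh.write(line + "\n")
    return line

if __name__ == "__main__":
    # Claude would invoke this as a hook; here we feed it a sample payload.
    demo = '{"tool_input": {"command": "git status"}}'
    print(log_command(demo, "/tmp/bash_audit_demo.log"))
```

The same pattern extends to the Slack/WhatsApp routing case: instead of appending to a file, post the payload to a webhook and wait for approval.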

For PMs especially: your job in an agentic world isn't to write code. It's to write test cases. That's the new spec.
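What a journey test as spec can look like, with a hypothetical `FakeShop` standing in for a real app driven through its real runtime (a browser, an API):

```python
# Illustrative sketch: a user-journey test as the "spec". FakeShop is a
# stand-in; the point is the test walks a full flow, not a single unit.

class FakeShop:
    def __init__(self):
        self.cart, self.orders = [], []

    def add_to_cart(self, sku: str):
        self.cart.append(sku)

    def checkout(self) -> dict:
        if not self.cart:
            raise ValueError("empty cart")
        order = {"items": list(self.cart), "status": "confirmed"}
        self.orders.append(order)
        self.cart.clear()
        return order

def test_browse_to_purchase_journey():
    # Full journey: land -> add item -> checkout -> order confirmed.
    shop = FakeShop()
    shop.add_to_cart("SKU-123")
    order = shop.checkout()
    assert order["status"] == "confirmed"
    assert order["items"] == ["SKU-123"]
    assert shop.cart == []  # cart resets after purchase

test_browse_to_purchase_journey()
print("journey test passed")
```

An agent that only has to satisfy a unit test can stub its way through; an agent that has to satisfy this flow has to make the whole path work.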


3. The parallelism tools — use them like a fleet, not a single terminal.

A few that genuinely changed how I work:

Git worktrees (claude -w) let you run dozens of Claude sessions in the same repo simultaneously with zero conflicts, though I've found it struggles when you go beyond three worktrees. /batch takes that further: describe a migration, and Claude fans the work out to as many worktree agents as it takes. Each one tests and opens a PR independently.
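The underlying pattern is plain git, no Claude-specific flags needed: one checkout and branch per parallel task, so agents never collide on the same index. A sketch:

```python
# Sketch of the worktree pattern behind parallel agents: each task gets
# its own branch and working directory via `git worktree add`.
import subprocess
import tempfile
from pathlib import Path

def make_worktrees(repo: Path, tasks: list[str]) -> list[Path]:
    paths = []
    for task in tasks:
        wt = repo.parent / f"wt-{task}"
        # -b creates a fresh branch for this worktree from HEAD.
        subprocess.run(
            ["git", "-C", str(repo), "worktree", "add", "-b", task, str(wt)],
            check=True, capture_output=True,
        )
        paths.append(wt)
    return paths

if __name__ == "__main__":
    # Build a throwaway repo to demo against.
    root = Path(tempfile.mkdtemp())
    repo = root / "repo"
    repo.mkdir()
    subprocess.run(["git", "init", "-q"], cwd=repo, check=True)
    (repo / "README.md").write_text("demo\n")
    subprocess.run(["git", "add", "."], cwd=repo, check=True)
    subprocess.run(
        ["git", "-c", "user.email=a@b.c", "-c", "user.name=demo",
         "commit", "-qm", "init"],
        cwd=repo, check=True,
    )
    for wt in make_worktrees(repo, ["migrate-auth", "fix-header"]):
        print(wt.name, (wt / "README.md").exists())
```

Each worktree is a full checkout, so one agent's build artifacts and staged changes never touch another's.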

/branch lets you fork a session mid-conversation. Two directions, run in parallel, no context lost.

/btw is quieter but I use it constantly — ask Claude a side question while it's mid-task without breaking its flow. Full context, no interruption.

And /voice — Boris does most of his coding by talking to Claude, not typing. Honestly, I still find it a little awkward in the workplace. But at home? Completely changed how I write prompts. You speak 3x faster than you type, and the detail level goes way up naturally.


The real shift underneath all of this:

Claude Code (and any similar product) is a fleet you schedule and orchestrate, not a chatbot you prompt.

The engineers winning right now aren't faster typists. They've built better test harnesses, better environments, and better dispatch systems. The bottleneck is never what Claude can do. It's whether you've given it the right loop to close.

That's exactly what we built at UserApproved.AI — but for ecommerce brands, not engineering teams.

Our agents run continuously in the background. Auditing your conversion funnel. Watching competitor moves. Monitoring your own metrics and site changes. Surfacing the insight you needed three days ago — before you thought to ask.

The goal isn't a faster chatbot. It's the first system that actually watches your business for you.

If that sounds like something your team needs, let's talk.


Happy Builder @ UserApproved.AI


More articles by Reynold Wu
