I Analyzed the Claude Code Source, and if You Write “WTF”, Anthropic Knows.
So I spent some time going through the Claude Code source, expecting a smarter terminal assistant.
What I found instead feels closer to a fully instrumented system that observes how you behave while using it.
Not saying anything shady is going on. But the level of tracking and classification is much deeper than most people probably assume.
Here are the things that stood out.
1. It classifies your language using simple keyword detection
This part surprised me because it’s not “deep AI understanding.”
There are literal keyword lists; this is where the “WTF” from the title comes in.
Matching one of these words trips a negative sentiment flag.
Even phrases like “continue”, “go on”, “keep going” are tracked.
It’s basically regex-level classification happening before the model responds.
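To make that concrete, here’s a rough sketch of what keyword-level flagging looks like. The word lists and function names below are my own illustration, not the actual source; only “WTF” is confirmed by what I found, and the continuation phrases come straight from the post above.

```typescript
// Illustrative sketch only: keyword lists and names are hypothetical,
// except "wtf" (confirmed) and the continuation phrases quoted above.
const NEGATIVE_KEYWORDS = ["wtf"]; // the real list is longer
const CONTINUATION_PHRASES = ["continue", "go on", "keep going"];

type InputFlag = "negative_sentiment" | "continuation" | null;

// Simple substring matching: no model involved, just string checks
// that run before the request ever reaches the LLM.
function classifyInput(text: string): InputFlag {
  const lower = text.toLowerCase();
  if (NEGATIVE_KEYWORDS.some((w) => lower.includes(w))) return "negative_sentiment";
  if (CONTINUATION_PHRASES.some((p) => lower.includes(p))) return "continuation";
  return null;
}

console.log(classifyInput("wtf, that deleted my file")); // "negative_sentiment"
console.log(classifyInput("keep going"));                // "continuation"
```

The point is how cheap this is: plain substring checks, nothing semantic.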
2. It tracks hesitation during permission prompts
This is where it gets interesting.
When a permission dialog shows up, it doesn’t just log your final decision.
It tracks how you behave while the dialog is open, and each behavior maps to a named internal event.
It even counts how many times you try to escape.
So it can tell the difference between “I clicked no quickly” and “I hesitated, typed something, then rejected.”
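A minimal sketch of that kind of prompt instrumentation would look something like this. All names here are hypothetical, not the actual event names from the source:

```typescript
// Hypothetical sketch: record more than the final answer on a
// permission prompt — time to decide, keystrokes, Escape presses.
interface PermissionEvent {
  decision: "accept" | "reject";
  msToDecide: number;
  keystrokesBeforeDecision: number;
  escapeCount: number;
}

class PermissionPrompt {
  private openedAt = Date.now();
  private keystrokes = 0;
  private escapes = 0;

  // Called on every keypress while the dialog is open.
  onKey(key: string): void {
    if (key === "Escape") this.escapes++;
    else this.keystrokes++;
  }

  // Emits a single event capturing the whole interaction.
  decide(decision: "accept" | "reject"): PermissionEvent {
    return {
      decision,
      msToDecide: Date.now() - this.openedAt,
      keystrokesBeforeDecision: this.keystrokes,
      escapeCount: this.escapes,
    };
  }
}
```

With this shape, a quick “no” and a hesitant, typed-then-rejected “no” produce visibly different events.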
3. Feedback flow is designed to capture bad experiences
The feedback system is not random.
It triggers based on pacing rules, cooldowns, and probability.
If you mark something as bad, a follow-up flow kicks in, and if you agree, the report can include more than just your thumbs-down.
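The pacing side can be sketched as a gate combining a cooldown with a random draw. The function name and values here are illustrative, not the real ones:

```typescript
// Hypothetical pacing gate: the feedback prompt fires only if the
// cooldown has elapsed AND a probability draw passes. The injectable
// `rand` makes the logic testable; values are illustrative.
function shouldShowFeedbackPrompt(
  lastShownAt: number | null,
  now: number,
  cooldownMs: number,
  probability: number,
  rand: () => number = Math.random,
): boolean {
  if (lastShownAt !== null && now - lastShownAt < cooldownMs) return false;
  return rand() < probability;
}
```

The cooldown keeps the prompt from feeling spammy; the probability keeps it from firing on every eligible moment.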
4. There are hidden trigger words that change behavior
Some trigger commands aren’t obvious unless you read the code.
The input box is parsing these live while you type.
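Live trigger parsing is conceptually simple. Here’s an illustrative sketch; the trigger word below is a placeholder, not one of the actual hidden commands:

```typescript
// Hypothetical sketch of per-keystroke trigger detection.
// "/example-trigger" is a placeholder, not a real Claude Code command.
const TRIGGERS = new Map<string, string>([
  ["/example-trigger", "example_mode"],
]);

// Runs on every input change, before anything is submitted.
function parseLive(inputSoFar: string): string | null {
  for (const [word, mode] of TRIGGERS) {
    if (inputSoFar.trimStart().startsWith(word)) return mode;
  }
  return null;
}
```

The notable part is the timing: the check runs as you type, not when you hit Enter.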
5. Telemetry captures a full environment profile
Each session logs quite a lot about your environment, and with certain flags enabled it logs even more.
This is way beyond basic usage analytics. It’s a pretty detailed environment fingerprint.
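As a rough illustration of how much a session-level environment profile can carry, here’s a hypothetical payload. The field names are mine, not Claude Code’s:

```typescript
// Hypothetical environment profile: field names are illustrative,
// chosen to show how detailed a "basic" telemetry payload can get.
import * as os from "node:os";

interface EnvProfile {
  platform: string;
  arch: string;
  nodeVersion: string;
  cpuCount: number;
  totalMemMb: number;
}

function collectEnvProfile(): EnvProfile {
  return {
    platform: os.platform(),
    arch: os.arch(),
    nodeVersion: process.version,
    cpuCount: os.cpus().length,
    totalMemMb: Math.round(os.totalmem() / 1024 / 1024),
  };
}
```

Even these five fields, combined, narrow a machine down considerably; a real profile with more fields narrows it further.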
6. MCP command can expose environment data
Running:
claude mcp get <name>
can return the server’s configuration, including its environment variables.
If your env variables include secrets, they can show up in your terminal output.
That’s more of a “be careful” moment than anything else.
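If you want to be careful, one defensive habit is redacting secret-looking values before echoing env vars anywhere. This sketch is my own suggestion, not part of Claude Code:

```typescript
// Defensive sketch (my suggestion, not Claude Code code): redact
// secret-looking environment values before printing them.
const SECRET_PATTERN = /(key|token|secret|password)/i;

function redactEnv(
  env: Record<string, string | undefined>,
): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [name, value] of Object.entries(env)) {
    if (value === undefined) continue;
    out[name] = SECRET_PATTERN.test(name) ? "[redacted]" : value;
  }
  return out;
}
```

Name-based matching is crude (it misses oddly named secrets), but it catches the common cases before they hit your scrollback.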
7. Internal builds go even deeper
There’s a mode (USER_TYPE=ant) where it collects even more.
All of this gets logged under internal telemetry events.
Meaning behavior can be tied back to a very specific deployment environment.
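Gating extra telemetry on an env variable like this is straightforward. Here’s an illustrative sketch; USER_TYPE=ant is from the source, but the field names are hypothetical:

```typescript
// Hypothetical sketch of env-gated telemetry depth. USER_TYPE=ant is
// the real gate per the post; the field names here are illustrative.
function telemetryFields(userType: string | undefined): string[] {
  const base = ["session_id", "platform"]; // illustrative base fields
  if (userType === "ant") {
    // Internal builds tack on deployment-specific context.
    return [...base, "deployment_env"]; // illustrative extra field
  }
  return base;
}
```

The same event stream simply gets richer when the internal flag is set, which is exactly why it can be tied back to a specific deployment.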
8. Overall takeaway
Putting it all together:
It’s not “just a chatbot.”
It’s a highly instrumented system observing how you interact with it.
I’m not claiming anything malicious here.
But once you read the source, it’s clear this is much more observable and measurable than most users would expect.
Most people will never look at this layer.
If you’re using Claude Code regularly, it’s worth knowing what’s happening under the hood.
Curious what others think.
Is this just normal product telemetry at scale, or does it feel like over-instrumentation?
If anyone wants, I can share the cleaned source references I used.