Messing around with claw-code: 100% offline agentic dev using LM Studio.

April 1, 2026, was a wild ride for the AI world. Anthropic’s Claude Code source code leaked, likely due to a misconfigured release process that caused the version 2.1.88 update package to include a direct reference to their internal source code.

This sparked the immediate creation of the instructkr/claw-code GitHub repo, a complete rewrite of the Claude Code harness in Rust. It made headlines as the fastest repository in history to surpass 100K stars. At the time of writing, it sits at 120K stars and over 100K forks.

Since I am currently deep into exploring local LLMs, a simple thought crossed my mind: Could I get this cutting-edge agentic software to talk to my local Qwen3.5 LLM?

The answer is yes, absolutely. And it was surprisingly easy.

The Testbed Setup.

Cloning the repo and building the project took no time at all. In fact, the bulk of this Wednesday-evening side project went into creating a Podman image and container. This ensures claw-code lives in an isolated environment, completely sandboxed from my personal data and other projects.
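For reference, the container setup can be sketched roughly like this. The image name and the existence of a Containerfile are my assumptions; the article does not show the exact commands:

```shell
# Sketch only: image name and Containerfile contents are assumptions.
git clone https://github.com/instructkr/claw-code.git
cd claw-code

# Build a sandbox image (a Containerfile installing the Rust toolchain is assumed)
podman build -t claw-code-sandbox .

# Mount only the project directory, keeping personal files out of reach.
# Depending on your Podman version, reaching the host's LM Studio server
# from inside the container may need host.containers.internal or extra
# --network/--add-host flags.
podman run -it --rm -v "$PWD":/workspace:Z -w /workspace claw-code-sandbox bash
```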

Once the container was ready, the configuration was simple:

  • I overwrote the ANTHROPIC_BASE_URL variable with my local LM Studio Server REST API URL.
  • I swapped the ANTHROPIC_API_KEY for a valid LM Studio API token.

With the routing in place, I kicked off the project with: cargo run
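Concretely, that rerouting is just two environment variables. A minimal sketch, assuming LM Studio's default local-server port; the exact path depends on which compatibility endpoint your LM Studio version exposes, and the token value is whatever you configured locally:

```shell
# Assumption: LM Studio's server listens on its default port 1234.
# The token is a locally configured value, not a real Anthropic key.
export ANTHROPIC_BASE_URL="http://localhost:1234/v1"
export ANTHROPIC_API_KEY="lm-studio-local-token"
```

With both variables set inside the container, cargo run picks them up and routes every request to the local model instead of Anthropic's cloud.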

Claw Code main menu screen after the first run.

Success.

Interacting with this intelligence, I opted for the most complex prompt imaginable:

hello
LM Studio showing the loaded Qwen3.5 model generating tokens.

On LM Studio, I could observe Qwen3.5 (qwen3.5-35b-a3b) jump into action and generate tokens.

Claw-code's first response.

Success, again! Claw-code was successfully talking to my local LLM setup.

Test 1: The Vibe-Coder's "Hello World".

I asked claw-code to build a simple hello world HTML page.

Build me a simple html page with css and JS that, when opened, presents me with a hello world screen.

About 50 seconds later, claw-code was done.

Claw-code informing me the task for the "Hello World" project has been completed.

As was to be expected from such a vague prompt, the resulting page presented itself in heavy neumorphism with blue-purple gradients.

The resulting Hello World page.

Success! We were on a roll.

Test 2: A web app with little to no context.

Time for a looser prompt targeting a more complex solution. I used one of my standard quick benchmark prompts:

Build a webapp (can be reactive) for a local music library. It uses the Browser Storage API to scan a folder for music files, then displays the found tracks in a filterable list that can be sorted for metadata of the track.

After some back-and-forth between me and claw-code to clarify the full scope of the project, the agent and Qwen3.5 went to work. About an hour later, I was presented with a functional page: it lets me select a local folder, scans the directory, and lists all the audio files it finds in a filterable view.

The Music Library page that displays all found tracks in a filterable list.

The Verdict.

Is this a usable web app ready to deploy? Of course not.

But this side quest wasn't about vibe-coding a music library app. It was about successfully setting up the claw-code agent in combination with a locally hosted Qwen3.5 instance. This lays the groundwork for more elaborate, agentic, spec-driven development scenarios that I will be exploring in the coming days.

Best of all, it is 100% offline, without a single cloud token spent.

If you are keen to learn more, don’t follow me. Instead:

Cheers! 🍻

More articles by Roger Keller
