CLI Coding Agents Tierlist

Almost every major AI lab now ships its own CLI agent. I’ve spent the last few months testing a bunch of them, focusing on the native tools — the ones built by the companies that created the underlying models. These aren’t wrappers or third-party interfaces; they’re the official implementations.

These tools bring AI assistance straight into your terminal and let the models interact with your files and system tools, whether through straightforward command-line interfaces or more interactive terminal UIs. I’ve ranked them into three tiers: S-tier tools are production-ready daily drivers, A-tier tools shine in specific workflows, and B-tier tools are solid but come with real caveats. I’ve also added two extra tiers for additional CLI and IDE tools that don’t have their own native LLMs.

Claude Code

Getting it set up is simple:

# macOS, Linux, or WSL
curl -fsSL https://claude.ai/install.sh | bash

# Or via Homebrew
brew install --cask claude-code        

Once you’re in, just run claude and start chatting about your codebase or files. Session management feels natural, and you can pick up where you left off with claude --continue, list recent sessions with claude --list, or resume a specific one by ID. You can also rename a session to something more intuitive by using /rename. What really sets it apart is plan mode. Ask it to refactor a module or add a feature and it first lays out the steps so you can review them.
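The session workflow described above, as a quick terminal sketch. The `--resume` flag for picking a session by ID is my assumption — check `claude --help` on your install for the exact spellings:

```shell
claude                        # start an interactive session in the current repo
claude --continue             # pick up the most recent session
claude --list                 # list recent sessions with their IDs
claude --resume <session-id>  # resume a specific session (assumed flag)

# inside a session:
#   /rename billing-refactor   give the session a memorable name
```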

Gemini CLI

Google’s interactive terminal agent works really well alongside Jules, their autonomous coding agent that handles long-running work in the background.

Installation is straightforward:

npm install -g @google/gemini-cli        

Fire it up with gemini and you’re off. Resuming sessions is easy with gemini --continue. You can run /auth and choose between using an API key, your Google account, or Google Cloud Vertex account. The real magic happens when you hand off bigger tasks to Jules for async execution. You can install the official extension via:

gemini extensions install https://github.com/gemini-cli-extensions/jules        

Next, you can kick off a heavy refactoring job, walk away, close the laptop, and come back later to finished work.

/jules refactor the authentication module to use JWT        

Mistral Vibe

Mistral’s CLI is built on their Devstral and Codestral models. The one-liner install is:

curl -LsSf https://mistral.ai/vibe/install.sh | bash        

Run vibe in the terminal and you’re in. There’s also vibe-acp, which speaks the Agent Client Protocol (ACP) so you can share the same sessions between your IDE and your terminal. You can also fine-tune models directly on your own codebase so the agent learns your internal patterns and conventions.

Kimi CLI

Moonshot AI’s terminal agent flies a bit under the radar, but the shell integration makes it feel like it belongs in your workflow. Setup is quick:

curl -LsSf https://code.kimi.com/install.sh | bash        

Type kimi and you’re ready. The killer feature is the shell mode: hit Ctrl-X to drop into a normal shell, run whatever commands you need, then hit Ctrl-X again to jump right back into the agent conversation.

Qwen Code

Alibaba’s CLI agent started with a similar approach to Gemini CLI but is tuned specifically for their Qwen models. You can install it a couple of ways:

# Via npm
npm install -g @qwen-code/qwen-code

# Or via Homebrew
brew install qwen-code        

Run qwen and it walks you through OAuth authentication. Inside the session you have handy commands like /compress to keep token usage in check, /clear to start fresh, and /stats to see where you stand. It also auto-detects images and can switch to vision models when it makes sense.
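A sketch of the in-session housekeeping commands mentioned above (command names as described here; behavior may differ slightly between versions):

```shell
qwen              # first run walks you through OAuth authentication

# inside the session:
#   /stats        see current token and context usage
#   /compress     summarize history to keep token usage in check
#   /clear        start the conversation fresh
```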

OpenAI Codex CLI

OpenAI’s official terminal tool is included with every ChatGPT plan — Free, Go, Plus, Pro, Business, Edu, and Enterprise. Install and log in like this:

npm install -g @openai/codex
codex login        

Then just run codex to start a session. It handles resuming previous work cleanly and generates multi-step plans before it acts. The model quality is excellent and everything feels fast. It’s a strong choice for quick tasks when you’re already in the OpenAI ecosystem.
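A minimal workflow sketch; `codex resume` is my assumption for reopening previous work — consult `codex --help` for the exact subcommand:

```shell
codex          # start an interactive session in the current repo
codex resume   # reopen a previous session (assumed subcommand)
```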

Open Source Models with Native CLIs

One of the best developments in this space is that both Claude Code and Codex CLI can now run models from almost anywhere. Run GLM in Claude Code? MiniMax in Codex? The easiest way is with Ollama. It provides compatibility layers that let you plug local or open-source models straight into the native CLIs:

  • For Claude Code: ollama launch claude
  • For Codex CLI: ollama launch codex

Once launched, you can pick which model to use — and it’s not just the native provider’s models anymore, but other open-source models as well. This works great with strong open models like Qwen3, GLM-5, MiniMax, Kimi-K2 variants, or whatever you have pulled locally. No extra proxies needed in most cases.

This means you’re not locked into one company’s models. You can run Claude Code with a free local model in the morning and switch to a cloud provider in the afternoon. Same great agent experience, different brains under the hood.
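If your Ollama build predates `ollama launch`, the same swap can often be done by hand: Claude Code reads `ANTHROPIC_BASE_URL` and `ANTHROPIC_AUTH_TOKEN` from the environment, so you can point it at any Anthropic-compatible endpoint. The URL and token below are assumptions for a local setup:

```shell
# Assumes a local server exposing an Anthropic-compatible API on port 11434.
export ANTHROPIC_BASE_URL="http://localhost:11434"
export ANTHROPIC_AUTH_TOKEN="local-placeholder"   # dummy token; local servers usually ignore it
claude
```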

Common Patterns Across All Six

After using all of these tools, a few things stand out across the board.

  • Every CLI provider remembers your context between sessions. The exact flags differ, but being able to pick up right where you left off is now table stakes.
  • They all generate a plan before taking action. You can auto-approve steps when you trust the flow or review each change manually.
  • Repository awareness is strong. They understand your full codebase even when it spans hundreds of files.
  • Safety defaults are sensible. None of them run commands blindly — they show diffs or proposed changes and ask for permission. YOLO mode is there when you need speed.
  • File operations, Git integration, and basic terminal awareness are all present, though the polish varies from tool to tool.
  • Free-tier access is available for most of these tools. A few work only with an API key, but the majority accept either an API key or a subscription-based account login.
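The auto-approve and scripting patterns look roughly the same across the board. With Claude Code, for example, non-interactive print mode plus permission skipping covers the YOLO end of the spectrum — use it only in a sandbox or on a throwaway branch:

```shell
# Non-interactive one-shot run; -p prints the result and exits.
claude -p "explain what the auth module does"

# Full auto-approve ("YOLO" mode): no confirmation prompts.
claude -p "fix the failing tests" --dangerously-skip-permissions
```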

Other CLI and IDE tools for AI

GitHub Copilot CLI is the agent-powered terminal tool included with Copilot subscriptions. The big advantage is native GitHub context — it understands your issues, PRs, and repositories without any extra setup. It has MCP support and can handle autonomous task execution.

Kiro CLI (formerly Amazon Q Developer CLI) is AWS’s AI-powered terminal agent. It offers custom agents, MCP support, automation hooks, and directory-based conversation persistence. If you live in the AWS ecosystem, this one slots right into your existing infrastructure.

Cursor CLI gives you a full terminal agent with access to GPT-5, Claude 4, Gemini, and anything else that speaks the OpenAI-compatible API. It supports scripts, automations, and headless mode for CI/CD.

Sourcegraph Amp is built on Sourcegraph’s code graph for deep repository understanding and supports multiple models, but it requires their platform.

Aider is an open-source CLI that works with any model — Claude, GPT-4, Qwen, DeepSeek, or local models via Ollama. Its Git integration is excellent.

Continue is a highly extensible open-source platform for custom AI assistants (20k+ stars on GitHub) that you can run in your IDE or terminal.

Warp is a terminal replacement with built-in LLM features and multi-model support, though it’s more of a full terminal than a pure coding agent.

More articles by Hamman Samuel