OpenRouter app · rank #3 · Snapshot 2026-04-14
Claude Code
Claude Code is Anthropic's agentic coding tool: it reads your entire codebase, plans and executes changes across files, runs tests, and iterates on failures, all from natural-language prompts.
3.51T
Tokens (30d)
50.7M
Requests
$14.90M
Monthly cost
$4.50
Blended $/M tokens
Ranks in the 93rd percentile by spend and the 90th by raw tokens among the top 30 apps.
Primary driver
Anthropic: Claude Opus 4.6 is Claude Code's top model at 814.2B tokens, 23.2% of total token volume and 57.9% of monthly cost ($8.63M).
Top-20 model breakdown
Ranked by monthly cost. Prices from OpenRouter's live catalog. Blended $/M = ($/M in × 0.72) + ($/M out × 0.28), weighting input and output tokens 72/28.
| # | Model | Vendor | Tokens | % of vol. | $/M in | $/M out | Blended $/M | Monthly cost | % of spend |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Anthropic: Claude Opus 4.6 anthropic/claude-4.6-opus-20260205 | anthropic | 814.2B | 23.2% | $5.00 | $25.00 | $10.60 | $8.63M | 57.9% |
| 2 | Anthropic: Claude Sonnet 4.6 anthropic/claude-4.6-sonnet-20260217 | anthropic | 500.5B | 14.3% | $3.00 | $15.00 | $6.36 | $3.18M | 21.4% |
| 3 | Anthropic: Claude Opus 4.5 anthropic/claude-4.5-opus-20251124 | anthropic | 60.6B | 1.7% | $5.00 | $25.00 | $10.60 | $643K | 4.3% |
| 4 | OpenAI: GPT-5.4 openai/gpt-5.4-20260305 | openai | 101.0B | 2.9% | $2.50 | $15.00 | $6.00 | $606K | 4.1% |
| 5 | Anthropic: Claude Haiku 4.5 anthropic/claude-4.5-haiku-20251001 | anthropic | 230.9B | 6.6% | $1.00 | $5.00 | $2.12 | $489K | 3.3% |
| 6 | Qwen: Qwen3.6 Plus qwen/qwen3.6-plus-04-02 | qwen | 584.1B | 16.6% | $0.33 | $1.95 | $0.78 | $456K | 3.1% |
| 7 | Anthropic: Claude Sonnet 4.5 anthropic/claude-4.5-sonnet-20250929 | anthropic | 51.7B | 1.5% | $3.00 | $15.00 | $6.36 | $329K | 2.2% |
| 8 | Google: Gemini 3.1 Pro Preview Custom Tools google/gemini-3.1-pro-preview-20260219 | google | 46.0B | 1.3% | $2.00 | $12.00 | $4.80 | $221K | 1.5% |
| 9 | StepFun: Step 3.5 Flash stepfun/step-3.5-flash | stepfun | 550.1B | 15.7% | $0.10 | $0.30 | $0.16 | $86K | 0.6% |
| 10 | MoonshotAI: Kimi K2.5 moonshotai/kimi-k2.5-0127 | moonshotai | 60.2B | 1.7% | $0.38 | $1.72 | $0.76 | $46K | 0.3% |
| 11 | Qwen: Qwen3.6 Plus qwen/qwen3.6-plus-preview | qwen | 55.0B | 1.6% | $0.33 | $1.95 | $0.78 | $43K | 0.3% |
| 12 | Z.ai: GLM 5 z-ai/glm-5-20260211 | z-ai | 36.1B | 1.0% | $0.72 | $2.30 | $1.16 | $42K | 0.3% |
| 13 | Z.ai: GLM 5.1 z-ai/glm-5.1-20260406 | z-ai | 17.7B | 0.5% | $0.95 | $3.15 | $1.57 | $28K | 0.2% |
| 14 | Xiaomi: MiMo-V2-Pro xiaomi/mimo-v2-pro-20260318 | xiaomi | 15.3B | 0.4% | $1.00 | $3.00 | $1.56 | $24K | 0.2% |
| 15 | Google: Gemini 3 Flash Preview google/gemini-3-flash-preview-20251217 | google | 18.2B | 0.5% | $0.50 | $3.00 | $1.20 | $22K | 0.1% |
| 16 | NVIDIA: Nemotron 3 Super nvidia/nemotron-3-super-120b-a12b-20230311 | nvidia | 91.2B | 2.6% | $0.10 | $0.50 | $0.21 | $19K | 0.1% |
| 17 | MiniMax: MiniMax M2.7 minimax/minimax-m2.7-20260318 | minimax | 32.6B | 0.9% | $0.30 | $1.20 | $0.55 | $18K | 0.1% |
| 18 | MiniMax: MiniMax M2.5 minimax/minimax-m2.5-20260211 | minimax | 30.8B | 0.9% | $0.12 | $0.99 | $0.36 | $11K | 0.1% |
| 19 | DeepSeek: DeepSeek V3.2 deepseek/deepseek-v3.2-20251201 | deepseek | 18.3B | 0.5% | $0.26 | $0.38 | $0.29 | $5K | 0.0% |
| 20 | openrouter/hunter-alpha openrouter/hunter-alpha | openrouter | 25.0B | 0.7% | — | — | — | — | 0.0% |
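The blended-rate arithmetic behind the table can be sketched in a few lines of Python. This is an illustrative reconstruction, not OpenRouter's code: the 72/28 weights come from the formula stated above the table, and the sample figures are taken from the Opus 4.6 row.

```python
# Blended $/M tokens: weight the input and output prices by the
# 72/28 input/output token split used throughout the table.
IN_WEIGHT, OUT_WEIGHT = 0.72, 0.28

def blended_rate(price_in: float, price_out: float) -> float:
    """Blended price in dollars per million tokens."""
    return price_in * IN_WEIGHT + price_out * OUT_WEIGHT

def monthly_cost(tokens: float, price_in: float, price_out: float) -> float:
    """Approximate monthly cost in dollars for a given token volume."""
    return tokens / 1e6 * blended_rate(price_in, price_out)

# Row 1: Claude Opus 4.6 at $5.00 in / $25.00 out, 814.2B tokens/month.
rate = blended_rate(5.00, 25.00)              # ≈ $10.60 per M tokens
cost = monthly_cost(814.2e9, 5.00, 25.00)     # ≈ $8.63M per month
print(f"${rate:.2f}/M, ${cost / 1e6:.2f}M/month")
```

Applying the same two functions to any other row reproduces its "Blended $/M" and "Monthly cost" columns to within rounding.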
Cross-references
- Full leaderboard: OpenRouter app leaderboard
- Inverted view: Which models AI agents actually use
- Related benchmarks: /agentic
- Source: openrouter.ai/apps/claude-code
Know Claude Code better than we do?
If these numbers don't match what you see from inside this app, tell us. We reply within 48 hours and update the analysis.
Tell us what you found →
✓ No newsletter · ✓ Real humans read this · ✓ 30 seconds to send