Which models do AI agents actually use?
OpenRouter publishes the top-20 model mix for every app that routes through it. We invert that data: for every model in the catalog, we now know exactly how many apps rely on it, how much token volume they drive through it, and — matched against the live pricing catalog — how much they spend on it. Four rankings below.
- 99 distinct models in play
- 30 apps analysed
- 35.56T total tokens (30d)
- $74.62M total cost
Four different winners
- Most dollars: Anthropic: Claude Opus 4.6 — $25.10M/month across 24 apps. Premium pricing compounds even at modest volume.
- Most tokens: Xiaomi: MiMo-V2-Pro — 5.49T through 15 apps. High-volume cheap models absorb the long tail.
- Most adopted: Qwen: Qwen3.6 Plus — shows up in 27 of 30 apps. "Everyone tries it at least a little" ≠ "anyone uses it primarily".
- Most #1 slots: MiniMax: MiniMax M2.5 — the top model in 4 of 30 apps. This is the closest proxy for "what agents actually lean on when it matters".
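The compounding effect in the first bullet is just arithmetic on the figures in the tables: blended $/M is monthly cost divided by tokens-in-millions. A quick sanity check (numbers taken from the rankings; the helper name is ours):

```typescript
// Blended $/M = monthly cost / (tokens / 1e6), using figures from the tables.
// Opus 4.6 carries fewer tokens than MiMo-V2-Pro yet costs roughly 7x as much per token.
const blendedPerM = (costUsd: number, tokens: number): number => costUsd / (tokens / 1e6);

console.log(blendedPerM(25.1e6, 2.37e12).toFixed(2)); // Claude Opus 4.6: ~10.59 $/M
console.log(blendedPerM(8.57e6, 5.49e12).toFixed(2)); // MiMo-V2-Pro:     ~1.56 $/M
```

That ~7x per-token gap is why Opus tops the dollar ranking while moving less than half of MiMo's token volume.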
Grouped by vendor
Same data, rolled up one level: each vendor's models summed into one row. Blended $/M is usage-weighted, so premium vendors show their true effective price per token.
| # | Vendor | Models | Apps | Tokens | Monthly cost | Avg blended $/M | #1 slots |
|---|---|---|---|---|---|---|---|
| 1 | anthropic | 8 | 27 | 5.98T | $46.55M | $7.78 | 6 |
| 2 | xiaomi | 3 | 16 | 6.19T | $9.00M | $1.45 | 2 |
| 3 | z-ai | 6 | 24 | 3.20T | $6.01M | $1.88 | 0 |
| 4 | openai | 21 | 24 | 747.5B | $3.90M | $5.22 | 4 |
| 5 | google | 13 | 29 | 2.08T | $2.42M | $1.16 | 3 |
| 6 | qwen | 13 | 21 | 3.05T | $2.35M | $0.77 | 4 |
| 7 | minimax | 2 | 20 | 5.41T | $2.29M | $0.42 | 4 |
| 8 | stepfun | 1 | 16 | 3.99T | $623K | $0.16 | 1 |
| 9 | moonshotai | 3 | 20 | 643.7B | $491K | $0.76 | 0 |
| 10 | deepseek | 6 | 24 | 1.46T | $453K | $0.31 | 4 |
| 11 | nvidia | 2 | 15 | 1.17T | $248K | $0.21 | 1 |
| 12 | arcee-ai | 1 | 11 | 604.7B | $240K | $0.40 | 0 |
| 13 | x-ai | 3 | 6 | 49.4B | $34K | $0.69 | 0 |
| 14 | amazon | 1 | 1 | 10.5B | $10K | $0.92 | 0 |
| 15 | mistralai | 3 | 2 | 196.3B | $5K | $0.03 | 1 |
| 16 | sao10k | 2 | 1 | 9.3B | $645 | $0.07 | 0 |
| 17 | aion-labs | 1 | 1 | 533.7M | $547 | $1.02 | 0 |
| 18 | microsoft | 1 | 1 | 864.1M | $536 | $0.62 | 0 |
| 19 | anthracite-org | 1 | 1 | 38.7M | $138 | $3.56 | 0 |
| 20 | thedrummer | 2 | 1 | 236.7M | $85 | $0.36 | 0 |
| 21 | nousresearch | 2 | 1 | 185.3M | $37 | $0.20 | 0 |
| 22 | kwaipilot | 1 | 1 | 58.2M | $32 | $0.55 | 0 |
| 23 | gryphe | 1 | 1 | 47.3M | $3 | $0.06 | 0 |
| 24 | openrouter | 2 | 13 | 744.8B | $0 | — | 0 |
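The blended $/M column above is reproducible from the per-model rows: sum cost and tokens first, then divide, so each model is weighted by its real usage rather than its list price. A minimal sketch (row shape is illustrative, not the real data format):

```typescript
// Vendor blended $/M: aggregate cost and tokens across the vendor's models first,
// then divide -- this weights each model by actual usage instead of averaging list prices.
type ModelRow = { vendor: string; tokens: number; costUsd: number };

function vendorBlended(rows: ModelRow[]): Map<string, number> {
  const agg = new Map<string, { tokens: number; cost: number }>();
  for (const r of rows) {
    const a = agg.get(r.vendor) ?? { tokens: 0, cost: 0 };
    a.tokens += r.tokens;
    a.cost += r.costUsd;
    agg.set(r.vendor, a);
  }
  return new Map(
    [...agg].map(([v, a]) => [v, a.cost / (a.tokens / 1e6)] as [string, number])
  );
}

// Two Anthropic rows from the cost ranking: Opus 4.6 and Sonnet 4.6.
const blended = vendorBlended([
  { vendor: "anthropic", tokens: 2.37e12, costUsd: 25.1e6 },
  { vendor: "anthropic", tokens: 2.62e12, costUsd: 16.67e6 },
]);
```

Over just these two rows the blend comes out near $8.37/M, a bit above the table's $7.78 because the vendor's cheaper models (Haiku, older Sonnets) are excluded here.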
Top 30 by monthly cost
Where the dollars actually go. Summed across every app that uses each model.
| # | Model | Vendor | $/M in | $/M out | Tokens | Monthly cost | Apps | #1 in | Top-3 in |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Anthropic: Claude Opus 4.6 | anthropic | $5.00 | $25.00 | 2.37T | $25.10M | 24 | 3 | 13 |
| 2 | Anthropic: Claude Sonnet 4.6 | anthropic | $3.00 | $15.00 | 2.62T | $16.67M | 24 | 2 | 9 |
| 3 | Xiaomi: MiMo-V2-Pro | xiaomi | $1.00 | $3.00 | 5.49T | $8.57M | 15 | 2 | 5 |
| 4 | Z.ai: GLM 5 Turbo | z-ai | $1.20 | $4.00 | 2.84T | $5.64M | 7 | 0 | 2 |
| 5 | Anthropic: Claude Sonnet 4.5 | anthropic | $3.00 | $15.00 | 460.6B | $2.93M | 18 | 1 | 4 |
| 6 | OpenAI: GPT-5.4 | openai | $2.50 | $15.00 | 478.4B | $2.87M | 17 | 2 | 2 |
| 7 | Qwen: Qwen3.6 Plus | qwen | $0.33 | $1.95 | 2.98T | $2.33M | 27 | 4 | 8 |
| 8 | MiniMax: MiniMax M2.5 | minimax | $0.12 | $0.99 | 3.70T | $1.34M | 15 | 4 | 5 |
| 9 | Google: Gemini 3 Flash Preview | google | $0.50 | $3.00 | 994.8B | $1.19M | 24 | 3 | 6 |
| 10 | MiniMax: MiniMax M2.7 | minimax | $0.30 | $1.20 | 1.72T | $947K | 19 | 0 | 4 |
| 11 | Anthropic: Claude Haiku 4.5 | anthropic | $1.00 | $5.00 | 444.5B | $942K | 13 | 0 | 2 |
| 12 | OpenAI: GPT-5.3-Codex | openai | $1.75 | $14.00 | 172.2B | $892K | 10 | 1 | 2 |
| 13 | Anthropic: Claude Opus 4.5 | anthropic | $5.00 | $25.00 | 83.1B | $881K | 10 | 0 | 0 |
| 14 | Google: Gemini 3.1 Pro Preview Custom Tools | google | $2.00 | $12.00 | 141.9B | $681K | 22 | 0 | 0 |
| 15 | StepFun: Step 3.5 Flash | stepfun | $0.10 | $0.30 | 3.99T | $623K | 16 | 1 | 6 |
| 16 | MoonshotAI: Kimi K2.5 | moonshotai | $0.38 | $1.72 | 629.5B | $477K | 19 | 0 | 0 |
| 17 | Xiaomi: MiMo-V2-Omni | xiaomi | $0.40 | $2.00 | 466.0B | $395K | 3 | 0 | 1 |
| 18 | DeepSeek: DeepSeek V3.2 | deepseek | $0.26 | $0.38 | 1.26T | $371K | 24 | 4 | 5 |
| 19 | NVIDIA: Nemotron 3 Super | nvidia | $0.10 | $0.50 | 1.17T | $248K | 15 | 1 | 1 |
| 20 | Arcee AI: Trinity Large Thinking | arcee-ai | $0.22 | $0.85 | 604.7B | $240K | 12 | 0 | 0 |
| 21 | Z.ai: GLM 5 | z-ai | $0.72 | $2.30 | 197.8B | $230K | 16 | 0 | 1 |
| 22 | Google: Gemini 2.5 Flash | google | $0.30 | $2.50 | 243.7B | $223K | 10 | 0 | 1 |
| 23 | Google: Gemini 2.5 Pro | google | $1.25 | $10.00 | 46.8B | $173K | 8 | 0 | 0 |
| 24 | Google: Gemini 2.5 Flash Lite | google | $0.10 | $0.40 | 591.4B | $109K | 3 | 0 | 2 |
| 25 | Z.ai: GLM 5.1 | z-ai | $0.95 | $3.15 | 57.7B | $90K | 13 | 0 | 0 |
| 26 | OpenAI: GPT-5.4 Mini | openai | $0.75 | $4.50 | 33.6B | $60K | 6 | 1 | 1 |
| 27 | OpenAI: GPT-4.1 Mini | openai | $0.40 | $1.60 | 49.5B | $36K | 3 | 0 | 0 |
| 28 | Xiaomi: MiMo-V2-Flash | xiaomi | $0.09 | $0.29 | 234.8B | $34K | 6 | 0 | 0 |
| 29 | DeepSeek: DeepSeek V3 0324 | deepseek | $0.20 | $0.77 | 94.6B | $34K | 4 | 0 | 2 |
| 30 | Z.ai: GLM 4.7 | z-ai | $0.39 | $1.75 | 42.2B | $33K | 4 | 0 | 0 |
Top 30 by raw token volume
How much work (in tokens) each model carries. Cheap models lead here.
| # | Model | Vendor | $/M in | $/M out | Tokens | Monthly cost | Apps | #1 in | Top-3 in |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Xiaomi: MiMo-V2-Pro | xiaomi | $1.00 | $3.00 | 5.49T | $8.57M | 15 | 2 | 5 |
| 2 | StepFun: Step 3.5 Flash | stepfun | $0.10 | $0.30 | 3.99T | $623K | 16 | 1 | 6 |
| 3 | MiniMax: MiniMax M2.5 | minimax | $0.12 | $0.99 | 3.70T | $1.34M | 15 | 4 | 5 |
| 4 | Qwen: Qwen3.6 Plus | qwen | $0.33 | $1.95 | 2.98T | $2.33M | 27 | 4 | 8 |
| 5 | Z.ai: GLM 5 Turbo | z-ai | $1.20 | $4.00 | 2.84T | $5.64M | 7 | 0 | 2 |
| 6 | Anthropic: Claude Sonnet 4.6 | anthropic | $3.00 | $15.00 | 2.62T | $16.67M | 24 | 2 | 9 |
| 7 | Anthropic: Claude Opus 4.6 | anthropic | $5.00 | $25.00 | 2.37T | $25.10M | 24 | 3 | 13 |
| 8 | MiniMax: MiniMax M2.7 | minimax | $0.30 | $1.20 | 1.72T | $947K | 19 | 0 | 4 |
| 9 | DeepSeek: DeepSeek V3.2 | deepseek | $0.26 | $0.38 | 1.26T | $371K | 24 | 4 | 5 |
| 10 | NVIDIA: Nemotron 3 Super | nvidia | $0.10 | $0.50 | 1.17T | $248K | 15 | 1 | 1 |
| 11 | Google: Gemini 3 Flash Preview | google | $0.50 | $3.00 | 994.8B | $1.19M | 24 | 3 | 6 |
| 12 | openrouter/hunter-alpha | openrouter | — | — | 741.6B | $0 | 13 | 0 | 1 |
| 13 | MoonshotAI: Kimi K2.5 | moonshotai | $0.38 | $1.72 | 629.5B | $477K | 19 | 0 | 0 |
| 14 | Arcee AI: Trinity Large Thinking | arcee-ai | $0.22 | $0.85 | 604.7B | $240K | 12 | 0 | 0 |
| 15 | Google: Gemini 2.5 Flash Lite | google | $0.10 | $0.40 | 591.4B | $109K | 3 | 0 | 2 |
| 16 | OpenAI: GPT-5.4 | openai | $2.50 | $15.00 | 478.4B | $2.87M | 17 | 2 | 2 |
| 17 | Xiaomi: MiMo-V2-Omni | xiaomi | $0.40 | $2.00 | 466.0B | $395K | 3 | 0 | 1 |
| 18 | Anthropic: Claude Sonnet 4.5 | anthropic | $3.00 | $15.00 | 460.6B | $2.93M | 18 | 1 | 4 |
| 19 | Anthropic: Claude Haiku 4.5 | anthropic | $1.00 | $5.00 | 444.5B | $942K | 13 | 0 | 2 |
| 20 | Google: Gemini 2.5 Flash | google | $0.30 | $2.50 | 243.7B | $223K | 10 | 0 | 1 |
| 21 | Xiaomi: MiMo-V2-Flash | xiaomi | $0.09 | $0.29 | 234.8B | $34K | 6 | 0 | 0 |
| 22 | Z.ai: GLM 5 | z-ai | $0.72 | $2.30 | 197.8B | $230K | 16 | 0 | 1 |
| 23 | Mistral: Mistral Nemo | mistralai | $0.02 | $0.04 | 195.7B | $5K | 1 | 1 | 1 |
| 24 | OpenAI: GPT-5.3-Codex | openai | $1.75 | $14.00 | 172.2B | $892K | 10 | 1 | 2 |
| 25 | Google: Gemini 3.1 Pro Preview Custom Tools | google | $2.00 | $12.00 | 141.9B | $681K | 22 | 0 | 0 |
| 26 | DeepSeek: DeepSeek V3 0324 | deepseek | $0.20 | $0.77 | 94.6B | $34K | 4 | 0 | 2 |
| 27 | Anthropic: Claude Opus 4.5 | anthropic | $5.00 | $25.00 | 83.1B | $881K | 10 | 0 | 0 |
| 28 | Z.ai: GLM 4.5 Air | z-ai | $0.13 | $0.85 | 60.6B | $20K | 4 | 0 | 0 |
| 29 | Z.ai: GLM 5.1 | z-ai | $0.95 | $3.15 | 57.7B | $90K | 13 | 0 | 0 |
| 30 | OpenAI: GPT-4.1 Mini | openai | $0.40 | $1.60 | 49.5B | $36K | 3 | 0 | 0 |
Top 30 by number of apps using the model
Breadth of adoption — models that appear in the top-20 mix of the most apps.
| # | Model | Vendor | $/M in | $/M out | Tokens | Monthly cost | Apps | #1 in | Top-3 in |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Qwen: Qwen3.6 Plus | qwen | $0.33 | $1.95 | 2.98T | $2.33M | 27 | 4 | 8 |
| 2 | Anthropic: Claude Sonnet 4.6 | anthropic | $3.00 | $15.00 | 2.62T | $16.67M | 24 | 2 | 9 |
| 3 | Anthropic: Claude Opus 4.6 | anthropic | $5.00 | $25.00 | 2.37T | $25.10M | 24 | 3 | 13 |
| 4 | DeepSeek: DeepSeek V3.2 | deepseek | $0.26 | $0.38 | 1.26T | $371K | 24 | 4 | 5 |
| 5 | Google: Gemini 3 Flash Preview | google | $0.50 | $3.00 | 994.8B | $1.19M | 24 | 3 | 6 |
| 6 | Google: Gemini 3.1 Pro Preview Custom Tools | google | $2.00 | $12.00 | 141.9B | $681K | 22 | 0 | 0 |
| 7 | MiniMax: MiniMax M2.7 | minimax | $0.30 | $1.20 | 1.72T | $947K | 19 | 0 | 4 |
| 8 | MoonshotAI: Kimi K2.5 | moonshotai | $0.38 | $1.72 | 629.5B | $477K | 19 | 0 | 0 |
| 9 | Anthropic: Claude Sonnet 4.5 | anthropic | $3.00 | $15.00 | 460.6B | $2.93M | 18 | 1 | 4 |
| 10 | OpenAI: GPT-5.4 | openai | $2.50 | $15.00 | 478.4B | $2.87M | 17 | 2 | 2 |
| 11 | StepFun: Step 3.5 Flash | stepfun | $0.10 | $0.30 | 3.99T | $623K | 16 | 1 | 6 |
| 12 | Z.ai: GLM 5 | z-ai | $0.72 | $2.30 | 197.8B | $230K | 16 | 0 | 1 |
| 13 | Xiaomi: MiMo-V2-Pro | xiaomi | $1.00 | $3.00 | 5.49T | $8.57M | 15 | 2 | 5 |
| 14 | MiniMax: MiniMax M2.5 | minimax | $0.12 | $0.99 | 3.70T | $1.34M | 15 | 4 | 5 |
| 15 | NVIDIA: Nemotron 3 Super | nvidia | $0.10 | $0.50 | 1.17T | $248K | 15 | 1 | 1 |
| 16 | openrouter/hunter-alpha | openrouter | — | — | 741.6B | $0 | 13 | 0 | 1 |
| 17 | Anthropic: Claude Haiku 4.5 | anthropic | $1.00 | $5.00 | 444.5B | $942K | 13 | 0 | 2 |
| 18 | Z.ai: GLM 5.1 | z-ai | $0.95 | $3.15 | 57.7B | $90K | 13 | 0 | 0 |
| 19 | Arcee AI: Trinity Large Thinking | arcee-ai | $0.22 | $0.85 | 604.7B | $240K | 12 | 0 | 0 |
| 20 | Google: Gemini 2.5 Flash | google | $0.30 | $2.50 | 243.7B | $223K | 10 | 0 | 1 |
| 21 | OpenAI: GPT-5.3-Codex | openai | $1.75 | $14.00 | 172.2B | $892K | 10 | 1 | 2 |
| 22 | Anthropic: Claude Opus 4.5 | anthropic | $5.00 | $25.00 | 83.1B | $881K | 10 | 0 | 0 |
| 23 | Google: Gemini 2.5 Pro | google | $1.25 | $10.00 | 46.8B | $173K | 8 | 0 | 0 |
| 24 | Google: Gemini 3.1 Flash Lite Preview | google | $0.25 | $1.50 | 41.9B | $25K | 8 | 0 | 0 |
| 25 | Z.ai: GLM 5 Turbo | z-ai | $1.20 | $4.00 | 2.84T | $5.64M | 7 | 0 | 2 |
| 26 | Xiaomi: MiMo-V2-Flash | xiaomi | $0.09 | $0.29 | 234.8B | $34K | 6 | 0 | 0 |
| 27 | OpenAI: GPT-5.4 Mini | openai | $0.75 | $4.50 | 33.6B | $60K | 6 | 1 | 1 |
| 28 | Google: Nano Banana Pro (Gemini 3 Pro Image Preview) | google | $2.00 | $12.00 | 1.8B | $9K | 6 | 0 | 1 |
| 29 | xAI: Grok 4.1 Fast | x-ai | $0.20 | $0.50 | 32.3B | $9K | 5 | 0 | 0 |
| 30 | DeepSeek: DeepSeek V3.2 Exp | deepseek | $0.27 | $0.41 | 28.5B | $9K | 5 | 0 | 0 |
Models that rank #1 in an app's mix (14 models)
Strongest "primary driver" signal. Being #1 in even one app beats showing up in a long tail.
| # | Model | Vendor | $/M in | $/M out | Tokens | Monthly cost | Apps | #1 in | Top-3 in |
|---|---|---|---|---|---|---|---|---|---|
| 1 | MiniMax: MiniMax M2.5 | minimax | $0.12 | $0.99 | 3.70T | $1.34M | 15 | 4 | 5 |
| 2 | Qwen: Qwen3.6 Plus | qwen | $0.33 | $1.95 | 2.98T | $2.33M | 27 | 4 | 8 |
| 3 | DeepSeek: DeepSeek V3.2 | deepseek | $0.26 | $0.38 | 1.26T | $371K | 24 | 4 | 5 |
| 4 | Anthropic: Claude Opus 4.6 | anthropic | $5.00 | $25.00 | 2.37T | $25.10M | 24 | 3 | 13 |
| 5 | Google: Gemini 3 Flash Preview | google | $0.50 | $3.00 | 994.8B | $1.19M | 24 | 3 | 6 |
| 6 | Xiaomi: MiMo-V2-Pro | xiaomi | $1.00 | $3.00 | 5.49T | $8.57M | 15 | 2 | 5 |
| 7 | Anthropic: Claude Sonnet 4.6 | anthropic | $3.00 | $15.00 | 2.62T | $16.67M | 24 | 2 | 9 |
| 8 | OpenAI: GPT-5.4 | openai | $2.50 | $15.00 | 478.4B | $2.87M | 17 | 2 | 2 |
| 9 | StepFun: Step 3.5 Flash | stepfun | $0.10 | $0.30 | 3.99T | $623K | 16 | 1 | 6 |
| 10 | NVIDIA: Nemotron 3 Super | nvidia | $0.10 | $0.50 | 1.17T | $248K | 15 | 1 | 1 |
| 11 | Anthropic: Claude Sonnet 4.5 | anthropic | $3.00 | $15.00 | 460.6B | $2.93M | 18 | 1 | 4 |
| 12 | Mistral: Mistral Nemo | mistralai | $0.02 | $0.04 | 195.7B | $5K | 1 | 1 | 1 |
| 13 | OpenAI: GPT-5.3-Codex | openai | $1.75 | $14.00 | 172.2B | $892K | 10 | 1 | 2 |
| 14 | OpenAI: GPT-5.4 Mini | openai | $0.75 | $4.50 | 33.6B | $60K | 6 | 1 | 1 |
Methodology
- Built by inverting the app leaderboard JSON — for every (app, model) pair in the top-20 slice, aggregate by model.
- Permaslugs with date suffixes (e.g. `anthropic/claude-4.6-opus-20260205`) are matched back to the base catalog ID via vendor-scoped token-bag overlap so dollar totals aren't missed.
- "#1 in N apps" means the model holds the top token slot for that app's 30-day window. It does not mean the model is objectively best — just that this particular app routes the most tokens to it.
- Free models (Nemotron 3 Super, MiniMax M2.5 free tier) contribute to the token and adoption rankings but add $0 to the spend ranking. Stealth/alpha lanes without listed pricing are counted in adoption but not in dollars.
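The inversion step in the first bullet can be sketched like this (field names are illustrative, not the real leaderboard JSON shape):

```typescript
// Invert app -> model rankings into per-model aggregates.
// Input: each app's top-20 mix, ordered by tokens (index 0 = the app's #1 model).
type AppMix = { app: string; models: { slug: string; tokens: number }[] };
type ModelAgg = { tokens: number; apps: Set<string>; topSlots: number };

function invert(apps: AppMix[]): Map<string, ModelAgg> {
  const byModel = new Map<string, ModelAgg>();
  for (const { app, models } of apps) {
    models.forEach(({ slug, tokens }, rank) => {
      const agg =
        byModel.get(slug) ?? { tokens: 0, apps: new Set<string>(), topSlots: 0 };
      agg.tokens += tokens;
      agg.apps.add(app);
      if (rank === 0) agg.topSlots += 1; // model holds this app's #1 token slot
      byModel.set(slug, agg);
    });
  }
  return byModel;
}
```

Summing token counts, counting distinct apps, and tallying rank-0 slots per model yields the "Tokens", "Apps", and "#1 in" columns above; joining against the pricing catalog then produces the dollar columns.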
`npx tsx scripts/fetch-openrouter-apps.ts`

Model getting ignored?
Think a model deserves a closer look or isn't priced correctly? Tell us and we'll re-run the match against our catalog.