OpenRouter app · rank #17 · Snapshot 2026-04-14
SillyTavern
SillyTavern is the LLM frontend for power users: a chat interface that connects to any model and gives deep control over character creation, roleplay, and prompt customization.
51.7B
Tokens (30d)
3.2M
Requests
$669K
Monthly cost
$3.70
Blended $/M tokens
Ranks in the 70th percentile by spend and 63rd by raw tokens among the top 30 apps.
Primary driver
DeepSeek: DeepSeek V3.2 is SillyTavern's top model by token volume at 35.7B tokens — 69.1% of total volume — yet only 1.6% of monthly cost ($10K).
Top-20 model breakdown
Ranked by monthly cost. Prices from OpenRouter's live catalog. Blended $/M = price_in × 72% + price_out × 28%.
| # | Model | Vendor | Tokens | % of vol. | $/M in | $/M out | Blended $/M | Monthly cost | % of spend |
|---|---|---|---|---|---|---|---|---|---|
| 1 | Anthropic: Claude Opus 4.6 anthropic/claude-4.6-opus-20260205 | anthropic | 31.0B | 59.9% | $5.00 | $25.00 | $10.60 | $328K | 49.0% |
| 2 | Anthropic: Claude Sonnet 4.5 anthropic/claude-4.5-sonnet-20250929 | anthropic | 13.2B | 25.6% | $3.00 | $15.00 | $6.36 | $84K | 12.6% |
| 3 | Google: Gemini 3.1 Pro Preview Custom Tools google/gemini-3.1-pro-preview-20260219 | google | 13.0B | 25.1% | $2.00 | $12.00 | $4.80 | $62K | 9.3% |
| 4 | Anthropic: Claude Opus 4.5 anthropic/claude-4.5-opus-20251124 | anthropic | 5.3B | 10.3% | $5.00 | $25.00 | $10.60 | $56K | 8.4% |
| 5 | Anthropic: Claude Sonnet 4.6 anthropic/claude-4.6-sonnet-20260217 | anthropic | 7.2B | 13.9% | $3.00 | $15.00 | $6.36 | $46K | 6.8% |
| 6 | Z.ai: GLM 5 z-ai/glm-5-20260211 | z-ai | 20.5B | 39.8% | $0.72 | $2.30 | $1.16 | $24K | 3.6% |
| 7 | Google: Gemini 2.5 Pro google/gemini-2.5-pro | google | 5.7B | 11.1% | $1.25 | $10.00 | $3.70 | $21K | 3.2% |
| 8 | DeepSeek: DeepSeek V3.2 deepseek/deepseek-v3.2-20251201 | deepseek | 35.7B | 69.1% | $0.26 | $0.38 | $0.29 | $10K | 1.6% |
| 9 | Google: Gemini 3 Flash Preview google/gemini-3-flash-preview-20251217 | google | 7.1B | 13.8% | $0.50 | $3.00 | $1.20 | $9K | 1.3% |
| 10 | Z.ai: GLM 5 Turbo z-ai/glm-5-turbo-20260315 | z-ai | 4.0B | 7.7% | $1.20 | $4.00 | $1.98 | $8K | 1.2% |
| 11 | Z.ai: GLM 5.1 z-ai/glm-5.1-20260406 | z-ai | 3.7B | 7.2% | $0.95 | $3.15 | $1.57 | $6K | 0.9% |
| 12 | Z.ai: GLM 4.7 z-ai/glm-4.7-20251222 | z-ai | 5.6B | 10.8% | $0.39 | $1.75 | $0.77 | $4K | 0.6% |
| 13 | MoonshotAI: Kimi K2.5 moonshotai/kimi-k2.5-0127 | moonshotai | 3.1B | 6.0% | $0.38 | $1.72 | $0.76 | $2K | 0.4% |
| 14 | DeepSeek: DeepSeek V3 0324 deepseek/deepseek-chat-v3-0324 | deepseek | 6.0B | 11.6% | $0.20 | $0.77 | $0.36 | $2K | 0.3% |
| 15 | Arcee AI: Trinity Large Thinking arcee-ai/trinity-large-preview | arcee-ai | 5.1B | 9.9% | $0.22 | $0.85 | $0.40 | $2K | 0.3% |
| 16 | DeepSeek: DeepSeek V3.2 Exp deepseek/deepseek-v3.2-exp | deepseek | 4.5B | 8.6% | $0.27 | $0.41 | $0.31 | $1K | 0.2% |
| 17 | DeepSeek: DeepSeek V3.1 Terminus deepseek/deepseek-v3.1-terminus | deepseek | 2.8B | 5.4% | $0.21 | $0.79 | $0.37 | $1K | 0.2% |
| 18 | DeepSeek: DeepSeek V3.1 deepseek/deepseek-chat-v3.1 | deepseek | 3.1B | 6.0% | $0.15 | $0.75 | $0.32 | $988 | 0.1% |
| 19 | StepFun: Step 3.5 Flash stepfun/step-3.5-flash | stepfun | 4.3B | 8.2% | $0.10 | $0.30 | $0.16 | $665 | 0.1% |
| 20 | openrouter/hunter-alpha openrouter/hunter-alpha | openrouter | 3.5B | 6.8% | — | — | — | — | 0.0% |
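The blended-rate arithmetic behind the table can be sketched as follows. The 72%/28% input/output split and the Claude Opus 4.6 row figures ($5.00 in, $25.00 out, 31.0B tokens) come from the table itself; the function names are illustrative, not part of any OpenRouter API:

```python
IN_SHARE, OUT_SHARE = 0.72, 0.28  # input/output token split from the table note

def blended_per_million(price_in: float, price_out: float) -> float:
    """Blended $/M tokens = price_in x 72% + price_out x 28%."""
    return price_in * IN_SHARE + price_out * OUT_SHARE

def monthly_cost(tokens_billions: float, blended: float) -> float:
    """Monthly cost = tokens (converted to millions) x blended $/M."""
    return tokens_billions * 1_000 * blended

# Row 1: Claude Opus 4.6 at $5.00 in / $25.00 out, 31.0B tokens
rate = blended_per_million(5.00, 25.00)  # -> 10.60
cost = monthly_cost(31.0, rate)          # -> ~328,600, i.e. the $328K in the table
```

Applying the same two steps to any other row reproduces its Blended $/M and Monthly cost columns to within rounding.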
Cross-references
- Full leaderboard: OpenRouter app leaderboard
- Inverted view: Which models AI agents actually use
- Related benchmarks: /agentic
- Source: openrouter.ai/apps/sillytavern
Know SillyTavern better than we do?
If these numbers don't match what you see from inside this app, tell us. We reply within 48 hours and update the analysis.
Tell us what you found →
✓ No newsletter · ✓ Real humans read this · ✓ 30 seconds to send