Model card
MiniMax M2.7
Anthropic/OpenAI API
Imported from https://raw.githubusercontent.com/GAIR-NLP/AcademiClaw/main/README.md
§ 02 · Benchmarks
All benchmarks with a recorded score for MiniMax M2.7.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | AcademiClaw | Agentic AI · Task agents | avg-time-sec | 686.00 | #2 | 2026-05-04 | source ↗ |
| 02 | AcademiClaw | Agentic AI · Task agents | avg-tokens-per-task-k | 1663.00 | #2 | 2026-05-04 | source ↗ |
| 03 | AcademiClaw | Agentic AI · Task agents | tool-calls-per-task | 37.00 | #2 | 2026-05-04 | source ↗ |
| 04 | AcademiClaw | Agentic AI · Task agents | safety-score | 86.5% | #4 | 2026-05-04 | source ↗ |
| 05 | AcademiClaw | Agentic AI · Task agents | avg-score | 63.1% | #6 | 2026-05-04 | source ↗ |
| 06 | AcademiClaw | Agentic AI · Task agents | pass | 37.5% | #6 | 2026-05-04 | source ↗ |
The Rank column shows this model’s position among all models scored on the same benchmark and metric; #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
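The sort order described above (rank ascending, then newest result first) can be sketched as a two-key sort. The model names and dates below are illustrative only, not taken from the leaderboard:

```python
from datetime import date

# Hypothetical leaderboard rows: (model, rank, result_date).
rows = [
    ("model-a", 4, date(2026, 5, 4)),
    ("model-b", 2, date(2026, 5, 4)),
    ("model-c", 2, date(2026, 4, 1)),
]

# Primary key: rank ascending. Secondary key: date descending
# (negated ordinal), so ties on rank show the newest result first.
ordered = sorted(rows, key=lambda r: (r[1], -r[2].toordinal()))

print([r[0] for r in ordered])  # rank-2 rows first, newest of those on top
```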
§ 04 · Papers
1 paper with results for MiniMax M2.7.
- 2026-05-04 · Agentic AI · 6 results
  AcademiClaw: When Students Set Challenges for AI Agents
  Junjie Yu, Pengrui Lu, Weiye Si, Hongliang Lu et al.
§ 05 · Related models
Other Anthropic/OpenAI API models scored on Codesota.
§ 06 · Sources & freshness
Where these numbers come from.
1 paper · 6 results. 6 of 6 rows marked verified.