Codesota · Models · MiniMax M2.7 — Anthropic/OpenAI · 6 results · 1 benchmark
Model card

MiniMax M2.7

Anthropic/OpenAI API

Imported from https://raw.githubusercontent.com/GAIR-NLP/AcademiClaw/main/README.md

§ 02 · Benchmarks

Every benchmark MiniMax M2.7 has a recorded score for.

#  | Benchmark   | Area · Task              | Metric                 | Value   | Rank | Date       | Source
01 | AcademiClaw | Agentic AI · Task agents | avg-time-sec           | 686.00  | #2/5 | 2026-05-04 | source ↗
02 | AcademiClaw | Agentic AI · Task agents | avg-tokens-per-task-k  | 1663.00 | #2/6 | 2026-05-04 | source ↗
03 | AcademiClaw | Agentic AI · Task agents | tool-calls-per-task    | 37.0%   | #2/6 | 2026-05-04 | source ↗
04 | AcademiClaw | Agentic AI · Task agents | safety-score           | 86.5%   | #4/6 | 2026-05-04 | source ↗
05 | AcademiClaw | Agentic AI · Task agents | avg-score              | 63.1%   | #6/6 | 2026-05-04 | source ↗
06 | AcademiClaw | Agentic AI · Task agents | pass                   | 37.5%   | #6/6 | 2026-05-04 | source ↗
The Rank column shows this model’s position among all models scored on the same benchmark and metric (the total number of competitors appears after the slash); rank #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
§ 03 · Strengths by area

Where MiniMax M2.7 actually performs.

Agentic AI — 1 benchmark · avg rank #3.7
§ 04 · Papers

1 paper with results for MiniMax M2.7.

  1. 2026-05-04 · Agentic AI · 6 results
     AcademiClaw: When Students Set Challenges for AI Agents
     Junjie Yu, Pengrui Lu, Weiye Si, Hongliang Lu et al.
§ 05 · Related models

Other Anthropic/OpenAI models scored on Codesota.

Gemini 3.1 Pro — 7 results · 2 SOTA
Qwen3.5-397B-A17B† — 5 results
§ 06 · Sources & freshness

Where these numbers come from.

Source type: paper · 6 results. 6 of 6 rows marked verified.