A multilingual benchmark (Python, TypeScript, Java, C#) for code completion that requires understanding cross-file context. 1,000 examples per language, drawn from GitHub repositories. The primary metric is Exact Match.
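As a rough illustration, Exact Match scoring can be sketched as below. This is a minimal sketch, not the official harness: the function names are hypothetical, and the real evaluation may normalize whitespace or comments differently before comparing.

```python
def exact_match(prediction: str, reference: str) -> bool:
    # Hypothetical scorer: compare completions after stripping
    # surrounding whitespace. The official harness may apply
    # additional normalization before the comparison.
    return prediction.strip() == reference.strip()


def exact_match_rate(predictions: list[str], references: list[str]) -> float:
    # Percentage of examples whose completion matches exactly,
    # matching the 0-100 scale used in the leaderboard table.
    assert len(predictions) == len(references)
    hits = sum(exact_match(p, r) for p, r in zip(predictions, references))
    return 100.0 * hits / len(references)
```

For example, `exact_match_rate(["return x", "y += 1"], ["return x", "y -= 1"])` scores 50.0 under this sketch.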
6 results indexed across 1 metric. The shaded row marks the current SOTA; ties are broken by submission date.
| # | Model | Org | Submitted | Paper / code | exact-match |
|---|---|---|---|---|---|
| 01 | Claude Sonnet 4 (API) | Anthropic | Mar 2026 | official-model-card | 44.50 |
| 02 | Qwen2.5-Coder 32B (OSS) | Alibaba | Sep 2024 | Qwen2.5-Coder Technical Report · code | 43.70 |
| 03 | DeepSeek-Coder-V2-Instruct (OSS) | DeepSeek | Jun 2024 | DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source… · code | 41.30 |
| 04 | GPT-4o (API) | OpenAI | Oct 2023 | CrossCodeEval: A Diverse and Multilingual Benchmark for … · code | 38.20 |
| 05 | Codestral 22B (API) | Mistral | May 2024 | official-blog | 35.60 |
| 06 | StarCoder2 15B (OSS) | BigCode | Feb 2024 | StarCoder2 and The Stack v2: The Next Generation · code | 32.10 |
Each entry below marks a model that broke the previous record on exact-match; higher scores win. Intermediate submissions remain in the leaderboard above; only SOTA-setting entries are re-listed here.
Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, where available, the reference implementation.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it sets a new SOTA, annotate the step on the progress chart with your name.