Model card
CodeLlama 70B
Meta · open-source · 70B params
§ 01 · Benchmarks
Every benchmark CodeLlama 70B has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | SWE-Bench | Computer Code · Code Generation | resolve-rate | 29.8% | #26 | 2024-12-01 | source ↗ |
The Rank column shows this model's position among all other models scored on the same benchmark and metric (total competitors after the slash). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
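The row ordering above (ascending rank, newest result first within a rank) can be sketched as a two-pass stable sort; the rows here are hypothetical, not taken from the table:

```python
# Hypothetical benchmark rows for illustration only.
rows = [
    {"benchmark": "A", "rank": 3, "date": "2024-01-15"},
    {"benchmark": "B", "rank": 1, "date": "2023-11-02"},
    {"benchmark": "C", "rank": 1, "date": "2024-06-30"},
]

# Python's sort is stable, so sorting by the secondary key first
# (date, newest first) and then by the primary key (rank, ascending)
# yields the combined ordering.
rows.sort(key=lambda r: r["date"], reverse=True)
rows.sort(key=lambda r: r["rank"])

# Resulting order: C (rank 1, newest), B (rank 1), A (rank 3)
```

ISO-8601 date strings (`YYYY-MM-DD`) sort correctly as plain strings, which is why no date parsing is needed here.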
§ 03 · Papers
1 paper with results for CodeLlama 70B.
- 2023-10-10 · Computer Code · 1 result
SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao et al.
§ 04 · Related models
Other Meta models scored on Codesota.
DeiT-B Distilled
86M params · 2 results · 1 SOTA
Llama 3 70B
8 results
Llama 3.1 405B
6 results
Llama-4-Maverick
400B total / 17B active (128 experts) params · 6 results
Llama 3.1 70B
4 results
Code Llama 34B
Unknown params · 2 results
ConvNeXt V2 Huge
650M params · 2 results
ConvNeXt V2 Base
89M params · 1 result
§ 05 · Sources & freshness
Where these numbers come from.
swebench-leaderboard · 1 result
1 of 1 rows marked verified.