Codesota · Models · CodeLlama 70B · Meta · 1 result · 1 benchmark
Model card

CodeLlama 70B.

Meta · open-source · 70B params
§ 01 · Benchmarks

Every benchmark CodeLlama 70B has a recorded score for.

#  | Benchmark | Area · Task                     | Metric       | Value | Rank   | Date       | Source
01 | SWE-Bench | Computer Code · Code Generation | resolve-rate | 29.8% | #26/32 | 2024-12-01 | source ↗
The Rank column shows this model’s position among all models scored on the same benchmark and metric (total competitors after the slash). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
§ 02 · Strengths by area

How CodeLlama 70B performs, broken down by area.

Computer Code · 1 benchmark · avg rank #26.0
§ 03 · Papers

1 paper with results for CodeLlama 70B.

  1. 2023-10-10 · Computer Code · 1 result

    SWE-bench: Can Language Models Resolve Real-World GitHub Issues?

    Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao et al.
§ 04 · Related models

Other Meta models scored on Codesota.

DeiT-B Distilled · 86M params · 2 results · 1 SOTA
Llama 3 70B · 8 results
Llama 3.1 405B · 6 results
Llama-4-Maverick · 400B total / 17B active (128 experts) params · 6 results
Llama 3.1 70B · 4 results
Code Llama 34B · Unknown params · 2 results
ConvNeXt V2 Huge · 650M params · 2 results
ConvNeXt V2 Base · 89M params · 1 result
§ 05 · Sources & freshness

Where these numbers come from.

swebench-leaderboard · 1 result · 1 of 1 rows marked verified.