Model card

Llama 3.1 70B

Meta · open-source · 4 results · 4 benchmarks
§ 01 · Benchmarks

Every benchmark Llama 3.1 70B has a recorded score for.

#    Benchmark   Area · Task                          Metric     Value   Rank     Source
01   GPQA        Reasoning · Multi-step Reasoning     accuracy   41.7%   #32/33   source ↗
02   MATH        Reasoning · Mathematical Reasoning   accuracy   68.0%   #32/34   source ↗
03   HumanEval   Computer Code · Code Generation      pass@1     80.5%   #36/42   source ↗
04   MMLU        Reasoning · Commonsense Reasoning    accuracy   82.0%   #40/41   source ↗
The Rank column shows this model's position among all models scored on the same benchmark and metric; the number after the slash is the size of that field, and rank #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
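The rank computation itself is simple ordering. A minimal sketch in Python, assuming one score per model on a given benchmark + metric where higher is better; all names and data below are hypothetical and not Codesota's actual pipeline:

```python
# Hypothetical sketch of the Rank column: position of one model among all
# models scored on the same benchmark + metric (higher score = better rank;
# ties are broken arbitrarily here).
def rank_on_benchmark(scores: dict[str, float], model: str) -> str:
    ordered = sorted(scores, key=scores.get, reverse=True)  # best score first
    position = ordered.index(model) + 1                     # 1-based rank
    return f"#{position}/{len(ordered)}"                    # e.g. "#32/33"

# Toy subset of the 33 models with a recorded GPQA accuracy.
gpqa_accuracy = {"model-a": 0.590, "Llama 3.1 70B": 0.417, "model-b": 0.310}
print(rank_on_benchmark(gpqa_accuracy, "Llama 3.1 70B"))    # -> #2/3
```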
§ 02 · Strengths by area

How Llama 3.1 70B performs, aggregated by benchmark area.

Reasoning · 3 benchmarks · avg rank #34.7
Computer Code · 1 benchmark · avg rank #36.0
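These averages follow directly from the § 01 table: Reasoning covers GPQA (#32), MATH (#32), and MMLU (#40), so (32 + 32 + 40) / 3 ≈ 34.7, while Computer Code is HumanEval alone at #36.0. A minimal sketch of that aggregation, assuming only the rank positions above:

```python
from statistics import mean

# Rank positions from the § 01 table, grouped by area (field sizes omitted).
ranks_by_area = {
    "Reasoning":     [32, 32, 40],  # GPQA, MATH, MMLU
    "Computer Code": [36],          # HumanEval
}

for area, ranks in ranks_by_area.items():
    n = len(ranks)
    noun = "benchmark" if n == 1 else "benchmarks"
    print(f"{area}: {n} {noun} · avg rank #{mean(ranks):.1f}")
# Reasoning: 3 benchmarks · avg rank #34.7
# Computer Code: 1 benchmark · avg rank #36.0
```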
§ 04 · Related models

Other Meta models scored on Codesota.

DeiT-B Distilled · 86M params · 2 results · 1 SOTA
Llama 3 70B · 8 results
Llama 3.1 405B · 6 results
Llama-4-Maverick · 400B total / 17B active (128 experts) params · 6 results
Code Llama 34B · unknown params · 2 results
ConvNeXt V2 Huge · 650M params · 2 results
CodeLlama 70B · 70B params · 1 result
ConvNeXt V2 Base · 89M params · 1 result
§ 05 · Sources & freshness

Where these numbers come from.

openai-simple-evals · 4 results · 0 of 4 rows marked verified.