Model card
Code Llama 34B.
Meta · open-source · 34B params · Llama 2 fine-tuned
A 34B-parameter, code-specialized fine-tune of Llama 2, released August 2023, with strong HumanEval performance.
§ 01 · Benchmarks
Every benchmark Code Llama 34B has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | MBPP | Computer Code · Code Generation | pass@1 | 62.6% | #18 | — | source ↗ |
| 02 | HumanEval | Computer Code · Code Generation | pass@1 | 62.4% | #40 | — | source ↗ |
The Rank column shows this model’s position among all models scored on the same benchmark and metric. A #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
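Both rows report pass@1, which is commonly computed with the unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021) rather than a single greedy sample. A minimal sketch, assuming n samples are generated per problem and c of them pass the tests:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of
    k samples, drawn without replacement from n generations of which
    c are correct, passes the unit tests."""
    if n - c < k:
        # Fewer than k incorrect samples exist, so any draw of k
        # must contain at least one correct sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical illustration: 200 samples per problem, 125 correct.
print(pass_at_k(200, 125, 1))  # → 0.625, i.e. 62.5% pass@1
```

The per-problem estimates are then averaged over the benchmark's problems to give the single percentage shown in the table.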
§ 02 · Strengths by area
Where Code Llama 34B actually performs.
§ 04 · Related models
Other Meta models scored on Codesota.
§ 05 · Sources & freshness
Where these numbers come from.
arxiv · 2 results · 2 of 2 rows marked verified.