LiveCodeBench is a holistic, contamination-free benchmark for evaluating Large Language Models (LLMs) on code. It continuously collects new problems from periodic contests on platforms such as LeetCode, AtCoder, and Codeforces, and evaluates Code LLMs over time across several code-related scenarios, including code generation, code execution, and test output prediction.
Pass@1 is the evaluation metric reported for LiveCodeBench. Codesota tracks published model scores on this metric so readers can compare state-of-the-art results across sources and model families.

Higher is better.
| Rank | Model | Trust | Pass@1 (%) | Year | Source |
|---|---|---|---|---|---|
| 01 | Qwen2.5-72B-Instruct | paper | 55.5 | N/A | Source ↗ |
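For context, Pass@1 is the k=1 case of the standard unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021). The sketch below is a minimal illustration of that estimator, not LiveCodeBench's own harness; the function name and the sample values are illustrative.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total samples generated per problem
    c: samples that pass all tests
    k: number of samples the metric assumes you get to submit
    """
    if n - c < k:
        # Fewer failing samples than k: at least one success is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k=1 the estimator reduces to the empirical pass rate c/n,
# e.g. 3 passing samples out of 10 gives pass@1 = 0.3.
print(pass_at_k(10, 3, 1))
```

A benchmark score like 55.5 is this quantity averaged over all problems and expressed as a percentage.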