CodeElo is a competition-level code generation benchmark built from CodeForces problems, introduced in the paper “CodeElo: Benchmarking Competition-level Code Generation of LLMs with Human-comparable Elo Ratings” (arXiv:2501.01257). The Hugging Face dataset (Qwen/CodeElo) contains CodeForces problem metadata for the evaluation set of recent contest problems used by the benchmark: problem id, URL, title, difficulty rating, tags, contest division, time/memory limits, problem statement, input/output examples, and notes. The benchmark standardizes evaluation by submitting model-generated solutions to the official CodeForces judge and computing Elo-style ratings, enabling direct comparison between LLMs and human competitors.
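The problem metadata can be inspected directly with the Hugging Face `datasets` library. Below is a minimal sketch; the split layout and field names are whatever the dataset actually ships, and nothing here assumes a specific schema beyond the fields described above.

```python
# Minimal sketch: load the CodeElo problem metadata and inspect one entry.
# Split and column names are discovered at runtime, not assumed.
from datasets import load_dataset

ds = load_dataset("Qwen/CodeElo")   # downloads the problem metadata
print(ds)                           # shows the available splits and columns

# Take the first problem from the first available split and list its fields
# (expected to include things like title, rating, tags, and limits).
split = next(iter(ds.values()))
print(split[0].keys())
```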
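For context on what “Elo-style ratings” means, the sketch below shows the standard Elo expected-score formula. This is an illustration only; the benchmark's actual rating calculation follows the CodeForces rating system described in the paper and may differ in detail.

```python
# Standard Elo expected score: the probability that a player rated
# `rating_a` outperforms a player rated `rating_b`. Illustrative only;
# CodeElo's actual procedure follows the CodeForces rating system.
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

# Example: a 1600-rated model vs. a 1400-rated competitor.
print(elo_expected_score(1600, 1400))  # ~0.76
```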
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.