Codesota · Code Generation · Competition-Level Code Generation
CodeElo · benchmark dataset · EN

CodeElo

CodeElo is a competition-level code generation benchmark built from CodeForces problems and introduced in the paper “CodeElo: Benchmarking Competition-level Code Generation of LLMs with Human-comparable Elo Ratings” (arXiv:2501.01257). The Hugging Face dataset (Qwen/CodeElo) contains CodeForces problem metadata (problem id, url, title, difficulty rating, tags, contest division, time/memory limits, problem statement, IO examples, and notes) for the evaluation set (recent contest problems used by the benchmark). The benchmark standardizes evaluation by submitting solutions to the official CodeForces judge and computing Elo-style ratings for models, enabling direct comparison between LLMs and human competitors.
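The Elo-style rating mentioned above follows the standard Elo update rule. A minimal sketch of that rule (the K-factor and starting ratings here are illustrative assumptions, not the benchmark's exact parameters):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a: float, r_b: float, score_a: float, k: float = 32.0) -> float:
    """Return A's new rating after one result (1 = win, 0.5 = draw, 0 = loss)."""
    return r_a + k * (score_a - expected_score(r_a, r_b))

# Equal ratings: expected score is 0.5, so a win gains K/2 = 16 points.
print(update(1500, 1500, 1.0))  # → 1516.0
```

The benchmark's contribution is anchoring this update in real CodeForces contest outcomes, so a model's rating lands on the same scale as human competitors'.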

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

What a submission needs
  • 01 A public checkpoint or API endpoint
  • 02 A reproduction script with a frozen commit and seed
  • 03 A declared evaluation environment (Python version, dependencies)
  • 04 One row per metric declared by this dataset
  • 05 A contact so we can follow up on discrepancies
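The "frozen commit + seed" requirement above exists so a reviewer can rerun the evaluation and get bit-identical results. A minimal sketch of what that looks like in practice (the commit hash, seed value, and `run_eval` body are hypothetical placeholders, not part of any real submission):

```python
import hashlib
import random

SEED = 1234        # frozen seed, declared in the submission
COMMIT = "abcdef0" # hypothetical frozen commit hash of the eval code

def run_eval(seed: int) -> str:
    """Deterministic stand-in for an evaluation run: seed all RNGs up
    front, then hash the results so two runs can be compared cheaply."""
    random.seed(seed)
    sample = [random.randint(0, 9) for _ in range(5)]
    return hashlib.sha256(str(sample).encode()).hexdigest()

# Same seed → identical digest, so a reviewer can verify the reproduction.
assert run_eval(SEED) == run_eval(SEED)
```

A real script would additionally pin dependency versions (item 03) and emit one result row per declared metric (item 04).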