
AutoLogi: Automated Logic Puzzle Benchmark.

AutoLogi is a bilingual benchmark of automatically generated, open-ended logic puzzles for evaluating the logical reasoning abilities of large language models. Instances are synthesized by a programmatic generator with program-based verification, which ensures each puzzle is solvable and correctly specified, and the generation process supports controllable difficulty levels to better distinguish model capabilities. The dataset was published alongside the paper "AutoLogi: Automated Generation of Logic Puzzles for Evaluating Reasoning Abilities of Large Language Models" (arXiv:2502.16906). The Hugging Face release (qzhu/AutoLogi) is licensed under Apache-2.0 and contains on the order of 1K–10K examples. The benchmark was used in the post-training evaluations (Table 11) of Qwen3.
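The program-based verification described above can be sketched as follows. This is a minimal illustration, not the AutoLogi implementation: the puzzle, entity names, and constraints are invented, and the actual generator works over richer constraint formats. The idea is that a generated puzzle is only kept if a checker program confirms it admits a valid solution.

```python
from itertools import permutations

# Hypothetical sketch of program-based puzzle verification.
# Puzzle (invented): order three people in a line such that
#   (1) Alice is not first, and (2) Bob stands left of Carol.
PEOPLE = ["Alice", "Bob", "Carol"]

def satisfies(order):
    """Check one candidate arrangement against the puzzle's constraints."""
    return order[0] != "Alice" and order.index("Bob") < order.index("Carol")

def solutions():
    """Enumerate all arrangements and keep those satisfying every constraint."""
    return [p for p in permutations(PEOPLE) if satisfies(p)]

sols = solutions()
# A generator would accept this puzzle only if sols is non-empty,
# i.e. the puzzle is verifiably solvable.
print(len(sols))  # → 2
```

Exhaustive enumeration is feasible here because the search space is tiny; the point is that solvability is certified by executing a program rather than trusting the generator.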

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with frozen commit + seed
  • 03 · Declared evaluation environment (Python, deps)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
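A submission's reproduction script might look like the skeleton below. Everything here is a placeholder sketch: the checkpoint name, commit hash, and metric name are invented, and the actual evaluation logic is elided; the point is that the script freezes its seed, records the evaluation-code commit, and declares its environment.

```python
import platform
import random

SEED = 1234          # frozen seed, declared in the submission (placeholder)
COMMIT = "abc1234"   # frozen commit of the evaluation code (placeholder)

random.seed(SEED)    # make any sampling in the run deterministic

def evaluate(checkpoint: str) -> dict:
    """Placeholder: run `checkpoint` on the benchmark and return one
    row per metric declared by this dataset."""
    return {"accuracy": 0.0}

if __name__ == "__main__":
    # Declare the evaluation environment alongside the result.
    print("python", platform.python_version())
    print("commit", COMMIT, "seed", SEED)
    print(evaluate("your-public-checkpoint"))
```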