AutoLogi is a bilingual benchmark of automatically generated, open-ended logic puzzles designed to evaluate the logical reasoning abilities of large language models. Instances are synthesized by a programmatic generator with program-based verification to ensure solvability and correctness, and the generation process supports controllable difficulty levels to better distinguish model capabilities. The dataset was published alongside the paper "AutoLogi: Automated Generation of Logic Puzzles for Evaluating Reasoning Abilities of Large Language Models" (arXiv:2502.16906). The Hugging Face release (qzhu/AutoLogi) is licensed under Apache-2.0 and contains on the order of 1K–10K examples. It was used in the post-training evaluations (Table 11) of Qwen3.
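To make "program-based verification" concrete, here is a minimal sketch (not AutoLogi's actual code; the puzzle, names, and constraints are hypothetical): a puzzle's conditions are encoded as executable predicates, a candidate answer is checked by running every predicate against it, and solvability is confirmed by searching for at least one answer that passes.

```python
from itertools import permutations

# Hypothetical mini-puzzle: order four people in a row.
people = ["A", "B", "C", "D"]

# Each puzzle condition becomes an executable predicate over a candidate ordering.
constraints = [
    lambda order: order.index("A") < order.index("B"),            # A sits before B
    lambda order: abs(order.index("C") - order.index("D")) == 1,  # C is adjacent to D
    lambda order: order[0] != "D",                                # D is not first
]

def verify(order):
    """Return True iff the candidate ordering satisfies every constraint."""
    return all(check(order) for check in constraints)

def solvable():
    """Return True iff at least one ordering satisfies all constraints."""
    return any(verify(list(p)) for p in permutations(people))

print(solvable())  # a generator would keep only puzzles where this is True
```

The same pattern scales to harder puzzles by adding constraints or enlarging the search space, which is one way controllable difficulty can be realized.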
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.