Codesota · General · Vision-Language Models · OlympiadBench (full)
Vision-Language Models · benchmark dataset · EN

OlympiadBench: A Challenging Benchmark for Promoting AGI with Olympiad-Level Bilingual Multimodal Scientific Problems.

OlympiadBench is an Olympiad-level bilingual multimodal scientific benchmark (ACL 2024; arXiv:2402.14008). It contains 8,476 problems drawn from high‑difficulty mathematics and physics competitions (including problems from Chinese exams), presented in both Chinese and English. Problems are multimodal (text plus images), and each comes with expert annotations providing step‑by‑step reasoning and a final answer. The benchmark is intended to evaluate advanced reasoning, multimodal understanding, and problem‑solving capabilities of LLMs and LMMs (tasks: question answering / visual question answering). The Hugging Face dataset page groups the data into multiple subsets (math/physics, Chinese/English, multimodal/text‑only variants), and the paper reports evaluations on a "full" split. (Sources: arXiv:2402.14008, ACL 2024 paper, Hugging Face dataset page, OpenBMB GitHub.)
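Because every problem carries a gold final answer, a basic evaluation loop reduces to normalizing and comparing answer strings. A minimal sketch, assuming predictions and gold answers arrive as plain strings; the normalization rules below are illustrative, not the paper's official answer matcher:

```python
def normalize(ans: str) -> str:
    """Lowercase, trim, and strip common math wrappers (illustrative rules only)."""
    ans = ans.strip().lower()
    for wrapper in ("$", "\\boxed{", "{", "}"):
        ans = ans.replace(wrapper, "")
    return ans.strip()

def accuracy(pairs):
    """Fraction of (prediction, gold) pairs whose normalized forms match."""
    if not pairs:
        return 0.0
    hits = sum(normalize(pred) == normalize(gold) for pred, gold in pairs)
    return hits / len(pairs)
```

For example, `accuracy([("$\\boxed{42}$", "42"), ("3.14", "2.71")])` scores the first pair as a hit and the second as a miss. A real evaluation on this benchmark would need far stricter handling of equivalent mathematical expressions (fractions, units, intervals), which this sketch deliberately omits.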

§ 01 · Leaderboard

Best published scores.

No results indexed yet; be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with frozen commit + seed
  • 03 · Declared evaluation environment (Python, deps)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
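The checklist above can be sanity-checked mechanically before submitting. A minimal sketch, assuming a submission is a plain dict; the field names here are hypothetical placeholders, not Codesota's actual schema:

```python
# Hypothetical required fields mirroring the five checklist items above.
REQUIRED_FIELDS = {
    "checkpoint",    # 01: public checkpoint URL or API endpoint
    "repro_script",  # 02: script with a frozen commit and seed
    "environment",   # 03: declared Python version and dependencies
    "metrics",       # 04: one entry per metric declared by the dataset
    "contact",       # 05: email or handle for follow-up on discrepancies
}

def missing_fields(submission: dict) -> list:
    """Return the required fields that are absent or empty, sorted by name."""
    return sorted(f for f in REQUIRED_FIELDS if not submission.get(f))
```

A submission passes when `missing_fields(...)` returns an empty list; otherwise the returned names point at exactly which checklist items still need attention.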