Language Modeling · benchmark dataset · EN

INCLUDE: Evaluating Multilingual Language Understanding with Regional Knowledge.

INCLUDE is a multilingual, knowledge- and reasoning-centric evaluation benchmark built from local academic and professional exam sources to measure multilingual LLM performance in real regional contexts. The paper (arXiv:2411.19799) reports a full evaluation suite of 197,243 QA pairs covering regional and cultural knowledge across many topics and 44 written languages. A released Hugging Face dataset variant, CohereLabs/include-base-44, is a curated subset described as "INCLUDE-base (44 languages)": 22,637 4-option multiple-choice questions spanning 57 topics, with domains including chemistry, biology, legal, finance, medical, climate, art, and code. The HF dataset page lists the 44 languages, the Apache-2.0 license, the task categories (multiple-choice, text2text-generation), and a link to the paper. Note that the Qwen3 technical report (arXiv:2505.09388) reports evaluating on INCLUDE with 10% sampling in its post-training evaluations (Table 11). Sources: arXiv:2411.19799 and the Hugging Face dataset page for CohereLabs/include-base-44.
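Concretely, evaluating a model on one of the 4-option items comes down to rendering a prompt and comparing a predicted letter against the gold answer index. A minimal sketch follows; the field names `question`, `choices`, and `answer` are assumptions for illustration, so check the actual schema on the CohereLabs/include-base-44 page before relying on them:

```python
# Sketch of scoring one INCLUDE-style 4-option multiple-choice item.
# Field names ("question", "choices", "answer") are assumed, not verified
# against the released schema.

LETTERS = "ABCD"

def format_prompt(item: dict) -> str:
    """Render a 4-option MCQ item as a plain-text prompt."""
    lines = [item["question"]]
    lines += [f"{LETTERS[i]}. {c}" for i, c in enumerate(item["choices"])]
    lines.append("Answer:")
    return "\n".join(lines)

def is_correct(item: dict, predicted_letter: str) -> bool:
    """Compare a model's predicted letter against the gold answer index."""
    return LETTERS.index(predicted_letter.strip().upper()) == item["answer"]

# Toy record standing in for one of the 22,637 questions.
sample = {
    "question": "Which gas makes up most of Earth's atmosphere?",
    "choices": ["Oxygen", "Nitrogen", "Carbon dioxide", "Argon"],
    "answer": 1,
}

print(format_prompt(sample))
print(is_correct(sample, "B"))  # True
```

Accuracy over the subset is then just the mean of `is_correct` across all items.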

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with frozen commit + seed
  • 03 · Declared evaluation environment (Python, deps)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
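Item 02 above can be sketched as a minimal, self-contained script. Everything here (the function name, the stubbed scoring loop) is illustrative rather than a required Codesota interface; the point is that a single frozen seed makes the reported score re-runnable:

```python
# Illustrative skeleton of a reproduction script with a frozen seed.
# The actual model loading and evaluation are stubbed out with placeholders.
import random

def run_eval(seed: int = 42) -> float:
    """Run the (stubbed) evaluation deterministically under a frozen seed."""
    rng = random.Random(seed)  # one explicit RNG instead of global state
    # Placeholder for: load checkpoint, load the dataset, score predictions.
    fake_scores = [rng.random() for _ in range(100)]
    return sum(fake_scores) / len(fake_scores)

if __name__ == "__main__":
    # Same seed -> same reported score, which is what reviewers re-run.
    assert run_eval(42) == run_eval(42)
    print(f"score={run_eval(42):.4f}")
```

Pinning the commit hash of the evaluation code alongside the seed completes the reproducibility contract: anyone checking out that commit and running the script gets the submitted number.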