INCLUDE is a multilingual, knowledge- and reasoning-centric evaluation benchmark built from local academic and professional exam sources to measure multilingual LLM performance in real regional contexts. The paper (arXiv:2411.19799) reports an evaluation suite of 197,243 QA pairs covering regional and cultural knowledge across many topics in 44 written languages. A released Hugging Face dataset variant, CohereLabs/include-base-44, is a curated subset described as "INCLUDE-base (44 languages)": 22,637 four-option multiple-choice questions spanning 57 topics, with domains including chemistry, biology, legal, finance, medical, climate, art, and code. The Hugging Face page lists the 44 languages, the Apache-2.0 license, the task categories (multiple-choice, text2text-generation), and a link to the paper. Note: the Qwen3 paper (arXiv:2505.09388) reports evaluating on INCLUDE with 10% sampling in some post-training evaluations (Table 11). Sources: arXiv:2411.19799 and the Hugging Face dataset page for CohereLabs/include-base-44.
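Since every item in include-base-44 is a four-option multiple-choice question, a reproduction script ultimately reduces to exact-match accuracy over predicted option indices. A minimal sketch of that scoring step (the field names `question`, `choices`, and `answer` are illustrative assumptions, not the dataset's documented schema):

```python
from dataclasses import dataclass

# Hypothetical item layout; the real Hugging Face schema may use different
# field names, so adapt this to the actual dataset columns.
@dataclass
class MCQItem:
    question: str
    choices: list[str]   # exactly 4 options in include-base-44
    answer: int          # index of the gold option (0-3)

def accuracy(items: list[MCQItem], predictions: list[int]) -> float:
    """Exact-match accuracy over predicted option indices."""
    assert len(items) == len(predictions)
    if not items:
        return 0.0
    correct = sum(1 for item, pred in zip(items, predictions)
                  if pred == item.answer)
    return correct / len(items)

# Toy usage with two dummy items.
items = [
    MCQItem("2 + 2 = ?", ["3", "4", "5", "6"], 1),
    MCQItem("Capital of France?", ["Paris", "Rome", "Oslo", "Bern"], 0),
]
print(accuracy(items, [1, 2]))  # one of two correct -> 0.5
```

A full reproduction script would additionally load the dataset split, prompt the checkpoint per item, and map its output to an option index before calling a scorer like this.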
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.