MultiChallenge is a multi-turn conversational evaluation benchmark designed to measure LLMs' ability to conduct realistic multi-turn conversations with human users. It identifies four categories of realistic conversational challenges (instruction retention, inference memory of user information, reliable versioned editing, and self-coherence) that require integrated instruction-following, context management, and in-context reasoning. The dataset was created through a hybrid generation process (LLM agents plus human review) and comes with an automatic evaluation pipeline that uses LLMs as judges with instance-level rubrics; the authors report that this pipeline aligns well with experienced human raters. In the paper's reported evaluations, current frontier models score well below saturation on MultiChallenge: all attain less than 50% average accuracy, with the top reported model (Claude 3.5 Sonnet) reaching 41.4%. This shows that MultiChallenge exposes realistic multi-turn failure modes not captured by prior multi-turn benchmarks. The benchmark is accompanied by a public leaderboard (Scale) and a GitHub repository with details and data-generation code. Table 6 in the paper summarizes the multi-turn evaluation setup and results.
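The automatic grading step is the interesting mechanical piece: each test instance carries its own rubric, and an LLM judge checks the model's final response against it. Below is a minimal sketch of that pattern, assuming an OpenAI-style chat client; the prompt wording, rubric format, the `judge_response` helper, and the `gpt-4o` default are illustrative assumptions, not the authors' actual pipeline.

```python
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative judge prompt; the real rubric wording is instance-specific.
JUDGE_PROMPT = """You are grading a model's final response in a multi-turn conversation.

Conversation:
{conversation}

Final model response:
{response}

Rubric (every criterion must pass):
{rubric}

Answer with JSON: {{"pass": true or false, "reason": "..."}}"""


def judge_response(conversation: str, response: str, rubric: list[str],
                   judge_model: str = "gpt-4o") -> bool:
    """Ask an LLM judge whether the response satisfies every rubric criterion."""
    prompt = JUDGE_PROMPT.format(
        conversation=conversation,
        response=response,
        rubric="\n".join(f"- {criterion}" for criterion in rubric),
    )
    completion = client.chat.completions.create(
        model=judge_model,
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
        temperature=0,
    )
    verdict = json.loads(completion.choices[0].message.content)
    return bool(verdict["pass"])
```

Requiring every rubric criterion to pass keeps each instance a binary pass/fail, so average accuracy is simply the fraction of instances the judge marks as passing.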
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the corresponding step on the progress chart with your name.
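For a sense of what a runnable submission might contain, here is a minimal sketch of a reproduction script using Hugging Face transformers; the checkpoint name, file paths, and data schema are assumptions, not the actual submission format.

```python
"""Hypothetical reproduction script: generate a response for each MultiChallenge
conversation with a local checkpoint, then write the responses out for judging.
The paths and data schema below are assumptions, not the official format."""
import json

from transformers import AutoModelForCausalLM, AutoTokenizer

CHECKPOINT = "my-org/my-model"       # hypothetical checkpoint under evaluation
DATA_PATH = "multichallenge.jsonl"   # assumed: one conversation per line
OUT_PATH = "responses.jsonl"

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForCausalLM.from_pretrained(CHECKPOINT, device_map="auto")

with open(DATA_PATH) as fin, open(OUT_PATH, "w") as fout:
    for line in fin:
        item = json.loads(line)
        # Assumed schema: {"id": ..., "messages": [{"role": ..., "content": ...}]}
        input_ids = tokenizer.apply_chat_template(
            item["messages"], add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)
        output = model.generate(input_ids, max_new_tokens=1024)
        # Keep only the newly generated tokens, not the echoed prompt.
        response = tokenizer.decode(
            output[0, input_ids.shape[-1]:], skip_special_tokens=True
        )
        fout.write(json.dumps({"id": item["id"], "response": response}) + "\n")
```

Pinning exact package versions and decoding settings in the submission makes it far easier for us to regenerate your score.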