SysBench is a benchmark for evaluating how well Large Language Models (LLMs) follow system messages. It measures adherence along three dimensions: constraint complexity, instruction misalignment, and multi-turn stability. The benchmark ships its evaluation examples as a Hugging Face dataset (the test split is stored as system_benchmark_eval_datas.json), and results are reported with the ISR metric described in the paper to quantify system-message-following performance. The dataset and code are publicly released by PKU-Baichuan-MLSystemLab on GitHub and hosted on Hugging Face.
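As a rough illustration of how an instruction-level satisfaction rate can be computed, here is a minimal sketch. The exact ISR formula, field names, and aggregation used by SysBench are not specified here, so the definition below (fraction of turns whose constraint judgments are all satisfied) is an assumption; consult the paper and official code for the authoritative implementation.

```python
# Hedged sketch: an ISR-style (instruction satisfaction rate) score computed
# from per-turn constraint judgments. The definition used here -- the fraction
# of turns with all constraints satisfied -- is an assumption, not the
# official SysBench formula.

def instruction_satisfaction_rate(sessions):
    """Return the fraction of turns whose constraints are all satisfied.

    `sessions`: list of sessions; each session is a list of turns;
    each turn is a list of booleans (one per constraint judgment).
    """
    turns = [turn for session in sessions for turn in session]
    if not turns:
        return 0.0
    satisfied = sum(1 for turn in turns if all(turn))
    return satisfied / len(turns)

# Example: 2 sessions, 4 turns total, 3 turns fully satisfied -> 0.75
sessions = [
    [[True, True], [True]],          # both turns satisfy every constraint
    [[True, False], [True, True]],   # first turn violates one constraint
]
print(instruction_satisfaction_rate(sessions))  # 0.75
```

A session-level stability metric would aggregate the same judgments per session instead of per turn; the official evaluation script defines the exact granularity.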
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate that step on the progress chart with your name.