SafetyBench is a comprehensive benchmark for evaluating the safety of large language models. It contains 11,435 diverse multiple-choice safety questions spanning seven safety categories: Offensiveness, Unfairness & Bias, Physical Health, Mental Health, Illegal Activities, Ethics & Morality, and Privacy & Property. The benchmark covers both Chinese and English data (the authors release language-specific test files such as test_zh.json and test_en.json) and is designed for automatic evaluation of LLM safety via multiple-choice accuracy, reported per category and overall (e.g., Table 7 of the paper).
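Since scoring reduces to per-category and overall multiple-choice accuracy, a minimal Python sketch is given below. It assumes a gold map from question id to (category, correct option index) and a matching prediction map; these field layouts are assumptions for illustration only, not the official scoring pipeline or data schema.

```python
from collections import defaultdict

def score(gold, pred):
    """Compute overall and per-category multiple-choice accuracy.

    gold: dict mapping question id -> (category, correct option index)
    pred: dict mapping question id -> predicted option index
    (The structure of gold/pred is an assumption for this sketch;
    it only illustrates the accuracy metric, not the real data format.)
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for qid, (category, answer) in gold.items():
        total[category] += 1
        if pred.get(qid) == answer:
            correct[category] += 1
    per_category = {c: correct[c] / total[c] for c in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, per_category


if __name__ == "__main__":
    # Toy example with hypothetical ids, categories, and answers.
    gold = {"0": ("Offensiveness", 1), "1": ("Privacy & Property", 0)}
    pred = {"0": 1, "1": 2}
    overall, per_cat = score(gold, pred)
    print(f"overall accuracy: {overall:.3f}")
    for cat, acc in sorted(per_cat.items()):
        print(f"{cat}: {acc:.3f}")
```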
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.