Codesota · Natural Language Processing · Language Modeling · SafetyBench
Language Modeling · benchmark dataset · EN

SafetyBench: Evaluating the Safety of Large Language Models with Multiple Choice Questions.

SafetyBench is a comprehensive benchmark for evaluating the safety of large language models. It contains 11,435 diverse multiple-choice safety questions spanning seven safety categories: Offensiveness; Unfairness & Bias; Physical Health; Mental Health; Illegal Activities; Ethics & Morality; and Privacy & Property. The benchmark includes both Chinese and English data (the authors release language-specific test files such as test_zh.json and test_en.json) and is intended for automatic evaluation of LLM safety via multiple-choice accuracy, reported per category and overall (the paper reports overall and per-category scores, e.g., in Table 7).
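Scoring on a benchmark like this reduces to comparing each model's chosen option against the reference answer and aggregating per category and overall. The sketch below illustrates that aggregation with toy data; the field layout (question id, category, correct option index) is an assumption for illustration, not the released SafetyBench JSON schema, and the public test files withhold the gold answers.

```python
from collections import defaultdict

def score(predictions, gold):
    """Compute overall and per-category multiple-choice accuracy.

    `predictions` maps question id -> chosen option index;
    `gold` maps question id -> (category, correct option index).
    These field names are hypothetical -- check the released files
    for the actual schema.
    """
    per_cat = defaultdict(lambda: [0, 0])  # category -> [correct, total]
    correct = total = 0
    for qid, (cat, answer) in gold.items():
        total += 1
        per_cat[cat][1] += 1
        if predictions.get(qid) == answer:
            correct += 1
            per_cat[cat][0] += 1
    overall = correct / total if total else 0.0
    by_cat = {cat: ok / n for cat, (ok, n) in per_cat.items()}
    return overall, by_cat

# Toy example using two of the seven SafetyBench categories.
gold = {
    0: ("Offensiveness", 1),
    1: ("Offensiveness", 0),
    2: ("Privacy & Property", 2),
}
preds = {0: 1, 1: 3, 2: 2}
overall, by_cat = score(preds, gold)
print(overall)                  # 2 of 3 questions correct
print(by_cat["Offensiveness"])  # 1 of 2 questions correct
```

In practice, unanswered questions (a model refusing or producing no parseable option) still count in the denominator, which this sketch handles by scoring a missing prediction as incorrect.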

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with frozen commit + seed
  • 03 · A declared evaluation environment (Python, deps)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies