VoiceBench is a multi-faceted benchmark for evaluating LLM-based voice assistants. Introduced in the paper “VoiceBench: Benchmarking LLM-Based Voice Assistants” (arXiv:2410.17196), it provides an aggregated voice-interaction evaluation (reported as a VoiceBench overall score) focused on Audio→Text capabilities. The benchmark includes both real and synthetic spoken instructions and is designed to capture real-world variation in speaker characteristics, acoustic/environmental conditions, and content complexity. The Hugging Face dataset (lmms-lab/voicebench) exposes multiple subsets (e.g., advbench, alpacaeval, bbh, commoneval, mmsu, mtbench, wildvoice) and is provided under an Apache-2.0 license. (Sources: arXiv:2410.17196; Hugging Face lmms-lab/voicebench)
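To illustrate what an aggregated overall score across subsets could look like, here is a minimal sketch. Note the assumptions: the exact normalization and weighting are defined in the paper, and the per-subset numbers below are made up for the example; this simply macro-averages subset scores assumed to be pre-normalized to a 0–100 scale.

```python
def voicebench_overall(subset_scores: dict[str, float]) -> float:
    """Macro-average subset scores assumed pre-normalized to 0-100.

    Illustrative only: the official VoiceBench overall score is
    defined in the paper (arXiv:2410.17196); this is a simple sketch.
    """
    if not subset_scores:
        raise ValueError("no subset scores provided")
    return sum(subset_scores.values()) / len(subset_scores)

# Hypothetical per-subset scores, for illustration only.
scores = {
    "advbench": 98.0,
    "alpacaeval": 85.0,
    "bbh": 70.0,
    "commoneval": 80.0,
    "mmsu": 60.0,
    "mtbench": 75.0,
    "wildvoice": 78.0,
}

overall = voicebench_overall(scores)
print(round(overall, 2))  # → 78.0
```

A macro-average treats every subset equally regardless of its size, which keeps a single large subset from dominating the aggregate; any official weighting scheme would replace the plain mean above.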
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.