OpenAudioBench is an audio understanding evaluation dataset published on Hugging Face by baichuan-inc. It is designed to benchmark multimodal and audio-focused language models across multiple audio-based tasks, including logical reasoning, general knowledge, and open-ended question answering. The public Hugging Face dataset repo contains evaluation data directories (e.g., eval_datas/web_questions and eval_datas/reasoning_qa) with audio files and accompanying CSV metadata. The dataset page on HF shows a default/test split with ~2.9k rows, with audio durations in the release ranging roughly from ~1 s up to ~50 s. The dataset card and repo files (audio WAVs and CSVs) indicate it is intended as an evaluation collection for audio-driven QA and reasoning tasks; the subset often cited as the "LlamaQuestions" audio-driven question-answering task corresponds to the audio QA evaluation data included in this OpenAudioBench release. Author/owner: baichuan-inc. Hugging Face dataset page: https://huggingface.co/datasets/baichuan-inc/OpenAudioBench.
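As a minimal sketch of how an evaluation harness might consume the per-task CSV metadata described above: the column names (`audio_path`, `question`, `duration_s`) and file paths here are assumptions for illustration, not the dataset's real schema, and the duration filter just mirrors the ~1 s to ~50 s range reported on the dataset page.

```python
import pandas as pd

# Hypothetical metadata rows mimicking the CSVs shipped alongside the audio
# in eval_datas/ (column names and paths are assumptions, not the real schema).
meta = pd.DataFrame(
    {
        "audio_path": [
            "eval_datas/web_questions/audios/0001.wav",
            "eval_datas/reasoning_qa/audios/0002.wav",
        ],
        "question": [
            "What is the capital of France?",
            "If x + 2 = 5, what is x?",
        ],
        "duration_s": [3.2, 48.7],
    }
)

# The release's clips run roughly 1 s to 50 s, so a loader might
# sanity-check durations before batching examples for the model.
in_range = meta[(meta["duration_s"] >= 1.0) & (meta["duration_s"] <= 50.0)]
print(len(in_range))  # both synthetic rows fall inside the reported range
```

In a real harness, each `audio_path` row would be resolved against a local snapshot of the repo and decoded before being paired with its question text.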
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.