Codesota · Audio · Audio-Language Models · OpenAudioBench - LlamaQuestions
Audio-Language Models · benchmark dataset · EN

OpenAudioBench

OpenAudioBench is an audio-understanding evaluation dataset published on Hugging Face by baichuan-inc. It is designed to benchmark multimodal and audio-focused language models on multiple audio-based tasks, including logical reasoning, general knowledge, and open-ended question answering. The public Hugging Face dataset repository contains per-task evaluation directories (e.g., eval_datas/web_questions and eval_datas/reasoning_qa) holding audio files and accompanying CSV metadata. On Hugging Face, the dataset shows a default/test split with ~2.9k rows, with audio durations in the release ranging from roughly 1 s to 50 s. The dataset card and repository files (WAV audio plus CSVs) indicate it is intended as an evaluation collection for audio-driven QA and reasoning tasks; the subset referred to here as the "LlamaQuestions" audio-driven question-answering task corresponds to the audio QA evaluation data included in this OpenAudioBench release. Author/owner: baichuan-inc. Hugging Face dataset page: https://huggingface.co/datasets/baichuan-inc/OpenAudioBench.
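The layout described above (per-task directories holding WAV files plus CSV metadata) can be consumed with a few lines of Python. The sketch below is illustrative only: the column names ("audio_path", "question", "answer") and the sample path are assumptions, not the dataset's documented schema, so check the actual CSV headers in the release before use.

```python
import csv
import io

# Stand-in for one task's metadata CSV (e.g. a CSV under eval_datas/reasoning_qa/).
# Column names here are assumed for illustration; verify against the real files.
sample_csv = io.StringIO(
    "audio_path,question,answer\n"
    "eval_datas/reasoning_qa/audios/0001.wav,What number comes next?,42\n"
)

def load_eval_rows(fh):
    """Yield (audio_path, question, reference_answer) tuples from a metadata CSV."""
    for row in csv.DictReader(fh):
        yield row["audio_path"], row["question"], row["answer"]

rows = list(load_eval_rows(sample_csv))
print(len(rows))
print(rows[0][0])
```

If the repository exposes a loading script or standard configuration, `datasets.load_dataset("baichuan-inc/OpenAudioBench", ...)` may be more convenient; whether that works out of the box depends on the repo's structure, so treat it as something to verify rather than a given.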

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with frozen commit + seed
  • 03 · Declared evaluation environment (Python, deps)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies