
NSynth.

The NSynth dataset contains four-second, monophonic 16 kHz audio snippets ("notes") for each instrument, generated by ranging over every pitch of a standard MIDI piano (21–108) at five different velocities. It is designed as a benchmark for audio machine learning and a foundation for future datasets. Each note is annotated with the source (acoustic, electronic, or synthetic) and family (e.g., bass, brass, keyboard) of its instrument, as well as pitch, velocity, sample rate, and human-labeled sonic qualities. The dataset ships with train, valid, and test splits, with no instrument overlapping between splits.
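As a sketch of how that per-note metadata can be used, the snippet below filters notes to a pitch range and groups them by instrument family. The field names (`pitch`, `velocity`, `instrument_family_str`, `instrument_source_str`) follow NSynth's published `examples.json` schema, but the sample records here are invented for illustration.

```python
from collections import defaultdict

# Invented sample records mimicking NSynth's examples.json schema:
# each note id maps to its MIDI pitch, velocity, family, and source.
notes = {
    "bass_synthetic_033-022-050": {
        "pitch": 22, "velocity": 50,
        "instrument_family_str": "bass", "instrument_source_str": "synthetic",
    },
    "keyboard_acoustic_004-060-100": {
        "pitch": 60, "velocity": 100,
        "instrument_family_str": "keyboard", "instrument_source_str": "acoustic",
    },
    "brass_acoustic_018-072-075": {
        "pitch": 72, "velocity": 75,
        "instrument_family_str": "brass", "instrument_source_str": "acoustic",
    },
}

def group_by_family(notes, lo=21, hi=108):
    """Group note ids by instrument family, keeping pitches in [lo, hi]."""
    families = defaultdict(list)
    for note_id, meta in notes.items():
        if lo <= meta["pitch"] <= hi:
            families[meta["instrument_family_str"]].append(note_id)
    return dict(families)

print(group_by_family(notes, lo=50))
```

The same grouping pattern extends to any of the annotated fields, e.g. bucketing by source or velocity before building a training subset.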

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with frozen commit + seed
  • 03 · Declared evaluation environment (Python, deps)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
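A reproduction script covering items 02 and 03 might pin its seed and declare its environment up front. This is a hypothetical skeleton, not Codesota's required format: the `evaluate` stub stands in for the real benchmark run, and `COMMIT` is a placeholder you would replace with the frozen revision.

```python
import json
import platform
import random
import sys

SEED = 1234                      # frozen seed (item 02)
COMMIT = "<frozen-commit-sha>"   # placeholder: pin the exact code revision

def declare_environment():
    """Record the evaluation environment (item 03)."""
    return {
        "python": platform.python_version(),
        "platform": platform.platform(),
        "seed": SEED,
        "commit": COMMIT,
    }

def evaluate(seed):
    """Stand-in for the real benchmark run; deterministic given the seed."""
    rng = random.Random(seed)
    return {"accuracy": round(rng.uniform(0.0, 1.0), 4)}

if __name__ == "__main__":
    report = {"environment": declare_environment(), "results": evaluate(SEED)}
    json.dump(report, sys.stdout, indent=2)
```

Determinism is the point: running the script twice with the same seed and commit must emit identical result rows, which is what lets a discrepancy be traced to the environment rather than to chance.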