Codesota · Audio · Automatic Speech Recognition · SEED Seed-TTS test-zh
Automatic Speech Recognition · benchmark dataset · ZH / EN

seed-tts-eval (Seed-TTS evaluation test set) — test-zh.

seed-tts-eval (referred to in papers as SEED test-zh / test-en) is a held-out zero-shot evaluation test set released alongside ByteDance's Seed-TTS work to measure content consistency and other objective metrics for text-to-speech systems. It contains two subsets assembled from public corpora: test-zh, with 2,000 Mandarin samples extracted from DiDiSpeech-2, and test-en, with 1,000 English samples from Common Voice (per the project README).

The repository provides evaluation scripts and the objective metrics recommended in the Seed-TTS paper: WER/CER computed with strong ASR models for content consistency, and speaker similarity computed with WavLM-based speaker embeddings.

Primary sources: the official GitHub repository (BytedanceSpeech/seed-tts-eval), which hosts the test lists and evaluation code, and the Seed-TTS paper (arXiv:2406.02430), which references and uses this test set.
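The WER/CER side of the protocol can be sketched with plain edit distance: transcribe the synthesized audio with an ASR model, then compare the transcript against the reference text. The stdlib-only sketch below illustrates only the metric definitions (CER for Mandarin, WER for English); it is not the repository's actual evaluation script, which additionally runs the ASR step and text normalization.

```python
def levenshtein(ref, hyp):
    """Edit distance between two sequences via single-row dynamic programming."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i  # prev holds the previous row's diagonal cell
        for j in range(1, n + 1):
            cur = dp[j]
            # deletion, insertion, substitution/match
            dp[j] = min(dp[j] + 1, dp[j - 1] + 1, prev + (ref[i - 1] != hyp[j - 1]))
            prev = cur
    return dp[n]


def cer(reference: str, hypothesis: str) -> float:
    """Character error rate (used for test-zh): edits / reference length."""
    ref = reference.replace(" ", "")
    hyp = hypothesis.replace(" ", "")
    return levenshtein(ref, hyp) / max(len(ref), 1)


def wer(reference: str, hypothesis: str) -> float:
    """Word error rate over whitespace tokens (used for test-en)."""
    ref = reference.split()
    return levenshtein(ref, hypothesis.split()) / max(len(ref), 1)
```

For example, `wer("hello world", "hello there")` is 0.5 (one substitution over two reference words). Speaker similarity is evaluated separately, as a cosine similarity between WavLM-based speaker embeddings of the prompt and the synthesized audio.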

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

What a submission needs
  • 01 A public checkpoint or API endpoint
  • 02 A reproduction script with a frozen commit and seed
  • 03 A declared evaluation environment (Python version, dependencies)
  • 04 One row per metric declared by this dataset
  • 05 A contact so we can follow up on discrepancies
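The checklist above might be captured in a small manifest emitted by the reproduction script. This is an illustrative sketch only; every field name, URL, and value below is a placeholder, not a format this site prescribes.

```python
import json
import platform
import random

# Hypothetical submission manifest; all values are placeholders.
MANIFEST = {
    "checkpoint": "https://example.org/my-tts-checkpoint",  # public checkpoint or API endpoint
    "commit": "0123abc",                                    # frozen commit of the eval code
    "seed": 1234,                                           # fixed RNG seed
    "python": platform.python_version(),                    # declared evaluation environment
    "metrics": ["CER"],                                     # one row per declared metric
    "contact": "author@example.org",                        # follow-up contact
}


def set_seed(seed: int) -> None:
    """Seed every RNG the evaluation touches (stdlib only in this sketch)."""
    random.seed(seed)


if __name__ == "__main__":
    set_seed(MANIFEST["seed"])
    print(json.dumps(MANIFEST, indent=2))
```

In practice the script would also pin non-stdlib RNGs (NumPy, torch) and dependency versions, so that rerunning it from the frozen commit reproduces the submitted score.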