seed-tts-eval (referred to in papers as SEED test-zh / test-en) is a held-out zero-shot evaluation test set released alongside ByteDance's Seed-TTS work to measure content consistency and other objective metrics for text-to-speech systems. It contains separate Mandarin (test-zh) and English (test-en) subsets assembled from public corpora: per the project README, 2000 Mandarin samples extracted from DiDiSpeech-2 and 1000 English samples from Common Voice. The repo provides evaluation scripts and the recommended objective metrics used in the Seed-TTS paper, e.g. WER/CER computed with strong ASR models and speaker similarity computed with WavLM-based embeddings.

Primary sources: the official GitHub repo (BytedanceSpeech/seed-tts-eval), which hosts the test lists and evaluation code, and the Seed-TTS paper (arXiv:2406.02430), which references and uses this test set.
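To make the two metric families concrete: content consistency is scored by transcribing the synthesized audio with an ASR model and computing WER/CER against the input text, while speaker similarity is the cosine similarity between speaker embeddings of the synthesized and reference audio. The sketch below is illustrative only, not the repo's actual scripts; it shows the core arithmetic (word-level edit distance and cosine similarity) assuming transcripts and embedding vectors have already been produced by models of your choice:

```python
import math


def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])  # substitution (or match)
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)  # deletion, insertion
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


def speaker_similarity(emb_a: list[float], emb_b: list[float]) -> float:
    """Cosine similarity between two speaker-embedding vectors (e.g. WavLM-based)."""
    dot = sum(x * y for x, y in zip(emb_a, emb_b))
    norm_a = math.sqrt(sum(x * x for x in emb_a))
    norm_b = math.sqrt(sum(y * y for y in emb_b))
    return dot / (norm_a * norm_b)
```

For example, `wer("the cat sat on the mat", "the cat sat on a mat")` is 1/6 (one substitution over six reference words), and identical embeddings give a similarity of 1.0. For CER on Mandarin, the same edit distance is applied over characters instead of words.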
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.