Speech data from 110 English speakers with various accents. Used for multi-speaker TTS.
Six results indexed under one metric (MOS). The shaded row marks the current SOTA; ties are broken by submission date.
| # | Model | Org | Submitted | Paper / code | MOS |
|---|---|---|---|---|---|
| 01 | NaturalSpeech 3 | Microsoft Research | Mar 2024 | NaturalSpeech 3: Zero-Shot Copier-Free TTS with Flow Mat… | 4.36 |
| 02 | VITS (OSS) | Kakao | Jun 2021 | VITS: Conditional Variational Autoencoder with Adversari… | 4.21 |
| 03 | StyleTTS 2 (OSS) | Columbia University | Jun 2023 | StyleTTS 2: Towards Human-Level Text-to-Speech through S… | 4.19 |
| 04 | VALL-E 2 | Microsoft | Jun 2024 | VALL-E 2: Neural Codec Language Models are Human Parity … | 4.18 |
| 05 | XTTS v2 (OSS) | Coqui AI | Apr 2023 | XTTS: A Massively Multilingual Zero-Shot Text-to-Speech … | 4.14 |
| 06 | YourTTS (OSS) | Edresson Casanova et al. | Feb 2022 | YourTTS: Towards Zero-Shot Multi-Speaker TTS and Zero-Sh… | 4.07 |
Each row below marks a model that broke the MOS record at the time of its submission. Intermediate submissions remain in the leaderboard above; only record-setting entries are re-listed here.
Higher scores win, so each successive record-setting entry improved on the previous best.
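The record-setting subset can be derived mechanically from the leaderboard: sort by submission date and keep each row whose MOS beats every earlier one. A minimal sketch (dates and scores copied from the table above; model names shown without badges):

```python
# Leaderboard entries in chronological order: (model, submitted, MOS).
entries = [
    ("VITS",            "Jun 2021", 4.21),
    ("YourTTS",         "Feb 2022", 4.07),
    ("XTTS v2",         "Apr 2023", 4.14),
    ("StyleTTS 2",      "Jun 2023", 4.19),
    ("NaturalSpeech 3", "Mar 2024", 4.36),
    ("VALL-E 2",        "Jun 2024", 4.18),
]

def sota_steps(rows):
    """Keep only rows that beat the best MOS seen so far (higher wins)."""
    best, steps = float("-inf"), []
    for model, date, mos in rows:
        if mos > best:
            best = mos
            steps.append((model, date, mos))
    return steps

print(sota_steps(entries))
# [('VITS', 'Jun 2021', 4.21), ('NaturalSpeech 3', 'Mar 2024', 4.36)]
```

With these six results, only two entries set a new record: VITS and NaturalSpeech 3; every other submission landed below the best score of its time.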
Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
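For context on what the published number means: MOS is a mean of 1-to-5 listener ratings, usually reported with a confidence interval. The sketch below shows one common aggregation (sample mean plus a normal-approximation 95% CI); it is an illustration, not this leaderboard's actual evaluation pipeline:

```python
import math

def mos_with_ci(ratings):
    """Aggregate 1-5 naturalness ratings into a MOS and a 95% CI half-width.

    Uses the sample standard deviation and a normal approximation,
    which is reasonable for the large rating counts typical of MOS tests.
    """
    n = len(ratings)
    mean = sum(ratings) / n
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)
    half_width = 1.96 * math.sqrt(var / n)
    return mean, half_width

mos, ci = mos_with_ci([4, 5, 4, 4, 3, 5, 4, 4])
print(f"MOS {mos:.2f} +/- {ci:.2f}")
```

In practice, per-system MOS gaps of a few hundredths (such as several adjacent rows above) often fall inside the confidence interval, which is why ties here are broken by submission date.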