Codesota · Audio · Voice cloning · LibriTTS test-clean (Zero-Shot TTS)
Voice cloning · benchmark dataset · 2019 · EN

LibriTTS test-clean zero-shot TTS evaluation.

Standard zero-shot voice-cloning / TTS evaluation using speaker prompts drawn from LibriTTS test-clean. Word error rate (WER) on the resynthesized utterances, measured with a frozen ASR such as HuBERT-Large or Whisper, is the primary intelligibility metric (lower is better); speaker encoder cosine similarity (SECS) between the prompt and the synthesized speech is a secondary metric.
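A minimal sketch of both metrics, assuming Whisper large-v3 as the frozen ASR (via openai-whisper and jiwer) and Resemblyzer as the speaker encoder. Published results typically use a HuBERT-Large ASR fine-tuned on LibriSpeech and a WavLM-based verification model, so absolute numbers from this sketch will not line up with the table.

```python
# Sketch of WER + SECS scoring for zero-shot TTS output. Model choices here
# (Whisper large-v3, Resemblyzer) are stand-ins, not this benchmark's spec.
import jiwer                                   # pip install jiwer
import numpy as np
import whisper                                 # pip install openai-whisper
from resemblyzer import VoiceEncoder, preprocess_wav  # pip install resemblyzer

asr = whisper.load_model("large-v3")           # frozen: never tuned on TTS output
spk = VoiceEncoder()

norm = jiwer.Compose([                         # normalize text before scoring
    jiwer.ToLowerCase(),
    jiwer.RemovePunctuation(),
    jiwer.RemoveMultipleSpaces(),
    jiwer.Strip(),
])

def wer(pairs):
    """pairs: iterable of (synthesized_wav_path, reference_transcript)."""
    hyps = [norm(asr.transcribe(path)["text"]) for path, _ in pairs]
    refs = [norm(text) for _, text in pairs]
    return jiwer.wer(refs, hyps)               # lower is better

def secs(prompt_wav, synth_wav):
    """Cosine similarity between prompt and output speaker embeddings."""
    a = spk.embed_utterance(preprocess_wav(prompt_wav))
    b = spk.embed_utterance(preprocess_wav(synth_wav))
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Whatever checkpoints a paper picks, keeping the ASR and speaker encoder frozen across all systems is what makes rows comparable.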

§ 01 · Leaderboard

Best published scores.

3 results indexed across 1 metric. Shaded row marks current SOTA; ties broken by submission date.


Primary metric · wer (lower is better) · 3 rows
#    Model            Org        Submitted  Paper / code  wer
01   NaturalSpeech 3  Microsoft  Apr 2026   editorial     1.81
02   Voicebox         Meta AI    Apr 2026   editorial     1.90
03   VALL-E           Microsoft  Apr 2026   editorial     5.90
Fig 2 · Rows sorted by score within each metric. Shaded row marks SOTA. Dates reflect model or paper release where available, otherwise the date Codesota accessed the source.
§ 03 · Progress

1 step of state of the art.

Each row below marks a model that broke the previous record on wer. Intermediate submissions are kept in the leaderboard above; only SOTA-setting entries are re-listed here. A minimal sketch of this selection rule follows Fig 3.

Lower scores win. Each subsequent entry improved upon the previous best.

SOTA line · wer
  1. Apr 5, 2026 · NaturalSpeech 3 · Microsoft · 1.81
Fig 3 · SOTA-setting models only. 1 entry, dated Apr 2026.
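The selection behind the SOTA line is a running minimum over date-ordered leaderboard rows. A minimal sketch in Python, assuming an illustrative tuple layout rather than Codesota's actual schema; the same-date ordering by score is one plausible reading of how a simultaneously indexed batch, as in Fig 3, yields a single step.

```python
# Running-minimum selection behind the SOTA line: walk rows in submission
# order and keep only entries that strictly beat the best wer seen so far.
# Tuple layout is illustrative, not Codesota's actual schema.
from datetime import date

def sota_line(rows):
    """rows: (submitted, model, org, wer) tuples; returns record-setting steps."""
    best = float("inf")
    steps = []
    # Sort by date, then by wer, so rows indexed in one batch (shared date)
    # contribute at most one step: the best of the batch.
    for submitted, model, org, wer in sorted(rows, key=lambda r: (r[0], r[3])):
        if wer < best:              # lower is better; ties keep the old record
            best = wer
            steps.append((submitted, model, org, wer))
    return steps

# The Fig 2 rows, with an assumed shared indexing date of Apr 5, 2026:
rows = [
    (date(2026, 4, 5), "NaturalSpeech 3", "Microsoft", 1.81),
    (date(2026, 4, 5), "Voicebox", "Meta AI", 1.90),
    (date(2026, 4, 5), "VALL-E", "Microsoft", 5.90),
]
# Only NaturalSpeech 3 becomes a step, matching the single entry in Fig 3.
assert sota_line(rows) == [(date(2026, 4, 5), "NaturalSpeech 3", "Microsoft", 1.81)]
```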
§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read the submission guide
What a submission needs
  • 01 A public checkpoint or API endpoint
  • 02 A reproduction script with a frozen commit and seed (see the sketch after this list)
  • 03 A declared evaluation environment (Python version, dependencies)
  • 04 One row per metric declared by this dataset
  • 05 A contact so we can follow up on discrepancies
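To make item 02 concrete, here is a hypothetical skeleton; the repository URL, commit hash, and eval entry point are placeholders, not a prescribed Codesota format. The point is the frozen commit, the declared seed, and pinned dependencies.

```python
#!/usr/bin/env python3
# Hypothetical reproduction-script skeleton. REPO, COMMIT, and the eval
# entry point below are placeholders; substitute your submission's values.
import random
import subprocess

import numpy as np
import torch

REPO = "https://github.com/example-org/zero-shot-tts"  # placeholder
COMMIT = "0123abcd"                                    # frozen commit (placeholder)
SEED = 1234                                            # declared seed

def pin_code_and_deps() -> None:
    # Clone the exact code state and install the pinned dependency set.
    subprocess.run(["git", "clone", REPO, "model"], check=True)
    subprocess.run(["git", "-C", "model", "checkout", COMMIT], check=True)
    subprocess.run(["pip", "install", "-r", "model/requirements.txt"], check=True)

def set_seed(seed: int) -> None:
    # Seed every RNG the eval touches so reruns are bit-for-bit comparable.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

if __name__ == "__main__":
    pin_code_and_deps()
    set_seed(SEED)
    # Eval entry point (placeholder name) emits one row per declared metric:
    subprocess.run(
        ["python", "model/eval_libritts_test_clean.py", "--seed", str(SEED)],
        check=True,
    )
```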