Codesota · Models · wav2vec 2.0 Large (960h) · Meta AI · 3 results · 2 benchmarks
Model card

wav2vec 2.0 Large (960h)

Meta AI · open-source · 317M params · CNN feature encoder + Transformer

wav2vec 2.0 Large fine-tuned on the 960 hours of labeled LibriSpeech audio, after self-supervised pretraining on the 60k-hour Libri-Light corpus.
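The "CNN feature encoder" part of the architecture can be made concrete by its temporal downsampling. Per the wav2vec 2.0 paper, the encoder is a 7-layer conv stack with kernel widths (10, 3, 3, 3, 3, 3, 3) and strides (5, 2, 2, 2, 2, 2, 2), i.e. a total stride of 320 samples, so 16 kHz audio is mapped to one latent frame per ~20 ms. A minimal sketch of the frame-count arithmetic:

```python
# Conv stack of the wav2vec 2.0 feature encoder, as (kernel, stride) pairs
# taken from the paper; total stride is 5 * 2**6 = 320 samples.
LAYERS = [(10, 5), (3, 2), (3, 2), (3, 2), (3, 2), (3, 2), (3, 2)]

def encoder_frames(n_samples: int) -> int:
    """Number of latent frames the CNN encoder emits for a raw waveform."""
    length = n_samples
    for kernel, stride in LAYERS:
        length = (length - kernel) // stride + 1  # valid convolution, no padding
    return length

print(encoder_frames(16000))  # 1 s of 16 kHz audio -> 49 frames (~20 ms hop)
```

These frames are what the Transformer then contextualizes; the fine-tuned model adds a CTC head on top of them.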

§ 01 · Benchmarks

Every benchmark wav2vec 2.0 Large (960h) has a recorded score for.

#  | Benchmark    | Area · Task                 | Metric         | Value | Rank | Date       | Source
01 | Common Voice | Speech · Speech Recognition | wer            | 10.5% | #2/3 | 2020-06-20 | source ↗
02 | LibriSpeech  | Speech · Speech Recognition | wer-test-clean | 1.8%  | #4/9 | 2020-06-20 | source ↗
03 | LibriSpeech  | Speech · Speech Recognition | wer-test-other | 3.3%  | #7/8 | 2020-06-20 | source ↗
The Rank column shows this model’s position among all models scored on the same benchmark + metric (total competitors after the slash); #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.

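All three scores use word error rate (WER): the word-level edit distance between the reference transcript and the model's hypothesis, divided by the number of reference words. A self-contained sketch of the metric (illustrative helper, not Codesota's scoring code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on a mat"))  # 1/6 ≈ 0.167
```

So the 1.8% figure on LibriSpeech test-clean means roughly 1.8 word errors per 100 reference words.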
§ 02 · Strengths by area

Where wav2vec 2.0 Large (960h) actually performs.

Speech: 2 benchmarks · avg rank #4.3

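The #4.3 figure is presumably the mean of the per-result ranks from the benchmarks table above (#2, #4, #7); a sketch of that assumption:

```python
# Hypothetical reconstruction of the "avg rank" stat: mean of this model's
# three recorded ranks (Common Voice #2, test-clean #4, test-other #7).
ranks = [2, 4, 7]
avg_rank = sum(ranks) / len(ranks)
print(f"avg rank #{avg_rank:.1f}")  # -> avg rank #4.3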
§ 03 · Papers

1 paper with results for wav2vec 2.0 Large (960h).

  1. 2020-06-20 · Speech · 3 results
     wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations

§ 04 · Related models

Other Meta AI models scored on Codesota.

GENRE: 1 result · 1 SOTA
SeamlessM4T v2 Large: 2.3B params · 1 result · 1 SOTA
DINOv2 (ViT-g) + Linear: unknown params · 1 result
Fairseq S2T (MuST-C): ~150M params · 1 result
Mask2Former (Swin-L): unknown params · 1 result
MusicGen Large: 3.3B params · 1 result
Voicebox: 330M params · 1 result
convnext_base.fb_in22k_ft_in1k: 1 result
§ 05 · Sources & freshness

Where these numbers come from.

arxiv: 3 results · 3 of 3 rows marked verified.