Codesota · Models · Whisper Large-v2 · OpenAI · 4 results · 3 benchmarks
Model card

Whisper Large-v2.

OpenAI · open-source · 1.5B params · Transformer encoder-decoder

OpenAI Whisper (arXiv 2212.04356).

§ 01 · Benchmarks

Every benchmark Whisper Large-v2 has a recorded score for.

| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|----|---------------------------|------------------------------|----------------|-------|------|------------|----------|
| 01 | Common Voice | Speech · Speech Recognition | wer | 11.2% | #1/3 | 2022-12-06 | source ↗ |
| 02 | LibriSpeech | Speech · Speech Recognition | wer-test-clean | 2.7% | #1/9 | 2022-12-06 | source ↗ |
| 03 | LibriSpeech | Speech · Speech Recognition | wer-test-other | 5.2% | #1/8 | 2022-12-06 | source ↗ |
| 04 | MuST-C En-De tst-COMMON | Speech · Speech Translation | bleu | 29.0% | #2/3 | | source ↗ |
The Rank column shows this model's position among all models scored on the same benchmark + metric (total competitors after the slash). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.

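The ranking rule above can be sketched in a few lines. This is an illustrative toy, not Codesota's actual schema: the function name, field names, and sample competitor scores are all assumptions.

```python
def rank(scores, model, lower_is_better=True):
    """Return (position, total) for `model` among all entries in `scores`,
    where `scores` maps model name -> value for one benchmark + metric.
    For WER, lower is better; for BLEU, pass lower_is_better=False."""
    ordered = sorted(scores, key=scores.get, reverse=not lower_is_better)
    return ordered.index(model) + 1, len(scores)

# Hypothetical Common Voice WER leaderboard with 3 scored models:
common_voice_wer = {"whisper-large-v2": 11.2, "model-b": 13.0, "model-c": 15.4}
print(rank(common_voice_wer, "whisper-large-v2"))  # -> (1, 3), shown as "#1/3"
```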
§ 02 · Strengths by area

Where Whisper Large-v2 actually performs.

Speech · 3 benchmarks · avg rank #1.3

§ 03 · Papers

1 paper with results for Whisper Large-v2.

  1. 2022-12-06 · Speech · 3 results

    Robust Speech Recognition via Large-Scale Weak Supervision (Whisper)

§ 04 · Related models

Other OpenAI models scored on Codesota.

GPT-4o · Undisclosed params · 35 results · 9 SOTA
o3 · 16 results · 5 SOTA
o4-mini · 13 results · 3 SOTA
o3 (high) · 2 results · 1 SOTA
o4-mini (high) · 1 result · 1 SOTA
o1 · 11 results
GPT-5 · 8 results
o1-preview · Undisclosed params · 8 results
§ 05 · Sources & freshness

Where these numbers come from.

arxiv · 3 results
editorial · 1 result

3 of 4 rows marked verified.