Codesota · Models · Mistral-Large-Instruct-2411 — mistralai · 14 results · 3 benchmarks
Model card

Mistral-Large-Instruct-2411.

mistralai · open-source · 123B params
§ 01 · Benchmarks

Every benchmark Mistral-Large-Instruct-2411 has a recorded score for.

#  | Benchmark               | Area · Task                                            | Metric                 | Value | Rank    | Date | Source
---|-------------------------|--------------------------------------------------------|------------------------|-------|---------|------|---------
01 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General       | average                | 69.8% | #1/491  | —    | source ↗
02 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General       | polemo2-in             | 89.1% | #2/490  | —    | source ↗
03 | Polish EQ-Bench         | Natural Language Processing · Polish Emotional Intelligence | eq-score          | 77.3% | #2/101  | —    | source ↗
04 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General       | cbd                    | 44.0% | #3/490  | —    | source ↗
05 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General       | dyk                    | 74.2% | #3/489  | —    | source ↗
06 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General       | belebele               | 92.6% | #4/490  | —    | source ↗
07 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General       | eq-bench               | 63.1% | #7/299  | —    | source ↗
08 | CPTU-Bench              | Natural Language Processing · Polish Text Understanding | sentiment             | 4.3%  | #11/93  | —    | source ↗
09 | CPTU-Bench              | Natural Language Processing · Polish Text Understanding | average               | 4.0%  | #12/93  | —    | source ↗
10 | CPTU-Bench              | Natural Language Processing · Polish Text Understanding | phraseology           | 4.0%  | #12/93  | —    | source ↗
11 | CPTU-Bench              | Natural Language Processing · Polish Text Understanding | language-understanding | 4.0% | #16/93  | —    | source ↗
12 | CPTU-Bench              | Natural Language Processing · Polish Text Understanding | tricky-questions      | 3.7%  | #17/93  | —    | source ↗
13 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General       | polqa-open-book        | 91.5% | #18/489 | —    | source ↗
14 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General       | ppc                    | 77.0% | #78/490 | —    | source ↗
The Rank column shows this model’s position among all models scored on the same benchmark + metric; the number after the slash is the total competitor count. #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
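The ordering described above (rank ascending, newest result first on ties) can be sketched as a two-key sort. The records below mirror rows from the table; the dates are placeholders, since the source page does not display them.

```python
from datetime import date

# Hypothetical result records; field names and dates are assumptions,
# not the site's actual schema.
results = [
    {"metric": "ppc", "rank": 78, "date": date(2024, 11, 1)},
    {"metric": "eq-score", "rank": 2, "date": date(2024, 12, 1)},
    {"metric": "polemo2-in", "rank": 2, "date": date(2024, 11, 1)},
    {"metric": "average", "rank": 1, "date": date(2024, 11, 1)},
]

# Sort by rank ascending; break ties by date descending (newest first)
# using the negated ordinal day number.
ordered = sorted(results, key=lambda r: (r["rank"], -r["date"].toordinal()))
print([r["metric"] for r in ordered])
# → ['average', 'eq-score', 'polemo2-in', 'ppc']
```

The two rank-2 rows tie, so the newer (December) result sorts ahead of the older one.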
§ 02 · Strengths by area

Where Mistral-Large-Instruct-2411 actually performs.

Natural Language Processing · 3 benchmarks · avg rank #13.3
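The "avg rank #13.3" figure is consistent with the arithmetic mean of the 14 rank positions in the benchmark table above, assuming a simple unweighted average:

```python
# Rank positions taken from the 14 rows of the benchmark table above.
ranks = [1, 2, 2, 3, 3, 4, 7, 11, 12, 12, 16, 17, 18, 78]

avg_rank = sum(ranks) / len(ranks)
print(round(avg_rank, 1))  # → 13.3
```

Note the single #78 outlier (ppc) pulls the average well above the median rank.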
§ 04 · Related models

Other mistralai models scored on Codesota.

Ministral-8B-Instruct-2410 · 0 results
Mistral-7B-Instruct-v0.1 · 0 results
Mistral-7B-Instruct-v0.2 · 0 results
Mistral-7B-Instruct-v0.3 · 7.25B params · 0 results
Mistral-7B-v0.3 · 0 results
Mistral-Large-Instruct-2407 · 123B params · 0 results
Mistral-Nemo-Base-2407 · 0 results
Mistral-Nemo-Instruct-2407 · 12.2B params · 0 results
§ 05 · Sources & freshness

Where these numbers come from.

speakleash/open_pl_llm_leaderboard · 8 results
SpeakLeash/CPTU-Bench · 5 results
SpeakLeash/Polish-EQ-Bench · 1 result
14 of 14 rows marked verified.