Model card
Mistral-7B-Instruct-v0.1
Mistral AI · open-source · Mistral 7B with instruction tuning
Zero-shot evaluation on CNN/DailyMail reported in arXiv:2507.05123 (Jul 2025): the best zero-shot result among open-source 7B-class models in that study.
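A zero-shot setup like the one evaluated above sends the model a single instruction with no examples. A minimal sketch of how such a prompt might be framed, assuming the `[INST] … [/INST]` instruction format used by the Mistral-7B-Instruct family; the `build_prompt` helper and the instruction wording are illustrative, not taken from the cited paper:

```python
# Hypothetical helper: wrap a summarization request in the
# [INST] ... [/INST] instruction format expected by
# Mistral-7B-Instruct-v0.1 (exact wording is an assumption).
def build_prompt(article: str) -> str:
    instruction = (
        "Summarize the following article in three sentences:\n\n" + article
    )
    return f"<s>[INST] {instruction} [/INST]"

prompt = build_prompt("Example article text.")
```

The resulting string would then be passed to the model's tokenizer; the paper's actual prompt-engineering variants (and any system preamble) may differ.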
§ 01 · Benchmarks
All benchmarks with a recorded score for Mistral-7B-Instruct-v0.1.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | CNN / Daily Mail | Natural Language Processing · Text Summarization | ROUGE-2 | 16.4% | #26 | 2025-07-01 | source ↗ |
| 02 | CNN / Daily Mail | Natural Language Processing · Text Summarization | ROUGE-1 | 37.4% | #31 | 2025-07-01 | source ↗ |
| 03 | CNN / Daily Mail | Natural Language Processing · Text Summarization | ROUGE-L | 24.5% | #33 | 2025-07-01 | source ↗ |
The Rank column shows this model’s position among all models scored on the same benchmark and metric. #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
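The ROUGE metrics in the table measure n-gram overlap between a generated summary and a reference summary. A toy ROUGE-1 F1 in pure Python, for intuition only; the reported scores come from the standard ROUGE toolkit, which adds stemming and preprocessing this sketch omits, and ROUGE-L uses a longest-common-subsequence variant instead of unigrams:

```python
from collections import Counter

def rouge_1_f1(candidate: str, reference: str) -> float:
    """Toy ROUGE-1 F1: harmonic mean of unigram precision and recall."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Overlap counts each shared unigram up to its minimum frequency.
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge_1_f1("the cat sat on the mat", "the cat lay on the mat")
# 5 of 6 unigrams overlap in each direction, so F1 = 5/6 ≈ 0.833
```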
§ 02 · Strengths by area
Areas where Mistral-7B-Instruct-v0.1 performs best. All recorded results fall under Natural Language Processing · Text Summarization.
§ 03 · Papers
1 paper with results for Mistral-7B-Instruct-v0.1.
- 2025-07-01 · Natural Language Processing · 3 results
An Evaluation of Large Language Models on Text Summarization Tasks Using Prompt Engineering Techniques
§ 04 · Related models
Other Mistral AI models scored on Codesota.
§ 05 · Sources & freshness
Where these numbers come from.
arXiv · 3 results · 3 of 3 rows marked verified.