Model card
Mistral 7B.
Mistral AI · open-weight · 7B params · Transformer (decoder-only, GQA + sliding window attention)
Mistral 7B v0.1. Competitive with 13B-parameter models despite its smaller size.
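The sliding window attention listed in the architecture above restricts each token to attending over a fixed-size window of recent positions rather than the full causal prefix. A minimal sketch of the resulting attention mask, assuming a toy sequence length and window size (Mistral 7B itself uses a much larger window):

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Causal mask with a sliding window: position i may attend only to
    positions j where i - window < j <= i. Toy sizes for illustration;
    the real model applies this per attention layer at scale."""
    i = np.arange(seq_len)[:, None]  # query positions (rows)
    j = np.arange(seq_len)[None, :]  # key positions (columns)
    return (j <= i) & (j > i - window)

mask = sliding_window_mask(seq_len=6, window=3)
```

With `window=3`, row 5 of `mask` allows positions 3, 4, 5 only: tokens far enough in the past fall outside the window, which caps per-token attention cost at the window size instead of the full sequence length.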
§ 01 · Benchmarks
Every benchmark Mistral 7B has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | CoNLL-2003 | Natural Language Processing · Named Entity Recognition | f1 | 83.5% | #7 | 2023-10-10 | source ↗ |
| 02 | SNLI | Natural Language Processing · Natural Language Inference | accuracy | 85.6% | #8 | 2023-10-10 | source ↗ |
The Rank column shows this model's position among all models scored on the same benchmark and metric (competitor count after the slash). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
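The sort order described above (best rank first, newest result first among ties) can be sketched as a key function; the row fields here are assumptions, not the site's actual schema:

```python
from datetime import date

# Hypothetical row shape mirroring the benchmark table; field names are assumed.
rows = [
    {"benchmark": "SNLI", "rank": 8, "date": date(2023, 10, 10)},
    {"benchmark": "CoNLL-2003", "rank": 7, "date": date(2023, 10, 10)},
]

# Lowest rank number first; among equal ranks, the newest date first.
rows.sort(key=lambda r: (r["rank"], -r["date"].toordinal()))
```

After sorting, the CoNLL-2003 row (rank #7) precedes the SNLI row (rank #8), matching the table above.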
§ 02 · Strengths by area
Where Mistral 7B performs best, by task area.
§ 03 · Papers
1 paper with results for Mistral 7B.
- 2023-10-10 · Natural Language Processing · 2 results · Mistral 7B
§ 04 · Related models
Other Mistral AI models scored on Codesota.
§ 05 · Sources & freshness
Where these numbers come from.
- arxiv · 2 results · 2 of 2 rows marked verified.