Model card
mDeBERTa-v3-base
Microsoft · open-source · 86M params · DeBERTa-v3 (multilingual)
DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing. ICLR 2023.
§ 01 · Benchmarks
All benchmarks with a recorded score for mDeBERTa-v3-base.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | XNLI | Natural Language Processing · Zero-Shot Classification | accuracy | 80.8% | #3 | 2023-01-01 | source ↗ |
Rank shows this model's position among all models scored on the same benchmark and metric; #1 marks the current SOTA. Rows are sorted by rank, then by most recent result.
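In practice, XNLI-style zero-shot classification is exercised through an NLI-fine-tuned checkpoint rather than the raw base model. A minimal usage sketch via the transformers zero-shot-classification pipeline; the checkpoint name below is an assumption (substitute whichever XNLI/MNLI fine-tune of mDeBERTa-v3-base you evaluate):

```python
# Hedged sketch: zero-shot classification with an NLI fine-tune of
# mDeBERTa-v3-base. The model id below is assumed, not an official release.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/mDeBERTa-v3-base-mnli-xnli",  # assumed NLI fine-tune
)

# The multilingual backbone lets candidate labels and inputs mix languages.
result = classifier(
    "Angela Merkel ist eine Politikerin in Deutschland",  # German premise
    candidate_labels=["politics", "economy", "sports"],
)
print(result["labels"][0], result["scores"][0])  # top label and its score
```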
§ 02 · Strengths by area
Where mDeBERTa-v3-base performs best.
§ 03 · Papers
1 paper with results for mDeBERTa-v3-base.
- 2023-01-01 · Natural Language Processing · 1 result
  DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing
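For context, the paper's headline change is replacing masked-language-model pre-training with ELECTRA-style replaced-token detection (RTD), combined with gradient-disentangled embedding sharing (GDES) between generator and discriminator. A minimal sketch of the RTD objective only, with assumed generator/discriminator interfaces (GDES is omitted):

```python
# Hedged sketch of ELECTRA-style replaced-token detection (RTD).
# `generator` and `discriminator` are assumed nn.Modules: the generator
# returns (batch, seq, vocab) MLM logits, the discriminator returns
# (batch, seq, 1) replaced-vs-original logits. Not the released code.
import torch
import torch.nn as nn


def rtd_loss(generator, discriminator, input_ids, mask_positions, mask_token_id):
    """One RTD step: the generator fills masked positions, the
    discriminator labels every token as original (0) or replaced (1)."""
    masked = input_ids.clone()
    masked[mask_positions] = mask_token_id

    # Generator proposes replacements for the masked positions (plain MLM);
    # greedy decoding here for brevity, sampling in the original setup.
    gen_logits = generator(masked)                      # (batch, seq, vocab)
    proposed = gen_logits.argmax(dim=-1)                # (batch, seq)

    corrupted = input_ids.clone()
    corrupted[mask_positions] = proposed[mask_positions]

    # Discriminator detects which tokens were actually replaced.
    disc_logits = discriminator(corrupted).squeeze(-1)  # (batch, seq)
    labels = (corrupted != input_ids).float()
    return nn.functional.binary_cross_entropy_with_logits(disc_logits, labels)
```

DeBERTaV3's contribution on top of this objective is GDES: generator and discriminator share token embeddings, but the discriminator's gradients are stopped from flowing back into the shared embeddings, avoiding the "tug-of-war" between the two losses.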
§ 04 · Related models
Other Microsoft models scored on Codesota.
§ 05 · Sources & freshness
Where these numbers come from.
arXiv · 1 result
1 of 1 rows marked verified.