Model card
XLM-RoBERTa-large
Facebook AI · open-source · 560M params · RoBERTa-large architecture (multilingual)
Unsupervised Cross-lingual Representation Learning at Scale. ACL 2020.
§ 01 · Benchmarks
Every benchmark XLM-RoBERTa-large has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | XNLI | Natural Language Processing · Zero-Shot Classification | accuracy | 83.6% | #2 | 2019-11-05 | source |
The Rank column shows this model’s position among all other models scored on the same benchmark and metric; #1 marks the current SOTA. Rows are sorted by rank, then by most recent result.
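The card itself includes no usage code. As a hedged sketch only (assuming the Hugging Face `transformers` library and the public `xlm-roberta-large` checkpoint, neither of which this card names, and an illustrative helper name `embed`), the model behind these scores can be loaded like so:

```python
# Hedged sketch, not from this card: loading the public "xlm-roberta-large"
# checkpoint with the Hugging Face transformers library (an assumption --
# the card names no toolkit). `embed` is a hypothetical helper name.
import torch
from transformers import AutoModel, AutoTokenizer


def embed(sentences, model_name="xlm-roberta-large"):
    """Mean-pooled last-layer embeddings for a batch of sentences."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)
    model.eval()
    batch = tokenizer(sentences, padding=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)
    # Average token vectors, masking out padding positions.
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (out.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)


# Usage (downloads the model weights on first call):
#   vecs = embed(["Hello world", "Bonjour le monde"])
```

Because the tokenizer and model share one multilingual vocabulary, the same call works for any of the model's training languages; the XNLI zero-shot setting above fine-tunes on English NLI data only and evaluates on the other languages.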
§ 02 · Strengths by area
Where XLM-RoBERTa-large actually performs.
§ 03 · Papers
1 paper with results for XLM-RoBERTa-large.
- 2019-11-05 · Natural Language Processing · 1 result
Unsupervised Cross-lingual Representation Learning at Scale
§ 04 · Related models
Other Facebook AI models scored on Codesota.
§ 05 · Sources & freshness
Where these numbers come from.
arxiv: 1 result
1 of 1 rows marked verified.