Model card
RoBERTa-large
Facebook AI · open-source · 355M params · BERT-large (robustly optimized)
RoBERTa: A Robustly Optimized BERT Pretraining Approach. 2019.
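RoBERTa-large is pretrained with a masked-language-modeling objective, so it can be queried directly on its fill-mask task. A minimal sketch, assuming the publicly hosted `roberta-large` checkpoint on the Hugging Face Hub (the hosting hub is not specified by this card):

```python
# Minimal fill-mask sketch for RoBERTa-large via the Hugging Face
# transformers pipeline. Assumes the public "roberta-large" checkpoint.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-large")

# RoBERTa uses "<mask>" as its mask token (unlike BERT's "[MASK]").
for candidate in fill_mask("Paris is the <mask> of France.", top_k=3):
    print(f"{candidate['token_str'].strip()}: {candidate['score']:.3f}")
```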
§ 01 · Benchmarks
Every benchmark RoBERTa-large has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | GLUE | Natural Language Processing · Fill-Mask | avg-score | 88.5% | #3 | 2019-07-26 | source |
The Rank column shows this model's position among all other models scored on the same benchmark and metric; #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
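The avg-score metric is typically the unweighted mean of the benchmark's per-task scores. A minimal sketch with hypothetical per-task values (not the numbers behind the 88.5% reported above):

```python
# Sketch of a GLUE-style avg-score: the unweighted mean of per-task scores.
# Task names are real GLUE tasks; the values are hypothetical placeholders.
glue_scores = {
    "MNLI": 90.0,
    "QNLI": 94.0,
    "SST-2": 96.0,
}

avg_score = sum(glue_scores.values()) / len(glue_scores)
print(f"avg-score: {avg_score:.1f}")  # -> 93.3
```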
§ 02 · Strengths by area
Where RoBERTa-large performs best.
§ 03 · Papers
1 paper with results for RoBERTa-large.
- 2019-07-26 · Natural Language Processing · 1 result
  RoBERTa: A Robustly Optimized BERT Pretraining Approach
§ 04 · Related models
Other Facebook AI models scored on Codesota.
§ 05 · Sources & freshness
Where these numbers come from.
arXiv: 1 result. 1 of 1 rows marked verified.