Model card

RoBERTa-large

Facebook AI · open-source · 355M params · BERT-large (robustly optimized)

RoBERTa: A Robustly Optimized BERT Pretraining Approach. 2019.
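
The Fill-Mask task scored below can be exercised directly. A minimal sketch using the Hugging Face transformers pipeline with the public roberta-large checkpoint; the library choice and the example sentence are illustrative, not part of this card's data:

```python
# Minimal masked-token prediction with RoBERTa-large via the
# Hugging Face transformers fill-mask pipeline.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-large")

# RoBERTa uses "<mask>" as its mask token (illustrative sentence).
for prediction in fill_mask("RoBERTa is a robustly optimized <mask> pretraining approach."):
    print(f"{prediction['token_str']!r}: {prediction['score']:.3f}")
```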

§ 01 · Benchmarks

Every benchmark RoBERTa-large has a recorded score for.

#    Benchmark   Area · Task                                Metric      Value    Rank    Date         Source
01   GLUE        Natural Language Processing · Fill-Mask    avg-score   88.5%    #3/3    2019-07-26   source ↗
The Rank column shows this model's position versus all other models scored on the same benchmark + metric (total competitors after the slash); #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
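
To make the rank semantics concrete, a sketch of the computation the note above describes; the competitor scores here are hypothetical placeholders, not data from this page:

```python
# Rank = position of this model's score among all models with a result
# on the same benchmark + metric, "#position/total".
def rank_on_benchmark(model_score: float, all_scores: list[float]) -> str:
    """Competition ranking by descending score; ties share a position."""
    better = sum(1 for s in all_scores if s > model_score)
    return f"#{better + 1}/{len(all_scores)}"

# Three models scored on GLUE avg-score (competitor values hypothetical):
glue_avg_scores = [90.8, 89.4, 88.5]
print(rank_on_benchmark(88.5, glue_avg_scores))  # -> "#3/3"
```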
§ 02 · Strengths by area

How RoBERTa-large performs in each area it has results for.

Natural Language Processing · 1 benchmark · avg rank #3.0
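
A one-line sketch of how the per-area figure is presumably aggregated (mean of the model's per-benchmark ranks; this is an assumption about Codesota's method, not documented on the page):

```python
# Area "avg rank" as the mean of per-benchmark ranks. With a single
# GLUE result at rank 3, the average is 3.0.
nlp_ranks = [3]  # one benchmark (GLUE), rank #3
avg_rank = sum(nlp_ranks) / len(nlp_ranks)
print(f"avg rank #{avg_rank:.1f}")  # -> "avg rank #3.0"
```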
§ 03 · Papers

1 paper with results for RoBERTa-large.

  1. 2019-07-26 · Natural Language Processing · 1 result

     RoBERTa: A Robustly Optimized BERT Pretraining Approach

§ 04 · Related models

Other Facebook AI models scored on Codesota.

SpanBERT · 1 result
RoBERTa (single model) · 0 results
XLM-RoBERTa-large · 560M params · 0 results
§ 05 · Sources & freshness

Where these numbers come from.

arXiv · 1 result

1 of 1 rows marked verified.