Model card

XLM-RoBERTa-large

Facebook AI · open-source · 560M params · RoBERTa-large architecture (multilingual)

Unsupervised Cross-lingual Representation Learning at Scale. ACL 2020.

§ 01 · Benchmarks

Every benchmark XLM-RoBERTa-large has a recorded score for.

 # | Benchmark | Area · Task                                            | Metric   | Value | Rank | Date       | Source
01 | XNLI      | Natural Language Processing · Zero-Shot Classification | accuracy | 83.6% | #2/3 | 2019-11-05 | source ↗
The Rank column shows this model's position among all models scored on the same benchmark and metric (competing models after the slash). #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
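The sort order used for the table (rank ascending, then newest result first) can be sketched as follows; only the XNLI row is real, the other rows are hypothetical placeholders added for illustration:

```python
from datetime import date

# Benchmark rows as they might appear before sorting.
# Only the XNLI entry comes from the table above; the rest are hypothetical.
rows = [
    {"benchmark": "Example-B (hypothetical)", "rank": 5, "date": date(2020, 1, 1)},
    {"benchmark": "Example-A (hypothetical)", "rank": 2, "date": date(2019, 6, 1)},
    {"benchmark": "XNLI", "rank": 2, "date": date(2019, 11, 5)},
]

# Sort key: rank ascending; the negated date ordinal breaks ties
# so that newer results come first within the same rank.
sorted_rows = sorted(rows, key=lambda r: (r["rank"], -r["date"].toordinal()))

print([r["benchmark"] for r in sorted_rows])
# → ['XNLI', 'Example-A (hypothetical)', 'Example-B (hypothetical)']
```

Both XNLI and Example-A share rank #2, so the tie is broken by date, putting the newer XNLI result first.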
§ 02 · Strengths by area

Where XLM-RoBERTa-large actually performs.

Natural Language Processing — 1 benchmark · avg rank #2.0
§ 03 · Papers

1 paper with results for XLM-RoBERTa-large.

  1. 2019-11-05 · Natural Language Processing · 1 result

    Unsupervised Cross-lingual Representation Learning at Scale

§ 04 · Related models

Other Facebook AI models scored on Codesota.

SpanBERT — 1 result
RoBERTa (single model) — 0 results
RoBERTa-large — 355M params · 0 results
§ 05 · Sources & freshness

Where these numbers come from.

arXiv · 1 result — 1 of 1 rows marked verified.