Model card

BRIO

Yale NLP · open-source · Unknown params · BART-large with contrastive learning objective · 6 results · 2 benchmarks

BRIO: Bringing Order to Abstractive Summarization. Liu et al. ACL 2022. Trains a BART-large model using a contrastive loss that assigns probability mass proportional to candidate summary quality, achieving SOTA on CNN/DM and XSum.
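Since the contrastive objective is the core of the method, a short sketch may help. Below is a minimal, illustrative PyTorch version of the pairwise ranking loss the paper describes, assuming candidate summaries are pre-sorted by descending quality (e.g., ROUGE against the reference); the tensor shapes, the length-penalty exponent alpha, and the margin value are assumptions, not the authors' released code.

```python
# Illustrative sketch (not the authors' code) of a BRIO-style contrastive
# ranking loss over candidate summaries. Candidates are assumed to be
# sorted by descending quality (e.g., ROUGE vs. the reference).
import torch
import torch.nn.functional as F


def length_normalized_scores(token_log_probs: torch.Tensor,
                             mask: torch.Tensor,
                             alpha: float = 1.0) -> torch.Tensor:
    """Length-normalized sequence log-probability per candidate.

    token_log_probs: (num_candidates, seq_len) log p(token) under the model
    mask:            (num_candidates, seq_len) 1 for real tokens, 0 for padding
    alpha:           length-penalty exponent (value here is an assumption)
    """
    mask = mask.float()
    summed = (token_log_probs * mask).sum(dim=-1)
    lengths = mask.sum(dim=-1).clamp(min=1.0)
    return summed / lengths.pow(alpha)


def contrastive_loss(scores: torch.Tensor, margin: float = 0.001) -> torch.Tensor:
    """Pairwise margin loss: a higher-quality candidate i should outscore a
    lower-quality candidate j by at least (j - i) * margin."""
    loss = scores.new_zeros(())
    n = scores.size(0)
    for i in range(n - 1):
        for j in range(i + 1, n):
            loss = loss + F.relu(scores[j] - scores[i] + (j - i) * margin)
    return loss
```

During training this term is combined with the usual cross-entropy loss on the reference summary, so the model keeps its generation ability while learning to rank its own candidates by quality.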

§ 01 · Benchmarks

Every benchmark BRIO has a recorded score for.

#  | Benchmark        | Area · Task                                       | Metric  | Value | Rank  | Date       | Source
01 | CNN/DailyMail    | Natural Language Processing · Text Summarization | ROUGE-1 | 47.8% | #1/6  | 2022-03-31 | source ↗
02 | CNN/DailyMail    | Natural Language Processing · Text Summarization | ROUGE-2 | 23.6% | #1/3  | 2022-03-31 | source ↗
03 | CNN/DailyMail    | Natural Language Processing · Text Summarization | ROUGE-L | 44.6% | #1/6  | 2022-03-31 | source ↗
04 | cnn-/-daily-mail | Natural Language Processing · Text Summarization | ROUGE-1 | 47.8% | #2/33 | 2022-03-31 | source ↗
05 | cnn-/-daily-mail | Natural Language Processing · Text Summarization | ROUGE-2 | 23.8% | #2/33 | 2022-03-31 | source ↗
06 | cnn-/-daily-mail | Natural Language Processing · Text Summarization | ROUGE-L | 44.5% | #2/33 | 2022-03-31 | source ↗
The Rank column shows this model's position relative to all other models scored on the same benchmark and metric (competitors after the slash); #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
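As a hypothetical illustration of the ranking and sorting rules just described (the data model and field names below are made up for the sketch, not Codesota's actual schema):

```python
# Hypothetical sketch of the Rank / sort logic described above: rank is a
# model's position among all results on the same (benchmark, metric), and a
# model's rows are sorted by rank, then by newest result date.
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class Result:
    model: str
    benchmark: str
    metric: str
    value: float       # e.g., a ROUGE score; higher is assumed better
    result_date: date


def rank_of(target: Result, all_results: list[Result]) -> tuple[int, int]:
    """Return (rank, field size) for target's benchmark + metric."""
    peers = sorted(
        (r for r in all_results
         if r.benchmark == target.benchmark and r.metric == target.metric),
        key=lambda r: r.value,
        reverse=True,
    )
    return peers.index(target) + 1, len(peers)


def table_order(rows: list[Result], all_results: list[Result]) -> list[Result]:
    """Order a model's rows by rank, then newest result first."""
    return sorted(rows,
                  key=lambda r: (rank_of(r, all_results)[0],
                                 -r.result_date.toordinal()))
```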
§ 02 · Strengths by area

Where BRIO actually performs.

Natural Language Processing · 2 benchmarks · avg rank #1.5
§ 03 · Papers

1 paper with results for BRIO.

  1. 2022-03-31 · Natural Language Processing · 6 results

    BRIO: Bringing Order to Abstractive Summarization

§ 04 · Sources & freshness

Where these numbers come from.

arXiv · 6 results · 6 of 6 rows marked verified.