Model card

TriSum-J

TriSum Authors · open-source · BART-large distilled from GPT-3.5 with structured rationale

TriSum: Learning Summarization Ability from Large Language Models with Structured Rationale. Jiang et al., NAACL 2024. TriSum-J is the joint-learning-stage variant: it distills GPT-3.5 rationales (an aspect-triple-summary structure) into a smaller BART model.
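For orientation, here is a minimal usage sketch for a BART-large summarizer such as TriSum-J, via the Hugging Face transformers library. The Hub id "trisum/trisum-j" is hypothetical (this card does not list a published checkpoint), and the generation settings are the usual BART-on-CNN/DailyMail defaults, not values confirmed by the paper.

    # Minimal sketch: summarize an article with a BART-large checkpoint.
    # The Hub id below is hypothetical -- substitute the real checkpoint
    # if/where the authors publish one.
    from transformers import BartForConditionalGeneration, BartTokenizer

    MODEL_ID = "trisum/trisum-j"  # hypothetical id, not confirmed by this card

    tokenizer = BartTokenizer.from_pretrained(MODEL_ID)
    model = BartForConditionalGeneration.from_pretrained(MODEL_ID)

    article = "(a CNN/DailyMail-style news article goes here)"

    inputs = tokenizer(article, truncation=True, max_length=1024, return_tensors="pt")
    summary_ids = model.generate(
        **inputs,
        num_beams=4,        # beam search, the usual BART summarization setting
        max_length=142,     # typical CNN/DailyMail generation bounds for BART
        min_length=56,
        length_penalty=2.0,
        early_stopping=True,
    )
    print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))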

§ 01 · Benchmarks

Every benchmark TriSum-J has a recorded score for.

#  | Benchmark        | Area · Task                                                   | Metric  | Value | Rank  | Date       | Source
01 | cnn-/-daily-mail | Natural Language Processing · Abstractive Text Summarization | rouge-2 | 22.7% | #5/33 | 2024-03-15 | source ↗
02 | cnn-/-daily-mail | Natural Language Processing · Abstractive Text Summarization | rouge-1 | 45.7% | #6/33 | 2024-03-15 | source ↗
03 | cnn-/-daily-mail | Natural Language Processing · Abstractive Text Summarization | rouge-l | 41.9% | #6/33 | 2024-03-15 | source ↗
The Rank column shows this model's position among all models scored on the same benchmark and metric (total competitors after the slash); #1 marks current SOTA. Rows are sorted by rank, then by newest result.
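For reference, the rouge-1 / rouge-2 / rouge-l values above are ROUGE F1 scores expressed as percentages. Below is a sketch of computing them with Google's rouge-score package; whether the source numbers were produced with this exact package is an assumption.

    # Sketch: computing ROUGE-1/2/L with Google's rouge-score package
    # (pip install rouge-score). Leaderboards report the F-measure, shown
    # here as a percentage to match the table.
    from rouge_score import rouge_scorer

    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

    reference = "police killed the gunman after a standoff at the mall"
    candidate = "the gunman was killed by police following a mall standoff"

    scores = scorer.score(reference, candidate)  # signature: score(target, prediction)
    for name, s in scores.items():
        # each Score holds precision, recall, and fmeasure in [0, 1]
        print(f"{name}: {s.fmeasure * 100:.1f}%")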
§ 02 · Strengths by area

Areas where TriSum-J has recorded results.

Natural Language Processing · 1 benchmark · avg rank #5.7 (mean of ranks #5, #6, and #6 above)
§ 03 · Papers

1 paper with results for TriSum-J.

  1. TriSum: Learning Summarization Ability from Large Language Models with Structured Rationale · 2024-03-15 · Natural Language Processing · 3 results

§ 04 · Sources & freshness

Where these numbers come from.

arxiv · 3 results · 3 of 3 rows marked verified.