Dataset from Papers With Code
32 results indexed across 7 metrics. The top row of each table is the current SOTA for that metric; ties are broken by submission date.
| # | Model | Org | Submitted | Paper / code | BERTScore |
|---|---|---|---|---|---|
| 01 | T5B Baseline | — | Oct 2023 | papers-with-code · code | 0.951 |
| 02 | FactT5B | — | Oct 2023 | papers-with-code · code | 0.951 |
| 03 | JointGT Baseline | — | Oct 2023 | papers-with-code · code | 0.949 |
| 04 | FactJointGT | — | Oct 2023 | papers-with-code · code | 0.949 |
| 05 | GPT-2-Large (fine-tuning) | — | Jul 2021 | HTLM: Hyper-Text Pre-Training and Prompting of Language … | 0.940 |
| 06 | HTLM (fine-tuning) | — | Jul 2021 | HTLM: Hyper-Text Pre-Training and Prompting of Language … | 0.940 |
| # | Model | Org | Submitted | Paper / code | BLEU |
|---|---|---|---|---|---|
| 01 | T5B Baseline | — | Oct 2023 | papers-with-code · code | 48.47 |
| 02 | FactT5B | — | Oct 2023 | papers-with-code · code | 48.37 |
| 03 | JointGT Baseline | — | Oct 2023 | papers-with-code · code | 47.51 |
| 04 | FactJointGT | — | Oct 2023 | papers-with-code · code | 47.39 |
| 05 | HTLM (fine-tuning) | — | Jul 2021 | HTLM: Hyper-Text Pre-Training and Prompting of Language … | 47.20 |
| 06 | GPT-2-Large (fine-tuning) | — | Jul 2021 | HTLM: Hyper-Text Pre-Training and Prompting of Language … | 47.00 |
| # | Model | Org | Submitted | Paper / code | BLEURT |
|---|---|---|---|---|---|
| 01 | T5B Baseline | — | Oct 2023 | papers-with-code · code | 0.675 |
| 02 | FactT5B | — | Oct 2023 | papers-with-code · code | 0.674 |
| 03 | JointGT Baseline | — | Oct 2023 | papers-with-code · code | 0.673 |
| 04 | FactJointGT | — | Oct 2023 | papers-with-code · code | 0.673 |
| 05 | GPT-2-Large (fine-tuning) | — | Jul 2021 | HTLM: Hyper-Text Pre-Training and Prompting of Language … | 0.400 |
| 06 | HTLM (fine-tuning) | — | Jul 2021 | HTLM: Hyper-Text Pre-Training and Prompting of Language … | 0.400 |
| # | Model | Org | Submitted | Paper / code | FactSpotter |
|---|---|---|---|---|---|
| 01 | FactT5B | — | Oct 2023 | papers-with-code · code | 97.60 |
| 02 | FactJointGT | — | Oct 2023 | papers-with-code · code | 97.25 |
| 03 | T5B Baseline | — | Oct 2023 | papers-with-code · code | 96.65 |
| 04 | JointGT Baseline | — | Oct 2023 | papers-with-code · code | 95.86 |
| # | Model | Org | Submitted | Paper / code | METEOR |
|---|---|---|---|---|---|
| 01 | T5B Baseline | — | Oct 2023 | papers-with-code · code | 0.407 |
| 02 | FactT5B | — | Oct 2023 | papers-with-code · code | 0.407 |
| 03 | JointGT Baseline | — | Oct 2023 | papers-with-code · code | 0.404 |
| 04 | FactJointGT | — | Oct 2023 | papers-with-code · code | 0.403 |
| 05 | HTLM (fine-tuning) | — | Jul 2021 | HTLM: Hyper-Text Pre-Training and Prompting of Language … | 0.390 |
| 06 | GPT-2-Large (fine-tuning) | — | Jul 2021 | HTLM: Hyper-Text Pre-Training and Prompting of Language … | 0.390 |
| # | Model | Org | Submitted | Paper / code | MoverScore |
|---|---|---|---|---|---|
| 01 | HTLM (fine-tuning) | — | Jul 2021 | HTLM: Hyper-Text Pre-Training and Prompting of Language … | 0.510 |
| 02 | GPT-2-Large (fine-tuning) | — | Jul 2021 | HTLM: Hyper-Text Pre-Training and Prompting of Language … | 0.510 |
| # | Model | Org | Submitted | Paper / code | TER |
|---|---|---|---|---|---|
| 01 | GPT-2-Large (fine-tuning) | — | Jul 2021 | HTLM: Hyper-Text Pre-Training and Prompting of Language … | 0.460 |
| 02 | HTLM (fine-tuning) | — | Jul 2021 | HTLM: Hyper-Text Pre-Training and Prompting of Language … | 0.440 |
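The ranking rule used in the tables (sort by metric score, break ties by submission date) can be sketched as follows. This is an illustrative re-implementation, not the site's actual code; the rows, the assumption that earlier submissions win ties, and the `higher_is_better` flag (TER is lower-is-better, the other metrics here are higher-is-better) are all assumptions.

```python
from datetime import date

# Hypothetical rows for one metric: (model, submitted, score).
rows = [
    ("FactT5B",            date(2023, 10, 1), 0.951),
    ("T5B Baseline",       date(2023, 10, 1), 0.951),
    ("JointGT Baseline",   date(2023, 10, 1), 0.949),
    ("HTLM (fine-tuning)", date(2021, 7, 1),  0.940),
]

def rank(rows, higher_is_better=True):
    """Sort best score first; break ties by earlier submission date (assumed)."""
    return sorted(
        rows,
        key=lambda r: (-r[2] if higher_is_better else r[2], r[1]),
    )

leaderboard = rank(rows)  # best model is leaderboard[0]
```

For a lower-is-better metric such as TER, call `rank(rows, higher_is_better=False)`.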
Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate that step on the progress chart with your name.
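A reproduction script for such a submission might be structured like the minimal sketch below. Everything here is assumed rather than specified by the leaderboard: the JSON-lines prediction format, the file names, and the `generate` function, which is a placeholder for loading your checkpoint and decoding.

```python
import json

def generate(inputs):
    """Placeholder: replace with checkpoint loading and model decoding."""
    return [f"output for: {x}" for x in inputs]

def write_predictions(inputs, out_path="predictions.jsonl"):
    """Write one JSON object per line; the exact expected format is assumed."""
    with open(out_path, "w") as f:
        for pred in generate(inputs):
            f.write(json.dumps({"prediction": pred}) + "\n")
```

Keeping the script to two entry points (decode, then serialize) makes it easy for a maintainer to swap in their own test inputs and rerun the metrics.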