Model card
GPT-2-Medium (prefix-tuning).
OpenAI · unknown · 355M params · Transformer
GPT-2 Medium with prefix-tuning (~0.1% of parameters trained) on E2E NLG. Reported in the HTLM paper (arXiv:2107.06955).
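Prefix-tuning freezes the base model and trains only a small continuous prefix prepended to each attention layer. As a rough illustration (not the paper's exact setup), the Hugging Face `peft` library can express this configuration; the prefix length below is an assumed hyperparameter:

```python
# Minimal prefix-tuning sketch using Hugging Face `transformers` + `peft`.
# Hyperparameters are illustrative assumptions, not the paper's settings.
from transformers import AutoModelForCausalLM
from peft import PrefixTuningConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2-medium")  # 355M-param GPT-2 Medium

# Trainable "virtual token" key/value prefixes are prepended at every
# attention layer; all of the original model weights stay frozen.
config = PrefixTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,  # assumed prefix length
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # trainable fraction is well under 1%
```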
§ 01 · Benchmarks
Every benchmark GPT-2-Medium (prefix-tuning) has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | e2e | Natural Language Processing · Data-to-Text Generation | cider | 2.5 | #1 | 2021-07-14 | source ↗ |
| 02 | e2e | Natural Language Processing · Data-to-Text Generation | rouge-l | 71.4% | #2 | 2021-07-14 | source ↗ |
| 03 | e2e | Natural Language Processing · Data-to-Text Generation | bleu | 69.7% | #4 | 2021-07-14 | source ↗ |
| 04 | e2e | Natural Language Processing · Data-to-Text Generation | meteor | 46.1% | #4 | 2021-07-14 | source ↗ |
| 05 | e2e | Natural Language Processing · Data-to-Text Generation | nist | 8.8 | #4 | 2021-07-14 | source ↗ |
The Rank column shows this model's position against all other models scored on the same benchmark and metric; #1 marks the current SOTA. Rows are sorted by rank, then by newest result. CIDEr and NIST are raw scores, not percentages.
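The generation metrics in the table (BLEU, ROUGE-L, METEOR, NIST, CIDEr) score model output against human references. A hedged sketch of how such a score can be computed, using `sacrebleu` (an assumption, not necessarily the leaderboard's scorer) and made-up example strings:

```python
# Scoring generated text against a reference, as done for the e2e rows above.
# `sacrebleu` and the example strings are assumptions for illustration only.
import sacrebleu

hypotheses = ["The Vaults is a family-friendly pub near Café Adriatic."]
# One reference stream, aligned with `hypotheses` (E2E provides several
# references per input; a single one is used here to keep the sketch short).
references = [["The Vaults pub near Café Adriatic is family friendly."]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU: {bleu.score:.1f}")  # 0-100 scale, like the 69.7 in the table
```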
§ 02 · Strengths by area
Where GPT-2-Medium (prefix-tuning) actually performs. All five recorded results come from a single benchmark (e2e), so no per-area breakdown is available.
§ 03 · Papers
1 paper with results for GPT-2-Medium (prefix-tuning).
- 2021-07-14 · Natural Language Processing · 5 results
HTLM: Hyper-Text Pre-Training and Prompting of Language Models
§ 04 · Related models
Other OpenAI models scored on Codesota.
§ 05 · Sources & freshness
Where these numbers come from.
papers-with-code · 5 results · 5 of 5 rows marked verified.