Model card
GPT-2-Medium (fine-tuning).
OpenAI · unknown · 355M params · Transformer
GPT-2 Medium fine-tuned on the E2E NLG Challenge dataset. Results reported in the HTLM paper (arXiv:2107.06955).
§ 01 · Benchmarks
Every benchmark GPT-2-Medium (fine-tuning) has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | e2e | Natural Language Processing · Data-to-Text Generation | CIDEr | 2.5 | #2 | 2021-07-14 | source ↗ |
| 02 | e2e | Natural Language Processing · Data-to-Text Generation | METEOR | 46.2% | #2 | 2021-07-14 | source ↗ |
| 03 | e2e | Natural Language Processing · Data-to-Text Generation | ROUGE-L | 71.0% | #4 | 2021-07-14 | source ↗ |
| 04 | e2e | Natural Language Processing · Data-to-Text Generation | BLEU | 68.2% | #6 | 2021-07-14 | source ↗ |
| 05 | e2e | Natural Language Processing · Data-to-Text Generation | NIST | 8.6 | #6 | 2021-07-14 | source ↗ |
Rank column shows this model’s position among all models scored on the same benchmark and metric; rank #1 marks the current SOTA. Sorted by rank, then newest result.
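The BLEU, NIST, METEOR, ROUGE-L, and CIDEr values above are n-gram overlap metrics between generated text and human references. As an illustration only, here is a minimal pure-Python sketch of sentence-level BLEU (clipped n-gram precision plus brevity penalty, no smoothing); the restaurant-description strings are invented examples in the style of E2E, not actual dataset entries, and leaderboard scores use standard tooling rather than this sketch.

```python
from collections import Counter
from math import exp, log

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(hypothesis, references, max_n=4):
    """Sentence-level BLEU sketch: geometric mean of clipped n-gram
    precisions times a brevity penalty. No smoothing, so any missing
    n-gram order yields 0.0."""
    hyp = hypothesis.split()
    refs = [r.split() for r in references]
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hyp, n))
        # Clip: each hypothesis n-gram is credited at most as often as it
        # appears in the best-matching reference.
        max_ref = Counter()
        for r in refs:
            for gram, count in Counter(ngrams(r, n)).items():
                max_ref[gram] = max(max_ref[gram], count)
        overlap = sum(min(c, max_ref[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        if overlap == 0:
            return 0.0
        precisions.append(overlap / total)
    # Brevity penalty against the reference closest in length.
    ref_len = min((abs(len(r) - len(hyp)), len(r)) for r in refs)[1]
    bp = 1.0 if len(hyp) >= ref_len else exp(1 - ref_len / len(hyp))
    return bp * exp(sum(log(p) for p in precisions) / max_n)

# Invented example: a hypothesis identical to its reference scores 1.0.
print(round(bleu("the eagle is a cheap coffee shop",
                 ["the eagle is a cheap coffee shop"]), 3))  # → 1.0
```

E2E provides several human references per input, which is why `bleu` accepts a list of references and clips counts against the best match for each n-gram.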
§ 02 · Strengths by area
Where GPT-2-Medium (fine-tuning) performs well.
§ 03 · Papers
1 paper with results for GPT-2-Medium (fine-tuning).
- 2021-07-14 · Natural Language Processing · 5 results
HTLM: Hyper-Text Pre-Training and Prompting of Language Models
§ 04 · Related models
Other OpenAI models scored on Codesota.
§ 05 · Sources & freshness
Where these numbers come from.
papers-with-code · 5 results · 5 of 5 rows marked verified.