Model card
PLBART
UCLA / Columbia University · code-lm · 140M params · Transformer encoder-decoder · MIT
Unified pre-training for program understanding and generation via denoising autoencoding. NAACL 2021 (arXiv:2103.06333).
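The denoising-autoencoding objective corrupts the input sequence and trains the encoder-decoder to reconstruct the original. A minimal sketch of one such corruption (random token masking; the function name and mask rate here are illustrative, and the paper's full recipe also includes token deletion and span infilling):

```python
import random

def add_noise(tokens, mask_rate=0.35, mask_token="<MASK>", seed=0):
    # Illustrative PLBART-style input corruption: randomly replace a
    # fraction of tokens with a mask token. During pre-training, the
    # seq2seq model receives the noised sequence and must emit the
    # original one. (Hypothetical sketch, not the paper's exact code.)
    rng = random.Random(seed)
    return [mask_token if rng.random() < mask_rate else t for t in tokens]

tokens = "def add ( a , b ) : return a + b".split()
noised = add_noise(tokens)          # encoder input
target = tokens                     # decoder reconstruction target
```

The reconstruction loss is then ordinary token-level cross-entropy between the decoder output and the uncorrupted sequence.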
§ 01 · Benchmarks
Every benchmark PLBART has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | codesearchnet-java | Code · Code Summarization (code-to-text) | smoothed-bleu-4 | 18.4% | #7 | — | source ↗ |
The Rank column shows this model's position among all models scored on the same benchmark and metric. #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
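The smoothed-bleu-4 metric above is BLEU-4 with add-one smoothing on the higher-order n-gram precisions (Lin & Och, 2004), which avoids zero scores on short hypotheses. A self-contained sketch, assuming a single reference per example as in the code-summarization setup:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # All contiguous n-grams of a token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def smoothed_bleu4(candidate, reference):
    # Smoothed BLEU-4: unsmoothed unigram precision, add-one
    # smoothing for n = 2..4, geometric mean, brevity penalty.
    # (Illustrative re-implementation, not the leaderboard's script.)
    precisions = []
    for n in range(1, 5):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        match = sum(min(c, ref[g]) for g, c in cand.items())
        total = sum(cand.values())
        if n == 1:
            precisions.append(match / total if total else 0.0)
        else:
            precisions.append((match + 1) / (total + 1))
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / 4
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(log_avg)
```

An identical candidate and reference score 1.0; scores are usually reported as percentages, so 0.184 corresponds to the 18.4% in the table.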
§ 05 · Sources & freshness
Where these numbers come from.
codexglue-leaderboard · 1 result · 1 of 1 rows marked verified.