Model card

PLBART

UCLA / Columbia University · code-lm · 140M params · Transformer encoder-decoder · MIT

PLBART ("Unified Pre-training for Program Understanding and Generation") is a sequence-to-sequence model pre-trained via denoising autoencoding over source code and natural language. Published at NAACL 2021; arXiv 2103.06333.
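Denoising autoencoding means the input sequence is corrupted and the model is trained to reconstruct the original. A minimal sketch of one BART-style noising step (span infilling) over a pre-tokenized sequence; `infill_noise` is a hypothetical helper for illustration, not PLBART's actual noising code:

```python
import random

def infill_noise(tokens, mask_ratio=0.35, mask_token="<mask>", seed=0):
    """Hypothetical sketch: replace short random spans with a single mask
    token. A denoising autoencoder is trained to map the noised sequence
    back to the original `tokens`."""
    rng = random.Random(seed)
    out, i = [], 0
    while i < len(tokens):
        if rng.random() < mask_ratio:
            span = rng.randint(1, 3)   # drop a span of 1-3 tokens...
            out.append(mask_token)     # ...and emit one mask in its place
            i += span
        else:
            out.append(tokens[i])
            i += 1
    return out
```

The training pair is then (noised sequence, original sequence), so the encoder-decoder must recover both deleted tokens and span lengths.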

§ 01 · Benchmarks

Every benchmark PLBART has a recorded score for.

#   Benchmark             Area · Task                Metric           Value   Rank   Date  Source
01  codesearchnet---java  Code · Code Summarization  smoothed-bleu-4  18.4%   #7/14  —     source ↗
The Rank column shows this model's position versus all other models scored on the same benchmark and metric (total competitors after the slash). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
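Smoothed BLEU-4 is the standard sentence-level metric for code summarization: n-gram precisions up to 4-grams, with smoothing so that a missing higher-order match does not zero out the score. A self-contained sketch using add-one smoothing for n > 1 (in the spirit of Lin & Och 2004); this approximates, but is not, the exact leaderboard script:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def smoothed_bleu4(candidate, reference):
    """Sentence-level smoothed BLEU-4 sketch: geometric mean of 1- to
    4-gram precisions (add-one smoothed for n > 1) times a brevity
    penalty."""
    log_prec = 0.0
    for n in range(1, 5):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        match = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        if n > 1:  # add-one smoothing keeps zero-match orders from killing the score
            match, total = match + 1, total + 1
        log_prec += math.log(max(match, 1e-9) / total) / 4.0
    # brevity penalty discourages overly short candidates
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(log_prec)
```

A corpus score like the 18.4% above is the average of such per-sentence scores over the test set, scaled to a percentage.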
§ 02 · Strengths by area

Where PLBART actually performs.

Code
1 benchmark · avg rank #7.0
§ 05 · Sources & freshness

Where these numbers come from.

codexglue-leaderboard
1 result · 1 of 1 rows marked verified.