Codesota · Models · BART-base (STSM)
Meta · 5 results · 1 benchmark
Model card

BART-base (STSM)

Meta · unknown · 139M params · Transformer

BART-base baseline from the STSM paper (Self-training from Self-memory, arXiv:2401.10567, Jan 2024).

§ 01 · Benchmarks

Every benchmark BART-base (STSM) has a recorded score for.

#  | Benchmark | Area · Task                                     | Metric  | Value | Rank | Date       | Source
01 | e2e       | Computer Vision · Optical Character Recognition | bleu    | 65.7% | #8/9 | 2024-01-19 | source ↗
02 | e2e       | Computer Vision · Optical Character Recognition | cider   | 2.2%  | #8/9 | 2024-01-19 | source ↗
03 | e2e       | Computer Vision · Optical Character Recognition | meteor  | 45.6% | #8/9 | 2024-01-19 | source ↗
04 | e2e       | Computer Vision · Optical Character Recognition | rouge-l | 68.8% | #8/9 | 2024-01-19 | source ↗
05 | e2e       | Computer Vision · Optical Character Recognition | nist    | 8.5%  | #9/9 | 2024-01-19 | source ↗
The Rank column shows this model’s position among all models scored on the same benchmark + metric (total competitors after the slash). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
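The rank computation described above can be sketched in a few lines (a hypothetical illustration, not Codesota's actual code; the higher-is-better scoring convention and all names below are assumptions):

```python
def rank_of(model: str, scores: dict[str, float]) -> str:
    # Rank = 1 + number of models with a strictly better score
    # on the same benchmark + metric (higher assumed better).
    rank = 1 + sum(1 for v in scores.values() if v > scores[model])
    return f"#{rank}/{len(scores)}"

# Hypothetical bleu leaderboard for one benchmark:
leaderboard = {"model-a": 72.1, "model-b": 70.3, "bart-base": 65.7}
print(rank_of("bart-base", leaderboard))  # → #3/3
```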
§ 02 · Strengths by area

How BART-base (STSM) performs, grouped by area.

Computer Vision · 1 benchmark · avg rank #8.2
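The average rank shown above is just the mean of the per-metric ranks from § 01 (a minimal sketch; rounding to one decimal is assumed to match the displayed value):

```python
def avg_rank(ranks: list[int]) -> float:
    # Mean of per-benchmark-metric ranks, rounded to one decimal.
    return round(sum(ranks) / len(ranks), 1)

# BART-base (STSM) places #8 on four e2e metrics and #9 on nist:
print(avg_rank([8, 8, 8, 8, 9]))  # → 8.2
```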
§ 03 · Papers

1 paper with results for BART-base (STSM).

  1. 2024-01-19 · 5 results

    Self-training from Self-memory in Data-to-text Generation

    Hoang-Thang Ta, Abu Bakar Siddiqur Rahman, Akira Utsumi
§ 04 · Related models

Other Meta models scored on Codesota.

DeiT-B Distilled · 86M params · 2 results · 1 SOTA
Llama 3 70B · 8 results
Llama 3.1 405B · 6 results
Llama-4-Maverick · 400B total / 17B active (128 experts) params · 6 results
Llama 3.1 70B · 4 results
Code Llama 34B · unknown params · 2 results
ConvNeXt V2 Huge · 650M params · 2 results
CodeLlama 70B · 70B params · 1 result
§ 05 · Sources & freshness

Where these numbers come from.

papers-with-code · 5 results · 5 of 5 rows marked verified.