Dataset from Papers With Code
Six results indexed on one metric (accuracy). The shaded row marks the current SOTA; ties are broken by submission date.
| # | Model | Org | Submitted | Paper / code | Accuracy (%) |
|---|---|---|---|---|---|
| 01 | XLMft UDA | — | Sep 2019 | Bridging the domain gap in cross-lingual document classi… · code | 96.05 |
| 02 | MultiFiT, pseudo | — | Sep 2019 | MultiFiT: Efficient Multi-lingual Language Model Fine-tu… · code | 89.42 |
| 03 | Massively Multilingual Sentence Embeddings | — | Dec 2018 | Massively Multilingual Sentence Embeddings for Zero-Shot… · code | 77.95 |
| 04 | BiLSTM (UN) | — | May 2018 | A Corpus for Multilingual Document Classification in Eig… · code | 74.52 |
| 05 | BiLSTM (Europarl) | — | May 2018 | A Corpus for Multilingual Document Classification in Eig… · code | 72.83 |
| 06 | MultiCCA + CNN | — | May 2018 | A Corpus for Multilingual Document Classification in Eig… · code | 72.38 |
Each row below marks a model that broke the previous record on accuracy (higher is better); each subsequent entry improved on the previous best. Intermediate submissions remain in the leaderboard above; only SOTA-setting entries are re-listed here.
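The record-breaking entries can be recovered mechanically from the table: walk submissions in chronological order and keep each one that beats the running best. A minimal sketch, using the models and scores from the table above (the day-of-month in each date is an assumption, since the table lists only month and year):

```python
from datetime import date

# (model, submission date, accuracy %) copied from the leaderboard table;
# first-of-month dates are placeholders for the month-level "Submitted" column.
submissions = [
    ("XLMft UDA", date(2019, 9, 1), 96.05),
    ("MultiFiT, pseudo", date(2019, 9, 1), 89.42),
    ("Massively Multilingual Sentence Embeddings", date(2018, 12, 1), 77.95),
    ("BiLSTM (UN)", date(2018, 5, 1), 74.52),
    ("BiLSTM (Europarl)", date(2018, 5, 1), 72.83),
    ("MultiCCA + CNN", date(2018, 5, 1), 72.38),
]

def sota_steps(rows):
    """Walk submissions oldest-first and keep only entries that
    beat the best accuracy seen so far (higher scores win)."""
    best = float("-inf")
    steps = []
    for model, when, acc in sorted(rows, key=lambda r: (r[1], r[2])):
        if acc > best:
            best = acc
            steps.append((model, when, acc))
    return steps

for model, when, acc in sota_steps(submissions):
    print(f"{when:%b %Y}  {acc:6.2f}  {model}")
```

With this data every entry sets a new record, so all six rows appear as steps on the progress chart.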
Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
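The core of a reproduction script is just scoring the checkpoint's predictions against the gold labels. A hedged sketch of that scoring step only; the file names and one-label-per-line format here are hypothetical stand-ins, not the benchmark's actual submission format:

```python
import os
import tempfile

def accuracy(pred_path: str, label_path: str) -> float:
    """Score one prediction per line against gold labels, as a percentage."""
    with open(pred_path) as p, open(label_path) as g:
        preds = [line.strip() for line in p]
        golds = [line.strip() for line in g]
    if len(preds) != len(golds):
        raise ValueError("prediction/label count mismatch")
    correct = sum(a == b for a, b in zip(preds, golds))
    return 100.0 * correct / len(golds)

# Self-check with throwaway files standing in for a real checkpoint's output.
with tempfile.TemporaryDirectory() as d:
    pp = os.path.join(d, "preds.txt")
    lp = os.path.join(d, "labels.txt")
    with open(pp, "w") as f:
        f.write("C\nE\nM\nG\n")   # hypothetical predicted classes
    with open(lp, "w") as f:
        f.write("C\nE\nM\nM\n")   # hypothetical gold classes
    print(f"accuracy: {accuracy(pp, lp):.2f}")  # 3 of 4 correct -> 75.00
```

Whatever the real harness looks like, keeping the scoring logic this simple makes the published number easy to verify independently.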