Document Layout Analysis · benchmark dataset · 2019 · EN

ICDAR 2019 Recognition of Documents with Complex Layouts (RDCL2019) - Test Set.

The RDCL2019 test set from the ICDAR 2019 Competition on Recognition of Documents with Complex Layouts. It comprises 85 scanned page images of contemporary magazines and technical/scientific publications, drawn from the PRImA Layout Analysis Dataset. Evaluation measures region segmentation and classification using a weighted F1-score across layout classes. The competition runs continuously, so post-2019 submissions are still accepted via the Aletheia evaluation tool.
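To make the metric concrete, the sketch below shows how a per-class and support-weighted F1 over layout labels can be computed, assuming predicted regions have already been matched to ground-truth regions. The class list, helper name, and toy labels are illustrative; official scores come from the Aletheia-based evaluation, which also scores segmentation quality.

```python
# Illustrative sketch only: per-class and support-weighted F1 over layout
# region labels, assuming predicted regions are already matched one-to-one
# with ground-truth regions. Official RDCL2019 scores come from the
# Aletheia-based evaluation tool, which also scores segmentation quality.
from sklearn.metrics import f1_score

CLASSES = ["figure", "list", "table", "text", "title"]

def layout_f1(gt_labels, pred_labels):
    """Return {class: F1} plus the support-weighted overall F1."""
    per_class = f1_score(gt_labels, pred_labels, labels=CLASSES,
                         average=None, zero_division=0)
    overall = f1_score(gt_labels, pred_labels, labels=CLASSES,
                       average="weighted", zero_division=0)
    return dict(zip(CLASSES, per_class)), overall

# Toy example: six matched regions, one title misclassified as text and
# one text region misclassified as list.
gt   = ["text", "text", "title", "table", "figure", "text"]
pred = ["text", "text", "text",  "table", "figure", "list"]
per_class, overall = layout_f1(gt, pred)
print(per_class)
print(f"overall: {overall:.3f}")
```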

Saturated benchmark · last significant update Sep 2019

Competition benchmark from 2019. The top overall score (weighted F1 ~0.92) is shared by fglihai and USYD NLP_CS29-2. The research community has since moved to DocLayNet, D4LA, and DocStructBench for newer benchmarking; no new papers report results on this specific test set in 2024-2025.

§ 01 · Leaderboard

Best published scores.

18 results indexed across 6 metrics. Shaded row marks current SOTA; ties broken by submission date.


Primary metric: F1 (higher is better). All metrics: figure, list, overall, table, text, title.
figure · 3 rows
# · Model · Org · Submitted · Paper / code · figure
01 · fglihai · — · Sep 2019 · ICDAR 2019 RDCL2019 Competition · 0.970
02 · USYD NLP_CS29-2 · — · Sep 2019 · ICDAR 2019 RDCL2019 Competition · 0.960
03 · Faster R-CNN (OSS) · Microsoft Research · Sep 2019 · ICDAR 2019 RDCL2019 Competition · 0.950
list · 3 rows
# · Model · Org · Submitted · Paper / code · list
01 · fglihai · — · Sep 2019 · ICDAR 2019 RDCL2019 Competition · 0.900
02 · USYD NLP_CS29-2 · — · Sep 2019 · ICDAR 2019 RDCL2019 Competition · 0.900
03 · Faster R-CNN (OSS) · Microsoft Research · Sep 2019 · ICDAR 2019 RDCL2019 Competition · 0.890
overall · 3 rows
# · Model · Org · Submitted · Paper / code · overall
01 · USYD NLP_CS29-2 · — · Sep 2019 · ICDAR 2019 RDCL2019 Competition · 0.920
02 · fglihai · — · Sep 2019 · ICDAR 2019 RDCL2019 Competition · 0.920
03 · Faster R-CNN (OSS) · Microsoft Research · Sep 2019 · ICDAR 2019 RDCL2019 Competition · 0.910
table · 3 rows
# · Model · Org · Submitted · Paper / code · table
01 · USYD NLP_CS29-2 · — · Sep 2019 · ICDAR 2019 RDCL2019 Competition · 0.960
02 · fglihai · — · Sep 2019 · ICDAR 2019 RDCL2019 Competition · 0.960
03 · Faster R-CNN (OSS) · Microsoft Research · Sep 2019 · ICDAR 2019 RDCL2019 Competition · 0.950
text · 3 rows
# · Model · Org · Submitted · Paper / code · text
01 · fglihai · — · Sep 2019 · ICDAR 2019 RDCL2019 Competition · 0.930
02 · USYD NLP_CS29-2 · — · Sep 2019 · ICDAR 2019 RDCL2019 Competition · 0.930
03 · Faster R-CNN (OSS) · Microsoft Research · Sep 2019 · ICDAR 2019 RDCL2019 Competition · 0.920
title · 3 rows
# · Model · Org · Submitted · Paper / code · title
01 · USYD NLP_CS29-2 · — · Sep 2019 · ICDAR 2019 RDCL2019 Competition · 0.840
02 · fglihai · — · Sep 2019 · ICDAR 2019 RDCL2019 Competition · 0.840
03 · Faster R-CNN (OSS) · Microsoft Research · Sep 2019 · ICDAR 2019 RDCL2019 Competition · 0.820
Fig 2 · Rows sorted by score within each metric. Shaded row marks SOTA. Dates reflect model or paper release where available, otherwise the date Codesota accessed the source.
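The ordering rule in the caption is simple enough to express directly. The sketch below is illustrative: field names and exact submission days are assumptions, and the scores are taken from the overall table above.

```python
# Sketch of the ordering rule stated above: within a metric, rows sort by
# descending score, and ties are broken by the earlier submission date.
# Field names and exact dates are assumptions for illustration.
from datetime import date

rows = [
    {"model": "Faster R-CNN",    "submitted": date(2019, 9, 1), "overall": 0.91},
    {"model": "USYD NLP_CS29-2", "submitted": date(2019, 9, 1), "overall": 0.92},
    {"model": "fglihai",         "submitted": date(2019, 9, 1), "overall": 0.92},
]

# Sort by score (descending), then by submission date (ascending).
ranked = sorted(rows, key=lambda r: (-r["overall"], r["submitted"]))
for rank, row in enumerate(ranked, start=1):
    print(f"{rank:02d}  {row['model']:<18} {row['overall']:.3f}")
```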
§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and — if it takes the top — annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit and seed (see the sketch after this list)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
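A hypothetical skeleton covering items 02-04 above: a frozen seed, a declared environment, and one emitted row per declared metric. Model loading and inference are placeholders, and the field names and contact value are illustrative, not a required Codesota format.

```python
# Hypothetical reproduction-script skeleton: pinned seed, declared
# environment, and one result row per metric declared by this dataset.
# Replace the placeholder inference step and values with your own.
import json
import platform
import random
import sys

SEED = 42
METRICS = ["figure", "list", "overall", "table", "text", "title"]

def main():
    random.seed(SEED)  # pin randomness so the run is reproducible

    # ... load your checkpoint, run inference on the RDCL2019 test pages,
    # and compute one F1 per declared metric (placeholder values below).
    scores = {name: 0.0 for name in METRICS}

    submission = {
        "model": "<your-model-name>",
        "commit": "<frozen-commit-sha>",
        "seed": SEED,
        "environment": {
            "python": platform.python_version(),
            "platform": platform.platform(),
        },
        "results": [{"metric": m, "f1": scores[m]} for m in METRICS],
        "contact": "<email-for-follow-up>",
    }
    json.dump(submission, sys.stdout, indent=2)

if __name__ == "__main__":
    main()
```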