Codesota · Computer Vision · Optical Character Recognition · KITAB-Bench
Optical Character Recognition · benchmark dataset · 2024 · AR

KITAB-Bench Arabic OCR Benchmark.

8,809 Arabic text samples across 9 domains. Tests Arabic script recognition.

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

14 results indexed on a single metric. The shaded row marks the current SOTA; ties are broken by submission date.


Primary metric: cer (character error rate) · lower is better · 14 rows. A worked CER example follows the table.
# · Model · Org · Submitted · Paper / code · cer
01 · Gemini 2.0 Flash (API) · Google · Dec 2025 · alphaxiv-leaderboard · 0.130
02 · AIN 7B (OSS) · Research · Dec 2025 · alphaxiv-leaderboard · 0.200
03 · GPT-4o (API) · OpenAI · Dec 2025 · alphaxiv-leaderboard · 0.310
04 · GPT-4o mini · OpenAI · Dec 2025 · alphaxiv-leaderboard · 0.430
05 · Azure OCR · Microsoft · Dec 2025 · alphaxiv-leaderboard · 0.520
06 · Tesseract (OSS) · Google (Open Source) · Dec 2025 · alphaxiv-leaderboard · 0.540
07 · EasyOCR (OSS) · JaidedAI · Dec 2025 · alphaxiv-leaderboard · 0.580
08 · PaddleOCR (OSS) · Baidu · Dec 2025 · alphaxiv-leaderboard · 0.790
09 · Gemma 3 · Google · Apr 2026 · kitab-bench-leaderboard · 1.05
10 · Qwen2.5-VL 7B · Alibaba · Apr 2026 · kitab-bench-leaderboard · 1.20
11 · Qwen2-VL 7B · Alibaba · Apr 2026 · kitab-bench-leaderboard · 1.48
12 · Qaari · MBZUAI · Apr 2026 · kitab-bench-leaderboard · 1.80
13 · ArabicNougat · community · Apr 2026 · kitab-bench-leaderboard · 4.37
14 · Surya · Vik Paruchuri · Apr 2026 · kitab-bench-leaderboard · 4.95
Fig 2 · Rows sorted by score within each metric. Shaded row marks SOTA. Dates reflect model or paper release where available, otherwise the date Codesota accessed the source.
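
CER here is the character error rate: the character-level edit distance between a model's output and the reference transcription, divided by the reference length. It can exceed 1.0 when the output is much longer than the ground truth, which is how rows 09-14 score above 1. Below is a minimal sketch of the computation; the function names are illustrative, and the actual KITAB-Bench harness may normalize Unicode before scoring.

def levenshtein(ref: str, hyp: str) -> int:
    # Character-level edit distance via the classic DP recurrence.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # delete r
                            curr[j - 1] + 1,          # insert h
                            prev[j - 1] + (r != h)))  # substitute r -> h
        prev = curr
    return prev[-1]

def cer(ref: str, hyp: str) -> float:
    # Edit distance normalized by reference length; values above 1.0 are possible.
    return levenshtein(ref, hyp) / max(len(ref), 1)

assert abs(cer("كتاب", "كتب") - 0.25) < 1e-9  # one deleted character out of four

Production harnesses typically wrap a library such as jiwer for the same calculation; small differences in text normalization can shift scores slightly.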
§ 03 · Progress

1 step of state-of-the-art progress.

Each row below marks a model that broke the previous record on cer. Intermediate submissions are kept in the leaderboard above; only SOTA-setting entries are re-listed here.

Lower scores win; each listed entry improved on the previous best. A sketch of this running-minimum selection follows Fig 3.

SOTA line · cer
  1. Dec 16, 2025 · Gemini 2.0 Flash · Google · 0.130
Fig 3 · SOTA-setting models only. A single entry, dated Dec 2025.
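
To make the record-breaking rule concrete, the SOTA line can be derived from date-ordered leaderboard rows with a running minimum, as sketched below. The rows are abridged from Fig 2 and the code illustrates the rule only; it is not Codesota's internal pipeline.

rows = [  # (submitted, model, cer), ordered by submission date; abridged from Fig 2
    ("Dec 2025", "Gemini 2.0 Flash", 0.130),
    ("Dec 2025", "AIN 7B", 0.200),
    ("Apr 2026", "Surya", 4.95),
]

def sota_steps(rows):
    best, steps = float("inf"), []
    for submitted, model, score in rows:
        if score < best:  # lower is better; only strict improvements count
            best = score
            steps.append((submitted, model, score))
    return steps

print(sota_steps(rows))  # [('Dec 2025', 'Gemini 2.0 Flash', 0.130)] -> the one step above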
§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and — if it takes the top — annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs (a script sketch follows this list)
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with frozen commit + seed
  • 03 · Declared evaluation environment (Python, deps)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
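
As a starting point for items 01-05, here is a hypothetical skeleton of a reproduction script; every name, flag, and path is a placeholder rather than a Codesota requirement, and the submission guide linked above is authoritative.

#!/usr/bin/env python
# Hypothetical reproduction-script skeleton; all identifiers are placeholders.
import argparse
import json
import random

SEED = 1234            # frozen seed (item 02)
CODE_COMMIT = "<sha>"  # frozen commit of the evaluation code (item 02)

def main():
    p = argparse.ArgumentParser(description="KITAB-Bench CER reproduction")
    p.add_argument("--checkpoint", required=True,
                   help="public checkpoint path or API endpoint (item 01)")
    p.add_argument("--out", default="scores.json")
    args = p.parse_args()
    random.seed(SEED)
    # ... load the 8,809 samples, transcribe with args.checkpoint,
    #     and average per-sample CER ...
    result = {
        "metric": "cer",   # one row per declared metric (item 04)
        "value": None,     # filled in by the run
        "seed": SEED,
        "commit": CODE_COMMIT,
        "env": "python 3.11; deps pinned in requirements.txt",  # item 03
        "contact": "you@example.org",  # item 05
    }
    with open(args.out, "w") as f:
        json.dump(result, f, indent=2)

if __name__ == "__main__":
    main()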