Reading Comprehension

Understanding and answering questions about passages.


RACE

Canonical multiple-choice reading comprehension benchmark built from English exams for Chinese middle- and high-school students. It contains roughly 28K passages and 100K questions. Models are evaluated by accuracy over the RACE-M (middle school) and RACE-H (high school) splits combined.
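Since the reported score pools RACE-M and RACE-H, the combined accuracy is question-weighted rather than an average of the two split accuracies. A minimal sketch (the split names and counts below are illustrative, not official figures):

```python
# Sketch: pooled accuracy over RACE-M and RACE-H.
# The dict layout is an assumption for illustration, not an official loader format.

def combined_accuracy(splits):
    """splits: dict mapping split name -> (num_correct, num_questions)."""
    correct = sum(c for c, _ in splits.values())
    total = sum(n for _, n in splits.values())
    return correct / total

# Made-up counts: because every question counts equally, the larger
# RACE-H split dominates the combined score.
acc = combined_accuracy({"RACE-M": (1300, 1436), "RACE-H": (2800, 3498)})
```

Note this differs from macro-averaging the two split accuracies, which would weight RACE-M and RACE-H equally regardless of size.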

Primary metric: accuracy

Top 10

Leading models on RACE.

Rank | Model             | Accuracy | Year | Source
1    | Megatron-BERT     | 90.9     | 2026 | paper
2    | ALBERT (Ensemble) | 89.4     | 2026 | paper


All datasets

1 dataset tracked for this task.

