Adversarial Robustness

Defending against adversarial examples.

Datasets tracked: 1 · Results: 3 · Canonical metric: Robust Accuracy
Canonical Benchmark

RobustBench CIFAR-10 Linf

The RobustBench standardized adversarial robustness benchmark: CIFAR-10 under Linf perturbations with budget eps = 8/255, evaluated with AutoAttack.
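As a rough illustration of what "robust accuracy at eps = 8/255 under Linf" means (this is not the RobustBench/AutoAttack harness, and the data below is made up), the metric can be computed exactly for a binary linear classifier, where the worst-case Linf perturbation has a closed form: it shifts the margin by eps times the L1 norm of the weight vector.

```python
import numpy as np

def robust_accuracy_linear(w, b, X, y, eps):
    """Exact robust accuracy of a binary linear classifier f(x) = sign(w.x + b)
    under l_inf perturbations of radius eps.

    For a linear model the strongest l_inf adversary flips each input
    coordinate by +/- eps against the prediction, reducing the margin by
    exactly eps * ||w||_1. A point stays correctly classified iff
        y * (w @ x + b) - eps * ||w||_1 > 0,   with labels y in {-1, +1}.
    """
    margins = y * (X @ w + b)                 # signed (unnormalized) margins
    worst = margins - eps * np.abs(w).sum()   # margin after worst-case attack
    return float((worst > 0).mean())

# Toy, hypothetical data; eps = 8/255 matches the CIFAR-10 Linf track.
w = np.array([1.0, -2.0])
b = 0.0
X = np.array([[0.5, -0.5], [0.02, 0.0], [-1.0, 1.0]])
y = np.array([1, 1, -1])
eps = 8 / 255

clean_acc = float(((y * (X @ w + b)) > 0).mean())
robust_acc = robust_accuracy_linear(w, b, X, y, eps)
# All three points are clean-correct, but the second sits within the
# attack budget of the boundary, so robust accuracy drops to 2/3.
```

For deep networks there is no such closed form, which is why benchmarks like RobustBench rely on strong empirical attacks such as AutoAttack to estimate (an upper bound on) robust accuracy.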

Primary metric: Robust Accuracy

Top 10

Leading models on RobustBench CIFAR-10 Linf.

Rank  Model                          Robust Accuracy (%)  Year  Source
1     Peng et al. 2023 (WRN-70-16)   71.1                 2026  paper
2     Wang et al. 2023 (WRN-70-16)   70.7                 2026  paper
3     Gowal et al. 2021 (WRN-70-16)  66.1                 2026  paper


All datasets

1 dataset tracked for this task.

Related tasks

Other tasks in the Adversarial category.

