Adversarial Attacks

Generating adversarial examples to fool models.

Datasets: 1
Results: 3
Canonical metric: Attack Success Rate
Canonical Benchmark

RobustBench CIFAR-10 Linf (AutoAttack)

The RobustBench CIFAR-10 benchmark under the Linf threat model with eps = 8/255, evaluated with AutoAttack. Results are framed as the attack success rate against defended models, i.e. 100 minus the model's robust accuracy.
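The conversion between robust accuracy and attack success rate can be sketched as follows; the function name is illustrative, and the example values mirror rows of the leaderboard table in this section.

```python
def attack_success_rate(robust_accuracy: float) -> float:
    """Attack success rate (%) against a defended model, defined as
    100 minus the model's robust accuracy (%) under the attack."""
    if not 0.0 <= robust_accuracy <= 100.0:
        raise ValueError("robust accuracy must be a percentage in [0, 100]")
    return round(100.0 - robust_accuracy, 1)

# An undefended model with 0% robust accuracy gives a 100% success rate;
# a defense retaining 70.7% robust accuracy gives a 29.3% success rate.
print(attack_success_rate(0.0))   # → 100.0
print(attack_success_rate(70.7))  # → 29.3
```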

Primary metric: Attack Success Rate

Top 10

Leading models on RobustBench CIFAR-10 Linf (AutoAttack).

Rank  Model                            Attack Success Rate (%)  Year  Source
1     AutoAttack vs Undefended ResNet  100.0                    2026  paper
2     AutoAttack vs Wang 2023          29.3                     2026  paper
3     AutoAttack vs Peng 2023          28.8                     2026  paper

All datasets

1 dataset tracked for this task.

Related tasks

Other tasks in the Adversarial category.