Adversarial Attacks

Generating adversarial examples to fool models.

1 dataset · 3 results
Canonical metric: Attack Success Rate
Canonical benchmark: RobustBench CIFAR-10 Linf (AutoAttack)

The RobustBench CIFAR-10 leaderboard under the Linf threat model (eps = 8/255), evaluated with AutoAttack. Scores are framed as attack success rate against defended models: ASR = 100 - robust accuracy.

Primary metric: Attack Success Rate
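The conversion between robust accuracy and the attack success rate reported here is direct. A minimal sketch (the function name is ours; both quantities are assumed to be percentages):

```python
def attack_success_rate(robust_accuracy_pct: float) -> float:
    """Attack success rate (%) as framed on this leaderboard:
    ASR = 100 - robust accuracy, both in percent."""
    return 100.0 - robust_accuracy_pct

# A defended model with 70.7% robust accuracy under AutoAttack
# corresponds to a 29.3% attack success rate.
assert round(attack_success_rate(70.7), 1) == 29.3
# An undefended model that AutoAttack breaks on every input
# has 0% robust accuracy, i.e. 100% attack success rate.
assert attack_success_rate(0.0) == 100.0
```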

Top 10

Leading models on RobustBench CIFAR-10 Linf (AutoAttack).

Rank  Model                            Attack Success Rate  Year  Source
1     AutoAttack vs Undefended ResNet  100                  2026  paper
2     AutoAttack vs Wang 2023          29.3                 2026  paper
3     AutoAttack vs Peng 2023          28.8                 2026  paper
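AutoAttack itself is an ensemble of parameter-free attacks (APGD-CE, APGD-T, FAB-T, Square), but the Linf eps = 8/255 constraint it enforces can be illustrated with a much simpler one-step sign-gradient (FGSM-style) attack. The sketch below uses a toy linear classifier, not a real CIFAR-10 model, and all names are ours:

```python
import numpy as np

EPS = 8 / 255  # Linf budget used by the RobustBench CIFAR-10 leaderboard

def fgsm_linf(x, w, y, eps=EPS):
    """One-step sign-gradient attack on a linear scorer s(x) = w.x + b.

    y is +1/-1. The attack maximizes the loss -y * s(x); its gradient
    w.r.t. x is -y * w, so each pixel moves eps in the sign of -y * w.
    """
    x_adv = x + eps * np.sign(-y * w)
    return np.clip(x_adv, 0.0, 1.0)  # keep pixels in the valid [0, 1] range

# Toy "image" with 4 pixels, a fixed linear classifier, true label +1.
x = np.full(4, 0.5)
w = np.array([1.0, -2.0, 0.5, 0.0])
b = -0.1
y = 1

x_adv = fgsm_linf(x, w, y)

# The perturbation respects the Linf ball of radius eps = 8/255 ...
assert np.max(np.abs(x_adv - x)) <= EPS + 1e-12
# ... and lowers the score of the true class, i.e. it moves toward
# a misclassification, which is what "attack success" counts.
assert w @ x_adv + b < w @ x + b
```

Stronger attacks like those in the AutoAttack ensemble iterate this idea with step-size schedules and restarts, but they are all evaluated under the same Linf eps = 8/255 constraint on this leaderboard.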


All datasets

1 dataset tracked for this task.

