Adversarial Attacks
Generating adversarial examples to fool models.
Datasets
Results
Attack Success Rate
Canonical metric
Canonical Benchmark
RobustBench CIFAR-10 Linf (AutoAttack)
The RobustBench CIFAR-10 benchmark under the Linf threat model with eps = 8/255, evaluated with AutoAttack. Results are framed as the attack success rate against defended models: 100% minus robust accuracy (%).
Primary metric: Attack Success Rate
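The conversion between robust accuracy and attack success rate can be sketched as follows. This is a minimal illustration, not RobustBench's own evaluation code; the function name and the per-example boolean-flag input are assumptions for the example.

```python
def attack_success_rate(robust_flags):
    """Attack success rate (%) from per-example robustness flags.

    robust_flags: booleans, True when the model's prediction survives
    the attack (the example stays correctly classified under attack).
    Attack success rate is the complement of robust accuracy.
    """
    robust_acc = 100.0 * sum(robust_flags) / len(robust_flags)
    return 100.0 - robust_acc

# Example: 7 of 10 examples remain correctly classified under attack,
# so robust accuracy is 70% and attack success rate is 30%.
flags = [True] * 7 + [False] * 3
print(attack_success_rate(flags))  # -> 30.0
```

For a leaderboard entry reporting, say, 70% robust accuracy under AutoAttack at eps = 8/255, the corresponding attack success rate is 30%.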
Top 10
Leading models on RobustBench CIFAR-10 Linf (AutoAttack).
All datasets
1 dataset tracked for this task.
Related tasks
Other tasks in the Adversarial category.