- Enhancing Transformation-Based Defenses Against Adversarial Attacks with a Distribution Classifier
- On Need for Topology-Aware Generative Models for Manifold-Based Defenses
- Certified Defenses for Adversarial Patches
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks
- Breaking Certified Defenses: Semantic Adversarial Examples with Spoofed Robustness Certificates
- Enhancing Adversarial Defense by k-Winners-Take-All
- Adversarial Training and Provable Defenses: Bridging the Gap
- Defending Against Physically Realizable Attacks on Image Classification
- Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks
- Optimal Strategies Against Generative Attacks
- MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius