Human Aligned Compression for Robust Models
By: Samuel Räber, Andreas Plesner, Till Aczel, and more
Potential Business Impact:
Protects AI image classifiers from hidden attacks by cleaning the images first.
Adversarial attacks on image models threaten system robustness by introducing imperceptible perturbations that cause incorrect predictions. We investigate human-aligned learned lossy compression as a defense mechanism, comparing two learned models (HiFiC and ELIC) against traditional JPEG across various quality levels. Our experiments on ImageNet subsets demonstrate that learned compression methods outperform JPEG, particularly for Vision Transformer architectures, by preserving semantically meaningful content while removing adversarial noise. Even in white-box settings where attackers can access the defense, these methods maintain substantial effectiveness. We also show that sequential compression, applying multiple rounds of compression and decompression, significantly enhances defense efficacy while maintaining classification performance. Our findings reveal that human-aligned compression provides an effective, computationally efficient defense that protects the image features most relevant to human and machine understanding. It offers a practical approach to improving model robustness against adversarial threats.
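The core idea described in the abstract is to pass each input image through a lossy compress/decompress cycle, possibly several times in sequence, before handing it to the classifier, so that imperceptible adversarial perturbations are discarded along with other non-salient detail. Below is a minimal sketch of that pipeline, using plain JPEG as a stand-in for the learned codecs (HiFiC/ELIC) studied in the paper; the helper names, the number of rounds, the quality setting, the input path, and the off-the-shelf ResNet-50 classifier are illustrative assumptions, not the authors' implementation.

```python
# Sketch of compression-based purification: JPEG round trips before classification.
# Assumes Pillow, torch, and torchvision are installed; all names here are illustrative.
import io

import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights


def jpeg_round_trip(img: Image.Image, quality: int = 75) -> Image.Image:
    """One compression/decompression round: encode to JPEG bytes, decode back."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")


def purify(img: Image.Image, rounds: int = 3, quality: int = 75) -> Image.Image:
    """Sequential compression: repeat the round trip to strip adversarial noise."""
    for _ in range(rounds):
        img = jpeg_round_trip(img, quality)
    return img


# Classify a (possibly adversarial) image after purification.
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

img = Image.open("example.jpg").convert("RGB")  # hypothetical input path
clean = purify(img, rounds=3, quality=75)
with torch.no_grad():
    logits = model(preprocess(clean).unsqueeze(0))
print(weights.meta["categories"][logits.argmax().item()])
```

In this sketch the only design knobs are the codec, the quality level, and the number of sequential rounds; the paper's finding is that human-aligned learned codecs at comparable quality remove adversarial noise more effectively than JPEG while keeping the content the classifier needs.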
Similar Papers
Optimized Learned Image Compression for Facial Expression Recognition
CV and Pattern Recognition
Makes computers recognize facial expressions better, even when the images are compressed.
Model Compression vs. Adversarial Robustness: An Empirical Study on Language Models for Code
Software Engineering
Shows that shrinking AI code models makes them easier to attack.
Keep It Real: Challenges in Attacking Compression-Based Adversarial Purification
CV and Pattern Recognition
Makes AI image models harder for hackers to trick.