RobustMask: Certified Robustness against Adversarial Neural Ranking Attack via Randomized Masking
By: Jiawei Liu, Zhuo Chen, Rui Zhu, and more
Potential Business Impact:
Protects search results from fake information.
Neural ranking models have achieved remarkable progress and are now widely deployed in real-world applications such as Retrieval-Augmented Generation (RAG). However, like other neural architectures, they remain vulnerable to adversarial manipulations: subtle character-, word-, or phrase-level perturbations can poison retrieval results and artificially promote targeted candidates, undermining the integrity of search engines and downstream systems. Existing defenses either rely on heuristics with poor generalization or on certified methods that assume overly strong adversarial knowledge, limiting their practical use. To address these challenges, we propose RobustMask, a novel defense that combines the context-prediction capability of pretrained language models with a randomized masking-based smoothing mechanism. Our approach strengthens neural ranking models against adversarial perturbations at the character, word, and phrase levels. Leveraging both the pairwise comparison ability of ranking models and probabilistic statistical analysis, we provide a theoretical proof of RobustMask's certified top-K robustness. Extensive experiments further demonstrate that RobustMask successfully certifies over 20% of candidate documents within the top-10 ranking positions against adversarial perturbations affecting up to 30% of their content. These results highlight the effectiveness of RobustMask in enhancing the adversarial robustness of neural ranking models, marking a significant step toward providing stronger security guarantees for real-world retrieval systems.
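The abstract does not spell out the smoothing procedure, but randomized masking-based smoothing generally works by scoring many randomly masked copies of a candidate document and aggregating the results, so that no small perturbed span can dominate the score. The sketch below is a minimal illustration of that idea, not the paper's implementation: the function names, the `toy_score` scoring function, and the uniform per-token masking rate are all assumptions made for the example.

```python
import random

MASK = "[MASK]"  # placeholder; an actual system would use its LM's mask token


def masked_copies(tokens, mask_rate, n_samples, rng):
    """Generate n_samples copies of a token list, each with tokens
    independently replaced by MASK with probability mask_rate."""
    return [
        [MASK if rng.random() < mask_rate else t for t in tokens]
        for _ in range(n_samples)
    ]


def smoothed_score(score_fn, query, doc_tokens, mask_rate=0.3,
                   n_samples=100, seed=0):
    """Average score_fn over randomly masked versions of the document.
    An adversarial span covering p% of the tokens is masked out in a
    predictable fraction of samples, which bounds its influence."""
    rng = random.Random(seed)
    copies = masked_copies(doc_tokens, mask_rate, n_samples, rng)
    scores = [score_fn(query, c) for c in copies]
    return sum(scores) / len(scores)


def toy_score(query, tokens):
    """Stand-in relevance score: count of document tokens appearing
    in the query (a real system would use a neural ranker)."""
    q = set(query.split())
    return sum(t in q for t in tokens)
```

With `mask_rate=0.0` the smoothed score reduces to the base score, and raising the rate trades raw accuracy for robustness; the certified-robustness argument in the paper additionally relies on pairwise comparisons and probabilistic bounds over these masked samples.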
Similar Papers
CertMask: Certifiable Defense Against Adversarial Patches via Theoretically Optimal Mask Coverage
CV and Pattern Recognition
Protects computer eyes from fake pictures.
ByteShield: Adversarially Robust End-to-End Malware Detection through Byte Masking
Cryptography and Security
Blocks computer viruses from tricking security programs.
Assessing Representation Stability for Transformer Models
Machine Learning (CS)
Stops bad words from tricking computers.