RLBind: Adversarial-Invariant Cross-Modal Alignment for Unified Robust Embeddings
By: Yuhong Lu
Potential Business Impact:
Makes robots see and hear safely, even if tricked.
Unified multi-modal encoders that bind vision, audio, and other sensors into a shared embedding space are attractive building blocks for robot perception and decision-making. However, on-robot deployment exposes the vision branch to adversarial and natural corruptions, making robustness a prerequisite for safety. Prior defenses typically align clean and adversarial features within CLIP-style encoders and overlook broader cross-modal correspondence, yielding modest gains and often degrading zero-shot transfer. We introduce RLBind, a two-stage adversarial-invariant cross-modal alignment framework for robust unified embeddings. Stage 1 performs unsupervised fine-tuning on clean-adversarial pairs to harden the visual encoder. Stage 2 leverages cross-modal correspondence by minimizing the discrepancy between clean/adversarial features and a text anchor, while enforcing class-wise distributional alignment across modalities. Extensive experiments on Image, Audio, Thermal, and Video data show that RLBind consistently outperforms the LanguageBind backbone and standard fine-tuning baselines in both clean accuracy and norm-bounded adversarial robustness. By improving resilience without sacrificing generalization, RLBind provides a practical path toward safer multi-sensor perception stacks for embodied robots in navigation, manipulation, and other autonomy settings.
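The abstract does not give the exact loss formulas, so the following is only a minimal PyTorch-style sketch of the two-stage recipe it describes. All names (pgd_attack, stage1_loss, stage2_loss), the PGD settings, and the mean-matching stand-in for "class-wise distributional alignment" are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of the two-stage idea in the abstract; not RLBind's code.
import torch
import torch.nn.functional as F

def pgd_attack(encoder, images, eps=4/255, alpha=1/255, steps=10):
    """L_inf PGD that pushes adversarial embeddings away from clean ones
    (unsupervised: no labels needed, matching Stage 1's setup)."""
    with torch.no_grad():
        clean_feat = F.normalize(encoder(images), dim=-1)
    adv = (images + torch.empty_like(images).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        adv_feat = F.normalize(encoder(adv), dim=-1)
        # Maximize cosine discrepancy between adversarial and clean features.
        loss = (1 - (adv_feat * clean_feat).sum(-1)).mean()
        grad = torch.autograd.grad(loss, adv)[0]
        adv = (adv + alpha * grad.sign()).detach()
        adv = (images + (adv - images).clamp(-eps, eps)).clamp(0, 1)
    return adv.detach()

def stage1_loss(encoder, images):
    """Stage 1: harden the visual encoder by aligning clean-adversarial pairs."""
    adv = pgd_attack(encoder, images)
    clean_feat = F.normalize(encoder(images), dim=-1)
    adv_feat = F.normalize(encoder(adv), dim=-1)
    return (1 - (adv_feat * clean_feat).sum(-1)).mean()

def stage2_loss(encoder, images, text_anchors, labels):
    """Stage 2: pull both clean and adversarial features toward the text
    anchor of their class, enforcing cross-modal correspondence."""
    adv = pgd_attack(encoder, images)
    clean_feat = F.normalize(encoder(images), dim=-1)
    adv_feat = F.normalize(encoder(adv), dim=-1)
    anchors = F.normalize(text_anchors[labels], dim=-1)  # [batch, dim]
    l_clean = (1 - (clean_feat * anchors).sum(-1)).mean()
    l_adv = (1 - (adv_feat * anchors).sum(-1)).mean()
    # Stand-in for class-wise distributional alignment: match batch-mean
    # embeddings of the clean and adversarial distributions.
    l_dist = F.mse_loss(adv_feat.mean(0), clean_feat.mean(0))
    return l_clean + l_adv + l_dist
```

In practice the text anchors would come from a frozen text encoder (one embedded prompt per class), so only the visual branch is updated and the shared embedding space, and hence zero-shot transfer, is preserved.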
Similar Papers
Towards Language-Independent Face-Voice Association with Multimodal Foundation Models
Audio and Speech Processing
Matches faces with voices, even across languages.
When Alignment Fails: Multimodal Adversarial Attacks on Vision-Language-Action Models
CV and Pattern Recognition
Shows how adversarial inputs can trick robot vision-language-action models.
Modest-Align: Data-Efficient Alignment for Vision-Language Models
CV and Pattern Recognition
Makes AI understand pictures and words better with less data.