Geometry-Aware Backdoor Attacks: Leveraging Curvature in Hyperbolic Embeddings
By: Ali Baheri
Potential Business Impact:
Reveals how hidden triggers can fool AI models that use curved embedding spaces.
Non-Euclidean foundation models increasingly place representations in curved spaces such as hyperbolic geometry. We show that this geometry creates a boundary-driven asymmetry that backdoor triggers can exploit. Near the boundary, small input changes appear subtle to standard input-space detectors but produce disproportionately large shifts in the model's representation space. Our analysis formalizes this effect and also reveals a limitation for defenses: methods that act by pulling points inward along the radius can suppress such triggers, but only by sacrificing useful model sensitivity in that same direction. Building on these insights, we propose a simple geometry-adaptive trigger and evaluate it across tasks and architectures. Empirically, attack success increases toward the boundary, whereas conventional detectors weaken, mirroring the theoretical trends. Together, these results surface a geometry-specific vulnerability in non-Euclidean models and offer analysis-backed guidance for designing and understanding the limits of defenses.
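To make the boundary-driven asymmetry concrete, here is a minimal sketch (not the paper's implementation) assuming the Poincaré ball model with curvature -1 and the standard geodesic distance formula. It shows that the same tiny Euclidean perturbation, which an input-space detector would treat as subtle, corresponds to a much larger hyperbolic shift when the embedding sits near the boundary than when it sits near the origin.

# Minimal illustration of boundary-driven asymmetry in the Poincare ball.
# Assumes curvature -1 and the standard geodesic distance; the specific
# radii and perturbation size are arbitrary choices for illustration.
import numpy as np

def poincare_distance(x, y):
    """Geodesic distance between points x, y inside the unit Poincare ball."""
    sq = np.sum((x - y) ** 2)
    denom = (1.0 - np.sum(x ** 2)) * (1.0 - np.sum(y ** 2))
    return np.arccosh(1.0 + 2.0 * sq / denom)

eps = np.array([1e-3, 0.0])           # fixed, "subtle" perturbation in input coordinates

for radius in [0.1, 0.5, 0.9, 0.99]:  # how close the point lies to the boundary
    p = np.array([radius, 0.0])
    shift = poincare_distance(p, p + eps)
    print(f"radius {radius:>4}: hyperbolic shift from the same perturbation = {shift:.4f}")

Running this, the hyperbolic displacement grows by roughly two orders of magnitude as the base point moves from radius 0.1 to 0.99, because the denominator (1 - ||x||^2)(1 - ||y||^2) collapses near the boundary. This is the amplification a geometry-adaptive trigger can exploit, and the same radial direction is what inward-pulling defenses must dampen.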
Similar Papers
Learning Along the Arrow of Time: Hyperbolic Geometry for Backward-Compatible Representation Learning
Machine Learning (CS)
Keeps old computer memories useful for new programs.
Angular Gradient Sign Method: Uncovering Vulnerabilities in Hyperbolic Networks
Machine Learning (CS)
Makes AI smarter by tricking it in new ways.
The Curved Spacetime of Transformer Architectures
Machine Learning (CS)
Makes AI understand words by bending their meanings.