Score: 1

Shortcut Invariance: Targeted Jacobian Regularization in Disentangled Latent Space

Published: November 24, 2025 | arXiv ID: 2511.19525v1

By: Shivam Pal, Sakshi Varshney, Piyush Rai

Potential Business Impact:

Helps AI models ignore misleading "shortcut" cues in training data, so they generalize more reliably to new, out-of-distribution data.

Business Areas:
Darknet Internet Services

Deep neural networks are prone to learning shortcuts: spurious, easily learned correlations in the training data that cause severe failures in out-of-distribution (OOD) generalization. A dominant line of work seeks robustness by learning a robust representation, often explicitly partitioning the latent space into core and spurious components; this approach can be complex, brittle, and difficult to scale. We take a different approach: instead of learning a robust representation, we learn a robust function. We present a simple and effective training method that renders the classifier functionally invariant to shortcut signals. Our method operates within a disentangled latent space, which is essential because it isolates spurious and core features into distinct dimensions. This separation enables the identification of candidate shortcut features by their strong correlation with the label, used as a proxy for semantic simplicity. The classifier is then desensitized to these features by injecting targeted, anisotropic latent noise during training. We analyze this as targeted Jacobian regularization, which forces the classifier to ignore spurious features and rely on more complex, core semantic signals. The result is state-of-the-art OOD performance on established shortcut-learning benchmarks.
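
To make the recipe concrete, the sketch below illustrates the two steps described in the abstract: ranking disentangled latent dimensions by their correlation with the label to flag candidate shortcuts, then injecting anisotropic Gaussian noise along only those dimensions while training the classifier. This is a minimal illustration under assumptions, not the authors' code: it presumes a pretrained, frozen disentangled encoder (e.g. a beta-VAE), and names such as `find_shortcut_dims`, `top_k`, and `noise_scale` are hypothetical.

```python
# Hypothetical sketch of shortcut-desensitized training on a disentangled latent space.
import torch
import torch.nn.functional as F

def find_shortcut_dims(latents, labels, top_k=4):
    """Rank latent dimensions by |correlation with the label| (a proxy for
    'semantic simplicity') and return the indices of the top candidates."""
    z = (latents - latents.mean(0)) / (latents.std(0) + 1e-8)
    y = (labels.float() - labels.float().mean()) / (labels.float().std() + 1e-8)
    corr = (z * y.unsqueeze(1)).mean(0).abs()   # per-dimension |corr(z_i, y)|
    return corr.topk(top_k).indices             # suspected shortcut dimensions

def train_step(encoder, classifier, optimizer, x, y, shortcut_dims, noise_scale=1.0):
    """One training step: perturb only the suspected shortcut dimensions of the
    frozen disentangled latent code before classification, which in expectation
    acts as a Jacobian penalty on those dimensions."""
    with torch.no_grad():
        z = encoder(x)                          # disentangled latents (encoder frozen)
    noise = torch.zeros_like(z)
    noise[:, shortcut_dims] = noise_scale * torch.randn(
        z.size(0), len(shortcut_dims), device=z.device
    )
    logits = classifier(z + noise)              # classifier sees noised shortcut dims
    loss = F.cross_entropy(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The anisotropy is the key design choice in this reading: noise is added only along the flagged dimensions, so the classifier is penalized for depending on them (its Jacobian with respect to those coordinates is driven toward zero) while the core semantic dimensions are left untouched.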

Country of Origin
🇮🇳 India

Page Count
11 pages

Category
Computer Science:
Machine Learning (CS)