Unsupervised Invariant Risk Minimization
By: Yotam Norman, Ron Meir
Potential Business Impact:
Teaches computers to learn without answers.
We propose a novel unsupervised framework for \emph{Invariant Risk Minimization} (IRM), extending the concept of invariance to settings where labels are unavailable. Traditional IRM methods rely on labeled data to learn representations that are robust to distributional shifts across environments. In contrast, our approach redefines invariance through feature distribution alignment, enabling robust representation learning from unlabeled data. We introduce two methods within this framework: Principal Invariant Component Analysis (PICA), a linear method that extracts invariant directions under Gaussian assumptions, and Variational Invariant Autoencoder (VIAE), a deep generative model that disentangles environment-invariant and environment-dependent latent factors. Our approach is based on a novel ``unsupervised'' structural causal model and supports environment-conditioned sample generation and intervention. Empirical evaluations on synthetic datasets and modified versions of MNIST demonstrate the effectiveness of our methods in capturing invariant structure, preserving relevant information, and generalizing across environments without access to labels.
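To make the PICA idea concrete, below is a minimal NumPy sketch under one plausible reading of the abstract: for zero-mean Gaussian features, aligning feature distributions across environments reduces to matching second moments, so invariant directions can be sought in the near-null space of a matrix measuring how each environment's covariance deviates from the pooled covariance. The function name `invariant_directions` and this particular eigen-construction are illustrative assumptions, not the authors' published PICA algorithm.

```python
import numpy as np

def invariant_directions(X_envs, k, center=True):
    """Sketch of a PICA-like procedure: find k directions along which
    projected feature distributions agree across environments.

    Assumption: with (approximately) zero-mean Gaussian features,
    distribution alignment of a 1-D projection reduces to matching
    projected variances, so invariant directions lie where the
    covariance varies least across environments.
    """
    if center:
        X_envs = [X - X.mean(axis=0, keepdims=True) for X in X_envs]
    covs = [np.cov(X, rowvar=False) for X in X_envs]
    pooled = sum(covs) / len(covs)
    # PSD "variation" matrix: squared deviation of each environment's
    # covariance from the pooled covariance. Invariant directions lie
    # in its near-null space (smallest eigenvalues).
    D = sum((C - pooled) @ (C - pooled) for C in covs)
    eigvals, eigvecs = np.linalg.eigh(D)  # eigenvalues in ascending order
    return eigvecs[:, :k]                 # k most invariant directions

# Toy usage: feature 0 is environment-dependent, the rest are invariant.
rng = np.random.default_rng(0)
n, d = 2000, 5
scales = [np.eye(d), np.eye(d)]
scales[1][0, 0] = 3.0  # environment 1 rescales feature 0
X_envs = [rng.standard_normal((n, d)) @ S for S in scales]
V = invariant_directions(X_envs, k=2)
print(V.round(2))  # recovered directions should avoid feature 0
```

In this toy setup the covariance difference is concentrated on feature 0, so the smallest-eigenvalue directions of the variation matrix span the remaining, environment-invariant coordinates.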
Similar Papers
Invariant Learning with Annotation-free Environments
Machine Learning (CS)
Finds hidden patterns to make AI work anywhere.
Quantifying Distributional Invariance in Causal Subgraph for IRM-Free Graph Generalization
Machine Learning (CS)
Finds important graph parts that work everywhere.
Strengthening Anomaly Awareness
High Energy Physics - Phenomenology
Finds weird things computers missed before.