On the Identifiability of Causal Abstractions
By: Xiusi Li, Sékou-Oumar Kaba, Siamak Ravanbakhsh
Potential Business Impact:
Teaches computers to understand cause and effect.
Causal representation learning (CRL) enhances the robustness and generalizability of machine learning models by learning structural causal models associated with data-generating processes. We focus on a family of CRL methods that uses contrastive data pairs in the observable space, generated before and after a random, unknown intervention, to identify the latent causal model. Brehmer et al. (2022) showed that this is indeed possible, provided that every latent variable can be intervened on individually. However, this assumption is highly restrictive in many systems. In this work, we instead allow interventions on arbitrary subsets of latent variables, which is more realistic. We introduce a theoretical framework that characterizes the degree to which a causal model can be identified, given a set of possible interventions, up to an abstraction that describes the system at a higher level of granularity.
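To make the contrastive-pair setup concrete, here is a minimal sketch in Python of how such pre/post-intervention data might be simulated. The linear SCM, the linear mixing matrix G, and all names below are hypothetical simplifications for illustration, not the paper's construction; the one load-bearing detail from the abstract is that each intervention hits an arbitrary subset of latent variables rather than a single one.

```python
import numpy as np

rng = np.random.default_rng(0)
d_latent, d_obs = 3, 10

# Lower-triangular weights encode a DAG over the latent causal variables z.
W = np.tril(rng.normal(size=(d_latent, d_latent)), k=-1)
# Unknown mixing function mapping latents to observations; linear here for simplicity.
G = rng.normal(size=(d_obs, d_latent))

def sample_latents(noise, intervened=None):
    """Ancestral sampling of z_i = W_i . z + noise_i; intervened nodes are clamped."""
    intervened = intervened or {}
    z = np.zeros(d_latent)
    for i in range(d_latent):  # topological order, since W is lower-triangular
        z[i] = intervened[i] if i in intervened else W[i] @ z + noise[i]
    return z

def contrastive_pair():
    """One (x, x_tilde) pair: observe, intervene on a random latent subset, re-observe."""
    noise = rng.normal(size=d_latent)
    z_pre = sample_latents(noise)
    # An arbitrary non-empty subset of latents receives a perfect intervention.
    subset = rng.choice(d_latent, size=rng.integers(1, d_latent + 1), replace=False)
    intervened = {int(i): float(rng.normal()) for i in subset}
    # Non-intervened nodes reuse the same exogenous noise, so the pair is counterfactual.
    z_post = sample_latents(noise, intervened)
    return G @ z_pre, G @ z_post

x, x_tilde = contrastive_pair()
```

Under the single-node assumption of Brehmer et al. (2022), `subset` would always have size one; the question studied here is what remains identifiable when it does not.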
Similar Papers
Towards Interpretable Deep Generative Models via Causal Representation Learning
Machine Learning (Stat)
Makes AI understand how things cause each other.
Learning General Causal Structures with Hidden Dynamic Process for Climate Analysis
Machine Learning (CS)
Finds hidden causes of weather changes.
Generalization Analysis for Supervised Contrastive Representation Learning under Non-IID Settings
Machine Learning (Stat)
Helps computers learn well even when data points are not independent.