Representation Learning for Distributional Perturbation Extrapolation

Published: April 25, 2025 | arXiv ID: 2504.18522v1

By: Julius von Kügelgen, Jakob Ketterer, Xinwei Shen, and more

Potential Business Impact:

Predicts how cells change with new treatments.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

We consider the problem of modelling the effects of unseen perturbations, such as gene knockdowns or drug combinations, on low-level measurements such as RNA sequencing data. Specifically, given data collected under some perturbations, we aim to predict the distribution of measurements for new perturbations. To address this challenging extrapolation task, we posit that perturbations act additively in a suitable, unknown embedding space. More precisely, we formulate the generative process underlying the observed data as a latent variable model, in which perturbations amount to mean shifts in latent space and can be combined additively. Unlike previous work, we prove that, given sufficiently diverse training perturbations, the representation and perturbation effects are identifiable up to affine transformation, and use this to characterize the class of unseen perturbations for which we obtain extrapolation guarantees. To estimate the model from data, we propose a new method, the perturbation distribution autoencoder (PDAE), which is trained by maximising the distributional similarity between true and predicted perturbation distributions. The trained model can then be used to predict previously unseen perturbation distributions. Empirical evidence suggests that PDAE compares favourably to existing methods and baselines at predicting the effects of unseen perturbations.
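To make the core idea concrete, the sketch below illustrates a latent variable model in which each perturbation is a learnable mean shift in an embedding space and combinations are formed by adding shifts, with a distributional loss between predicted and observed samples. This is not the authors' implementation: the PyTorch architecture, the energy-distance loss, and all names and dimensions are illustrative assumptions.

```python
# Minimal sketch of the additive mean-shift idea behind a perturbation
# distribution autoencoder. All sizes, names, and the choice of loss are
# assumptions for illustration, not the paper's actual implementation.
import torch
import torch.nn as nn

LATENT_DIM, OBS_DIM, N_PERTURBATIONS = 8, 50, 4

class PDAESketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder/decoder map between observation space (e.g. expression
        # measurements) and the unknown embedding space in which
        # perturbations are assumed to act additively.
        self.encoder = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, LATENT_DIM))
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, OBS_DIM))
        # One learnable mean-shift vector per elementary perturbation.
        self.shifts = nn.Parameter(torch.zeros(N_PERTURBATIONS, LATENT_DIM))

    def forward(self, x_control, which):
        # `which` is a 0/1 indicator vector over elementary perturbations;
        # a combination is modelled as the sum of the selected shifts.
        z = self.encoder(x_control)
        shift = which @ self.shifts          # additive combination in latent space
        return self.decoder(z + shift)

def energy_distance(x, y):
    # Empirical energy distance between two samples: one possible
    # distributional similarity measure for matching predicted and
    # observed perturbation distributions.
    return (2 * torch.cdist(x, y).mean()
            - torch.cdist(x, x).mean()
            - torch.cdist(y, y).mean())

# Toy training step on synthetic data: push the predicted distribution
# under perturbation 0 towards the observed perturbed samples.
model = PDAESketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_control = torch.randn(128, OBS_DIM)           # control-condition samples
x_perturbed = torch.randn(128, OBS_DIM) + 0.5   # samples under perturbation 0
which = torch.zeros(N_PERTURBATIONS)
which[0] = 1.0

opt.zero_grad()
loss = energy_distance(model(x_control, which), x_perturbed)
loss.backward()
opt.step()

# Extrapolation: predict an unseen combination (perturbations 0 and 2 together)
# by summing their learned latent shifts.
combo = torch.zeros(N_PERTURBATIONS)
combo[0], combo[2] = 1.0, 1.0
with torch.no_grad():
    predicted_combo_samples = model(x_control, combo)
```

In this sketch the extrapolation guarantee corresponds to the fact that a combination never seen during training is still expressible as a sum of learned shift vectors, so the model can decode it without retraining.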

Country of Origin
🇨🇭 Switzerland

Page Count
24 pages

Category
Statistics: Machine Learning (stat.ML)