Probability Density from Latent Diffusion Models for Out-of-Distribution Detection
By: Joonas Järve, Karl Kaspar Haavel, Meelis Kull
Potential Business Impact:
Helps AI know when it sees something new.
Despite rapid advances in AI, safety remains the main bottleneck to deploying machine-learning systems. A critical safety component is out-of-distribution (OOD) detection: given an input, decide whether it comes from the same distribution as the training data. In generative models, the most natural OOD score is the data likelihood; in fact, as we show in this work, under the assumption of uniformly distributed OOD data the likelihood is the optimal OOD detector (intuitively, the likelihood ratio against a constant OOD density is monotone in the in-distribution likelihood itself). However, earlier work reported that likelihood often fails in practice, raising doubts about its usefulness. We investigate whether density estimation in a representation space suffers from the same failure, or whether the problem is specific to the pixel space on which generative models are typically trained. To test this, we train a Variational Diffusion Model not on images but on the representation space of a pre-trained ResNet-18, and assess the performance of our likelihood-based detector against state-of-the-art methods from the OpenOOD suite.
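The approach described above can be summarized as: extract features with a frozen pre-trained ResNet-18, fit a density model to the in-distribution features, and flag inputs with low likelihood as OOD. The sketch below illustrates that idea only; the Gaussian density model and the helper function names are hypothetical stand-ins (the paper fits a Variational Diffusion Model on the representations), and a standard PyTorch/torchvision setup is assumed.

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative sketch: score inputs by log-likelihood in a frozen
# ResNet-18 representation space. A multivariate Gaussian stands in
# for the paper's Variational Diffusion Model.

# 1. Frozen feature extractor: ResNet-18 without its classification head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feature_extractor = nn.Sequential(*list(backbone.children())[:-1]).eval()

@torch.no_grad()
def extract_features(images: torch.Tensor) -> torch.Tensor:
    """Map a batch of images (N, 3, H, W) to 512-d representations."""
    return feature_extractor(images).flatten(1)

# 2. Fit a density model on in-distribution training features.
#    (The paper trains a Variational Diffusion Model here instead.)
def fit_gaussian(train_features: torch.Tensor):
    mean = train_features.mean(dim=0)
    cov = torch.cov(train_features.T) + 1e-4 * torch.eye(train_features.shape[1])
    return torch.distributions.MultivariateNormal(mean, covariance_matrix=cov)

# 3. OOD score = negative log-likelihood; higher means "more OOD".
@torch.no_grad()
def ood_score(density, images: torch.Tensor) -> torch.Tensor:
    feats = extract_features(images)
    return -density.log_prob(feats)
```

Thresholding this score reproduces the likelihood-based decision rule discussed in the abstract; only the choice of density model differs from the paper's setup.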
Similar Papers
Revisiting Likelihood-Based Out-of-Distribution Detection by Modeling Representations
Machine Learning (CS)
Makes AI better at spotting fake or wrong information.
A Simple and Effective Method for Uncertainty Quantification and OOD Detection
Machine Learning (CS)
Finds when computer guesses are wrong.
Latent space analysis and generalization to out-of-distribution data
Machine Learning (Stat)
Finds when computers are shown wrong information.