Revisiting Likelihood-Based Out-of-Distribution Detection by Modeling Representations
By: Yifan Ding, Arturas Aleksandraus, Amirhossein Ahmadian, and more
Potential Business Impact:
Helps AI systems recognize inputs unlike their training data, improving reliability in safety-critical settings.
Out-of-distribution (OOD) detection is critical for ensuring the reliability of deep learning systems, particularly in safety-critical applications. Likelihood-based deep generative models have historically faced criticism for their unsatisfactory performance in OOD detection, often assigning higher likelihoods to OOD data than to in-distribution samples when applied to image data. In this work, we demonstrate that likelihood is not inherently flawed. Rather, several properties of the image space prevent likelihood from serving as a valid detection score. Given a sufficiently good likelihood estimator, specifically the probability flow formulation of a diffusion model, we show that likelihood-based methods can still perform on par with state-of-the-art methods when applied in the representation space of pre-trained encoders. The code for our work can be found at https://github.com/limchaos/Likelihood-OOD.git.
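As a rough illustration of the pipeline the abstract describes, the sketch below scores encoder features with the probability flow ODE of a VP-SDE diffusion model, estimating log-likelihood via the instantaneous change-of-variables formula with a Hutchinson trace estimator. This is not the authors' released code (see the repository linked above); `encoder`, `score_model`, the noise-schedule values, and the step count are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's released code): score features
# from a frozen pre-trained encoder with the probability flow ODE of a VP-SDE
# diffusion model. `score_model(x, t)` is assumed to approximate the score
# grad_x log p_t(x) of the diffusion trained over encoder features.
import math
import torch

def vp_beta(t, beta_min=0.1, beta_max=20.0):
    # Linear VP-SDE noise schedule (illustrative default values).
    return beta_min + t * (beta_max - beta_min)

def pf_ode_drift(score_model, x, t):
    # Probability flow ODE drift of the VP-SDE:
    #   dx/dt = -0.5 * beta(t) * (x + score(x, t))
    return -0.5 * vp_beta(t) * (x + score_model(x, t))

def log_likelihood(score_model, feats, n_steps=100, t0=1e-3):
    # Integrate the probability flow ODE from t0 to 1 with Euler steps,
    # accumulating the divergence of the drift (instantaneous change of
    # variables) with a single-sample Hutchinson trace estimator.
    x = feats.clone()
    delta_logp = torch.zeros(x.shape[0], device=x.device)
    dt = (1.0 - t0) / (n_steps - 1)
    for t in torch.linspace(t0, 1.0, n_steps, device=x.device)[:-1]:
        with torch.enable_grad():
            x_in = x.detach().requires_grad_(True)
            drift = pf_ode_drift(score_model, x_in, t)
            v = torch.randint_like(x_in, 2) * 2 - 1        # Rademacher noise
            vjp = torch.autograd.grad(drift, x_in, grad_outputs=v)[0]
            div = (vjp * v).flatten(1).sum(dim=1)          # ~ Jacobian trace
        x = x + drift.detach() * dt
        delta_logp = delta_logp + div.detach() * dt
    # At t = 1 the VP-SDE marginal is approximately a standard Gaussian prior.
    d = x[0].numel()
    prior_logp = -0.5 * d * math.log(2 * math.pi) \
                 - 0.5 * (x ** 2).flatten(1).sum(dim=1)
    return prior_logp + delta_logp  # log p(feats)

# Usage sketch: OOD detection reduces to thresholding this score.
# feats = encoder(images)                      # frozen pre-trained encoder
# scores = log_likelihood(score_model, feats)  # low score => likely OOD
```

A solver such as `torchdiffeq.odeint` with adaptive step size would typically replace the fixed Euler loop in practice; the Euler version above keeps the change-of-variables bookkeeping explicit.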
Similar Papers
Probability Density from Latent Diffusion Models for Out-of-Distribution Detection
Machine Learning (CS)
Helps AI know when it sees something new.
Revisiting Out-of-Distribution Detection in Real-time Object Detection: From Benchmark Pitfalls to a New Mitigation Paradigm
CV and Pattern Recognition
Teaches object detectors to flag unfamiliar objects instead of misidentifying them.
Improving Out-of-Distribution Detection with Markov Logic Networks
Machine Learning (CS)
Helps computers flag inputs that fall outside their training data.