Enclosing Prototypical Variational Autoencoder for Explainable Out-of-Distribution Detection
By: Conrad Orglmeister, Erik Bochinski, Volker Eiselein, and more
Potential Business Impact:
Helps computers know when they don't know.
Understanding the decision-making and trusting the reliability of deep machine learning models is crucial for adopting such methods in safety-relevant applications. We extend self-explainable prototypical variational models with autoencoder-based out-of-distribution (OOD) detection: a Variational Autoencoder is applied to learn a meaningful latent space that can be used for distance-based classification, likelihood estimation for OOD detection, and reconstruction. The in-distribution (ID) region is defined by a Gaussian mixture distribution whose learned prototypes represent the center of each mode. Furthermore, a novel restriction loss is introduced that promotes a compact ID region in the latent space without collapsing it into single points. The reconstructive capabilities of the autoencoder ensure the explainability of the prototypes and of the classifier's ID region, further aiding the discrimination of OOD samples. Extensive evaluations on common OOD detection benchmarks as well as on a large-scale dataset from a real-world railway application demonstrate the usefulness of the approach, which outperforms previous methods.
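To make the described architecture concrete, below is a minimal sketch of how a prototypical VAE with a Gaussian-mixture ID region and a restriction-style loss could look. This is not the authors' implementation: the class and function names (PrototypicalVAE, loss_fn, ood_score), the network sizes, the margin-based form of the restriction term, and the choice of minimal prototype distance as the OOD score are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of a prototypical VAE with class prototypes as the
# mode centers of a Gaussian mixture in latent space (assumed design).
class PrototypicalVAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.fc_mu = nn.Linear(256, latent_dim)
        self.fc_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim), nn.Sigmoid(),
        )
        # One learned prototype per class: the centers of the ID mixture modes.
        self.prototypes = nn.Parameter(torch.randn(num_classes, latent_dim))

    def encode(self, x):
        h = self.encoder(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        x_hat = self.decoder(z)
        # Distance-based classification: logits are negative squared
        # Euclidean distances to the prototypes.
        dists = torch.cdist(z, self.prototypes) ** 2
        return x_hat, mu, logvar, -dists, dists


def loss_fn(x, x_hat, mu, logvar, logits, dists, y, margin=3.0):
    recon = F.mse_loss(x_hat, x)                                   # reconstruction
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp()) # VAE regularizer
    cls = F.cross_entropy(logits, y)                               # prototype classification
    # Assumed form of a restriction term: penalize only distances to the true-class
    # prototype beyond a margin, keeping the ID region compact without collapsing
    # samples onto the prototypes themselves.
    d_true = dists.gather(1, y.unsqueeze(1)).squeeze(1)
    restrict = F.relu(d_true - margin).mean()
    return recon + kld + cls + restrict


def ood_score(model, x):
    # One plausible OOD score: distance to the nearest prototype (larger = more OOD).
    with torch.no_grad():
        mu, _ = model.encode(x)
        dists = torch.cdist(mu, model.prototypes) ** 2
        return dists.min(dim=1).values
```

In such a setup, the decoder lets the learned prototypes be visualized by reconstruction, which is the sense in which the paper describes the prototypes and the ID region as explainable.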
Similar Papers
A Variational Information Theoretic Approach to Out-of-Distribution Detection
Machine Learning (CS)
Teaches computers to spot fake or wrong information.
Optimizing Latent Dimension Allocation in Hierarchical VAEs: Balancing Attenuation and Information Retention for OOD Detection
Machine Learning (CS)
Finds weird computer inputs before they cause problems.
Guaranteeing Out-Of-Distribution Detection in Deep RL via Transition Estimation
Machine Learning (CS)
Helps robots know when they are lost.