Pseudo-label Induced Subspace Representation Learning for Robust Out-of-Distribution Detection
By: Tarhib Al Azad, Faizul Rakib Sayem, Shahana Ibrahim
Potential Business Impact:
Helps AI spot data it has never seen before.
Out-of-distribution (OOD) detection lies at the heart of robust artificial intelligence (AI), aiming to identify samples from novel distributions beyond the training set. Recent approaches have exploited feature representations as distinguishing signatures for OOD detection. However, most existing methods rely on restrictive assumptions on the feature space that limit the separability between in-distribution (ID) and OOD samples. In this work, we propose a novel OOD detection framework based on a pseudo-label-induced subspace representation that operates under more relaxed and natural assumptions than existing feature-based techniques. In addition, we introduce a simple yet effective learning criterion that integrates a cross-entropy-based ID classification loss with a subspace distance-based regularization loss to enhance ID-OOD separability. Extensive experiments validate the effectiveness of our framework.
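The abstract combines a cross-entropy ID classification loss with a subspace distance-based regularization loss. The paper's exact formulation is not given here, so the following is only a minimal numpy sketch of one plausible reading: per-class subspaces are fit to the features of each pseudo-label class via SVD, and the regularizer penalizes the residual of features outside their class subspace. The subspace rank `rank` and trade-off weight `lam` are assumed hyperparameters, not values from the paper.

```python
import numpy as np

def cross_entropy(logits, labels):
    # Standard softmax cross-entropy, averaged over the batch.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def subspace_distance(features, labels, num_classes, rank=2):
    # For each (pseudo-)label class, fit a rank-`rank` subspace to that
    # class's centered features via SVD and penalize the squared residual
    # of the features outside the subspace.
    total = 0.0
    for c in range(num_classes):
        X = features[labels == c]
        if len(X) == 0:
            continue
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        basis = Vt[:rank]            # top-`rank` principal directions
        proj = Xc @ basis.T @ basis  # projection onto the class subspace
        total += np.sum((Xc - proj) ** 2)
    return total / len(features)

def total_loss(logits, features, labels, num_classes, lam=0.1, rank=2):
    # Hypothetical combined criterion: classification + subspace regularizer.
    return cross_entropy(logits, labels) + lam * subspace_distance(
        features, labels, num_classes, rank=rank
    )
```

In this reading, ID features of each class concentrate near a low-dimensional subspace, so OOD samples, which fit no class subspace well, incur large residuals and become easier to separate.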
Similar Papers
Prompt Optimization Meets Subspace Representation Learning for Few-shot Out-of-Distribution Detection
Machine Learning (CS)
AI spots new things it hasn't seen before.
Activation Subspaces for Out-of-Distribution Detection
Machine Learning (CS)
Helps computers notice data unlike what they learned from.
Perturbations in the Orthogonal Complement Subspace for Efficient Out-of-Distribution Detection
Machine Learning (Stat)
Helps computers know when they see something new.