Towards Robust Artificial Intelligence: Self-Supervised Learning Approach for Out-of-Distribution Detection
By: Wissam Salhab, Darine Ameyed, Hamid Mcheick, and more
Potential Business Impact:
Helps AI spot bad data without examples.
Robustness in AI systems refers to their ability to maintain reliable and accurate performance under varying conditions, including out-of-distribution (OOD) samples, adversarial attacks, and environmental changes. This is crucial in safety-critical domains such as autonomous vehicles, transportation, and healthcare, where malfunctions could have severe consequences. This paper proposes an approach that improves OOD detection without the need for labeled data, thereby increasing AI systems' robustness. The approach leverages the principles of self-supervised learning, allowing the model to learn useful representations from unlabeled data. Combined with graph-theoretical techniques, this enables more efficient identification and categorization of OOD samples. Compared to existing state-of-the-art methods, the approach achieved an Area Under the Receiver Operating Characteristic curve (AUROC) of 0.99.
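The abstract does not spell out the method's details, but the general pattern it describes (self-supervised embeddings combined with a graph-based score, evaluated by AUROC) can be sketched in a few lines of Python. In the sketch below, the random features standing in for self-supervised embeddings, the k-nearest-neighbor graph, the choice of k, and the mean-distance OOD score are all illustrative assumptions, not the authors' exact method.

# Illustrative sketch only: the paper's self-supervised objective and graph
# construction are not specified in the abstract; this shows the generic
# pipeline of (1) unlabeled feature learning, (2) a k-NN graph score over
# in-distribution embeddings, and (3) AUROC evaluation.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Stand-ins for self-supervised embeddings (assumption: any encoder trained
# on unlabeled in-distribution data, e.g. via a contrastive objective).
train_feats = rng.normal(0.0, 1.0, size=(1000, 64))   # in-distribution train
id_test = rng.normal(0.0, 1.0, size=(200, 64))        # in-distribution test
ood_test = rng.normal(3.0, 1.0, size=(200, 64))       # shifted OOD test

# Graph-theoretic step (assumed form): a k-NN graph over training embeddings.
k = 10
nn = NearestNeighbors(n_neighbors=k).fit(train_feats)

def ood_score(x):
    # Mean distance to the k nearest in-distribution nodes; higher = more OOD.
    dists, _ = nn.kneighbors(x)
    return dists.mean(axis=1)

scores = np.concatenate([ood_score(id_test), ood_score(ood_test)])
labels = np.concatenate([np.zeros(len(id_test)), np.ones(len(ood_test))])

# AUROC: the probability that a randomly chosen OOD sample scores higher
# than a randomly chosen in-distribution sample (1.0 = perfect separation).
print(f"AUROC = {roc_auc_score(labels, scores):.3f}")

On this toy data the separation is nearly perfect; the paper's reported AUROC of 0.99 refers to its own benchmarks, not to this sketch.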
Similar Papers
Out-of-Distribution Detection for Safety Assurance of AI and Autonomous Systems
Artificial Intelligence
Helps self-driving cars spot unexpected dangers.
Can We Ignore Labels In Out of Distribution Detection?
Machine Learning (CS)
Finds when AI can't tell good from bad.
Pseudo-label Induced Subspace Representation Learning for Robust Out-of-Distribution Detection
Machine Learning (CS)
Helps AI spot fake or new information.