Out-of-Distribution Detection Methods Answer the Wrong Questions

Published: July 2, 2025 | arXiv ID: 2507.01831v1

By: Yucen Lily Li, Daohan Lu, Polina Kirichenko, and more

Potential Business Impact:

Shows why common methods for flagging inputs that AI models were never trained on are unreliable, which matters for deploying models safely.

Business Areas:
Intrusion Detection, Information Technology, Privacy and Security

To detect distribution shifts and improve model safety, many out-of-distribution (OOD) detection methods rely on the predictive uncertainty or features of supervised models trained on in-distribution data. In this paper, we critically re-examine this popular family of OOD detection procedures, and we argue that these methods are fundamentally answering the wrong questions for OOD detection. There is no simple fix to this misalignment, since a classifier trained only on in-distribution classes cannot be expected to identify OOD points; for instance, a cat-dog classifier may confidently misclassify an airplane if it contains features that distinguish cats from dogs, even though the airplane looks nothing like either class. We find that uncertainty-based methods incorrectly conflate high uncertainty with being OOD, while feature-based methods incorrectly conflate large feature-space distance with being OOD. We show how these pathologies manifest as irreducible errors in OOD detection and identify common settings where these methods are ineffective. Additionally, interventions to improve OOD detection, such as feature-logit hybrid methods, scaling of model and data size, epistemic uncertainty representation, and outlier exposure, also fail to address this fundamental misalignment in objectives. We also consider unsupervised density estimation and generative models for OOD detection, which we show have their own fundamental limitations.
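For readers unfamiliar with the two score families the abstract critiques, the short sketch below is illustrative only and is not code from the paper: it computes an uncertainty-based score (maximum softmax probability over a classifier's logits) and a feature-based score (Mahalanobis distance to class-conditional Gaussians fitted on in-distribution features). The function names, toy data, and shared-covariance choice are assumptions made for illustration.

# Minimal sketch (not the paper's code) of the two OOD score families
# discussed in the abstract: an uncertainty-based score and a feature-based
# score. Higher score = "more in-distribution" under each heuristic.
import numpy as np

def msp_score(logits):
    """Uncertainty-based score: maximum softmax probability per input."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1)

def fit_class_gaussians(features, labels):
    """Fit per-class means and a shared covariance on in-distribution features."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return means, np.linalg.inv(cov)

def mahalanobis_score(features, means, cov_inv):
    """Feature-based score: negative Mahalanobis distance to the nearest class mean."""
    dists = []
    for mu in means.values():
        d = features - mu
        dists.append(np.einsum("ij,jk,ik->i", d, cov_inv, d))
    return -np.min(np.stack(dists, axis=1), axis=1)

# Toy usage with random stand-ins for a trained classifier's logits and features.
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(200, 16))
train_labels = rng.integers(0, 2, size=200)
test_logits = rng.normal(size=(5, 2))
test_feats = rng.normal(size=(5, 16))

means, cov_inv = fit_class_gaussians(train_feats, train_labels)
print("MSP scores:        ", msp_score(test_logits))
print("Mahalanobis scores:", mahalanobis_score(test_feats, means, cov_inv))

In the paper's framing, both kinds of score answer a question about the in-distribution classifier (how confident it is, or how far an input sits in its feature space) rather than whether the input actually came from the training distribution, which is the source of the misalignment the abstract describes.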

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
26 pages

Category
Computer Science:
Machine Learning (CS)