Memorize Early, Then Query: Inlier-Memorization-Guided Active Outlier Detection
By: Minseo Kang, Seunghwan Park, Dongha Kim
Potential Business Impact:
Finds weird data by teaching computers what's normal.
Outlier detection (OD) aims to identify abnormal instances, known as outliers or anomalies, by learning the typical patterns of normal data, or inliers. Performing OD in an unsupervised regime, without any information about anomalous instances in the training data, is challenging. A recently observed phenomenon known as the inlier-memorization (IM) effect, whereby deep generative models (DGMs) tend to memorize inlier patterns during early training, provides a promising signal for distinguishing outliers. However, existing unsupervised approaches that rely solely on the IM effect still struggle when inliers and outliers are not well separated or when outliers form dense clusters. To address these limitations, we incorporate active learning to selectively acquire informative labels and propose IMBoost, a novel framework that explicitly reinforces the IM effect to improve outlier detection. Our method consists of two stages: 1) a warm-up phase that induces and promotes the IM effect, and 2) a polarization phase in which actively queried samples are used to maximize the discrepancy between inlier and outlier scores. In particular, we propose a novel query strategy and a tailored loss function for the polarization phase to effectively identify informative samples and fully exploit the limited labeling budget. We provide a theoretical analysis showing that IMBoost consistently decreases the inlier risk while increasing the outlier risk throughout training, thereby amplifying their separation. Extensive experiments on diverse benchmark datasets demonstrate that IMBoost not only significantly outperforms state-of-the-art active OD methods but also requires substantially less computational cost.
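The two-stage recipe in the abstract (warm-up, then query-and-polarize) can be sketched on toy data. The sketch below is an illustrative stand-in, not the paper's method: the scorer (squared distance to a learned center), the query rule (points nearest a contamination-based score threshold), the push weight `lam`, and all hyperparameters are assumptions, since the abstract gives no implementation details.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 inliers near the origin, 20 outliers near (4, 4).
inliers = rng.normal(0.0, 0.5, size=(200, 2))
outliers = rng.normal(4.0, 0.5, size=(20, 2))
X = np.vstack([inliers, outliers])
y = np.concatenate([np.zeros(200), np.ones(20)])  # oracle labels: 1 = outlier

def score(X, c):
    """Outlier score: squared distance to a learned center c (assumed scorer)."""
    return np.sum((X - c) ** 2, axis=1)

# Stage 1 (warm-up): gradient steps pull c toward the bulk of the data.
# Because inliers dominate, c settles near the inlier cluster -- a crude
# analogue of the inlier-memorization effect.
c = rng.normal(size=2)
for _ in range(50):
    c -= 0.05 * 2 * (c - X.mean(axis=0))

# Stage 2 (polarization): spend the labeling budget on the most ambiguous
# points (scores nearest an assumed threshold), then pull c toward labeled
# inliers and push it away from labeled outliers.
budget, lam = 10, 0.3            # assumed budget and push weight
s = score(X, c)
threshold = np.quantile(s, 0.9)  # assumes roughly 10% contamination
queried = np.argsort(np.abs(s - threshold))[:budget]
q_in = queried[y[queried] == 0]
q_out = queried[y[queried] == 1]

for _ in range(100):
    grad = np.zeros(2)
    if len(q_in):
        grad += 2 * (c - X[q_in]).mean(axis=0)         # decrease inlier scores
    if len(q_out):
        grad -= lam * 2 * (c - X[q_out]).mean(axis=0)  # increase outlier scores
    c -= 0.05 * grad

s = score(X, c)
gap = s[y == 1].mean() - s[y == 0].mean()
print(f"mean outlier score minus mean inlier score: {gap:.2f}")
```

The asymmetric loss in stage 2 mirrors the abstract's claim at toy scale: inlier risk goes down, outlier risk goes up, so the score gap widens.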
Similar Papers
OOD Detection with immature Models
Machine Learning (CS)
Makes AI better at spotting fake pictures.
Large Language Model Enhanced Graph Invariant Contrastive Learning for Out-of-Distribution Recommendation
Information Retrieval
Helps movie suggestions work even with new users.
One Model, Many Behaviors: Training-Induced Effects on Out-of-Distribution Detection
CV and Pattern Recognition
Helps computers know when they see something new.