Bridging Unsupervised and Semi-Supervised Anomaly Detection: A Theoretically-Grounded and Practical Framework with Synthetic Anomalies
By: Matthew Lau, Tian-Yi Zhou, Xiangchi Yuan, et al.
Potential Business Impact:
Finds hidden problems by creating fake ones.
Anomaly detection (AD) is a critical task across domains such as cybersecurity and healthcare. In the unsupervised setting, an effective and theoretically grounded principle is to train classifiers to distinguish normal data from (synthetic) anomalies. We extend this principle to semi-supervised AD, where the training data also include a small labeled subset of anomalies that may reappear at test time. We propose a theoretically grounded and empirically effective framework for semi-supervised AD that combines known and synthetic anomalies during training. To analyze this setting, we introduce the first mathematical formulation of semi-supervised AD, which generalizes the unsupervised case. We show that synthetic anomalies enable (i) better anomaly modeling in low-density regions and (ii) optimal convergence guarantees for neural network classifiers -- the first such theoretical result for semi-supervised AD. We validate our framework empirically on five diverse benchmarks, observing consistent performance gains. These gains also carry over to other classification-based AD methods, demonstrating the generalizability of the synthetic-anomaly principle in AD.
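To make the core idea concrete, here is a minimal sketch of the synthetic-anomaly principle described in the abstract: a classifier is trained to separate normal data from a mix of known (labeled) anomalies and synthetic anomalies sampled uniformly over the data's bounding box, so that low-density regions are covered. This is an illustrative toy example, not the authors' implementation; the data distributions, sampling scheme, and classifier choice are all assumptions for demonstration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Normal data: a tight 2-D Gaussian cluster (stand-in for the normal class).
X_normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))

# A small labeled subset of anomalies, as in the semi-supervised setting.
X_known_anom = rng.normal(loc=4.0, scale=0.5, size=(20, 2))

# Synthetic anomalies: uniform samples over an enlarged bounding box,
# which places probes in the low-density regions around the normal data.
lo = X_normal.min(axis=0) - 2.0
hi = X_normal.max(axis=0) + 2.0
X_synth = rng.uniform(lo, hi, size=(500, 2))

# Train a classifier: normal (label 0) vs known + synthetic anomalies (label 1).
X = np.vstack([X_normal, X_known_anom, X_synth])
y = np.concatenate([
    np.zeros(len(X_normal)),
    np.ones(len(X_known_anom) + len(X_synth)),
])
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500,
                    random_state=0).fit(X, y)

# Anomaly score = predicted probability of the anomaly class.
score_normal = clf.predict_proba(np.array([[0.0, 0.0]]))[0, 1]
score_anom = clf.predict_proba(np.array([[4.0, 4.0]]))[0, 1]
print(score_normal, score_anom)  # the far-away point should score higher
```

The uniform sampling stands in for the low-density-region coverage argument in the paper; in practice the synthetic-anomaly distribution and the network architecture would follow the authors' framework rather than this toy setup.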
Similar Papers
Towards Real Unsupervised Anomaly Detection Via Confident Meta-Learning
CV and Pattern Recognition
Finds bad things even in messy data.
Unsupervised Surrogate Anomaly Detection
Machine Learning (CS)
Finds weird things in data.
A Transfer Learning Framework for Anomaly Detection in Multivariate IoT Traffic Data
Machine Learning (CS)
Finds hidden problems in computer data without labels.