Strengthening Anomaly Awareness
By: Adam Banda, Charanjit K. Khosa, Veronica Sanz
Potential Business Impact:
Helps computers flag unusual events, such as network attacks or rare physics signals, that purely unsupervised methods miss, using only a handful of labeled examples.
We present a refined version of the Anomaly Awareness framework for enhancing unsupervised anomaly detection. Our approach introduces minimal supervision into Variational Autoencoders (VAEs) through a two-stage training strategy: the model is first trained in an unsupervised manner on background data, and then fine-tuned using a small sample of labeled anomalies to encourage larger reconstruction errors for anomalous samples. We validate the method across diverse domains, including the MNIST dataset with synthetic anomalies, network intrusion data from the CICIDS benchmark, collider physics data from the LHCO2020 dataset, and simulated events from the Standard Model Effective Field Theory (SMEFT). The latter provides a realistic example of subtle kinematic deviations in Higgs boson production. In all cases, the model demonstrates improved sensitivity to unseen anomalies, achieving better separation between normal and anomalous samples. These results indicate that even limited anomaly information, when incorporated through targeted fine-tuning, can substantially improve the generalization and performance of unsupervised models for anomaly detection.
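The sketch below illustrates the two-stage strategy described in the abstract: a VAE is first trained unsupervised on background data, then fine-tuned so that a small labeled anomaly sample receives larger reconstruction errors. This is a minimal PyTorch sketch under assumptions; the network sizes, the hinge/margin fine-tuning term, the weight `lambda_a`, and the placeholder data are illustrative choices, not the authors' exact implementation.

```python
# Minimal sketch of the two-stage Anomaly Awareness idea (assumed details).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, in_dim=784, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        return self.dec(z), mu, logvar

def recon_error(model, x):
    """Per-sample mean squared reconstruction error, used as the anomaly score."""
    x_hat, _, _ = model(x)
    return ((x_hat - x) ** 2).mean(dim=1)

def vae_loss(model, x):
    """Standard VAE objective: reconstruction term plus KL regularisation."""
    x_hat, mu, logvar = model(x)
    recon = ((x_hat - x) ** 2).mean(dim=1).sum()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum()
    return recon + kl

# --- Stage 1: unsupervised training on background-only data ---------------
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
background = torch.rand(512, 784)   # placeholder for real background data
for _ in range(50):
    opt.zero_grad()
    vae_loss(model, background).backward()
    opt.step()

# --- Stage 2: fine-tune with a small labeled anomaly sample ---------------
# A hinge term pushes the reconstruction error of anomalies above a margin;
# margin and lambda_a are assumed values for illustration only.
anomalies = torch.rand(32, 784)     # placeholder for the few labeled anomalies
margin, lambda_a = 1.0, 0.5
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(20):
    opt.zero_grad()
    loss = vae_loss(model, background) \
         + lambda_a * F.relu(margin - recon_error(model, anomalies)).sum()
    loss.backward()
    opt.step()

# At test time, samples whose reconstruction error exceeds a chosen threshold
# are flagged as anomalous.
scores = recon_error(model, torch.rand(8, 784))
```

In this reading, the fine-tuning stage reuses the background reconstruction objective so the model does not forget the normal data, while the added margin term enlarges the gap between normal and anomalous reconstruction errors.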
Similar Papers
Bayesian Autoencoder for Medical Anomaly Detection: Uncertainty-Aware Approach for Brain MRI Analysis
Machine Learning (CS)
Finds hidden problems in brain scans.
Anomaly Detection in Time Series Data Using Reinforcement Learning, Variational Autoencoder, and Active Learning
Machine Learning (CS)
Finds weird patterns in data automatically.
Application of Deep Generative Models for Anomaly Detection in Complex Financial Transactions
Machine Learning (CS)
Finds hidden money crimes in big bank transfers.