Towards Foundation Auto-Encoders for Time-Series Anomaly Detection
By: Gastón García González, Pedro Casas, Emilio Martínez and more
Potential Business Impact:
Finds weird patterns in data streams.
We investigate a novel approach to time-series modeling, inspired by the successes of large pretrained foundation models. We introduce FAE (Foundation Auto-Encoders), a foundation generative-AI model for anomaly detection in time-series data, based on Variational Auto-Encoders (VAEs). By foundation, we mean a model pretrained on massive amounts of time-series data, which can learn complex temporal patterns useful for accurate modeling, forecasting, and anomaly detection in previously unseen datasets. FAE combines VAEs with Dilated Convolutional Neural Networks (DCNNs) to build a generic model for univariate time-series modeling, which could eventually perform out-of-the-box, zero-shot anomaly detection. We introduce the main concepts of FAE and present preliminary results on multi-dimensional time-series datasets from various domains, including a real dataset from an operational mobile ISP and the well-known KDD 2021 Anomaly Detection dataset.
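To make the abstract's building blocks concrete, here is a minimal, dependency-free sketch of the three ingredients FAE combines: a dilated causal convolution (the DCNN primitive), the VAE reparameterization trick, and a reconstruction-error anomaly score. The function names and the pure-Python formulation are illustrative assumptions for exposition, not the authors' implementation.

```python
import math
import random

def dilated_causal_conv(x, weights, dilation):
    """Causal 1-D convolution: y[t] = sum_i w[i] * x[t - i*dilation].
    Left zero-padding keeps the output the same length as the input;
    stacking layers with dilations 1, 2, 4, ... grows the receptive
    field exponentially, as in DCNN/WaveNet-style architectures."""
    y = []
    for t in range(len(x)):
        s = 0.0
        for i, w in enumerate(weights):
            idx = t - i * dilation
            if idx >= 0:
                s += w * x[idx]
        y.append(s)
    return y

def reparameterize(mu, log_var, rng):
    """VAE reparameterization trick: z = mu + sigma * eps, eps ~ N(0, 1),
    so sampling the latent code stays differentiable w.r.t. mu, log_var."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def anomaly_score(x, x_hat):
    """Pointwise reconstruction error; time steps the pretrained model
    cannot reconstruct well (high score) are flagged as anomalous."""
    return [abs(a - b) for a, b in zip(x, x_hat)]
```

For example, with kernel `[1.0, 1.0]` and dilation 2, an impulse at t=0 produces responses at t=0 and t=2, showing how dilation widens the temporal context without extra parameters.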
Similar Papers
Strengthening Anomaly Awareness
High Energy Physics - Phenomenology
Finds weird things computers missed before.
Improving Variational Autoencoder using Random Fourier Transformation: An Aviation Safety Anomaly Detection Case-Study
Machine Learning (CS)
Helps computers find weird things in data faster.
Counterfactual Explanation for Auto-Encoder Based Time-Series Anomaly Detection
Machine Learning (CS)
Explains why machines are acting weird.