MADLLM: Multivariate Anomaly Detection via Pre-trained LLMs
By: Wei Tao, Xiaoyang Qu, Kai Lu, and more
Potential Business Impact:
Finds weird patterns in data using smart text tricks.
When pre-trained large language models (LLMs) are applied to anomaly detection, the multivariate time series (MTS) modality of the data does not align with the text modality the LLMs were trained on. Existing methods simply split the MTS data into multiple univariate time series sequences, which discards information such as the correlations between different features. This paper introduces MADLLM, a novel multivariate anomaly detection method built on pre-trained LLMs. We design a new triple encoding technique to align the MTS modality with the text modality of LLMs. Specifically, this technique integrates the traditional patch embedding method with two novel embedding approaches: Skip Embedding, which alters the order in which patches are processed so that the LLM retains knowledge of earlier features, and Feature Embedding, which leverages contrastive learning so that the model better understands the correlations between different features. Experimental results demonstrate that our method outperforms state-of-the-art methods on various public anomaly detection datasets.
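The abstract does not spell out how Skip Embedding reorders the patches, but one plausible reading is a time-step-first interleaving: instead of emitting all patches of feature 0, then all patches of feature 1, the encoder emits patch 0 of every feature before any feature's patch 1, so earlier context from all features stays close together in the sequence. The sketch below is a hypothetical illustration of that ordering, not the authors' code; all function names and the patch length are assumptions.

```python
# Hypothetical sketch of patch ordering for multivariate time series (MTS).
# "traditional_order" mimics the feature-by-feature patching the paper says
# existing methods use; "skip_order" is one possible reading of Skip Embedding.

def make_patches(series, patch_len):
    """Split one univariate sequence into non-overlapping patches."""
    return [series[i:i + patch_len] for i in range(0, len(series), patch_len)]

def traditional_order(mts, patch_len):
    """Feature-by-feature: all patches of feature 0, then feature 1, ..."""
    order = []
    for f, series in enumerate(mts):
        for p, patch in enumerate(make_patches(series, patch_len)):
            order.append((f, p, patch))
    return order

def skip_order(mts, patch_len):
    """Time-step-first interleaving: the model sees patch 0 of every
    feature before any feature's patch 1, keeping earlier context from
    all features adjacent in the token sequence."""
    patched = [make_patches(series, patch_len) for series in mts]
    n_patches = len(patched[0])
    order = []
    for p in range(n_patches):
        for f in range(len(mts)):
            order.append((f, p, patched[f][p]))
    return order

# Toy MTS: 2 features, 4 timesteps each, patch length 2.
mts = [[1, 2, 3, 4],
       [5, 6, 7, 8]]
print(traditional_order(mts, 2))
print(skip_order(mts, 2))
```

Feature Embedding, the contrastive-learning component, is not sketched here: it would additionally train the per-feature representations so that correlated features map to nearby embeddings, which this ordering-only illustration does not capture.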
Similar Papers
A Time Series Multitask Framework Integrating a Large Language Model, Pre-Trained Time Series Model, and Knowledge Graph
Machine Learning (CS)
Helps computers understand time data with words.
Harnessing Vision-Language Models for Time Series Anomaly Detection
CV and Pattern Recognition
Finds weird patterns in data using AI.
Enhancing Time Series Forecasting via Multi-Level Text Alignment with LLMs
Computation and Language
Helps computers predict future trends from data.