CCE: Confidence-Consistency Evaluation for Time Series Anomaly Detection
By: Zhijie Zhong, Zhiwen Yu, Yiu-ming Cheung, and more
Potential Business Impact:
Better ways to check how well systems spot unusual activity in time-series data.
Time Series Anomaly Detection metrics serve as crucial tools for model evaluation. However, existing metrics suffer from several limitations: insufficient discriminative power, strong hyperparameter dependency, sensitivity to perturbations, and high computational overhead. This paper introduces Confidence-Consistency Evaluation (CCE), a novel evaluation metric that simultaneously measures prediction confidence and uncertainty consistency. By employing Bayesian estimation to quantify the uncertainty of anomaly scores, we construct both global and event-level confidence and consistency scores for model predictions, resulting in a concise CCE metric. Theoretically and experimentally, we demonstrate that CCE possesses strict boundedness, Lipschitz robustness against score perturbations, and linear time complexity $\mathcal{O}(n)$. Furthermore, we establish RankEval, a benchmark for comparing the ranking capabilities of various metrics. RankEval represents the first standardized and reproducible evaluation pipeline that enables objective comparison of evaluation metrics. Both CCE and RankEval implementations are fully open-source.
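To make the idea concrete, below is a minimal, illustrative sketch of a confidence-consistency style metric. It is not the paper's exact formulation: the Beta-posterior uncertainty model, the `prior_a`/`prior_b` parameters, the event segmentation, and the aggregation weights are all assumptions chosen for clarity; the real CCE is defined in the paper and its open-source implementation.

```python
import numpy as np

def cce_sketch(scores, labels, prior_a=1.0, prior_b=1.0):
    """Illustrative confidence-consistency style score (NOT the paper's exact CCE).

    scores: anomaly scores in [0, 1], shape (n,)
    labels: binary ground-truth labels, shape (n,)
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)

    # Assumed uncertainty model: treat each score as evidence for a Beta posterior
    # over "this point is anomalous". Posterior mean ~ confidence, variance ~ uncertainty.
    a = prior_a + scores
    b = prior_b + (1.0 - scores)
    post_mean = a / (a + b)
    post_var = (a * b) / ((a + b) ** 2 * (a + b + 1.0))

    # Global confidence: agreement between posterior means and labels.
    confidence = np.mean(labels * post_mean + (1 - labels) * (1.0 - post_mean))

    # Global consistency: lower average posterior variance means more consistent certainty.
    # 0.25 is the maximum variance of any distribution supported on [0, 1].
    consistency = 1.0 - np.mean(post_var) / 0.25

    # Event-level confidence: average posterior mean within each contiguous anomaly segment.
    event_conf, start = [], None
    for i, y in enumerate(labels):
        if y == 1 and start is None:
            start = i
        elif y == 0 and start is not None:
            event_conf.append(post_mean[start:i].mean())
            start = None
    if start is not None:
        event_conf.append(post_mean[start:].mean())
    event_confidence = float(np.mean(event_conf)) if event_conf else confidence

    # Combine global and event-level terms into one bounded score in [0, 1].
    # The equal weighting here is an arbitrary illustrative choice.
    return 0.5 * (0.5 * (confidence + consistency) + 0.5 * event_confidence)
```

Each point is visited a constant number of times, so the sketch runs in linear time, mirroring the O(n) complexity claimed for CCE; the choice of a bounded combination also mirrors the strict boundedness property.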
Similar Papers
Cumulative Consensus Score: Label-Free and Model-Agnostic Evaluation of Object Detectors in Deployment
CV and Pattern Recognition
Checks if computer vision sees things right.
Systematic Evaluation of Uncertainty Estimation Methods in Large Language Models
Computation and Language
Helps computers know when they are wrong.
CoCAI: Copula-based Conformal Anomaly Identification for Multivariate Time-Series
Machine Learning (CS)
Finds weird patterns in data and predicts future events.