A Theoretical Analysis of Detecting Large Model-Generated Time Series

Published: November 10, 2025 | arXiv ID: 2511.07104v1

By: Junji Hou, Junzhou Zhao, Shuo Zhang, et al.

Potential Business Impact:

Detects synthetic time series produced by large forecasting models, helping guard against data fabrication and misuse.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Motivated by the growing risks of data misuse and fabrication, we study the problem of identifying synthetic time series generated by Time-Series Large Models (TSLMs). While there is extensive research on detecting model-generated text, we find that existing methods do not transfer to time series because of a fundamental modality difference: time series typically have lower information density and smoother probability distributions than text, which limits the discriminative power of token-based detectors. To address this, we examine the subtle distributional differences between real and model-generated time series and propose the contraction hypothesis: unlike real time series, model-generated ones exhibit progressively decreasing uncertainty under recursive forecasting, i.e., their predictive distributions become increasingly concentrated. We formally prove this hypothesis under theoretical assumptions on model behavior and time-series structure, and we validate it empirically across diverse datasets. Building on this insight, we introduce the Uncertainty Contraction Estimator (UCE), a white-box detector that aggregates uncertainty metrics over successive prefixes to identify TSLM-generated time series. Extensive experiments on 32 datasets show that UCE consistently outperforms state-of-the-art baselines, offering a reliable and generalizable solution for detecting model-generated time series.
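The contraction hypothesis suggests a simple detection recipe: recursively forecast from a candidate series, track the model's predictive uncertainty at each step, and check whether it shrinks. The sketch below illustrates that idea only; the `predict_dist` interface, the use of predictive standard deviation as the uncertainty metric, and the slope-based aggregation are all assumptions for illustration, not the paper's actual UCE implementation.

```python
import numpy as np

def uncertainty_contraction_score(predict_dist, series, horizon=8):
    """Illustrative sketch of an uncertainty-contraction test.

    predict_dist: assumed white-box interface to the forecasting model;
        takes a 1-D prefix and returns (point_forecast, predictive_std)
        for the next step.
    Recursively forecasts `horizon` steps, feeding each prediction back
    into the prefix, and returns the least-squares slope of the
    predictive std over steps. Under the contraction hypothesis, a
    clearly negative slope points to a model-generated series.
    """
    prefix = list(series)
    stds = []
    for _ in range(horizon):
        point, std = predict_dist(np.asarray(prefix))
        stds.append(std)
        prefix.append(point)  # recursive forecasting step
    steps = np.arange(horizon)
    slope = np.polyfit(steps, np.asarray(stds), 1)[0]
    return slope  # slope < 0: uncertainty is contracting

# Toy stand-in model whose uncertainty shrinks as the prefix grows,
# mimicking the contraction behavior the hypothesis predicts.
def toy_model(prefix):
    return prefix[-1], 1.0 / (1 + len(prefix))

score = uncertainty_contraction_score(toy_model, [0.5, 0.7, 0.6])
```

In practice the paper aggregates uncertainty metrics over successive prefixes of the observed series; the trend-slope summary above is just one plausible aggregation.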

Country of Origin
🇨🇳 China

Page Count
23 pages

Category
Computer Science:
Artificial Intelligence