STELLA: Guiding Large Language Models for Time Series Forecasting with Semantic Abstractions
By: Junjie Fan, Hongye Zhao, Linduo Wei, and more
Potential Business Impact:
Helps computers predict future events more accurately.
Recent adaptations of Large Language Models (LLMs) to time series forecasting often fail to enrich the raw series with additional information, leaving the reasoning capabilities of LLMs underutilized. Existing prompting strategies rely on static correlations rather than generative interpretations of dynamic behavior, and they lack critical global and instance-specific context. To address this, we propose STELLA (Semantic-Temporal Alignment with Language Abstractions), a framework that systematically mines and injects structured supplementary and complementary information. STELLA employs a dynamic semantic abstraction mechanism that decomposes the input series into trend, seasonality, and residual components. It then translates the intrinsic behavioral features of these components into Hierarchical Semantic Anchors: a Corpus-level Semantic Prior (CSP) for global context and a Fine-grained Behavioral Prompt (FBP) for instance-level patterns. Using these anchors as prefix prompts, STELLA guides the LLM to model intrinsic dynamics. Experiments on eight benchmark datasets demonstrate that STELLA outperforms state-of-the-art methods in both long- and short-term forecasting and generalizes better in zero-shot and few-shot settings. Ablation studies further validate the effectiveness of the dynamically generated semantic anchors.
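The abstract describes the pipeline only at a high level, so the following is a minimal sketch of the general idea: decompose a window into trend, seasonality, and residual, summarize each component's behavior as text, and prepend that summary to the serialized series as a prefix prompt. It assumes an STL-style decomposition; the function name `describe_instance`, the feature choices, and the prompt wording are illustrative assumptions, not the paper's actual CSP/FBP construction.

```python
# Illustrative sketch only: STELLA's exact decomposition and prompt
# templates are not specified in the abstract. All names and feature
# choices below are hypothetical stand-ins.
import numpy as np
from statsmodels.tsa.seasonal import STL


def describe_instance(series: np.ndarray, period: int = 24) -> str:
    """Decompose one input window and summarize its component behavior
    as text (a stand-in for an instance-level behavioral prompt)."""
    res = STL(series, period=period).fit()
    # Simple behavioral features per component (assumed, not from paper):
    slope = np.polyfit(np.arange(len(series)), res.trend, 1)[0]
    amplitude = float(res.seasonal.max() - res.seasonal.min())
    noise = float(res.resid.std())
    direction = "upward" if slope > 0 else "downward"
    return (
        f"Trend: {direction} (slope {slope:.4f} per step). "
        f"Seasonality: period {period}, amplitude {amplitude:.3f}. "
        f"Residual: std {noise:.3f}."
    )


# The behavioral summary is prepended as a prefix prompt before the
# serialized series values that are fed to the LLM.
window = np.sin(np.linspace(0, 12 * np.pi, 168)) + 0.01 * np.arange(168)
prompt = describe_instance(window) + " Values: " + ", ".join(
    f"{v:.2f}" for v in window[-16:]
)
print(prompt)
```

In this reading, a corpus-level prior would be built analogously but from statistics aggregated over the whole training set rather than a single window; how the two anchors are fused with the series embedding is a design detail left to the full paper.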
Similar Papers
Semantic-Enhanced Time-Series Forecasting via Large Language Models
Machine Learning (CS)
Helps computers predict future events better.
Guided by Stars: Interpretable Concept Learning Over Time Series via Temporal Logic Semantics
Machine Learning (CS)
Explains why machines make decisions about time data.