Rationale-Grounded In-Context Learning for Time Series Reasoning with Multimodal Large Language Models
By: Qingxiang Liu, Zhiqing Cui, Xiaoliang Luo, and more
Potential Business Impact:
Teaches computers to understand time patterns better.
Existing multimodal large language models underperform on time series reasoning because they lack rationale priors that connect temporal observations to their downstream outcomes, which leads them to rely on superficial pattern matching rather than principled reasoning. We therefore propose rationale-grounded in-context learning for time series reasoning, in which rationales serve as guiding reasoning units rather than post-hoc explanations, and develop the RationaleTS method. Specifically, we first induce label-conditioned rationales, composed of reasoning paths from observable evidence to potential outcomes. We then design a hybrid retrieval mechanism that balances temporal patterns and semantic contexts to retrieve correlated rationale priors for in-context inference on new samples. Extensive experiments demonstrate the effectiveness and efficiency of the proposed RationaleTS on time series reasoning tasks across three domains. We will release our code for reproducibility.
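To make the hybrid retrieval step concrete, here is a minimal sketch of how balancing temporal and semantic similarity might look. This is not the paper's implementation: the z-normalized Euclidean shape distance, cosine similarity over text embeddings, the convex weight `alpha`, and all function and field names below are illustrative assumptions.

```python
import numpy as np

def znorm(x: np.ndarray) -> np.ndarray:
    """Z-normalize a series so retrieval compares shape, not scale."""
    return (x - x.mean()) / (x.std() + 1e-8)

def temporal_similarity(query: np.ndarray, candidate: np.ndarray) -> float:
    """Shape similarity between two equal-length series (higher is closer):
    negative z-normalized Euclidean distance squashed into (0, 1]."""
    dist = np.linalg.norm(znorm(query) - znorm(candidate))
    return 1.0 / (1.0 + dist)

def semantic_similarity(q_emb: np.ndarray, c_emb: np.ndarray) -> float:
    """Cosine similarity between text-context embeddings."""
    denom = np.linalg.norm(q_emb) * np.linalg.norm(c_emb) + 1e-8
    return float(q_emb @ c_emb / denom)

def hybrid_retrieve(query_series, query_emb, bank, alpha=0.5, k=4):
    """Score each rationale exemplar by a convex mix of temporal and
    semantic similarity, then return the top-k rationales as in-context
    priors. `bank` is a list of dicts: {"series", "emb", "rationale"}."""
    scored = []
    for ex in bank:
        score = (alpha * temporal_similarity(query_series, ex["series"])
                 + (1 - alpha) * semantic_similarity(query_emb, ex["emb"]))
        scored.append((score, ex["rationale"]))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [rationale for _, rationale in scored[:k]]
```

Under this reading, the retrieved rationales would be prepended to the prompt as in-context exemplars before querying the multimodal model on the new sample.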
Similar Papers
A Survey of Reasoning and Agentic Systems in Time Series with Large Language Models
Artificial Intelligence
Helps computers understand and act on changing information.
Toward Reasoning-Centric Time-Series Analysis
Artificial Intelligence
Helps computers understand why things change.
Reason2Decide: Rationale-Driven Multi-Task Learning
Artificial Intelligence
Helps doctors make better choices with clear reasons.