An Exploratory Study to Repurpose LLMs to a Unified Architecture for Time Series Classification
By: Hansen He, Shuheng Li
Time series classification (TSC) is a core machine learning problem with broad applications. Recently, there has been growing interest in repurposing large language models (LLMs) for TSC, motivated by their strong reasoning and generalization abilities. Prior work has primarily focused on alignment strategies that explicitly map time series data into the textual domain; however, the choice of time series encoder architecture remains underexplored. In this work, we conduct an exploratory study of hybrid architectures that combine specialized time series encoders with a frozen LLM backbone. We evaluate a diverse set of encoder families, including Inception-based, convolutional, residual, transformer-based, and multilayer perceptron architectures; among these, the Inception encoder is the only one that consistently yields positive performance gains when integrated with an LLM backbone. Overall, this study highlights the impact of time series encoder choice in hybrid LLM architectures and points to Inception-based models as a promising direction for future LLM-driven time series learning.
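To make the hybrid design concrete, the sketch below shows one plausible way a specialized time series encoder could feed a frozen LLM backbone followed by a classification head. This is not the authors' implementation: the simplified Inception-style block, the choice of "gpt2" as the frozen backbone, and all layer sizes are illustrative assumptions.

```python
# Minimal sketch of an encoder + frozen-LLM hybrid for TSC (assumptions noted above).
import torch
import torch.nn as nn
from transformers import GPT2Model


class InceptionBlock(nn.Module):
    """Parallel 1D convolutions with different kernel sizes, concatenated (simplified)."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(in_channels, out_channels, kernel_size=k, padding=k // 2)
            for k in (3, 5, 7)  # hypothetical kernel sizes
        ])
        self.bn = nn.BatchNorm1d(out_channels * 3)

    def forward(self, x):  # x: (batch, channels, length)
        out = torch.cat([branch(x) for branch in self.branches], dim=1)
        return torch.relu(self.bn(out))


class HybridTSClassifier(nn.Module):
    def __init__(self, in_channels: int, num_classes: int, llm_name: str = "gpt2"):
        super().__init__()
        self.encoder = InceptionBlock(in_channels, out_channels=32)
        self.llm = GPT2Model.from_pretrained(llm_name)
        for p in self.llm.parameters():  # keep the LLM backbone frozen
            p.requires_grad = False
        self.project = nn.Linear(32 * 3, self.llm.config.hidden_size)
        self.head = nn.Linear(self.llm.config.hidden_size, num_classes)

    def forward(self, x):  # x: (batch, channels, length)
        feats = self.encoder(x).transpose(1, 2)      # (batch, length, features)
        hidden = self.llm(inputs_embeds=self.project(feats)).last_hidden_state
        return self.head(hidden.mean(dim=1))         # pool over time, then classify


# Usage sketch: 8 samples, 3-channel series of length 128, 5 classes.
model = HybridTSClassifier(in_channels=3, num_classes=5)
logits = model(torch.randn(8, 3, 128))  # -> shape (8, 5)
```

Swapping the `InceptionBlock` for a convolutional, residual, transformer, or MLP encoder under this interface is how the encoder families described above could be compared against one another.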