Accelerating Time Series Foundation Models with Speculative Decoding
By: Pranav Subbaraman, Fang Sun, Yue Yao, and more
Potential Business Impact:
Speeds up predictions for websites and apps.
Modern web applications, from real-time content recommendation and dynamic pricing to CDN optimization, increasingly rely on time-series forecasting to deliver personalized experiences to billions of users. Large-scale Transformer-based models have achieved state-of-the-art performance in time-series forecasting but suffer from high computational costs, limiting their deployment in latency-sensitive web applications. To address this challenge, we propose a general inference acceleration framework that adapts speculative decoding to autoregressive time-series models. Our approach employs a smaller "draft" model to propose future time-series patches, which are then verified in parallel by a larger "target" model, reducing the number of sequential forward passes required. We address key technical challenges in adapting this technique from discrete language tokens to continuous time-series distributions, including the design of acceptance criteria for multivariate Gaussian patches and practical variants that balance efficiency with accuracy. Through experiments on time-series forecasting benchmarks relevant to web applications, we demonstrate significant inference speedups while maintaining competitive accuracy. The framework requires no architectural modifications to existing foundation models, making it immediately applicable to accelerating deployed time-series forecasting systems. Our implementation can be found at https://github.com/PranavSubbaraman/STRIDE.
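To make the draft-and-verify loop concrete, the following Python sketch shows one way speculative decoding can be carried over from discrete tokens to continuous patches, using the standard density-ratio acceptance rule min(1, p(x)/q(x)) applied to diagonal Gaussian predictive distributions. It is an illustration under stated assumptions, not the paper's exact algorithm: the `draft_model` and `target_model` callables (each returning the mean and standard deviation of a Gaussian over the next patch) and the `gamma` draft length are hypothetical, the verification loop is written sequentially although in practice the target model scores all drafts in one batched forward pass, and the rejection step is simplified to resample directly from the target rather than from the exact residual distribution.

```python
import numpy as np

def gaussian_logpdf(x, mean, std):
    # Log-density of a diagonal Gaussian, summed over the patch dimensions.
    return np.sum(-0.5 * np.log(2 * np.pi * std**2) - (x - mean) ** 2 / (2 * std**2))

def speculative_patch_decoding(draft_model, target_model, context, num_patches, gamma=4, rng=None):
    """Sketch of speculative decoding over continuous time-series patches.

    draft_model / target_model: hypothetical callables mapping a context array of
    shape (T, d) to the (mean, std) of a diagonal Gaussian over the next patch.
    gamma: number of patches drafted per verification round (assumed hyperparameter).
    """
    rng = np.random.default_rng() if rng is None else rng
    out = list(context)  # list of patches, each an array of shape (d,)

    while len(out) - len(context) < num_patches:
        # 1) Draft gamma candidate patches autoregressively with the small model.
        drafts, draft_params = [], []
        ctx = np.array(out)
        for _ in range(gamma):
            mu_q, sd_q = draft_model(ctx)
            x = rng.normal(mu_q, sd_q)
            drafts.append(x)
            draft_params.append((mu_q, sd_q))
            ctx = np.concatenate([ctx, x[None]])

        # 2) Verify the drafts with the target model (batched in a real system).
        accepted = 0
        for i, (x, (mu_q, sd_q)) in enumerate(zip(drafts, draft_params)):
            mu_p, sd_p = target_model(np.array(out + drafts[:i]))
            # Accept with probability min(1, p(x)/q(x)): the continuous-density
            # analogue of the discrete speculative-sampling acceptance rule.
            log_ratio = gaussian_logpdf(x, mu_p, sd_p) - gaussian_logpdf(x, mu_q, sd_q)
            if np.log(rng.uniform()) < min(0.0, log_ratio):
                out.append(x)
                accepted += 1
            else:
                # Simplified rejection: resample the patch from the target model
                # and discard the remaining drafts for this round.
                out.append(rng.normal(mu_p, sd_p))
                break

        if accepted == gamma:
            # Bonus patch from the target model when every draft was accepted.
            mu_p, sd_p = target_model(np.array(out))
            out.append(rng.normal(mu_p, sd_p))

    return np.array(out[len(context):len(context) + num_patches])
```

The speedup comes from the fact that each verification round costs roughly one target-model pass over the drafted patches but can emit up to gamma + 1 patches when the draft distribution tracks the target well; when the draft is poor, the scheme degrades gracefully toward ordinary one-patch-at-a-time decoding from the target model.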
Similar Papers
Scaling LLM Speculative Decoding: Non-Autoregressive Forecasting in Large-Batch Scenarios
Computation and Language
Makes AI write faster without wasting power.
Fast Inference via Hierarchical Speculative Decoding
Machine Learning (CS)
Makes AI write stories much faster.