There is No "apple" in Timeseries: Rethinking TSFM through the Lens of Invariance
By: Arian Prabowo, Flora D. Salim
Potential Business Impact:
Makes computer predictions from time-series data better.
Timeseries foundation models (TSFMs) have multiplied, yet lightweight supervised baselines and even classical models often match them. We argue this gap stems from the naive importation of NLP or CV pipelines. In language and vision, large web-scale corpora densely capture human concepts, i.e. there are countless images and texts of apples. In contrast, timeseries data is built to complement the image and text modalities; there is no timeseries dataset that contains the concept "apple". As a result, the scrape-everything-online paradigm fails for TS. We posit that progress demands a shift from opportunistic aggregation to principled design: constructing datasets that systematically span the space of invariances that preserve temporal semantics. To this end, we suggest that the ontology of timeseries invariances should be built from first principles. Only by ensuring representational completeness through invariance coverage can TSFMs achieve the aligned structure necessary for generalisation, reasoning, and truly emergent behaviour.
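To make "invariances that preserve temporal semantics" concrete, here is a minimal sketch, not taken from the paper: a few candidate transforms (amplitude scaling, level shift) under which a forecaster's output should commute with the transform, plus a simple gap measure. The transform names, the toy naive forecaster, and the gap function are all illustrative assumptions, not the authors' proposed ontology.

```python
import numpy as np

# Illustrative sketch (not from the paper): candidate timeseries invariances
# that a principled dataset design might aim to cover, and a crude check of
# whether a forecaster respects them. All names here are hypothetical.

def amplitude_scale(x: np.ndarray, factor: float = 2.0) -> np.ndarray:
    """Rescale the series; the shape of the dynamics is preserved."""
    return factor * x

def level_shift(x: np.ndarray, offset: float = 5.0) -> np.ndarray:
    """Add a constant offset; relative changes are preserved."""
    return x + offset

def naive_forecast(history: np.ndarray, horizon: int = 8) -> np.ndarray:
    """Toy stand-in for a TSFM: repeat the last observed value."""
    return np.full(horizon, history[-1])

def equivariance_gap(history, transform, forecaster=naive_forecast) -> float:
    """Distance between transform-then-forecast and forecast-then-transform."""
    a = forecaster(transform(history))
    b = transform(forecaster(history))
    return float(np.max(np.abs(a - b)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    history = np.cumsum(rng.normal(size=128))  # a random-walk toy series
    for name, tf in [("amplitude_scale", amplitude_scale),
                     ("level_shift", level_shift)]:
        print(f"{name}: gap = {equivariance_gap(history, tf):.3f}")
```

Under this reading, a dataset "spans" an invariance when it contains enough variation along that transform for a model to learn a near-zero gap; which transforms actually belong in the ontology is the question the paper raises.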
Similar Papers
Time Series Foundation Models: Benchmarking Challenges and Requirements
Machine Learning (CS)
Tests if future predictions are truly new.
On the Internal Semantics of Time-Series Foundation Models
Machine Learning (CS)
Shows how computers understand time patterns.
Re(Visiting) Time Series Foundation Models in Finance
Computational Finance
Teaches computers to predict stock prices better.