Beyond Accuracy: Are Time Series Foundation Models Well-Calibrated?
By: Coen Adler, Yuxin Chang, Felix Draxler, and more
Potential Business Impact:
Makes computer predictions more trustworthy and accurate.
The recent development of foundation models for time series data has generated considerable interest in using such models across a variety of applications. Although foundation models achieve state-of-the-art predictive performance, their calibration properties remain relatively underexplored, despite the fact that calibration can be critical for many practical applications. In this paper, we investigate the calibration-related properties of five recent time series foundation models and two competitive baselines. We perform a series of systematic evaluations assessing model calibration (i.e., over- or under-confidence), effects of varying prediction heads, and calibration under long-term autoregressive forecasting. We find that time series foundation models are consistently better calibrated than baseline models and tend not to be either systematically over- or under-confident, in contrast to the overconfidence often seen in other deep learning models.
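The over- or under-confidence mentioned above can be made concrete with a simple coverage check: a well-calibrated 80% prediction interval should contain roughly 80% of the observed values. The sketch below is illustrative only (it is not the paper's evaluation code, and the function name and toy data are assumptions):

```python
import random

def interval_coverage(y_true, lower, upper):
    """Fraction of observations that fall inside [lower, upper].

    For a nominal 80% interval: coverage near 0.80 suggests good
    calibration; well above 0.80 suggests under-confidence (intervals
    too wide); well below 0.80 suggests over-confidence (too narrow).
    """
    inside = sum(1 for y in y_true if lower <= y <= upper)
    return inside / len(y_true)

# Toy example: build an empirical 80% central interval from Gaussian data.
random.seed(0)
obs = [random.gauss(0.0, 1.0) for _ in range(10_000)]
s = sorted(obs)
lo, hi = s[1_000], s[8_999]  # empirical 10th and 90th percentiles
cov = interval_coverage(obs, lo, hi)
```

Comparing such empirical coverage against the nominal level across many horizons and series is one standard way to quantify the calibration gaps the paper studies.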
Similar Papers
Beyond Overconfidence: Foundation Models Redefine Calibration in Deep Neural Networks
Machine Learning (CS)
Makes AI smarter and more honest about what it knows.
Foundation Models and Fine-Tuning: Toward a New Generation of Models for Time Series Forecasting
Machine Learning (CS)
Forecasts future events from new data.
Comparative Analysis of Time Series Foundation Models for Demographic Forecasting: Enhancing Predictive Accuracy in US Population Dynamics
Machine Learning (CS)
Predicts population changes for better city planning.