A comparison between initialization strategies for the infinite hidden Markov model
By: Federico P. Cortese, Luca Rossini
Potential Business Impact:
Helps computers uncover hidden patterns in data whose behaviour changes over time.
Infinite hidden Markov models provide a flexible framework for modelling time series with structural changes and complex dynamics, without requiring the number of latent states to be specified in advance. This flexibility is achieved through the hierarchical Dirichlet process prior, while efficient Bayesian inference is enabled by the beam sampler, which combines dynamic programming with slice sampling to truncate the infinite state space adaptively. Despite extensive methodological developments, the role of initialization in this framework has received limited attention. This study addresses this gap by systematically evaluating initialization strategies commonly used for finite hidden Markov models and assessing their suitability in the infinite setting. Results from both simulated and real datasets show that distance-based clustering initializations consistently outperform model-based and uniform alternatives, the latter being the most widely adopted in the existing literature.
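To make the comparison concrete, the sketch below illustrates (in Python, not the authors' code) the two families of initialization the abstract contrasts: a distance-based clustering start, where k-means labels on the observations seed the latent state sequence, versus a uniform random start, the common default in the literature. Names such as `K0`, `init_states_kmeans`, and `transition_counts` are illustrative assumptions, as are the univariate Gaussian toy data; the beam sampler itself is not implemented here.

```python
# Minimal sketch of two (i)HMM initialization strategies, assuming univariate
# observations and an initial guess K0 for the number of active states.
import numpy as np
from sklearn.cluster import KMeans


def init_states_kmeans(y, K0, seed=0):
    """Distance-based initialization: cluster the observations with k-means
    and use the cluster labels as the starting latent state sequence."""
    km = KMeans(n_clusters=K0, n_init=10, random_state=seed)
    return km.fit_predict(np.asarray(y).reshape(-1, 1))


def init_states_uniform(y, K0, seed=0):
    """Uniform initialization: assign each time point to one of K0 states
    uniformly at random (the widely used default)."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, K0, size=len(y))


def transition_counts(z, K0):
    """Empirical transition counts implied by an initial state sequence;
    these would seed the transition structure before beam sampling."""
    counts = np.zeros((K0, K0), dtype=int)
    for t in range(1, len(z)):
        counts[z[t - 1], z[t]] += 1
    return counts


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy series with two regimes (means 0 and 3).
    y = np.concatenate([rng.normal(0, 1, 200), rng.normal(3, 1, 200)])
    print(transition_counts(init_states_kmeans(y, K0=3), 3))
    print(transition_counts(init_states_uniform(y, K0=3), 3))
```

On the toy series, the k-means start already separates the two regimes, so its implied transition counts are strongly diagonal, whereas the uniform start scatters mass across all transitions; the paper's finding is that this difference in starting points matters for the subsequent sampler.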
Similar Papers
Variational Inference for Fully Bayesian Hierarchical Linear Models
Methodology
Speeds up data analysis, but can be less accurate.
Advanced posterior analyses of hidden Markov models: finite Markov chain imbedding and hybrid decoding
Machine Learning (Stat)
Helps computers understand hidden patterns in data.
Efficient Inference for Coupled Hidden Markov Models in Continuous Time and Discrete Space
Machine Learning (Stat)
Helps predict how fires spread from observations over time.