Approximate Bayesian inference for cumulative probit regression models
By: Emanuele Aliverti
Potential Business Impact:
Speeds up learning from large volumes of ranked or ordered data, such as ratings and survey responses.
Ordinal categorical data are routinely encountered in a wide range of practical applications. When the primary goal is to construct a regression model for ordinal outcomes, cumulative link models are among the most popular choices, linking the cumulative probabilities of the response to a set of covariates through a parsimonious linear predictor shared across response categories. As the number of observations grows, standard sampling algorithms for Bayesian inference scale poorly, making posterior computation increasingly challenging on large datasets. In this article, we propose three scalable algorithms for approximating the posterior distribution of the regression coefficients in cumulative probit models, relying on variational Bayes and expectation propagation. We compare the proposed approaches with inference based on Markov chain Monte Carlo, demonstrating superior computational performance and remarkable accuracy; finally, we illustrate the utility of the proposed algorithms in a challenging case study investigating the structure of a criminal network.
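In one common parameterization, a cumulative probit model for an ordinal response y_i in {1, ..., K} with covariates x_i links the cumulative probabilities as

    P(y_i \le k \mid x_i) = \Phi(\tau_k - x_i^\top \beta), \qquad k = 1, \ldots, K-1,

where \Phi is the standard normal cumulative distribution function, \tau_1 < \cdots < \tau_{K-1} are ordered thresholds, and the coefficient vector \beta is shared across response categories. Equivalently, y_i = k whenever \tau_{k-1} < z_i \le \tau_k for a latent Gaussian variable z_i = x_i^\top \beta + \epsilon_i with \epsilon_i \sim N(0, 1) (taking \tau_0 = -\infty and \tau_K = +\infty).

As a concrete illustration of the variational route, the sketch below implements a generic mean-field CAVI scheme for this latent-variable representation, assuming fixed thresholds and a zero-mean Gaussian prior on the coefficients. It is not the paper's algorithm; the function name, prior, and default settings are hypothetical choices made for the example.

import numpy as np
from scipy.stats import norm

def cavi_cumulative_probit(X, y, tau, prior_var=10.0, n_iter=100):
    """Mean-field variational Bayes (CAVI) sketch for a cumulative
    probit model with fixed thresholds.

    X   : (n, p) design matrix
    y   : (n,) integer responses in {0, ..., K-1} (zero-indexed)
    tau : (K+1,) thresholds with tau[0] = -inf and tau[K] = +inf
    Returns the mean and covariance of the Gaussian factor q(beta).
    """
    n, p = X.shape
    # Under the augmentation z_i = x_i' beta + eps_i, eps_i ~ N(0, 1),
    # the covariance of q(beta) is available in closed form and fixed:
    V = np.linalg.inv(X.T @ X + np.eye(p) / prior_var)
    m = np.zeros(p)                      # current mean of q(beta)
    lo, hi = tau[y], tau[y + 1]          # truncation interval per unit
    for _ in range(n_iter):
        mu = X @ m                       # location of q(z_i) before truncation
        a, b = lo - mu, hi - mu
        denom = np.clip(norm.cdf(b) - norm.cdf(a), 1e-12, None)
        # Mean of N(mu, 1) truncated to (lo, hi]; scipy's norm handles
        # the infinite endpoints (pdf -> 0, cdf -> 0 or 1) gracefully.
        ez = mu + (norm.pdf(a) - norm.pdf(b)) / denom
        m = V @ (X.T @ ez)               # Gaussian update for q(beta)
    return m, V

A minimal usage example on simulated ordinal data (all values hypothetical):

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
beta_true = np.array([1.0, -0.5, 0.25])
tau = np.array([-np.inf, -0.7, 0.7, np.inf])   # K = 3 ordered categories
z = X @ beta_true + rng.standard_normal(500)
y = np.searchsorted(tau, z) - 1                # map latent z to labels 0, 1, 2
m, V = cavi_cumulative_probit(X, y, tau)

After the one-off O(p^3) factorization, each iteration costs O(np), which is the kind of per-iteration scaling that makes variational and expectation-propagation approximations attractive when the number of observations grows.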
Similar Papers
Multinomial probit model based on joint quantile regression
Methodology
Helps understand choices by looking at different possibilities.
Scalable Variable Selection and Model Averaging for Latent Regression Models Using Approximate Variational Bayes
Methodology
Finds best patterns in complex data faster.
Scalable and robust regression models for continuous proportional data
Methodology
Makes data analysis more reliable and accurate.