Possibilistic inferential models: a review
By: Ryan Martin
Potential Business Impact:
Makes data-driven predictions and conclusions more trustworthy and flexible.
An inferential model (IM) is a model describing the construction of provably reliable, data-driven uncertainty quantification and inference about relevant unknowns. IMs and Fisher's fiducial argument have similar objectives, but a fundamental distinction between the two is that the former doesn't require that uncertainty quantification be probabilistic, offering greater flexibility and allowing for a proof of its reliability. Important recent developments have been made thanks in part to newfound connections with the imprecise probability literature, in particular, possibility theory. The brand of possibilistic IMs studied here is straightforward to construct, has very strong frequentist-like reliability properties, and offers fully conditional, Bayesian-like (imprecise) probabilistic reasoning. This paper reviews these key recent developments, describing the new theory, methods, and computational tools. A generalization of the basic possibilistic IM is also presented, making new and unexpected connections with ideas in modern statistics and machine learning, e.g., bootstrap and conformal prediction.
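For concreteness, below is a minimal sketch of the basic possibilistic IM construction as it is commonly described in this literature: the relative likelihood is turned into a data-dependent possibility contour via the probability-to-possibility ("validification") transform, here approximated by Monte Carlo. The normal-mean example, the function names, and the Monte Carlo approximation are illustrative assumptions for this sketch, not details taken from the paper itself.

```python
# Sketch (assumed setup): basic possibilistic IM for a normal mean with known
# variance. The relative likelihood R(x, theta) is "validified" into a
# possibility contour pi_x(theta) = P_theta{ R(X, theta) <= R(x, theta) },
# approximated by simulating fresh data sets from the model at theta.

import numpy as np

def relative_likelihood(theta, x, sigma=1.0):
    """R(x, theta) = L_x(theta) / sup_t L_x(t) for an i.i.d. N(theta, sigma^2)
    sample x; the supremum is attained at the sample mean."""
    n = len(x)
    xbar = np.mean(x)
    # log L_x(theta) - log L_x(xbar) = -n (xbar - theta)^2 / (2 sigma^2)
    return np.exp(-n * (xbar - theta) ** 2 / (2 * sigma ** 2))

def im_contour(theta, x, sigma=1.0, n_mc=10_000, seed=None):
    """Monte Carlo approximation of the possibility contour at theta."""
    rng = np.random.default_rng(seed)
    n = len(x)
    r_obs = relative_likelihood(theta, x, sigma)
    sims = rng.normal(theta, sigma, size=(n_mc, n))
    r_sim = np.array([relative_likelihood(theta, s, sigma) for s in sims])
    return np.mean(r_sim <= r_obs)

# Usage: the upper level set { theta : pi_x(theta) > 0.05 } is a 95%
# plausibility region, which inherits the frequentist-like validity
# guarantee referred to in the abstract.
x_obs = np.random.default_rng(0).normal(0.3, 1.0, size=20)
grid = np.linspace(-1.0, 2.0, 61)
contour = np.array([im_contour(t, x_obs, seed=1) for t in grid])
region = grid[contour > 0.05]
print(f"approx 95% plausibility region: [{region.min():.2f}, {region.max():.2f}]")
```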
Similar Papers
Valid and efficient possibilistic structure learning in Gaussian linear regression
Methodology
Finds the best way to explain data.
No-prior Bayesian inference reIMagined: probabilistic approximations of inferential models
Methodology
Makes computer guesses more trustworthy with data.
Maxitive Donsker-Varadhan Formulation for Possibilistic Variational Inference
Machine Learning (Stat)
Lets computers learn better with less information.