Comparing Variable Selection and Model Averaging Methods for Logistic Regression
By: Nikola Sekulovski, František Bartoš, Don van den Bergh, and more
Potential Business Impact:
Finds the best ways to predict yes/no answers.
Model uncertainty is a central challenge in statistical models for binary outcomes such as logistic regression, arising when it is unclear which predictors should be included in the model. Many methods have been proposed to address this issue for logistic regression, but their relative performance under realistic conditions remains poorly understood. We therefore conducted a preregistered, simulation-based comparison of 28 established methods for variable selection and inference under model uncertainty, using 11 empirical datasets spanning a range of sample sizes and numbers of predictors, in cases both with and without separation (i.e., when some combination of the predictors perfectly distinguishes the two outcome classes). We found that Bayesian model averaging methods based on g-priors, particularly with g = max(n, p^2), where n is the sample size and p the number of predictors, show the strongest overall performance when separation is absent. When separation occurs, penalized likelihood approaches, especially the LASSO, provide the most stable results, while Bayesian model averaging with the local empirical Bayes (EB-local) prior is competitive in both settings. These findings offer practical guidance for applied researchers on how to effectively address model uncertainty in logistic regression in modern empirical and machine learning research.
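As a rough illustration of the penalized-likelihood approach that the study found most stable under separation, the sketch below fits an L1-penalized (LASSO) logistic regression with the penalty strength chosen by cross-validation. This is a minimal sketch on simulated data, not the authors' simulation code; the use of scikit-learn, the dataset dimensions, and the coefficient values are assumptions made purely for illustration.

```python
# Minimal LASSO logistic regression sketch (illustration only, not the paper's code).
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n, p = 200, 10                                             # hypothetical sample size / predictor count
X = rng.normal(size=(n, p))
true_beta = np.array([1.5, -1.0, 0.5] + [0.0] * (p - 3))   # assumed sparse truth
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ true_beta))))

# Standardize predictors so the L1 penalty treats them on a comparable scale.
X_std = StandardScaler().fit_transform(X)

# LogisticRegressionCV picks the penalty strength by 5-fold cross-validation;
# the liblinear solver supports the L1 penalty.
lasso_fit = LogisticRegressionCV(
    penalty="l1", solver="liblinear", Cs=20, cv=5, scoring="neg_log_loss"
).fit(X_std, y)

print("Selected predictors:", np.flatnonzero(lasso_fit.coef_.ravel()))
print("Coefficients:", lasso_fit.coef_.ravel().round(2))
```

For the no-separation setting, the paper's top performer is Bayesian model averaging with a g-prior, particularly g = max(n, p^2). The sketch below conveys only the general model-averaging idea, using a BIC approximation to each model's marginal likelihood rather than the exact g-prior marginal likelihoods evaluated in the paper, so it is a simplified stand-in rather than the recommended method itself; statsmodels and the simulated data are again assumptions.

```python
# Bayesian model averaging over all predictor subsets for logistic regression,
# using a BIC approximation to the marginal likelihood (simplified stand-in
# for the g-prior BMA studied in the paper).
import itertools
import numpy as np
import statsmodels.api as sm

def bma_logistic_pip(X, y):
    """Return posterior inclusion probabilities via BIC-weighted model averaging."""
    n, p = X.shape
    subsets, bics = [], []
    for k in range(p + 1):                        # enumerate all 2^p candidate models (small p only)
        for subset in itertools.combinations(range(p), k):
            cols = list(subset)
            design = sm.add_constant(X[:, cols]) if cols else np.ones((n, 1))
            fit = sm.Logit(y, design).fit(disp=0)
            subsets.append(cols)
            bics.append(-2.0 * fit.llf + (len(cols) + 1) * np.log(n))
    bics = np.array(bics)
    weights = np.exp(-(bics - bics.min()) / 2.0)  # BIC -> approximate posterior model weights
    weights /= weights.sum()
    pip = np.zeros(p)
    for cols, w in zip(subsets, weights):
        pip[cols] += w                            # total weight of models containing each predictor
    return pip

rng = np.random.default_rng(2)
n, p = 150, 6                                     # hypothetical; kept small so all 2^p models fit quickly
X = rng.normal(size=(n, p))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(1.2 * X[:, 0] - 0.8 * X[:, 1]))))
print("Posterior inclusion probabilities:", bma_logistic_pip(X, y).round(2))
```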
Similar Papers
Scalable Variable Selection and Model Averaging for Latent Regression Models Using Approximate Variational Bayes
Methodology
Finds best patterns in complex data faster.
Post-selection Inference in Regression Models for Group Testing Data
Methodology
Fixes math for doctors with messy notes.
Testing-driven Variable Selection in Bayesian Modal Regression
Methodology
Finds important clues in messy data.