A subsampling approach for large data sets when the Generalised Linear Model is potentially misspecified
By: Amalan Mahendran, Helen Thompson, James M. McGree
Potential Business Impact:
Makes big data analysis faster and more accurate.
Subsampling is a computationally efficient and scalable method for drawing inference in large data settings, as it is based on a subset of the data rather than requiring the whole dataset to be considered. When employing subsampling techniques, a crucial consideration is how to select an informative subset given the queries posed by the data analyst. A recently proposed approach randomly selects observations from the large dataset according to subsampling probabilities. However, a major drawback of this approach is that the derived subsampling probabilities are typically based on an assumed statistical model, which may be difficult to specify correctly in practice. To address this limitation, we propose determining subsampling probabilities under a statistical model that we acknowledge may be misspecified. Specifically, we evaluate the subsampling probabilities based on the Mean Squared Error (MSE) of the predictions from a model that is not assumed to completely describe the large dataset. We apply our subsampling approach in a simulation study and in the analysis of two real-world large datasets, benchmarking its performance against existing subsampling techniques. The findings suggest that there is value in adopting our approach over current practice.
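To make the general workflow concrete, the sketch below illustrates probability-based subsampling in the spirit the abstract describes: fit a pilot model on a small uniform subsample, assign each observation a subsampling probability proportional to its squared prediction error under that (possibly misspecified) pilot model, and then draw the informative subsample. This is a minimal illustration under assumed simulated data and a plain logistic model, not the authors' exact method or derivation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "large" dataset with a binary response (illustrative only).
N, d = 100_000, 5
X = rng.normal(size=(N, d))
beta_true = np.array([0.5, -1.0, 0.8, 0.0, 0.3])
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ beta_true))))

def fit_logistic(Xs, ys, iters=50):
    """Plain Newton-Raphson logistic regression (no regularisation)."""
    b = np.zeros(Xs.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(Xs @ b)))
        grad = Xs.T @ (ys - p)
        H = (Xs * (p * (1 - p))[:, None]).T @ Xs
        b += np.linalg.solve(H + 1e-8 * np.eye(len(b)), grad)
    return b

# Step 1: fit a pilot model on a small uniform random subsample.
pilot_idx = rng.choice(N, size=1000, replace=False)
b_pilot = fit_logistic(X[pilot_idx], y[pilot_idx])

# Step 2: subsampling probabilities proportional to each point's
# squared prediction error under the pilot model -- a hedged
# stand-in for the paper's MSE-based criterion.
p_hat = 1 / (1 + np.exp(-(X @ b_pilot)))
scores = (y - p_hat) ** 2
probs = scores / scores.sum()

# Step 3: draw the informative subsample with these probabilities.
# A final model would then be refit on this subsample, typically
# with inverse-probability weights 1 / (N * probs[sub_idx]).
r = 2000
sub_idx = rng.choice(N, size=r, replace=True, p=probs)
```

In this sketch, observations the pilot model predicts poorly receive higher selection probabilities, so the subsample concentrates on the parts of the data the working model fits worst.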
Similar Papers
Optional subsampling for generalized estimating equations in growing-dimensional longitudinal data
Computation
Helps analyze big health data faster.
Prediction-Oriented Subsampling from Data Streams
Machine Learning (CS)
Teaches computers to learn from fast-moving information.
Subbagging Variable Selection for Big Data
Methodology
Helps computers find important data in huge amounts.