On the (In)Significance of Feature Selection in High-Dimensional Datasets
By: Bhavesh Neekhra, Debayan Gupta, Partha Pratim Chakravarti
Potential Business Impact:
Randomly picking features works about as well as carefully selecting them.
Extensive research has been done on feature selection (FS) algorithms for high-dimensional datasets, aiming to improve model performance, reduce computational cost, and identify features of interest. To validate the performance of FS algorithms, we test a null hypothesis: features selected at random are compared against features selected by FS algorithms. Our results show that FS on high-dimensional datasets (in particular gene expression) is not useful for classification tasks. We find that (1) models trained on small subsets (0.02%-1% of all features) of randomly selected features almost always perform comparably to models trained on all features, and (2) a "typical"-sized random subset provides performance comparable or superior to that of the top-k features selected in various published studies. Our work thus challenges many feature selection results on high-dimensional datasets, particularly in computational genomics. It raises serious concerns about studies that propose drug design or targeted interventions based on computationally selected genes without further validation in a wet lab.
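The comparison described above can be illustrated with a small random-feature baseline. The following is a minimal sketch, not the authors' exact protocol: it assumes a gene-expression matrix `X` of shape (n_samples, n_features) and class labels `y`, and uses an illustrative classifier and subset sizes.

```python
# Minimal sketch of a random-feature baseline (illustrative assumptions,
# not the authors' exact setup). X: (n_samples, n_features) expression
# matrix, y: class labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def random_subset_score(X, y, fraction, n_repeats=10, seed=0):
    """Mean 5-fold CV accuracy of a classifier trained on random feature subsets."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    k = max(1, int(fraction * n_features))
    scores = []
    for _ in range(n_repeats):
        cols = rng.choice(n_features, size=k, replace=False)
        clf = LogisticRegression(max_iter=1000)
        scores.append(cross_val_score(clf, X[:, cols], y, cv=5).mean())
    return float(np.mean(scores))

# Usage: compare tiny random subsets (0.02%-1% of features) against all features.
# baseline_all = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
# for frac in (0.0002, 0.001, 0.01):
#     print(frac, random_subset_score(X, y, frac))
```

If the random-subset scores track the all-features baseline (and the scores reported for FS-selected top-k features), the selected features carry no special predictive signal, which is the paper's central claim.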
Similar Papers
HeFS: Helper-Enhanced Feature Selection via Pareto-Optimized Genetic Search
Machine Learning (CS)
Finds hidden clues to make predictions better.
Improving statistical learning methods via features selection without replacement sampling and random projection
Quantitative Methods
Finds cancer genes better for new treatments.