Feature Augmentations for High-Dimensional Learning
By: Xiaonan Zhu, Bingyan Wang, Jianqing Fan
Potential Business Impact:
Improves machine-learning predictions on high-dimensional data by extracting latent factors that simplify correlated features.
High-dimensional measurements are often correlated, which motivates their approximation by factor models. This also holds true when features are engineered via low-dimensional interactions or kernel tricks, which often results in over-parametrization and calls for fast dimensionality reduction. We propose a simple technique to enhance the performance of supervised learning algorithms by augmenting features with factors extracted from design matrices and their transformations. This is implemented by replacing the input variables with the extracted factors and idiosyncratic residuals, which significantly weakens the correlations between input variables and hence improves both the interpretability of learning algorithms and their numerical stability. Extensive experiments are carried out on various algorithms and real-world data from diverse fields, with special emphasis on the stock return prediction problem with Chinese financial news data, given the growing interest in NLP problems in financial studies. We verify that the proposed feature augmentation approach boosts overall prediction performance with the same algorithm. The approach bridges a gap overlooked in previous studies, which focus either on collecting additional data or on constructing more powerful algorithms; our method lies between these two directions, using a simple PCA augmentation.
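The augmentation itself is straightforward to sketch. Below is a minimal illustration, assuming scikit-learn's PCA as the factor extractor and ridge regression as a stand-in downstream learner; the function name augment_with_factors, the number of factors, and the synthetic data are illustrative choices, not the paper's actual setup.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

def augment_with_factors(X, n_factors):
    """Split X into latent factors and idiosyncratic residuals.

    Returns [F, U]: F holds the leading principal-component scores,
    U = X - X_hat holds the residuals after removing the common
    (factor) component, so the augmented columns are far less
    correlated than the raw columns of X.
    """
    pca = PCA(n_components=n_factors)
    F = pca.fit_transform(X)          # estimated factors, shape (n, k)
    X_hat = pca.inverse_transform(F)  # common component of X
    U = X - X_hat                     # idiosyncratic residuals
    return np.hstack([F, U]), pca

# Illustrative usage on synthetic data with a low-rank correlation structure.
rng = np.random.default_rng(0)
n, p, k = 500, 50, 3
factors = rng.normal(size=(n, k))
loadings = rng.normal(size=(k, p))
X = factors @ loadings + 0.5 * rng.normal(size=(n, p))
y = factors[:, 0] + 0.1 * rng.normal(size=n)

Z, _ = augment_with_factors(X, n_factors=k)
model = Ridge().fit(Z, y)  # same learner, augmented features

Decorrelating the inputs this way leaves the learner unchanged, which is the point of the approach: any supervised algorithm can be fed the augmented matrix in place of the original design matrix.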
Similar Papers
Transfer learning for high-dimensional Factor-augmented sparse linear model
Methodology
Improves predictions by borrowing information from related datasets.
Transfer learning for high-dimensional Factor-augmented sparse model
Methodology
Improves predictions by transferring knowledge across related data sources.
Factor Augmented Quantile Regression Model
Methodology
Helps models handle complicated, high-dimensional data across the full range of outcomes.