Optimal Data Splitting for Holdout Cross-Validation in Large Covariance Matrix Estimation

Published: March 19, 2025 | arXiv ID: 2503.15186v1

By: Lamia Lamrani, Christian Bongiorno, Marc Potters

Potential Business Impact:

Provides a principled rule for splitting data in cross-validation, improving the accuracy of large covariance matrix estimates learned from limited data.

Cross-validation is a statistical tool that can be used to improve large covariance matrix estimation. Although its efficiency is observed in practical applications, the theoretical reasons behind it remain largely intuitive, with formal proofs currently lacking. To make the analysis tractable, we focus on the holdout method, a single iteration of cross-validation, rather than the traditional $k$-fold approach. We derive a closed-form expression for the estimation error when the population matrix follows a white inverse Wishart distribution, and we observe that the optimal train-test split scales as the square root of the matrix dimension. For general population matrices, we connect the error to the variance of the eigenvalue distribution, although approximations are necessary. Interestingly, in the high-dimensional asymptotic regime, both the holdout and $k$-fold cross-validation methods converge to the optimal estimator when the train-test ratio scales with the square root of the matrix dimension.
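The holdout covariance estimator discussed in the abstract is commonly built by taking eigenvectors from the training sample covariance and re-estimating each eigenvalue out-of-sample on the test block. Below is a minimal NumPy sketch of that construction, not the paper's code: the function name `holdout_covariance` is hypothetical, and the square-root split rule is only an illustrative heuristic based on the scaling result stated above (the paper's exact optimal split is not reproduced here).

```python
import numpy as np

def holdout_covariance(X, n_test=None):
    """Holdout (single-split) cross-validated covariance estimator.

    Eigenvectors are taken from the training sample covariance;
    each eigenvalue is then re-estimated out-of-sample on the test
    block, which damps the overfitting of the sample eigenvalues.
    """
    T, N = X.shape
    if n_test is None:
        # Illustrative heuristic: train-test ratio ~ sqrt(N), the
        # scaling the paper identifies as optimal (assumption: the
        # exact optimal constant is not reproduced here).
        n_test = max(int(round(T / (1.0 + np.sqrt(N)))), 1)
    X_train, X_test = X[:-n_test], X[-n_test:]

    C_train = X_train.T @ X_train / X_train.shape[0]
    C_test = X_test.T @ X_test / X_test.shape[0]

    # Eigenvectors v_i from the training sample covariance.
    _, V = np.linalg.eigh(C_train)
    # Out-of-sample eigenvalues: lambda_i = v_i^T C_test v_i.
    lam = np.einsum("ji,jk,ki->i", V, C_test, V)
    # Recombine: sum_i lambda_i v_i v_i^T.
    return (V * lam) @ V.T

# Usage: with an identity population covariance, the holdout
# estimator typically has lower Frobenius error than the raw
# sample covariance when N is comparable to T.
rng = np.random.default_rng(0)
T, N = 300, 100
X = rng.standard_normal((T, N))
C_sample = X.T @ X / T
print(np.linalg.norm(holdout_covariance(X) - np.eye(N)))
print(np.linalg.norm(C_sample - np.eye(N)))
```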

Country of Origin
🇫🇷 France

Page Count
14 pages

Category
Mathematics: Statistics Theory