Optimal Krylov On Average
By: Qi Luo, Florian Schäfer
Potential Business Impact:
Makes computer math problems solve faster.
We propose an adaptive randomized truncation estimator for Krylov subspace methods that optimizes the trade-off between the solution variance and the computational cost, while remaining unbiased. The estimator solves a constrained optimization problem to compute the truncation probabilities on the fly, with minimal computational overhead. The problem has a closed-form solution when the improvement of the deterministic algorithm satisfies a diminishing-returns property. We prove that obtaining the optimal adaptive truncation distribution is impossible in the general case. Without the diminishing-returns condition, our estimator provides a suboptimal but still unbiased solution. We present experimental results on Gaussian process (GP) hyperparameter training and competitive physics-informed neural network problems to demonstrate the effectiveness of our approach.
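The core mechanism the abstract describes, an unbiased randomized truncation of an iterative Krylov solver, can be sketched as follows: sample a truncation level K, keep only the first K solution increments, and reweight each kept increment by the inverse of its survival probability so that the estimate remains unbiased. This is a minimal illustration using conjugate gradients and a fixed, hand-picked truncation distribution (`probs`); the paper's actual contribution, computing that distribution adaptively and optimally on the fly, is not implemented here, and all function names are hypothetical.

```python
import numpy as np

def cg_increments(A, b, n_steps):
    """Run conjugate gradients on A x = b and return the per-iteration
    solution increments delta_k = x_{k+1} - x_k (so their sum is x_n)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    increments = []
    for _ in range(n_steps):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        dx = alpha * p
        increments.append(dx)
        x = x + dx
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return increments

def truncated_estimate(increments, probs, rng):
    """Unbiased randomized-truncation estimate: sample a truncation level
    K ~ probs, then weight each kept increment k by 1 / P(K >= k), so
    that the expectation over K equals the full (untruncated) sum."""
    K = rng.choice(len(probs), p=probs) + 1   # number of increments kept
    tail = np.cumsum(probs[::-1])[::-1]        # tail[k] = P(K >= k + 1)
    est = np.zeros_like(increments[0])
    for k in range(K):
        est += increments[k] / tail[k]
    return est
```

Unbiasedness follows by exchanging the sum and expectation: increment k appears with probability P(K >= k) and weight 1 / P(K >= k), so each increment contributes its full value in expectation. Choosing `probs` to balance variance against expected cost is exactly the constrained optimization problem the paper solves.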
Similar Papers
Randomized block Krylov method for approximation of truncated tensor SVD
Numerical Analysis
Makes big data smaller for computers.
Private Statistical Estimation via Truncation
Machine Learning (CS)
Protects private data while learning from it.
Preconditioned Truncated Single-Sample Estimators for Scalable Stochastic Optimization
Numerical Analysis
Makes computer math problems faster and more accurate.