Solving a Machine Learning Regression Problem Based on the Theory of Random Functions
By: Yuriy N. Bakhvalov
This paper studies a machine learning regression problem as a multivariate approximation problem within the framework of the theory of random functions. An ab initio derivation of a regression method is proposed, starting from postulates of indifference. It is shown that if a probability measure on an infinite-dimensional function space possesses natural symmetries (invariance under translations, rotations, and scalings) together with Gaussianity, then the entire solution scheme, including the kernel form, the type of regularization, and the noise parameterization, follows analytically from these postulates. The resulting kernel coincides with a generalized polyharmonic spline; however, unlike in existing approaches, it is not chosen empirically but arises as a consequence of the indifference principle. This result provides a theoretical foundation for a broad class of smoothing and interpolation methods, demonstrating their optimality in the absence of a priori information.
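To make the abstract's setting concrete, the following is a minimal sketch of regularized kernel regression with a polyharmonic spline kernel, the family of methods the derivation lands in. It is not the paper's exact construction: the one-dimensional kernel phi(r) = r^3, the regularization weight `lam`, and the helper names are illustrative assumptions.

```python
import numpy as np

def polyharmonic_kernel(x, y, k=3):
    # Generalized polyharmonic spline kernel in 1D: phi(r) = r^k for odd k.
    # (Illustrative choice; the paper derives the kernel form from symmetry postulates.)
    r = np.abs(x[:, None] - y[None, :])
    return r ** k

def fit_predict(x_train, y_train, x_test, lam=1e-3, k=3):
    # Regularized interpolation: solve (K + lam*I) w = y, then evaluate
    # the fitted function f(x) = sum_i w_i * phi(|x - x_i|) at x_test.
    # lam > 0 plays the role of the noise/smoothing parameter.
    K = polyharmonic_kernel(x_train, x_train, k)
    w = np.linalg.solve(K + lam * np.eye(len(x_train)), y_train)
    return polyharmonic_kernel(x_test, x_train, k) @ w

# Toy usage: smooth a noiseless sine sampled at 20 points.
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x)
x_new = np.array([0.25, 0.5])
pred = fit_predict(x, y, x_new)
```

Setting `lam` to zero recovers pure interpolation; increasing it trades fidelity at the sample points for smoothness, mirroring the smoothing-versus-interpolation spectrum the abstract refers to.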