Learning Reconstructive Embeddings in Reproducing Kernel Hilbert Spaces via the Representer Theorem
By: Enrique Feito-Casares, Francisco M. Melgarejo-Meseguer, José-Luis Rojo-Álvarez
Potential Business Impact:
Finds hidden patterns in complex information.
Motivated by the growing interest in representation learning approaches that uncover the latent structure of high-dimensional data, this work proposes new algorithms for reconstruction-based manifold learning in Reproducing Kernel Hilbert Spaces (RKHS). Each observation is first reconstructed as a linear combination of the other samples in the RKHS, exploiting the autorepresentation property through a vector form of the Representer Theorem. A separable operator-valued kernel extends the formulation to vector-valued data while retaining the simplicity of a single scalar similarity function. A subsequent kernel-alignment task projects the data into a lower-dimensional latent space whose Gram matrix is matched to the high-dimensional reconstruction kernel, thus transferring the auto-reconstruction geometry of the RKHS to the embedding. The proposed algorithms therefore extend the autorepresentation property, exhibited by many natural datasets, by using and adapting well-known results from kernel learning theory. Numerical experiments on both simulated (concentric circles and Swiss roll) and real (cancer molecular activity and IoT network intrusion) datasets provide empirical evidence of the practical effectiveness of the proposed approach.
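To make the pipeline concrete, below is a minimal sketch of the two stages described in the abstract: per-sample reconstruction in the RKHS followed by kernel alignment of a low-dimensional embedding. This is not the paper's implementation; it assumes a ridge-regularized least-squares form of the reconstruction step, an RBF scalar kernel, and an eigendecomposition as the alignment solver, and the function names (`rbf_kernel`, `self_expressive_coeffs`, `aligned_embedding`) are hypothetical.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def self_expressive_coeffs(K, lam=1e-2):
    """Reconstruct each phi(x_i) from the other samples in the RKHS.

    Assumed (hypothetical) ridge-regularized objective, for every i:
        min_a ||phi(x_i) - sum_{j != i} a_j phi(x_j)||^2 + lam * ||a||^2,
    which, by the Representer Theorem, depends on the data only
    through the kernel matrix K.
    """
    n = K.shape[0]
    A = np.zeros((n, n))
    for i in range(n):
        idx = np.delete(np.arange(n), i)      # exclude the sample itself
        K_sub = K[np.ix_(idx, idx)]           # Gram matrix of the others
        k_i = K[idx, i]                       # similarities to sample i
        A[i, idx] = np.linalg.solve(K_sub + lam * np.eye(n - 1), k_i)
    return A

def aligned_embedding(K, A, dim=2):
    """Embed so the latent Gram matrix matches the reconstruction kernel.

    K_rec = A K A^T is the Gram matrix of the reconstructed samples;
    the top eigenpairs of its symmetrized, centered version give the
    best rank-`dim` factorization Z Z^T ~= K_rec in Frobenius norm.
    """
    K_rec = A @ K @ A.T
    n = K_rec.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    K_c = H @ ((K_rec + K_rec.T) / 2) @ H
    w, V = np.linalg.eigh(K_c)
    top = np.argsort(w)[::-1][:dim]
    return V[:, top] * np.sqrt(np.clip(w[top], 0, None))

# Toy usage: two concentric circles, as in the paper's simulations.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 200)
r = np.where(rng.uniform(size=200) > 0.5, 1.0, 3.0)
X = np.c_[r * np.cos(t), r * np.sin(t)] + 0.05 * rng.normal(size=(200, 2))
K = rbf_kernel(X, gamma=0.5)
Z = aligned_embedding(K, self_expressive_coeffs(K), dim=2)
print(Z.shape)  # (200, 2)
```

The closed-form eigendecomposition stands in for the kernel-alignment optimization only as a convenient surrogate; it recovers the embedding whose Gram matrix is the closest rank-limited approximation of the reconstruction kernel under the stated assumptions.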
Similar Papers
A Kernel-based Stochastic Approximation Framework for Nonlinear Operator Learning
Machine Learning (Stat)
Teaches computers to solve hard math problems.
General reproducing properties in RKHS with application to derivative and integral operators
Statistics Theory
Helps computers learn from data with function values.