Compressive Meta-Learning
By: Daniel Mas Montserrat, David Bonet, Maria Perera, and more
Potential Business Impact:
Learns from data without seeing all of it.
The rapid expansion in the size of new datasets has created a need for fast and efficient parameter-learning techniques. Compressive learning is a framework that enables efficient processing by using random, non-linear features to project large-scale databases onto compact, information-preserving representations whose dimensionality is independent of the number of samples and which can be easily stored, transferred, and processed. These database-level summaries are then used to decode parameters of interest from the underlying data distribution without requiring access to the original samples, offering an efficient and privacy-friendly learning framework. However, both the encoding and decoding stages are typically randomized and data-independent, failing to exploit the underlying structure of the data. In this work, we propose a framework that meta-learns both the encoding and decoding stages of compressive learning methods using neural networks, yielding faster and more accurate systems than current state-of-the-art approaches. To demonstrate the potential of the presented Compressive Meta-Learning framework, we explore multiple applications, including neural network-based compressive PCA, compressive ridge regression, compressive k-means, and autoencoders.
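The encoding stage described above can be illustrated with a minimal sketch. This is not the paper's method, but the classical randomized baseline it builds on: a dataset is summarized by the empirical mean of random Fourier features, so the summary's size depends only on the number of features, not on the number of samples, and summaries of disjoint data shards can be merged without revisiting the raw data. All function names here are illustrative, not from the paper.

```python
import numpy as np

def make_projection(d, m, scale=1.0, seed=0):
    """Draw m random frequency vectors in d dimensions (data-independent)."""
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, scale, size=(m, d))

def sketch(X, W):
    """Database-level summary: mean of non-linear random features.

    Returns a vector of length 2*m regardless of how many samples X has.
    """
    Z = X @ W.T  # (n, m) random projections
    # cos/sin features approximate a Gaussian kernel embedding
    return np.concatenate([np.cos(Z), np.sin(Z)], axis=1).mean(axis=0)

rng = np.random.default_rng(1)
X = rng.normal(size=(10_000, 5))   # large dataset: 10k samples, 5 dims
W = make_projection(d=5, m=64)

s_full = sketch(X, W)              # compact summary, shape (128,)

# Sketches of disjoint shards merge via a count-weighted average,
# so the full database never needs to sit in one place.
s_a, s_b = sketch(X[:4000], W), sketch(X[4000:], W)
s_merged = (4000 * s_a + 6000 * s_b) / 10_000

print(s_full.shape)                   # (128,)
print(np.allclose(s_full, s_merged))  # True
```

Decoding (recovering, say, cluster centroids or principal components from `s_full`) is the harder inverse problem; the paper's contribution is to replace both the random encoder above and the decoder with meta-learned neural networks.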
Similar Papers
Compressive Modeling and Visualization of Multivariate Scientific Data using Implicit Neural Representation
Machine Learning (CS)
Shrinks big science data, keeping all details.
Navigating High Dimensional Concept Space with Metalearning
Machine Learning (CS)
Teaches computers to learn new ideas fast.
Probabilistic and nonlinear compressive sensing
Machine Learning (CS)
Finds best data patterns faster than other methods.