Integrating Predictive and Generative Capabilities by Latent Space Design via the DKL-VAE Model
By: Boris N. Slautin, Utkarsh Pratiush, Doru C. Lupascu, and more
Potential Business Impact:
Enables the automated design of new materials and molecules with desired target properties.
We introduce a Deep Kernel Learning Variational Autoencoder (DKL-VAE) framework that integrates the generative power of a Variational Autoencoder (VAE) with the predictive capability of Deep Kernel Learning (DKL). The VAE learns a latent representation of high-dimensional data, enabling the generation of novel structures, while DKL refines this latent space by structuring it in alignment with target properties through Gaussian Process (GP) regression. This approach preserves the generative capabilities of the VAE while enhancing its latent space for GP-based property prediction. We evaluate the framework on two datasets: a structured card dataset with predefined variational factors and the QM9 molecular dataset, where enthalpy serves as the target function for optimization. The model demonstrates high-precision property prediction and enables the generation of novel structures, beyond the training subset, with desired characteristics. The DKL-VAE framework offers a promising approach to high-throughput materials discovery and molecular design, balancing structured latent space organization with generative flexibility.
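To make the core idea concrete, below is a minimal PyTorch sketch of a joint objective in the spirit of the abstract: a VAE loss for generation plus an exact-GP marginal likelihood on the latent codes for property prediction. The layer sizes, the RBF kernel, the MSE reconstruction term, and the gp_weight parameter are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DKLVAE(nn.Module):
    """Toy DKL-VAE sketch: a VAE whose latent space is additionally shaped
    by an exact-GP marginal likelihood on a scalar target property."""

    def __init__(self, input_dim=128, latent_dim=8, hidden_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, input_dim))
        # GP hyperparameters, stored on log scale so they stay positive.
        self.log_lengthscale = nn.Parameter(torch.zeros(1))
        self.log_noise = nn.Parameter(torch.tensor(-2.0))

    def rbf_kernel(self, z):
        sq_dists = torch.cdist(z, z).pow(2)
        return torch.exp(-0.5 * sq_dists / torch.exp(2 * self.log_lengthscale))

    def gp_nll(self, z, y):
        # Exact-GP negative log marginal likelihood of the property y given
        # the latent codes z; minimizing it organizes the latent space so
        # that y varies smoothly, enabling GP-based property prediction.
        n = z.shape[0]
        K = self.rbf_kernel(z) + torch.exp(self.log_noise) * torch.eye(n, device=z.device)
        L = torch.linalg.cholesky(K)
        alpha = torch.cholesky_solve(y.unsqueeze(-1), L)
        # The constant (n/2) * log(2*pi) is dropped; it has no gradient.
        return 0.5 * (y.unsqueeze(-1) * alpha).sum() + torch.log(torch.diagonal(L)).sum()

    def forward(self, x, y, gp_weight=1.0):
        h = self.enc(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = self.dec(z)
        # Negative ELBO: Gaussian reconstruction term (up to constants) + KL.
        neg_elbo = (F.mse_loss(recon, x, reduction="sum")
                    - 0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum())
        # Joint objective: keep the VAE generative while shaping its latent
        # space for property regression through the GP term.
        return neg_elbo + gp_weight * self.gp_nll(mu, y)

# Usage: one optimization step on a random batch.
model = DKLVAE()
x = torch.randn(32, 128)   # batch of flattened inputs
y = torch.randn(32)        # scalar target property (e.g. enthalpy)
loss = model(x, y)
loss.backward()
```

In practice the GP term would more likely be handled by a dedicated library such as GPyTorch, and the gp_weight factor controls the trade-off between reconstruction fidelity and how strongly the latent space is organized around the target property.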
Similar Papers
Latent-Autoregressive GP-VAE Language Model
Machine Learning (CS)
A language model whose latent space evolves over time under a Gaussian Process prior.
An Introduction to Discrete Variational Autoencoders
Machine Learning (CS)
A tutorial on variational autoencoders with discrete latent variables.
Physically Interpretable Representation Learning with Gaussian Mixture Variational AutoEncoder (GM-VAE)
Machine Learning (CS)
Learns physically interpretable representations of scientific data with a Gaussian mixture VAE.