Disentangled and Distilled Encoder for Out-of-Distribution Reasoning with Rademacher Guarantees
By: Zahra Rahiminasab, Michael Yuhas, Arvind Easwaran
Potential Business Impact:
Shrinks AI models so small devices can recognize unfamiliar inputs.
Recently, the disentangled latent space of a variational autoencoder (VAE) has been used to reason about multi-label out-of-distribution (OOD) test samples, i.e., samples drawn from distributions different from the training distribution. A disentangled latent space is one with one-to-many maps between latent dimensions and generative factors, the salient characteristics of an image. This paper proposes a disentangled distilled encoder (DDE) framework that reduces the size of the OOD reasoner for deployment on resource-constrained devices while preserving disentanglement. DDE formalizes student-teacher distillation for model compression as a constrained optimization problem whose disentanglement constraints preserve the structure of the latent space. Theoretical guarantees for disentanglement during distillation, based on Rademacher complexity, are established. The approach is evaluated empirically by deploying the compressed model on an NVIDIA
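To make the constrained-optimization view concrete, below is a minimal sketch of a DDE-style distillation step, assuming a PyTorch setup. Every name here (`StudentEncoder`, `disentanglement_penalty`, the weight `lam`) is an illustrative assumption rather than the authors' implementation, and the off-diagonal covariance penalty is only one possible stand-in for the paper's disentanglement constraints, written in Lagrangian (penalty) form:

```python
# Hypothetical sketch of constrained student-teacher distillation;
# not the DDE authors' code.
import torch
import torch.nn as nn

class StudentEncoder(nn.Module):
    """Small encoder trained to mimic the teacher's latent codes."""
    def __init__(self, in_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def disentanglement_penalty(z: torch.Tensor) -> torch.Tensor:
    """Proxy constraint: penalize off-diagonal covariance between latent
    dimensions so each dimension stays tied to its own generative factor."""
    zc = z - z.mean(dim=0, keepdim=True)          # center over the batch
    cov = (zc.T @ zc) / (z.shape[0] - 1)          # (D, D) sample covariance
    off_diag = cov - torch.diag(torch.diag(cov))  # zero out the diagonal
    return off_diag.pow(2).sum()

def distillation_step(student, teacher, x, lam=1.0):
    """One loss evaluation: match teacher latents + penalized constraint."""
    with torch.no_grad():
        z_teacher = teacher(x)                    # frozen teacher latents
    z_student = student(x)
    match = nn.functional.mse_loss(z_student, z_teacher)  # compression term
    return match + lam * disentanglement_penalty(z_student)
```

In practice the teacher would be the encoder of a trained disentangled VAE, kept frozen, while the smaller student is optimized so that it matches the teacher's latents without violating the disentanglement constraint.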
Similar Papers
Enclosing Prototypical Variational Autoencoder for Explainable Out-of-Distribution Detection
Machine Learning (CS)
Helps computers know when they don't know.
VAE-based Feature Disentanglement for Data Augmentation and Compression in Generalized GNSS Interference Classification
Machine Learning (CS)
Makes GPS signals work better by shrinking data.
Distillation of a tractable model from the VQ-VAE
Machine Learning (CS)
Makes AI understand and create better.