Disentanglement of Sources in a Multi-Stream Variational Autoencoder
By: Veranika Boukun, Jörg Lücke
Potential Business Impact:
Separates mixed sounds and superimposed handwriting into their individual sources.
Variational autoencoders (VAEs) are a leading approach to address the problem of learning disentangled representations. Typically a single VAE is used and disentangled representations are sought in its continuous latent space. Here we explore a different approach by using discrete latents to combine VAE-representations of individual sources. The combination is done based on an explicit model for source combination, and we here use a linear combination model which is well suited, e.g., for acoustic data. We formally define such a multi-stream VAE (MS-VAE) approach, derive its inference and learning equations, and we numerically investigate its principled functionality. The MS-VAE is domain-agnostic, and we here explore its ability to separate sources into different streams using superimposed hand-written digits, and mixed acoustic sources in a speaker diarization task. We observe a clear separation of digits, and on speaker diarization we observe an especially low rate of missed speakers. Numerical experiments further highlight the flexibility of the approach across varying amounts of supervision and training data.
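The linear source-combination model the abstract mentions can be sketched in a few lines: each stream has its own decoder, and the observation is a weighted sum of the decoded streams, gated by discrete on/off latents. The names, shapes, and linear "decoders" below are purely illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch of the MS-VAE generative side (hypothetical shapes/names):
# stream k has continuous latent z_k and a discrete on/off latent s_k,
# and the data is a linear combination of the decoded streams.
latent_dim, data_dim, n_streams = 4, 16, 2

# One linear "decoder" per stream; a real MS-VAE would use neural networks.
decoders = [rng.normal(size=(data_dim, latent_dim)) for _ in range(n_streams)]

def decode(k, z):
    """Map the continuous latent z of stream k into data space."""
    return decoders[k] @ z

def generate(z_list, s):
    """Linear combination model: x = sum_k s_k * decoder_k(z_k)."""
    return sum(s[k] * decode(k, z_list[k]) for k in range(n_streams))

z = [rng.normal(size=latent_dim) for _ in range(n_streams)]
x_both = generate(z, s=[1, 1])    # both sources active
x_first = generate(z, s=[1, 0])   # only stream 0 active
x_second = generate(z, s=[0, 1])  # only stream 1 active

# Because the combination is linear, the mixture is exactly the
# sum of the individually decoded sources.
assert np.allclose(x_both, x_first + x_second)
```

This additivity is what makes the linear model well suited to acoustic data, where waveforms of simultaneous speakers superimpose approximately linearly.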
Similar Papers
Variational decomposition autoencoding improves disentanglement of latent representations
Machine Learning (CS)
Finds hidden patterns in sounds and body signals.
An Introduction to Discrete Variational Autoencoders
Machine Learning (CS)
Teaches computers to understand words by grouping them.
VAE-based Feature Disentanglement for Data Augmentation and Compression in Generalized GNSS Interference Classification
Machine Learning (CS)
Makes GPS signals work better by shrinking data.