Disentanglement of Sources in a Multi-Stream Variational Autoencoder

Published: October 17, 2025 | arXiv ID: 2510.15669v1

By: Veranika Boukun, Jörg Lücke

Potential Business Impact:

Separates mixed audio and superimposed handwritten content into their individual sources.

Business Areas:
Autonomous Vehicles Transportation

Variational autoencoders (VAEs) are a leading approach to the problem of learning disentangled representations. Typically a single VAE is used, and disentangled representations are sought in its continuous latent space. Here we explore a different approach by using discrete latents to combine VAE representations of individual sources. The combination is based on an explicit model for how sources mix; here we use a linear combination model, which is well suited, e.g., for acoustic data. We formally define such a multi-stream VAE (MS-VAE), derive its inference and learning equations, and numerically investigate its functionality. The MS-VAE is domain-agnostic, and we explore its ability to separate sources into different streams using superimposed hand-written digits and mixed acoustic sources in a speaker diarization task. We observe a clear separation of digits, and on speaker diarization we observe an especially low rate of missed speakers. Numerical experiments further highlight the flexibility of the approach across varying amounts of supervision and training data.
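The core idea in the abstract — per-source VAE streams whose decoded outputs are combined by an explicit linear mixing model gated by discrete latents — can be illustrated with a minimal sketch. This is not the paper's implementation; the toy linear decoders stand in for trained VAE decoders, and all names, dimensions, and parameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(z, W, b):
    # Toy linear decoder for one stream (stands in for a VAE decoder).
    return z @ W + b

# Hypothetical setup: S = 2 source streams, latent dim 4, observation dim 8.
D_lat, D_obs, S = 4, 8, 2
params = [(rng.normal(size=(D_lat, D_obs)), rng.normal(size=D_obs))
          for _ in range(S)]

# Continuous per-stream latents and discrete on/off source indicators.
z = [rng.normal(size=D_lat) for _ in range(S)]
s = np.array([1, 1])  # both sources active in this mixture

# Linear combination model: the observed mixture is the sum of active sources.
sources = [decode(z[k], *params[k]) for k in range(S)]
mixture = sum(s[k] * sources[k] for k in range(S))

print(mixture.shape)  # (8,)
```

Setting an indicator to 0 removes that stream's contribution, which is how discrete latents let the model assign each source (a digit, a speaker) to its own stream.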

Country of Origin
🇩🇪 Germany, 🇦🇹 Austria

Page Count
10 pages

Category
Statistics: Machine Learning (stat.ML)