Latent Multi-view Learning for Robust Environmental Sound Representations
By: Sivan Ding, Julia Wilkins, Magdalena Fuentes, and more
Potential Business Impact:
Helps computers learn more robust representations of environmental sounds from unlabeled audio, improving sound and sensor recognition.
Self-supervised learning (SSL) approaches, such as contrastive and generative methods, have advanced environmental sound representation learning using unlabeled data. However, how these approaches can complement each other within a unified framework remains relatively underexplored. In this work, we propose a multi-view learning framework that integrates contrastive principles into a generative pipeline to capture sound source and device information. Our method encodes compressed audio latents into view-specific and view-common subspaces, guided by two self-supervised objectives: contrastive learning for targeted information flow between subspaces, and reconstruction for overall information preservation. We evaluate our method on an urban sound sensor network dataset for sound source and sensor classification, demonstrating improved downstream performance over traditional SSL techniques. Additionally, we investigate the model's potential to disentangle environmental sound attributes within the structured latent space under varied training configurations.
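The abstract describes combining a contrastive objective (aligning view-common subspaces across views) with a reconstruction objective (preserving overall information). A minimal NumPy sketch of that idea follows; all dimensions, projection heads, and the specific InfoNCE-style loss are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): compressed audio latents of
# dimension D, projected into view-common and view-specific subspaces.
D, D_COMMON, D_SPECIFIC, BATCH = 16, 8, 8, 4

# Two "views" of the same recordings, e.g. latents from different sensors.
z_a = rng.normal(size=(BATCH, D))
z_b = rng.normal(size=(BATCH, D))

# Randomly initialised linear heads stand in for learned encoders/decoders.
W_common = rng.normal(size=(D, D_COMMON)) / np.sqrt(D)
W_specific = rng.normal(size=(D, D_SPECIFIC)) / np.sqrt(D)
W_decode = rng.normal(size=(D_COMMON + D_SPECIFIC, D)) / np.sqrt(D)


def project(z):
    """Split a latent into (view-common, view-specific) embeddings."""
    return z @ W_common, z @ W_specific


def contrastive_loss(c_a, c_b, temperature=0.1):
    """InfoNCE-style loss pulling matched common embeddings together."""
    a = c_a / np.linalg.norm(c_a, axis=1, keepdims=True)
    b = c_b / np.linalg.norm(c_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature
    # Matched pairs sit on the diagonal; other rows act as negatives.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))


def reconstruction_loss(z, c, s):
    """MSE between the latent and its decoding from both subspaces."""
    z_hat = np.concatenate([c, s], axis=1) @ W_decode
    return np.mean((z - z_hat) ** 2)


c_a, s_a = project(z_a)
c_b, s_b = project(z_b)

# Combined objective: contrastive alignment plus reconstruction.
loss = (contrastive_loss(c_a, c_b)
        + reconstruction_loss(z_a, c_a, s_a)
        + reconstruction_loss(z_b, c_b, s_b))
print(float(loss))
```

In a real training loop the projection heads would be optimized jointly, so that the common subspace carries information shared across views while the specific subspace retains the rest.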
Similar Papers
Variational Self-Supervised Learning
Machine Learning (CS)
Teaches computers to learn from pictures without labels.
Leveraging Audio-Visual Data to Reduce the Multilingual Gap in Self-Supervised Speech Models
Computation and Language
Helps computers understand many languages better.