Sparse Autoencoders Make Audio Foundation Models more Explainable

Published: September 29, 2025 | arXiv ID: 2509.24793v1

By: Théo Mariotte, Martin Lebourdais, Antonio Almudévar, and more

Potential Business Impact:

Makes the internal representations of audio foundation models easier to interpret.

Business Areas:
Speech Recognition Data and Analytics, Software

Audio pretrained models are widely employed to solve various tasks in speech processing, sound event detection, and music information retrieval. However, the representations learned by these models remain opaque, and their analysis is mostly restricted to linear probing of the hidden representations. In this work, we explore the use of Sparse Autoencoders (SAEs) to analyze the hidden representations of pretrained models, focusing on a case study in singing technique classification. We first demonstrate that SAEs retain both information about the original representations and class labels, enabling their internal structure to provide insights into self-supervised learning systems. Furthermore, we show that SAEs enhance the disentanglement of vocal attributes, establishing them as an effective tool for identifying the underlying factors encoded in the representations.
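To make the idea concrete, below is a minimal sketch of the general SAE setup the abstract describes: an overcomplete autoencoder with a ReLU encoder and an L1 sparsity penalty, applied to stand-in activation vectors. This is not the paper's implementation; the dimensions, tied decoder, and penalty weight are illustrative assumptions, and random data is used in place of real pretrained-model activations.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_sae, n = 16, 64, 128          # hidden dim, dictionary size, batch (assumed values)
X = rng.normal(size=(n, d_model))        # stand-in for pretrained-model activations

# Encoder/decoder weights; a tied decoder (W_dec = W_enc^T) is one common SAE choice
W_enc = rng.normal(scale=0.1, size=(d_model, d_sae))
b_enc = np.zeros(d_sae)
W_dec = W_enc.T.copy()

def sae_forward(x):
    """Encode to a sparse non-negative code, then reconstruct."""
    z = np.maximum(x @ W_enc + b_enc, 0.0)   # ReLU keeps codes non-negative and sparse
    x_hat = z @ W_dec
    return z, x_hat

z, X_hat = sae_forward(X)

# Training objective: reconstruction error plus an L1 penalty pushing codes toward zero
l1_coeff = 1e-3                              # illustrative penalty weight
loss = np.mean((X - X_hat) ** 2) + l1_coeff * np.mean(np.abs(z))
sparsity = np.mean(z == 0.0)                 # fraction of inactive dictionary units
```

Because the dictionary (`d_sae`) is larger than the model dimension, the L1 penalty forces each input to be explained by a few active units, and it is these units whose structure can be inspected for interpretable factors such as vocal attributes.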


Page Count
5 pages

Category
Computer Science:
Sound