Learning Interpretable Features in Audio Latent Spaces via Sparse Autoencoders
By: Nathan Paek, Yongyi Zang, Qihui Yang, and more
Potential Business Impact:
Lets AI music makers control sound details.
While sparse autoencoders (SAEs) successfully extract interpretable features from language models, applying them to audio generation faces unique challenges: audio's dense nature requires compression that obscures semantic meaning, and automatic feature characterization remains limited. We propose a framework for interpreting audio generative models by mapping their latent representations to human-interpretable acoustic concepts. We train SAEs on audio autoencoder latents, then learn linear mappings from SAE features to discretized acoustic properties (pitch, amplitude, and timbre). This enables both controllable manipulation and analysis of the AI music generation process, revealing how acoustic properties emerge during synthesis. We validate our approach on continuous (DiffRhythm-VAE) and discrete (EnCodec, WavTokenizer) audio latent spaces, and analyze DiffRhythm, a state-of-the-art text-to-music model, to demonstrate how pitch, timbre, and loudness evolve throughout generation. While our work addresses only the audio modality, the framework can be extended to interpretable analysis of visual latent-space generative models.
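The pipeline the abstract describes, training an SAE on audio-autoencoder latents and then fitting linear maps from SAE features to binned acoustic properties, can be sketched roughly as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: the layer sizes, ReLU-plus-L1 sparsity choice, bin counts, and all names (`SparseAutoencoder`, `AcousticProbe`) are assumptions for illustration only.

```python
# Minimal sketch, assuming precomputed audio-autoencoder latents of dimension
# d_latent. Hyperparameters, sparsity mechanism, and class names are
# illustrative placeholders, not the paper's actual settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    """SAE trained to reconstruct audio latents through a sparse feature layer."""
    def __init__(self, d_latent: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_latent, d_features)
        self.decoder = nn.Linear(d_features, d_latent)

    def forward(self, x):
        f = F.relu(self.encoder(x))   # sparse feature activations
        x_hat = self.decoder(f)       # reconstruction of the input latent
        return x_hat, f

def sae_loss(x, x_hat, f, l1_coeff: float = 1e-3):
    # Reconstruction error plus an L1 penalty that encourages sparse features.
    return F.mse_loss(x_hat, x) + l1_coeff * f.abs().mean()

class AcousticProbe(nn.Module):
    """Linear map from SAE features to a discretized acoustic property
    (e.g., pitch binned into n_bins classes)."""
    def __init__(self, d_features: int, n_bins: int):
        super().__init__()
        self.linear = nn.Linear(d_features, n_bins)

    def forward(self, f):
        return self.linear(f)         # logits over property bins

if __name__ == "__main__":
    d_latent, d_features, n_bins = 64, 1024, 32
    sae = SparseAutoencoder(d_latent, d_features)
    probe = AcousticProbe(d_features, n_bins)

    latents = torch.randn(8, d_latent)           # stand-in for real audio latents
    pitch_bins = torch.randint(0, n_bins, (8,))  # stand-in for binned pitch labels

    x_hat, feats = sae(latents)
    recon_loss = sae_loss(latents, x_hat, feats)
    probe_loss = F.cross_entropy(probe(feats.detach()), pitch_bins)
    (recon_loss + probe_loss).backward()
```

In this sketch the probe is trained on detached SAE features, so it reads out which features correlate with a given acoustic bin without changing the SAE itself; the same linear weights could then be used to steer generation or to track how a property emerges across synthesis steps.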
Similar Papers
Sparse Autoencoders Make Audio Foundation Models more Explainable
Sound
Unlocks secrets in sound computer models.
Interpretable Embeddings with Sparse Autoencoders: A Data Analysis Toolkit
Artificial Intelligence
Finds hidden ideas in text data.
Sparse Autoencoders are Topic Models
CV and Pattern Recognition
Finds hidden themes in pictures and words.