Codec2Vec: Self-Supervised Speech Representation Learning Using Neural Speech Codecs
By: Wei-Cheng Tseng, David Harwath
Potential Business Impact:
Helps computers understand speech better, with faster training and smaller storage needs.
Recent advancements in neural audio codecs have not only enabled superior audio compression but also enhanced speech synthesis techniques. Researchers are now exploring their potential as universal acoustic feature extractors for a broader range of speech processing tasks. Building on this trend, we introduce Codec2Vec, the first speech representation learning framework that relies exclusively on discrete audio codec units. This approach offers several advantages, including improved data storage and transmission efficiency, faster training, and enhanced data privacy. We explore masked prediction with various training target derivation strategies to thoroughly understand the effectiveness of this framework. Evaluated on the SUPERB benchmark, Codec2Vec achieves competitive performance compared to continuous-input models while reducing storage requirements by up to 16.5x and training time by 2.3x, showcasing its scalability and efficiency.
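The abstract describes masked prediction over discrete codec units. As an illustrative sketch only (the paper's exact masking strategy and hyperparameters are not given here), a HuBERT-style span-masking step over a sequence of discrete unit IDs might look like this; `mask_prob`, `span`, and `mask_token` are assumed placeholder parameters:

```python
import random

def mask_codec_units(units, mask_prob=0.15, span=5, mask_token=-1, seed=0):
    """Span masking over discrete codec unit IDs (hypothetical sketch).

    Replaces randomly chosen spans with a mask token and records the
    original IDs at masked positions as prediction targets, in the style
    of masked-prediction pretraining. Not the paper's exact recipe.
    """
    rng = random.Random(seed)
    masked = list(units)
    targets = {}  # position -> original unit ID to predict
    i = 0
    while i < len(units):
        if rng.random() < mask_prob:
            # Mask a contiguous span starting at position i.
            for j in range(i, min(i + span, len(units))):
                targets[j] = units[j]
                masked[j] = mask_token
            i += span
        else:
            i += 1
    return masked, targets

# Toy usage: 50 codec unit IDs, roughly 15% span-mask rate.
masked, targets = mask_codec_units(list(range(50)))
```

A model would then be trained to predict `targets[j]` from the surrounding unmasked context, with cross-entropy over the codec vocabulary as the loss.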
Similar Papers
MelCap: A Unified Single-Codebook Neural Codec for High-Fidelity Audio Compression
Sound
Makes music and speech sound clear with less data.
DeCodec: Rethinking Audio Codecs as Universal Disentangled Representation Learners
Sound
Separates voices from noise for clearer sound.
Modeling strategies for speech enhancement in the latent space of a neural audio codec
Sound
Makes noisy speech clear by learning its hidden sounds.