MBCodec: Thorough disentanglement for high-fidelity audio compression
By: Ruonan Zhang, Xiaoyang Hao, Yichen Han, and more
Potential Business Impact:
Makes computer voices sound more real.
High-fidelity neural audio codecs for text-to-speech (TTS) aim to compress speech signals into discrete representations that allow faithful reconstruction. However, prior approaches struggled to effectively disentangle acoustic and semantic information within tokens, leading to a lack of fine-grained detail in synthesized speech. In this study, we propose MBCodec, a novel multi-codebook audio codec based on residual vector quantization (RVQ) that learns a hierarchically structured representation. MBCodec leverages self-supervised semantic tokenization and audio subband features from the raw signal to construct a functionally disentangled latent space. To encourage comprehensive learning across the layers of the codec embedding space, we introduce adaptive dropout depths that train the codebooks differentially across layers, and we employ a multi-channel pseudo-quadrature mirror filter (PQMF) during training. By thoroughly decoupling semantic and acoustic features, our method not only achieves near-lossless speech reconstruction but also enables a remarkable 170x compression of 24 kHz audio, resulting in a low bit rate of just 2.2 kbps. Experiments confirm that MBCodec consistently and substantially outperforms the baselines across all evaluations.
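As a rough illustration of the residual quantization with adaptive dropout depth described in the abstract, the sketch below (not the authors' implementation) quantizes encoder frames through a stack of codebooks, truncated at a randomly sampled depth during training so that early codebooks carry coarse content and later ones refine detail. PyTorch is assumed, and all names (SimpleRVQ, num_quantizers, codebook_size, dim, max_depth) are hypothetical.

```python
# Minimal sketch of residual vector quantization (RVQ) with random depth
# dropout during training. Assumes PyTorch; not the MBCodec implementation.
import torch
import torch.nn as nn


class SimpleRVQ(nn.Module):
    def __init__(self, num_quantizers=8, codebook_size=1024, dim=128):
        super().__init__()
        # One codebook per residual stage (hypothetical sizes).
        self.codebooks = nn.ModuleList(
            nn.Embedding(codebook_size, dim) for _ in range(num_quantizers)
        )

    def forward(self, x, max_depth=None):
        # x: (batch, dim) latent frames from the encoder.
        # During training, a randomly sampled max_depth ("dropout depth")
        # limits how many codebooks see gradients for this batch.
        if max_depth is None:
            max_depth = len(self.codebooks)
        residual = x
        quantized = torch.zeros_like(x)
        indices = []
        for codebook in self.codebooks[:max_depth]:
            # Nearest-neighbour lookup in the current codebook.
            dists = torch.cdist(residual, codebook.weight)  # (batch, codebook_size)
            idx = dists.argmin(dim=-1)
            chosen = codebook(idx)
            quantized = quantized + chosen
            residual = residual - chosen
            indices.append(idx)
        # Straight-through estimator so gradients flow back to the encoder.
        quantized = x + (quantized - x).detach()
        return quantized, torch.stack(indices, dim=-1)


if __name__ == "__main__":
    rvq = SimpleRVQ()
    frames = torch.randn(4, 128)
    depth = torch.randint(1, 9, (1,)).item()  # adaptive depth per batch
    q, codes = rvq(frames, max_depth=depth)
    print(q.shape, codes.shape)  # (4, 128) and (4, depth)
```

As a sanity check on the stated figures: assuming 24 kHz, 16-bit mono PCM (about 384 kbps), a 2.2 kbps stream corresponds to roughly a 175x reduction, consistent with the ~170x compression reported.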
Similar Papers
DeCodec: Rethinking Audio Codecs as Universal Disentangled Representation Learners
Sound
Separates voices from noise for clearer sound.
A Streamable Neural Audio Codec with Residual Scalar-Vector Quantization for Real-Time Communication
Sound
Makes online calls sound clearer, faster, and cheaper.
PURE Codec: Progressive Unfolding of Residual Entropy for Speech Codec Learning
Sound
Makes phone calls sound clearer, even with background noise.