NanoCodec: Towards High-Quality Ultra Fast Speech LLM Inference
By: Edresson Casanova, Paarth Neekhara, Ryan Langman, and others
Potential Business Impact:
Lets AI models generate speech much faster.
Large Language Models (LLMs) have significantly advanced audio processing by leveraging audio codecs to discretize audio into tokens, enabling the application of language modeling techniques to speech data. However, existing audio codecs often operate at high frame rates, leading to slow training and inference, particularly for autoregressive models. To address this, there is growing interest in low frame-rate audio codecs, which reduce the number of autoregressive steps required to generate one second of audio. In this paper, we conduct ablation studies to examine the impact of frame rate, bitrate, and causality on codec reconstruction quality. Based on our findings, we introduce NanoCodec, a state-of-the-art audio codec that achieves high-quality compression at just 12.5 frames per second (FPS). NanoCodec outperforms related works across various bitrate ranges, establishing a new benchmark for low-latency and efficient Speech LLM training and inference.
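The frame-rate argument above is simple arithmetic: an autoregressive Speech LLM spends one decoding step per codec frame, so halving the frame rate roughly halves generation latency. The sketch below illustrates this relationship; the 75 and 25 FPS comparison points are illustrative assumptions, not codecs evaluated in the paper, and it assumes all codebooks of a frame are predicted jointly in a single step.

```python
# Illustrative sketch: frame rate vs. autoregressive decoding steps.
# Assumption: one AR decoding step per codec frame (codebooks within
# a frame predicted jointly). FPS values other than 12.5 are made up
# for comparison and are not taken from the paper.

def autoregressive_steps(audio_seconds: float, fps: float) -> int:
    """Decoding steps an AR Speech LLM needs to generate the audio."""
    return round(audio_seconds * fps)

for fps in (75.0, 25.0, 12.5):  # 12.5 FPS is NanoCodec's rate
    steps = autoregressive_steps(10.0, fps)
    print(f"{fps:5.1f} FPS -> {steps:4d} steps for 10 s of audio")
```

At NanoCodec's 12.5 FPS, 10 seconds of audio takes 125 decoding steps, versus 750 at a 75 FPS codec, which is the efficiency gain the abstract refers to.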
Similar Papers
U-Codec: Ultra Low Frame-rate Neural Speech Codec for Fast High-fidelity Speech Generation
Sound
Makes voices sound real with less data.
FlexiCodec: A Dynamic Neural Audio Codec for Low Frame Rates
Sound
Makes talking computers understand speech better.
PhoenixCodec: Taming Neural Speech Coding for Extreme Low-Resource Scenarios
Audio and Speech Processing
Makes phone calls clear with very little internet.