Index-ASR Technical Report
By: Zheshu Song, Lu Wang, Wei Deng, and more
Potential Business Impact:
Makes voice assistants understand speech better, with fewer mistakes.
Automatic speech recognition (ASR) has witnessed remarkable progress in recent years, largely driven by the emergence of the LLM-based ASR paradigm. Despite their strong performance on a variety of open-source benchmarks, existing LLM-based ASR systems still suffer from two critical limitations. First, they are prone to hallucination errors, often generating excessively long and repetitive outputs that are not well grounded in the acoustic input. Second, they provide limited support for flexible, fine-grained contextual customization. To address these challenges, we propose Index-ASR, a large-scale LLM-based ASR system designed to simultaneously enhance robustness and support customizable hotword recognition. The core idea of Index-ASR lies in integrating an LLM with large-scale training data enriched with background noise and contextual information. Experimental results show that Index-ASR achieves strong performance on both open-source benchmarks and in-house test sets, highlighting its robustness and practicality for real-world ASR applications.
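The report does not detail the hotword interface, but contextual customization in LLM-based ASR is commonly realized by injecting user-supplied terms into the decoder's text prompt. The sketch below illustrates that general pattern; the function name and prompt format are assumptions for illustration, not the actual Index-ASR API.

```python
# Hypothetical sketch of hotword-conditioned prompting for an LLM-based ASR
# decoder. The helper name and prompt template are illustrative assumptions,
# not the Index-ASR interface described in the report.

def build_asr_prompt(hotwords, instruction="Transcribe the audio."):
    """Prepend user-supplied hotwords as a contextual bias for the decoder.

    hotwords: list of domain terms (names, jargon) the user expects to occur.
    Returns the text prompt fed to the LLM alongside the audio features.
    """
    if hotwords:
        # The hotword block biases the LLM toward these spellings at decode time.
        hotword_block = "Relevant terms: " + ", ".join(hotwords) + "\n"
    else:
        hotword_block = ""
    return hotword_block + instruction


# Example: bias recognition toward two project-specific terms.
prompt = build_asr_prompt(["Index-ASR", "wav2vec"])
print(prompt)
```

With an empty hotword list the prompt degrades gracefully to the plain transcription instruction, so the same code path serves both customized and default recognition.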
Similar Papers
FunAudio-ASR Technical Report
Computation and Language
Makes talking computers understand messy, noisy speech.
Index-MSR: A high-efficiency multimodal fusion framework for speech recognition
Audio and Speech Processing
Makes talking computers understand videos better.