Score: 1

Fewer Hallucinations, More Verification: A Three-Stage LLM-Based Framework for ASR Error Correction

Published: May 30, 2025 | arXiv ID: 2505.24347v2

By: Yangui Fang, Baixu Cheng, Jing Peng, and more

Potential Business Impact:

Fixes computer speech-recognition mistakes without changing words that are already correct.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Automatic Speech Recognition (ASR) error correction aims to fix recognition errors while preserving text that is already accurate. Although traditional approaches demonstrate moderate effectiveness, LLMs offer a paradigm that eliminates the need for training and labeled data. However, directly applying LLMs runs into the hallucination problem, which can lead to modification of correct text. To address this problem, we propose the Reliable LLM Correction Framework (RLLM-CF), which consists of three stages: (1) error pre-detection, (2) chain-of-thought sub-task iterative correction, and (3) reasoning process verification. The advantage of our method is that it requires no additional information or fine-tuning of the model, and it ensures the correctness of the LLM correction under multi-pass inference. Experiments on AISHELL-1, AISHELL-2, and Librispeech show that a GPT-4o model enhanced by our framework achieves relative reductions of 21%, 11%, 9%, and 11.4% in CER/WER.
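The three-stage loop described in the abstract maps naturally onto a small pipeline. Below is a minimal Python sketch of that flow, assuming a generic chat-style LLM behind a placeholder `call_llm`; all function names, prompt wordings, and the `max_passes` budget are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of the RLLM-CF three-stage loop from the abstract.
# `call_llm` is a stand-in for any chat-completion API (e.g., GPT-4o);
# prompts and helper names here are illustrative assumptions only.

def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to the underlying LLM."""
    raise NotImplementedError("Wire this to your LLM provider.")

def pre_detect(hypothesis: str) -> bool:
    """Stage 1: ask whether the ASR hypothesis contains errors at all.
    If no error is flagged, the text is returned untouched, which is how
    the framework avoids hallucinated edits to already-correct text."""
    answer = call_llm(
        "Does this ASR transcript contain recognition errors? "
        f"Answer YES or NO only.\nTranscript: {hypothesis}"
    )
    return answer.strip().upper().startswith("YES")

def correct_with_cot(hypothesis: str) -> tuple[str, str]:
    """Stage 2: chain-of-thought correction split into sub-tasks
    (locate errors, propose fixes, rewrite). Returns the corrected
    text plus the reasoning trace for later verification."""
    reasoning = call_llm(
        "Step by step: (1) locate likely recognition errors, "
        "(2) propose corrections, (3) rewrite the sentence.\n"
        f"Transcript: {hypothesis}"
    )
    corrected = call_llm(
        f"Extract only the final corrected sentence from:\n{reasoning}"
    )
    return corrected.strip(), reasoning

def verify_reasoning(hypothesis: str, corrected: str, reasoning: str) -> bool:
    """Stage 3: check that the reasoning trace actually justifies the edit."""
    verdict = call_llm(
        "Given the original transcript, the proposed correction, and the "
        "reasoning, is the correction justified? Answer YES or NO.\n"
        f"Original: {hypothesis}\nCorrection: {corrected}\nReasoning: {reasoning}"
    )
    return verdict.strip().upper().startswith("YES")

def rllm_cf(hypothesis: str, max_passes: int = 3) -> str:
    """Run the three stages; iterate until verification passes or the
    pass budget runs out, then fall back to the original hypothesis."""
    if not pre_detect(hypothesis):
        return hypothesis  # no detected error: never touch correct text
    for _ in range(max_passes):
        corrected, reasoning = correct_with_cot(hypothesis)
        if verify_reasoning(hypothesis, corrected, reasoning):
            return corrected
    return hypothesis  # verification never passed: keep the original
```

The fallback to the original hypothesis whenever verification fails mirrors the framework's stated goal: multi-pass checking should never degrade text the recognizer already got right.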

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
7 pages

Category
Computer Science:
Computation and Language