From Speech to Subtitles: Evaluating ASR Models in Subtitling Italian Television Programs
By: Alessandro Lucca, Francesco Pierri
Potential Business Impact:
Helps make videos easier to understand for everyone.
Subtitles are essential for video accessibility and audience engagement. Modern Automatic Speech Recognition (ASR) systems, built upon Encoder-Decoder neural network architectures and trained on massive amounts of data, have progressively reduced transcription errors on standard benchmark datasets. However, their performance in real-world production environments, particularly for non-English content like long-form Italian videos, remains largely unexplored. This paper presents a case study on developing a professional subtitling system for an Italian media company. To inform our system design, we evaluated four state-of-the-art ASR models (Whisper Large v2, AssemblyAI Universal, Parakeet TDT v3 0.6b, and WhisperX) on a 50-hour dataset of Italian television programs. The study highlights their strengths and limitations, benchmarking their performance against the work of professional human subtitlers. The findings indicate that, while current models cannot meet the media industry's accuracy needs for full autonomy, they can serve as highly effective tools for enhancing human productivity. We conclude that a human-in-the-loop (HITL) approach is crucial and present the production-grade, cloud-based infrastructure we designed to support this workflow.
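Benchmarks like the one described above typically score ASR output against human reference transcripts using Word Error Rate (WER): the number of word substitutions, deletions, and insertions divided by the reference length. A minimal, self-contained sketch of that metric is below; the paper's own scoring code is not shown here, and the `wer` function and Italian sample sentences are illustrative.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + deletions + insertions) / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit distance over word tokens (Levenshtein).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / len(ref)

# Italian toy example: one substitution ("gato") and one deletion ("sul")
# against a five-word reference give WER = 2/5 = 0.4.
print(wer("il gatto dorme sul divano", "il gato dorme divano"))  # → 0.4
```

In practice, production evaluations normalize punctuation and casing before scoring, which matters for subtitle text; the sketch above skips normalization for brevity.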