ViCocktail: Automated Multi-Modal Data Collection for Vietnamese Audio-Visual Speech Recognition
By: Thai-Binh Nguyen, Thi Van Nguyen, Quoc Truong Do, and more
Potential Business Impact:
Helps computers understand speech in noisy places.
Audio-Visual Speech Recognition (AVSR) has gained significant attention recently due to its robustness against noise, which often challenges conventional speech recognition systems that rely solely on audio features. Despite this advantage, AVSR models remain limited by the scarcity of large-scale datasets, especially for languages other than English. Automated data collection offers a promising solution. This work presents a practical approach to generating AVSR datasets from raw video, refining existing techniques for improved efficiency and accessibility. We demonstrate its broad applicability by developing a baseline AVSR model for Vietnamese. Experiments show that the automatically collected dataset enables a strong baseline that achieves performance competitive with robust ASR models in clean conditions and significantly outperforms them in noisy environments such as cocktail parties. This efficient method provides a pathway to extend AVSR to more languages, particularly under-resourced ones.
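The abstract does not detail the collection pipeline itself, but a typical first stage of automated AVSR data collection is detecting and cropping speaker faces from raw video before transcripts are aligned to the clips. The sketch below illustrates only that stage with OpenCV; the function name `extract_face_crops`, the crop size, and the detector thresholds are illustrative assumptions, not the authors' method, which would also involve steps such as active-speaker detection and transcript filtering.

```python
# Minimal sketch of one stage of an automated AVSR data pipeline:
# detecting and cropping face regions from raw video with OpenCV.
# Thresholds, crop size, and function names are assumptions for
# illustration, not the paper's actual implementation.
import cv2

def extract_face_crops(video_path: str, out_size: int = 96):
    """Yield per-frame grayscale face crops, resized to out_size x out_size."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:  # end of stream
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Detect candidate faces; a real pipeline would track identities
        # across frames and keep only the active speaker.
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            crop = cv2.resize(gray[y:y + h, x:x + w], (out_size, out_size))
            yield crop
    cap.release()

if __name__ == "__main__":
    # Hypothetical input file; replace with a real video path.
    for i, crop in enumerate(extract_face_crops("raw_clip.mp4")):
        cv2.imwrite(f"face_{i:06d}.png", crop)
```

In practice, the cropped face (or lip-region) frames would be paired with the audio track and an automatic transcript to form the visual stream of an AVSR training example.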
Similar Papers
Cocktail-Party Audio-Visual Speech Recognition
Sound
Helps computers understand speech even in loud places.
Zero-AVSR: Zero-Shot Audio-Visual Speech Recognition with LLMs by Learning Language-Agnostic Speech Representations
CV and Pattern Recognition
Lets computers understand any spoken language.
Scalable Frameworks for Real-World Audio-Visual Speech Recognition
Audio and Speech Processing
Helps computers understand speech even with noise.