A Novel Data Augmentation Approach for Automatic Speaking Assessment on Opinion Expressions
By: Chung-Chun Wang, Jhen-Ke Lin, Hao-Chien Lu, and others
Potential Business Impact:
Teaches computers to judge speaking skills from voice.
Automated speaking assessment (ASA) on opinion expressions is often hampered by the scarcity of labeled recordings, which restricts prompt diversity and undermines scoring reliability. To address this challenge, we propose a novel training paradigm that leverages a large language model (LLM) to generate diverse responses at a given proficiency level, converts these responses into synthesized speech via speaker-aware text-to-speech synthesis, and employs a dynamic importance loss to adaptively reweight training instances based on feature distribution differences between synthesized and real speech. A multimodal large language model then integrates aligned textual features with speech signals to predict proficiency scores directly. Experiments conducted on the LTTC dataset show that our approach outperforms methods relying on real data or conventional augmentation, effectively mitigating low-resource constraints and enabling ASA on opinion expressions with cross-modal information.
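The abstract does not give the exact form of the dynamic importance loss, but one plausible reading is that each synthetic training instance is weighted by how close its features lie to the real-speech feature distribution, so that unrealistic synthetic samples contribute less to training. The sketch below is a hypothetical minimal implementation of that idea (the function names, the distance-to-mean heuristic, and the softmax temperature are all assumptions, not the paper's actual formulation):

```python
import math

def importance_weights(synth_feats, real_feats, temperature=1.0):
    """Hypothetical instance reweighting: synthetic samples whose feature
    vectors lie far from the real-speech feature mean are down-weighted."""
    dim = len(real_feats[0])
    # Mean feature vector of the real-speech pool
    mean = [sum(f[d] for f in real_feats) / len(real_feats) for d in range(dim)]
    # Euclidean distance of each synthetic instance to that mean
    dists = [math.dist(f, mean) for f in synth_feats]
    # Softmax over negative distances: closer-to-real instances weigh more
    exps = [math.exp(-d / temperature) for d in dists]
    z = sum(exps)
    return [e / z for e in exps]

def weighted_mse(preds, targets, weights):
    """Scoring loss with per-instance importance weights applied."""
    return sum(w * (p - t) ** 2 for p, t, w in zip(preds, targets, weights))
```

In the paper's actual method the weights are presumably recomputed dynamically during training as feature distributions shift; this static sketch only illustrates the reweighting principle.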
Similar Papers
Beyond Modality Limitations: A Unified MLLM Approach to Automated Speaking Assessment with Effective Curriculum Learning
Computation and Language
Helps computers judge how well people speak.
Advancing Automated Speaking Assessment Leveraging Multifaceted Relevance and Grammar Information
Computation and Language
Helps computers judge speaking better by checking words and grammar.
Mitigating Data Imbalance in Automated Speaking Assessment
Computation and Language
Helps computers judge speaking better for everyone.