Advancing Automated Speaking Assessment Leveraging Multifaceted Relevance and Grammar Information
By: Hao-Chien Lu, Jhen-Ke Lin, Hong-Yun Lin, and more
Potential Business Impact:
Helps computers judge speaking better by checking words and grammar.
Current automated speaking assessment (ASA) systems used in multi-aspect evaluation often fail to make full use of content relevance, overlooking image or exemplar cues, and employ superficial grammar analysis that lacks detailed error types. This paper addresses these deficiencies with two novel enhancements that together form a hybrid scoring model. First, a multifaceted relevance module integrates the question, its associated image content, the exemplar, and the L2 speaker's spoken response for a comprehensive assessment of content relevance. Second, fine-grained grammar error features are derived through advanced grammatical error correction (GEC) and detailed annotation that identifies specific error categories. Experiments and ablation studies demonstrate that these components significantly improve the evaluation of content relevance, language use, and overall ASA performance, highlighting the benefits of richer, more nuanced feature sets for holistic speaking assessment.
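As a rough illustration of how such a hybrid scorer might combine the two feature families, the minimal sketch below pairs relevance similarities (response embedding vs. question, image-caption, and exemplar embeddings) with normalized per-category grammar error counts and fits a simple regressor. The error categories (ERRANT-style labels), the embedding dimensionality, the regressor choice, and the toy data are all assumptions for illustration; this is not the authors' implementation.

```python
# Hypothetical hybrid ASA scorer sketch: multifaceted relevance + fine-grained grammar features.
import numpy as np
from sklearn.linear_model import Ridge

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def relevance_features(resp, question, image_caption, exemplar):
    # One similarity per relevance facet: question, image content, exemplar.
    return [cosine(resp, question), cosine(resp, image_caption), cosine(resp, exemplar)]

ERROR_CATEGORIES = ["VERB:TENSE", "NOUN:NUM", "PREP", "DET", "SPELL"]  # illustrative GEC labels

def grammar_features(error_counts, num_tokens):
    # Per-category error rates from a GEC/annotation pass.
    return [error_counts.get(c, 0) / max(num_tokens, 1) for c in ERROR_CATEGORIES]

# Toy data: 32 responses with 384-dim embeddings and holistic scores in [1, 5].
rng = np.random.default_rng(0)
embs = rng.normal(size=(32, 4, 384))   # response, question, image caption, exemplar
errs = [{c: int(rng.integers(0, 4)) for c in ERROR_CATEGORIES} for _ in range(32)]
y = rng.uniform(1, 5, size=32)

X = np.array([relevance_features(*embs[i]) + grammar_features(errs[i], 120)
              for i in range(32)])
scorer = Ridge(alpha=1.0).fit(X, y)
print(scorer.predict(X[:3]))           # predicted holistic scores for the first three responses
```

In practice the paper's model would replace the toy regressor and random embeddings with learned components, but the sketch shows the core idea of concatenating relevance and grammar features into one holistic score.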
Similar Papers
A Novel Data Augmentation Approach for Automatic Speaking Assessment on Opinion Expressions
Computation and Language
Teaches computers to judge speaking skills from voice.
Beyond Modality Limitations: A Unified MLLM Approach to Automated Speaking Assessment with Effective Curriculum Learning
Computation and Language
Helps computers judge how well people speak.
An Effective Strategy for Modeling Score Ordinality and Non-uniform Intervals in Automated Speaking Assessment
Audio and Speech Processing
Helps computers judge how well people speak English.