Mitigating Data Imbalance in Automated Speaking Assessment
By: Fong-Chun Tsai, Kuan-Tang Huang, Bi-Cheng Yan, and more
Potential Business Impact:
Helps computers judge speaking better for everyone.
Automated Speaking Assessment (ASA) plays a crucial role in evaluating second-language (L2) learners' proficiency. However, ASA models often suffer from class imbalance, leading to biased predictions. To address this, we introduce a novel training objective for ASA models, dubbed the Balancing Logit Variation (BLV) loss, which perturbs model predictions to improve feature representations for minority classes without modifying the dataset. Evaluations on the ICNALE benchmark dataset show that integrating the BLV loss into a celebrated text-based (BERT) model significantly enhances classification accuracy and fairness, making automated speech evaluation more robust for diverse learners.
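The abstract only sketches the mechanism, so the snippet below is a minimal, hypothetical PyTorch sketch of what a logit-perturbation loss of this kind could look like: class-frequency-dependent Gaussian noise is added to the logits before a standard cross-entropy, so rarer proficiency levels receive larger perturbations and effectively wider decision regions. The class name `BLVCrossEntropy`, the `class_counts` and `sigma` parameters, and the exact scaling rule are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BLVCrossEntropy(nn.Module):
    """Sketch of a Balancing-Logit-Variation-style loss (illustrative only).

    During training, Gaussian noise whose scale grows for rarer classes is
    added to the logits before cross-entropy, so minority-class predictions
    are perturbed more strongly without resampling the dataset.
    """

    def __init__(self, class_counts, sigma=1.0):
        super().__init__()
        counts = torch.as_tensor(class_counts, dtype=torch.float)
        # Inverse-frequency scaling (assumed): rarest class gets scale ~ sigma,
        # the most frequent class gets the smallest perturbation.
        scale = counts.min() / counts
        self.register_buffer("noise_scale", sigma * scale)

    def forward(self, logits, targets):
        # Perturb logits only during training; evaluate on clean logits.
        if self.training:
            noise = torch.randn_like(logits) * self.noise_scale
            logits = logits + noise
        return F.cross_entropy(logits, targets)

# Hypothetical usage with a BERT-based ASA classifier:
# loss_fn = BLVCrossEntropy(class_counts=[500, 300, 120, 40])
# loss = loss_fn(classifier_logits, proficiency_labels)
```

In an ASA setting, `class_counts` would be the number of training responses per proficiency level, and the loss would wrap the BERT classifier's output logits during fine-tuning.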
Similar Papers
A Novel Data Augmentation Approach for Automatic Speaking Assessment on Opinion Expressions
Computation and Language
Teaches computers to judge speaking skills from voice.
Mind the Language Gap: Automated and Augmented Evaluation of Bias in LLMs for High- and Low-Resource Languages
Computation and Language
Finds and fixes unfairness in AI language.
Beyond Modality Limitations: A Unified MLLM Approach to Automated Speaking Assessment with Effective Curriculum Learning
Computation and Language
Helps computers judge how well people speak.