Who Said What WSW 2.0? Enhanced Automated Analysis of Preschool Classroom Speech
By: Anchen Sun, Tiantian Feng, Gabriela Gutierrez, and more
Potential Business Impact:
Helps teachers understand kids' classroom talk better.
This paper introduces WSW2.0, an automated framework for analyzing vocal interactions in preschool classrooms that improves both accuracy and scalability by integrating wav2vec2-based speaker classification with Whisper (large-v2 and large-v3) speech transcription. A total of 235 minutes of audio recordings (160 minutes from 12 children and 75 minutes from 5 teachers) were used to compare system outputs against expert human annotations. WSW2.0 achieves a weighted F1 score of .845, accuracy of .846, and an error-corrected kappa of .672 for speaker classification (child vs. teacher). Transcription quality is moderate to high, with word error rates of .119 for teachers and .238 for children. WSW2.0 also shows high absolute-agreement intraclass correlations (ICCs) with expert transcriptions across a range of classroom language features, including teacher and child mean utterance length, lexical diversity, question asking, and responses to questions and other utterances, with ICCs between .64 and .98. To establish scalability, the authors apply the framework to an extensive dataset spanning two years and over 1,592 hours of classroom audio, demonstrating its robustness for broad real-world applications. These findings highlight the potential of deep learning and natural language processing techniques to advance educational research by providing accurate measures of key features of preschool classroom speech, ultimately guiding more effective intervention strategies and supporting early childhood language development.
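The word error rates cited above (.119 for teachers, .238 for children) are the standard WER metric: word-level edit distance (substitutions + deletions + insertions) divided by reference length. A minimal sketch, assuming simple whitespace tokenization (the paper's actual normalization and scoring pipeline is not specified here):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# One deleted word out of a six-word reference -> WER of 1/6
print(round(wer("the cat sat on the mat", "the cat sat on mat"), 3))
```

In practice a library such as jiwer is typically used for this; the example is only meant to make the reported .119 and .238 figures concrete.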
Similar Papers
Zero-Shot KWS for Children's Speech using Layer-Wise Features from SSL Models
Audio and Speech Processing
Helps voice assistants understand kids better.
Whisper Speaker Identification: Leveraging Pre-Trained Multilingual Transformers for Robust Speaker Embeddings
Sound
Identifies speakers in any language, even noisy ones.
The NTNU System at the S&I Challenge 2025 SLA Open Track
Computation and Language
Tests speaking skills better by combining sound and words.