Layer-Wise Analysis of Self-Supervised Representations for Age and Gender Classification in Children's Speech
By: Abhijit Sinha, Harishankar Kumar, Mohit Joshi, and more
Potential Business Impact:
Helps computers tell kids' ages and genders.
Children's speech presents challenges for age and gender classification due to high variability in pitch, articulation, and developmental traits. While self-supervised learning (SSL) models perform well on adult speech tasks, their ability to encode speaker traits in children remains underexplored. This paper presents a detailed layer-wise analysis of four Wav2Vec2 variants using the PFSTAR and CMU Kids datasets. Results show that early layers (1-7) capture speaker-specific cues more effectively than deeper layers, which increasingly focus on linguistic information. Applying PCA further improves classification, reducing redundancy and highlighting the most informative components. The Wav2Vec2-large-lv60 model achieves 97.14% (age) and 98.20% (gender) on CMU Kids; base-100h and large-lv60 models reach 86.05% and 95.00% on PFSTAR. These results reveal how speaker traits are structured across SSL model depth and support more targeted, adaptive strategies for child-aware speech interfaces.
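The layer-wise probing the abstract describes can be sketched as: mean-pool each hidden layer's frame-level features into one vector per utterance, reduce with PCA, and score a simple classifier per layer. The sketch below uses synthetic features as a stand-in for Wav2Vec2 hidden states (which in practice come from a model call with `output_hidden_states=True`); the layer count, dimensions, and the injected "speaker cue" that fades with depth are illustrative assumptions mimicking the paper's finding, not its actual pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for mean-pooled hidden states:
# shape (n_layers, n_utterances, hidden_dim). Real features would come
# from a Wav2Vec2 forward pass with output_hidden_states=True.
n_layers, n_utts, dim = 12, 200, 64
labels = rng.integers(0, 2, size=n_utts)  # e.g. gender: 0/1
features = rng.normal(size=(n_layers, n_utts, dim))
for layer in range(n_layers):
    # Assumed cue: speaker information strongest in early layers,
    # fading to zero by layer 8 (illustrative, not from the paper).
    strength = max(0.0, 1.0 - layer / 8)
    features[layer, :, 0] += strength * (2 * labels - 1)

def layer_accuracy(feats, y, n_components=16):
    """PCA-reduce one layer's pooled features, then score a linear probe."""
    X = PCA(n_components=n_components).fit_transform(feats)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

scores = [layer_accuracy(features[l], labels) for l in range(n_layers)]
best = int(np.argmax(scores))
print(f"best layer: {best}, accuracy: {scores[best]:.2f}")
```

Under these assumptions the probe peaks at an early layer, echoing the paper's observation that layers 1-7 carry the strongest speaker-specific cues while deeper layers shift toward linguistic content.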
Similar Papers
Can Layer-wise SSL Features Improve Zero-Shot ASR Performance for Children's Speech?
Audio and Speech Processing
Makes computers understand kids' talking better.
Zero-Shot KWS for Children's Speech using Layer-Wise Features from SSL Models
Audio and Speech Processing
Helps voice assistants understand kids better.
Layer-wise Analysis for Quality of Multilingual Synthesized Speech
Audio and Speech Processing
Makes computer voices sound more human-like.