Employing self-supervised learning models for cross-linguistic child speech maturity classification

Published: June 10, 2025 | arXiv ID: 2506.08999v1

By: Theo Zhang, Madurya Suresh, Anne S. Warlaumont, and more

BigTech Affiliations: Stanford University

Potential Business Impact:

Helps computers understand babies' sounds better.

Business Areas:
Speech Recognition Data and Analytics, Software

Speech technology systems struggle with many downstream tasks for child speech due to small training corpora and the difficulties that child speech poses. We apply a novel dataset, SpeechMaturity, to state-of-the-art transformer models to address a fundamental classification task: identifying child vocalizations. Unlike previous corpora, our dataset captures maximally ecologically valid child vocalizations across an unprecedented sample, comprising children acquiring 25+ languages in the U.S., Bolivia, Vanuatu, Papua New Guinea, the Solomon Islands, and France. The dataset contains 242,004 labeled vocalizations, orders of magnitude larger than previous work. Models were trained to distinguish between cry, laughter, mature speech (consonant+vowel), and immature speech (just a consonant or vowel). Models trained on the dataset outperformed state-of-the-art models trained on previous datasets, achieved classification accuracy comparable to humans, and were robust across rural and urban settings.
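The four-way task described above (cry, laughter, mature, immature) can be pictured as a small classification head over pooled frame-level embeddings from a self-supervised speech encoder. The sketch below is purely illustrative: the `classify` function, the embedding dimension, and the random weights are assumptions for demonstration, not the paper's actual architecture or trained parameters.

```python
import numpy as np

# The four vocalization classes from the paper's task definition.
CLASSES = ["cry", "laughter", "mature", "immature"]

def softmax(z):
    # Numerically stable softmax over class logits.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def classify(frame_embeddings, W, b):
    """Mean-pool frame embeddings from a (hypothetical) transformer
    encoder, then apply a linear softmax head over the four classes."""
    pooled = frame_embeddings.mean(axis=0)   # (d,) utterance-level vector
    probs = softmax(W @ pooled + b)          # (4,) class probabilities
    return CLASSES[int(np.argmax(probs))], probs

# Toy example: 50 frames of a 16-dimensional embedding (illustrative sizes).
rng = np.random.default_rng(0)
d = 16
frames = rng.normal(size=(50, d))
W = rng.normal(size=(len(CLASSES), d))   # untrained, random head weights
b = np.zeros(len(CLASSES))

label, probs = classify(frames, W, b)
```

In practice the encoder would be a pretrained self-supervised speech model and the head would be fine-tuned on the labeled vocalizations; this toy version only shows the shape of the pipeline.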

Country of Origin
🇺🇸 United States

Page Count
5 pages

Category
Computer Science:
Computation and Language