Cross-Lingual Interleaving for Speech Language Models

Published: December 1, 2025 | arXiv ID: 2512.01865v1

By: Adel Moumen, Guangzhi Sun, Philip C. Woodland

Potential Business Impact:

Helps computers understand and converse in many languages directly from speech, without needing written text.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Spoken Language Models (SLMs) aim to learn linguistic competence directly from speech using discrete units, widening access to Natural Language Processing (NLP) technologies for languages with limited written resources. However, progress has been largely English-centric due to scarce spoken evaluation benchmarks and training data, making cross-lingual learning difficult. We present a cross-lingual interleaving method that mixes speech tokens across languages without textual supervision. We also release an EN-FR training dataset, TinyStories (~42k hours), together with EN-FR spoken StoryCloze and TopicCloze benchmarks for cross-lingual semantic evaluation, both synthetically generated using GPT-4. On 360M and 1B SLMs under matched training-token budgets, interleaving improves monolingual semantic accuracy, enables robust cross-lingual continuation, and strengthens cross-lingual hidden-state alignment. Taken together, these results indicate that cross-lingual interleaving is a simple, scalable route to building multilingual SLMs that understand and converse across languages. All resources will be made open-source to support reproducibility.
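The core idea of cross-lingual interleaving is to mix discrete speech-token spans from two languages into a single training sequence, so the model learns cross-lingual structure without any textual supervision. A minimal sketch of span-level interleaving is shown below; the fixed span size and simple alternation scheme are illustrative assumptions, not the paper's exact recipe.

```python
def interleave(en_tokens, fr_tokens, span=5):
    """Alternate fixed-size spans of discrete speech tokens from two
    languages into one sequence (illustrative sketch only)."""
    out, i, j = [], 0, 0
    while i < len(en_tokens) or j < len(fr_tokens):
        # Take the next span from each language in turn; slicing past
        # the end of a list simply yields the remaining tokens.
        out.extend(en_tokens[i:i + span]); i += span
        out.extend(fr_tokens[j:j + span]); j += span
    return out

# Example: interleave two short token streams in spans of 3.
mixed = interleave([1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12], span=3)
print(mixed)
```

In practice the interleaved sequences would be drawn from parallel or comparable speech data (here, the EN-FR TinyStories corpus) after discretizing the audio into units, and the SLM is then trained on the mixed streams under the usual next-token objective.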

Country of Origin
🇬🇧 United Kingdom

Page Count
5 pages

Category
Computer Science:
Computation and Language