Score: 2

Text-Speech Language Models with Improved Cross-Modal Transfer by Aligning Abstraction Levels

Published: March 8, 2025 | arXiv ID: 2503.06211v1

By: Santiago Cuervo, Adel Moumen, Yanis Labrak, and more

Potential Business Impact:

Lets AI models process and generate speech and text together, so voice capabilities can be built on top of existing text models at lower cost.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Text-Speech Language Models (TSLMs) -- language models trained to jointly process and generate text and speech -- aim to enable cross-modal knowledge transfer to overcome the scaling limitations of unimodal speech LMs. The predominant approach to TSLM training expands the vocabulary of a pre-trained text LM by appending new embeddings and linear projections for speech, followed by fine-tuning on speech data. We hypothesize that this method limits cross-modal transfer by neglecting feature compositionality, preventing text-learned functions from being fully leveraged at appropriate abstraction levels. To address this, we propose augmenting vocabulary expansion with modules that better align abstraction levels across layers. Our models, SmolTolk, rival or surpass state-of-the-art TSLMs trained with orders of magnitude more compute. Representation analyses and improved multimodal performance suggest our method enhances cross-modal transfer.
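
To make the baseline concrete, here is a minimal PyTorch sketch of the vocabulary-expansion approach the abstract describes: a pre-trained text LM gains extra embedding rows and output projections for discrete speech tokens. All names and sizes (VocabExpandedTSLM, n_speech_tokens, the tiny Transformer stand-in) are illustrative assumptions, not the paper's architecture, and the proposed layer-alignment modules are omitted since the abstract does not specify their design.

```python
import torch
import torch.nn as nn

class VocabExpandedTSLM(nn.Module):
    """Hedged sketch: text LM with vocabulary expanded for speech tokens."""

    def __init__(self, d_model=256, text_vocab=32000, n_speech_tokens=1024):
        super().__init__()
        total_vocab = text_vocab + n_speech_tokens
        # Joint embedding table: rows [0, text_vocab) would be copied from
        # the pre-trained text LM; the speech rows are newly initialized.
        self.embed = nn.Embedding(total_vocab, d_model)
        # Stand-in for the pre-trained decoder stack (placeholder only).
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        # Expanded linear projection over the joint text+speech vocabulary.
        self.lm_head = nn.Linear(d_model, total_vocab, bias=False)

    def forward(self, token_ids):
        h = self.embed(token_ids)   # (batch, seq, d_model)
        h = self.backbone(h)        # contextualized hidden states
        return self.lm_head(h)      # logits over text and speech tokens

model = VocabExpandedTSLM()
tokens = torch.randint(0, 32000 + 1024, (2, 16))  # mixed text/speech ids
logits = model(tokens)                            # shape (2, 16, 33024)
```

The paper's hypothesis is that attaching speech only at the input embeddings and output head, as above, forces speech features through layers tuned to text abstraction levels; its proposed modules would sit between backbone layers to realign those levels.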

Repos / Data Links

Page Count
18 pages

Category
Computer Science:
Computation and Language