
MTP-S2UT: Enhancing Speech-to-Speech Translation Quality with Multi-token Prediction

Published: October 11, 2025 | arXiv ID: 2510.10003v1

By: Jianjin Wang, Runsong Zhao, Xiaoqian Liu, and more

Potential Business Impact:

Improves spoken-language translation by capturing more meaning at each prediction step.

Business Areas:
Translation Service, Professional Services

Current direct speech-to-speech translation methods predominantly employ speech tokens as intermediate representations. However, a single speech token is not semantically dense, so multiple tokens are generally needed to express a complete semantic unit. To address this limitation, we introduce a multi-token prediction (MTP) loss into speech-to-unit translation (S2UT) models, enabling them to predict multiple subsequent tokens at each position, thereby capturing more complete semantics and increasing the information density per position. Initial MTP implementations apply the loss at the final layer, which improves the output representation but starts the information enrichment too late. We hypothesize that moving this enrichment to intermediate layers enables earlier and more effective enhancement of the hidden representations. Consequently, we propose the MTP-S2UT loss, which applies the MTP loss to the hidden representations where the CTC loss is computed. Experiments demonstrate that all MTP loss variants consistently improve S2UT translation quality, with MTP-S2UT achieving the best performance.
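To make the idea concrete, below is a minimal PyTorch sketch of a multi-token prediction auxiliary loss applied to an intermediate hidden representation, in the spirit of the abstract. The module names, dimensions, number of future heads, and loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Sketch of an MTP auxiliary loss on intermediate hidden states (assumptions:
# the hidden layer, head count, and weighting are illustrative, not the paper's).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MTPHead(nn.Module):
    """Predicts the next `num_future` speech units from each hidden position."""

    def __init__(self, hidden_dim: int, vocab_size: int, num_future: int = 4):
        super().__init__()
        self.num_future = num_future
        # One linear projection per future offset (t+1, t+2, ...).
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, vocab_size) for _ in range(num_future)
        )

    def forward(self, hidden: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        """
        hidden:  (batch, seq_len, hidden_dim) intermediate-layer states,
                 e.g. the layer where the CTC loss is also computed.
        targets: (batch, seq_len) discrete speech-unit ids (pad = -100).
        Returns the cross-entropy averaged over all future offsets.
        """
        losses = []
        for k, head in enumerate(self.heads, start=1):
            logits = head(hidden)                    # (B, T, V)
            shifted = targets[:, k:]                 # position t predicts unit t+k
            logits_k = logits[:, : shifted.size(1)]  # align lengths after the shift
            losses.append(
                F.cross_entropy(
                    logits_k.reshape(-1, logits_k.size(-1)),
                    shifted.reshape(-1),
                    ignore_index=-100,
                )
            )
        return torch.stack(losses).mean()


# Usage sketch: the MTP term would be added to the primary S2UT (and CTC)
# objectives during training; the weight here is a placeholder.
if __name__ == "__main__":
    B, T, H, V = 2, 50, 256, 1000
    hidden_states = torch.randn(B, T, H)        # stand-in for intermediate states
    unit_targets = torch.randint(0, V, (B, T))  # stand-in for speech-unit targets
    mtp_loss = MTPHead(H, V)(hidden_states, unit_targets)
    total_loss = 1.0 * mtp_loss                 # + S2UT loss + CTC loss in practice
    print(float(mtp_loss))
```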

Page Count
5 pages

Category
Computer Science:
Computation and Language