MTP-S2UT: Enhancing Speech-to-Speech Translation Quality with Multi-token Prediction
By: Jianjin Wang, Runsong Zhao, Xiaoqian Liu, and more
Potential Business Impact:
Translates spoken words better by understanding more meaning.
Current direct speech-to-speech translation methods predominantly employ speech tokens as intermediate representations. However, a single speech token is semantically sparse, so multiple tokens are generally needed to express a complete semantic unit. To address this limitation, we introduce a multi-token prediction (MTP) loss into speech-to-unit translation (S2UT) models, enabling the model to predict multiple subsequent tokens at each position, thereby capturing more complete semantics and enhancing the information density per position. Initial MTP implementations apply the loss at the final layer, which improves the output representations but initiates information enrichment too late. We hypothesize that moving the information enrichment process to intermediate layers can achieve earlier and more effective enhancement of the hidden representations. Consequently, we propose the MTP-S2UT loss, which applies the MTP loss to the hidden representations where the CTC loss is computed. Experiments demonstrate that all MTP loss variants consistently improve the quality of S2UT translation, with MTP-S2UT achieving the best performance.
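To make the objective concrete, below is a minimal PyTorch sketch of a multi-token prediction loss of the kind the abstract describes: from the hidden state at each position, separate heads predict the next k unit tokens. The head count `k`, the per-offset linear heads, and all names (`MTPHead`, `hidden_mid`, `unit_targets`) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MTPHead(nn.Module):
    """Multi-token prediction: from the hidden state at position t,
    predict tokens t+1 ... t+k, one linear head per future offset.

    Illustrative sketch only; the paper's head architecture may differ.
    """

    def __init__(self, hidden_dim: int, vocab_size: int, k: int = 4):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, vocab_size) for _ in range(k)]
        )

    def forward(self, hidden: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # hidden:  (batch, seq_len, hidden_dim), taken from some decoder layer
        # targets: (batch, seq_len) gold unit tokens, aligned with `hidden`
        loss, used = hidden.new_zeros(()), 0
        for offset, head in enumerate(self.heads, start=1):
            if hidden.size(1) <= offset:
                break  # sequence too short for this offset
            logits = head(hidden[:, :-offset])  # position t predicts t+offset
            labels = targets[:, offset:]
            loss = loss + F.cross_entropy(
                logits.reshape(-1, logits.size(-1)), labels.reshape(-1)
            )
            used += 1
        return loss / max(used, 1)  # average over the offsets actually used
```

Per the abstract, MTP-S2UT would attach this loss at the same intermediate layer whose hidden states feed the CTC loss, rather than only at the final layer, e.g. a combined objective along the lines of `total_loss = s2ut_loss + lambda_ctc * ctc_loss + lambda_mtp * mtp_head(hidden_mid, unit_targets)`, where `hidden_mid` comes from the CTC layer and the weighting coefficients are placeholders.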
Similar Papers
VocalNet-M2: Advancing Low-Latency Spoken Language Modeling via Integrated Multi-Codebook Tokenization and Multi-Token Prediction
Computation and Language
Speeds up talking computers so they respond with less delay.
FastMTP: Accelerating LLM Inference with Enhanced Multi-Token Prediction
Machine Learning (CS)
Makes AI write much faster without mistakes.
Predicting the Order of Upcoming Tokens Improves Language Modeling
Machine Learning (CS)
Teaches computers to guess words better.