LatinX: Aligning a Multilingual TTS Model with Direct Preference Optimization
By: Luis Felipe Chary, Miguel Arjona Ramirez
Potential Business Impact:
Keeps your voice the same when translating languages.
We present LatinX, a multilingual text-to-speech (TTS) model for cascaded speech-to-speech translation that preserves the source speaker's identity across languages. LatinX is a 12-layer decoder-only Transformer trained in three stages: (i) pre-training for text-to-audio mapping, (ii) supervised fine-tuning for zero-shot voice cloning, and (iii) alignment with Direct Preference Optimization (DPO) using automatically labeled pairs based on Word Error Rate (WER) and speaker-similarity metrics. Trained on English and Romance languages with emphasis on Portuguese, LatinX with DPO consistently reduces WER and improves objective speaker similarity over the fine-tuned baseline. Human evaluations further indicate stronger perceived speaker similarity than the XTTSv2 baseline, revealing gaps between objective and subjective measures. We provide cross-lingual analyses and discuss balanced preference signals and lower-latency architectures as future work.
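The abstract's stage (iii) builds DPO preference pairs automatically from WER and speaker similarity. A minimal sketch of what such a labeling step might look like is below; the `Candidate` fields, the combined score, and the 50/50 weighting are illustrative assumptions, not the paper's exact scheme.

```python
# Hypothetical sketch of automatic preference labeling for DPO pairs,
# combining WER (lower is better) and speaker similarity (higher is better).
# The scoring rule and weight are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class Candidate:
    audio_id: str
    wer: float          # word error rate of an ASR transcript vs. the target text
    speaker_sim: float  # cosine similarity to the source-speaker embedding


def label_preference(a: Candidate, b: Candidate, wer_weight: float = 0.5):
    """Return (chosen, rejected) by a combined score of intelligibility and identity."""
    def score(c: Candidate) -> float:
        # Reward low WER (clipped to [0, 1]) and high speaker similarity.
        return wer_weight * (1.0 - min(c.wer, 1.0)) + (1.0 - wer_weight) * c.speaker_sim
    return (a, b) if score(a) >= score(b) else (b, a)
```

The (chosen, rejected) pairs produced this way would then feed a standard DPO objective; the abstract's "balanced preference signals" future work suggests the relative weighting of the two metrics is itself a design choice.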
Similar Papers
Implicit Cross-Lingual Rewarding for Efficient Multilingual Preference Alignment
Computation and Language
Teaches AI to understand more languages from English.
Advancing Zero-shot Text-to-Speech Intelligibility across Diverse Domains via Preference Alignment
Sound
Makes computer voices speak clearly, even tricky words.
DPO-Tuned Large Language Models for Segmentation in Simultaneous Speech Translation
Computation and Language
Makes real-time translation sound more natural.