SeqPO-SiMT: Sequential Policy Optimization for Simultaneous Machine Translation
By: Ting Xu, Zhichao Huang, Jiankai Sun, and more
Potential Business Impact:
Translates languages faster and better, like a human interpreter.
We present Sequential Policy Optimization for Simultaneous Machine Translation (SeqPO-SiMT), a new policy optimization framework that formulates the simultaneous machine translation (SiMT) task as a sequential decision-making problem, incorporating a tailored reward to enhance translation quality while reducing latency. In contrast to popular Reinforcement Learning from Human Feedback (RLHF) methods, such as PPO and DPO, which are typically applied to single-step tasks, SeqPO-SiMT effectively tackles the multi-step SiMT task. This intuitive framework allows SiMT LLMs to simulate and refine the SiMT process using a tailored reward. We conduct experiments on six datasets from diverse domains for En-to-Zh and Zh-to-En SiMT tasks, demonstrating that SeqPO-SiMT consistently achieves significantly higher translation quality with lower latency. In particular, SeqPO-SiMT outperforms the supervised fine-tuning (SFT) model by 1.13 points in COMET while reducing Average Lagging by 6.17 on the NEWSTEST2021 En-to-Zh dataset. Although SiMT operates with far less context than offline translation, the SiMT results of SeqPO-SiMT on a 7B LLM surprisingly rival the offline translation quality of high-performing LLMs, including Qwen-2.5-7B-Instruct and LLaMA-3-8B-Instruct.
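The abstract describes a reward that trades translation quality against latency, with latency reported as Average Lagging (AL). The paper's exact reward is not given here, so the following is a minimal sketch under assumptions: `quality` stands in for a COMET-like score, the latency penalty is a linear AL term with a hypothetical weight `lam`, and `average_lagging` follows the standard AL definition (mean number of source tokens the policy lags behind an ideal wait-0 translator). The function names are illustrative, not from the paper.

```python
def average_lagging(read_steps, source_len, target_len):
    """Average Lagging (AL), standard definition.

    read_steps[t-1] = number of source tokens read when emitting
    target token t. gamma is the target/source length ratio; tau is
    the first step at which the full source has been read.
    """
    gamma = target_len / source_len
    total, tau = 0.0, 0
    for t, g in enumerate(read_steps, start=1):
        total += g - (t - 1) / gamma
        tau = t
        if g >= source_len:  # full source consumed; stop averaging
            break
    return total / tau

def simt_reward(quality, read_steps, source_len, target_len, lam=0.1):
    """Hypothetical tailored reward: quality minus a weighted AL penalty.

    A higher `lam` (assumed hyperparameter) pushes the policy toward
    lower latency at some cost in translation quality.
    """
    return quality - lam * average_lagging(read_steps, source_len, target_len)

# A wait-1 policy on a 4-token source/target pair lags by exactly 1 token.
al = average_lagging([1, 2, 3, 4], source_len=4, target_len=4)
r = simt_reward(0.80, [1, 2, 3, 4], source_len=4, target_len=4, lam=0.1)
```

With this shape, the multi-step SiMT episode (read/write decisions per token) can be scored once at the end and used as the return for sequential policy optimization.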
Similar Papers
Redefining Machine Simultaneous Interpretation: From Incremental Translation to Human-Like Strategies
Computation and Language
Translates languages faster by changing sentences.
SimulPL: Aligning Human Preferences in Simultaneous Machine Translation
Computation and Language
Makes live translations faster and better.
LLMs Can Achieve High-quality Simultaneous Machine Translation as Efficiently as Offline
Computation and Language
Lets computers translate speech as it happens.