Redefining Machine Simultaneous Interpretation: From Incremental Translation to Human-Like Strategies
By: Qianen Zhang, Zeyu Yang, Satoshi Nakamura
Potential Business Impact:
Speeds up live translation by restructuring, trimming, and simplifying sentences in real time.
Simultaneous Machine Translation (SiMT) requires high-quality translations under strict real-time constraints, which traditional policies limited to READ/WRITE actions cannot fully address. We extend the action space of SiMT with four adaptive actions: Sentence_Cut, Drop, Partial_Summarization, and Pronominalization, which enable real-time restructuring, omission, and simplification while preserving semantic fidelity. We implement these actions within a large language model (LLM) framework and construct training references through action-aware prompting. To evaluate both quality and word-level monotonicity, we further develop a latency-aware TTS pipeline that maps textual outputs to speech with realistic timing. Experiments on the ACL60/60 English-Chinese, English-German, and English-Japanese benchmarks show that our framework consistently improves semantic metrics and achieves lower delay than reference translations and salami-based baselines. Notably, combining Drop and Sentence_Cut yields consistent improvements in the balance between fluency and latency. These results demonstrate that enriching the action space of LLM-based SiMT offers a promising direction for bridging the gap between human and machine interpretation.
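To make the extended action space concrete, here is a minimal, purely illustrative sketch in Python. The paper trains an LLM-based policy; the rule-based `toy_policy` below (its triggers, thresholds, and the `seen_entities` argument are all hypothetical, not from the paper) only shows how a decision step could select among READ/WRITE and the four added actions given a buffered source prefix.

```python
from enum import Enum, auto

class Action(Enum):
    # Standard SiMT policy actions
    READ = auto()   # consume one more source token
    WRITE = auto()  # emit one target token
    # Extended adaptive actions from the abstract
    SENTENCE_CUT = auto()           # close the sentence early and restructure
    DROP = auto()                   # omit a low-information source token
    PARTIAL_SUMMARIZATION = auto()  # condense an overlong buffered span
    PRONOMINALIZATION = auto()      # replace a repeated mention with a pronoun

def toy_policy(buffer, seen_entities, max_buffer=4):
    """Hypothetical rule-based stand-in for the paper's learned policy:
    pick an action from the extended space given the source buffer."""
    if not buffer:
        return Action.READ
    head = buffer[0]
    if head in {"um", "uh"}:               # disfluency -> safe to omit
        return Action.DROP
    if head in seen_entities:              # repeated entity -> pronoun
        return Action.PRONOMINALIZATION
    if head in {"and", "but"} and len(buffer) >= max_buffer:
        return Action.SENTENCE_CUT         # split at a clause boundary
    if len(buffer) > max_buffer:
        return Action.PARTIAL_SUMMARIZATION
    if len(buffer) >= 2:
        return Action.WRITE
    return Action.READ
```

The design point is that actions like Drop and Sentence_Cut let the policy shorten or restructure output under latency pressure instead of being forced to translate every token monotonically.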
Similar Papers
LLMs Can Achieve High-quality Simultaneous Machine Translation as Efficiently as Offline
Computation and Language
Lets computers translate speech as it happens.
SimulPL: Aligning Human Preferences in Simultaneous Machine Translation
Computation and Language
Makes live translations faster and better.
SeqPO-SiMT: Sequential Policy Optimization for Simultaneous Machine Translation
Computation and Language
Translates languages faster and better, like a human.