Source-primed Multi-turn Conversation Helps Large Language Models Translate Documents
By: Hanxu Hu, Jannis Vamvas, Rico Sennrich
Potential Business Impact:
Translates whole documents better by remembering earlier parts.
LLMs have paved the way for truly simple document-level machine translation, but challenges such as omission errors remain. In this paper, we study a simple method for document-level machine translation that leverages previous context in a multi-turn conversational manner. Specifically, by decomposing a document into segments and translating them iteratively while keeping previous turns in the conversation, this method produces coherent translations without additional training and can fully reuse the KV cache of previous turns, minimizing computational overhead. We further propose a "source-primed" method that provides the whole source document before the multi-turn translation begins. We empirically show that this multi-turn method outperforms both translating the entire document in a single turn and translating each segment independently, according to multiple automatic metrics and across representative LLMs, establishing a strong baseline for document-level translation with LLMs.
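The procedure in the abstract maps naturally onto a chat API. Below is a minimal sketch, assuming an OpenAI-compatible chat endpoint and simple paragraph-based segmentation; the prompt wording, model name, and function names are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of source-primed multi-turn document translation.
# Assumptions (not from the paper): an OpenAI-compatible chat API,
# paragraph-based segmentation, and illustrative prompt wording.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def translate_document(document: str, src_lang: str, tgt_lang: str,
                       model: str = "gpt-4o-mini") -> list[str]:
    """Translate a document segment by segment in one growing conversation.

    The first user turn "primes" the model with the full source document;
    each later turn requests the next segment, so all earlier turns stay
    in context and (server-side) their KV cache can be reused.
    """
    segments = [s for s in document.split("\n\n") if s.strip()]

    # Source priming: show the whole document before translating anything.
    messages = [
        {"role": "system",
         "content": f"You are a professional {src_lang}-to-{tgt_lang} translator."},
        {"role": "user",
         "content": f"Here is the full source document for context:\n\n{document}"},
        {"role": "assistant",
         "content": "Understood. Send the segments to translate one by one."},
    ]

    translations = []
    for segment in segments:
        messages.append({"role": "user",
                         "content": f"Translate this segment into {tgt_lang}:\n\n{segment}"})
        response = client.chat.completions.create(model=model, messages=messages)
        translation = response.choices[0].message.content
        # Keep the model's answer in the history so later segments are
        # translated coherently with earlier ones.
        messages.append({"role": "assistant", "content": translation})
        translations.append(translation)
    return translations
```

The key design point is that the conversation history only ever grows by appending turns, so an inference server can reuse the KV cache of earlier turns instead of re-encoding them for every segment.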
Similar Papers
Improving LLM-based Document-level Machine Translation with Multi-Knowledge Fusion
Computation and Language
Improves computer translation by using summaries and key words.
Multilingual Contextualization of Large Language Models for Document-Level Machine Translation
Computation and Language
Translates whole books, not just sentences.
Contextual Cues in Machine Translation: Investigating the Potential of Multi-Source Input Strategies in LLMs and NMT Systems
Computation and Language
Improves computer translations by adding extra clues.