Fluent Alignment with Disfluent Judges: Post-training for Lower-resource Languages
By: David Samuel, Lilja Øvrelid, Erik Velldal, and more
Potential Business Impact:
Teaches computers to speak less common languages well.
We propose a post-training method for lower-resource languages that preserves the fluency of language models even when they are aligned by disfluent reward models. Preference optimization is by now a well-researched topic, but previous work has mostly addressed models for English and Chinese. Lower-resource languages lack both datasets written by native speakers and language models capable of generating fluent synthetic data. In this work, we therefore focus on developing a fluent preference-aligned language model without any instruction-tuning data in the target language. Our approach uses an on-policy training method, which we compare with two common alternatives: supervised finetuning on machine-translated data and multilingual finetuning. We conduct a case study on Norwegian Bokmål and evaluate fluency through native-speaker assessments. The results show that the on-policy aspect is crucial: it outperforms the alternatives without relying on any hard-to-obtain data.
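The abstract does not spell out the exact preference objective, so the following is only a minimal, hypothetical Python sketch of the on-policy idea it describes: sample several responses from the current policy, let a (possibly disfluent) reward model rank them, and optimize a DPO-style preference loss against a frozen reference model. The names `dpo_loss`, `collect_on_policy_pairs`, `toy_generate`, and `toy_score` are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of on-policy preference alignment with an external
# (possibly disfluent) reward model. All names are placeholders.
import random
import torch
import torch.nn.functional as F


def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO-style preference loss over (chosen, rejected) response pairs.

    Each argument is a tensor of summed log-probabilities of a full
    response under the trainable policy or the frozen reference model.
    """
    policy_margin = policy_chosen_logp - policy_rejected_logp
    ref_margin = ref_chosen_logp - ref_rejected_logp
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()


def collect_on_policy_pairs(generate, score, prompts, num_samples=4):
    """Build preference pairs from the *current* policy's own samples.

    `generate(prompt)` returns a candidate response string and
    `score(prompt, response)` is the reward model's scalar judgment;
    both are placeholder callables standing in for real models.
    """
    pairs = []
    for prompt in prompts:
        candidates = [generate(prompt) for _ in range(num_samples)]
        ranked = sorted(candidates, key=lambda c: score(prompt, c))
        pairs.append((prompt, ranked[-1], ranked[0]))  # (best, worst)
    return pairs


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    def toy_generate(prompt):
        return prompt + " " + random.choice(["ja", "nei", "kanskje"])

    def toy_score(prompt, response):
        return float(len(response))  # dummy reward: longer is "better"

    pairs = collect_on_policy_pairs(toy_generate, toy_score,
                                    ["Hva er hovedstaden i Norge?"])
    print(pairs)

    # Dummy log-probabilities illustrating one preference-loss evaluation.
    loss = dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                    torch.tensor([-13.0]), torch.tensor([-14.0]))
    print(loss.item())
```

The key property this sketch tries to convey is that the candidate responses come from the policy being trained, so the model is only ever pushed toward its own fluent outputs rather than toward translated or judge-written text.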
Similar Papers
Improving LLMs for Machine Translation Using Synthetic Preference Data
Computation and Language
Makes computer translations much better and more accurate.
Alignment as Distribution Learning: Your Preference Model is Explicitly a Language Model
Machine Learning (CS)
Makes AI better at following instructions.
Multilingual MFA: Forced Alignment on Low-Resource Related Languages
Computation and Language
Helps computers understand new languages faster.