From Scratch to Fine-Tuned: A Comparative Study of Transformer Training Strategies for Legal Machine Translation
By: Amit Barman, Atanu Mandal, Sudip Kumar Naskar
Potential Business Impact:
Translates legal documents into Hindi for easier understanding.
In multilingual nations like India, access to legal information is often hindered by language barriers, as much of the legal and judicial documentation remains in English. Legal Machine Translation (L-MT) offers a scalable solution to this challenge by enabling accurate and accessible translations of legal documents. This paper presents our work for the JUST-NLP 2025 Legal MT shared task, focusing on English-Hindi translation using Transformer-based approaches. We experiment with two complementary strategies: fine-tuning a pre-trained OPUS-MT model for domain-specific adaptation, and training a Transformer model from scratch on the provided legal corpus. Performance is evaluated using standard MT metrics, including SacreBLEU, chrF++, TER, ROUGE, BERTScore, METEOR, and COMET. Our fine-tuned OPUS-MT model achieves a SacreBLEU score of 46.03, significantly outperforming both the baseline and the from-scratch model. The results highlight the effectiveness of domain adaptation in enhancing translation quality and demonstrate the potential of L-MT systems to improve access to justice and legal transparency in multilingual contexts.
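The fine-tuning strategy described in the abstract can be illustrated with a minimal sketch using the Hugging Face Transformers library. The specific checkpoint Helsinki-NLP/opus-mt-en-hi, the toy English-Hindi sentence pairs, and the hyperparameters below are illustrative assumptions, not the shared-task corpus or the authors' actual training setup.

```python
# Minimal sketch of domain-adaptive fine-tuning of an OPUS-MT model for
# English-Hindi, assuming the "Helsinki-NLP/opus-mt-en-hi" checkpoint and a
# toy in-memory corpus standing in for the JUST-NLP legal data.
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
    DataCollatorForSeq2Seq,
)
from datasets import Dataset
import sacrebleu

model_name = "Helsinki-NLP/opus-mt-en-hi"  # assumed OPUS-MT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Toy English-Hindi legal sentence pairs (placeholders for the shared-task corpus).
pairs = [
    {"en": "The appeal is dismissed.", "hi": "अपील खारिज की जाती है।"},
    {"en": "The petitioner shall appear before the court.",
     "hi": "याचिकाकर्ता न्यायालय के समक्ष उपस्थित होगा।"},
]
dataset = Dataset.from_list(pairs)

def preprocess(batch):
    # Tokenize source sentences; target Hindi token ids become the labels.
    model_inputs = tokenizer(batch["en"], truncation=True, max_length=256)
    labels = tokenizer(text_target=batch["hi"], truncation=True, max_length=256)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True, remove_columns=["en", "hi"])

args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-en-hi-legal",
    per_device_train_batch_size=2,
    num_train_epochs=1,        # illustrative only; tune on the real corpus
    learning_rate=2e-5,
    predict_with_generate=True,
    logging_steps=1,
    report_to="none",
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()

# Score one generated translation with SacreBLEU against a reference
# (a stand-in for the held-out legal test set).
inputs = tokenizer("The appeal is dismissed.", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)
hypothesis = tokenizer.decode(output_ids[0], skip_special_tokens=True)
score = sacrebleu.corpus_bleu([hypothesis], [["अपील खारिज की जाती है।"]])
print(f"SacreBLEU: {score.score:.2f}")
```

In the paper's setting, the same pipeline would be run over the full legal corpus and complemented by the other reported metrics (chrF++, TER, ROUGE, BERTScore, METEOR, COMET); the from-scratch baseline instead trains a randomly initialized Transformer on the same data.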
Similar Papers
Transformer-Based Low-Resource Language Translation: A Study on Standard Bengali to Sylheti
Computation and Language
Translates rare languages better than big AI.
Exploring Parameter-Efficient Fine-Tuning and Backtranslation for the WMT 25 General Translation Task
Computation and Language
Improves Japanese to English translation quality.
Text to Trust: Evaluating Fine-Tuning and LoRA Trade-offs in Language Models for Unfair Terms of Service Detection
Computation and Language
Helps computers find unfair contract rules faster.