ACADATA: Parallel Dataset of Academic Data for Machine Translation
By: Iñaki Lacunza, Javier Garcia Gilabert, Francesca De Luca Fornaciari and more
Potential Business Impact:
Helps computers translate science papers better.
We present ACADATA, a high-quality parallel dataset for academic translation that consists of two subsets: ACAD-TRAIN, which contains approximately 1.5 million author-generated paragraph pairs across 96 language directions, and ACAD-BENCH, a curated evaluation set of almost 6,000 translations covering 12 directions. To validate its utility, we fine-tune two Large Language Models (LLMs) on ACAD-TRAIN and benchmark them on ACAD-BENCH against specialized machine-translation systems, general-purpose open-weight LLMs, and several large-scale proprietary models. Experimental results demonstrate that fine-tuning on ACAD-TRAIN improves academic translation quality by +6.1 and +12.4 d-BLEU points on average for 7B and 2B models respectively, while also improving long-context translation in a general domain by up to 24.9% when translating out of English. The top-performing fine-tuned model surpasses the best proprietary and open-weight models in the academic translation domain. By releasing ACAD-TRAIN, ACAD-BENCH, and the fine-tuned models, we provide the community with a valuable resource to advance research in academic-domain and long-context translation.
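The abstract reports gains in d-BLEU, i.e. BLEU computed over whole documents (concatenated segments) rather than individual sentences. Below is a minimal sketch of how such a document-level score can be computed with sacrebleu; the toy documents are placeholders for illustration only, not samples from ACADATA, and this is not claimed to be the authors' exact evaluation script.

```python
# Minimal sketch of document-level BLEU (d-BLEU): each document's segments are
# concatenated into a single string, and corpus BLEU is computed over documents.
# Requires: pip install sacrebleu
import sacrebleu


def d_bleu(system_docs, reference_docs):
    """Score documents (lists of segments) by concatenating segments per document."""
    hyps = [" ".join(doc) for doc in system_docs]
    refs = [" ".join(doc) for doc in reference_docs]
    # corpus_bleu expects a list of hypotheses and a list of reference streams.
    return sacrebleu.corpus_bleu(hyps, [refs]).score


# Toy usage with two tiny "documents" (placeholder text, not ACADATA data):
hyps = [
    ["The model translates academic text.", "It handles long paragraphs."],
    ["Results improve after fine-tuning."],
]
refs = [
    ["The model translates academic text.", "It handles long paragraphs well."],
    ["Results improve after fine-tuning."],
]
print(f"d-BLEU: {d_bleu(hyps, refs):.1f}")
```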
Similar Papers
AFRIDOC-MT: Document-level MT Corpus for African Languages
Computation and Language
Translates African languages better for everyone.
A fully automated and scalable Parallel Data Augmentation for Low Resource Languages using Image and Text Analytics
Computation and Language
Helps computers understand many languages better.