Iterative Layer Pruning for Efficient Translation Inference
By: Yasmin Moslem, Muhammad Hazim Al Farouq, John D. Kelleher
Potential Business Impact:
Makes translation programs smaller and faster.
Large language models (LLMs) have transformed many areas of natural language processing, including machine translation. However, efficient deployment of LLMs remains challenging due to their intensive computational requirements. In this paper, we address this challenge and present our submissions to the Model Compression track at the Conference on Machine Translation (WMT 2025). In our experiments, we investigate iterative layer pruning guided by layer importance analysis. We evaluate this method using the Aya-Expanse-8B model for translation from Czech to German and from English to Egyptian Arabic. Our approach achieves substantial reductions in model size and inference time, while maintaining the translation quality of the baseline models.
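The abstract does not spell out the importance criterion, so below is a minimal sketch of what iterative layer pruning guided by layer importance could look like with Hugging Face transformers. It assumes a cosine-similarity heuristic (a decoder layer whose output barely differs from its input scores as unimportant); the model id, the calibration sentence, the number of pruning steps, and the quality check left to external evaluation are illustrative assumptions, not the authors' exact setup.

```python
# Hypothetical sketch of iterative layer pruning guided by layer importance.
# The cosine-similarity criterion is an assumption for illustration; the
# paper's exact importance analysis and stopping rule may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "CohereForAI/aya-expanse-8b"  # model family used in the paper

def layer_importance(model, input_ids):
    """Score each decoder layer by how much it changes the hidden state:
    a layer whose output is nearly identical to its input (cosine
    similarity close to 1) is a candidate for removal."""
    with torch.no_grad():
        outputs = model(input_ids, output_hidden_states=True)
    hidden = outputs.hidden_states  # tuple: embeddings + one entry per layer
    scores = []
    for i in range(len(hidden) - 1):
        cos = torch.nn.functional.cosine_similarity(
            hidden[i], hidden[i + 1], dim=-1
        )
        scores.append(1.0 - cos.mean().item())  # low score = less important
    return scores

def prune_one_layer(model, calib_ids):
    """Drop the single least-important decoder layer in place."""
    scores = layer_importance(model, calib_ids)
    drop = min(range(len(scores)), key=scores.__getitem__)
    del model.model.layers[drop]  # nn.ModuleList supports deletion
    model.config.num_hidden_layers = len(model.model.layers)
    return drop

if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_NAME, torch_dtype=torch.bfloat16
    )
    # Hypothetical calibration sample; in practice, held-out source
    # sentences from the translation task would be used.
    calib = tok("This is a calibration sentence.", return_tensors="pt").input_ids
    # Iterate: re-score after each removal, and stop when translation
    # quality (measured externally) drifts from the baseline.
    for step in range(8):
        dropped = prune_one_layer(model, calib)
        print(f"step {step}: removed layer {dropped}")
```

Re-scoring after every removal is what makes the procedure iterative rather than one-shot: deleting a layer shifts the importance of the remaining ones, so the ranking is recomputed before each pruning step.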
Similar Papers
Iterative Layer-wise Distillation for Efficient Compression of Large Language Models
Computation and Language
Makes big AI models smaller but still smart.
IG-Pruning: Input-Guided Block Pruning for Large Language Models
Computation and Language
Makes smart computer programs run faster.
E³-Pruner: Towards Efficient, Economical, and Effective Layer Pruning for Large Language Models
Computation and Language
Makes big AI models smaller and faster.