Towards Inclusive NLP: Assessing Compressed Multilingual Transformers across Diverse Language Benchmarks
By: Maitha Alshehhi, Ahmed Sharshar, Mohsen Guizani
Potential Business Impact:
Makes AI understand many languages, even rare ones.
Although LLMs have attained significant success in high-resource languages, their capacity in low-resource linguistic environments like Kannada and Arabic is not yet fully understood. This work benchmarks the performance of multilingual and monolingual Large Language Models (LLMs) across Arabic, English, and Indic languages, with particular emphasis on the effects of model compression strategies such as pruning and quantization. Findings show significant performance differences driven by linguistic diversity and resource availability on SOTA LLMs such as BLOOMZ, AceGPT, Jais, LLaMA-2, XGLM, and AraGPT2. We find that multilingual versions of the models outperform their language-specific counterparts across the board, indicating substantial cross-lingual transfer benefits. Quantization (4-bit and 8-bit) is effective in maintaining model accuracy while promoting efficiency, but aggressive pruning significantly compromises performance, especially in larger models. Our findings pinpoint key strategies for constructing scalable and fair multilingual NLP solutions and underscore the need for interventions to address hallucination and generalization errors in low-resource settings.
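As a concrete illustration of the compression setting the abstract describes, the sketch below shows how 4-bit or 8-bit quantization could be applied when loading one of the evaluated model families (BLOOMZ here) using the Hugging Face transformers and bitsandbytes libraries. The paper does not disclose its exact tooling, so the checkpoint name, quantization parameters, and prompt are illustrative assumptions, not the authors' setup.

# Minimal sketch: loading a multilingual LLM with 4-bit (NF4) or 8-bit quantization.
# Assumes the Hugging Face `transformers` + `bitsandbytes` stack; the checkpoint
# (bigscience/bloomz-560m) and settings are illustrative, not the paper's configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_NAME = "bigscience/bloomz-560m"  # hypothetical stand-in for the benchmarked BLOOMZ variant

def load_quantized(model_name: str, bits: int = 4):
    """Load a causal LM in 4-bit (NF4) or 8-bit precision."""
    if bits == 4:
        quant_config = BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_quant_type="nf4",
            bnb_4bit_compute_dtype=torch.float16,
        )
    elif bits == 8:
        quant_config = BitsAndBytesConfig(load_in_8bit=True)
    else:
        raise ValueError("bits must be 4 or 8")

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        quantization_config=quant_config,
        device_map="auto",  # place layers on available GPU/CPU memory automatically
    )
    return model, tokenizer

if __name__ == "__main__":
    model, tokenizer = load_quantized(MODEL_NAME, bits=4)
    # Illustrative low-resource-language prompt (Arabic: "What is the capital of the UAE?")
    prompt = "ما هي عاصمة الإمارات؟"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

Accuracy under compression could then be compared by running the same evaluation prompts against the full-precision, 8-bit, and 4-bit variants of each model.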
Similar Papers
Utilizing Multilingual Encoders to Improve Large Language Models for Low-Resource Languages
Computation and Language
Helps computers understand many languages better.
LLM Compression: How Far Can We Go in Balancing Size and Performance?
Computation and Language
Makes smart computer programs run faster and smaller.