Can maiBERT Speak for Maithili?
By: Sumit Yadav, Raju Kumar Yadav, Utsav Maskey, and more
Potential Business Impact:
Helps computers understand Maithili, a widely spoken but digitally under-resourced language.
Natural Language Understanding (NLU) for low-resource languages remains a major challenge in NLP due to the scarcity of high-quality data and language-specific models. Maithili, despite being spoken by millions, lacks adequate computational resources, limiting its inclusion in digital and AI-driven applications. To address this gap, we introduce maiBERT, a BERT-based language model pre-trained specifically for Maithili using the Masked Language Modeling (MLM) technique. Our model is trained on a newly constructed Maithili corpus and evaluated on a news classification task. In our experiments, maiBERT achieved an accuracy of 87.02%, outperforming existing regional models such as NepBERTa and HindiBERT, with a 0.13% gain in overall accuracy and 5-7% improvements across individual classes. We have open-sourced maiBERT on Hugging Face, enabling further fine-tuning for downstream tasks such as sentiment analysis and Named Entity Recognition (NER).
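Because maiBERT is pre-trained with masked language modeling and released on Hugging Face, a quick way to try it is masked-token prediction. Below is a minimal sketch assuming the `transformers` library; the repository id "your-org/maiBERT" is a placeholder, since the page does not state the exact model id under which the checkpoint was published.

```python
# A minimal sketch, assuming the Hugging Face `transformers` library is installed.
# "your-org/maiBERT" is a hypothetical repository id; substitute the released
# maiBERT checkpoint name.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="your-org/maiBERT")

# Because maiBERT was pre-trained with Masked Language Modeling, it can
# rank candidate tokens for the [MASK] position in a Maithili sentence.
for prediction in fill_mask("मैथिली एक [MASK] भाषा अछि।"):
    print(prediction["token_str"], round(prediction["score"], 3))
```

For the downstream tasks mentioned in the abstract, the same checkpoint could instead be loaded with `AutoModelForSequenceClassification` (news or sentiment classification) or `AutoModelForTokenClassification` (NER) and fine-tuned on labeled Maithili data.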
Similar Papers
SentiMaithili: A Benchmark Dataset for Sentiment and Reason Generation for the Low-Resource Maithili Language
Computation and Language
Helps computers understand sentiment expressed in Maithili text.
Transformer-Based Low-Resource Language Translation: A Study on Standard Bengali to Sylheti
Computation and Language
Translates low-resource languages like Sylheti better than large general-purpose models.