Comparative Study of Pre-Trained BERT and Large Language Models for Code-Mixed Named Entity Recognition
By: Mayur Shirke, Amey Shembade, Pavan Thorat, and more
Potential Business Impact:
Helps computers understand mixed Hindi-English text.
Named Entity Recognition (NER) in code-mixed text, particularly Hindi-English (Hinglish), presents unique challenges due to informal structure, transliteration, and frequent language switching. This study conducts a comparative evaluation of code-mixed fine-tuned models and non-code-mixed multilingual models, along with zero-shot generative large language models (LLMs). Specifically, we evaluate HingBERT, HingMBERT, and HingRoBERTa (trained on code-mixed data), and BERT Base Cased, IndicBERT, RoBERTa, and MuRIL (trained on non-code-mixed multilingual data). We also assess the performance of Google Gemini in a zero-shot setting using a modified version of the dataset with NER tags removed. All models are tested on a benchmark Hinglish NER dataset using Precision, Recall, and F1-score. Results show that code-mixed models, particularly HingRoBERTa and HingBERT-based fine-tuned models, outperform others, including closed-source LLMs like Google Gemini, due to domain-specific pretraining. Non-code-mixed models perform reasonably well but show limited adaptability. Notably, Google Gemini exhibits competitive zero-shot performance, underlining the generalization strength of modern LLMs. This study provides key insights into the effectiveness of specialized versus generalized models for code-mixed NER tasks.
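To make the evaluation setup concrete, the sketch below shows how fine-tuned token-classification outputs are typically scored for NER: predicted BIO tags are compared against gold tags with entity-level Precision, Recall, and F1 (here via the seqeval library). The Hinglish sentences and tag sequences are illustrative assumptions, not the paper's benchmark dataset or its released models.

```python
# Minimal sketch of entity-level NER scoring, assuming BIO-tagged outputs from a
# fine-tuned model. The toy Hinglish sentences and tags below are illustrative only.
from seqeval.metrics import precision_score, recall_score, f1_score

# Gold tags for two toy code-mixed sentences:
#   "kal Mumbai mein Rahul se milna hai"
#   "Tata Motors ka plant Pune mein hai"
gold = [
    ["O", "B-LOC", "O", "B-PER", "O", "O", "O"],
    ["B-ORG", "I-ORG", "O", "O", "B-LOC", "O", "O"],
]

# Hypothetical model predictions: the first sentence misses the PER entity,
# the second is an exact match.
pred = [
    ["O", "B-LOC", "O", "O", "O", "O", "O"],
    ["B-ORG", "I-ORG", "O", "O", "B-LOC", "O", "O"],
]

# seqeval scores at the entity (span) level, not the token level.
print("Precision:", precision_score(gold, pred))
print("Recall:   ", recall_score(gold, pred))
print("F1-score: ", f1_score(gold, pred))
```

Entity-level scoring of this kind counts a prediction as correct only when both the span boundaries and the entity type match, which is the standard way Precision, Recall, and F1 are reported for NER benchmarks like the one used in this study.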
Similar Papers
Code-Mix Sentiment Analysis on Hinglish Tweets
Computation and Language
Helps companies understand what people say online.
Enhancing Hindi NER in Low Context: A Comparative study of Transformer-based models with vs. without Retrieval Augmentation
Computation and Language
Helps computers understand Hindi text better.
Enhancing Multilingual Language Models for Code-Switched Input Data
Computation and Language
Helps computers understand mixed languages in chats.