Enhancing BERT Fine-Tuning for Sentiment Analysis in Lower-Resourced Languages
By: Jozef Kubík, Marek Šuppa, Martin Takáč
Potential Business Impact:
Teaches computers new languages with less data.
Limited data for low-resource languages typically yields weaker language models (LMs). Since pre-training is compute-intensive, it is more pragmatic to target improvements during fine-tuning. In this work, we examine the use of Active Learning (AL) methods augmented by structured data selection strategies, which we term 'Active Learning schedulers', to boost the fine-tuning process when only a limited amount of training data is available. We connect AL to data clustering and propose an integrated fine-tuning pipeline that systematically combines AL, clustering, and dynamic data selection schedulers to enhance model performance. Experiments on Slovak, Maltese, Icelandic, and Turkish show that using clustering during the fine-tuning phase together with AL scheduling can simultaneously produce annotation savings of up to 30% and performance improvements of up to four F1 points, while also providing better fine-tuning stability.
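To make the pipeline described above concrete, below is a minimal sketch of an Active Learning loop that mixes uncertainty sampling with cluster-based diversity selection, where a round-dependent "scheduler" shifts the mix over time. The abstract does not specify the exact scheduler, uncertainty measure, or clustering setup, so all function names (`al_scheduler`, `select_batch`, `model_uncertainty`), the linear scheduling rule, and the use of KMeans are illustrative assumptions rather than the authors' method.

```python
"""Sketch: AL loop with clustering and a round-dependent selection scheduler.

Assumptions (not from the paper): linear scheduler, KMeans clustering,
random-stub uncertainty scores standing in for a BERT classifier's entropy.
"""
import numpy as np
from sklearn.cluster import KMeans


def model_uncertainty(pool_embeddings, rng):
    # Placeholder: a real pipeline would use e.g. prediction entropy of the
    # current fine-tuned BERT classifier on the unlabeled pool.
    return rng.random(len(pool_embeddings))


def al_scheduler(round_idx, total_rounds):
    # Illustrative schedule: start diversity-heavy (cluster coverage),
    # shift toward pure uncertainty sampling in later rounds.
    return round_idx / max(total_rounds - 1, 1)  # weight on uncertainty


def select_batch(embeddings, unlabeled_idx, batch_size, uncertainty_weight,
                 n_clusters, rng):
    pool = embeddings[unlabeled_idx]
    scores = model_uncertainty(pool, rng)
    order = np.argsort(-scores)  # most uncertain first

    n_uncertain = int(round(uncertainty_weight * batch_size))
    chosen = list(order[:n_uncertain])

    # Diversity part: take the most uncertain example from each cluster.
    if len(chosen) < batch_size:
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=0).fit_predict(pool)
        for c in range(n_clusters):
            if len(chosen) >= batch_size:
                break
            members = [m for m in np.where(labels == c)[0] if m not in chosen]
            if members:
                chosen.append(max(members, key=lambda m: scores[m]))

    # Fill any remaining slots with the next most uncertain examples.
    for m in order:
        if len(chosen) >= batch_size:
            break
        if m not in chosen:
            chosen.append(m)

    return [unlabeled_idx[i] for i in chosen[:batch_size]]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(1000, 32))  # stand-in for sentence embeddings
    unlabeled, labeled = list(range(len(embeddings))), []

    total_rounds, batch_size = 5, 50
    for r in range(total_rounds):
        w = al_scheduler(r, total_rounds)
        batch = select_batch(embeddings, unlabeled, batch_size, w,
                             n_clusters=10, rng=rng)
        labeled.extend(batch)
        unlabeled = [i for i in unlabeled if i not in batch]
        # A real pipeline would fine-tune the BERT classifier on `labeled` here
        # and recompute uncertainty scores before the next round.
        print(f"round {r}: uncertainty weight {w:.2f}, "
              f"labeled {len(labeled)} examples")
```

In this sketch the scheduler simply interpolates from cluster-driven diversity toward uncertainty sampling across rounds; the paper's actual schedulers may differ, and the annotation-saving and F1 figures reported in the abstract come from the authors' full pipeline, not from this toy loop.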
Similar Papers
State of the Art in Text Classification for South Slavic Languages: Fine-Tuning or Prompting?
Computation and Language
New AI understands many languages for tasks.
Active Learning via Vision-Language Model Adaptation with Open Data
CV and Pattern Recognition
Makes AI learn better with less labeled data.
LAUD: Integrating Large Language Models with Active Learning for Unlabeled Data
Machine Learning (CS)
Teaches computers to learn from less data.