Enhancing BERT Fine-Tuning for Sentiment Analysis in Lower-Resourced Languages

Published: December 1, 2025 | arXiv ID: 2512.01460v1

By: Jozef Kubík, Marek Šuppa, Martin Takáč

Potential Business Impact:

Adapts sentiment-analysis models to new languages with substantially less labeled data.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Limited data for low-resource languages typically yields weaker language models (LMs). Since pre-training is compute-intensive, it is more pragmatic to target improvements during fine-tuning. In this work, we examine the use of Active Learning (AL) methods augmented by structured data selection strategies, which we term 'Active Learning schedulers', to boost the fine-tuning process with a limited amount of training data. We connect AL to data clustering and propose an integrated fine-tuning pipeline that systematically combines AL, clustering, and dynamic data selection schedulers to enhance model performance. Experiments in Slovak, Maltese, Icelandic, and Turkish show that using clustering during the fine-tuning phase together with AL scheduling can simultaneously produce annotation savings of up to 30% and performance improvements of up to four F1 points, while also providing better fine-tuning stability.
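To make the idea concrete, the core of a cluster-aware active learning round can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the authors' pipeline: the embedding and uncertainty arrays stand in for model-derived [CLS] embeddings and predictive entropy, and the budget_schedule helper is a hypothetical example of an "AL scheduler" that varies how many examples get annotated per round.

```python
# Minimal sketch (not the paper's code) of one cluster-aware active learning round:
# cluster the unlabeled pool, then pick the most uncertain example per cluster so
# the newly annotated batch stays both informative and diverse.
import numpy as np
from sklearn.cluster import KMeans

def select_batch(embeddings: np.ndarray,
                 uncertainties: np.ndarray,
                 n_clusters: int) -> np.ndarray:
    """Return indices of one maximally uncertain example from each cluster."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embeddings)
    picked = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        if members.size:
            picked.append(members[np.argmax(uncertainties[members])])
    return np.array(picked)

# Hypothetical scheduler: grow the per-round annotation budget over rounds.
def budget_schedule(round_idx: int, base: int = 8) -> int:
    return base * (round_idx + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pool_emb = rng.normal(size=(500, 32))   # stand-in for sentence embeddings
    pool_unc = rng.random(500)              # stand-in for predictive entropy
    for r in range(3):
        idx = select_batch(pool_emb, pool_unc, n_clusters=budget_schedule(r))
        print(f"round {r}: annotate {len(idx)} examples, e.g. {idx[:5]}")
```

In a real fine-tuning loop, the selected indices would be sent for annotation and the model retrained before recomputing embeddings and uncertainties for the next round; the scheduler determines how the selection strategy or budget changes across rounds.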

Page Count
13 pages

Category
Computer Science:
Computation and Language