Meta-Pretraining for Zero-Shot Cross-Lingual Named Entity Recognition in Low-Resource Philippine Languages
By: David Demitri Africa, Suchir Salhan, Yuval Weiss and more
Potential Business Impact:
Teaches computers to understand new languages faster.
Named-entity recognition (NER) in low-resource languages is usually tackled by fine-tuning very large multilingual LMs, an option that is often infeasible in memory- or latency-constrained settings. We ask whether small decoder LMs can be pretrained so that they adapt quickly and transfer zero-shot to languages unseen during pretraining. To this end we replace part of the autoregressive objective with first-order model-agnostic meta-learning (MAML). Tagalog and Cebuano are typologically similar yet structurally different in their actor/non-actor voice systems, and hence serve as a challenging test-bed. Across four model sizes (11M-570M), MAML lifts zero-shot micro-F1 by 2-6 pp under head-only tuning and 1-3 pp after full tuning, while cutting convergence time by up to 8%. Gains are largest for single-token person entities that co-occur with the Tagalog case particles si/ni, highlighting the importance of surface anchors.
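The abstract describes interleaving first-order MAML with the usual autoregressive objective during pretraining. The snippet below is a minimal sketch of that idea, not the authors' implementation: it uses a toy causal LM, synthetic token batches, and illustrative hyperparameters (inner_lr, inner_steps, lm_weight) to show one first-order meta-update mixed with a plain language-modeling gradient.

```python
"""Sketch of a first-order MAML (FOMAML) step combined with an
autoregressive LM loss. Toy model and hyperparameters are assumptions
for illustration only."""
import copy
import torch
import torch.nn as nn

class TinyCausalLM(nn.Module):
    # toy decoder: embedding -> GRU -> vocab projection (stand-in for the real LM)
    def __init__(self, vocab=100, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab)

    def forward(self, tokens):
        h, _ = self.rnn(self.emb(tokens))
        return self.head(h)

def lm_loss(model, tokens):
    # next-token prediction: logits at position t predict token t+1
    logits = model(tokens[:, :-1])
    return nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1)
    )

def fomaml_step(model, meta_opt, tasks, lm_batch,
                inner_lr=1e-2, inner_steps=3, lm_weight=0.5):
    """One meta-update: adapt a cloned model per task, then apply the
    first-order meta-gradient (query-loss gradient at the adapted weights)
    plus a weighted autoregressive gradient to the shared parameters."""
    meta_opt.zero_grad()

    for support, query in tasks:
        # inner loop: a few plain SGD steps on the support split of this task
        fast = copy.deepcopy(model)
        for _ in range(inner_steps):
            grads = torch.autograd.grad(lm_loss(fast, support), fast.parameters())
            with torch.no_grad():
                for p, g in zip(fast.parameters(), grads):
                    p -= inner_lr * g

        # first-order approximation: take the query-loss gradient at the
        # adapted weights and accumulate it onto the original parameters
        grads = torch.autograd.grad(lm_loss(fast, query), fast.parameters())
        for p, g in zip(model.parameters(), grads):
            p.grad = g if p.grad is None else p.grad + g

    # mix in the ordinary autoregressive objective on the shared weights
    (lm_weight * lm_loss(model, lm_batch)).backward()
    meta_opt.step()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyCausalLM()
    meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    rand = lambda: torch.randint(0, 100, (4, 16))    # synthetic token ids
    tasks = [(rand(), rand()) for _ in range(2)]     # (support, query) per task
    fomaml_step(model, meta_opt, tasks, lm_batch=rand())
    print("meta-step ok")
```

Dropping the second-order terms (the "first-order" part) keeps memory and compute close to ordinary pretraining, which matters for the small 11M-570M models the paper targets.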
Similar Papers
Learning Dynamics of Meta-Learning in Small Model Pretraining
Computation and Language
Makes AI training faster and easier to understand.
PANER: A Paraphrase-Augmented Framework for Low-Resource Named Entity Recognition
Computation and Language
Helps computers find specific words in text.
Enhancing NER Performance in Low-Resource Pakistani Languages using Cross-Lingual Data Augmentation
Computation and Language
Helps computers understand rare languages better.