Lexicalized Constituency Parsing for Middle Dutch: Low-resource Training and Cross-Domain Generalization
By: Yiming Liang, Fang Zhao
Recent years have seen growing interest in applying neural networks and contextualized word embeddings to the parsing of historical languages. However, most advances have focused on dependency parsing, while constituency parsing for low-resource historical languages like Middle Dutch has received little attention. In this paper, we adapt a transformer-based constituency parser to Middle Dutch, a highly heterogeneous and low-resource language, and investigate methods to improve both its in-domain and cross-domain performance. We show that joint training with higher-resource auxiliary languages increases F1 scores by up to 0.73, with the greatest gains coming from languages that are geographically and temporally closer to Middle Dutch. We further evaluate strategies for leveraging newly annotated data from additional domains, finding that fine-tuning and data combination yield comparable improvements, and that our neural parser consistently outperforms the PCFG-based parser currently used for Middle Dutch. Finally, we explore feature-separation techniques for domain adaptation and show that a minimum threshold of approximately 200 examples per domain is needed to effectively enhance cross-domain performance.