Ground Truth Generation for Multilingual Historical NLP using LLMs
By: Clovis Gladstone, Zhao Fang, Spencer Dean Stewart
Potential Business Impact:
Helps computers understand old books and writings.
Historical and low-resource NLP remains challenging due to limited annotated data and domain mismatches with modern, web-sourced corpora. This paper outlines our work using large language models (LLMs) to create ground-truth annotations for historical French (16th-20th centuries) and Chinese (1900-1950) texts. Using LLM-generated ground truth for a subset of our corpus, we fine-tuned spaCy and achieved significant gains on period-specific tests for part-of-speech (POS) tagging, lemmatization, and named entity recognition (NER). Our results underscore the importance of domain-specific models and demonstrate that even relatively limited amounts of synthetic data can improve NLP tools for under-resourced corpora in computational humanities research.
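The abstract does not include code, but the workflow it describes (turning LLM-produced token-level annotations into fine-tuning data for spaCy) can be sketched roughly as below. This is a minimal illustration under assumed conventions, not the authors' actual pipeline: the variable `llm_records` and its JSON-like layout (pre-tokenized words, Universal POS tags, lemmas, character-offset entity spans) are hypothetical stand-ins for whatever format the LLM output takes, and the `fr` language code and `train.spacy` path are placeholder choices.

```python
import spacy
from spacy.tokens import Doc, DocBin

# Hypothetical LLM output: one record per sentence, with pre-tokenized words,
# Universal POS tags, lemmas, and entity spans given as character offsets.
llm_records = [
    {
        "tokens": ["Paris", "estoit", "grande", "."],
        "pos": ["PROPN", "VERB", "ADJ", "PUNCT"],
        "lemmas": ["Paris", "être", "grand", "."],
        "entities": [[0, 5, "LOC"]],
    },
]

nlp = spacy.blank("fr")        # blank pipeline; only the vocab is needed here
doc_bin = DocBin()             # default attrs include POS, LEMMA, and entity labels

for record in llm_records:
    doc = Doc(nlp.vocab, words=record["tokens"])
    for token, pos, lemma in zip(doc, record["pos"], record["lemmas"]):
        token.pos_ = pos       # must be a valid Universal POS tag
        token.lemma_ = lemma
    spans = []
    for start, end, label in record["entities"]:
        span = doc.char_span(start, end, label=label)
        if span is not None:   # skip spans that do not align to token boundaries
            spans.append(span)
    doc.ents = spans
    doc_bin.add(doc)

doc_bin.to_disk("train.spacy")  # consumed by spaCy's training CLI
```

A fine-tuning run would then point spaCy's training CLI at the serialized file, e.g. `python -m spacy train config.cfg --paths.train train.spacy --paths.dev dev.spacy`, with the config and a development set prepared separately.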
Similar Papers
Can LLMs extract human-like fine-grained evidence for evidence-based fact-checking?
Computation and Language
Helps computers find truth in online comments.
Named Entity Recognition of Historical Text via Large Language Model
Digital Libraries
Helps computers find names in old writings.
Towards Corpus-Grounded Agentic LLMs for Multilingual Grammatical Analysis
Computation and Language
AI helps understand language rules in many languages.