Ground Truth Generation for Multilingual Historical NLP using LLMs

Published: November 18, 2025 | arXiv ID: 2511.14688v1

By: Clovis Gladstone, Zhao Fang, Spencer Dean Stewart

Potential Business Impact:

Helps computers accurately read and analyze old books and historical documents.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Historical and low-resource NLP remains challenging due to limited annotated data and domain mismatches with modern, web-sourced corpora. This paper describes our use of large language models (LLMs) to create ground-truth annotations for historical French (16th-20th centuries) and Chinese (1900-1950) texts. By leveraging LLM-generated ground truth on a subset of our corpus, we were able to fine-tune spaCy models that achieve significant gains on period-specific tests for part-of-speech (POS) tagging, lemmatization, and named entity recognition (NER). Our results underscore the importance of domain-specific models and demonstrate that even relatively limited amounts of synthetic data can improve NLP tools for under-resourced corpora in computational humanities research.
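The paper's pipeline hinges on turning LLM output into training data a tagger can consume. As a minimal, stdlib-only sketch of that step (the record schema, field names, and functions here are hypothetical, not the authors' actual format), one might validate that the LLM's token-level annotations align with the source sentence before converting them into character-offset training tuples in the style spaCy expects:

```python
import json

def validate_llm_annotation(record):
    """Check that the LLM's token spans reassemble the source sentence.
    Returns True if the tokens, read in order, cover the text exactly
    (ignoring spaces). Misaligned or hallucinated tokens fail here."""
    rebuilt = "".join(t["text"] for t in record["tokens"])
    return rebuilt == record["text"].replace(" ", "")

def to_training_example(record):
    """Convert one validated LLM record into a (text, annotations) pair:
    character offsets for each token plus its POS tag and lemma,
    similar in spirit to spaCy's training-example convention."""
    spans = []
    cursor = 0
    text = record["text"]
    for tok in record["tokens"]:
        start = text.index(tok["text"], cursor)
        end = start + len(tok["text"])
        spans.append({"start": start, "end": end,
                      "pos": tok["pos"], "lemma": tok["lemma"]})
        cursor = end
    return text, {"tokens": spans}

# Hypothetical LLM output for an early modern French phrase.
llm_json = json.dumps({
    "text": "ledict seigneur",
    "tokens": [
        {"text": "ledict", "pos": "DET", "lemma": "ledit"},
        {"text": "seigneur", "pos": "NOUN", "lemma": "seigneur"},
    ],
})

record = json.loads(llm_json)
assert validate_llm_annotation(record)
text, ann = to_training_example(record)
print(text)                      # ledict seigneur
print(ann["tokens"][0]["pos"])   # DET
```

In practice the validated tuples would then be serialized into spaCy's binary training format and used to fine-tune the tagger, lemmatizer, and NER components; the validation gate matters because LLM annotations can silently drop or rewrite tokens.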

Page Count
12 pages

Category
Computer Science:
Computation and Language