It's All About the Confidence: An Unsupervised Approach for Multilingual Historical Entity Linking using Large Language Models
By: Cristian Santini, Marieke van Erp, Mehwish Alam
Despite recent advancements in NLP with the advent of Large Language Models (LLMs), Entity Linking (EL) for historical texts remains challenging due to linguistic variation, noisy inputs, and evolving semantic conventions. Existing solutions either require substantial training data or rely on domain-specific rules that limit scalability. In this paper, we present MHEL-LLaMo (Multilingual Historical Entity Linking with Large Language MOdels), an unsupervised ensemble approach combining a Small Language Model (SLM) and an LLM. MHEL-LLaMo leverages a multilingual bi-encoder (BELA) for candidate retrieval and an instruction-tuned LLM for NIL prediction and candidate selection via prompt chaining. Our system uses the SLM's confidence scores to discriminate between easy and hard samples, applying the LLM only to hard cases. This strategy reduces computational costs while preventing hallucinations on straightforward cases. We evaluate MHEL-LLaMo on four established benchmarks in six European languages (English, Finnish, French, German, Italian, and Swedish) from the 19th and 20th centuries. Results demonstrate that MHEL-LLaMo outperforms state-of-the-art models without requiring fine-tuning, offering a scalable solution for low-resource historical EL. The implementation of MHEL-LLaMo is available on GitHub.
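The confidence-based routing described in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the paper's actual code: the helper names (retrieve_candidates, llm_select), the threshold TAU, and the toy candidate scores are all assumptions; the retriever stands in for a BELA-style bi-encoder and the prompt-chained LLM step is stubbed out so the example runs on its own.

from typing import Optional

TAU = 0.85  # assumed confidence threshold separating easy from hard mentions
NIL = None  # sentinel for mentions with no matching knowledge-base entity


def retrieve_candidates(mention: str, context: str) -> list[tuple[str, float]]:
    # Stand-in for the multilingual bi-encoder (BELA): returns
    # (entity_id, similarity_score) pairs. A real system would embed the
    # mention in context and score it against precomputed entity embeddings.
    return [("Q64", 0.91), ("Q1055", 0.42)]  # toy scores for "Berlin"


def llm_select(mention: str, context: str,
               candidates: list[tuple[str, float]]) -> Optional[str]:
    # Stand-in for the prompt-chained LLM: it would first decide NIL vs.
    # non-NIL, then choose among the retrieved candidates. Here we simply
    # return the top-scoring candidate to keep the sketch self-contained.
    return max(candidates, key=lambda c: c[1])[0] if candidates else NIL


def link_mention(mention: str, context: str) -> Optional[str]:
    candidates = retrieve_candidates(mention, context)
    if not candidates:
        return NIL
    best_id, best_score = max(candidates, key=lambda c: c[1])
    if best_score >= TAU:
        # Easy case: the SLM is confident, so accept its top candidate
        # directly and skip the expensive LLM call.
        return best_id
    # Hard case: defer to the LLM for NIL prediction and candidate selection.
    return llm_select(mention, context, candidates)


print(link_mention("Berlin", "... arrived in Berlin in 1871 ..."))  # -> "Q64"

The key design choice this sketch mirrors is that the LLM is invoked only when the cheap retriever's top score falls below the threshold, which is how the approach keeps inference costs low while reserving the LLM's reasoning for genuinely ambiguous mentions.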
Similar Papers
Evaluation of LLMs on Long-tail Entity Linking in Historical Documents
Computation and Language
Helps computers understand rare names and places.
Named Entity Recognition of Historical Text via Large Language Model
Digital Libraries
Helps computers find names in old writings.
Harnessing Deep LLM Participation for Robust Entity Linking
Computation and Language
Helps computers understand names in text better.