Evaluating LLMs for Historical Document OCR: A Methodological Framework for Digital Humanities
By: Maria Levchenko
Potential Business Impact:
Helps computers read old printed texts more accurately.
Digital humanities scholars increasingly use Large Language Models for historical document digitization, yet lack appropriate evaluation frameworks for LLM-based OCR. Traditional metrics fail to capture temporal biases and period-specific errors crucial for historical corpus creation. We present an evaluation methodology for LLM-based historical OCR, addressing contamination risks and systematic biases in diplomatic transcription. Using 18th-century Russian Civil font texts, we introduce novel metrics including Historical Character Preservation Rate (HCPR) and Archaic Insertion Rate (AIR), alongside protocols for contamination control and stability testing. We evaluate 12 multimodal LLMs, finding that Gemini and Qwen models outperform traditional OCR while exhibiting over-historicization: inserting archaic characters from incorrect historical periods. Post-OCR correction degrades rather than improves performance. Our methodology provides digital humanities practitioners with guidelines for model selection and quality assessment in historical corpus digitization.
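The two metrics are only named above, not defined. The short sketch below gives one plausible, character-count-based reading of them; the archaic-character inventory, the per-type counting, and AIR's per-1,000-character normalization are all illustrative assumptions, not the paper's actual formulas.

from collections import Counter

# Assumed inventory of pre-reform Cyrillic characters relevant to
# 18th-century Civil font texts (illustrative, not the paper's list).
ARCHAIC = set("ѣѢіІѵѴѳѲъЪ")

def hcpr(ground_truth: str, ocr_output: str) -> float:
    """Historical Character Preservation Rate (assumed definition):
    share of archaic-character occurrences in the ground truth that
    also appear in the OCR output, counted per character type."""
    gt_counts = Counter(c for c in ground_truth if c in ARCHAIC)
    out_counts = Counter(c for c in ocr_output if c in ARCHAIC)
    total = sum(gt_counts.values())
    if total == 0:
        return 1.0  # nothing to preserve
    preserved = sum(min(gt_counts[c], out_counts[c]) for c in gt_counts)
    return preserved / total

def air(ground_truth: str, ocr_output: str) -> float:
    """Archaic Insertion Rate (assumed definition): archaic characters
    in the output beyond their ground-truth counts, normalized per
    1,000 output characters."""
    gt_counts = Counter(c for c in ground_truth if c in ARCHAIC)
    out_counts = Counter(c for c in ocr_output if c in ARCHAIC)
    inserted = sum(max(out_counts[c] - gt_counts[c], 0) for c in out_counts)
    return 1000 * inserted / max(len(ocr_output), 1)

# Example of over-historicization: the model inserts і and ѣ where the
# ground truth has modern spellings, so HCPR stays high but AIR > 0.
gt = "история древней россіи"
out = "исторія древнѣй россіи"
print(f"HCPR = {hcpr(gt, out):.2f}, AIR = {air(gt, out):.2f}")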
Similar Papers
Multimodal LLMs for OCR, OCR Post-Correction, and Named Entity Recognition in Historical Documents
Computation and Language
Uses LLMs to read, correct, and tag names in old German texts.
Towards a standardized methodology and dataset for evaluating LLM-based digital forensic timeline analysis
Cryptography and Security
Tests how well LLMs reconstruct event timelines from digital crime-scene evidence.
Early evidence of how LLMs outperform traditional systems on OCR/HTR tasks for historical records
Computer Vision and Pattern Recognition
LLMs read old handwriting better than traditional OCR systems.