Multimodal LLMs for Historical Dataset Construction from Archival Image Scans: German Patents (1877-1918)
By: Niclas Griesshaber, Jochen Streb
We leverage multimodal large language models (LLMs) to construct a dataset of 306,070 German patents (1877-1918) from 9,562 archival image scans using our LLM-based pipeline powered by Gemini-2.5-Pro and Gemini-2.5-Flash-Lite. Our benchmarking exercise provides tentative evidence that multimodal LLMs can create higher-quality datasets than our research assistants, while also being more than 795 times faster and 205 times cheaper at constructing the patent dataset from our image corpus. Each page contains roughly 20 to 50 patent entries, arranged in a double-column layout and printed in Gothic and Roman typefaces. The font and layout complexity of our primary source material suggests to us that multimodal LLMs represent a paradigm shift in how datasets are constructed in economic history. We open-source our benchmarking and patent datasets as well as our LLM-based data pipeline, which can be easily adapted to other image corpora using LLM-assisted coding tools, lowering the barrier for less technical researchers. Finally, we explain the economics of deploying LLMs for historical dataset construction and conclude by speculating on the potential implications for the field of economic history.
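The authors' full pipeline is open-sourced; as a rough illustration of the core idea only, the minimal sketch below sends one scanned register page to a Gemini model through the Python google-generativeai SDK and asks for structured patent entries as JSON. The prompt wording, the field names, the model identifier, and the file path are illustrative assumptions, not the authors' actual code or schema.

```python
import json
import google.generativeai as genai
from PIL import Image

# Illustrative sketch: configure the SDK with your own API key.
genai.configure(api_key="YOUR_API_KEY")

# Model name is an assumption; the paper also uses Gemini-2.5-Flash-Lite
# for cheaper passes over the corpus.
model = genai.GenerativeModel("gemini-2.5-pro")

# Hypothetical prompt and output schema for a double-column register page
# printed in Gothic and Roman typefaces.
prompt = (
    "The attached image is a page from the German patent register (1877-1918), "
    "set in two columns in Gothic and Roman type. Extract every patent entry as a "
    "JSON array of objects with the fields: patent_number, patentee, location, "
    "title, date. Return only the JSON array."
)

page = Image.open("scans/page_0001.jpg")  # hypothetical path to one archival scan
response = model.generate_content([prompt, page])

# Strip possible markdown fences before parsing; a real pipeline would add
# schema validation, retries, and cross-page consistency checks.
raw = response.text.strip().removeprefix("```json").removesuffix("```").strip()
entries = json.loads(raw)
print(f"Extracted {len(entries)} patent entries from this page.")
```

In practice such a loop would be run over all 9,562 scans, with the per-page JSON concatenated and deduplicated into the final dataset.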