pdfQA: Diverse, Challenging, and Realistic Question Answering over PDFs

Published: January 5, 2026 | arXiv ID: 2601.02285v2

By: Tobias Schimanski, Imene Kolli, Yu Fan, and more

Potential Business Impact:

Helps computers answer questions from PDF documents.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

PDFs are the second-most used document type on the internet (after HTML). Yet, existing QA datasets commonly start from text sources or address only specific domains. In this paper, we present pdfQA, a multi-domain dataset of 2K human-annotated (real-pdfQA) and 2K synthetic (syn-pdfQA) QA pairs, differentiated along ten complexity dimensions (e.g., file type, source modality, source position, answer type). We apply and evaluate quality and difficulty filters on both datasets, obtaining valid and challenging QA pairs. We answer the questions with open-source LLMs, revealing persistent challenges that correlate with our complexity dimensions. pdfQA provides a basis for end-to-end QA pipeline evaluation, testing diverse skill sets and local optimizations (e.g., in information retrieval or parsing).
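The evaluation described above, where model answers are scored and correlated with complexity dimensions, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual code: the field names, toy data, and exact-match metric are assumptions for demonstration only.

```python
# Hypothetical sketch of pdfQA-style evaluation: bucket exact-match QA
# accuracy by one complexity dimension (e.g., source modality).
# Schema and data are illustrative, not the actual pdfQA format.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class QAPair:
    question: str
    answer: str
    dimensions: dict  # e.g. {"file_type": ..., "source_modality": ...}

def accuracy_by_dimension(pairs, predictions, dimension):
    """Exact-match accuracy, grouped by one complexity dimension."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pair, pred in zip(pairs, predictions):
        bucket = pair.dimensions[dimension]
        total[bucket] += 1
        if pred.strip().lower() == pair.answer.strip().lower():
            correct[bucket] += 1
    return {bucket: correct[bucket] / total[bucket] for bucket in total}

# Toy data: two source modalities, echoing the paper's dimension list.
pairs = [
    QAPair("Q1", "42", {"source_modality": "text"}),
    QAPair("Q2", "blue", {"source_modality": "table"}),
    QAPair("Q3", "Zurich", {"source_modality": "text"}),
]
preds = ["42", "red", "Zurich"]
print(accuracy_by_dimension(pairs, preds, "source_modality"))
# → {'text': 1.0, 'table': 0.0}
```

Grouping accuracy this way is what lets per-dimension challenges (e.g., table-sourced answers being harder than text-sourced ones) surface from aggregate results.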

Country of Origin
🇨🇭 Switzerland

Repos / Data Links

Page Count
14 pages

Category
Computer Science: Computation and Language