pdfQA: Diverse, Challenging, and Realistic Question Answering over PDFs
By: Tobias Schimanski, Imene Kolli, Yu Fan, and more
Potential Business Impact:
Helps computers answer questions from PDF documents.
PDFs are the second-most used document type on the internet (after HTML). Yet, existing QA datasets commonly start from text sources or only address specific domains. In this paper, we present pdfQA, a multi-domain dataset of 2K human-annotated (real-pdfQA) and 2K synthetic (syn-pdfQA) QA pairs, differentiated along ten complexity dimensions (e.g., file type, source modality, source position, answer type). We apply and evaluate quality and difficulty filters on both datasets, obtaining valid and challenging QA pairs. We answer the questions with open-source LLMs, revealing challenges that correlate with our complexity dimensions. pdfQA provides a basis for end-to-end QA pipeline evaluation, testing diverse skill sets and local optimizations (e.g., in information retrieval or parsing).
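To make the complexity dimensions concrete, here is a minimal sketch of how a pdfQA-style record and a filtering pass might look. This is not the authors' actual schema; the class, field names, example values, and filter logic below are all illustrative assumptions based only on the dimensions the abstract names.

```python
from dataclasses import dataclass

# Hypothetical record carrying four of the ten complexity dimensions
# mentioned in the abstract; names and values are assumptions.
@dataclass
class PdfQAPair:
    question: str
    answer: str
    file_type: str        # e.g. "report", "slides" (assumed values)
    source_modality: str  # e.g. "text", "table", "figure"
    source_position: str  # e.g. "single-page", "cross-page"
    answer_type: str      # e.g. "extractive", "abstractive"
    # ...the remaining dimensions would follow the same pattern

def passes_filters(pair: PdfQAPair) -> bool:
    """Toy stand-in for the paper's quality/difficulty filtering step:
    here, keep only pairs with a non-empty question and answer."""
    return bool(pair.question.strip()) and bool(pair.answer.strip())

pairs = [
    PdfQAPair("What revenue is reported for 2023?", "$4.2M",
              "report", "table", "single-page", "extractive"),
    PdfQAPair("", "", "report", "text", "single-page", "extractive"),
]
valid = [p for p in pairs if passes_filters(p)]
print(f"{len(valid)} of {len(pairs)} pairs pass the filters")
```

Tagging each QA pair this way is what lets downstream evaluation correlate model failures with individual dimensions, as the abstract describes.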
Similar Papers
FlipVQA-Miner: Cross-Page Visual Question-Answer Mining from Textbooks
Artificial Intelligence
Makes AI smarter using old textbooks.
Hierarchical Vision-Language Reasoning for Multimodal Multiple-Choice Question Answering
Information Retrieval
Helps computers understand Japanese documents better.