DocHop-QA: Towards Multi-Hop Reasoning over Multimodal Document Collections
By: Jiwon Park, Seohyun Pyeon, Jinwoo Kim, and more
Potential Business Impact:
Helps computers answer questions from many science papers.
Despite recent advances in large language models (LLMs), most QA benchmarks are still confined to single-paragraph or single-document settings, failing to capture the complexity of real-world information-seeking tasks. Practical QA often requires multi-hop reasoning over information distributed across multiple documents, modalities, and structural formats. Although prior datasets have made progress in this area, they rely heavily on Wikipedia-based content and unimodal plain text, with shallow reasoning paths that typically produce brief phrase-level or single-sentence answers, limiting their realism and generalizability. We propose DocHop-QA, a large-scale benchmark comprising 11,379 QA instances for multimodal, multi-document, multi-hop question answering. Constructed from publicly available scientific documents sourced from PubMed, DocHop-QA is domain-agnostic and incorporates diverse information formats, including textual passages, tables, and structural layout cues. Unlike existing datasets, DocHop-QA does not rely on explicitly hyperlinked documents; instead, it supports open-ended reasoning through semantic similarity and layout-aware evidence synthesis. To scale realistic QA construction, we designed an LLM-driven pipeline grounded in 11 high-frequency scientific question concepts. We evaluated DocHop-QA through four tasks spanning structured index prediction, generative answering, and multimodal integration, reflecting both discriminative and generative paradigms. These tasks demonstrate DocHop-QA's capacity to support complex, multimodal reasoning across multiple documents.
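To make the multi-document, multi-hop setting concrete, the sketch below shows one way such a QA instance could be represented and how candidate evidence from several documents and modalities might be ranked by semantic similarity before answering. The data structures, field names, and bag-of-words similarity here are illustrative assumptions, not the released DocHop-QA schema or the paper's actual pipeline.

```python
# Hypothetical sketch: a multi-hop QA instance drawing evidence from several
# documents/modalities, with a simple bag-of-words cosine similarity standing
# in for the semantic-similarity step described in the abstract.
from collections import Counter
from dataclasses import dataclass, field
from math import sqrt
from typing import List


@dataclass
class EvidenceUnit:
    doc_id: str    # source document (e.g., a PubMed article)
    modality: str  # "text", "table", or "layout"
    content: str   # textual rendering of the evidence


@dataclass
class QAInstance:
    question: str
    evidence: List[EvidenceUnit] = field(default_factory=list)
    answer: str = ""


def _bow(text: str) -> Counter:
    """Lowercased bag-of-words term counts."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def rank_evidence(question: str, pool: List[EvidenceUnit], k: int = 3) -> List[EvidenceUnit]:
    """Return the k evidence units most similar to the question."""
    q = _bow(question)
    return sorted(pool, key=lambda e: cosine(q, _bow(e.content)), reverse=True)[:k]


if __name__ == "__main__":
    pool = [
        EvidenceUnit("paperA", "text", "The cohort included 120 patients treated with drug X."),
        EvidenceUnit("paperB", "table", "Table 2: response rate drug X 64%, placebo 31%."),
        EvidenceUnit("paperC", "text", "Unrelated background on imaging protocols."),
    ]
    question = "What response rate was reported for drug X?"
    instance = QAInstance(question=question, evidence=rank_evidence(question, pool, k=2))
    for e in instance.evidence:
        print(e.doc_id, e.modality, "->", e.content)
```

In the benchmark itself, the ranking step would use learned semantic embeddings and layout-aware signals rather than word counts, and evidence would span textual passages, tables, and structural cues across documents; the sketch only fixes the shape of the problem.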
Similar Papers
PluriHop: Exhaustive, Recall-Sensitive QA over Distractor-Rich Corpora
Computation and Language
Finds answers in many reports, even tricky ones.
NovelHopQA: Diagnosing Multi-Hop Reasoning Failures in Long Narrative Contexts
Computation and Language
Helps computers understand long stories and answer questions.
DEEPAMBIGQA: Ambiguous Multi-hop Questions for Benchmarking LLM Answer Completeness
Computation and Language
Helps computers answer tricky questions better.