
EvidenceBench: A Benchmark for Extracting Evidence from Biomedical Papers

Published: April 25, 2025 | arXiv ID: 2504.18736v2

By: Jianyou Wang, Weili Cao, Kaicheng Wang, and more

Potential Business Impact:

Automatically pinpoints the sentences in biomedical papers that serve as evidence for a given hypothesis, saving researchers literature-review time.

Business Areas:
Biometrics, Biotechnology, Data and Analytics, Science and Engineering

We study the task of automatically finding evidence relevant to hypotheses in biomedical papers. Finding relevant evidence is an important step when researchers investigate scientific hypotheses. We introduce EvidenceBench to measure model performance on this task. The benchmark is created by a novel pipeline that consists of hypothesis generation and sentence-by-sentence annotation of biomedical papers for relevant evidence, completely guided by and faithfully following existing human experts' judgment. We demonstrate the pipeline's validity and accuracy with multiple sets of human-expert annotations. We evaluate a diverse set of language models and retrieval systems on the benchmark and find that model performance still falls significantly short of expert level on this task. To show the scalability of our proposed pipeline, we create the larger EvidenceBench-100k, with 107,461 fully annotated papers with hypotheses, to facilitate model training and development. Both datasets are available at https://github.com/EvidenceBench/EvidenceBench
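To make the task concrete, below is a minimal sketch of the kind of retrieval baseline the abstract alludes to: ranking a paper's sentences by embedding similarity to a hypothesis. The data format, example sentences, and scoring here are illustrative assumptions, not the benchmark's actual schema or the authors' method; see the GitHub repository for the real evaluation protocol.

```python
# Minimal sentence-retrieval baseline sketch for a hypothesis-evidence task.
# All inputs below are hypothetical examples, not EvidenceBench data.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

hypothesis = "Drug X reduces tumor growth in mouse models."  # hypothetical
sentences = [
    "We treated 40 mice with Drug X over six weeks.",
    "Tumor volume decreased by 38% relative to controls (p < 0.01).",
    "Prior work has studied Drug Y in similar settings.",
]

# Embed the hypothesis and all candidate sentences, then score each
# sentence by cosine similarity to the hypothesis.
hyp_emb = model.encode(hypothesis, convert_to_tensor=True)
sent_embs = model.encode(sentences, convert_to_tensor=True)
scores = util.cos_sim(hyp_emb, sent_embs)[0]

# Rank sentences most-to-least relevant as candidate evidence.
ranked = sorted(zip(sentences, scores.tolist()), key=lambda p: p[1], reverse=True)
for sent, score in ranked:
    print(f"{score:.3f}  {sent}")
```

Per the abstract, baselines of roughly this kind (retrieval systems alongside language models) still fall well short of expert-level evidence identification, which is the gap the benchmark is meant to measure.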

Country of Origin
🇺🇸 United States

Repos / Data Links
https://github.com/EvidenceBench/EvidenceBench

Page Count
35 pages

Category
Computer Science:
Computation and Language