EvidenceBench: A Benchmark for Extracting Evidence from Biomedical Papers
By: Jianyou Wang, Weili Cao, Kaicheng Wang, and more
Potential Business Impact:
Helps researchers find evidence for scientific hypotheses in biomedical papers.
We study the task of automatically finding evidence relevant to hypotheses in biomedical papers. Finding relevant evidence is an important step when researchers investigate scientific hypotheses. We introduce EvidenceBench to measure model performance on this task. The benchmark is created by a novel pipeline consisting of hypothesis generation and sentence-by-sentence annotation of biomedical papers for relevant evidence, fully guided by and faithfully following existing human experts' judgment. We demonstrate the pipeline's validity and accuracy with multiple sets of human-expert annotations. We evaluate a diverse set of language models and retrieval systems on the benchmark and find that model performance still falls significantly short of expert level on this task. To show the scalability of the proposed pipeline, we create the larger EvidenceBench-100k, with 107,461 fully annotated papers and hypotheses, to facilitate model training and development. Both datasets are available at https://github.com/EvidenceBench/EvidenceBench
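To make the evaluation setup concrete, the sketch below shows one plausible way to score sentence-level evidence retrieval against gold annotations on an EvidenceBench-style example. The field names (`hypothesis`, `sentences`, `evidence_indices`), the token-overlap ranker, and the recall@k metric are illustrative assumptions, not the benchmark's actual data format or official scoring code.

```python
# Hypothetical sketch, assuming an example with a hypothesis, the paper's
# sentences, and gold evidence-sentence indices. Not the official API.

def rank_sentences(hypothesis: str, sentences: list[str]) -> list[int]:
    """Rank paper sentences by naive token overlap with the hypothesis."""
    hyp_tokens = set(hypothesis.lower().split())
    scores = [len(hyp_tokens & set(s.lower().split())) for s in sentences]
    return sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)

def recall_at_k(ranked: list[int], gold: set[int], k: int) -> float:
    """Fraction of gold evidence sentences recovered in the top-k ranking."""
    if not gold:
        return 0.0
    return len(set(ranked[:k]) & gold) / len(gold)

# Toy example in the assumed format.
example = {
    "hypothesis": "Drug X reduces tumor growth in mice.",
    "sentences": [
        "We treated mice with Drug X for four weeks.",
        "Tumor growth was significantly reduced in the Drug X group.",
        "The control group received saline injections.",
    ],
    "evidence_indices": {1},  # gold sentence-level evidence annotation
}

ranked = rank_sentences(example["hypothesis"], example["sentences"])
print(f"recall@2 = {recall_at_k(ranked, example['evidence_indices'], k=2):.2f}")
```

In practice, the token-overlap ranker would be replaced by the language models or retrieval systems under evaluation; the scoring loop stays the same.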
Similar Papers
Rethinking Evidence Hierarchies in Medical Language Benchmarks: A Critical Evaluation of HealthBench
Artificial Intelligence
Makes health AI trustworthy using proven guidelines.
HypoBench: Towards Systematic and Principled Benchmarking for Hypothesis Generation
Artificial Intelligence
Tests AI to find better science ideas.
EvidenceOutcomes: a Dataset of Clinical Trial Publications with Clinically Meaningful Outcomes
Computation and Language
Helps computers find important health results in studies.