SciCUEval: A Comprehensive Dataset for Evaluating Scientific Context Understanding in Large Language Models
By: Jing Yu, Yuqi Tang, Kehua Feng, and more
Potential Business Impact:
Tests AI's smarts in science fields.
Large Language Models (LLMs) have shown impressive capabilities in contextual understanding and reasoning. However, evaluating their performance across diverse scientific domains remains underexplored, as existing benchmarks primarily focus on general domains and fail to capture the intricate complexity of scientific data. To bridge this gap, we construct SciCUEval, a comprehensive benchmark dataset tailored to assess the scientific context understanding capability of LLMs. It comprises ten domain-specific sub-datasets spanning biology, chemistry, physics, biomedicine, and materials science, integrating diverse data modalities including structured tables, knowledge graphs, and unstructured texts. SciCUEval systematically evaluates four core competencies (relevant information identification, information-absence detection, multi-source information integration, and context-aware inference) through a variety of question formats. We conduct extensive evaluations of state-of-the-art LLMs on SciCUEval, providing a fine-grained analysis of their strengths and limitations in scientific context understanding, and offering valuable insights for the future development of scientific-domain LLMs.
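To make the kind of evaluation described above more concrete, the minimal sketch below shows how a benchmark item pairing a scientific context with a competency-labeled question might be represented and scored. This is an illustrative assumption only: the EvalItem schema, field names, score function, and the toy chemistry item are invented here and are not the actual SciCUEval format or API.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical schema for a SciCUEval-style item; field names are illustrative,
# not the released dataset's actual structure.
@dataclass
class EvalItem:
    domain: str          # e.g., "chemistry", "materials science"
    competency: str      # one of the four core competencies
    context: str         # supporting context: table rows, KG triples, or free text
    question: str
    choices: List[str]   # multiple-choice format; other question formats also exist
    answer: str          # gold choice label

def score(items: List[EvalItem], predictions: List[str]) -> float:
    """Exact-match accuracy over predicted choice labels."""
    if not items:
        return 0.0
    correct = sum(pred == item.answer for item, pred in zip(items, predictions))
    return correct / len(items)

# Toy example (content invented purely for illustration) probing
# information-absence detection: the context omits the queried value.
item = EvalItem(
    domain="chemistry",
    competency="information-absence detection",
    context="Melting point of compound A: 120 C. Boiling point: not reported.",
    question="What is the boiling point of compound A?",
    choices=["A. 210 C", "B. 250 C", "C. Not available in the context"],
    answer="C",
)
print(score([item], ["C"]))  # prints 1.0
```

A real harness would additionally prompt the model with the context and question and parse its chosen option, but the item structure and accuracy computation would follow the same pattern.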
Similar Papers
CURIE: Evaluating LLMs On Multitask Scientific Long Context Understanding and Reasoning
Computation and Language
Helps computers solve science problems better.
LC-Eval: A Bilingual Multi-Task Evaluation Benchmark for Long-Context Understanding
Computation and Language
Tests how well computers understand long stories.
MME-SCI: A Comprehensive and Challenging Science Benchmark for Multimodal Large Language Models
Computation and Language
Tests AI's science smarts in many languages.