Highlighting Case Studies in LLM Literature Review of Interdisciplinary System Science
By: Lachlan McGinness, Peter Baumgartner
Potential Business Impact:
Helps scientists find answers in research papers faster.
Large Language Models (LLMs) were used to assist four Commonwealth Scientific and Industrial Research Organisation (CSIRO) researchers in performing systematic literature reviews (SLRs). We evaluate the performance of LLMs on SLR tasks in these case studies. In each, we explore the impact of changing parameters on the accuracy of LLM responses. The LLM was tasked with extracting evidence from chosen academic papers to answer specific research questions. We evaluate the models' performance in faithfully reproducing quotes from the literature, and subject experts were asked to assess the models' performance in answering the research questions. We developed a semantic text highlighting tool to facilitate expert review of LLM responses. We found that state-of-the-art LLMs were able to reproduce quotes from texts with greater than 95% accuracy and to answer research questions with an accuracy of approximately 83%. We use two methods to determine the correctness of LLM responses: expert review and the cosine similarity of transformer embeddings of the LLM and expert answers. The correlation between these methods ranged from 0.48 to 0.77, providing evidence that the latter is a valid metric for measuring semantic similarity.
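As an illustration of the embedding-based correctness check described above, the sketch below scores an LLM answer against an expert answer by the cosine similarity of their transformer embeddings and then correlates those scores with expert ratings. The embedding model (all-MiniLM-L6-v2), the sentence-transformers and SciPy calls, and the example answers and ratings are assumptions for illustration, not the authors' actual pipeline.

# Minimal sketch, assuming a sentence-transformers embedding model and
# invented example data; not the study's actual evaluation code.
from scipy.stats import pearsonr
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

llm_answers = [
    "Soil carbon stocks increase under no-till management in most trials.",
    "The review found no consistent effect of irrigation on yield stability.",
    "Remote sensing data improved model calibration in two of the studies.",
]
expert_answers = [
    "Most trials report higher soil carbon under no-till practices.",
    "Irrigation improved yield stability in the majority of studies reviewed.",
    "Two studies used remote sensing data to calibrate their models.",
]
expert_scores = [1.0, 0.0, 1.0]  # hypothetical expert correctness ratings

# Embed both sets of answers and take the pairwise cosine similarity
# between each LLM answer and the matching expert answer.
llm_emb = model.encode(llm_answers, convert_to_tensor=True)
expert_emb = model.encode(expert_answers, convert_to_tensor=True)
cosine_scores = util.cos_sim(llm_emb, expert_emb).diagonal().tolist()

# Correlate the automatic metric with the expert ratings (Pearson's r);
# the paper reports correlations of 0.48 to 0.77 across its case studies.
r, _ = pearsonr(cosine_scores, expert_scores)
print(f"cosine scores: {cosine_scores}")
print(f"correlation with expert review: {r:.2f}")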
Similar Papers
A Multi-Task Evaluation of LLMs' Processing of Academic Text Input
Computation and Language
Computers can't yet judge science papers well.
Large Language Models for Full-Text Methods Assessment: A Case Study on Mediation Analysis
Computation and Language
Helps computers understand science papers better.
Leveraging LLMs for Semi-Automatic Corpus Filtration in Systematic Literature Reviews
Machine Learning (CS)
Helps scientists find important research faster.