Can ChatGPT evaluate research environments? Evidence from REF2021
By: Kayvan Kousha, Mike Thelwall, Elizabeth Gadd
UK academic departments are evaluated partly on the statements that they write about the value of their research environments for the Research Excellence Framework (REF) periodic assessments. These statements mix qualitative narratives with quantitative data, and assessing them typically requires time-consuming and difficult expert judgement. This article investigates whether Large Language Models (LLMs) can support the process or validate the results, using the UK REF2021 unit-level environment statements as a test case. Based on prompts mimicking the REF guidelines, ChatGPT 4o-mini scores correlated positively with expert scores in almost all 34 (field-based) Units of Assessment (UoAs). ChatGPT's scores had moderate to strong positive Spearman correlations with REF expert scores in 32 out of 34 UoAs: 14 UoAs above 0.7 and a further 13 between 0.6 and 0.7. Only two UoAs (Classics and Clinical Medicine) had weak or no significant associations. In further tests on UoA 34, multiple LLMs had significant positive correlations with REF2021 environment scores (all p < .001), with ChatGPT 5 performing best (r=0.81; ρ=0.82), followed by ChatGPT 4o-mini (r=0.68; ρ=0.67) and Gemini Flash 2.5 (r=0.67; ρ=0.69). If LLM-generated scores for environment statements are used in future to help reduce workload, support more consistent interpretation, and complement human review, then caution must be exercised because of the potential for biases, inaccuracy in some cases, and unwanted systemic effects. Even the strong correlations found here seem unlikely to be judged close enough to expert scores to fully delegate the assessment task to LLMs.
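The evaluation workflow described in the abstract can be illustrated with a minimal sketch: an LLM is prompted with instructions based on the REF guidelines to score each environment statement, and the resulting scores for a Unit of Assessment are then correlated with the expert panel scores. The model identifier, prompt wording, scoring scale mapping, and function names below are illustrative assumptions, not the study's exact protocol.

```python
# Minimal sketch, assuming an OpenAI-style chat API and scipy for correlation.
# Prompt text, model name, and the 1-4 scale are assumptions for illustration.
from openai import OpenAI
from scipy.stats import spearmanr

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a REF2021 panel assessor. Read the unit-level research "
    "environment statement and rate its vitality and sustainability on the "
    "REF quality scale from 1 to 4, where 4 is world-leading. "
    "Reply with a single number only."
)

def llm_environment_score(statement_text: str) -> float:
    """Ask the model for one numeric quality score for one environment statement."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model identifier
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": statement_text},
        ],
    )
    return float(response.choices[0].message.content.strip())

def correlate_with_experts(llm_scores: list[float], expert_scores: list[float]):
    """Spearman rank correlation between LLM and expert scores for one UoA."""
    rho, p_value = spearmanr(llm_scores, expert_scores)
    return rho, p_value
```

In practice one would likely average several repeated model runs per statement and repeat the correlation per UoA; the sketch above only shows the core scoring-and-validation step.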