Evaluating Metrics for Safety with LLM-as-Judges

Published: December 17, 2025 | arXiv ID: 2512.15617v1

By: Kester Clegg, Richard Hawkins, Ibrahim Habli and more

Potential Business Impact:

Supports the safe use of LLMs in safety-critical information workflows, such as post-operative care triage and nuclear site access scheduling.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

LLMs (Large Language Models) are increasingly used in text processing pipelines to respond intelligently to a variety of inputs and generation tasks. This raises the possibility of replacing human roles that bottleneck existing information flows, whether due to insufficient staff or process complexity. However, LLMs make mistakes, and some processing roles are safety-critical: for example, triaging post-operative care for patients based on hospital referral letters, or updating site access schedules for work crews at nuclear facilities. If we want to introduce LLMs into critical information flows previously performed by humans, how can we make them safe and reliable? Rather than making performative claims about augmented generation frameworks or graph-based techniques, this paper argues that the safety argument should focus on the type of evidence we get from evaluation points in LLM processes, particularly in frameworks that employ LLM-as-Judges (LaJ) evaluators. Although we cannot get deterministic evaluations for many natural language processing tasks, the paper argues that by adopting a basket of weighted metrics it may be possible to lower the risk of errors within an evaluation, use context sensitivity to define error severity, and design confidence thresholds that trigger human review of critical LaJ judgments when concordance across evaluators is low.
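The mechanism sketched in the abstract (a weighted basket of metrics, context-dependent error severity, and a concordance check that escalates low-agreement judgments to a human) can be illustrated with a short sketch. This is a hypothetical outline rather than the paper's implementation; the metric names, weights, and thresholds (METRIC_WEIGHTS, CONCORDANCE_THRESHOLD, SEVERITY_WEIGHTS) are assumed values for illustration only.

```python
from statistics import mean, pstdev

# Hypothetical illustration: combine weighted metric scores from several
# LLM-as-Judge (LaJ) evaluators and escalate to human review when
# concordance across judges is low. All names and numbers are assumptions.

METRIC_WEIGHTS = {            # relative importance of each metric
    "factual_accuracy": 0.5,
    "completeness": 0.3,
    "tone": 0.2,
}
CONCORDANCE_THRESHOLD = 0.15  # maximum tolerated spread between judges
SEVERITY_WEIGHTS = {"routine": 1.0, "safety_critical": 2.0}


def weighted_score(metric_scores: dict[str, float]) -> float:
    """Collapse one judge's per-metric scores (0..1) into a single score."""
    return sum(METRIC_WEIGHTS[m] * s for m, s in metric_scores.items())


def evaluate(judge_outputs: list[dict[str, float]], context: str) -> dict:
    """Aggregate several judges; flag for human review if they disagree."""
    scores = [weighted_score(j) for j in judge_outputs]
    spread = pstdev(scores) if len(scores) > 1 else 0.0
    # Context sensitivity: safety-critical items tighten the effective
    # concordance requirement by scaling the spread by error severity.
    effective_spread = spread * SEVERITY_WEIGHTS.get(context, 1.0)
    return {
        "score": mean(scores),
        "spread": spread,
        "needs_human_review": effective_spread > CONCORDANCE_THRESHOLD,
    }


if __name__ == "__main__":
    judges = [
        {"factual_accuracy": 0.90, "completeness": 0.80, "tone": 0.95},
        {"factual_accuracy": 0.60, "completeness": 0.85, "tone": 0.90},
        {"factual_accuracy": 0.92, "completeness": 0.75, "tone": 0.90},
    ]
    print(evaluate(judges, context="safety_critical"))
```

In this sketch, disagreement on the factual-accuracy metric widens the spread between judges, and the safety-critical context amplifies it past the threshold, so the judgment is routed to a human reviewer rather than accepted automatically.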

Country of Origin
🇬🇧 United Kingdom

Page Count
18 pages

Category
Computer Science:
Computation and Language