Evaluating Metrics for Safety with LLM-as-Judges
By: Kester Clegg, Richard Hawkins, Ibrahim Habli, and more
Potential Business Impact:
Makes AI safer to use for safety-critical jobs.
LLMs (Large Language Models) are increasingly used in text processing pipelines to respond intelligently to a variety of inputs and generation tasks. This raises the possibility of replacing human roles that bottleneck existing information flows, whether due to insufficient staff or process complexity. However, LLMs make mistakes, and some processing roles are safety-critical: for example, triaging post-operative care for patients based on hospital referral letters, or updating site access schedules for work crews at nuclear facilities. If we want to introduce LLMs into critical information flows that were previously performed by humans, how can we make them safe and reliable? Rather than make performative claims about augmented generation frameworks or graph-based techniques, this paper argues that the safety argument should focus on the type of evidence we get from evaluation points in LLM processes, particularly in frameworks that employ LLM-as-Judges (LaJ) evaluators. Although we cannot obtain deterministic evaluations for many natural language processing tasks, the paper argues that by adopting a basket of weighted metrics it may be possible to lower the risk of errors within an evaluation, use context sensitivity to define error severity, and design confidence thresholds that trigger human review of critical LaJ judgments when concordance across evaluators is low.
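As a rough illustration of the kind of mechanism the abstract describes, the sketch below aggregates a basket of weighted metrics from several LaJ evaluators and escalates an output for human review when any judge falls below a pass threshold or when concordance across judges is low. The metric names, weights, and thresholds here are hypothetical placeholders, not values taken from the paper.

```python
# Illustrative sketch only: metric names, weights, and thresholds are
# hypothetical assumptions, not drawn from the paper.
from dataclasses import dataclass

@dataclass
class JudgeScores:
    """Scores in [0, 1] returned by one LLM-as-Judge evaluator for a single output."""
    factual_consistency: float
    instruction_adherence: float
    harm_avoidance: float

# Hypothetical weights reflecting context-sensitive error severity:
# in a safety-critical triage context, harm avoidance dominates.
WEIGHTS = {
    "factual_consistency": 0.3,
    "instruction_adherence": 0.2,
    "harm_avoidance": 0.5,
}

def weighted_score(s: JudgeScores) -> float:
    """Aggregate one judge's metric basket into a single weighted score."""
    return (
        WEIGHTS["factual_consistency"] * s.factual_consistency
        + WEIGHTS["instruction_adherence"] * s.instruction_adherence
        + WEIGHTS["harm_avoidance"] * s.harm_avoidance
    )

def concordance(scores: list[float]) -> float:
    """Simple concordance proxy: 1 minus the spread of the judges' scores.
    Values near 1 mean the judges broadly agree; values near 0 mean they diverge."""
    return 1.0 - (max(scores) - min(scores))

def needs_human_review(judges: list[JudgeScores],
                       pass_threshold: float = 0.8,
                       concordance_threshold: float = 0.85) -> bool:
    """Escalate to a human reviewer if any judge scores below the pass
    threshold, or if concordance across the judges is too low."""
    aggregated = [weighted_score(j) for j in judges]
    if min(aggregated) < pass_threshold:
        return True
    return concordance(aggregated) < concordance_threshold

# Example: three LaJ evaluators assess the same generated referral summary.
judges = [
    JudgeScores(0.92, 0.88, 0.95),
    JudgeScores(0.90, 0.85, 0.93),
    JudgeScores(0.65, 0.80, 0.70),  # one dissenting judge lowers the floor and concordance
]
print("Escalate to human review:", needs_human_review(judges))
```

In practice the weights would be tuned per deployment context to reflect error severity, which is what the abstract means by context sensitivity; the concordance check is one simple way to express the "trigger human review when agreement is low" idea.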
Similar Papers
The Scales of Justitia: A Comprehensive Survey on Safety Evaluation of LLMs
Computation and Language
Makes AI safer by checking its bad ideas.
Know Thy Judge: On the Robustness Meta-Evaluation of LLM Safety Judges
Machine Learning (CS)
Makes AI judges more honest and reliable.
Safer or Luckier? LLMs as Safety Evaluators Are Not Robust to Artifacts
Computation and Language
AI judges can be tricked into calling bad things safe.