Score: 1

Interpreting LLM-as-a-Judge Policies via Verifiable Global Explanations

Published: October 9, 2025 | arXiv ID: 2510.08120v1

By: Jasmina Gajcin, Erik Miehling, Rahul Nair, and more

BigTech Affiliations: IBM

Potential Business Impact:

Reveals the hidden rules an AI judge uses when evaluating text.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

LLM-as-a-judge, the practice of using LLMs to evaluate text, is increasingly deployed at scale to augment or even replace human annotation. As such, it is imperative that we understand the potential biases and risks it introduces. In this work, we propose an approach for extracting high-level, concept-based global policies from an LLM-as-a-Judge. Our approach consists of two algorithms: 1) CLoVE (Contrastive Local Verifiable Explanations), which generates verifiable, concept-based, contrastive local explanations, and 2) GloVE (Global Verifiable Explanations), which uses iterative clustering, summarization, and verification to condense local rules into a global policy. We evaluate GloVE on seven standard benchmark datasets for content harm detection and find that the extracted global policies are highly faithful to the decisions of the LLM-as-a-Judge. We also evaluate the robustness of the global policies to text perturbations and adversarial attacks. Finally, we conduct a user study to assess user understanding of and satisfaction with the global policies.
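To make the cluster-summarize-verify loop concrete, here is a minimal sketch of a GloVE-style condensation step. Everything in it is an assumption for illustration: the rule representation (a concept set paired with a judge label), the cluster-by-label shortcut, the `fidelity` threshold, and all function names are hypothetical, not the paper's actual CLoVE/GloVE algorithms.

```python
from collections import Counter

def summarize(cluster):
    """Summarize a cluster of (concepts, label) local rules into one candidate rule."""
    concept_counts = Counter(c for concepts, _ in cluster for c in concepts)
    labels = Counter(label for _, label in cluster)
    # Keep concepts that appear in at least half of the cluster's rules.
    core = {c for c, n in concept_counts.items() if n >= len(cluster) / 2}
    return frozenset(core), labels.most_common(1)[0][0]

def fidelity(rule, examples, judge):
    """Fraction of matching examples on which the rule agrees with the judge."""
    concepts, label = rule
    matches = [ex for ex in examples if concepts <= ex["concepts"]]
    if not matches:
        return 0.0
    return sum(judge(ex) == label for ex in matches) / len(matches)

def glove_like(local_rules, examples, judge, threshold=0.9):
    """Condense local rules into a global policy of verified rules (toy version)."""
    # Crude clustering stand-in: group local rules by their predicted label.
    clusters = {}
    for concepts, label in local_rules:
        clusters.setdefault(label, []).append((concepts, label))
    policy = []
    for cluster in clusters.values():
        candidate = summarize(cluster)
        # Verification step: keep only candidates faithful to the judge.
        if fidelity(candidate, examples, judge) >= threshold:
            policy.append(candidate)
    return policy

# Toy usage with a stand-in judge that flags any text containing a "threat" concept.
judge = lambda ex: "harmful" if "threat" in ex["concepts"] else "safe"
local_rules = [
    (frozenset({"threat", "profanity"}), "harmful"),
    (frozenset({"threat"}), "harmful"),
    (frozenset({"greeting"}), "safe"),
]
examples = [
    {"concepts": frozenset({"threat", "profanity"})},
    {"concepts": frozenset({"threat"})},
    {"concepts": frozenset({"greeting"})},
]
print(glove_like(local_rules, examples, judge))
```

The sketch runs one pass of summarize-then-verify per cluster; the paper's iterative version would presumably refine or re-cluster rules that fail verification rather than simply discarding them.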

Country of Origin
🇺🇸 United States

Page Count
12 pages

Category
Computer Science:
Computation and Language