Testing for LLM response differences: the case of a composite null consisting of semantically irrelevant query perturbations
By: Aranyak Acharyya, Carey E. Priebe, Hayden S. Helm
Potential Business Impact:
Tests whether an AI's answers stay the same even when the question is reworded slightly.
Given an input query, generative models such as large language models produce a random response drawn from a response distribution. Given two input queries, it is natural to ask whether their response distributions are the same. While traditional statistical hypothesis testing is designed to address this question, the response distribution induced by an input query is often sensitive to semantically irrelevant perturbations of the query, so much so that a traditional test of equality might indicate that two semantically equivalent queries induce statistically different response distributions. As a result, the outcome of the statistical test may not align with the user's requirements. In this paper, we address this misalignment by incorporating into the testing procedure a collection of semantically similar queries. In our setting, the mapping from the collection of user-defined semantically similar queries to the corresponding collection of response distributions is not known a priori and must be estimated with a fixed budget. Although the problem we address is quite general, we focus our analysis on the setting where the responses are binary, show that the proposed test is asymptotically valid and consistent, and discuss important practical considerations with respect to power and computation.
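The abstract does not spell out the test statistic, but the binary-response setting suggests the flavor of the problem. Below is a minimal illustrative sketch in Python, not the authors' procedure: it assumes a fixed sampling budget split evenly across user-supplied reworded queries and uses an intersection-union style two-proportion test against the collection of variants. The function names, counts, and budget split are hypothetical choices made for this example.

```python
# Illustrative sketch only: an intersection-union two-proportion test for a
# composite null ("the new query matches at least one semantically equivalent
# variant of the reference query"). This is NOT the paper's exact procedure;
# the sampling scheme and counts below are hypothetical.
import numpy as np
from scipy.stats import norm


def two_proportion_pvalue(k1, n1, k2, n2):
    """Two-sided two-proportion z-test p-value with a pooled standard error."""
    p1, p2 = k1 / n1, k2 / n2
    p_pool = (k1 + k2) / (n1 + n2)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    if se == 0.0:
        return 1.0  # identical degenerate samples: no evidence of a difference
    z = (p1 - p2) / se
    return 2 * norm.sf(abs(z))


def composite_null_test(counts_variants, n_variant, count_new, n_new, alpha=0.05):
    """Reject 'same distribution up to irrelevant perturbations' only if the
    new query's response rate differs significantly from EVERY variant.
    This intersection-union construction is level-alpha for the composite null."""
    pvals = [two_proportion_pvalue(k, n_variant, count_new, n_new)
             for k in counts_variants]
    return max(pvals) < alpha, max(pvals)


# Example with a fixed budget split evenly across reworded reference queries:
# counts_variants[i] = number of '1' responses among n_variant draws for the
# i-th semantically equivalent rewording of the reference query.
counts_variants = [43, 47, 41]      # hypothetical observed successes per variant
n_variant = 100                     # draws per variant
count_new, n_new = 58, 100          # hypothetical counts for the query under test
reject, p = composite_null_test(counts_variants, n_variant, count_new, n_new)
print(f"reject composite null: {reject} (max p-value = {p:.3f})")
```

Rejecting only when the test query differs significantly from every rewording is what keeps this sketch valid under the composite null: if the test query truly matches any one of the semantically equivalent variants, that single comparison caps the rejection probability at alpha.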
Similar Papers
Statistical Hypothesis Testing for Auditing Robustness in Language Models
Computation and Language
Checks if AI answers change when you change its input.
Practically significant differences between conditional distribution functions
Econometrics
Tests whether two groups differ enough to matter in practice.
Hypothesis Testing for Quantifying LLM-Human Misalignment in Multiple Choice Settings
Computers and Society
Tests whether AI models match human choices on multiple-choice questions.