Log Probability Tracking of LLM APIs
By: Timothée Chauvin, Erwan Le Merrer, François Taïani, and more
Potential Business Impact:
Checks whether AI language models served through an API are changed without notice.
When using an LLM through an API provider, users expect the served model to remain consistent over time, a property crucial for the reliability of downstream applications and the reproducibility of research. Existing audit methods are too costly to apply at regular time intervals to the wide range of available LLM APIs. This means that model updates are left largely unmonitored in practice. In this work, we show that while LLM log probabilities (logprobs) are usually non-deterministic, they can still be used as the basis for cost-effective continuous monitoring of LLM APIs. We apply a simple statistical test based on the average value of each token logprob, requesting only a single token of output. This is enough to detect changes as small as one step of fine-tuning, making this approach more sensitive than existing methods while being 1,000x cheaper. We introduce the TinyChange benchmark as a way to measure the sensitivity of audit methods in the context of small, realistic model changes.
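The sketch below illustrates the general idea of the approach, under stated assumptions: it queries an OpenAI-compatible API for a single output token with logprobs enabled, repeats the same request to account for logprob non-determinism, and compares the average logprob against a reference snapshot. The model name, prompt, sample size, significance level, and the use of Welch's t-test are illustrative choices, not the paper's exact protocol.

```python
"""Minimal sketch of logprob-based LLM API monitoring.

Assumptions (not taken from the paper): an OpenAI-compatible API that
returns logprobs for chat completions, and Welch's t-test as the
statistical test; the paper's exact statistic and thresholds may differ.
"""
from openai import OpenAI
from scipy import stats

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def single_token_logprob(model: str, prompt: str) -> float:
    """Request one output token and return its log probability."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1,
        temperature=0,
        logprobs=True,
    )
    return resp.choices[0].logprobs.content[0].logprob


def collect(model: str, prompt: str, n: int = 30) -> list[float]:
    """Repeat the same single-token request to sample the (noisy) logprob."""
    return [single_token_logprob(model, prompt) for _ in range(n)]


def model_changed(reference: list[float], current: list[float],
                  alpha: float = 0.01) -> bool:
    """Flag a change if the mean logprob shifted beyond sampling noise."""
    _, p_value = stats.ttest_ind(reference, current, equal_var=False)
    return p_value < alpha


# Usage: take a reference snapshot once, then re-run `collect` periodically
# and test the new samples against it.
# reference = collect("gpt-4o-mini", "The capital of France is")
# ...later...
# current = collect("gpt-4o-mini", "The capital of France is")
# print(model_changed(reference, current))
```

Because each probe requests only one output token, the per-check cost stays tiny, which is what makes continuous monitoring across many APIs affordable.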
Similar Papers
Are You Getting What You Pay For? Auditing Model Substitution in LLM APIs
Computation and Language
Checks if AI companies cheat with cheaper models.
You've Changed: Detecting Modification of Black-Box Large Language Models
Computation and Language
Detects when an AI language model changes unexpectedly.
Evaluating the Use of Large Language Models as Synthetic Social Agents in Social Science Research
Artificial Intelligence
Makes AI better at guessing, not knowing for sure.