Let's Measure Information Step-by-Step: LLM-Based Evaluation Beyond Vibes
By: Zachary Robertson, Sanmi Koyejo
Potential Business Impact:
Makes AI tell truth, not lies.
We develop mechanisms for evaluating AI systems without ground truth by exploiting a connection between gaming resistance and output quality. The data processing inequality ensures that post-hoc attempts to game a metric degrade both information content and task performance. We prove that f-mutual information measures are the unique gaming-resistant mechanisms under natural conditions, with the overseer acting as an agent. While Shannon mutual information faces exponential sample complexity, bounded measures such as total variation distance remain tractable. Empirically, across ten domains from translation to peer review, all information-theoretic mechanisms achieve perfect discrimination (d > 0.5) between faithful and strategic agents. In contrast, LLM judges exhibit systematic evaluation inversion, preferring fabricated content over accurate summaries. Our mechanisms show 10-100x better robustness to adversarial manipulation than current practices. We also find that performance follows an inverted-U curve with compression ratio, peaking at 10:1, where agent responses exhibit optimal information diversity (3 effective dimensions), giving a bias-variance perspective on when our approach is expected to be most effective.
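The bounded measure highlighted in the abstract can be illustrated with a minimal sketch. Assuming discrete agent outputs, the total-variation analogue of mutual information is the distance between the empirical joint distribution and the product of its marginals; a strategic agent whose output ignores the source scores zero, while a faithful agent scores positively. All function and variable names below are illustrative, not taken from the paper's code.

```python
from collections import Counter
from itertools import product

def tv_mutual_information(pairs):
    """Total-variation analogue of mutual information for discrete pairs:
    TV(P_XY, P_X x P_Y) = 0.5 * sum_{x,y} |p(x,y) - p(x)p(y)|.
    Bounded in [0, 1], unlike Shannon mutual information."""
    n = len(pairs)
    joint = Counter(pairs)                 # empirical joint distribution
    px = Counter(x for x, _ in pairs)      # marginal of the source
    py = Counter(y for _, y in pairs)      # marginal of the agent output
    return 0.5 * sum(
        abs(joint.get((x, y), 0) / n - (px[x] / n) * (py[y] / n))
        for x, y in product(px, py)
    )

# Toy comparison: a faithful agent echoes the source; a strategic agent
# fabricates a constant answer regardless of the source.
source = [0, 1, 0, 1, 1, 0, 1, 0]
faithful = list(zip(source, source))        # output tracks the source
strategic = list(zip(source, [1] * len(source)))  # output ignores the source

print(tv_mutual_information(faithful))   # positive: outputs carry information
print(tv_mutual_information(strategic))  # 0.0: gaming destroys information
```

This is only a finite-sample toy; the paper's point is that such bounded f-divergence measures keep reasonable sample complexity where Shannon mutual information does not.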
Similar Papers
Mutual Information Tracks Policy Coherence in Reinforcement Learning
Artificial Intelligence
Helps robots fix themselves when they break.
Financial Information Theory
Portfolio Management
Finds hidden patterns in stock market data.