Drawing Conclusions from Draws: Rethinking Preference Semantics in Arena-Style LLM Evaluation
By: Raphael Tang, Crystina Zhang, Wenyan Li, and more
Potential Business Impact:
Makes AI rating systems fairer by understanding "draws."
In arena-style evaluation of large language models (LLMs), two LLMs respond to a user query, and the user chooses the winning response or deems the "battle" a draw, resulting in an adjustment to the ratings of both models. The prevailing approach for modeling these rating dynamics is to view battles as two-player game matches, as in chess, and apply the Elo rating system and its derivatives. In this paper, we critically examine this paradigm. Specifically, we question whether a draw genuinely means that the two models are equal and hence whether their ratings should be equalized. Instead, we conjecture that draws are more indicative of query difficulty: if the query is too easy, then both models are more likely to succeed equally. On three real-world arena datasets, we show that ignoring rating updates for draws yields a 1-3% relative increase in battle outcome prediction accuracy (which includes draws) for all four rating systems studied. Further analyses suggest that draws occur more often for queries rated as very easy and for those rated as highly objective, with risk ratios of 1.37 and 1.35, respectively. We recommend that future rating systems reconsider existing draw semantics and account for query properties in rating updates.
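To make the core idea concrete, here is a minimal sketch of a standard Elo update with an optional policy that skips rating changes on draws, in the spirit of the paper's proposal. The function names (`expected_score`, `elo_update`, `risk_ratio`), the K-factor of 32, and the 400-point logistic scale are illustrative assumptions following common Elo conventions, not the authors' implementation.

```python
# Sketch only: a conventional Elo update with a skip_draws policy inspired by
# the paper's finding that ignoring draw updates improves outcome prediction.

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under the Elo logistic model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, outcome: str,
               k: float = 32.0, skip_draws: bool = True) -> tuple[float, float]:
    """Update both ratings after one battle.

    outcome: "a" if model A wins, "b" if model B wins, "draw" otherwise.
    With skip_draws=True, a draw leaves both ratings unchanged, reflecting the
    conjecture that draws signal query difficulty rather than model parity.
    """
    if outcome == "draw" and skip_draws:
        return r_a, r_b  # no update: a draw is not treated as evidence of equality
    s_a = {"a": 1.0, "b": 0.0, "draw": 0.5}[outcome]
    e_a = expected_score(r_a, r_b)
    return r_a + k * (s_a - e_a), r_b + k * (e_a - s_a)

def risk_ratio(draw_rate_group: float, draw_rate_rest: float) -> float:
    """Risk ratio of a draw in a query group vs. the remaining queries
    (the paper reports ~1.37 for very easy and ~1.35 for highly objective queries)."""
    return draw_rate_group / draw_rate_rest

# Example: with skip_draws=True a draw leaves ratings untouched; the classical
# update (skip_draws=False) would pull the two ratings slightly toward each other.
print(elo_update(1520.0, 1480.0, "draw"))                      # (1520.0, 1480.0)
print(elo_update(1520.0, 1480.0, "draw", skip_draws=False))    # ~(1518.2, 1481.8)
print(elo_update(1520.0, 1480.0, "a"))                         # winner gains, loser loses
```

The zero-sum form of the update (the second model's change is the negative of the first's) mirrors standard Elo; only the draw branch is modified, which is the single change the paper evaluates against full Elo-style updates.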
Similar Papers
Rating competitors in games with strength-dependent tie probabilities
Methodology
Makes game ratings fairer by counting ties.
Who is a Better Player: LLM against LLM
Artificial Intelligence
Tests AI's smartness by playing board games.
am-ELO: A Stable Framework for Arena-based LLM Evaluation
Artificial Intelligence
Makes AI judging fairer and more reliable.