On Evaluating LLM Alignment by Evaluating LLMs as Judges
By: Yixin Liu, Pengfei Liu, Arman Cohan
Potential Business Impact:
Tests whether an AI follows your wishes without having to read its answers.
Alignment with human preferences is an important evaluation aspect of large language models (LLMs), requiring them to be helpful, honest, safe, and to precisely follow human instructions. Evaluating LLMs' alignment typically involves directly assessing their open-ended responses, which requires human annotators or strong LLM judges. Conversely, LLMs themselves have also been extensively evaluated as judges for assessing alignment. In this work, we examine the relationship between LLMs' generation and evaluation capabilities in aligning with human preferences. To this end, we first conduct a comprehensive analysis of the generation-evaluation consistency (GE-consistency) among various LLMs, revealing a strong correlation between their generation and evaluation capabilities when evaluated by a strong LLM preference oracle. Utilizing this finding, we propose a benchmarking paradigm that measures LLM alignment with human preferences without directly evaluating their generated outputs, instead assessing LLMs in their role as evaluators. Our evaluation shows that our proposed benchmark, AlignEval, matches or surpasses widely used automatic LLM evaluation benchmarks, such as AlpacaEval and Arena-Hard, in capturing human preferences when ranking LLMs. Our study offers valuable insights into the connection between LLMs' generation and evaluation capabilities, and introduces a benchmark that assesses alignment without directly evaluating model outputs.
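To make the two ideas in the abstract concrete, here is a minimal, hypothetical sketch: it is not the paper's released code or exact protocol. The model names, scores, and agreement metric are invented for illustration; the sketch only shows (a) measuring GE-consistency as a rank correlation between generation quality and judging quality, and (b) an AlignEval-style ranking that orders models by how well they judge rather than by their own outputs.

```python
# Illustrative sketch only; all model names and numbers are hypothetical.
from scipy.stats import spearmanr

# Hypothetical win rates of each model's *generations* against a baseline,
# as scored by a strong preference oracle (e.g., a top-tier LLM judge).
generation_win_rate = {
    "model_a": 0.62,
    "model_b": 0.55,
    "model_c": 0.48,
    "model_d": 0.35,
}

# Hypothetical agreement rates of each model acting as a *judge*:
# the fraction of pairwise preference decisions matching the oracle's labels.
judge_agreement = {
    "model_a": 0.81,
    "model_b": 0.74,
    "model_c": 0.69,
    "model_d": 0.58,
}

models = sorted(generation_win_rate)

# (a) GE-consistency: rank correlation between generation quality and
# evaluation (judging) quality across models.
rho, _ = spearmanr(
    [generation_win_rate[m] for m in models],
    [judge_agreement[m] for m in models],
)
print(f"GE-consistency (Spearman rho): {rho:.2f}")

# (b) AlignEval-style ranking: order models by judging ability alone,
# without scoring their own generations at all.
ranking = sorted(models, key=judge_agreement.get, reverse=True)
print("Ranking by judging ability:", ranking)
```

Under the paper's finding that generation and evaluation capabilities correlate strongly, a ranking of the second kind should track a ranking of the first, which is what allows alignment to be benchmarked without reading model outputs.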
Similar Papers
LLM-as-a-Judge: Rapid Evaluation of Legal Document Recommendation for Retrieval-Augmented Generation
Computation and Language
Lets computers judge legal AI work fairly.
An Empirical Study of LLM-as-a-Judge: How Design Choices Impact Evaluation Reliability
Computation and Language
Helps computers judge other computers' answers.
Neither Valid nor Reliable? Investigating the Use of LLMs as Judges
Computation and Language
Questions whether AI judges of writing can be trusted.