HypoEval: Hypothesis-Guided Evaluation for Natural Language Generation
By: Mingxuan Li, Hanchen Li, Chenhao Tan
Potential Business Impact:
Helps computers judge writing better with less help.
Large language models (LLMs) have demonstrated great potential for automating the evaluation of natural language generation. Previous LLM-as-a-judge frameworks fall short in two ways: they either use a zero-shot setting without consulting any human input, which leads to low alignment, or fine-tune LLMs on labeled data, which requires a non-trivial number of samples. Moreover, previous methods often provide little reasoning behind automated evaluations. In this paper, we propose HypoEval, a Hypothesis-guided Evaluation framework, which first uses a small corpus of human evaluations to generate more detailed rubrics for human judgments and then incorporates a checklist-like approach that combines the LLM's assigned scores on each decomposed dimension into an overall score. With only 30 human evaluations, HypoEval achieves state-of-the-art alignment with both human rankings (Spearman correlation) and human scores (Pearson correlation), on average outperforming G-Eval by 11.86% and a fine-tuned Llama-3.1-8B-Instruct trained on at least 3 times more human evaluations by 11.95%. Furthermore, we conduct systematic studies to assess the robustness of HypoEval, highlighting its effectiveness as a reliable and interpretable automated evaluation framework.
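To make the checklist-like aggregation concrete, below is a minimal sketch of how scores on decomposed rubric dimensions might be combined into an overall score. The function names, the 1-5 scale, the example rubrics, and the simple averaging rule are all illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: dimension names, scale, and averaging rule are assumptions.
from typing import Callable, Dict, List


def hypoeval_overall_score(
    text: str,
    rubric_dimensions: Dict[str, str],       # dimension name -> rubric text (derived from ~30 human evaluations)
    llm_judge: Callable[[str, str], float],  # hypothetical judge: (text, rubric) -> score on a 1-5 scale
) -> float:
    """Score each decomposed dimension with the LLM judge, then combine into one overall score."""
    per_dimension: List[float] = [
        llm_judge(text, rubric) for rubric in rubric_dimensions.values()
    ]
    # Assumed combination rule: a simple mean over dimension scores.
    return sum(per_dimension) / len(per_dimension)


if __name__ == "__main__":
    # Stub judge so the sketch runs without any LLM backend.
    fake_judge = lambda text, rubric: 4.0
    rubrics = {
        "coherence": "Do the ideas follow a logical order ...",
        "coverage": "Are the key facts from the source included ...",
    }
    print(hypoeval_overall_score("example generated summary", rubrics, fake_judge))
```

In practice the judge would be an LLM prompted with one rubric at a time, so the per-dimension scores also serve as interpretable reasoning for the overall score.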
Similar Papers
Multi-Agent LLM Judge: automatic personalized LLM judge design for evaluating natural language generation applications
Computation and Language
Helps computers judge writing better than people.
RocketEval: Efficient Automated LLM Evaluation via Grading Checklist
Computation and Language
Lets computers judge other computers cheaper and better.
GraphEval: A Lightweight Graph-Based LLM Framework for Idea Evaluation
Machine Learning (CS)
Helps computers judge ideas better and spot copying.