Leveraging LLMs to Evaluate Usefulness of Document
By: Xingzhu Wang, Erhan Zhang, Yiqun Chen, and more
Potential Business Impact:
Lets computers judge which search results users find most useful.
The conventional Cranfield paradigm struggles to capture user satisfaction effectively, owing to the weak correlation between relevance and satisfaction and the high cost of relevance annotation when building test collections. To tackle these issues, our research explores the potential of large language models (LLMs) to generate multilevel usefulness labels for evaluation. We introduce a new user-centric evaluation framework that integrates users' search context and behavioral data into LLMs. This framework uses a cascading judgment structure designed for multilevel usefulness assessments, drawing inspiration from ordinal regression techniques. Our study demonstrates that, when well guided with context and behavioral information, LLMs can accurately evaluate usefulness, allowing our approach to surpass third-party labeling methods. Furthermore, we conduct ablation studies to investigate the influence of key components within the framework. We also apply the labels produced by our method to predict user satisfaction, with real-world experiments indicating that these labels substantially improve the performance of satisfaction prediction models.
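To illustrate the cascading judgment idea described above, here is a minimal sketch of how a multilevel usefulness label could be assembled from a sequence of binary threshold judgments, in the spirit of ordinal regression. The function names and the toy judge are illustrative assumptions, not the paper's implementation; in the actual framework each threshold judgment would be an LLM call conditioned on the user's search context and behavior.

```python
# Sketch (assumed, not the authors' code): turn a cascade of binary
# "is usefulness >= k?" judgments into one ordinal label, stopping the
# cascade at the first failed threshold, as in ordinal regression.

from typing import Callable


def cascade_usefulness_label(
    judge: Callable[[int], bool],  # judge(k) -> True if usefulness >= level k
    max_level: int = 3,
) -> int:
    """Return an ordinal usefulness label in [0, max_level]."""
    label = 0
    for k in range(1, max_level + 1):
        if judge(k):   # in the framework: an LLM prompt with context + behavior
            label = k  # threshold k passed; try the next level
        else:
            break      # cascade stops at the first failed threshold
    return label


# Toy stand-in for an LLM judge: the document is useful up to level 2.
toy_judge = lambda k: k <= 2
print(cascade_usefulness_label(toy_judge))  # -> 2
```

The cascade makes each LLM decision a simpler yes/no question than asking for a 0-3 label directly, which is one plausible reading of why the ordinal-regression-style structure helps.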
Similar Papers
LLM-Driven Usefulness Judgment for Web Search Evaluation
Information Retrieval
Helps search engines understand what you *really* need.
LLM-Driven Usefulness Labeling for IR Evaluation
Information Retrieval
Computers learn to judge search results better.
Validating LLM-Generated Relevance Labels for Educational Resource Search
Information Retrieval
Helps computers judge if school search results are good.