LiveRAG: A diverse Q&A dataset with varying difficulty level for RAG evaluation
By: David Carmel, Simone Filice, Guy Horowitz, and more
Potential Business Impact:
Provides a benchmark for testing how well AI systems answer questions.
With Retrieval Augmented Generation (RAG) becoming increasingly prominent in generative AI solutions, there is an emerging need to systematically evaluate their effectiveness. We introduce the LiveRAG benchmark, a publicly available dataset of 895 synthetic questions and answers designed to support systematic evaluation of RAG-based Q&A systems. This synthetic benchmark is derived from the one used during the SIGIR'2025 LiveRAG Challenge, where competitors were evaluated under strict time constraints. It is augmented with information that was not made available to competitors during the Challenge, such as the ground-truth answers, together with their associated supporting claims, which were used to evaluate competitors' answers. In addition, each question is associated with estimated difficulty and discriminability scores, derived by applying an Item Response Theory (IRT) model to competitors' responses. Our analysis highlights the diversity of the benchmark's questions, the wide range of their difficulty levels, and their usefulness in differentiating between system capabilities. We hope the LiveRAG benchmark will help the community advance RAG research, conduct systematic evaluation, and develop more robust Q&A systems.
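To make the difficulty and discriminability scores concrete, the sketch below shows one common way such per-question parameters are estimated: fitting a two-parameter logistic (2PL) IRT model to a binary matrix of system-versus-question outcomes. This is an illustrative assumption, not the authors' actual pipeline; the toy `responses` matrix, the function names, and the unconstrained joint maximum-likelihood fit are all hypothetical.

```python
# Minimal 2PL IRT sketch (assumed setup, not the paper's code): estimate per-question
# difficulty (b) and discriminability (a) from binary correctness outcomes.
import numpy as np
from scipy.optimize import minimize

def p_correct(theta, a, b):
    """2PL item characteristic curve: probability that a system with ability
    `theta` answers a question with discrimination `a` and difficulty `b` correctly."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical outcomes: rows = competing systems, columns = benchmark questions
# (1 = answer judged correct, 0 = incorrect).
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
])
n_systems, n_items = responses.shape

def neg_log_likelihood(params):
    """Joint negative log-likelihood over system abilities and item parameters."""
    theta = params[:n_systems]
    a = params[n_systems:n_systems + n_items]
    b = params[n_systems + n_items:]
    p = p_correct(theta[:, None], a[None, :], b[None, :])
    p = np.clip(p, 1e-9, 1 - 1e-9)  # guard against log(0)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

# Start from neutral abilities, unit discrimination, and zero difficulty.
x0 = np.concatenate([np.zeros(n_systems), np.ones(n_items), np.zeros(n_items)])
fit = minimize(neg_log_likelihood, x0, method="L-BFGS-B")
a_hat = fit.x[n_systems:n_systems + n_items]   # estimated discriminability per question
b_hat = fit.x[n_systems + n_items:]            # estimated difficulty per question
print("difficulty:", b_hat.round(2), "discriminability:", a_hat.round(2))
```

In practice, IRT fits usually add priors or constraints to resolve the scale and location indeterminacy of the joint likelihood; the unconstrained fit here is kept deliberately small for illustration.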
Similar Papers
RAGtifier: Evaluating RAG Generation Approaches of State-of-the-Art RAG Systems for the SIGIR LiveRAG Competition
Information Retrieval
Makes AI answer questions more truthfully.
Diverse And Private Synthetic Datasets Generation for RAG evaluation: A multi-agent framework
Computation and Language
Makes AI safer by hiding private info.
Can we Evaluate RAGs with Synthetic Data?
Computation and Language
Examines whether synthetic data can reliably evaluate question-answering systems.