Score: 2

LiveRAG: A diverse Q&A dataset with varying difficulty level for RAG evaluation

Published: November 18, 2025 | arXiv ID: 2511.14531v1

By: David Carmel, Simone Filice, Guy Horowitz, and more

Potential Business Impact:

Provides a benchmark for systematically testing how well AI systems answer questions.

Business Areas:
Q&A Community and Lifestyle

With Retrieval Augmented Generation (RAG) becoming increasingly prominent in generative AI solutions, there is an emerging need to systematically evaluate their effectiveness. We introduce the LiveRAG benchmark, a publicly available dataset of 895 synthetic questions and answers designed to support systematic evaluation of RAG-based Q&A systems. The benchmark is derived from the one used during the SIGIR'2025 LiveRAG Challenge, where competitors were evaluated under strict time constraints. It is augmented with information that was not made available to competitors during the Challenge, such as the ground-truth answers and their associated supporting claims, which were used to evaluate competitors' answers. In addition, each question is assigned estimated difficulty and discriminability scores, derived by applying an Item Response Theory (IRT) model to competitors' responses. Our analysis highlights the diversity of the benchmark's questions, the wide range of their difficulty levels, and their usefulness in differentiating between system capabilities. We hope the LiveRAG benchmark will help the community advance RAG research, conduct systematic evaluation, and develop more robust Q&A systems.
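
For context on the difficulty and discriminability scores: these are standard item parameters in IRT, and a common formulation is the two-parameter logistic (2PL) model. The exact IRT variant used by the authors is not specified in this summary, so the following is an illustrative sketch only. If system $j$ has latent ability $\theta_j$ and question $i$ has difficulty $b_i$ and discriminability $a_i$, the probability that system $j$ answers question $i$ correctly is modeled as

$P(y_{ij} = 1 \mid \theta_j, a_i, b_i) = \frac{1}{1 + e^{-a_i(\theta_j - b_i)}}$

Intuitively, a higher $b_i$ means fewer systems answer the question correctly (harder question), while a higher $a_i$ means the question separates strong systems from weak ones more sharply (more discriminative).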


Page Count
14 pages

Category
Computer Science:
Computation and Language