Can we Evaluate RAGs with Synthetic Data?
By: Jonas van Elburg, Peter van der Putten, Maarten Marx
Potential Business Impact:
Helps test AI question-answering systems without human-made test data, but this does not work in every case.
We investigate whether synthetic question-answer (QA) data generated by large language models (LLMs) can serve as an effective proxy for human-labeled benchmarks when such benchmarks are unavailable. We assess the reliability of synthetic benchmarks across two experiments: one varying retriever parameters while keeping the generator fixed, and another varying the generator while keeping retriever parameters fixed. Across four datasets, two open-domain and two proprietary, we find that synthetic benchmarks reliably rank RAG systems that differ in retriever configuration, aligning well with human-labeled baselines. However, they fail to produce consistent rankings when comparing generator architectures. This breakdown may arise from a combination of task mismatch between the synthetic and human benchmarks and stylistic bias favoring certain generators.
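The core question in this kind of study is whether the synthetic benchmark orders the candidate RAG configurations the same way the human-labeled benchmark does. A minimal sketch of that agreement check is shown below, using rank correlation (Kendall's tau via scipy) over hypothetical per-system scores; the scores and configuration names are illustrative assumptions, not the paper's protocol or results.

```python
# Sketch: do a synthetic and a human-labeled benchmark rank the same
# RAG configurations alike? Scores are illustrative placeholders only.
from scipy.stats import kendalltau

# Hypothetical mean answer-quality scores per RAG configuration.
synthetic_scores = {"bm25_top3": 0.61, "bm25_top10": 0.66,
                    "dense_top3": 0.70, "dense_top10": 0.74}
human_scores = {"bm25_top3": 0.58, "bm25_top10": 0.64,
                "dense_top3": 0.71, "dense_top10": 0.73}

systems = sorted(synthetic_scores)  # fixed system order for both score lists
tau, p_value = kendalltau(
    [synthetic_scores[s] for s in systems],
    [human_scores[s] for s in systems],
)
# tau close to 1 means the two benchmarks rank the systems consistently.
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f})")
```

A high correlation in the retriever experiment and a low or unstable one in the generator experiment would correspond to the pattern the abstract describes.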
Similar Papers
LiveRAG: A diverse Q&A dataset with varying difficulty level for RAG evaluation
Computation and Language
Tests how well AI answers questions of different difficulty.
Aligning LLMs for the Classroom with Knowledge-Based Retrieval -- A Comparative RAG Study
Artificial Intelligence
Makes AI answers for classrooms more truthful.
Diverse And Private Synthetic Datasets Generation for RAG evaluation: A multi-agent framework
Computation and Language
Makes AI safer by hiding private info.