Constructing and Evaluating Declarative RAG Pipelines in PyTerrier
By: Craig Macdonald, Jinyuan Fang, Andrew Parry, and more
Potential Business Impact:
Builds better search answers from many documents.
Search engines often follow a pipeline architecture, in which complex but effective reranking components refine the results of an initial retrieval. Retrieval augmented generation (RAG) is an exciting application of the pipeline architecture, in which the final component generates a coherent answer for the user from the retrieved documents. In this demo paper, we describe how such RAG pipelines can be formulated in the declarative PyTerrier architecture, and the advantages of doing so. Our PyTerrier-RAG extension for PyTerrier provides easy access to standard RAG datasets and evaluation measures, state-of-the-art LLM readers, and, through PyTerrier's unique operator notation, easy-to-build pipelines. We demonstrate the succinctness of indexing and RAG pipelines on standard datasets (including Natural Questions), and how to build on the larger PyTerrier ecosystem with state-of-the-art sparse, learned-sparse, and dense retrievers, and other neural rankers.
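The declarative style the abstract describes rests on composing pipeline stages with an overloaded operator. As a rough illustration of that pattern (not the actual PyTerrier-RAG API; all class and stage names below are invented for the sketch), a retriever, a reranker, and a reader can be chained with `>>` so that the pipeline is declared once and then applied to a query:

```python
# Minimal sketch of declarative pipeline composition via an overloaded
# ">>" operator, in the spirit of PyTerrier's transformer algebra.
# All names here are illustrative, not the real PyTerrier / PyTerrier-RAG API.

class Transformer:
    def __init__(self, fn, name):
        self.fn = fn
        self.name = name

    def transform(self, data):
        return self.fn(data)

    def __rshift__(self, other):
        # "a >> b" builds a new transformer applying a, then b.
        return Transformer(
            lambda d: other.transform(self.transform(d)),
            f"{self.name} >> {other.name}",
        )

# Toy stages standing in for retrieval, reranking, and an LLM reader.
retriever = Transformer(lambda q: [f"doc about {q}", f"another doc about {q}"], "retrieve")
reranker = Transformer(lambda docs: sorted(docs), "rerank")
reader = Transformer(lambda docs: "Answer drawn from: " + "; ".join(docs), "read")

# The pipeline is declared up front, then applied to a query.
rag_pipeline = retriever >> reranker >> reader
answer = rag_pipeline.transform("natural questions")
print(answer)
```

The appeal of this design is that each stage stays independently testable and swappable: replacing the toy retriever with a sparse, learned-sparse, or dense one changes a single term in the `>>` expression rather than the surrounding control flow.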
Similar Papers
RAG Without the Lag: Interactive Debugging for Retrieval-Augmented Generation Pipelines
Human-Computer Interaction
Helps AI assistants find correct answers faster.
Never Come Up Empty: Adaptive HyDE Retrieval for Improving LLM Developer Support
Software Engineering
Makes computer helpers give better, true answers.
All for law and law for all: Adaptive RAG Pipeline for Legal Research
Computation and Language
Helps lawyers find correct legal answers faster.