Pseudo Relevance Feedback is Enough to Close the Gap Between Small and Large Dense Retrieval Models
By: Hang Li, Xiao Wang, Bevan Koopman, and more
Potential Business Impact:
Makes small AI search as good as big AI, at lower cost.
Scaling dense retrievers to larger large language model (LLM) backbones has been a dominant strategy for improving their retrieval effectiveness. However, this has substantial cost implications: larger backbones require more expensive hardware (e.g. GPUs with more memory) and lead to higher indexing and querying costs (latency, energy consumption). In this paper, we challenge this paradigm by introducing PromptPRF, a feature-based pseudo-relevance feedback (PRF) framework that enables small LLM-based dense retrievers to achieve effectiveness comparable to much larger models. PromptPRF uses LLMs to extract query-independent, structured and unstructured features (e.g., entities, summaries, chain-of-thought keywords, essay) from top-ranked documents. These features are generated offline and integrated into dense query representations via prompting, enabling efficient retrieval without additional training. Unlike prior methods such as GRF, which rely on online, query-specific generation and sparse retrieval, PromptPRF decouples feedback generation from query processing and supports dense retrievers in a fully zero-shot setting. Experiments on TREC DL and BEIR benchmarks demonstrate that PromptPRF consistently improves retrieval effectiveness and offers favourable cost-effectiveness trade-offs. We further present ablation studies to understand the role of positional feedback and analyse the interplay between feature extractor size, PRF depth, and model performance. Our findings demonstrate that with effective PRF design, scaling the retriever is not always necessary, narrowing the gap between small and large models while reducing inference cost.
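The core idea of the abstract, extracting query-independent features from top-ranked documents offline, then folding those features into the query representation via a prompt, can be sketched as below. This is a minimal illustration, not the paper's implementation: the real PromptPRF uses an LLM to generate features such as entities, summaries, and chain-of-thought keywords, and a dense LLM-based encoder; here both are replaced by trivial stand-ins (`extract_features`, `build_prf_prompt` are hypothetical names).

```python
# Minimal sketch of the PromptPRF workflow (hypothetical helper names).
# Real PromptPRF prompts an LLM for structured/unstructured features and
# feeds the augmented prompt to a dense retriever; stand-ins are used here.

def extract_features(doc: str) -> dict:
    """Offline, query-independent feature extraction for one document.
    Stand-in for the paper's LLM-generated entities/summaries/keywords."""
    words = [w.strip(".,").lower() for w in doc.split()]
    keywords = sorted({w for w in words if len(w) > 4})[:5]
    return {"keywords": keywords, "summary": doc[:60]}

def build_prf_prompt(query: str, feedback_features: list) -> str:
    """Online step: fold precomputed features from the top-ranked
    documents into the query prompt. No extra LLM call or training
    is needed at query time, since features were generated offline."""
    lines = ["Query: " + query]
    for rank, feats in enumerate(feedback_features, start=1):
        lines.append(f"Feedback {rank} keywords: {', '.join(feats['keywords'])}")
        lines.append(f"Feedback {rank} summary: {feats['summary']}")
    return "\n".join(lines)

# Features are computed once per document at indexing time; at query time
# only the features of the initial top-k documents are looked up.
doc = "Dense retrieval models encode queries and documents into vectors."
features = extract_features(doc)
augmented = build_prf_prompt("what is dense retrieval", [features])
```

The augmented prompt would then be encoded by the (small) dense retriever in place of the raw query, which is what lets feedback generation be decoupled from query processing.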
Similar Papers
LLM-VPRF: Large Language Model Based Vector Pseudo Relevance Feedback
Information Retrieval
Makes AI find information better, even with big brains.
Generalized Pseudo-Relevance Feedback
Information Retrieval
Improves search results by learning from what you find.
A Little More Like This: Text-to-Image Retrieval with Vision-Language Models Using Relevance Feedback
CV and Pattern Recognition
Improves image search by learning from results.