LLM-VPRF: Large Language Model Based Vector Pseudo Relevance Feedback
By: Hang Li, Shengyao Zhuang, Bevan Koopman, and others
Potential Business Impact:
Helps AI systems find information better, even when built on very large models.
Vector Pseudo Relevance Feedback (VPRF) has shown promising results in improving BERT-based dense retrieval systems through iterative refinement of query representations. This paper investigates whether VPRF generalizes to Large Language Model (LLM) based dense retrievers. We introduce LLM-VPRF and evaluate its effectiveness across multiple benchmark datasets, analyzing how different LLMs affect the feedback mechanism. Our results demonstrate that VPRF's benefits successfully extend to LLM architectures, establishing it as a robust technique for enhancing dense retrieval performance regardless of the underlying model. This work bridges the gap between VPRF with traditional BERT-based dense retrievers and VPRF with modern LLM-based retrievers, while providing insights into future directions.
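The core mechanism the abstract describes — refining a query vector using the embeddings of its top-ranked results — can be sketched roughly as follows. This is a minimal illustration of a Rocchio-style vector PRF step over dense embeddings, not the paper's exact method; the function name `vprf_refine` and the interpolation weights `alpha`/`beta` are illustrative assumptions.

```python
import numpy as np

def vprf_refine(query_vec, doc_vecs, k=3, alpha=1.0, beta=0.5):
    """One round of vector pseudo relevance feedback (illustrative sketch).

    query_vec: (d,) query embedding from a dense retriever.
    doc_vecs:  (n, d) corpus document embeddings.
    k:         number of top-ranked documents assumed relevant.
    alpha/beta: interpolation weights (illustrative defaults).
    """
    # Score documents by inner product with the query embedding.
    scores = doc_vecs @ query_vec
    # Take the top-k documents as pseudo-relevant feedback.
    topk = np.argsort(scores)[::-1][:k]
    feedback = doc_vecs[topk].mean(axis=0)
    # Interpolate the original query with the feedback centroid.
    refined = alpha * query_vec + beta * feedback
    # Re-normalize so the refined vector can be reused for retrieval.
    return refined / np.linalg.norm(refined)
```

In LLM-VPRF, the same refinement would be applied to embeddings produced by an LLM-based dense retriever instead of a BERT-based one; the feedback step itself is model-agnostic, which is what makes the generalization question natural to ask.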
Similar Papers
A Little More Like This: Text-to-Image Retrieval with Vision-Language Models Using Relevance Feedback
CV and Pattern Recognition
Improves image search by learning from results.
Pseudo Relevance Feedback is Enough to Close the Gap Between Small and Large Dense Retrieval Models
Information Retrieval
Helps small AI search models match big ones.
LREF: A Novel LLM-based Relevance Framework for E-commerce
Information Retrieval
Helps online stores show you better stuff.