Score: 1

LLM-VPRF: Large Language Model Based Vector Pseudo Relevance Feedback

Published: April 2, 2025 | arXiv ID: 2504.01448v1

By: Hang Li, Shengyao Zhuang, Bevan Koopman and more

Potential Business Impact:

Improves how well AI systems find relevant information, even when they are built on very large language models.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Vector Pseudo Relevance Feedback (VPRF) has shown promising results in improving BERT-based dense retrieval systems through iterative refinement of query representations. This paper investigates the generalizability of VPRF to Large Language Model (LLM) based dense retrievers. We introduce LLM-VPRF and evaluate its effectiveness across multiple benchmark datasets, analyzing how different LLMs impact the feedback mechanism. Our results demonstrate that VPRF's benefits extend to LLM architectures, establishing it as a robust technique for enhancing dense retrieval performance regardless of the underlying model. This work bridges the gap between VPRF as applied to traditional BERT-based dense retrievers and VPRF applied to modern LLM-based retrievers, while providing insights into future directions.
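For readers unfamiliar with the mechanism, the sketch below illustrates the general shape of vector pseudo relevance feedback: the query embedding is blended with the embeddings of its top-ranked documents and retrieval is run again. The Rocchio-style interpolation, the `alpha`/`beta` weights, and the toy NumPy embeddings are illustrative assumptions for clarity, not the paper's exact formulation or code.

```python
import numpy as np

def vprf_refine(query_vec, doc_vecs, k=3, alpha=0.5, beta=0.5):
    """Rocchio-style vector PRF sketch: blend the query embedding with the
    centroid of the top-k retrieved document embeddings.
    k, alpha, and beta are illustrative defaults, not values from the paper."""
    # First-pass retrieval: rank documents by dot-product similarity.
    scores = doc_vecs @ query_vec
    top_k = doc_vecs[np.argsort(-scores)[:k]]
    # Refined query = weighted sum of original query and feedback centroid.
    return alpha * query_vec + beta * top_k.mean(axis=0)

# Toy usage: random vectors stand in for embeddings from an LLM-based retriever.
rng = np.random.default_rng(0)
docs = rng.normal(size=(100, 768))
query = rng.normal(size=768)
refined_query = vprf_refine(query, docs)
second_pass_scores = docs @ refined_query  # retrieval with the refined query
```

The same loop applies whether the embeddings come from a BERT-based encoder or an LLM-based one; the paper's question is whether the feedback step still helps in the latter case.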

Country of Origin
🇦🇺 Australia

Repos / Data Links

Page Count
13 pages

Category
Computer Science:
Information Retrieval