LLM-Assisted Pseudo-Relevance Feedback
By: David Otero, Javier Parapar
Potential Business Impact:
Helps search engines find better results.
Query expansion is a long-standing technique for mitigating vocabulary mismatch in ad hoc Information Retrieval. Pseudo-relevance feedback (PRF) methods such as RM3 estimate an expanded query model from the top-ranked documents, but remain vulnerable to topic drift when the early results include noisy or tangential content. Recent approaches instead prompt Large Language Models (LLMs) to generate synthetic expansions or query variants; while effective, these methods risk hallucinations and misalignment with collection-specific terminology. We propose a hybrid alternative that preserves the robustness and interpretability of classical PRF while leveraging the semantic judgement of an LLM. Our method inserts an LLM-based filtering stage prior to RM3 estimation: the LLM judges the documents in the initial top-$k$ ranking, and RM3 is computed only over those it accepts as relevant. This simple intervention outperforms blind PRF and a strong baseline across several datasets and metrics.
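The pipeline is straightforward to sketch. Below is a minimal, self-contained Python illustration of the idea the abstract describes, under stated assumptions: take an initial ranking, let a judge accept or reject each top-$k$ document, and estimate an RM3-style expansion model only from the accepted ones. The names (rm3_expansion, llm_filtered_rm3, the judge callable) are illustrative, not the authors' code, and the toy overlap judge merely stands in for a real LLM relevance call.

```python
from collections import Counter
from typing import Callable, Dict, List


def rm3_expansion(query_terms: List[str], docs: List[List[str]],
                  doc_scores: List[float], n_terms: int = 10,
                  lam: float = 0.5) -> Dict[str, float]:
    """Estimate an RM3-style expanded query model.

    RM1 scores each word w by sum_d P(w|d) * weight(d); RM3 then
    interpolates RM1 with the original query model using weight lam.
    """
    rm1: Counter = Counter()
    total = sum(doc_scores) or 1.0
    for doc, score in zip(docs, doc_scores):
        tf = Counter(doc)
        dlen = sum(tf.values()) or 1
        for w, c in tf.items():
            rm1[w] += (c / dlen) * (score / total)  # P(w|d) * doc weight
    # Keep only the n_terms most probable expansion terms, renormalised.
    top = dict(rm1.most_common(n_terms))
    z = sum(top.values()) or 1.0
    rm1_model = {w: p / z for w, p in top.items()}
    # Uniform model over the original query terms.
    qmodel = {w: 1.0 / len(set(query_terms)) for w in set(query_terms)}
    # RM3: linear interpolation of query model and relevance model.
    vocab = set(qmodel) | set(rm1_model)
    return {w: lam * qmodel.get(w, 0.0) + (1 - lam) * rm1_model.get(w, 0.0)
            for w in vocab}


def llm_filtered_rm3(query_terms, ranked_docs, scores,
                     judge: Callable[[List[str], List[str]], bool], **kw):
    """RM3 computed only over the top-k documents the judge accepts."""
    kept = [(d, s) for d, s in zip(ranked_docs, scores)
            if judge(query_terms, d)]
    if not kept:  # nothing judged relevant: fall back to the original query
        return {w: 1.0 / len(set(query_terms)) for w in set(query_terms)}
    docs, doc_scores = zip(*kept)
    return rm3_expansion(query_terms, list(docs), list(doc_scores), **kw)


# Toy usage: a keyword-overlap judge stands in for the LLM relevance call.
judge = lambda q, d: len(set(q) & set(d)) > 0
expanded = llm_filtered_rm3(
    ["query", "expansion"],
    [["query", "terms", "model"], ["unrelated", "noise"]],
    [2.0, 1.5],
    judge,
)
print(sorted(expanded.items(), key=lambda kv: -kv[1]))
```

In the paper's setting the judge would be an LLM prompted with the query and document text (e.g. a point-wise yes/no relevance prompt); the exact prompt and model are not reproduced here.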
Similar Papers
Revisiting Feedback Models for HyDE
Information Retrieval
Makes search engines find better answers using smart words.
Generative Query Expansion with Multilingual LLMs for Cross-Lingual Information Retrieval
Information Retrieval
Helps computers find information in different languages.
Generalized Pseudo-Relevance Feedback
Information Retrieval
Improves search results by learning from what you find.