Can Instructed Retrieval Models Really Support Exploration?
By: Piyush Maheshwari, Sheshera Mysore, Hamed Zamani
Potential Business Impact:
Helps search engines understand your changing questions better.
Exploratory searches are characterized by under-specified goals and evolving query intents. In such scenarios, retrieval models that can capture user-specified nuances in query intent and adapt results accordingly are desirable; instruction-following retrieval models promise such a capability. In this work, we evaluate instructed retrievers for the prevalent yet under-explored application of aspect-conditional seed-guided exploration using an expert-annotated test collection. We evaluate both recent LLMs fine-tuned for instructed retrieval and general-purpose LLMs prompted for ranking with the highly performant Pairwise Ranking Prompting method. We find that the best instructed retrievers improve ranking relevance compared to instruction-agnostic approaches. However, we also find that instruction-following performance, crucial to the user experience of interacting with these models, does not mirror the gains in ranking relevance: models are often insensitive to instructions or respond to them counter-intuitively. Our results indicate that while users may benefit from current instructed retrievers over instruction-agnostic models, they may not benefit from using them in long-running exploratory sessions that demand greater sensitivity to instructions.
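To make the prompted-ranking baseline concrete, here is a minimal sketch of an all-pairs Pairwise Ranking Prompting (PRP) setup: an LLM is asked, for each pair of candidate documents, which one better satisfies the query under the given instruction, and documents are ranked by their number of pairwise wins. The `llm_choose` function is a hypothetical placeholder for a real LLM call, and the win-counting aggregation is one common PRP variant, not necessarily the exact configuration used in the paper.

```python
# Sketch of all-pairs Pairwise Ranking Prompting (PRP) for instructed ranking.
# `llm_choose` is a hypothetical stand-in for an actual LLM client call.

from itertools import combinations

def llm_choose(query: str, instruction: str, doc_a: str, doc_b: str) -> str:
    """Hypothetical LLM call: prompt the model with the query, the user's
    instruction, and two candidate documents; return 'A' if doc_a is judged
    more relevant under the instruction, else 'B'. Replace with a real
    LLM client in practice."""
    raise NotImplementedError

def prp_rank(query: str, instruction: str, docs: list[str]) -> list[str]:
    # Each document earns one point per pairwise comparison it wins;
    # the final ranking sorts documents by total wins, descending.
    wins = {i: 0 for i in range(len(docs))}
    for i, j in combinations(range(len(docs)), 2):
        choice = llm_choose(query, instruction, docs[i], docs[j])
        wins[i if choice == "A" else j] += 1
    return [docs[i] for i in sorted(wins, key=wins.get, reverse=True)]
```

Note that this all-pairs variant costs O(n^2) LLM calls for n candidates, which is why PRP is typically applied to rerank a short candidate list from a cheaper first-stage retriever.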
Similar Papers
Think Before You Retrieve: Learning Test-Time Adaptive Search with Small Language Models
Artificial Intelligence
Teaches small language models to find information better.
Towards Better Instruction Following Retrieval Models
Computation and Language
Helps search engines understand your exact instructions.
Exploiting Instruction-Following Retrievers for Malicious Information Retrieval
Computation and Language
Finds harmful information when asked.