Asking Clarifying Questions for Preference Elicitation With Large Language Models
By: Ali Montazeralghaem, Guy Tennenholtz, Craig Boutilier, and others
Potential Business Impact:
Helps computers ask smart questions to learn what you like.
Large Language Models (LLMs) have made it possible for recommendation systems to interact with users in open-ended conversational interfaces. To personalize LLM responses, it is crucial to elicit user preferences, especially when there is limited user history. One way to obtain more information is to present clarifying questions to the user. However, generating effective sequential clarifying questions across various domains remains a challenge. To address this, we introduce a novel approach for training LLMs to ask sequential questions that reveal user preferences. Our method follows a two-stage process inspired by diffusion models. Starting from a user profile, the forward process generates clarifying questions to obtain answers and then removes those answers step by step, serving as a way to add "noise" to the user profile. The reverse process involves training a model to "denoise" the user profile by learning to ask effective clarifying questions. Our results show that our method significantly improves the LLM's proficiency in asking funnel questions and eliciting user preferences effectively.
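The forward/reverse process described in the abstract can be sketched in a few lines. This is a hedged toy illustration, not the paper's implementation: the function names, the list-of-pairs profile representation, and the sample profile are all assumptions, and the real method uses an LLM to generate the clarifying questions rather than taking them as given.

```python
# Toy sketch of the diffusion-inspired data generation described above.
# A user profile is modeled as a list of (question, answer) pairs;
# all names and data here are illustrative assumptions.

def forward_process(profile):
    """'Noise' a complete user profile by removing one question-answer
    pair per step, recording (partial_profile, removed_pair) at each step."""
    steps = []
    remaining = list(profile)
    while remaining:
        removed = remaining.pop()  # drop the most recently answered question
        steps.append((list(remaining), removed))
    return steps

def reverse_training_pairs(profile):
    """Build training examples for the reverse (denoising) model: given a
    partial profile, the target is the clarifying question whose answer
    was removed at that step."""
    return [(partial, pair[0]) for partial, pair in forward_process(profile)]

# Hypothetical profile for a movie-recommendation scenario.
profile = [
    ("What genres do you enjoy?", "sci-fi"),
    ("Subtitles or dubbing?", "subtitles"),
    ("Preferred film length?", "under two hours"),
]
pairs = reverse_training_pairs(profile)
# Each element pairs a noised (partial) profile with the next question to ask;
# the final example starts from an empty profile, matching a cold-start user.
```

Reversing the forward removals naturally yields a funnel: the model learns to ask broad questions first (from an empty profile) and progressively narrower ones as the profile fills in.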
Similar Papers
Can We Predict the Next Question? A Collaborative Filtering Approach to Modeling User Behavior
Information Retrieval
Helps computers guess what you'll ask next.
Ask Good Questions for Large Language Models
Computation and Language
Helps computers ask better questions to find answers.
Do LLMs Recognize Your Latent Preferences? A Benchmark for Latent Information Discovery in Personalized Interaction
Machine Learning (CS)
Helps computers guess what you want without asking.