Score: 1

Asking Clarifying Questions for Preference Elicitation With Large Language Models

Published: October 13, 2025 | arXiv ID: 2510.12015v1

By: Ali Montazeralghaem, Guy Tennenholtz, Craig Boutilier, and more

BigTech Affiliations: Google

Potential Business Impact:

Helps computers ask smart questions to learn what you like.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) have made it possible for recommendation systems to interact with users in open-ended conversational interfaces. To personalize LLM responses, it is crucial to elicit user preferences, especially when there is limited user history. One way to gather more information is to present clarifying questions to the user. However, generating effective sequential clarifying questions across various domains remains a challenge. To address this, we introduce a novel approach for training LLMs to ask sequential questions that reveal user preferences. Our method follows a two-stage process inspired by diffusion models. Starting from a user profile, the forward process generates clarifying questions to obtain answers and then removes those answers step by step, serving as a way to add "noise" to the user profile. The reverse process involves training a model to "denoise" the user profile by learning to ask effective clarifying questions. Our results show that our method significantly improves the LLM's proficiency in asking funnel questions and eliciting user preferences effectively.
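To make the diffusion-inspired two-stage process more concrete, here is a minimal sketch of how the forward (noising) and reverse (denoising) steps could produce training pairs. This is an illustration based only on the abstract, not the authors' actual pipeline; all names (UserProfile, forward_noise, reverse_denoise_targets) and the simple "drop the last answer" noising rule are hypothetical assumptions.

```python
# Hypothetical sketch of the diffusion-inspired data construction described in the
# abstract. The forward process strips answers from a user profile step by step
# ("noising"); the reverse process pairs each degraded profile with the clarifying
# question whose answer would restore it, yielding supervised targets for the
# question-asking LLM. Names and the noising rule are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class UserProfile:
    # Known (clarifying question, user answer) pairs for a simulated user.
    qa_pairs: list[tuple[str, str]] = field(default_factory=list)

def forward_noise(profile: UserProfile, steps: int) -> list[UserProfile]:
    """Forward process: remove answered questions one at a time, producing
    progressively less informative ("noisier") profiles."""
    trajectory = [profile]
    current = list(profile.qa_pairs)
    for _ in range(min(steps, len(current))):
        current = current[:-1]  # drop one answer per step
        trajectory.append(UserProfile(qa_pairs=list(current)))
    return trajectory

def reverse_denoise_targets(trajectory: list[UserProfile]) -> list[tuple[UserProfile, str]]:
    """Reverse process: pair each noisy profile with the clarifying question
    whose answer recovers the next, more complete profile."""
    targets = []
    for noisy, cleaner in zip(trajectory[1:], trajectory[:-1]):
        question, _answer = cleaner.qa_pairs[-1]
        targets.append((noisy, question))
    return targets

if __name__ == "__main__":
    profile = UserProfile(qa_pairs=[
        ("What genres do you enjoy?", "Sci-fi and documentaries"),
        ("Do you prefer movies or series?", "Series"),
        ("How long should an episode be?", "Under 45 minutes"),
    ])
    for noisy, question in reverse_denoise_targets(forward_noise(profile, steps=3)):
        print(f"known answers: {len(noisy.qa_pairs)} -> ask: {question}")
```

In this reading, each (noisy profile, clarifying question) pair becomes a training example, so the model learns to ask the question that most reduces uncertainty about the user at that stage of the conversation.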

Country of Origin
🇺🇸 United States

Page Count
9 pages

Category
Computer Science:
Artificial Intelligence