Learning Steerable Clarification Policies with Collaborative Self-play
By: Jonathan Berant, Maximillian Chen, Adam Fisch, and more
Potential Business Impact:
AI learns to ask questions when unsure.
To handle underspecified or ambiguous queries, AI assistants need a policy for managing their uncertainty to determine (a) when to guess the user's intent and answer directly, (b) when to enumerate and answer multiple possible intents, and (c) when to ask a clarifying question. However, such policies depend on context, such as user preferences or modality; for example, enumerating multiple possible user intents is cumbersome on small screens or in a voice setting. In this work, we propose to train steerable policies for managing this uncertainty using self-play. Given two agents, one simulating a user and the other an AI assistant, we generate conversations in which the user issues a potentially ambiguous query and the assistant must determine how to respond. Importantly, the model takes as input the numerical cost of each clarifying question and of each generated word, and is asked to take the action that will maximize its final reward, which is accuracy penalized by these costs. We use Reinforced Self-Training (ReST) to train our model to achieve high reward and show that this leads to a steerable policy that changes its behavior predictably conditioned on the provided costs, yielding higher reward and accuracy. Moreover, our procedure also generalizes to numerical cost values that were unobserved at training time.
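To make the reward structure concrete, here is a minimal sketch of a cost-penalized reward of the kind the abstract describes. This is not the paper's implementation; the function and parameter names (answer_correct, num_clarifications, num_generated_words, clarification_cost, word_cost) are illustrative assumptions, and the reward is simply accuracy minus the per-question and per-word costs.

# Minimal sketch (assumed, not the paper's code) of a cost-penalized reward:
# accuracy minus the cost of clarifying questions and of generated words.
def cost_penalized_reward(
    answer_correct: bool,
    num_clarifications: int,
    num_generated_words: int,
    clarification_cost: float,
    word_cost: float,
) -> float:
    # 1.0 if the final answer matches the simulated user's intent, else 0.0.
    accuracy = 1.0 if answer_correct else 0.0
    # Penalize each clarifying question and each generated word by its given cost.
    penalty = clarification_cost * num_clarifications + word_cost * num_generated_words
    return accuracy - penalty

# Example: a high clarification cost pushes the policy toward guessing or
# enumerating intents; a low cost makes asking a question worthwhile.
print(cost_penalized_reward(True, 1, 40, clarification_cost=0.2, word_cost=0.001))   # 0.76
print(cost_penalized_reward(True, 0, 120, clarification_cost=0.2, word_cost=0.001))  # 0.88

Under this reading, passing different cost values as input at inference time is what makes the trained policy steerable: the same model trades off asking, enumerating, and guessing differently depending on the costs it is conditioned on.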
Similar Papers
Reasoning About Intent for Ambiguous Requests
Computation and Language
Shows computers many ways to answer confusing questions.
Steering Robots with Inference-Time Interactions
Robotics
Lets you fix robot mistakes without retraining.
Learn the Ropes, Then Trust the Wins: Self-imitation with Progressive Exploration for Agentic Reinforcement Learning
Machine Learning (CS)
Teaches AI to learn better by trying new things.