UserBench: An Interactive Gym Environment for User-Centric Agents
By: Cheng Qian, Zuxin Liu, Akshara Prabhakar, and more
Potential Business Impact:
Helps AI agents work with people whose goals are vague or still taking shape.
Large Language Model (LLM)-based agents have made impressive progress in reasoning and tool use, enabling them to solve complex tasks. However, their ability to proactively collaborate with users, especially when goals are vague, evolving, or indirectly expressed, remains underexplored. To address this gap, we introduce UserBench, a user-centric benchmark designed to evaluate agents in multi-turn, preference-driven interactions. UserBench features simulated users who start with underspecified goals and reveal preferences incrementally, requiring agents to proactively clarify intent and make grounded decisions with tools. Our evaluation of leading open- and closed-source LLMs reveals a significant disconnect between task completion and user alignment. For instance, models provide answers that fully align with all user intents only 20% of the time on average, and even the most advanced models uncover fewer than 30% of all user preferences through active interaction. These results highlight the challenges of building agents that are not just capable task executors, but true collaborative partners. UserBench offers an interactive environment to measure and advance this critical capability.
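To make the setup concrete, below is a minimal, hypothetical sketch of what a gym-style multi-turn loop with a simulated user could look like. The class and method names (SimulatedUser, UserBenchEnv, reset, step) and the reward scheme are illustrative assumptions, not the actual UserBench API; the point is only to show an agent alternating between clarifying questions and a final answer while the user reveals preferences incrementally.

```python
# Hypothetical sketch of a gym-style interaction loop with a simulated user.
# All names and the reward scheme are assumptions for illustration,
# not the actual UserBench implementation.

class SimulatedUser:
    """Holds hidden preferences and reveals them only when asked."""

    def __init__(self, goal, hidden_preferences):
        self.goal = goal                          # vague, underspecified request
        self.hidden = list(hidden_preferences)    # preferences not yet revealed
        self.revealed = []

    def respond(self, agent_message):
        # Assumption: at most one preference is revealed per clarifying question.
        if "?" in agent_message and self.hidden:
            pref = self.hidden.pop(0)
            self.revealed.append(pref)
            return f"Actually, I'd prefer {pref}."
        return "Sounds fine so far."


class UserBenchEnv:
    """Minimal multi-turn environment: the agent either asks the user a
    clarifying question or commits to a final answer."""

    def __init__(self, user, max_turns=10):
        self.user = user
        self.max_turns = max_turns

    def reset(self):
        self.turn = 0
        return {"goal": self.user.goal}           # initial underspecified goal

    def step(self, action):
        self.turn += 1
        if action["type"] == "ask_user":
            obs = {"user_reply": self.user.respond(action["message"])}
            return obs, 0.0, self.turn >= self.max_turns, {}
        # Final answer: reward is the fraction of all user preferences
        # (revealed or not) that the answer actually reflects.
        all_prefs = self.user.revealed + self.user.hidden
        covered = sum(p in action["answer"] for p in all_prefs)
        reward = covered / max(len(all_prefs), 1)
        return {"user_reply": "Thanks!"}, reward, True, {}


# Example episode with a scripted agent: one clarifying question, then an answer.
user = SimulatedUser("Book me a flight", ["a window seat", "a morning departure"])
env = UserBenchEnv(user)
obs = env.reset()
obs, r, done, _ = env.step({"type": "ask_user", "message": "Any seat or time preferences?"})
obs, r, done, _ = env.step(
    {"type": "final_answer", "answer": "Booked a morning departure with a window seat."}
)
print(r)  # fraction of the user's preferences reflected in the final answer
```

Under this sketch, an agent that never asks questions can still "complete" the task but scores poorly on preference coverage, which mirrors the task-completion vs. user-alignment gap the abstract describes.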
Similar Papers
ChatBench: From Static Benchmarks to Human-AI Evaluation
Computation and Language
Tests how well people and AI work together.
InnovatorBench: Evaluating Agents' Ability to Conduct Innovative LLM Research
Artificial Intelligence
Tests AI's ability to do real science research.