Evaluations at Work: Measuring the Capabilities of GenAI in Use
By: Brandon Lepine, Gawesha Weerantunga, Juho Kim, and more
Potential Business Impact:
Tests how well people and AI work together.
Current AI benchmarks miss the messy, multi-turn nature of human-AI collaboration. We present an evaluation framework that decomposes real-world tasks into interdependent subtasks, letting us track both LLM performance and users' strategies across a dialogue. Complementing this framework, we develop a suite of metrics, including a composite usage score derived from semantic similarity, word overlap, and numerical matches; structural coherence; intra-turn diversity; and a novel measure of the "information frontier," reflecting the alignment between AI outputs and users' working knowledge. We demonstrate our methodology on a financial valuation task that mirrors real-world complexity. Our empirical findings reveal that while greater integration of LLM-generated content generally enhances output quality, its benefits are moderated by factors such as response incoherence, excessive subtask diversity, and the distance of provided information from users' existing knowledge. These results suggest that proactive dialogue strategies designed to inject novelty may inadvertently undermine task performance. Our work thus advances a more holistic evaluation of human-AI collaboration, offering both a robust methodological framework and actionable insights for developing more effective AI-augmented work processes.
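To make the composite usage score concrete, here is a minimal, hypothetical sketch of how such a metric could be assembled from the three signals the abstract names: semantic similarity, word overlap, and numerical matches between an LLM response and the user's subsequent text. The function names, weights, and the bag-of-words cosine stand-in for embedding-based semantic similarity are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical composite "usage" score: a weighted blend of (1) a bag-of-words
# cosine as a crude stand-in for embedding-based semantic similarity,
# (2) Jaccard word overlap, and (3) the fraction of numbers from the LLM
# output that reappear in the user's text. Weights are illustrative.
import re
from collections import Counter
from math import sqrt


def _tokens(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())


def _numbers(text: str) -> set[str]:
    return set(re.findall(r"\d+(?:\.\d+)?", text))


def semantic_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine; a real system would use sentence embeddings."""
    ca, cb = Counter(_tokens(a)), Counter(_tokens(b))
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = sqrt(sum(v * v for v in ca.values())) * sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0


def word_overlap(a: str, b: str) -> float:
    """Jaccard overlap of word types."""
    sa, sb = set(_tokens(a)), set(_tokens(b))
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0


def numerical_match(llm_output: str, user_text: str) -> float:
    """Share of numbers in the LLM output that the user carried over."""
    na, nb = _numbers(llm_output), _numbers(user_text)
    return len(na & nb) / len(na) if na else 0.0


def composite_usage(llm_output: str, user_text: str,
                    weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> float:
    """Weighted combination of the three signals, bounded in [0, 1]."""
    w_sem, w_lex, w_num = weights
    return (w_sem * semantic_similarity(llm_output, user_text)
            + w_lex * word_overlap(llm_output, user_text)
            + w_num * numerical_match(llm_output, user_text))


# Example: how much of the model's valuation answer surfaces in the user's draft?
llm = "Using a 10% discount rate, the DCF implies a fair value near 42.50 per share."
user = "I applied a 10% discount rate and got roughly 42.50 as fair value."
print(round(composite_usage(llm, user), 3))
```

In a dialogue setting, this score would be computed per turn (LLM response vs. the user's next contribution), allowing usage to be tracked across the conversation alongside the coherence and diversity metrics described above.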
Similar Papers
Evaluating LLM Metrics Through Real-World Capabilities
Artificial Intelligence
Tests how well AI helps people with everyday tasks.
HAI-Eval: Measuring Human-AI Synergy in Collaborative Coding
Software Engineering
Tests how well people and AI code together.
From Consumption to Collaboration: Measuring Interaction Patterns to Augment Human Cognition in Open-Ended Tasks
Human-Computer Interaction
Helps AI be a thinking partner, not a shortcut.