GuessingGame: Measuring the Informativeness of Open-Ended Questions in Large Language Models

Published: September 23, 2025 | arXiv ID: 2509.19593v1

By: Dylan Hutson, Daniel Vennemeyer, Aneesh Deshmukh, and more

Potential Business Impact:

Measures and improves how efficiently LLMs ask questions, helping interactive assistants narrow down what a user means or wants in fewer turns.

Business Areas:
Semantic Search, Internet Services

We introduce GuessingGame, a protocol for evaluating large language models (LLMs) as strategic question-askers in open-ended, open-domain settings. A Guesser LLM identifies a hidden object by posing free-form questions to an Oracle, without predefined choices or candidate lists. To measure question quality, we propose two information gain (IG) metrics: a Bayesian method that tracks belief updates over semantic concepts using LLM-scored relevance, and an entropy-based method that filters candidates via ConceptNet. Both metrics are model-agnostic and support post hoc analysis. Across 858 games with multiple models and prompting strategies, higher IG strongly predicts efficiency: a one-standard-deviation increase in IG reduces expected game length by 43%. Prompting constraints guided by IG, such as enforcing question diversity, enable weaker models to significantly improve performance. These results show that question-asking in LLMs is measurable, improvable, and crucial for interactive reasoning.
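The two IG metrics lend themselves to a compact illustration. The sketch below is a minimal, hypothetical Python version of the underlying ideas, not the paper's implementation: `expected_information_gain` computes the entropy-style IG of a question over a candidate set (in the paper, candidates come from ConceptNet and answers from the Oracle LLM), and `bayesian_update` shows the soft reweighting by relevance scores behind the Bayesian variant (in the paper, those scores come from an LLM). All function names and the toy data are assumptions for illustration.

```python
import math
from collections import defaultdict

def entropy(probs):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_information_gain(beliefs, answer_fn):
    """Expected IG of a yes/no-style question over a candidate set.

    beliefs   : dict mapping candidate object -> prior probability
    answer_fn : maps a candidate to the answer the Oracle would give
                if that candidate were the hidden object
    """
    prior_entropy = entropy(beliefs.values())

    # Partition candidates by the answer each would induce.
    partitions = defaultdict(dict)
    for obj, p in beliefs.items():
        partitions[answer_fn(obj)][obj] = p

    # Expected posterior entropy, weighted by each answer's probability.
    expected_posterior = 0.0
    for group in partitions.values():
        mass = sum(group.values())
        posterior = [p / mass for p in group.values()]
        expected_posterior += mass * entropy(posterior)

    return prior_entropy - expected_posterior

def bayesian_update(beliefs, relevance):
    """Soft Bayesian update: reweight each candidate's prior by a
    relevance score for the observed answer, then renormalize.
    Realized IG is the entropy drop from prior to posterior."""
    posterior = {c: p * relevance(c) for c, p in beliefs.items()}
    total = sum(posterior.values())
    return {c: p / total for c, p in posterior.items()}

# Toy usage: uniform prior over four candidates. The question
# "Is it a living thing?" splits them 2/2, yielding 1 bit of IG.
beliefs = {"dog": 0.25, "cat": 0.25, "rock": 0.25, "chair": 0.25}
is_living = lambda obj: obj in {"dog", "cat"}
print(expected_information_gain(beliefs, is_living))  # -> 1.0

# Toy Bayesian update: hypothetical relevance scores after the
# answer "yes, it's alive"; realized IG is roughly 0.7 bits.
scores = {"dog": 0.9, "cat": 0.9, "rock": 0.05, "chair": 0.05}
post = bayesian_update(beliefs, scores.get)
print(entropy(beliefs.values()) - entropy(post.values()))
```

The entropy variant assumes an explicit candidate pool to partition, while the Bayesian variant only needs per-concept relevance scores, which is what makes it compatible with open-domain play where the candidate set is never enumerated.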

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Repos / Data Links

Page Count
17 pages

Category
Computer Science:
Computation and Language