Do You Get the Hint? Benchmarking LLMs on the Board Game Concept
By: Ine Gevers, Walter Daelemans
Potential Business Impact:
Tests how well AI can guess words from people's hints.
Large language models (LLMs) have achieved striking successes on many benchmarks, yet recent studies continue to expose fundamental weaknesses. In particular, tasks that require abstract reasoning remain challenging, often because they use representations such as grids, symbols, or visual patterns that differ from the natural language data LLMs are trained on. In this paper, we introduce Concept, a simple word-guessing board game, as a benchmark for probing abductive reasoning in a representation that is much closer to LLM pre-training data: natural language. Our results show that this game, easily solved by humans (with a success rate of over 90%), is still very challenging for state-of-the-art LLMs (no model exceeds a 40% success rate). Specifically, we observe that LLMs struggle with interpreting other players' strategic intents, and with correcting initial hypotheses given sequential information updates. In addition, we extend the evaluation across multiple languages, and find that performance drops further in lower-resource languages (Dutch, French, and Spanish) compared to English.
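To make the evaluation setup more concrete, the sketch below shows one way a Concept-style game loop and its success-rate metric could be scored. This is an illustrative assumption rather than the authors' released code: the `query_guesser` function, the candidate-list format, and the turn limit are all hypothetical placeholders standing in for the actual model calls and game rules.

```python
import random

def query_guesser(hints: list[str], candidates: list[str]) -> str:
    """Placeholder for an LLM call: guess the target word from the hints so far.

    Hypothetical stub; a real evaluation would prompt a model with the revealed
    hints and parse its answer.
    """
    return random.choice(candidates)


def play_game(target: str, hints: list[str], candidates: list[str],
              max_turns: int = 5) -> bool:
    """Reveal hints one at a time; succeed if the guesser names the target in time.

    Revealing hints sequentially lets the guesser revise an initial hypothesis
    after each new piece of information, the ability the paper probes.
    """
    revealed: list[str] = []
    for hint in hints[:max_turns]:
        revealed.append(hint)  # sequential information update
        guess = query_guesser(revealed, candidates)
        if guess.lower() == target.lower():
            return True
    return False


def success_rate(games: list[dict]) -> float:
    """Fraction of games in which the target word was eventually guessed."""
    wins = sum(play_game(g["target"], g["hints"], g["candidates"]) for g in games)
    return wins / len(games)


if __name__ == "__main__":
    # Toy example data, invented for illustration only.
    toy_games = [
        {"target": "piano", "hints": ["music", "keys", "black and white"],
         "candidates": ["piano", "zebra", "keyboard", "guitar"]},
    ]
    print(f"success rate: {success_rate(toy_games):.2f}")
```

With a real model behind `query_guesser`, the same loop would yield the kind of per-game success rate the abstract reports for humans (over 90%) versus current LLMs (under 40%).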
Similar Papers
Think Globally, Group Locally: Evaluating LLMs Using Multi-Lingual Word Grouping Games
Computation and Language
Computers think better in English for puzzles.
LLM CHESS: Benchmarking Reasoning and Instruction-Following in LLMs through Chess
Artificial Intelligence
Tests how well AI plays and understands chess.
Human-Level Reasoning: A Comparative Study of Large Language Models on Logical and Abstract Reasoning
Artificial Intelligence
Tests if AI can think like a person.