Multi-Turn Puzzles: Evaluating Interactive Reasoning and Strategic Dialogue in LLMs
By: Kartikeya Badola, Jonathan Simon, Arian Hosseini, and more
Potential Business Impact:
Tests an AI's ability to hold a conversation, ask the right questions, and reason with incomplete information.
Large language models (LLMs) excel at solving problems with clear and complete statements, but they often struggle with nuanced environments and interactive tasks, which are common in real-world scenarios. This highlights the critical need for LLMs that can engage in logically consistent multi-turn dialogue, seek out missing information, and reason with incomplete data. To this end, we introduce a novel benchmark comprising a suite of multi-turn tasks, each designed to test specific reasoning, interactive dialogue, and information-seeking abilities. These tasks use deterministic scoring mechanisms, eliminating the need for human intervention. Evaluating frontier models on our benchmark reveals significant headroom. Our analysis shows that most errors stem from poor instruction following, reasoning failures, and poor planning. This benchmark provides valuable insights into the strengths and weaknesses of current LLMs in handling complex, interactive scenarios and offers a robust platform for future research aimed at improving these critical capabilities.
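To make the evaluation setup concrete, below is a minimal sketch of what a multi-turn task with deterministic scoring could look like. The abstract does not specify the benchmark's actual tasks or harness, so everything here is an assumption: NumberGuessEnv, evaluate, and make_binary_search_model are hypothetical names for a toy puzzle environment, a turn-limited episode loop, and a stand-in "model" used in place of an LLM call.

```python
# A minimal sketch of a multi-turn puzzle evaluation loop with deterministic
# scoring. NumberGuessEnv, evaluate, and make_binary_search_model are
# illustrative assumptions, not the benchmark's actual tasks or API.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class NumberGuessEnv:
    """Toy puzzle: find a hidden integer using higher/lower feedback."""
    secret: int
    max_turns: int = 10
    history: List[str] = field(default_factory=list)

    def step(self, guess: int) -> str:
        # Feedback is fully deterministic, so no human judge is needed.
        if guess == self.secret:
            return "correct"
        return "too low" if guess < self.secret else "too high"


def evaluate(env: NumberGuessEnv, ask_model: Callable[[List[str]], int]) -> float:
    """Run one multi-turn episode and return a deterministic score in [0, 1]."""
    for turn in range(env.max_turns):
        guess = ask_model(env.history)  # the model sees the dialogue so far
        feedback = env.step(guess)
        env.history.append(f"guess={guess} -> {feedback}")
        if feedback == "correct":
            # Deterministic scoring rule: reward solving in fewer turns.
            return 1.0 - turn / env.max_turns
    return 0.0


def make_binary_search_model(lo: int = 1, hi: int = 100) -> Callable[[List[str]], int]:
    """Stand-in for an LLM: binary search driven by the feedback history."""
    state = {"lo": lo, "hi": hi}

    def model(history: List[str]) -> int:
        if history:
            last_guess, _, feedback = history[-1].partition(" -> ")
            mid = int(last_guess.split("=")[1])
            if feedback == "too low":
                state["lo"] = mid + 1
            elif feedback == "too high":
                state["hi"] = mid - 1
        return (state["lo"] + state["hi"]) // 2

    return model


if __name__ == "__main__":
    score = evaluate(NumberGuessEnv(secret=37), make_binary_search_model())
    print(f"score: {score:.2f}")  # solved on the third turn -> 0.80
```

In a setup like this, swapping the binary-search stand-in for an LLM that reads the dialogue history and emits a guess would let every episode be scored automatically, which is the property the abstract attributes to its deterministic scoring mechanisms.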
Similar Papers
Interactive Evaluation of Large Language Models for Multi-Requirement Software Engineering Tasks
Artificial Intelligence
Tests AI code writing with helpful feedback.
Multi-turn Training with Basic Human Feedback Helps Little on LLM Reasoning
Computation and Language
Simple training works best for AI reasoning.