Active Confusion Expression in Large Language Models: Leveraging World Models toward Better Social Reasoning

Published: October 9, 2025 | arXiv ID: 2510.07974v1

By: Jialu Du, Guiyang Hou, Yihui Fu, and more

Potential Business Impact:

Helps AI track what different people know and believe, enabling better social reasoning.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

While large language models (LLMs) excel in mathematical and code reasoning, we observe that they struggle with social reasoning tasks, exhibiting cognitive confusion, logical inconsistencies, and conflation of objective world states with subjective belief states. Through detailed analysis of DeepSeek-R1's reasoning trajectories, we find that LLMs frequently encounter reasoning impasses and tend to output confusion expressions such as "tricky" and "confused" when processing scenarios with multiple participants and timelines, leading to erroneous reasoning or infinite loops. The core issue is their inability to disentangle objective reality from agents' subjective beliefs. To address this, we propose an adaptive world-model-enhanced reasoning mechanism that constructs a dynamic textual world model to track entity states and temporal sequences. It monitors reasoning trajectories for confusion indicators and promptly intervenes with clear world-state descriptions, helping models navigate cognitive dilemmas. The mechanism mimics how humans use implicit world models to distinguish external events from internal beliefs. Evaluations on three social reasoning benchmarks demonstrate significant accuracy improvements (e.g., +10% on Hi-ToM) alongside reduced computational cost (up to 33.8% fewer tokens), offering a simple yet effective solution for deploying LLMs in social contexts.
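
The intervention loop the abstract sketches is easy to picture in code. Below is a minimal Python sketch under stated assumptions: the `WorldModel` class, `CONFUSION_MARKERS` list, `generate_step` callable, and stop signal are hypothetical illustrations of the described mechanism, not the authors' implementation.

```python
# Minimal sketch (an assumption, not the paper's code) of the adaptive
# world-model-enhanced reasoning loop: maintain a textual world model of
# events and per-agent observations, watch the reasoning trace for
# confusion expressions, and inject a world-state description when one
# appears.

from dataclasses import dataclass, field
from typing import Callable

# Hypothetical confusion indicators; the paper reports markers such as
# "tricky" and "confused" in DeepSeek-R1 traces.
CONFUSION_MARKERS = ("tricky", "confused", "wait, this is getting")


@dataclass
class WorldModel:
    """Textual world model: an objective event timeline plus, for each
    agent, the subset of events that agent actually observed."""
    events: list[str] = field(default_factory=list)
    observed: dict[str, list[str]] = field(default_factory=dict)

    def record(self, event: str, observers: list[str]) -> None:
        """Append an objective event and credit it to the agents present."""
        self.events.append(event)
        for agent in observers:
            self.observed.setdefault(agent, []).append(event)

    def describe(self) -> str:
        """Render the world state as plain text for injection into the prompt."""
        lines = ["Objective timeline:"]
        lines += [f"  {i}. {e}" for i, e in enumerate(self.events, 1)]
        for agent, seen in self.observed.items():
            lines.append(f"{agent} only saw: " + "; ".join(seen))
        return "\n".join(lines)


def is_confused(step: str) -> bool:
    """Detect a confusion expression in one reasoning step."""
    text = step.lower()
    return any(marker in text for marker in CONFUSION_MARKERS)


def reason(prompt: str,
           world: WorldModel,
           generate_step: Callable[[str], str],
           max_steps: int = 32) -> str:
    """Step-by-step reasoning with confusion-triggered intervention.
    `generate_step` is a hypothetical callable returning the LLM's next
    reasoning step given the trace so far."""
    trace = [prompt]
    for _ in range(max_steps):
        step = generate_step("\n".join(trace))
        if is_confused(step):
            # Intervene: discard the confused step and supply a clear
            # world-state description instead, then let the model retry.
            trace.append("[World state]\n" + world.describe())
            continue
        trace.append(step)
        if "FINAL ANSWER" in step:  # hypothetical stop signal
            break
    return "\n".join(trace)


if __name__ == "__main__":
    # False-belief scenario in the style of the Hi-ToM benchmark.
    world = WorldModel()
    world.record("Sally puts the ball in the basket", observers=["Sally", "Anne"])
    world.record("Anne moves the ball to the box", observers=["Anne"])  # Sally is away
    print(world.describe())
    # The printout separates objective reality (ball in the box) from
    # Sally's subjective belief (she only saw the basket).
```

The design point mirrors the paper's diagnosis: the world model keeps the objective timeline separate from each agent's observations, so the injected text re-grounds the model in exactly the distinction, world state versus belief state, that it was conflating.
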

Country of Origin
🇨🇳 China

Page Count
15 pages

Category
Computer Science: Computation and Language