When Researchers Say Mental Model/Theory of Mind of AI, What Are They Really Talking About?
By: Xiaoyun Yin, Elmira Zahmat Doost, Shiwen Zhou, and more
Potential Business Impact:
AI doesn't think like us; it just copies.
When researchers claim AI systems possess ToM or mental models, they are fundamentally discussing behavioral predictions and bias corrections rather than genuine mental states. This position paper argues that current discourse conflates sophisticated pattern matching with authentic cognition, missing a crucial distinction between simulation and experience. While recent studies show LLMs achieving human-level performance on laboratory ToM tasks, these results reflect only behavioral mimicry. More importantly, the entire testing paradigm may be flawed: it applies cognitive tests designed for individual humans to AI systems in isolation, rather than assessing cognition as it unfolds in the moment of human-AI interaction. The paper suggests shifting focus toward mutual ToM frameworks that acknowledge the simultaneous contributions of human cognition and AI algorithms, emphasizing interaction dynamics instead of testing AI in isolation.
Similar Papers
Towards properly implementing Theory of Mind in AI systems: An account of four misconceptions
Human-Computer Interaction
Teaches computers to understand people's thoughts.
RecToM: A Benchmark for Evaluating Machine Theory of Mind in LLM-based Conversational Recommender Systems
Artificial Intelligence
Helps computers understand what people want and need.
Do Theory of Mind Benchmarks Need Explicit Human-like Reasoning in Language Models?
Computation and Language
Computers can guess what others think, but maybe not really.