Rethinking Theory of Mind Benchmarks for LLMs: Towards A User-Centered Perspective
By: Qiaosi Wang, Xuhui Zhou, Maarten Sap, and more
Potential Business Impact:
Helps computers understand what people are thinking.
The last couple of years have witnessed emerging research that appropriates Theory-of-Mind (ToM) tasks designed for humans to benchmark LLMs' ToM capabilities as an indication of LLMs' social intelligence. However, this approach has several limitations. Drawing on existing psychology and AI literature, we summarize the theoretical, methodological, and evaluation limitations, pointing out that certain issues are inherent in the original ToM tasks used to evaluate humans' ToM, and that these issues persist, and are even exacerbated, when the tasks are appropriated to benchmark LLMs' ToM. Taking a human-computer interaction (HCI) perspective, these limitations prompt us to rethink the definition and criteria of ToM in ToM benchmarks through a more dynamic, interactional approach that accounts for user preferences, needs, and experiences with LLMs in such evaluations. We conclude by outlining potential opportunities and challenges in this direction.
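To make the critiqued pattern concrete, below is a minimal, hypothetical Python sketch of how a human ToM task (a Sally-Anne-style false-belief vignette) is typically repurposed as a static LLM benchmark item. This is an illustration of the general approach the abstract describes, not code from the paper; `query_llm`, `ToMItem`, and the canned reply are assumptions standing in for a real API client and dataset.

```python
# Illustrative sketch (not from the paper): a human false-belief task
# recast as a one-shot LLM benchmark item with a single "gold" answer.

from dataclasses import dataclass


@dataclass
class ToMItem:
    vignette: str     # story establishing a false belief
    question: str     # the ToM probe
    gold_answer: str  # the single accepted answer


ITEM = ToMItem(
    vignette=(
        "Sally puts her marble in the basket and leaves the room. "
        "While she is away, Anne moves the marble to the box."
    ),
    question="When Sally returns, where will she look for her marble?",
    gold_answer="basket",
)


def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion API call."""
    return "She will look in the basket."  # canned reply for this demo


def score_item(item: ToMItem) -> bool:
    prompt = f"{item.vignette}\n{item.question}"
    response = query_llm(prompt).strip().lower()
    # Static string match against one gold answer: the rigid, one-shot
    # criterion that a user-centered, interactional evaluation would
    # replace with measures of user preferences, needs, and experiences.
    return item.gold_answer in response


if __name__ == "__main__":
    print("correct" if score_item(ITEM) else "incorrect")
```

The brittleness is visible in `score_item`: a single canned vignette and string match says nothing about how the model's mental-state reasoning holds up across turns of a real interaction, which is the gap the paper's HCI perspective targets.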
Similar Papers
Do Theory of Mind Benchmarks Need Explicit Human-like Reasoning in Language Models?
Computation and Language
Computers can guess what others think, but maybe not really.
Theory of Mind in Large Language Models: Assessment and Enhancement
Computation and Language
Helps computers understand what people are thinking.
UniToMBench: Integrating Perspective-Taking to Improve Theory of Mind in LLMs
Computation and Language
Teaches computers to understand people's thoughts.