Metacognition and Uncertainty Communication in Humans and Large Language Models
By: Mark Steyvers, Megan A. K. Peters
Potential Business Impact:
Helps computers know what they don't know.
Metacognition--the capacity to monitor and evaluate one's own knowledge and performance--is foundational to human decision-making, learning, and communication. As large language models (LLMs) become increasingly embedded in both high-stakes and widespread low-stakes contexts, it is important to assess whether, how, and to what extent they exhibit metacognitive abilities. Here, we provide an overview of current knowledge of LLMs' metacognitive capacities, how they might be studied, and how they relate to our knowledge of metacognition in humans. We show that while humans and LLMs can sometimes appear quite aligned in their metacognitive capacities and behaviors, it is clear many differences remain; attending to these differences is important for enhancing human-AI collaboration. Finally, we discuss how endowing future LLMs with more sensitive and more calibrated metacognition may also help them develop new capacities such as more efficient learning, self-direction, and curiosity.
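As an illustration of how the calibration discussed above might be quantified, the sketch below computes a binned expected calibration error (ECE), one common metric for how closely a model's stated confidence tracks its actual accuracy. This is not taken from the paper; the function name, bin count, and the toy confidence/correctness data are all hypothetical.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned expected calibration error (ECE): the average gap between
    stated confidence and observed accuracy, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        avg_conf = confidences[in_bin].mean()   # mean stated confidence in this bin
        accuracy = correct[in_bin].mean()       # observed accuracy in this bin
        ece += in_bin.mean() * abs(avg_conf - accuracy)
    return ece

# Hypothetical example: verbalized confidences and graded answers for six items.
confs = [0.9, 0.8, 0.95, 0.6, 0.7, 0.99]
right = [1, 1, 0, 1, 0, 1]
print(f"ECE = {expected_calibration_error(confs, right):.3f}")
```

A lower ECE indicates better-calibrated confidence; a perfectly calibrated system would be correct about 70% of the time on items where it reports 70% confidence.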
Similar Papers
Cognitive Foundations for Reasoning and Their Manifestation in LLMs
Artificial Intelligence
Teaches computers to think more like people.
Large Language Models Have Intrinsic Meta-Cognition, but Need a Good Lens
Computation and Language
Helps computers check their own math work better.
Can the capability of Large Language Models be described by human ability? A Meta Study
Computation and Language
Computers can now do some human-like thinking tasks.