Metacognitive Sensitivity for Test-Time Dynamic Model Selection
By: Le Tuan Minh Trinh, Le Minh Vu Pham, Thi Minh Anh Pham, and more
Potential Business Impact:
Helps AI know when it's right or wrong.
A key aspect of human cognition is metacognition: the ability to assess the reliability of one's own knowledge and judgments. While deep learning models can express confidence in their predictions, they often suffer from poor calibration, a cognitive bias in which expressed confidence does not reflect true competence. Do models truly know what they know? Drawing from human cognitive science, we propose a new framework for evaluating and leveraging AI metacognition. We introduce meta-d', a psychologically grounded measure of metacognitive sensitivity, to characterise how reliably a model's confidence predicts its own accuracy. We then use this dynamic sensitivity score as context for a bandit-based arbiter that performs test-time model selection, learning which of several expert models to trust for a given task. Our experiments across multiple datasets and deep learning model combinations (including CNNs and VLMs) demonstrate that this metacognitive approach improves joint-inference accuracy over that of the constituent models. This work provides a novel behavioural account of AI models, recasting ensemble selection as a problem of evaluating both short-term signals (confidence prediction scores) and medium-term traits (metacognitive sensitivity).
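To make the idea concrete, here is a minimal Python sketch under stated assumptions, not the authors' implementation: it substitutes a type-2 AUROC-style proxy for a full meta-d' fit and uses a simple epsilon-greedy bandit as the arbiter. The names `sensitivity_proxy` and `BanditArbiter`, and all parameter choices, are hypothetical and purely illustrative.

```python
# Minimal sketch: a sensitivity proxy (stand-in for meta-d') plus an
# epsilon-greedy bandit arbiter that picks which expert model to trust.
import random
from collections import defaultdict


def sensitivity_proxy(confidences, correct):
    """Probability that a correct prediction carries higher confidence than an
    incorrect one (a type-2 AUROC-style stand-in for meta-d')."""
    hits = [c for c, ok in zip(confidences, correct) if ok]
    misses = [c for c, ok in zip(confidences, correct) if not ok]
    if not hits or not misses:
        return 0.5  # undefined without both outcomes; fall back to chance
    wins = sum(1.0 if h > m else 0.5 if h == m else 0.0
               for h in hits for m in misses)
    return wins / (len(hits) * len(misses))


class BanditArbiter:
    """Epsilon-greedy arbiter: blends a running reward per expert with that
    expert's current metacognitive-sensitivity score."""

    def __init__(self, experts, epsilon=0.1):
        self.experts = experts                        # names of constituent models
        self.epsilon = epsilon
        self.value = defaultdict(float)               # running mean reward
        self.count = defaultdict(int)
        self.history = defaultdict(lambda: ([], []))  # (confidences, correctness)

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(self.experts)        # explore

        def score(expert):
            conf, ok = self.history[expert]
            # Short-term signal (reward) + medium-term trait (sensitivity).
            return self.value[expert] + sensitivity_proxy(conf, ok)

        return max(self.experts, key=score)           # exploit

    def update(self, expert, confidence, was_correct):
        conf, ok = self.history[expert]
        conf.append(confidence)
        ok.append(was_correct)
        self.count[expert] += 1
        reward = 1.0 if was_correct else 0.0
        # Incremental mean update of the expert's reward estimate.
        self.value[expert] += (reward - self.value[expert]) / self.count[expert]
```

In use, each test item would be routed to the expert returned by `select()`, and the observed confidence together with the eventual correctness feedback would update both the reward estimate and the sensitivity history, so selection reflects the short-term signal and the medium-term trait described in the abstract.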
Similar Papers
Robot Metacognition: Decision Making with Confidence for Tool Invention
Robotics
Robots learn better by thinking about their own mistakes.
Adapting Like Humans: A Metacognitive Agent with Test-time Reasoning
Artificial Intelligence
Helps computers learn new games by watching.
Metacognition in Content-Centric Computational Cognitive C4 Modeling
Artificial Intelligence
AI learns to think and remember like people.