Score: 1

Metacognitive Sensitivity for Test-Time Dynamic Model Selection

Published: December 11, 2025 | arXiv ID: 2512.10451v1

By: Le Tuan Minh Trinh, Le Minh Vu Pham, Thi Minh Anh Pham, and more

Potential Business Impact:

Helps AI systems judge when their own predictions are likely to be right or wrong.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

A key aspect of human cognition is metacognition: the ability to assess the reliability of one's own knowledge and judgments. While deep learning models can express confidence in their predictions, they often suffer from poor calibration, in which expressed confidence does not reflect true competence. Do models truly know what they know? Drawing from human cognitive science, we propose a new framework for evaluating and leveraging AI metacognition. We introduce meta-d', a psychologically grounded measure of metacognitive sensitivity, to characterise how reliably a model's confidence predicts its own accuracy. We then use this dynamic sensitivity score as context for a bandit-based arbiter that performs test-time model selection, learning which of several expert models to trust for a given task. Our experiments across multiple datasets and deep learning model combinations (including CNNs and VLMs) demonstrate that this metacognitive approach improves joint-inference accuracy over the constituent models. This work provides a novel behavioural account of AI models, recasting ensemble selection as a problem of evaluating both short-term signals (confidence prediction scores) and medium-term traits (metacognitive sensitivity).
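To make the selection loop described in the abstract concrete, here is a minimal sketch of the general idea: each expert model accumulates a confidence history, a sensitivity score is estimated from how well confidence tracks correctness, and a bandit arbiter uses that score when choosing which expert to trust per input. This is an illustration under stated assumptions, not the authors' implementation: the sensitivity estimate below is a simple AUROC-style proxy rather than the paper's meta-d' fit, the UCB arbiter and its sensitivity weighting are assumed for exposition, and the two "experts" are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def sensitivity_proxy(confidences, correct):
    """AUROC of confidence as a predictor of correctness.
    A stand-in for the paper's meta-d' fit (assumption, not their method)."""
    pos = confidences[correct]      # confidences on correct trials
    neg = confidences[~correct]     # confidences on error trials
    if len(pos) == 0 or len(neg) == 0:
        return 0.5
    # Probability that a correct trial received higher confidence than an error.
    return float((pos[:, None] > neg[None, :]).mean())

class UCBArbiter:
    """One arm per expert model; reward = 1 if the chosen expert was correct."""
    def __init__(self, n_models, c=1.0):
        self.counts = np.zeros(n_models)
        self.values = np.zeros(n_models)
        self.c = c

    def select(self, sensitivities):
        t = self.counts.sum() + 1
        ucb = self.values + self.c * np.sqrt(np.log(t + 1) / (self.counts + 1e-6))
        # Assumed heuristic: weight the UCB score by metacognitive sensitivity,
        # so better-calibrated experts are favoured when value estimates are close.
        return int(np.argmax(ucb * sensitivities))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Two synthetic experts: (accuracy, how informative their confidence is).
experts = [(0.70, 0.8), (0.85, 0.3)]
history = [{"conf": [], "correct": []} for _ in experts]
arbiter = UCBArbiter(n_models=len(experts))

n_correct = 0
for step in range(2000):
    sens = np.array([
        sensitivity_proxy(np.array(h["conf"]), np.array(h["correct"], dtype=bool))
        if len(h["conf"]) > 10 else 0.5
        for h in history
    ])
    arm = arbiter.select(sens)
    acc, informativeness = experts[arm]
    correct = rng.random() < acc
    # Confidence is noisier when the expert's self-assessment is less informative.
    conf = np.clip(0.5 + informativeness * (0.3 if correct else -0.3)
                   + rng.normal(0, 0.15), 0.0, 1.0)
    history[arm]["conf"].append(conf)
    history[arm]["correct"].append(bool(correct))
    arbiter.update(arm, float(correct))
    n_correct += correct

print(f"joint accuracy: {n_correct / 2000:.3f}")
```

In the paper's framing, the sensitivity term would come from the meta-d' model and act as a medium-term trait of each expert, combined with the short-term per-example confidence; the proxy and weighting above are only meant to show where such a score plugs into a test-time bandit.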

Country of Origin
🇬🇧 🇻🇳 United Kingdom, Viet Nam

Page Count
12 pages

Category
Computer Science:
Machine Learning (CS)