Differences That Matter: Auditing Models for Capability Gap Discovery and Rectification
By: Qihao Liu, Chengzhi Mao, Yaojie Liu, and more
Potential Business Impact:
Finds AI mistakes to make AI smarter.
Conventional evaluation methods for multimodal LLMs (MLLMs) lack interpretability and often fail to reveal significant capability gaps across models. To address this, we introduce AuditDM, an automated framework that actively discovers and rectifies MLLM failure modes by auditing their divergence. AuditDM fine-tunes an MLLM as an auditor via reinforcement learning to generate challenging questions and counterfactual images that maximize disagreement among target models. Once trained, the auditor uncovers diverse, interpretable exemplars that reveal model weaknesses and serve as annotation-free data for rectification. When applied to state-of-the-art models such as Gemma-3 and PaliGemma-2, AuditDM discovers more than 20 distinct failure types. Fine-tuning on these discoveries consistently improves all models across 16 benchmarks and enables a 3B model to surpass its 28B counterpart. Our results suggest that as data scaling hits diminishing returns, targeted model auditing offers an effective path to model diagnosis and improvement.
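The core signal the auditor is trained on, rewarding probes that make the target models disagree, can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration of such a disagreement reward, not the paper's actual RL objective: the function name, the exact-match agreement measure, and the toy answers are all assumptions.

```python
from collections import Counter
from typing import List

def disagreement_reward(answers: List[str]) -> float:
    """Score an auditor-proposed (image, question) probe by how much the
    target models' answers disagree.

    Returns 0.0 when all models agree and approaches 1.0 when every model
    answers differently. (Hypothetical scoring; the paper's exact reward
    is not reproduced here.)
    """
    normalized = [a.strip().lower() for a in answers]
    # Size of the largest group of models giving the same answer.
    majority_count = Counter(normalized).most_common(1)[0][1]
    return 1.0 - majority_count / len(normalized)

# Toy usage: answers from three target MLLMs to the same probe.
print(disagreement_reward(["a red cube", "a red cube", "a blue sphere"]))  # ~0.33
print(disagreement_reward(["four", "four", "four"]))                       # 0.0
```

Per the abstract, the exemplars the trained auditor surfaces in this way then serve as annotation-free data for rectification fine-tuning of the target models.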
Similar Papers
Some Modalities are More Equal Than Others: Decoding and Architecting Multimodal Integration in MLLMs
CV and Pattern Recognition
Teaches AI to trust the right information.
AuditCopilot: Leveraging LLMs for Fraud Detection in Double-Entry Bookkeeping
Artificial Intelligence
AI finds fake money records better than old ways.
Revisiting Data Auditing in Large Vision-Language Models
CV and Pattern Recognition
Finds if AI saw your private pictures.