Beyond Accuracy: What Matters in Designing Well-Behaved Models?
By: Robin Hesse, Doğukan Bağcı, Bernt Schiele, and others
Potential Business Impact:
Makes AI models fairer, stronger, and smarter.
Deep learning has become an essential part of computer vision, with deep neural networks (DNNs) excelling in predictive performance. However, they often fall short in other critical quality dimensions, such as robustness, calibration, or fairness. While existing studies have focused on subsets of these quality dimensions, none have explored a more general form of "well-behavedness" of DNNs. With this work, we address this gap by simultaneously studying nine different quality dimensions for image classification. Through a large-scale study, we provide a bird's-eye view by analyzing 326 backbone models and how different training paradigms and model architectures affect the quality dimensions. We reveal several new insights, including that (i) vision-language models exhibit high fairness on ImageNet-1k classification and strong robustness against domain changes; (ii) self-supervised learning is an effective training paradigm for improving almost all considered quality dimensions; and (iii) training dataset size is a major driver of most of the quality dimensions. We conclude our study by introducing the QUBA score (Quality Understanding Beyond Accuracy), a novel metric that ranks models across multiple dimensions of quality, enabling tailored recommendations based on specific user needs.
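To make the idea of a multi-dimension ranking score concrete, here is a minimal sketch of aggregating per-dimension quality metrics into a single value. This is an illustrative assumption, not the paper's actual QUBA formula: the model names, dimension names, and the aggregation rule (mean of min-max-normalized scores, higher is better) are all hypothetical.

```python
# Hypothetical sketch of a QUBA-style aggregate score. The aggregation
# rule (mean of min-max-normalized scores) is an assumption for
# illustration, not the formula from the paper.

def normalize(values):
    """Min-max normalize a list of raw scores to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def aggregate_quality(models):
    """models: {name: {dimension: raw_score}} with higher = better.
    Returns {name: aggregate score in [0, 1]}."""
    names = list(models)
    dims = list(next(iter(models.values())))
    # Normalize each dimension across all models, then average per model.
    per_dim = {d: normalize([models[n][d] for n in names]) for d in dims}
    return {
        n: sum(per_dim[d][i] for d in dims) / len(dims)
        for i, n in enumerate(names)
    }

# Toy example with made-up scores for three hypothetical backbones.
models = {
    "vit_clip":  {"accuracy": 0.80, "robustness": 0.70, "fairness": 0.9},
    "resnet50":  {"accuracy": 0.76, "robustness": 0.40, "fairness": 0.6},
    "ssl_model": {"accuracy": 0.78, "robustness": 0.65, "fairness": 0.8},
}
scores = aggregate_quality(models)
best = max(scores, key=scores.get)  # model ranked highest overall
```

A real score of this kind would also need to handle metrics where lower is better (e.g., calibration error) by flipping their sign before normalization, and could weight dimensions according to user priorities, which is the kind of tailored recommendation the abstract describes.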
Similar Papers
A Causal Framework for Aligning Image Quality Metrics and Deep Neural Network Robustness
CV and Pattern Recognition
Improves AI's understanding of image quality.
Robustness as Architecture: Designing IQA Models to Withstand Adversarial Perturbations
CV and Pattern Recognition
Makes AI better at judging picture quality.
The Impact of Scaling Training Data on Adversarial Robustness
CV and Pattern Recognition
Makes AI smarter and harder to trick.