Keypoint Counting Classifiers: Turning Vision Transformers into Self-Explainable Models Without Training
By: Kristoffer Wickstrøm, Teresa Dorszewski, Siyan Chen and more
Current approaches to designing self-explainable models (SEMs) require complicated training procedures and specific architectures, which makes them impractical. With the rise of general-purpose foundation models based on Vision Transformers (ViTs), this impracticality becomes even more problematic, and new methods are needed to make ViT-based foundation models transparent and reliable. In this work, we present a method for turning any well-trained ViT-based model into a SEM without retraining, which we call Keypoint Counting Classifiers (KCCs). Recent work has shown that ViTs can automatically identify matching keypoints between images with high precision, and we build on these results to create an easily interpretable decision process that is inherently visualizable in the input. An extensive evaluation shows that KCCs improve human-machine communication compared to recent baselines. We believe that KCCs constitute an important step towards making ViT-based foundation models more transparent and reliable.
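To make the idea concrete, below is a minimal sketch of a keypoint-counting classifier built on frozen ViT patch features. It is not the paper's exact procedure: the feature extractor (`extract_patch_tokens`), the mutual-nearest-neighbour matching rule, and the similarity threshold are all assumptions chosen for illustration. The sketch counts matched patch "keypoints" between a query image and class exemplars and predicts the class with the most matches, which is what makes the decision directly visualizable in the input.

```python
# Minimal sketch (not the authors' implementation) of a keypoint-counting
# classifier on top of frozen ViT patch features. Assumptions: patch tokens
# come from a pretrained ViT (placeholder below), matches are mutual nearest
# neighbours gated by a cosine-similarity threshold.
import torch
import torch.nn.functional as F


def extract_patch_tokens(image: torch.Tensor) -> torch.Tensor:
    """Stand-in for a frozen ViT backbone: returns (num_patches, dim) patch
    embeddings. Replace with the patch tokens of an actual pretrained ViT."""
    num_patches, dim = 196, 384  # hypothetical values for a 224x224 input
    return torch.randn(num_patches, dim)


def count_matching_keypoints(query: torch.Tensor, support: torch.Tensor,
                             threshold: float = 0.7) -> int:
    """Count mutual nearest-neighbour patch pairs whose cosine similarity
    exceeds `threshold` (threshold value is an assumption)."""
    q = F.normalize(query, dim=-1)            # (Nq, d)
    s = F.normalize(support, dim=-1)          # (Ns, d)
    sim = q @ s.T                             # (Nq, Ns) cosine similarities
    best_s_for_q = sim.argmax(dim=1)          # best support patch per query patch
    best_q_for_s = sim.argmax(dim=0)          # best query patch per support patch
    q_idx = torch.arange(sim.shape[0])
    mutual = best_q_for_s[best_s_for_q] == q_idx       # mutual-NN check
    strong = sim[q_idx, best_s_for_q] > threshold      # similarity gate
    return int((mutual & strong).sum())


def classify_by_keypoint_count(query_img: torch.Tensor,
                               class_exemplars: dict[str, list[torch.Tensor]]) -> str:
    """Predict the class whose exemplars share the most matched keypoints with
    the query; the matched patch pairs can be drawn on the images to explain
    the decision."""
    q_tokens = extract_patch_tokens(query_img)
    scores = {
        label: sum(count_matching_keypoints(q_tokens, extract_patch_tokens(img))
                   for img in imgs)
        for label, imgs in class_exemplars.items()
    }
    return max(scores, key=scores.get)


if __name__ == "__main__":
    dummy = torch.zeros(3, 224, 224)
    exemplars = {"cat": [dummy, dummy], "dog": [dummy, dummy]}
    print(classify_by_keypoint_count(dummy, exemplars))
```

Because the score is just a count of visible patch correspondences, the same matches that drive the prediction can be overlaid on the query and exemplar images, which is the sense in which the classifier is self-explainable without any retraining of the ViT.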
Similar Papers
Hands-on Evaluation of Visual Transformers for Object Recognition and Detection
CV and Pattern Recognition
Helps computers see the whole picture, not just parts.
CascadedViT: Cascaded Chunk-FeedForward and Cascaded Group Attention Vision Transformer
CV and Pattern Recognition
Makes AI see better using less power.
EVCC: Enhanced Vision Transformer-ConvNeXt-CoAtNet Fusion for Classification
CV and Pattern Recognition
Makes AI see better, using less computer power.