Beyond the final layer: Attentive multilayer fusion for vision transformers
By: Laure Ciernik, Marco Morik, Lukas Thede, and more
With the rise of large-scale foundation models, efficiently adapting them to downstream tasks remains a central challenge. Linear probing, which freezes the backbone and trains a lightweight head, is computationally efficient but typically restricted to last-layer representations. We show that task-relevant information is distributed across the network hierarchy rather than concentrated in the final layers. To leverage this distribution of information, we apply an attentive probing mechanism that dynamically fuses representations from all layers of a Vision Transformer. This mechanism learns to identify the layers most relevant to a target task and combines low-level structural cues with high-level semantic abstractions. Across 20 diverse datasets and multiple pretrained foundation models, our method achieves consistent, substantial gains over standard linear probes. Attention heatmaps further reveal that tasks far from the pre-training domain benefit most from intermediate representations. Overall, our findings underscore the value of intermediate-layer information and demonstrate a principled, task-aware approach for unlocking its potential in probing-based adaptation.
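The core idea — a learned query attending over per-layer features rather than reading only the last layer — can be sketched in a few lines. This is an illustrative NumPy forward pass under assumed shapes (one [CLS]-style feature vector per Vision Transformer block), not the authors' implementation; in practice the query and head would be trained with the backbone frozen.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class AttentiveLayerProbe:
    """Hypothetical sketch: fuse per-layer features with a learned
    attention query, then apply a linear classification head."""

    def __init__(self, num_layers, dim, num_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.query = rng.normal(scale=0.02, size=(dim,))           # learnable query vector
        self.W = rng.normal(scale=0.02, size=(dim, num_classes))   # linear head weights
        self.b = np.zeros(num_classes)                             # linear head bias

    def forward(self, layer_feats):
        # layer_feats: (num_layers, batch, dim) — e.g. the [CLS] token
        # taken from every ViT block, not just the final one.
        scores = np.einsum("lbd,d->bl", layer_feats, self.query)   # (batch, num_layers)
        attn = softmax(scores, axis=-1)                            # per-sample layer weights
        fused = np.einsum("bl,lbd->bd", attn, layer_feats)         # weighted sum over layers
        logits = fused @ self.W + self.b                           # (batch, num_classes)
        return logits, attn

# Usage with illustrative sizes (12 layers, 16-dim features, 10 classes):
probe = AttentiveLayerProbe(num_layers=12, dim=16, num_classes=10)
feats = np.random.default_rng(1).normal(size=(12, 4, 16))
logits, attn = probe.forward(feats)
```

Because the attention weights are normalized per sample, inspecting `attn` directly yields the kind of layer-importance heatmaps the abstract refers to.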