Multi-Scale Visual Prompting for Lightweight Small-Image Classification
By: Salim Khazem
Potential Business Impact:
Makes computer vision work better on small, low-resolution pictures with almost no extra computing cost.
Visual prompting has recently emerged as an efficient strategy to adapt vision models using lightweight, learnable parameters injected into the input space. However, prior work mainly targets large Vision Transformers and high-resolution datasets such as ImageNet. In contrast, small-image benchmarks like MNIST, Fashion-MNIST, and CIFAR-10 remain widely used in education, prototyping, and research, yet have received little attention in the context of prompting. In this paper, we introduce \textbf{Multi-Scale Visual Prompting (MSVP)}, a simple and generic module that learns a set of global, mid-scale, and local prompt maps fused with the input image via a lightweight $1 \times 1$ convolution. MSVP is backbone-agnostic, adds less than $0.02\%$ additional parameters, and significantly improves performance across CNN and Vision Transformer backbones. We provide a unified benchmark on MNIST, Fashion-MNIST, and CIFAR-10 using a simple CNN, ResNet-18, and a small Vision Transformer. Our method yields consistent improvements with negligible computational overhead. We further include ablations on prompt scales, fusion strategies, and backbone architectures, along with qualitative analyses using prompt visualizations and Grad-CAM. Our results demonstrate that multi-scale prompting provides an effective inductive bias even on low-resolution images.
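To make the mechanism concrete, the sketch below shows one way such a module could look in PyTorch: one learnable prompt map per scale is upsampled to the input resolution and fused with the image through a $1 \times 1$ convolution. The class name `MultiScalePrompt`, the scale sizes (1, 4, 16), and the bilinear upsampling are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScalePrompt(nn.Module):
    """Minimal sketch of a multi-scale visual prompting module (assumed design).

    Learns one prompt map per scale (global, mid, local), upsamples each to the
    input resolution, and fuses them with the image via a 1x1 convolution.
    """

    def __init__(self, in_channels=3, image_size=32, scales=(1, 4, 16)):
        super().__init__()
        # One learnable prompt map per scale, from coarse (global) to fine (local).
        self.prompts = nn.ParameterList(
            [nn.Parameter(torch.zeros(1, in_channels, s, s)) for s in scales]
        )
        self.image_size = image_size
        # 1x1 convolution fusing the image with all upsampled prompt maps.
        self.fuse = nn.Conv2d(in_channels * (1 + len(scales)), in_channels, kernel_size=1)

    def forward(self, x):
        b = x.size(0)
        upsampled = [
            F.interpolate(p, size=(self.image_size, self.image_size),
                          mode="bilinear", align_corners=False).expand(b, -1, -1, -1)
            for p in self.prompts
        ]
        # Output has the same shape as x, so any backbone can consume it unchanged.
        return self.fuse(torch.cat([x] + upsampled, dim=1))


# Usage (hypothetical names): prepend the module to an unmodified backbone.
# model = nn.Sequential(MultiScalePrompt(in_channels=3, image_size=32), backbone)
```

Under these assumed scales, three 3-channel prompt maps plus the $1 \times 1$ fusion convolution amount to fewer than a thousand parameters, which is well below $0.02\%$ of a ResNet-18 and roughly consistent with the overhead the abstract reports.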
Similar Papers
Prompt-based Adaptation in Large-scale Vision Models: A Survey
CV and Pattern Recognition
Helps computers learn new things with less data.
DSS-Prompt: Dynamic-Static Synergistic Prompting for Few-Shot Class-Incremental Learning
CV and Pattern Recognition
Teaches computers to learn new things without forgetting.
Rethinking Prompt Design for Inference-time Scaling in Text-to-Visual Generation
CV and Pattern Recognition
Improves AI art by changing instructions.