CropVLM: Learning to Zoom for Fine-Grained Vision-Language Perception
By: Miguel Carvalho, Helder Dias, Bruno Martins
Potential Business Impact:
Lets computers see tiny details in pictures better.
Vision-Language Models (VLMs) often struggle with tasks that require fine-grained image understanding, such as scene-text recognition or document analysis, due to perception limitations and visual fragmentation. To address these challenges, we introduce CropVLM, an external low-cost method for boosting performance that enables VLMs to dynamically "zoom in" on relevant image regions, enhancing their ability to capture fine details. CropVLM is trained with reinforcement learning, without human-labeled bounding boxes as a supervision signal and without expensive synthetic evaluations. The model is trained once and can be paired with both open-source and proprietary VLMs to improve their performance. Our approach delivers significant improvements on tasks that require high-resolution image understanding, notably on benchmarks that are out-of-domain for the target VLM, without modifying or fine-tuning the VLM, thus avoiding catastrophic forgetting.
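To make the pipeline concrete, here is a minimal sketch of what "pairing" an external crop model with a frozen VLM could look like at inference time. The interfaces (crop_model, vlm) are hypothetical stand-ins, not CropVLM's actual API, and the normalized bounding-box convention is an assumption.

```python
from PIL import Image

def cropvlm_answer(image_path, question, crop_model, vlm):
    """Hypothetical zoom-in pipeline: an external crop model proposes a
    region of interest, and the frozen VLM sees both the full image and
    the enlarged crop. All interfaces here are assumptions."""
    image = Image.open(image_path).convert("RGB")
    # 1. The crop model predicts a normalized box (x0, y0, x1, y1) in [0, 1]
    #    conditioned on the image and the question (assumed interface).
    x0, y0, x1, y1 = crop_model(image, question)
    w, h = image.size
    box = (int(x0 * w), int(y0 * h), int(x1 * w), int(y1 * h))
    # 2. Crop and upsample the region so fine details (e.g. scene text)
    #    occupy more of the VLM's input resolution.
    crop = image.crop(box).resize((w, h), Image.LANCZOS)
    # 3. The target VLM stays untouched: it simply receives the crop as an
    #    extra image alongside the original, so no fine-tuning is needed
    #    and catastrophic forgetting is avoided.
    return vlm(images=[image, crop], prompt=question)
```

Because the crop model is external and the VLM is only queried, the same trained cropper can in principle sit in front of either an open-source or a proprietary model.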
Similar Papers
Semantic-Clipping: Efficient Vision-Language Modeling with Semantic-Guided Visual Selection
CV and Pattern Recognition
Helps computers understand pictures better by focusing on important parts.
From Synthetic Scenes to Real Performance: Enhancing Spatial Reasoning in VLMs
CV and Pattern Recognition
Helps AI reason about where things are in pictures.
Efficient Few-Shot Learning in Remote Sensing: Fusing Vision and Vision-Language Models
CV and Pattern Recognition
Finds planes in pictures better, even blurry ones.