RT-VLM: Re-Thinking Vision Language Model with 4-Clues for Real-World Object Recognition Robustness
By: Junghyun Park, Tuan Anh Nguyen, Dugki Min
Potential Business Impact:
Teaches AI to see better in new places.
Real-world deployments often expose modern object recognition models to domain shifts that cause severe drops in accuracy. Such shifts encompass (i) variations in low-level image statistics, (ii) changes in object pose and viewpoint, (iii) partial occlusion, and (iv) visual confusion between adjacent classes. To mitigate this degradation, we introduce the Re-Thinking Vision Language Model (RT-VLM) framework. The foundation of this framework is a unique synthetic dataset generation pipeline that produces images annotated with "4-Clues": precise bounding boxes, class names, detailed object-level captions, and a comprehensive context-level caption for the entire scene. We then perform parameter-efficient supervised tuning of Llama 3.2 11B Vision Instruct on this resource. At inference time, a two-stage Re-Thinking scheme is executed: the model first emits its own four clues, then re-examines these responses as evidence and iteratively corrects them. Across robustness benchmarks that isolate individual domain shifts, RT-VLM consistently surpasses strong baselines. These findings indicate that integrating structured multimodal evidence with an explicit self-critique loop is a promising route toward reliable and transferable visual understanding.
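To make the described pipeline concrete, the sketch below illustrates one plausible shape for the "4-Clues" record and the two-stage Re-Thinking loop from the abstract. The FourClues dataclass, the generate() and parse_clues() helpers, and the prompt wording are all illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of the two-stage "Re-Thinking" inference loop described in the
# abstract. All names and prompts here are assumptions for illustration only.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class FourClues:
    boxes: List[Tuple[float, float, float, float]]  # bounding boxes (x1, y1, x2, y2)
    class_names: List[str]                           # one class label per box
    object_captions: List[str]                       # detailed caption per object
    context_caption: str                             # caption describing the whole scene


def generate(image, prompt: str) -> str:
    """Placeholder for a call to the fine-tuned Llama 3.2 11B Vision Instruct model."""
    raise NotImplementedError


def parse_clues(raw: str) -> FourClues:
    """Placeholder: parse the model's structured text output into a FourClues record."""
    raise NotImplementedError


def rethink(image, rounds: int = 1) -> FourClues:
    # Stage 1: the model emits its own four clues for the input image.
    clues = parse_clues(generate(
        image,
        "List bounding boxes, class names, per-object captions, and a scene-level caption."))

    # Stage 2: the clues are fed back as evidence and iteratively corrected.
    for _ in range(rounds):
        critique_prompt = (
            "Here are your previous answers:\n"
            f"boxes={clues.boxes}\nclasses={clues.class_names}\n"
            f"object captions={clues.object_captions}\nscene caption={clues.context_caption}\n"
            "Re-examine this evidence against the image and output corrected clues."
        )
        clues = parse_clues(generate(image, critique_prompt))
    return clues
```

In this reading, robustness comes from forcing the model to ground its final answer in its own structured evidence rather than a single forward pass; the number of correction rounds is a tunable assumption.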
Similar Papers
VLM-3D: End-to-End Vision-Language Models for Open-World 3D Perception
CV and Pattern Recognition
Helps self-driving cars see new things safely.
A Review of 3D Object Detection with Vision-Language Models
CV and Pattern Recognition
Lets computers see and name objects in 3D.
Look, Recite, Then Answer: Enhancing VLM Performance via Self-Generated Knowledge Hints
CV and Pattern Recognition
Helps computers identify plants correctly instead of guessing.