Enhancing Vision Language Models with Logic Reasoning for Situational Awareness
By: Pavana Pradeep, Krishna Kant, Suya Yu
Potential Business Impact:
Helps computers understand what's happening in videos.
Vision-Language Models (VLMs) offer the ability to generate high-level, interpretable descriptions of complex activities from images and videos, making them valuable for situational awareness (SA) applications. In such settings, the focus is on identifying infrequent but significant events with high reliability and accuracy, while also extracting fine-grained details and assessing recognition quality. In this paper, we propose an approach that integrates VLMs with traditional computer vision methods through explicit logic reasoning to enhance SA in three key ways: (a) extracting fine-grained event details, (b) employing an intelligent fine-tuning (FT) strategy that achieves substantially higher accuracy than uninformed selection, and (c) generating justifications for VLM outputs during inference. We demonstrate that our intelligent FT mechanism improves accuracy and provides a valuable means, during inference, to either confirm the validity of the VLM output or indicate why it may be questionable.
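The abstract does not spell out the method's internals, but the general idea of cross-checking VLM outputs with traditional computer vision through explicit logic rules can be illustrated with a minimal, hypothetical Python sketch. All function names, labels, and thresholds below are placeholders introduced for illustration, not the paper's actual interface.

```python
# Hypothetical sketch: cross-checking a VLM scene description against a
# conventional object detector with simple logical rules, so the VLM output
# can be confirmed or flagged as questionable. Placeholder code only.

from dataclasses import dataclass


@dataclass
class Detection:
    label: str
    confidence: float


def vlm_describe(frame) -> str:
    """Placeholder for a VLM call returning a natural-language description."""
    return "a person is running across the road while a car approaches"


def detect_objects(frame) -> list[Detection]:
    """Placeholder for a traditional CV detector (e.g., an off-the-shelf model)."""
    return [Detection("person", 0.93), Detection("car", 0.88)]


def check_consistency(description: str, detections: list[Detection],
                      min_conf: float = 0.5) -> tuple[bool, list[str]]:
    """Explicit logic rule: every entity the VLM mentions should be supported
    by a sufficiently confident detection; otherwise flag it with a reason."""
    supported = {d.label for d in detections if d.confidence >= min_conf}
    mentioned = [w for w in ("person", "car", "bicycle", "fire") if w in description]
    issues = [f"VLM mentions '{m}' but no detector evidence found"
              for m in mentioned if m not in supported]
    return (not issues), issues


if __name__ == "__main__":
    frame = None  # stand-in for an actual video frame
    description = vlm_describe(frame)
    ok, issues = check_consistency(description, detect_objects(frame))
    print(f"VLM output: {description}")
    print("confirmed" if ok else f"questionable: {issues}")
```

In this toy setup, the logic layer plays the role the paper ascribes to justification at inference time: when the rules hold, the VLM output is confirmed; when they fail, the violated rule explains why the output may be questionable.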
Similar Papers
Beyond Generation: Multi-Hop Reasoning for Factual Accuracy in Vision-Language Models
Artificial Intelligence
Makes AI understand pictures and facts better.
VLMs Guided Interpretable Decision Making for Autonomous Driving
CV and Pattern Recognition
Helps self-driving cars make safer, clearer choices.
Spatial-aware Vision Language Model for Autonomous Driving
CV and Pattern Recognition
Helps self-driving cars see in 3D.