Unleashing the Capabilities of Large Vision-Language Models for Intelligent Perception of Roadside Infrastructure
By: Luxuan Fu, Chong Liu, Bisheng Yang and more
Automated perception of urban roadside infrastructure is crucial for smart city management, yet general-purpose models often fail to capture the necessary fine-grained attributes and domain rules. While Large Vision-Language Models (VLMs) excel at open-world recognition, they frequently misinterpret complex facility states when judged against engineering standards, leading to unreliable performance in real-world applications. To address this, we propose a domain-adapted framework that transforms VLMs into specialized agents for intelligent infrastructure analysis. Our approach integrates a data-efficient fine-tuning strategy with a knowledge-grounded reasoning mechanism. Specifically, we leverage open-vocabulary fine-tuning on Grounding DINO to robustly localize diverse assets with minimal supervision, followed by LoRA-based adaptation of Qwen-VL for deep semantic attribute reasoning. To mitigate hallucinations and enforce professional compliance, we introduce a dual-modality Retrieval-Augmented Generation (RAG) module that dynamically retrieves authoritative industry standards and visual exemplars during inference. Evaluated on a comprehensive new dataset of urban roadside scenes, our framework achieves 58.9 mAP in detection and 95.5% attribute recognition accuracy, demonstrating a robust solution for intelligent infrastructure monitoring.
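To make the LoRA-based adaptation step concrete, below is a minimal sketch (not the authors' code) of attaching low-rank adapters to a Qwen-VL-style model with HuggingFace PEFT. The checkpoint name, target modules, and hyperparameters are illustrative assumptions; the abstract does not report them.

```python
# Minimal sketch of LoRA adaptation for attribute reasoning.
# All model names and hyperparameters below are assumptions for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "Qwen/Qwen-VL-Chat"  # assumed checkpoint; the paper does not specify
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# LoRA freezes the base weights and trains small low-rank adapters,
# which keeps fine-tuning data- and memory-efficient.
lora_cfg = LoraConfig(
    r=16,                       # adapter rank (assumed)
    lora_alpha=32,              # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection layers in Qwen (assumed)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only a small fraction of weights are trainable
```

The adapted model would then be trained on image-instruction pairs describing facility attributes, with the frozen backbone providing the open-world visual grounding.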