SignRAG: A Retrieval-Augmented System for Scalable Zero-Shot Road Sign Recognition
By: Minghao Zhu, Zhihao Zhang, Anmol Sidhu, and more
Potential Business Impact:
Helps cars identify any road sign, even new ones.
Automated road sign recognition is a critical task for intelligent transportation systems, but traditional deep learning methods struggle with the sheer number of sign classes and the impracticality of creating exhaustive labeled datasets. This paper introduces a novel zero-shot recognition framework that adapts the Retrieval-Augmented Generation (RAG) paradigm to address this challenge. Our method first uses a Vision Language Model (VLM) to generate a textual description of a sign from an input image. This description is used to retrieve a small set of the most relevant sign candidates from a vector database of reference designs. Subsequently, a Large Language Model (LLM) reasons over the retrieved candidates to make a final, fine-grained recognition. We validate this approach on a comprehensive set of 303 regulatory signs from the Ohio MUTCD. Experimental results demonstrate the framework's effectiveness, achieving 95.58% accuracy on ideal reference images and 82.45% on challenging real-world road data. This work demonstrates the viability of RAG-based architectures for creating scalable and accurate systems for road sign recognition without task-specific training.
Similar Papers
M4-RAG: A Massive-Scale Multilingual Multi-Cultural Multimodal RAG
Computation and Language
Helps computers answer questions about pictures in many languages.
RouteRAG: Efficient Retrieval-Augmented Generation from Text and Graph via Reinforcement Learning
Computation and Language
Lets computers learn from text and links.
Zero-Shot Vehicle Model Recognition via Text-Based Retrieval-Augmented Generation
CV and Pattern Recognition
Identifies car makes and models without retraining.