Descriptor: Distance-Annotated Traffic Perception Question Answering (DTPQA)
By: Nikos Theodoridis, Tim Brophy, Reenu Mohandas, and more
Potential Business Impact:
Helps test how well self-driving cars see far-away objects.
The remarkable progress of Vision-Language Models (VLMs) on a variety of tasks has raised interest in their application to automated driving. However, for these models to be trusted in such a safety-critical domain, they must first possess robust perception capabilities: they must be able to understand a traffic scene, which is often highly complex, with many things happening simultaneously. Moreover, since critical objects and agents in traffic scenes are often far away, such systems need strong perception not only at close range (up to 20 meters) but also at long range (30+ meters). It is therefore important to evaluate the perception capabilities of these models in isolation from other skills such as reasoning or advanced world knowledge.

Distance-Annotated Traffic Perception Question Answering (DTPQA) is a Visual Question Answering (VQA) benchmark designed specifically for this purpose: it evaluates the perception systems of VLMs in traffic scenarios using trivial yet crucial questions relevant to driving decisions. It consists of two parts: a synthetic benchmark (DTP-Synthetic) created using a simulator, and a real-world benchmark (DTP-Real) built on top of existing images of real traffic scenes. Additionally, DTPQA includes distance annotations, i.e., how far the object in question is from the camera. More specifically, each DTPQA sample consists of (at least) (a) an image, (b) a question, (c) the ground-truth answer, and (d) the distance of the object in question, enabling analysis of how VLM performance degrades with increasing object distance. In this article, we provide the dataset itself along with the Python scripts used to create it, which can be used to generate additional data of the same kind.
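Because every sample bundles an image, a question, a ground-truth answer, and an object distance, a distance-binned accuracy analysis follows naturally from the schema. The sketch below is illustrative only: the field names (`image_path`, `distance_m`, etc.) and the `accuracy_by_distance` helper are assumptions for the sake of the example, not the dataset's actual API or the authors' released scripts.

```python
from dataclasses import dataclass

# Hypothetical record mirroring the four fields listed in the abstract;
# the field names in the released dataset may differ.
@dataclass
class DTPQASample:
    image_path: str    # (a) path to the traffic-scene image
    question: str      # (b) perception question about the scene
    answer: str        # (c) ground-truth answer
    distance_m: float  # (d) distance of the queried object from the camera, in meters


def accuracy_by_distance(samples, predictions, bin_edges=(0, 10, 20, 30, 40)):
    """Bucket samples by object distance and report per-bin accuracy,
    making degradation with increasing range visible."""
    # One (correct, total) counter per distance bin, plus an open-ended last bin.
    bins = {f"{lo}-{hi} m": [0, 0] for lo, hi in zip(bin_edges, bin_edges[1:])}
    bins[f"{bin_edges[-1]}+ m"] = [0, 0]
    for sample, pred in zip(samples, predictions):
        for lo, hi in zip(bin_edges, bin_edges[1:]):
            if lo <= sample.distance_m < hi:
                key = f"{lo}-{hi} m"
                break
        else:  # distance beyond the last edge
            key = f"{bin_edges[-1]}+ m"
        bins[key][0] += int(pred.strip().lower() == sample.answer.strip().lower())
        bins[key][1] += 1
    # None marks bins with no samples, so empty bins are not mistaken for 0% accuracy.
    return {k: (c / n if n else None) for k, (c, n) in bins.items()}


# Example usage with a single long-range sample:
samples = [DTPQASample("scene_001.png", "Is the traffic light green?", "yes", 35.0)]
print(accuracy_by_distance(samples, ["yes"]))  # the 30-40 m bin reports 1.0
```

Exact string match is used here as the simplest scoring rule for short ground-truth answers; a real evaluation might need answer normalization depending on how the benchmark phrases its questions.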
Similar Papers
Evaluating Small Vision-Language Models on Distance-Dependent Traffic Perception
CV and Pattern Recognition
Helps self-driving cars see far and near.
DriveQA: Passing the Driving Knowledge Test
CV and Pattern Recognition
Teaches self-driving cars all traffic rules.
Hierarchical Question-Answering for Driving Scene Understanding Using Vision-Language Models
CV and Pattern Recognition
Helps self-driving cars understand roads faster.