Robust Driving QA through Metadata-Grounded Context and Task-Specific Prompts
By: Seungjun Yu, Junsung Park, Youngsun Lim, and more
Potential Business Impact:
Helps self-driving cars understand driving situations better.
We present a two-phase vision-language QA system for autonomous driving that answers high-level perception, prediction, and planning questions. In Phase-1, a large multimodal language model (Qwen2.5-VL-32B) is conditioned on six-camera inputs, a short temporal window of frame history, and a chain-of-thought prompt with few-shot exemplars. A self-consistency ensemble (multiple sampled reasoning chains) further improves answer reliability. In Phase-2, we augment the prompt with nuScenes scene metadata (object annotations, ego-vehicle state, etc.) and category-specific question instructions (separate prompts for perception, prediction, and planning tasks). In experiments on a driving QA benchmark, our approach significantly outperforms the baseline Qwen2.5 models. For example, using 5 history frames and 10-shot prompting in Phase-1 yields 65.1% overall accuracy (vs. 62.61% with zero-shot); applying self-consistency raises this to 66.85%. Phase-2 achieves 67.37% overall. Notably, the system maintains 96% accuracy under severe visual corruption. These results demonstrate that carefully engineered prompts and contextual grounding can greatly enhance high-level driving QA with pretrained vision-language models.
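A minimal sketch of the Phase-1 self-consistency ensemble described in the abstract, assuming a hypothetical `query_vlm` wrapper around Qwen2.5-VL-32B and an "Answer:" extraction convention; the paper's actual prompts, sampling settings, and answer parser are not reproduced here.

```python
# Sketch of self-consistency: sample several reasoning chains and majority-vote
# over their final answers. query_vlm and extract_answer are illustrative
# stand-ins, not the authors' implementation.
from collections import Counter

def query_vlm(images, history, prompt, temperature=0.7):
    """Hypothetical wrapper around Qwen2.5-VL-32B: returns one sampled
    chain-of-thought reasoning string that ends in a final answer."""
    raise NotImplementedError  # replace with the actual model API call

def extract_answer(chain: str) -> str:
    """Assumed convention: the final answer follows an 'Answer:' marker."""
    return chain.rsplit("Answer:", 1)[-1].strip()

def self_consistency_answer(images, history, prompt, n_samples=5):
    """Sample n_samples reasoning chains and return the majority-vote answer."""
    answers = [
        extract_answer(query_vlm(images, history, prompt, temperature=0.7))
        for _ in range(n_samples)
    ]
    return Counter(answers).most_common(1)[0][0]
```

The same ensemble can wrap the Phase-2 prompt as well, where the `prompt` argument would additionally carry the nuScenes metadata and the task-specific instruction for the question's category.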
Similar Papers
Enhancing Vision-Language Models for Autonomous Driving through Task-Specific Prompting and Spatial Reasoning
CV and Pattern Recognition
Helps self-driving cars understand roads better.
Hierarchical Question-Answering for Driving Scene Understanding Using Vision-Language Models
CV and Pattern Recognition
Helps self-driving cars understand roads faster.
DriveQA: Passing the Driving Knowledge Test
CV and Pattern Recognition
Teaches self-driving cars all traffic rules.