Toward Automatic Safe Driving Instruction: A Large-Scale Vision Language Model Approach

Published: November 28, 2025 | arXiv ID: 2511.23311v1

By: Haruki Sakajo, Hiroshi Takato, Hiroshi Tsutsui, and more

Potential Business Impact:

Helps cars watch drivers and roads for safety.

Business Areas:
Autonomous Vehicles, Transportation

Large-scale Vision Language Models (LVLMs) exhibit advanced capabilities in tasks that require visual information, including object detection. These capabilities have promising applications in industrial domains such as autonomous driving. For example, LVLMs can generate safety-oriented descriptions of videos captured by road-facing cameras. However, ensuring comprehensive safety also requires monitoring driver-facing views to detect risky events, such as mobile phone use while driving. The ability to process synchronized inputs from both driver-facing and road-facing cameras is therefore necessary. In this study, we construct a dataset of such synchronized views, develop models, and evaluate the capabilities of LVLMs on this dataset. Our experimental results demonstrate that while pre-trained LVLMs have limited effectiveness, fine-tuned LVLMs can generate accurate, safety-aware driving instructions. Nonetheless, several challenges remain, particularly in detecting subtle or complex events in video. Our findings and error analysis provide insights that can contribute to the improvement of LVLM-based systems in this domain.
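The paper's pipeline is not reproduced here, but the core setup it describes, feeding a synchronized pair of road-facing and driver-facing frames to an LVLM with a safety-oriented prompt, can be sketched as follows. This is a minimal zero-shot illustration assuming a Hugging Face LLaVA-style model; the model ID, file names, and prompt wording are placeholders, not the authors' actual models or data.

```python
# Minimal sketch: prompting an off-the-shelf LVLM with a synchronized
# pair of camera views. Assumptions (not from the paper): the model
# llava-hf/llava-1.5-7b-hf and the illustrative frame file names.
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # placeholder model choice
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id)

# Frames sampled at the same timestamp from both cameras.
road_frame = Image.open("road_facing.jpg")
driver_frame = Image.open("driver_facing.jpg")

prompt = (
    "USER: <image>\n<image>\n"
    "The first image is a road-facing view and the second a driver-facing "
    "view captured at the same moment. Describe any risky events and give "
    "a short safety instruction to the driver. ASSISTANT:"
)

# One <image> token per supplied image; images are paired with the prompt.
inputs = processor(text=prompt, images=[road_frame, driver_frame],
                   return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```

The paper's fine-tuned setting would replace this zero-shot prompting with supervised training on instruction targets over such paired inputs, which is what the authors find closes most of the accuracy gap.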

Country of Origin
🇦🇺 Australia

Repos / Data Links

Page Count
12 pages

Category
Computer Science:
Computer Vision and Pattern Recognition