M2S-RoAD: Multi-Modal Semantic Segmentation for Road Damage Using Camera and LiDAR Data
By: Tzu-Yun Tseng, Hongyu Lyu, Josephine Li and more
Potential Business Impact:
Helps cars spot damaged roads in rural areas.
Road damage can create safety and comfort challenges for both human drivers and autonomous vehicles (AVs). This damage is particularly prevalent in rural areas due to less frequent surveying and maintenance of roads. Automated detection of pavement deterioration can be used as an input to AVs and driver assistance systems to improve road safety. Current research in this field has predominantly focused on urban environments, driven largely by public datasets, while rural areas have received significantly less attention. This paper introduces M2S-RoAD, a dataset for the semantic segmentation of different classes of road damage. M2S-RoAD was collected in various towns across New South Wales, Australia, and labelled for semantic segmentation to identify nine distinct types of road damage. This dataset will be released upon the acceptance of the paper.
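The title indicates the dataset pairs camera and LiDAR data for semantic segmentation. As a minimal sketch (not the paper's actual pipeline), one common way to combine the two modalities is early fusion: project the LiDAR points into the image plane as a depth channel and stack it with the RGB channels. All shapes, the dummy data, and the class count (nine damage classes plus background) are illustrative assumptions here.

```python
import numpy as np

# Hypothetical image resolution for illustration only
H, W = 256, 512

# Camera image, normalised to [0, 1]
rgb = np.random.rand(H, W, 3)

# LiDAR points already projected into the image plane as a sparse depth map
depth = np.random.rand(H, W, 1)

# Early fusion: stack modalities channel-wise into one (H, W, 4) input tensor
fused = np.concatenate([rgb, depth], axis=-1)

# A segmentation network would map this input to per-pixel class scores,
# e.g. the nine damage classes plus a background class -> (H, W, 10)
num_classes = 10
print(fused.shape)  # (256, 512, 4)
```

Late fusion (separate per-modality encoders merged at the feature level) is a common alternative; which strategy M2S-RoAD's baselines use is not stated in this abstract.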
Similar Papers
A Benchmark Dataset for Spatially Aligned Road Damage Assessment in Small Uncrewed Aerial Systems Disaster Imagery
CV and Pattern Recognition
Helps drones find damaged roads after disasters.
RoadSens-4M: A Multimodal Smartphone & Camera Dataset for Holistic Road-way Analysis
Robotics
Finds road bumps and holes using phone sensors.
RoadBench: A Vision-Language Foundation Model and Benchmark for Road Damage Understanding
Computational Engineering, Finance, and Science
Helps computers see road damage using words.