3D Semantic Segmentation for Post-Disaster Assessment
By: Nhut Le, Maryam Rahnemoonfar
The increasing frequency of natural disasters poses severe threats to human lives and leads to substantial economic losses. While 3D semantic segmentation is crucial for post-disaster assessment, existing deep learning models lack training and benchmark data specifically designed for post-disaster environments. To address this gap, we constructed a specialized 3D dataset from aerial footage captured by unmanned aerial vehicles (UAVs) over areas affected by Hurricane Ian (2022), employing Structure-from-Motion (SfM) and Multi-View Stereo (MVS) techniques to reconstruct 3D point clouds. We evaluated three state-of-the-art (SOTA) 3D semantic segmentation models, Fast Point Transformer (FPT), Point Transformer V3 (PTv3), and OA-CNNs, on this dataset, exposing significant limitations of existing methods in disaster-stricken regions. These findings underscore the urgent need for advancements in 3D segmentation techniques and the development of specialized 3D benchmark datasets to improve post-disaster scene understanding and response.
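The abstract does not spell out the evaluation protocol; below is a minimal sketch of the point-wise metric commonly reported for 3D semantic segmentation benchmarks (per-class IoU and mean IoU), assuming one label per point. The class count and the random labels are illustrative placeholders, not the dataset's actual label set or results.

# Minimal sketch: per-class IoU and mIoU over point-wise labels of a 3D point cloud.
# Class count and labels here are hypothetical, used only to show the computation.
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """Compute intersection-over-union for each class from point-wise labels."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        ious.append(inter / union if union > 0 else np.nan)  # NaN if class absent
    return np.array(ious)

# Random labels stand in for ground truth and a model's predictions on one scene.
num_classes = 5
rng = np.random.default_rng(0)
gt = rng.integers(0, num_classes, size=100_000)    # ground-truth point labels
pred = rng.integers(0, num_classes, size=100_000)  # predicted point labels
ious = per_class_iou(pred, gt, num_classes)
print("per-class IoU:", np.round(ious, 3))
print("mIoU:", np.nanmean(ious))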