GLD-Road: A global-local decoding road network extraction model for remote sensing images
By: Ligao Deng, Yupeng Deng, Yu Meng, and more
Potential Business Impact:
Maps roads faster and more accurately.
Road networks are crucial for mapping, autonomous driving, and disaster response. Manual annotation is costly, whereas deep learning enables efficient extraction. Existing methods fall into three groups: postprocessing-based (error-prone), global parallel (fast but prone to missing nodes), and local iterative (accurate but slow). We propose GLD-Road, a two-stage model that combines global efficiency with local precision. First, it detects road nodes across the whole image and links them via a Connect Module. Then, it iteratively repairs broken roads with local searches around dangling endpoints, drastically reducing computation. Experiments show GLD-Road outperforms state-of-the-art methods, improving APLS by 1.9% (City-Scale) and 0.67% (SpaceNet3). It also reduces road network retrieval time by 40% vs. Sat2Graph (global) and 92% vs. RNGDet++ (local). The experimental results are available at https://github.com/ucas-dlg/GLD-Road.
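To make the two-stage idea concrete, below is a minimal, hypothetical sketch of a global-then-local pipeline; it is not the authors' implementation, and the functions `connect_nodes` and `repair_breaks` are illustrative stand-ins for the learned Connect Module and the local iterative refinement. Stage 1 builds a graph from globally detected road nodes; stage 2 only searches small windows around degree-1 endpoints to bridge breaks, which is where the computational savings over purely iterative methods come from.

```python
# Hypothetical sketch of a global-local road graph pipeline (not GLD-Road itself).
import math
from itertools import combinations

def connect_nodes(nodes, max_dist=30.0):
    """Stage 1 (global): link node pairs closer than max_dist into edges,
    standing in for the learned Connect Module."""
    edges = set()
    for (i, a), (j, b) in combinations(enumerate(nodes), 2):
        if math.dist(a, b) <= max_dist:
            edges.add((i, j))
    return edges

def degree(edges, n_nodes):
    deg = [0] * n_nodes
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return deg

def repair_breaks(nodes, edges, search_radius=80.0):
    """Stage 2 (local): for each dangling endpoint (degree 1), search a local
    window for an endpoint of a different road component and bridge the gap,
    rather than re-scanning the whole image."""
    n = len(nodes)
    parent = list(range(n))

    def find(x):  # union-find root with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i, j in edges:
        parent[find(i)] = find(j)

    endpoints = [i for i, d in enumerate(degree(edges, n)) if d == 1]
    for i, j in combinations(endpoints, 2):
        if find(i) != find(j) and math.dist(nodes[i], nodes[j]) <= search_radius:
            edges.add((i, j))
            parent[find(i)] = find(j)
    return edges

if __name__ == "__main__":
    # Toy node coordinates (pixels); the gap between nodes 2 and 3 simulates
    # a broken road that the global stage fails to bridge.
    nodes = [(0, 0), (25, 0), (50, 0), (120, 0), (145, 0)]
    edges = connect_nodes(nodes)
    print("after global stage:", sorted(edges))   # {(0,1), (1,2), (3,4)}
    edges = repair_breaks(nodes, edges)
    print("after local repair:", sorted(edges))   # adds the bridging edge (2,3)
```

The key design point mirrored here is that the expensive global pass runs once, while the repair step touches only local neighborhoods of broken endpoints, keeping the second stage cheap.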
Similar Papers
Beyond Endpoints: Path-Centric Reasoning for Vectorized Off-Road Network Extraction
CV and Pattern Recognition
Maps roads in wild places automatically.
LDGNet: A Lightweight Difference Guiding Network for Remote Sensing Change Detection
CV and Pattern Recognition
Find changes in pictures faster, using less power.