A Graph Attention Network-Based Framework for Reconstructing Missing LiDAR Beams

Published: December 13, 2025 | arXiv ID: 2512.12410v1

By: Khalfalla Awedat, Mohamed Abidalrekab, Mohammad El-Yabroudi

Potential Business Impact:

Restores 3D perception for self-driving vehicles when spinning LiDAR sensors lose entire vertical beams.

Business Areas:
Image Recognition Data and Analytics, Software

Vertical beam dropout in spinning LiDAR sensors, triggered by hardware aging, dust, snow, fog, or bright reflections, removes entire vertical slices from the point cloud and severely degrades 3D perception in autonomous vehicles. This paper proposes a Graph Attention Network (GAT)-based framework that reconstructs these missing vertical channels using only the current LiDAR frame, with no camera images or temporal information required. Each LiDAR sweep is represented as an unstructured spatial graph: points are nodes, and edges connect nearby points while preserving the original beam-index ordering. A multi-layer GAT learns adaptive attention weights over local geometric neighborhoods and directly regresses the missing elevation (z) values at dropout locations. Trained and evaluated on 1,065 raw KITTI sequences with simulated channel dropout, the method achieves an average height RMSE of 11.67 cm, with 87.98% of reconstructed points falling within a 10 cm error threshold. Inference takes 14.65 seconds per frame on a single GPU, and reconstruction quality remains stable across different neighborhood sizes k. These results show that a pure graph attention model operating solely on raw point-cloud geometry can effectively recover dropped vertical beams under realistic sensor degradation.
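To make the pipeline concrete, below is a minimal sketch of the approach the abstract describes: a k-NN graph built over a LiDAR sweep and a multi-layer GAT that regresses per-point z values. It uses PyTorch Geometric, and the class name, feature layout, and hyperparameters (e.g., `BeamInpaintingGAT`, `k=16`, 3 layers) are illustrative assumptions, not details from the paper.

```python
# Hedged sketch: k-NN graph over a sweep + multi-layer GAT z-regression.
# All names and hyperparameters here are assumptions for illustration.
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv, knn_graph


class BeamInpaintingGAT(nn.Module):
    """Multi-layer GAT that attends over local geometric neighborhoods
    and regresses the elevation (z) at dropped-beam locations."""

    def __init__(self, in_dim=4, hidden=64, heads=4, layers=3):
        super().__init__()
        convs, dim = [], in_dim
        for _ in range(layers):
            convs.append(GATConv(dim, hidden, heads=heads, concat=True))
            dim = hidden * heads  # concatenated attention heads
        self.convs = nn.ModuleList(convs)
        self.head = nn.Linear(dim, 1)  # scalar z regression per node

    def forward(self, x, edge_index):
        for conv in self.convs:
            x = torch.relu(conv(x, edge_index))
        return self.head(x).squeeze(-1)


# Node features (assumed): planar coordinates, normalized beam index, and a
# dropout mask. The z value of dropped points is the regression target, so
# it is excluded from the input features.
points = torch.randn(2048, 2)                       # x, y coordinates
beam_idx = torch.randint(0, 64, (2048, 1)).float() / 63.0
mask = (torch.rand(2048, 1) < 0.1).float()          # 1 = beam dropped
feats = torch.cat([points, beam_idx, mask], dim=1)

# Edges connect each point to its k nearest neighbors; k is the tunable
# neighborhood size the abstract reports robustness over.
edge_index = knn_graph(points, k=16)

model = BeamInpaintingGAT(in_dim=feats.size(1))
z_pred = model(feats, edge_index)                   # per-point z estimates
```

In training, the loss would be computed only at masked (dropped) locations against the ground-truth z, which is what the reported height RMSE and 10 cm threshold metrics evaluate.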

Page Count
6 pages

Category
Computer Science:
Computer Vision and Pattern Recognition