Towards Low-Latency Event-based Obstacle Avoidance on a FPGA-Drone
By: Pietro Bonazzi, Christian Vogt, Michael Jost and more
Potential Business Impact:
Helps drones see and avoid crashes faster.
This work quantitatively evaluates the performance of event-based vision systems (EVS) against conventional RGB-based models for action prediction in collision avoidance on an FPGA accelerator. Our experiments demonstrate that the EVS model achieves a significantly higher effective frame rate (1 kHz) and lower temporal (−20 ms) and spatial (−20 mm) prediction errors compared to the RGB-based model, particularly when tested on out-of-distribution data. The EVS model also exhibits superior robustness in selecting optimal evasion maneuvers. In distinguishing between moving and stationary states, it achieves a 59 percentage point advantage in precision (78% vs. 19%) and a substantially higher F1 score (0.73 vs. 0.06), highlighting the susceptibility of the RGB model to overfitting. Further analysis across different combinations of spatial classes confirms the consistent performance of the EVS model on both test datasets. Finally, we evaluated the system end to end and measured a latency of approximately 2.14 ms, with event aggregation (1 ms) and inference on the processing unit (0.94 ms) accounting for the largest components. These results underscore the advantages of event-based vision for real-time collision avoidance and demonstrate its potential for deployment in resource-constrained environments.
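The abstract's latency budget is dominated by event aggregation: raw sensor events are binned into fixed 1 ms windows, which is what yields the reported 1 kHz effective frame rate. As a rough illustration of that step only, here is a minimal sketch of time-windowed event binning; the function name, signature, and frame representation are illustrative assumptions, not taken from the paper's implementation.

```python
# Hypothetical sketch of 1 ms event aggregation for an event camera stream.
# All names and the count-frame representation are illustrative assumptions;
# the paper's actual FPGA aggregation logic is not shown here.
from collections import defaultdict

def aggregate_events(events, window_us=1000, width=8, height=8):
    """Bin (t_us, x, y, polarity) events into per-window count frames.

    A 1000 us (1 ms) window corresponds to the 1 kHz effective frame
    rate mentioned in the abstract. Positive-polarity events increment
    a pixel; negative-polarity events decrement it.
    """
    frames = defaultdict(lambda: [[0] * width for _ in range(height)])
    for t_us, x, y, polarity in events:
        window = t_us // window_us  # integer window index
        frames[window][y][x] += 1 if polarity else -1
    return dict(frames)

# Toy stream: three events in the first millisecond, one in the second.
events = [(100, 3, 4, 1), (500, 3, 4, 1), (900, 7, 2, 0), (1500, 7, 2, 1)]
frames = aggregate_events(events)
```

Each completed window would then be handed to the downstream model, so the aggregation window itself sets a 1 ms floor on the end-to-end latency regardless of how fast inference runs.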
Similar Papers
RGB-Event Fusion with Self-Attention for Collision Prediction
Robotics
Helps robots avoid crashing into things.
Event-based vision for egomotion estimation using precise event timing
CV and Pattern Recognition
Helps robots see and move without getting lost.
EV-Flying: an Event-based Dataset for In-The-Wild Recognition of Flying Objects
CV and Pattern Recognition
Spots tiny flying things, even fast ones.