Perturbed State Space Feature Encoders for Optical Flow with Event Cameras
By: Gokul Raju Govinda Raju, Nikola Zubić, Marco Cannici, and more
Potential Business Impact:
Makes cameras see fast, even in the dark.
With their motion-responsive nature, event-based cameras offer significant advantages over traditional cameras for optical flow estimation. While deep learning has improved upon traditional methods, current neural networks used for event-based optical flow still face temporal and spatial reasoning limitations. We propose Perturbed State Space Feature Encoders (P-SSE) for multi-frame optical flow with event cameras to address these challenges. P-SSE adaptively processes spatiotemporal features with a large receptive field akin to Transformer-based methods, while maintaining the linear computational complexity characteristic of state space models (SSMs). However, the key innovation that enables the state-of-the-art performance of our model lies in our perturbation technique applied to the state dynamics matrix governing the SSM system. This approach significantly improves the stability and performance of our model. We integrate P-SSE into a framework that leverages bi-directional flows and recurrent connections, expanding the temporal context of flow prediction. Evaluations on the DSEC-Flow and MVSEC datasets showcase P-SSE's superiority, with 8.48% and 11.86% improvements in end-point error (EPE), respectively.
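To make the SSM terminology concrete, the sketch below runs a diagonal linear state-space recurrence x_t = A·x_{t-1} + B·u_t, y_t = C·x_t, whose cost is linear in sequence length, and applies a small random perturbation to the state dynamics matrix A. This is only an illustrative toy, not the paper's actual P-SSE encoder: the function name `ssm_scan`, the `perturb` parameter, and the clipping used to keep the dynamics stable are all assumptions for the example.

```python
import numpy as np

def ssm_scan(u, A_diag, B, C, perturb=0.0, rng=None):
    """Diagonal linear SSM recurrence with an optionally perturbed
    state dynamics matrix (illustrative sketch, not the paper's method).

    x_t = A * x_{t-1} + B * u_t      (elementwise, since A is diagonal)
    y_t = C . x_t
    """
    rng = np.random.default_rng(0) if rng is None else rng
    d = A_diag.shape[0]
    # Perturb the diagonal state dynamics matrix A (hypothetical scheme).
    A = A_diag + perturb * rng.standard_normal(d)
    # Clip to keep |A| < 1 so the recurrence stays stable (assumption).
    A = np.clip(A, -0.99, 0.99)
    x = np.zeros(d)
    ys = []
    for u_t in u:              # one pass over the sequence: O(T * d)
        x = A * x + B * u_t    # state update
        ys.append(C @ x)       # scalar readout per time step
    return np.array(ys)
```

Because A is diagonal, each step is O(d) elementwise work, which is the linear-complexity property the abstract contrasts with the quadratic cost of Transformer attention.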
Similar Papers
Spatio-Temporal State Space Model For Efficient Event-Based Optical Flow
CV and Pattern Recognition
Makes robots see fast and move smoothly.
Nonlinear Motion-Guided and Spatio-Temporal Aware Network for Unsupervised Event-Based Optical Flow
CV and Pattern Recognition
Helps cameras see fast, jerky movements better.
EDmamba: Rethinking Efficient Event Denoising with Spatiotemporal Decoupled SSMs
CV and Pattern Recognition
Cleans up blurry camera images super fast.