RobustSpring: Benchmarking Robustness to Image Corruptions for Optical Flow, Scene Flow and Stereo
By: Jenny Schmalfuss, Victor Oei, Lukas Mehl, and more
Potential Business Impact:
Tests how well computer vision handles bad pictures.
Standard benchmarks for optical flow, scene flow, and stereo vision algorithms generally focus on model accuracy rather than robustness to image corruptions like noise or rain. Hence, the resilience of models to such real-world perturbations is largely unquantified. To address this, we present RobustSpring, a comprehensive dataset and benchmark for evaluating robustness to image corruptions for optical flow, scene flow, and stereo models. RobustSpring applies 20 different image corruptions, including noise, blur, color changes, quality degradations, and weather distortions, in a time-, stereo-, and depth-consistent manner to the high-resolution Spring dataset, creating a suite of 20,000 corrupted images that reflect challenging conditions. RobustSpring enables comparisons of model robustness via a new corruption robustness metric. Integration with the Spring benchmark enables public two-axis evaluations of both accuracy and robustness. We benchmark a curated selection of initial models, observing that accurate models are not necessarily robust and that robustness varies widely by corruption type. RobustSpring is a new computer vision benchmark that treats robustness as a first-class citizen to foster models that combine accuracy with resilience. It will be available at https://spring-benchmark.org.
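The abstract does not spell out the corruption robustness metric, but a common way to score robustness in such benchmarks is the average relative error increase over corrupted inputs compared to clean inputs. The sketch below is purely illustrative and an assumption, not RobustSpring's actual metric: it uses endpoint error (EPE) for optical flow and the hypothetical helper names `endpoint_error` and `corruption_robustness`.

```python
import numpy as np

def endpoint_error(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean endpoint error (EPE) between predicted and ground-truth flow fields
    of shape (H, W, 2)."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

def corruption_robustness(clean_epe: float, corrupted_epes: dict[str, float]) -> float:
    """Illustrative robustness score (an assumption, not the paper's metric):
    average relative EPE increase across corruption types.
    0.0 means no degradation under corruption; larger values mean less robust."""
    increases = [(epe - clean_epe) / clean_epe for epe in corrupted_epes.values()]
    return float(np.mean(increases))

# Toy example: a prediction offset by a constant (0.1, 0.1) from ground truth,
# with per-corruption EPEs given as multiples of the clean EPE.
rng = np.random.default_rng(0)
gt = rng.standard_normal((4, 4, 2))
clean_pred = gt + 0.1
clean = endpoint_error(clean_pred, gt)           # 0.1 * sqrt(2) ≈ 0.1414
corrupted = {"noise": clean * 1.8, "blur": clean * 1.3, "rain": clean * 2.1}
score = corruption_robustness(clean, corrupted)  # mean of (0.8, 0.3, 1.1) ≈ 0.733
```

Scoring robustness relative to a model's own clean-input error, rather than in absolute terms, is what allows the two-axis accuracy-versus-robustness comparison the paper describes: an accurate model and a less accurate one can be compared on how gracefully each degrades.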
Similar Papers
Examining the Impact of Optical Aberrations to Image Classification and Object Detection Models
CV and Pattern Recognition
Makes computer vision better at seeing blurry pictures.
RobustGait: Robustness Analysis for Appearance Based Gait Recognition
CV and Pattern Recognition
Helps computers recognize people by how they walk.
DispBench: Benchmarking Disparity Estimation to Synthetic Corruptions
CV and Pattern Recognition
Tests how well AI sees depth in pictures.