Distilling Stereo Networks for Performant and Efficient Leaner Networks
By: Rafia Rahim, Samuel Woerz, Andreas Zell
Potential Business Impact:
Makes 3D cameras see depth faster and better.
Knowledge distillation has been popular in vision for tasks such as classification and segmentation; however, little work has been done on distilling state-of-the-art stereo matching methods despite their wide range of applications. One reason is the inherent complexity of these networks: a typical stereo network is composed of multiple two- and three-dimensional modules. In this work, we systematically combine insights from state-of-the-art stereo methods with general knowledge-distillation techniques to develop a joint framework for distilling stereo networks, achieving competitive results with faster inference. Moreover, we show via a detailed empirical analysis that distilling knowledge from a stereo network requires careful design of the complete distillation pipeline, from the backbone to the right selection of distillation points and their corresponding loss functions. The resulting student networks are not only leaner and faster but also perform excellently. For instance, our student network outperforms performance-oriented methods such as PSMNet [1], CFNet [2], and LEAStereo [3] on the SceneFlow benchmark while being 8x, 5x, and 8x faster, respectively. Furthermore, among speed-oriented methods with inference times under 100 ms, our student networks outperform all tested methods. In addition, our student network shows better generalization capabilities when tested on unseen datasets such as ETH3D and Middlebury.
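The abstract describes blending ground-truth supervision with teacher imitation at selected distillation points. A minimal sketch of that idea follows; the smooth-L1 loss shapes, the single distillation point, and the blending weight `alpha` are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    """Smooth-L1 (Huber-style) distance, commonly used for disparity regression."""
    diff = np.abs(pred - target)
    return np.where(diff < beta, 0.5 * diff**2 / beta, diff - 0.5 * beta).mean()

def distillation_loss(student_disp, teacher_disp, gt_disp, alpha=0.5):
    """Blend ground-truth supervision with a teacher-imitation term.

    `alpha` and the choice of a single output-level distillation point are
    hypothetical here; the paper selects multiple points and losses empirically.
    """
    supervised = smooth_l1(student_disp, gt_disp)       # ground-truth term
    distilled = smooth_l1(student_disp, teacher_disp)   # teacher term
    return alpha * supervised + (1.0 - alpha) * distilled

# Toy example on a 4x4 disparity map: a teacher close to ground truth
# and a student further off.
gt = np.full((4, 4), 10.0)
teacher = gt + 0.2
student = gt + 1.5
loss = distillation_loss(student, teacher, gt)
```

In practice such a loss would be applied at several intermediate feature maps as well as the final disparity, which is the "selection of distillation points" the abstract refers to.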
Similar Papers
JointDistill: Adaptive Multi-Task Distillation for Joint Depth Estimation and Scene Segmentation
Machine Learning (CS)
Teaches cars to see and understand roads better.
Efficient Knowledge Distillation via Curriculum Extraction
Machine Learning (CS)
Makes small computers learn like big ones faster.
MIDAS: Modeling Ground-Truth Distributions with Dark Knowledge for Domain Generalized Stereo Matching
CV and Pattern Recognition
Makes 3D pictures from two camera images.