Accuracy Does Not Guarantee Human-Likeness in Monocular Depth Estimators
By: Yuki Kubota, Taiki Fukiage
Potential Business Impact:
Makes computers see depth like people do.
Monocular depth estimation is a fundamental capability for real-world applications such as autonomous driving and robotics. Although deep neural networks (DNNs) have achieved superhuman accuracy on physics-based benchmarks, a key challenge remains: aligning model representations with human perception, a promising strategy for enhancing model robustness and interpretability. Research in object recognition has revealed a complex trade-off between model accuracy and human-like behavior, raising the question of whether a similar divergence exists in depth estimation, particularly for natural outdoor scenes, where benchmarks rely on sensor-based ground truth rather than human perceptual estimates. In this study, we systematically investigated the relationship between model accuracy and human similarity across 69 monocular depth estimators using the KITTI dataset. To dissect the structure of error patterns on a factor-by-factor basis, we applied affine fitting to decompose prediction errors into interpretable components. Intriguingly, our results reveal that, while humans and DNNs share certain estimation biases (positive error correlations), there is a distinct trade-off between model accuracy and human similarity. This finding indicates that improving accuracy does not necessarily lead to more human-like behavior, underscoring the need for multifaceted, human-centric evaluations beyond traditional accuracy.
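To make the affine-fitting decomposition concrete, here is a minimal NumPy sketch, assuming a simple per-scene least-squares scale-and-shift fit; the function names, synthetic data, and formulation are illustrative assumptions, not the authors' released code. It fits an affine transform of a predicted depth map to ground truth, treats the remaining residual as the error component, and correlates residuals between two estimators (e.g., a DNN and a human observer) to measure the kind of shared bias the abstract calls a positive error correlation.

import numpy as np

def affine_fit(pred, gt):
    """Least-squares affine fit: find scale a and shift b minimizing
    ||a * pred + b - gt||^2; return the coefficients and the residual."""
    A = np.stack([pred, np.ones_like(pred)], axis=1)   # design matrix [pred, 1]
    (a, b), *_ = np.linalg.lstsq(A, gt, rcond=None)    # solve for scale and shift
    residual = (a * pred + b) - gt                     # error left after affine correction
    return a, b, residual

def error_correlation(pred_model, pred_human, gt):
    """Correlate per-point residuals of two estimators on the same scenes."""
    _, _, res_model = affine_fit(pred_model, gt)
    _, _, res_human = affine_fit(pred_human, gt)
    return np.corrcoef(res_model, res_human)[0, 1]

# Toy usage with synthetic depths in meters (a KITTI-like range); a positive
# correlation indicates shared biases, in the spirit of the paper's analysis.
rng = np.random.default_rng(0)
gt = rng.uniform(5.0, 80.0, size=200)
shared_bias = 0.05 * (gt - gt.mean())                  # bias common to both estimators
model = 1.1 * gt - 2.0 + shared_bias + rng.normal(0, 1.0, gt.shape)
human = 0.9 * gt + 3.0 + shared_bias + rng.normal(0, 1.0, gt.shape)
print(f"residual correlation: {error_correlation(model, human, gt):.2f}")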
Similar Papers
How to Evaluate Monocular Depth Estimation?
CV and Pattern Recognition
Makes computer depth guesses match human eyes.
Uncertainty Estimation by Human Perception versus Neural Models
Machine Learning (CS)
Makes AI more honest about what it knows.
UM-Depth: Uncertainty Masked Self-Supervised Monocular Depth Estimation with Visual Odometry
CV and Pattern Recognition
Makes self-driving cars see better in tricky spots.