Stabilizing Information Flow Entropy: Regularization for Safe and Interpretable Autonomous Driving Perception
By: Haobo Yang, Shiyan Zhang, Zhuoyi Yang, and more
Potential Business Impact:
Helps self-driving cars notice when their sensor data looks wrong.
Deep perception networks in autonomous driving traditionally rely on data-intensive training regimes and post-hoc anomaly detection, often disregarding the information-theoretic constraints that govern stable information processing. We reconceptualize deep neural encoders as hierarchical communication chains that incrementally compress raw sensory inputs into task-relevant latent features. Within this framework, we establish two theoretically justified design principles for robust perception: (D1) smooth variation of mutual information between consecutive layers, and (D2) monotonic decay of latent entropy with network depth. Our analysis shows that, under realistic architectural assumptions, particularly blocks comprising repeated layers of similar capacity, enforcing smooth information flow (D1) naturally encourages entropy decay (D2), ensuring stable compression. Guided by these insights, we propose Eloss, a novel entropy-based regularizer designed as a lightweight, plug-and-play training objective. Rather than pursuing marginal accuracy gains, this approach represents a conceptual shift: it unifies information-theoretic stability with standard perception tasks, enabling explicit, principled detection of anomalous sensor inputs through entropy deviations. Experiments on large-scale 3D object detection benchmarks (KITTI and nuScenes) demonstrate that incorporating Eloss yields competitive or improved accuracy while dramatically sharpening sensitivity to anomalies, amplifying distribution-shift signals by up to two orders of magnitude. This stable information-compression perspective not only improves interpretability but also establishes a principled theoretical foundation for safer, more robust autonomous driving perception systems.
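The abstract does not spell out how Eloss is computed, but the two principles suggest penalties on per-layer entropy estimates. Below is a minimal PyTorch sketch of that idea; the entropy_estimate proxy, the discrete-difference penalties, and their equal weighting are all illustrative assumptions, with entropy differences standing in for the inter-layer mutual-information term in D1. This is not the authors' implementation.

```python
import torch

def entropy_estimate(z: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Illustrative proxy: normalize each sample's flattened activations into a
    # probability vector via softmax, then average the Shannon entropies over
    # the batch. The paper's Eloss may define latent entropy differently.
    p = torch.softmax(z.flatten(1), dim=1)
    return -(p * torch.log(p + eps)).sum(dim=1).mean()

def eloss(latents: list) -> torch.Tensor:
    # latents: per-layer encoder activations, ordered shallow -> deep (>= 3 layers).
    h = torch.stack([entropy_estimate(z) for z in latents])
    # D2: penalize any entropy increase between consecutive layers.
    decay_violation = torch.relu(h[1:] - h[:-1]).sum()
    # D1 (proxy): penalize abrupt changes in consecutive entropy differences,
    # using a discrete second difference as a smoothness measure.
    smoothness = (h[2:] - 2.0 * h[1:-1] + h[:-2]).pow(2).sum()
    return decay_violation + smoothness

# Hypothetical usage with dummy multi-scale encoder features:
feats = [torch.randn(4, 64, 32, 32), torch.randn(4, 128, 16, 16), torch.randn(4, 256, 8, 8)]
print(eloss(feats))
```

In training, such a regularizer would be added to the detection objective with a tuning weight (e.g. loss = det_loss + lam * eloss(feats)); at test time, unusually large per-layer entropy deviations would flag anomalous inputs, consistent with the detection-by-entropy-deviation claim above.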
Similar Papers
Differentiable Entropy Regularization for Geometry and Neural Networks
Machine Learning (CS)
Makes computers learn faster and better.
Information-Theoretic Greedy Layer-wise Training for Traffic Sign Recognition
Machine Learning (CS)
Trains AI faster and with less memory.
Entropic Regularization in the Deep Linear Network
Neural and Evolutionary Computing
Makes computers learn faster and more accurately.