Taming the Light: Illumination-Invariant Semantic 3DGS-SLAM

Published: November 28, 2025 | arXiv ID: 2511.22968v1

By: Shouhe Zhang, Dayong Ren, Sensen Song, and more

Potential Business Impact:

Lets robots see clearly in any light.

Business Areas:
Image Recognition, Data and Analytics, Software

Extreme exposure degrades both 3D map reconstruction and semantic segmentation accuracy, which is particularly detrimental to tightly-coupled systems. To achieve illumination invariance, we propose a novel semantic SLAM framework with two designs. First, the Intrinsic Appearance Normalization (IAN) module proactively disentangles the scene's intrinsic properties, such as albedo, from transient lighting. By learning a standardized, illumination-invariant appearance model, it assigns a stable, consistent color representation to each Gaussian primitive. Second, the Dynamic Radiance Balancing Loss (DRB-Loss) reactively handles frames with extreme exposure. It activates only when an image's exposure is poor, operating directly on the radiance field to guide targeted optimization. This prevents error accumulation under extreme lighting without compromising performance under normal conditions. The synergy between IAN's proactive invariance and DRB-Loss's reactive correction makes the system highly robust to illumination changes. Evaluations on public datasets demonstrate state-of-the-art performance in camera tracking, map quality, and semantic and geometric accuracy.
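The two mechanisms described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the albedo/lighting decomposition, the exposure metric, and the gating threshold are all assumptions chosen to show the *shape* of the idea — a per-Gaussian intrinsic color that stays fixed across exposure changes, and a loss term that switches on only for poorly exposed frames.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- IAN idea (sketch): each Gaussian stores an illumination-invariant
# albedo; the observed color is that albedo modulated by a transient
# per-frame lighting gain, so the stored albedo stays stable even when
# the frame's exposure varies. All values here are hypothetical. ---
albedo = rng.uniform(0.2, 0.8, size=(100, 3))  # per-Gaussian intrinsic color
frame_lighting = 1.6                            # transient per-frame gain
observed = np.clip(albedo * frame_lighting, 0.0, 1.0)

def exposure_score(image):
    """Hypothetical exposure metric: fraction of pixels that are
    under- or over-exposed (near 0 or near 1)."""
    return float(np.mean(image < 0.05) + np.mean(image > 0.95))

def drb_loss(rendered_radiance, target_radiance, image, threshold=0.3):
    """Sketch of the gating behaviour attributed to DRB-Loss: the term
    is zero for well-exposed frames and penalises radiance-field error
    only when exposure is poor, so normal frames are untouched."""
    if exposure_score(image) < threshold:
        return 0.0  # well-exposed frame: loss stays inactive
    return float(np.mean((rendered_radiance - target_radiance) ** 2))
```

The gate is what prevents the extreme-exposure correction from perturbing well-exposed frames, matching the abstract's claim that DRB-Loss "activates only when an image's exposure is poor."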

Page Count
6 pages

Category
Computer Science:
CV and Pattern Recognition