DarkEQA: Benchmarking Vision-Language Models for Embodied Question Answering in Low-Light Indoor Environments
By: Yohan Park, Hyunwoo Ha, Wonjun Jo, and more
Vision-Language Models (VLMs) are increasingly adopted as central reasoning modules for embodied agents. Existing benchmarks evaluate their capabilities under ideal, well-lit conditions, yet robust 24/7 operation demands performance under a wide range of visual degradations, including low-light conditions at night or in dark environments, a core necessity that has been largely overlooked. To address this underexplored challenge, we present DarkEQA, an open-source benchmark for evaluating perceptual primitives relevant to Embodied Question Answering (EQA) under multi-level low-light conditions. DarkEQA isolates the perception bottleneck by evaluating question answering from egocentric observations under controlled degradations, enabling attributable robustness analysis. A key design feature of DarkEQA is its physical fidelity: visual degradations are modeled in linear RAW space, simulating a physics-based illumination drop and sensor noise, followed by an ISP-inspired rendering pipeline. We demonstrate the utility of DarkEQA by evaluating a wide range of state-of-the-art VLMs and Low-Light Image Enhancement (LLIE) models. Our analysis systematically reveals VLMs' limitations when operating under these challenging visual conditions. Our code and benchmark dataset will be released upon acceptance.
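To make the degradation pipeline described in the abstract concrete, the sketch below darkens an image in an approximate linear space, injects Poisson shot noise and Gaussian read noise, and re-renders with an sRGB gamma as a stand-in for the ISP. This is a minimal illustration only; the function names (`darken`, `srgb_to_linear`) and the noise parameters are assumptions, not the DarkEQA implementation.

```python
# Minimal sketch of a physics-inspired low-light degradation pipeline.
# Assumes a shot + read noise sensor model and an sRGB transfer curve
# as a proxy for the ISP; all names and values are illustrative.
import numpy as np

def srgb_to_linear(img_uint8):
    """Approximate inverse sRGB transfer to get a linear-light proxy for RAW."""
    x = img_uint8.astype(np.float64) / 255.0
    return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(lin):
    """Forward sRGB transfer (ISP-style gamma step) and 8-bit quantization."""
    lin = np.clip(lin, 0.0, 1.0)
    enc = np.where(lin <= 0.0031308, lin * 12.92, 1.055 * lin ** (1.0 / 2.4) - 0.055)
    return (enc * 255.0 + 0.5).astype(np.uint8)

def darken(img_uint8, illum_scale=0.05, full_well=1000.0, read_sigma=2.0, rng=None):
    """Simulate an illumination drop plus sensor noise in linear space.

    illum_scale : fraction of the original scene illumination that remains
    full_well   : photon count mapped to linear value 1.0 (shot-noise strength)
    read_sigma  : std. dev. of additive read noise, in electrons
    """
    rng = np.random.default_rng() if rng is None else rng
    lin = srgb_to_linear(img_uint8) * illum_scale                 # illumination drop
    photons = rng.poisson(lin * full_well).astype(np.float64)     # shot noise
    photons += rng.normal(0.0, read_sigma, size=photons.shape)    # read noise
    return linear_to_srgb(photons / full_well)                    # ISP-inspired rendering

if __name__ == "__main__":
    frame = np.full((4, 4, 3), 200, dtype=np.uint8)   # toy "well-lit" frame
    for level in (0.20, 0.05, 0.01):                  # multi-level low light
        print(level, darken(frame, illum_scale=level).mean())
```

A multi-level benchmark in this spirit would sweep `illum_scale` (and possibly the noise parameters) to produce progressively darker, noisier views of the same scene before posing the EQA-style questions.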
Similar Papers
EncQA: Benchmarking Vision-Language Models on Visual Encodings for Charts
CV and Pattern Recognition
Helps computers better understand charts and graphs.
EgoNight: Towards Egocentric Vision Understanding at Night with a Challenging Benchmark
CV and Pattern Recognition
Helps cameras see and understand things in the dark.
BridgeEQA: Virtual Embodied Agents for Real Bridge Inspections
CV and Pattern Recognition
Helps robots inspect bridges by answering questions.