Visual Grounding from Event Cameras
By: Lingdong Kong, Dongyue Lu, Ao Liang, and more
Potential Business Impact:
Lets computers find moving objects described in plain language.
Event cameras capture changes in brightness with microsecond precision and remain reliable under motion blur and challenging illumination, offering clear advantages for modeling highly dynamic scenes. Yet their integration with natural language understanding has received little attention, leaving a gap in multimodal perception. To address this, we introduce Talk2Event, the first large-scale benchmark for language-driven object grounding using event data. Built on real-world driving scenarios, Talk2Event comprises 5,567 scenes, 13,458 annotated objects, and more than 30,000 carefully validated referring expressions. Each expression is enriched with four structured attributes -- appearance, status, relation to the viewer, and relation to surrounding objects -- that explicitly capture spatial, temporal, and relational cues. This attribute-centric design supports interpretable and compositional grounding, enabling analysis that moves beyond simple object recognition to contextual reasoning in dynamic environments. We envision Talk2Event as a foundation for advancing multimodal and temporally aware perception, with applications spanning robotics, human-AI interaction, and beyond.
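To make the attribute-centric annotation concrete, the sketch below shows one way a single annotated object and its referring expression could be represented in code. The class and field names (AttributeSet, GroundedObject, the [x, y, w, h] box format, and all example values) are illustrative assumptions for this sketch, not the benchmark's actual schema or API.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class AttributeSet:
    """The four structured attribute cues attached to each referring expression.
    Field names and example values are hypothetical."""
    appearance: str          # e.g., "white sedan with a roof rack"
    status: str              # e.g., "slowing down near the crosswalk"
    relation_to_viewer: str  # e.g., "directly ahead of the ego vehicle"
    relation_to_others: str  # e.g., "to the left of the cyclist"


@dataclass
class ReferringExpression:
    text: str                # the full natural-language expression
    attributes: AttributeSet # the structured cues it is annotated with


@dataclass
class GroundedObject:
    object_id: int
    bbox: List[float]                       # assumed [x, y, w, h] in the event frame
    expressions: List[ReferringExpression]  # validated expressions referring to this object


# Minimal usage example with made-up values.
obj = GroundedObject(
    object_id=7,
    bbox=[312.0, 140.5, 88.0, 62.0],
    expressions=[
        ReferringExpression(
            text="the white sedan slowing down just ahead of us, left of the cyclist",
            attributes=AttributeSet(
                appearance="white sedan with a roof rack",
                status="slowing down near the crosswalk",
                relation_to_viewer="directly ahead of the ego vehicle",
                relation_to_others="to the left of the cyclist",
            ),
        )
    ],
)
print(obj.expressions[0].attributes.status)
```

Keeping the four attributes as separate fields, rather than folding them into the free-form expression text, is what enables the compositional and interpretable analysis described above, since grounding performance can be broken down per attribute.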
Similar Papers
Exploring Spatial-Temporal Dynamics in Event-based Facial Micro-Expression Analysis
CV and Pattern Recognition
Helps computers see tiny, fast facial changes.
Exploring The Missing Semantics In Event Modality
CV and Pattern Recognition
Helps cameras see objects even in fast motion.
Event Camera Guided Visual Media Restoration & 3D Reconstruction: A Survey
CV and Pattern Recognition
Improves blurry videos and 3D pictures.