Talk2Event: Grounded Understanding of Dynamic Scenes from Event Cameras
By: Lingdong Kong, Dongyue Lu, Ao Liang, and more
Potential Business Impact:
Lets cars link language descriptions to objects in their surroundings.
Event cameras offer microsecond-level latency and robustness to motion blur, making them ideal for understanding dynamic environments. Yet, connecting these asynchronous streams to human language remains an open challenge. We introduce Talk2Event, the first large-scale benchmark for language-driven object grounding in event-based perception. Built from real-world driving data, we provide over 30,000 validated referring expressions, each enriched with four grounding attributes -- appearance, status, relation to viewer, and relation to other objects -- bridging spatial, temporal, and relational reasoning. To fully exploit these cues, we propose EventRefer, an attribute-aware grounding framework that dynamically fuses multi-attribute representations through a Mixture of Event-Attribute Experts (MoEE). Our method adapts to different modalities and scene dynamics, achieving consistent gains over state-of-the-art baselines in event-only, frame-only, and event-frame fusion settings. We hope our dataset and approach will establish a foundation for advancing multimodal, temporally-aware, and language-driven perception in real-world robotics and autonomy.
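The abstract describes the MoEE module only at a high level: four attribute experts (appearance, status, relation to viewer, relation to other objects) whose representations are fused dynamically. The snippet below is a minimal sketch of one plausible reading of that idea, a gating head that weights per-attribute expert outputs; all names (AttributeExpert, MoEEFusion, d_model) and design choices here are assumptions for illustration, not the authors' actual implementation.

```python
# Illustrative sketch of gated fusion over four attribute experts.
# All module/parameter names are hypothetical, not from the paper.
import torch
import torch.nn as nn

ATTRIBUTES = ["appearance", "status", "relation_to_viewer", "relation_to_objects"]

class AttributeExpert(nn.Module):
    """One expert per grounding attribute: a small MLP over a pooled
    language-conditioned scene feature (assumed design)."""
    def __init__(self, d_model: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_model),
            nn.GELU(),
            nn.Linear(d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class MoEEFusion(nn.Module):
    """Gated mixture of attribute experts: a linear gate predicts
    per-attribute weights, and the fused feature is the weighted sum
    of the expert outputs."""
    def __init__(self, d_model: int):
        super().__init__()
        self.experts = nn.ModuleList(AttributeExpert(d_model) for _ in ATTRIBUTES)
        self.gate = nn.Linear(d_model, len(ATTRIBUTES))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_model) pooled feature from the event/frame + text encoders
        weights = torch.softmax(self.gate(x), dim=-1)                   # (batch, 4)
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)   # (batch, 4, d_model)
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)          # (batch, d_model)

# Usage with random features
if __name__ == "__main__":
    fusion = MoEEFusion(d_model=256)
    feats = torch.randn(8, 256)
    print(fusion(feats).shape)  # torch.Size([8, 256])
```

The input-dependent gate is what would let such a module emphasize different attributes (e.g., motion status versus appearance) for different expressions and modalities, which is the behavior the abstract attributes to MoEE.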
Similar Papers
Visual Grounding from Event Cameras
CV and Pattern Recognition
Lets computers link language descriptions to moving objects.
Event-Driven Storytelling with Multiple Lifelike Humans in a 3D Scene
CV and Pattern Recognition
Makes computer characters move together in stories.
Exploring Spatial-Temporal Dynamics in Event-based Facial Micro-Expression Analysis
CV and Pattern Recognition
Helps computers see tiny, fast facial changes.