3EED: Ground Everything Everywhere in 3D
By: Rong Li, Yuhao Dong, Tianshuai Hu, and more
Potential Business Impact:
Lets robots find things outside using words.
Visual grounding in 3D is key for embodied agents to localize language-referred objects in open-world environments. However, existing benchmarks are limited by their indoor focus, single-platform constraints, and small scale. We introduce 3EED, a multi-platform, multi-modal 3D grounding benchmark featuring RGB and LiDAR data from vehicle, drone, and quadruped platforms. We provide over 128,000 objects and 22,000 validated referring expressions across diverse outdoor scenes -- 10x larger than existing datasets. We develop a scalable annotation pipeline combining vision-language model prompting with human verification to ensure high-quality spatial grounding. To support cross-platform learning, we propose platform-aware normalization and cross-modal alignment techniques, and establish benchmark protocols for in-domain and cross-platform evaluations. Our findings reveal significant performance gaps, highlighting the challenges and opportunities of generalizable 3D grounding. The 3EED dataset and benchmark toolkit are released to advance future research in language-driven 3D embodied perception.
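To make the idea of "platform-aware normalization" concrete, here is a minimal sketch of what mapping vehicle, drone, and quadruped point clouds into a shared canonical frame could look like. This is not the paper's released toolkit: the function name, the PLATFORM_SENSOR_HEIGHT values, and the three-step recipe (ground alignment, ego-centring, scale normalization) are illustrative assumptions, not values or code from 3EED.

```python
import numpy as np

# Hypothetical per-platform sensor heights above ground (metres).
# The actual 3EED calibration values are not reproduced here.
PLATFORM_SENSOR_HEIGHT = {
    "vehicle": 1.8,
    "drone": 30.0,
    "quadruped": 0.5,
}

def platform_aware_normalize(points: np.ndarray, platform: str) -> np.ndarray:
    """Map a LiDAR point cloud of shape (N, 3), given in a platform-specific
    ego frame, into a shared ground-aligned canonical frame.

    One plausible reading of "platform-aware normalization":
      1. shift z so that z = 0 is the ground plane, compensating for the very
         different sensor heights of the three platforms;
      2. recentre x/y on the scene so horizontal ranges are comparable;
      3. rescale by the cloud extent so downstream box regressors see
         similar coordinate magnitudes across platforms.
    """
    pts = points.astype(np.float64).copy()

    # 1. Ground alignment: in the sensor frame the ground sits near
    #    z = -sensor_height, so adding the height moves it to z = 0.
    pts[:, 2] += PLATFORM_SENSOR_HEIGHT[platform]

    # 2. Recentre the horizontal coordinates on the cloud centroid.
    pts[:, :2] -= pts[:, :2].mean(axis=0, keepdims=True)

    # 3. Isotropic scale normalization to a roughly unit extent.
    extent = np.abs(pts).max()
    if extent > 0:
        pts /= extent
    return pts
```

In practice the same transform (offset, centroid shift, scale factor) would have to be stored so that predicted 3D boxes can be mapped back into the original sensor frame for evaluation; the sketch above only covers the forward direction.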
Similar Papers
Grounding Beyond Detection: Enhancing Contextual Understanding in Embodied 3D Grounding
CV and Pattern Recognition
Helps robots find objects using words.
Dual Enhancement on 3D Vision-Language Perception for Monocular 3D Visual Grounding
CV and Pattern Recognition
Helps computers find objects using descriptions.
ChangingGrounding: 3D Visual Grounding in Changing Scenes
CV and Pattern Recognition
Robots find things in changing rooms using memory.