MultiEgo: A Multi-View Egocentric Video Dataset for 4D Scene Reconstruction
By: Bate Li, Houqiang Zhong, Zhengxue Cheng, et al.
Multi-view egocentric dynamic scene reconstruction holds significant research value for applications such as holographic documentation of social interactions. However, existing reconstruction datasets focus on static multi-view or single-view egocentric setups, lacking multi-view egocentric data for dynamic scene reconstruction. We therefore present MultiEgo, the first multi-view egocentric dataset for 4D dynamic scene reconstruction. The dataset comprises five canonical social-interaction scenes: meetings, performances, and a presentation. Each scene provides five authentic egocentric videos captured by participants wearing AR glasses. We design a hardware-based data acquisition system and processing pipeline that achieves sub-millisecond temporal synchronization across views, coupled with accurate pose annotations. Experimental validation demonstrates the practical utility and effectiveness of our dataset for free-viewpoint video (FVV) applications, establishing MultiEgo as a foundational resource for advancing multi-view egocentric dynamic scene reconstruction research.
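Sub-millisecond synchronization implies that frames from different AR glasses can be paired directly by hardware timestamp. The Python sketch below illustrates one way such pairing could work, matching each frame in a reference view to the nearest-timestamp frame in another view; the synthetic timestamps, the align_views helper, and the 0.5 ms tolerance are illustrative assumptions, not part of the MultiEgo release.

```python
# Hypothetical sketch: pairing frames across two egocentric views using
# per-frame hardware timestamps. The dataset's actual file layout and APIs
# are not specified here, so the timestamp arrays below are synthetic.
import numpy as np

TOLERANCE_S = 0.0005  # 0.5 ms pairing tolerance (assumed, not from the paper)

def align_views(ref_ts: np.ndarray, other_ts: np.ndarray,
                tol: float = TOLERANCE_S) -> list[tuple[int, int]]:
    """For each reference frame, find the nearest frame in another view.

    Returns (ref_index, other_index) pairs whose timestamps differ by at
    most `tol` seconds; reference frames with no match are skipped.
    """
    other_ts = np.sort(other_ts)
    pairs = []
    for i, t in enumerate(ref_ts):
        j = np.searchsorted(other_ts, t)  # insertion point in sorted array
        # The nearest timestamp is one of the two neighbors of the
        # insertion point; keep only indices that are in bounds.
        candidates = [k for k in (j - 1, j) if 0 <= k < len(other_ts)]
        best = min(candidates, key=lambda k: abs(other_ts[k] - t))
        if abs(other_ts[best] - t) <= tol:
            pairs.append((i, best))
    return pairs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fps, n = 30.0, 300
    # View A: nominal 30 fps clock; View B: same clock with ~0.2 ms jitter,
    # mimicking a hardware-synchronized capture rig.
    ts_a = np.arange(n) / fps
    ts_b = ts_a + rng.normal(0.0, 0.0002, size=n)
    matched = align_views(ts_a, ts_b)
    print(f"matched {len(matched)}/{n} frames within {TOLERANCE_S * 1e3:.1f} ms")
```

Any residual offset well below half the frame interval (about 16.7 ms at 30 fps) makes this nearest-neighbor pairing unambiguous, which is why hardware-level synchronization matters for multi-view reconstruction.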
Similar Papers
Understanding Dynamic Scenes in Ego Centric 4D Point Clouds (Computer Vision and Pattern Recognition): helps robots understand moving objects and how they interact.
OpenEgo: A Large-Scale Multimodal Egocentric Dataset for Dexterous Manipulation (Computer Vision and Pattern Recognition): teaches robots to copy human hand movements.
The CASTLE 2024 Dataset: Advancing the Art of Multimodal Understanding (Multimedia): shows how scenes look from many viewpoints.