Ego-EXTRA: video-language Egocentric Dataset for EXpert-TRAinee assistance
By: Francesco Ragusa, Michele Mazzamuto, Rosario Forte, and more
Potential Business Impact:
Helps wearable AI assistants learn to guide people by watching and talking.
We present Ego-EXTRA, a video-language Egocentric Dataset for EXpert-TRAinee assistance. Ego-EXTRA features 50 hours of unscripted egocentric videos of subjects performing procedural activities (the trainees) while being guided by real-world experts, who provide feedback and answer specific questions in natural language. Following a "Wizard of Oz" data collection paradigm, the expert plays the role of a wearable intelligent assistant, observing the trainee's activities exclusively from their egocentric point of view, answering the trainee's questions, and proactively offering suggestions during the procedures. This unique data collection protocol enables Ego-EXTRA to capture high-quality dialogue in which expert-level feedback is provided to the trainee. Two-way dialogues between experts and trainees are recorded, transcribed, and used to create a novel benchmark comprising more than 15k high-quality Visual Question-Answer sets, which we use to evaluate Multimodal Large Language Models. The results show that Ego-EXTRA is challenging and highlight the limitations of current models when used to provide expert-level assistance to the user. The Ego-EXTRA dataset is publicly available to support the benchmarking of egocentric video-language assistants: https://fpv-iplab.github.io/Ego-EXTRA/.
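The benchmark described in the abstract implies a simple evaluation loop: each Visual Question-Answer set pairs an egocentric video clip with a trainee question and an expert answer, and a multimodal model is scored on how well its answer matches the expert's. Below is a minimal, hypothetical sketch of such a loop in Python. The file name `ego_extra_vqa.json`, the field names (`video`, `question`, `answer`), the `model_answer` stub, and the exact-match metric are all assumptions for illustration, not the dataset's actual schema or the authors' evaluation protocol.

```python
import json

def model_answer(video_path: str, question: str) -> str:
    """Hypothetical stub: replace with a call to the multimodal LLM under test."""
    return "not implemented"

def exact_match(pred: str, gold: str) -> bool:
    """Crude normalized string comparison; real benchmarks typically use softer metrics."""
    return pred.strip().lower() == gold.strip().lower()

# Assumed file layout: a JSON list of {"video", "question", "answer"} records.
with open("ego_extra_vqa.json") as f:
    vqa_sets = json.load(f)

correct = 0
for item in vqa_sets:
    pred = model_answer(item["video"], item["question"])
    correct += exact_match(pred, item["answer"])

print(f"Exact-match accuracy: {correct / len(vqa_sets):.2%}")
```

In practice, free-form expert answers rarely match a prediction verbatim, so a softer metric (e.g., an LLM-based or embedding-based similarity score) would likely replace `exact_match`; the loop structure stays the same.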
Similar Papers
IndEgo: A Dataset of Industrial Scenarios and Collaborative Work for Egocentric Assistants
CV and Pattern Recognition
Helps robots learn to do factory jobs.
Perceiving and Acting in First-Person: A Dataset and Benchmark for Egocentric Human-Object-Human Interactions
CV and Pattern Recognition
AI learns to help people by watching and listening.
EgoX: Egocentric Video Generation from a Single Exocentric Video
CV and Pattern Recognition
Turns normal videos into your own first-person view.