Hoi! -- A Multimodal Dataset for Force-Grounded, Cross-View Articulated Manipulation

Published: December 4, 2025 | arXiv ID: 2512.04884v1

By: Tim Engelbracht, René Zurbrügg, Matteo Wohlrapp, and more

Potential Business Impact:

Teaches robots to grab and feel objects like humans.

Business Areas:
Motion Capture, Media and Entertainment, Video

We present a dataset for force-grounded, cross-view articulated manipulation that couples what is seen with what is done and what is felt during real human interaction. The dataset contains 3048 sequences across 381 articulated objects in 38 environments. Each object is operated under four embodiments: (i) a human hand, (ii) a human hand with a wrist-mounted camera, (iii) a handheld UMI gripper, and (iv) a custom Hoi! gripper, where the tool embodiment provides synchronized end-effector forces and tactile sensing. Our dataset offers a holistic view of interaction understanding from video, enabling researchers not only to evaluate how well methods transfer between human and robotic viewpoints, but also to investigate underexplored modalities such as force sensing and prediction.
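As a rough illustration of what one such sequence might bundle, the sketch below defines a hypothetical record type in Python. The field names, shapes, and the `has_force_data` helper are assumptions made for illustration only and are not the dataset's actual API or schema.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional
import numpy as np


class Embodiment(Enum):
    """The four embodiments described in the abstract."""
    HUMAN_HAND = "human_hand"
    HUMAN_HAND_WRIST_CAM = "human_hand_wrist_cam"
    UMI_GRIPPER = "umi_gripper"
    HOI_GRIPPER = "hoi_gripper"


@dataclass
class Sequence:
    """Hypothetical layout of one manipulation sequence (assumed, not the real schema)."""
    object_id: str                    # one of the 381 articulated objects
    environment_id: str               # one of the 38 environments
    embodiment: Embodiment            # which embodiment performed the interaction
    rgb_frames: np.ndarray            # (T, H, W, 3) video frames
    ee_forces: Optional[np.ndarray]   # (T, 3) end-effector forces, present only for tool embodiments
    tactile: Optional[np.ndarray]     # tactile readings, present only for tool embodiments


def has_force_data(seq: Sequence) -> bool:
    """Return True if this sequence carries synchronized force readings."""
    return seq.ee_forces is not None
```

A loader for the released data would presumably populate such records per sequence; filtering by `embodiment` would then let one compare how methods trained on human-viewpoint sequences transfer to the gripper viewpoints.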

Country of Origin
🇨🇭 Switzerland

Page Count
18 pages

Category
Computer Science:
Robotics