A Humanoid Visual-Tactile-Action Dataset for Contact-Rich Manipulation
By: Eunju Kwon, Seungwon Oh, In-Chang Baek, and more
Potential Business Impact:
Robots learn to touch and grab soft things.
Contact-rich manipulation has become increasingly important in robot learning. However, previous studies on robot learning datasets have focused on rigid objects and underrepresented the diversity of pressure conditions for real-world manipulation. To address this gap, we present a humanoid visual-tactile-action dataset designed for manipulating deformable soft objects. The dataset was collected via teleoperation using a humanoid robot equipped with dexterous hands, capturing multi-modal interactions under varying pressure conditions. This work also motivates future research on models with advanced optimization strategies capable of effectively leveraging the complexity and diversity of tactile signals.
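The abstract implies per-timestep records that pair camera, tactile, and action streams under a labeled pressure condition. A minimal sketch of what one such record might look like follows; every field name, shape, and label here is an assumption for illustration, not the dataset's actual schema:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class VTASample:
    """Hypothetical visual-tactile-action record (assumed schema)."""
    rgb: np.ndarray        # camera frame, e.g. (H, W, 3) uint8
    tactile: np.ndarray    # fingertip pressure readings, e.g. (n_taxels,) float32
    action: np.ndarray     # commanded hand/arm targets, e.g. (dof,) float32
    pressure_level: str    # grasp-pressure condition label, e.g. "light" or "firm"

def make_dummy_sample(n_taxels: int = 16, dof: int = 26) -> VTASample:
    """Build a synthetic sample with plausible shapes for testing a pipeline."""
    return VTASample(
        rgb=np.zeros((480, 640, 3), dtype=np.uint8),
        tactile=np.zeros(n_taxels, dtype=np.float32),
        action=np.zeros(dof, dtype=np.float32),
        pressure_level="light",
    )

sample = make_dummy_sample()
print(sample.rgb.shape, sample.tactile.shape, sample.action.shape)
```

A structure like this makes it easy to iterate over teleoperated episodes and group samples by pressure condition when training a policy.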
Similar Papers
Hoi! -- A Multimodal Dataset for Force-Grounded, Cross-View Articulated Manipulation
Robotics
Teaches robots to grab and feel objects like humans.
TACT: Humanoid Whole-body Contact Manipulation through Deep Imitation Learning with Tactile Modality
Robotics
Robot learns to grab things by feeling them.
Humanoid Everyday: A Comprehensive Robotic Dataset for Open-World Humanoid Manipulation
Robotics
Teaches robots to do many new things.