Mirror Skin: In Situ Visualization of Robot Touch Intent on Robotic Skin
By: David Wagmann, Matti Krüger, Chao Wang, and more
Effective communication of robotic touch intent is a key factor in promoting safe and predictable physical human-robot interaction (pHRI). While intent communication has been widely studied, existing approaches lack the spatial specificity and semantic depth needed to convey robot touch actions. We present Mirror Skin, a cephalopod-inspired concept that provides high-resolution, mirror-like visual feedback on robotic skin. By mapping in-situ visual representations of a human's body parts onto the corresponding touch region on the robot, Mirror Skin communicates who will initiate touch, where it will occur, and when it is imminent. To inform the design of Mirror Skin, we conducted a structured design exploration with experts in virtual reality (VR), iteratively refining six key design dimensions. A subsequent controlled user study showed that Mirror Skin significantly improves accuracy and reduces response times in interpreting touch intent. These findings highlight the potential of visual feedback on robotic skin for communicating human-robot touch interactions.
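The abstract's core idea, mirroring the approaching body part onto the skin region about to be touched, and encoding imminence over time, can be illustrated with a minimal sketch. Everything here is hypothetical (the `TouchIntent` structure, the 8x8 skin-display grid, and the brightness ramp are illustrative assumptions, not the paper's implementation):

```python
from dataclasses import dataclass

@dataclass
class TouchIntent:
    body_part: str             # "who": which body part will make contact, e.g. "right_hand"
    region: tuple              # "where": (row, col) center of the touch region on the skin grid
    seconds_to_contact: float  # "when": expected lead time before contact

def render_plan(intent: TouchIntent, grid_shape=(8, 8)) -> dict:
    """Sketch: choose which skin-display cells show the mirrored body part.

    A 3x3 patch of cells around the planned contact point displays the
    body-part image; brightness ramps up as contact becomes imminent
    (a simple stand-in for the 'when' channel in the abstract).
    """
    rows, cols = grid_shape
    r0, c0 = intent.region
    # Brightness in [0, 1]: full when contact is <= 1 s away, fading with longer lead times.
    brightness = max(0.0, min(1.0, 1.0 / max(intent.seconds_to_contact, 1.0)))
    active = [
        (r, c)
        for r in range(rows)
        for c in range(cols)
        if abs(r - r0) <= 1 and abs(c - c0) <= 1  # cells within the 3x3 contact patch
    ]
    return {"image": intent.body_part, "cells": active, "brightness": brightness}
```

For example, `render_plan(TouchIntent("right_hand", (4, 4), 0.5))` activates the nine cells around (4, 4) at full brightness, signaling that a right-hand touch at that spot is imminent.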
Similar Papers
DexSkin: High-Coverage Conformable Robotic Skin for Learning Contact-Rich Manipulation
Robotics
High-coverage conformable skin lets robots learn contact-rich manipulation.
Social Gesture Recognition in spHRI: Leveraging Fabric-Based Tactile Sensing on Humanoid Robots
Robotics
Fabric-based tactile sensing lets humanoid robots recognize social touch gestures.
TACT: Humanoid Whole-body Contact Manipulation through Deep Imitation Learning with Tactile Modality
Robotics
A humanoid learns whole-body contact manipulation via imitation learning with a tactile modality.