Quality-focused Active Adversarial Policy for Safe Grasping in Human-Robot Interaction
By: Chenghao Li, Razvan Beuran, Nak Young Chong
Potential Business Impact:
Robot learns to avoid grabbing human hands.
Vision-guided robot grasping methods based on Deep Neural Networks (DNNs) have achieved remarkable success in handling unknown objects, attributable to their powerful generalizability. However, this very generalizability leads such methods to recognize the human hand and its adjacent objects as graspable targets, compromising safety during Human-Robot Interaction (HRI). In this work, we propose the Quality-focused Active Adversarial Policy (QFAAP) to solve this problem. The first component is the Adversarial Quality Patch (AQP), wherein we design an adversarial quality patch loss and leverage a grasp dataset to optimize a patch with high quality scores. Next, we construct Projected Quality Gradient Descent (PQGD) and integrate it with the AQP; PQGD restricts optimization to the hand region within each real-time frame, endowing the AQP with fast adaptability to the human hand shape. Through AQP and PQGD, the hand becomes actively adversarial toward its surrounding objects, lowering their quality scores. Further setting the quality score of the hand itself to zero then reduces the grasping priority of both the hand and its adjacent objects, enabling the robot to grasp other objects away from the hand without emergency stops. We conduct extensive experiments on benchmark datasets and a cobot, showing the effectiveness of QFAAP. Our code and demo videos are available here: https://github.com/clee-jaist/QFAAP.
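The projected, region-restricted gradient descent described above can be illustrated with a minimal sketch. This is not the paper's implementation: the toy quality function, function names, and hyperparameters (`eps`, `alpha`, `steps`) are all assumptions chosen to show the mechanism, namely descending on the quality score while constraining the perturbation to the hand mask and projecting it back into an L-infinity ball.

```python
import numpy as np

def quality(img, w):
    # Toy stand-in for a grasp-quality head: a weighted sum of pixels.
    # Its gradient with respect to the image is simply w.
    return float(np.sum(w * img))

def pqgd(img, hand_mask, w, eps=0.1, alpha=0.02, steps=10):
    """Sketch of Projected Quality Gradient Descent (assumed form).

    Lowers the quality score by perturbing ONLY the hand region
    (hand_mask == 1), keeping the perturbation inside an
    L-infinity ball of radius eps.
    """
    delta = np.zeros_like(img)
    for _ in range(steps):
        grad = w                                   # d(quality)/d(img) for the toy score
        delta -= alpha * np.sign(grad) * hand_mask # descend, hand region only
        delta = np.clip(delta, -eps, eps)          # project into the eps-ball
    return np.clip(img + delta, 0.0, 1.0)          # keep a valid image
```

Pixels outside the hand mask are left untouched, so only the hand (and, through the downstream grasp network, objects it overlaps) would see its quality suppressed.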
Similar Papers
Attribute-Based Robotic Grasping with Data-Efficient Adaptation
Robotics
Teaches robots to grab new things quickly.
GraspQP: Differentiable Optimization of Force Closure for Diverse and Robust Dexterous Grasping
Robotics
Robots can grasp objects in many new ways.
Towards Imperceptible Adversarial Defense: A Gradient-Driven Shield against Facial Manipulations
Cryptography and Security
Stops fake faces from fooling people.