GenDexHand: Generative Simulation for Dexterous Hands
By: Feng Chen, Zhuxiu Xu, Tianzhe Chu, and more
Potential Business Impact:
Automatically generates practice tasks so robot hands can learn many skills.
Data scarcity remains a fundamental bottleneck for embodied intelligence. Existing approaches use large language models (LLMs) to automate gripper-based simulation generation, but they transfer poorly to dexterous manipulation, which demands more specialized environment design. Meanwhile, dexterous manipulation tasks are inherently more difficult due to their higher degrees of freedom. Massively generating feasible and trainable dexterous hand tasks remains an open challenge. To this end, we present GenDexHand, a generative simulation pipeline that autonomously produces diverse robotic tasks and environments for dexterous manipulation. GenDexHand introduces a closed-loop refinement process that adjusts object placements and scales based on vision-language model (VLM) feedback, substantially improving the average quality of generated environments. Each task is further decomposed into sub-tasks to enable sequential reinforcement learning, reducing training time and increasing success rates. Our work provides a viable path toward scalable training of diverse dexterous hand behaviors in embodied intelligence by offering a simulation-based solution to synthetic data generation. Our website: https://winniechen2002.github.io/GenDexHand/.
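The abstract's closed-loop refinement, where a VLM critiques a generated scene and the pipeline adjusts object placements and scales until the environment is feasible, can be sketched as a simple fix-and-recheck loop. This is an illustrative assumption of how such a loop might be structured, not the paper's actual API; `vlm_critique`, the scene fields, and the thresholds are all hypothetical stand-ins for a real rendered-scene VLM query.

```python
def vlm_critique(scene):
    """Hypothetical stand-in for a vision-language model reviewing a
    rendered scene. Returns suggested fixes; empty when the scene
    looks feasible. Thresholds here are illustrative only."""
    fixes = {}
    if scene["scale"] > 1.2:   # object too large for the hand to grasp
        fixes["scale"] = 1.0
    if abs(scene["x"]) > 0.4:  # object placed outside the workspace
        fixes["x"] = 0.0
    return fixes

def refine_scene(scene, max_rounds=5):
    """Closed-loop refinement: apply VLM-suggested fixes and re-query
    until no issues remain or the round budget is exhausted."""
    for _ in range(max_rounds):
        fixes = vlm_critique(scene)
        if not fixes:
            break
        scene.update(fixes)
    return scene

# An oversized, out-of-reach object gets pulled back into a trainable setup.
scene = refine_scene({"scale": 1.5, "x": 0.6})
print(scene)  # {'scale': 1.0, 'x': 0.0}
```

The same loop structure would wrap a real renderer and VLM call in place of `vlm_critique`; the abstract's sub-task decomposition then trains RL policies stage by stage on the refined environment.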
Similar Papers
Cross-embodied Co-design for Dexterous Hands
Robotics
Robots learn to build better hands for tasks.
DexFlow: A Unified Approach for Dexterous Hand Pose Retargeting and Interaction
Robotics
Makes robot hands grab things more like people.
OmniDexVLG: Learning Dexterous Grasp Generation from Vision Language Model-Guided Grasp Semantics, Taxonomy and Functional Affordance
Robotics
Lets robots pick up anything, any way.