SLAP: Shortcut Learning for Abstract Planning
By: Y. Isabel Liu, Bowen Li, Benjamin Eysenbach, and more
Potential Business Impact:
Teaches robots new tricks to solve problems faster.
Long-horizon decision-making with sparse rewards and continuous states and actions remains a fundamental challenge in AI and robotics. Task and motion planning (TAMP) is a model-based framework that addresses this challenge by planning hierarchically with abstract actions (options). These options are manually defined, limiting the agent to behaviors that we as human engineers know how to program (pick, place, move). In this work, we propose Shortcut Learning for Abstract Planning (SLAP), a method that leverages existing TAMP options to automatically discover new ones. Our key idea is to use model-free reinforcement learning (RL) to learn shortcuts in the abstract planning graph induced by the existing options in TAMP. Without any additional assumptions or inputs, shortcut learning leads to shorter solutions than pure planning and higher task success rates than flat and hierarchical RL. Qualitatively, SLAP discovers dynamic physical improvisations (e.g., slap, wiggle, wipe) that differ significantly from the manually defined options. In experiments across four simulated robotic environments, we show that SLAP solves and generalizes to a wide range of tasks, reducing overall plan lengths by over 50% and consistently outperforming planning and RL baselines.
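The core idea described above, an abstract graph induced by existing TAMP options, with model-free RL adding shortcut edges between abstract states, can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the names (AbstractGraph, try_learn_shortcut, discover_shortcuts) and the abstract states are hypothetical, and the RL training step is stubbed out to "succeed" for one pair so the effect on plan length is visible.

```python
# Minimal sketch of shortcut learning over an abstract planning graph.
# All names and states here are hypothetical illustrations, not SLAP's API.
from collections import deque
from itertools import combinations

class AbstractGraph:
    """Graph whose nodes are abstract states and whose edges are options
    (hand-coded TAMP options or learned shortcuts)."""
    def __init__(self):
        self.edges = {}  # state -> {next_state: option_name}

    def add_option(self, src, dst, name):
        self.edges.setdefault(src, {})[dst] = name
        self.edges.setdefault(dst, {})

    def plan(self, start, goal):
        """Breadth-first search: returns the shortest option sequence."""
        frontier, visited = deque([(start, [])]), {start}
        while frontier:
            state, plan = frontier.popleft()
            if state == goal:
                return plan
            for nxt, option in self.edges.get(state, {}).items():
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, plan + [option]))
        return None

def try_learn_shortcut(src, dst):
    """Placeholder for model-free RL: in SLAP this would train a policy
    that drives the continuous low-level state from abstraction `src`
    to `dst`. Here we pretend training succeeds for one fixed pair."""
    return (src, dst) == ("held(block)", "on(block, shelf)")

def discover_shortcuts(graph, states):
    """Propose non-adjacent abstract state pairs as shortcut candidates
    and add an edge for each pair the RL policy learns to traverse."""
    for src, dst in combinations(states, 2):
        if dst not in graph.edges.get(src, {}) and try_learn_shortcut(src, dst):
            graph.add_option(src, dst, f"shortcut[{src}->{dst}]")

# Hand-coded TAMP options: pick, move, place.
g = AbstractGraph()
g.add_option("on(block, table)", "held(block)", "pick")
g.add_option("held(block)", "held(block)@shelf", "move")
g.add_option("held(block)@shelf", "on(block, shelf)", "place")

states = ["on(block, table)", "held(block)",
          "held(block)@shelf", "on(block, shelf)"]
print(g.plan("on(block, table)", "on(block, shelf)"))  # ['pick', 'move', 'place']
discover_shortcuts(g, states)
print(g.plan("on(block, table)", "on(block, shelf)"))  # 2-step plan via shortcut
```

After the learned edge is added, the same planner finds a two-option plan instead of three, which is the sense in which shortcuts shorten solutions without any new manual engineering.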
Similar Papers
Optimistic Reinforcement Learning-Based Skill Insertions for Task and Motion Planning
Robotics
Robots learn to do tasks with uncertain steps.
RLAP: A Reinforcement Learning Enhanced Adaptive Planning Framework for Multi-step NLP Task Solving
Computation and Language
Helps computers solve hard word problems better.
LLM-GROP: Visually Grounded Robot Task and Motion Planning with Large Language Models
Robotics
Robot learns to set tables using common sense.