Unifying Deep Predicate Invention with Pre-trained Foundation Models
By: Qianwei Wang, Bowen Li, Zhanpeng Luo, and more
Long-horizon robotic tasks are hard due to continuous state-action spaces and sparse feedback. Symbolic world models help by decomposing tasks into discrete predicates that capture object properties and relations. Existing methods learn predicates either top-down, by prompting foundation models without grounding in data, or bottom-up, from demonstrations without high-level priors. We introduce UniPred, a bilevel learning framework that unifies both. UniPred uses large language models (LLMs) to propose predicate effect distributions that supervise neural predicate learning from low-level data, while feedback from the learned predicates iteratively refines the LLM hypotheses. Leveraging strong visual foundation model features, UniPred learns robust predicate classifiers in cluttered scenes. We further propose a predicate evaluation method that supports symbolic models beyond STRIPS assumptions. Across five simulated domains and one real-robot domain, UniPred achieves 2-4 times higher success rates than top-down methods and learns 3-4 times faster than bottom-up approaches, advancing scalable and flexible symbolic world modeling for robotics.
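To make the bilevel structure concrete, here is a minimal sketch of how such a loop could be organized. This is an illustration under assumptions, not the authors' implementation: the `llm`, `demos`, and `vfm` objects and their methods (`propose_effect_distributions`, `weak_labels`, `consistency`, `refine`) are hypothetical interfaces standing in for the LLM proposer, demonstration data, and visual foundation model features described in the abstract.

```python
import torch
import torch.nn as nn

class PredicateClassifier(nn.Module):
    """Binary classifier for one invented predicate, evaluated on
    frozen visual-foundation-model features (hypothetical sketch)."""
    def __init__(self, feat_dim: int = 768):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, obj_feats: torch.Tensor) -> torch.Tensor:
        # obj_feats: (batch, feat_dim) per-object or per-pair features
        return torch.sigmoid(self.head(obj_feats)).squeeze(-1)

def bilevel_predicate_invention(llm, demos, vfm, n_rounds: int = 5):
    """Outer loop: the LLM proposes predicate effect hypotheses.
    Inner loop: classifiers are fit to demonstrations under that
    weak supervision; fit quality is fed back to refine hypotheses."""
    # Top-down: LLM proposes which predicates each skill should flip.
    hypotheses = llm.propose_effect_distributions(demos.skill_names)
    for _ in range(n_rounds):
        classifiers, scores = {}, {}
        for pred, effects in hypotheses.items():
            clf = PredicateClassifier()
            opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
            # Bottom-up: fit the classifier to demo transitions labeled
            # by the hypothesized effects (weak supervision).
            for feats, labels in demos.weak_labels(pred, effects, vfm):
                loss = nn.functional.binary_cross_entropy(clf(feats), labels)
                opt.zero_grad(); loss.backward(); opt.step()
            classifiers[pred] = clf
            # Score how consistently the learned predicate matches
            # the hypothesized effects across demonstrations.
            scores[pred] = demos.consistency(clf, pred, effects, vfm)
        # Learned feedback: LLM revises poorly grounded hypotheses.
        hypotheses = llm.refine(hypotheses, scores)
    return classifiers, hypotheses
```

The key design point this sketch captures is the two-way coupling: LLM proposals supervise low-level predicate learning, and data-driven consistency scores flow back to prune or rewrite proposals that do not ground in the demonstrations.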