Learning General Policies From Examples
By: Blai Bonet, Hector Geffner
Potential Business Impact:
Teaches computers to solve huge problems by learning from examples.
Combinatorial methods for learning general policies that solve large collections of planning problems have been developed recently. One of their strengths, relative to deep learning approaches, is that the resulting policies can be understood and shown to be correct. A weakness is that the methods do not scale up: they learn only from small training instances and feature pools containing at most a few hundred states and features. In this work, we propose a new symbolic method for learning policies based on the generalization of sampled plans that ensures structural termination and hence acyclicity. The proposed learning approach is not based on SAT/ASP, as in previous symbolic methods, but on a hitting set algorithm that can effectively handle problems with millions of states and pools with hundreds of thousands of features. The formal properties of the approach are analyzed, and its scalability is tested on a number of benchmarks.
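The hitting set formulation mentioned in the abstract can be illustrated with a minimal sketch, assuming the learning task has been reduced to the classic form: given a family of constraint sets (here, sets of candidate features, each set listing the features that would satisfy one constraint), find a small set of features that intersects ("hits") every constraint. The names and greedy strategy below are illustrative, not the paper's actual algorithm.

```python
def greedy_hitting_set(constraints):
    """Return a small set of elements that intersects every constraint set.

    Uses the standard greedy heuristic: repeatedly pick the element that
    hits the largest number of still-unhit constraints. This is an
    illustrative sketch, not the paper's implementation.
    """
    remaining = [set(c) for c in constraints if c]
    chosen = set()
    while remaining:
        # Count how many unhit constraints each candidate element covers.
        counts = {}
        for c in remaining:
            for e in c:
                counts[e] = counts.get(e, 0) + 1
        # Greedy choice: the element hitting the most constraints.
        best = max(counts, key=counts.get)
        chosen.add(best)
        # Drop every constraint the chosen element already hits.
        remaining = [c for c in remaining if best not in c]
    return chosen

# Hypothetical example: three constraints over candidate features f1..f4.
constraints = [{"f1", "f2"}, {"f2", "f3"}, {"f3", "f4"}]
solution = greedy_hitting_set(constraints)
print(solution)
```

The greedy heuristic gives a logarithmic approximation of the minimum hitting set and runs in time roughly linear in the total size of the constraints per iteration, which is what makes hitting set approaches attractive at the scales (millions of states) the abstract describes.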
Similar Papers
Few-Shot Neuro-Symbolic Imitation Learning for Long-Horizon Planning and Acting
Robotics
Teaches robots complex tasks with few examples.
Structured Imitation Learning of Interactive Policies through Inverse Games
Robotics
Teaches robots to work with people.
Robust Finetuning of Vision-Language-Action Robot Policies via Parameter Merging
Robotics
Robot learns new tricks without forgetting old ones.