Learning Attentive Neural Processes for Planning with Pushing Actions

Published: April 24, 2025 | arXiv ID: 2504.17924v3

By: Atharv Jain, Seiji Shaw, Nicholas Roy

Potential Business Impact:

Robots learn to push objects with unknown physical properties to precise goal positions.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Our goal is to enable robots to plan sequences of tabletop actions to push a block with unknown physical properties to a desired goal pose. We approach this problem by learning the constituent models of a Partially Observable Markov Decision Process (POMDP), in which the robot can observe the outcome of a push, but the physical properties of the block that govern its dynamics remain unknown. A common solution approach is to train an observation model in a supervised fashion and perform inference with a general technique such as a particle filter. However, supervised training requires knowledge of the relevant physical properties that determine the problem dynamics, which we do not assume to be known. Planning also requires simulating many belief updates, which becomes expensive when a particle filter represents the belief. We propose to learn an Attentive Neural Process that computes the belief over a learned latent representation of the relevant physical properties given a history of actions. To address the pushing planning problem, we integrate a trained Neural Process with a double-progressive-widening sampling strategy. Simulation results indicate that Neural Process Tree with Double Progressive Widening (NPT-DPW) generates better-performing plans faster than traditional particle-filter methods that use a supervised observation model, even in complex pushing scenarios.
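The double-progressive-widening strategy mentioned in the abstract limits how quickly a search-tree node spawns new children when the action (or outcome) space is continuous, as with push directions and contact points. The sketch below is not the paper's implementation; it only illustrates the standard widening rule, with `k` and `alpha` as assumed hyperparameters: a node with `N` visits may add a new child only while its child count is below `k * N**alpha`.

```python
import random

def allow_new_child(num_children: int, num_visits: int,
                    k: float = 1.0, alpha: float = 0.5) -> bool:
    """Progressive-widening test: permit expanding a new child only while
    |children| < k * N^alpha, so the branching factor grows sublinearly
    with the node's visit count N."""
    return num_children < k * max(num_visits, 1) ** alpha

# Toy demonstration: a single tree node visited 100 times, sampling a new
# continuous push action (here just a scalar in [-1, 1]) whenever the
# widening rule permits. With k=1 and alpha=0.5, roughly sqrt(100) = 10
# distinct actions end up expanded; the rest of the visits reuse them.
children = []
for n in range(1, 101):
    if allow_new_child(len(children), n):
        children.append(random.uniform(-1.0, 1.0))

print(len(children))
```

In NPT-DPW the same rule is applied twice per node, hence "double" progressive widening: once to decide whether to sample a new action, and once to decide whether to sample a new successor belief from the learned Neural Process rather than revisit an existing one.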

Page Count
7 pages

Category
Computer Science:
Robotics