Learning a Vision-Based Footstep Planner for Hierarchical Walking Control

Published: August 9, 2025 | arXiv ID: 2508.06779v1

By: Minku Kim, Brian Acosta, Pratik Chaudhari, and more

Potential Business Impact:

Robots walk more reliably on uneven ground by using camera vision.

Plain English Summary

Robots that walk on two legs can now navigate rough ground much better, like a person walking over rocks. This new system uses cameras to "see" the ground and figure out the best way to step, making them more reliable than older robots that relied only on their "feel" or complicated pre-programmed paths. This means these robots could soon help us in places too dangerous or difficult for humans, like disaster zones or exploring new planets.

Bipedal robots demonstrate potential in navigating challenging terrains through dynamic ground contact. However, current frameworks often depend solely on proprioception or use manually designed visual pipelines, which are fragile in real-world settings and complicate real-time footstep planning in unstructured environments. To address this problem, we present a vision-based hierarchical control framework that integrates a reinforcement learning high-level footstep planner, which generates footstep commands based on a local elevation map, with a low-level Operational Space Controller that tracks the generated trajectories. We utilize the Angular Momentum Linear Inverted Pendulum model to construct a low-dimensional state representation to capture an informative encoding of the dynamics while reducing complexity. We evaluate our method across different terrain conditions using the underactuated bipedal robot Cassie and investigate the capabilities and challenges of our approach through simulation and hardware experiments.
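The abstract's low-dimensional state comes from the Angular Momentum Linear Inverted Pendulum (ALIP) model, whose sagittal dynamics have a well-known closed-form step map. As a rough illustration only (the mass, CoM height, and step duration below are placeholder values, not taken from the paper, and the paper's actual state encoding may differ), the ALIP state can be propagated between footsteps like this:

```python
import math

def alip_step(x_c, L_y, T, m=30.0, H=0.8, g=9.81):
    """Propagate the sagittal ALIP state over a duration T (closed form).

    x_c : CoM horizontal position relative to the stance foot (m)
    L_y : angular momentum about the stance contact point (kg*m^2/s)
    m, H, g : mass, constant CoM height, gravity (placeholder values)
    """
    w = math.sqrt(g / H)                    # pendulum natural frequency
    c, s = math.cosh(w * T), math.sinh(w * T)
    # Matrix exponential of [[0, 1/(m*H)], [m*g, 0]] applied to the state
    x_next = c * x_c + s * L_y / (m * H * w)
    L_next = m * H * w * s * x_c + c * L_y
    return x_next, L_next

# Example: predict where the CoM state lands after a 0.35 s step,
# which a high-level planner could use to choose the next footstep.
x1, L1 = alip_step(0.05, 1.0, 0.35)
```

A planner built on this map only needs a two-dimensional state per axis, which is what makes the representation compact enough for real-time footstep selection.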

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
8 pages

Category
Computer Science: Robotics