Score: 2

Making VLMs More Robot-Friendly: Self-Critical Distillation of Low-Level Procedural Reasoning

Published: July 11, 2025 | arXiv ID: 2507.08224v2

By: Chan Young Park, Jillian Fisher, Marius Memmel, and more

BigTech Affiliations: University of Washington

Potential Business Impact:

Small vision-language models learn to critique, revise, and verify their own task plans, yielding more reliable robot execution without costly teacher models.

Business Areas:
Autonomous Vehicles, Transportation

Large language models (LLMs) have shown promise in robotic procedural planning, yet their human-centric reasoning often omits the low-level, grounded details needed for robotic execution. Vision-language models (VLMs) offer a path toward more perceptually grounded plans, but current methods either rely on expensive, large-scale models or are constrained to narrow simulation settings. We introduce SelfReVision, a lightweight and scalable self-improvement framework for vision-language procedural planning. SelfReVision enables small VLMs to iteratively critique, revise, and verify their own plans, without external supervision or teacher models, drawing inspiration from chain-of-thought prompting and self-instruct paradigms. Through this self-distillation loop, models generate higher-quality, execution-ready plans that can be used both at inference and for continued fine-tuning. Using models ranging from 3B to 72B parameters, our results show that SelfReVision not only boosts performance over weak base VLMs but also outperforms models 100× their size, yielding improved control in downstream embodied tasks.
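The abstract describes an iterative critique-revise-verify loop. As a rough illustration, here is a minimal Python sketch of such a self-improvement loop, assuming a hypothetical `vlm` callable that wraps a small VLM's text interface; the prompts, the PASS/FAIL verification convention, and the `max_rounds` cutoff are illustrative assumptions, not the paper's exact method.

```python
# Minimal sketch of a SelfReVision-style loop, based only on the abstract's
# description (critique -> revise -> verify, no external teacher model).
# `vlm` is a hypothetical callable wrapping any small VLM; all prompts and
# the stopping rule below are illustrative assumptions.

from typing import Callable

def self_revise_plan(
    vlm: Callable[[str], str],   # hypothetical: takes a prompt, returns text
    task: str,
    image_caption: str,          # stand-in for the visual observation
    max_rounds: int = 3,
) -> str:
    # 1. Draft an initial low-level plan grounded in the scene.
    plan = vlm(
        f"Scene: {image_caption}\nTask: {task}\n"
        "Write a step-by-step, low-level plan a robot could execute."
    )
    for _ in range(max_rounds):
        # 2. Self-critique: the same model looks for grounding/execution gaps.
        critique = vlm(
            f"Scene: {image_caption}\nTask: {task}\nPlan:\n{plan}\n"
            "List concrete flaws (missing steps, ungrounded objects, wrong order)."
        )
        # 3. Self-verify: stop once the model judges its own plan executable.
        verdict = vlm(
            f"Plan:\n{plan}\nCritique:\n{critique}\n"
            "Answer PASS if the plan is execution-ready despite the critique, else FAIL."
        )
        if "PASS" in verdict.upper():
            break
        # 4. Revise the plan against the critique and loop again.
        plan = vlm(
            f"Scene: {image_caption}\nTask: {task}\nPlan:\n{plan}\n"
            f"Critique:\n{critique}\nRewrite the plan to fix every flaw."
        )
    return plan  # usable at inference, or collected as fine-tuning data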

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
21 pages

Category
Computer Science: Robotics