Lite VLA: Efficient Vision-Language-Action Control on CPU-Bound Edge Robots
By: Justin Williams, Kishor Datta Gupta, Roy George, and more
Potential Business Impact:
Robots see and think without internet.
The deployment of artificial intelligence models at the edge is increasingly critical for autonomous robots operating in GPS-denied environments, where local, resource-efficient reasoning is essential. This work demonstrates the feasibility of deploying small Vision-Language Models (VLMs) on mobile robots to achieve real-time scene understanding and reasoning under strict computational constraints. Unlike prior approaches that separate perception from mobility, the proposed framework enables simultaneous movement and reasoning in dynamic environments using only on-board hardware. The system integrates a compact VLM with multimodal perception to perform contextual interpretation directly on embedded hardware, eliminating reliance on cloud connectivity. Experimental validation examines the trade-offs among computational efficiency, task accuracy, and system responsiveness. Deployment on a mobile robot represents one of the first successful demonstrations of small VLMs performing concurrent reasoning and mobility at the edge. This work establishes a foundation for scalable, assured autonomy in applications such as service robotics, disaster response, and defense operations.
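The concurrent movement-and-reasoning pattern the abstract describes can be sketched as a two-thread control loop: the motion loop never blocks on inference, and the reasoning thread always consumes only the freshest camera frame so the VLM never lags behind the robot. This is a minimal illustrative sketch, not the paper's implementation; `capture_frame`, `vlm_infer`, and `drive_step` are hypothetical stand-ins for the robot's camera driver, an on-device quantized VLM call, and the motor interface.

```python
import queue
import threading
import time

# Hypothetical stand-ins: a real system would wrap a quantized VLM runtime
# and the robot's actual camera/motor drivers behind these functions.
def capture_frame(step):
    """Stub camera read; returns a placeholder frame label."""
    return f"frame-{step}"

def vlm_infer(frame):
    """Stub VLM call; simulates CPU inference latency with a short sleep."""
    time.sleep(0.05)
    return f"scene description for {frame}"

def drive_step(step):
    """Stub motion command; motion proceeds regardless of inference state."""
    return f"moved at step {step}"

def reasoning_loop(frames, results, stop):
    # Consume only the most recent frame so reasoning tracks the present scene.
    while not stop.is_set():
        try:
            frame = frames.get(timeout=0.1)
        except queue.Empty:
            continue
        results.append(vlm_infer(frame))

def run(num_steps=10):
    frames = queue.Queue(maxsize=1)  # depth-1 queue: stale frames get dropped
    results, stop = [], threading.Event()
    worker = threading.Thread(target=reasoning_loop, args=(frames, results, stop))
    worker.start()
    log = []
    for step in range(num_steps):
        log.append(drive_step(step))  # mobility never waits on the VLM
        frame = capture_frame(step)
        if frames.full():  # drop the stale frame, keep the newest
            try:
                frames.get_nowait()
            except queue.Empty:
                pass
        frames.put(frame)
        time.sleep(0.01)  # stand-in for the control-loop period
    stop.set()
    worker.join()
    return log, results

if __name__ == "__main__":
    log, results = run()
    print(len(log), len(results))
```

The depth-1 queue is the key design choice here: because the slow inference path only ever sees the latest frame, a 50 ms model on a 10 ms control loop degrades scene-update rate rather than motion responsiveness.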
Similar Papers
Efficient Vision-Language-Action Models for Embodied Manipulation: A Systematic Survey
Robotics
Makes robots understand and do tasks faster.