RLLaVA: An RL-central Framework for Language and Vision Assistants
By: Lei Zhao, Zihao Ma, Boyu Lin, and more
We present RLLaVA, an RL-central framework for Language and Vision Assistants built on a Markov decision process (MDP) formulation. RLLaVA decouples RL algorithmic logic from model architecture and distributed execution, allowing researchers to implement new RL algorithms with minimal code and to plug in a broad family of RL methods and vision-language models (VLMs) while remaining agnostic to specific training and inference engines. RLLaVA makes resource-efficient training of 1B--7B models feasible on common GPUs; notably, 4B-scale models can be trained end-to-end with full-parameter updates on a single 24GB GPU. Experiments on multi-modal and agentic tasks demonstrate RLLaVA's task extensibility, and models trained with it consistently improve over their base models, with performance competitive with other specially engineered RL frameworks. The code is available at https://github.com/TinyLoopX/RLLaVA.
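To make the decoupling concrete, the following is a minimal sketch, not the actual RLLaVA API: it assumes hypothetical names such as `Trajectory`, `RolloutEngine`, `RLAlgorithm`, and a GRPO-like objective to illustrate how algorithm logic can be isolated behind a single loss interface while the trainer stays agnostic to the model and the inference engine.

```python
# Illustrative sketch only; the interfaces and names below are assumptions,
# not the real RLLaVA code.
from dataclasses import dataclass
from typing import List, Protocol

import torch


@dataclass
class Trajectory:
    """One MDP rollout: generated tokens' log-probs and a scalar reward."""
    log_probs: torch.Tensor      # (T,) log pi_theta(a_t | s_t) under the current policy
    old_log_probs: torch.Tensor  # (T,) log-probs under the behavior (rollout) policy
    reward: float                # terminal reward, e.g. answer correctness


class RolloutEngine(Protocol):
    """Inference backend; the RL algorithm never touches its internals."""
    def generate(self, prompts: List[str], samples_per_prompt: int) -> List[List[Trajectory]]: ...


class RLAlgorithm(Protocol):
    """All algorithm-specific logic lives behind this single method."""
    def loss(self, groups: List[List[Trajectory]]) -> torch.Tensor: ...


class GRPOLike:
    """A GRPO-style algorithm: group-normalized advantages + clipped ratio objective."""
    def __init__(self, clip_eps: float = 0.2):
        self.clip_eps = clip_eps

    def loss(self, groups: List[List[Trajectory]]) -> torch.Tensor:
        losses = []
        for group in groups:  # one group = several samples for the same prompt
            rewards = torch.tensor([t.reward for t in group])
            adv = (rewards - rewards.mean()) / (rewards.std() + 1e-6)
            for a, traj in zip(adv, group):
                ratio = torch.exp(traj.log_probs - traj.old_log_probs)
                clipped = torch.clamp(ratio, 1 - self.clip_eps, 1 + self.clip_eps)
                losses.append(-torch.min(ratio * a, clipped * a).mean())
        return torch.stack(losses).mean()


def train_step(algo: RLAlgorithm, engine: RolloutEngine,
               optimizer: torch.optim.Optimizer, prompts: List[str]) -> float:
    """The trainer only wires engine rollouts to the algorithm's loss."""
    groups = engine.generate(prompts, samples_per_prompt=4)
    loss = algo.loss(groups)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this kind of design, adding a new RL method means implementing one `loss` function over trajectories, while model choice and distributed execution remain the backend's concern.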