Semore: VLM-guided Enhanced Semantic Motion Representations for Visual Reinforcement Learning
By: Wentao Wang, Chunyang Liu, Kehua Sheng, and more
The growing exploration of Large Language Models (LLMs) and Vision-Language Models (VLMs) has opened avenues for enhancing the effectiveness of reinforcement learning (RL). However, existing LLM-based RL methods often focus on guiding the control policy and encounter the challenge of limited representational capacity in the backbone networks. To tackle this problem, we introduce Enhanced Semantic Motion Representations (Semore), a new VLM-based framework for visual RL that simultaneously extracts semantic and motion representations from the RGB stream through a dual-path backbone. Semore leverages a VLM's common-sense knowledge to retrieve key information from observations, while using the pre-trained CLIP model to achieve text-image alignment, thereby embedding ground-truth representations into the backbone. To efficiently fuse semantic and motion representations for decision-making, our method adopts a separately supervised approach that simultaneously guides the extraction of semantics and motion while allowing the two to interact spontaneously. Extensive experiments demonstrate that, under VLM guidance at the feature level, our method exhibits superior efficiency and adaptability compared to state-of-the-art methods. All code is released.
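To make the architecture concrete, here is a minimal sketch of the general idea described in the abstract: a dual-path backbone that extracts a semantic feature from the current frame and a motion feature from the stacked RGB frames, plus a CLIP-style contrastive loss that aligns the semantic path with text embeddings (e.g., CLIP encodings of VLM-generated scene descriptions). This is not the authors' released code; all module names, layer sizes, and the loss form are assumptions for illustration, and the separate supervision of the motion path is omitted.

```python
# Hypothetical sketch of a dual-path semantic/motion backbone with
# CLIP-style text-image alignment on the semantic path. Not the
# authors' implementation; dimensions and architecture are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualPathBackbone(nn.Module):
    def __init__(self, in_frames: int = 3, feat_dim: int = 256):
        super().__init__()
        # Semantic path: encodes the single current RGB frame.
        self.semantic_enc = nn.Sequential(
            nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Motion path: encodes the stacked frame sequence (RGB stream).
        self.motion_enc = nn.Sequential(
            nn.Conv2d(3 * in_frames, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Fusion layer letting the two representations interact.
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)

    def forward(self, frames: torch.Tensor):
        # frames: (B, T, 3, H, W) stack of recent RGB observations
        b, t, c, h, w = frames.shape
        sem = self.semantic_enc(frames[:, -1])            # current frame
        mot = self.motion_enc(frames.reshape(b, t * c, h, w))
        fused = self.fuse(torch.cat([sem, mot], dim=-1))  # policy input
        return sem, mot, fused


def clip_alignment_loss(img_feat, txt_feat, temperature=0.07):
    """Symmetric InfoNCE loss aligning image features with paired text
    features (e.g., CLIP embeddings of VLM scene descriptions)."""
    img = F.normalize(img_feat, dim=-1)
    txt = F.normalize(txt_feat, dim=-1)
    logits = img @ txt.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))


# Usage sketch: the semantic path is supervised toward the text
# embeddings while the fused feature would feed the RL policy head.
backbone = DualPathBackbone()
frames = torch.randn(8, 3, 3, 84, 84)   # batch of 3-frame RGB stacks
sem, mot, fused = backbone(frames)
txt = torch.randn(8, 256)               # stand-in CLIP text embeddings
loss = clip_alignment_loss(sem, txt)
```

Supervising the semantic path with a contrastive text-alignment loss while the motion path receives its own objective is one plausible reading of the "separately supervised" design; the paper's exact losses may differ.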