LAPP: Large Language Model Feedback for Preference-Driven Reinforcement Learning
By: Pingcheng Jian, Xiao Wei, Yanbaihui Liu, and more
Potential Business Impact:
Robots learn new tricks by reading human instructions.
We introduce Large Language Model-Assisted Preference Prediction (LAPP), a novel framework for robot learning that enables efficient, customizable, and expressive behavior acquisition with minimal human effort. Unlike prior approaches that rely heavily on reward engineering, human demonstrations, motion capture, or expensive pairwise preference labels, LAPP leverages large language models (LLMs) to automatically generate preference labels from raw state-action trajectories collected during reinforcement learning (RL). These labels are used to train an online preference predictor, which in turn guides the policy optimization process toward satisfying high-level behavioral specifications provided by humans. Our key technical contribution is the integration of LLMs into the RL feedback loop through trajectory-level preference prediction, enabling robots to acquire complex skills, including subtle control over gait patterns and rhythmic timing. We evaluate LAPP on a diverse set of quadruped locomotion and dexterous manipulation tasks and show that it achieves efficient learning, higher final performance, faster adaptation, and precise control of high-level behaviors. Notably, LAPP enables robots to master highly dynamic and expressive tasks such as quadruped backflips, which remain out of reach for standard LLM-generated or handcrafted rewards. Our results highlight LAPP as a promising direction for scalable preference-driven robot learning.
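The abstract's core loop (LLM labels trajectory pairs, an online predictor is trained on those labels, and the predictor then shapes policy optimization) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `llm_preference_label` is a hypothetical stand-in for the actual LLM query, the trajectory features and Bradley-Terry-style logistic predictor are assumptions for the sketch, and the printed comparison only shows the predictor learning to rank trajectories consistently with the labels.

```python
import math
import random

def llm_preference_label(traj_a, traj_b):
    """Hypothetical stand-in for the LLM query in LAPP: returns 1 if traj_a is
    preferred, else 0. Here the 'preference' is simply higher mean velocity,
    as a proxy for a high-level instruction like 'move forward briskly'."""
    score = lambda traj: sum(step["vel"] for step in traj) / len(traj)
    return 1 if score(traj_a) > score(traj_b) else 0

def traj_features(traj):
    """Summarize a state-action trajectory as a small feature vector
    (mean velocity, mean energy) -- an assumed featurization for the sketch."""
    n = len(traj)
    return [sum(s["vel"] for s in traj) / n,
            sum(s["energy"] for s in traj) / n]

class PreferencePredictor:
    """Logistic model over trajectory-feature differences (a common
    Bradley-Terry-style choice for preference-based RL)."""
    def __init__(self, dim, lr=0.5):
        self.w = [0.0] * dim
        self.lr = lr

    def prob_a_preferred(self, fa, fb):
        z = sum(w * (a - b) for w, a, b in zip(self.w, fa, fb))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, fa, fb, label):
        # One step of gradient descent on the logistic (cross-entropy) loss.
        p = self.prob_a_preferred(fa, fb)
        grad = p - label
        self.w = [w - self.lr * grad * (a - b)
                  for w, a, b in zip(self.w, fa, fb)]

    def reward_bonus(self, traj):
        """Predicted preference score; in LAPP-style training this signal
        would steer the RL policy toward the specified behavior."""
        return sum(w * x for w, x in zip(self.w, traj_features(traj)))

random.seed(0)
def sample_traj(length=10):
    return [{"vel": random.uniform(0.0, 2.0),
             "energy": random.uniform(0.0, 1.0)} for _ in range(length)]

# Online loop: collect trajectory pairs, query the (stub) LLM for labels,
# and fit the preference predictor on the fly.
predictor = PreferencePredictor(dim=2)
for _ in range(200):
    ta, tb = sample_traj(), sample_traj()
    predictor.update(traj_features(ta), traj_features(tb),
                     llm_preference_label(ta, tb))

# The trained predictor should rank a fast trajectory above a slow one.
fast = [{"vel": 1.8, "energy": 0.5}] * 10
slow = [{"vel": 0.2, "energy": 0.5}] * 10
print(predictor.reward_bonus(fast) > predictor.reward_bonus(slow))
```

In the full framework the reward-bonus signal would be fed into a standard RL algorithm (e.g. PPO) as an additional reward term, so the policy is optimized both for the environment reward and for satisfying the LLM-expressed preference.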
Similar Papers
QuickLAP: Quick Language-Action Preference Learning for Autonomous Driving Agents
Artificial Intelligence
Robots learn faster from what you do and say.
RLAP: A Reinforcement Learning Enhanced Adaptive Planning Framework for Multi-step NLP Task Solving
Computation and Language
Helps computers solve hard word problems better.
Leveraging Pre-trained Large Language Models with Refined Prompting for Online Task and Motion Planning
Robotics
Robots learn to fix mistakes while working.