Iterative Deployment Improves Planning Skills in LLMs
By: Augusto B. Corrêa, Yoav Gelberg, Luckeciano C. Melo, and more
We show that iterative deployment of large language models (LLMs), where each model is fine-tuned on data carefully curated by users from the previous model's deployment, can significantly change the properties of the resulting models. Testing this mechanism on various planning domains, we observe substantial improvements in planning skills, with later models displaying emergent generalization by discovering much longer plans than the initial models. We then provide a theoretical analysis showing that iterative deployment effectively implements reinforcement learning (RL) in the outer loop (i.e., not as part of intentional model training), with an implicit reward function. The connection to RL has two important implications. First, for AI safety: because the reward function entailed by repeated deployment is never defined explicitly, it could have unexpected consequences for the properties of future model deployments. Second, the mechanism highlighted here can be viewed as an alternative training regime to explicit RL, one that relies on data curation rather than explicit rewards.
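To make the mechanism concrete, here is a minimal, self-contained toy sketch (not code from the paper): the "model" is reduced to a mean plan length, user curation keeps only plans long enough to solve the task, and fine-tuning imitates the kept plans. The keep/discard filter plays the role of the implicit binary reward, and under this selection pressure the mean plan length drifts upward across generations, mirroring the emergence of longer plans in later deployments.

```python
import random

# Toy sketch of iterative deployment as outer-loop RL via data curation.
# All names and dynamics here are illustrative assumptions, not the paper's setup.

def sample_plan(model_mean_length):
    """Deployment: sample a candidate plan whose length varies around the model's mean."""
    length = max(1, int(random.gauss(model_mean_length, 2)))
    return ["step"] * length

def user_keeps(plan, required_length):
    """Curation: users keep a plan only if it is long enough to solve their task.
    This keep/discard decision acts as an implicit binary reward."""
    return len(plan) >= required_length

def finetune(kept_plans):
    """Fine-tuning: the next model imitates the curated data (here, its mean plan length)."""
    return sum(len(p) for p in kept_plans) / len(kept_plans)

model_mean = 3.0  # initial model produces short plans
for generation in range(10):
    tasks = [random.randint(2, 12) for _ in range(200)]       # required plan lengths
    deployed = [(t, sample_plan(model_mean)) for t in tasks]   # users sample plans
    curated = [p for t, p in deployed if user_keeps(p, t)]     # implicit reward: keep/discard
    if curated:
        model_mean = finetune(curated)                         # train the next generation
    print(f"generation {generation}: mean plan length = {model_mean:.2f}")
```

In this toy setting no reward is ever specified, yet the curation filter selects for longer plans, so successive generations behave as if they were trained with RL against that implicit objective.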