Parental Guidance: Efficient Lifelong Learning through Evolutionary Distillation
By: Octi Zhang, Quanquan Peng, Rosario Scalise, and more
Potential Business Impact:
Robots learn many skills by copying and improving.
Developing robotic agents that perform well in diverse environments while exhibiting varied behaviors is a key challenge in AI and robotics. Traditional reinforcement learning (RL) methods often produce agents that specialize in narrow tasks, limiting their adaptability and diversity. To overcome this, we propose a preliminary, evolution-inspired framework built around a reproduction module that, much like reproduction in natural species, balances diversity and specialization. By integrating RL, imitation learning (IL), and a coevolutionary agent-terrain curriculum, our system evolves agents continuously through complex tasks. This approach promotes adaptability, inheritance of useful traits, and continual learning: agents not only refine inherited skills but also surpass their predecessors. Our initial experiments show that this method improves exploration efficiency and supports open-ended learning, offering a scalable solution in which sparse rewards coupled with diverse terrain environments induce a multi-task setting.
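The abstract describes a generational loop: a parent policy passes its skills to a child via imitation learning, the child then refines those skills with RL on terrains that coevolve with the population, and the fittest agents reproduce. The sketch below is only an assumption-laden illustration of that loop, not the paper's implementation: the toy environment, the linear policy, and the helpers `distill`, `refine`, and `evolve_terrains` are hypothetical placeholders, the IL step is a least-squares behavior-cloning fit, and the RL fine-tuning step is replaced by random-search hill climbing for brevity.

```python
# Minimal sketch of the evolutionary-distillation loop described in the abstract.
# All names and dynamics here are illustrative assumptions, not the authors' API.
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, ACT_DIM = 8, 2


def rollout_return(policy_w, terrain, episodes=4, horizon=32):
    """Sparse-reward toy rollout: reward only when the action tracks a
    terrain-dependent target closely enough."""
    total = 0.0
    for _ in range(episodes):
        obs = rng.normal(size=OBS_DIM)
        for _ in range(horizon):
            action = obs @ policy_w                    # linear policy
            target = np.tanh(obs[:ACT_DIM] * terrain)  # terrain-shaped goal
            if np.linalg.norm(action - target) < 0.5:  # sparse reward
                total += 1.0
            obs = 0.9 * obs + 0.1 * rng.normal(size=OBS_DIM)
    return total / episodes


def distill(parent_w, n_states=256):
    """IL / reproduction step: the child imitates the parent's actions
    (least-squares behavior cloning on sampled states)."""
    states = rng.normal(size=(n_states, OBS_DIM))
    parent_actions = states @ parent_w
    child_w, *_ = np.linalg.lstsq(states, parent_actions, rcond=None)
    return child_w


def refine(policy_w, terrain, iters=30, sigma=0.05):
    """Stand-in for RL fine-tuning: random-search hill climbing on return."""
    best_w, best_r = policy_w, rollout_return(policy_w, terrain)
    for _ in range(iters):
        cand = best_w + sigma * rng.normal(size=best_w.shape)
        r = rollout_return(cand, terrain)
        if r > best_r:
            best_w, best_r = cand, r
    return best_w, best_r


def evolve_terrains(terrains, scores, jitter=0.1):
    """Coevolution stub: harden terrains that agents solve well, soften the rest."""
    return [t * (1.05 if s > 1.0 else 0.95) + jitter * rng.normal()
            for t, s in zip(terrains, scores)]


# Outer loop: each generation, the fittest parent reproduces via distillation,
# children refine inherited skills on coevolved terrains, and scores drive
# the next round of terrain evolution.
population = [rng.normal(scale=0.1, size=(OBS_DIM, ACT_DIM)) for _ in range(4)]
terrains = [1.0 for _ in population]

for gen in range(5):
    scores = [rollout_return(w, t) for w, t in zip(population, terrains)]
    parent = population[int(np.argmax(scores))]
    children = [refine(distill(parent), t) for t in terrains]
    population = [w for w, _ in children]
    terrains = evolve_terrains(terrains, [r for _, r in children])
    print(f"generation {gen}: best return {max(r for _, r in children):.2f}")
```

Under these assumptions, the distillation step carries inherited traits forward while the per-terrain refinement step lets each child specialize and potentially surpass its parent, which is the balance the reproduction module is meant to strike.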
Similar Papers
An Efficient Task-Oriented Dialogue Policy: Evolutionary Reinforcement Learning Injected by Elite Individuals
Computation and Language
Makes chatbots learn faster and smarter.
Learning Where, What and How to Transfer: A Multi-Role Reinforcement Learning Approach for Evolutionary Multitasking
Neural and Evolutionary Computing
Teaches computers to learn many tasks at once.
Efficient Adaptation of Reinforcement Learning Agents to Sudden Environmental Change
Machine Learning (CS)
Helps robots learn new tricks without forgetting old ones.