Building surrogate models using trajectories of agents trained by Reinforcement Learning
By: Julen Cestero, Marco Quartulli, Marcello Restelli
Potential Business Impact:
Teaches computers to learn faster from fewer tries.
Sample efficiency in the face of computationally expensive simulations is a common concern in surrogate modeling. Current strategies for minimizing the number of samples needed lose effectiveness in simulated environments with wide state spaces. In response to this challenge, we propose a novel method to efficiently sample simulated deterministic environments using policies trained by Reinforcement Learning. We provide an extensive analysis of these surrogate-building strategies, comparing them against Latin Hypercube sampling and Active Learning with Kriging, and cross-validating performance across all sampled datasets. The analysis shows that a mixed dataset, combining samples acquired by random agents, expert agents, and agents trained to explore the regions of maximum entropy of the state-transition distribution, yields the best scores across all datasets, which is crucial for a meaningful state-space representation. We conclude that the proposed method improves on the state of the art and clears the path for applying surrogate-aided Reinforcement Learning policy-optimization strategies to complex simulators.
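To make the mixed-dataset idea concrete, here is a minimal sketch, assuming a gymnasium-style environment (Pendulum-v1) and an sklearn RandomForestRegressor as the surrogate. The expert and entropy-seeking policies below are hypothetical stand-ins for the RL-trained agents described in the abstract, not the paper's actual implementation.

```python
# Sketch of the mixed-dataset idea: collect (state, action) -> next_state
# transitions with three behaviors and fit one surrogate on their union.
# Assumptions (not from the paper): a gymnasium env, an sklearn regressor
# as the surrogate, and placeholder policies standing in for trained agents.
import numpy as np
import gymnasium as gym
from sklearn.ensemble import RandomForestRegressor

def collect(env, policy, n_steps, seed=0):
    """Roll out `policy`; return (state, action) inputs and next-state targets."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    obs, _ = env.reset(seed=seed)
    for _ in range(n_steps):
        action = policy(obs, rng)
        nxt, _, terminated, truncated, _ = env.step(action)
        X.append(np.concatenate([obs, np.atleast_1d(action)]))
        y.append(nxt)
        obs = nxt
        if terminated or truncated:
            obs, _ = env.reset()
    return np.asarray(X), np.asarray(y)

env = gym.make("Pendulum-v1")  # deterministic dynamics, continuous state space

random_policy = lambda obs, rng: env.action_space.sample()
# Hypothetical stand-ins: in the paper these would be RL-trained policies
# (an expert maximizing task reward, and an agent trained to seek
# high-entropy state transitions).
expert_policy = lambda obs, rng: np.clip(-obs[2:3], -2.0, 2.0)    # hypothetical
entropy_policy = lambda obs, rng: rng.uniform(-2.0, 2.0, size=1)  # hypothetical

# Mixed dataset: union of trajectories from all three behaviors.
parts = [collect(env, p, 2000, seed=i)
         for i, p in enumerate([random_policy, expert_policy, entropy_policy])]
X = np.vstack([x for x, _ in parts])
y = np.vstack([t for _, t in parts])

surrogate = RandomForestRegressor(n_estimators=100).fit(X, y)  # (s, a) -> s'
```

The point of the union is coverage: each behavior visits a different region of the state space, so the surrogate is trained on both on-policy and high-entropy transitions rather than on any single agent's narrow trajectory distribution.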
Similar Papers
A Kriging-HDMR-based surrogate model with sample pool-free active learning strategy for reliability analysis
Machine Learning (CS)
Finds weak spots in complex systems faster.
Probabilistic Insights for Efficient Exploration Strategies in Reinforcement Learning
Probability
Helps robots find rare things faster.
Surrogate Fitness Metrics for Interpretable Reinforcement Learning
Machine Learning (CS)
Helps computers explain their decisions better.