Adaptive Multi-Fidelity Reinforcement Learning for Variance Reduction in Engineering Design Optimization
By: Akash Agrawal, Christopher McComb
Potential Business Impact:
Trains design-optimization agents more efficiently by combining cheap approximate models with an accurate simulator.
Multi-fidelity Reinforcement Learning (RL) frameworks efficiently utilize computational resources by integrating analysis models of varying accuracy and cost. The prevailing methodologies, characterized by transfer learning, human-inspired strategies, control variate techniques, and adaptive sampling, predominantly depend on a structured hierarchy of models. However, this reliance on a model hierarchy can exacerbate variance in policy learning when the underlying models exhibit heterogeneous error distributions across the design space. To address this challenge, this work proposes a novel adaptive multi-fidelity RL framework in which multiple heterogeneous, non-hierarchical low-fidelity models are dynamically leveraged alongside a high-fidelity model to efficiently learn a high-fidelity policy. Specifically, low-fidelity policies and their experience data are adaptively used for efficient targeted learning, guided by their alignment with the high-fidelity policy. The effectiveness of the approach is demonstrated in an octocopter design optimization problem, utilizing two low-fidelity models alongside a high-fidelity simulator. The results show that the proposed approach substantially reduces variance in policy learning, leading to improved convergence and consistent high-quality solutions relative to traditional hierarchical multi-fidelity RL methods. Moreover, the framework eliminates the need to manually tune model-usage schedules, which can otherwise introduce significant computational overhead. This positions the framework as an effective variance-reduction strategy for multi-fidelity RL that also mitigates the operational burden of manual fidelity scheduling.
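As a rough illustration of the adaptive selection idea described in the abstract, the sketch below shows one way experience from several non-hierarchical low-fidelity policies could be filtered and weighted by how well they currently align with the high-fidelity policy. This is a minimal Python sketch, not the authors' implementation: the function names (alignment_score, select_lf_sources), the action-agreement alignment measure, the threshold, and the toy policies are all assumptions made for illustration; the paper's actual alignment criterion, update rule, and octocopter models are not reproduced here.

```python
# Minimal sketch (assumed, not the authors' code) of adaptive, non-hierarchical
# multi-fidelity experience selection: low-fidelity sources are reused only when
# their behaviour agrees with the current high-fidelity policy.
import numpy as np

rng = np.random.default_rng(0)

def alignment_score(lf_actions, hf_actions):
    """Assumed alignment measure: fraction of probe states on which the
    low-fidelity policy and the high-fidelity policy propose (nearly) the
    same action (1.0 = identical behaviour, 0.0 = no agreement)."""
    return float(np.mean(np.isclose(lf_actions, hf_actions, atol=0.1)))

def select_lf_sources(lf_policies, hf_policy, probe_states, threshold=0.5):
    """Keep only low-fidelity policies whose behaviour currently aligns with
    the high-fidelity policy; the returned weights steer how much of each
    source's experience is replayed for the high-fidelity update."""
    weights = {}
    for name, policy in lf_policies.items():
        score = alignment_score(policy(probe_states), hf_policy(probe_states))
        if score >= threshold:
            weights[name] = score
    total = sum(weights.values()) or 1.0
    return {name: w / total for name, w in weights.items()}

# Toy policies over a 1-D design variable: two heterogeneous low-fidelity
# surrogates and one high-fidelity policy (here just linear gains).
hf_policy = lambda s: 0.8 * s
lf_policies = {
    "analytic_model": lambda s: 0.75 * s,    # currently close to high fidelity
    "coarse_surrogate": lambda s: -0.2 * s,  # currently misaligned
}

probe_states = rng.uniform(-1.0, 1.0, size=64)
print(select_lf_sources(lf_policies, hf_policy, probe_states))
# e.g. {'analytic_model': 1.0} -> only aligned experience is reused, which is
# the variance-reduction mechanism the abstract describes, without a fixed
# model hierarchy or a hand-tuned fidelity schedule.
```

In this reading, re-evaluating the alignment scores as training proceeds is what replaces a manually tuned model-usage schedule: sources whose error distribution degrades in the region of the design space being explored are simply down-weighted or dropped.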
Similar Papers
Multi-fidelity Reinforcement Learning Control for Complex Dynamical Systems
Machine Learning (CS)
Teaches computers to control tricky systems faster.
Multi-Fidelity Policy Gradient Algorithms
Machine Learning (CS)
Teaches robots using cheap, fake practice.
Automated Model Tuning for Multifidelity Uncertainty Propagation in Trajectory Simulation
Computation
Makes computer simulations faster and more accurate.