On Some Tunable Multi-fidelity Bayesian Optimization Frameworks
By: Arjun Manoj, Anastasia S. Georgiou, Dimitris G. Giovanis, and more
Potential Business Impact:
Finds the best designs using fewer expensive tests.
Multi-fidelity optimization employs surrogate models that integrate information from varying levels of fidelity to guide efficient exploration of complex design spaces while minimizing the reliance on (expensive) high-fidelity objective function evaluations. To advance Gaussian Process (GP)-based multi-fidelity optimization, we implement a proximity-based acquisition strategy that simplifies fidelity selection by eliminating the need for separate acquisition functions at each fidelity level. We also enable multi-fidelity Upper Confidence Bound (UCB) strategies by combining them with multi-fidelity GPs rather than the standard GPs typically used. We benchmark these approaches alongside other multi-fidelity acquisition strategies (including fidelity-weighted approaches), comparing their performance, reliance on high-fidelity evaluations, and hyperparameter tunability on representative optimization tasks. The results highlight the capability of the proximity-based multi-fidelity acquisition function to deliver consistent control over high-fidelity usage while maintaining convergence efficiency. Our illustrative examples include multi-fidelity chemical kinetic models, both homogeneous and heterogeneous (dynamic catalysis for ammonia production).
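The abstract's core idea — maximize a UCB acquisition over a GP surrogate, then pick the query fidelity with a single tunable proximity rule instead of per-fidelity acquisition functions — can be sketched in a toy 1-D setting. This is a minimal illustration, not the paper's method: the multi-fidelity GP is replaced by a single pooled GP for brevity, the objectives `f_high`/`f_low` are invented toy functions, and the rule "query high fidelity only when the candidate lies within distance `gamma` of an existing high-fidelity sample" is one plausible reading of a proximity-based criterion, with `gamma` playing the role of the tunable knob that controls high-fidelity usage.

```python
import numpy as np

def rbf_kernel(a, b, length=0.2):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6, length=0.2):
    """Standard zero-mean GP posterior mean and std at query points."""
    K = rbf_kernel(x_train, x_train, length) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_query, x_train, length)
    Kss = rbf_kernel(x_query, x_query, length)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    std = np.sqrt(np.maximum(np.diag(cov), 0.0))
    return mean, std

def f_high(x):  # toy "expensive" high-fidelity objective (hypothetical)
    return np.sin(3 * x) + 0.5 * x

def f_low(x):   # toy cheap, biased low-fidelity approximation (hypothetical)
    return np.sin(3 * x) + 0.5 * x + 0.3 * np.cos(5 * x)

def proximity_mf_ucb_step(x_hi, y_hi, x_lo, y_lo, grid, beta=2.0, gamma=0.15):
    """One iteration: maximize UCB on the pooled surrogate, then choose
    the fidelity by the candidate's distance to high-fidelity samples.
    gamma is the tunable knob controlling high-fidelity usage."""
    x_all = np.concatenate([x_hi, x_lo])
    y_all = np.concatenate([y_hi, y_lo])
    mean, std = gp_posterior(x_all, y_all, grid)
    x_star = grid[np.argmax(mean + beta * std)]   # UCB maximizer
    use_high = np.min(np.abs(x_hi - x_star)) < gamma
    return x_star, use_high

if __name__ == "__main__":
    x_hi = np.array([0.0, 1.0]); y_hi = f_high(x_hi)
    x_lo = np.linspace(0.1, 0.9, 5); y_lo = f_low(x_lo)
    grid = np.linspace(0.0, 1.0, 201)
    for _ in range(5):
        x_star, use_high = proximity_mf_ucb_step(x_hi, y_hi, x_lo, y_lo, grid)
        if use_high:
            x_hi = np.append(x_hi, x_star); y_hi = np.append(y_hi, f_high(x_star))
        else:
            x_lo = np.append(x_lo, x_star); y_lo = np.append(y_lo, f_low(x_star))
    print(f"high-fidelity evaluations: {len(x_hi)}")
```

A single acquisition function is optimized per iteration; only the scalar threshold `gamma` decides which fidelity pays for the evaluation, which is the tunability the abstract emphasizes (larger `gamma` spends more high-fidelity budget, smaller `gamma` spends less).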
Similar Papers
Progressive multi-fidelity learning for physical system predictions
Machine Learning (CS)
Makes computer guesses better with mixed data.
Efficient multi-fidelity Gaussian process regression for noisy outputs and non-nested experimental designs
Applications
Improves computer predictions with less data.
Assessing the performance of correlation-based multi-fidelity neural emulators
Machine Learning (CS)
Makes slow computer models faster and smarter.