On Some Tunable Multi-fidelity Bayesian Optimization Frameworks

Published: August 1, 2025 | arXiv ID: 2508.01013v1

By: Arjun Manoj, Anastasia S. Georgiou, Dimitris G. Giovanis, and more

BigTech Affiliations: Johns Hopkins University

Potential Business Impact:

Finds optimal designs while requiring fewer expensive high-fidelity tests.

Multi-fidelity optimization employs surrogate models that integrate information from varying levels of fidelity to guide efficient exploration of complex design spaces while minimizing reliance on (expensive) high-fidelity objective function evaluations. To advance Gaussian Process (GP)-based multi-fidelity optimization, we implement a proximity-based acquisition strategy that simplifies fidelity selection by eliminating the need for separate acquisition functions at each fidelity level. We also enable multi-fidelity Upper Confidence Bound (UCB) strategies by combining them with multi-fidelity GPs rather than the standard GPs typically used. We benchmark these approaches alongside other multi-fidelity acquisition strategies (including fidelity-weighted approaches), comparing their performance, reliance on high-fidelity evaluations, and hyperparameter tunability on representative optimization tasks. The results highlight the capability of the proximity-based multi-fidelity acquisition function to deliver consistent control over high-fidelity usage while maintaining convergence efficiency. Our illustrative examples include multi-fidelity chemical kinetic models, both homogeneous and heterogeneous (dynamic catalysis for ammonia production).
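To make the abstract's workflow concrete, below is a minimal, self-contained sketch of a two-fidelity Bayesian optimization loop with a confidence-bound acquisition and a proximity-based fidelity rule. It is an illustration under stated assumptions, not the paper's implementation: the toy objectives, the additive-discrepancy surrogate (low-fidelity GP plus a GP on the high-minus-low residual), the specific proximity rule, and the `beta`/`tau` values are all hypothetical choices made for this example.

```python
import numpy as np

def rbf_kernel(A, B, length=0.2):
    """Squared-exponential kernel for 1-D inputs."""
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / length**2)

class GP:
    """Minimal zero-mean 1-D Gaussian process regressor with an RBF kernel."""
    def __init__(self, noise=1e-4):
        self.noise = noise
    def fit(self, X, y):
        self.X = np.asarray(X, float); y = np.asarray(y, float)
        K = rbf_kernel(self.X, self.X) + self.noise * np.eye(len(self.X))
        self.L = np.linalg.cholesky(K)
        self.alpha = np.linalg.solve(self.L.T, np.linalg.solve(self.L, y))
        return self
    def predict(self, Xs):
        Ks = rbf_kernel(np.asarray(Xs, float), self.X)
        mu = Ks @ self.alpha
        v = np.linalg.solve(self.L, Ks.T)
        sd = np.sqrt(np.clip(1.0 - np.sum(v**2, axis=0), 1e-12, None))
        return mu, sd

# Toy objectives (minimization): the low-fidelity model is a cheap,
# biased approximation of the expensive high-fidelity one.
f_hi = lambda x: np.sin(8 * x) + x
f_lo = lambda x: np.sin(8 * x) + x + 0.3 * np.cos(3 * x)

rng = np.random.default_rng(0)
X_hi = list(rng.uniform(0, 1, 3)); y_hi = [f_hi(x) for x in X_hi]
X_lo = list(rng.uniform(0, 1, 8)); y_lo = [f_lo(x) for x in X_lo]

grid = np.linspace(0, 1, 200)
beta = 2.0   # UCB exploration weight (tunable)
tau = 0.05   # proximity threshold: larger tau -> more high-fidelity calls
n_hi_calls = 0

for _ in range(20):
    gp_lo = GP().fit(X_lo, y_lo)
    # Additive discrepancy surrogate: GP on (f_hi - gp_lo) residuals.
    mu_at_hi, _ = gp_lo.predict(X_hi)
    gp_d = GP().fit(X_hi, np.asarray(y_hi) - mu_at_hi)
    mu_lo, sd_lo = gp_lo.predict(grid)
    mu_d, sd_d = gp_d.predict(grid)
    mu, sd = mu_lo + mu_d, np.sqrt(sd_lo**2 + sd_d**2)
    # One confidence-bound acquisition for all fidelities
    # (lower bound, since we minimize).
    x_next = grid[np.argmin(mu - beta * sd)]
    # Illustrative proximity rule: if the cheap model was already sampled
    # within tau of the candidate, escalate to a high-fidelity evaluation;
    # otherwise spend a cheap low-fidelity evaluation first.
    if np.min(np.abs(np.asarray(X_lo) - x_next)) < tau:
        X_hi.append(x_next); y_hi.append(f_hi(x_next)); n_hi_calls += 1
    else:
        X_lo.append(x_next); y_lo.append(f_lo(x_next))

best = min(y_hi)
print(f"high-fidelity calls used: {n_hi_calls}, best f_hi observed: {best:.3f}")
```

The single acquisition plus proximity check mirrors the abstract's point: one tunable threshold (`tau` here) governs how often the expensive objective is queried, instead of maintaining a separate acquisition function per fidelity level.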

Country of Origin
🇺🇸 United States

Page Count
33 pages

Category
Computer Science:
Machine Learning (CS)