Parametric Expensive Multi-Objective Optimization via Generative Solution Modeling

Published: November 12, 2025 | arXiv ID: 2511.09598v1

By: Tingyang Wei, Jiao Liu, Abhishek Gupta, and more

Potential Business Impact:

Solves whole families of related hard optimization problems faster, predicting good solutions for new problem variants without expensive re-testing.

Business Areas:
Personalization; Commerce and Shopping

Many real-world applications require solving families of expensive multi-objective optimization problems (EMOPs) under varying operational conditions. This gives rise to parametric expensive multi-objective optimization problems (P-EMOPs), where each task parameter defines a distinct optimization instance. Multi-objective Bayesian optimization methods are widely used to find finite sets of Pareto-optimal solutions for individual tasks. However, P-EMOPs present a fundamental challenge: the continuous task parameter space can contain infinitely many distinct problems, each requiring separate expensive evaluations. This demands learning an inverse model that can directly predict optimized solutions for any task-preference query without expensive re-evaluation. This paper introduces the first parametric multi-objective Bayesian optimizer that learns this inverse model by alternating between (1) acquisition-driven search leveraging inter-task synergies and (2) generative solution sampling via conditional generative models. This approach enables efficient optimization across related tasks and ultimately achieves direct solution prediction for unseen parameterized EMOPs without additional expensive evaluations. We theoretically justify the faster convergence obtained by leveraging inter-task synergies through task-aware Gaussian processes. Empirical studies on synthetic and real-world benchmarks further verify the effectiveness of our alternating framework.
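To make the alternating idea in the abstract concrete, the following is a minimal, self-contained sketch: phase 1 optimizes a few sampled (task, preference) pairs under a small evaluation budget, and phase 2 fits a conditional "inverse model" that predicts solutions for unseen tasks directly. Everything here is an illustrative assumption, not the authors' method: the toy parametric bi-objective task, the random search standing in for acquisition-driven Bayesian search, and the linear least-squares model standing in for the conditional generative model.

```python
# Sketch of the alternating framework: (1) budgeted search on sampled tasks,
# (2) fit an inverse model mapping (task parameter, preference) -> solution.
# Toy problem, search strategy, and model are all illustrative assumptions.
import math
import random

def objectives(x, theta):
    """Toy parametric bi-objective task: f1 = x^2, f2 = (x - theta)^2.
    The task parameter theta shifts the second objective's optimum."""
    return x * x, (x - theta) ** 2

def scalarize(f1, f2, w):
    """Weighted-sum scalarization for a preference weight w in [0, 1]."""
    return w * f1 + (1.0 - w) * f2

def search_task(theta, w, budget=200):
    """Stand-in for acquisition-driven search: random search under a small
    evaluation budget, returning the best solution for one (task, preference).
    The real method would use a task-aware Gaussian process acquisition."""
    best_x, best_g = None, math.inf
    for _ in range(budget):
        x = random.uniform(0.0, theta)
        g = scalarize(*objectives(x, theta), w)
        if g < best_g:
            best_x, best_g = x, g
    return best_x

def fit_inverse_model(data):
    """Fit a linear inverse model x ~ a*theta + b*(w*theta) by least squares,
    standing in for the paper's conditional generative model. Solves the
    2x2 normal equations by hand to stay dependency-free."""
    s11 = s12 = s22 = t1 = t2 = 0.0
    for theta, w, x in data:
        u, v = theta, w * theta
        s11 += u * u; s12 += u * v; s22 += v * v
        t1 += u * x; t2 += v * x
    det = s11 * s22 - s12 * s12
    a = (t1 * s22 - t2 * s12) / det
    b = (s11 * t2 - s12 * t1) / det
    return lambda theta, w: a * theta + b * w * theta

random.seed(0)
# Phase 1: optimize a small grid of sampled tasks and preferences.
data = [(theta, w, search_task(theta, w))
        for theta in (1.0, 2.0, 3.0)
        for w in (0.1, 0.5, 0.9)]
# Phase 2: the fitted inverse model predicts solutions for unseen tasks
# directly, with no further expensive evaluations.
predict = fit_inverse_model(data)
x_hat = predict(2.5, 0.5)  # unseen task theta = 2.5, preference w = 0.5
```

For this toy task the scalarized optimum has the closed form x* = (1 - w) * theta (set the derivative of w*x^2 + (1-w)*(x-theta)^2 to zero), so the prediction for (2.5, 0.5) should land near 1.25, illustrating how the inverse model generalizes across the task parameter space once trained.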

Country of Origin
πŸ‡ΈπŸ‡¬ Singapore

Page Count
17 pages

Category
Computer Science:
Machine Learning (CS)