Efficient First-Order Optimization on the Pareto Set for Multi-Objective Learning under Preference Guidance
By: Lisha Chen, Quan Xiao, Ellen Hidemi Fukuda, and more
Potential Business Impact:
Helps computers learn to do many things fairly.
Multi-objective learning under user-specified preferences is common in real-world problems such as multi-lingual speech recognition under fairness constraints. In this work, we frame such a problem as a semivectorial bilevel optimization problem whose goal is to optimize a pre-defined preference function, subject to the constraint that the model parameters are weakly Pareto optimal. To solve this problem, we convert the multi-objective constraint into a single-objective constraint through a merit function with an easy-to-evaluate gradient, and then use a penalty-based reformulation of the bilevel optimization problem. We theoretically establish the properties of the merit function and the relations between solutions of the penalty reformulation and the constrained formulation. We then propose algorithms to solve the reformulated single-level problem and establish their convergence guarantees. We test the method on various synthetic and real-world problems. The results demonstrate the effectiveness of the proposed method in finding preference-guided optimal solutions to the multi-objective problem.
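The abstract packs the construction into a few sentences; a schematic sketch of the kind of formulation it describes may help. The symbols below (F for the preference function, f_1, ..., f_m for the learning objectives, phi for the merit function, rho for the penalty parameter) are illustrative placeholders and not necessarily the paper's own notation:

\[
  \min_{x}\; F(x) \quad \text{s.t.}\quad x \ \text{is weakly Pareto optimal for}\ \big(f_1(x), \dots, f_m(x)\big)
\]
\[
  \;\Longrightarrow\quad \min_{x}\; F(x) + \rho\, \phi(x), \qquad \rho > 0,
\]

where \phi(x) \ge 0 is a merit function that equals zero exactly at weakly Pareto optimal points. Under this (assumed) setup, the penalized problem is a single-level surrogate that first-order methods can handle directly, since the gradient of \phi is taken to be easy to evaluate.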
Similar Papers
Preference Elicitation for Multi-objective Combinatorial Optimization with Active Learning and Maximum Likelihood Estimation
Artificial Intelligence
Helps computers find the best choices with fewer questions.
Preference-Guided Diffusion for Multi-Objective Offline Optimization
Machine Learning (CS)
Finds the best designs, even ones it has never seen.
Preference Optimization for Combinatorial Optimization Problems
Machine Learning (CS)
Teaches computers to solve hard puzzles better.