Efficient First-Order Optimization on the Pareto Set for Multi-Objective Learning under Preference Guidance

Published: March 26, 2025 | arXiv ID: 2504.02854v1

By: Lisha Chen, Quan Xiao, Ellen Hidemi Fukuda, and more

Potential Business Impact:

Enables models to balance multiple competing objectives (for example, fairness across languages in speech recognition) according to user-specified preferences.

Business Areas:
Personalization, Commerce and Shopping

Multi-objective learning under user-specified preferences is common in real-world problems such as multilingual speech recognition under fairness constraints. In this work, we frame such a problem as a semivectorial bilevel optimization problem, whose goal is to optimize a pre-defined preference function subject to the constraint that the model parameters are weakly Pareto optimal. To solve this problem, we convert the multi-objective constraint into a single-objective constraint through a merit function with an easy-to-evaluate gradient, and then apply a penalty-based reformulation of the bilevel problem. We theoretically establish the properties of the merit function and the relations between solutions of the penalty reformulation and those of the constrained formulation. We then propose algorithms to solve the reformulated single-level problem and establish their convergence guarantees. We test the method on various synthetic and real-world problems. The results demonstrate the effectiveness of the proposed method in finding preference-guided optimal solutions to the multi-objective problem.
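
To make the penalty idea concrete, here is a minimal sketch (not the authors' algorithm) on a toy problem: minimize the single-level objective p(x) + lam * phi(x), where p is a user-specified preference function and phi is a merit function that vanishes at weakly Pareto-stationary points. The two quadratic objectives, the linear-scalarization preference, the MGDA-style min-norm merit function, the Danskin-style treatment of its gradient, and all constants (w_pref, lam, step) are illustrative assumptions; the paper's merit function, penalty scheme, and algorithms differ.

```python
# Hypothetical sketch: penalty-based reformulation of preference-guided
# multi-objective learning,
#   min_x  p(x) + lam * phi(x),
# where phi is zero at (weakly) Pareto-stationary points. Here phi is the
# classical MGDA min-norm surrogate; the paper uses a different merit
# function with an easier-to-evaluate gradient.
import numpy as np

def objectives_and_grads(x):
    """Toy objectives f1(x) = ||x - a||^2, f2(x) = ||x - b||^2 and their gradients."""
    a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    f = np.array([np.sum((x - a) ** 2), np.sum((x - b) ** 2)])
    grads = np.stack([2 * (x - a), 2 * (x - b)])  # shape (m, d)
    return f, grads

def preference_grad(x, w_pref=np.array([0.7, 0.3])):
    """Linear-scalarization preference p(x) = <w_pref, f(x)>; returns its gradient."""
    _, grads = objectives_and_grads(x)
    return w_pref @ grads

def min_norm_weights(grads, iters=200):
    """Frank-Wolfe on the simplex for min_w ||w^T grads||^2 (the MGDA subproblem)."""
    m = grads.shape[0]
    w = np.full(m, 1.0 / m)
    G = grads @ grads.T
    for t in range(iters):
        vertex = np.zeros(m)
        vertex[int(np.argmin(2 * G @ w))] = 1.0   # best simplex vertex
        gamma = 2.0 / (t + 2.0)                   # standard Frank-Wolfe step
        w = (1 - gamma) * w + gamma * vertex
    return w

def merit_grad(x):
    """Approximate gradient of phi(x) = 0.5 * ||sum_i w*_i grad f_i(x)||^2,
    treating w* as fixed (Danskin-style); exact here since the toy Hessians are 2*I."""
    _, grads = objectives_and_grads(x)
    d = min_norm_weights(grads) @ grads   # min-norm combination of gradients
    return 2 * d                          # (sum_i w_i * 2*I) @ d

# Single-level penalized problem solved by plain gradient descent.
x, lam, step = np.array([2.0, 2.0]), 5.0, 0.02
for _ in range(500):
    x = x - step * (preference_grad(x) + lam * merit_grad(x))
print("solution:", x, "objectives:", objectives_and_grads(x)[0])
```

In this toy example the Pareto set is the segment between a and b; the penalty term drives the iterate onto that segment while the preference term selects a point on it. The paper instead characterizes how solutions of the penalized problem relate to those of the original constrained formulation and provides convergence guarantees for the proposed first-order algorithms.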

Page Count
57 pages

Category
Mathematics: Optimization and Control