MapReduce LoRA: Advancing the Pareto Front in Multi-Preference Optimization for Generative Models
By: Chieh-Yun Chen, Zhonghao Wang, Qi Chen, and more
Potential Business Impact:
Makes AI create better pictures and videos.
Reinforcement learning from human feedback (RLHF) with reward models has advanced the alignment of generative models with human aesthetic and perceptual preferences. However, jointly optimizing multiple rewards often incurs an alignment tax: gains in one dimension come at the cost of others. To address this, we introduce two complementary methods: MapReduce LoRA and Reward-aware Token Embedding (RaTE). MapReduce LoRA trains preference-specific LoRA experts in parallel and iteratively merges them to refine a shared base model; RaTE learns reward-specific token embeddings that compose at inference for flexible preference control. Experiments on text-to-image generation show improvements of 36.1%, 4.6%, and 55.7% on Stable Diffusion 3.5 Medium and 32.7%, 4.3%, and 67.1% on FLUX.1-dev for GenEval, PickScore, and OCR, respectively. On text-to-video generation (HunyuanVideo), visual and motion quality improve by 48.1% and 90.0%, respectively. On the Helpful Assistant language task with Llama-2 7B, helpfulness and harmlessness improve by 43.4% and 136.7%, respectively. Our framework establishes a new state-of-the-art recipe for multi-preference alignment across modalities.
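The abstract only outlines the training loop, so the following is a minimal Python sketch of the map/reduce cycle it describes, under stated assumptions: the helper names (train_lora_expert, merge_experts, mapreduce_lora), the simple averaging merge rule, and the representation of LoRA experts as plain weight-delta dicts are all illustrative, not the paper's actual implementation.

# Minimal sketch of the MapReduce LoRA cycle described in the abstract (assumptions noted above).
import copy
import torch

def train_lora_expert(base_state, reward_fn):
    # Placeholder for fine-tuning one LoRA expert against a single reward.
    # Returns a dict of per-layer weight deltas; zeros here so the sketch runs end to end.
    return {name: torch.zeros_like(w) for name, w in base_state.items()}

def merge_experts(deltas):
    # Reduce step (assumed): average the per-reward LoRA deltas into one shared update.
    return {name: torch.stack([d[name] for d in deltas]).mean(dim=0)
            for name in deltas[0]}

def mapreduce_lora(base_state, reward_fns, rounds=3):
    state = copy.deepcopy(base_state)
    for _ in range(rounds):
        # Map step: train one preference-specific expert per reward (parallelizable).
        experts = [train_lora_expert(state, r) for r in reward_fns]
        # Reduce step: merge the experts and fold the update into the shared base,
        # which then seeds the next round of expert training.
        merged = merge_experts(experts)
        state = {name: w + merged[name] for name, w in state.items()}
    return state

if __name__ == "__main__":
    base = {"attn.to_q.weight": torch.randn(8, 8)}          # toy stand-in for model weights
    rewards = [lambda x: 0.0, lambda x: 0.0]                # e.g. aesthetic score, OCR accuracy
    refined = mapreduce_lora(base, rewards)
    print(refined["attn.to_q.weight"].shape)

RaTE, by contrast, would act only at inference time, adding learned reward-specific token embeddings to the prompt so that preferences can be mixed without retraining; that component is not shown in the sketch.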
Similar Papers
AutoLoRA: Automatic LoRA Retrieval and Fine-Grained Gated Fusion for Text-to-Image Generation
CV and Pattern Recognition
Lets computers create many different pictures easily.
LoRAverse: A Submodular Framework to Retrieve Diverse Adapters for Diffusion Models
CV and Pattern Recognition
Finds the best AI art styles from many options.
A Shared Low-Rank Adaptation Approach to Personalized RLHF
Machine Learning (CS)
AI learns what *you* like, not just what most people like.