LoFA: Learning to Predict Personalized Priors for Fast Adaptation of Visual Generative Models
By: Yiming Hao, Mutian Xu, Chongjie Ye, and more
Personalizing visual generative models to meet specific user needs has gained increasing attention, yet current methods such as Low-Rank Adaptation (LoRA) remain impractical because they demand task-specific data and lengthy optimization. While a few hypernetwork-based approaches attempt to predict adaptation weights directly, they struggle to map fine-grained user prompts to complex LoRA weight distributions, which limits their practical applicability. To bridge this gap, we propose LoFA, a general framework that efficiently predicts personalized priors for fast model adaptation. We first identify a key property of LoRA: structured distribution patterns emerge in the relative changes between LoRA and base model parameters. Building on this observation, we design a two-stage hypernetwork that first predicts these relative distribution patterns to capture key adaptation regions, then uses them to guide the final LoRA weight prediction. Extensive experiments demonstrate that our method consistently predicts high-quality personalized priors within seconds, across multiple tasks and user prompts, and even outperforms conventional LoRA, which requires hours of optimization. Project page: https://jaeger416.github.io/lofa/.
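The abstract describes the two-stage design only at a high level. The sketch below illustrates one plausible way such a hypernetwork could be wired up: stage 1 maps a prompt embedding to a per-layer relative-change pattern, and stage 2 predicts low-rank LoRA factors conditioned on that pattern. The class name, dimensions, and concatenation-based conditioning are illustrative assumptions, not the paper's released architecture.

```python
# Minimal sketch (not the authors' implementation) of a two-stage hypernetwork:
# stage 1 predicts a relative-change pattern over target layers, stage 2 predicts
# LoRA factors guided by that pattern. Names, shapes, and the conditioning scheme
# are assumptions; dimensions are toy sizes for illustration.
import torch
import torch.nn as nn


class TwoStageLoRAHypernet(nn.Module):
    def __init__(self, prompt_dim=768, hidden_dim=256,
                 target_dim=64, rank=4, num_layers=4):
        super().__init__()
        self.num_layers, self.target_dim, self.rank = num_layers, target_dim, rank

        # Stage 1: from the prompt embedding, predict a per-layer pattern that
        # estimates where the LoRA update is large relative to the base weights.
        self.pattern_head = nn.Sequential(
            nn.Linear(prompt_dim, hidden_dim), nn.GELU(),
            nn.Linear(hidden_dim, num_layers * target_dim),
        )

        # Stage 2: predict low-rank factors A (down) and B (up) for each layer,
        # conditioned on the prompt embedding and the stage-1 pattern.
        self.weight_head = nn.Sequential(
            nn.Linear(prompt_dim + target_dim, hidden_dim), nn.GELU(),
            nn.Linear(hidden_dim, 2 * rank * target_dim),
        )

    def forward(self, prompt_emb):
        # prompt_emb: (batch, prompt_dim) pooled text-encoder features.
        b = prompt_emb.size(0)
        pattern = torch.sigmoid(
            self.pattern_head(prompt_emb)
        ).view(b, self.num_layers, self.target_dim)               # (b, L, D)

        # Broadcast the prompt to every layer and append that layer's pattern.
        cond = torch.cat(
            [prompt_emb.unsqueeze(1).expand(-1, self.num_layers, -1), pattern],
            dim=-1,
        )                                                          # (b, L, prompt_dim + D)
        factors = self.weight_head(cond)                           # (b, L, 2*r*D)
        A, B = factors.chunk(2, dim=-1)
        A = A.view(b, self.num_layers, self.rank, self.target_dim)  # (b, L, r, D)
        B = B.view(b, self.num_layers, self.target_dim, self.rank)  # (b, L, D, r)
        return A, B, pattern


# Usage: one forward pass yields per-layer low-rank updates, delta_W = B @ A.
hypernet = TwoStageLoRAHypernet()
A, B, pattern = hypernet(torch.randn(2, 768))
delta_W = B @ A                       # (2, num_layers, D, D) per-layer updates
print(delta_W.shape, pattern.shape)
```

In this sketch the guidance from stage 1 enters only through concatenation; other designs (e.g. scaling the predicted update by the pattern) are equally consistent with the abstract's description.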
Similar Papers
AutoLoRA: Automatic LoRA Retrieval and Fine-Grained Gated Fusion for Text-to-Image Generation
CV and Pattern Recognition
Automatically retrieves LoRA adapters and fuses them with fine-grained gating for text-to-image generation.
LoRAverse: A Submodular Framework to Retrieve Diverse Adapters for Diffusion Models
CV and Pattern Recognition
Uses a submodular framework to retrieve a diverse set of adapters for diffusion models.
LoRAtorio: An intrinsic approach to LoRA Skill Composition
CV and Pattern Recognition
Composes multiple LoRA skills through an intrinsic approach.