LoRAverse: A Submodular Framework to Retrieve Diverse Adapters for Diffusion Models
By: Mert Sonmezer, Matthew Zheng, Pinar Yanardag
Potential Business Impact:
Finds the best AI art styles from many options.
Low-rank Adaptation (LoRA) models have revolutionized the personalization of pre-trained diffusion models by enabling fine-tuning through low-rank, factorized weight matrices specifically optimized for attention layers. These models facilitate the generation of highly customized content across a variety of objects, individuals, and artistic styles without the need for extensive retraining. Despite the availability of over 100K LoRA adapters on platforms like Civit.ai, users often face challenges in navigating, selecting, and effectively utilizing the most suitable adapters due to their sheer volume, diversity, and lack of structured organization. This paper addresses the problem of selecting the most relevant and diverse LoRA models from this vast database by framing the task as a combinatorial optimization problem and proposing a novel submodular framework. Our quantitative and qualitative experiments demonstrate that our method generates diverse outputs across a wide range of domains.
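The selection problem the abstract describes — picking a small set of adapters that is both relevant to a query and diverse — is commonly attacked with greedy submodular maximization, since greedy selection on a monotone submodular objective carries a (1 − 1/e) approximation guarantee. A minimal sketch of that idea is below; the facility-location objective, the relevance/diversity trade-off `lam`, and the use of embedding vectors for adapters are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def greedy_submodular_select(embeddings, query, k, lam=0.5):
    """Greedily pick k items maximizing a relevance-plus-coverage
    (facility-location) submodular objective.

    embeddings: (n, d) unit-normalized adapter embeddings (assumed)
    query:      (d,)   unit-normalized query embedding (assumed)
    lam:        trade-off between query relevance and diversity coverage
    """
    n = embeddings.shape[0]
    sim = embeddings @ embeddings.T   # pairwise item similarities
    rel = embeddings @ query          # similarity of each item to the query
    selected = []
    covered = np.zeros(n)             # best similarity of each item to the chosen set
    for _ in range(min(k, n)):
        best, best_gain = -1, -np.inf
        for i in range(n):
            if i in selected:
                continue
            # marginal gain: relevance term plus increase in total coverage
            coverage_gain = np.maximum(sim[i], covered).sum() - covered.sum()
            gain = lam * rel[i] + (1 - lam) * coverage_gain
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
        covered = np.maximum(covered, sim[best])
    return selected
```

Because the coverage term's marginal gain shrinks as the selected set grows (submodularity), the greedy loop naturally penalizes near-duplicate adapters: once one adapter of a style is chosen, similar adapters add little coverage and lose out to adapters from unrepresented styles.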
Similar Papers
LoRAtorio: An intrinsic approach to LoRA Skill Composition
CV and Pattern Recognition
Combines many art styles to create new pictures.
Serving Heterogeneous LoRA Adapters in Distributed LLM Inference Systems
Distributed, Parallel, and Cluster Computing
Makes AI models run faster using fewer computers.
WeightLoRA: Keep Only Necessary Adapters
Machine Learning (CS)
Trains big computer brains with less memory.