Steering Guidance for Personalized Text-to-Image Diffusion Models
By: Sunghyun Park, Seokeon Choi, Hyoungwoo Park, and more
Potential Business Impact:
Generates personalized images that stay faithful to the target subject while still following the text prompt.
Personalizing text-to-image diffusion models is crucial for adapting pre-trained models to specific target concepts, enabling diverse image generation. However, fine-tuning with only a few images introduces an inherent trade-off between aligning with the target distribution (e.g., subject fidelity) and preserving the broad knowledge of the original model (e.g., text editability). Existing sampling guidance methods, such as classifier-free guidance (CFG) and autoguidance (AG), fail to guide the output toward a well-balanced space: CFG restricts adaptation to the target distribution, while AG compromises text alignment. To address these limitations, we propose personalization guidance, a simple yet effective method that leverages an unlearned weak model conditioned on a null text prompt. Moreover, our method dynamically controls the extent of unlearning in the weak model through weight interpolation between the pre-trained and fine-tuned models during inference. Unlike existing guidance methods, which depend solely on guidance scales, our method explicitly steers the outputs toward a balanced latent space without additional computational overhead. Experimental results demonstrate that the proposed guidance improves both text alignment and fidelity to the target distribution, and integrates seamlessly with various fine-tuning strategies.
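The abstract's description maps naturally onto a CFG-style update in which the unconditional branch comes from a weight-interpolated "weak" model evaluated on the null prompt. The sketch below illustrates one plausible reading, assuming a diffusers-style `UNet2DConditionModel` forward API; the helper `interpolate_weights` and the parameter names `alpha` and `scale` are illustrative, not taken from the paper, and the authors' exact formulation may differ.

```python
import torch

def interpolate_weights(pretrained_sd, finetuned_sd, alpha):
    # Hypothetical helper (not from the paper): build the "weak" model's
    # state dict by linearly interpolating pre-trained and fine-tuned weights.
    # alpha = 0 recovers the pre-trained model; alpha = 1 the fine-tuned one,
    # so (1 - alpha) loosely plays the role of the "extent of unlearning".
    return {k: (1.0 - alpha) * pretrained_sd[k] + alpha * finetuned_sd[k]
            for k in pretrained_sd}

@torch.no_grad()
def personalization_guided_eps(unet_finetuned, unet_weak, x_t, t,
                               cond_emb, null_emb, scale=7.5):
    # Conditional noise prediction from the fully fine-tuned (strong) model.
    eps_cond = unet_finetuned(x_t, t, encoder_hidden_states=cond_emb).sample
    # Null-prompt prediction from the partially "unlearned" weak model,
    # e.g. a UNet loaded with interpolate_weights(...) beforehand.
    eps_weak = unet_weak(x_t, t, encoder_hidden_states=null_emb).sample
    # CFG-style extrapolation, except the negative branch is the weak model
    # on the null prompt rather than the strong model on the null prompt.
    return eps_weak + scale * (eps_cond - eps_weak)
```

In this reading, `alpha` would be the knob the abstract says is controlled dynamically during inference, and since only existing weights are interpolated and no extra network is trained or evaluated beyond the usual two branches, it is consistent with the claim of no additional computational overhead.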
Similar Papers
How Much To Guide: Revisiting Adaptive Guidance in Classifier-Free Guidance Text-to-Vision Diffusion Models
CV and Pattern Recognition
Adapts how much guidance to apply, making AI image generation faster and better.
Dynamic Classifier-Free Diffusion Guidance via Online Feedback
Machine Learning (CS)
Adjusts guidance on the fly so AI pictures match their prompts better.
Energy-Guided Optimization for Personalized Image Editing with Pretrained Text-to-Image Diffusion Models
CV and Pattern Recognition
Edits pictures to match your exact ideas.