LLM-Enabled Style and Content Regularization for Personalized Text-to-Image Generation
By: Anran Yu, Wei Feng, Yaochen Zhang, and more
Potential Business Impact:
Makes AI pictures match your style better.
Personalized text-to-image generation has advanced rapidly with the emergence of Stable Diffusion. Existing methods, which typically fine-tune models using embedded identifiers, often suffer from insufficient stylization and inaccurate image content because fine-tuning reduces textual controllability. In this paper, we propose style refinement and content preservation strategies. The style refinement strategy leverages the semantic information of visual reasoning prompts and reference images to optimize style embeddings, allowing a more precise and consistent representation of style information. The content preservation strategy addresses the content bias problem by preserving the model's generalization capabilities, ensuring enhanced textual controllability without compromising stylization. Experimental results verify that our approach achieves superior performance in generating consistent and personalized text-to-image outputs.
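The abstract stays at a high level, so the sketch below illustrates one plausible reading of the style refinement idea: optimizing a learnable style embedding against reference images and a descriptive prompt, using a CLIP encoder as the similarity scorer. Everything here is an assumption for illustration, not the paper's actual method: the CLIP-based losses, the names `style_embedding`, `style_prompt`, and the 0.5 loss weight are all hypothetical.

```python
# Illustrative sketch only: refine a style embedding toward reference images
# while anchoring it to a descriptive prompt (a stand-in for the paper's
# content preservation idea). Assumes CLIP as the scorer, which the paper
# does not confirm.
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model_id = "openai/clip-vit-base-patch32"
clip = CLIPModel.from_pretrained(model_id).to(device).eval()
processor = CLIPProcessor.from_pretrained(model_id)

# Stand-in style references; in practice these would be the user's images.
reference_images = [Image.new("RGB", (224, 224), c) for c in ("navy", "teal")]
# A prompt describing the target style (illustrative).
style_prompt = "a watercolor painting with soft edges and muted colors"

with torch.no_grad():
    img_inputs = processor(images=reference_images, return_tensors="pt").to(device)
    img_feats = F.normalize(clip.get_image_features(**img_inputs), dim=-1)
    txt_inputs = processor(text=[style_prompt], return_tensors="pt", padding=True).to(device)
    txt_feats = F.normalize(clip.get_text_features(**txt_inputs), dim=-1)

# Learnable style embedding, initialized from the prompt's text features.
style_embedding = txt_feats.clone().requires_grad_(True)
optimizer = torch.optim.Adam([style_embedding], lr=1e-2)

for step in range(200):
    optimizer.zero_grad()
    emb = F.normalize(style_embedding, dim=-1)
    # Pull the embedding toward the style references (style refinement) ...
    style_loss = 1.0 - (emb @ img_feats.T).mean()
    # ... while keeping it close to the prompt so textual control is retained.
    content_loss = 1.0 - (emb @ txt_feats.T).mean()
    loss = style_loss + 0.5 * content_loss  # weight is an arbitrary choice
    loss.backward()
    optimizer.step()

print(f"final loss: {loss.item():.4f}")
```

In a full pipeline the refined embedding would condition the diffusion model (for example, as a learned token in the text encoder); the loop above only shows the embedding-level trade-off between matching reference style and preserving the prompt's semantics.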
Similar Papers
Energy-Guided Optimization for Personalized Image Editing with Pretrained Text-to-Image Diffusion Models
CV and Pattern Recognition
Changes pictures to match your exact ideas.
IMAGE-ALCHEMY: Advancing subject fidelity in personalised text-to-image generation
CV and Pattern Recognition
Makes AI draw any person or thing from a few pictures.
DesignDiffusion: High-Quality Text-to-Design Image Generation with Diffusion Models
CV and Pattern Recognition
Creates pictures from words for designs.