Style Composition within Distinct LoRA Modules for Traditional Art
By: Jaehyun Lee, Wonhark Park, Wonsik Shin, and more
Potential Business Impact:
Mixes different art styles in one picture.
Diffusion-based text-to-image models have achieved remarkable results in synthesizing diverse images from text prompts and can capture specific artistic styles via style personalization. However, their entangled latent space and lack of smooth interpolation make it difficult to apply distinct painting techniques in a controlled, regional manner, often causing one style to dominate. To overcome this, we propose a zero-shot diffusion pipeline that naturally blends multiple styles by performing style composition on the denoised latents predicted during the flow-matching denoising process of separately trained, style-specialized models. We leverage the fact that lower-noise latents carry stronger stylistic information and fuse them across heterogeneous diffusion pipelines using spatial masks, enabling precise, region-specific style control. This mechanism preserves the fidelity of each individual style while allowing user-guided mixing. Furthermore, to ensure structural coherence across different models, we incorporate depth-map conditioning via ControlNet into the diffusion framework. Qualitative and quantitative experiments demonstrate that our method achieves region-specific style mixing according to the given masks.
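To make the core mechanism concrete, below is a minimal sketch of mask-guided style composition during a flow-matching denoising loop. It assumes a rectified-flow parameterization in which each model predicts a velocity field v = noise - x0, so the denoised latent can be recovered as x0_hat = x_t - t * v; the names `model_a`, `model_b`, `mask`, and `timesteps` are hypothetical placeholders, and the paper's depth-map ControlNet conditioning is omitted. This is an illustrative reading of the abstract, not the authors' released implementation.

```python
import torch

def predicted_clean_latent(x_t, v, t):
    # Rectified-flow convention: x_t = (1 - t) * x0 + t * noise,
    # so the model's velocity v = noise - x0 and x0_hat = x_t - t * v.
    return x_t - t * v

@torch.no_grad()
def masked_style_composition(model_a, model_b, x_T, mask, timesteps):
    """Denoise one shared latent with two style-specialized models,
    fusing their predicted clean (denoised) latents under a spatial mask.

    mask: tensor in [0, 1], broadcastable to the latent shape;
          1 selects model_a's style, 0 selects model_b's.
    timesteps: decreasing schedule from ~1.0 (pure noise) toward 0.0.
    """
    x = x_T
    for i, t in enumerate(timesteps[:-1]):
        t_next = timesteps[i + 1]
        # Each style-specialized model predicts its own velocity field.
        v_a = model_a(x, t)
        v_b = model_b(x, t)
        # Fuse the *denoised* x0 predictions, where stylistic
        # information is strongest, rather than the noisy latents.
        x0_a = predicted_clean_latent(x, v_a, t)
        x0_b = predicted_clean_latent(x, v_b, t)
        x0_fused = mask * x0_a + (1.0 - mask) * x0_b
        # Convert the fused x0 back to a velocity and take an Euler step.
        v_fused = (x - x0_fused) / t
        x = x + (t_next - t) * v_fused
    return x
```

Fusing in x0-space rather than averaging the noisy latents is what lets each region keep the fidelity of its assigned style: the mask partitions the image at the level of the models' clean-image predictions, and the shared Euler step keeps the two pipelines synchronized on a single latent trajectory.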
Similar Papers
One-shot Embroidery Customization via Contrastive LoRA Modulation
Graphics
Makes computer designs look like real embroidery.
Leveraging Diffusion Models for Stylization using Multiple Style Images
CV and Pattern Recognition
Changes pictures to look like any art style.
LLM-Enabled Style and Content Regularization for Personalized Text-to-Image Generation
CV and Pattern Recognition
Makes AI pictures match your style better.