SPG: Style-Prompting Guidance for Style-Specific Content Creation
By: Qian Liang, Zichong Chen, Yang Zhou, and more
Potential Business Impact:
Makes AI art match the exact look you want.
Although recent text-to-image (T2I) diffusion models excel at aligning generated images with textual prompts, controlling the visual style of the output remains a challenging task. In this work, we propose Style-Prompting Guidance (SPG), a novel sampling strategy for style-specific image generation. SPG constructs a style noise vector and leverages its directional deviation from unconditional noise to guide the diffusion process toward the target style distribution. By integrating SPG with Classifier-Free Guidance (CFG), our method achieves both semantic fidelity and style consistency. SPG is simple, robust, and compatible with controllable frameworks like ControlNet and IPAdapter, making it practical and widely applicable. Extensive experiments demonstrate the effectiveness and generality of our approach compared to state-of-the-art methods. Code is available at https://github.com/Rumbling281441/SPG.
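The abstract describes SPG as steering sampling along the direction from the unconditional noise prediction toward a style-conditioned one, combined with Classifier-Free Guidance. The exact formulation is not given here, so the sketch below is an assumption: it combines the two guidance directions linearly, with hypothetical weights `w_cfg` and `w_spg` and placeholder noise predictions standing in for a real diffusion model's outputs.

```python
import numpy as np


def spg_cfg_guidance(eps_uncond, eps_text, eps_style, w_cfg=7.5, w_spg=2.0):
    """Hypothetical combination of CFG and SPG guidance directions.

    eps_uncond: noise predicted with no conditioning
    eps_text:   noise predicted with the text prompt
    eps_style:  noise predicted with the style condition (SPG's style noise vector)

    Assumption: both guidance terms are deviations from the unconditional
    prediction, added linearly, as in standard classifier-free guidance.
    """
    cfg_dir = eps_text - eps_uncond    # semantic (prompt) direction
    spg_dir = eps_style - eps_uncond   # style direction
    return eps_uncond + w_cfg * cfg_dir + w_spg * spg_dir


# Toy usage with dummy noise tensors in place of a real U-Net's outputs.
eps_u = np.zeros((4, 4))
eps_t = np.ones((4, 4))
eps_s = np.full((4, 4), 2.0)
guided = spg_cfg_guidance(eps_u, eps_t, eps_s, w_cfg=1.0, w_spg=0.5)
```

With both weights at zero the rule reduces to plain unconditional sampling, which is one sanity check that the two guidance terms are purely additive corrections.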
Similar Papers
SP-Guard: Selective Prompt-adaptive Guidance for Safe Text-to-Image Generation
CV and Pattern Recognition
Stops AI from making bad pictures.
Training-Free Generation of Diverse and High-Fidelity Images via Prompt Semantic Space Optimization
CV and Pattern Recognition
Makes AI art makers create more different pictures.
SPG: Improving Motion Diffusion by Smooth Perturbation Guidance
CV and Pattern Recognition
Makes computer-animated people move more like real people.