OmniPSD: Layered PSD Generation with Diffusion Transformer
By: Cheng Liu, Yiren Song, Haofan Wang, et al.
Recent advances in diffusion models have greatly improved image generation and editing, yet generating or reconstructing layered PSD files with transparent alpha channels remains highly challenging. We propose OmniPSD, a unified diffusion framework built upon the Flux ecosystem that enables both text-to-PSD generation and image-to-PSD decomposition through in-context learning. For text-to-PSD generation, OmniPSD arranges multiple target layers spatially into a single canvas and learns their compositional relationships through spatial attention, producing semantically coherent and hierarchically structured layers. For image-to-PSD decomposition, it performs iterative in-context editing, progressively extracting and erasing textual and foreground components to reconstruct editable PSD layers from a single flattened image. An RGBA-VAE is employed as an auxiliary representation module to preserve transparency without affecting structure learning. Extensive experiments on our new RGBA-layered dataset demonstrate that OmniPSD achieves high-fidelity generation, structural consistency, and transparency awareness, offering a new paradigm for layered design generation and decomposition with diffusion transformers.
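Two operations mentioned above are concrete enough to sketch: arranging multiple RGBA layers spatially on one canvas (so a diffusion transformer's attention can relate them in context), and flattening ordered RGBA layers into the single composite image that decomposition must invert. The snippet below is a minimal illustration of both ideas, assuming simple side-by-side tiling and standard "over" alpha blending; the helper names and layout scheme are hypothetical and not taken from the paper.

```python
import numpy as np

def arrange_layers_in_context(layers):
    """Tile RGBA layers (H, W, 4 float arrays in [0, 1]) side by side into one
    canvas. A stand-in for the kind of spatial in-context layout the abstract
    describes; the paper's actual arrangement may differ."""
    height = max(layer.shape[0] for layer in layers)
    width = sum(layer.shape[1] for layer in layers)
    canvas = np.zeros((height, width, 4), dtype=np.float32)
    x = 0
    for layer in layers:
        canvas[: layer.shape[0], x : x + layer.shape[1]] = layer
        x += layer.shape[1]
    return canvas

def alpha_composite(layers):
    """Flatten ordered RGBA layers (bottom layer first) with 'over' blending.
    Image-to-PSD decomposition amounts to inverting this operation."""
    out = np.zeros_like(layers[0], dtype=np.float32)
    for layer in layers:
        alpha = layer[..., 3:4]
        out[..., :3] = layer[..., :3] * alpha + out[..., :3] * (1.0 - alpha)
        out[..., 3:4] = alpha + out[..., 3:4] * (1.0 - alpha)
    return out
```

Flattening discards the per-layer structure, which is why recovering editable layers from a single image requires the iterative extract-and-erase procedure the abstract describes rather than a direct inverse.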
Similar Papers
PSDiffusion: Harmonized Multi-Layer Image Generation via Layout and Appearance Alignment
CV and Pattern Recognition
Creates layered pictures with real-looking shadows.
OmniAlpha: A Sequence-to-Sequence Framework for Unified Multi-Task RGBA Generation
CV and Pattern Recognition
Creates images with transparent parts, like cutouts.
DreamLayer: Simultaneous Multi-Layer Generation via Diffusion Model
CV and Pattern Recognition
Creates realistic pictures from text, layer by layer.