3SGen: Unified Subject, Style, and Structure-Driven Image Generation with Adaptive Task-specific Memory
By: Xinyang Song, Libin Wang, Weining Wang, et al.
Recent image generation approaches often address subject-, style-, and structure-driven conditioning in isolation, leading to feature entanglement and limited task transferability. In this paper, we introduce 3SGen, a task-aware unified framework that supports all three conditioning modes within a single model. 3SGen employs an MLLM equipped with learnable semantic queries to align text-image semantics, complemented by a VAE branch that preserves fine-grained visual details. At its core, an Adaptive Task-specific Memory (ATM) module dynamically disentangles, stores, and retrieves condition-specific priors, such as identity for subjects, texture for styles, and spatial layout for structures, via a lightweight gating mechanism over a scalable set of memory items. This design mitigates inter-task interference and scales naturally to compositional inputs. In addition, we propose 3SGen-Bench, a unified image-driven generation benchmark with standardized metrics for evaluating cross-task fidelity and controllability. Extensive experiments on 3SGen-Bench and other public benchmarks demonstrate the superior performance of our method across diverse image-driven generation tasks.
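To make the ATM idea concrete, below is a minimal PyTorch sketch of a gated memory module in the spirit the abstract describes: a bank of learnable memory items and a lightweight gate that mixes them into a condition-specific prior. The paper does not publish its implementation, so the class name `AdaptiveTaskMemory`, all shapes, and the softmax-gating formulation are illustrative assumptions, not the authors' actual design.

```python
# Hypothetical sketch of an Adaptive Task-specific Memory (ATM) module.
# All names, shapes, and the gating formulation are assumptions made for
# illustration; the paper's real implementation may differ substantially.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveTaskMemory(nn.Module):
    """Stores a scalable bank of learnable memory items and retrieves a
    task-specific prior via a lightweight gating mechanism (sketch only)."""

    def __init__(self, dim: int = 768, num_items: int = 16):
        super().__init__()
        # Scalable bank of learnable memory items, one row per item.
        self.memory = nn.Parameter(torch.randn(num_items, dim) * 0.02)
        # Lightweight gate: maps pooled condition features to item weights.
        self.gate = nn.Linear(dim, num_items)

    def forward(self, cond_feats: torch.Tensor) -> torch.Tensor:
        # cond_feats: [batch, tokens, dim] features of the conditioning image.
        pooled = cond_feats.mean(dim=1)                 # [batch, dim]
        weights = F.softmax(self.gate(pooled), dim=-1)  # [batch, num_items]
        # Retrieve a per-sample prior as a gated mixture of memory items.
        prior = weights @ self.memory                   # [batch, dim]
        # Inject the retrieved prior back into every condition token.
        return cond_feats + prior.unsqueeze(1)


# Usage: retrieving, e.g., an identity prior for a subject-driven condition.
atm = AdaptiveTaskMemory(dim=768, num_items=16)
feats = torch.randn(2, 77, 768)  # dummy condition-image features
out = atm(feats)                 # [2, 77, 768]
```

Because the gate produces per-sample mixing weights, different conditioning modes (subject, style, structure) can address different subsets of the memory bank, which is one plausible reading of how such a design would mitigate inter-task interference.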