Learning Common and Salient Generative Factors Between Two Image Datasets
By: Yunlong He, Gwilherm Lesné, Ziqian Liu, and more
Potential Business Impact:
Finds what's the same and what's different across two sets of pictures.
Recent advancements in image synthesis have enabled high-quality image generation and manipulation. Most works focus on: 1) conditional manipulation, where an image is modified conditioned on a given attribute, or 2) disentangled representation learning, where each latent direction should represent a distinct semantic attribute. In this paper, we focus on a different and less studied research problem, called Contrastive Analysis (CA). Given two image datasets, we want to separate the common generative factors, shared across the two datasets, from the salient ones, specific to only one dataset. Compared to existing methods, which use attributes as supervision signals for editing (e.g., glasses, gender), the proposed method relies on weaker supervision, since it only uses the dataset label. We propose a novel framework for CA, which can be adapted to both GAN and Diffusion models, to learn both common and salient factors. By defining new and well-adapted learning strategies and losses, we ensure a relevant separation between common and salient factors while preserving high-quality generation. We evaluate our approach on diverse datasets covering human faces, animal images, and medical scans. Our framework demonstrates superior separation ability and image synthesis quality compared to prior methods.
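To make the Contrastive Analysis setup concrete, here is a minimal, hypothetical sketch of the core idea: a latent code split into a "common" part and a "salient" part, with the salient part penalized toward zero on the background (common-only) dataset so that dataset-specific factors can only be expressed through it. The names (CAAutoencoder, ca_loss), the simple MLP autoencoder, and the specific salient-zeroing penalty are illustrative assumptions, not the paper's GAN/Diffusion framework or its actual losses.

```python
# Conceptual sketch of common/salient latent separation for Contrastive Analysis.
# NOT the authors' method: architecture and losses are simplified placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CAAutoencoder(nn.Module):
    def __init__(self, image_dim=784, common_dim=32, salient_dim=8):
        super().__init__()
        # Encoder outputs a single vector that is split into z_common and z_salient.
        self.encoder = nn.Sequential(
            nn.Linear(image_dim, 256), nn.ReLU(),
            nn.Linear(256, common_dim + salient_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(common_dim + salient_dim, 256), nn.ReLU(),
            nn.Linear(256, image_dim),
        )
        self.common_dim = common_dim

    def forward(self, x):
        z = self.encoder(x)
        z_common = z[:, :self.common_dim]
        z_salient = z[:, self.common_dim:]
        recon = self.decoder(torch.cat([z_common, z_salient], dim=1))
        return recon, z_common, z_salient

def ca_loss(model, x_background, x_target, salient_weight=1.0):
    """Reconstruct both datasets, and force the salient code to be inactive
    (near zero) on the background dataset, so only the target dataset can
    use salient factors."""
    recon_bg, _, z_s_bg = model(x_background)
    recon_tg, _, _ = model(x_target)
    recon = F.mse_loss(recon_bg, x_background) + F.mse_loss(recon_tg, x_target)
    salient_penalty = z_s_bg.pow(2).mean()  # salient factors unused on background
    return recon + salient_weight * salient_penalty

# Toy usage with random tensors standing in for the two image datasets.
model = CAAutoencoder()
x_bg, x_tg = torch.randn(16, 784), torch.randn(16, 784)
loss = ca_loss(model, x_bg, x_tg)
loss.backward()
```

In this toy form, the separation relies only on the dataset signal (which batch an image came from), mirroring the weak supervision described in the abstract; the paper's framework adds dedicated learning strategies and losses on top of GAN and Diffusion backbones to keep generation quality high.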
Similar Papers
Salient Concept-Aware Generative Data Augmentation
CV and Pattern Recognition
Makes AI create better, more varied pictures from words.
Comparison Reveals Commonality: Customized Image Generation through Contrastive Inversion
CV and Pattern Recognition
Makes AI create pictures from just a few examples.
Supervised Contrastive Learning for Few-Shot AI-Generated Image Detection and Attribution
CV and Pattern Recognition
Finds fake pictures made by AI.