Score: 1

Learning Common and Salient Generative Factors Between Two Image Datasets

Published: December 14, 2025 | arXiv ID: 2512.12800v1

By: Yunlong He, Gwilherm Lesné, Ziqian Liu, and more

Potential Business Impact:

Finds what is the same and what is different between two sets of pictures.

Business Areas:
Image Recognition Data and Analytics, Software

Recent advances in image synthesis have enabled high-quality image generation and manipulation. Most works focus on: 1) conditional manipulation, where an image is modified conditioned on a given attribute, or 2) disentangled representation learning, where each latent direction should represent a distinct semantic attribute. In this paper, we focus on a different and less studied research problem, called Contrastive Analysis (CA). Given two image datasets, we want to separate the common generative factors, shared across the two datasets, from the salient ones, specific to only one dataset. Compared to existing methods, which use attribute annotations as supervision for editing (e.g., glasses, gender), the proposed method relies on weaker supervision, using only the dataset label. We propose a novel framework for CA, adaptable to both GAN and diffusion models, that learns both common and salient factors. By defining new, well-adapted learning strategies and losses, we ensure a relevant separation between common and salient factors while preserving high-quality generation. We evaluate our approach on diverse datasets covering human faces, animal images, and medical scans. Our framework demonstrates superior separation ability and image synthesis quality compared to prior methods.
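The core Contrastive Analysis setup can be illustrated with a toy latent-space split. Everything below is a hedged sketch, not the paper's method: the dimensions, the stand-in "encoder", and the hard zeroing of the salient block (which the paper instead enforces through learned losses) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8                          # total latent size (hypothetical)
COMMON_DIM = 5                          # dimensions shared by both datasets
SALIENT_DIM = LATENT_DIM - COMMON_DIM   # dimensions specific to the target dataset

def encode(x, is_background):
    """Toy 'encoder': slice an image vector into a latent code, then
    suppress the salient block for background samples, mirroring the CA
    assumption that the background dataset contains no salient factors."""
    z = x[:LATENT_DIM].copy()           # stand-in for a learned encoder
    if is_background:
        z[COMMON_DIM:] = 0.0            # background: salient factors inactive
    return z

x_bg = rng.normal(size=16)  # a 'background' sample (e.g., a healthy scan)
x_tg = rng.normal(size=16)  # a 'target' sample (e.g., a pathological scan)

z_bg = encode(x_bg, is_background=True)
z_tg = encode(x_tg, is_background=False)

# Only the dataset label (background vs. target) was used to decide
# which latent block may be active -- the 'weaker supervision' signal.
assert np.all(z_bg[COMMON_DIM:] == 0.0)
```

In the actual framework this partition would be learned jointly with a GAN or diffusion generator, with losses encouraging the common block to explain both datasets and the salient block to explain only the target one.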

Repos / Data Links

Page Count
31 pages

Category
Computer Science:
CV and Pattern Recognition