ASemConsist: Adaptive Semantic Feature Control for Training-Free Identity-Consistent Generation

Published: December 29, 2025 | arXiv ID: 2512.23245v1

By: Shinseong Kim, Minjung Shin, Hyunin Cho, and more

Potential Business Impact:

Keeps characters looking the same across different generated images.

Business Areas:
Semantic Search, Internet Services

Recent text-to-image diffusion models have significantly improved visual quality and text alignment. However, generating a sequence of images while preserving a consistent character identity across diverse scene descriptions remains challenging. Existing methods often struggle with a trade-off between maintaining identity consistency and ensuring per-image prompt alignment. In this paper, we introduce a novel framework, ASemConsist, that addresses this challenge through selective text embedding modification, enabling explicit semantic control over character identity without sacrificing prompt alignment. Furthermore, based on our analysis of padding embeddings in FLUX, we propose a semantic control strategy that repurposes padding embeddings as semantic containers. Additionally, we introduce an adaptive feature-sharing strategy that automatically evaluates textual ambiguity and applies constraints only to ambiguous identity prompts. Finally, we propose a unified evaluation protocol, the Consistency Quality Score (CQS), which integrates identity preservation and per-image text alignment into a single comprehensive metric, explicitly capturing performance imbalances between the two. Our framework achieves state-of-the-art performance, effectively overcoming prior trade-offs. Project page: https://minjung-s.github.io/asemconsist
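The abstract does not give the CQS formula, but it states the metric must penalize imbalance between identity preservation and per-image text alignment rather than reward one at the other's expense. A minimal illustrative sketch (the function name and the harmonic-mean combination are assumptions, not the paper's actual definition) could look like:

```python
def combined_score(identity: float, alignment: float) -> float:
    """Illustrative combination of two sub-scores in [0, 1].

    NOTE: This is NOT the paper's CQS definition, only a sketch of the
    stated design goal. A harmonic mean rewards balanced performance:
    a model scoring (0.9, 0.7) is ranked below one scoring (0.8, 0.8),
    even though both have the same arithmetic mean.
    """
    if identity <= 0.0 or alignment <= 0.0:
        return 0.0
    return 2.0 * identity * alignment / (identity + alignment)


# Balanced sub-scores keep their value; imbalanced ones are penalized.
balanced = combined_score(0.8, 0.8)    # 0.8
imbalanced = combined_score(0.9, 0.7)  # 0.7875 < 0.8
```

For the paper's actual metric, see the full text at the project page linked above.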

Page Count
20 pages

Category
Computer Science:
Computer Vision and Pattern Recognition