CharaConsist: Fine-Grained Consistent Character Generation
By: Mengyu Wang, Henghui Ding, Jianing Peng, and more
Potential Business Impact:
Keeps characters looking the same in many pictures.
In text-to-image generation, producing a series of consistent images that preserve the same identity is highly valuable for real-world applications. Although a few works have explored training-free methods to enhance the consistency of generated subjects, we observe that they suffer from the following problems. First, they fail to maintain consistent background details, which limits their applicability. Furthermore, when the foreground character undergoes large motion variations, inconsistencies in identity and clothing details become evident. To address these problems, we propose CharaConsist, which employs point-tracking attention and adaptive token merging, along with decoupled control of the foreground and background. CharaConsist enables fine-grained consistency for both foreground and background, supporting the generation of one character in continuous shots within a fixed scene or in discrete shots across different scenes. Moreover, CharaConsist is the first consistent-generation method tailored to text-to-image DiT models. Its ability to maintain fine-grained consistency, combined with the larger capacity of the latest base models, enables it to produce high-quality visual outputs, broadening its applicability to a wider range of real-world scenarios. The source code has been released at https://github.com/Murray-Wang/CharaConsist
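To make the abstract's two ingredients concrete, here is a minimal NumPy sketch of the general idea behind point-tracking attention and adaptive token merging: each token in the current generation is matched to its most similar reference token (a crude stand-in for point tracking), and confidently matched foreground tokens are blended with their reference counterparts while background tokens are left untouched (decoupled control). All function names, the cosine-similarity matching, the confidence threshold, and the blend weight are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def point_track(ref_tokens, cur_tokens):
    """For each current token, find the best-matching reference token.

    Cosine similarity is used here as a simple stand-in for the
    attention-based point tracking described in the abstract.
    Returns (index of best match, similarity of best match).
    """
    ref_n = ref_tokens / np.linalg.norm(ref_tokens, axis=-1, keepdims=True)
    cur_n = cur_tokens / np.linalg.norm(cur_tokens, axis=-1, keepdims=True)
    sim = cur_n @ ref_n.T  # (N_cur, N_ref) similarity matrix
    return sim.argmax(axis=-1), sim.max(axis=-1)

def adaptive_merge(ref_tokens, cur_tokens, fg_mask, threshold=0.5, alpha=0.5):
    """Blend confidently matched foreground tokens with their reference tokens.

    fg_mask separates foreground from background, so only the character's
    tokens are pulled toward the reference (decoupled control); background
    tokens pass through unchanged. threshold and alpha are assumed values.
    """
    idx, conf = point_track(ref_tokens, cur_tokens)
    merge = fg_mask & (conf > threshold)  # adaptive: merge only confident matches
    out = cur_tokens.copy()
    out[merge] = alpha * ref_tokens[idx[merge]] + (1 - alpha) * cur_tokens[merge]
    return out, merge

# Toy usage: a current frame whose tokens equal the reference is fully
# preserved, and only foreground positions are selected for merging.
rng = np.random.default_rng(0)
ref = rng.normal(size=(6, 4))
cur = ref.copy()
fg = np.array([True, True, True, False, False, False])
out, merged = adaptive_merge(ref, cur, fg)
```

In a real DiT pipeline the matching would operate on attention keys/values inside the transformer blocks rather than on raw token arrays, but the control flow (track, gate by foreground mask and confidence, merge) is the same shape.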
Similar Papers
ASemConsist: Adaptive Semantic Feature Control for Training-Free Identity-Consistent Generation
CV and Pattern Recognition
Keeps characters looking the same in different pictures.
ConsistEdit: Highly Consistent and Precise Training-free Visual Editing
CV and Pattern Recognition
Makes editing pictures and videos more consistent.
ConsiStyle: Style Diversity in Training-Free Consistent T2I Generation
CV and Pattern Recognition
Keeps characters looking the same in different pictures.