Ar2Can: An Architect and an Artist Leveraging a Canvas for Multi-Human Generation
By: Shubhankar Borse, Phuc Pham, Farzad Farhadzadeh, and others
Potential Business Impact:
Creates realistic pictures with many people.
Despite recent advances in text-to-image generation, existing models consistently fail to produce reliable multi-human scenes, often duplicating faces, merging identities, or miscounting individuals. We present Ar2Can, a novel two-stage framework that disentangles spatial planning from identity rendering for multi-human generation. The Architect module predicts structured layouts, specifying where each person should appear. The Artist module then synthesizes photorealistic images, guided by a spatially-grounded face matching reward that combines Hungarian spatial alignment with ArcFace identity similarity. This approach ensures that faces are rendered at the correct locations and that reference identities are faithfully preserved. We develop two Architect variants, seamlessly integrated with our diffusion-based Artist model and optimized via Group Relative Policy Optimization (GRPO) using compositional rewards for count accuracy, image quality, and identity matching. Evaluated on the MultiHuman-Testbench, Ar2Can achieves substantial improvements in both count accuracy and identity preservation, while maintaining high perceptual quality. Notably, our method achieves these results using primarily synthetic data, without requiring real multi-human images.
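The spatially-grounded face matching reward described above can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation: it assumes detected face centers and L2-normalized identity embeddings (e.g. ArcFace-style vectors) are already available, pairs generated faces with reference faces by minimizing total spatial distance (a brute-force stand-in for the Hungarian algorithm, adequate for the small face counts in multi-human scenes), and scores identity preservation as the mean cosine similarity over the matched pairs.

```python
from itertools import permutations

import numpy as np


def face_matching_reward(pred_centers, ref_centers, pred_embs, ref_embs):
    """Sketch of a spatially-grounded face matching reward (hypothetical).

    pred_centers, ref_centers: (N, 2) arrays of face-box centers.
    pred_embs, ref_embs: (N, D) arrays of L2-normalized identity embeddings.
    Returns the mean cosine similarity over spatially matched face pairs.
    """
    n = len(ref_centers)
    # Spatial alignment: find the assignment of predicted faces to reference
    # faces that minimizes total center-to-center distance. Brute force over
    # permutations stands in for Hungarian matching at small N.
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        cost = sum(
            float(np.linalg.norm(pred_centers[i] - ref_centers[j]))
            for i, j in enumerate(perm)
        )
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    # Identity reward: cosine similarity (dot product of unit vectors)
    # between each matched (predicted, reference) embedding pair.
    sims = [float(np.dot(pred_embs[i], ref_embs[j])) for i, j in enumerate(best_perm)]
    return sum(sims) / n
```

Coupling the assignment to spatial distance, rather than matching on embedding similarity alone, is what lets the reward penalize a face that preserves an identity but appears in the wrong position in the layout.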
Similar Papers
Bringing Your Portrait to 3D Presence
CV and Pattern Recognition
Turns one photo into a moving 3D person.
Adaptive graph Kolmogorov-Arnold network for 3D human pose estimation
CV and Pattern Recognition
Helps computers guess body poses from pictures.
WithAnyone: Towards Controllable and ID Consistent Image Generation
CV and Pattern Recognition
Makes AI create people that look real, not copied.