Omni-Attribute: Open-vocabulary Attribute Encoder for Visual Concept Personalization
By: Tsai-Shien Chen, Aliaksandr Siarohin, Guocheng Gordon Qian, and more
Potential Business Impact:
Lets computers change just one thing about a picture, like its lighting or style, without touching the rest.
Visual concept personalization aims to transfer only specific image attributes, such as identity, expression, lighting, and style, into unseen contexts. However, existing methods rely on holistic embeddings from general-purpose image encoders, which entangle multiple visual factors and make it difficult to isolate a single attribute. This often leads to information leakage and incoherent synthesis. To address this limitation, we introduce Omni-Attribute, the first open-vocabulary image attribute encoder designed to learn high-fidelity, attribute-specific representations. Our approach jointly designs the data and model: (i) we curate semantically linked image pairs annotated with positive and negative attributes to explicitly teach the encoder what to preserve or suppress; and (ii) we adopt a dual-objective training paradigm that balances generative fidelity with contrastive disentanglement. The resulting embeddings prove effective for open-vocabulary attribute retrieval, personalization, and compositional generation, achieving state-of-the-art performance across multiple benchmarks.
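To make the dual-objective training idea concrete, here is a minimal PyTorch-style sketch of how a generative-fidelity term might be balanced against a contrastive-disentanglement term computed over positive/negative attribute pairs. Everything in it is an illustrative assumption rather than the paper's actual method: the AttributeEncoder module, the toy reconstruction head standing in for the real generative model, and all dimensions, temperatures, and loss weights are hypothetical.

```python
# Minimal sketch of a dual-objective (generative + contrastive) training step.
# All module names, dimensions, and hyperparameters are illustrative
# assumptions, not taken from the Omni-Attribute paper.
import torch
import torch.nn.functional as F
from torch import nn

class AttributeEncoder(nn.Module):
    """Toy stand-in for an open-vocabulary attribute encoder: maps an image
    feature plus an attribute-prompt embedding (e.g., the text embedding of
    "lighting") to an attribute-specific embedding."""
    def __init__(self, img_dim=512, txt_dim=512, emb_dim=256):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(img_dim + txt_dim, 512),
            nn.GELU(),
            nn.Linear(512, emb_dim),
        )

    def forward(self, img_feat, attr_feat):
        z = self.proj(torch.cat([img_feat, attr_feat], dim=-1))
        return F.normalize(z, dim=-1)

def contrastive_disentanglement_loss(anchor, positive, negatives, temperature=0.07):
    """InfoNCE-style loss: pull together embeddings of image pairs that share
    the target attribute, push apart pairs that differ in that attribute."""
    pos_logit = (anchor * positive).sum(-1, keepdim=True) / temperature  # (B, 1)
    neg_logits = anchor @ negatives.T / temperature                      # (B, B)
    logits = torch.cat([pos_logit, neg_logits], dim=1)
    labels = torch.zeros(anchor.size(0), dtype=torch.long)  # positive at index 0
    return F.cross_entropy(logits, labels)

# Dummy batch: precomputed features for semantically linked image pairs,
# annotated with a shared positive attribute and a differing negative one.
B, D = 8, 512
encoder = AttributeEncoder()
img_a, img_b, img_neg = torch.randn(B, D), torch.randn(B, D), torch.randn(B, D)
attr_prompt = torch.randn(B, D)  # stand-in for an attribute text embedding

z_a = encoder(img_a, attr_prompt)
z_b = encoder(img_b, attr_prompt)   # shares the attribute with img_a
z_n = encoder(img_neg, attr_prompt) # differs in the attribute

# Dual objective: a generative-fidelity term (here a toy reconstruction head
# standing in for the real generative model) plus the contrastive term.
recon = nn.Linear(256, D)
gen_loss = F.mse_loss(recon(z_a), img_a)  # placeholder for the generative loss
con_loss = contrastive_disentanglement_loss(z_a, z_b, z_n)
loss = gen_loss + 0.1 * con_loss          # relative weighting is an assumption
loss.backward()
print(f"gen={gen_loss.item():.3f} con={con_loss.item():.3f}")
```

The intuition behind the two terms: the generative loss alone would let the embedding keep everything about the image (re-entangling attributes), while the contrastive loss alone would discard detail needed for faithful synthesis; balancing them is what the abstract calls trading off generative fidelity against contrastive disentanglement.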
Similar Papers
Towards Open-Vocabulary Multimodal 3D Object Detection with Attributes
CV and Pattern Recognition
Helps cars see and describe new things.
Open-Attribute Recognition for Person Retrieval: Finding People Through Distinctive and Novel Attributes
CV and Pattern Recognition
Helps find people even with new descriptions.
Per-Query Visual Concept Learning
CV and Pattern Recognition
Teaches computers to draw your specific ideas.