Directional Textual Inversion for Personalized Text-to-Image Generation
By: Kunhee Kim, NaHyeon Park, Kibeom Hong, and more
Potential Business Impact:
Makes AI images match your words better.
Textual Inversion (TI) is an efficient approach to text-to-image personalization but often fails on complex prompts. We trace these failures to embedding norm inflation: learned tokens drift to out-of-distribution magnitudes, degrading prompt conditioning in pre-norm Transformers. Empirically, we show semantics are primarily encoded by direction in CLIP token space, while inflated norms harm contextualization; theoretically, we analyze how large magnitudes attenuate positional information and hinder residual updates in pre-norm blocks. We propose Directional Textual Inversion (DTI), which fixes the embedding magnitude to an in-distribution scale and optimizes only direction on the unit hypersphere via Riemannian SGD. We cast direction learning as maximum a posteriori (MAP) estimation with a von Mises-Fisher prior, yielding a constant-direction prior gradient that is simple and efficient to incorporate. Across personalization tasks, DTI improves text fidelity over TI and TI variants while maintaining subject similarity. Crucially, DTI's hyperspherical parameterization enables smooth, semantically coherent interpolation between learned concepts (via slerp), a capability absent from standard TI. Our findings suggest that direction-only optimization is a robust and scalable path for prompt-faithful personalization.
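To make the recipe concrete, here is a minimal PyTorch sketch of one direction-only update as the abstract describes it: the Euclidean gradient picks up the constant vMF prior term, is projected onto the tangent space of the unit hypersphere, and the direction is retracted back onto the sphere by renormalizing. The function name, learning rate, concentration kappa, fixed radius, and choice of prior mean are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def riemannian_sgd_step(u, euclid_grad, mu, lr=1e-2, kappa=0.1):
    """One direction-only step on the unit hypersphere (sketch of DTI's update).

    u           : (d,) current unit-norm direction of the learned token
    euclid_grad : (d,) Euclidean gradient of the diffusion loss w.r.t. u
    mu          : (d,) unit-norm vMF prior mean (assumed here to be the
                  initializer token's direction)
    kappa       : vMF concentration (hypothetical value)
    """
    # MAP objective = diffusion loss minus the vMF log-prior kappa * (mu . u),
    # so the prior contributes the constant vector -kappa * mu to the gradient.
    g = euclid_grad - kappa * mu
    # Project onto the tangent space at u: remove the radial component.
    g_tan = g - (g @ u) * u
    # Retract back onto the sphere by renormalizing (a standard retraction).
    return F.normalize(u - lr * g_tan, dim=-1)

# The conditioning embedding keeps a fixed, in-distribution magnitude:
d = 768                      # CLIP text-token dimension in SD 1.x
u = F.normalize(torch.randn(d), dim=-1)
radius = 0.38                # placeholder; the paper fixes an in-distribution scale
embedding = radius * u       # e = r * u is what the text encoder consumes
```

Only the direction u carries gradients; the radius stays frozen, which is what keeps the learned token at an in-distribution norm throughout training.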
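Because each learned concept is just a point on the unit hypersphere, interpolating two concepts reduces to moving along the great circle between their directions. The sketch below is the textbook slerp formula, not code from the paper:

```python
import torch
import torch.nn.functional as F

def slerp(u0, u1, t):
    """Spherical linear interpolation between unit vectors u0 and u1 at t in [0, 1].

    The result stays on the sphere, so rescaling by the fixed radius yields a
    valid DTI embedding for every t.
    """
    dot = torch.clamp(u0 @ u1, -1.0, 1.0)
    omega = torch.acos(dot)                 # angle between the two directions
    if omega.abs() < 1e-6:                  # nearly parallel: lerp is numerically safer
        return F.normalize((1 - t) * u0 + t * u1, dim=-1)
    so = torch.sin(omega)
    return (torch.sin((1 - t) * omega) / so) * u0 + (torch.sin(t * omega) / so) * u1
```

Standard TI embeddings with inflated, mismatched norms have no comparable interpolation path, which is the capability gap the abstract highlights.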
Similar Papers
Textual Inversion for Efficient Adaptation of Open-Vocabulary Object Detectors Without Forgetting
CV and Pattern Recognition
Teaches computers to find new things in pictures.
EDITOR: Effective and Interpretable Prompt Inversion for Text-to-Image Diffusion Models
CV and Pattern Recognition
Finds the words that made a picture.
Align 3D Representation and Text Embedding for 3D Content Personalization
CV and Pattern Recognition
Changes 3D objects using words, no retraining.