Score: 1

Directional Textual Inversion for Personalized Text-to-Image Generation

Published: December 15, 2025 | arXiv ID: 2512.13672v1

By: Kunhee Kim, NaHyeon Park, Kibeom Hong, and more

Potential Business Impact:

Makes personalized AI-generated images follow your text prompts more faithfully.

Business Areas:
Semantic Search, Internet Services

Textual Inversion (TI) is an efficient approach to text-to-image personalization but often fails on complex prompts. We trace these failures to embedding norm inflation: learned tokens drift to out-of-distribution magnitudes, degrading prompt conditioning in pre-norm Transformers. Empirically, we show semantics are primarily encoded by direction in CLIP token space, while inflated norms harm contextualization; theoretically, we analyze how large magnitudes attenuate positional information and hinder residual updates in pre-norm blocks. We propose Directional Textual Inversion (DTI), which fixes the embedding magnitude to an in-distribution scale and optimizes only direction on the unit hypersphere via Riemannian SGD. We cast direction learning as MAP with a von Mises-Fisher prior, yielding a constant-direction prior gradient that is simple and efficient to incorporate. Across personalization tasks, DTI improves text fidelity over TI and TI-variants while maintaining subject similarity. Crucially, DTI's hyperspherical parameterization enables smooth, semantically coherent interpolation between learned concepts (slerp), a capability that is absent in standard TI. Our findings suggest that direction-only optimization is a robust and scalable path for prompt-faithful personalization.
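The abstract describes three mechanisms: fixing the embedding norm while optimizing only its direction with Riemannian SGD on the unit hypersphere, a von Mises-Fisher (vMF) prior whose gradient reduces to a constant direction term, and slerp between learned concept directions. The sketch below illustrates those pieces as a toy example; it is not the authors' code. The placeholder diffusion_loss, the fixed norm rho, the concentration kappa, the learning rate, and the 768-dimensional token size are illustrative assumptions.

```python
# Hedged sketch (assumptions noted above): direction-only optimization of a
# personalized token embedding on the unit hypersphere, with a vMF prior
# and slerp interpolation between learned directions.
import torch

def riemannian_sgd_step(u, grad, lr):
    """One Riemannian SGD step on the unit sphere: project the Euclidean
    gradient onto the tangent space at u, step, then retract by renormalizing."""
    tangent_grad = grad - (grad @ u) * u          # remove the radial component
    u_new = u - lr * tangent_grad                 # gradient step in the tangent space
    return u_new / u_new.norm()                   # retraction back to the unit sphere

def vmf_prior_grad(mu, kappa):
    """Gradient of the negative log vMF prior, -kappa * mu; constant in u
    (up to tangent projection), matching the 'constant-direction prior gradient'."""
    return -kappa * mu

def slerp(u0, u1, t):
    """Spherical linear interpolation between two unit directions (assumes
    the directions are not parallel, so sin(omega) != 0)."""
    omega = torch.acos(torch.clamp(u0 @ u1, -1.0, 1.0))
    return (torch.sin((1 - t) * omega) * u0 + torch.sin(t * omega) * u1) / torch.sin(omega)

def diffusion_loss(embedding):                    # placeholder for the real objective
    return (embedding ** 2).sum() * 0.0

# Toy training loop: rho is the fixed in-distribution embedding norm,
# mu is the prior mean direction (here, the initial token direction).
d = 768                                           # CLIP token dimension (assumption)
u = torch.randn(d); u = u / u.norm()              # learned direction on the unit sphere
mu = u.clone()                                    # vMF prior mean direction
rho, kappa, lr = 27.0, 1.0, 1e-2                  # illustrative values only

for _ in range(10):
    u.requires_grad_(True)
    loss = diffusion_loss(rho * u)                # magnitude fixed, direction learned
    (grad,) = torch.autograd.grad(loss, u)
    grad = grad + vmf_prior_grad(mu, kappa)       # MAP: likelihood + prior gradients
    with torch.no_grad():
        u = riemannian_sgd_step(u.detach(), grad, lr)

# Slerp between two learned concept directions (random stand-ins here).
v0 = torch.randn(d); v0 = v0 / v0.norm()
v1 = torch.randn(d); v1 = v1 / v1.norm()
midpoint = slerp(v0, v1, 0.5)                     # interpolant stays on the unit sphere
```

The renormalization step plays the role of the retraction in Riemannian SGD, which is what keeps the learned token at an in-distribution magnitude throughout training; the slerp helper shows why the hyperspherical parameterization admits the smooth concept interpolation the abstract highlights.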

Country of Origin
🇰🇷 Korea, Republic of

Repos / Data Links

Page Count
29 pages

Category
Computer Science:
Machine Learning (CS)