Score: 1

How Bias Binds: Measuring Hidden Associations for Bias Control in Text-to-Image Compositions

Published: November 10, 2025 | arXiv ID: 2511.07091v1

By: Jeng-Lin Li, Ming-Ching Chang, Wei-Chao Chen

Potential Business Impact:

Makes AI-generated images fairer by correcting biased word associations in prompts.

Business Areas:
Semantic Search; Internet Services

Text-to-image generative models often exhibit bias related to sensitive attributes. However, current research tends to focus narrowly on single-object prompts with limited contextual diversity. In reality, each object or attribute within a prompt can contribute to bias. For example, the prompt "an assistant wearing a pink hat" may reflect female-inclined biases associated with a pink hat. The neglected joint effects of semantic binding in prompts cause significant failures in current debiasing approaches. This work initiates a preliminary investigation into how bias manifests under semantic binding, where contextual associations between objects and attributes influence generative outcomes. We demonstrate that the underlying bias distribution can be amplified by these associations. Therefore, we introduce a bias adherence score that quantifies how specific object-attribute bindings activate bias. To delve deeper, we develop a training-free context-bias control framework to explore how token decoupling can facilitate the debiasing of semantic bindings. This framework achieves over 10% debiasing improvement in compositional generation tasks. Our analysis of bias scores across various attribute-object bindings and token decorrelation highlights a fundamental challenge: reducing bias without disrupting essential semantic relationships. These findings expose critical limitations in current debiasing approaches when applied to semantically bound contexts, underscoring the need to reassess prevailing bias mitigation strategies.
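The abstract does not give the exact formula for the bias adherence score, but the idea of quantifying how much an object-attribute binding (e.g., "pink hat") shifts a sensitive-attribute distribution can be illustrated with a minimal sketch. The sketch below is an assumption-based example, not the paper's definition: it measures the total-variation shift between attribute distributions of images generated from a base prompt and from the bound prompt, using a hypothetical `classify_attribute` function.

```python
# Illustrative sketch (NOT the paper's definition): a bias-adherence-style score
# measured as the shift in a sensitive-attribute distribution when an
# object-attribute binding (e.g., "wearing a pink hat") is added to a base prompt.
# `classify_attribute` is a hypothetical classifier over generated images.

from collections import Counter
from typing import Callable, Iterable, Dict, Any


def attribute_distribution(images: Iterable[Any],
                           classify_attribute: Callable[[Any], str]) -> Dict[str, float]:
    """Empirical distribution of a sensitive attribute (e.g., perceived gender)."""
    counts = Counter(classify_attribute(img) for img in images)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}


def bias_adherence_score(base_images: Iterable[Any],
                         bound_images: Iterable[Any],
                         classify_attribute: Callable[[Any], str]) -> float:
    """Total-variation distance between the attribute distribution of a base
    prompt (e.g., "an assistant") and a bound prompt (e.g., "an assistant
    wearing a pink hat"); higher values mean the binding activates more bias."""
    p = attribute_distribution(base_images, classify_attribute)
    q = attribute_distribution(bound_images, classify_attribute)
    labels = set(p) | set(q)
    return 0.5 * sum(abs(p.get(label, 0.0) - q.get(label, 0.0)) for label in labels)
```

In this reading, a score near 0 means the added binding barely changes the sensitive-attribute distribution, while a score near 1 means the binding almost fully determines it; the paper's actual score and its token-decoupling control framework are described in the full text.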

Country of Origin
🇺🇸 United States

Page Count
9 pages

Category
Computer Science:
CV and Pattern Recognition