LINA: Learning INterventions Adaptively for Physical Alignment and Generalization in Diffusion Models
By: Shu Yu, Chaochao Lu
Potential Business Impact:
Makes AI draw pictures that make real-world sense.
Diffusion models (DMs) have achieved remarkable success in image and video generation. However, they still struggle with (1) physical alignment and (2) out-of-distribution (OOD) instruction following. We argue that these issues stem from the models' failure to learn causal directions and to disentangle causal factors for novel recombination. To enable diagnostic interventions, we introduce the Causal Scene Graph (CSG) and the Physical Alignment Probe (PAP) dataset. This analysis yields three key insights. First, DMs struggle with multi-hop reasoning for elements not explicitly determined in the prompt. Second, the prompt embedding contains disentangled representations of texture and physics. Third, visual causal structure is disproportionately established during the initial, computationally limited denoising steps. Based on these findings, we introduce LINA (Learning INterventions Adaptively), a novel framework that learns to predict prompt-specific interventions, employing (1) targeted guidance in the prompt and visual latent spaces and (2) a reallocated, causality-aware denoising schedule. Our approach enforces both physical alignment and OOD instruction following in image and video DMs, achieving state-of-the-art performance on challenging causal generation tasks and the Winoground dataset. Our project page is at https://opencausalab.github.io/LINA.
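The abstract does not give the exact form of the reallocated schedule, so the following is a minimal Python sketch, under our own assumptions, of the general idea it describes: concentrating sampling steps in the initial, high-noise phase of the reverse process, where the paper reports visual causal structure is established. The function name `causality_aware_schedule` and the power-law warp controlled by `front_load` are illustrative choices, not the paper's method.

```python
import numpy as np

def causality_aware_schedule(num_steps: int, total_timesteps: int = 1000,
                             front_load: float = 2.0) -> np.ndarray:
    """Hypothetical front-loaded timestep schedule (illustrative only).

    Skews sampling-step spacing toward high noise levels, i.e., the
    early reverse-process steps where, per the abstract, visual causal
    structure is disproportionately established. front_load > 1
    increases the skew; front_load = 1 recovers uniform spacing.
    """
    # Uniform positions in [0, 1], warped so samples cluster near 0.
    u = np.linspace(0.0, 1.0, num_steps)
    warped = u ** front_load
    # Map to descending integer timesteps: dense near t = T - 1
    # (high noise), sparse near t = 0 (low noise).
    return ((1.0 - warped) * (total_timesteps - 1)).round().astype(int)

# Example: 20 sampling steps biased toward the high-noise phase.
print(causality_aware_schedule(20))
```

A schedule like this could be passed to any sampler that accepts explicit timesteps; LINA itself goes further by learning prompt-specific interventions rather than applying a fixed warp.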
Similar Papers
Beyond the Noise: Aligning Prompts with Latent Representations in Diffusion Models
CV and Pattern Recognition
Finds bad AI pictures while they're still being made.
Learning to Look: Cognitive Attention Alignment with Vision-Language Models
CV and Pattern Recognition
Teaches computers to see like humans.
ProPhy: Progressive Physical Alignment for Dynamic World Simulation
CV and Pattern Recognition
Makes computer videos follow real-world physics.