CaliTex: Geometry-Calibrated Attention for View-Coherent 3D Texture Generation
By: Chenyu Liu, Hongze Chen, Jingzhi Bao, and others
Potential Business Impact:
Makes 3D objects look real from every angle.
Despite major advances brought by diffusion-based models, current 3D texture generation systems remain hindered by cross-view inconsistency: textures that appear convincing from one viewpoint often fail to align across others. We find that this issue arises from attention ambiguity, where unstructured full attention is applied indiscriminately across tokens and modalities, causing geometric confusion and unstable appearance-structure coupling. To address this, we introduce CaliTex, a framework of geometry-calibrated attention that explicitly aligns attention with 3D structure. CaliTex comprises two modules: Part-Aligned Attention, which enforces spatial alignment across semantically matched parts, and Condition-Routed Attention, which routes appearance information through geometry-conditioned pathways to preserve spatial fidelity. Coupled with a two-stage diffusion transformer, CaliTex makes geometric coherence an inherent behavior of the network rather than a byproduct of optimization. Empirically, CaliTex produces seamless, view-consistent textures and outperforms both open-source and commercial baselines.
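The part-aligned idea in the abstract can be illustrated as masked attention, where each token attends only to tokens carrying the same semantic part label across views. The sketch below is a minimal NumPy illustration under assumed shapes and names (`part_ids` and the toy sizes are hypothetical), not the authors' implementation:

```python
import numpy as np

def part_aligned_attention(q, k, v, part_ids):
    """Masked attention sketch: each token attends only to tokens that
    share its semantic part label, blocking cross-part mixing."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                      # (N, N) similarities
    mask = part_ids[:, None] == part_ids[None, :]      # True for same-part pairs
    scores = np.where(mask, scores, -np.inf)           # forbid cross-part attention
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ v

# Toy example: 4 tokens (e.g. from two views), alternating part labels
rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
k = rng.standard_normal((4, 8))
v = rng.standard_normal((4, 8))
out = part_aligned_attention(q, k, v, np.array([0, 1, 0, 1]))
```

Because the mask zeroes attention between different parts, each output row is a convex combination of value vectors from its own part only, which is the alignment constraint the module enforces.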
Similar Papers
A Scalable Attention-Based Approach for Image-to-3D Texture Mapping
CV and Pattern Recognition
Makes 3D objects look real from one picture.
Debiasing Diffusion Priors via 3D Attention for Consistent Gaussian Splatting
CV and Pattern Recognition
Makes 3D pictures look right from all sides.
TEXTRIX: Latent Attribute Grid for Native Texture Generation and Beyond
CV and Pattern Recognition
Makes 3D models look real and helps computers understand them.