GeoDiT: A Diffusion-based Vision-Language Model for Geospatial Understanding
By: Jiaqi Liu, Ronghao Fu, Haoran Liu, and more
Potential Business Impact:
Helps computers find and describe objects in satellite images.
Autoregressive models are structurally misaligned with the inherently parallel nature of geospatial understanding: they force a rigid sequential narrative onto scenes, which fundamentally hinders the generation of structured, coherent outputs. We challenge this paradigm by reframing geospatial generation as a parallel refinement process, enabling holistic, coarse-to-fine synthesis that resolves all semantic elements simultaneously. To operationalize this, we introduce GeoDiT, the first diffusion-based vision-language model tailored to the geospatial domain. Extensive experiments demonstrate that GeoDiT establishes a new state of the art on benchmarks requiring structured, object-centric outputs, achieving significant gains in image captioning, visual grounding, and multi-object detection, precisely the tasks where autoregressive models falter. Our work validates that aligning the generative process with the data's intrinsic structure is key to unlocking superior performance in complex geospatial analysis.
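The abstract's core idea, decoding all positions in parallel and refining coarse-to-fine rather than left-to-right, can be illustrated with a masked-diffusion-style sampling loop. The sketch below is a minimal illustration under stated assumptions, not GeoDiT's actual sampler: the `MASK` token id, the cosine unmasking schedule, and the `toy_score_fn` stand-in for the diffusion transformer are all hypothetical, chosen only to show the mechanics.

```python
import numpy as np

MASK = -1  # hypothetical mask-token id (assumption; not from the paper)

def parallel_refinement_decode(score_fn, seq_len, num_steps=8):
    """Coarse-to-fine parallel decoding: start from an all-masked sequence,
    score every position simultaneously, commit the most confident
    predictions, and re-mask the rest until all tokens are resolved."""
    tokens = np.full(seq_len, MASK, dtype=np.int64)
    for step in range(num_steps):
        logits = score_fn(tokens)                      # (seq_len, vocab) in one pass
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)
        pred = probs.argmax(-1)                        # best token per position
        conf = probs.max(-1)                           # its probability
        conf[tokens != MASK] = np.inf                  # committed tokens never re-masked
        # Cosine schedule (assumed): many positions stay masked early, none at the end.
        n_masked = int(seq_len * np.cos(np.pi / 2 * (step + 1) / num_steps))
        tokens = np.where(tokens == MASK, pred, tokens)
        tokens[np.argsort(conf)[:n_masked]] = MASK     # re-mask least confident positions
    return tokens

# Toy stand-in for the diffusion transformer; a real model would condition on the image.
def toy_score_fn(tokens, vocab_size=16, rng=np.random.default_rng(0)):
    return rng.standard_normal((len(tokens), vocab_size))

print(parallel_refinement_decode(toy_score_fn, seq_len=12))
```

For contrast, an autoregressive decoder would call the model once per token, committing them one at a time in a fixed left-to-right order; the loop above makes a small, fixed number of calls regardless of sequence length, which is what lets spatially distant objects in a scene be resolved jointly.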
Similar Papers
From Sequential to Spatial: Reordering Autoregression for Efficient Visual Generation
CV and Pattern Recognition
Makes pictures faster by drawing in rings.
GeoDiff-SAR: A Geometric Prior Guided Diffusion Model for SAR Image Generation
Image and Video Processing
Makes radar pictures of things from any angle.
Generative Pre-trained Autoregressive Diffusion Transformer
CV and Pattern Recognition
Makes computers create realistic, moving videos.