C-DiffDet+: Fusing Global Scene Context with Generative Denoising for High-Fidelity Object Detection
By: Abdellah Zakaria Sellam, Ilyes Benaissa, Salah Eddine Bekhouche, and more
Potential Business Impact:
Helps computers see tiny damage on cars.
Fine-grained object detection in challenging visual domains, such as vehicle damage assessment, poses a formidable challenge that even human experts struggle to resolve reliably. While DiffusionDet has advanced the state of the art through conditional denoising diffusion, its performance remains limited by local feature conditioning in context-dependent scenarios. We address this fundamental limitation by introducing Context-Aware Fusion (CAF), which leverages cross-attention mechanisms to integrate global scene context directly with local proposal features. The global context is produced by a separate, dedicated encoder that captures comprehensive environmental information, so that each object proposal can attend to scene-level understanding. This significantly enhances the generative detection paradigm. Experimental results demonstrate an improvement over state-of-the-art models on the CarDD benchmark, setting a new performance standard for context-aware object detection in fine-grained domains.
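For intuition, the sketch below shows one way a cross-attention fusion step like CAF could be wired up in PyTorch, with proposal features as queries attending to tokens from a global context encoder. The module name ContextAwareFusion, the 256-dimensional features, and the token shapes are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of cross-attention fusion between proposal features and
# global scene context. All names and shapes here are assumptions for
# illustration, not the paper's code.
import torch
import torch.nn as nn

class ContextAwareFusion(nn.Module):
    """Fuses per-proposal features with global scene context via cross-attention."""

    def __init__(self, d_model: int = 256, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, proposal_feats: torch.Tensor, scene_context: torch.Tensor) -> torch.Tensor:
        # proposal_feats: (B, N, d_model) -- one feature vector per object proposal (queries)
        # scene_context:  (B, S, d_model) -- tokens from a dedicated global context encoder (keys/values)
        attended, _ = self.cross_attn(query=proposal_feats,
                                      key=scene_context,
                                      value=scene_context)
        # Residual connection keeps the original local proposal evidence intact.
        return self.norm(proposal_feats + attended)

if __name__ == "__main__":
    fusion = ContextAwareFusion()
    proposals = torch.randn(2, 100, 256)     # e.g., 100 noisy box proposals per image
    context = torch.randn(2, 49, 256)        # e.g., a 7x7 grid of global-context tokens
    print(fusion(proposals, context).shape)  # torch.Size([2, 100, 256])
```

Using the proposals as queries (rather than the scene tokens) reflects the idea described above: each proposal selectively pulls in the scene-level information it needs, while the residual path preserves its local features.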
Similar Papers
FlowDet: Unifying Object Detection and Generative Transport Flows
CV and Pattern Recognition
Finds objects in pictures much faster.
Accelerated Multi-Modal Motion Planning Using Context-Conditioned Diffusion Models
Robotics
Robots learn new paths without retraining.
Denoised Diffusion for Object-Focused Image Augmentation
CV and Pattern Recognition
Helps drones find sick animals even when hidden.