Score: 3

DyCAF-Net: Dynamic Class-Aware Fusion Network

Published: August 5, 2025 | arXiv ID: 2508.03598v1

By: Md Abrar Jahin, Shahriar Soudeep, M. F. Mridha, and more

Potential Business Impact:

Helps computers detect objects more accurately, even when the objects are partially hidden or surrounded by clutter.

Recent advancements in object detection rely on modular architectures with multi-scale fusion and attention mechanisms. However, static fusion heuristics and class-agnostic attention limit performance in dynamic scenes with occlusions, clutter, and class imbalance. We introduce the Dynamic Class-Aware Fusion Network (DyCAF-Net), which addresses these challenges through three innovations: (1) an input-conditioned, equilibrium-based neck that iteratively refines multi-scale features via implicit fixed-point modeling, (2) a dual dynamic attention mechanism that adaptively recalibrates channel and spatial responses using input- and class-dependent cues, and (3) class-aware feature adaptation that modulates features to prioritize discriminative regions for rare classes. Through comprehensive ablation studies with YOLOv8 and related architectures, alongside benchmarking against nine state-of-the-art baselines, DyCAF-Net achieves significant improvements in precision, mAP@50, and mAP@50-95 across 13 diverse benchmarks, including occlusion-heavy and long-tailed datasets. The framework maintains computational efficiency ($\sim$11.1M parameters) and competitive inference speeds, while its adaptability to scale variance, semantic overlap, and class imbalance positions it as a robust solution for real-world detection tasks in medical imaging, surveillance, and autonomous systems.
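To make the three components concrete, here is a minimal PyTorch sketch of how an equilibrium-style neck, dual (channel + spatial) dynamic attention, and class-aware feature modulation could fit together. All module names (`EquilibriumNeck`, `DualDynamicAttention`, `ClassAwareAdaptation`), shapes, iteration counts, and gating choices are illustrative assumptions based on the abstract, not the authors' released implementation.

```python
# Illustrative sketch only; names, shapes, and hyperparameters are assumptions,
# not the DyCAF-Net reference implementation.
import torch
import torch.nn as nn


class DualDynamicAttention(nn.Module):
    """Recalibrates channel and spatial responses from input-dependent cues."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)   # channel-wise recalibration
        x = x * self.spatial_gate(x)   # spatial recalibration
        return x


class EquilibriumNeck(nn.Module):
    """Refines features toward a fixed point z* = f(z*, x), here crudely unrolled."""

    def __init__(self, channels: int, iters: int = 5):
        super().__init__()
        self.iters = iters
        self.refine = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        self.attn = DualDynamicAttention(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = torch.zeros_like(x)
        for _ in range(self.iters):  # fixed-point iteration conditioned on the input x
            z = torch.tanh(self.refine(torch.cat([z, x], dim=1)))
            z = self.attn(z)
        return z


class ClassAwareAdaptation(nn.Module):
    """Gates features with a per-class embedding to emphasize rare-class regions."""

    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        self.class_embed = nn.Embedding(num_classes, channels)

    def forward(self, x: torch.Tensor, class_id: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.class_embed(class_id))       # (B, C)
        return x * gate.unsqueeze(-1).unsqueeze(-1)             # broadcast over H, W


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)                 # dummy single-scale feature map
    neck = EquilibriumNeck(channels=64)
    adapt = ClassAwareAdaptation(channels=64, num_classes=80)
    refined = neck(feats)
    out = adapt(refined, torch.tensor([3, 7]))         # illustrative per-image class cue
    print(out.shape)                                   # torch.Size([2, 64, 32, 32])
```

In practice a detector neck would apply this per pyramid level and derive the class cues from predictions rather than a fixed label, and a true equilibrium model would solve for the fixed point implicitly; the unrolled loop above is only a stand-in for that idea.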

Country of Origin
🇲🇾 🇧🇩 🇺🇸 Bangladesh, Malaysia, United States

Repos / Data Links

Page Count
10 pages

Category
Computer Science:
Computer Vision and Pattern Recognition