Cognitive-YOLO: LLM-Driven Architecture Synthesis from First Principles of Data for Object Detection
By: Jiahao Zhao
Potential Business Impact:
AI designs better computer vision systems automatically.
Designing high-performance object detection architectures is difficult: manual design is time-consuming and labor-intensive, while Neural Architecture Search (NAS) is computationally prohibitive. Recent approaches using Large Language Models (LLMs) show promise, but they typically act as iterative optimizers inside a search loop rather than generating architectures directly from a holistic understanding of the data. To address this gap, we propose Cognitive-YOLO, a framework for LLM-driven architecture synthesis that generates network configurations directly from the intrinsic characteristics of the dataset. Our method consists of three stages: first, an analysis module extracts key meta-features (e.g., object scale distribution and scene density) from the target dataset; second, the LLM reasons over these features, augmented with state-of-the-art components retrieved via Retrieval-Augmented Generation (RAG), and synthesizes an architecture expressed in a structured Neural Architecture Description Language (NADL); finally, a compiler instantiates this description as a deployable model. Extensive experiments on five diverse object detection datasets show that Cognitive-YOLO consistently generates strong architectures, achieving highly competitive accuracy and a better performance-per-parameter trade-off than strong baseline models across multiple benchmarks. Crucially, our ablation studies show that the LLM's data-driven reasoning is the primary driver of performance: a deep understanding of the data's "first principles" matters more for producing a superior architecture than simply retrieving SOTA components.
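To make the first stage of the pipeline concrete, the sketch below shows how dataset meta-features such as object scale distribution and scene density could be computed from COCO-style annotations. This is a minimal illustration under stated assumptions, not the authors' implementation; the function name, field choices, and scale thresholds are hypothetical, and the LLM/RAG and compiler stages are only indicated in comments since the paper does not publish them.

```python
# Hypothetical sketch of Cognitive-YOLO's stage-one analysis module:
# summarizing intrinsic dataset characteristics from COCO-style annotations.
import json
from collections import defaultdict
from statistics import mean, median

def extract_meta_features(annotation_path: str) -> dict:
    """Compute dataset meta-features the LLM would reason over."""
    with open(annotation_path) as f:
        coco = json.load(f)

    image_sizes = {img["id"]: (img["width"], img["height"]) for img in coco["images"]}
    objects_per_image = defaultdict(int)
    relative_areas = []  # object bbox area as a fraction of its image area

    for ann in coco["annotations"]:
        w, h = image_sizes[ann["image_id"]]
        relative_areas.append(ann["bbox"][2] * ann["bbox"][3] / (w * h))
        objects_per_image[ann["image_id"]] += 1

    # Bucket object scales; the relative-area thresholds here are illustrative,
    # loosely mirroring the COCO small/medium/large convention.
    small = sum(a < 0.01 for a in relative_areas) / len(relative_areas)
    large = sum(a > 0.10 for a in relative_areas) / len(relative_areas)

    return {
        "num_images": len(coco["images"]),
        "num_classes": len(coco["categories"]),
        "scale_distribution": {
            "small_frac": round(small, 3),
            "medium_frac": round(1 - small - large, 3),
            "large_frac": round(large, 3),
            "median_relative_area": round(median(relative_areas), 4),
        },
        "scene_density": {
            "mean_objects_per_image": round(mean(objects_per_image.values()), 2),
            "max_objects_per_image": max(objects_per_image.values()),
        },
    }

# In the full pipeline, these meta-features would be serialized into the LLM
# prompt together with RAG-retrieved component descriptions, and the returned
# NADL specification would be handed to the compiler stage that builds the
# deployable detector (both omitted here).
```

A summary like this gives the LLM a compact, dataset-specific signal (e.g., a high small_frac might favor higher-resolution feature maps and extra shallow detection heads), which is the kind of data-grounded reasoning the ablation studies credit for most of the performance gain.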
Similar Papers
LLM-Guided Evolution: An Autonomous Model Optimization for Object Detection
Neural and Evolutionary Computing
Makes AI better at finding objects in pictures.
Enhancing Small Object Detection with YOLO: A Novel Framework for Improved Accuracy and Efficiency
CV and Pattern Recognition
Finds tiny things in big sky pictures.
YOLOA: Real-Time Affordance Detection via LLM Adapter
CV and Pattern Recognition
Helps robots understand objects and how to use them.