Real-Time On-the-Go Annotation Framework Using YOLO for Automated Dataset Generation
By: Mohamed Abdallah Salem, Ahmed Harb Rabia
Potential Business Impact:
Labels farm images instantly as they are captured, removing the need for manual annotation afterward.
Efficient and accurate annotation of datasets remains a significant challenge for deploying object detection models such as You Only Look Once (YOLO) in real-world applications, particularly in agriculture where rapid decision-making is critical. Traditional annotation techniques are labor-intensive, requiring extensive manual labeling after data collection. This paper presents a novel real-time annotation approach that leverages YOLO models deployed on edge devices, enabling immediate labeling during image capture. To comprehensively evaluate the efficiency and accuracy of the proposed system, we conducted an extensive comparative analysis using three prominent YOLO architectures (YOLOv5, YOLOv8, YOLOv12) under various configurations: single-class versus multi-class annotation, and pretrained versus scratch-based training. Our analysis includes detailed statistical tests and learning dynamics, demonstrating significant advantages of pretrained and single-class configurations in terms of model convergence, performance, and robustness. The results strongly validate the feasibility and effectiveness of our real-time annotation framework, highlighting its capability to drastically reduce dataset preparation time while maintaining high annotation quality.
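The abstract describes labels being generated at capture time by a YOLO model running on the device, rather than in a separate post-hoc annotation pass. A minimal sketch of that idea, assuming the Ultralytics YOLO API and OpenCV for capture (the weights file, camera index, and confidence threshold are illustrative assumptions, not values from the paper):

```python
# Sketch: on-the-go annotation. Run a YOLO model on each captured frame and
# save the image alongside a YOLO-format label file immediately.
# Assumes the Ultralytics package and OpenCV; weights file, camera index,
# and confidence threshold are illustrative assumptions.
from pathlib import Path

import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")              # pretrained weights (assumed filename)
out_dir = Path("dataset")
(out_dir / "images").mkdir(parents=True, exist_ok=True)
(out_dir / "labels").mkdir(parents=True, exist_ok=True)

cap = cv2.VideoCapture(0)               # edge-device camera (assumed index)
frame_id = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Inference on the freshly captured frame; confidence threshold assumed.
    result = model(frame, conf=0.25, verbose=False)[0]
    lines = []
    for box in result.boxes:
        cls_id = int(box.cls.item())
        # xywhn gives normalized (x_center, y_center, width, height),
        # which matches the YOLO label format directly.
        x, y, w, h = box.xywhn[0].tolist()
        lines.append(f"{cls_id} {x:.6f} {y:.6f} {w:.6f} {h:.6f}")
    # Persist the image and its annotation at capture time.
    cv2.imwrite(str(out_dir / "images" / f"{frame_id:06d}.jpg"), frame)
    (out_dir / "labels" / f"{frame_id:06d}.txt").write_text("\n".join(lines))
    frame_id += 1
cap.release()
```

In this sketch, each captured frame yields an image/label pair in standard YOLO format, so the resulting dataset can be used immediately for training or fine-tuning without a separate labeling stage.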