SDQM: Synthetic Data Quality Metric for Object Detection Dataset Evaluation
By: Ayush Zenith, Arnold Zumbrun, Neel Raut, and more
Potential Business Impact:
Scores fake pictures so computers can learn better.
The performance of machine learning models depends heavily on training data. The scarcity of large-scale, well-annotated datasets poses significant challenges in creating robust models. To address this, synthetic data generated through simulations and generative models has emerged as a promising solution, enhancing dataset diversity and improving the performance, reliability, and resilience of models. However, evaluating the quality of this generated data requires an effective metric. This paper introduces the Synthetic Dataset Quality Metric (SDQM) to assess data quality for object detection tasks without requiring model training to converge. This metric enables more efficient generation and selection of synthetic datasets, addressing a key challenge in resource-constrained object detection tasks. In our experiments, SDQM demonstrated a strong correlation with the mean Average Precision (mAP) scores of YOLOv11, a leading object detection model, while previous metrics only exhibited moderate or weak correlations. Additionally, it provides actionable insights for improving dataset quality, minimizing the need for costly iterative training. This scalable and efficient metric sets a new standard for evaluating synthetic data. The code for SDQM is available at https://github.com/ayushzenith/SDQM.
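The abstract's central claim is that SDQM, computed before training, correlates strongly with the mAP a detector eventually reaches. A minimal sketch of how such a correlation check might look is below; it is not the authors' code, and the per-dataset SDQM and mAP numbers are made-up placeholders used only to illustrate the computation.

```python
# Illustrative sketch (not from the SDQM repository): checking whether a
# dataset-quality metric tracks downstream detector performance via the
# Pearson correlation coefficient.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical values: SDQM scored on each synthetic dataset before training,
# vs. the mAP a YOLO-style detector reached after full training on it.
sdqm_scores = [0.42, 0.55, 0.61, 0.70, 0.83]
map_scores  = [0.31, 0.40, 0.44, 0.52, 0.66]

r = pearson_r(sdqm_scores, map_scores)
print(f"Pearson r = {r:.3f}")  # r near 1 means the metric predicts performance
```

A correlation near 1 is what lets a metric like SDQM stand in for expensive iterative training when choosing among candidate synthetic datasets.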
Similar Papers
CGVQM+D: Computer Graphics Video Quality Metric and Dataset
Graphics
Measures how real computer-generated videos look.
SynQuE: Estimating Synthetic Dataset Quality Without Annotations
Machine Learning (CS)
Chooses the best fake data for computers to learn from.
Domain Randomization for Object Detection in Manufacturing Applications using Synthetic Data: A Comprehensive Study
CV and Pattern Recognition
Teaches robots to see and grab parts.