Data Factory with Minimal Human Effort Using VLMs
By: Jiaojiao Ye, Jiaxing Zhong, Qian Xie, and more
Potential Business Impact:
Automatically creates realistic, pixel-labelled training images from text, cutting the cost of manual data collection and annotation.
Generating sufficient, diverse data through augmentation offers an efficient alternative to the time-consuming and labour-intensive process of collecting and annotating pixel-wise labelled images. Traditional data augmentation techniques struggle to manipulate high-level semantic attributes such as materials and textures. Diffusion models offer a robust alternative by effectively leveraging text-to-image or image-to-image transformation; however, existing diffusion-based methods are either computationally expensive or compromise on performance. To address this, we introduce a novel training-free pipeline that integrates pretrained ControlNet and Vision-Language Models (VLMs) to generate synthetic images paired with pixel-level labels. This approach eliminates the need for manual annotation and significantly improves downstream tasks. To improve fidelity and diversity, we add a Multi-way Prompt Generator, a Mask Generator, and a High-quality Image Selection module. Our results on PASCAL-5i and COCO-20i show promising performance and outperform concurrent work on one-shot semantic segmentation.
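The pipeline described above can be sketched as a simple generate-score-select loop. This is a minimal illustration only: every function below is a hypothetical stub standing in for the real components (a VLM would implement the Multi-way Prompt Generator and the quality scorer, and a pretrained ControlNet would synthesize each image/mask pair); none of the names come from the paper's released code.

```python
import random

def multiway_prompt_generator(class_name, n_prompts=3):
    """Stub for the Multi-way Prompt Generator: a VLM would produce
    diverse prompts varying semantic attributes like material/texture."""
    styles = ["photorealistic", "wooden", "metallic"]
    return [f"a {s} {class_name}" for s in styles[:n_prompts]]

def generate_image_and_mask(prompt, seed):
    """Stub for ControlNet + Mask Generator: returns a tiny fake
    'image' (4x4 floats) and a binary pixel-level mask for it."""
    rng = random.Random(seed)
    image = [[rng.random() for _ in range(4)] for _ in range(4)]
    mask = [[int(v > 0.5) for v in row] for row in image]
    return image, mask

def quality_score(image):
    """Stub for High-quality Image Selection: a VLM/CLIP-style scorer
    would rate fidelity; here we just average the fake pixels."""
    return sum(sum(row) for row in image) / 16.0

def data_factory(class_name, keep_top_k=2):
    """Training-free loop: generate candidates per class, score them,
    and keep only the highest-quality image/mask pairs."""
    candidates = []
    for i, prompt in enumerate(multiway_prompt_generator(class_name)):
        image, mask = generate_image_and_mask(prompt, seed=i)
        candidates.append((quality_score(image), prompt, image, mask))
    candidates.sort(key=lambda c: c[0], reverse=True)
    return candidates[:keep_top_k]

pairs = data_factory("chair")
print(len(pairs))  # top-2 synthetic image/mask pairs for "chair"
```

In the actual system the selection step filters out low-fidelity generations before they reach the downstream segmentation model, which is what lets the pipeline stay training-free while still improving one-shot performance.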