Data Augmentation Through Random Style Replacement
By: Qikai Yang, Cheng Ji, Huaiying Luo, and more
Potential Business Impact:
Makes training images more varied so computer vision models learn better.
In this paper, we introduce a novel data augmentation technique that combines the advantages of style augmentation and random erasing by selectively replacing image subregions with style-transferred patches. Our approach first applies a random style transfer to training images, then randomly substitutes selected areas of these images with patches taken from the style-transferred versions. This method seamlessly accommodates a wide range of existing style transfer algorithms and can be readily integrated into diverse data augmentation pipelines. Incorporating our strategy makes the training process more robust and less prone to overfitting. Comparative experiments demonstrate that, relative to previous style augmentation methods, our technique achieves superior performance and faster convergence.
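The core idea described above can be sketched in a few lines: stylize a copy of the image, then paste a randomly sized, randomly placed patch from the stylized copy back onto the original. The sketch below is an illustration, not the authors' implementation; `toy_style_transfer` is a hypothetical stand-in for any real style transfer algorithm, and the patch-sampling details (area fraction range, square patches) are assumptions.

```python
import numpy as np

def toy_style_transfer(image, rng):
    # Hypothetical placeholder for a real style transfer model:
    # a random per-channel color shift stands in for a stylized copy.
    scale = rng.uniform(0.5, 1.5, size=(1, 1, image.shape[2]))
    return np.clip(image.astype(np.float64) * scale, 0, 255).astype(np.uint8)

def random_style_replacement(image, rng, area_frac=(0.1, 0.4)):
    """Replace one random subregion of `image` with the matching
    patch from a style-transferred copy (sketch of the technique)."""
    stylized = toy_style_transfer(image, rng)
    h, w = image.shape[:2]
    # Choose a patch covering a random fraction of the image area.
    side = np.sqrt(rng.uniform(*area_frac))
    ph, pw = max(1, int(h * side)), max(1, int(w * side))
    y = rng.integers(0, h - ph + 1)
    x = rng.integers(0, w - pw + 1)
    out = image.copy()
    out[y:y + ph, x:x + pw] = stylized[y:y + ph, x:x + pw]
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
aug = random_style_replacement(img, rng)
```

In a training pipeline this would run per sample alongside standard augmentations (flips, crops), so each epoch sees a different stylized patch in a different location.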
Similar Papers
Stylized Synthetic Augmentation further improves Corruption Robustness
CV and Pattern Recognition
Keeps image recognition working even when pictures are blurry or corrupted.
When Dynamic Data Selection Meets Data Augmentation
Machine Learning (CS)
Trains computers faster with less data.
A Training-Free Style-aligned Image Generation with Scale-wise Autoregressive Model
CV and Pattern Recognition
Makes AI pictures match the style you want.