Data Augmentation Through Random Style Replacement
By: Qikai Yang, Cheng Ji, Huaiying Luo, and others
Potential Business Impact:
Makes image training data more varied, so vision models learn better and overfit less.
In this paper, we introduce a novel data augmentation technique that combines the advantages of style augmentation and random erasing by selectively replacing image subregions with style-transferred patches. Our approach first applies a random style transfer to training images, then randomly substitutes selected areas of these images with patches drawn from the style-transferred versions. This method seamlessly accommodates a wide range of existing style transfer algorithms and can be readily integrated into diverse data augmentation pipelines. By incorporating our strategy, the training process becomes more robust and less prone to overfitting. Comparative experiments demonstrate that, relative to previous style augmentation methods, our technique achieves superior performance and faster convergence.
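The patch-substitution step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the style-transferred image is already available (the paper leaves the style transfer algorithm pluggable), and the function name and patch-size bounds are hypothetical choices for this sketch.

```python
import numpy as np

def random_style_replacement(image, stylized, min_frac=0.2, max_frac=0.5, rng=None):
    """Replace a random rectangular subregion of `image` with the
    corresponding patch from its style-transferred counterpart `stylized`.

    `min_frac`/`max_frac` bound the patch height and width as fractions
    of the image dimensions (illustrative values, not from the paper).
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    # Sample the patch size within the configured bounds.
    ph = int(h * rng.uniform(min_frac, max_frac))
    pw = int(w * rng.uniform(min_frac, max_frac))
    # Sample the top-left corner so the patch stays inside the image.
    top = int(rng.integers(0, h - ph + 1))
    left = int(rng.integers(0, w - pw + 1))
    # Copy the image and paste in the aligned style-transferred patch.
    out = image.copy()
    out[top:top + ph, left:left + pw] = stylized[top:top + ph, left:left + pw]
    return out
```

In a training pipeline this would run after the style transfer step, producing images that are mostly original content with a stylized subregion, analogous to how random erasing occludes a region but without discarding information.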
Similar Papers
Transferring Styles for Reduced Texture Bias and Improved Robustness in Semantic Segmentation Networks
CV and Pattern Recognition
Makes computer vision see shapes, not just textures.
Stylized Synthetic Augmentation further improves Corruption Robustness
CV and Pattern Recognition
Makes computer pictures work even when blurry.