Transferring Styles for Reduced Texture Bias and Improved Robustness in Semantic Segmentation Networks
By: Ben Hamscher, Edgar Heinert, Annika Mütze, and more
Potential Business Impact:
Makes computer vision see shapes, not just textures.
Recent research has investigated the shape and texture biases of deep neural networks (DNNs) in image classification, which influence their generalization capabilities and robustness. It has been shown that, compared to regular DNN training, training with stylized images reduces texture bias in image classification and improves robustness with respect to image corruptions. To advance this line of research, we examine whether style transfer can likewise deliver these two effects in semantic segmentation. To this end, we perform style transfer in which the style varies across artificial image regions, formed by partitioning the image into a chosen number of random Voronoi cells. The resulting style-transferred data is then used to train semantic segmentation DNNs with the objective of reducing their dependence on texture cues while strengthening their reliance on shape-based features. Our experiments show that, in semantic segmentation, style transfer augmentation reduces texture bias and strongly increases robustness with respect to common image corruptions as well as adversarial attacks. These observations hold for convolutional neural networks and transformer architectures, on the Cityscapes dataset as well as on PASCAL Context, demonstrating the generality of the proposed method.
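To make the augmentation concrete, below is a minimal sketch of per-region stylization, assuming NumPy. The function names (voronoi_masks, stylize_by_cells) and the style_fn placeholder, which stands in for whatever off-the-shelf style-transfer network is used (the abstract does not name one), are hypothetical illustrations rather than the paper's actual code.

```python
import numpy as np

def voronoi_masks(height, width, num_cells, seed=None):
    """Assign each pixel to the nearest of `num_cells` random seed points,
    yielding an (H, W) integer label map of Voronoi cells."""
    rng = np.random.default_rng(seed)
    seeds = rng.uniform([0, 0], [height, width], size=(num_cells, 2))
    ys, xs = np.mgrid[0:height, 0:width]
    coords = np.stack([ys, xs], axis=-1).astype(np.float64)       # (H, W, 2)
    # Squared Euclidean distance from every pixel to every seed point.
    d2 = ((coords[None] - seeds[:, None, None, :]) ** 2).sum(-1)  # (N, H, W)
    return d2.argmin(axis=0)                                      # (H, W)

def stylize_by_cells(image, style_images, num_cells, style_fn, seed=None):
    """Apply a different, randomly chosen style to each Voronoi cell.
    `style_fn(content, style)` is a stand-in for any full-image
    style-transfer routine; its output is composited cell by cell."""
    rng = np.random.default_rng(seed)
    labels = voronoi_masks(image.shape[0], image.shape[1], num_cells, seed)
    out = image.astype(np.float32).copy()
    for cell in range(num_cells):
        style = style_images[rng.integers(len(style_images))]
        styled = style_fn(image, style)  # stylize the full frame in this style
        mask = labels == cell
        out[mask] = styled[mask]         # keep only this cell's pixels
    return out
```

Since only the input images are perturbed while the segmentation labels stay fixed, the network is pushed to predict class regions from object shape rather than from the randomized textures, which is the stated objective of the training setup.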
Similar Papers
Data Augmentation Through Random Style Replacement
CV and Pattern Recognition
Makes computer pictures better for learning.
Dynamic Neural Style Transfer for Artistic Image Generation using VGG19
CV and Pattern Recognition
Makes any picture look like a famous painting.
Style transfer as data augmentation: evaluating unpaired image-to-image translation models in mammography
Image and Video Processing
Helps AI find breast cancer better in X-rays.