Transferring Styles for Reduced Texture Bias and Improved Robustness in Semantic Segmentation Networks
Abstract: Recent research has investigated the shape and texture biases of deep neural networks (DNNs) in image classification, which influence their generalization capabilities and robustness. It has been shown that, compared to standard DNN training, training with stylized images reduces texture bias in image classification and improves robustness with respect to image corruptions. To advance this line of research, we examine whether style transfer can deliver these two effects in semantic segmentation as well, a task that inherently requires the preservation of structural information across deep layers. To this end, we perform style transfer with styles that vary across artificial image regions, formed by a chosen number of Voronoi cells. The resulting style-transferred data is then used to train semantic segmentation DNNs with the objective of reducing their dependence on texture cues while strengthening their reliance on shape-based features. Our experiments show that, in semantic segmentation, style transfer augmentation reduces texture bias and strongly increases robustness with respect to common image corruptions as well as adversarial attacks. These observations hold for both a convolutional neural network and a transformer architecture on the Cityscapes and PASCAL Context datasets, demonstrating the generality of the proposed method.
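To make the region-wise augmentation concrete, the sketch below shows one plausible reading of the Voronoi-cell idea: sample random seed points, assign each pixel to its nearest seed, and composite a differently stylized version of the image into each cell. The names `voronoi_mask`, `regionwise_stylize`, and `stylize_fn` are illustrative, not from the paper, and `match_moments` is only a toy per-channel moment-matching stand-in for a real style-transfer backbone such as AdaIN. Note that the augmentation changes only appearance, not geometry, so the segmentation labels remain valid unchanged.

```python
import numpy as np

def voronoi_mask(height, width, num_cells, rng=None):
    """Partition the image plane into Voronoi cells around random seed points.

    Returns an (H, W) array of cell indices in [0, num_cells).
    """
    rng = np.random.default_rng(rng)
    seeds = rng.uniform([0, 0], [height, width], size=(num_cells, 2))
    ys, xs = np.mgrid[0:height, 0:width]
    coords = np.stack([ys, xs], axis=-1).astype(np.float64)            # (H, W, 2)
    # Squared Euclidean distance of every pixel to every seed point.
    d2 = ((coords[None, ...] - seeds[:, None, None, :]) ** 2).sum(-1)  # (N, H, W)
    return d2.argmin(axis=0)                                           # nearest seed per pixel

def regionwise_stylize(image, styles, stylize_fn, num_cells, rng=None):
    """Apply a different, randomly chosen style to each Voronoi cell.

    `stylize_fn(content, style)` stands in for any full-image style transfer
    method; it must return an array shaped like `content`. Stylizing the whole
    frame once per cell is wasteful but keeps the sketch simple.
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    mask = voronoi_mask(h, w, num_cells, rng)
    out = np.empty_like(image)
    for cell in range(num_cells):
        style = styles[rng.integers(len(styles))]
        stylized = stylize_fn(image, style)          # stylize the whole frame...
        out[mask == cell] = stylized[mask == cell]   # ...keep only this cell's pixels
    return out

# Toy stand-in for a real style-transfer network: per-channel moment matching.
def match_moments(content, style):
    c = content.astype(np.float64)
    s = style.astype(np.float64)
    out = (c - c.mean((0, 1))) / (c.std((0, 1)) + 1e-6) * s.std((0, 1)) + s.mean((0, 1))
    return out.clip(0, 255).astype(content.dtype)

# Usage with random placeholder data:
rng = np.random.default_rng(0)
content = rng.integers(0, 256, size=(256, 512, 3), dtype=np.uint8)
styles = [rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8) for _ in range(4)]
augmented = regionwise_stylize(content, styles, match_moments, num_cells=5, rng=0)
```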
External IDs: dblp:conf/ecai/HamscherHMMR25