Exploiting Texture Cues for Clothing Parsing in Fashion Images

ICIP 2018
Abstract: We address the problem of parsing fashion images to detect various types of clothing and style. Current state-of-the-art techniques for this problem are mostly variations of the SegNet model. They formulate the task as segmentation and typically rely on geometric shape and position cues to segment the image. However, in fashion images each clothing item is made of a specific type of material with a characteristic visual texture pattern. Texture is therefore an important cue for recognizing the clothing type, yet it has been ignored by the state-of-the-art so far. In this paper, we propose a two-stream deep neural network architecture for fashion image parsing. The first stream uses a standard fully convolutional segmentation architecture to produce accurate spatial segments, while the second stream learns texture features from hand-crafted Gabor feature maps given as input and helps determine the clothing type, resulting in improved recognition of the various segments. Our experiments show that the proposed two-stream architecture successfully reduces confusion between clothing types that have similar visual shapes in the images but are made of different materials. Our approach achieves state-of-the-art results on standard benchmark datasets such as Fashionista and CFPD.
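
A minimal sketch of the two-stream idea described in the abstract, assuming PyTorch: a fixed Gabor filter bank produces hand-crafted texture feature maps that feed a small learned texture stream, which is fused by concatenation with an FCN-style RGB stream before a per-pixel classification head. The layer widths, the number of Gabor orientations and scales, the concatenation-based fusion, and the default class count are illustrative assumptions, not the authors' exact architecture.

```python
import math
import numpy as np
import torch
import torch.nn as nn


def gabor_kernel(ksize, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor filter on a ksize x ksize grid."""
    half = ksize // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float32)
    x_rot = xs * math.cos(theta) + ys * math.sin(theta)
    y_rot = -xs * math.sin(theta) + ys * math.cos(theta)
    envelope = np.exp(-(x_rot ** 2 + (gamma * y_rot) ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * math.pi * x_rot / lambd + psi)
    return envelope * carrier


def gabor_bank(orientations=8, scales=(2.0, 4.0), ksize=15):
    """Stack of Gabor kernels over several orientations and scales (illustrative sizes)."""
    kernels = [
        gabor_kernel(ksize, sigma=s, theta=o * math.pi / orientations, lambd=2.0 * s)
        for s in scales
        for o in range(orientations)
    ]
    return torch.from_numpy(np.stack(kernels)).unsqueeze(1)  # shape (F, 1, k, k)


class TwoStreamParser(nn.Module):
    def __init__(self, num_classes=23):  # class count is a placeholder, not from the paper
        super().__init__()
        bank = gabor_bank()
        # Fixed (non-learned) convolution producing hand-crafted Gabor feature maps
        # from a grayscale version of the image.
        self.gabor = nn.Conv2d(1, bank.shape[0], bank.shape[-1],
                               padding=bank.shape[-1] // 2, bias=False)
        self.gabor.weight.data.copy_(bank)
        self.gabor.weight.requires_grad_(False)

        # Texture stream: learns features on top of the Gabor maps.
        self.texture = nn.Sequential(
            nn.Conv2d(bank.shape[0], 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Spatial stream: a small FCN-style encoder on the RGB image
        # (stand-in for the full segmentation backbone).
        self.spatial = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Fusion by concatenation followed by a 1x1 per-pixel classification head.
        self.head = nn.Conv2d(64 + 32, num_classes, 1)

    def forward(self, rgb):
        gray = rgb.mean(dim=1, keepdim=True)            # crude grayscale
        tex = self.texture(self.gabor(gray))            # texture stream
        spa = self.spatial(rgb)                         # spatial stream
        return self.head(torch.cat([spa, tex], dim=1))  # per-pixel class scores


if __name__ == "__main__":
    model = TwoStreamParser(num_classes=23)
    scores = model(torch.randn(1, 3, 128, 128))
    print(scores.shape)  # torch.Size([1, 23, 128, 128])
```

Keeping the Gabor bank as a frozen convolution makes the texture cue explicit and cheap, while the learned layers on top decide which filter responses discriminate between materials.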