Abstract: In this work, we propose a method for extracting representative features for fashion analysis by utilizing weakly annotated online fashion images. The proposed system consists of two stages. In the first stage, we detect clothing items in a fashion image: top clothes (t), bottom clothes (b), and one-pieces (o). In the second stage, we extract discriminative features from the detected regions for various applications of interest. Unlike previous work that relies heavily on well-annotated fashion data, we propose a way to collect fashion images from online resources and annotate them automatically. Based on this methodology, we create a new fashion dataset, called Web Attributes, to train our feature extractor. Experiments show that the extracted regional features capture local characteristics of fashion images well and offer better performance than previous work.
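To make the two-stage design concrete, below is a minimal sketch of such a detection-then-extraction pipeline. The names (`ClothingDetector`, `RegionFeatureExtractor`, `analyze_image`) and the stubbed logic are assumptions for illustration only, not the authors' actual models or API; a real system would use trained networks for both stages.

```python
# Hypothetical sketch of a two-stage pipeline: (1) detect clothing regions
# (top "t", bottom "b", one-piece "o"), (2) extract features per region.
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class Region:
    category: str   # one of "t", "b", "o"
    box: tuple      # (x1, y1, x2, y2) in image coordinates


class ClothingDetector:
    """Stage 1: localize clothing items in a fashion image (stubbed here)."""

    def detect(self, image: np.ndarray) -> List[Region]:
        # A real detector would predict boxes per clothing category; this stub
        # simply returns one full-image box labeled as a top.
        h, w = image.shape[:2]
        return [Region(category="t", box=(0, 0, w, h))]


class RegionFeatureExtractor:
    """Stage 2: extract a feature vector from each detected region (stubbed)."""

    def __init__(self, feature_dim: int = 128):
        self.feature_dim = feature_dim

    def extract(self, image: np.ndarray, region: Region) -> np.ndarray:
        x1, y1, x2, y2 = region.box
        crop = image[y1:y2, x1:x2]
        # A real extractor would be a network trained on weakly annotated web
        # images; here we just pool pixel statistics as a placeholder feature.
        pooled = crop.reshape(-1, crop.shape[-1]).mean(axis=0)
        return np.resize(pooled, self.feature_dim)


def analyze_image(image: np.ndarray) -> dict:
    """Run both stages and return per-category regional features."""
    detector = ClothingDetector()
    extractor = RegionFeatureExtractor()
    return {r.category: extractor.extract(image, r) for r in detector.detect(image)}


if __name__ == "__main__":
    dummy_image = np.random.rand(256, 128, 3)   # stand-in for a fashion photo
    features = analyze_image(dummy_image)
    print({k: v.shape for k, v in features.items()})
```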