REPRESENTATIVE FASHION FEATURE EXTRACTION

Fashion plays an important role in modern society. Potential applications of fashion study in apparel, such as personalized recommendation and virtual wardrobes, are of great financial and cultural interest. With rapid advances in computer vision and machine learning, automated visual analysis of fashion has attracted significant research interest in recent years. Recent work on fashion image analysis relies on well-annotated data [1, 2, 3, 4, 5]. However, manual annotation of fashion data is more complicated than that of other data types due to its ambiguous nature. The boundaries between fashion concepts are often subjective. As a result, annotations from multiple labelers are less consistent than those in other annotation tasks.

For example, the same item labeled as a "jacket" by one person may be labeled as a "coat" by another. Both can be acceptable given the subtle difference between these two categories. Instead of spending substantial resources on manually annotating fashion data, some studies focus on utilizing online resources for fashion research without human annotations [6, 7, 8]. These online resources often provide rich but inaccurate (or noisy) metadata [7]. How to take advantage of such online resources remains an open problem.

To address this challenge, we propose a feature extraction method that captures the characteristics of local regions in weakly annotated fashion images. It consists of two stages. In the first stage, we aim at detecting three types of regions in a fashion image: top clothes (t), bottom clothes (b), and one-pieces (o). To achieve better clothing item detection accuracy, the person region is also identified to provide a global view. In the second stage, we extract discriminative features from the detected regions. To obtain representative features, we collected a fashion dataset, called Web Attributes, which provides detailed information on the detected regions. Furthermore, we present a method to collect fashion images from online resources and conduct automatic annotation. Experiments show that the extracted features capture the local characteristics of fashion images well and offer state-of-the-art performance in fashion feature extraction.
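A minimal sketch of the two-stage pipeline described above is given next. It assumes an off-the-shelf torchvision detector for stage one and a generic CNN backbone for stage two; the label map, score threshold, and model choices are illustrative assumptions, not the paper's actual networks.

import torch
import torch.nn.functional as F
import torchvision
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Hypothetical label set for stage 1: the person region plus the three
# clothing region types named in the text.
REGION_LABELS = {1: "person", 2: "top", 3: "bottom", 4: "one-piece"}

# Stage 1: a generic detector (untrained here); +1 accounts for background.
detector = fasterrcnn_resnet50_fpn(
    weights=None, weights_backbone=None, num_classes=len(REGION_LABELS) + 1
)
detector.eval()

# Stage 2: a CNN backbone whose pooled activations serve as region features.
backbone = torchvision.models.resnet50(weights=None)
backbone.fc = torch.nn.Identity()  # expose the 2048-d pooled feature
backbone.eval()

@torch.no_grad()
def extract_region_features(image, score_thresh=0.5):
    """Detect clothing regions (stage 1) and embed each crop (stage 2)."""
    detections = detector([image])[0]
    features = {}
    for box, label, score in zip(
        detections["boxes"], detections["labels"], detections["scores"]
    ):
        if score < score_thresh or label.item() not in REGION_LABELS:
            continue
        x1, y1, x2, y2 = box.round().int().tolist()
        crop = image[:, y1:y2, x1:x2]
        if crop.numel() == 0:  # skip degenerate boxes
            continue
        # Resize each region crop to the backbone's expected input size.
        crop = F.interpolate(crop.unsqueeze(0), size=(224, 224),
                             mode="bilinear", align_corners=False)
        features[REGION_LABELS[label.item()]] = backbone(crop).squeeze(0)
    return features

# Example: one 3 x 600 x 400 RGB image with values in [0, 1].
feats = extract_region_features(torch.rand(3, 600, 400))
print({name: vec.shape for name, vec in feats.items()})

With a trained detector, the person crop supplies the global view while the clothing crops supply local views, and the returned dictionary holds one feature vector per detected region.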
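The automatic annotation of web images from noisy metadata can likewise be sketched as a keyword-to-attribute mapping. The vocabulary, synonym table, and matching rule below are illustrative assumptions, not the paper's actual annotation procedure.

import re

# Canonical attributes and noisy surface forms that should map to them
# (an assumed, illustrative vocabulary).
ATTRIBUTE_SYNONYMS = {
    "jacket": {"jacket", "blazer", "bomber"},
    "coat": {"coat", "overcoat", "trench"},
    "floral": {"floral", "flower", "flowers"},
    "denim": {"denim", "jean", "jeans"},
}

def annotate(metadata_text):
    """Return the set of canonical attributes found in free-form metadata."""
    tokens = set(re.findall(r"[a-z]+", metadata_text.lower()))
    return {attr for attr, forms in ATTRIBUTE_SYNONYMS.items() if tokens & forms}

# Example: a noisy product title yields weak attribute labels.
print(annotate("Women's Floral Print Denim Jacket - Spring 2018"))
# e.g. {'jacket', 'floral', 'denim'} (set order varies)

Labels produced this way remain weak and noisy (the jacket/coat ambiguity above persists), which is why the extracted features must tolerate label noise.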

We proposed a two-stage method for representative region feature extraction from fashion images by leveraging weakly annotated web images. A new fashion dataset, called Web Attributes, was constructed from online resources with automatic annotation. By providing both global and local region characteristics, our feature extraction method can zoom into local regions that are otherwise difficult to detect. Its superior performance in representative region detection, attribute classification, and similar clothing retrieval was demonstrated by experimental results.