The iMaterialist Fashion Attribute Dataset

Sheng Guo     Weilin Huang     Xiao Zhang     Prasanna Srikhanta     Yin Cui     Yuan Li
Serge Belongie     Hartwig Adam     Matthew Scott
Malong Technologies      Google AI      Wish      Cornell University      Horizon Robotics

Large-scale image databases such as ImageNet have significantly advanced image classification and other visual recognition tasks. However, most of these datasets are constructed only for single-label, coarse object-level classification. For real-world applications, multiple labels and fine-grained categories are often needed, yet very few such datasets exist publicly, especially at large scale and high quality. In this work, we contribute to the community a new dataset, iMaterialist Fashion Attribute (iFashion-Attribute), to address this problem in the fashion domain. The dataset is constructed from over one million fashion images with a label space that includes 8 groups of 228 fine-grained attributes in total. Each image is annotated by experts with multiple high-quality fashion attributes. The result is the first known million-scale multi-label and fine-grained image dataset. We conduct extensive experiments and provide baseline results with modern deep Convolutional Neural Networks (CNNs). Additionally, we demonstrate that models pre-trained on iFashion-Attribute achieve superior transfer learning performance on fashion-related tasks compared with pre-training on ImageNet or other fashion datasets. Data is available at:

1 Introduction

Recent deep learning models trained on large-scale datasets (e.g., ImageNet [4] and Open Images [16]) have significantly advanced the task of image classification and related applications in computer vision, such as object detection [30, 23, 29, 21], segmentation [25, 10, 8] and retrieval [7]. Performance on existing image classification benchmarks such as ImageNet [4] has reached the saturation point when using recent CNNs [31, 9, 32, 11]. New datasets need to be created to tackle more challenging problems, such as multi-label classification and fine-grained recognition. On the other hand, domain-specific datasets have attracted considerable interest recently, especially in the fashion domain [40, 24, 7, 41]. In light of this, we introduce the iMaterialist Fashion Attribute dataset (iFashion-Attribute), which includes over one million high-quality annotated fashion images. The label space includes 8 groups and a total of 228 fashion attributes. Details of the fashion groups and attribute-level classes are described in Table 1. The labels are curated by fashion experts, and each image has on average 5 individual labels.

iFashion-Attribute presents a few unique challenges. Firstly, it is a multi-label prediction problem and the models are evaluated by precision and recall. Multi-label image recognition has been studied in the community with recent deep learning based approaches [37, 35, 36, 20, 44, 22]. However, performance on this task is significantly lower than that on ImageNet classification. Most existing datasets created for multi-label image recognition are limited in scale: PASCAL VOC [5], MS-COCO [22] and NUS-WIDE [3] have about 6K, 80K and 160K training images, with 20, 80 and 81 categories, respectively. Both the learning difficulty and the annotation effort increase considerably with the number of categories, and it is particularly challenging to collect a large-scale (e.g., million-level) database to benchmark this task.

Figure 1: Examples of iFashion-Attribute dataset. The attribute labels are divided into 8 groups. Here we show example images and labels from 4 attribute groups: pattern, neckline, style and category.
Attribute | #Class | Type | #Label | #Image (coverage) | Example
Category | 105 | S | 913,857 | 913,857 (90.2%) | Athletic Pants, Bikinis, Cargo Pants, Heels, Petticoats …
Color | 21 | M | 894,904 | 467,137 (46.1%) | Black, Bronze, Gold, Gray, Green …
Gender | 3 | M | 1,012,947 | 935,265 (92.3%) | Male, Female, Neutral
Material | 34 | M | 701,197 | 591,175 (58.4%) | Nylon, Organza, Patent, Plush, Rayon …
Neckline | 11 | S | 721,908 | 721,908 (71.3%) | Racerback, Shoulder Drapes, Square Necked, Turtlenecks, U-Necks …
Pattern | 28 | M | 325,361 | 311,676 (30.8%) | Argyle, Camouflage, Checkered, Floral, Galaxy …
Sleeve | 5 | S | 733,501 | 733,501 (72.4%) | Long Sleeved, Puff Sleeves, Short Sleeves, Sleeveless, Strapless
Style | 21 | S | 610,442 | 610,443 (60.3%) | Asymmetric, Summer, Tunic, Vintage Retro, Wrap …
Table 1: Details of the iFashion-Attribute groups: the number of attribute-level classes, the label type (S: at most one label per image within the group, M: multiple labels possible), and the number of labels and images per group, with the fraction of training images covered.

Secondly, many of the fashion attributes in iFashion-Attribute are fine-grained labels and may have very similar visual patterns. For example, as shown in Fig. 1, in the "Pattern" group, identifying the fine-grained visual difference between the "Plaid" and "Checkered" classes is particularly challenging, because images often have large visual diversity within each class that can far exceed the subtle distinctions between the classes themselves. The result is an intra-class diversity that can be significantly larger than the inter-class variance. This gives rise to new challenges compared to existing fine-grained recognition benchmarks, where images often have similar visual appearance with low intra-class diversity, such as birds in the CUB-200-2011 database [34] and cars in the Stanford Cars database [15]. The new dataset has 8 high-level groups of fashion attributes, each of which contains a number of fine-grained attribute-level classes. This allows it to define multiple visual patterns for fine-grained recognition, further increasing the diversity of the recognition task. Additionally, iFashion-Attribute provides a significantly larger number of training images than existing benchmarks. These properties make the new dataset more meaningful for real-world applications, enabling the learning of CNNs with stronger generalization capability than existing datasets.

Dataset | #Train | #Classes
Single-label classification:
Scene15 [17] | 4,485 | 15
MIT Indoor-67 [28] | 5,360 | 67
ILSVRC [4] | 1,281,167 | 1,000
SUN397 [39] | 19,850 | 397
Places365 [43] | 8,000,000 | 365
WebVision [19] | 2,439,574 | 1,000
Multi-label classification:
NUS-WIDE [3] | 161,789 | 81
VOC2012 [5] | 5,717 | 20
MS-COCO [22] | 82,783 | 80
Open Images V4 [16] | 1,700,000 | 600
Fine-grained classification:
CUB-200-2011 [34] | 5,994 | 200
Stanford Dogs [14] | 12,000 | 120
Stanford Cars [15] | 8,144 | 196
FGVC-Aircraft [26] | 6,667 | 100
iNaturalist 2017 [33] | 579,184 | 5,089
Fashion-related classification:
DCSA [2] | 1,856 | 26
ACWS [1] | 145,718 | 15
Clothes-1M [40] | 1,050,000 | 14
WTBI [7] | 78,958 | 11
DeepFashion-C [24] | 209,222 | 46
DeepFashion-A [24] | 209,222 | 1,000
iFashion-Attribute | 1,012,947 | 228
Table 2: Summary of popular datasets, grouped (top to bottom) by the type of classification task: single-label, multi-label, fine-grained and fashion-related classification. iFashion-Attribute is the first known dataset that is expert-labeled at million scale with multi-label, fine-grained attributes (see Sec. 3.1).

The goal of iFashion-Attribute is to encourage research on a more complex, real-world task that jointly considers multi-label and fine-grained image recognition with a hierarchical class structure. Our major contributions are: (i) the first known million-scale image dataset with multiple fine-grained attribute labels curated by experts; (ii) extensive experiments with recent CNN models on multi-label and fine-grained recognition tasks, providing meaningful baseline results that facilitate future research; (iii) an empirical demonstration that iFashion-Attribute is valuable for transfer learning on other fashion-related datasets and tasks.

2 Related work

Databases have always been a key resource for computer vision research, providing standard benchmarks for evaluating algorithms on a defined task, such as the fundamental task of image recognition. In this section, we review a number of recent databases created for image classification, categorized into four groups: single-label classification, multi-label classification, fine-grained recognition, and fashion-related benchmarks.

Single-label image classification. Two fundamental applications for single-label image classification are object recognition and scene classification. ImageNet [4] is a widely-used dataset for large-scale object classification, with 1000 object categories and over 1 million training images carrying clean human annotations; it has boosted the performance of recent CNNs across various computer vision tasks. However, performance on ImageNet has reached the saturation point, with a 2.25% Top-5 error using SE-Net [11]. WebVision [19] was created in 2017 to increase data difficulty by including a large amount of noisy labels, generated from the metadata of web images without any human annotation or cleaning. It has the same 1000 object categories as ImageNet, with about 2.4 million training images crawled from the Internet. Recently, Guo et al. [6] developed CurriculumNet, which learns CNNs from WebVision data and achieves a 5.2% Top-5 error. Additionally, a number of standard databases have been built for scene recognition, including Scene15 [17], MIT-67 [28], SUN397 [39] and Places365 [43]. These databases are all designed for single-label image classification, successively increasing the number of scene categories (from 15 to 397) and the number of training images (from about 5K to 8M). Details are summarized in Table 2. Note that it is more difficult to define a large number of scene categories than object classes, due to category ambiguity.

Multi-label image classification. In common scenarios an image may contain multiple object items. Describing such an image with a single object class is less informative, and multi-class descriptions should be considered. Multi-label image classification is well established in the literature, with a number of databases created for it. For example, Everingham et al. [5] built the Pascal Visual Object Classes (VOC) dataset, an important database for object recognition; Pascal VOC has 20 object categories, with 5,717 training images, each carrying one or multiple labels. NUS-WIDE [3] is a web image dataset created by Chua et al. for evaluating traditional image annotation and multi-label image classification. It contains 161,789 training images with associated tags collected from Flickr, manually labeled into 81 concepts, including activities, objects and scenes. Recently, Lin et al. built the Microsoft Common Objects in Context (MS-COCO) dataset [22], a widely-used database for object detection, (instance) segmentation, and multi-label image classification, with 80 object categories and 82,783 training images, each tagged with multiple object-category labels. Our database is collected for joint multi-label and fine-grained image recognition, and is over an order of magnitude larger in scale.

Fine-grained image recognition. Fine-grained image classification has long been studied in the computer vision community, with a number of databases built for this task, including CUB-200-2011 [34], Stanford Dogs [14], Stanford Cars [15] and FGVC-Aircraft [26]. Training set sizes in these datasets range from about 6K to 12K images, with 100 to 200 fine-grained categories, and each dataset covers a single type of object or animal (e.g., cars, aircraft, dogs, or birds). Recently, the iNaturalist dataset was created by Van Horn et al. [33]. It has 859,000 images from over 5,000 species of plants and animals, increasing both the number of training images and the number of categories considerably; furthermore, it includes multiple types of plants and animals, rather than the single common object type of earlier datasets. Differing from these single-label databases, our iFashion-Attribute dataset is designed for both multi-label and fine-grained recognition of fashion images. In particular, we define fine-grained differences along multiple dimensions, using various visual patterns designed for fashion applications. This allows for significant visual diversity within each fine-grained class, increasing learning complexity considerably.

Fashion-related image recognition. Research efforts have recently been devoted to fashion-related tasks, such as clothes recognition [40, 24], fashion attribute prediction [24], fashion retrieval [24, 7] and clothing parsing [41]. Table 2 summarizes databases created for these tasks. For example, DeepFashion [24] was introduced by Liu et al. It contains more than 800,000 images, of which 209,222 were used for clothes classification and attribute prediction, with 50 categories and 1,000 attributes defined; the remainder are used for other tasks, such as fashion retrieval and landmark prediction. Compared with DeepFashion, the proposed iFashion-Attribute dataset has 5 times more images. In addition, our attribute labels go through several rounds of post-processing (see Sec. 3.1), which significantly improves label quality; in comparison, the labels in DeepFashion are crawled from web metadata and do not go through any post-processing. Xiao et al. [40] built a clothes dataset (Clothes-1M) by crawling images from several online shopping websites, which may contain a large amount of noisy labels. It has 1 million raw images (with noisy labels generated from meta information), plus 50K manually-cleaned and fully-annotated images from 14 clothes categories. The scale of Clothes-1M is comparable to iFashion-Attribute, but it has only one label per image, its training labels are noisy, and its labels are coarse categories rather than fine-grained. As shown in our experiments, by providing more meaningful multi-label and detailed attribute information, CNNs pre-trained on iFashion-Attribute achieve better generalization, transferring more effectively to new tasks.

3 iFashion Dataset

We describe details of the iFashion-Attribute database in this section, including data collection and the construction of the train, validation and test splits. We describe the methods for manually cleaning and labeling images in the validation and test sets. Evaluation measures are also presented.

Figure 2: Histogram of number of labels per image, with an average of 5.8 and 8 per image in the training and validation sets.
Figure 3: Number of images per attribute label, demonstrating the long tail nature of the dataset.

All images in the iFashion-Attribute database are provided by Wish, with 1,012,947, 9,897 and 39,706 images split into train, validation, and test sets, respectively. It has 228 fine-grained fashion attribute-level classes, which form 8 high-level fashion groups professionally defined by the fashion industry. Each image has multiple labels. The histograms of the number of labels per image and the number of images per label are shown in Fig. 2 and Fig. 3.

The 8 fashion groups are presented in Table 1. Each group has a number of fashion classes, ranging from 3 (the "Gender" group) to 105 (the "Category" group). Statistics on the number of images and labels for each group are listed in Table 1. We found that the "Gender" group has a label in nearly all images, while the "Pattern" group has labels in only about 30% of the training images. The number of labels per image ranges from 1 to 23, with an average of 5.8 labels per training image. Furthermore, we computed the number of images for each attribute-level class: 31 fine-grained classes have fewer than 500 training images each, while 88 classes have over 10K. This indicates a significant data imbalance in our database, which further increases the challenge of learning CNNs and makes the dataset useful for evaluating low-shot learning algorithms.
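As an illustration, the per-image and per-class statistics above can be computed with a short script. The image-to-labels mapping used here is a hypothetical format for the sake of the example, not the dataset's actual release format:

```python
from collections import Counter

def label_statistics(annotations):
    """Compute per-image label counts, per-class image counts, and the
    average number of labels per image.

    `annotations` maps an image id to its list of attribute labels
    (an assumed format for illustration).
    """
    labels_per_image = {img: len(labels) for img, labels in annotations.items()}
    images_per_class = Counter(l for labels in annotations.values() for l in labels)
    avg = sum(labels_per_image.values()) / max(len(labels_per_image), 1)
    return labels_per_image, images_per_class, avg

# Toy example with two images and hypothetical labels
ann = {"img1": ["Black", "Floral", "Female"], "img2": ["Black"]}
per_img, per_cls, avg = label_statistics(ann)
```

Running the same routine over the full training annotations would reproduce the histograms in Fig. 2 and Fig. 3.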

Our database jointly considers large scale (million-level), multiple labels (with a group structure), and fine-grained recognition for fashion classification, setting it apart from existing datasets, which were often designed to investigate individual problems. In particular, our fine-grained classes are created structurally based on multiple groups of professionally defined fashion attributes, such as "pattern", "neckline" and "color", as shown in Fig. 1. This allows for a large intra-class variance that can be much more significant than the visual diversity between different classes (as shown in Fig. 1), making the dataset more challenging than existing fine-grained or fashion-related benchmarks. For example, the same model may wear visually similar clothes with only fine-grained distinctions in fashion pattern, falling into the two different attributes "Plaid" and "Checkered", as shown in the "Pattern" group in Fig. 1. This sets a new challenge for CNNs: learning fine-grained distinctions between such structurally-defined patterns automatically from the provided data and labels.

3.1 Dataset collection and quality improvements

The fashion images are provided by Wish, which has over 50M unique images across an extensive product line. We collected over 1M fashion images by randomly sampling across individual attribute classes; in some cases an attribute class was not large enough, and a smaller number of images was sampled. All images were pre-tagged by humans using an organically grown taxonomy. Finally, the Wish tags were mapped to the competition taxonomy and format. Several post-processing steps were then applied to improve dataset quality.

Deduplication: a deduplication pass was made to remove as many exact duplicates as possible while still maintaining enough images; new images were substituted for any that were deleted. A downside of this process is that it was optimized for speed rather than complete accuracy, so some near-duplicate images (mirrored, cropped, or color-shifted) remain in the dataset.
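A minimal sketch of such an exact-duplicate pass using a content hash (the hash choice and the in-memory data layout are illustrative assumptions; the actual pipeline is not described at this level of detail). Because only byte-identical files collide, near duplicates survive, as noted above:

```python
import hashlib

def exact_duplicates(images):
    """Group images by an exact content hash and report duplicates.

    `images` maps an image id to its raw bytes (assumed layout).
    Only byte-identical copies collide, so mirrored, cropped or
    recolored near-duplicates are NOT detected by this pass.
    """
    seen = {}    # hash -> first image id with that content
    dupes = []   # (duplicate id, original id) pairs
    for img_id, data in images.items():
        h = hashlib.md5(data).hexdigest()
        if h in seen:
            dupes.append((img_id, seen[h]))
        else:
            seen[h] = img_id
    return dupes

# Toy example: "b" is a byte-identical copy of "a"
imgs = {"a": b"\x01\x02", "b": b"\x01\x02", "c": b"\x03"}
```

A production pass would stream file contents instead of holding bytes in memory, but the collision logic is the same.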

Automatic verification: one downside of human tagging is that there may be errors in the tags that were not caught by QA. To filter out potentially inconsistent tag/image pairs, a second pass compared the image tags against the product title to verify that the tags were in the right attribute class. The product title/description adds an additional level of accuracy to the tags, and contributes augmented tags that were missed by human annotators.

Data checking and cleaning: Wish provides the original product images with a total of 9 fashion groups and 242 attribute-level classes. We found that some of the original attributes are ambiguous and difficult to define and discriminate clearly from visual information alone, such as the "size" group. We therefore removed the entire "size" group with its attributes, reducing the number of groups from 9 to 8. Furthermore, a number of attribute-level classes are defined by coarse fashion concepts. For example, the attribute "top" in the high-level "category" group subsumes other attribute-level classes in the same group, such as "Polo", "T-shirt", "Vest", and "Jacket". It is important to ensure non-overlap between the attribute-level classes within each group, so we manually removed all such coarse-concept attributes; the remaining attribute-level classes are mutually exclusive in fashion concept. This further reduced the number of attributes to 228, leaving 1,012,947, 9,897 and 39,706 images for train, validation, and test, respectively.

Figure 4: Per-attribute recalls on 50 randomly selected attributes.

Dataset statistics: The number of images and labels provided for each group are listed in Table 1. We found that the "Gender" group has a label in 92.3% of training images, while the "Pattern" group has labels in only about 30% of them. The number of labels per image ranges from 1 to a maximum of 23, with an average of 5.8 over all training images. We also computed the number of images per attribute-level class: 31 classes have fewer than 500 training images each, while 88 have over 10K. This indicates a significant data imbalance in the database, which further increases the challenge of learning CNNs. In addition, recognition difficulty varies significantly across the high-level groups and attribute-level classes, as shown in Fig. 4, where top-8 recalls for 50 randomly selected attribute-level classes are reported. The average top-1 recalls for the 8 groups listed in Table 1 are: 58.5%, 48.3%, 97.3%, 52.2%, 66.0%, 43.1%, 86.2%, and 28.8%. This further increases the challenge of our dataset.

Validation set and test set. All images in the validation and test sets were manually checked and carefully annotated. The two sets were created as follows. We randomly collected 10,000 and 40,000 images from the original data for the validation and test sets, respectively. We first performed the same attribute checking as on the training set, removing a small number of images with the confusing concepts mentioned above. To ensure correct labels for all images, each image was checked or re-labeled by three human labelers: (i) each image was first checked or annotated by two labelers separately; (ii) a third labeler double-checked the image if the labels provided were inconsistent, and made the final decision; an image could also be discarded if it did not include a clear fashion item. Finally, we obtained 9,897 manually-cleaned images for the validation set, and 39,706 for the test set.
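The consolidation logic above can be sketched as follows, with a callback standing in for the third labeler's decision (function names and data shapes are illustrative, not the annotation tool's actual interface):

```python
def consolidate(labels_a, labels_b, adjudicate):
    """Merge two independent annotations for one image.

    If the two labelers agree (as label sets), their labels are accepted
    directly. Otherwise `adjudicate` — a stand-in for the third labeler —
    makes the final call; it may return None to discard an image that
    does not contain a clear fashion item.
    """
    if set(labels_a) == set(labels_b):
        return sorted(set(labels_a))
    final = adjudicate(labels_a, labels_b)
    return sorted(final) if final is not None else None

# Agreement: no adjudication needed
kept = consolidate(["Black", "Floral"], ["Floral", "Black"], lambda a, b: a)

# Disagreement: the third labeler decides, here picking the second annotation
resolved = consolidate(["Plaid"], ["Checkered"], lambda a, b: b)
```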

4 Experiments, Baselines and Comparisons

We conduct experiments using recent CNN models and provide meaningful baseline results to facilitate future research. We analyze and discuss the challenges of our database based on the experimental results. Furthermore, extensive experiments are conducted on two fashion databases (Clothes-1M [40] and DeepFashion [24]) to investigate the transfer learning capability of various databases. We use CNNs trained on our dataset as pre-trained models, which demonstrate strong generalization capability when transferred to other databases and applications.

4.1 Baseline Results

Evaluation Metrics. We first evaluate the performance of various models for multi-label classification on our iFashion-Attribute dataset. Our measurement method is inspired by a comprehensive study from Wu et al. [38] reviewing performance measures for multi-label classification. We employ micro recall, micro precision and a mean-F1 score for measuring performance on our validation and test sets; "micro" means the measure is computed over all images. We select the top-8 classes with the highest output scores as the predicted results for each image. Specifically, the evaluation metrics are computed via precision (P) and recall (R) as follows:


$$P = \frac{\sum_{i=1}^{C} N_i^{c}}{\sum_{i=1}^{C} N_i^{p}}, \qquad R = \frac{\sum_{i=1}^{C} N_i^{c}}{\sum_{i=1}^{C} N_i^{g}}, \qquad F_1 = \frac{2PR}{P+R}$$

where $C$ is the number of classes, $N_i^{c}$ is the number of images correctly predicted for the $i$-th class, $N_i^{p}$ is the number of images predicted as the $i$-th class, and $N_i^{g}$ is the number of ground-truth images for the $i$-th class. We apply the binary cross-entropy (BCE) loss instead of the softmax loss to train the CNNs:
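As a reference sketch, the micro-averaged precision, recall and F1 can be computed from per-image prediction and ground-truth label sets as follows (the set-based data layout is an illustrative assumption; `preds` would hold each image's top-8 scoring classes):

```python
def micro_prf(preds, truths):
    """Micro-averaged precision, recall and F1 over all images.

    `preds` and `truths` are parallel lists of label sets per image.
    Counts are pooled over the whole set before dividing, which is
    what "micro" averaging means.
    """
    correct = sum(len(p & t) for p, t in zip(preds, truths))  # true positives
    n_pred = sum(len(p) for p in preds)                       # all predictions
    n_true = sum(len(t) for t in truths)                      # all ground truths
    P = correct / n_pred if n_pred else 0.0
    R = correct / n_true if n_true else 0.0
    F1 = 2 * P * R / (P + R) if P + R else 0.0
    return P, R, F1

# Toy example: two images, integer class ids
P, R, F1 = micro_prf([{1, 2}, {3}], [{1}, {3, 4}])
```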


$$L = -\frac{1}{C}\sum_{i=1}^{C}\left[\,y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i)\,\right]$$

where $C$ is the number of classes, and $y_i$ and $\hat{y}_i$ denote the $i$-th entry of the ground-truth binary vector and the predicted probability score, respectively.
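A minimal numerical sketch of this loss on a single image's label vector (pure Python; the probability clipping added here to avoid log(0) is an implementation detail assumed for numerical stability, not stated in the text):

```python
import math

def bce_loss(y_true, y_prob, eps=1e-7):
    """Binary cross-entropy averaged over the C classes.

    `y_true` is a 0/1 label vector over the attribute classes and
    `y_prob` the per-class sigmoid outputs, matching the formula above.
    """
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(y_true)

# A single positive class predicted at 0.5 gives a loss of log(2)
loss = bce_loss([1.0], [0.5])
```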

To facilitate future research on this task using the iFashion-Attribute database, we report a set of experimental results with a number of widely-used deep network architectures, including Inception-V1 [31], Inception-BN [13], Inception-V3 [32], and ResNet [9]. We apply data augmentation during training. Training images are resized to S × S, where S is set to 256 for Inception-V1 [31], Inception-BN [13] and ResNet [9], and 336 for Inception-V3 [32]. Then, we randomly crop a region at one of a set of fixed positions, with the cropped width and height picked from a predefined set of sizes. These cropped regions are further resized to the network input size (224 × 224, or 299 × 299 for Inception-V3), which depends on the image resolution S. Meanwhile, we also apply a random horizontal flip to the cropped images. The batch size is set to 256, and the learning rate starts from 0.1, decayed according to a fixed schedule determined by the dataset size. We use RMSProp optimization with momentum set to 0 and decay set to 0.9.
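The augmentation scheme can be sketched as a parameter-sampling routine. The fixed crop positions and the crop-scale set below are illustrative assumptions (the exact values are not given in the text), while 224 and 299 are the standard input sizes for these architectures:

```python
import random

def sample_augmentation(S=256, input_size=224):
    """Draw one training-time augmentation configuration.

    Sketch of the scheme described above: resize to S x S, crop a region
    at one of a few fixed positions, resize the crop to the network input
    size, and flip horizontally with probability 0.5. The position names
    and the crop-scale set are assumed for illustration.
    """
    position = random.choice(("center", "tl", "tr", "bl", "br"))
    crop = int(S * random.choice((1.0, 0.875, 0.75)))  # assumed scale set
    hflip = random.random() < 0.5
    return {"resize": S, "position": position, "crop": crop,
            "final": input_size, "hflip": hflip}

random.seed(0)
params = sample_augmentation()                       # Inception-V1/BN/ResNet
params_v3 = sample_augmentation(S=336, input_size=299)  # Inception-V3
```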

Method | Validation R / P / F1 | Private Test R / P / F1
Inception-BN (from scratch) | 59.4 / 59.6 / 59.5 | 59.0 / 59.6 / 59.3
Inception-BN (ImageNet pre-trained) | 60.0 / 60.2 / 60.1 | 59.6 / 60.2 / 59.9
Inception-V1 | 59.9 / 60.1 / 60.0 | 59.5 / 60.1 / 59.8
Inception-V3 | 60.5 / 60.7 / 60.6 | 59.9 / 60.5 / 60.2
ResNet-101 | 59.7 / 59.9 / 59.8 | 59.3 / 59.9 / 59.5
Table 3: Baseline results on iFashion-Attribute with different models, evaluated by recall (R), precision (P) and F1 score. The second Inception-BN row is initialized from ImageNet pre-training (see Sec. 4.1).

Results of various CNN models on the validation and private test sets are reported in Table 3 in terms of recall, precision and mean-F1 score. Within the Inception family, we found that the deeper Inception-V3 outperforms Inception-V1 and Inception-BN on both validation and test sets. Surprisingly, however, the results of Inception-V1 are slightly better than those of Inception-BN. We hypothesize that this may be due to the complexity of our database, which is significantly more difficult than the single-label databases on which the Inception family is regularly applied. Training CNNs on our dataset may therefore require more local supervision, and Inception-V1 and Inception-V3 have multiple (auxiliary) loss functions that enhance local supervision. This challenge has not been fully investigated in the community, and may open new research interest in multi-label image classification with hierarchical label structures.

In addition, we further train an Inception-BN initialized from an ImageNet pre-trained model, which improves the results over the model trained from scratch. This suggests that ImageNet pre-trained CNN features are useful for learning from our database. To address data imbalance, we implemented the weighted binary cross-entropy (weighted BCE) loss originally developed in [24], and obtained a 0.7% performance improvement on iFashion-Attribute using the ResNet architecture. Higher performance can be expected from more advanced approaches that jointly consider data imbalance with the multi-label and fine-grained recognition problems.
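A sketch of a class-weighted BCE in the spirit of the loss referenced above; the weighting scheme (e.g., weights inversely proportional to class frequency) is an assumption for illustration, as the exact weights are not specified here:

```python
import math

def weighted_bce(y_true, y_prob, pos_weight, eps=1e-7):
    """Class-weighted binary cross-entropy over C classes.

    `pos_weight[i]` up-weights the positive term of class i, so rare
    attributes (few training images) contribute more to the loss.
    With all weights equal to 1 this reduces to the plain BCE.
    """
    total = 0.0
    for y, p, w in zip(y_true, y_prob, pos_weight):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += w * y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(y_true)

# Doubling the positive weight doubles the loss of a missed positive
loss = weighted_bce([1.0], [0.5], [2.0])
```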

4.2 Transfer Learning

To investigate the generalization ability of CNN models learned from the iFashion-Attribute dataset, we compare them against models trained on related databases: DeepFashion [24], ImageNet [4] and Clothes-1M [40]. The capability of the pre-trained models learned from these databases is compared via transfer learning, i.e., fine-tuning each pre-trained model on a new small-scale database. Clothes-50K [40] and DeepFashion [24] are used as the target databases for evaluation. Our goal here is to evaluate the transfer capability of the various datasets, not to investigate new algorithms, CNN architectures or training schemes. Thus we simply apply a widely-used CNN architecture, Inception-BN [13], with regular training and fine-tuning schemes.

4.2.1 Transfer learning on Clothes-50K [40]

We evaluate the transfer ability of CNN models on Clothes-50K, a subset of Clothes-1M [40] with 50,000 manually cleaned and annotated training images from 14 categories. We test the trained models on the validation set, which contains 14,312 images of the same 14 categories. Two groups of experiments are conducted. First, we train Inception-BN models [13] individually on the four databases mentioned above, then use them as pre-trained models and fine-tune each on the Clothes-50K data. Second, we train Inception-BN models on iFashion-Attribute, DeepFashion and ImageNet, and then fine-tune the pre-trained CNNs on Clothes-1M and Clothes-50K sequentially, following previous approaches on Clothes-1M and Clothes-50K [40]. Results on the validation set of Clothes-50K are reported in Table 4.
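The sequential fine-tuning protocol of the second experiment group can be expressed as a simple training plan; the dataset names are those used above, while the function itself is purely illustrative scaffolding:

```python
def finetune_schedule(stages):
    """Build a sequential fine-tuning plan.

    `stages[0]` is the dataset used for pre-training; each later entry
    is fine-tuned starting from the checkpoint produced by the previous
    stage, e.g. iFashion -> Clothes-1M -> Clothes-50K.
    """
    plan = []
    prev = stages[0]  # pre-training dataset yields the first checkpoint
    for data in stages[1:]:
        plan.append({"init_from": prev, "train_on": data})
        prev = data
    return plan

plan = finetune_schedule(["iFashion", "Clothes-1M", "Clothes-50K"])
```

Each entry of `plan` corresponds to one fine-tuning run, initialized from the previous stage's weights.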

Method | Training Data | Pre-train Model | Val Accuracy
Inception-BN | 50K clean | ImageNet | 74.9
Inception-BN | 50K clean | DeepFashion | 76.4
Inception-BN | 50K clean | Clothes-1M | 77.5
Inception-BN | 50K clean | iFashion | 78.9
Inception-BN | 1M + 50K | ImageNet | 78.7
Inception-BN | 1M + 50K | DeepFashion | 78.3
Inception-BN | 1M + 50K | iFashion | 80.5
With new algorithms robust to noise:
Xiao et al. [40] | 1M + 50K | ImageNet | 78.2
CleanNet [18] | 1M + 50K | — | 79.9
Patrini et al. [27] | 1M + 50K | — | 80.4
Table 4: Transfer learning on Clothes-50K, using pre-trained models learned from iFashion-Attribute, ImageNet, DeepFashion and Clothes-1M, with recent results from state-of-the-art algorithms.

As shown in Table 4, when using Clothes-50K as training data, the model pre-trained on iFashion-Attribute obtains the best performance, with an accuracy of 78.9%. Per-class accuracy is compared in Fig. 5, where the iFashion-Attribute pre-trained model has the highest performance in 5 classes. It outperforms the other three pre-trained models by a large margin, particularly ImageNet (74.9% → 78.9%), which includes only a limited number of fashion-related categories. This suggests that, at a similar data scale, our database has stronger transfer capability to fashion-related tasks than the object-centric ImageNet. Both DeepFashion and Clothes-1M are fashion-related databases. iFashion-Attribute has about five times the training images and fashion classes of DeepFashion, leading to performance improvements. Compared to Clothes-1M, our 228 multi-label fashion classes are both significantly larger in number and more complex than its 14 single-label classes, providing more meaningful supervision for training higher-performance CNNs. In addition, Clothes-1M includes a large number of noisy labels and images crawled directly from multiple online shopping websites, which reduces data quality and in turn increases the difficulty of transfer learning with CNNs.

Figure 5: Classification accuracy of 14 categories with three pre-training strategies (ImageNet, 1M-Noisy and iFashion-Attribute) on Clothes-1M dataset. The iFashion-Attribute pre-trained model achieves the best overall performance.

Similarly, in the second group of experiments, the model pre-trained on our database consistently outperforms those from ImageNet and DeepFashion when both Clothes-1M and Clothes-50K are used as training data. The impact of the pre-trained model decreases as the amount of training data grows from 50K to over 1M, which naturally reduces the performance margin between models. Furthermore, as shown in the bottom part of Table 4, our result of 80.5% accuracy is comparable to or better than those of recent deep learning approaches specifically designed to handle the noisy images and labels in Clothes-1M. Our model, in contrast, is empowered by our database and employs only a simple and straightforward training method with an off-the-shelf CNN architecture. Note that designing a new algorithm robust to noisy images and labels is beyond the scope of this work.

4.2.2 Transfer learning on DeepFashion [24]

Method | Top-3 | Top-5
WTBI [7] | 43.7 | 66.3
DARN [12] | 59.5 | 79.6
Yang et al. [42] | 75.3 | 84.9
FashionNet [24] | 82.6 | 90.2
Inception-BN (DeepFashion) | 85.4 | 91.6
Inception-BN (Clothes-1M) | 85.9 | 91.9
Inception-BN (ImageNet) | 87.3 | 92.9
Inception-BN (iFashion) | 88.2 | 93.3
Table 5: Transfer learning on DeepFashion, using pre-trained models learned from iFashion-Attribute, ImageNet and Clothes-1M, with recent results from state-of-the-art methods.

We further evaluate the transfer learning ability of CNN models on fashion category recognition with the DeepFashion database [24]. For the classification task, DeepFashion has 209,222 training images from 46 classes. All images were manually cleaned and annotated. We report our results on the validation set, which contains 40,000 images. As before, Inception-BN is used as the base architecture in our experiments.

We investigate the transfer capability of models pre-trained on three million-scale databases: ImageNet, Clothes-1M and iFashion-Attribute. We first train three Inception-BN models individually on the three databases, and then fine-tune them on DeepFashion. We compare these results with those of training from scratch and with recent results reported in Table 5.

As shown in Table 5, the results are consistent with those on Clothes-50K: (i) all pre-trained models improve over training from scratch; (ii) iFashion-Attribute obtains the best performance in all metrics, demonstrating stronger transfer learning capability than Clothes-1M and ImageNet; (iii) with iFashion-Attribute pre-training, we achieve state-of-the-art results on DeepFashion by simply using an off-the-shelf Inception-BN model. Note that the results of the ImageNet pre-trained model are better than those of the Clothes-1M pre-trained model, which may be due to two reasons. First, the Clothes-1M dataset includes a large amount of noisy images and labels, which may degrade its pre-training quality. We conducted a simple experiment to verify this: we fine-tuned the iFashion-Attribute pre-trained model on the 50K clean images from Clothes-1M, and then ran the model over all training images of Clothes-1M. Comparing the predictions with the provided labels yields an agreement rate of 72.7%, indicating that a large fraction of the training images or labels are inconsistent with the predictions of a model trained on clean data. Second, many of the fashion categories in DeepFashion are included in the 1000 classes of ImageNet. For example, ImageNet has 57 fashion-related categories, 21 of which overlap with DeepFashion. We further investigated the impact of the ImageNet pre-trained model on the overlapping and non-overlapping categories in DeepFashion, and obtained 73.8% and 60.5% Top-1 accuracy, respectively, compared to the 67.6% overall accuracy shown in Table 5. The high-performance results on DeepFashion further confirm the promise of our database.
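The label-consistency check described above amounts to measuring how often the predictions of a model trained on clean data agree with the noisy training labels. A minimal sketch of that measurement (the function name `label_agreement` and the toy inputs are our own, not the authors' code):

```python
def label_agreement(noisy_labels, clean_model_preds):
    """Fraction of noisy dataset labels that match the predictions of a
    model trained on clean data -- a rough proxy for label quality.

    A low agreement rate (e.g. the paper's 72.7% on Clothes-1M) suggests
    that many training labels are inconsistent with the clean model.
    """
    assert len(noisy_labels) == len(clean_model_preds)
    matches = sum(n == p for n, p in zip(noisy_labels, clean_model_preds))
    return matches / len(noisy_labels)

# Toy example: 3 of 4 labels agree with the clean model's predictions.
rate = label_agreement([1, 2, 3, 4], [1, 2, 0, 4])
```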

5 Conclusion

We present the iMaterialist Fashion Attributes dataset (iFashion-Attribute). It is the first known million-scale, expertly curated image dataset with multi-label and fine-grained attributes. Several automated and manual processes were applied to improve label quality. These characteristics make the dataset relevant for real-world applications, particularly in the fashion domain.

The introduction of the iFashion-Attribute dataset allows us to compare different approaches to multi-label learning, for which we provide several baselines with state-of-the-art CNN models. Our experiments show that there is still substantial room for improvement in this space. We also demonstrated the value of iFashion-Attribute for transfer learning, where it outperforms other well-known datasets for pre-training fashion image classification models.
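Multi-label baselines such as those above are typically evaluated with set-overlap metrics, since each image carries several attribute labels (on average 5 in iFashion-Attribute). The sketch below computes per-image F1 between ground-truth and predicted attribute sets; it is our own illustration, not the paper's evaluation protocol.

```python
def example_f1(true_set, pred_set):
    """F1 between the ground-truth and predicted attribute sets
    for a single image (a common per-example multi-label metric)."""
    if not true_set and not pred_set:
        return 1.0  # convention: empty vs. empty counts as perfect
    tp = len(true_set & pred_set)
    if tp == 0:
        return 0.0
    precision = tp / len(pred_set)
    recall = tp / len(true_set)
    return 2 * precision * recall / (precision + recall)

# Toy example: 2 of 3 ground-truth attributes recovered, 1 false positive.
score = example_f1({"red", "floral", "maxi"}, {"floral", "maxi", "silk"})
```

Averaging this score over all test images gives the overall F1 commonly reported for multi-label classification benchmarks.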

In future, we plan to add other annotations like bounding boxes and segmentation masks to enable localization related fashion tasks. Additionally, we plan to study few-shot learning based on the long-tail nature of the dataset.


  • [1] L. Bossard, M. Dantone, C. Leistner, C. Wengert, T. Quack, and L. Van Gool. Apparel classification with style. In ACCV, pages 321–335. Springer, 2012.
  • [2] H. Chen, A. Gallagher, and B. Girod. Describing clothing by semantic attributes. In ECCV, pages 609–623, 2012.
  • [3] T.-S. Chua, J. Tang, R. Hong, H. Li, Z. Luo, and Y.-T. Zheng. NUS-WIDE: A real-world web image database from National University of Singapore. In CIVR, 2009.
  • [4] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and F. Li. ImageNet: A large-scale hierarchical image database. In CVPR, pages 248–255, 2009.
  • [5] M. Everingham, S. A. Eslami, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes challenge: A retrospective. IJCV, 111(1):98–136, 2015.
  • [6] S. Guo, W. Huang, H. Zhang, C. Zhuang, D. Dong, M. R. Scott, and D. Huang. Curriculumnet: Weakly supervised learning from large-scale web images. In ECCV, pages 135–150, 2018.
  • [7] M. Hadi Kiapour, X. Han, S. Lazebnik, A. C. Berg, and T. L. Berg. Where to buy it: Matching street clothing photos in online shops. In ICCV, pages 3343–3351, 2015.
  • [8] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask r-cnn. In ICCV, pages 2980–2988, 2017.
  • [9] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
  • [10] S. Hong, H. Noh, and B. Han. Decoupled deep neural network for semi-supervised semantic segmentation. In NIPS, pages 1495–1503, 2015.
  • [11] J. Hu, L. Shen, and G. Sun. Squeeze-and-excitation networks. In CVPR, 2018.
  • [12] J. Huang, R. S. Feris, Q. Chen, and S. Yan. Cross-domain image retrieval with a dual attribute-aware ranking network. In ICCV, pages 1062–1070, 2015.
  • [13] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015.
  • [14] A. Khosla, N. Jayadevaprakash, B. Yao, and F.-F. Li. Novel dataset for fine-grained image categorization: Stanford dogs. In CVPRW, volume 2, page 1, 2011.
  • [15] J. Krause, M. Stark, J. Deng, and L. Fei-Fei. 3d object representations for fine-grained categorization. In 3DRR, Sydney, Australia, 2013.
  • [16] A. Kuznetsova, H. Rom, N. Alldrin, J. Uijlings, I. Krasin, J. Pont-Tuset, S. Kamali, S. Popov, M. Malloci, T. Duerig, et al. The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. arXiv preprint arXiv:1811.00982, 2018.
  • [17] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR, pages 2169–2178, 2006.
  • [18] K.-H. Lee, X. He, L. Zhang, and L. Yang. Cleannet: Transfer learning for scalable image classifier training with label noise. CoRR, abs/1711.07131, 2017.
  • [19] W. Li, L. Wang, W. Li, E. Agustsson, and L. Van Gool. Webvision database: Visual learning and understanding from web data. CoRR, abs/1708.02862, 2017.
  • [20] Y. Li, Y. Song, and J. Luo. Improving pairwise ranking for multi-label image classification. In CVPR, 2017.
  • [21] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár. Focal loss for dense object detection. PAMI, 2018.
  • [22] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In ECCV, pages 740–755, 2014.
  • [23] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. Ssd: Single shot multibox detector. In ECCV, pages 21–37. Springer, 2016.
  • [24] Z. Liu, P. Luo, S. Qiu, X. Wang, and X. Tang. Deepfashion: Powering robust clothes recognition and retrieval with rich annotations. In CVPR, pages 1096–1104, 2016.
  • [25] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, pages 3431–3440, 2015.
  • [26] S. Maji, E. Rahtu, J. Kannala, M. Blaschko, and A. Vedaldi. Fine-grained visual classification of aircraft. CoRR, abs/1306.5151, 2013.
  • [27] G. Patrini, A. Rozza, A. K. Menon, R. Nock, and L. Qu. Making deep neural networks robust to label noise: A loss correction approach. In CVPR, pages 2233–2241, 2017.
  • [28] A. Quattoni and A. Torralba. Recognizing indoor scenes. In CVPR, pages 413–420. IEEE, 2009.
  • [29] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi. You only look once: Unified, real-time object detection. In CVPR, pages 779–788, 2016.
  • [30] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In NIPS, pages 91–99, 2015.
  • [31] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, pages 1–9, 2015.
  • [32] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In CVPR, pages 2818–2826, 2016.
  • [33] G. Van Horn, O. Mac Aodha, Y. Song, Y. Cui, C. Sun, A. Shepard, H. Adam, P. Perona, and S. Belongie. The inaturalist species classification and detection dataset. In CVPR, 2018.
  • [34] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The caltech-ucsd birds-200-2011 dataset. 2011.
  • [35] J. Wang, Y. Yang, J. Mao, Z. Huang, C. Huang, and W. Xu. Cnn-rnn: A unified framework for multi-label image classification. In CVPR, pages 2285–2294, 2016.
  • [36] Z. Wang, T. Chen, G. Li, R. Xu, and L. Lin. Multi-label image recognition by recurrently discovering attentional regions. In CVPR, pages 464–472, 2017.
  • [37] Y. Wei, W. Xia, M. Lin, J. Huang, B. Ni, J. Dong, Y. Zhao, and S. Yan. Hcp: A flexible cnn framework for multi-label image classification. PAMI, 38(9):1901–1907, 2016.
  • [38] X.-Z. Wu and Z.-H. Zhou. A unified view of multi-label performance measures. CoRR, abs/1609.00288, 2016.
  • [39] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In CVPR, pages 3485–3492. IEEE, 2010.
  • [40] T. Xiao, T. Xia, Y. Yang, C. Huang, and X. Wang. Learning from massive noisy labeled data for image classification. In CVPR, pages 2691–2699, 2015.
  • [41] K. Yamaguchi, M. H. Kiapour, L. E. Ortiz, and T. L. Berg. Parsing clothing in fashion photographs. In CVPR, pages 3570–3577, 2012.
  • [42] Y. Yang and D. Ramanan. Articulated pose estimation with flexible mixtures-of-parts. In CVPR, pages 1385–1392. IEEE, 2011.
  • [43] B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba. Places: A 10 million image database for scene recognition. PAMI, 2017.
  • [44] F. Zhu, H. Li, W. Ouyang, N. Yu, and X. Wang. Learning spatial regularization with image-level supervisions for multi-label image classification. CoRR, abs/1702.05891, 2017.