AFIF^4: Deep Gender Classification based on AdaBoost-based Fusion of Isolated Facial Features and Foggy Faces


Mahmoud Afifi (mafifi@eecs.yorku.com), Abdelrahman Abdelhamed (kamel@eecs.yorku.com)
Department of Electrical Engineering and Computer Science, Lassonde School of Engineering, York University, Canada
Faculty of Computers and Information, Assiut University, Egypt
Abstract

Gender classification aims at recognizing a person’s gender. Despite the high accuracy achieved by state-of-the-art methods for this task, there is still room for improvement in generalized and unrestricted datasets. In this paper, we advocate a new strategy inspired by the behavior of humans in gender recognition. Instead of dealing with the face image as a sole feature, we rely on the combination of isolated facial features and a holistic feature which we call the foggy face. Then, we use these features to train deep convolutional neural networks followed by an AdaBoost-based score fusion to infer the final gender class. We evaluate our method on four challenging datasets to demonstrate its efficacy in achieving better or on-par accuracy with state-of-the-art methods. In addition, we present a new face dataset that intensifies the challenges of occluded faces and illumination changes, which we believe to be a much-needed resource for gender classification research.

Keywords: gender classification, deep convolutional neural networks, face image dataset

1 Introduction

Gender classification is undoubtedly a simple task for humans; however, it is still an active research problem that draws the attention of many researchers in various fields, including computer vision and machine learning, with many applications ng2015review ; jain2011handbook , such as visual surveillance, intelligent human-computer interaction, social media, demographic studies, and augmented reality.

Although human faces are a powerful visual biometric feature, some facial features have semantic structures that may mislead a classification process relying on facial images, as we show later in Section 3. In this paper, we advocate a different strategy to address the gender classification problem by mimicking human behavior in gender recognition. We start by conducting user studies to obtain a good grasp on how facial features can sometimes be unreliable in gender recognition. Then, we show how human behavior in gender recognition can help us decide which facial features are more reliable. To this end, instead of dealing with the raw face images, we extract a few reliable facial features and input each of them to a deep convolutional neural network (CNN) that initially classifies each visual feature as male or female. Finally, we use an AdaBoost-based score fusion to reach the final classification decision based on the prediction score of each separate facial feature (Section 3). To evaluate the efficacy of our method, we apply it to four widely-used gender classification datasets, which reveals that our method achieves better, or at least on-par, results compared with many state-of-the-art methods (Section 4).

With state-of-the-art gender classification methods achieving compelling accuracy on the existing benchmark datasets, we believe that there is a need for datasets focusing on more challenging scenarios. Therefore, in addition to the proposed method, we also propose a new challenging face dataset that focuses mainly on difficult cases such as occluded and badly illuminated faces. We also evaluate our method on this proposed dataset, revealing how it can be more challenging than other existing datasets. The rest of the paper is organized as follows: in Section 2, we provide a quick review of key gender classification methods and discuss some of their common shortcomings. In Section 3, we discuss our proposed method in depth. We then present our proposed face image dataset and the experimental evaluation of our method in Section 4, followed by a brief conclusion in Section 5.

2 Related Work

Vision-based gender classification methods are usually based on extracting features from a given face image and then using these features to train a classifier that outputs the predicted gender. Such methods can be divided into two main categories: 1) geometric-based and 2) appearance-based methods.

The geometric-based techniques extract and utilize geometric features from the given face image to predict the gender. Burton et al. burton1993 presented a gender classification technique that relies on 73 facial points and uses discriminant analysis of point-to-point distances to infer the gender. Hands were used as biometric traits by Amayeh et al. amayeh2008 where the extracted geometric features of different parts of the hand were used for gender discrimination. The main issue with such methods is that they require highly precise extraction of the geometric features to obtain good classification accuracy.

On the other hand, appearance-based methods rely on extracting features from either or both of: i) the whole face image (holistic features) and ii) regions of the face image (local features). Li et al. gender introduced a method based on five individual facial features in addition to the hair and clothing of the person, and then used multiple support vector machine (SVM) classifiers to classify the gender based on the individual features. By combining this elaborated visual information using five different fusion approaches, they improved the classification accuracy even further. Nevertheless, clothing and hair information may mislead the classifier, since the lack of context-based representation can be tricky even for humans.

Combining both geometric-based and appearance-based features, Mozaffari et al. mozaffari2010 presented a technique that extracts both types of features using three alternative methods: the discrete cosine transform, local binary patterns (LBP), and geometrical distance. Tapia et al. tapia2013 proposed a technique that uses fusion of local visual features where the feature selection is based on information theory. They used mutual information to measure the similarity of pixel intensities in order to reduce redundant features. They applied this measure to three different features: pixel intensities, shape, and local image texture described using LBPs.

Rai et al. rai2014 presented a gender classification system that uses local visual information extracted from a region of interest (ROI) determined manually using three selected points. The ROI is divided into grid sub-images that are used to generate a Gabor space by applying a 2D Gabor filter with six orientations to each sub-image, in order to reduce sensitivity to light variations. The reduced feature vector is used as input to an SVM classifier that distinguishes between male and female local features.

Hadid et al. hadid2015 showed the efficiency of LBPs in gender and texture classification. Recently, Castrillón-Santana et al. castrill2016 presented a comparative study among ten (holistic and local) feature descriptors and three fusion strategies for gender classification using two datasets, namely EGA ega and Groups groups . They reported that local salient patterns (LSP) lsp and the histogram of oriented gradients (HOG) hog achieve the best accuracy using holistic visual features, while local phase quantization (LPQ) lpq with an SVM attains the best accuracy using local visual features. Moeini and Mozaffari moeini2017 proposed to learn separate dictionaries of male and female features, extracting 64 feature vectors per face image using LBP. A sparse representation-based classifier is then used in the classification process, reporting state-of-the-art accuracy.

Meanwhile, deep neural networks are becoming increasingly ubiquitous in many classification problems, owing to the remarkable improvements in accuracy they achieve. Levi and Hassner levi2015 classified both gender and age via a simple convolutional neural network (CNN) applied, in a holistic manner, to the Adience benchmark Adience for age and gender classification. A local CNN was used by Mansanet et al. mansanet2016 , where they utilized the Sobel filter to extract local patches while taking into account the location of each patch. To make the trained CNN achieve high accuracy on occluded face images, Juefei-Xu et al. juefeixu2016 utilized multiple levels of blurring to train a deep CNN in a progressive way.

From the above quick review, we can see that most of the prior work in gender classification depended on extracting a large number of local features per image, on hand-crafted features, or on unreliable holistic features. On the contrary, our approach avoids such issues by using only four highly-discriminative local features and one holistic feature, combining these features with the classification power of deep CNNs to achieve state-of-the-art, or at least on-par, gender classification accuracy. As discussed in the introduction, our choice of facial features is mainly based on the findings of the user studies we carried out to understand how humans behave in the gender recognition process. A detailed description of these user studies and our full approach follows in Section 3.

3 Our Methodology

In this section, we will discuss in detail our approach to addressing the gender classification problem. First, we discuss the user studies carried out to help us decide which facial features (local and holistic) are more discriminative for gender classification. Then, we show how we prepare the face patches using a deformable part-based model as proposed by Yu et al. cdsm . After that, we discuss the training of CNNs to classify the facial patches and get initial classification decisions. Finally, we present an adapted AdaBoost-based fusion mechanism of initial classification labels leading to the final classification decision.

3.1 A quick study of human behavior in gender recognition

To study how gender discrimination is performed by humans, we carried out two experiments. For these experiments, we used 200 images from the Labeled Faces in the Wild (LFW) dataset lfw , containing both male and female faces. The ground truth classes (male or female) were based on the attribute classifiers presented by Kumar et al. LFWAttributes .

In the first experiment, we wanted to see how a human decides on the gender of a face image, especially when the facial features are not very discriminative. To do that, we showed 100 images to 5 different volunteers (20 images per volunteer) in two stages. In the first stage, each volunteer was asked to look at 10 different images and classify each one as male or female. Before the second stage, the volunteers were asked to be ready to explain what they would do if they were uncertain about the gender of a face. At the end of the experiment, almost all of the volunteers reported that the first thing they did was to look at the face region and facial features (eyes, nose, ears, etc.); if they could not precisely determine the gender, they looked at the whole image and considered all of the visual information (clothing, hair, accessories, etc.) surrounding the face in order to make their final decision. It is worth noting that the classification accuracy in this experiment was 96%.

Figure 1: Sample questions from our user studies. (a) and (b) are asking the users to guess a male/female image among a set of female/male images using cropped-face images. (c) and (d) are the same questions using foggy-face images. All images are taken from the Labeled Faces in the Wild (LFW) dataset lfw .

From the first experiment, we notice that the visual information surrounding the face in an image is of high importance in classifying the gender of the face, especially when the facial features are ambiguous. That led us to conduct the second experiment, where we wanted to see which is more discriminative in deciding the gender from a human perspective: the facial features or the visual information surrounding the face? To achieve this, we used two types of images: 1) cropped face images that contain only the facial features, and 2) foggy face images that contain the whole visual information surrounding the face with the face region heavily blurred out, so that a volunteer must depend only on the surrounding visual information to decide on the gender. Figure 1 shows some examples of the two types of images. Then, we prepared an on-line subjective test that asks volunteers to guess the image of a male among a set of female images and vice versa, i.e., to guess the image of a female among a set of male images. Each question contained either 5 or 10 images from either the cropped or the foggy face images. There were 70 volunteers who completed this experiment using 100 images. Additionally, each volunteer was asked to comment on which type of face image was easier to classify by gender. As a result, we obtained a classification accuracy of 69.40% for the cropped face images and a higher accuracy of 88.09% for the foggy face images. Also, most of the volunteers reported that the foggy face images were easier to classify than the cropped face images.

From the above experiments, we notice that, besides the high importance of isolated facial features, the visual information from the general look of a person also plays an important role in the classification process, regardless of the visibility of facial features. This was also noticed in prior work by Lian and Lu hair . As a result, we decided to build our approach to gender classification by combining both isolated facial features and the general appearance of the subject, as we will discuss in Section 3.2.

3.2 Facial features preparation

Before extracting the facial features, and to improve the face detection process involved in our approach, we preprocess the images by applying a lighting-invariant enhancement technique, inspired by the work of Han et al. preprocessingFaces . We apply the single scale retinex (SSR) presented by Jobson et al. SSR , in which the lighting-invariant enhanced image (the retinex image) $R$ is generated by subtracting the estimated global illumination from the given face image $I$ in the log space, as in the following equation:

$R(x, y) = \log I(x, y) - \log\left( F(x, y) \ast I(x, y) \right)$ (1)

where $(x, y)$ represents the spatial location of a pixel in the image, $\ast$ denotes convolution, and $F$ is the Gaussian low-pass filter given by

$F(x, y) = K \exp\!\left( -\frac{x^{2} + y^{2}}{c^{2}} \right)$ (2)

in which $K$ is determined such that $\iint F(x, y)\, dx\, dy = 1$ and $c$ is the Gaussian surround constant.
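As a concrete illustration, the SSR step can be sketched in a few lines of NumPy. The separable 1D kernel below is a simplified stand-in for the surround function of Equation 2 (the kernel radius and the default sigma are illustrative choices, not values from the paper):

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    # 1D form of the surround function in Eq. (2); the normalization
    # plays the role of K, making the kernel integrate to 1.
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x ** 2) / sigma ** 2)
    return k / k.sum()

def single_scale_retinex(image, sigma=8.0):
    # Estimate global illumination with a separable Gaussian low-pass
    # filter, then subtract it from the image in log space (Eq. 1).
    img = image.astype(float) + 1.0  # offset to avoid log(0)
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blurred)
    return np.log(img) - np.log(blurred)
```

On a uniform image, the estimated illumination equals the image itself away from the borders, so the retinex response there is zero, which matches the intent of Equation 1.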

In the next step, we scan the input image to fit the cascaded deformable shape model (CDSM) proposed by Yu et al. cdsm in order to detect and extract the face regions. Fitting a CDSM requires the maximization of a scoring function that is based on both local appearance and shape optimization. This score function represents how well the estimated facial landmark positions are aligned with the CDSM. In order to detect multiple faces in a single image (as in the case of the Groups dataset), we repeat the following 2-step procedure: 1) we apply the CDSM to detect a face; 2) we mask out the detected face region by a single-color rectangle and repeat step 1 again on the same image until no more faces are detected. In the case that applying the CDSM on the original image fails, we use the SSR image instead. Finally, we use the estimated landmarks of eyes, nose, and mouth, to extract a separate patch for each visual feature. Figure 2 shows an example of the process of extracting facial patches.
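The detect-and-mask loop for multiple faces can be sketched as follows; `detect_one` stands in for a single fitted CDSM pass (a hypothetical callable, since the actual model is the one by Yu et al. cdsm ):

```python
import numpy as np

def detect_all_faces(image, detect_one, mask_color=0):
    """Repeat: detect a face, then mask it out with a single-color
    rectangle so the next pass finds the remaining faces."""
    img = image.copy()
    faces = []
    while True:
        box = detect_one(img)           # (x0, y0, x1, y1) or None
        if box is None:
            break
        faces.append(box)
        x0, y0, x1, y1 = box
        img[y0:y1, x0:x1] = mask_color  # mask out the detected face region
    return faces
```

The loop terminates when the detector finds no further face, exactly as in the 2-step procedure above; the SSR fallback would wrap this call with the enhanced image when the original fails.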

Figure 2: The process of extracting facial features including the foggy face. (a) is the input image. (b) is the illumination-invariant image generated by the single scale retinex (SSR) method SSR . (c) is the selected facial landmarks from the face detection process by fitting the cascaded deformable shape model (CDSM) cdsm . (d) is the extracted features: eyes, nose, mouth, and foggy face which is generated by applying the Poisson image editing (PIE) technique poisson .

To generate the foggy face images, the face region is first detected using the above-mentioned procedure, then the surrounding facial landmarks are used to define an unknown region $\Omega$ whose content we recover using the Poisson image editing (PIE) formulation poisson :

$\min_{f} \iint_{\Omega} \left| \nabla f \right|^{2} \quad \text{with} \quad f\big|_{\partial\Omega} = f^{*}\big|_{\partial\Omega}$ (3)

where $\nabla f$ is the first derivative (gradient) of the unknown scalar function $f$ over the region $\Omega$ in the given image, and $f^{*}$ is the known image function outside $\Omega$. By omitting the guidance field suggested in poisson , the foggy face image is generated by solving Equation 3. Figure 2d shows an example of a generated foggy face image.
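A minimal numerical sketch of the guidance-free case: with the guidance field omitted, the minimizer of Equation 3 satisfies Laplace's equation inside the masked region, which a simple Jacobi iteration can approximate (the iteration count is an illustrative choice):

```python
import numpy as np

def foggy_region(image, mask, iters=2000):
    """Membrane interpolation: solve Laplace's equation inside `mask`,
    keeping pixels outside the mask as Dirichlet boundary values."""
    f = image.astype(float).copy()
    f[mask] = f[~mask].mean()  # crude initial guess inside the region
    for _ in range(iters):
        # 4-neighbor average; the discrete Laplacian vanishes at convergence
        avg = 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                      np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f[mask] = avg[mask]
    return f
```

Running this per color channel with the face region as the mask smoothly interpolates the surrounding pixels across the face, producing a foggy face like the one in Figure 2d.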

3.3 Facial feature classification using CNNs

Once we have all the facial features ready, we feed each patch as an independent input to a pre-trained CNN dedicated to classifying that biometric trait. In other words, we train four separate CNNs for the four separate facial features: foggy face, eyes, nose, and mouth. We adapted the Caffe reference model, proposed by Jia et al. caffe , to solve the binary classification problem for each visual patch. Our adapted CNN architecture consists of five convolutional layers and three fully-connected layers. The first convolutional layer contains 96 (11×11) convolutional filters and uses a stride of 4 to reduce the computational complexity. The second layer uses a stride of one with 256 (5×5) convolutional filters. The last three convolutional layers use a stride of one with (3×3) filters. The final softmax layer responds with two possible output classes, representing either male or female. Each CNN is trained using stochastic gradient descent and back-propagation back over 1000 iterations.
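As a sanity check on the layer geometry, the spatial output size of each convolution can be computed with the standard formula. The 227×227 input size and the padding values below are assumptions borrowed from the AlexNet-style Caffe reference model, not values stated in this paper:

```python
def conv_out(size, kernel, stride=1, pad=0):
    # Standard convolution output-size formula:
    # floor((size + 2*pad - kernel) / stride) + 1
    return (size + 2 * pad - kernel) // stride + 1

# Hypothetical 227x227 input patch, first two convolutional layers:
s1 = conv_out(227, 11, stride=4)       # conv1: 96 filters, 11x11, stride 4
s2 = conv_out(s1, 5, stride=1, pad=2)  # conv2: 256 filters, 5x5, stride 1
```

The stride of 4 in the first layer shrinks the spatial resolution roughly fourfold, which is the computational saving the text refers to.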

3.4 Classification score fusion

Up to now, we have four separate CNNs that give us independent gender classification scores based on the separate facial features. To fuse these independent classification scores, we apply an AdaBoost-based score fusion mechanism as follows. Let $C_i$ be one of the pre-trained CNNs that gives a decision $d_i \in \{-1, +1\}$ (female or male), and let $s_i$ be the corresponding prediction score given by the softmax function of $C_i$, where $s_i \in [0, 1]$. We use the foggy face score, i.e., the prediction score of the foggy face multiplied by its estimated class, as a holistic score that is combined with every possible subset of the other feature scores. Since we have four more scores (for left eye, right eye, nose, and mouth), we get 15 different combinations of scores, which is the summation given by

$\sum_{k=1}^{n} \binom{n}{k} = 2^{n} - 1 = 15$ (4)

where $n = 4$ is the cardinality of the set of local features. Then, we train 15 AdaBoost classifiers ada using the combination vectors $v_j$, $j \in \{1, \dots, 15\}$, to get the estimated class of each combined vector. For each vector $v_j$, the predicted class is given by

$H(v_j) = \operatorname{sign}\left( \sum_{t=1}^{T} \alpha_t \, h_t(v_j) \right)$ (5)

where $h_t(v_j)$ is the estimated class of the given score vector using the $t$-th weak classifier, $\alpha_t$ is the output weight of that classifier, and $T$ is the number of weak classifiers. Eventually, the 15 suggested labels are combined into a single vector to train a linear discriminant classifier that determines the final class of the face image. An overview of our whole approach is illustrated in Figure 3.
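The combination count of Equation 4 and the weighted vote of Equation 5 can be sketched directly; the weak-classifier outputs and weights used in the usage comment are illustrative placeholders:

```python
import numpy as np
from itertools import combinations

# All non-empty subsets of the four local-feature scores (Eq. 4):
# sum of C(4, k) for k = 1..4 equals 2^4 - 1 = 15.
local_features = ["left_eye", "right_eye", "nose", "mouth"]
combos = [c for k in range(1, len(local_features) + 1)
          for c in combinations(local_features, k)]

def adaboost_decide(weak_classes, alphas):
    # Weighted vote of the weak classifiers (Eq. 5):
    # sign of the alpha-weighted sum of weak decisions in {-1, +1}.
    return int(np.sign(np.dot(alphas, weak_classes)))
```

Each of the 15 score vectors (a subset of the local scores concatenated with the holistic foggy face score) gets its own AdaBoost classifier; the 15 resulting labels then feed the final linear discriminant.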

Figure 3: An overview of our whole approach for gender classification. (a) the input image. (b) the extracted facial features (foggy face, eyes, nose, and mouth patches). (c) initial classification scores from 4 pre-trained convolutional neural networks (CNNs). (d) intermediate classification decisions based on 15 weak classifiers, one classifier for each combination from the initial scores. (e) the final classification decision from a linear discriminant classifier.

In this section, we have discussed our approach to the gender classification problem, from deciding on the features we use to the final classification decision. In Section 4, we present experimental results that evaluate our approach and compare it against other state-of-the-art methods, showing its high performance and efficacy.

4 Experimental Results

In this section, we demonstrate the efficacy of our approach through extensive evaluation on four widely-used benchmark datasets. Additionally, we evaluate our approach on our new challenging dataset, the Specs on Faces (SoF) dataset.

4.1 Evaluation Setup

In our evaluation procedure, we use five-fold cross validation on all datasets and report the mean accuracy. As the suggested folds of some datasets contain an unbalanced number of male and female images, we randomly sample from the larger group so that both genders are represented by the same number of images. We train the deep CNNs using fixed-size feature patches extracted from 75% of the training set. The AdaBoost classifiers are then trained on 60% of the remaining training set using the prediction scores of the pre-trained CNNs. At the final stage of training, the fusion classifier is trained using the classes estimated by the AdaBoost classifiers over the rest of the training set. Eventually, we test the entire algorithm on the testing fold.

As CNNs are usually susceptible to overfitting when trained on a limited number of images, we enlarged the number of training images tenfold by generating more images through the set of operations depicted in Figure 4. For each training image, we apply a translation of 5 pixels along each of the four border sides, then horizontally flip each of the four translated images and the original image. The empty pixels produced by these operations are filled with the mean of the original training image to maintain the equilibrium of the original means over the training set.
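The 10× augmentation described above can be sketched as follows (grayscale patches are assumed for brevity; the helper names are ours):

```python
import numpy as np

def translate(patch, dy, dx, fill):
    # Shift the patch by (dy, dx); pixels exposed by the shift are
    # filled with the patch mean, as described in the text.
    h, w = patch.shape
    out = np.full((h, w), fill, dtype=float)
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        patch[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out

def augment(patch, shift=5):
    """Original + 4 translations, each with its horizontal flip: 10 images."""
    fill = patch.mean()
    base = [patch.astype(float)] + [
        translate(patch, dy, dx, fill)
        for dy, dx in [(shift, 0), (-shift, 0), (0, shift), (0, -shift)]]
    return base + [b[:, ::-1] for b in base]
```

Filling with the patch mean (rather than zeros) keeps each augmented image's mean close to the original's, which is the equilibrium property mentioned above.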

Figure 4: An example of how we enlarge the number of training images by applying small translations to the original image along the four sides, then horizontally flipping the four translated images and the original image. The shown image is an eye patch from the proposed Specs on Faces dataset.

4.2 Datasets

We have evaluated our method against four challenging datasets: the Labeled Faces in the Wild (LFW) lfw , the Images of Groups dataset groups , the Adience benchmark for age and gender classification Adience , and the Face Recognition Technology dataset (FERET) FERET . Furthermore, we present a new dataset of challenging face images and use it to evaluate our method. The proposed dataset is denoted as the Specs on Faces (SoF) dataset.

Labeled Faces in the Wild (LFW). The LFW dataset lfw consists of 13,233 unconstrained face images (250×250 pixels) of 5,749 different persons (4,272 males and 1,477 females). To label the images by gender, we used the attribute values presented by Kumar et al. LFWAttributes . Each descriptive visual attribute is represented as a real value $a$, where the magnitude of $a$ represents the degree of the attribute and the sign of $a$ represents the category. For the gender attribute, a positive sign refers to a male image and a negative sign to a female image. It is worth noting that there is a reported error rate (approximately 8.62%) in this classification. A straightforward way to assign each face image its gender label is to apply a threshold based on the sign of the gender attribute. However, some images have gender attributes that lie near the decision boundary (e.g., 0.3), which leads to incorrect labels. To handle that, we added another layer of separation for images whose magnitude values are less than a threshold (e.g., 0.5). For those images, we used the genderize.io API (https://genderize.io/) to estimate the gender based on the first name of each face image in the LFW. Eventually, we manually reviewed each category of male and female images three times to completely eliminate any incorrect labels. We have made this accurate labeling of the LFW dataset available online (http://bit.ly/lfw-gender). In our experiments, we used 2,948 images from the LFW dataset (590 images on average for each fold).
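The two-stage labeling rule above can be sketched as follows; the 0.5 margin mirrors the example threshold in the text, and the function name is ours:

```python
def gender_from_attribute(a, margin=0.5):
    """Sign of the gender attribute gives the label; low-magnitude
    values are deferred to the name-based check and manual review."""
    if abs(a) < margin:
        return "uncertain"  # resolved via genderize.io + manual review
    return "male" if a > 0 else "female"
```

Only the "uncertain" images go through the extra name-based and manual stages, which keeps the additional labeling effort small.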

Images of Groups dataset. The Groups dataset groups contains 28,231 face images extracted from 5,080 group images collected from Flickr. The Groups dataset is considered, in the literature, the most challenging and complex dataset for the gender classification problem onsoft ; FRVT ; ref10 . The experiments were carried out using 12,682 face images (2,536 images on average for each fold).

Adience benchmark for age and gender classification. The Adience benchmark Adience comprises 26,580 unconstrained face images gathered from Flickr albums for 2,284 persons. The images include people of different ages with different head poses under various illumination conditions. The folds used in the experiments were picked randomly, with each fold containing 970 images on average.

Face Recognition Technology (FERET). The FERET dataset FERET is widely used to develop and evaluate facial recognition techniques. The dataset consists of 14,126 images of 1,199 different persons captured between 1993 and 1996, with a variety of face poses, facial expressions, and lighting conditions. In 2003, the high-resolution (512×768 pixels) color FERET was released, which is the version used in the presented experiments. The total number of frontal and near-frontal face images is 5,786 (3,816 male images and 1,970 female images). We evaluated our approach using 5 folds of the FERET dataset (700 images were randomly picked for each fold).

The Specs on Faces (SoF) dataset. Since two of the main problems in gender classification are face occlusions and illumination changes gender ; rai2014 , we present a new dataset, the Specs on Faces (SoF) dataset (http://bit.ly/sof_dataset), that is devoted to these two problems. We made the proposed dataset challenging for face detection, recognition, and classification by capturing faces under harsh illumination environments and face occlusions. The SoF comprises 2,662 original images of size 640×480 pixels for 112 persons (66 males and 46 females) of different ages. Eyeglasses are the common natural occlusion in all images of the dataset. In addition, we added two more synthetic occlusions, for the nose and mouth, to each image.

The original images in the proposed SoF dataset are divided into two parts. The first part contains 757 unconstrained face images in the wild for 106 different persons whose head orientations approximately fall within a limited range of yaw, pitch, and roll. The images were captured in an unstructured manner over a long period in several locations under indoor and outdoor illumination environments. The second part contains 1,905 images dedicated to challenging harsh illumination changes. In order to get arbitrary indoor lighting conditions, 12 subjects were captured using a wheel-whirled lamp as the only light source in the laboratory. The lamp is located above and spun around each subject to emit light rays in random directions; see Figure 5 for an example. This idea is inspired by the primitive version of the Light Stage system presented by Debevec et al. lightstage . The SoF dataset includes handcrafted metadata that contains, for each subject, the subject ID, a view (frontal/near-frontal) label, 17 facial feature points, face and glasses rectangles, gender and age labels, illumination quality, and facial emotion; see Figure 6 for an example of the metadata.

Figure 5: Sample images of the same person from the Specs on Faces (SoF) dataset captured under different lighting directions.
Figure 6: A sample of the Specs on Faces (SoF) dataset. The lower part shows a metadata example for the shown image. The green circles represent the 17 facial landmarks, the white rectangle is the glasses rectangle, and the yellow one is the face rectangle.

Moreover, to generate more challenging synthetic images, we applied three image filters (Gaussian noise, Gaussian smoothing, and image posterization using fuzzy logic) to the original images. All the generated images are categorized into three levels of difficulty (easy, medium, and hard). That enlarges the number of images to 42,592 (26,112 male images and 16,480 female images). Furthermore, the dataset comes with metadata that describes each subject from different aspects. Figure 7 shows sample images from the two parts of the dataset, original images and synthetic images.

Figure 7: The Specs on Faces (SoF) dataset contains two groups of images. The first part includes the original images. The second part contains the original images besides the generated images. The last three columns show the three levels of difficulty (easy, medium, and hard). The rows from the first to the fifth represent the generated images by applying Gaussian smoothing, Gaussian noise, posterization filter, nose occlusion, and mouth occlusion, respectively.

We carried out two groups of experiments using the SoF dataset. In the first group, we randomly picked 5 folds, each containing 330 original images, i.e., without any filters or synthetic occlusions. In the second group, we randomly picked the folds from all images of the dataset, original and synthetic, with each fold containing 750 images. In the following, we briefly discuss the results of our facial feature detection mechanism, followed by a more thorough discussion of the gender classification results achieved by our method.

4.3 Facial feature detection

In spite of the non-frontal view of many images in the datasets, the facial components are extracted reliably. As our target is to extract facial patches rather than the exact facial points, some error in the alignment of the CDSM is tolerable. In addition, SSR helps improve the feature detection by catching undetected faces, as shown in Table 1. As the Face Detection Data Set and Benchmark (FDDB) FDDB is devoted to face detection research, it was used to calculate the recall, precision, and F-Measure of the CDSM with and without SSR. The FDDB contains 2,845 images capturing 5,171 faces. As shown in Table 1, the F-Measure on the FDDB dataset is improved by about 2% after using SSR as an optional preprocessing step. Also, SSR improves the F-Measure on the SoF (original) and SoF (full) datasets by about 3% and 12%, respectively.

Dataset CDSM w/o SSR CDSM w SSR
Recall Precision F-Measure Recall Precision F-Measure
FDDB 73.40 99.15 84.35 77.99 97.07 86.49
SoF (original) 84.08 95.30 89.34 92.40 93.00 92.70
SoF (full) 58.86 95.24 72.76 79.66 90.21 84.61
Table 1: The recall (%), precision (%), and F-Measure (%) of the face detection process using cascaded deformable shape model (CDSM) with and without single scale retinex (SSR).
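For reference, the F-Measure column follows from the usual harmonic mean of precision and recall; plugging in the FDDB rows above reproduces the table values:

```python
def f_measure(recall, precision):
    # Harmonic mean of precision and recall (both in percent).
    return 2 * precision * recall / (precision + recall)

fddb_without_ssr = f_measure(73.40, 99.15)  # matches the 84.35 in Table 1
fddb_with_ssr = f_measure(77.99, 97.07)     # matches the 86.49 in Table 1
```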

4.4 Gender classification accuracy

We have applied our proposed method (AFIF^4) to unconstrained types of face images, i.e., frontal images, near-frontal images, non-frontal images, and images with large poses and occlusions. In the literature, many gender classification methods are applied only to frontal or near-frontal face images moeini2015real ; mery2015automatic ; tapia2013 ; shan2012learning ; baluja2007boosting . For the sake of a fair comparison, we report only results of methods using unconstrained types of images moeini2017 ; hadid2015 ; levi2015cnn ; van2015deep ; han2015demographic ; rai2014 ; Adience ; groups and omit results of methods using only frontal or near-frontal face images. From the work by Moeini and Mozaffari moeini2017 , we report the results of two methods: dictionary learning and separate dictionary learning for gender classification, denoted as DL-GC and SDL-GC, respectively. Only for the FERET dataset, because we used frontal and near-frontal images, do we report results of methods using frontal/near-frontal images. Table 2 shows that the accuracy of our method outperforms the state-of-the-art results reported for unconstrained types of face images from the LFW, Groups, and Adience datasets. Also, our method achieves accuracy comparable with the state of the art on the FERET dataset for frontal/near-frontal face images.

Method Accuracy (%)
LFW Groups Adience FERET
DL-GC moeini2017 93.60 84.40 99.50
SDL-GC moeini2017 94.90 83.30 99.90
Hadid et al. hadid2015 89.85
Eidinger et al. Adience 86.80 76.10
Han et al. han2015demographic 94.40
Rai and Khanna rai2014 89.10 98.40
Gallagher and Chen groups 74.10
Levi and Hassner levi2015cnn 86.80
Wolfshaar et al. van2015deep 87.20
Tapia and Pérez tapia2013 99.10
AFIF (Ours) 95.98 90.73 90.59 99.49
Table 2: Comparison of our method (AFIF) with the state-of-the-art accuracy reported over the LFW, Groups, Adience, and FERET datasets. Note that the accuracies reported here are for methods applied to unconstrained images (frontal, near-frontal, and images with large poses and occlusions); results for methods using only frontal and/or near-frontal images are omitted for fair comparison. The cells marked with a ‘–’ represent unavailable results or results from methods using only frontal/near-frontal images. Only for the FERET dataset do we report results of methods using only frontal/near-frontal images, because we followed the same procedure.

Cross-dataset Evaluation. To further assess the performance of our method, we carried out a cross-dataset evaluation, shown in Table 3. The accuracy attained when the same dataset is used for both training and testing usually drops noticeably when different datasets are used for training and testing. From Table 3, we can see that the lowest cross-dataset classification accuracy is obtained with the full SoF dataset (in 3 cases), the Adience dataset (in 2 cases), and the Groups dataset (in 1 case). This indicates that our full SoF dataset is the most challenging. The low accuracy obtained on the full SoF dataset, compared to the other datasets, is due to the challenging filters and synthetic occlusions added to the original images. It is also worth noting that the highest cross-dataset accuracy is obtained with the FERET dataset, owing to the good quality of its images compared with the poor resolution of many images in the other datasets.

Training \ Testing | LFW     Adience  SoF (full)  SoF (original)  Groups  FERET
LFW                | 95.98   79.19    65.76       78.36           76.67   92.71
Adience            | 84.55   90.59    74.15       79.97           85.07   86.86
SoF (full)         | 79.22   74.16    92.10       97.21           72.30   84.77
SoF (original)     | 83.31   71.09    84.05       98.48           73.60   89.12
Groups             | 91.74   83.06    69.80       82.20           90.73   92.20
FERET              | 85.78   69.19    75.15       83.27           85.67   99.49
Table 3: Results of the cross-dataset evaluation of our proposed method (AFIF), with accuracy in %. The diagonal represents the average accuracy obtained when the same dataset is used for both training and testing. Rows represent the datasets used for training, and columns represent the datasets used for testing. For each training dataset, the lowest accuracy in its row marks the most challenging testing dataset; for example, in the first row, when LFW is used for training, the full SoF dataset yields the lowest accuracy, meaning it is the most challenging in this case.
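The cross-dataset protocol behind Table 3 is a train-on-one, test-on-all loop. The sketch below illustrates only this bookkeeping, using synthetic data and a toy nearest-centroid classifier in place of the paper's CNN-plus-fusion pipeline; the dataset names, per-dataset shifts, and classifier are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dataset_names = ["A", "B", "C"]  # stand-ins for LFW, Adience, FERET, ...

def make_dataset(shift):
    """Synthetic two-class data; `shift` mimics dataset-specific capture
    conditions (illumination, occlusion, ...) that hurt cross-dataset transfer."""
    X = np.vstack([rng.normal(0.0, 1.0, (50, 8)),
                   rng.normal(3.0, 1.0, (50, 8))]) + shift
    y = np.array([0] * 50 + [1] * 50)
    return X, y

splits = {name: make_dataset(shift=0.4 * i) for i, name in enumerate(dataset_names)}

def fit_centroids(X, y):
    # toy classifier: one centroid per class
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_centroids(model, X):
    classes = sorted(model)
    dists = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.asarray(classes)[dists.argmin(axis=0)]

# rows: training dataset; columns: testing dataset (as in Table 3)
accuracy = {}
for train_name in dataset_names:
    model = fit_centroids(*splits[train_name])
    for test_name in dataset_names:
        X, y = splits[test_name]
        accuracy[(train_name, test_name)] = float(
            (predict_centroids(model, X) == y).mean())
```

The diagonal entries of `accuracy` correspond to in-dataset evaluation and are typically the highest in each row, mirroring the drop observed in Table 3 when training and testing datasets differ.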

To assess our cross-dataset performance against the state of the art, we compare our results in Table 4 with the latest results reported by Moeini and Mozaffari moeini2017 for the LFW, Groups, and FERET datasets. Our method gives higher accuracy in all cases except when both training and testing on the FERET dataset. It is worth noting that cross-dataset evaluation usually yields lower accuracy than using the same dataset for both training and testing, mainly due to the different conditions under which the images of each dataset were collected, such as occlusions, illumination changes, and backgrounds.

Training | Testing | DL-GC moeini2017 | SDL-GC moeini2017 | AFIF (Ours)
LFW      | LFW     | 93.60            | 94.90             | 95.98
LFW      | Groups  | 69.70            | 72.40             | 76.67
LFW      | FERET   | 88.80            | 89.70             | 92.71
Groups   | LFW     | 80.60            | 83.10             | 91.74
Groups   | Groups  | 83.30            | 84.40             | 90.73
Groups   | FERET   | 85.20            | 87.10             | 92.20
FERET    | LFW     | 71.70            | 73.50             | 85.78
FERET    | Groups  | 59.50            | 61.90             | 85.67
FERET    | FERET   | 99.50            | 99.90             | 99.49
Table 4: Comparison of cross-dataset evaluation of our proposed method (AFIF) against the state-of-the-art results reported by Moeini and Mozaffari moeini2017 . The values represent the classification accuracy (%).

5 Conclusion

In this paper, we addressed the gender classification problem using a combination of local and holistic features extracted from face images. We used four deep convolutional neural networks (CNNs) to classify the individual features separately, then applied an AdaBoost-based score fusion mechanism to aggregate the prediction scores obtained from the CNNs. Through extensive experiments, we showed that our method achieves better results than state-of-the-art methods in most cases on widely-used datasets. More importantly, we showed that our method outperforms the state of the art when generalized to cross-dataset evaluation, which is much more challenging than in-dataset evaluation. Furthermore, we proposed a more challenging dataset of 42,592 face images that mainly addresses the challenges of face occlusion and illumination variation. We accompanied the proposed dataset with handcrafted annotations and gender labels for all images to facilitate further research on the gender classification problem.
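The AdaBoost-based score fusion summarized above can be sketched as follows. This is a minimal discrete AdaBoost over decision stumps operating on per-feature prediction scores; the four-column score matrix mirrors the four feature CNNs, but the synthetic data and the stump weak learner are our own simplifications for illustration, not the paper's exact fusion implementation.

```python
import numpy as np

def train_adaboost(scores, labels, n_rounds=10):
    """Discrete AdaBoost over decision stumps.
    scores: (n_samples, n_features) matrix, one column of prediction
    scores per feature classifier; labels in {-1, +1}."""
    n, d = scores.shape
    w = np.full(n, 1.0 / n)  # sample weights
    learners = []
    for _ in range(n_rounds):
        best = None
        # exhaustively pick the stump (feature, threshold, sign) with lowest weighted error
        for j in range(d):
            for thr in np.unique(scores[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(scores[:, j] >= thr, 1, -1)
                    err = w[pred != labels].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)  # weak-learner weight
        pred = sign * np.where(scores[:, j] >= thr, 1, -1)
        w *= np.exp(-alpha * labels * pred)    # up-weight misclassified samples
        w /= w.sum()
        learners.append((alpha, j, thr, sign))
    return learners

def fuse_scores(learners, scores):
    """Weighted vote of the stumps: the fused class decision in {-1, +1}."""
    agg = np.zeros(scores.shape[0])
    for alpha, j, thr, sign in learners:
        agg += alpha * sign * np.where(scores[:, j] >= thr, 1, -1)
    return np.where(agg >= 0, 1, -1)
```

Training the fusion on held-out score vectors lets the boosting weights favor the more reliable facial features, which is the intuition behind fusing isolated features rather than trusting the whole face image alone.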

References

  • (1) C.-B. Ng, Y.-H. Tay, B.-M. Goi, A review of facial gender recognition, Pattern Analysis and Applications 18 (4) (2015) 739–755.
  • (2) A. K. Jain, S. Z. Li, Handbook of face recognition, Springer, 2011.
  • (3) A. M. Burton, V. Bruce, N. Dench, What’s the difference between men and women? evidence from facial measurement, Perception 22 (2) (1993) 153–176.
  • (4) G. Amayeh, G. Bebis, M. Nicolescu, Gender classification from hand shape, in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, IEEE, 2008, pp. 1–7.
  • (5) B. Li, X.-C. Lian, B.-L. Lu, Gender classification by combining clothing, hair and facial component classifiers, Neurocomputing 76 (1) (2012) 18–27.
  • (6) S. Mozaffari, H. Behravan, R. Akbari, Gender classification using single frontal image per person: combination of appearance and geometric based features, in: International Conference on Pattern Recognition (ICPR), IEEE, 2010, pp. 1192–1195.
  • (7) J. E. Tapia, C. A. Pérez, Gender classification based on fusion of different spatial scale features selected by mutual information from histogram of LBP, intensity, and shape, IEEE Transactions on Information Forensics and Security 8 (3) (2013) 488–499.
  • (8) P. Rai, P. Khanna, A gender classification system robust to occlusion using Gabor features based (2D)²PCA, Journal of Visual Communication and Image Representation 25 (5) (2014) 1118–1129.
  • (9) A. Hadid, J. Ylioinas, M. Bengherabi, M. Ghahramani, A. Taleb-Ahmed, Gender and texture classification: A comparative analysis using 13 variants of local binary patterns, Pattern Recognition Letters 68 (2015) 231–238.
  • (10) M. Castrillón-Santana, M. De Marsico, M. Nappi, D. Riccio, MEG: Texture operators for multi-expert gender classification, Computer Vision and Image Understanding.
  • (11) D. Riccio, G. Tortora, M. De Marsico, H. Wechsler, EGA—ethnicity, gender and age, a pre-annotated face database, in: IEEE Workshop on Biometric Measurements and Systems for Security and Medical Applications (BIOMS), IEEE, 2012, pp. 1–8.
  • (12) A. C. Gallagher, T. Chen, Understanding images of groups of people, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009, pp. 256–263.
  • (13) Z. Chai, Z. Sun, T. Tan, H. Mendez-Vazquez, Local salient patterns—a novel local descriptor for face recognition, in: International Conference on Biometrics (ICB), IEEE, 2013, pp. 1–6.
  • (14) N. Dalal, B. Triggs, Histograms of oriented gradients for human detection, in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 1, IEEE, 2005, pp. 886–893.
  • (15) V. Ojansivu, J. Heikkilä, Blur insensitive texture classification using local phase quantization, in: International conference on image and signal processing, Springer, 2008, pp. 236–243.
  • (16) H. Moeini, S. Mozaffari, Gender dictionary learning for gender classification, Journal of Visual Communication and Image Representation 42 (2017) 1 – 13. doi:http://dx.doi.org/10.1016/j.jvcir.2016.11.002.
  • (17) G. Levi, T. Hassner, Age and gender classification using convolutional neural networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2015, pp. 34–42.
  • (18) E. Eidinger, R. Enbar, T. Hassner, Age and gender estimation of unfiltered faces, IEEE Transactions on Information Forensics and Security 9 (12) (2014) 2170–2179.
  • (19) J. Mansanet, A. Albiol, R. Paredes, Local deep neural networks for gender recognition, Pattern Recognition Letters 70 (2016) 80–86.
  • (20) F. Juefei-Xu, E. Verma, P. Goel, A. Cherodian, M. Savvides, Deepgender: Occlusion and low resolution robust facial gender classification via progressively trained convolutional neural networks with attention, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2016, pp. 68–77.
  • (21) X. Yu, J. Huang, S. Zhang, W. Yan, D. N. Metaxas, Pose-free facial landmark fitting via optimized part mixtures and cascaded deformable shape model, in: Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 1944–1951.
  • (22) G. B. Huang, M. Ramesh, T. Berg, E. Learned-Miller, Labeled faces in the wild: A database for studying face recognition in unconstrained environments, Tech. rep., Technical Report 07-49, University of Massachusetts, Amherst (2007).
  • (23) N. Kumar, A. Berg, P. N. Belhumeur, S. Nayar, Describable visual attributes for face verification and image search, IEEE Transactions on Pattern Analysis and Machine Intelligence 33 (10) (2011) 1962–1977.
  • (24) X.-C. Lian, B.-L. Lu, Gender classification by combining facial and hair information, in: International Conference on Neural Information Processing, Springer, 2008, pp. 647–654.
  • (25) H. Han, S. Shan, X. Chen, W. Gao, A comparative study on illumination preprocessing in face recognition, Pattern Recognition 46 (6) (2013) 1691–1699.
  • (26) D. J. Jobson, Z.-u. Rahman, G. A. Woodell, Properties and performance of a center/surround retinex, IEEE Transactions on Image Processing 6 (3) (1997) 451–462.
  • (27) P. Pérez, M. Gangnet, A. Blake, Poisson image editing, in: ACM Transactions on Graphics (TOG), Vol. 22, ACM, 2003, pp. 313–318.
  • (28) Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, T. Darrell, Caffe: Convolutional architecture for fast feature embedding, in: Proceedings of the 22nd ACM international conference on Multimedia, ACM, 2014, pp. 675–678.
  • (29) D. E. Rumelhart, G. E. Hinton, R. J. Williams, Learning representations by back-propagating errors, Cognitive modeling 5 (3) (1988) 1.
  • (30) Y. Freund, R. E. Schapire, et al., Experiments with a new boosting algorithm, in: ICML, Vol. 96, 1996, pp. 148–156.
  • (31) P. J. Phillips, H. Wechsler, J. Huang, P. J. Rauss, The FERET database and evaluation procedure for face-recognition algorithms, Image and Vision Computing 16 (5) (1998) 295–306.
  • (32) M. S. Nixon, P. L. Correia, K. Nasrollahi, T. B. Moeslund, A. Hadid, M. Tistarelli, On soft biometrics, Pattern Recognition Letters 68 (2015) 218–230.
  • (33) M. Ngan, P. Grother, Face recognition vendor test (FRVT) performance of automated gender classification algorithms, in: Technical Report NIST IR 8052, National Institute of Standards and Technology, 2015.
  • (34) M. Castrillón-Santana, J. Lorenzo-Navarro, E. Ramón-Balmaseda, Multi-scale score level fusion of local descriptors for gender classification in the wild, Multimedia Tools and Applications (2016) 1–17.
  • (35) P. Debevec, T. Hawkins, C. Tchou, H.-P. Duiker, W. Sarokin, M. Sagar, Acquiring the reflectance field of a human face, in: Proceedings of the 27th annual conference on Computer graphics and interactive techniques, ACM Press/Addison-Wesley Publishing Co., 2000, pp. 145–156.
  • (36) V. Jain, E. Learned-Miller, FDDB: A benchmark for face detection in unconstrained settings, Tech. Rep. UM-CS-2010-009, University of Massachusetts, Amherst (2010).
  • (37) A. Moeini, K. Faez, H. Moeini, Real-world gender classification via local Gabor binary pattern and three-dimensional face reconstruction by generic elastic model, IET Image Processing 9 (8) (2015) 690–698.
  • (38) D. Mery, K. Bowyer, Automatic facial attribute analysis via adaptive sparse representation of random patches, Pattern Recognition Letters 68 (2015) 260–269.
  • (39) C. Shan, Learning local binary patterns for gender classification on real-world face images, Pattern Recognition Letters 33 (4) (2012) 431–437.
  • (40) S. Baluja, H. A. Rowley, Boosting sex identification performance, International Journal of Computer Vision 71 (1) (2007) 111–119.
  • (41) G. Levi, T. Hassner, Age and gender classification using convolutional neural networks, in: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2015.
  • (42) J. van de Wolfshaar, M. F. Karaaba, M. A. Wiering, Deep convolutional neural networks and support vector machines for gender recognition, in: IEEE Symposium Series on Computational Intelligence, IEEE, 2015, pp. 188–195.
  • (43) H. Han, C. Otto, X. Liu, A. K. Jain, Demographic estimation from face images: Human vs. machine performance, IEEE Transactions on Pattern Analysis and Machine Intelligence 37 (6) (2015) 1148–1161.