# Anisotropic Diffusion-based Kernel Matrix Model for Face Liveness Detection

###### Abstract

Facial recognition and verification is a widely used biometric technology in security systems. Unfortunately, face biometrics is vulnerable to spoofing attacks using photographs or videos. In this paper, we present an anisotropic diffusion-based kernel matrix model (ADKMM) for face liveness detection to prevent face spoofing attacks. We use anisotropic diffusion to enhance the edges and boundary locations of a face image, and the kernel matrix model to extract face image features, which we call diffusion-kernel (D-K) features. The D-K features reflect the inner correlation of the face image sequence. We introduce convolution neural networks to extract deep features and then employ a generalized multiple kernel learning method to fuse the D-K features and the deep features to achieve better performance. Our experimental evaluation on two publicly available datasets shows that the proposed method outperforms state-of-the-art face liveness detection methods.

## 1 Introduction

Face recognition and verification [21, 23] have become the most popular technologies in high-level security systems because face biometrics is natural, intuitive, and less invasive to humans. Unfortunately, face biometrics is vulnerable to spoofing attacks using photographs or videos of the actual user. Attackers can attempt to hack a security system by using printed photos, mimic masks, or screenshots, and they can also use captured or downloaded video sequences containing facial gestures, such as eye blinking, of the valid user to invade the security system. To mitigate this problem, many researchers have devoted much effort to face liveness detection based on image quality [24, 29], spectrum [10, 30], and motion information such as eye blinking [18], mouth movement [11], and head pose [3]. Recently, diffusion methods have been applied to face liveness detection [9, 1]; they can estimate the difference in surface properties between live and fake faces and achieve remarkable performance. However, detecting face liveness against spoofing attacks still remains a big challenge.

In this paper, we present an anisotropic diffusion-based kernel matrix model (ADKMM) for face liveness detection to prevent face spoofing attacks. The ADKMM can accurately estimate the difference in surface properties and inner correlation between live and fake face images. Figure 1 illustrates the overview of our method. The anisotropic diffusion is used to enhance the edges and boundary locations of a face image sequence, and the kernel matrix model is used to extract face features from the sequence which reflect the inner correlation of the sequence. We call these features the diffusion-kernel (D-K) features. To achieve better performance against the spoofing attack, we also extract the deep features using deep convolution neural networks, and then utilize a generalized multiple kernel learning method to fuse the D-K features and the deep features. The deep features can work well with D-K features by providing complementary information.

By jointly considering these two kinds of feature relationships, we can effectively capture the differences in both illumination characteristics and the inner correlation of face images. The experimental results on publicly available datasets demonstrate that our method provides reliable face liveness detection performance.

The main contributions of this paper can be summarized as follows:

We present an anisotropic diffusion-based kernel matrix model (ADKMM) that can accurately estimate the difference in surface properties and inner correlation between live and fake face images to extract diffusion-kernel (D-K) features for face liveness detection.

We utilize a generalized multiple kernel learning method to fuse the D-K features and the deep features extracted by deep convolution neural networks to get better performance against the spoofing attack.

Our method achieves an impressive accuracy on the publicly available datasets and outperforms state-of-the-art face liveness detection methods.

## 2 Related Work

Many methods of face liveness detection are based on the analysis of a single image. These methods assume that fake faces tend to lose more information in the imaging system and thus yield a lower-quality image under the same capturing conditions. Li et al. [15] proposed to analyze the coefficients of the Fourier transform, since the reflections of light on 2D and 3D surfaces result in different frequency distributions. For example, fake faces are mostly captured twice by the camera, so their high-frequency components differ from those of real faces. Zhang et al. [29] attempted to extract frequency information using multiple DoG filters to detect the liveness of the captured face image. Tan et al. [24] and Peixoto et al. [19] combined the DoG filter with other models to extract efficient features from the input image and improve liveness detection performance. Maatta et al. [16] extracted the micro-texture of the input image using the multi-scale local binary pattern (LBP); based on these micro-textures, they used an SVM classifier to detect face liveness. Kim et al. [9] calculated the diffusion speed of a single image, used a local speed model to extract features, and input them into a linear SVM classifier to distinguish fake faces from real ones. Alotaibi et al. [1] used nonlinear diffusion to detect edges in the input image and utilized the diffused image to detect face liveness with convolution neural networks.

Motion-based approaches are another common class of face liveness detection methods, which aim at detecting the subconscious responses of a face. Given an image sequence, these methods attempt to capture facial responses such as eye blinking, mouth movement, and head pose, and then exploit spatial and temporal features. Pan et al. [18] detected eye blinking behavior using an undirected conditional graphical framework; a discriminative measure of eye states is also incorporated into the framework to detect face liveness. Singh et al. [22] applied a Haar classifier and distinguished fake faces from real ones by detecting eye and mouth movements. Anjos et al. [2] detected motion correlations, obtained from optical flow, between the user's head and the background regions that indicate a spoofing attack. Tirunagari et al. [25] used the dynamic mode decomposition (DMD) algorithm to capture the face from the input video and extract dynamic visual information to detect spoofing attacks. Bharadwaj et al. [5] proposed a method to detect face liveness using a configuration of LBP and motion estimation to extract facial features. Since multiple input frames are required to track the face, dynamic approaches usually cost more detection time and computing resources. Besides, some dynamic methods require users to follow instructions, which makes for an inconvenient user experience.

Different from existing methods, our work focuses on the difference in surface properties and inner correlation between live and fake face images. We present an anisotropic diffusion-based kernel matrix model (ADKMM) for face liveness detection to prevent face spoofing attacks. The anisotropic diffusion method helps to distinguish fake faces from real ones by diffusing the input image to enhance the difference in illumination characteristics; from the diffused images, we can obtain depth information and the boundary locations of face images. The D-K features extracted by the kernel matrix model significantly reflect the inner correlation of face image sequences and thus lead to better classification.

## 3 Proposed Method

Our method uses anisotropic diffusion to enhance the edges and boundary locations of a face image, and the kernel matrix model to extract the diffusion-kernel (D-K) features. To achieve better performance against spoofing attacks, we also extract deep features with deep convolution neural networks and utilize a generalized multiple kernel learning method to fuse the D-K features and the deep features.

### 3.1 Anisotropic Diffusion

The anisotropic diffusion method is used to diffuse the input video or images to enhance edge information. Some examples of diffused images are shown in Figure 2. Consider the anisotropic diffusion equation [20]

$$I_t = \mathrm{div}\big(c(x, y, t)\,\nabla I\big) = c(x, y, t)\,\Delta I + \nabla c \cdot \nabla I, \tag{1}$$

where $\mathrm{div}$ represents the divergence operator, and $\nabla$ and $\Delta$ represent the gradient and Laplacian operators, respectively. $I(x, y, 0) = I_0$ is the original image, and $\sigma$ is the variance of the Gaussian kernel $G_\sigma$. Eq.(1) reduces to the isotropic heat diffusion equation $I_t = c\,\Delta I$ if the conduction coefficient $c(x, y, t)$ is a constant. Suppose that at time (scale) $t$ we knew the locations of the region boundaries, or edges, for that scale. We would want to encourage smoothing within each region in preference to smoothing across the boundaries. This could be achieved by setting the conduction coefficient to be 1 in the interior of each region and 0 at the boundaries. The blurring would then take place separately in each region with no interaction between regions, and the region boundaries would remain sharp.

The next task is to localize the region boundaries at each scale. Perona et al. [20] compute the best estimate of the location of the boundaries appropriate to that scale. Let $E(x, y, t)$ be such an estimate; the conduction coefficient can then be chosen as a function of the magnitude of $E$: $\|E\| = 0$ means the point is in the interior of a region, while $\|E\| > 0$ means the point is at an edge. In particular, if the function $g(\cdot)$ is chosen properly, the diffusion in which the conduction coefficient is chosen locally as a function of the magnitude of the gradient of the brightness function, i.e.,

$$c(x, y, t) = g\big(\|\nabla I(x, y, t)\|\big), \tag{2}$$

will not only preserve, but also sharpen the brightness edges.

#### 3.1.1 Edge Enhancement

The blurring of edges is the main price paid for eliminating noise with conventional low-pass filtering and linear diffusion, and it makes the region boundaries difficult to detect and localize. Edge enhancement and the reconstruction of blurry images can be obtained by high-pass filtering or by running the diffusion equation backwards in time. If the conduction coefficient is chosen to be an appropriate function of the image gradient, we can make the anisotropic diffusion enhance edges while running forward in time, thus ensuring the stability of the diffusion.

The model treats an edge as a step function convolved with a Gaussian kernel. Without loss of generality, assume that the edge is aligned with the $y$ axis.

The expression for the divergence operator then simplifies to

$$\mathrm{div}\big(c(I_x)\,\nabla I\big) = \frac{\partial}{\partial x}\big(c(I_x)\,I_x\big). \tag{3}$$

We choose $c$ to be a function of the gradient of $I$ as in Eq.(2), and let $\Phi(I_x) = c(I_x)\,I_x$ denote the flux. Then the 1-D version of Eq.(1) is

$$I_t = \frac{\partial}{\partial x}\,\Phi(I_x) = \Phi'(I_x)\,I_{xx}. \tag{4}$$

The variation of the edge slope with time is $\frac{\partial}{\partial t}(I_x)$. If $c(\cdot) > 0$, the function $I(\cdot)$ is smooth, and the order of differentiation may be inverted:

$$\frac{\partial}{\partial t}(I_x) = \frac{\partial}{\partial x}(I_t) = \frac{\partial}{\partial x}\Big(\frac{\partial}{\partial x}\,\Phi(I_x)\Big) = \Phi''\,I_{xx}^{2} + \Phi'\,I_{xxx}. \tag{5}$$

Suppose the edge is oriented so that $I_x > 0$. At the point of inflection, $I_{xx} = 0$ and $I_{xxx} \ll 0$, since the point of inflection corresponds to the point with maximum slope. Then, in a neighborhood of the point of inflection, $\frac{\partial}{\partial t}(I_x)$ has a sign opposite to that of $\Phi'(I_x)$: if $\Phi'(I_x) > 0$, the edge slope will decrease with time; on the contrary, if $\Phi'(I_x) < 0$, the slope will increase with time. As the slope increases, the edge becomes sharper.

#### 3.1.2 Diffusion Scheme

The anisotropic diffusion and edge detection method [20] we utilize is a simple numerical scheme, which is described in this section.

Eq.(1) can be discretized on a square lattice, with brightness values associated to the vertices and conduction coefficients to the arcs. A 4-nearest-neighbors discretization of the Laplacian operator can be given by

$$I_{i,j}^{t+1} = I_{i,j}^{t} + \lambda\big[c_N \cdot \nabla_N I + c_S \cdot \nabla_S I + c_E \cdot \nabla_E I + c_W \cdot \nabla_W I\big]_{i,j}^{t}, \tag{6}$$

where $0 \le \lambda \le 1/4$ for the numerical scheme to be stable, and $N$, $S$, $E$, $W$ are the mnemonic subscripts for North, South, East, and West. The symbol $\nabla$ here indicates nearest-neighbor differences:

$$\begin{aligned} \nabla_N I_{i,j} &= I_{i-1,j} - I_{i,j}, & \nabla_S I_{i,j} &= I_{i+1,j} - I_{i,j}, \\ \nabla_E I_{i,j} &= I_{i,j+1} - I_{i,j}, & \nabla_W I_{i,j} &= I_{i,j-1} - I_{i,j}. \end{aligned} \tag{7}$$

If the difference in some direction is large, the point may lie at an edge, and we should preserve and sharpen the edge information there. Figure 3 shows the overall scheme for detecting the edges and preserving them.

The conduction coefficients are computed from the gradient values as

$$c_{N_{i,j}}^{t} = g\big(\big\|\nabla_N I_{i,j}^{t}\big\|\big), \quad c_{S_{i,j}}^{t} = g\big(\big\|\nabla_S I_{i,j}^{t}\big\|\big), \quad c_{E_{i,j}}^{t} = g\big(\big\|\nabla_E I_{i,j}^{t}\big\|\big), \quad c_{W_{i,j}}^{t} = g\big(\big\|\nabla_W I_{i,j}^{t}\big\|\big). \tag{8}$$

The larger the gradient magnitude in some direction, the smaller the value of the conduction coefficient there, which preserves and sharpens the boundary locations.

In this scheme, the conduction tensor in the diffusion equation is diagonal, with its entries evaluated at the midpoints between neighboring pixels rather than at the pixels themselves. This diffusion scheme preserves the property of the continuous Eq.(1) that the total amount of brightness in the image is conserved.
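To make the scheme concrete, the update of Eqs.(6)-(8) can be sketched in a few lines of NumPy. This is an illustrative sketch rather than our exact implementation: the conduction function $g(x) = 1/(1 + (x/K)^2)$ is one of the classic Perona-Malik choices, the boundary handling wraps around via `np.roll` (a padded version would replicate edges instead), and the parameter defaults are assumptions.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=15, K=15.0, lam=0.25):
    """Illustrative 4-neighbor Perona-Malik scheme (Eqs. 6-8).

    K is the edge-stopping constant in g(x) = 1/(1 + (x/K)^2);
    lam must lie in [0, 1/4] for the explicit scheme to be stable.
    """
    I = img.astype(np.float64).copy()

    def g(d):
        # Conduction coefficient from the neighbor difference (Eq. 8).
        return 1.0 / (1.0 + (np.abs(d) / K) ** 2)

    for _ in range(n_iter):
        # Nearest-neighbor differences (Eq. 7); np.roll wraps at the borders.
        dN = np.roll(I, 1, axis=0) - I
        dS = np.roll(I, -1, axis=0) - I
        dE = np.roll(I, -1, axis=1) - I
        dW = np.roll(I, 1, axis=1) - I
        # Discrete update (Eq. 6): small gradients diffuse, large ones do not.
        I = I + lam * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return I
```

Running this on a noisy step image smooths the flat regions while the step itself, whose large gradient drives $g$ toward zero, stays sharp; the total brightness is conserved exactly, matching the property noted above.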

### 3.2 Kernel Matrix

The kernel matrix has recently received increasing attention as a generic feature representation in various recognition and classification tasks. For a large set of kernel functions, the kernel matrix is guaranteed to be nonsingular, even if samples are scarce. More importantly, the kernel matrix gives us great flexibility to model nonlinear feature relationships in an efficient manner.

#### 3.2.1 Feature Representation

We use the kernel matrix, $M$, as a generic feature representation [27]. The $(i, j)$th entry of $M$ is defined as

$$M_{ij} = \kappa(\mathbf{f}_i, \mathbf{f}_j) = \big\langle \phi(\mathbf{f}_i), \phi(\mathbf{f}_j) \big\rangle, \tag{9}$$

where $\phi(\cdot)$ is an implicit nonlinear mapping and $\kappa(\cdot, \cdot)$ is the induced kernel function. The mapping is applied to each feature $\mathbf{f}_i$, rather than to each sample as usually seen in kernel-based learning methods. The most significant advantage of using $M$ is that we have much more flexibility to efficiently model the nonlinear relationship among features.

We can evaluate the similarity of feature distributions with specific kernels such as the Bhattacharyya kernel [12]. When we do not know beforehand what kind of nonlinear relationship is to be modeled, we can apply a general-purpose kernel, such as the Gaussian RBF kernel

$$\kappa(\mathbf{f}_i, \mathbf{f}_j) = \exp\big(-\|\mathbf{f}_i - \mathbf{f}_j\|^{2} / (2\sigma^{2})\big). \tag{10}$$

Also, once it becomes necessary, users are free to design new, specific kernels to serve their goals. Using a kernel matrix as the feature representation thus gives great flexibility to model the relationship between different features.
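As an illustration, the $d \times d$ kernel-matrix representation of Eqs.(9)-(10) can be computed as follows; the function name and the row-wise feature layout are our own conventions for this sketch.

```python
import numpy as np

def kernel_matrix(F, sigma=1.0):
    """Sketch of the kernel-matrix representation M (Eqs. 9-10).

    F is a d x n matrix whose d ROWS are features observed over n samples;
    M[i, j] applies a Gaussian RBF kernel to feature rows i and j.
    """
    sq = np.sum(F ** 2, axis=1)                     # ||f_i||^2 for each row
    d2 = sq[:, None] - 2.0 * F @ F.T + sq[None, :]  # pairwise squared distances
    return np.exp(-d2 / (2.0 * sigma ** 2))         # d x d kernel matrix
```

Note that $M$ is symmetric with unit diagonal, since each feature has zero distance to itself.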

In relation to the singularity issue, the kernel matrix also has its advantages. When $n < d$, where $d$ is the number of features and $n$ is the dimension of each feature vector, a feature representation such as the covariance matrix is bound to be singular. In contrast, the kernel matrix handles this situation well. A direct application of Micchelli's Theorem [8] gives the following result for this case.

Theorem 1. Let $\mathbf{f}_1, \mathbf{f}_2, \ldots, \mathbf{f}_d$ be a set of distinct $n$-dimensional vectors. The matrix computed on them with an RBF kernel is guaranteed to be nonsingular, no matter what values $d$ and $n$ take.

According to Micchelli's Theorem, the RBF kernel satisfies the conditions of the above theorem, which ensures the nonsingularity of the D-K features. Using a kernel matrix as the feature representation thus gives us great freedom to choose the most appropriate kernel.

#### 3.2.2 Kernel Function

To reduce the computational complexity, we use the widely adopted RBF kernel of Eq.(10) as the kernel of our matrix for its superior properties. Given $n$-dimensional feature vectors $\mathbf{f}_1, \ldots, \mathbf{f}_d$, computing all the $d^2$ entries has a complexity of $O(nd^2)$. In addition, the proposed kernel representation based on the RBF kernel can be computed quickly via integral images: noting that $\|\mathbf{f}_i - \mathbf{f}_j\|^{2} = \mathbf{f}_i^{\top}\mathbf{f}_i - 2\,\mathbf{f}_i^{\top}\mathbf{f}_j + \mathbf{f}_j^{\top}\mathbf{f}_j$, we can precompute integral images for the inner product of any two feature dimensions.

Generally, the availability of more samples makes kernel evaluation more reliable: in the RBF kernel function, more samples make the parameters converge towards their true values, although in practice we are constrained by the number of available training samples. Also, the proposed kernel matrix has a fixed size of $d \times d$, independent of the number of samples in a set. Due to this, the kernel-based representations obtained from two different-sized sets can be directly compared.
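The inner-product expansion that makes this precomputation possible can be checked numerically (a quick sketch with arbitrary random vectors):

```python
import numpy as np

# Numerical check of the expansion behind the fast kernel evaluation:
# ||f_i - f_j||^2 = <f_i, f_i> - 2 <f_i, f_j> + <f_j, f_j>,
# which lets all pairwise inner products be precomputed (e.g. via
# integral images) instead of recomputing distances from scratch.
rng = np.random.default_rng(0)
fi = rng.standard_normal(16)
fj = rng.standard_normal(16)
lhs = np.sum((fi - fj) ** 2)
rhs = fi @ fi - 2.0 * (fi @ fj) + fj @ fj
assert np.isclose(lhs, rhs)
```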

### 3.3 Diffusion-kernel (D-K) Features

The ADKMM includes two processes. First, the anisotropic diffusion method diffuses the input face video clip or images to enhance edge information. After several diffusion iterations, the edges of the face images are preserved and become sharper. From the diffused video clip or images, we can obtain more depth information and boundary locations of the face images.

Next, we extract D-K features from the diffused face video clip. As we use the RBF kernel of Eq.(10) as the kernel function, our model is defined as

$$M_{ij} = \exp\big(-\|\mathbf{f}_i - \mathbf{f}_j\|^{2} / (2\sigma^{2})\big). \tag{11}$$

We vectorize the pixel values of each frame of the diffused clip as a column vector, so that the video clip is represented as a $d \times n$ matrix, where $d$ is the dimension of the vectorized images and $n$ is the number of frames. Since this dimension is so high that it would cost huge computing resources and time, we reduce the dimensionality of every frame. After the dimensionality reduction, the matrix representing the diffused face video clip has a much lower row dimension. We then input this low-dimensional matrix into the model and obtain a D-K feature. The D-K features can effectively guide the model to distinguish fake face images from real ones, since they reflect the inner correlation of sequential face images.
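Putting the pieces together, one plausible reading of this pipeline is sketched below. The truncated SVD stands in for the unspecified dimensionality-reduction step, and the function name, the target dimension `k`, and `sigma` are our illustrative assumptions.

```python
import numpy as np

def dk_feature(frames, k=32, sigma=1.0):
    """Illustrative D-K feature pipeline, under our reading of Sec. 3.3.

    frames: list of 2-D diffused frames. Each frame is vectorized into a
    column of a d x n matrix, reduced to k x n via a truncated SVD (one
    plausible choice; the paper does not fix the reduction method), and
    the k x k RBF kernel matrix of Eq.(11) is returned as the feature.
    """
    F = np.stack([f.ravel() for f in frames], axis=1)  # d x n, one column per frame
    U, s, Vt = np.linalg.svd(F, full_matrices=False)
    R = s[:k, None] * Vt[:k]                           # k x n reduced representation
    sq = np.sum(R ** 2, axis=1)
    d2 = sq[:, None] - 2.0 * R @ R.T + sq[None, :]     # pairwise squared distances
    return np.exp(-d2 / (2.0 * sigma ** 2))            # k x k D-K feature matrix
```

For a 10-frame clip of 8 x 8 diffused frames and k = 4, this returns a symmetric 4 x 4 kernel matrix as the D-K feature.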

### 3.4 Deep Features

Deep learning algorithms have been successfully applied to several vision tasks such as face detection [7] and face recognition [14]. CNNs are designed to extract local features by combining three architectural concepts that provide some degree of shift, scale, and distortion invariance: local receptive fields, shared weights, and subsampling. The ability of both the convolution layers and the subsampling layers to learn distinctive features from the diffused image helps to extract features and achieve better performance for face liveness detection.

The CNN is pre-trained on several datasets which together contain more than 500,000 images of 80 clients to obtain good initializations. The pre-trained model we used is AlexNet [13], for its impressive performance. AlexNet contains convolutional layers, normalization layers, linear layers, ReLU activation layers, and max-pooling layers. For simplicity, we use L1-5 to denote the 5 convolutional layers and L6-8 to denote the 3 linear layers. Layers L3-5 are connected to one another without any intervening pooling or normalization layers. The fully-connected layers L6 and L7 have 4096 neurons each and output features of dimension 4096, while the dimensionality of the features in L8 is 1000. L8 is followed by a softmax classifier to generate a probability distribution for classification. Previous studies [13, 17] show that the 4096-dimensional features of L7 perform better than many handcrafted features. In our network, the layers L1-7 are used as the feature extractor, and we use the 4096-dimensional features of L7 as the deep features.

As mentioned above, whether the input is a face image or a video clip, the ADKMM can diffuse it and extract D-K features from it. When the input is a video clip, we randomly select one frame to represent the whole clip, and we assume that the deep feature of this frame can represent the deep feature of the input video clip.

### 3.5 Generalized Multiple Kernel Learning

Multiple kernel learning refers to a set of machine learning methods that use a predefined set of kernels and learn an optimal linear or nonlinear combination of kernels as part of the algorithm. Multiple kernel learning can select an optimal kernel and its parameters from a larger set of kernels, reducing bias due to kernel selection while allowing for more automated machine learning methods. Instead of creating a new kernel, multiple kernel algorithms can combine kernels already established for each individual data source.

Given one kind of training features (the D-K features) and the other kind (the deep features), generalized multiple kernel learning is used to fuse the two kinds of training features and train binary classifiers for face liveness detection. The decision function is given by

$$F(\mathbf{x}) = \mathbf{w}^{\top}\phi(\mathbf{x}; d_1, d_2) + b, \tag{12}$$

where $d_1$ and $d_2$ are the combination coefficients of the two kinds of features, with the constraints $d_1 + d_2 = 1$ and $d_1, d_2 \ge 0$; $\mathbf{w}$ and $b$ are the parameters of the standard SVM; and $\phi$ is a function mapping the two kinds of features into a high-dimensional space. $\mathbf{w}$, $b$, $d_1$, and $d_2$ are learned by solving

$$\min_{d_1, d_2}\ \min_{\mathbf{w}, b}\ \frac{1}{2}\|\mathbf{w}\|^{2} + C\sum_{i}\ell\big(y_i, F(\mathbf{x}_i)\big), \tag{13}$$

where $\ell(\cdot, \cdot)$ is the loss function and $y_i$ is the label of the $i$-th training sample. Similar to [26], Eq.(13) can be reformulated by replacing the SVM with its dual form:

$$\min_{d_1, d_2}\ \max_{\boldsymbol{\alpha}}\ \mathbf{1}^{\top}\boldsymbol{\alpha} - \frac{1}{2}\,\boldsymbol{\alpha}^{\top}\big(Y K Y\big)\boldsymbol{\alpha}, \tag{14}$$

where

$$K = d_1 K_1 + d_2 K_2, \tag{15}$$

$\boldsymbol{\alpha}$ is the dual variable, $Y$ is the diagonal matrix of training labels, and $K_1$ and $K_2$ are the kernel matrices computed on the two kinds of training features, respectively. Here, the RBF kernel function $\kappa(\mathbf{x}_i, \mathbf{x}_j) = \exp(-\gamma\|\mathbf{x}_i - \mathbf{x}_j\|^{2})$ and the linear kernel function $\kappa(\mathbf{x}_i, \mathbf{x}_j) = \mathbf{x}_i^{\top}\mathbf{x}_j$ are used, where $\gamma$ is the kernel parameter. Following [26], Eq.(13) is solved by iteratively updating the combination coefficients $d_1, d_2$ and the dual variable $\boldsymbol{\alpha}$.
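A minimal sketch of the fused kernel follows, assuming an RBF kernel on the D-K features and a linear kernel on the deep features; the equal weights and the value of `gamma` are illustrative defaults, and the alternating optimization over the combination coefficients and the dual variable is omitted.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    """RBF kernel matrix over the rows of X."""
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] - 2.0 * X @ X.T + sq[None, :]))

def fused_kernel(X_dk, X_deep, d1=0.5, d2=0.5, gamma=0.5):
    """Convex combination d1*K1 + d2*K2 of the two base kernels (Eq. 15).

    X_dk and X_deep hold one row per training sample (D-K features and
    deep features, respectively); d1 + d2 = 1 with d1, d2 >= 0.
    """
    assert abs(d1 + d2 - 1.0) < 1e-9 and d1 >= 0.0 and d2 >= 0.0
    K1 = rbf_kernel(X_dk, gamma)   # RBF kernel on D-K features
    K2 = X_deep @ X_deep.T         # linear kernel on deep features
    return d1 * K1 + d2 * K2
```

Since both base kernels are positive semidefinite, so is any convex combination of them, which keeps the dual problem of Eq.(14) well posed.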

## 4 Experimental Results

In this section, we perform an extensive experimental evaluation on various datasets to validate the effectiveness and the superiority of our method. We first introduce two benchmark datasets: the NUAA dataset and the Replay-Attack dataset. Then, we give the detailed description of parameter choices for the ADKMM and generalized multiple kernel learning method. Finally, we compare our method with a number of face liveness detection methods and demonstrate the outstanding performance of the proposed method.

### 4.1 Datasets

NUAA: The NUAA dataset [24] is publicly available and the most widely adopted benchmark for the evaluation of face liveness detection. It contains 15 different clients, who provide 12,614 images of both live and photographed faces. The images were resized to a fixed resolution with gray-scale representation. Some samples of the NUAA dataset are shown in Figure 4. For the training set, a total of 3,491 images (live: 1,743 / fake: 1,748) were selected, while the testing set was composed of 9,123 images (live: 3,362 / fake: 5,761).

Replay-Attack: The Replay-Attack dataset [6] was released in 2012 and is publicly available and widely used. It consists of 1,300 video clips of 50 different subjects, divided into 300 real-access videos and 1,000 spoofing-attack videos recorded at a fixed resolution. The dataset takes into consideration the different lighting conditions used in spoofing attacks. Some samples of the Replay-Attack dataset are shown in Figure 5. Note that the Replay-Attack dataset is divided into three subsets: training, development, and testing.

Table 1: Detection accuracy on the NUAA dataset with different iteration numbers.

| Iteration number | Accuracy |
| --- | --- |
| 5 | 94.2% |
| 10 | 98.7% |
| 15 | 99.3% |
| 20 | 96.5% |
| 25 | 93.1% |
| 30 | 90.6% |

### 4.2 Parameter Settings

All parameters of our method were found experimentally and remain unchanged across all datasets. In our anisotropic diffusion scheme, the constant in the gradient-dependent conduction function $g(\cdot)$ was fixed at 15, and the step size $\lambda$ in Eq.(6) was fixed as well. In our generalized multiple kernel learning method, the combination coefficients $d_1$ and $d_2$ are given fixed initial values. Since face liveness detection is a binary classification problem, the parameter $\gamma$ in the RBF kernel function is also fixed.

### 4.3 Performance Evaluation on the NUAA Dataset

We evaluated the performance of our approach on the NUAA dataset, conducting experiments with different iteration numbers for Eq.(6), as shown in Table 1. The best detection accuracy achieved on the NUAA dataset was 99.3%, obtained with 15 iterations. From Table 1, we can see that increasing the number of iterations does not always lead to higher accuracy: for example, 10 iterations resulted in an accuracy of 98.7%, while 25 iterations resulted in an accuracy of 93.1%.

To prove the effectiveness and superiority of the proposed method, we compared the performance of our approach with all previously proposed approaches on the NUAA dataset.

The compared approaches include multiple difference of Gaussian (DoG-M) [29], DoG and high frequency-based (DoG-F) [15], DoG-sparse low-rank bilinear logistic regression (DoG-LRBLR) [24], multiple local binary pattern (M-LBP) [16], DoG-sparse logistic (DoG-SL) [19], component-dependent descriptor (CDD) [28], diffusion speed-local speed pattern (DS-LSP) [9], and nonlinear diffusion based convolution neural network (ND-CNN) [1]. Owing to the strong performance of the ADKMM, our method can model the differences between live and fake faces efficiently. As shown in Table 2, our method achieves the best performance, with an accuracy of 99.3%, surpassing the previous approaches.

### 4.4 Performance Evaluation on the Replay-Attack Dataset

We now describe our performance evaluation on the Replay-Attack dataset, which is designed specifically for face spoofing studies and contains diverse spoofing attacks. For our ADKMM, we employed the iteration number that achieved the best performance on the NUAA dataset (15 iterations). Besides training and testing samples, the Replay-Attack dataset also provides development samples to efficiently evaluate the performance of anti-spoofing methods. To accurately measure the performance on the Replay-Attack dataset, we computed the half total error rate (HTER) [4], which is half of the sum of the false rejection rate (FRR) and the false acceptance rate (FAR):

$$\mathrm{HTER} = \frac{\mathrm{FRR} + \mathrm{FAR}}{2}.$$
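In code, the HTER computation is a one-liner over raw error counts (the variable names here are ours):

```python
def hter(false_rejects, n_genuine, false_accepts, n_attacks):
    """Half total error rate: the mean of FRR and FAR."""
    frr = false_rejects / n_genuine   # false rejection rate
    far = false_accepts / n_attacks   # false acceptance rate
    return (frr + far) / 2.0
```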

A performance comparison with previously proposed methods is shown in Table 3. On the Replay-Attack test set, the HTER of our method is 4.30%, and on the Replay-Attack development set it is 5.16%. From Table 3, we can see that the HTER of our method is better than that of the LBP-based methods [6, 16], which indicates that the ADKMM can effectively estimate the difference in surface properties between live and fake face images. Our result is also better than that of other diffusion-based methods [9, 1], which indicates that the D-K features also reflect the difference in inner correlation between face image sequences. The detailed results in Table 3 confirm that the proposed method with ADKMM achieves an impressive accuracy under various types of spoofing attacks as compared to previous approaches.

## 5 Conclusions

In this paper, we have presented an anisotropic diffusion-based kernel matrix model (ADKMM) for face liveness detection. The anisotropic diffusion method enhances the edge information and boundary locations of face images. The diffusion-kernel (D-K) features extracted from these images significantly represent the differences in surface properties and inner correlation between live and fake face images. The D-K features and the deep features are fused by a generalized multiple kernel learning method, which achieves excellent performance against face spoofing attacks. The ADKMM allows us to detect face liveness even under the different lighting conditions usually used in attack attempts. Experimental comparisons with other face liveness detection methods show the superiority and outstanding performance of our approach.

## References

- [1] A. Alotaibi and A. Mahmood. Deep face liveness detection based on nonlinear diffusion using convolution neural network. Signal, Image and Video Processing, pages 1–8, 2016.
- [2] A. Anjos, M. M. Chakka, and S. Marcel. Motion-based counter-measures to photo attacks in face recognition. IET biometrics, 3(3):147–158, 2014.
- [3] W. Bao, H. Li, N. Li, and W. Jiang. A liveness detection method for face recognition based on optical flow field. In International Conference on Image Analysis and Signal Processing, pages 233–236. IEEE, 2009.
- [4] S. Bengio and J. Mariéthoz. A statistical significance test for person authentication. In Proceedings of Odyssey 2004: The Speaker and Language Recognition Workshop, number EPFL-CONF-83049, 2004.
- [5] S. Bharadwaj, T. I. Dhamecha, M. Vatsa, and R. Singh. Computationally efficient face spoofing detection with motion magnification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 105–110, 2013.
- [6] I. Chingovska, A. Anjos, and S. Marcel. On the effectiveness of local binary patterns in face anti-spoofing. In BIOSIG-Proceedings of the International Conference of the Biometrics Special Interest Group (BIOSIG), pages 1–7. IEEE, 2012.
- [7] C. Garcia and M. Delakis. Convolutional face finder: A neural architecture for fast and robust face detection. IEEE Transactions on pattern analysis and machine intelligence, 26(11):1408–1423, 2004.
- [8] S. Haykin. Neural Networks: A Comprehensive Foundation. Prentice Hall, 2nd edition, 1999.
- [9] W. Kim, S. Suh, and J.-J. Han. Face liveness detection from a single image via diffusion speed model. IEEE transactions on Image processing, 24(8):2456–2465, 2015.
- [10] Y. Kim, J. Na, S. Yoon, and J. Yi. Masked fake face detection using radiance measurements. JOSA A, 26(4):760–766, 2009.
- [11] K. Kollreider, H. Fronthaler, M. I. Faraj, and J. Bigun. Real-time face detection and motion analysis with application in "liveness" assessment. IEEE Transactions on Information Forensics and Security, 2(3):548–558, 2007.
- [12] R. Kondor and T. Jebara. A kernel between sets of vectors. In ICML, volume 20, page 361, 2003.
- [13] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
- [14] S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. Back. Face recognition: A convolutional neural-network approach. IEEE transactions on neural networks, 8(1):98–113, 1997.
- [15] J. Li, Y. Wang, T. Tan, and A. K. Jain. Live face detection based on the analysis of fourier spectra. In Defense and Security, pages 296–303. International Society for Optics and Photonics, 2004.
- [16] J. Määttä, A. Hadid, and M. Pietikäinen. Face spoofing detection from single images using micro-texture analysis. In International Joint Conference on Biometrics (IJCB), pages 1–7. IEEE, 2011.
- [17] M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Learning and transferring mid-level image representations using convolutional neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1717–1724, 2014.
- [18] G. Pan, L. Sun, Z. Wu, and S. Lao. Eyeblink-based anti-spoofing in face recognition from a generic webcamera. In IEEE 11th International Conference on Computer Vision, pages 1–8. IEEE, 2007.
- [19] B. Peixoto, C. Michelassi, and A. Rocha. Face liveness detection under bad illumination conditions. In 18th IEEE International Conference on Image Processing (ICIP), pages 3557–3560. IEEE, 2011.
- [20] P. Perona and J. Malik. Scale-space and edge detection using anisotropic diffusion. IEEE Transactions on pattern analysis and machine intelligence, 12(7):629–639, 1990.
- [21] F. Schroff, D. Kalenichenko, and J. Philbin. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 815–823, 2015.
- [22] A. K. Singh, P. Joshi, and G. C. Nandi. Face recognition with liveness detection using eye and mouth movement. In International Conference on Signal Propagation and Computer Technology (ICSPCT), pages 592–597. IEEE, 2014.
- [23] Y. Sun, Y. Chen, X. Wang, and X. Tang. Deep learning face representation by joint identification-verification. In Advances in neural information processing systems, pages 1988–1996, 2014.
- [24] X. Tan, Y. Li, J. Liu, and L. Jiang. Face liveness detection from a single image with sparse low rank bilinear discriminative model. In European Conference on Computer Vision, pages 504–517. Springer, 2010.
- [25] S. Tirunagari, N. Poh, D. Windridge, A. Iorliam, N. Suki, and A. T. Ho. Detection of face spoofing using visual dynamics. IEEE transactions on information forensics and security, 10(4):762–777, 2015.
- [26] M. Varma and B. R. Babu. More generality in efficient multiple kernel learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1065–1072. ACM, 2009.
- [27] L. Wang, J. Zhang, L. Zhou, C. Tang, and W. Li. Beyond covariance: Feature representation with nonlinear kernel matrices. In Proceedings of the IEEE International Conference on Computer Vision, pages 4570–4578, 2015.
- [28] J. Yang, Z. Lei, S. Liao, and S. Z. Li. Face liveness detection with component dependent descriptor. In International Conference on Biometrics (ICB), pages 1–6. IEEE, 2013.
- [29] Z. Zhang, J. Yan, S. Liu, Z. Lei, D. Yi, and S. Z. Li. A face antispoofing database with diverse attacks. In 5th IAPR international conference on Biometrics (ICB), pages 26–31. IEEE, 2012.
- [30] Z. Zhang, D. Yi, Z. Lei, and S. Z. Li. Face liveness detection by learning multispectral reflectance distributions. In IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG), pages 436–441. IEEE, 2011.