A Benchmark for Iris Location and a Deep Learning Detector Evaluation


Evair Severo1, Rayson Laroca1, Cides Bezerra1, Luiz Antonio Zanlorensi Junior1,
Daniel Weingaertner1, Gladston Moreira2 and David Menotti1
1Postgraduate Program in Informatics, Federal University of Paraná (UFPR), Curitiba, Paraná, Brazil
2Computing Department, Federal University of Ouro Preto (UFOP), Ouro Preto, Minas Gerais, Brazil

Email: {ebsevero, rblsantos, csbezerra, lazjunior, daniel, menotti}@inf.ufpr.br   gladston@iceb.ufop.br
Abstract

The iris is considered the biometric trait with the highest uniqueness. Iris location is an important task for biometric systems, directly affecting the results obtained in specific applications such as iris recognition, spoofing and contact lens detection, among others. This work defines the iris location problem as the delimitation of the smallest squared window that encompasses the iris region. In order to build a benchmark for iris location, we annotate (with squared iris bounding boxes) four databases from different biometric applications and make them publicly available to the community. Besides these four annotated databases, we include two others from the literature, and we perform experiments on all six databases, five obtained with near-infrared sensors and one with a visible-light sensor. We compare the classical and outstanding Daugman iris location approach with two window-based detectors: 1) a sliding window detector based on Histogram of Oriented Gradients (HOG) features and a linear Support Vector Machine (SVM) classifier; 2) a Deep Learning based detector fine-tuned from the YOLO object detector. Experimental results show that the Deep Learning based detector outperforms the others in terms of accuracy and runtime (GPU version) and should be chosen whenever possible.

Iris location; Daugman detector; HoG & linear SVM; YOLO; Deep Learning.

I Introduction

Biometrics systems have significantly improved person identification and authentication, performing an important role in personal, national, and global security [1].

In biometrics, the iris stands out as one of the main biological characteristics, since it remains unchanged over time and is unique for each person [2]. Furthermore, the identification process is non-invasive, i.e., there is no need for physical contact to obtain an iris image and analyze it [3]. Figure 1a illustrates the iris and other structures of a human eye.

Fig. 1: (a) Periocular region and its main structures. (b) Manual iris location through a bounding box and a circle.

Iris location is usually the initial step in recognition, authentication and identification systems [4] and thus can directly affect their performance [5, 6]. In this sense, how this initial iris location step influences those systems is an interesting question to be studied. To achieve such an aim, here we propose to benchmark/evaluate baseline methods that can be applied to iris location. We first survey some methods from the literature.

The pioneering and perhaps best-known method for iris location is the one proposed by Daugman [6], which defines an integro-differential operator to identify the circular borders present in the image. This operator takes into account the circular shape of the iris in order to find the correct position, by maximizing the partial derivative with respect to the radius. Wildes [5] proposed another relevant method for iris location, based on edge detection and the Hough transform. First, the iris is isolated by using low-pass Gaussian filters followed by spatial sub-sampling. Subsequently, the Hough transform is applied and the elements that best fit a circle, according to a defined condition, are selected. Tisse et al. [7] present a modification of Daugman's algorithm. Their approach applies a Hough transform on a gradient decomposition to find an approximation of the pupil center; then, the integro-differential operator is applied to locate the iris boundaries. It has the advantage of eliminating the errors caused by specular reflections.
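To make the integro-differential idea concrete, below is a minimal NumPy sketch of it (a simplified illustration under our own assumptions, not Daugman's original implementation; all names and parameters are ours): candidate centers and radii are scanned, and the circle with the strongest Gaussian-smoothed radial derivative of the circular line integral of intensity is kept.

```python
# Simplified sketch (ours, not Daugman's code) of the integro-differential operator.
import numpy as np

def circular_mean(img, cx, cy, r, n_points=64):
    """Mean intensity along the circle of radius r centered at (cx, cy)."""
    angles = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    xs = np.clip((cx + r * np.cos(angles)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((cy + r * np.sin(angles)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def integro_differential(img, centers, radii, sigma=2.0):
    """Return the (cx, cy, r) with the strongest circular boundary response."""
    gauss = np.exp(-np.arange(-3, 4) ** 2 / (2 * sigma ** 2))
    gauss /= gauss.sum()
    best, best_score = None, -np.inf
    for cx, cy in centers:
        means = np.array([circular_mean(img, cx, cy, r) for r in radii])
        deriv = np.abs(np.diff(means))                  # partial derivative w.r.t. radius
        deriv = np.convolve(deriv, gauss, mode='same')  # Gaussian smoothing
        k = int(np.argmax(deriv))
        if deriv[k] > best_score:
            best_score, best = deriv[k], (cx, cy, radii[k])
    return best
```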

Rodríguez & Rubio [8] used two strategies to locate inner and outer iris contours. For locating the inner contour of the Iris, the operator proposed by Daugman is used. Then, for determining the outer boundary of the iris, three points are detected, which represent the vertexes of a triangle inscribed in a circumference that models the iris boundary. This method presented no better accuracy than the Daugman method, but makes full use of the local texture variation and does not use any optimization procedure. For this reason it can reduce the computational cost [8].

Alvarez-Betancourt & Garcia-Silvente [9] presents an iris location method based on the detection of circular boundaries under an approach of gradient analysis in points of interest of successive arcs. The quantified majority operator QMA-OWA [10] was used in order to obtain a representative value for each successive arc. The identification of the iris boundary will be given by obtaining the arc with the greatest representative value. The authors reported similar results to those achieved by the Daugman method, with improvements in processing time.

In the method proposed by Cui et al. [11], the first step is to remove the eyelashes with a dual-threshold method, which can be an advantage over other iris location approaches. Then, the facula (specular highlight) is removed through an erosion method. Finally, an accurate location is obtained through the Hough transform and a least-squares method.

Zhou et al. [12] presented a method for iris location based on Vector Field Convolution (VFC), which is used to estimate the initial location of the iris. This initial estimate places the pupil location much closer to the real boundary than plain circle fitting, improving location accuracy and reducing computational cost. The final result is obtained using the algorithm proposed by Daugman [6].

Zhang et al. [13] adopt a momentum-based level set method [14, 15] to locate the pupil boundary, and then use Daugman's method to locate the iris. Determining the initial contour for the momentum-based level set through a minimum average gray level method decreases the time consumption and improves the results obtained by the Daugman method. This improvement happens because the initial contour, as in the method of Zhou et al. [12], is generally close to the real iris inner boundary [13].

Su et al. [16] propose an iris location algorithm based on regional properties and iterative searching. The pupil area is extracted using the regional attributes of the iris image, and the iris inner edge is fitted by iterating, comparing and sorting the pupil edge points. The outer edge location is then completed by an iterative search based on the extracted pupil centre and radius.

As can be seen, several works in the literature have proposed methods that perform iris location by determining a circle that delimits it (shown in red in Figure 1b), since many applications require iris normalization. Normalization consists in transforming the circular iris region from the Cartesian space into a polar coordinate system, so that the iris is represented by a rectangle. Usually, the representations and features used in subsequent processing are extracted from this transformed image.
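For reference, a minimal sketch of this normalization step (the "rubber sheet" remapping of the annulus between the pupil and iris circles to a fixed-size rectangle) is shown below; the circle parameters, output size and function name are illustrative assumptions, not values used in this work.

```python
# Minimal sketch (ours) of iris normalization from Cartesian to polar coordinates.
import numpy as np

def normalize_iris(img, pupil, iris, out_h=64, out_w=256):
    """pupil and iris are (cx, cy, r) circles; returns an out_h x out_w strip."""
    (pcx, pcy, pr), (icx, icy, ir) = pupil, iris
    thetas = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    strip = np.zeros((out_h, out_w), dtype=img.dtype)
    for j, t in enumerate(thetas):
        # boundary points on the pupil and iris circles along this angle
        xp, yp = pcx + pr * np.cos(t), pcy + pr * np.sin(t)
        xi, yi = icx + ir * np.cos(t), icy + ir * np.sin(t)
        for i in range(out_h):
            frac = i / (out_h - 1)                     # 0 at pupil, 1 at iris border
            x = int(round((1 - frac) * xp + frac * xi))
            y = int(round((1 - frac) * yp + frac * yi))
            strip[i, j] = img[np.clip(y, 0, img.shape[0] - 1),
                              np.clip(x, 0, img.shape[1] - 1)]
    return strip
```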

In contrast, with the increasing success of deep learning techniques and convolutional networks in computer vision problems [17, 18, 19, 1, 20, 21], it has also become attractive, in iris-related biometric problems (besides faces), to use the entire iris region, including the pupil and part of the sclera, without the need for normalization.

In this sense, this work defines the iris location task as the determination of the smallest squared bounding box that encompasses the entire iris region, as shown in yellow in Figure 1b. Thus, we propose to evaluate, as baselines, the following window-based detectors: 1) a sliding window detector based on Histogram of Oriented Gradients (HOG) features and a linear Support Vector Machine (SVM) classifier, i.e., an adaptation of the human detection method proposed by Dalal & Triggs [22]; 2) a Deep Learning based detector fine-tuned from the YOLO object detector [23, 24].
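Under this definition, a circular iris annotation (center and radius) maps directly to the smallest enclosing square, as in the small sketch below (the function name is ours):

```python
def circle_to_square_bbox(cx, cy, r):
    """Smallest square (x_min, y_min, x_max, y_max) enclosing the iris circle."""
    return (cx - r, cy - r, cx + r, cy + r)
```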

We compare our results with the well-known method of Daugman [4], given its notoriety and the fact that a fair implementation is publicly available (https://github.com/Qingbao/iris). The experiments were performed on six databases, and the reported results show that the use of Deep Learning for iris location is promising. The model fine-tuned from the YOLO object detector yielded real-time location with high accuracy, overcoming problems such as noise, eyelids, eyelashes and reflections.

This paper is structured as follows: Section II presents the datasets used in the experiments; Section III describes the baseline methods used in this work; Section IV reports our experiments and discusses the results; finally, Section V concludes the work.

II Datasets

Six databases were used for the experiments performed in this work: IIIT-Delhi Contact Lens Iris (IIIT-D CLI) [25], Notre Dame Contact Lens Detection 2015 (NDCLD15) [26], MobBIOfake [27], Notre Dame Cosmetic Contact Lenses (NDCCL) [28], CASIA-IrisV3 Interval [29] and BERC Mobile-iris database [30].

Fig. 2: Examples of images from the databases used. (a) IIITD CLI - VistaFA2E sensor; (b) IIITD CLI - Cogent sensor; (c) BERC; (d) MobBIO - Fake; (e) MobBIO - Real; (f) CASIA-IrisV3 Interval; (g) NDCCL - AD100 sensor; (h) NDCCL - LG4000 sensor; (i) NDCLD15

With the exception of NDCLD15, all databases were manually annotated by a single annotator, and the annotations will be made publicly available to the community once the paper is accepted. The NDCLD15 annotations were provided by the database authors [26].

Below, we present a brief description of these databases and how they were used in the experiments.

IIIT-Delhi Contact Lens Iris - The IIIT-D CLI database consists of 6570 iris images of 101 individuals. Three classes of images compose the database: individuals not wearing contact lenses, individuals wearing transparent lenses and individuals wearing colored cosmetic lenses. In order to study the effect of the acquisition device, iris images were captured using two iris sensors: the Cogent iris sensor and the VistaFA2E single iris sensor [25].

For the training set, images from each sensor were randomly selected; the remaining images were used to compose the test set. All images have a resolution of pixels and were manually annotated. Figures 2a and 2b show, respectively, examples of images obtained with the VistaFA2E and Cogent sensors.

CASIA-IrisV3 Interval - The CASIA-IrisV3 Interval subset consists of iris images with a resolution of pixels, obtained in two sessions. The images were captured with a self-developed camera, and an example can be seen in Figure 2f. The main characteristic of this database is that a circular near-infrared LED illumination was used during acquisition, so the database can be used for studies of fine texture features in iris images [29]. For training, images were randomly selected; the remaining images were used as the test set.

Notre Dame Cosmetic Contact Lenses - The images from the NDCCL database have a resolution of pixels and were captured under near-infrared illumination. Two iris cameras were used, composing two subsets: the IrisGuard AD100 (Figure 2g) and the IrisAccess LG4000 (Figure 2h). The IrisAccess LG4000 subset has a training set with images and a test set with images; the IrisGuard AD100 subset has images for training and for testing [31, 32]. The database contains images of individuals divided into three classes: no contact lenses, non-textured contact lenses and textured contact lenses.

MobBIOfake - The MobBIOfake database was created with the purpose of studying liveness detection in iris images obtained from mobile devices in uncontrolled environments [27]. It is composed of fake iris images of pixels, obtained from a subset of images belonging to the MobBIO database [33].

For the construction of the fake images, the original images were grouped by subject and pre-processed to improve contrast. The images were then printed using a professional printer on high-quality photo paper and recaptured with the same device. Finally, the images were cropped and resized to unify their dimensions. The database is equally divided into training and test sets, i.e., real images and fake images were assigned to the training set. Figures 2d and 2e show examples of fake and real images, respectively.

Notre Dame Contact Lens Detection 2015 - The NDCLD15 database is composed of iris images with a resolution of pixels. The main dataset contains images for training and images for evaluation. Images were acquired using either the IrisAccess LG4000 or the IrisGuard AD100 sensor. All iris images were captured in a windowless indoor lab under consistent lighting conditions. This database was created with the purpose of studying the classification of iris images among types of contact lenses [26]. Therefore, it contains images of individuals divided into three classes: no contact lenses, non-textured contact lenses and textured contact lenses. An example image from this database can be seen in Figure 2i.

BERC Mobile-iris Database - The BERC database is composed of images obtained in the NIR wavelength with a resolution of pixels. The images were captured by a mobile device held in a vertical position, in sequences of images [30]. In order to simulate the situation where the user moves the mobile phone back and forth to adjust the focus, the sequences were obtained by moving the phone toward the iris at distances of to cm, to cm and to cm. The best images of each sequence were selected, totaling iris images of subjects. An example image from this database can be seen in Figure 2c. In this database, images were randomly selected as the training set and 100 as the test set.

III Baselines

In this work, we use two approaches to perform iris location. The first is based on Histogram of Oriented Gradients (HOG) and Support Vector Machines (SVM), an adaptation of the human detection method proposed by Dalal & Triggs [22]; we use it together with the sliding window technique popularized by the Viola & Jones face detection method [34, 35]. The second approach is based on Deep Learning, using the Darknet YOLO Convolutional Neural Network (CNN) [23].

III-A Histogram of Oriented Gradients and Support Vector Machines

Despite images being acquired with different equipment, under different lighting conditions, and with variations in translation, rotation and scale [2], the iris presents a common structure, following patterns of texture, shape and edge orientation that can be described by a feature descriptor and interpreted by a classifier.

HOG is a feature descriptor used in computer vision for object detection. This method quantizes the gradient orientation occurrences in regions of an image, extracting shape information from objects and neglecting color and size [22]. Figure 3 illustrates an image described by HOG.

Fig. 3: Example of an image described by HOG.

In this work, each window is divided into cells of pixels. For each cell, the horizontal and vertical gradients at all pixels are computed, yielding the gradient orientations and magnitudes. The gradient orientations are then quantized into nine directions.

In order to reduce the effects of illumination and contrast variation, the histograms of all cells in a block ( cells) are normalized. The HOG feature vector that describes each iris window is then constructed by concatenating the normalized cell histograms of all blocks. Finally, a feature vector ( blocks × cells × orientations) is obtained to describe each iris candidate window.

The window containing the iris region (ground truth) of each training image is extracted and used as a positive example. Furthermore, windows that are completely outside the iris region or have only a small intersection with it are extracted and considered negative examples; they were created at a ratio of negative windows for each positive window. Figures 4a and 4b illustrate, respectively, positive and negative samples used to train this approach.

Fig. 4: Training samples. (a) Positive samples; (b) Negative samples.

From these positive and negative samples, the SVM classifier is trained using a linear kernel, with the regularization constant determined by grid search on the training set.

The SVM was first presented by Vladimir Vapnik [36] and is one of the most used classification methods in recent years [37, 38]. To find the decision boundary, the SVM minimizes an upper bound on the generalization error, which is obtained by maximizing the margin to the training data.

In order to perform iris location, a sliding window approach with different scales is applied to each test image. We adopted windows of pixels as the canonical scale; from it, we used 6 lower scales and 8 higher scales, spaced by a factor of . The image region most similar to the iris is found through the decision boundary generated by the SVM, which returns the highest positive response for the best estimated iris location.
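A hedged sketch of this HOG + linear SVM baseline is shown below, using scikit-image and scikit-learn; the canonical window size, scale set, stride and regularization constant are illustrative assumptions rather than the exact settings used in this work.

```python
# Sketch of the HOG + linear SVM iris detector (window size, scales and C are assumptions).
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

WIN = 96  # canonical window side (assumption)

def describe(window):
    """HOG descriptor of a window rescaled to the canonical size."""
    return hog(resize(window, (WIN, WIN)), orientations=9,
               pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def train_detector(pos_windows, neg_windows, C=1.0):
    """Fit a linear SVM on positive (iris) and negative window descriptors."""
    X = np.array([describe(w) for w in list(pos_windows) + list(neg_windows)])
    y = np.array([1] * len(pos_windows) + [0] * len(neg_windows))
    return LinearSVC(C=C).fit(X, y)

def locate_iris(img, clf, scales=(0.7, 0.85, 1.0, 1.2, 1.4), stride=8):
    """Multi-scale sliding window; keep the square with the highest SVM score."""
    best, best_score = None, -np.inf
    for s in scales:
        side = int(WIN * s)
        for y in range(0, img.shape[0] - side, stride):
            for x in range(0, img.shape[1] - side, stride):
                score = clf.decision_function([describe(img[y:y + side, x:x + side])])[0]
                if score > best_score:
                    best_score, best = score, (x, y, side)
    return best  # (x, y, side) of the most iris-like window
```

Keeping only the window with the highest decision value reflects the assumption that each image contains exactly one iris.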

III-B YOLO Object Detector

Currently, deep neural networks are one of the most effective ways to perform image classification, segmentation and object detection. In this work, we use Darknet, an open-source Convolutional Neural Network (CNN) framework [39] used to implement the YOLO object detection system [23], a deep convolutional framework capable of detecting objects in real time.

The YOLO network, like most CNNs, is composed of three main types of layers for object detection: convolution, max pooling and classification, the latter performed by fully connected layers.

In Darknet, convolutional layers act as feature extractors, i.e., a convolutional kernel slides over the input image. The network architecture is inspired by the GoogLeNet model for image classification [40]. The original YOLO has 24 convolutional layers that produce different feature maps from the input.

The feature maps are then processed by max pooling layers, which reduce their spatial dimensions: max pooling divides the feature map into blocks and reduces each block to a single value. Instead of the inception modules used by GoogLeNet, YOLO uses 1×1 reduction layers followed by 3×3 convolutional layers, similar to Lin et al. [41].

In this work, however, we use Fast YOLO, a version based on a neural network with fewer convolutional layers (9 instead of 24) and fewer filters in those layers. Other than the size of the network, all training and testing parameters are the same for both YOLO and Fast YOLO.
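As a rough illustration of how such a fine-tuned (Fast) YOLO model can be run at test time, the sketch below uses OpenCV's dnn module rather than the original Darknet binary; the .cfg/.weights file names are placeholders for a fine-tuned model, and keeping only the highest-confidence detection reflects the single-iris assumption.

```python
# Sketch of running a fine-tuned (Fast) YOLO model for iris detection via OpenCV.
import cv2

# Placeholder model files for a fine-tuned single-class (iris) detector.
net = cv2.dnn.readNetFromDarknet('iris-fast-yolo.cfg', 'iris-fast-yolo.weights')

def detect_iris(bgr_img, conf_thresh=0.5):
    """Return (confidence, x, y, w, h) of the most confident detection, or None."""
    h, w = bgr_img.shape[:2]
    blob = cv2.dnn.blobFromImage(bgr_img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    best = None
    for out in net.forward(net.getUnconnectedOutLayersNames()):
        for det in out:                  # det = [cx, cy, bw, bh, objectness, class scores...]
            score = float(det[4])
            if score > conf_thresh and (best is None or score > best[0]):
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                best = (score, int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh))
    return best
```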

IV Results and Discussion

In this work, we evaluate the HOG-SVM approach and the YOLO CNN applied to iris location, and compare them to the well-known Daugman method. The experiments were performed on the six databases described in the previous section, mainly on an NVIDIA Titan Xp GPU (3,840 CUDA cores and 12 GB of RAM), in a machine with an Intel Core i7-5820K CPU @ 3.30 GHz (12 cores) and 64 GB of DDR4 RAM.

In order to analyze the experiments, we employ the following metrics: Recall, Precision, Accuracy and IoU (intersection over union). These metrics are defined over the areas of the ground-truth and predicted bounding boxes, in terms of False Positive (FP), False Negative (FN), True Positive (TP) and True Negative (TN) pixels, and can formally be expressed as:

$$\text{Recall} = \frac{TP}{TP + FN}$$

$$\text{Precision} = \frac{TP}{TP + FP}$$

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$

$$\text{IoU} = \frac{TP}{TP + FP + FN}$$
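A small sketch of how these pixel-based metrics can be computed for two axis-aligned boxes (ground truth and prediction) is given below; the function name and box format are ours.

```python
# Metrics for two boxes given as (x_min, y_min, x_max, y_max); img_area = width * height.
def box_metrics(gt, pred, img_area):
    ix = max(0, min(gt[2], pred[2]) - max(gt[0], pred[0]))
    iy = max(0, min(gt[3], pred[3]) - max(gt[1], pred[1]))
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    tp = ix * iy                      # pixels inside both boxes
    fp = area(pred) - tp              # predicted but outside the ground truth
    fn = area(gt) - tp                # ground-truth pixels that were missed
    tn = img_area - tp - fp - fn      # everything else in the image
    return {'recall': tp / (tp + fn),
            'precision': tp / (tp + fp),
            'accuracy': (tp + tn) / img_area,
            'iou': tp / (tp + fp + fn)}
```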

In the following, we describe experiments in four different scenarios: intra-sensor, inter-sensor, multiple sensors, and mixed datasets.

Database              | Precision             | Recall                | Accuracy              | IoU
NDCCL AD100           | 84.60 / 92.39 / 98.43 | 82.49 / 94.78 / 95.01 | 94.28 / 96.98 / 98.39 | 80.41 / 87.52 / 93.37
NDCCL LG4000          | 93.41 / 96.72 / 97.59 | 92.15 / 90.80 / 97.13 | 97.53 / 97.24 / 98.87 | 89.67 / 87.76 / 94.77
IIITD CLI Vista       | 85.49 / 94.51 / 97.72 | 89.34 / 92.24 / 93.56 | 95.38 / 98.10 / 98.32 | 80.82 / 87.23 / 91.50
IIITD CLI Cogent      | 86.24 / 96.44 / 95.61 | 92.82 / 87.99 / 95.46 | 96.34 / 96.67 / 98.23 | 82.61 / 84.76 / 91.38
MobBIO Real           | 76.32 / 95.77 / 96.56 | 74.71 / 72.26 / 93.51 | 85.26 / 95.33 / 98.87 | 70.79 / 68.76 / 90.32
MobBIO Fake           | 75.81 / 93.28 / 95.54 | 73.45 / 74.33 / 94.93 | 84.81 / 95.26 / 98.83 | 70.12 / 68.99 / 90.74
BERC                  | 88.19 / 92.83 / 97.18 | 85.64 / 87.95 / 93.76 | 98.72 / 98.49 / 99.72 | 79.10 / 85.10 / 91.17
CASIA IrisV3 Interval | 96.38 / 96.97 / 97.55 | 96.23 / 88.48 / 96.11 | 97.38 / 92.21 / 97.12 | 90.95 / 86.17 / 91.13
NDCLD15               | 91.63 / 96.04 / 97.23 | 89.76 / 90.29 / 95.61 | 96.67 / 97.14 / 98.51 | 85.34 / 86.85 / 93.11
TABLE I: Intra-sensor results (%). In each metric column, values are Daugman [6] / HOG-SVM / YOLO.

Intra-sensor: Table I shows the results obtained in intra-sensor experiments, i.e., experiments in which the models were trained and tested with images from the same sensor. The YOLO CNN achieved the best averages in almost all analyzed metrics and required less processing time for iris location per image. The exception is the CASIA IrisV3 dataset, where the Daugman method presented slightly better Recall (96.23% against 96.11%) and Accuracy (97.38% against 97.12%). This surprising result can be explained by the high level of cooperation and control in the image acquisition of that dataset; that is, the Daugman method somehow takes advantage of the scenario. Anyway, the YOLO CNN locates the iris in real time (about 0.02 seconds per image, on average, see Table IV) using our Titan Xp GPU, whilst the Daugman method (about 3.5 seconds per image) and the HOG-SVM approach demand considerably more time to locate the iris in each image using a single CPU core.

Database  | Train  | Test   | Precision     | Recall        | Accuracy      | IoU
NDCCL     | AD100  | LG4000 | 92.95 / 79.19 | 91.13 / 89.01 | 96.84 / 92.58 | 85.78 / 68.52
NDCCL     | LG4000 | AD100  | 93.22 / 97.98 | 93.15 / 93.44 | 96.78 / 97.89 | 86.76 / 91.54
IIITD CLI | Vista  | Cogent | 96.89 / 96.00 | 89.89 / 94.08 | 96.43 / 97.95 | 83.94 / 90.38
IIITD CLI | Cogent | Vista  | 93.44 / 98.06 | 93.61 / 87.89 | 97.08 / 96.49 | 87.55 / 80.71
TABLE II: Inter-sensor results (%). In each metric column, values are HOG-SVM / YOLO.
Fig. 5: Samples of iris location in inter-sensor experiments: (a) poor results due to a homogeneous training set; (b) good results achieved with images of different sensors in the training sets.

Inter-sensor: In addition, for databases containing images acquired with more than one sensor, inter-sensor experiments were performed, as presented in Table II. That is, we train the detectors with images from one sensor and test/evaluate them on the images from the other sensor. These experiments show that, in some cases, the YOLO CNN did not achieve results as promising as those previously shown, for example in the NDCCL database, when fine-tuning/training the detector with images from the AD100 sensor and testing with those from the LG4000 sensor. The reason for this poor result might lie in the fact that the subset for that specific sensor (AD100) has only 600 images, thus not allowing a good generalization of the trained CNN. In Figure 5a, we can observe some examples where iris location by the YOLO method did not achieve good results.

Database  | Train          | Test   | Precision     | Recall        | Accuracy      | IoU
NDCCL     | AD100 & LG4000 | LG4000 | 95.37 / 99.24 | 92.93 / 99.62 | 97.48 / 99.74 | 88.63 / 98.87
NDCCL     | AD100 & LG4000 | AD100  | 91.77 / 99.23 | 94.77 / 97.36 | 96.85 / 99.23 | 86.91 / 96.63
IIITD CLI | Vista & Cogent | Cogent | 96.73 / 97.12 | 87.15 / 96.25 | 96.50 / 98.42 | 84.17 / 92.41
IIITD CLI | Vista & Cogent | Vista  | 94.20 / 98.15 | 92.74 / 93.22 | 97.01 / 98.20 | 87.41 / 91.67
TABLE III: Combined sensor results (%), same datasets. In each metric column, values are HOG-SVM / YOLO.

Multiple sensors: In order to better analyze and understand the results of the inter-sensor experiments, and to confirm our hypothesis that YOLO's poor performance is due to few/homogeneous training samples, experiments were performed combining images from multiple sensors of the same datasets. The figures obtained in this new experiment can be seen in Table III. They highlight the importance of a diverse collection of images in the training set for Convolutional Neural Networks: with a larger number of images acquired from different sensors in the training set, the CNN was able to generalize better, increasing the correct iris locations in most cases. Some examples of good iris location can be seen in Figure 5b.

Method      | Train             | Test          | Precision | Recall | Accuracy | IoU   | Time
YOLO        | All training sets | All test sets | 97.07     | 95.12  | 98.25    | 92.39 | 0.02 s
Daugman [6] | -                 | All test sets | 86.45     | 86.28  | 94.04    | 81.09 | 3.5 s
TABLE IV: Combined sensor results (%), mixed datasets.
Fig. 6: Recall curve of the Daugman method and YOLO applied to all test sets.

Mixing databases: Table IV contains the results obtained in experiments where YOLO was trained with the training sets of all databases and tested on the test images of all databases. The results achieved by the Daugman method applied to all test images are also presented; for it, we used specific parameters for each dataset. By analyzing these figures, we observe that YOLO strikingly outperforms the Daugman method in all analyzed metrics.

Figure 6 shows the behavior of the recall curve for the experiment reported in Table IV. It depicts how the percentage of images varies as a function of the minimum required Recall. This curve highlights how YOLO is a promising alternative for iris location, since all tested images achieved Recall values above 80%; that is, at least 80% of the required iris region is located by the YOLO CNN detector in every test image.
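For clarity, the sketch below shows one way such a curve can be computed from the per-image Recall values (function and parameter names are ours).

```python
# Fraction of test images whose per-image Recall reaches each minimum threshold.
import numpy as np

def recall_curve(per_image_recalls, thresholds=np.linspace(0.0, 1.0, 101)):
    r = np.asarray(per_image_recalls)
    return [(float(t), float((r >= t).mean())) for t in thresholds]
```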

V Conclusion

Iris location is a preliminary but extremely important task in specific applications such as iris recognition, spoofing and liveness detection, as well as contact lens detection, among others. In this work, two object detection approaches were evaluated for iris location. The experiments were performed on six databases. We manually annotated four of the six databases used in this work, and those annotations will be made publicly available to the community once the paper is accepted.

The experiments showed that the YOLO object detector, based on deep learning, presents promising results for iris location in all studied databases. Moreover, iris location with this approach runs in real time (about 0.02 seconds per image, on average) using a current and powerful GPU (NVIDIA GeForce Titan Xp Pascal). Another relevant conclusion is that, as with other Deep Learning approaches, it is important to have a sufficiently large number of images for training: the number and variety of images in the training set directly affect the generalization capability of the learned model.

As future work, we intend to perform experiments with more visible-light and cross-spectral iris databases. In addition, we intend to analyze the impact that iris location exerts on iris recognition, spoofing, liveness and contact lens detection systems. We also plan to study how a shorter and shallower network than YOLO can be designed for our single-object detection problem, iris location.

Acknowledgments

This research has been supported by the Coordination for the Improvement of Higher Education Personnel (CAPES) and the National Council for Scientific and Technological Development (CNPq) (grants # 471050/2013-0, # 428333/2016-8, and # 313423/2017-2). We thank the NVIDIA Corporation for the donation of the GeForce GTX Titan Xp Pascal GPU used in this research. The annotations made in the NDCCL database are thanks to Pedro Silva (UFOP).

References

  • [1] D. Menotti, G. Chiachia, A. Pinto, W. R. Schwartz, H. Pedrini, A. X. Falcao, and A. Rocha, “Deep representations for iris, face, and fingerprint spoofing detection,” IEEE Transactions on Information Forensics and Security, vol. 10, no. 4, pp. 864–879, 2015.
  • [2] Y. Zhu, T. Tan, and Y. Wang, “Biometric personal identification based on iris patterns,” in Pattern Recognition, 2000. Proceedings. 15th International Conference on, vol. 2.   IEEE, 2000, pp. 801–804.
  • [3] A. Jain, R. Bolle, and S. Pankanti, Biometrics: personal identification in networked society.   Springer Science & Business Media, 2006, vol. 479.
  • [4] J. G. Daugman, “High confidence visual recognition of persons by a test of statistical independence,” IEEE transactions on pattern analysis and machine intelligence, vol. 15, no. 11, pp. 1148–1161, 1993.
  • [5] R. P. Wildes, “Iris recognition: an emerging biometric technology,” Proceedings of the IEEE, vol. 85, no. 9, pp. 1348–1363, 1997.
  • [6] J. Daugman, “How iris recognition works,” IEEE Transactions on circuits and systems for video technology, vol. 14, no. 1, pp. 21–30, 2004.
  • [7] C.-L. Tisse, L. Martin, L. Torres, and M. Robert, “Iris recognition system for person identification,” in PRIS: Pattern Recognition in Information Systems, 2002, pp. 71–75.
  • [8] J. Rodríguez and Y. Rubio, “A new method for iris pupil contour delimitation and its application in iris texture parameter estimation,” Progress in Pattern Recognition, Image Analysis and Applications, pp. 631–641, 2005.
  • [9] Y. Alvarez-Betancourt and M. Garcia-Silvente, “A fast iris location based on aggregating gradient approximation using qma-owa operator,” in Fuzzy Systems (FUZZ), 2010 IEEE International Conference on.   IEEE, 2010, pp. 1–8.
  • [10] J. I. Peláez and J. M. Doña, “A majority model in group decision making using qma–owa operators,” International Journal of Intelligent Systems, vol. 21, no. 2, pp. 193–208, 2006.
  • [11] W. Cui et al., “A rapid iris location algorithm based on embedded,” in Computer Science and Information Processing (CSIP), 2012 International Conference on.   IEEE, 2012, pp. 233–236.
  • [12] L. Zhou, Y. Ma, J. Lian, and Z. Wang, “A new effective algorithm for iris location,” in Robotics and Biomimetics (ROBIO), 2013 IEEE International Conference on.   IEEE, 2013, pp. 1790–1795.
  • [13] W. Zhang and Y.-D. Ma, “A new approach for iris localization based on an improved level set method,” in Wavelet Active Media Technology and Information Processing (ICCWAMTIP), 2014 11th International Computer Conference on.   IEEE, 2014, pp. 309–312.
  • [14] G. Läthén, T. Andersson, R. Lenz, and M. Borga, “Momentum based optimization methods for level set segmentation,” in International Conference on Scale Space and Variational Methods in Computer Vision.   Springer, 2009, pp. 124–136.
  • [15] Z. Wang, Y. Feng, and Q. Tao, “Momentum based level set segmentation for complex phase change thermography sequence,” in Computer Application and System Modeling (ICCASM), 2010 International Conference on, vol. 12.   IEEE, 2010, pp. V12–257.
  • [16] L. Su, J. Wu, Q. Li, and Z. Liu, “Iris location based on regional property and iterative searching,” in Mechatronics and Automation (ICMA), 2017 IEEE International Conference on.   IEEE, 2017, pp. 1064–1068.
  • [17] G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, vol. 313, no. 5786, pp. 504–507, 2006.
  • [18] N. Pinto, Z. Stone, T. Zickler, and D. D. Cox, “Scaling-up biologically-inspired computer vision: A case study in unconstrained face recognition on facebook,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2011, pp. 35–42.
  • [19] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” in Advances in Neural Information Processing Systems, 2012.
  • [20] Z. Mao, W. X. Yao, and Y. Huang, “Eeg-based biometric identification with deep learning,” in Neural Engineering (NER), 2017 8th International IEEE/EMBS Conference on.   IEEE, 2017, pp. 609–612.
  • [21] T.-Y. Fan, Z.-C. Mu, and R.-Y. Yang, “Multi-modality recognition of human face and ear based on deep learning,” in Wavelet Analysis and Pattern Recognition (ICWAPR), 2017 International Conference on.   IEEE, 2017, pp. 38–42.
  • [22] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, vol. 1.   IEEE, 2005, pp. 886–893.
  • [23] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 779–788.
  • [24] J. Redmon and A. Farhadi, “YOLO9000: better, faster, stronger,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [25] N. Kohli, D. Yadav, M. Vatsa, and R. Singh, “Revisiting iris recognition with color cosmetic contact lenses,” in Biometrics (ICB), 2013 International Conference on.   IEEE, 2013, pp. 1–7.
  • [26] J. S. Doyle and K. W. Bowyer, “Robust detection of textured contact lenses in iris recognition using bsif,” IEEE Access, vol. 3, pp. 1672–1683, 2015.
  • [27] A. F. Sequeira, J. Murari, and J. S. Cardoso, “Iris liveness detection methods in mobile applications,” in Computer Vision Theory and Applications (VISAPP), 2014 International Conference on, vol. 3.   IEEE, 2014, pp. 22–33.
  • [28] J. Doyle and K. Bowyer, “Notre dame image database for contact lens detection in iris recognition—2013,” 2014.
  • [29] CASIA-IrisV3 Image Database Center for Biometrics and Security Research (CBSR). [Online]. Available: http://biometrics.idealtest.org/
  • [30] D. Kim, Y. Jung, K.-A. Toh, B. Son, and J. Kim, “An empirical study on iris recognition in a mobile phone,” Expert Systems with Applications, vol. 54, pp. 328–339, 2016.
  • [31] J. S. Doyle, K. W. Bowyer, and P. J. Flynn, “Variation in accuracy of textured contact lens detection based on sensor and lens pattern,” in Biometrics: Theory, Applications and Systems (BTAS), 2013 IEEE Sixth International Conference on.   IEEE, 2013, pp. 1–7.
  • [32] D. Yadav, N. Kohli, J. S. Doyle, R. Singh, M. Vatsa, and K. W. Bowyer, “Unraveling the effect of textured contact lenses on iris recognition,” IEEE Transactions on Information Forensics and Security, vol. 9, no. 5, pp. 851–862, 2014.
  • [33] A. F. Sequeira, J. C. Monteiro, A. Rebelo, and H. P. Oliveira, “Mobbio: a multimodal database captured with a portable handheld device,” in Computer Vision Theory and Applications (VISAPP), 2014 International Conference on, vol. 3.   IEEE, 2014, pp. 133–139.
  • [34] P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, 2001, pp. I–511–I–518 vol.1.
  • [35] P. Viola and M. J. Jones, “Robust real-time face detection,” International Journal of Computer Vision, vol. 57, no. 2, pp. 137–154, May 2004.
  • [36] B. E. Boser, I. M. Guyon, and V. N. Vapnik, “A training algorithm for optimal margin classifiers,” in Proceedings of the fifth annual workshop on Computational learning theory.   ACM, 1992, pp. 144–152.
  • [37] G. Franchi, J. Angulo, and D. Sejdinović, “Hyperspectral image classification with support vector machines on kernel distribution embeddings,” in Image Processing (ICIP), 2016 IEEE International Conference on.   IEEE, 2016, pp. 1898–1902.
  • [38] P. Ruiz, J. Mateos, G. Camps-Valls, R. Molina, and A. K. Katsaggelos, “Bayesian active remote sensing image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 4, pp. 2186–2196, 2014.
  • [39] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
  • [40] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 1–9.
  • [41] M. Lin, Q. Chen, and S. Yan, “Network in network,” arXiv preprint arXiv:1312.4400, 2013.