RaspiReader: An Open Source Fingerprint Reader Facilitating Spoof Detection



We present the design and prototype of an open source, optical fingerprint reader, called RaspiReader, using ubiquitous components. RaspiReader, a low-cost and easy to assemble reader, provides the fingerprint research community a seamless and simple method for gaining more control over the sensing component of fingerprint recognition systems. In particular, we posit that this versatile fingerprint reader will encourage researchers to explore novel spoof detection methods that integrate both hardware and software. RaspiReader’s hardware is customized with two cameras for fingerprint acquisition with one camera providing high contrast, frustrated total internal reflection (FTIR) images, and the other camera outputting direct images. Using both of these image streams, we extract complementary information which, when fused together, results in highly discriminative features for fingerprint spoof (presentation attack) detection. Our experimental results demonstrate a marked improvement over previous spoof detection methods which rely only on FTIR images provided by COTS optical readers. Finally, fingerprint matching experiments between images acquired from the FTIR output of the RaspiReader and images acquired from a COTS fingerprint reader verify the interoperability of the RaspiReader with existing COTS optical readers.


Frustrated Total Internal Reflection (FTIR), Optical Fingerprint Readers, Presentation Attack Detection, Spoof Detection, Local Binary Pattern (LBP), Color Texture Features


1 Introduction


One of the major challenges facing biometric technology is the growing threat of presentation attacks, commonly referred to as spoofing [1]. In this paper, we are interested in fingerprint presentation attacks in which a hacker intentionally assumes the identity of an unsuspecting individual, called the victim here, by stealing the victim’s fingerprints, fabricating spoofs from the stolen fingerprints, and presenting the spoofs to a fingerprint recognition system so that the hacker is identified as the victim [2, 3, 4, 5].

The need to prevent presentation attacks is paramount due to the monumental costs and loss of user privacy associated with spoofed systems. Consider for example India’s Aadhaar program which (i) provides benefits and services to an ever growing population of over 1.17 billion residents through fingerprint and/or iris recognition [6, 7] and (ii) facilitates electronic financial transactions through the United Payments Interface (UPI) [8]. Failure to detect presentation attacks on the Aadhaar system could cause the disruption of a commerce system affecting untold numbers of people. Also consider the United States Office of Biometric Identity Management (US OBIM) which is responsible for supporting the Department of Homeland Security (DHS) with biometric identification services specifically aimed at preventing people who pose a risk to the United States from entering the country [9]. Failure to detect spoofs on systems deployed by OBIM could result in a deadly terrorist attack. Finally, almost all of us are actively carrying fingerprint recognition systems embedded within our personal smart devices. Failure to detect spoof attacks on smartphones [11] could compromise emails, banking information, social media content, personal photos and a plethora of other confidential information.

Figure 1: Prototype of RaspiReader: two images (b, (i)) and (b, (ii)) of the input fingerprint (a) are captured. The raw direct image (b, (i)) and the raw, high contrast FTIR image (b, (ii)) both contain useful information for spoof detection. Following the use of (b, (ii)) for spoof detection, image calibration and processing are performed on the raw FTIR image to output a high quality, 500 ppi fingerprint for matching (b, (iii)).
Figure 2: Fingerprint images acquired using the RaspiReader. Images in (a) were collected from a live finger during a single acquisition. Images in (b) were collected from a spoof finger during a single acquisition. Using features extracted from both raw image outputs ((i), direct) and ((ii), high-contrast FTIR) of the RaspiReader, our spoof detection algorithms are able to discriminate between live fingers and spoof fingers. In particular, because the images in (i) and (ii) contain color information, discriminative color local binary patterns (CLBP) can be extracted for spoof detection. The raw FTIR image output of the RaspiReader (ii) can be post processed (after spoof detection) to output images suitable for fingerprint matching. Images in (c) were acquired from the same live finger (a) and spoof finger (b) on a COTS 500 ppi optical reader. The close similarity between the two images in (c) qualitatively illustrates why current spoof detectors are limited by the low information content, processed fingerprint images output by COTS readers.

In an effort to mitigate the costs associated with presentation attacks, a number of presentation attack detection techniques involving both hardware and software have been proposed in the literature. Special hardware embedded in fingerprint readers enables capture of features such as heartbeat, thermal output, blood flow [14], odor [15], and sub-dermal finger characteristics [12, 16, 17] useful for distinguishing a live finger from a spoof [2, 18, 19, 20]. Presentation attack detection methods in software are based on extracting textural [21, 22, 23, 24, 25], anatomical [26], and physiological [27, 28] features from processed fingerprint images which are used in conjunction with a classifier such as Support Vector Machines (SVM). Alternatively, a Convolutional Neural Network (CNN) can be trained to distinguish a live finger from a spoof [29, 30, 31].

While hardware and software spoof detection schemes provide a reasonable starting point for solving the spoof detection problem, current solutions have significant shortcomings. As noted in [18, 19, 20], most hardware based approaches can be easily bypassed with very thin spoofs (Fig. 3 (a)), since heartbeat, thermal output, and blood flow can still be read from the live human skin behind the thin spoof. Additionally, some of the characteristics (such as odor and heartbeat) acquired by the hardware vary tremendously among human subjects, making it very difficult to build an adequate model representative of all live subjects [18, 19].

Figure 3: Example spoof fingers and live fingers in our database. (a) Spoof fingers and (b) live fingers used to acquire both spoof fingerprint impressions and live fingerprint impressions for conducting the experiments reported in this paper.

Spoof detection software solutions have their own limitations. Although the LivDet 2015 competition reported state-of-the-art spoof detection software to have an average accuracy of 95.51% [32], the evaluations were limited to spoofs fabricated with only six material types. Furthermore, the spoof detection performance at desired operating points, such as a False Detection Rate (FDR) of 0.1%, was not reported, and very limited evaluation was performed to determine the effects of testing spoof detectors with spoofs fabricated from materials not seen during training (cross-material evaluation). In the limited cross-material evaluation that was performed, the rate of spoofs correctly classified as spoofs was shown to drop from 96.57% to 94.20% [32]. While this slight drop in accuracy seems promising, without knowing the performance under field conditions, namely an FDR of 0.1% on a larger collection of unknown materials, the reported levels of total accuracy should be accepted with caution. Chugh et al. [31] pushed state-of-the-art fingerprint spoof detection performance on the LivDet 2015 dataset from 95.51% to 98.61% average accuracy using a CNN, but they also demonstrated that performance at strict operating points dropped significantly in some experiments. Additionally, many other studies have reported up to a three-fold increase in error when testing spoof detectors on unknown material types [33, 34, 35].

Because of the less than desired performance of spoof detection software to adapt to spoofs fabricated from unseen materials, studies in [36], [37], and [38] developed open-set recognition classifiers to better detect spoofs fabricated with novel material types. However, while these classifiers are able to generalize to spoofs made with new materials better than closed-set recognition algorithms, their overall accuracy (approx. 85% - 90%) still does not meet the desired performance for field deployments.

Given the limitations of state-of-the-art fingerprint presentation attack detection (both in hardware and software), it is evident that much work remains to be done in developing robust and generalizable presentation attack detection solutions. We posit that one of the biggest limitations facing the most successful spoof detection solutions to date (such as use of textural features [35] and CNNs [29, 30, 31]), is the processed COTS fingerprint reader images used to train spoof detectors. In particular, because COTS fingerprint readers output fingerprint images which have undergone a number of image processing operations (in an effort to achieve high matching performance), they are not optimal for fingerprint spoof detection, since valuable information such as color and textural aberrations is lost during the image processing operations. By removing color and minute textural details from the raw fingerprint images, spoof fingerprint impressions and live fingerprint impressions (acquired on COTS optical readers) become very similar (Fig. 2 (c)), even when the physical live/spoof fingers used to collect the respective fingerprint impressions appear very different (Fig. 3).

This limitation inherent to many existing spoof detection solutions motivated us to develop a custom optical fingerprint reader, called RaspiReader, with the capability to output two raw images (from two different cameras) for spoof detection. By mounting two cameras at appropriate angles to an acrylic prism (Fig. 4), one camera captures high contrast FTIR fingerprint images (useful for both fingerprint spoof detection and fingerprint matching) (Fig. 2 (ii)), while the other camera captures direct images of the finger skin in contact with the platen (useful for fingerprint spoof detection) (Fig. 2 (i)). Both RaspiReader image outputs visually differentiate between live fingers and spoof fingers far more than the processed fingerprint images output by COTS fingerprint readers (Fig. 2 (c)).

Table 1: Primary Components Used to Construct RaspiReader. Total Cost: $165.48

Name and Description | Quantity | Cost per Unit (USD)1
Raspberry Pi: a single board computer (SBC) with 1.2 GHz 64-bit quad-core CPU, 1 GB RAM, MicroSDHC storage, and Broadcom VideoCore IV graphics card | 1 | $38.27
Raspberry Pi Camera Module V1: a 5.0 megapixel, 30 frames per second, fixed focal length camera | 2 | $22.44
Multi-Camera Adapter: splits the Raspberry Pi camera slot into two slots, enabling connection of two cameras | 1 | $49.99
LEDs: white light, 5 mm, 1 W | 3 | $0.10
Resistors: 10 kΩ | 3 | $5.16
Right Angle Prism: acrylic, 25 mm | 1 | $16.56
  • All items were purchased for the listed prices on Amazon.com.

RaspiReader’s two camera approach is similar to that prescribed by Rowe et al. in [12, 16], where both an FTIR image and a direct view image were acquired using different wavelength LEDs; however, the commercial products developed around the ideas in [12, 16] act as proprietary black boxes, outputting only a single processed composite image derived from a collection of raw image frames captured under various wavelengths. As such, fingerprint researchers cannot implement new spoof detection schemes on the individual raw frames captured by the reader. Furthermore, unlike the patented ideas in [12], RaspiReader is built with ubiquitous components and open source software packages, enabling fingerprint researchers to very easily prototype their own RaspiReader and further customize it with new spoof detection hardware.

By utilizing the novel hardware of the RaspiReader, we are able to develop robust fingerprint presentation attack detection algorithms. In particular, because both image outputs of the RaspiReader are raw and contain useful color information, we can extract discriminative color local binary patterns (CLBP) [39, 40] from each of the image outputs. Our experimental results demonstrate that the color local binary patterns from each image contain complementary information such that when the features are fused together and passed to a binary SVM classifier, state-of-the-art spoof detection performance can be achieved. Additionally, by calibrating and processing the FTIR image output of the RaspiReader (post spoof detection), we demonstrate that RaspiReader is not only interoperable with existing COTS optical readers but is also capable of achieving state-of-the-art fingerprint matching accuracy. Finally, RaspiReader is cost effective (approx. $165 in total component costs), built with off-the-shelf components (Table 1), and can be easily assembled and operated by those with little expertise in hardware, enabling easy integration and extension of this research (such as adding additional hardware specifically for spoof detection).

More concisely, the contributions of this research are:

  • An open source, easy to assemble, cost effective fingerprint reader, called RaspiReader, capable of producing fingerprint images useful for spoof detection and that have sufficient quality for fingerprint matching. The custom RaspiReader can be easily modified to facilitate presentation attack detection studies.

  • A customized fingerprint reader with two cameras for image acquisition rather than a single camera. Use of two cameras enables robust fingerprint spoof detection, since we can extract features from two complementary, information rich images instead of processed grayscale images output by traditional COTS optical fingerprint readers.

  • The implementation of a new fingerprint spoof detection schema through the extraction of highly discriminative fingerprint spoof detection features (CLBP) using the raw images of the RaspiReader. Our algorithm delivers state-of-the-art spoof detection accuracy.

Figure 4: Schematic illustrating RaspiReader functionality. Incoming white light from three LEDs enters the prism. Camera 2 receives light rays reflected from the fingerprint ridges only (light rays are not reflected back from the valleys due to FTIR). This image from Camera 2, with high contrast between ridges and valleys can be used for both spoof detection and fingerprint matching. Camera 1 receives light rays reflected from both the ridges and valleys. This image from Camera 1 provides complementary information for spoof detection.

2 RaspiReader Construction and Calibration

In this section, the construction of the RaspiReader using ubiquitous, off-the-shelf components is explained. Additionally, the steps for calibrating and processing the raw FTIR fingerprint images of the RaspiReader for fingerprint matching are prescribed.

2.1 Construction of RaspiReader

The construction of RaspiReader adheres to the following process. First, an outer casing is electronically modeled using Meshlab (Fig. 5) [41] and subsequently fabricated using a high resolution 3D printer [42]. Next, the components enumerated in Table 1 are assembled together according to the specifications of the schematic diagram shown in Figure 4. The exact dimensions of the assembly are not critical; however, depending on the dimensions used, the focal lengths of the two cameras need to be adjusted to focus on the finger surface as it makes contact with the acrylic prism.

Because the Raspberry Pi has only a single camera connection port, a camera port multiplexer is used to enable the use of multiple cameras on a single Pi [43]. Using the Raspberry Pi GPIO pins, the code available in [43], and the camera multiplexer, one can easily extend the Raspberry Pi to use multiple cameras. Note that an alternative approach for enabling multiple cameras on the Raspberry Pi is to attach one camera to the Pi’s camera connection port and a separate USB web-cam to a Raspberry Pi USB port. We experimented with this method; however, the frame rate of the USB camera is significantly reduced on the Pi due to the latency of loading images from the USB port to the Pi’s graphics card. As such, using the camera multiplexer is recommended.

Once the components of Table 1 have been assembled, and the camera port multiplexed for two cameras, open source python libraries [43] can be used to acquire two images from the fingerprint reader (one raw FTIR fingerprint image and another raw direct fingerprint image).

Figure 5: Electronic CAD model of the RaspiReader case. The dimensions here were provided to a 3D printer for fabricating the prototype.
Figure 6: Acquiring Image Transformation Parameters. (a) A 3D printed checkerboard pattern is imaged by the RaspiReader. In this image, the checkerboard pattern is not frontalized to the image plane. Therefore, destination coordinate pairs are defined on a frontalized checkerboard (b) such that perspective transformation parameters can be estimated to map (a) into (b). These transformation parameters are subsequently used to frontalize fingerprint images acquired by RaspiReader for the purpose of fingerprint matching.
Figure 7: Processing a RaspiReader raw FTIR fingerprint image into a 500 ppi fingerprint image compatible for matching with existing COTS fingerprint readers. (a) The raw RGB FTIR image is first converted to grayscale. (b) Histogram equalization is performed on the grayscale FTIR image to enhance the contrast between the fingerprint ridges and valleys. (c) The fingerprint is negated so that the ridges appear dark and the valleys appear white. (d),(f) Calibration (estimated using the checkerboard calibration pattern in (e)) is applied to frontalize the fingerprint image to the image plane and downsample it (by averaging neighborhood pixels) to 500 ppi in both the x and y directions.

2.2 Fingerprint Image Processing

In order for the RaspiReader to be used for spoof detection, it must also demonstrate the ability to output high quality fingerprint images suitable for fingerprint matching. As previously mentioned, the RaspiReader performs spoof detection on non-processed, raw fingerprint images. While these raw images are shown to provide discriminatory information for spoof detection, they need to be made compatible with processed images output by other commercial fingerprint readers. Therefore, after spoof detection, the RaspiReader performs (1) image enhancement operations and (2) image transformations on the raw high contrast, FTIR image frames in order to output high fidelity images compatible with COTS optical fingerprint readers (Fig. 7).

Fingerprint Image Enhancement

Let a raw (unprocessed) FTIR fingerprint image from the RaspiReader be denoted as $I$. This raw image is first converted from the RGB color space to a grayscale image $I_{gray}$ (Fig. 7 (a)). Then, in order to further contrast the ridges from the valleys of the fingerprint, histogram equalization is performed on $I_{gray}$ (Fig. 7 (b)). Finally, the equalized image is negated so that the ridges of the fingerprint image are dark and the background of the image is white (as in fingerprint images acquired from COTS readers) (Fig. 7 (c)).
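The three enhancement steps can be sketched with NumPy alone; in practice OpenCV's `cvtColor`/`equalizeHist` perform the same operations. The luma weights and the CDF-based equalization mapping below are the standard textbook versions, not taken from the paper:

```python
import numpy as np

def enhance_ftir(rgb):
    """Grayscale -> histogram equalization -> negation, so ridges end up
    dark on a light background (Fig. 7 (a)-(c))."""
    rgb = np.asarray(rgb, dtype=np.float64)
    # (a) standard luma grayscale conversion
    gray = np.round(0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
                    + 0.114 * rgb[..., 2]).astype(np.uint8)
    # (b) histogram equalization via the cumulative distribution function
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    denom = max(gray.size - cdf_min, 1)
    lut = np.round((cdf - cdf_min) / denom * 255).clip(0, 255).astype(np.uint8)
    equalized = lut[gray]
    # (c) negate: ridges dark, background white
    return 255 - equalized
```

Applied to a low-contrast ridge image, the output spans the full 0–255 range with the brightest (ridge-contact) pixels mapped to dark values.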

Fingerprint Transformation

Following the aforementioned image processing techniques, the RaspiReader FTIR fingerprint images are further processed by performing a perspective transformation (to frontalize the fingerprint to the image plane) and scaling to 500 ppi (the native resolution of the RaspiReader images, approx. 900 ppi, was determined using a calibration target) (Fig. 7 (d),(f)).

A perspective transformation is performed using Equation 1,

$$ w \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \quad (1) $$

where $x$ and $y$ are the source coordinates, $x'$ and $y'$ are the transformed coordinates, $\{m_{11}, \ldots, m_{32}\}$ is the set of transformation parameters, and $w$ is a scale parameter. In this work, we image a 3D printed checkerboard pattern (Fig. 7 (e)) to define source and destination coordinate pairs (Fig. 6) from which the transformation parameters are estimated. Once the perspective transformation has been completed, the image is downsampled (by averaging neighborhood pixels) to 500 ppi (Fig. 7 (f)).
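Estimating the eight transformation parameters from the four checkerboard corner correspondences reduces to solving a standard 8-unknown linear system; OpenCV's `cv2.getPerspectiveTransform` does the same computation. A minimal NumPy sketch (function names are ours):

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 perspective matrix (Eq. 1, with m33 = 1) from
    four (x, y) -> (u, v) point correspondences."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence yields two linear equations in the 8 unknowns.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    m = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(m, 1.0).reshape(3, 3)

def apply_homography(M, pt):
    """Map a point through M, dividing out the scale parameter w."""
    u, v, w = M @ np.array([pt[0], pt[1], 1.0])
    return u / w, v / w
```

For example, mapping the unit square onto a square of side 2 recovers a pure scaling matrix, and interior points transform accordingly.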

Upon completion of this entire fingerprint reader assembly and image processing procedure, the RaspiReader is fully functional and ready for use in both presentation attack detection and subsequent fingerprint matching.

3 Live and Spoof Fingerprint Database Construction

To test the utility of the RaspiReader for presentation attack detection and its interoperability for fingerprint matching, a database of live and spoof fingerprint impressions was collected for performing experiments. This database is constructed as follows.

Using 7 different materials (Fig. 3 (a)), 66 spoofs were fabricated. Then, for each of these spoofs, 10 impressions were captured at varying orientations and pressures on both the RaspiReader (RPi) and a COTS 500 ppi optical FTIR fingerprint reader (hereafter, the COTS reader). A summary of this data collection is given in Table 2.

To collect a sufficient variety of live finger data, we enlisted 15 human subjects with different skin colors (Fig. 3 (b)). Each subject gave 5 impressions (at different orientations and pressures) of all 10 of their fingers on both the RaspiReader and the COTS reader. A summary of this data collection is given in Table 3.

Table 2: Summary of Spoof1 Fingerprints Collected

Material | # Spoofs | # COTS Impressions | # RPi FTIR Impressions | # RPi Direct Impressions
Ecoflex | 10 | 100 | 100 | 100
Wood Glue | 10 | 100 | 100 | 100
Monster Liquid Latex | 10 | 100 | 100 | 100
Liquid Latex Body Paint | 10 | 100 | 100 | 100
Gelatin | 10 | 100 | 100 | 100
Silver Coated Ecoflex | 10 | 100 | 100 | 100
Crayola Model Magic | 6 | 60 | 60 | 60
Total | 66 | 660 | 660 | 660
  • The spoof materials used to fabricate these spoofs were in accordance with the approved materials by the IARPA Odin project [1].

  • The spoofs are all of unique fingerprint patterns.

Table 3: Summary of Live Finger Data Collected

# Subjects | # Fingers | # COTS Impressions | # RPi FTIR Impressions | # RPi Direct Impressions
15 | 150 | 750 | 750 | 750

In addition to the images of live finger impressions and spoof finger impressions we collected for conducting spoof detection experiments, we also verified that for spoofs with optical properties and/or 3D structural properties too far from that of live finger skin (Fig. 8), images would not be captured by the RaspiReader. These “failure to capture” spoofs are therefore filtered out as attacks before any software based spoof detection methods need to be performed.

Figure 8: Failure to Capture. Several spoofs cannot be imaged by the RaspiReader due to their dissimilarity in color (a, b) and/or lack of a 3D friction ridge pattern (c). In particular, because (a) and (b) are black, all incoming light rays are absorbed, preventing light from reflecting back to the FTIR camera. In (c), the lack of depth in the friction ridge pattern again prevents the FTIR phenomenon. (a) and (b) are both ecoflex spoofs coated with two different conductive coatings. (c) is a 2D printed spoof attack.

4 Experimental Protocol and Results

Given the database of live and spoof fingerprint images collected on both the COTS reader and the prototype RaspiReader, a number of spoof detection experiments are conducted to demonstrate the value of the raw images from the RaspiReader for training spoof detectors. In addition, experiments are conducted to demonstrate that fingerprint images from the RaspiReader are compatible for matching with fingerprint images acquired from the COTS reader.

4.1 Spoof Detection Experiments

In all of our spoof detection experiments, we use a binary linear SVM for classification, namely to classify images into live vs. spoof fingerprints. Informally, a binary linear SVM is the hyperplane with the “largest” margin separating the positive and negative samples (here, spoof and live impressions). More formally, this optimal decision boundary can be found by solving the following unconstrained optimization problem (Eq. 2), given a set of instance-label pairs $(x_i, y_i)$, $i = 1, \ldots, l$, $y_i \in \{-1, +1\}$, and the quadratic hinge loss function defined in Equation 3 [44]:

$$ \min_{w} \; \frac{1}{2} w^T w + C \sum_{i=1}^{l} \xi(w; x_i, y_i) \quad (2) $$

$$ \xi(w; x_i, y_i) = \max\left(1 - y_i w^T x_i,\, 0\right)^2 \quad (3) $$
In practice, finding a “large margin” hyperplane which perfectly separates the positive and negative samples is infeasible due to noise in the training data. As such, linear SVMs are relaxed from a “hard margin” SVM (no training sample misclassifications allowed by the margin) to a “soft margin” SVM (some training sample misclassifications allowed by the margin). The number of misclassifications allowed depends on the user-defined parameter $C$ in Equation 2. If $C$ is set very large, a large penalty is incurred for any training sample misclassification. Consequently, a very strict decision boundary is chosen which may not generalize well to the testing data. To better generalize the SVM to the testing data, the value of $C$ can be reduced. However, if $C$ is reduced too much, the selected decision boundary may begin to deviate from the optimal decision boundary for the testing data. Thus, selecting an appropriate value of $C$ is a tradeoff that must be determined empirically for the dataset at hand. In our case, we use five-fold cross validation to select the value of $C$ from a list of candidate values such that the best performance is achieved across the folds.
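The cross-validated selection of $C$ can be sketched as a generic grid search. The SVM itself (e.g. LIBLINEAR as in [44], or scikit-learn's `LinearSVC`) is passed in as a `train_eval` callback — a hypothetical interface of our choosing — and the candidate $C$ values below are placeholders, since the paper does not list its grid:

```python
import numpy as np

def select_C(X, y, candidates, train_eval, k=5, seed=0):
    """Pick the C value with the best mean k-fold cross-validation score.

    `train_eval(C, Xtr, ytr, Xte, yte)` trains an SVM with parameter C on
    the training split and returns its accuracy on the held-out split.
    """
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    best_C, best_acc = None, -1.0
    for C in candidates:
        accs = []
        for i in range(k):
            test_idx = folds[i]
            train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
            accs.append(train_eval(C, X[train_idx], y[train_idx],
                                   X[test_idx], y[test_idx]))
        mean_acc = float(np.mean(accs))
        if mean_acc > best_acc:
            best_C, best_acc = C, mean_acc
    return best_C, best_acc
```

Any SVM backend can be plugged in through `train_eval` without changing the fold logic.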

Experiment 1: LBP Features From COTS Acquired Images. With a classifier selected, we begin our experiments using grayscale processed fingerprint images acquired from the COTS reader (Fig. 9) for training and testing. The purpose of this initial experiment is to set a baseline performance for spoof detection when classifiers are trained on grayscale images from COTS fingerprint readers. In the subsequent experiments, images from the RaspiReader are used for training classifiers. This first experiment then serves as a reference point for demonstrating the reduction in classification error when RaspiReader images, rather than images from a 500 ppi COTS optical fingerprint reader, are used for extracting features.

Figure 9: Fingerprint images acquired using a 500 ppi COTS optical fingerprint reader used to conduct a baseline spoof detection experiment. A live fingerprint impression is shown in (a). An ecoflex spoof fingerprint impression is shown in (b).

Given the reported success of local textural features for spoof detection in comparison to anatomical or physiological features [35], we use the prevalent grayscale and rotation invariant local binary patterns (LBP) as features for this baseline experiment [45]. LBP features are extracted by constructing a histogram of bit string values determined by thresholding pixels in the local neighborhood around each pixel in the image. Since textures can be present at different spatial resolutions, parameters $R$ and $P$ are specified in LBP construction to indicate the radius (in pixels) of the neighborhood used for selecting pixels and the number of neighbors to consider in a local neighborhood. Previous studies have shown that more than 90% of fundamental textures in an image can belong to a small subset of binary patterns called “uniform” textures (local binary patterns containing two or fewer 0/1 bit transitions) [45]. Therefore, in line with previous studies using local binary patterns for fingerprint spoof detection, we also employ uniform local binary patterns. More formally, extracting a uniform local binary pattern for a center pixel $g_c$ and a set of $P$ neighborhood pixels $g_0, \ldots, g_{P-1}$ selected at radius $R$ from $g_c$ (Fig. 10) as defined in [45] is performed by:

$$ LBP_{P,R}^{riu2} = \begin{cases} \sum_{p=0}^{P-1} s(g_p - g_c), & \text{if } U(LBP_{P,R}) \le 2 \\ P + 1, & \text{otherwise} \end{cases} \quad (4) $$

$$ s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases} \quad (5) $$

$$ U(LBP_{P,R}) = |s(g_{P-1} - g_c) - s(g_0 - g_c)| + \sum_{p=1}^{P-1} |s(g_p - g_c) - s(g_{p-1} - g_c)| \quad (6) $$
Figure 10: Circularly symmetric isotropic neighbor sets for different values of (P, R), used for the extraction of LBP. Figure reproduced from [45].

Finally, let $H_{P,R}$ be the uniform local binary pattern histogram constructed by binning the local binary patterns computed for each pixel in an image according to Equations 4, 5, and 6, with $P$ and $R$ as parameters to the LBP feature extractor. In our experiments, we extract $H_{8,1}$, $H_{16,2}$, and $H_{24,3}$ in order to capture textures at different spatial resolutions. These histograms (each having $P + 2$ bins, i.e., 10, 18, and 26 bins) are individually normalized and concatenated into a single feature vector of dimension 54.
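Equations 4–6 can be sketched for the $(P, R) = (8, 1)$ case using the 8 direct neighbors (skipping the subpixel interpolation that the larger radii require; libraries such as scikit-image's `local_binary_pattern` with `method='uniform'` handle the interpolated $(16, 2)$ and $(24, 3)$ scales):

```python
import numpy as np

def lbp_riu2_hist(img, P=8, R=1):
    """Normalized histogram of uniform, rotation-invariant LBP codes
    (Eqs. 4-6), sketched for (P, R) = (8, 1) with direct neighbors."""
    g = np.asarray(img, dtype=int)
    H, W = g.shape
    c = g[R:H - R, R:W - R]                       # center pixels g_c
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
               (0, -1), (1, -1), (1, 0), (1, 1)]  # circular order
    # s(g_p - g_c) for every neighbor p (Eq. 5)
    s = np.stack([(g[R + dy:H - R + dy, R + dx:W - R + dx] >= c).astype(int)
                  for dy, dx in offsets])
    # U: number of 0/1 transitions around the circle, wraparound included (Eq. 6)
    U = np.abs(s - np.roll(s, 1, axis=0)).sum(axis=0)
    # riu2 code: sum of bits if uniform, else P + 1 (Eq. 4)
    codes = np.where(U <= 2, s.sum(axis=0), P + 1)
    hist = np.bincount(codes.ravel(), minlength=P + 2).astype(float)
    return hist / hist.sum()
```

Each histogram has $P + 2$ bins; concatenating the three scales of the paper yields the 54-dimensional feature vector.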

Using the live and spoof finger images from the COTS reader as described in Tables 2 and 3, five-fold cross validation is used to partition the data into training and testing splits. When partitioning each fold of the live fingers, the finger impressions from 12 subjects are used for training, and the finger impressions from the remaining 3 subjects are used for testing. When partitioning spoof finger impressions, 80% of the impressions for each spoof material are used for training and the other 20% for testing.

Next, a feature vector is extracted from every image in the training and testing splits and used to train and test a binary linear SVM according to the parameters previously described. Using the proposed classification scheme and training/testing splits, we report a mean (over five folds) True Detection Rate (TDR) of 58.33% at a False Detection Rate (FDR) of 0.1%. Additionally, we report the standard deviation for the TDR (over five folds) to be 37.69%.
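The operating-point metric reported throughout (TDR at a fixed FDR) can be computed from raw classifier scores as follows. This is a sketch, not the authors' evaluation code, and it assumes higher scores mean "more spoof-like":

```python
import numpy as np

def tdr_at_fdr(live_scores, spoof_scores, fdr=0.001):
    """True Detection Rate at a fixed False Detection Rate.

    The decision threshold is set so that at most `fdr` of the live
    (negative) samples are falsely flagged as spoofs; TDR is then the
    fraction of spoof samples scoring above that threshold.
    """
    live = np.asarray(live_scores, dtype=float)
    # Threshold at the (1 - fdr) quantile of the live score distribution.
    threshold = np.quantile(live, 1 - fdr)
    return float(np.mean(np.asarray(spoof_scores, dtype=float) > threshold))
```

For an SVM, the signed decision values (oriented so spoofs score high) can be fed directly to this function.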

Not only is the performance of this baseline classifier unacceptably low, it is also not robust, as evidenced by the extremely high standard deviation of the TDR over the five folds. Furthermore, it demonstrates that 500 ppi processed grayscale fingerprint images from COTS FTIR fingerprint readers have relatively low information content for discriminating spoof fingers from live fingers. This serves as motivation for developing the RaspiReader to output two raw RGB images for spoof detection. In the subsequent sections, we perform experiments on the RaspiReader images and demonstrate that the performance of classifiers trained with RaspiReader images far surpasses that of classifiers trained on processed grayscale images.

Experiment 2: Color LBP Features Extracted from RaspiReader Raw FTIR Images. In this experiment, we make use of the information rich, raw FTIR images from the RaspiReader (Fig. 11) for presentation attack detection. As in Experiment 1, we again pursue LBP textural features given their superior performance in comparison with other types of features that have been tried for fingerprint spoof detection [35]. However, since the raw FTIR images from the RaspiReader contain color information, rather than using traditional grayscale LBP features, we employ color local binary patterns (CLBP). Previous works have shown the efficacy of CLBP for both face recognition and face spoof detection [39, 40]. However, because fingerprint images from COTS fingerprint readers are grayscale, CLBP features have, to our knowledge, not been investigated for fingerprint spoof detection until now.

Figure 11: Raw FTIR fingerprint images acquired using the RaspiReader fingerprint reader and used to conduct spoof detection Experiment 2. A live fingerprint impression is shown in (a). An ecoflex spoof fingerprint impression is shown in (b).
Figure 12: Illustration of the extraction of the color LBP features from a spoof fingerprint impression captured by the RaspiReader. First, the fingerprint images from the RaspiReader are converted from the RGB color space to the HSV color space. Then, LBP as defined in equations 4, 5, and 6 is applied to (i) each individual channel of an image (a, b, c), and (ii) across opponent channels (e.g. a center pixel is taken from one channel and the neighborhood pixels are taken from another channel)(d, e, f, g, h, i). Finally, each of these LBP histograms are normalized and concatenated into a single feature vector. Note, the green squares illustrate the location of a center pixel and the red circles illustrate the location of neighboring pixels during each LBP extraction.

Unlike traditional grayscale LBP, color local binary patterns (CLBP) encode discriminative spatiochromatic textures from across multiple spectral channels [39]. In other words, CLBP extracts textures across all the different image bands (HSV used here) in a given input image. More formally, given an input image I with C spectral channels, let the set of all spectral channels of I be defined as S = {I_1, I_2, ..., I_C}. Then, the CLBP feature can be extracted using Algorithm 1.

Algorithm 1 Extraction of Color Local Binary Patterns
  f ← [ ]
  for I_i ∈ S do
    for I_j ∈ S do
      f ← f ⊕ LBP(I_i, I_j)
    end for
  end for
  return f

Note that LBP(I_i, I_j) returns a normalized histogram of local binary patterns using I_i as the image channel from which the center (thresholding) pixels are selected, and I_j as the image channel from which the neighborhood pixels are selected, in the computation of LBP as defined in equations 4, 5, and 6. Also, ⊕ indicates vector concatenation. This extraction of CLBP features is also presented visually in Figure 12.
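As a concrete illustration, Algorithm 1 can be sketched in Python with NumPy. This is a minimal sketch, not the paper's implementation: it uses a plain 256-bin LBP histogram per channel pair (the 486-dimensional feature used in the experiments presumably uses a different LBP variant/binning), and the helper names `opponent_lbp_hist` and `clbp_feature` are hypothetical.

```python
import numpy as np

def opponent_lbp_hist(center_ch, neigh_ch):
    """LBP histogram where the thresholding (center) pixels come from one
    channel and the 8 neighbors come from another (opponent) channel."""
    h, w = center_ch.shape
    c = center_ch[1:-1, 1:-1].astype(np.int32)
    codes = np.zeros_like(c)
    # 8-neighborhood offsets; the fixed order defines each bit position
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        n = neigh_ch[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(np.int32)
        codes |= (n >= c).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()  # normalized histogram

def clbp_feature(hsv_img):
    """Algorithm 1: concatenate LBP histograms over all ordered channel pairs."""
    chans = [hsv_img[..., k] for k in range(hsv_img.shape[-1])]
    feats = [opponent_lbp_hist(ci, cj) for ci in chans for cj in chans]
    return np.concatenate(feats)  # 9 pairs x 256 bins = 2304-dim here
```

With three HSV channels this yields nine histograms (three same-channel, six opponent-channel), matching the (a)-(i) panels of Figure 12.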

Using the live and spoof finger images from the RaspiReader’s FTIR camera as described in Tables 2 and 3, five-fold cross validation is again used to partition training and testing splits. Then, we (i) convert each image to the HSV9 color space and (ii) extract 486 dimensional CLBP features from each image to train/test a binary linear SVM. With this proposed classification scheme and training/testing data, we report a mean (over five folds) True Detection Rate (TDR) of 90.75% at a False Detection Rate (FDR) of 0.1%. Additionally, we report the standard deviation for the TDR (over five folds) to be 8.88%.
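The five-fold partitioning used above can be sketched as follows. This is a minimal NumPy sketch under the assumption of a simple shuffled split; `five_fold_splits` is a hypothetical helper, not part of the described pipeline.

```python
import numpy as np

def five_fold_splits(n_samples, seed=0):
    """Yield (train, test) index arrays for 5-fold cross validation:
    shuffle indices once, then rotate each fold through the test role."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n_samples), 5)
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train, test
```

Each of the five folds trains one binary linear SVM; the reported TDR is the mean (with standard deviation) over the five held-out folds.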

To better understand which presentation attacks the spoof detector has a difficult time detecting using the proposed CLBP features and also how well the detector classifies live fingers, we report the mean (over five folds) correct detection rate for live fingers and spoofs (of every material) individually (Table 4).
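The per-type rates reported in Tables 4 and 5 can be computed from classifier scores with a simple helper. This is a hedged sketch: the function name and the convention that higher scores indicate spoofs are assumptions.

```python
def per_type_rates(scores, is_spoof, finger_type, threshold=0.5):
    """Correct detection rate per finger type at a fixed score threshold.
    Assumes scores lie in [0, 1], with higher meaning 'more likely spoof'."""
    rates = {}
    for t in set(finger_type):
        idx = [i for i, ft in enumerate(finger_type) if ft == t]
        # a sample is correct when the thresholded prediction matches its label
        correct = [(scores[i] >= threshold) == is_spoof[i] for i in idx]
        rates[t] = sum(correct) / len(correct)
    return rates
```

Averaging each material's rate over the five folds gives the entries of Table 4.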

Table 4: Correct Detection Rates1 on Live Fingers and Individual
Spoof Materials Using RaspiReader Raw FTIR Images
Finger Type
Correct Detection Rate
Live Finger
Wood Glue
Monster Liquid Latex
Liquid Latex Body Paint
Silver Coated Ecoflex
Crayola Model Magic
  • The mean and standard deviation are reported for the correct detection rates over 5 folds, using 0.5 as a threshold (where scores range from 0-1).

From these results, we can see that the most difficult spoofs to detect using the CLBP features extracted from the RaspiReader raw FTIR images are ecoflex spoofs. This is to be expected since the ecoflex spoofs are quite thin. As such, much of the human skin color texture passes through the spoof material, making the spoof impression similar in color to live finger impressions (Fig. 15 (a)). We can also see from these results that the detector correctly classifies spoof fingers at a higher rate than live fingers. This is likely due to the small dataset of live fingers (only 15 human subjects provided live fingerprints) used during the training of the spoof detector. Given a larger number of live finger impressions to train on, the detection rates of live fingers will likely improve.

Overall, in comparison to the baseline classifier (trained on grayscale LBP features extracted from processed images from a COTS fingerprint reader), this classifier (trained on CLBP features extracted from raw FTIR images from the RaspiReader) achieves significant improvement (90.75% mean TDR compared to 58.33% mean TDR both at FDR of 0.1%). This huge jump in TDR from the baseline classifier at the same fixed FDR (0.1%) demonstrates the usefulness of the raw FTIR images from the RaspiReader for fingerprint spoof detection. In particular, because the raw FTIR images from the RaspiReader contain color information, we are able to extract color textural (CLBP) features which are much more discriminative than traditional grayscale textural features.

However, a TDR of 90.75% still leaves much room for improvement, especially if the proposed spoof detection scheme were to be installed in the field where a 9.25% miss rate could prove very costly. Therefore, in the following experiment, we attempt to achieve even higher performance by using the direct image output of the RaspiReader for spoof detection.

Figure 13: Raw direct fingerprint images acquired using the RaspiReader and used to conduct spoof detection Experiment 3. A live fingerprint impression is shown in (a). An ecoflex spoof fingerprint impression is shown in (b).
Figure 14: Direct fingerprint images acquired using the RaspiReader and converted to the HSV color space. A live fingerprint impression is shown in (a). An ecoflex spoof fingerprint impression is shown in (b).

Experiment 3: Color LBP Extracted From RaspiReader Raw Direct Images. To perform this experiment, we essentially repeat the entire structure of Experiment 2. However, instead of extracting CLBP features from the raw FTIR image output of the RaspiReader, 486 dimensional CLBP features are instead extracted from the direct image output of the RaspiReader (Fig. 13).

Using the SVM classifier with the same parameters and 5-fold cross validation training/testing data, we report a mean (over five folds) TDR of 92.88% at an FDR of 0.1% (a slight improvement over the results of Experiment 2). Additionally, we report the standard deviation for the TDR to be 10.97%. As in Experiment 2, we report the correct detection rates for live fingers and each individual spoof material separately to better determine how well the classifier correctly detects live fingers and each respective spoof type (Table 5).

Table 5: Correct Detection Rates1 on Live Fingers and Individual
Spoof Materials Using RaspiReader Raw Direct Images
Finger Type
Correct Detection Rate
Live Finger
Wood Glue
Monster Liquid Latex
Liquid Latex Body Paint
Silver Coated Ecoflex
Crayola Model Magic
  • The mean and standard deviation are reported for the correct detection rates over 5 folds, using 0.5 as a threshold (where scores range from 0-1).

Figure 15: (a) Raw FTIR Image of an ecoflex spoof classified as live finger in Experiment 2. (b) Raw Direct Image of an ecoflex spoof classified as live finger in Experiment 3. Ecoflex is the most difficult spoof type for RaspiReader to defend against since human finger skin color passes through the clear, thin material.

From these results we can see that ecoflex spoofs are again the most difficult spoof type for RaspiReader to defend against since very thin clear spoofs allow much of the human skin color behind the spoof to pass through (Fig. 15 (b)). Also, in these direct images, if the surface area of the spoof is smaller than the surface area of the finger presenting the spoof (Fig. 15 (b)), much of the peripheral surface area of the live finger will be included in the spoof images used to make a classification decision, confusing the classifier. In the future, we plan to solve this problem by collecting more spoof impressions from thin clear spoofs, enabling us to better learn the more subtle differences between thin clear spoofs and live fingers. We can also see from these results that using the direct images of the RaspiReader enables higher correct detection rates of live fingers than the raw FTIR images do. This is likely because the raw direct images contain less variability amongst the various fingers than the raw FTIR images do. As such, the small amount of live finger training data does not affect performance as much in Experiment 3 as it does in Experiment 2.

Finally, even at a TDR of 92.88% in this experiment, there is still room for improvement. Therefore, in the final spoof detection experiment, we extract features from both of the RaspiReader images from camera-1 and camera-2 and concatenate them together in an attempt to achieve even higher and more robust spoof detection performance.

Experiment 4: Fusion of Color LBP Features. In this final spoof detection experiment, we extract CLBP features from both image outputs of the RaspiReader (the raw FTIR output and the raw direct output). These two CLBP feature vectors are then fused into a single 972 dimensional feature vector and used to train a binary linear SVM.
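The fusion step amounts to plain vector concatenation of the two 486-dimensional features. A minimal NumPy sketch with hypothetical placeholder data:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical pre-extracted CLBP features: one 486-dim row per impression
clbp_ftir = rng.random((20, 486))    # from the raw FTIR camera output
clbp_direct = rng.random((20, 486))  # from the raw direct camera output

# fusion: concatenate the two feature sets into single 972-dim vectors,
# which are then used to train/test the binary linear SVM
fused = np.hstack([clbp_ftir, clbp_direct])
```

Because the two cameras capture complementary views of the same presentation, the fused vectors carry both FTIR and direct-view texture information.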

Using this classification scheme, we report a mean (over five folds) TDR of 95.3% at an FDR of 0.1%. The standard deviation over the five folds was 6.05%. By fusing CLBP features extracted from both the raw FTIR output of the RaspiReader and the direct output of the RaspiReader, we boost the classification performance beyond the performance when only a single image output from RaspiReader is used for spoof detection. This indicates that using two cameras for fingerprint image acquisition and subsequent spoof detection is advantageous, since they both contain complementary and discriminative spoof detection information.

In summary, we have demonstrated in these spoof detection experiments that both raw images generated by our custom RaspiReader contain far more discriminative information for spoof detection than grayscale images output by COTS optical fingerprint readers. In addition, we have shown that these images contain complementary information such that when simple CLBP features extracted from each image are fused together, state-of-the-art spoof detection performance on known materials can be achieved. By showing that the RaspiReader images are far more useful for spoof detection than grayscale COTS optical images and by open sourcing the design of RaspiReader, we have removed what we posit to be one of the biggest limitations facing current state-of-the-art presentation attack detection schemes (namely low information processed images on which these detectors are trained). A full summary of these spoof detection experiment results is listed in Table 6.

Table 6: Summary of Spoof Detection Experiments
TDR @ FDR = 0.1%
Detection Time (msecs)2
Rpi Direct
Rpi Fusion
  • These results are reported over 5 folds.

  • The LBP and CLBP feature extractions and SVM classifications were performed with a single CPU core on a Macbook Pro running a 2.9 GHz Intel Core i5 processor with 8 GB of RAM. In the future, we will parallelize the feature extractions and directly run the spoof detection algorithm on the Raspberry Pi controlling the RaspiReader.

4.2 Interoperability of RaspiReader

In addition to demonstrating the usefulness of the RaspiReader images for fingerprint spoof detection, we also demonstrate that by processing the RaspiReader FTIR images, we can output images which are compatible for matching with images from COTS fingerprint readers. Previously, we discussed how to process and transform a RaspiReader raw FTIR image into an image suitable for matching. In this experiment, we evaluate the matching performance (of 11,175 imposter pairs and 6,750 genuine pairs) when using (i) the RaspiReader processed images as both the enrollment and probe images, (ii) the COTS reader images as both the enrollment and probe images, and (iii) the COTS reader images as the enrollment images and the RaspiReader processed images as the probe images. The results for these matching experiments are listed in Table 7.
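TAR at a fixed FAR can be estimated from genuine and impostor score lists; the sketch below shows one common approach (thresholding at the (1 - FAR) quantile of the impostor scores), which is not necessarily how the matcher SDK computes its reported rates.

```python
import numpy as np

def tar_at_far(genuine_scores, impostor_scores, far=0.001):
    """True Accept Rate at a fixed False Accept Rate.
    The accept threshold is set at the (1 - far) quantile of impostor
    scores, so roughly a `far` fraction of impostor pairs are accepted."""
    thr = np.quantile(np.asarray(impostor_scores), 1.0 - far)
    return float(np.mean(np.asarray(genuine_scores) > thr))
```

Applied to the 6,750 genuine and 11,175 imposter comparison scores, this yields the TAR @ FAR = 0.1% entries of Table 7.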

Table 7: Fingerprint Matching Results
Enrollment Reader
Probe Reader
TAR @ FAR = 0.1%
RaspiReader | RaspiReader | 99.21%
COTS Reader | RaspiReader | 95.56%

From these results, we make two observations. First, the best performance is achieved for native comparisons, where the enrolled and search (probe) images are produced by the same capture device. RaspiReader’s native performance is slightly better than that of the COTS reader. This indicates that the RaspiReader is capable of outputting images which are compatible with state-of-the-art fingerprint matchers10. Second, we note that the performance does drop slightly in the interoperability experiment (the COTS reader is used for the enrollment images and the RaspiReader for the probe images). However, the matching performance is still quite high considering the very stringent operating point (FAR = 0.1%). Furthermore, studies have shown that when different fingerprint readers are used for enrollment and subsequent verification or identification, matching performance indeed drops [47, 48, 49]. Finally, we are currently investigating other approaches for processing and downsampling RaspiReader images to reduce some of the drop in cross-reader performance.

5 Conclusions

We have designed and prototyped a custom fingerprint reader, called RaspiReader, with ubiquitous components. This fingerprint reader is both low cost and easy to assemble, enabling other researchers to easily and seamlessly develop their own novel fingerprint presentation attack detection solutions which use both hardware and software. By customizing RaspiReader with two cameras for fingerprint image acquisition rather than one, we were able to extract discriminative color local binary patterns (CLBP) from both raw images which, when fused together, enabled us to match the performance of state-of-the-art spoof detection methods (CNNs). Finally, by processing the raw FTIR images of the RaspiReader, we were able to output fingerprint images compatible for matching with COTS optical fingerprint readers.

In our ongoing work, we plan to integrate specialized hardware into RaspiReader such as Optical Coherence Tomography (OCT) for sub-dermal imagery, IR cameras for vein detection, or microscopes for extremely high resolution images of the fingerprint. Because the RaspiReader uses ubiquitous components running open source software, RaspiReader enables future integration of these additional hardware components. In addition to the integration of specialized hardware, we also plan to use the raw, information rich images from the RaspiReader to pursue one-class classification schemes for fingerprint spoof detection. In particular, we posit that the RaspiReader images will assist us in modeling the class of live fingerprint images, such that spoofs of all material types can be easily rejected.


This research was supported by both grant no. 70NANB17H027 from the NIST Measurement Science program and also by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA R&D Contract No. 2017 - 17020200004. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.


[]Joshua J. Engelsma graduated magna cum laude with a B.S. degree in computer science from Grand Valley State University, Allendale, Michigan, in 2016. He is currently working towards a PhD degree in the Department of Computer Science and Engineering at Michigan State University, East Lansing, Michigan. His research interests include pattern recognition, computer vision, and image processing with applications in biometrics.


[]Kai Cao received the Ph.D. degree from the Key Laboratory of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences, Beijing, China, in 2010. He is currently a Post Doctoral Fellow in the Department of Computer Science & Engineering, Michigan State University. He was affiliated with Xidian University as an Associate Professor. His research interests include biometric recognition, image processing and machine learning.


[]Anil K. Jain is a University distinguished professor in the Department of Computer Science and Engineering at Michigan State University. His research interests include pattern recognition and biometric authentication. He served as the editor-in-chief of the IEEE Transactions on Pattern Analysis and Machine Intelligence and was a member of the United States Defense Science Board. He has received Fulbright, Guggenheim, Alexander von Humboldt, and IAPR King Sun Fu awards. He is a member of the National Academy of Engineering and foreign fellow of the Indian National Academy of Engineering.


  1. In this paper, we use the terms presentation attack and spoof interchangeably.
  2. Presentation attacks can also occur when (i) two individuals are in collusion or (ii) an individual obfuscates his or her own fingerprints to avoid recognition [2].
  3. Border control fingerprint recognition systems have already been shown to be vulnerable to presentation attacks as far back as 2012 when a journalist successfully spoofed the fingerprint recognition system at the Hong Kong-China border [10].
  4. Several fingerprint vendors have developed hardware spoof detection solutions by employing multispectral imaging, infrared imaging (useful for sub-dermal finger analysis), and pulse capture to distinguish live fingers from spoof fingers [12, 13].
  5. Raw fingerprint images are “processed” (such as RGB to grayscale conversion, contrast enhancement, and scaling) by COTS readers to boost matching performance. However, useful spoof detection information (such as color and/or minute textural aberrations) is lost during this processing.
  6. Chugh et al. reported an average accuracy on the LiveDet 2011 dataset of 97.41%, however, at a FDR of 1.0%, the TDR was only 90.32%. This indicates that current state-of-the-art spoof detection systems leave room for improvement at desired operating points.
  7. Acquiring a fingerprint on RaspiReader involves the same user interactions that a COTS optical reader does. A user simply places their finger on an acrylic prism. Then, LEDs illuminate the finger surface and images are captured from both cameras over a time period of 1 second (Fig. 1). The only difference in the acquisition process between a COTS reader and RaspiReader is that RaspiReader acquires two complementary images of the finger in contact with the acrylic platen from two separately mounted cameras.
  8. Other classifiers were experimented with such as K-Nearest Neighbor and Support Vector Machines with various kernels (RBF and Polynomial), however, linear SVM provided the best results in our experiments.
  9. Other color spaces were experimented with, but HSV consistently provided the highest performance. This is likely because HSV separates the luminance and chrominance components in an image, allowing extraction of LBP features on more complementary image channels (Fig. 14).
  10. We use the Innovatrics fingerprint SDK since we recently acquired this matcher, and it is shown to have high accuracy in the NIST FpVTE evaluation [46].


  1. “IARPA ODIN program.” https://www.iarpa.gov/index.php/research-programs/odin/odin-baa.
  2. E. Marasco and A. Ross, “A survey on antispoofing schemes for fingerprint recognition systems,” ACM Comput. Surv., vol. 47, pp. 28:1–28:36, Nov. 2014.
  3. D. Maltoni, D. Maio, A. K. Jain, and S. Prabhakar, Handbook of Fingerprint Recognition. Springer, 2nd ed., 2009.
  4. S. Yoon, J. Feng, and A. K. Jain, “Altered fingerprints: Analysis and detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 3, pp. 451–464, 2012.
  5. T. Matsumoto, H. Matsumoto, K. Yamada, and S. Hoshino, “Impact of artificial gummy fingers on fingerprint systems,” in Proceedings of SPIE, vol. 4677, pp. 275–289, 2002.
  6. “Unique Identification Authority of India, dashboard summary.” https://portal.uidai.gov.in/uidwebportal/dashboard.do.
  7. “UIDAI biometric device specifications (authentication).” http://www.stqc.gov.in/sites/upload_files/stqc/files/UIDAI-Biometric-Device-Specifications-Authentication-14-05-2012_0.pdf.
  8. “UPI United Payments Interface.” http://www.npci.org.in/documents/UPI_Procedural_Guidelines.pdf.
  9. “Office of Biometric Identity Management.” https://www.dhs.gov/obim.
  10. “Report Hong Kong China Border Biometrics Device Spoofed.” http://cw.com.hk/news/report-hong-kong-china-border-biometrics-device-spoofed.
  11. K. Cao and A. K. Jain, “Hacking mobile phones using 2d printed fingerprints,” tech. rep., MSU Technical report, MSU-CSE-16-2, 2016.
  12. K. Nixon, V. Aimale, and R. Rowe, “Spoof detection schemes,” Handbook of Biometrics, pp. 403–423, 2008.
  13. “Goodix live finger detection.” https://findbiometrics.com/goodix-zte-biometric-sensor-3103187/.
  14. P. D. Lapsley, J. A. Lee, D. F. Pare Jr, and N. Hoffman, “Anti-fraud biometric scanner that accurately detects blood flow,” Apr. 7 1998. US Patent 5,737,439.
  15. D. Baldisserra, A. Franco, D. Maio, and D. Maltoni, “Fake fingerprint detection by odor analysis,” in International Conference on Biometrics, pp. 265–272, Springer, 2006.
  16. R. K. Rowe, “Multispectral imaging biometrics,” Dec. 2 2008. US Patent 7,460,696.
  17. A. Shiratsuki, E. Sano, M. Shikai, T. Nakashima, T. Takashima, M. Ohmi, and M. Haruna, “Novel optical fingerprint sensor utilizing optical characteristics of skin tissue under fingerprints,” in Biomedical Optics 2005, pp. 80–87, International Society for Optics and Photonics, 2005.
  18. M. Sepasian, C. Mares, and W. Balachandran, “Liveness and spoofing in fingerprint identification: Issues and challenges,” in Proceedings of the 4th WSEAS International Conference on Computer Engineering and Applications, CEA’10, pp. 150–158, 2009.
  19. T. Van der Putte and J. Keuning, “Biometrical fingerprint recognition: don’t get your fingers burned,” in Smart Card Research and Advanced Applications, pp. 289–303, Springer, 2000.
  20. S. A. Schuckers, “Spoofing and anti-spoofing measures,” Information Security technical report, vol. 7, no. 4, pp. 56–62, 2002.
  21. S. B. Nikam and S. Agarwal, “Local binary pattern and wavelet based spoof fingerprint detection,” Int. J. Biometrics, vol. 1, pp. 141–159, Aug. 2008.
  22. L. Ghiani, G. L. Marcialis, and F. Roli, “Fingerprint liveness detection by local phase quantization,” in Pattern Recognition (ICPR), 2012 21st International Conference on, pp. 537–540, IEEE, 2012.
  23. L. Ghiani, A. Hadid, G. L. Marcialis, and F. Roli, “Fingerprint liveness detection using binarized statistical image features,” in Biometrics: Theory, Applications and Systems (BTAS), 2013 IEEE Sixth International Conference on, pp. 1–6, IEEE, 2013.
  24. D. Gragnaniello, G. Poggi, C. Sansone, and L. Verdoliva, “Fingerprint liveness detection based on weber local image descriptor,” in Biometric Measurements and Systems for Security and Medical Applications (BIOMS), 2013 IEEE Workshop on, pp. 46–50, IEEE, 2013.
  25. D. Gragnaniello, G. Poggi, C. Sansone, and L. Verdoliva, “Local contrast phase descriptor for fingerprint liveness detection,” Pattern Recognition, vol. 48, no. 4, pp. 1050–1058, 2015.
  26. L. Ghiani, P. Denti, and G. L. Marcialis, “Experimental results on fingerprint liveness detection,” in Proceedings of the 7th International Conference on Articulated Motion and Deformable Objects, AMDO’12, pp. 210–218, Springer-Verlag, 2012.
  27. A. Abhyankar and S. Schuckers, “Integrating a wavelet based perspiration liveness check with fingerprint recognition,” Pattern Recogn., vol. 42, pp. 452–464, Mar. 2009.
  28. E. Marasco and C. Sansone, “Combining perspiration-and morphology-based static features for fingerprint liveness detection,” Pattern Recognition Letters, vol. 33, no. 9, pp. 1148–1156, 2012.
  29. D. Menotti, G. Chiachia, A. da Silva Pinto, W. R. Schwartz, H. Pedrini, A. X. Falcão, and A. Rocha, “Deep representations for iris, face, and fingerprint spoofing detection,” IEEE Trans. Information Forensics and Security, vol. 10, no. 4, pp. 864–879, 2015.
  30. R. Nogueira, R. Lotufo, and R. Machado, “Fingerprint liveness detection using convolutional neural networks,” IEEE Trans. Information Forensics and Security, vol. 11, no. 6, pp. 1206–1213, 2016.
  31. T. Chugh, K. Cao, and A. Jain, “Fingerprint spoof detection using minutiae-based local patches,” in 2017 IEEE International Joint Conference on Biometrics (IJCB) (to appear), IEEE, 2017.
  32. V. Mura, L. Ghiani, G. L. Marcialis, F. Roli, D. A. Yambay, and S. A. C. Schuckers, “Livdet 2015 fingerprint liveness detection competition 2015.,” in BTAS, pp. 1–6, IEEE, 2015.
  33. E. Marasco and C. Sansone, “On the robustness of fingerprint liveness detection algorithms against new materials used for spoofing,” in Proc. Int. Conf. Bio-Inspired Syst. Signal Process., pp. 553–558, 2011.
  34. B. Tan, A. Lewicke, D. Yambay, and S. Schuckers, “The effect of environmental conditions and novel spoofing methods on fingerprint anti-spoofing algorithms,” IEEE Information Forensics and Security, WIFS, 2010.
  35. D. Yambay, L. Ghiani, P. Denti, G. L. Marcialis, F. Roli, and S. A. C. Schuckers, “Livdet 2011 - fingerprint liveness detection competition 2011.,” in Proc. International Conf. Biometrics, pp. 208–215, IEEE, 2012.
  36. A. Rattani, W. J. Scheirer, and A. Ross, “Open set fingerprint spoof detection across novel fabrication materials,” IEEE Trans. on Information Forensics and Security, vol. 10, no. 11, pp. 2447–2460, 2015.
  37. A. Rattani and A. Ross, “Automatic adaptation of fingerprint liveness detector to new spoof materials,” in 2014 IEEE International Joint Conference on Biometrics (IJCB), pp. 1–8, IEEE, 2014.
  38. Y. Ding and A. Ross, “An ensemble of one-class svms for fingerprint spoof detection across different fabrication materials,” in IEEE International Workshop on Information Forensics and Security, WIFS 2016, Abu Dhabi, December 4-7, 2016, pp. 1–6, 2016.
  39. J. Y. Choi, K. N. Plataniotis, and Y. M. Ro, “Using colour local binary pattern features for face recognition,” in Proceedings of the International Conference on Image Processing, ICIP 2010, September 26-29, Hong Kong, pp. 4541–4544, 2010.
  40. Z. Boulkenafet, J. Komulainen, and A. Hadid, “Face spoofing detection using colour texture analysis,” IEEE Trans. Information Forensics and Security, vol. 11, no. 8, pp. 1818–1830, 2016.
  41. P. Cignoni, M. Corsini, and G. Ranzuglia, “Meshlab: an open-source 3d mesh processing system,” ERCIM News, pp. 45–46, April 2008.
  42. Stratasys, Stratasys Connex Printer Machine Specifications. http://www.stratasys.com//̃media/Main/Files/Machine_Spec_Sheets/PSS_PJ_Connex3.ashx.
  43. “Multi camera adapter module for raspberry pi.” https://www.arducam.com/multi-camera-adapter-module-raspberry-pi. Accessed: 2017-4-15.
  44. R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin, “Liblinear: A library for large linear classification,” J. Mach. Learn. Res., vol. 9, pp. 1871–1874, June 2008.
  45. T. Ojala, M. Pietikäinen, and T. Mäenpää, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, pp. 971–987, July 2002.
  46. C. I. Watson, G. Fiumara, E. Tabassi, S. Cheng, P. Flanagan, and W. Salamon, “Fingerprint vendor technology evaluation, nist interagency/internal report 8034: 2015,” available at https://dx.doi.org/10.6028/NIST.IR.8034.
  47. A. Ross and A. Jain, “Biometric sensor interoperability: A case study in fingerprints,” in ECCV Workshop BioAW, pp. 134–145, Springer, 2004.
  48. X. Jia, X. Yang, Y. Zang, N. Zhang, and J. Tian, “A cross-device matching fingerprint database from multi-type sensors,” in Pattern Recognition (ICPR), 2012 21st International Conference on, pp. 3001–3004, IEEE, 2012.
  49. J. J. Engelsma, S. S. Arora, A. K. Jain, and N. G. Paulter Jr, “Universal 3d wearable fingerprint targets: Advancing fingerprint reader evaluations,” arXiv preprint arXiv:1705.07972, 2017.