Infrared Face Recognition: A Comprehensive Review of Methodologies and Databases

Abstract

Automatic face recognition is an area with immense practical potential which includes a wide range of commercial and law enforcement applications. Hence it is unsurprising that it continues to be one of the most active research areas of computer vision. Even after over three decades of intense research, the state-of-the-art in face recognition continues to improve, benefitting from advances in a range of different research fields such as image processing, pattern recognition, computer graphics, and physiology. Systems based on visible spectrum images, the most researched face recognition modality, have reached a significant level of maturity with some practical success. However, they continue to face challenges in the presence of illumination, pose and expression changes, as well as facial disguises, all of which can significantly decrease recognition accuracy. Amongst various approaches which have been proposed in an attempt to overcome these limitations, the use of infrared (IR) imaging has emerged as a particularly promising research direction. This paper presents a comprehensive and timely review of the literature on this subject. Our key contributions are: (i) a summary of the inherent properties of infrared imaging which makes this modality promising in the context of face recognition, (ii) a systematic review of the most influential approaches, with a focus on emerging common trends as well as key differences between alternative methodologies, (iii) a description of the main databases of infrared facial images available to the researcher, and lastly (iv) a discussion of the most promising avenues for future research.

keywords:
Survey, Thermal, Fusion, Vein Extraction, Thermogram, Identification

Reza Shoja Ghiass   Ognjen Arandjelović   Abdelhakim Bendada   Xavier Maldague

Computer Vision & Systems Pattern Recognition & Data Analytics
   Université Laval, Canada    Deakin University, Australia
   ognjen.arandjelovic@gmail.com
   +61(0)3 93955628

1 Introduction

In the last two decades automatic face recognition has consistently been one of the most active research areas of computer vision and applied pattern recognition. Systems based on images acquired in the visible spectrum have reached a significant level of maturity with some practical success [1]. However, a range of nuisance factors continue to pose serious problems when visible spectrum based face recognition methods are applied in a real-world setting. Dealing with illumination, pose and facial expression changes, and facial disguises is still a major challenge.

There is a large corpus of published work which has attempted to overcome the aforesaid difficulties by developing increasingly sophisticated models which were then applied on the same type of data – usually images acquired in the visible spectrum (wavelength approximately in the range 390–750 nm). Pose, for example, has been normalized by a learnt 2D warp of an input image [2], generated from a model fitted using an analysis-by-synthesis approach [3] or synthesized using a statistical method [4], while illumination has been corrected for using image processing filters [5] and statistical facial models [6], amongst others, with varying levels of success. Other methods adopt a multi-image approach by matching sets [7, 8, 9, 10] or sequences of images [11, 12]. Another increasingly active research direction has pursued the use of alternative modalities. For example, it is clear that data acquired using 3D scanners [13, 14] is inherently robust to illumination and pose changes. However, the cost of these systems is high and the process of data collection overly restrictive for most practical applications.

1.1 Infrared Spectrum

Infrared imagery is a modality which has attracted particular attention, in large part due to its invariance to changes in illumination by visible light [15]. A detailed account of the relevant physics, which is outside the scope of this paper, can be found in [16]. In the context of face recognition, data acquired using infrared cameras has distinct advantages over that from the more common cameras which operate in the visible spectrum. For instance, infrared images of faces can be obtained under any lighting conditions, even in completely dark environments, and there is some evidence that thermal infrared (see Sec. 1.2) “appearance” may exhibit a higher degree of robustness to facial expression changes [17]. Thermal infrared energy is also less affected by scattering and absorption by smoke or dust than reflected visible light [18, 19]. Unlike visible spectrum imaging, infrared imaging can be used to extract not only exterior but also useful subcutaneous anatomical information, such as the vascular network of a face [20]. Finally, in contrast to visible spectrum imaging, thermal vision can be used to detect facial disguises [21].

1.2 Spectral Composition

In the existing literature, it has been customary to divide the infrared spectrum into four sub-bands: near IR (NIR; wavelength approximately 0.7–1 μm), short wave IR (SWIR; approximately 1–3 μm), medium wave IR (MWIR; approximately 3–5 μm), and long wave IR (LWIR; approximately 8–14 μm). This division of the IR spectrum is also observed in the manufacturing of infrared cameras, which are often made with sensors that respond to electromagnetic radiation constrained to a particular sub-band. It should be emphasized that the division of the IR spectrum is not arbitrary. Rather, different sub-bands correspond to continuous frequency chunks of the solar spectrum which are divided by absorption lines of different atmospheric gasses [16]. In the context of face recognition, one of the largest differences between different IR sub-bands emerges as a consequence of the human body’s heat emission spectrum which is, in its idealized form, shown in Fig. 1. Specifically, note that most of the heat energy is emitted in the LWIR sub-band, which is why it is often referred to as the thermal sub-band (this term is sometimes extended to include the MWIR sub-band). Significant heat is also emitted in the MWIR sub-band. Both of these sub-bands can be used to passively sense facial thermal emissions without an external source of light. This is one of the reasons why the LWIR and MWIR sub-bands have received the most attention in the face recognition literature. In contrast to them, facial heat emission in the SWIR and NIR sub-bands is small, and recognition systems operating on data acquired in these sub-bands require appropriate illuminators (invisible to the human eye), i.e. recognition is active in nature [22]. In recent years, the use of NIR has also started to receive increasing attention from the face recognition community, while the utility of the SWIR sub-band has yet to be studied in depth.

Figure 1: The idealized spectrum of heat emission by the human body predicted by Planck’s law at 305 K, with marked boundaries of the four infrared sub-bands of interest in this paper: near-wave (NIR), short-wave (SWIR), medium-wave (MWIR) and long-wave (LWIR). Observe that the emission in the NIR and SWIR sub-bands is nearly zero. As a consequence, imaging in these bands is by necessity active i.e. it requires an illuminator at the appropriate wavelengths.
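To make the shape of Fig. 1 concrete, the short sketch below integrates Planck's law at 305 K over each sub-band. It is a minimal illustration, assuming the commonly cited band boundaries used in this paper; the number of integration samples is an arbitrary choice.

```python
# Sketch: Planck's law radiance of a ~305 K black body, integrated over the
# four IR sub-bands. Band boundaries are assumed (commonly cited values).
import numpy as np

H = 6.626e-34  # Planck constant (J s)
C = 2.998e8    # speed of light (m / s)
K = 1.381e-23  # Boltzmann constant (J / K)

def planck(lam, T):
    """Spectral radiance B(lambda, T) in W sr^-1 m^-3."""
    return 2 * H * C**2 / lam**5 / (np.exp(H * C / (lam * K * T)) - 1)

bands = {"NIR": (0.7e-6, 1e-6), "SWIR": (1e-6, 3e-6),
         "MWIR": (3e-6, 5e-6), "LWIR": (8e-6, 14e-6)}

for name, (lo, hi) in bands.items():
    lam = np.linspace(lo, hi, 20000)
    b = planck(lam, 305.0)
    # trapezoidal rule, written out to avoid NumPy version differences
    radiance = float(np.sum(0.5 * (b[1:] + b[:-1]) * np.diff(lam)))
    print(f"{name}: {radiance:.2f} W sr^-1 m^-2")
```

Running the sketch shows band-integrated radiance that is negligible in the NIR and SWIR and dominant in the LWIR, which is precisely why imaging in the former bands must be active.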

1.3 Challenges

The use of infrared images for automatic face recognition is not without challenges. For example, MWIR and LWIR images are sensitive to the environmental temperature, as well as the emotional, physical and health condition of the subject, as illustrated in Fig. 2. They are also affected by alcohol intake. Another potential problem is that eyeglasses are opaque to the greater part of the IR spectrum (LWIR, MWIR and SWIR) [23]. This means that a large portion of the face of a subject wearing eyeglasses may be occluded, causing the loss of important discriminative information. Unsurprisingly, each of the aforementioned challenges has motivated a new research direction. Some researchers have suggested fusing the information from IR and visible modalities as a possible solution to the problem posed by the opaqueness of eyeglasses [1]. Others have described methods which use thermal infrared images to extract a range of invariant features such as facial vascular networks [20, 24] or blood perfusion data [25] in order to overcome the temperature dependency of thermal “appearance”. Another consideration of interest pertains to the impact of sunlight if recognition is performed outdoors and during daytime. Although invariant to changes in illumination by visible light itself (by definition), the infrared “appearance” in the NIR and SWIR sub-bands is affected by sunlight, which has significant spectral components at the corresponding wavelengths. This is one of the key reasons why NIR and SWIR based systems which perform well indoors struggle when applied outdoors [26, 27].

Figure 2: Thermal IR images of a person acquired during the course of an average day (a), and following exposure to cold (b). Note that the images were enhanced and are shown in false colour for easier visualization.

1.4 Aims and Organization

The aim of this paper is to present a thorough literature review of the growing and increasingly important problem of infrared face recognition. In comparison with the already published reviews of the field, by Kong et al. [1], Akhloufi et al. [28] and Ghiass et al. [29, 30], the present paper makes several important contributions. Firstly, we survey a much greater corpus of relevant work. What is more, we include and give particular emphasis to the most recent advances in the field. As such, our review is both the most comprehensive and the most up-to-date review of infrared based face recognition to date. Finally, our work is distinguished from other reviews of the field also by its original categorization of different methodologies, which adds further insight into the evolution of dominant research trends.

The remainder of this paper is organized as follows. Firstly, the inherent advantages and disadvantages of infrared data in the context of face recognition are discussed in Sec. 2. Sec. 3 comprises the main part of the paper. This is where we describe different recognition approaches proposed in the literature, grouped by the methodology or the type of features employed for recognition. Sec. 4 which follows aims to survey various databases of infrared facial images. Our focus was on free databases, but a number of proprietary databases which have gained prominence through important peer-reviewed publications are included as well. Finally, the most important conclusions and trends in the field to date are summarized in Sec. 5.

2 Infrared Data: Advantages and Disadvantages

Many of the methods for infrared based face recognition have been inspired by, or are verbatim copies of, algorithms which were initially developed for visible spectrum recognition. In most cases, these methods make little use of the information about the spectrum which was used to acquire images. However, the increasing appreciation of the challenges encountered in trying to robustly match infrared images strongly suggests that domain specific properties of the data should be exploited more. Indeed, as we discuss in Sec. 3 and 5, the recent trend in the field has been moving in this direction, with increasingly complex IR specific models being proposed. Thus, in this section we focus on the relevant differences of practical significance between infrared and visible spectrum images. The use of infrared imagery provides several important advantages as well as disadvantages; we start with a summary of the former.

2.1 Advantages of Infrared Data in Automatic Face Recognition

Much of the early work on the potential of infrared images as identity signatures was performed by Prokoski et al. [31, 32, 33]. They were the first to advance the idea that infrared “appearance” could be used to extract robust biometric features which exhibit a high degree of uniqueness and repeatability.

Facial expression and pose changes are two key factors that a face recognition system should be robust to for it to be useful in most practical applications of interest. By comparing image space differences of thermal and visible spectrum images, Friedrich et al. [17] found that thermal images are less affected by changes in pose or facial expression than their visible spectrum counterparts. An example is shown in Fig. 3. Illumination invariance of different infrared sub-bands was analyzed in detail by Wolff et al. [34] who showed the superiority of infrared over visible data with respect to this important nuisance variable.

Figure 3: Examples of (a) visible spectrum images and (b) the corresponding thermograms of an individual across different poses/views [17]. Note that the visible and thermal images were not acquired concurrently so the poses in (a) and (b) are not exactly the same.

The very nature of thermal imaging also opens the possibility of non-invasive extraction and use of superficial anatomical information for recognition. Blood vessel patterns are one such example. As they continually transport circulating blood, blood vessels are somewhat warmer than the surrounding tissues. Since thermal cameras capture the heat emitted by a face, standard image processing techniques can be readily used to extract blood vessel patterns from facial thermograms. An important property of these patterns which makes them particularly attractive for use in recognition is that the blood vessels are “hardwired” at birth and form a pattern which remains virtually unaffected by factors such as aging, except for predictable growth [35]. Moreover, it appears that the human vessel pattern is robust enough to facilitate scaling up to large populations [33]. Prokoski et al. estimate that about 175 blood vessel based minutiae can be extracted from a full facial image [33] which, they argued, can exhibit a far greater number of possible configurations than the size of the foreseeable maximum human population. It should be noted that the authors did not propose a specific algorithm to extract the minutiae in question. In the same work, the authors also argued that forgery attempts and disguises can both be detected by infrared imaging. The key observation is that the temperature distribution of artificial facial hair or other facial wear differs from that of natural hair and skin, allowing them to be differentiated one from another.

2.1.1 The Twin Paradox

An interesting question first raised by Prokoski et al. [33] concerns thermograms of monozygotic twins. The appearance of monozygotic twins (or “identical” in common vernacular) is nearly identical in the visible spectrum. Using a small number of thermograms of monozygotic twins which were qualitatively assessed for similarity, Prokoski et al. found that the difference in appearance was significantly greater in the thermal than in the visible spectrum, and sufficiently so to allow for them to be automatically differentiated. This hypothesis was disputed by subsequent contradictory findings of Chen et al. [36]. However, the weight of evidence provided both by Prokoski et al. as well as Chen et al. is inadequate to allow for a confident conclusion to be made. Both positive and negative claims are based on experiments which use little data and lack sufficient rigour. In addition, it is plausible that the truth may be somewhere in the middle, that is, that in some cases monozygotic twins can be differentiated from their thermograms and in others not, depending on a host of physiological variables.

2.2 Limitations of Infrared Data in Automatic Face Recognition

In the context of automatic face recognition, the main drawback specific to the thermal sub-band images (or thermograms, as they are often referred to), the most often used sub-band of the infrared spectrum, stems from the fact that the heat pattern emitted by the face is affected by a number of confounding variables, such as ambient temperature, air flow conditions, exercise, postprandial metabolism, illness and drugs [33]. Sensitivity to ambient temperature is illustrated on an example in Fig. 4 (a–d). Some of the confounding variables produce global, others local thermal appearance changes. Wearing clothes, experiencing stress, blushing, having a headache or an infected tooth are examples of factors which can effect localized changes.


Figure 4: (a–d) Thermal infrared images of the same person taken at ambient temperatures of 28.4°C, 28.7°C, 28.9°C and 29.3°C [37]. Regions marked in red correspond to heat intensity values exceeding 93% of the maximal heat value representable in the images. (e) Visible and (f) false colour thermal images of a person wearing eyeglasses. Notice the complete loss of information around the eyes in the thermal image. The visible spectrum image is affected much less: some information is lost due to localized specular effects and the occlusion of the face by the frame of the eyeglasses.

The high sensitivity of the facial thermogram to a large number of extrinsic factors makes the task of finding persistent and discriminative features a challenging one. It also lends support to the ideas first voiced by Prokoski et al. who argued against the use of thermal appearance based methods in favour of anatomical feature based approaches invariant to many of the aforementioned factors. As we will discuss in Sec. 3.2, this direction of infrared based face recognition has indeed attracted a substantial research effort.

Another drawback of using the infrared spectrum for face recognition is that glass, and thus eyeglasses, are opaque to wavelengths in the SWIR sub-band and longer. Consequently an important part of the face, one rich in discriminative information, may be occluded in the corresponding images. In particular, the absence of appearance information around the eyes can greatly decrease recognition accuracy [38]. Multi-modal fusion based methods have been particularly successful in dealing with this problem, as described in detail in Sec. 3.4.3.

Lastly, a major challenge when NIR and SWIR sub-bands are used for recognition, stems from their sensitivity to sunlight which has significant spectral components at the corresponding wavelengths [26, 27]. In this sense, the problem of matching images acquired in NIR and SWIR sub-bands is similar to matching visible spectrum images.

3 Face Recognition Using Infrared

In this review, we recognize four main groups of face recognition methodologies which use infrared data: holistic appearance based, feature based, multi-spectral based, and multi-modal fusion based. Holistic appearance methods use the entire infrared appearance image of a face for recognition. Feature based approaches use infrared images to extract salient face features, such as facial geometry, its vascular network or blood perfusion data. Multi-spectral approaches either model the process of infrared image formation in order to decompose images of faces, or directly use data from multi-spectral or hyper-spectral imaging sensors to obtain facial images across different frequency sub-bands. Multi-modal fusion based approaches combine information contained in infrared images with information contained in other modalities, such as visible spectrum data, with the aim of exploiting their complementary advantages. As the understanding of the challenges of using infrared data for face recognition has increased, this last direction of research has become increasingly active.

3.1 Appearance-Based Methods

The earliest attempts at examining the potential of infrared imaging for face recognition date back to 1992 and the work of Prokoski et al. [31]. Their work introduced the concept of “elementary shapes” extracted from thermograms, which are likened to fingerprints. While precise technical detail of the method used to extract these elementary shapes is lacking, it appears that they are isothermal regions segmented out from an image, as illustrated in Fig. 5. There is no published record of the effectiveness of this representation.

Figure 5: Images of “elementary shapes” proposed by Prokoski et al. [31].

3.1.1 Early Approaches

Perhaps unsurprisingly, most of the automatic methods which followed the work of Prokoski et al. closely mirrored in their approach methods developed for the more popular visible spectrum based recognition. Generally, these used holistic face appearance in a simple statistical manner, with little attempt to achieve any generalization, relying instead on the availability of training data with sufficient variability of possible appearance for each subject.

One of the first attempts at using infrared data in an automatic face recognition system was described by Cutler [39]. His method was entirely based on the popular Eigenfaces method proposed by Turk and Pentland [40]. Using a database of 288 thermal images (12 images for each of the 24 subjects in the database) which included limited pose and facial expression variation, Cutler reported rank-1 recognition rates of 96% for frontal and semi-profile views, and 100% for profile views. These recognition rates compared favourably with those achievable using the same methodology on visible spectrum images. Following these promising results, many of the subsequently developed algorithms also adopted Eigenfaces as the baseline classifier. For example, findings similar to those made by Cutler were independently reported by Socolinsky et al. [41].

In their later work, Socolinsky et al. [42] and Selinger et al. [43, 44] extended their comparative evaluation of thermal and visible data based recognition using a wider range of linear methods: Eigenfaces (that is, principal component analysis), linear discriminant analysis, local feature analysis and independent component analysis. Their results corroborated previous observations made in the literature on the superiority of the thermal spectrum for recognition in the presence of a range of nuisance variables. However, the conclusions that could be drawn from their analysis of different recognition approaches, or indeed that of Cutler, were limited by the insufficiently challenging data sets which were used: pose and expression variability was small, training and test data were acquired in a single session, and the subjects wore no eyeglasses. This is reflected in the fact that all of the evaluated algorithms achieved comparable, and in practical terms high, recognition rates (approximately 93-98%).

3.1.2 Effects of Registration

In practice, after detection faces are still insufficiently well aligned (registered) for pixel-wise comparison to be meaningful. The simplest and the most direct way of registering faces is by detecting a discrete set of salient facial features and then applying a geometric warp to map them into a canonical frame. Unlike in the case of images acquired in the visible spectrum, in which several salient facial features (such as the eyes and the mouth) can usually be reliably detected [45, 46, 47], most of the work to date supports the conclusion that salient facial feature localization in thermal images is significantly more challenging. Different approaches, which mainly focus on the eyes, were described by Tzeng et al. [48], Arandjelović et al. [38], Jin et al. [49], Bourlai et al. [50, 51] and Martinez et al. [52]. What is more, the effect of feature localization errors and thus registration errors seems to be greater for thermal than visible spectrum images. This was investigated by Chen et al. [53] who demonstrated a substantial reduction in thermal based recognition rates when small localization errors were synthetically introduced to manually marked eye positions.
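As a concrete illustration of such canonical-frame registration, the sketch below warps a face so that two detected eye centres land on fixed positions via a similarity transform. It is a minimal sketch, not any specific published system; the canonical eye coordinates and the output size are illustrative assumptions.

```python
# Hedged sketch: register a face to a canonical frame from two eye centres.
import numpy as np
from skimage import transform

def register_face(image, left_eye, right_eye, out_size=(128, 128)):
    """Warp `image` so the detected eyes land on fixed canonical positions."""
    src = np.array([left_eye, right_eye], dtype=float)  # detected (x, y) points
    dst = np.array([[38.0, 48.0], [90.0, 48.0]])        # assumed canonical (x, y)
    tform = transform.SimilarityTransform()
    tform.estimate(src, dst)                            # rotation + scale + shift
    # warp expects the inverse map (output -> input coordinates)
    return transform.warp(image, tform.inverse, output_shape=out_size)
```

Because a similarity transform has four degrees of freedom, two point correspondences determine it exactly; any eye localization error therefore propagates directly into the registration, which is why the feature localization errors discussed above matter so much.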

Zhao et al. [54] circumvent the problem of localizing the eyes in passively acquired images by their use of additional active NIR data. A NIR lighting source placed close to and aligned with the camera axis is used to illuminate the face. Because the interior of the eyes reflects the incident light the pupils appear distinctively bright and as such are readily detected in the observed image (the so-called “bright pupil” effect). Zhao et al. use the locations of pupils to register images of faces, which are then represented using their DCT coefficients and classified using a support vector machine. A related approach has also been described by Zou et al. [55].
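A hedged sketch of such a pipeline, in the spirit of (but not reproducing) the method of Zhao et al., is shown below: low-frequency 2D DCT coefficients of registered faces are classified with a support vector machine. The number of retained coefficients and the kernel choice are assumptions.

```python
# Minimal sketch: DCT descriptor + SVM classification of registered faces.
import numpy as np
from scipy.fft import dctn
from sklearn.svm import SVC

def dct_descriptor(face, n=10):
    """Keep the n x n lowest-frequency 2D DCT coefficients as a feature vector."""
    coeffs = dctn(face.astype(float), norm="ortho")
    return coeffs[:n, :n].ravel()

def train_classifier(faces, labels):
    """faces: (num_images, H, W) array; labels: subject identities."""
    X = np.array([dct_descriptor(f) for f in faces])
    return SVC(kernel="linear").fit(X, labels)
```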

3.1.3 Recent Advances in IR Appearance Based Recognition

Although the general trend in the field has been away from appearance based approaches and in the direction of feature and model based methods, the former have continued to attract some research interest. Much like the initial work, recent advances in appearance based IR face recognition have closely mirrored research in visible spectrum based recognition. Progress in comparison with the early work is mainly to be found in the use of more sophisticated statistical techniques. For example, Elguebaly and Bouguila [56] recently described a method based on a generalized Gaussian mixture model, the parameters of which are learnt from a training image set using a Bayesian approach. Although substantially more complex, this approach did not demonstrate a statistically significant improvement in recognition on the IRIS Thermal/Visible database (see Sec. 4.2), both methods achieving a rank-1 rate of approximately 95%. Lin et al. [57] were the first to investigate the potential of the increasingly popular compressive sensing in the context of IR face recognition. Using a proprietary database of 50 persons with 10 images per person, their results provided some preliminary evidence for the superiority of this approach over wavelet based decomposition (also see Sec. 3.2.2).

Considering that the development of appearance based methods has nearly exclusively focused on the use of more sophisticated statistical techniques (rather than the incorporation of data specific knowledge, say), it is a major flaw in this body of research that the data sets used for evaluation have not included the types of intra-personal variations that appearance based methods are likely to be sensitive to. Indeed, none of the data sets that we are aware of included intra-personal variations due to differing emotional states, alcohol intake or exercise, for example, or even ambient temperature. This observation casts a shadow on the reported results and impedes further development of algorithms which could cope with such variations in a realistic, practical setup.

3.2 Feature-Based Methods

An early approach which uses features extracted from thermal images, rather than raw thermal appearance, was proposed by Yoshitomi et al. [58]. Following the localization of a face in an image, their method was based on combining the results of neural network based classification of grey level histograms and locally averaged appearance, and supervised classification of a facial geometry based descriptor. The proposed method was evaluated across room temperature variations between 285K and 302K. As expected, the highest recognition rates (over 92%) were attained when both training and test data were acquired at the same room temperature. However, the significant drop to 60% for the highest temperature difference of 17K between training and test data demonstrated the lack of robustness of the proposed features and highlighted the need for the development of discriminative features exhibiting a higher degree of invariance to the confounding variables expected in practice. Yoshitomi et al. did not investigate the effectiveness of their method in the presence of other nuisance factors, such as pose or expression.

3.2.1 Infrared Local Binary Patterns

In a series of influential works, Li et al. [59, 60, 61, 26] were the first to use features based on local binary patterns (LBP) [62] extracted from infrared images. They apply their algorithm in an active setting which uses strong NIR light-emitting diodes, coaxial with the direction of the camera. This setup ensures both that the face is illuminated as homogeneously as possible, thus removing the need of algorithmic robustness to NIR illumination, as well as that the eyes can be reliably detected using the bright pupil effect. Evaluated in an indoor setting and with cooperative users, their system achieved impressive accuracy. However, as noted by Li et al. [26] themselves, it is unsuitable for uncooperative user applications or outdoor use due to the strong NIR component of sunlight (see Sec. 2).
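As an illustration of this family of representations, the sketch below pools uniform LBP codes into per-block histograms, the descriptor form commonly used with LBP based face recognition. The grid size and the (P, R) parameters are illustrative assumptions rather than the exact settings of Li et al.

```python
# Hedged sketch: uniform LBP codes pooled into per-block histograms.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram_descriptor(face, P=8, R=1, grid=(8, 8)):
    codes = local_binary_pattern(face, P, R, method="uniform")  # values 0..P+1
    n_bins = P + 2
    gh, gw = grid
    bh, bw = face.shape[0] // gh, face.shape[1] // gw
    hists = []
    for i in range(gh):
        for j in range(gw):
            block = codes[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
            h, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
            hists.append(h / max(h.sum(), 1))  # L1-normalize each block
    return np.concatenate(hists)
```

The per-block pooling preserves coarse spatial layout while the LBP codes themselves are invariant to monotonic grey-level changes, which is what makes the representation attractive for actively illuminated NIR data.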

The use of local binary patterns was also investigated by Maeng et al. [63], who applied them in a multi-scale framework on NIR imagery acquired at a distance (up to 60m) with limited success, dense SIFT based features proving more successful in their recognition scenario. A good comparative evaluation of local binary patterns in the context of a variety of linear and kernel methods was recently published by Goswami et al. [64].

3.2.2 Wavelet Transform

Owing to its ability to capture both frequency and spatial information, the wavelet transform has been studied extensively as a means of representing a wide range of 1D and 2D signals, including face appearance in the visual spectrum. Srivastava et al. [65, 66] were the first to investigate the use of wavelets for extracting robust features from face “appearance” images in the infrared spectrum. They described a system which uses the wavelet transform based on a bank of Gabor filters. The marginal density functions of the filtered features are then modelled using Bessel K forms, which are matched using the simple L2-norm. Srivastava et al. reported a remarkable fit between the observed and the estimated marginals across a large set of filtered images. Evaluated on the Equinox database their method achieved a nearly perfect recognition rate, and on the FSU database (the two databases are described in Sec. 4.1 and 4.7) it outperformed both Eigenfaces and independent component analysis based matching. A similar approach was also described by Buddharaju et al. [67]. The method of Nicolo and Schmid [19] also adopts Gabor wavelet features at its core and encodes the responses using the recently introduced Weber local descriptor [68] and local binary patterns.
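The front end shared by these methods can be sketched as a small Gabor filter bank, as below; the chosen frequencies and orientations are assumptions, and the subsequent statistical modelling of the response marginals (e.g. by Bessel K forms) is omitted.

```python
# Hedged sketch: magnitude responses of a small Gabor filter bank.
import numpy as np
from skimage.filters import gabor

def gabor_responses(face, frequencies=(0.1, 0.2, 0.3), n_orientations=4):
    """Return the magnitude responses of a bank of Gabor filters."""
    responses = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(face, frequency=f, theta=theta)
            responses.append(np.hypot(real, imag))  # per-pixel magnitude
    return responses  # downstream: model the marginal density of each response
```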

3.2.3 Curvelet Transform

The curvelet transform is an extension of the wavelet transform in which the degree of orientational localization is dependent on the scale of the curvelet [69]. For a variety of natural images, the curvelet transform facilitates a sparser representation than wavelet transforms do, with effective spatial and directional localization of edge-like structures. Xie et al. [70, 71, 72] described the first infrared based face recognition system which uses the curvelet transform for feature extraction. Using a simple nearest neighbour classifier, in their experiments the method demonstrated a slight advantage (of approximately 1-2%) over simple linear discriminant based approaches, but with a significant improvement in computational and storage demands.

3.2.4 Vascular Networks

Although the idea of using the superficial vascular network of a face to derive robust features for recognition dates as far back as the work of Prokoski et al. [31], it is only recently that the first automatic methods were described in the literature. The first corpus of work based around this idea was published by Buddharaju et al. [20, 73, 24] with subsequent further contributions by Gault et al. [74] and Seal et al. [75]. Following automatic background-foreground segmentation of a face, Buddharaju et al. first extract blood vessels from an image using simple morphological filters, as shown in Fig. 6(a-d). The skeletonized vascular network is then used to localize salient features of the network which they term thermal minutia points and which are similar in nature to the minutiae used in fingerprint recognition. Indeed, the authors adopt a method of matching sets of minutia points already widely used in fingerprint recognition, using relative minutiae orientations on local and global scales. Unsurprisingly, the method’s performance was best when the semi-profile pose was used for training and querying, rather than the frontal pose. This finding is similar to what has repeatedly been noted by multiple authors for both human and computer based recognition in the visible spectrum [76, 77, 10]. While images of frontally oriented faces contain the highest degree of appearance redundancy, they limit the amount of discriminative information available from the sides of the face. In the multi-pose training scenario, rank-1 recognition of approximately 86% and an equal error rate of approximately 18% were achieved. While, as the authors note, some of the errors can be attributed to incorrectly localized thermal minutia points, the main reason for the relatively poor performance of their method is to be found in the sensitivity of their geometry based approach to out-of-plane rotation and the resulting distortion of the observed vascular network shape.

Figure 6: One of the major limitations of the vascular network based approach proposed by Buddharaju et al. lies in its ‘crisp’ binary nature: a particular pixel is deemed either a part of the vascular network or not. The consequence of this is that the extracted vascular network is highly sensitive to the scale of the input image (and thus to the distance of the user from the camera as well as the spatial resolution of the camera). (a–d) Even small changes in face scale (100%, 90%, 80% and 70% of the original) can effect large topological changes on the result (note that the representation of interest is the vascular network, shown in black, which is only superimposed on the images it is extracted from for the benefit of the reader). (e–h) In contrast, the vesselness response based representation of Ghiass et al. [78, 79], shown at the same four scales, encodes the certainty that a particular pixel locus is a reliable vessel pattern, and exhibits far greater resilience to scale changes.
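A minimal sketch of morphology based vessel pattern extraction in the spirit of Buddharaju et al., though not their exact algorithm, is given below. Vessels appear as thin, slightly warmer ridges, so a white top-hat emphasizes them before thresholding and skeletonization; the structuring element size and the use of Otsu's threshold are assumptions.

```python
# Hedged sketch: 'vascular network' extraction via simple morphology.
from skimage.morphology import white_tophat, skeletonize, disk
from skimage.filters import threshold_otsu

def extract_vessel_skeleton(thermogram, selem_radius=5):
    ridges = white_tophat(thermogram, disk(selem_radius))  # thin warm structures
    binary = ridges > threshold_otsu(ridges)               # crisp vessel mask
    return skeletonize(binary)                             # 1-pixel-wide network
```

Note that the hard threshold is exactly the ‘crisp’ decision criticized in Fig. 6; the vesselness response of Ghiass et al. replaces the binary mask with a continuous confidence map.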

In their more recent work, Buddharaju et al. [80] improve their method on several accounts. Firstly, they introduce a post-processing step in their vascular network segmentation algorithm, with the aim of removing spurious segments which, as mentioned previously, are responsible for some of the matching errors observed of their initial method [24]. More significantly, using an iterative closest point algorithm Buddharaju et al. now also non-rigidly register two vascular networks which are being compared as a means of correcting for the distortion effected by out-of-plane head rotation. Their experiments indeed demonstrate the superiority of this approach over that proposed previously.

Cho et al. [81] describe a simple modification of the thermal minutia point based approach of Buddharaju et al. which appends the location of the face centre (estimated from the segmented foreground mask) to the vectors corresponding to minutia point loci. Their method significantly outperformed Naïve Bayes, multilayer perceptron and Adaboost classifiers, achieving a false acceptance rate of 1.2% at a false rejection rate of 0.1% on the Equinox database (see Sec. 4).

The most recent contribution to the corpus of work on vascular network based recognition was made by Ghiass et al. [78, 79]. There are several important aspects of novelty in the approach they describe. Firstly, instead of seeking a binary representation in which each pixel either ‘crisply’ belongs or does not belong to the vascular network, the baseline representation of Ghiass et al. smoothly encodes this membership by a confidence level in the interval [0, 1]. This change of paradigm, further embedded within a multi-scale vascular network extraction framework, is shown to achieve better robustness to face scale changes (e.g. due to different resolutions of query and training images, or indeed different user-camera distances), as illustrated in Fig. 6. The second significant contribution of this work concerns recognition across pose, which is a major challenge for previously proposed vascular network based methods. The method of Ghiass et al. achieves pose invariance by geometrically warping images to a canonical frame. Ghiass et al. are the first to show how the active appearance model (AAM) [82] can be applied to IR images of faces and, specifically, they show how the difficult problem of AAM convergence in the presence of many local minima can be addressed by pre-processing thermal IR images in a manner which emphasizes discriminative information content [78]. In their most recent work, recognition across the entire range of poses from frontal to profile is achieved by training an ensemble of AAMs, each ‘specializing’ in a particular region of the thermal IR face space corresponding to an automatically determined cluster of poses and subject appearances [79].

Lastly, it should be noted that Ghiass et al. emphasize that “none of the existing publications on face recognition using ‘vascular network’ based representations provide any evidence that the extracted structures are indeed blood vessels. Thus the reader should understand that we use this term for the sake of consistency with previous work, and that we do not claim that what we extract in this paper is an actual vascular network. Rather we prefer to think of our representation as a function of the underlying vasculature” (the reader may also find the work of Gault et al. [74] useful in the consideration of this issue).

3.2.5 Blood Perfusion

A different attempt at extracting invariant features which also exploits the temperature differential between vascular and non-vascular tissues was proposed by Wu et al. [27] and Xie et al. [83]. Using a series of assumptions on the relative temperatures of the body’s deep and superficial tissues, and the ambient temperature, Wu et al. formulate a differential equation governing blood perfusion. The model is then used to compute a “blood perfusion image” from the original segmented thermogram of a face, as illustrated in Fig. 7. Finally, blood perfusion images are matched using a standard linear discriminant and an RBF network.

Figure 7: (a) A thermogram and the corresponding (b) blood perfusion image.

Following their original work, Wu et al. [84] and Xie et al. [85] introduced alternative blood perfusion models. The model described by Wu et al. was demonstrated to produce recognition results comparable to those of the more complex model proposed previously, while achieving greater time and storage efficiency. Xie et al. derived a model based on the Pennes equation which also outperformed the initial model described by Wu et al. [27].

Beyond their work on different blood perfusion models, in more recent work Wu et al. [25] also extend their classification method by another feature extraction stage. Instead of using the blood perfusion image directly, they first decompose the image of a face using the wavelet transform. They then apply a sub-block discrete cosine transform to the low frequency sub-band of the transform and use the obtained coefficients as an identity descriptor. Wu et al. demonstrate experimentally that this representation outperforms both purely discrete cosine transform based and purely wavelet transform based representations of the blood perfusion image.
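The descriptor just described can be sketched as follows, assuming a Haar wavelet, 8×8 blocks and retention of the lowest-frequency DCT coefficients of each block; none of these specific choices are claimed to be those of Wu et al.

```python
# Hedged sketch: wavelet low-frequency sub-band followed by a block DCT.
import numpy as np
import pywt
from scipy.fft import dctn

def perfusion_descriptor(perfusion_image, block=8, keep=4):
    ll, _ = pywt.dwt2(perfusion_image.astype(float), "haar")  # LL sub-band
    h, w = (ll.shape[0] // block) * block, (ll.shape[1] // block) * block
    feats = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(ll[i:i+block, j:j+block], norm="ortho")
            feats.append(c[:keep, :keep].ravel())  # low-frequency coefficients
    return np.concatenate(feats)
```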

3.3 Multi-Spectral and Hyper-Spectral Methods

Multi-spectral imaging refers to the process of concurrent acquisition of a set of images, each image corresponding to a different band of the electromagnetic spectrum. A familiar example is colour imaging in the visual spectrum, which acquires three images that correspond to what the human eye perceives as red, green, and blue sensations. In general, the number of bands can be much greater, and the widths of the sub-bands to which different images correspond can be wider or narrower. The terms multi-spectral and hyper-spectral imaging are often used interchangeably, while some authors make the distinction between sets of images acquired in discrete and separated narrow bands (multi-spectral) and sets of images acquired in usually wider but frequency-wise contiguous sub-bands (hyper-spectral). Henceforth in this paper we will consistently use the term multi-spectral imaging and specifically describe the data used by each method (or reference a standard database which contains this information).

The epidermal and dermal layers of skin make up a scattering medium that contains pigments such as melanin, hemoglobin, bilirubin, and β-carotene. Small changes in the distribution of these pigments induce significant changes in the skin’s spectral reflectance. In the method of Pan et al. [86], the structure of the skin, including sub-surface layers, is sensed using multi-spectral imaging in 31 narrow bands of the NIR sub-band. The authors measured the variability in spectral properties of the human skin and showed that there are significant differences in both amplitude and spectral shape of the reflectance curves for different subjects, while the spectral reflectance for the same subject did not change in different trials. They also observed good invariance of local spectral properties to face orientation and expression. On a proprietary database of 200 subjects with a diverse sex, age and ethnicity composition, the proposed method achieved recognition rates of 50%, 75%, and 92% for profile, semi-profile and frontal faces respectively. In their subsequent work, Pan et al. [87] examine the use of holistic multi-spectral appearance, in contrast to their previous work which used a sparse set of local features only. They apply Eigenfaces on images obtained from different NIR sub-bands, as a means of de-correlating the set of features used for classification. They also describe a method for synthesizing a discriminative signature image that they term the “spectral-face” image, obtained by sequential interlacing of images corresponding to different sub-bands, which in their experiments showed some advantage when used as input for Eigenfaces. An example of a spectral-face image and spectral-face based eigenfaces is shown in Fig. 8.


Figure 8: (a) The original visible spectrum image, (b) the corresponding spectral-face, and (c) the first five eigen-spectral-faces obtained by Pan et al. [87].
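Because the interlacing scheme is only loosely specified, the following sketch is one plausible interpretation of the spectral-face construction, taking successive image rows cyclically from the co-registered band stack; it is illustrative rather than the exact procedure of Pan et al.

```python
# Hedged sketch: one interpretation of 'spectral-face' row interlacing.
import numpy as np

def spectral_face(band_images):
    """band_images: (n_bands, H, W) co-registered multi-spectral stack."""
    stack = np.asarray(band_images, dtype=float)
    n, h, w = stack.shape
    out = np.empty((h, w))
    for row in range(h):
        out[row] = stack[row % n, row]  # row r taken from band (r mod n)
    return out
```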

3.3.1 Inter-Spectral Matching

The work by Bourlai et al. [88] is the only published account of the use of data acquired in the short wave infrared sub-band for face recognition. Following face localization using the detector of Viola and Jones [89], Bourlai et al. apply contrast limited adaptive histogram equalization and feed the result into: (i) a k-nearest neighbour based classifier, (ii) the VeriLook and (iii) the Identity Tools G8 commercial recognition systems. A particularly interesting aspect of this work is that Bourlai et al. investigate the possibility of inter-spectral matching. Their experimental results suggest that SWIR images can be matched to visible images with promising results. Klare and Jain [90] similarly match visible and NIR data, using local binary patterns and HoG local descriptors [91]. The success of these methods is not particularly surprising considering that the NIR and SWIR sub-bands of the infrared spectrum are much closer to the visible spectrum than the MWIR or LWIR sub-bands. Indeed, this premise is central to the methods described by Chen et al. [92], Lei and Li [93], Mavadati et al. [94] and Shao et al. [95], who show that visible spectrum data can be used to create synthetic NIR images.

A greater challenge was recently investigated by Bourlai et al. [96] who attempted to match MWIR to visible spectrum images. Following global affine normalization and contrast limited adaptive histogram equalization, the authors evaluated different pre-processing methods (the self-quotient image and difference of Gaussian based filtering), feature types (local binary patterns, pyramids of oriented gradients histograms [97] and scale invariant feature transform [98]) and similarity measures (chi-squared, distance transform based, Euclidean and city-block). No combination of the parameters was found to be very promising, the best performing patch based and difference of Gaussian filtered LBP on average achieving only approximately 40% correct rank-1 recognition rate on a 39 subject subset of the West Virginia University database (see Sec. 4.9).

3.4 Multimodal Methods

As predicted from theory and repeatedly demonstrated in the experiments summarized in the preceding sections, some of the major challenges for automatic face recognition methods which use infrared images are the opaqueness of eyeglasses in this spectrum and the dependence of the acquired data on the emotional and physical condition of the subject. In contrast, neither of these is a significant challenge in the visible spectrum, in which eyeglasses are largely transparent and physiological variables such as the emotional state have a negligible inherent effect on one’s appearance. Indeed, with respect to many challenging nuisance factors, the two spectra can be considered complementary. Consequently, it can be expected that this complementary information can be exploited to achieve a greater degree of invariance across a wide range of nuisance variables.

Most of the methods for fusing information from visible and infrared spectra described in the literature fall into one of two groups. The first of these is data-level fusion. Methods of this category construct features which inherit information from both modalities, and then perform learning and classification of such features. The second fusion type is decision-level. Methods of this group compute the final score of matching two individuals from matches independently performed in the visible and in the infrared spectra. To date, decision-level fusion predominates in the infrared face recognition literature.
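To make the distinction concrete, the sketch below implements decision-level fusion in its simplest form: per-modality match scores against the gallery are normalized and combined with a fixed weight before the best identity is selected. The z-score normalization and the equal default weighting are illustrative assumptions, not a published formulation.

```python
# Hedged sketch: the simplest form of decision-level score fusion.
import numpy as np

def znorm(scores):
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / (s.std() + 1e-9)

def decision_level_fusion(visible_scores, infrared_scores, w=0.5):
    """Fuse per-gallery-identity similarity scores from the two modalities."""
    fused = w * znorm(visible_scores) + (1 - w) * znorm(infrared_scores)
    return int(np.argmax(fused))  # index of the best matching gallery identity
```

Data-level fusion, by contrast, would combine the images or features themselves before any matching takes place.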

3.4.1 Early Work

Wilder et al. [99] were the first to investigate the possibility of fusion of visible and infrared data. They examined three different methods for representing and matching images, using (i) transform coded grey scale projections, (ii) Eigenfaces and (iii) pursuit filters, and compared the performance of the two modalities in isolation and their fusion. Decision-level fusion was achieved simply by adding the matching scores separately computed for visible and infrared data. The transform coded grey scale projections based method achieved the best performance of the three methods compared. Using this representation independently in the visible and thermal IR spectra, the two modalities achieved comparable recognition results. However, the proposed fusion method had a remarkable effect, reducing the error rate by approximately an order of magnitude (from approximately 10% down to approximately 1%).

3.4.2 Time-Lapse

The problem of time-lapse in recognition concerns the empirical observation, made across different recognition methodologies, that the performance of an algorithm degrades with the passage of time between training and test data even if the acquisition conditions are seemingly the same. The term “time-lapse” is, we would argue, a somewhat misleading one. Clearly, the drop in recognition performance is not caused by the passage of time per se but rather by a change in some tangible factor which affects facial appearance. This is particularly easy to illustrate on thermal data. Even if external imaging conditions are controlled or compensated for, none of the published work attempts to control or measure the effects of the emotional state or the level of excitement of the subject (this could be achieved using various proxy variables correlated with sympathetic nervous system output, such as perspiration rate, pulse, galvanic skin response and so on), or indeed the loss of calibration of the infrared camera [16]. The effect of external temperature on the temperature of the face is explicitly handled only in the method proposed by Siddiqui et al. [100], who used simple thresholding and image enhancement to detect and normalize the appearance of face regions with particularly delayed temperature regulation. Nonetheless, for the sake of consistency and uniformity with the rest of the literature, we shall continue using the term “time-lapse” with an implicit understanding of the underlying issues raised herein.

The effect of time-lapse on the performance of infrared based systems was investigated by Chen et al. [53, 101, 36]. They presented experiments evidencing the complementarity of visible and infrared spectra in the presence of time-lapse by showing that recognition errors achieved using the two modalities, and effected by the passage of time between training and query data acquisition, are largely uncorrelated. Similar observations were made by Socolinsky et al. [42]. Regardless of whether simple PCA features were used for matching or the commercial system developed by the Equinox Corporation, the benefit of fusing visible and infrared modalities was substantial even though the simple additive combination of matching scores was used.

3.4.3 Eyeglasses

Since eyeglasses are opaque to the infrared frequencies in the SWIR, MWIR and LWIR sub-bands [23], their presence is a major issue when this data is used for recognition, as some of the most discriminative regions of the face can be occluded. In contrast, the effect of eyeglasses on the appearance in the visible spectrum is far less significant. Gyaourova et al. [102] and Singh et al. [103] propose a data-level fusion approach whereby a genetic algorithm is used to select features computed separately in the visual and thermal infrared spectra. Using two types of features, Haar wavelet based and eigencomponent based, and the Equinox database, the proposed fusion method was shown to yield a superior performance compared to both purely visual and purely thermal infrared based matching, particularly so in the presence of eyeglasses or variable illumination. Inspired by this work, Chen et al. [104] describe a similar fusion method. Instead of a genetic algorithm, they employ a fuzzy integral neural network based feature selection algorithm which has the advantage of faster convergence and a greater probability of reaching a solution close to the global optimum.

Heo et al. [105] investigate both data-level and decision-level fusion. First, following the detection of eyeglasses, the corresponding image region is replaced with a generic eye template. As expected, the replacement of the eyeglass region with a generic template significantly improves recognition in the thermal but not in the visible spectrum. Data-level fusion is achieved by simple weighted addition of the corresponding pixels in mutually co-registered visible and thermal infrared images. The key contribution of this work pertains to the difference in performance observed between data-level and decision-level fusion. Interestingly, unlike in the case of data-level fusion where a remarkable performance improvement was observed, when fusion was performed at the decision-level the performance was actually somewhat worsened.

A similar approach to handling the occlusion of thermal infrared image regions by eyeglasses was taken by Kong et al. [106]. They replace an elliptical patch surrounding the eye occluded by eyeglasses with a patch representing the average eye appearance. Although differently implemented, the approach of Arandjelović et al. [38] is similar in spirit. Following the detection of eyeglasses, unlike Heo et al. and Kong et al., Arandjelović et al. do not remove the offending image region, but rather introduce a robust modification to canonical correlations based matching, which ignores the eyeglasses region when sets of images are compared.

3.4.4 Illumination

In addition to the problem posed by eyeglasses, in their work already described in Sec. 3.4.3 Heo et al. [105] also examined the effects of the proposed fusion on illumination invariance. Their results successfully substantiated the theoretically expected complementarity of infrared and visible spectrum data.

Socolinsky et al. [107, 108] extend their previous work [109] by describing a simple decision-level fusion based on a weighted combination of visible and thermal infrared based matching scores, and evaluate it in indoor and outdoor data acquisition environments. The more extreme illumination conditions encountered outdoors proved rather more challenging than the indoor environment, regardless of which modality or baseline matching algorithm was used for recognition. Although simple, their fusion approach did yield substantial improvements in all cases, but still failed to reach practically useful performance levels when applied outdoors.

In spirit, the work of Bhowmik et al. [110] builds on the contribution of Socolinsky et al. [107, 108]. Bhowmik et al. also investigate a simple weighted combination of visible and thermal infrared spectrum matching scores and report the performance of the fusion for different contributions of the two.

The limitations of the approaches of Socolinsky et al. and Bhowmik et al. were recognized by Arandjelović et al. [38], who demonstrated that the optimal weights in decision-level fusion are illumination dependent. In a series of works Arandjelović et al. [111, 112, 38] extend their method aimed at achieving illumination invariance using visible spectrum data only [113], which fused raw appearance and filtered appearance based matching scores, and apply it to the fusion of matching scores based on visible and thermal data. A block diagram of their system is illustrated in Fig. 9. Their main contribution is a fusion method which learns the optimal weighting of matching scores in an illumination-specific manner. Illumination specificity is achieved implicitly. Conceptually, they exploit the observation that if the best match in the visible domain is sufficiently confident, the illumination change between training and novel data is small, so more weight should be placed on the visible spectrum match. If the best match is insufficiently confident, the illumination change is significant and more weight is placed on the infrared data, which is largely unaffected by visible light. Conceptually similar is the fusion approach described by Moon et al. [114], which also adaptively controls the contributions of the visible and thermal infrared spectra. Unlike Arandjelović et al., who use a combination of filtered holistic and local appearances, Moon et al. represent images of faces using the coefficients obtained from a wavelet decomposition of an input image. Different wavelet based fusion approaches have also been proposed by Kwon et al. [115] and Zahran et al. [116].

Figure 9: The method proposed by Arandjelović et al. [111, 38] comprises (i) data preprocessing and registration, (ii) glasses detection and (iii) fusion of holistic and local face representations using visual and thermal modalities.
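The gist of such illumination-adaptive weighting can be sketched as follows, using the margin between the two best visible-spectrum scores as a confidence proxy and a logistic weighting curve; both of these choices are illustrative assumptions rather than the published formulation of Arandjelović et al.

```python
# Hedged sketch: confidence-gated fusion of visible and IR match scores.
import numpy as np

def adaptive_fusion(visible_scores, infrared_scores, sharpness=5.0):
    vis = np.asarray(visible_scores, dtype=float)
    ir = np.asarray(infrared_scores, dtype=float)
    top2 = np.sort(vis)[-2:]
    margin = top2[1] - top2[0]                     # visible match confidence
    w = 1.0 / (1.0 + np.exp(-sharpness * margin))  # high margin -> trust visible
    fused = w * vis + (1.0 - w) * ir
    return int(np.argmax(fused))
```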

3.4.5 Expression

The method proposed by Hariharan et al. [117] is one of a small number of data-level fusion approaches. Hariharan et al. produce a synthetic image which contains information from both visible and infrared spectra. The key element of their approach is empirical mode decomposition. After decomposing the corresponding and mutually co-registered visible and thermal infrared spectrum images into their intrinsic mode functions, a new image is produced as a re-weighted sum of the intrinsic mode functions of both modalities. The re-weighting coefficients are determined experimentally on a training set in an ad hoc, subjective manner which involves human judgement of how discriminative the resulting image appears. Hariharan et al. report that their method outperformed those proposed by Kong et al. [106] and by Rockinger and Fechner [118], particularly so in poor illumination conditions and in the presence of facial expression changes.

3.5 Other Approaches

Owing to the increasing popularity of research into infrared based recognition, there are a number of approaches in the literature which we did not discuss explicitly. These include the geometric invariant moment based approaches of Abas and Ono [119, 120, 121], the elastic graph matching based method of Hizem et al. [122], the isotherm based method of Tzeng et al. [48], the faceprints of Akhloufi and Bendada [123], the fusion work of Toh et al. [124, 125], and others [126, 127, 128, 129, 130]. Specifically, we did not describe (i) straightforward or minor extensions of the original approaches already surveyed and (ii) those methods which lack the weight of sufficient empirical evidence to support their competitiveness with the state-of-the-art at the time when they were first proposed. Nonetheless, references to these are provided herein for the sake of completeness and for the benefit of the reader.

4 Infrared Face Databases

The previous section makes it readily apparent that a major obstacle to understanding the relative merits of published work on infrared based face recognition lies in the evaluation methodology used to assess the effectiveness of proposed approaches. Different authors focus their attention on different nuisance variables and, in the best case, evaluate their method on appropriate data sets. However, it is largely unclear, at least on the basis of empirical evidence, how different methods compare to one another when evaluated on the same data, representative of that which may be acquired in a real-world application. In this section we review the most relevant databases of infrared imagery which have been collected for research purposes. We focus our attention on those which are public, that is, freely available. A quick reference summary of the key facts can be found in Table 1.

Table 1: A quick reference summary of the main databases of face images acquired in the infrared spectrum. For each database, the degree of variability due to each nuisance variable is indicated as substantial, limited, or little to none.

4.1 Equinox

The “Human Identification at a Distance” database [131], collected by the Equinox Corporation, has been the most widely used data set for the evaluation of infrared based face recognition algorithms in the literature. It is freely available for non-commercial use. The data set contains images of 90 individuals acquired in the (i) visible, (ii) long wave infrared, (iii) medium wave infrared, and (iv) short wave infrared spectral bands, using a setup of cameras co-registered to within a fraction of a pixel. Fig. 10 shows an example of a set of four concurrently acquired images. For each subject in the database, data was collected under three different controlled lighting conditions using a directional light source illuminating from the (i) frontal, (ii) left lateral and (iii) right lateral directions. In all cases the subject was facing the camera, so the database contains only frontal face images. Individuals wearing glasses were imaged both with and without them. Facial expression variability was introduced by two means. First, a 4-second video sequence was acquired at 10 fps of the subject pronouncing the vowels. In addition, the subject was explicitly asked to assume ‘smiling’, ‘frowning’ and ‘surprised’ expressions. Note that all images of a particular individual were acquired in a single session, making this data set unsuitable for the evaluation of robustness to time-lapse associated appearance changes. A comprehensive evaluation of different recognition approaches on the Equinox database was published by Hermosilla et al. [132].

Figure 10: Four concurrently acquired images from the Equinox “Human Identification at a Distance” database, respectively in the visible, long wave infrared, medium wave infrared and short wave infrared spectral bands. Images are co-registered to within a fraction of a pixel.

4.2 IRIS Thermal/Visible

The IRIS Thermal/Visible Face Database [133] is a free data set of thermal and visible spectrum images, collected across pose, illumination and expression variation. The set comprises 4228 pairs of images which were acquired concurrently but are not mutually co-registered. There are 32 individuals in the database, with 176–250 images per person. The five illumination conditions were obtained using different on/off combinations of two directional lateral light sources and one ambient light source: (i) all light sources off, (ii) only the ambient light on, (iii) the ambient and the left directional light on, (iv) the ambient and the right directional light on, and (v) all lights on. As in the Equinox database, images of the subject ‘smiling’, ‘frowning’ and exhibiting ‘surprise’ were acquired. Using a motorized setup, the camera viewing direction was controlled and images acquired every 36° across the 180° range, resulting in 11 images per modality for each illumination setting and subject expression. All data for a particular subject was acquired in a single session. Fig. 11 shows examples of images from the database.

Figure 11: Five pairs of matching visible (top row) and thermal (bottom row) images from the IRIS Thermal/Visible Face Database [133], showing a subject in the same pose under different illumination conditions. Note that the visible and thermal spectrum images are not mutually co-registered.

4.3 IRIS-M3

Much like the Equinox and IRIS Thermal/Visible data sets, IRIS-M3 [134] is a database which contains both thermal and visible spectrum images. Unlike the previous two databases, it also includes multi-spectral images acquired in 25 sub-bands of the visible spectrum. The acquisition of multi-spectral images was achieved using an electronically tunable liquid crystal filter coupled to a camera.

The IRIS-M3 data set contains images of 82 people of varying ethnicity, age and sex, and a total of 2624 images. Data was collected in two sessions. In the first session, which took place indoors, acquisition was performed under two illumination conditions: first using a halogen ambient lighting source and then a fluorescent ambient lighting source. Thus in both cases the faces were roughly homogeneously lit. In the second session, the acquisition of images was again performed under two illumination conditions: first using a fluorescent ambient lighting source indoors (as in the first session) and then outdoors in natural light. In the latter case, the subjects were oriented so that sunlight illuminated their faces from a lateral direction. The IRIS-M3 data set does not contain any pose or expression variation: the subjects were asked to face the camera and maintain a neutral facial expression. Example images of a single subject from the database are shown in Fig. 12.

(a) Fluorescent
(b) Thermal
(c) 480nm
(d) 540nm
(e) 600nm
(f) 660nm
(g) 720nm
(h) Sunlight
Figure 12: Eight images of a subject from the IRIS-M3 database. Shown are images acquired indoors in the (a) visible and (b) thermal spectrum, followed by (c,d,e,f,g) five multi-spectral images acquired in different sub-bands of the visible spectrum (these images are subtitled with the mean wavelength of the corresponding sub-band), and (h) an image acquired outdoors in natural daylight in a subsequent session.

4.4 University of Notre Dame (UND)

The University of Notre Dame data set (Collection C) [135] contains long wave infrared and visible spectrum images of 241 subjects under two illumination conditions. Three studio lights were used, one positioned directly in front of the subject and the other two in front and to the right and left of the subject. The first illumination condition was obtained by having the frontal light off and the remaining two lights on; the second by having all three lights switched on. For each illumination condition two images were taken, one with the subject in a neutral facial expression and one smiling. Thus in each session four images per modality per subject were taken. Data was collected in multiple sessions at weekly intervals, with different subjects participating in varying numbers of sessions. The database contains a total of 2492 images, some of which are shown in Fig. 13, and it is freely available upon request.

(a) Neutral expression
(b) Smiling expression
Figure 13: Visible and long wave infrared spectrum images of a person from the University of Notre Dame data set. The left-hand pair of images shows the subject in a neutral facial expression, while the right-hand pair shows the same subject smiling.

4.5 University of Houston (UH)

The University of Houston database consists of a total of 7590 thermal images of 138 subjects, with a uniform distribution of 55 images per subject. Subjects are of varying ethnicity, age and sex. With the exception of four subjects, from whom data was collected in two sessions six months apart, the data for a particular subject was acquired in a single session. The exact protocol used to introduce pose and expression variability in the data set was not described by the authors [24]. Example images are shown in Fig. 14. The database is available free of charge upon request.

Figure 14: False colour thermal appearance images of a subject in the five key poses in the University of Houston data set.

4.6 Surveillance Cameras Face Database (SCface)

The Surveillance Cameras Face Database [136] is a particularly interesting data set because it was acquired using a setup substantially different from those adopted for the collection of the other infrared databases described here. SCface has only recently been made public, which is why it was not used in any of the publications reviewed herein. Of all the publicly available databases, the variability of extrinsic factors such as illumination or pose in this data set is controlled the least. Images were collected in a “real-world” setup using a set of visual and thermal spectrum surveillance cameras imaging hallways of a University of Zagreb building. Thus illumination, pose, camera resolution, face scale (distance from the camera) and, to a lesser degree, facial expression are all variable. The data set contains 130 subjects and a total of 4160 images collected over five days (see Fig. 15).

Figure 15: A set of images from the Surveillance Cameras Face Database [136] collected at the University of Zagreb. Images were collected in a “real-world” setup using a set of surveillance cameras of different resolutions and quality. Illumination, pose and facial expression of subjects (University staff) were not explicitly controlled.

4.7 Florida State University (FSU)

The publicly available face data set collected at Florida State University comprises 234 images of 10 different subjects across a range of ad lib poses and facial expressions [65], as illustrated in Fig. 16. It is available for download at http://lcv.stat.fsu.edu.

Figure 16: Examples of images from the Florida State University infrared database showing typical pose and facial expression variability in the data set.

4.8 UC Irvine Hyperspectral (UC)

The University of California, Irvine collected a data set of multi-spectral images of 200 subjects, all captured under halogen ambient illumination [86]. Subjects were imaged with a neutral facial expression in the frontal, two profile and two semi-profile poses, as well as with a smiling expression in the frontal pose only. For each pose and illumination, 31 multi-spectral images were captured in 0.01 µm wide sub-bands of the near infrared spectrum. For twenty of the 200 subjects, data acquisition was repeated after a time lapse of up to five weeks. Fig. 17 (a) shows images of a subject for different poses and facial expressions, with multi-spectral images of two subjects in seven equidistant sub-bands covering the near infrared spectrum in Fig. 17 (b,c).

(a) Pose variation
(b) Person 1: seven multi-spectral images
(c) Person 2: seven multi-spectral images
Figure 17: (a) For the UC Irvine Hyperspectral data set subjects were imaged with a neutral facial expression in five different poses (the frontal pose twice) and smiling in the frontal pose. For each pose/expression combination, multi-spectral images were acquired in 0.01 µm wide sub-bands of the near infrared spectrum. (b,c) Multi-spectral images corresponding to seven equidistant (wavelength-wise) sub-bands spanning the near infrared spectrum.

4.9 West Virginia University Multispectral (WVUM)

The West Virginia University Multi-spectral database consists of visible and short wave infrared spectrum images of 50 subjects. In the visible spectrum, 25 frontal face images were captured for each subject in the database, giving a total of 1250 images. In the short wave infrared spectrum, faces were imaged in the frontal, and left and right semi-profile (67.5° from frontal) poses. For each pose nine multi-spectral images were acquired, corresponding to 100nm wide spectral sub-bands in the range from 950nm to 1650nm. Thus there are 1350 short wave infrared images in the database. Data for each person was collected in two sessions, up to a month apart. Example images are shown in Fig. 18.

(a) 950nm
(b) 1150nm
(c) 1350nm
(d) 1550nm
(e) Visible
Figure 18: Examples of images from the West Virginia University Multispectral database. Shown are matching images acquired in different spectral sub-bands.

4.10 The Hong Kong Polytechnic University NIR Face Database (PolyU-NIR)

The Hong Kong Polytechnic University NIR Face Database is one of the few freely available data sets containing images of faces acquired in the NIR sub-band of the infrared spectrum. It contains approximately 34,000 images of 335 individuals, with a moderate degree of scale, pose and facial expression variation within the data subset of each subject. Example images are shown in Fig. 19. More information on the database can be obtained from the original publication [137] and at http://www4.comp.polyu.edu.hk/~biometrics/polyudb_face.htm.

Figure 19: Examples of images from the Hong Kong Polytechnic University NIR Face Database. Shown are images of a single subject across a moderate degree of scale, pose and facial expression variation.

4.11 The Laval University Thermal IR Face Motion Database

The Laval University Thermal IR Face Motion Database is the only freely available data set which contains videos of faces acquired in the IR spectrum. It contains videos of 200 individuals of varying age, ethnicity and gender, with two sequences collected for each person. Each video sequence is 10 s long and was captured at 30 fps, resulting in 300 frames of 640×512 pixels. The imaged subjects were instructed to perform head motion covering the yaw range from frontal (0°) to approximately full profile (90°) face orientation relative to the camera, without any special attention to the tempo of the motion or the time spent in each pose. The subjects were also asked to display an arbitrary range of facial expressions. Examples of frames from a single video sequence are shown in Fig. 20. The data set is freely available for research purposes and can be obtained by contacting the authors [79].

Figure 20: False colour thermal appearance images of a subject in five arbitrary poses and facial expressions in the Laval University Thermal IR Face Motion data set.

5 Summary and Conclusions

Systems based on images acquired in the visible spectrum have reached a significant level of maturity, with some, albeit limited, practical success. A range of nuisance factors continue to pose serious problems when visible spectrum based face recognition methods are applied in a real-world setting, and dealing with illumination, pose and facial expression changes, as well as facial disguises, remains a major challenge. The use of infrared imaging, which has emerged as an alternative to visible spectrum based approaches, has attracted substantial research and commercial attention as a modality which could facilitate greater robustness to illumination and facial expression changes, facial disguises and dark environments. On the other hand, both theoretical and empirical evidence reveals a number of nuisance variables which affect infrared appearance too. These include occlusion by corrective eyeglasses, the person’s emotional state, postprandial thermogenesis, alcohol consumption and compensatory changes in body temperature in response to ambient temperature. Early work on infrared based face recognition, which mostly explored the use of standard statistical techniques applied on holistic appearance, was generally unsuccessful in dealing with the aforementioned challenges when applied on realistic data. Feature based methods, which extract more robust facial biometric characteristics from infrared images, have been more successful. Particularly interesting are methods based on the distribution of superficial vessels. Vascular network based methods extract and use this information explicitly, while blood perfusion based methods synthesize quasi-invariant images using peripheral blood flow models. We also expect that algorithms based on the concept of sparse representation, which have recently received a significant amount of attention in the sphere of visible spectrum based face recognition [138, 139], could offer interesting insights when applied on IR data.
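To make the last idea concrete, the sketch below shows how sparse representation classification in the style of Wright et al. [138] could be applied to vectorised IR face images; the Lasso coder from scikit-learn is an assumed stand-in for the ℓ1-minimisation solvers used in the original work, so this is an illustration of the concept rather than a reproduction of any published system.

```python
import numpy as np
from sklearn.linear_model import Lasso  # l1 coder standing in for an l1-minimisation solver

def src_classify(gallery, labels, probe, alpha=0.01):
    """Sparse representation classification (SRC), minimal sketch.

    gallery : (d, n) matrix whose columns are vectorised, l2-normalised
              training faces; labels : length-n array of class labels;
    probe   : vectorised test face of dimension d.
    The probe is coded as a sparse linear combination of the gallery
    and assigned to the class whose atoms reconstruct it with the
    smallest residual.
    """
    labels = np.asarray(labels)
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    coder.fit(gallery, probe)
    x = coder.coef_

    best_label, best_residual = None, np.inf
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)               # keep class-c coefficients only
        residual = np.linalg.norm(probe - gallery @ xc)  # class-wise reconstruction error
        if residual < best_residual:
            best_label, best_residual = c, residual
    return best_label
```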

A notable limitation which we found in all of the reviewed publications is of a methodological nature: despite the universal acknowledgment of the major challenges of infrared based face recognition, none of the reported experiments evaluate performance in the context of all of them. What is more, even papers which do use a public database often perform evaluation on only a subset of the data. This makes a direct comparison of different approaches difficult, as well as the assessment of their capacity for practical deployment. Consequently, we encourage efforts directed towards the collection of large scale data sets acquired in realistic conditions and their consistent use in the evaluation and reporting of novel recognition algorithms. Indeed, in this paper we also reviewed a range of data sets currently available to researchers.

Considering the results published to date, in the opinion of these authors two particularly promising ideas stand out: (i) the development of identity descriptors based on persistent physiological features such as vascular networks, and (ii) the use of methods for multi-modal fusion of complementary data types, most notably those based on visible and infrared images. Both are still in their early stages, with a potential for significant further improvement.

References

  • [1] S. Kong, J. Heo, B. Abidi, J. Paik, and M. Abidi. Recent advances in visual and infrared face recognition – a review. Computer Vision and Image Understanding, 97(1):103–135, 2005.
  • [2] R. Gross, I. Matthews, and S. Baker. Active appearance models with occlusion. Image and Vision Computing (special issue on Face Processing in Video), 1(6):593–604, 2006.
  • [3] V. Blanz and T. Vetter. Face recognition based on fitting a 3D morphable model. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(9):1063–1074, 2003.
  • [4] U. Mohammed, S. Prince, and J. Kautz. Visio-lization: generating novel facial images. ACM Transactions on Graphics (TOG), 28(3):57:1–57:8, 2009.
  • [5] M. Nishiyama and O. Yamaguchi. Face recognition using the classified appearance-based quotient image. In Proc. IEEE International Conference on Automatic Face and Gesture Recognition, pages 49–54, 2006.
  • [6] L. Wolf and A. Shashua. Learning over sets using kernel principal angles. Journal of Machine Learning Research, 4(10):913–931, 2003.
  • [7] T-J. Chin and D. Suter. A new distance criterion for face recognition using image sets. In Proc. Asian Conference on Computer Vision, pages 549–558, 2006.
  • [8] Y. M. Lui and J. R. Beveridge. Grassmann registration manifolds for face recognition. In Proc. European Conference on Computer Vision, 2:44–57, 2008.
  • [9] W. Fan and D-Y. Yeung. Face recognition with image sets using hierarchically extracted exemplars from appearance manifolds. In Proc. IEEE International Conference on Automatic Face and Gesture Recognition, pages 177–182, 2006.
  • [10] O. Arandjelović and R. Cipolla. An illumination invariant face recognition system for access control using video. In Proc. British Machine Vision Conference, pages 537–546, 2004.
  • [11] K. Lee and D. Kriegman. Online learning of probabilistic appearance manifolds for video-based recognition and tracking. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, 1:852–859, 2005.
  • [12] K. Bowyer, K. Chang, P. Flynn, and X. Chen. Face recognition using 2-D, 3-D, and infrared: is multimodal better than multisample? In Proc. IEEE, 94(11):2000–2012, 2006.
  • [13] G. Pan and Z. Wu. 3D face recognition from range data. International Journal of Image and Graphics, 5(3):573–593, 2005.
  • [14] A. Godil, S. Ressler, and R. Grother. Face recognition using 3D facial shape and color map information: comparison and combination. In Proc. SPIE, 5404:351–361, 2004.
  • [15] X. Zou, J. Kittler, and K. Messer. Illumination invariant face recognition: A survey. In Proc. IEEE Conference on Biometrics: Theory, Applications and Systems, 2007.
  • [16] X. Maldague. Theory and Practice of Infrared Technology for Non Destructive Testing. John-Wiley & Sons, 2001.
  • [17] G. Friedrich and Y. Yeshurun. Seeing people in the dark: Face recognition in infrared images. In Proc. British Machine Vision Conference, pages 348–359, 2003.
  • [18] H. Chang, A. Koschan, M. Abidi, S. G. Kong, and C. Won. Multispectral visible and infrared imaging for face recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 1–6, 2008.
  • [19] F. Nicolo and N. A. Schmid. A method for robust multispectral face recognition. In Proc. IEEE International Conference on Image Analysis and Recognition, 2:180–190, 2011.
  • [20] P. Buddharaju, I. T. Pavlidis, and P. Tsiamyrtzis. Physiology-based face recognition. In Proc. IEEE Conference on Advanced Video and Signal Based Surveillance, pages 354–359, 2005.
  • [21] I. Pavlidis and P. Symosek. The imaging issue in an automatic face/disguise detection system. In Proc. IEEE Workshop on Computer Vision Beyond the Visible Spectrum, pages 15–24, 2000.
  • [22] X. Zou, J. Kittler, and K. Messer. Face recognition using active near-ir illumination. In Proc. British Machine Vision Conference, pages 209–219, 2005.
  • [23] W. Tasman and E. A. Jaeger. Duane’s Ophthalmology. Lippincott Williams & Wilkins, 2009.
  • [24] P. Buddharaju, I. T. Pavlidis, P. Tsiamyrtzis, and M. Bazakos. Physiology-based face recognition in the thermal infrared spectrum. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(4):613–626, 2007.
  • [25] S. Q. Wu, L. Z. Wei, Z. J. Fang, R. W. Li, and X. Q. Ye. Infrared face recognition based on blood perfusion and sub-block DCT in wavelet domain. In Proc. International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR), 3:1252–1256, 2007.
  • [26] S. Li, R. Chu, S. Liao, and L. Zhang. Illumination invariant face recognition using near-infrared images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(4):627–639, 2007.
  • [27] S. Wu, W. Song, L.J. Jiang, S. Xie, F. Pan, W.Y. Yau, and S. Ranganath. Infrared face recognition by using blood perfusion data. In Proc. International Conference on Audio- and Video-Based Biometric Person Authentication, pages 320–328, 2005.
  • [28] M. Akhloufi, A. Bendada, and J. Batsale. State of the art in infrared face recognition. Quantitative Infrared Thermography, 5(1):3–26, 2008.
  • [29] R. S. Ghiass, A. Bendada, and X. Maldague. Infrared face recognition: A review of the state of the art. QIRT, pages 533–540, 2010.
  • [30] R. S. Ghiass, O. Arandjelović, A. Bendada, and X. Maldague. Infrared face recognition: a literature review. In Proc. International Joint Conference on Neural Networks, pages 2791–2800, 2013.
  • [31] F. J. Prokoski, R. B. Riedel, and J. S Coffin. Identification of individuals by means of facial thermography. In Proc. IEEE International Carnahan Conference on Security Technology (ICCST): Crime Countermeasures, pages 120–125, 1992.
  • [32] F. J. Prokoski. Method for identifying individuals from analysis of elemental shapes derived from biosensor data. US Patent 5,163,094, 1992.
  • [33] F. J. Prokoski and R. Riedel. BIOMETRICS: Personal Identification in Networked Society, chapter Infrared Identification of Faces and Body Parts. Kluwer Academic Publishers, 1998.
  • [34] L. B. Wolff, D. A. Socolinsky, and C. K. Eveland. Quantitative measurement of illumination invariance for face recognition using thermal infrared imagery. In Proc. IEEE International Workshop on Object Tracking and Classification Beyond the Visible Spectrum, 2001.
  • [35] A. B. Persson and I. R. Buschmann. Vascular growth in health and disease. Frontiers of Molecular Neuroscience, 4(14), 2011.
  • [36] X. Chen, P. Flynn, and K. Bowyer. IR and visible light face recognition. Computer Vision and Image Understanding, 99(3):332–358, 2005.
  • [37] S. Wu, Z. Fang, Z. Xie, and W. Liang. Recent Advances in Face Recognition – Chapter 13: Blood Perfusion Models for Infrared Face Recognition. InTech, 2008.
  • [38] O. Arandjelović, R. I. Hammoud, and R. Cipolla. Thermal and reflectance based personal identification methodology in challenging variable illuminations. Pattern Recognition, 43(5):1801–1813, 2010.
  • [39] R. Cutler. Face recognition using infrared images and eigenfaces. Technical Report, University of Maryland, 1996.
  • [40] M. Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1):71–86, 1991.
  • [41] D. A. Socolinsky, L. B. Wolff, J. D. Neuheisel, and C. K. Eveland. Illumination invariant face recognition using thermal infrared imagery. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, 1:527, 2001.
  • [42] D. A. Socolinsky and A. Selinger. Thermal face recognition over time. In Proc. IAPR International Conference on Pattern Recognition, 4:187–190, 2004.
  • [43] A. Selinger and D. Socolinsky. Appearance-based facial recognition using visible and thermal imagery: a comparative study. Technical Report, Equinox Corporation, 2002.
  • [44] A. Selinger and D. Socolinsky. Face recognition in the dark. In Proc. IEEE Conference on Computer Vision and Pattern Recognition Workshops, 8:129, 2004.
  • [45] D. Cristinacce, T. F. Cootes, and I. Scott. A multistage approach to facial feature detection. In Proc. British Machine Vision Conference, 1:277–286, 2004.
  • [46] O. Arandjelović and A. Zisserman. Automatic face recognition for film character retrieval in feature-length films. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, 1:860–867, 2005.
  • [47] L. Ding and A. M. Martinez. Features versus context: An approach for precise and detailed detection and delineation of faces and facial features. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(11):2022–2038, 2010.
  • [48] H.-W. Tzeng, H.-C. Lee, and M.-Y. Chen. The design of isotherm face recognition technique based on nostril localization. In Proc. International Conference on System Science and Engineering, pages 82–86, 2011.
  • [49] T. Jin, C. Shouming, X. Xiuzhen, and J. Gu. Eyes localization in an infrared image. In Proc. IEEE International Conference on Automation and Logistics (ICAL), pages 217–222, 2009.
  • [50] T. Bourlai and Z. Jafri. Eye detection in the middle-wave infrared spectrum: Towards recognition in the dark. In Proc. IEEE International Workshop on Information Forensics and Security (WIFS), pages 1–6, 2011.
  • [51] T. Bourlai, C. Whitelam, and I. Kakadiaris. Pupil detection under lighting and pose variations in the visible and active infrared bands. In Proc. IEEE International Workshop on Information Forensics and Security (WIFS), pages 1–6, 2011.
  • [52] B. Martinez, X. Binefa, and M. Pantic. Facial component detection in thermal imagery. In Proc. IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 48–54, 2010.
  • [53] X. Chen, P. Flynn, and K. Bowyer. Visible-light and infrared face recognition. In Proc. Workshop on Multimodal User Authentication, pages 48–55, 2003.
  • [54] S. Zhao and R. Grigat. An automatic face recognition system in the near infrared spectrum. MLDM, pages 437–444, 2005.
  • [55] X. Zou, J. Kittler, and K. Messer. Accurate face localisation for faces under active near-IR illumination. In Proc. IEEE International Conference on Automatic Face and Gesture Recognition, pages 369–374, 2006.
  • [56] T. Elguebaly and N. Bouguila. A Bayesian method for infrared face recognition. Machine Vision Beyond Visible Spectrum, 1:123–138, 2011.
  • [57] Z. Lin, Z. Wenrui, S. Li, and F. Zhijun. Infrared face recognition based on compressive sensing and PCA. In Proc. International Conference on Computer Science and Automation Engineering (CSAE), 2:51–54, 2011.
  • [58] Y. Yoshitomi, T. Miyaura, S. Tomita, and S. Kimura. Face identification using thermal image processing. RO-MAN, pages 374–379, 1997.
  • [59] S. Z. Li and his face team. AuthenMetric F1: A highly accurate and fast face recognition system. In Proc. IEEE International Conference on Computer Vision – Demos, 2005.
  • [60] S. Li, R. Chu, M. Ao, L. Zhang, and R. He. Highly accurate and fast face recognition using near infrared images. In Proc. IAPR International Conference on Biometrics, pages 151–158, 2006.
  • [61] S. Li, L. Zhang, S. Liao, X. Zhu, R. Chu, M. Ao, and R. He. A near-infrared image based face recognition system. In Proc. IEEE International Conference on Automatic Face and Gesture Recognition, pages 455–460, 2006.
  • [62] T. Ojala, M. Pietikäinen, and D. Harwood. Performance evaluation of texture measures with classification based on Kullback discrimination of distributions. In Proc. IAPR International Conference on Pattern Recognition, 1:582–585, 1994.
  • [63] H. Maeng, H.-C Choi, U. Park, S.-W. Lee, and A. K. Jain. NFRAD: Near-infrared face recognition at a distance. In Proc. International Joint Conference on Biometrics (IJCB), pages 1–7, 2011.
  • [64] D. Goswami, C. H. Chan, D. Windridge, and J. Kittler. Evaluation of face recognition system in heterogeneous environments (visible vs NIR). In Proc. IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 2160–2167, 2011.
  • [65] A. Srivastava, X. Liu, B. Thomasson, and C. Hesher. Spectral probability models for infrared images and their applications to IR face recognition. CVBVS, 2001.
  • [66] A. Srivastava and X. Liu. Statistical hypothesis pruning for recognizing faces from infrared images. Image and Vision Computing, 21(7):651–661, 2003.
  • [67] P. Buddharaju, I. Pavlidis, and I. Kakadiaris. Face recognition in the thermal infrared spectrum. In Proc. IEEE International Workshop on Object Tracking and Classification Beyond the Visible Spectrum, pages 133–138, 2004.
  • [68] J. Chen, S. Shan, C. He, G. Zhao, M. Pietikäinen, X. Chen, and W. Gao. WLD: a robust local image descriptor. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9):1705–1720, 2010.
  • [69] T. Mandal, A. Majumdar, and Q. M. J. Wu. Face recognition by curvelet based feature extraction. ICIAR, pages 806–817, 2007.
  • [70] Z. Xie, S. Wu, G. Liu, and Z. Fang. Infrared face recognition based on radiant energy and curvelet transformation. In Proc. International Conference on Information Assurance and Security (IAS), 2:215–218, 2009.
  • [71] Z. Xie, S. Wu, G. Liu, and Z. Fang. Infrared face recognition method based on blood perfusion image and curvelet transformation. In Proc. International Conference on Wavelet Analysis and Pattern Recognition, pages 360–364, 2009.
  • [72] Z. Xie, G. Liu, S. Wu, and Y. Lu. A fast infrared face recognition system using curvelet transformation. ISECS, 2:145–149, 2009.
  • [73] P. Buddharaju, I. Pavlidis, and P. Tsiamyrtzis. Pose-invariant physiological face recognition in the thermal infrared spectrum. In Proc. IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 53–60, 2006.
  • [74] T. R. Gault, N. Blumenthal, A. A. Farag, and T. Starr. Extraction of the superficial facial vasculature, vital signs waveforms and rates using thermal imaging. In Proc. IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 1–8, 2010.
  • [75] A. Seal, M. Nasipuri, D. Bhattacharjee, and D.K. Basu. Minutiae based thermal face recognition using blood perfusion data. International Conference on Image Information Processing, pages 1–4, 2011.
  • [76] T. Sim and S. Zhang. Exploring face space. In Proc. IEEE Workshop on Face Processing in Video, page 84, 2004.
  • [77] J. Lee, B. Moghaddam, H. Pfister, and R. Machiraju. Finding optimal views for 3D face shape modeling. In Proc. IEEE International Conference on Automatic Face and Gesture Recognition, pages 31–36, 2004.
  • [78] R. S. Ghiass, O. Arandjelović, A. Bendada, and X. Maldague. Vesselness features and the inverse compositional AAM for robust face recognition using thermal IR. In Proc. AAAI Conference on Artificial Intelligence, pages 357–364, 2013.
  • [79] R. S. Ghiass, O. Arandjelović, A. Bendada, and X. Maldague. Illumination-invariant face recognition from a single image across extreme pose using a dual dimension AAM ensemble in the thermal infrared spectrum. In Proc. International Joint Conference on Neural Networks, pages 2781–2790, 2013.
  • [80] P. Buddharaju and I. Pavlidis. Physiological face recognition is coming of age. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 128–135, 2009.
  • [81] S. Cho, L. Wang, and W. J. Ong. Thermal imprint feature analysis for face recognition. ISIE, pages 1875–1880, 2009.
  • [82] T. F. Cootes, G. J. Edwards, and C. J. Taylor. Active appearance models. In Proc. European Conference on Computer Vision, 2:484–498, 1998.
  • [83] Z. Xie, G. Liu, S. Wu, and Z. Fang. Infrared face recognition based on blood perfusion and fisher linear discrimination analysis. IST, pages 85–88, 2009.
  • [84] S. Wu, Z. Gu, K. A. Chia, and S. H. Ong. Infrared facial recognition using modified blood perfusion. ICICS, pages 1–5, 2007.
  • [85] Z. Xie, S. Wu, C. He, Z. Fang, and J. Yang. Infrared face recognition based on blood perfusion using bio-heat transfer model. CCPR, pages 1–4, 2010.
  • [86] Z. Pan, G. Healey, M. Prasad, and B. Tromberg. Face recognition in hyperspectral images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(12):1552–1560, 2003.
  • [87] Z. Pan, G. Healey, M. Prasad, and B. Tromberg. Multiband and spectral eigenfaces for face recognition in hyperspectral images. In Proc. SPIE, 5779:144–151, 2005.
  • [88] T. Bourlai, N. Kalka, A. Ross, B. Cukic, and L. Hornak. Cross-spectral face verification in the short wave infrared (SWIR) band. In Proc. IAPR International Conference on Pattern Recognition, pages 1343–1347, 2010.
  • [89] P. Viola and M. Jones. Robust real-time face detection. International Journal of Computer Vision, 57(2):137–154, 2004.
  • [90] B. Klare and A. K. Jain. Dynamic scene shape reconstruction using a single structured light pattern. In Proc. IAPR International Conference on Pattern Recognition, pages 1513–1516, 2008.
  • [91] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, 1:886–893, 2005.
  • [92] J. Chen, D. Yi, J. Yang, G. Zhao, S. Z. Li, and M. Pietikäinen. Learning mappings for face synthesis from near infrared to visual light images. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 156–163, 2009.
  • [93] Z. Lei and S. Z. Li. Coupled spectral regression for matching heterogeneous faces. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 1123–1128, 2009.
  • [94] S. M. Mavadati, M. T. Sadeghi, and J. Kittler. Fusion of visible and synthesized near infrared information for face authentication. In Proc. IEEE International Conference on Image Processing, pages 3801–3804, 2010.
  • [95] M. Shao, Y. Wang, and Y. Wang. A super-resolution based method to synthesize visual images from near infrared. In Proc. IEEE International Conference on Image Processing, pages 2453–2456, 2009.
  • [96] T. Bourlai, A. Ross, C. Chen, and L. Hornak. A study on using mid-wave infrared images for face recognition. In Proc. SPIE, 2012.
  • [97] A. Bosch, A. Zisserman, and X. Munoz. Representing shape with a spatial pyramid kernel. In Proc. ACM International Conference on Image and Video Retrieval, pages 401–408, 2007.
  • [98] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.
  • [99] J. Wilder, P. Phillips, C. Jiang, and S. Wiener. Comparison of visible and infra-red imagery for face recognition. In Proc. IEEE International Conference on Automatic Face and Gesture Recognition, pages 182–187, 1996.
  • [100] R. Siddiqui, M. Sher, and R. Khalid. Face identification based on biological trait using infrared images after cold effect enhancement and sunglasses filtering. In Proc. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, 12:161–164, 2004.
  • [101] X. Chen, P. Flynn, and K. Bowyer. PCA-based face recognition in infrared imagery: Baseline and comparative studies. AMFG, pages 127–134, 2003.
  • [102] A. Gyaourova, G. Bebis, and I. Pavlidis. Fusion of infrared and visible images for face recognition. In Proc. European Conference on Computer Vision, 4:456–468, 2004.
  • [103] S. Singh, A. Gyaourova, G. Bebis, and I. Pavlidis. Infrared and visible image fusion for face recognition. In Proc. SPIE Defense and Security Symposium (Biometric Technology for Human Identification), 2004.
  • [104] X. Chen, Z. Jing, and G. Xiao. Fuzzy fusion for face recognition. FSKD, pages 672–675, 2005.
  • [105] J. Heo, S. G. Kong, B. R. Abidi, and M. A. Abidi. Fusion of visual and thermal signatures with eyeglass removal for robust face recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition Workshops, page 122, 2004.
  • [106] S. G. Kong, J. Heo, F. Boughorbel, Y Zheng, B. R. Abidi, A. Koschan, M. Yi, and M. A. Abidi. Multiscale fusion of visible and thermal ir images for illumination-invariant face recognition. International Journal of Computer Vision, 71(2):215–233, 2007.
  • [107] D. Socolinsky, A. Selinger, and J. Neuheisel. Face recognition with visible and thermal infrared imagery. Computer Vision and Image Understanding, 91(1–2):72–114, 2003.
  • [108] D. Socolinsky and A. Selinger. Thermal face recognition in an operational scenario. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2:1012–1019, 2004.
  • [109] D. Socolinsky and A. Selinger. A comparative analysis of face recognition performance with visible and thermal infrared imagery. In Proc. IAPR International Conference on Pattern Recognition, 4:217–222, 2002.
  • [110] M.K. Bhowmik, D. Bhattacharjee, M. Nasipuri, D.K. Basu, and M. Kundu. Optimum fusion of visual and thermal face images for recognition. In Proc. International conference on Information Assurance and Security (IAS), pages 311–316, 2010.
  • [111] O. Arandjelović, R. I. Hammoud, and R. Cipolla. Multi-sensory face biometric fusion (for personal identification). In Proc. IEEE International Workshop on Object Tracking and Classification Beyond the Visible Spectrum, pages 128–135, 2006.
  • [112] O. Arandjelović, R. Hammoud, and R. Cipolla. On person authentication by fusing visual and thermal face biometrics. In Proc. IEEE Conference on Advanced Video and Signal Based Surveillance, pages 50–56, 2006.
  • [113] O. Arandjelović and R. Cipolla. A new look at filtering techniques for illumination invariance in automatic face recognition. In Proc. IEEE International Conference on Automatic Face and Gesture Recognition, pages 449–454, 2006.
  • [114] S. Moon, S. G. Kong, J. Yoo, and K. Chung. Face recognition with multiscale data fusion of visible and thermal images. CIHSPS, pages 24–27, 2006.
  • [115] O. K. Kwon and S. G. Kong. Multiscale fusion of visual and thermal images for robust face recognition. CIHSPS, pages 112–116, 2005.
  • [116] E. G. Zahran, A. M. Abbas, M. I. Dessouky, M. A. Ashour, and K. A. Sharshar. High performance face recognition using PCA and ZM on fused LWIR and VISIBLE images on the wavelet domain. International Conference on Computer Engineering & Systems, pages 449–454, 2009.
  • [117] H. Hariharan, A. Koschan, B. Abidi, A. Gribok, and M. Abidi. Fusion of visible and infrared images using empirical mode decomposition to improve face recognition. In Proc. IEEE International Conference on Image Processing, pages 2049–2052, 2006.
  • [118] O. Rockinger and T. Fechner. Pixel-level image fusion: The case of image sequences. In Proc. SPIE, 3374:378–388, 1998.
  • [119] K. H. Abas and O. Ono. Infrared-based face identification system via multi-thermal moment invariants distribution. In Proc. International Conference on Signals, Circuits and Systems, pages 1–5, 2009.
  • [120] K. H. Abas and O. Ono. Implementation of frontal-centroid moment invariants in thermal-based face identification system. In Proc. International Conference on Signal-Image Technology & Internet-Based Systems, pages 36–41, 2009.
  • [121] K. H. Abas and O. Ono. Thermal physiological moment invariants for face identification. In Proc. International Conference on Signal-Image Technology & Internet-Based Systems, pages 1–6, 2010.
  • [122] W. Hizem, L. Allano, A. Mellakh, and B. Dorizzi. Face recognition from synchronised visible and near-infrared images. IET Signal Processing, 3(4):282–288, 2009.
  • [123] M. A. Akhloufi and A. Bendada. Thermal faceprint: A new thermal face signature extraction for infrared face recognition. Canadian Conference on Computer and Robot Vision, pages 269–272, 2008.
  • [124] K.-A. Toh, J. Kim, and S. Lee. Biometric scores fusion based on total error rate minimization. Pattern Recognition, 41(3):1066–1082, 2008.
  • [125] K.-A. Toh. A projection framework for biometric scores fusion. In Proc. International Conference on Control, Automation, Robotics and Vision, pages 1262–1267, 2010.
  • [126] M. Shao and Y.-H. Wang. A BEMD based multi-layer face matching: From near infrared to visual images. In Proc. IEEE International Workshop on Analysis and Modeling of Faces and Gestures, pages 1634–1640, 2009.
  • [127] E. G. Zahran, A. M. Abbas, M. I. Dessouky, M. A. Ashour, and K. A. Sharshar. Performance analysis of infrared face recognition using PCA and ZM. International Conference on Computer Engineering & Systems, pages 45–50, 2009.
  • [128] M. A. Akhloufi and A. Bendada. A new framework for face recognition in and beyond the visible spectrum. In Proc. IEEE International Conference on Systems, Man and Cybernetics, pages 1846–1850, 2010.
  • [129] F. M. Pop, M. Gordan, C. Florea, and A. Vlaicu. Fusion based approach for thermal and visible face recognition under pose and expressivity variation. In Proc. Roedunet International Conference (RoEduNet), pages 61–66, 2010.
  • [130] M. A. Akhloufi and A. Bendada. A new framework for face recognition in and beyond the visible spectrum. In Proc. IEEE International Conference on Systems, Man and Cybernetics, pages 3308–3314, 2010.
  • [131] Equinox Corp. Equinox human identification at a distance database. http://www.equinoxsensors.com/products/HID.html, Last accessed 2012.
  • [132] G. Hermosilla, J. Ruiz-del Solar, R. Verschae, and M. Correa. Face recognition using thermal infrared images for human-robot interaction applications: A comparative study. In Proc. Latin American Robotics Symposium, pages 1–7, 2009.
  • [133] University of Tennessee. IRIS thermal/visible face database. http://www.cse.ohio-state.edu/otcbvs-bench/, Last accessed 2012.
  • [134] H. Chang, H. Harishwaran, M. Yi, A. Koschan, B. Abidi, and M. Abidi. An indoor and outdoor, multimodal, multispectral and multi-illuminant database for face recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition Workshops, page 54, 2006.
  • [135] University of Notre Dame. University of Notre Dame biometric data set collection C. http://www.nd.edu/~cvrl/CVRL/Data_Sets.html, Last accessed 2012.
  • [136] M. Grgic, K. Delac, and S. Grgic. SCface – surveillance cameras face database. Multimedia Tools and Applications Journal, 51(3):863–879, 2011.
  • [137] B. Zhang, L. Zhang, D. Zhang, and L. Shen. Directional binary code with application to polyu near-infrared face database. Pattern Recognition Letters, 31(14):2337–2344, 2010.
  • [138] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma. Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2):210–227, 2009.
  • [139] W. Deng, J. Hu, and J. Guo. Extended SRC: Undersampled face recognition via intraclass variant dictionary. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(9):1864–1870, 2012.