Robust Face Recognition via Block Sparse Bayesian Learning
Abstract
Face recognition (FR) is an important task in pattern recognition and computer vision. Sparse representation (SR) has been demonstrated to be a powerful framework for FR. In general, an SR algorithm treats each face in a training dataset as a basis function and tries to find a sparse representation of a test face under these basis functions. The sparse representation coefficients then provide a recognition hint. Early SR algorithms were based on a basic sparse model. Recently, it has been found that algorithms based on a block sparse model can achieve better recognition rates. Based on this model, in this study we use block sparse Bayesian learning (BSBL) to find a sparse representation of a test face for recognition. BSBL is a recently proposed framework with many advantages over existing block-sparse-model-based algorithms. Experimental results on the Extended Yale B, AR, and CMU PIE face databases show that using BSBL achieves better recognition rates and higher robustness than state-of-the-art algorithms in most cases.
Keywords: Face Recognition, Classification, Sparse Representation, Sparse Learning, Block Sparse Bayesian Learning (BSBL), Block Sparsity

1 Introduction
Owing to the rapid development of network and computer technologies, face recognition (FR) plays an important role in many applications, such as video surveillance, man-machine interfaces, and digital entertainment. Many FR methods have been developed over the past two decades martinez2001pca ; phillips2005overview ; turk1991face ; brunelli1993face ; he2005face . Basically, FR is a typical classification problem.
In a typical FR system, besides face detection and face alignment, there are two main stages. One is feature extraction, which obtains a set of relevant information from a face image for further classification. Because of the large size of face images, it is desirable to extract lower-dimensional features from each face image that facilitate recognition. Many feature extraction methods have been proposed, such as PCA turk1991face ; turk1991eigenfaces , LPP he2005face and LDA belhumeur1997eigenfaces . The other stage is classification, which builds a classification model and assigns a label to a test face image. Typical classification algorithms include Nearest Neighbor (NN) duda2012pattern , Nearest Subspace (NS) ho2003clustering and Support Vector Machine (SVM) vapnik1999nature .
Recently, Wright et al. proposed a novel FR method called Sparse Representation Classification (SRC) wright2009robust . In this method, face images in the training set form a dictionary matrix (each face image is vectorized and forms a column of the dictionary matrix), and a vectorized test face image is then represented under this dictionary matrix. The representation coefficients provide hints for recognition: if a test face image and a training face image belong to the same subject, the representation coefficients of the vectorized test face image under the dictionary matrix are sparse (or compressible), i.e., most coefficients are zero (or close to zero). For each class (i.e., the columns of the dictionary matrix associated with one subject), one calculates the reconstruction error of the vectorized test face image using these columns and the associated representation coefficients; the class with the minimum reconstruction error is the one to which the test face image is assigned. In practice, one more frequently uses a feature vector extracted from a face image instead of the original vectorized face image. SRC is robust, achieving good performance even under occlusion and noise.
Following the idea of SRC, a number of SRC-related recognition methods have been proposed. Gao et al. extended the basic SRC method to a kernel version gao2010kernel . Yang et al. proposed a face recognition method via sparse coding which is much more robust than SRC under occlusion, corruption and disguise yang2011robust . Other works improved the basic SRC method using weighted sparse representations lu2012face , Gabor-feature-based sparse representations yang2010gabor , dimensionality reduction zhang2010dimensionality , locally adaptive sparse representations chen2010robust and supervised sparse representations xu2011supervised .
Recently, it has been found that algorithms based on a block sparse model grouplasso , instead of the basic sparse representation model, can achieve higher recognition rates in face recognition elhamifar2012block . However, these algorithms ignore intra-block correlation in the representation coefficients. Such correlation exists because the training face images of the same class as a test face image are all correlated with the test face image, so the associated representation coefficients are not independent. In sparse reconstruction scenarios it has been shown that exploiting intra-block correlation can significantly improve algorithmic performance Zhang2012TSP .
In this study, we use block sparse Bayesian learning (BSBL) Zhang2012TSP to estimate the representation coefficients. BSBL has many advantages over existing block-sparse-model-based algorithms; in particular, it can exploit the intra-block correlation in representation coefficients for better algorithmic performance. Experimental results on the Extended Yale B, AR and CMU PIE databases show that BSBL achieves better results than state-of-the-art SRC algorithms in most cases.
The rest of this paper is organized as follows. Section 2 gives a brief review of the original face recognition method via sparse representation. Section 3 introduces sparse Bayesian learning. Section 4 describes the block sparse Bayesian learning approach for face recognition. Experimental results are reported in Section 5. Conclusions are drawn in the last section.
2 Related work
2.1 Face recognition via sparse representation
We first describe the basic SRC method wright2009robust for face recognition. Given training faces of all $K$ subjects, a dictionary matrix $A$ is formed as follows

(1)   $A = [A_1, A_2, \ldots, A_K] = [v_{1,1}, v_{1,2}, \ldots, v_{K,n_K}] \in \mathbb{R}^{m \times n}$

where $A_i = [v_{i,1}, v_{i,2}, \ldots, v_{i,n_i}] \in \mathbb{R}^{m \times n_i}$, and $v_{i,j} \in \mathbb{R}^m$ is the $j$th face of the $i$th subject. (For simplicity, we describe $v_{i,j}$ as a vectorized face image; in practice, $v_{i,j}$ is a feature vector extracted from the face image, as done in our experiments.) Then, a vectorized test face $y \in \mathbb{R}^m$ is represented under the dictionary matrix as follows

(2)   $y = A x$
where $x$ is the representation coefficient vector. In the basic SRC method, it is suggested that if the new test face belongs to a subject in the training set, say the $i$th subject, then under a sparsity constraint on $x$, only the coefficients associated with the $i$th subject are significantly nonzero, while the other coefficients are zero or close to zero.
Mathematically, the above idea can be described as the following sparse representation problem
(3)   $\min_x \|x\|_0 \ \text{ subject to } \ y = A x$

where $\|x\|_0$ counts the number of nonzero elements in the vector $x$. Once we have obtained the solution $\hat{x}$, the class label of $y$ can be found by

(4)   $\mathrm{identity}(y) = \arg\min_i \| y - A\, \delta_i(\hat{x}) \|_2$

where $\delta_i(\cdot)$ is the characteristic function that keeps the elements of $\hat{x}$ associated with the $i$th class and sets all other elements of $\hat{x}$ to zero.
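To make rule (4) concrete, below is a minimal Python sketch of the SRC decision step. Here `l1_solve` is a placeholder for any sparse solver (such as the $\ell_1$ algorithms discussed next), and `class_of_column` is an assumed bookkeeping array mapping each dictionary column to its subject; neither name comes from the original paper.

```python
import numpy as np

def src_identity(A, class_of_column, y, l1_solve):
    """Rule (4): keep only the coefficients of one class at a time and
    assign y to the class with the smallest reconstruction residual."""
    x_hat = l1_solve(A, y)                      # sparse coefficients for y = A x
    classes = np.unique(class_of_column)
    residuals = []
    for c in classes:
        # delta_c(x_hat): zero out coefficients not belonging to class c
        delta = np.where(class_of_column == c, x_hat, 0.0)
        residuals.append(np.linalg.norm(y - A @ delta))
    return classes[int(np.argmin(residuals))]
```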
However, finding the solution to (3) is NP-hard natarajan1995sparse . Recent theories in compressed sensing donoho2006most ; candes2006near show that if the true solution is sparse enough, then under some mild conditions it can be found by solving the following convex-relaxation problem
(5)   $\min_x \|x\|_1 \ \text{ subject to } \ y = A x$

Further, to deal with small dense model noise, problem (5) can be changed to the following one

(6)   $\min_x \|x\|_1 \ \text{ subject to } \ \|y - A x\|_2 \leq \varepsilon$

where $\varepsilon$ is a noise-tolerance constant. Many $\ell_1$-minimization algorithms can be used to find the solution to (5) or to (6), such as LASSO lasso and Basis Pursuit Denoising BP .
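As one concrete option, problem (6) can be approximated in its Lagrangian (LASSO) form with an off-the-shelf solver; the sketch below uses scikit-learn's `Lasso`, where the regularization weight `alpha` plays the role of the noise tolerance $\varepsilon$ and its value here is only an assumed starting point that needs tuning.

```python
import numpy as np
from sklearn.linear_model import Lasso

def l1_solve(A, y, alpha=1e-3):
    """Solve min_x 0.5/m * ||y - A x||_2^2 + alpha * ||x||_1,
    the Lagrangian counterpart of the constrained problem (6)."""
    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    model.fit(A, y)
    return model.coef_
```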
In a practical face recognition problem, the coefficient vector $x$ in (2) (or (6)) is not only sparse but also block sparse. To see this, we can rewrite the sparse representation problem (2) as follows

(7)   $y = [A_1, A_2, \ldots, A_K]\,[x_1^T, x_2^T, \ldots, x_K^T]^T = \sum_{i=1}^{K} A_i x_i$

where $x_i \in \mathbb{R}^{n_i}$ is the coefficient vector associated with the $i$th class, and $x = [x_1^T, \ldots, x_K^T]^T$. When a test face belongs to the $i$th class, ideally only the elements in $x_i$ are significantly nonzero; in other words, only the block $x_i$ has significantly nonzero $\ell_2$ norm. Clearly, this is a canonical block sparse model grouplasso ; ModelCS . Many algorithms for the block sparse model can be used here. For example, in elhamifar2012block it is suggested to use the following algorithm:

(8)   $\min_x \sum_{i=1}^{K} \|x_i\|_2 \ \text{ subject to } \ \|y - A x\|_2 \leq \varepsilon$
This is a natural extension of basic $\ell_1$-minimization algorithms: it imposes the $\ell_2$ norm within blocks and the $\ell_1$ norm over blocks. It has been shown that exploiting the block structure can largely improve the estimation quality of $x$ ModelCS ; huang2010benefit ; rao2012universal .
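For illustration, here is a minimal proximal-gradient (ISTA-style) sketch of the $\ell_2/\ell_1$ program (8) in its Lagrangian form; the weight `lam` and iteration count are assumed values, and a production solver (e.g., the convex-optimization method of elhamifar2012block) would be used in practice.

```python
import numpy as np

def group_lasso(A, y, blocks, lam=0.1, n_iter=500):
    """Minimize 0.5*||y - A x||_2^2 + lam * sum_i ||x_i||_2 by proximal
    gradient descent. `blocks` lists the column-index arrays of each class."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L with L = ||A||_2^2
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))       # gradient step on the data term
        for idx in blocks:                       # block soft-thresholding (prox)
            nrm = np.linalg.norm(z[idx])
            shrink = max(0.0, 1.0 - step * lam / nrm) if nrm > 0 else 0.0
            x[idx] = shrink * z[idx]
    return x
```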
However, one should note that when the test face belongs to the $i$th class, not only is the representation coefficient block $x_i$ a nonzero block, but its elements are also correlated in amplitude. The correlation arises because the faces of the $i$th class in the training set are all correlated with the test face, and thus the elements in $x_i$ are mutually dependent. It has been shown that exploiting the correlation within blocks can further improve the estimation quality of $x$ Zhang2012TSP ; Zhang2012TBME beyond exploiting only the block structure.
Therefore, in this study we propose to use block sparse Bayesian learning (BSBL) Zhang2012TSP to estimate $x$ by exploiting both the block structure and the correlation within blocks. In the next section we first briefly introduce sparse Bayesian learning (SBL), and then introduce BSBL.
3 SBL and BSBL
SBL tipping2001sparse was initially proposed as a machine learning method, but it has since been shown to be a powerful method for sparse representation, sparse signal recovery and compressed sensing.
3.1 Advantages of SBL
Compared to LASSO-type algorithms (such as the original LASSO algorithm, Basis Pursuit Denoising, Group Lasso, Group Basis Pursuit, and other algorithms based on $\ell_1$ minimization), SBL has the following advantages ZhangDissertation ; Wipf2006Thesis .

1. Its recovery performance is robust to the characteristics of the matrix $A$, while that of most other algorithms is not. For example, it has been shown that when the columns of $A$ are highly coherent, SBL still maintains good performance, while algorithms such as LASSO perform poorly Wipf2011NIPS . This advantage is very attractive for sparse representation and related applications, since in these applications the matrix $A$ is not a random matrix and its columns are highly coherent.

2. SBL has a number of desirable advantages over many popular algorithms in terms of local and global convergence. It can be shown that SBL provides a sparser solution than LASSO-type algorithms. In particular, in noiseless situations and under certain conditions, the global minimum of the SBL cost function is unique and corresponds to the true sparsest solution, while the global minimum of the cost function of LASSO-type algorithms is not necessarily the true sparsest solution Wipf2004IEEE . These advantages imply that SBL is a better choice for feature selection via sparse representation Jing2012CVPR .

3. Recent works in SBL Zhang2012TSP ; Zhang2011IEEE provide robust learning rules for automatically estimating its regularizer (related to the noise variance) so that SBL algorithms achieve good performance. In contrast, LASSO-type algorithms generally require users to choose the value of such a regularizer, often by cross-validation. Cross-validation is time-consuming for large-scale datasets, which is inconvenient and even impossible in some scenarios.
3.2 Introduction to BSBL
BSBL Zhang2012TSP is an extension of the basic SBL framework which exploits the block structure and intra-block correlation in the coefficient vector $x$. It is based on the assumption that $x$ can be partitioned into $g$ non-overlapping blocks:

(9)   $x = [x_1^T, x_2^T, \ldots, x_g^T]^T$

Among these blocks, only a few are nonzero. Each block $x_i$ is assumed to satisfy a parameterized multivariate Gaussian distribution:

(10)   $p(x_i; \gamma_i, B_i) \sim \mathcal{N}(0, \gamma_i B_i), \quad i = 1, \ldots, g$

with unknown parameters $\gamma_i$ and $B_i$. Here $\gamma_i$ is a nonnegative parameter controlling the block-sparsity of $x$. When $\gamma_i = 0$, the $i$th block becomes zero. During the learning procedure most $\gamma_i$ tend to zero, due to the mechanism of automatic relevance determination tipping2001sparse ; thus sparsity at the block level is encouraged. $B_i$ is a positive definite, symmetric matrix capturing the intra-block correlation of the $i$th block. Under the assumption that blocks are mutually uncorrelated, the prior of $x$ is $p(x) \sim \mathcal{N}(0, \Sigma_0)$, where $\Sigma_0 = \mathrm{diag}\{\gamma_1 B_1, \ldots, \gamma_g B_g\}$, i.e., the block diagonal matrix whose principal diagonal blocks are $\gamma_1 B_1, \ldots, \gamma_g B_g$. To avoid overfitting, the matrices $B_i$ are constrained and their estimates are further regularized. The model noise is assumed to satisfy $p(n) \sim \mathcal{N}(0, \lambda I)$, where $\lambda$ is a positive scalar to be estimated. Based on the above probability models, one can obtain a closed-form posterior of $x$. Therefore, the estimate of $x$ can be obtained by Maximum-A-Posteriori (MAP) estimation, provided all the parameters are estimated.
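To make the model concrete, the following sketch builds the prior covariance $\Sigma_0$ and computes the MAP (posterior-mean) estimate of $x$ for fixed hyperparameters. The first-order autoregressive (Toeplitz) form of $B_i$ is one common parameterization for regularizing the intra-block correlation estimate; treating $\gamma_i$, the correlation coefficients and $\lambda$ as given is an assumption of this sketch, since BSBL learns them from the data.

```python
import numpy as np
from scipy.linalg import block_diag, toeplitz

def map_estimate(A, y, gammas, rs, block_sizes, lam):
    """Posterior mean of x under x ~ N(0, Sigma_0) and noise ~ N(0, lam*I):
    mu = Sigma_0 A^T (lam*I + A Sigma_0 A^T)^{-1} y."""
    # gamma_i * B_i with B_i[j, k] = r_i^|j-k| (assumed AR(1) parameterization)
    covs = [g * toeplitz(r ** np.arange(d))
            for g, r, d in zip(gammas, rs, block_sizes)]
    Sigma0 = block_diag(*covs)                   # block-diagonal prior covariance
    S = lam * np.eye(A.shape[0]) + A @ Sigma0 @ A.T
    return Sigma0 @ A.T @ np.linalg.solve(S, y)
```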
To estimate the parameters, one can use the Type II maximum likelihood method Mackay1992evidence ; tipping2001sparse . This is equivalent to minimizing the following cost function

(11)   $\mathcal{L}(\Theta) = \log \left| \lambda I + A \Sigma_0 A^T \right| + y^T \left( \lambda I + A \Sigma_0 A^T \right)^{-1} y$

where $\Theta$ denotes all the parameters, i.e., $\Theta \triangleq \{\gamma_i, B_i, \lambda\}$. There are several optimization methods to minimize the cost function, such as the expectation-maximization method, the bound-optimization method, the duality method and so on. This framework is called the BSBL framework.
BSBL not only has the advantages of basic SBL listed in Section 3.1, but also has two additional advantages:

1. BSBL provides great flexibility to model and exploit correlation structures in signals, such as intra-block correlation Zhang2012TSP ; Zhang2012TBME . By exploiting these correlation structures, recovery performance is significantly improved.

2. BSBL has the unique ability to find less-sparse Zhang2012Letter and non-sparse Zhang2012TBME true solutions with very small errors. (Note that for an underdetermined inverse problem $y = \Phi x$, where $\Phi$ is one matrix or a product of a sensing matrix and a dictionary matrix as used in compressed sensing, one cannot find a non-sparse true solution without any error.) This is attractive for practical use, since in practice the true solutions may not be very sparse, and existing sparse signal recovery algorithms generally fail in this case.
Therefore, BSBL is promising for pattern recognition. In the following we use BSBL for face recognition. Among the BSBL algorithms, we choose the bound-optimization-based one Zhang2012TSP , denoted by BSBL-BO. (The BSBL-BO code can be downloaded at http://dsp.ucsd.edu/~zhilin/BSBL.html.)
4 Face recognition via BSBL
As stated in Section 2, we use BSBL-BO to estimate $x$, obtaining the estimate $\hat{x}$, and then use rule (4) to assign a test face to a class.
In practice, a test face may contain outliers, i.e., $y = \tilde{y} + e$, where $\tilde{y}$ is the outlier-free face image and $e$ is a vector whose nonzero entries are outliers. Generally, the number of outliers is small, and thus $e$ is sparse. Addressing the outlier issue is important for a practical face recognition system. In wright2009robust , an augmented sparse model was used to deal with this issue. We now extend this method to our block sparse model, and use BSBL-BO to estimate the solution. In particular, we adopt the following augmented block sparse model:

(12)   $y = [A, I]\,[x^T, e^T]^T + n = \tilde{A} w + n$

where $n$ is a vector modeling dense Gaussian noise, $\tilde{A} \triangleq [A, I]$ and $w \triangleq [x^T, e^T]^T$. Here $I$ is the identity matrix of size $m \times m$. Clearly, $w$ is also a block sparse vector: its first $K$ blocks are the blocks of $x$, and its last $m$ elements are blocks of size 1. (In experiments we found that treating the last $m$ elements as one big block resulted in similar performance while significantly speeding up the algorithm.) Thus, (12) is still a block sparse model, and can be solved by BSBL-BO. Once BSBL-BO obtains the solution, denoted by $\hat{w}$, its first $K$ blocks (denoted by $\hat{x}$) and its last $m$ elements (denoted by $\hat{e}$) are used to assign $y$ to a class according to

(13)   $\mathrm{identity}(y) = \arg\min_i \| y - A\, \delta_i(\hat{x}) - \hat{e} \|_2$
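A minimal sketch of this augmented pipeline, assuming a block-sparse solver `bsbl_solve` (standing in for BSBL-BO, which is distributed as MATLAB code) that returns $\hat{w} = [\hat{x}; \hat{e}]$:

```python
import numpy as np

def classify_robust(A_blocks, y, bsbl_solve):
    """Solve the augmented model (12) and apply rule (13)."""
    A = np.hstack(A_blocks)                      # dictionary A = [A_1, ..., A_K]
    m, n = A.shape
    A_tilde = np.hstack([A, np.eye(m)])          # augmented dictionary [A, I]
    w_hat = bsbl_solve(A_tilde, y)               # w = [x; e], block sparse
    x_hat, e_hat = w_hat[:n], w_hat[n:]
    residuals, start = [], 0
    for A_i in A_blocks:                         # rule (13), one class at a time
        n_i = A_i.shape[1]
        residuals.append(np.linalg.norm(y - A_i @ x_hat[start:start + n_i] - e_hat))
        start += n_i
    return int(np.argmin(residuals))             # index of the assigned class
```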
We now take the Extended Yale B database GeBeKr01 as an example to show how our method works. As in SRC wright2009robust , we randomly select half of the total 2414 faces (i.e., 1207 faces) as the training set and the rest as the testing set. Each face is downsampled from 192 × 168 to 24 × 21, i.e., to a 504-dimensional vector. The training set contains 38 subjects, each with about 32 faces. Therefore, in our model $K = 38$, $m = 504$ and $n = 1207$. The matrix $A$ has size $504 \times 1207$, and thus the matrix $\tilde{A} = [A, I]$ has size $504 \times 1711$.
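These sizes can be checked with a few lines (random data stands in for the real faces; only the shapes matter here):

```python
import numpy as np

m = 24 * 21                              # 504: each downsampled face vector
n = 1207                                 # number of training faces (38 subjects)
A = np.random.randn(m, n)                # dictionary of vectorized training faces
A_tilde = np.hstack([A, np.eye(m)])      # augmented dictionary [A, I]
assert A.shape == (504, 1207) and A_tilde.shape == (504, 1711)
```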
The procedure is illustrated in Fig. 1. Fig. 1(a) shows that a test face (belonging to Subject 4) can be linearly represented by a few training faces. Most of the coefficients estimated by BSBL-BO (i.e., $\hat{x}$) are zero or near zero, and only those associated with the test subject are significantly nonzero. Fig. 1(b) shows the residuals for all 38 subjects. The residual for Subject 4 is 0.0008, while the residuals for the other subjects are all close to 1, which makes it easy to assign the test face to Subject 4. See Section 5.1.1 for more details.
5 Experimental results
To demonstrate the superior performance of BSBL, we performed experiments on three widely used face databases: the Extended Yale B GeBeKr01 , AR martinez1998ar and CMU PIE sim2002cmu face databases. The face images in these databases were captured under varying lighting, pose or facial expression. The AR database also contains occluded face images for testing the robustness of face recognition algorithms. Section 5.1 reports experimental results on face images without occlusion, and Section 5.2 reports results on face images with three kinds of occlusion.
5.1 Face recognition without occlusion
For the experiments on face images without occlusion, we used downsampling, Eigenfaces turk1991face ; turk1991eigenfaces and Laplacianfaces he2005face to reduce the dimensionality of the original faces. We compared our method with three classical methods: Nearest Neighbor (NN) duda2012pattern , Nearest Subspace (NS) ho2003clustering and Support Vector Machine (SVM) vapnik1999nature . We also compared our method with recently proposed sparse-representation-based classification methods, including the basic sparse representation classifier (SRC) wright2009robust and block-sparse recovery via convex optimization (BSCO) elhamifar2012block . For NS, the subspace dimension was fixed at 9. For BSCO, we used the algorithm in elhamifar2012block that has been shown to be the best among all the structured-sparsity-based classifiers proposed in that work.
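As an illustration of the feature-extraction step, a minimal PCA (Eigenfaces) sketch is given below; the exact preprocessing used in the experiments may differ, and the function name is ours.

```python
import numpy as np

def eigenfaces(train, test, dim):
    """Project vectorized faces onto the top `dim` principal components.
    `train`, `test`: arrays of shape (num_faces, num_pixels)."""
    mean = train.mean(axis=0)
    # SVD of the centered training set; rows of Vt are principal directions
    _, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
    W = Vt[:dim].T                        # num_pixels x dim projection matrix
    return (train - mean) @ W, (test - mean) @ W
```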
5.1.1 Extended Yale B database
The Extended Yale B database GeBeKr01 consists of 2414 frontal-face images of 38 subjects (each subject has about 64 images). In the experiment, we used the cropped face images captured under various lighting conditions KCLee05 . Two subjects are shown in Fig. 2 for illustration (for each subject, only 10 face images are shown). We randomly selected half of the face images of each subject as the training set and the rest as the testing set. We used downsampling, Eigenfaces and Laplacianfaces to extract features from the face images. The dimensions of the extracted features were 30, 56, 120 and 504.
Experimental results are shown in Fig. 3, where we can see that our method uniformly outperformed the other algorithms regardless of the features used. The superiority of our method was clearest when the feature dimension was small and Laplacianfaces were used. For example, when the feature dimension was 56, our method achieved the highest rate of 98.9%, while NN, NS, SVM, SRC and BSCO achieved rates of 83.5%, 90.4%, 85.0%, 91.7% and 79.4%, respectively. High performance with low-dimensional features is attractive for recognition, since a lower feature dimension generally implies a lower computational load.
5.1.2 AR database
The AR database martinez1998ar consists of more than 4000 frontal-face images of 126 subjects. Each subject has 26 images taken in two separate sessions, as shown in Fig. 4. This database includes more variation in facial expression and facial disguise. We chose 100 subjects (50 male and 50 female) for this experiment. For each subject, the seven face images with different illumination and facial expressions in Session 1 (i.e., the first 7 images of each subject) were selected for training, and the first 7 images of each subject in Session 2 for testing. All the images were converted to gray scale and resized. Downsampled faces, Eigenfaces and Laplacianfaces were used with dimensions of 30, 54, 130 and 540. Experimental results are shown in Fig. 5.
From Fig. 5(a), we can see that our algorithm significantly outperformed the other classifiers when using downsampled features. However, our method did not achieve the highest rate when using Eigenfaces and Laplacianfaces, possibly due to the small block size in this experiment. Although our method did not uniformly outperform the other algorithms across face features, the recognition rate it achieved using downsampled faces (96.7%) was not exceeded by any other algorithm using any face feature.
5.1.3 CMU PIE database
The CMU PIE database sim2002cmu consists of 41368 frontal-face images of 68 subjects under different poses, illumination and expressions. We chose one subset (C29), which includes 1632 face images of 68 subjects (24 images per subject), for this experiment. The first subject of this subset, varying in pose, illumination and expression, is shown in Fig. 6. All the images were cropped and resized. For each subject, we randomly selected 10 images for training and the rest (14 images per subject) for testing. Downsampled faces, Eigenfaces and Laplacianfaces were used with four dimensions: 36, 64, 144 and 256. Experimental results are shown in Fig. 7.
From Fig. 7(a), we can see that sparse-representation-based classifiers usually outperformed the classical ones on this dataset. Among the sparse-representation-based classifiers, BSBL and BSCO achieved higher recognition rates than SRC. Comparing BSBL and BSCO, BSBL slightly outperformed BSCO with downsampled faces and Laplacianfaces, while BSCO outperformed BSBL with Eigenfaces. Specifically, BSBL achieved the highest recognition rates of 95.80% with downsampled faces and 94.12% with Laplacianfaces, while BSCO achieved 98.42% with Eigenfaces, the highest rate in this experiment. Nevertheless, BSBL outperformed BSCO in 8 out of 12 combinations of dimension and feature.
5.2 Face recognition with occlusion
For the experiments on face images with occlusion, we used downsampling to reduce the size of face images and compared our method with NN duda2012pattern , SRC wright2009robust and BSCO elhamifar2012block .
5.2.1 Face recognition with pixel corruption
We tested face recognition with pixel corruption on 3 subsets of the Extended Yale B database: 719 face images with normal-to-moderate lighting conditions from Subsets 1 and 2 were used for training, and 455 face images with more extreme lighting conditions from Subset 3 for testing. For each test image, we first replaced a certain percentage (0%–50%) of its original pixels with uniformly distributed gray values in [0, 255]. Both the gray values and the locations were random and hence unknown to the algorithms. We then downsampled all the images to four different sizes. Two corrupted face images are shown in Fig. 8(a)(b).
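The corruption step can be reproduced with a short helper like the following (our own sketch, not the authors' code):

```python
import numpy as np

def corrupt_pixels(img, fraction, rng=None):
    """Replace `fraction` of the pixels with uniform gray values in [0, 255]
    at random locations, as in the corruption test described above."""
    if rng is None:
        rng = np.random.default_rng()
    out = img.copy()
    k = int(out.size * fraction)
    idx = rng.choice(out.size, size=k, replace=False)  # random, unknown locations
    out.flat[idx] = rng.integers(0, 256, size=k)       # random gray values
    return out
```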
Results are shown in Table 1, where each method has four rows corresponding to the four downsampled image sizes. It can be seen that at almost all dimensions and corruption levels, BSBL achieved a higher recognition rate than NN and SRC, often by a large margin. For example, at the third image size with 50% of pixels corrupted, BSBL achieved a recognition rate of 67.25%, while SRC only achieved 46.37%. Meanwhile, BSBL outperformed BSCO in 17 out of 24 dimension/corruption combinations. Fig. 9(a) shows the recognition rates of the four algorithms at different pixel corruption levels for one of the dimensions.
Table 1: Recognition rates (%) under random pixel corruption. Each method has four rows, one per downsampled image size.

Method | 0%     | 10%    | 20%    | 30%    | 40%    | 50%
NN     | 36.92  | 42.42  | 49.67  | 46.15  | 28.35  | 14.95
       | 48.79  | 54.95  | 60.00  | 59.34  | 40.88  | 20.22
       | 67.25  | 75.17  | 79.56  | 74.73  | 58.02  | 35.39
       | 87.25  | 93.19  | 94.95  | 92.53  | 76.48  | 56.04
SRC    | 54.51  | 44.62  | 50.55  | 46.59  | 32.75  | 21.76
       | 82.64  | 61.32  | 66.59  | 63.52  | 49.23  | 28.79
       | 98.02  | 85.06  | 85.28  | 83.96  | 71.87  | 46.37
       | 100.00 | 98.24  | 98.24  | 97.14  | 92.09  | 73.63
BSCO   | 87.48  | 83.52  | 63.30  | 39.12  | 23.30  | 14.29
       | 98.02  | 96.04  | 90.99  | 75.60  | 49.45  | 30.33
       | 99.34  | 99.12  | 97.58  | 93.85  | 82.20  | 58.68
       | 100.00 | 100.00 | 99.12  | 98.68  | 97.14  | 96.70
BSBL   | 87.25  | 85.71  | 68.79  | 51.43  | 30.99  | 19.56
       | 94.29  | 92.97  | 86.15  | 72.53  | 59.12  | 39.34
       | 99.56  | 99.34  | 97.80  | 92.31  | 84.18  | 67.25
       | 100.00 | 100.00 | 99.78  | 99.12  | 97.58  | 89.01
5.2.2 Face recognition with block occlusion
In this experiment, we used the same training and testing images as in the previous pixel corruption experiment. For each test image, we replaced a randomly located square block with an unrelated image (the baboon image used in SRC wright2009robust ), occluding 0%–50% of the original test image. We then downsampled all the images to the same four sizes as before. Two occluded face images are shown in Fig. 8(c)(d).
Table 2 shows the recognition rates of NN, SRC, BSCO and BSBL at the different dimensions and occlusion percentages. Again, BSBL outperformed the compared algorithms in most cases. For example, at the third image size, when the occlusion percentage ranged from 10% to 50%, BSBL achieved about 8.35%–13.19% higher recognition rates than BSCO, as shown in Fig. 9(b).
Table 2: Recognition rates (%) under block occlusion. Each method has four rows, one per downsampled image size.

Method | 0%     | 10%    | 20%    | 30%    | 40%    | 50%
NN     | 36.92  | 34.29  | 27.69  | 24.40  | 20.44  | 15.17
       | 48.79  | 44.84  | 38.68  | 32.09  | 21.54  | 18.46
       | 67.25  | 64.18  | 52.09  | 45.71  | 30.33  | 22.64
       | 87.25  | 85.50  | 76.92  | 67.25  | 52.31  | 37.14
SRC    | 54.51  | 36.26  | 28.13  | 22.64  | 17.36  | 14.29
       | 82.64  | 50.99  | 39.56  | 31.65  | 20.66  | 17.36
       | 98.02  | 75.39  | 59.78  | 48.57  | 30.33  | 20.88
       | 100.00 | 96.48  | 89.23  | 72.31  | 54.29  | 35.17
BSCO   | 87.48  | 30.11  | 14.51  | 8.57   | 5.49   | 3.96
       | 98.02  | 51.65  | 32.53  | 18.02  | 13.41  | 8.35
       | 99.34  | 71.87  | 49.23  | 32.09  | 20.88  | 14.73
       | 100.00 | 99.56  | 92.97  | 80.88  | 63.74  | 45.93
BSBL   | 87.25  | 46.59  | 28.35  | 18.68  | 11.65  | 10.55
       | 94.29  | 66.59  | 40.88  | 26.59  | 20.22  | 12.53
       | 99.56  | 83.30  | 60.00  | 45.28  | 33.19  | 23.08
       | 100.00 | 96.92  | 92.31  | 75.60  | 56.48  | 42.64
5.2.3 Face recognition with real face disguise
We used a subset of the AR database to test the performance of our method on face recognition with disguise. We chose 799 images with various facial expressions and without occlusion (i.e., the first 4 face images in each session, except a corrupted image named ‘W02714.bmp’) for training. We formed two separate testing sets of 200 images each. The images in the first set show the neutral expression with sunglasses (the 8th image in each session), which cover roughly 20% of the face, while those in the second set show the neutral expression with scarves (the 11th image in each session), which cover roughly 40% of the face. All the images were resized to four different dimensions.
Results are shown in Table 3. In the case of the neutral expression with sunglasses, both SRC and NN achieved higher recognition rates than BSCO and BSBL. However, in the case of the neutral expression with scarves, BSBL significantly outperformed NN, SRC and BSCO. Overall, BSBL achieved the highest total recognition rates, 72.50% and 74.50%, at the two larger dimensions, while SRC achieved the highest total rates, 28.25% and 44.00%, at the two smaller dimensions.
Table 3: Recognition rates (%) with real face disguise. Each method has four rows, one per image size.

Method | Sunglasses | Scarves | Total
NN     | 35.00      | 6.50    | 20.75
       | 48.00      | 7.00    | 27.50
       | 65.50      | 9.50    | 37.50
       | 68.00      | 11.50   | 39.75
SRC    | 46.50      | 10.00   | 28.25
       | 72.00      | 16.00   | 44.00
       | 83.00      | 21.50   | 52.25
       | 89.00      | 37.00   | 63.00
BSCO   | 14.50      | 9.50    | 12.00
       | 35.00      | 19.50   | 27.25
       | 68.00      | 44.00   | 56.00
       | 76.00      | 50.00   | 63.00
BSBL   | 22.00      | 23.00   | 22.50
       | 40.50      | 46.00   | 43.25
       | 64.00      | 81.00   | 72.50
       | 65.50      | 83.50   | 74.50
6 Conclusions
Classification via sparse representation is a popular methodology in face recognition and other classification tasks. Recently it was found that using block-sparse representation, instead of the basic sparse representation, can yield better classification performance. In this paper, by introducing a recently proposed block sparse Bayesian learning (BSBL) algorithm, we showed that BSBL is a better framework than the basic block-sparse representation framework, owing to its various advantages over the latter. Experiments on common face databases confirmed that BSBL is a promising sparse-representation-based classifier.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China (Grant No. 60903128) and also by the Fundamental Research Funds for the Central Universities (Grant No. JBK130142 and JBK130503).
References
 (1) A. Martinez and A. Kak, “PCA versus LDA,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 2, pp. 228–233, 2001.
 (2) P. Phillips, P. Flynn, T. Scruggs, K. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, and W. Worek, “Overview of the face recognition grand challenge,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR ’05), vol. 1, pp. 947–954, San Diego, CA, USA, June 2005.
 (3) M. Turk and A. Pentland, “Face recognition using eigenfaces,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR ’91), pp. 586–591, Maui, HI, USA, June 1991.
 (4) R. Brunelli and T. Poggio, “Face recognition: Features versus templates,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 10, pp. 1042–1052, 1993.
 (5) X. He, S. Yan, Y. Hu, P. Niyogi, and H. Zhang, “Face recognition using laplacianfaces,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 3, pp. 328–340, June 2005.
 (6) M. Turk and A. Pentland, “Eigenfaces for recognition,” Journal of cognitive neuroscience, vol. 3, no. 1, pp. 71–86, 1991.
 (7) P. Belhumeur, J. Hespanha, and D. Kriegman, “Eigenfaces vs. fisherfaces: Recognition using class specific linear projection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720, 1997.
 (8) R. O. Duda, P. E. Hart, and D. G. Stork, Pattern classification. John Wiley & Sons, 2012.
 (9) J. Ho, M. Yang, J. Lim, K. Lee, and D. Kriegman, “Clustering appearances of objects under varying illumination conditions,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR ’03), vol. 1, pp. 1–11, Madison, WI, USA, June 2003.
 (10) V. Vapnik, The nature of statistical learning theory. Springer, 2000.
 (11) J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma, “Robust face recognition via sparse representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210–227, 2009.
 (12) S. Gao, I. W.H. Tsang, and L.T. Chia, “Kernel sparse representation for image classification and face recognition,” in Computer Vision–ECCV 2010, pp. 1–14, Springer, 2010.
 (13) M. Yang, L. Zhang, J. Yang, and D. Zhang, “Robust sparse coding for face recognition,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR ’11), pp. 625–632, Colorado Springs, CO, USA, June 2011.
 (14) C. Lu, H. Min, J. Gui, L. Zhu, and Y. Lei, “Face recognition via weighted sparse representation,” Journal of Visual Communication and Image Representation, vol. 24, no. 2, pp. 111–116, 2013.
 (15) M. Yang and L. Zhang, “Gabor feature based sparse representation for face recognition with gabor occlusion dictionary,” in Computer Vision–ECCV 2010, pp. 448–461, Springer, 2010.
 (16) L. Zhang, M. Yang, Z. Feng, and D. Zhang, “On the dimensionality reduction for sparse representation based face recognition,” in Proceedings of International Conference on Pattern Recognition (ICPR ’10), pp. 1237–1240, Istanbul, Turkey, August 2010.
 (17) Y. Chen, T. Do, and T. Tran, “Robust face recognition using locally adaptive sparse representation,” in Proceedings of IEEE 17th International Conference on Image Processing (ICIP ’10), pp. 1657–1660, Hong Kong, September 2010.
 (18) Y. Xu, W. Zuo, and Z. Fan, “Supervised sparse representation method with a heuristic strategy and face recognition experiments,” Neurocomputing, vol. 79, no. 1, pp. 125–131, 2012.
 (19) M. Yuan and Y. Lin, “Model selection and estimation in regression with grouped variables,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 68, pp. 49–67, 2006.
 (20) E. Elhamifar and R. Vidal, “Blocksparse recovery via convex optimization,” IEEE Transactions on Signal Processing, vol. 60, no. 8, pp. 4094–4107, 2012.
 (21) Z. Zhang and B. D. Rao, “Extension of SBL algorithms for the recovery of block sparse signals with intrablock correlation,” IEEE Transactions on Signal Processing, vol. 61, no. 8, pp. 2009–2015, 2013.
 (22) B. Natarajan, “Sparse approximate solutions to linear systems,” SIAM journal on computing, vol. 24, no. 2, pp. 227–234, 1995.
 (23) D. Donoho, “For most large underdetermined systems of linear equations the minimal $\ell_1$-norm solution is also the sparsest solution,” Communications on pure and applied mathematics, vol. 59, no. 6, pp. 797–829, 2006.
 (24) E. Candes and T. Tao, “Nearoptimal signal recovery from random projections: Universal encoding strategies?,” IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406–5425, 2006.
 (25) R. Tibshirani, “Regression shrinkage and selection via the Lasso,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 58, no. 1, pp. 267–288, 1996.
 (26) S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM journal on scientific computing, vol. 20, no. 1, pp. 33–61, 1998.
 (27) R. G. Baraniuk, V. Cevher, M. F. Duarte, and C. Hegde, “Modelbased compressive sensing,” IEEE Transactions on Information Theory, vol. 56, no. 4, pp. 1982–2001, 2010.
 (28) J. Huang and T. Zhang, “The benefit of group sparsity,” The Annals of Statistics, vol. 38, no. 4, pp. 1978–2004, 2010.
 (29) N. Rao, B. Recht, and R. Nowak, “Universal measurement bounds for structured sparse signal recovery,” in International Conference on Artificial Intelligence and Statistics (AISTATS ’12), pp. 942–950, Canary Islands, Spain, April 2012.
 (30) Z. Zhang, T.P. Jung, S. Makeig, and B. D. Rao, “Compressed sensing for energyefficient wireless telemonitoring of noninvasive fetal ECG via block sparse Bayesian learning,” IEEE Transactions on Biomedical Engineering, vol. 60, no. 2, pp. 300–309, 2013.
 (31) M. Tipping, “Sparse Bayesian learning and the relevance vector machine,” The Journal of Machine Learning Research, vol. 1, pp. 211–244, 2001.
 (32) Z. Zhang, Sparse Signal Recovery Exploiting Spatiotemporal Correlation. PhD thesis, University of California, San Diego, 2012.
 (33) D. P. Wipf, Bayesian methods for finding sparse representations. PhD thesis, University of California, San Diego, 2006.
 (34) D. P. Wipf, “Sparse estimation with structured dictionaries,” in Advances in Neural Information Processing Systems 24, pp. 2016–2024, 2011.
 (35) D. Wipf and B. Rao, “Sparse Bayesian learning for basis selection,” IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2153–2164, 2004.
 (36) J. Wan, Z. Zhang, J. Yan, T. Li, B. Rao, S. Fang, S. Kim, S. Risacher, A. Saykin, and L. Shen, “Sparse Bayesian multi-task learning for predicting cognitive outcomes from neuroimaging measures in Alzheimer’s disease,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR ’12), pp. 940–947, Providence, Rhode Island, USA, June 2012.
 (37) Z. Zhang and B. D. Rao, “Sparse signal recovery with temporally correlated source vectors using sparse Bayesian learning,” IEEE Journal of Selected Topics in Signal Processing, vol. 5, no. 5, pp. 912–926, 2011.
 (38) D. MacKay, “The evidence framework applied to classification networks,” Neural computation, vol. 4, no. 5, pp. 720–736, 1992.
 (39) Z. Zhang, T.P. Jung, S. Makeig, and B. D. Rao, “Compressed sensing of EEG for wireless telemonitoring with low energy consumption and inexpensive hardware,” IEEE Transactions on Biomedical Engineering, vol. 60, no. 1, pp. 221–224, 2013.
 (40) A. Georghiades, P. Belhumeur, and D. Kriegman, “From few to many: Illumination cone models for face recognition under variable lighting and pose,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 643–660, 2001.
 (41) A. Martinez, “The AR face database,” CVC Technical Report, vol. 24, 1998.
 (42) T. Sim, S. Baker, and M. Bsat, “The CMU pose, illumination, and expression (PIE) database,” in Proceedings of Fifth IEEE International Conference on Automatic Face and Gesture Recognition (FG ’02), pp. 46–51, Washington D.C., USA, May 2002.
 (43) K. Lee, J. Ho, and D. Kriegman, “Acquiring linear subspaces for face recognition under variable lighting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 5, pp. 684–698, 2005.