Global and Local Structure Preserving Sparse Subspace Learning: An Iterative Approach to Unsupervised Feature Selection

Nan Zhou, Yangyang Xu, Hong Cheng, Jun Fang, Witold Pedrycz
Center for Robotics, School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 611731, China
Department of Electrical and Computer Engineering, University of Alberta, AB, T6G2R3, Canada
Department of Combinatorics and Optimization, University of Waterloo, ON, N2L3G1, Canada

As we aim at alleviating the curse of high dimensionality, subspace learning is becoming more popular. Existing approaches use information about either the global or the local structure of the data; few studies exploit both simultaneously, even though each contains important information. In this paper, we propose a global and local structure preserving sparse subspace learning (GLoSS) model for unsupervised feature selection. The model realizes feature selection and subspace learning simultaneously. In addition, we develop a greedy algorithm for a generic combinatorial model, and an iterative strategy based on accelerated block coordinate descent is used to solve the GLoSS problem. We also provide a convergence analysis of the whole iterate sequence of the proposed iterative algorithm. Extensive experiments on real-world datasets show the superiority of the proposed approach over several state-of-the-art unsupervised feature selection approaches.

Machine learning, Feature selection, Subspace learning, Unsupervised learning
journal: Pattern Recognition

1 Introduction

With the advances in data processing, the dimensionality of the data keeps increasing and can be extremely high in many fields such as computer vision, machine learning and image processing. High dimensionality not only greatly increases the time and storage required for data analysis but also introduces redundancy and noise that can decrease the accuracy of subsequent methods. Hence, dimensionality reduction becomes an important and often necessary preprocessing step for machine learning tasks such as clustering and classification.

Generally speaking, dimensionality reduction approaches can be divided into two classes: feature selection and subspace learning. Feature selection methods aim to select a subset of the most representative features according to a certain criterion (e.g., guyon2003introduction (); yang2004essence (); herman2013mutual (); yan2015sparse (); zhao2013similarity ()), while subspace learning methods aim to learn a (linear or nonlinear) transformation that maps the original high-dimensional data into a lower-dimensional subspace (e.g., wang2014robust (); duda2012pattern (); he2005neighborhood (); cai2007spectral ()). Subspace learning methods, such as principal component analysis (PCA), combine all original features in each dimension of the learned subspace, which causes interpretation difficulties. To overcome this difficulty, sparse subspace learning methods (e.g., zou2006sparse (); moghaddam2006generalized (); cai2007spectralsparse ()) and joint models that simultaneously perform subspace learning and feature selection (e.g., cai2010unsupervised (); gu2011joint (); li2012unsupervised (); wang2015subspace ()) have been developed.

This paper exhibits the following main contributions:

  1. We propose a novel unsupervised sparse subspace learning model for feature selection. The model simultaneously preserves the global and local structures of the data, both of which contain important discriminative information for feature selection, as demonstrated in du2013local (); liu2014global (). We derive the model by first relaxing an existing combinatorial model and then adding a group sparsity regularization term. The regularization term controls the row sparsity of the transformation matrix, and since each row of the transformation matrix corresponds to a feature, the proposed model can automatically select representative features and eases interpretation.

  2. We, for the first time, propose a greedy algorithm for the original combinatorial optimization problem. In addition, we apply the accelerated block coordinate descent (BCD) method proposed in xu2013block () to the relaxed continuous but nonconvex problem. The BCD method utilizes the bi-convexity structure of the problem and has proven very efficient for our purposes.

  3. We establish a whole iterate sequence convergence result of the BCD method for our problem under consideration by assuming the existence of a full rank limit point. Because of the peculiarity of the formulated problem, the result is new and not implied by any existing convergence results of BCD.

  4. We conduct extensive experimental studies. The proposed method is tested on six real-world datasets coming from different areas and compared to eight state-of-the-art unsupervised feature selection algorithms. The results demonstrate the superiority of the proposed method over all the other compared methods. In addition, we study the sensitivity of the proposed method to the parameters of the model and observe that it can perform in a stable way within a large range of values of the parameters.

Organization and notation

The paper is organized as follows. In Sect. 2, we give a brief review of recent related studies on subspace learning. Sect. 3 reviews two local structure preserving methods and proposes a local structure preserving sparse subspace learning model. In Sect. 4, we present an algorithm for solving the proposed model, together with convergence results. Experimental results are reported in Sect. 5. Finally, Sect. 6 concludes this paper.

To facilitate the presentation of the material, we summarize the notation in Table 1.

Notation  Description
$n$  The number of instances
$d$  The number of features
$k$  The number of selected features
$w^i$  The $i$-th row of the matrix $W$
$c$  The dimension of the subspace
$p$  The number of nearest neighbors
$\|W\|_{2,1}$  $\sum_{i=1}^{d}\|w^i\|_2$, the sum of the $\ell_2$-norms of the rows of $W$
$\|x\|_0$  The number of nonzero elements in the vector $x$
$|\mathcal{I}|$  The cardinality of the set $\mathcal{I}$
Table 1: Notation

2 Related Studies

Subspace learning

One well-known subspace learning method is principal component analysis (PCA) duda2012pattern (); lipovetsky2009pca (). It maximizes the global structure information of the data in the principal space and is thus optimal in terms of data fitting. Besides the global structure, the local structure of the data also contains important discriminative information bottou1992local (), which plays a crucial role in pattern recognition jiang2011linear (). Many subspace learning methods preserve different local structures of the data for different problems and can yield better performance than the traditional PCA method. These methods usually use the linear extension of graph embedding (LGE) to preserve local structure. With different choices of the graph adjacency matrix, the LGE framework leads to different subspace learning methods. Popular ones include Linear Discriminant Analysis (LDA) duda2012pattern (); ching2012regularized (), Locality Preserving Projection (LPP) he2005laplacian (); niyogi2004locality (); zhang2010graph () and Neighborhood Preserving Embedding (NPE) he2005neighborhood (). One drawback of these locality preservation methods is that they require the eigen-decomposition of dense matrices, which can be very expensive in both CPU time and storage, especially for problems involving high-dimensional data. To overcome this drawback, Cai et al. cai2007spectral () proposed a Spectral Regression (SR) method that transforms the eigen-decomposition problem into a two-step regression problem that is easier to solve.

Sparse subspace learning

Although subspace learning can transform the original high-dimensional data into a lower-dimensional space, it mingles all features and lacks interpretability. For better interpretability, sparse subspace learning methods have been proposed in the literature by adding certain sparsity regularization terms or sparsity constraints to subspace learning models. For example, sparse PCA (SPCA) zou2006sparse () adds an “Elastic Net” term to the traditional PCA. Moghaddam et al. moghaddam2005spectral () proposed a spectral bounds framework for sparse subspace learning. Cai et al. cai2007spectralsparse () proposed a unified sparse subspace learning method based on the spectral regression model, which adds an $\ell_1$ regularization term in the regression step. Qiao et al. qiao2010sparsity () introduced the Sparsity Preserving Projection (SPP) method for subspace learning, which utilizes the sparse representation coefficients to construct the graph Laplacian. It is worth mentioning that besides subspace learning, sparsity-regularized methods have also been used in many other fields such as computer vision cheng2009sparsity (); chengsparsity (), image processing fang2015pattern (), and signal recovery zhou2015bayesian ().

Simultaneous feature selection and subspace learning

Recently, joint methods have been proposed to simultaneously perform feature selection and subspace learning. The core idea of these methods is to use the transformation matrix to guide feature selection according to the norms of its row/column vectors. Cai et al. cai2010unsupervised () combined sparse subspace learning with feature selection and proposed the Multi-Cluster Feature Selection (MCFS) method. Because MCFS uses an $\ell_1$-norm term to control the sparsity of the transformation matrix, different dimensions of the learned subspace may combine different features, and thus the model lacks sound interpretability. Gu et al. gu2011joint () improved the MCFS method by using an $\ell_{2,1}$-norm term to enforce the row sparsity of the transformation matrix. This way, the transformation matrix will have zero rows corresponding to irrelevant features. Wang et al. wang2015subspace () proposed an unsupervised feature selection framework, which uses a global regression term for subspace learning and an orthogonal transformation matrix for feature selection. In general, the orthogonality constraint may limit its applications: as mentioned in qian2013robust (), in practice, feature weight vectors are not necessarily orthogonal to each other. In addition, the model discussed in wang2015subspace () does not utilize the local structure of the data. As demonstrated in bottou1992local (), the local structure of the data often contains essential discriminative information.

Other related works

There are some other related methods for subspace learning. Provided with only weak label information (e.g., preference relationships between examples), Xu et al. xu2014large () propose a Weakly Supervised Dimensionality Reduction (WSDR) method, which considers samples’ pairwise angles as well as distances. For the $k$-means problem, Boutsidis et al. boutsidis2015randomized () proposed randomized feature selection and subspace learning methods and showed that a constant-factor approximation can be guaranteed with respect to the optimal $k$-means objective value. Other popular subspace learning methods include: Nonnegative Matrix Factorization (NMF) lee2001algorithms (); yuan2005projective (), which considers subspace learning of nonnegative data; joint LDA and $k$-means ding2007adaptive (), which combines LDA and $k$-means clustering for unsupervised subspace learning; and Dictionary Learning (DL) zhang2011linear (), which first learns a dictionary via sparse coding and then uses the dictionary to decompose each sample into more discriminative and less discriminative parts for subspace learning. For more subspace learning methods, see de2003framework () and the references therein.

3 The Proposed Framework of Local Structure Preserving Sparse Subspace Learning

In this section, we introduce our feature selection models, which encourage global data fitting and also preserve the local structure information of the data. The first model is of a combinatorial nature, allowing only 0-1 valued variables. The modeling idea is intuitive and inspired by (11) of wang2015subspace (), but it is not easy to find a good approximate solution to the problem. The second model relaxes the first one into its continuous counterpart, and various optimization methods can be utilized to solve it. More importantly, we find that the relaxed model can in most cases produce better performance than the original one; see the numerical results reported in Section 5. We emphasize again that our main contributions concern the second model and the algorithm developed for it.

3.1 A Generic Formulation

Given $n$ data samples in the $d$-dimensional space, the goal of feature selection is to find a small set of $k$ features that captures the most useful information of the data and can better serve to solve classification or clustering problems. One natural way to measure the information content is to see how close the original data samples are to the subspace spanned by the selected features. Mathematically, the distance of a vector $x$ to a subspace $\mathcal{S}$ can be represented as $\|x - P_{\mathcal{S}}x\|_2$, where $P_{\mathcal{S}}$ denotes the projection onto $\mathcal{S}$ and $\|\cdot\|_2$ is the Euclidean 2-norm. Hence, the feature selection problem can be described as follows

$$\min_{W,\,H}\ \|X - XWH\|_F^2,\qquad \text{s.t.}\ W\in\{0,1\}^{d\times k},\ \mathbf{1}_d^\top W = \mathbf{1}_k^\top,\ \|W\|_{2,0} = k, \qquad (1)$$

where $X\in\mathbb{R}^{n\times d}$ is the data matrix, $H\in\mathbb{R}^{k\times d}$ is a coefficient matrix, and $\|W\|_{2,0}$ counts the nonzero rows of $W$. Concerning the proposed model, we make a few remarks:

  1. The matrix $W$ is the selection matrix with entries of “0” or “1”. The constraint $\mathbf{1}_d^\top W = \mathbf{1}_k^\top$ enforces that each column of $W$ has only one “1”. Therefore, at most $k$ features are selected.

  2. The constraint $\|W\|_{2,0} = k$ enforces that $W$ has $k$ nonzero rows. No feature will be selected more than once, and thus exactly $k$ features will be chosen.

  3. Given $W$, the optimal $H$ produces the coefficients of all features projected onto the subspace spanned by the selected features. Hence, (1) expresses the distance of $X$ to the learned subspace.
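To make these remarks concrete, the following small NumPy sketch (with illustrative sizes, and the matrix names $W$ and $H$ assumed from the discussion above) computes the optimal coefficients by least squares and checks that enlarging the selected set never increases the fitting error:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 5))             # n = 8 samples, d = 5 features (toy sizes)

# 0-1 selection matrix W (d x k) with a single "1" per column: select features {0, 2}.
W = np.zeros((5, 2))
W[0, 0] = W[2, 1] = 1.0
XW = X @ W                                  # the selected columns of X

# The optimal coefficient matrix H minimizes ||X - XW H||_F (least squares).
H = np.linalg.pinv(XW) @ X
residual2 = np.linalg.norm(X - XW @ H)      # distance of X to the learned subspace

# Enlarging the selected set cannot increase the residual.
W3 = np.zeros((5, 3))
W3[0, 0] = W3[2, 1] = W3[4, 2] = 1.0
XW3 = X @ W3
residual3 = np.linalg.norm(X - XW3 @ np.linalg.pinv(XW3) @ X)
assert residual3 <= residual2 + 1e-9
```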

The recent work wang2015subspace () mentions the use of a 0-1 feature selection matrix, but it does not explicitly formulate an optimization model like (1). As shown in bottou1992local (), the local structure of the data often contains discriminative information that is important for distinguishing different samples. To make the learned subspace preserve local structure, one can add a regularization term to the objective to promote such structural information, namely, to solve the regularized model

$$\min_{W,\,H}\ \|X - XWH\|_F^2 + \alpha\,\mathcal{R}(XW),\qquad \text{s.t.}\ W\in\{0,1\}^{d\times k},\ \mathbf{1}_d^\top W = \mathbf{1}_k^\top,\ \|W\|_{2,0} = k, \qquad (2)$$

where $\mathcal{R}$ is a local structure promoting regularization term, and $\alpha$ is a parameter that balances data fitting and regularization. In the next subsection, we introduce different forms of $\mathcal{R}$.

3.2 Local Structure Preserving Methods

The local structure of the data often contains important information that can be used to distinguish the samples cai2010unsupervised (); he2005laplacian (). A predictor utilizing local structure information can be much more efficient than one using only global information bottou1992local (). Therefore, one may want the learned lower-dimensional subspace to preserve the local structure of the training data. We briefly review two widely used local structure preserving methods.

3.2.1 Local Linear Embedding

The Local Linear Embedding (LLE) roweis2000nonlinear () method first finds the set $N(x_i)$ of $p$ nearest neighbors of each sample $x_i$ and then constructs the similarity matrix $A$ as the (normalized) solution of the following problem

$$\min_{A}\ \sum_{i=1}^{n}\Big\|x_i - \sum_{j:\,x_j\in N(x_i)} a_{ij}\,x_j\Big\|_2^2,\qquad \text{s.t.}\ a_{ij} = 0 \text{ if } x_j\notin N(x_i). \qquad (3)$$

One can regard $a_{ij}$ as the coefficient of the $j$-th sample when approximating the $i$-th sample; the coefficient is zero if the $j$-th sample is not a neighbor of the $i$-th one. After obtaining $A$ from (3), LLE further normalizes it such that $\sum_j a_{ij} = 1$ for each $i$. Then it computes the lower-dimensional representation $Y = XW$ through solving the following problem

$$\min_{Y}\ \sum_{i=1}^{n}\Big\|y_i - \sum_{j=1}^{n} a_{ij}\,y_j\Big\|_2^2. \qquad (4)$$

Note that if $W$ is a selection matrix as defined in (2), each row $y_i$ of $Y = XW$ becomes a lower-dimensional sample, keeping the features selected by $W$ and removing all other features. Let $M = (I - A)^\top(I - A)$, where $I$ is the identity matrix. Then it is easy to see that (4) can be equivalently expressed as

$$\min_{W}\ \mathrm{Tr}\big(W^\top X^\top M X W\big). \qquad (5)$$
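The LLE weight construction can be sketched as follows; the function name, the small regularization added to the local Gram matrix, and the toy sizes are our own illustrative choices:

```python
import numpy as np

def lle_weights(X, p):
    """Reconstruction weights: each sample is approximated by its p nearest
    neighbors, with weights normalized to sum to one (a standard LLE
    construction; regularizing the local Gram matrix is a common safeguard)."""
    n = X.shape[0]
    S = np.zeros((n, n))
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:p + 1]                  # p nearest neighbors (skip self)
        Z = X[nbrs] - X[i]                                 # centered neighborhood
        G = Z @ Z.T
        G += 1e-8 * np.trace(G) * np.eye(p)                # regularize for stability
        w = np.linalg.solve(G, np.ones(p))
        S[i, nbrs] = w / w.sum()                           # normalize: rows sum to 1
    return S

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 3))
S = lle_weights(X, p=4)
```

Each row of the returned matrix has at most `p` nonzeros and sums to one, matching the normalization used in the text.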
3.2.2 Locality Preserving Projection

For the Locality Preserving Projection niyogi2004locality () (LPP) method, the similarity matrix $A$ is generated by

$$a_{ij} = \begin{cases} \exp\big(-\|x_i - x_j\|_2^2/\sigma\big), & x_j\in N(x_i) \text{ or } x_i\in N(x_j),\\ 0, & \text{otherwise},\end{cases} \qquad (6)$$

where $N(x_i)$ is the set of $p$ nearest neighbors of $x_i$ and $\sigma > 0$ is a bandwidth parameter. The LPP method requires the lower-dimensional representation to preserve the similarity of the original data and forms the transformation matrix $W$ by solving the following optimization problem

$$\min_{W}\ \frac{1}{2}\sum_{i,j} a_{ij}\,\big\|W^\top x_i - W^\top x_j\big\|_2^2. \qquad (7)$$

Let $L = D - A$ be the Laplacian matrix, where $D$ is a diagonal matrix, called the degree matrix, with diagonal elements $D_{ii} = \sum_j a_{ij}$. Then (7) can be equivalently expressed as

$$\min_{W}\ \mathrm{Tr}\big(W^\top X^\top L X W\big). \qquad (8)$$
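A minimal sketch of the LPP ingredients (heat-kernel similarity on a nearest-neighbor graph, degree matrix, and Laplacian), with illustrative sizes and parameter values; it also verifies numerically that the trace form equals the similarity-weighted sum of squared distances:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((15, 4))   # rows are samples
p, sigma = 3, 1.0                  # neighborhood size and heat-kernel width (illustrative)

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
A = np.zeros((15, 15))
for i in range(15):
    for j in np.argsort(d2[i])[1:p + 1]:       # symmetrized p-nearest-neighbor graph
        A[i, j] = A[j, i] = np.exp(-d2[i, j] / sigma)

D = np.diag(A.sum(axis=1))                     # degree matrix
L = D - A                                      # graph Laplacian

# Tr(Y^T L Y) equals half the similarity-weighted sum of squared distances,
# so minimizing it keeps similar samples close in the low-dimensional space.
Y = X[:, :2]                                   # a toy low-dimensional representation
lhs = np.trace(Y.T @ L @ Y)
rhs = 0.5 * sum(A[i, j] * ((Y[i] - Y[j]) ** 2).sum()
                for i in range(15) for j in range(15))
assert np.isclose(lhs, rhs)
```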
3.3 Relaxed Formulation

The problem (2) is of a combinatorial nature, and we do not have many choices for solving it. In the next section, we develop a greedy algorithm, which chooses features one by one, each selection decreasing the objective value the most among all remaining features. Numerically, we observe that the greedy method often achieves satisfactory performance. However, it can sometimes perform very badly; see the results on Yale64 and Usps in section 5. For this reason, we seek an alternative way to select features by first relaxing (2) to a continuous problem and then employing a reliable optimization method to solve the relaxed problem. As observed in our tests, the relaxed method performs comparably well with, and most of the time much better than, the original one.

As remarked at the end of Section 3.1, any feasible $W$ of (2) is nonnegative and has $k$ nonzero rows. If $k \ll d$ (which is usually satisfied), then $W$ has lots of zero rows. Based on these observations, we relax the 0-1 constraint to a nonnegativity constraint and the hard constraints to $R_s(W) \le \tau$, where $R_s(W)$ measures the row sparsity of $W$. One choice of $R_s$ is the group Lasso yuan2006model (), i.e.,

$$R_s(W) = \|W\|_{2,1} = \sum_{i=1}^{d}\|w^i\|_2, \qquad (9)$$

where $w^i$ denotes the $i$-th row of $W$. This way, we relax (2) to

$$\min_{W\ge 0,\,H}\ \|X - XWH\|_F^2 + \alpha\,\mathcal{R}(XW),\qquad \text{s.t.}\ R_s(W) \le \tau, \qquad (10)$$

or equivalently

$$\min_{W\ge 0,\,H}\ \|X - XWH\|_F^2 + \alpha\,\mathcal{R}(XW) + \beta\,\|W\|_{2,1}, \qquad (11)$$
where $W \ge 0$ means that $W$ belongs to the set of nonnegative matrices, and $\beta$ is a parameter corresponding to $\tau$. Note that $W \in \mathbb{R}^{d\times c}$ now also serves as a transformation matrix for subspace learning, and $c$ is the dimension of the learned subspace. It is not necessary that $c = k$; for a better approximation by subspace learning, we will choose $c \ge k$. We will focus on (11) because it is easier to solve than (10). Practically, one needs to tune the parameters $k$, $c$, $\alpha$, and $\beta$. As shown in section 5, the model gives stably satisfactory performance over a wide range of parameter values.

Our model is similar to the Matrix Factorization Feature Selection (MFFS) model proposed in wang2015subspace (). The difference is that the MFFS model restricts the matrix $W$ to be orthogonal, while we use a regularization term to promote the row sparsity of $W$. Although orthogonality makes their model closer to the original model (1), it increases the difficulty of solving the problem. In addition, MFFS does not utilize a local structure preserving term as we do, and thus may lose some important local information. Numerical tests in section 5 demonstrate that the proposed model, along with an iterative method, produces better results than the MFFS method.

Before completing this section, let us make some remarks on the relaxed model. Originally, $W$ is restricted to have exactly $k$ nonzero rows, so it can be extremely sparse when $k \ll d$, and one may consider including an element-wise sparsity-promoting term (e.g., the $\ell_1$-norm) in the objective of (11). However, doing so is not necessary, since both $\|W\|_{2,1}$ and the nonnegativity constraint encourage sparsity of $W$, and numerically we notice that the $W$ output by our algorithm is indeed very sparse. Another point worth mentioning is that the elements of $W$ given by (11) are real numbers and do not automatically select features. For the purpose of feature selection, after obtaining a solution $W$, we choose the features corresponding to the rows of $W$ with the largest norms, because larger values imply more important roles played by the corresponding features.
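The row-sparsity regularizer and the row-norm feature ranking described above can be sketched as follows (function names are our own):

```python
import numpy as np

def l21_norm(W):
    # Sum of the Euclidean norms of the rows: zero rows cost nothing,
    # which is why this regularizer promotes row sparsity.
    return np.linalg.norm(W, axis=1).sum()

def select_features(W, k):
    # Rank features by the l2 norm of the corresponding row of W.
    return np.argsort(-np.linalg.norm(W, axis=1))[:k]

W = np.array([[3.0, 4.0],    # feature 0: row norm 5
              [0.0, 0.0],    # feature 1: zero row, ranked last
              [1.0, 0.0]])   # feature 2: row norm 1
assert l21_norm(W) == 6.0
assert list(select_features(W, 2)) == [0, 2]
```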

3.4 Extensions

In (11), the Frobenius norm is used to measure data fitting; it is typically suitable when Gaussian noise is assumed in the data and is also commonly used if no prior information is available. One can of course use another norm or metric if different prior information is known. For instance, if the data come with outliers, one can employ Cauchy Regression (CR) liu2014robustness () instead of the Frobenius norm to improve robustness and modify (11) to read as

$$\min_{W\ge 0,\,H}\ \sum_{i,j}\ln\Big(1 + \big(X - XWH\big)_{ij}^2/\gamma^2\Big) + \alpha\,\mathcal{R}(XW) + \beta\,\|W\|_{2,1}, \qquad (12)$$

where $\gamma > 0$ is a scale parameter. When the data involve heavy-tailed noise, liu2015performance (); guan2012mahnmf () suggest using the Manhattan distance defined by $\|Z\|_1 = \sum_{i,j}|z_{ij}|$, and this way, (11) can be modified to

$$\min_{W\ge 0,\,H}\ \|X - XWH\|_1 + \alpha\,\mathcal{R}(XW) + \beta\,\|W\|_{2,1}. \qquad (13)$$
4 Solving the Proposed Sparse Subspace Learning Models

In this section, we present algorithms to approximately solve (2) and (11). Throughout the rest of the paper, we assume that the local structure term $\mathcal{R}$ takes the form of either (5) or (8) and that $R_s$ is given by (9). Due to the combinatorial nature of (2), we propose a greedy method to solve it. The relaxed problem (11) is continuous, and various optimization methods can be applied. Although its objective is nonconvex jointly with respect to $W$ and $H$, it is convex with respect to one of them while the other is fixed. Based on this property, we choose the block coordinate descent method to solve (11).

4.1 Greedy Strategy for (2)

In this subsection, a greedy algorithm is developed for selecting $k$ out of $d$ features based on (2). The idea is as follows: at each step, we select, from the remaining unselected features, the one that decreases the objective value the most. We begin the design of the algorithm with the following observation.

Observation 1.

Let $\mathcal{I}$ and $\mathcal{J}$ be two index sets of features. Assume $\mathcal{I}\subseteq\mathcal{J}$, and let $X_{\mathcal{I}}$ and $X_{\mathcal{J}}$ be the submatrices of $X$ with columns indexed by $\mathcal{I}$ and $\mathcal{J}$, respectively. Then

$$\min_{H}\ \|X - X_{\mathcal{J}}H\|_F^2 \ \le\ \min_{H}\ \|X - X_{\mathcal{I}}H\|_F^2.$$
From the above observation, if the current index set of selected features is $\mathcal{I}$, the data fitting becomes no worse when we enlarge $\mathcal{I}$ by adding more features. Below, we describe in detail how to choose such additional features. Assume $X$ is normalized such that

$$\|x_j\|_2 = 1,\qquad j = 1,\ldots,d,$$

where $x_j$ denotes the $j$-th column of $X$. Let $\mathcal{I}$ be the current index set of selected features. The optimal $H$ corresponding to $\mathcal{I}$ is given by

$$H_{\mathcal{I}} = X_{\mathcal{I}}^{\dagger}X,$$

where $X_{\mathcal{I}}^{\dagger}$ denotes the Moore-Penrose pseudoinverse of $X_{\mathcal{I}}$. Now consider adding one more feature into $\mathcal{I}$, say the $j$-th one. Writing $R_{\mathcal{I}} = X - X_{\mathcal{I}}X_{\mathcal{I}}^{\dagger}X$ for the current residual, the lowest data fitting error satisfies

$$\min_{H,\,h}\ \|X - X_{\mathcal{I}}H - x_j h\|_F^2 \ \le\ \min_{h}\ \|R_{\mathcal{I}} - x_j h\|_F^2 \ =\ \|R_{\mathcal{I}}\|_F^2 - \|x_j^\top R_{\mathcal{I}}\|_2^2,$$

where the last equality is achieved at $h = x_j^\top R_{\mathcal{I}}$ (recall $\|x_j\|_2 = 1$). Hence, we can choose $j$ such that $\|x_j^\top R_{\mathcal{I}}\|_2$ is the largest among all features not in $\mathcal{I}$.

Comparing $\|x_j^\top X\|_2$ with $\|x_j^\top R_{\mathcal{I}}\|_2$, we find that the latter can serve better: it is exactly the correlation between $x_j$ and the residual $R_{\mathcal{I}}$. Denote the correlation between $x_j$ and $R_{\mathcal{I}}$ as

$$\mathrm{corr}(j, \mathcal{I}) = \|x_j^\top R_{\mathcal{I}}\|_2.$$

As shown in tropp2007signal (), if $\mathrm{corr}(j, \mathcal{I})$ is large, then the columns of $X$ can be better linearly represented after adding the $j$-th feature. To preserve local structure, we also need to incorporate $\mathcal{R}$. If the set of selected features is $\mathcal{I}$, then

$$\mathcal{R}(XW) = \mathrm{Tr}\big(W^\top X^\top L X W\big) = \sum_{j\in\mathcal{I}}\big(X^\top L X\big)_{jj}.$$

Assuming the LPP regularizer (8) is used, i.e., the method in section 3.2.2 (which is used throughout our tests), adding the $j$-th feature increases the local structure term by $(X^\top L X)_{jj}$. Therefore, we can enlarge $\mathcal{I}$ by adding one more feature index $j^*$ such that

$$j^* \in \arg\max_{j\notin\mathcal{I}}\ \Big\{\mathrm{corr}(j,\mathcal{I}) - \big(X^\top L X\big)_{jj}\Big\},$$

where, for simplicity, we have set the balancing parameter $\alpha$ of (2) to one. Algorithm 1 summarizes our greedy method; to better balance the correlation and local structure preserving terms, we normalize both of them in line 5 of Algorithm 1.

1:  Input: Data matrix $X$ and the number $k$ of features to be selected.
2:  Output: Index set $\mathcal{I}$ of selected features with $|\mathcal{I}| = k$.
3:  Initialize residual $R = X$, candidate set $\mathcal{C} = \{1,\ldots,d\}$, selected set $\mathcal{I} = \emptyset$.
4:  for $t = 1$ to $k$ do
5:     $j^* \in \arg\max_{j\in\mathcal{C}}\ \big\{\|x_j^\top R\|_2 - (X^\top L X)_{jj}\big\}$ (both terms normalized).
6:     $\mathcal{I} \leftarrow \mathcal{I}\cup\{j^*\}$, $\mathcal{C} \leftarrow \mathcal{C}\setminus\{j^*\}$, and update $R = X - X_{\mathcal{I}}X_{\mathcal{I}}^{\dagger}X$.
7:  end for
Algorithm 1 Greedy Locally Preserved Subspace Learning (GLPSL)
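A simplified sketch of this greedy strategy: it keeps only the correlation-with-residual criterion and omits the local structure term of Algorithm 1, so it should be read as an illustration rather than the full method:

```python
import numpy as np

def greedy_select(X, k):
    """Pick, k times, the feature whose (unit-normalized) column correlates most
    with the current residual, then project the data onto the span of the
    selected columns. The local-structure term of Algorithm 1 is omitted."""
    n, d = X.shape
    Xn = X / np.linalg.norm(X, axis=0, keepdims=True)   # unit-norm columns
    R = X.copy()                                        # residual
    selected = []
    for _ in range(k):
        scores = np.linalg.norm(Xn.T @ R, axis=1)       # correlation with residual
        scores[selected] = -np.inf                      # never reselect a feature
        j = int(np.argmax(scores))
        selected.append(j)
        B = X[:, selected]
        R = X - B @ np.linalg.pinv(B) @ X               # update residual by projection
    return selected

rng = np.random.default_rng(3)
X = rng.standard_normal((30, 6))
idx = greedy_select(X, 3)
assert len(set(idx)) == 3
```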

4.2 Accelerated block coordinate update method for (11)

In this subsection, we present an alternative method for feature selection based on (11). Utilizing the bi-convexity of the objective, we employ the accelerated block coordinate update (BCU) method proposed in xu2013block () to solve (11). As explained in xu2013block (), BCU is especially suited to bi-convex optimization problems like (11) (more precisely, xu2013block () proposes BCU for multi-convex optimization problems, which include bi-convex problems as special cases). It has low iteration complexity, as shown in section 4.3, and also guarantees convergence of the whole iterate sequence on (11), as shown in section 4.4. Whole-sequence convergence is important because otherwise running the algorithm for different numbers of iterations may result in significantly different solutions, which would further affect the clustering or classification results. Many existing methods, such as the multiplicative update method lee2001algorithms (), only guarantee nonincreasing objective values or subsequence convergence, so our convergence result is much stronger.

Following the framework of BCU, our algorithm is derived by alternatingly updating $W$ and $H$, one at a time while the other is fixed at its most recent value. Specifically, let

$$f(W,H) = \|X - XWH\|_F^2 + \alpha\,\mathrm{Tr}\big(W^\top X^\top L X W\big).$$

At the $k$-th iteration, we perform the following updates:

$$W^{k+1} = \arg\min_{W\ge 0}\ \big\langle\nabla_W f(\hat{W}^k, H^k),\,W - \hat{W}^k\big\rangle + \frac{L_k}{2}\,\|W - \hat{W}^k\|_F^2 + \beta\,\|W\|_{2,1}, \qquad (19a)$$

$$H^{k+1} = \arg\min_{H}\ f(W^{k+1}, H), \qquad (19b)$$

where we take $L_k$ as a Lipschitz constant of $\nabla_W f(\cdot, H^k)$ and

$$\hat{W}^k = W^k + \omega_k\big(W^k - W^{k-1}\big)$$

is an extrapolated point with weight $\omega_k \ge 0$.

Note that the $H$-subproblem (19b) simply reduces to a linear equation and has the closed-form solution:

$$H^{k+1} = \big(XW^{k+1}\big)^{\dagger}X. \qquad (21)$$
If $H$ is restricted to be nonnegative, (19b) does not, in general, exhibit a closed-form solution. In this case, one can update $H$ in the same manner as $W$, i.e., by a block proximal-linearization update. In the following, we discuss in detail the parameter settings and how to solve the $W$-subproblem (19a).

4.2.1 Parameter settings

By direct computation, it is not difficult to obtain

$$\nabla_W f(W,H) = 2\,X^\top\big(XWH - X\big)H^\top + 2\alpha\,X^\top L X W. \qquad (22)$$

For any $W_1$ and $W_2$, we have

$$\big\|\nabla_W f(W_1,H) - \nabla_W f(W_2,H)\big\|_F \ \le\ 2\big(\|X^\top X\|_2\,\|HH^\top\|_2 + \alpha\,\|X^\top L X\|_2\big)\,\|W_1 - W_2\|_F,$$

where $\|\cdot\|_2$ denotes the spectral norm and equals the largest singular value of a matrix; the inequality follows from the triangle inequality and the fact $\|AB\|_F \le \|A\|_2\,\|B\|_F$ for any matrices $A$ and $B$ of appropriate sizes. Hence, the factor on the right-hand side is a Lipschitz constant of $\nabla_W f(\cdot, H)$ with respect to $W$, and in (19a), we set

$$L_k = 2\big(\|X^\top X\|_2\,\|H^k(H^k)^\top\|_2 + \alpha\,\|X^\top L X\|_2\big). \qquad (23)$$

As suggested in xu2013block (), we set the extrapolation weight as

$$\omega_k = \min\Big(\hat{\omega}_k,\ \delta\sqrt{L_{k-1}/L_k}\Big), \qquad (24)$$

where $\delta < 1$ is predetermined and $\hat{\omega}_k = \frac{t_{k-1}-1}{t_k}$ with

$$t_0 = 1,\qquad t_k = \frac{1}{2}\Big(1 + \sqrt{1 + 4\,t_{k-1}^2}\Big).$$
This weight sequence has been used to accelerate the proximal gradient method for convex optimization (cf. beck2009fast ()). It is demonstrated in xu2014ecyclic (); xu2015apg-ntd () that the extrapolation weight in (24) can significantly accelerate BCU for nonconvex problems.
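The weight sequence of beck2009fast () used here can be sketched as follows; the cap involving the ratio of successive Lipschitz constants in (24) is omitted:

```python
import math

def extrapolation_weights(num_iters):
    """t_0 = 1, t_k = (1 + sqrt(1 + 4 t_{k-1}^2)) / 2, omega_k = (t_{k-1} - 1) / t_k."""
    t_prev, weights = 1.0, []
    for _ in range(num_iters):
        t = (1.0 + math.sqrt(1.0 + 4.0 * t_prev ** 2)) / 2.0
        weights.append((t_prev - 1.0) / t)    # extrapolation weight for this iteration
        t_prev = t
    return weights

w = extrapolation_weights(5)
assert w[0] == 0.0                            # the first step uses no extrapolation
assert all(b > a for a, b in zip(w, w[1:]))   # the weights grow toward 1
```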

  for $i = 1$ to $d$ do
     Let $u$ be the $i$-th row of $\tilde{W}$ and $I$ the index set of positive components of $u$
     Set $w$ to a zero vector
     if $\|u_I\|_2 > \beta/L$ then
        Set $w_I = \big(1 - \frac{\beta}{L\,\|u_I\|_2}\big)u_I$
     end if
     Set the $i$-th row of $W$ to $w$
  end for
Algorithm 2 Proximal operator for nonnegative group Lasso: Prox-NGL($\tilde{W}, \beta, L$)
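Algorithm 2 amounts to the following row-wise operation (a sketch; the function name is our own):

```python
import numpy as np

def prox_ngl(W, beta, L):
    """Row-wise proximal operator of the nonnegative group Lasso: negative
    entries are zeroed, and each row's positive part is group soft-thresholded
    with threshold beta / L."""
    out = np.zeros_like(W)
    thresh = beta / L
    for i, row in enumerate(W):
        pos = np.maximum(row, 0.0)        # keep only positive components
        nrm = np.linalg.norm(pos)
        if nrm > thresh:                  # otherwise the whole row stays zero
            out[i] = (1.0 - thresh / nrm) * pos
    return out

W = np.array([[3.0, -4.0],    # positive part (3, 0), norm 3: shrunk
              [0.1,  0.1]])   # positive norm ~0.14, below threshold: zeroed
P = prox_ngl(W, beta=1.0, L=1.0)
assert np.allclose(P[0], [2.0, 0.0])      # scaled by 1 - 1/3
assert np.allclose(P[1], [0.0, 0.0])
```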

4.2.2 Solution of the $W$-subproblem

Note that (19a) can be equivalently written as

$$\min_{W\ge 0}\ \frac{L_k}{2}\,\Big\|W - \Big(\hat{W}^k - \frac{1}{L_k}\nabla_W f(\hat{W}^k, H^k)\Big)\Big\|_F^2 + \beta\,\|W\|_{2,1},$$

which can be decomposed into $d$ smaller independent problems, each one involving one row of $W$ and coming in the form

$$\min_{w\ge 0}\ \frac{L}{2}\,\|w - u\|_2^2 + \beta\,\|w\|_2. \qquad (25)$$
We show that (25) has a closed-form solution and thus (19a) can be solved explicitly.

Theorem 1.

Given $u$, let $I$ be the index set of positive components of $u$. Then the solution $w^*$ of (25) is given as follows:

  1. For any $i\notin I$, $w^*_i = 0$;

  2. If $\|u_I\|_2 \le \beta/L$, then $w^* = 0$; otherwise, $w^*_I = \big(1 - \frac{\beta}{L\,\|u_I\|_2}\big)u_I$.

Proof.

For $i\notin I$, we must have $w^*_i = 0$, because if $w_i > 0$, setting that component to zero and keeping all others unchanged simultaneously decreases $\|w - u\|_2^2$ and $\|w\|_2$. Hence, we can reduce (25) to the following form

$$\min_{w_I\ge 0}\ \frac{L}{2}\,\|w_I - u_I\|_2^2 + \beta\,\|w_I\|_2. \qquad (26)$$

Without the nonnegativity constraint on $w_I$, the minimizer of (26) is given by item 2 of Theorem 1 (for example, see parikh2013proximal ()). Note that this minimizer is nonnegative. Hence, it also solves (26), and this completes the proof. ∎

The above proof gives a way to find the solution of (25). Using this method, we can explicitly form the solution of (19a) by the subroutine Prox-NGL in Algorithm 2, where $\tilde{W} = \hat{W}^k - \nabla_W f(\hat{W}^k, H^k)/L_k$, $\beta$ and $L = L_k$ are the inputs, and $W^{k+1}$ is the output. Putting the above discussion together, we obtain the pseudocode in Algorithm 3 for solving (11).

1:  Input: Data matrix $X$, the number of features to be selected, and parameters $\alpha$, $\beta$.
2:  Output: Index set of selected features.
3:  Initialize $W^{-1} = W^0$ and $H^0$; choose a positive number $\delta < 1$; set $t_0 = 1$ and $k = 0$.
4:  while Not convergent do
5:     Compute $L_k$ and $\omega_k$ according to (23) and (24), respectively.
6:     Let $\hat{W}^k = W^k + \omega_k\big(W^k - W^{k-1}\big)$.
7:     Update $W^{k+1} = \text{Prox-NGL}\big(\hat{W}^k - \nabla_W f(\hat{W}^k, H^k)/L_k,\ \beta,\ L_k\big)$.
8:     Update $H^{k+1}$ by (21).
9:     if the objective value of (11) increases then
10:        Set $\hat{W}^k = W^k$ and redo steps 7 and 8.
11:     else
12:        Let $k \leftarrow k + 1$.
13:     end if
14:  end while
15:  Normalize each column of $W^k$.
16:  Sort the $\ell_2$-norms of the rows of $W^k$ and select the features corresponding to the largest ones.
Algorithm 3 Global and Local Structure Preserving Sparse Subspace Learning (GLoSS)
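A minimal end-to-end sketch of the block updates of Algorithm 3, with extrapolation and the restart test omitted for brevity; all names and default values are illustrative:

```python
import numpy as np

def gloss_sketch(X, L_graph, c, alpha, beta, iters=30, seed=0):
    """Alternating scheme for the relaxed model
    min_{W>=0, H} ||X - XWH||_F^2 + alpha Tr(W^T X^T L X W) + beta ||W||_{2,1}."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    W = np.abs(rng.standard_normal((d, c)))
    XtX = X.T @ X
    XtLX = X.T @ L_graph @ X
    for _ in range(iters):
        H = np.linalg.pinv(X @ W) @ X                     # closed-form H-update, as in (21)
        HHt = H @ H.T
        # Lipschitz constant of the W-gradient for the current H, as in (23).
        Lip = 2.0 * (np.linalg.norm(XtX, 2) * np.linalg.norm(HHt, 2)
                     + alpha * np.linalg.norm(XtLX, 2)) + 1e-12
        grad = 2.0 * (XtX @ W @ HHt - XtX @ H.T) + 2.0 * alpha * XtLX @ W
        V = W - grad / Lip                                # forward gradient step
        # Prox-NGL: zero out negatives, then group soft-threshold each row.
        P = np.maximum(V, 0.0)
        nrm = np.linalg.norm(P, axis=1, keepdims=True)
        W = np.maximum(1.0 - (beta / Lip) / np.maximum(nrm, 1e-12), 0.0) * P
    H = np.linalg.pinv(X @ W) @ X
    return W, H

rng = np.random.default_rng(1)
X = rng.standard_normal((12, 6))
W, H = gloss_sketch(X, np.zeros((12, 12)), c=3, alpha=0.1, beta=0.1)
```

Feature selection then proceeds by ranking the rows of the returned `W` by their norms.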

4.3 Complexity Analysis

In this section, we count the flops per iteration of Algorithm 3. Our analysis addresses the general case; namely, we do not assume any structure in $X$. Note that if $X$ is sparse, the computational complexity will be lower. The main cost of our algorithm is in the updates of $W$ and $H$, i.e., lines 7 and 8 of Algorithm 3. For updating $W$, the major cost is in the computation of the partial gradient (22). Let $c$ be the dimension of the subspace, and assume $X^\top X$ and $X^\top L X$ are precomputed once. Then the partial gradient can be obtained by first computing $H^k(H^k)^\top$ and then the products $(X^\top X)\hat{W}^k\big(H^k(H^k)^\top\big)$, $(X^\top X)(H^k)^\top$ and $(X^\top L X)\hat{W}^k$, which takes $O(d^2c)$ flops. To update $H$ by (21), we can use the same trick and obtain $H^{k+1}$ in $O(d^2c)$ flops. With these quantities at hand, the objective value required in line 9 can also be obtained in $O(d^2c)$ flops. Therefore, the per-iteration computational complexity is of order $O(d^2c)$, which is independent of the number of samples; since $c \ll d$, the algorithm is scalable to the data size.

4.4 Convergence analysis

In this section, we analyze the convergence of Algorithm GLoSS. Let us denote

$$\iota_+(W) = \begin{cases} 0, & W \ge 0,\\ +\infty, & \text{otherwise},\end{cases}$$

the indicator function of the nonnegative quadrant. Also, let us denote

$$F(W,H) = f(W,H) + \beta\,\|W\|_{2,1} + \iota_+(W).$$

Then the problem (11) is equivalent to

$$\min_{W,\,H}\ F(W,H),$$

and the first-order optimality condition is $0\in\partial F(W,H)$. Here, $\partial F$ denotes the subdifferential of $F$ (see rockafellar2009variational () for example), which equals the gradient if $F$ is differentiable and is a set otherwise. By Proposition 2.1 of attouch2010proximal (), $0\in\partial F(W,H)$ is equivalent to

$$0 \in \nabla_W f(W,H) + \beta\,\partial\|W\|_{2,1} + \partial\iota_+(W),\qquad \nabla_H f(W,H) = 0. \qquad (27)$$

We call $(W,H)$ a critical point of (11) if it satisfies (27).

In the following, we first establish a subsequence convergence result, stating that any limit point of the iterates is a critical point. Assuming the existence of a full-rank limit point, we further show that the whole iterate sequence converges to a critical point. The proofs of both results involve many technical details and are thus deferred to the appendix for the reader’s convenience.

Theorem 2 (Iterate subsequence convergence).

Let $\{(W^k, H^k)\}$ be the sequence generated by Algorithm 3. Any finite limit point of $\{(W^k, H^k)\}$ is a critical point of (11).

Due to the coercivity of $F$ with respect to $W$ and the nonincreasing monotonicity of the objective values, $\{W^k\}$ must be bounded. However, in general, we cannot guarantee the boundedness of $\{H^k\}$ because $XW^k$ may be rank-degenerate (i.e., not of full rank). As shown in the next theorem, if we have rank-nondegeneracy of $XW^k$ in the limit, a stronger convergence result can be established. The nondegeneracy assumption is similar to that assumed in (GolubVanLoan1996 (), section 7.3.2) and convergence-HOOI () for (higher-order) orthogonal iteration methods.

Theorem 3 (Whole iterate sequence convergence).

Let $\{(W^k, H^k)\}$ be the sequence generated by Algorithm 3. If there is a finite limit point $(\bar{W}, \bar{H})$ such that $X\bar{W}$ is of full rank, then the whole sequence must converge to $(\bar{W}, \bar{H})$.

5 Experimental Studies

In this section, the proposed methods GLPSL (Algorithm 1) and GLoSS (Algorithm 3) are tested on six benchmark datasets and compared to a widely used subspace learning method, PCA, and seven state-of-the-art unsupervised feature selection methods.

5.1 Datasets

The six benchmark datasets we use come from different areas; their characteristics are listed in Table 2. Yale64, WarpPIE, Orl64 and Orlraws are face image datasets, each sample representing a face image. Usps is a handwritten digit dataset that contains 9,298 handwritten digit images. Isolet is a speech dataset containing recordings of 30 speakers, each of whom spoke every letter of the alphabet twice. All datasets are normalized such that the vector corresponding to each feature has unit $\ell_2$-norm.
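This per-feature normalization can be done as follows (assuming samples are stored as rows):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((50, 8)) * 10.0            # rows are samples, columns are features

X = X / np.linalg.norm(X, axis=0, keepdims=True)   # each feature vector gets unit l2 norm
assert np.allclose(np.linalg.norm(X, axis=0), 1.0)
```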

Dataset Instances Features Classes Type of Data
Yale64 165 4096 15 Face image
WarpPIE 210 2420 10 Face image
Orl64 400 4096 50 Face image
Orlraws 100 10304 10 Face image
Usps 9298 256 10 Digit image
Isolet 1560 617 26 Speech signal
Table 2: Details of the datasets

5.2 Experimental Settings

Our algorithms are compared to the following methods:

  1. PCA: Principal component analysis (PCA) duda2012pattern () is an unsupervised subspace learning method that maximizes global structure information of the data in the principal space.

  2. LS: The Laplacian score (LS) method he2005laplacian () uses the Laplacian score to evaluate the effectiveness of the features. It individually selects the features that retain the samples’ local similarity specified by a similarity matrix.

  3. MCFS: Multi-cluster feature selection (MCFS) cai2010unsupervised () is a two-step method that formulates feature selection as a spectral regression problem with an $\ell_1$-norm regularization term.

  4. UDFS: The unsupervised discriminative feature selection (UDFS) method yang2011l2 () combines the data's local discriminative information and an $\ell_{2,1}$-norm sparsity constraint in a single model to select the features with the highest local discriminative power.

  5. RSR: The regularized self-representation (RSR) feature selection method zhu2015unsupervised () uses the $\ell_{2,1}$-norm to measure the fitting error and an $\ell_{2,1}$-norm regularizer to promote sparsity.

  6. NDFS: The nonnegative discriminative feature selection (NDFS) method li2012unsupervised () utilizes nonnegative spectral analysis with an $\ell_{2,1}$-norm regularization term.

  7. GLSPFS: The global and local structure preservation for feature selection (GLSPFS) method liu2014global () uses both the global and local similarity structures to model the feature selection problem.

  8. MFFS: The matrix factorization feature selection (MFFS) method wang2015subspace () is similar to ours. It performs subspace learning and feature selection simultaneously by enforcing a nonnegative orthogonal transformation matrix.


There are several parameters to set in advance. The dimension of the subspace is fixed to for the GLoSS method, and the number of selected features is taken from for all datasets. We use the LPP method in Section 3.2.2 to preserve the local structure of the data in GLSPFS, NDFS, GLPSL and GLoSS because both MCFS and LS use the LPP Laplacian graph, and we set the number of nearest neighbors to for LS, MCFS, UDFS, GLSPFS, NDFS, GLPSL and GLoSS. The parameter is required by LS, MCFS, GLSPFS, NDFS, GLPSL and GLoSS to build a similarity matrix, and by UDFS to build the local total-scatter and between-class scatter matrices. For simplicity, the parameter of the local structure preserving term is fixed to in GLSPFS and GLoSS for all tests in Sections 5.3.1 and 5.3.2; we study the sensitivity of GLoSS to in Section 5.3.3. The sparsity parameter for UDFS, RSR, GLSPFS, NDFS and GLoSS is tuned over . After the feature selection process, we use the k-means algorithm to cluster the samples using the selected features. The number of iterations of UDFS, GLSPFS, NDFS, MFFS, and GLoSS is set to 30. Because the performance of k-means depends on its initialization, we run it 20 times with different random starting points and report the average value.
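As an illustration of the local structure preserving setup, a k-nearest-neighbor similarity matrix with a heat kernel, as commonly used in LPP-style graph construction, can be built as follows (a minimal NumPy sketch; the values of `k` and `sigma` are hypothetical and not the paper's settings):

```python
import numpy as np

def knn_heat_similarity(X, k=5, sigma=1.0):
    """Build a symmetric k-NN similarity matrix with a heat kernel.

    X : (n_samples, n_features) data matrix. Returns (S, L) where
    S[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2)) if x_j is among the k nearest
    neighbors of x_i (or vice versa), 0 otherwise, and L = D - S is the
    graph Laplacian with D = diag(row sums of S).
    """
    n = X.shape[0]
    # pairwise squared Euclidean distances
    sq = np.sum(X**2, axis=1)
    D = sq[:, None] + sq[None, :] - 2 * X @ X.T
    np.maximum(D, 0, out=D)  # clip tiny negative values from round-off
    S = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D[i])[1:k + 1]  # skip the point itself
        S[i, idx] = np.exp(-D[i, idx] / (2 * sigma**2))
    S = np.maximum(S, S.T)               # symmetrize the graph
    L = np.diag(S.sum(axis=1)) - S       # graph Laplacian
    return S, L
```

A term of the form trace(Yᵀ L Y) built from this Laplacian is what penalizes embeddings that separate nearby samples, which is how LPP-style methods preserve local structure.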

The compared algorithms are evaluated based on their clustering results. For each dataset, the number of clusters is set to the number of classes. To measure the clustering performance, we use clustering accuracy (ACC) and normalized mutual information (NMI), defined below. Let $\hat{y}_i$ and $y_i$ be the predicted and true labels of the $i$-th sample, respectively. The ACC is computed as

$$\mathrm{ACC} = \frac{1}{n}\sum_{i=1}^{n} \delta\big(y_i, \mathrm{map}(\hat{y}_i)\big),$$

where $\delta(x,y)=1$ if $x=y$ and $\delta(x,y)=0$ otherwise, and $\mathrm{map}(\cdot)$ is a permutation mapping that maps each predicted label to the equivalent true label. We use the Kuhn-Munkres algorithm laszlo2009matching () to find such a mapping. A high ACC value indicates that the predicted labels are close to the true ones; thus the higher the ACC, the better the clustering result. The NMI measures the similarity of two clustering results. For two label vectors $\mathbf{p}$ and $\mathbf{q}$, it is defined as

$$\mathrm{NMI}(\mathbf{p},\mathbf{q}) = \frac{I(\mathbf{p};\mathbf{q})}{\sqrt{H(\mathbf{p})\,H(\mathbf{q})}},$$

where $I(\mathbf{p};\mathbf{q})$ is the mutual information of $\mathbf{p}$ and $\mathbf{q}$, and $H(\mathbf{p})$ and $H(\mathbf{q})$ are the entropies of $\mathbf{p}$ and $\mathbf{q}$ gray2011entropy (). In our experiments, $\mathbf{p}$ contains the clustering labels obtained using the selected features and $\mathbf{q}$ the true labels of the samples in the dataset. A higher value of $\mathrm{NMI}(\mathbf{p},\mathbf{q})$ implies that $\mathbf{p}$ better predicts $\mathbf{q}$.
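The two evaluation metrics described above can be computed as in the following NumPy sketch. For simplicity we find the best label permutation by brute force rather than by the Kuhn-Munkres algorithm (equivalent for small numbers of classes), and we use the square-root normalization for NMI; both choices are ours, not necessarily the paper's exact implementation:

```python
import itertools
import numpy as np

def clustering_acc(y_true, y_pred):
    """ACC: fraction of samples correctly labeled under the best
    one-to-one relabeling of predicted clusters to true classes."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    # C[i, j] = #samples with predicted label classes[i] and true label classes[j]
    C = np.array([[np.sum((y_pred == cp) & (y_true == ct)) for ct in classes]
                  for cp in classes])
    k = len(classes)
    # brute force over permutations; a real implementation would use
    # the Kuhn-Munkres (Hungarian) algorithm
    best = max(sum(C[i, p[i]] for i in range(k))
               for p in itertools.permutations(range(k)))
    return best / y_true.size

def nmi(y_true, y_pred):
    """NMI with square-root normalization: I(p; q) / sqrt(H(p) H(q))."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    n = y_true.size
    _, ti = np.unique(y_true, return_inverse=True)
    _, pi = np.unique(y_pred, return_inverse=True)
    cont = np.zeros((ti.max() + 1, pi.max() + 1))
    for a, b in zip(ti, pi):          # contingency (joint count) table
        cont[a, b] += 1
    pxy = cont / n                    # joint distribution
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
    h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))  # entropy
    denom = np.sqrt(h(px.ravel()) * h(py.ravel()))
    return mi / denom if denom > 0 else 0.0
```

Note that both metrics are invariant to a relabeling of the clusters: a prediction that is a pure permutation of the true labels scores 1.0 on both ACC and NMI.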

5.3 Experimental Results

In this subsection, we report the results of all tested methods. In addition, we study the sensitivity of GLoSS to the parameters present in (11).

5.3.1 Performance comparison

In Tables 3 and 4, we present the ACC and NMI values produced by the different methods. For each method, we vary the number of selected features over and report the best result. From the tables, we see that GLoSS performs best among all compared methods, except on Yale64 and WarpPIE in Table 3 and on Yale64 and Orl64 in Table 4, where it is second best. In addition, the greedy method GLPSL performs reasonably well in many cases but can perform poorly in others, such as on Usps in both tables; this justifies our decision to relax (2) and develop the GLoSS method. Finally, GLoSS outperforms MFFS on all datasets, which is possibly due to the local structure preserving term used in GLoSS.

Dataset    Isolet         Yale64         Orl64          WarpPIE        Usps           Orlraws
PCA        47.90 ± 2.97   32.79 ± 3.22   33.75 ± 1.58   39.95 ± 4.37   59.90 ± 3.89   48.20 ± 3.68
LS         55.14 ± 3.15   41.25 ± 3.28   41.75 ± 1.71   32.33 ± 1.03   59.79 ± 2.72   66.12 ± 6.82
MCFS       54.95 ± 3.28   44.88 ± 3.72   50.75 ± 1.25   50.38 ± 2.25   66.55 ± 3.11   77.43 ± 7.15
UDFS       29.60 ± 2.73   38.21 ± 3.02   40.78 ± 1.03   55.57 ± 2.92   50.59 ± 1.97   65.32 ± 6.18
RSR        49.88 ± 3.75   45.48 ± 3.34   53.24 ± 1.83   37.52 ± 2.23   62.54 ± 2.34   72.54 ± 6.52
NDFS       54.33 ± 3.73   45.79 ± 3.81   49.85 ± 1.69   34.10 ± 3.81   63.32 ± 3.35   67.80 ± 6.48
GLSPFS     54.09 ± 3.22   50.84 ± 5.34   53.63 ± 2.62   45.94 ± 2.38   64.65 ± 3.69   78.00 ± 7.47
MFFS       55.39 ± 3.32   49.09 ± 3.64   50.19 ± 1.64   36.57 ± 2.32   63.30 ± 3.36   73.55 ± 7.68
GLPSL      49.05 ± 3.02   53.97 ± 3.45   41.72 ± 1.05   47.52 ± 1.87   51.91 ± 2.18   72.16 ± 7.03
GLoSS      62.45 ± 3.58   53.45 ± 3.88   54.27 ± 1.87   52.76 ± 2.12   67.24 ± 3.27   79.37 ± 7.34
Table 3: Clustering results (ACC% ± std%) of different feature selection algorithms on the six datasets. The best result in each column is highlighted in bold and the second best is underlined. (The higher the ACC, the better the result.)

5.3.2 Comparison with clustering using all features

To illustrate the effect of feature selection on clustering, we compare the clustering results obtained using all features and using the features selected by the different methods. Figure 1 plots the ACC values and Figure 2 the NMI values with respect to the number of selected features; the baseline corresponds to the results using all features. From the figures, we see that in most cases the proposed GLoSS method gives the best results and that, by selecting reasonably many features (but far fewer than the total number of features), it can give comparable or even better clustering results than using all features. Hence, feature selection eliminates redundancy in the data for clustering purposes. In addition, note that using fewer features reduces the running time of the k-means method, so feature selection can improve both clustering accuracy and efficiency.

Dataset    Isolet         Yale64         Orl64          WarpPIE        Usps           Orlraws
PCA        61.48 ± 1.20   41.43 ± 2.72   58.57 ± 0.86   42.83 ± 3.82   56.08 ± 1.54   57.30 ± 3.93
LS         69.73 ± 1.43   46.88 ± 2.07   62.61 ± 1.53   30.06 ± 2.89   56.62 ± 0.95   73.38 ± 3.12
MCFS       69.82 ± 1.37   53.70 ± 1.58   69.33 ± 1.62   54.37 ± 4.95   61.01 ± 0.92   83.91 ± 3.53
UDFS       44.98 ± 1.02   47.40 ± 1.64   62.38 ± 1.41   54.55 ± 4.38   41.31 ± 1.21   68.78 ± 3.45
RSR        63.47 ± 1.10   56.08 ± 1.43   72.33 ± 1.75   41.81 ± 3.75   55.32 ± 1.52   83.96 ± 4.35
NDFS       70.05 ± 2.00   54.67 ± 2.35   70.42 ± 1.14   28.16 ± 4.45   58.78 ± 0.99   78.81 ± 3.99
GLSPFS     68.80 ± 1.07   56.18 ± 3.40   73.05 ± 1.52   52.23 ± 4.42   60.33 ± 1.65   82.99 ± 4.73
MFFS       72.64 ± 1.73   56.17 ± 4.47   70.65 ± 1.25   40.95 ± 3.39   59.11 ± 0.76   81.09 ± 4.12
GLPSL      65.41 ± 1.23   61.39 ± 1.72   64.76 ± 1.50   53.33 ± 3.89   40.98 ± 0.87   72.97 ± 3.37
GLoSS      74.28 ± 1.25   58.87 ± 1.65   73.02 ± 2.02   55.76 ± 4.56   61.29 ± 1.25   85.65 ± 4.15
Table 4: Clustering results (NMI% ± std%) of different feature selection algorithms on the six datasets. The best result in each column is highlighted in bold and the second best is underlined. (The higher the NMI, the better the result.)
Figure 1: The clustering accuracy (ACC) of using all features and selected features by different methods.
Figure 2: The normalized mutual information (NMI) of using all features and selected features by different methods.

5.3.3 Sensitivity of parameters

To further examine the performance of the proposed GLoSS method, we study its sensitivity with respect to the parameters and in (11). First, we fix and vary and . Figures 3 and 4 plot the ACC and NMI values given by GLoSS for different values of and . From the figures, we see that, except for Isolet, GLoSS performs stably well over the different combinations of and , so users can choose these parameters from a large interval and still obtain satisfactory clustering performance. Second, we fix and vary and . Figures 5 and 6 plot the ACC and NMI values given by GLoSS for different values of and . Again, GLoSS performs stably well except on the Isolet dataset.

Figure 3: Clustering accuracy (ACC) produced by GLoSS with different and .
Figure 4: Normalized mutual information (NMI) produced by GLoSS with different and .
Figure 5: The clustering accuracy (ACC) given by GLoSS with different and .
Figure 6: The normalized mutual information (NMI) given by GLoSS with different and .

6 Conclusions

We have proposed a new unsupervised joint model of subspace learning and feature selection. The model preserves both the global and local structure of the data, and it is derived by relaxing an existing combinatorial model with 0-1 variables. A greedy algorithm has been developed, for the first time, to solve the combinatorial problem, and an accelerated block coordinate descent (BCD) method was applied to solve the relaxed continuous problem. We have established the whole iterate sequence convergence of the BCD method. Extensive numerical tests on real-world data demonstrate that the proposed method outperforms several state-of-the-art unsupervised feature selection methods.


The authors would like to thank two anonymous referees for their careful reviews and constructive comments. Y. Xu is partially supported by AFOSR. W. Pedrycz is partially supported by NSERC and CRC.

Appendix A Proof of Theorem 2

For simplicity, we assume , i.e., there is no extrapolation. The case of is more complicated but can be treated similarly, with more care taken to handle the details; see xu2013block () for an example.

The following result is well known (cf. Lemma 2.1 of xu2013block ())




By Lemma 3.1 of xu2014ihosvd (), we have




where contains the leading left singular vectors of and is the rank of .

Note that

Hence, summing (31) and (33) over and noting the nonnegativity of , we obtain

and thus