A Dictionary-Based Generalization of Robust PCA Part II: Applications to Hyperspectral Demixing


Sirisha Rambhatla, Xingguo Li, Jineng Ren and Jarvis Haupt
Department of Electrical and Computer Engineering,
University of Minnesota – Twin Cities, Minneapolis, MN-55455
{rambh002, lixx1661, renxx282, jdhaupt}@umn.edu.
This work was supported by the DARPA YFA, Grant N66001-14-1-4047. Preliminary versions appeared in the proceedings of the 2016 IEEE Global Conference on Signal & Information Processing (GlobalSIP), the 2017 Asilomar Conference on Signals, Systems, & Computers, and the 2018 IEEE International Conference on Acoustics, Speech & Signal Processing (ICASSP). S. Rambhatla, J. Ren, and J. Haupt are with the Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA (e-mail: {rambh002, renxx282, jdhaupt}@umn.edu). X. Li is with the Computer Science Department, Princeton University, Princeton, NJ 08540, USA (e-mail: xingguol@cs.princeton.edu). Different authors contributed to different phases of the work.
Abstract

We consider the task of localizing targets of interest in a hyperspectral (HS) image based on their spectral signature(s), by posing the problem as two distinct convex demixing tasks. With applications ranging from remote sensing to surveillance, this target detection task leverages the fact that each material/object possesses a characteristic spectral response that depends upon its composition. However, since the signatures of different materials are often correlated, matched filtering-based approaches may not apply here. To this end, we model a HS image as a superposition of a low-rank component and a dictionary-sparse component, wherein the dictionary consists of the a priori known characteristic spectral responses of the target we wish to localize, and develop techniques for two different sparsity structures resulting from different model assumptions. We also present the corresponding recovery guarantees, leveraging our recent theoretical results from a companion paper. Finally, we analyze the performance of the proposed approach via experimental evaluations on real HS datasets for a classification task, and compare its performance with related techniques.

Hyperspectral imaging, Robust-PCA, dictionary sparse, target localization, and remote sensing.

I Introduction

Hyperspectral (HS) imaging is an imaging modality which senses the intensities of the reflected electromagnetic waves (responses) corresponding to different wavelengths of the electromagnetic spectrum, often invisible to the human eye. As the spectral response associated with an object/material depends on its composition, HS imaging is very useful for identifying target objects/materials via their characteristic spectra or signature responses, also referred to as endmembers in the literature. Typical applications of HS imaging range from monitoring agricultural use of land, catchment areas of rivers and water bodies, food processing, and surveillance, to detecting various minerals, chemicals, and even the presence of life-sustaining compounds on distant planets; see [1, 2] and references therein for details. However, these spectral signatures are often highly correlated, making it difficult to detect regions of interest based on these endmembers. In this work, we present two techniques to localize target materials/objects in a given HS image based on some structural assumptions on the data, using the a priori known signatures of the target of interest.

The primary property that enables us to localize a target is the approximate low-rankness of HS images when represented as a matrix, owing to the fact that a particular scene is composed of only a limited number of types of objects/materials [3]. For instance, while imaging an agricultural area, one would expect to record responses from materials like biomass, farm vehicles, roads, houses, water bodies, and so on. Moreover, the spectra of complex materials can be assumed to be a linear mixture of those of the constituent materials [3, 4], i.e., the received HS responses can be viewed as being generated by a linear mixture model [5]. For the target localization task at hand, this approximate low-rank structure is used to decompose a given HS image into a low-rank part and a component that is sparse in a known dictionary – a dictionary-sparse part – wherein the dictionary is composed of the spectral signatures of the target of interest. We begin by formalizing the specific model of interest in the next section.

I-A Model

A HS sensor records the response of a region, which corresponds to a pixel in the HS image as shown in Fig. 1, to different frequencies of the electromagnetic spectrum. As a result, each HS image can be viewed as a data-cube formed by stacking the two-dimensional images acquired at the different measurement channels, as shown in Fig. 1. Therefore, each volumetric element, or voxel, of a HS image is a vector whose length equals the number of measurement channels, and it represents the response of the imaged material across those channels. Here, the length of a voxel is determined by the number of channels or frequency bands across which measurements of the reflectances are made.

Fig. 1: The HS image data-cube corresponding to the Indian Pines dataset.

Formally, let the data matrix M be formed by unfolding the HS image, such that each column of M corresponds to a voxel of the data-cube. We then model M as arising from a superposition of a low-rank component L with rank r, and a dictionary-sparse component, expressed as D S, i.e.,

M = L + D S.     (1)

Here, D represents an a priori known dictionary composed of appropriately normalized characteristic responses of the material/object (or of its constituents) we wish to localize, and S refers to the sparse coefficient matrix (also referred to as abundances in the literature). Note that D can also be constructed by learning a dictionary based on the known spectral signatures of a target; see [6, 7, 8, 9].
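To make the model in (1) concrete, the following minimal sketch generates synthetic data with this structure. All dimensions, variable names, and the sparsity level here are illustrative assumptions and are not taken from the datasets considered later.

```python
import numpy as np

# Minimal synthetic instance of the model in (1): M = L + D S, with a
# low-rank background L and a dictionary-sparse target component D S.
rng = np.random.default_rng(0)
f, n = 200, 1000      # number of spectral bands, number of voxels (illustrative)
r, d, s = 5, 10, 50   # rank of L, dictionary size, non-zeros in S (illustrative)

L = rng.standard_normal((f, r)) @ rng.standard_normal((r, n))   # low-rank part
D = rng.standard_normal((f, d))
D /= np.linalg.norm(D, axis=0)                                  # unit-norm columns

S = np.zeros((d, n))
idx = rng.choice(d * n, size=s, replace=False)                  # entry-wise sparsity
S.flat[idx] = rng.standard_normal(s)

M = L + D @ S                                                   # observed data matrix
```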

I-B Our Contributions

In this work, we present two techniques (the code is made available at github.com/srambhatla/Dictionary-based-Robust-PCA) for target detection in a HS image, depending upon different sparsity assumptions on the coefficient matrix S, by modeling the data as shown in (1). Building on the theoretical results of [10, 11, 12], our techniques operate by forming the dictionary using the a priori known spectral signatures of the target of interest, and leveraging the approximate low-rank structure of the data matrix [13]. Here, the dictionary can be formed from the a priori known signatures directly, or by learning an appropriate dictionary based on target data; see [6, 7, 8, 9].

Fig. 2: Correlated spectral signatures. The spectral signatures of even different materials are highly correlated. Shown here are spectral signatures of classes from the Indian Pines dataset [14]. Here, the shaded region shows the lower and upper ranges of reflectance values the signatures take.

We consider two types of sparsity structures for the coefficient matrix S, namely, a) global or entry-wise sparsity, wherein we let the matrix S have a small number of non-zero entries globally, and b) a column-wise sparse structure, wherein only a small number of columns of the matrix S have non-zero elements. The choice of a particular sparsity model depends on the properties of the dictionary matrix D. In particular, if the target signature admits a sparse representation in the dictionary, the entry-wise sparsity structure is preferred. This is likely to be the case when the dictionary is overcomplete (has more columns than rows), or fat, and also when the target spectral responses admit a sparse representation in the dictionary. On the other hand, the column-wise sparsity structure is amenable to cases where the representation can use all columns of the dictionary. This potentially arises when the dictionary is undercomplete (has fewer columns than rows), or thin. Note that, in the column-wise sparsity case, the non-zero columns need not be sparse themselves. The applicability of these two modalities is also exhibited in our experimental analysis; see Section V for further details. Further, we specialize the theoretical results of [12] to present the conditions under which such a demixing task will succeed under the two sparsity models discussed above; see also [10] and [11].

Next, we analyze the performance of the proposed techniques via extensive experimental evaluations on real-world demixing tasks over different datasets and dictionary choices, and compare the performance of the proposed techniques with related works. This demixing task is particularly challenging since the spectral signatures of even distinct classes are highly correlated with each other, as shown in Fig. 2. The shaded region there shows the upper and lower ranges of the reflectance values for different classes. For instance, in Fig. 2 we observe that the spectral signature of the “Stone-Steel” class is similar to that of the “Wheat” class. This correlation between the spectral signatures of different classes results in an approximate low-rank structure of the data, captured by the low-rank component L, while the dictionary-sparse component D S is used to identify the target of interest. We specifically show that such a decomposition successfully localizes the target despite the high correlation between spectral signatures of distinct classes.

Finally, it is worth noting that although we consider thin dictionaries (fewer columns than rows) for the purposes of this work, since this setting is more suitable for the current exposition, our theoretical results are also applicable to the fat case (more columns than rows); see [10], [11], and [12] for further details.

I-C Prior Art

The model shown in (1) is closely related to a number of well-known problems. To start, in the absence of the dictionary-sparse part D S, (1) reduces to the popular problem of principal component analysis (PCA) [15, 16]. The problem considered here also shares its structure with variants of PCA, such as robust PCA [17, 18] (with D set to the identity matrix), outlier pursuit [19] (where D is the identity and S is column-wise sparse), and others [20, 21, 22, 23, 24, 25, 26, 27, 28].

On the other hand, in the absence of the low-rank part L, the problem can be identified as that of sparse recovery [29, 30, 31, 32]. Subsequently, sparse recovery methods for the analysis of HS images have been explored in [33, 34, 35, 36]. In addition, in a recent work [37], the authors further impose a low-rank constraint on the coefficient matrix for the demixing task. Further, applications of compressive sampling have been explored in [38], while [5] analyzes the case where HS images are noisy and incomplete. The techniques discussed above focus on identifying all materials in a given HS image. However, for target localization tasks, it is of interest to identify only specific target(s) in a given HS image. As a result, there is a need for techniques which localize targets based on their a priori known spectral signatures.

The model described in (1) was introduced in [39] as a means to detect traffic anomalies in a network, wherein the authors focus on a case where the dictionary is overcomplete, i.e., fat, and has orthonormal rows. There, the coefficient matrix is assumed to possess a bounded number of nonzero elements per row and per column, as well as a bounded number of nonzero elements globally. In a recent work [10] and the accompanying theoretical work [12], we analyze the extension of [39] to include a case where the dictionary has more rows than columns, i.e., is thin, while removing the orthogonality constraint for both the thin and the fat dictionary cases, under suitable conditions on the model parameters. This case is particularly amenable to the target localization task at hand, since often we aim to localize targets based on a few a priori known spectral signatures. To this end, we focus our attention on the thin case, although a similar analysis applies for the fat case [10]; see also [12].

I-D Related Techniques

To study the properties of our techniques, we compare and contrast their performance with related works. First, as a sanity check, we compare the performance of the proposed techniques with matched filtering-based methods (detailed in Section V). In addition, we compare the performance of our techniques to other closely related methods based on the sparsity assumptions on the coefficient matrix S, as described below.

For entry-wise sparse structure: The first method we compare to is based on the observation that, in cases where the known dictionary D is thin, we can multiply (1) on the left by the pseudo-inverse of D, denoted D^†, in which case the model shown in (1) reduces to that of robust PCA, i.e.,

D^† M = D^† L + S,     (RPCA)

i.e., the transformed data D^† M is a superposition of a (potentially) low-rank part D^† L and the sparse matrix S. Therefore, in this case, we can recover the sparse matrix S by robust PCA [17, 18], and estimate the low-rank part L using the resulting estimate of S. Note that this approach is not applicable in the fat case, since the dictionary then has a non-trivial null space.

Although at first glance this seems like a reasonable technique, somewhat surprisingly, it does not succeed for all thin dictionaries. Specifically, in cases where the rank of L is greater than the number of dictionary elements, the pseudo-inversed component D^† L is no longer “low-rank.” In fact, since the notion of low-rankness is relative to the potential maximum rank of the component, D^† L can be close to full rank. As a result, the robust PCA model shown in (RPCA) is no longer applicable and the demixing task may not succeed; see our corresponding theoretical paper [12] for details.

Moreover, even in cases where (RPCA) succeeds (i.e., when the rank of L is smaller than the number of dictionary elements), our proposed one-shot procedure guarantees the recovery of the two components under some mild conditions, while the pseudo-inverse based procedure (RPCA) requires a two-step procedure – one step to recover the sparse coefficient matrix and another to recover the low-rank component – in addition to a non-trivial analysis of the interaction between the dictionary and the low-rank part L. This is also apparent from our experiments shown in Section V, which indicate that optimization based on the model in (1) is more robust than (RPCA) for the classification problem at hand across different choices of the dictionaries.
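The following short numerical sketch (with illustrative dimensions of our own choosing) demonstrates the failure mode discussed above: once the rank of the low-rank component exceeds the number of dictionary elements, the pseudo-inversed component becomes full rank in the transformed space.

```python
import numpy as np

# If rank(L) exceeds the number of dictionary elements d, then pinv(D) @ L
# is generically full rank (rank d) in the transformed d x n space, so it is
# no longer "low-rank" there and RPCA on the transformed data can fail.
rng = np.random.default_rng(1)
f, n, d, r = 200, 1000, 10, 15                  # note r > d (illustrative)
D = rng.standard_normal((f, d))                 # thin dictionary
L = rng.standard_normal((f, r)) @ rng.standard_normal((r, n))

L_transformed = np.linalg.pinv(D) @ L
print(np.linalg.matrix_rank(L), np.linalg.matrix_rank(L_transformed), d)
# Typically prints: 15 10 10, i.e., the transformed component is full rank.
```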

For column-wise sparse structure: The column-wise sparse structure of the matrix S results in a column-wise sparse structure of the dictionary-sparse component D S. As a result, the model at hand is similar to that studied in outlier pursuit (OP) [19]. Specifically, the OP technique is aimed at identifying the outlier columns in a given matrix. However, it fails in cases where the target of interest is not an outlier, as is the case for HS data. On the other hand, since the proposed technique uses the dictionary corresponding to the spectral signatures of the target of interest to guide the demixing procedure, it results in a spectral signature-driven technique for target localization. This distinction between the two procedures is also discussed in our corresponding theoretical work [12, Section V], and is exemplified by our experimental results shown in Section V.

Further, as in the entry-wise case, one can also envision a pseudo-inverse based procedure to identify the target of interest by running OP [19] on the pseudo-inversed data (referred to as the transformed-data OP in our discussion), i.e.,

D^† M = D^† L + S,     (OP)

where the transformed data D^† M is a superposition of the transformed component D^† L and the coefficient matrix S, with S admitting a column-wise sparse structure. However, this variant of OP does not succeed when the rank of the low-rank component is greater than the number of dictionary elements, as in the previous case; see our related theoretical work [12, Section V] for details.

The rest of the paper is organized as follows. We formulate the problem and introduce relevant theoretical quantities in Section II, followed by specializing the theoretical results for the current application in Section III. Next, in Section IV, we present the specifics of the algorithms for the two cases. In Section V, we describe the experimental set-up and demonstrate the applicability of the proposed approaches via extensive numerical simulations on real HS datasets for a classification task. Finally, we conclude this discussion in Section VI.

Notation: Given a matrix X, X_i denotes its i-th column and X_{ij} denotes the (i,j)-th element of X. We use ||X|| for the spectral norm, i.e., the maximum singular value of the matrix, ||X||_∞ for the maximum absolute entry, and ||X||_{∞,2} := max_i ||X e_i||_2 for the maximum column ℓ2-norm, where e_i denotes the canonical basis vector with 1 at the i-th location and 0 elsewhere. Further, we denote the ℓ2-norm of a vector x as ||x||_2. In addition, ||X||_*, ||X||_1, and ||X||_{1,2} refer to the nuclear norm, the entry-wise ℓ1-norm, and the ℓ1,2-norm (the sum of the ℓ2-norms of the columns) of the matrix, respectively, which serve as convex relaxations of rank, sparsity, and column-wise sparsity, respectively.

II Problem Formulation

In this section, we introduce the optimization problem of interest and different theoretical quantities pertinent to our analysis.

II-A Optimization problems

Our aim is to recover the low-rank component L and the sparse coefficient matrix S, given the dictionary D and the data M generated according to the model shown in (1). Here, the coefficient matrix S can either have an entry-wise sparse structure or a column-wise sparse structure. We now crystallize our model assumptions to formulate appropriate convex optimization problems for the two sparsity structures.

Specifically, depending upon the priors about the sparsity structure of S and the low-rank property of the component L, we aim to solve the following convex optimization problems, i.e.,

min_{L, S}  ||L||_* + λ_e ||S||_1   subject to   M = L + D S     (D-RPCA(E))

for the entry-wise sparsity case, and

min_{L, S}  ||L||_* + λ_c ||S||_{1,2}   subject to   M = L + D S     (D-RPCA(C))

for the column-wise sparse case, to recover L and S with regularization parameters λ_e and λ_c, respectively, given the data M and the dictionary D. Here, the a priori known dictionary D is assumed to be undercomplete (thin, i.e., with fewer columns than rows) for the application at hand. The analysis of a more general case can be found in [12]. Further, here “D-RPCA” refers to “dictionary based robust principal component analysis”, while the qualifiers “E” and “C” indicate the entry-wise and column-wise sparsity patterns, respectively.

Note that, in the column-wise sparse case there is an inherent ambiguity regarding the recovery of the true component pair corresponding to the low-rank part and the dictionary-sparse component, respectively. Specifically, any pair satisfying the model (1), whose low-rank part has the same column space as L and whose coefficient matrix has the same column support as S, is a solution of D-RPCA(C). To this end, we define the following oracle model to characterize the optimality of any solution pair.

Definition D.1 (Oracle Model for Column-wise Sparse Case).

Let the pair (L, S) be the matrices forming the data M as per (1), and define the corresponding oracle model accordingly. Then, any pair (L', S') is in the oracle model if M = L' + D S', L' is unchanged by the projection onto the column space of L, and S' is unchanged by the projection onto the column support of S, i.e., the column space of L' is contained in that of L and the column support of S' is contained in that of S.

For this case, we first establish the sufficient conditions for the existence of a solution based on certain incoherence conditions. Following this, our main result for the column-wise case states the sufficient conditions under which solving a convex optimization problem recovers a solution pair in the oracle model.

II-B Conditions on the Dictionary

For our analysis, we require that the dictionary follows the generalized frame property (GFP) defined as follows.

Definition D.2.

A matrix D satisfies the generalized frame property (GFP) on a set of vectors if, for any fixed non-zero vector v in this set, we have

α_ℓ ||v||_2^2  ≤  ||D v||_2^2  ≤  α_u ||v||_2^2,

where α_ℓ and α_u are the lower and upper generalized frame bounds with 0 < α_ℓ ≤ α_u < ∞.

The GFP is met as long as the vector v is not in the null-space of the matrix D and ||D v||_2 is bounded. Therefore, for the thin dictionary setting, for both the entry-wise and column-wise sparsity cases, this condition is satisfied as long as D has full column rank, in which case the set of vectors can be the entire space. For example, D being a frame [40] suffices; see [41] for a brief overview of frames.

II-C Relevant Subspaces

Before we define the relevant subspaces for this discussion, we define a few preliminaries. First, for the entry-wise sparse case, let the pair (L, S) be the solution to D-RPCA(E), and for the column-wise sparse case, let the pair (L, S) be in the oracle model; see Definition D.1.

Next, for the low-rank matrix L, let the compact singular value decomposition (SVD) be represented as L = U Σ V^T, where U and V contain the left and right singular vectors of L, respectively, and Σ is a diagonal matrix with the singular values arranged in descending order on the diagonal. Here, the matrices U and V each have orthonormal columns. Further, we consider the linear subspace consisting of matrices spanning the same row or column space as L, i.e., matrices of the form U X_1^T + X_2 V^T for arbitrary matrices X_1 and X_2 of appropriate dimensions.

Next, for the entry-wise (respectively, column-wise) sparse case, we consider the space spanned by matrices with the same non-zero support (respectively, column support, denoted as csupp) as S. Here, the column support denotes the index set containing the non-zero column indices of S for the column-wise sparsity case. In addition, we denote the corresponding complements of the spaces described above by appending the superscript ‘⊥’.

We use a calligraphic letter to denote the projection operator onto the subspace identified by its subscript, and the corresponding upright letter to denote the associated projection matrix with the appropriate subscripts. Therefore, using these definitions, the projection of a matrix Z onto the subspace associated with L is given by U U^T Z + Z V V^T - U U^T Z V V^T, and the projection onto its orthogonal complement is (I - U U^T) Z (I - V V^T), respectively.

II-D Incoherence Measures

We also employ various notions of incoherence to identify the conditions under which our procedures succeed. To this end, we first define the incoherence parameter that characterizes the relationship between the low-rank part L and the dictionary-sparse part D S, as

(2)

The parameter defined in (2) measures the degree of similarity between the low-rank part and the dictionary-sparse component; a larger value implies that the dictionary-sparse component is close to the low-rank part. In addition, we also define a second incoherence parameter as

(3)

which measures the similarity between the orthogonal complement of the column space of L and the dictionary D.

The next two measures of incoherence can be interpreted as a way to identify the cases where, for the low-rank component with SVD L = U Σ V^T: (a) the column space spanned by U resembles the dictionary D, and (b) the row space spanned by V resembles the sparse coefficient matrix S. In these cases, the low-rank part may resemble the dictionary-sparse component. To this end, similar to [39], we define the following measures to identify these cases as

(4)

Here, the first parameter achieves its upper bound when a dictionary element is exactly aligned with the column space of L, and its lower bound when all of the dictionary elements are orthogonal to the column space of L. Moreover, the second parameter achieves its upper bound when the row-space of L is “spiky”, i.e., a certain row of V is 1-sparse, meaning that a column of L is supported by (can be expressed as a scaled version of) a single column of U. The lower bound here is attained when the row-space is “spread-out”, i.e., each column of L is a linear combination of all columns of U. In general, recovery of the two components is easier when these incoherence parameters are closer to their lower bounds. In addition, for notational convenience, we define the constants

(5)

Here, the first constant is the maximum absolute entry of the matrix of inner products between the left singular vectors of L and the dictionary elements, which measures how close the columns of D are to the singular vectors of L. Similarly, for the column-wise case, the second constant measures the closeness of the columns of D to the singular vectors of L under a different metric (the maximal column-wise ℓ2-norm).

III Theoretical Results

In this section, we specialize our theoretical results [12] for the HS demixing task. Specifically, we provide the main results corresponding to each sparsity structure of S for the thin dictionary case considered here. We start with the theoretical results for the entry-wise sparsity case, and then present the corresponding theoretical guarantees for the column-wise sparsity structure; see [12] for detailed proofs.

III-A Exact Recovery for Entry-wise Sparsity Case

For the entry-wise case, our main result establishes the existence of a regularization parameter λ_e for which solving the optimization problem D-RPCA(E) recovers the components L and S exactly. To this end, we show that such a λ_e belongs to a non-empty interval whose lower and upper endpoints are defined as

(6)

Here, the constant appearing in (6) captures the relationship between the different model parameters; see [12] for its precise definition. Given these definitions, we have the following result for the entry-wise sparsity structure.

Theorem 1.

Suppose M = L + D S, where L is low-rank and S has a sufficiently small number of non-zero entries as dictated by (7), and the dictionary D obeys the generalized frame property D.2 with appropriate frame bounds. Then, given the incoherence parameters defined in (2), (4), and (5), for any regularization parameter λ_e in the non-empty interval defined in (6), solving D-RPCA(E) will recover the matrices L and S exactly.

We observe that the conditions for recovery are closely related to the incoherence measures between the low-rank part L, the dictionary D, and the sparse component S. In general, smaller sparsity, rank, and incoherence parameters suffice for ensuring the recovery of the components for a particular problem. This is in line with our intuition that the more distinct the two components are, the easier it should be to tease them apart. For our HS demixing problem, this indicates that a target of interest can be localized as long as its spectral signature is appropriately different from those of the other materials in the scene.

III-B Recovery for Column-wise Sparsity Case

For the column-wise sparsity model, recall that any pair in the oracle model described in D.1 is considered optimal. To this end, we first establish the sufficient conditions for the existence of such an optimal pair by the following lemma.

Lemma 2.

Given the data M = L + D S as per (1) and the oracle model of Definition D.1, a pair satisfying the conditions of Definition D.1 exists if the incoherence parameter defined in (3) is strictly smaller than one.

In essence, we need the incoherence parameter in (3) to be strictly smaller than one. Next, analogous to the entry-wise case, we show that the regularization parameter λ_c belongs to a non-empty interval, using which solving D-RPCA(C) recovers an optimal pair in the oracle model D.1 in accordance with Lemma 2. Here, for an appropriate constant, the endpoints of this interval are defined as

(8)

This leads us to the following result for the column-wise case.

Theorem 3.

Suppose M = L + D S with (L, S) defining the oracle model of Definition D.1, where the number of non-zero columns of S is sufficiently small relative to the total number of columns, as dictated by the conditions in [12]. Given the incoherence parameters defined in (2), (3), (4), and (5), and any regularization parameter λ_c in the interval defined in (8), solving D-RPCA(C) will recover a pair of components in the oracle model, provided the dictionary D obeys the generalized frame property D.2 with appropriate frame bounds.

Theorem 3 outlines the sufficient conditions under which the solution to the optimization problem D-RPCA(C) will be in the oracle model defined in D.1. Here, for a representative case, e.g., when the frame bounds are those of a tight frame and the model constants take moderate values, the admissible number of non-zero (outlying) columns is of the same order as that in Outlier Pursuit (OP) [19]. Moreover, our numerical results in [12] show that D-RPCA(C) can be much more robust than OP, and may recover the components even when the rank of L is high and the number of outliers is a constant proportion of the total number of columns. This implies that D-RPCA(C) will succeed as long as the dictionary can successfully represent the target of interest while rejecting the columns of the data matrix corresponding to materials other than the target.

Input: data M, dictionary D, regularization parameters, step size (from the Lipschitz constant), and continuation parameters.
 Initialize: the low-rank and sparse-coefficient iterates (and their previous values) to zero, and set the momentum and continuation parameters to their initial values.
while not converged do
   Generate look-ahead points for the two components using the momentum term.
   Take a gradient step of the smooth least-squares term at these points.
   Update the low-rank part via singular value thresholding.
   Update the dictionary-sparse part via entry-wise soft thresholding (D-RPCA(E)) or column-norm soft thresholding (D-RPCA(C)).
   Update the momentum parameter.
   Update the continuation parameter (geometric decrease).
end while
 return the estimates of the low-rank component and the sparse coefficient matrix
Algorithm 1 APG Algorithm for D-RPCA(E) and D-RPCA(C), adapted from [39]
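As a concrete companion to the listing above, the following Python sketch implements the APG iterations for the penalized problems (9) and (10) discussed in Section IV-B below. It is only an illustrative implementation under our stated conventions (a continuation parameter that scales the thresholding steps and FISTA-style momentum); the function name, parameter names, and default values are ours, not the authors'. The entry-wise and column-wise variants differ only in the proximal step applied to the coefficient matrix.

```python
import numpy as np

def apg_drpca(M, D, lam_L, lam_S, variant="E",
              mu_final=1e-6, max_iter=500, tol=1e-6):
    """Accelerated proximal gradient sketch for Algorithm 1.

    Minimizes 0.5*||M - L - D@S||_F^2 + mu*(lam_L*||L||_* + lam_S*g(S)),
    where g is the entry-wise l1 norm for variant "E" and the sum of column
    l2 norms for variant "C"; mu is geometrically decreased (continuation).
    """
    f, n = M.shape
    d = D.shape[1]
    L, Lp = np.zeros((f, n)), np.zeros((f, n))
    S, Sp = np.zeros((d, n)), np.zeros((d, n))
    t, tp = 1.0, 1.0
    mu = np.linalg.norm(M, 2)                       # continuation: start large
    step = 1.0 / (1.0 + np.linalg.norm(D, 2) ** 2)  # 1 / Lipschitz constant

    for _ in range(max_iter):
        # Momentum (look-ahead) points.
        YL = L + ((tp - 1.0) / t) * (L - Lp)
        YS = S + ((tp - 1.0) / t) * (S - Sp)
        # Gradient step on the smooth least-squares term.
        R = YL + D @ YS - M
        GL, GS = YL - step * R, YS - step * (D.T @ R)
        Lp, Sp = L, S
        # Low-rank update via singular value thresholding.
        U, sig, Vt = np.linalg.svd(GL, full_matrices=False)
        L = (U * np.maximum(sig - step * mu * lam_L, 0.0)) @ Vt
        # Dictionary-sparse update via (entry- or column-wise) soft thresholding.
        tau = step * mu * lam_S
        if variant == "E":
            S = np.sign(GS) * np.maximum(np.abs(GS) - tau, 0.0)
        else:
            norms = np.maximum(np.linalg.norm(GS, axis=0, keepdims=True), 1e-12)
            S = GS * np.maximum(1.0 - tau / norms, 0.0)
        # Momentum and continuation parameter updates.
        tp, t = t, (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        mu = max(0.9 * mu, mu_final)
        if np.linalg.norm(L - Lp) + np.linalg.norm(S - Sp) < tol * np.linalg.norm(M):
            break
    return L, S
```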

IV Algorithmic Considerations

The optimization problems of interest, D-RPCA(E) and D-RPCA(C), for the entry-wise and column-wise case, respectively, are convex but non-smooth. To solve for the components of interest, we adopt the accelerated proximal gradient (APG) algorithm, as shown in Algorithm 1. Note that [39] also applied the APG algorithm for D-RPCA(E), and we present a unified algorithm for both sparsity cases for completeness.

IV-A Background

The APG algorithm is motivated by a long line of work starting with [42], which showed the existence of a first-order algorithm with a convergence rate of O(1/k^2) for a smooth convex objective, where k denotes the iteration count. Following this, [43] developed the popular fast iterative shrinkage-thresholding algorithm (FISTA), which achieves this convergence rate for convex non-smooth objectives by accelerating the proximal gradient descent algorithm using a momentum term, as prescribed by [42]. As a result, it became a staple for solving a wide range of convex non-smooth tasks, including matrix completion [44] and robust PCA [45] and its variants [39, 19]. Also, recently, [46] has shown further improvements in the rate of convergence.

In addition to using the momentum term, the APG procedure operates by evaluating the gradient at an extrapolated look-ahead point. Along with faster convergence, this use of a look-ahead point reduces oscillations around the optimum; see [43] and references therein.

IV-B Discussion of Algorithm 1

For the optimization problems of interest, we solve unconstrained problems by transforming the equality constraint into a least-squares term which penalizes the misfit. In particular, the problems of interest that we solve via the APG algorithm are given by

min_{L, S}  λ_L ||L||_* + λ_S ||S||_1 + (1/2) ||M - L - D S||_F^2     (9)

for the entry-wise sparsity case, and

min_{L, S}  λ_L ||L||_* + λ_S ||S||_{1,2} + (1/2) ||M - L - D S||_F^2     (10)

for the column-wise sparsity case, where λ_L and λ_S are the regularization weights on the low-rank and dictionary-sparse terms, respectively. We note that although, for the application at hand, the thin dictionary case (fewer dictionary elements than spectral bands) might be more useful in practice, Algorithm 1 allows for the use of fat dictionaries (more dictionary elements than bands) as well.

Method Threshold Performance at best operating point AUC
TPR FPR
4 0.01 D-RPCA(E) 0.300 0.979 0.023 0.989
RPCA 0.650 0.957 0.049 0.974
MF N/A 0.957 0.036 0.994
MF N/A 0.914 0.104 0.946
0.1 D-RPCA(E) 0.800 0.989 0.017 0.997
RPCA 0.800 0.989 0.014 0.997
MF N/A 0.989 0.016 0.998
MF N/A 0.989 0.010 0.998
0.5 D-RPCA(E) 0.600 0.968 0.031 0.991
RPCA 0.600 0.935 0.067 0.988
MF N/A 0.548 0.474 0.555
MF N/A 0.849 0.119 0.939
(a) Learned dictionary,
Method Threshold Performance at best operating point AUC
TPR FPR
10 0.01 D-RPCA(E) 0.600 0.935 0.060 0.972
RPCA 0.700 0.978 0.023 0.990
MF N/A 0.624 0.415 0.681
MF N/A 0.569 0.421 0.619
0.1 D-RPCA(E) 0.500 0.968 0.029 0.993
RPCA 0.500 0.871 0.144 0.961
MF N/A 0.688 0.302 0.713
MF N/A 0.527 0.469 0.523
0.5 D-RPCA(E) 1.000 0.978 0.031 0.996
RPCA 2.200 0.849 0.113 0.908
MF N/A 0.807 0.309 0.781
MF N/A 0.527 0.465 0.539
(b) Learned dictionary,
Method Threshold Performance at best operating point AUC
TPR FPR
15 D-RPCA(E) 0.300 0.989 0.021 0.998
RPCA 3.000 0.849 0.146 0.900
MF N/A 0.957 0.085 0.978
MF N/A 0.796 0.217 0.857
(c) Dictionary by sampling voxels,
Method TPR FPR AUC
Mean St.Dev. Mean St.Dev. Mean St.Dev.
D-RPCA(E) 0.972 0.019 0.030 0.014 0.991 0.009
RPCA 0.919 0.061 0.079 0.055 0.959 0.040
MF 0.796 0.179 0.234 0.187 0.814 0.178
MF 0.739 0.195 0.258 0.192 0.775 0.207
(d) Average performance
TABLE I: Entry-wise sparsity model for the Indian Pines dataset. Simulation results are presented for our proposed approach (D-RPCA(E)), the robust-PCA based approach on the pseudo-inverse transformed data (RPCA), matched filtering (MF) on the original data, and matched filtering on the transformed data (the rows appear in this order in each block), across the number of dictionary elements and the regularization parameter used for the initial dictionary learning procedure; see Algorithm 2. The threshold selects columns with column-norm greater than the threshold value such that the AUC is maximized. For each case, the best performing metrics are reported in bold for readability. Further, marked entries denote the cases where the ROC curve was “flipped” (i.e., the classifier output was inverted to achieve the best performance).

Algorithm 1 also employs a continuation technique [45], which can be viewed as a “warm start” procedure. Here, we initialize the continuation parameter at some large value and geometrically reduce it until it reaches a prescribed final value; within the algorithm, this parameter scales the regularization (thresholding) terms. A smaller final value results in a solution which is closer to the optimal solution of the corresponding constrained problem. Further, as this parameter approaches zero, (9) and (10) recover the optimal solutions of D-RPCA(E) and D-RPCA(C), respectively. Moreover, Algorithm 1 also utilizes the knowledge of the smoothness constant (the Lipschitz constant of the gradient) to set the step-size parameter.

Specifically, the APG algorithm requires that the gradient of the smooth part,

f(L, S) = (1/2) ||M - L - D S||_F^2,

of the convex objectives shown in (9) and (10) is Lipschitz continuous. Now, since the gradient with respect to the pair (L, S) is given by

∇f(L, S) = ( L + D S - M,  D^T (L + D S - M) ),

we have that the gradient is Lipschitz continuous with (minimum) constant

L_f = 1 + ||D||^2,

which is used to set the step size as shown in Algorithm 1.

The updates of the low-rank component and of the sparse coefficient matrix for the entry-wise case both involve a soft-thresholding step, where for a matrix X and a threshold τ, the soft-thresholding operator is defined entry-wise as

sign(X_{ij}) max(|X_{ij}| - τ, 0).

In case of the low-rank part, we apply this function to the singular values (therefore referred to as singular value thresholding) [44], while for the update of the dictionary-sparse component, we apply it to the sparse coefficient matrix S.

The low-rank update step for the column-wise case remains the same as for the entry-wise case. However, for the update of the coefficient matrix in the column-wise case, we threshold the columns of S based on their column norms, i.e., for a column x of a matrix X and a threshold τ, the column-norm based soft-thresholding function is defined as

max(1 - τ/||x||_2, 0) x.
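For reference, the three thresholding (proximal) operators described above can be implemented in a few lines; the function names below are our own.

```python
import numpy as np

def soft_threshold(X, tau):
    """Entry-wise soft thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def singular_value_threshold(X, tau):
    """Soft-threshold the singular values (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def column_norm_threshold(X, tau):
    """Shrink each column by its l2 norm (proximal operator of the l_{1,2} norm)."""
    norms = np.maximum(np.linalg.norm(X, axis=0, keepdims=True), 1e-12)
    return X * np.maximum(1.0 - tau / norms, 0.0)
```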

IV-C Parameter Selection

Since the choices of the regularization parameters suggested by our main theoretical results involve quantities (such as the incoherence measures) that cannot be evaluated in practice, we employ a grid-search strategy over the range of admissible values for the low-rank and dictionary-sparse components to find the best values of the regularization parameters. We now discuss the specifics of the grid search for each sparsity case.

IV-C1 Selecting parameters for the entry-wise case

The choice of the parameters λ_L and λ_S in Algorithm 1 is based on the optimality conditions of the optimization problem shown in (9). As presented in [39], the ranges of the parameters λ_L and λ_S associated with the low-rank part and the sparse coefficient matrix, respectively, are (0, ||M||] and (0, ||D^T M||_∞], and the grid search for Algorithm 1 is performed over these intervals.

These ranges for λ_L and λ_S are derived using the optimization problem shown in (9). Specifically, we find the largest values of these regularization parameters beyond which the solution pair becomes trivial by analyzing the optimality conditions of (9); this value of the regularization parameter then defines the upper end of the range. For instance, consider the candidate solution L = 0 and S = 0; then the optimality condition with respect to the low-rank part is given by

M ∈ λ_L ∂||L||_* evaluated at L = 0,

where the sub-differential set of the nuclear norm at zero is defined as

{ W : ||W|| ≤ 1 }.

Therefore, for a zero solution pair we have that

||M|| ≤ λ_L,

which yields the condition on λ_L for an all-zero low-rank part. Therefore, the maximum value of λ_L which drives the low-rank part to an all-zero solution is ||M||.

Similarly, for the dictionary-sparse component, the optimality condition for choosing λ_S at the candidate solution L = 0, S = 0 is given by

D^T M ∈ λ_S ∂||S||_1 evaluated at S = 0,

where the sub-differential set of the entry-wise ℓ1-norm at zero is defined as

{ W : ||W||_∞ ≤ 1 }.

Again, for a zero solution pair we need that

||D^T M||_∞ ≤ λ_S,

which implies that the maximum value of λ_S that drives the dictionary-sparse part to zero is ||D^T M||_∞.

IV-C2 Selecting parameters for the column-wise case

Again, the choice of the parameters λ_L and λ_S is derived from the optimization problem shown in (10). In this case, the ranges of the parameters λ_L and λ_S associated with the low-rank part and the sparse coefficient matrix, respectively, are (0, ||M||] and (0, ||D^T M||_{∞,2}], where the latter is the largest column ℓ2-norm of D^T M. These ranges are obtained using an analysis similar to the entry-wise case, by examining the optimality conditions of the optimization problem shown in (10) instead of (9).
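Assuming the forms of (9) and (10) given above, the upper ends of these grid-search ranges can be computed directly from the data and the dictionary; the helper below is an illustrative sketch.

```python
import numpy as np

def lambda_upper_bounds(M, D, column_wise=False):
    """Upper ends of the grid-search ranges for the regularization weights."""
    lam_L_max = np.linalg.norm(M, 2)                 # spectral norm of M
    G = D.T @ M
    if column_wise:
        lam_S_max = np.linalg.norm(G, axis=0).max()  # largest column l2-norm
    else:
        lam_S_max = np.abs(G).max()                  # largest absolute entry
    return lam_L_max, lam_S_max
```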

V Experimental Evaluation

We now evaluate the performance of the proposed technique on real HS data. We begin by introducing the dataset used for the simulations, following which we describe the experimental set-up and present the results.

V-A Data

Indian Pines Dataset: We first consider the “Indian Pines” dataset [14], which was collected over the Indian Pines test site in north-western Indiana in June 1992 using the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) [47] sensor, a popular choice for collecting HS images for various remote sensing applications. This dataset consists of spectral reflectances across 220 bands in the 400–2500 nm wavelength range from a scene which is composed mostly of agricultural land along with two major dual-lane highways, a rail line, and some built structures, as shown in Fig. 3(a). The dataset is further processed by removing the bands corresponding to water absorption, which results in a HS data-cube of dimensions 145 × 145 × 200, as visualized in Fig. 1. This modified dataset is available as the “corrected Indian Pines” dataset [14], with the ground-truth containing 16 classes; henceforth, we refer to it as the “Indian Pines Dataset". We form the data matrix by stacking each voxel of the image side-by-side, which results in a data matrix of size 200 × 21,025. We will analyze the performance of the proposed technique for the identification of the stone-steel towers (class 16 in the dataset), shown in Fig. 3(a), which constitutes 93 voxels in the dataset.

Pavia University Dataset: Acquired using the Reflective Optics System Imaging Spectrometer (ROSIS) sensor, the Pavia University dataset [48] consists of spectral reflectances across 103 bands (in the 430–860 nm range) of an urban landscape over northern Italy. The selected subset of the scene, a smaller data-cube, mainly consists of buildings, roads, painted metal sheets, and trees, as shown in Fig. 3(b). Note that class 3, corresponding to “Gravel”, is not present in the selected data-cube considered here. For our demixing task, we will analyze the localization of target class 5, corresponding to the painted metal sheets. The corresponding data matrix is again formed by stacking the voxels of this data-cube side-by-side.

(a) Indian Pines (b) Pavia University
Fig. 3: Ground-truth classes in the datasets. Panels (a) and (b) show the ground truth classes for the Indian Pines dataset [14] and Pavia University dataset [48], respectively.
Input: Data from the target class, the regularization parameter, and the number of dictionary elements.
Output: The dictionary.
 Initialize: the dictionary with random entries and columns normalized to unit ℓ2-norm, the data estimate, and a tolerance.
while the change in the data estimate exceeds the tolerance do
   Update Coefficient Matrix via (11).
   Update Dictionary via (12).
   Form Estimate of Data from the current dictionary and coefficient matrix.
end while
 return the learned dictionary
Algorithm 2 Dictionary Learning [8, 9]
Method Threshold Performance at best operating point AUC
TPR FPR
30 0.01 D-RPCA(E) 0.150 0.989 0.015 0.992
RPCA 0.700 0.849 0.146 0.925
MF N/A 0.929 0.073 0.962
MF N/A 0.502 0.498 0.498
0.1 D-RPCA(E) 0.050 0.982 0.019 0.992
RPCA 3.000 0.638 0.374 0.664
MF N/A 0.979 0.053 0.986
MF N/A 0.620 0.381 0.660
0.5 D-RPCA(E) 0.080 0.982 0.019 0.992
RPCA 2.500 0.635 0.381 0.671
MF N/A 0.980 0.159 0.993
MF N/A 0.555 0.447 0.442
(a) Learned dictionary,
Method Threshold Performance at best operating point AUC
TPR FPR
60 D-RPCA(E) 0.060 0.986 0.016 0.995
RPCA 1.000 0.799 0.279 0.793
MF N/A 0.980 0.011 0.994
MF N/A 0.644 0.355 0.700
(b) Dictionary by sampling voxels,
Method TPR FPR AUC
Mean St.Dev. Mean St.Dev. Mean St.Dev.
D-RPCA(E) 0.984 0.003 0.014 0.002 0.993 0.001
RPCA 0.730 0.110 0.295 0.110 0.763 0.123
MF 0.967 0.025 0.074 0.062 0.983 0.0149
MF 0.580 0.064 0.420 0.065 0.575 0.125
(c) Average performance
TABLE II: Entry-wise sparsity model for the Pavia University dataset. Simulation results are presented for the proposed approach (D-RPCA(E)), the robust-PCA based approach on the pseudo-inverse transformed data (RPCA), matched filtering (MF) on the original data, and matched filtering on the transformed data (the rows appear in this order in each block), across the number of dictionary elements and the regularization parameter for the initial dictionary learning step. The threshold selects columns with column-norm greater than the threshold value such that the AUC is maximized. For each case, the best performing metrics are reported in bold for readability. Further, marked entries denote the cases where the ROC curve was “flipped” (i.e., the classifier output was inverted to achieve the best performance).

V-B Dictionary

We form the known dictionary in two ways: 1) a (thin) dictionary is learned from the target-class voxels using Algorithm 2, and 2) the dictionary is formed by randomly sampling voxels from the target class. This is to emulate the ways in which we can arrive at the dictionary corresponding to a target – 1) where the exact signatures are not available and/or there is noise, and 2) where we have access to the exact signatures of the target, respectively. Note that the optimization procedures for D-RPCA(E) and D-RPCA(C) are agnostic to the manner in which the dictionary is selected.

In our experiments for case 1), we learn the dictionary using the target-class data via Algorithm 2, which (approximately) solves a sparse dictionary learning problem of the form

min_{D, A}  (1/2) ||Y - D A||_F^2 + λ ||A||_1   subject to   unit-norm columns of D,

where Y collects the training voxels of the target class and A denotes the corresponding sparse coefficients. Algorithm 2 operates by alternating between updating the sparse coefficients in (11) via FISTA [43] and the dictionary in (12) via the Newton method [49].
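A simplified alternating-minimization sketch of this dictionary learning step is given below. For brevity, the coefficient update uses plain ISTA rather than FISTA, and the dictionary update uses a least-squares step followed by column normalization rather than the Newton update of [49]; all names and iteration counts are illustrative.

```python
import numpy as np

def learn_dictionary(Y, d, lam, n_outer=50, n_inner=100):
    """Learn a d-atom dictionary from target-class voxels Y (bands x voxels)."""
    f, n = Y.shape
    rng = np.random.default_rng(0)
    D = rng.standard_normal((f, d))
    D /= np.linalg.norm(D, axis=0)                       # unit-norm columns
    A = np.zeros((d, n))
    for _ in range(n_outer):
        # Coefficient update: ISTA on 0.5*||Y - D A||_F^2 + lam*||A||_1.
        step = 1.0 / (np.linalg.norm(D, 2) ** 2 + 1e-12)
        for _ in range(n_inner):
            G = A - step * (D.T @ (D @ A - Y))
            A = np.sign(G) * np.maximum(np.abs(G) - lam * step, 0.0)
        # Dictionary update: least-squares fit, then renormalize columns.
        D = Y @ np.linalg.pinv(A)
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D
```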

For case 2), the columns of the dictionary are set as the known data voxels of the target class. Specifically, instead of learning a dictionary based on a target class of interest, we set it as the exact signatures observed previously. Note that for this case, the dictionary is not normalized at this stage since the specific normalization depends on the particular demixing problem of interest, discussed shortly. In practice, we can store the un-normalized dictionary (formed from the voxels), consisting of actual signatures of the target material, and can normalize it after the HS image has been acquired.

Method Threshold Performance at best operating point AUC
TPR FPR
4 0.01 D-RPCA(C) 0.905 0.989 0.014 0.998
OP 0.895 0.989 0.015 0.998
MF N/A 0.656 0.376 0.611
MF N/A 0.624 0.373 0.639
0.1 D-RPCA(C) 0.805 0.989 0.013 0.998
OP 1.100 0.720 0.349 0.682
MF N/A 0.742 0.256 0.780
MF N/A 0.828 0.173 0.905
0.5 D-RPCA(C) 1.800 0.989 0.010 0.998
OP 1.300 0.989 0.012 0.998
MF N/A 0.548 0.474 0.556
MF N/A 0.849 0.146 0.939
(a) Learned dictionary,
Method Threshold Performance at best operating point AUC
TPR FPR
10 0.01 D-RPCA(C) 0.800 0.946 0.016 0.993
OP 1.300 0.946 0.060 0.988
MF N/A 0.946 0.060 0.987
MF N/A 0.527 0.468 0.511
0.1 D-RPCA(C) 0.550 0.979 0.029 0.997
OP 0.800 0.893 0.112 0.928
MF N/A 0.688 0.302 0.714
MF N/A 0.527 0.470 0.523
0.5 D-RPCA(C) 1.400 0.989 0.037 0.997
OP 0.800 0.807 0.148 0.847
MF N/A 0.807 0.309 0.781
MF N/A 0.527 0.468 0.539
(b) Learned dictionary,
Method Threshold Performance at best operating point AUC
TPR FPR
15 D-RPCA(C) 0.800 0.989 0.018 0.998
OP 2.200 0.882 0.126 0.900
MF N/A 0.957 0.085 0.978
MF N/A 0.796 0.217 0.857
(c) Dictionary by sampling voxels,
Method TPR FPR AUC
Mean St.Dev. Mean St.Dev. Mean St.Dev.
D-RPCA(C) 0.981 0.016 0.020 0.010 0.997 0.002
OP 0.889 0.099 0.117 0.115 0.906 0.114
MF 0.763 0.151 0.266 0.149 0.772 0.166
MF 0.668 0.151 0.331 0.148 0.702 0.192
(d) Average performance
TABLE III: Column-wise sparsity model for the Indian Pines dataset. Simulation results are presented for the proposed approach (D-RPCA(C)), the Outlier Pursuit based approach on the pseudo-inverse transformed data (OP), matched filtering (MF) on the original data, and matched filtering on the transformed data (the rows appear in this order in each block), across the number of dictionary elements and the regularization parameter for the initial dictionary learning step. The threshold selects columns with column-norm greater than the threshold value such that the AUC is maximized. For each case, the best performing metrics are reported in bold for readability. Further, marked entries denote the cases where the ROC curve was “flipped” (i.e., the classifier output was inverted to achieve the best performance).

V-C Experimental Setup

Normalization of data and the dictionary: For normalizing the data, we divide each element of the data matrix by a common constant in order to preserve the inter-voxel scaling. For the dictionary, in the learned dictionary case, i.e., case 1), the dictionary already has unit-norm columns as a result of Algorithm 2. Further, when the dictionary is formed from the data directly, i.e., for case 2), we divide each element of the dictionary by the same constant, and then normalize the columns of the dictionary such that they are unit-norm.

Dictionary selection for the Indian Pines Dataset: For the learned dictionary case, we evaluate the performance of the aforementioned techniques for both the entry-wise and column-wise settings for two dictionary sizes, 4 and 10 elements, and for three values of the regularization parameter used for the initial dictionary learning step, i.e., 0.01, 0.1, and 0.5. This parameter controls the sparsity during the initial dictionary learning step; see Algorithm 2. For the case when the dictionary is selected from the voxels directly, we randomly select 15 voxels from the target class (class 16) to form our dictionary.

Dictionary selection for the Pavia University Dataset: Here, for the learned dictionary case, we evaluate the performance of the aforementioned techniques for both the entry-wise and column-wise settings for a dictionary of size 30, for three values of the regularization parameter used for the initial dictionary learning step, i.e., 0.01, 0.1, and 0.5. Further, we randomly select 60 voxels from the target class (class 5) when the dictionary is formed from the data voxels directly.

Comparison with matched filtering (MF)-based approaches: In addition to the robust PCA-based and OP-based techniques introduced in Section I-D, we also compare the performance of our techniques with two MF-based approaches. These MF-based techniques are agnostic to our model assumptions, i.e., entry-wise or column-wise sparsity cases. Therefore, the following description of these techniques applies to both sparsity cases.

For the first MF-based technique, referred to as MF, we form the inner products of the column-normalized data matrix with the dictionary, i.e., D^T applied to the normalized data, and select the maximum absolute inner product per column. For the second MF-based technique, we perform matched filtering on the pseudo-inversed data D^† M. Here, the matched filtering corresponds to finding the maximum absolute entry of each column of the column-normalized D^† M. Next, in both cases we scan over a range of threshold values to generate the results.
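The two matched-filtering baselines can be summarized by the following sketch; the function names and the example threshold are illustrative.

```python
import numpy as np

def mf_scores(M, D):
    """Max absolute correlation of each normalized voxel with the dictionary atoms."""
    Mn = M / np.maximum(np.linalg.norm(M, axis=0, keepdims=True), 1e-12)
    return np.abs(D.T @ Mn).max(axis=0)

def mf_pinv_scores(M, D):
    """Matched filtering on the pseudo-inverse transformed data."""
    P = np.linalg.pinv(D) @ M
    Pn = P / np.maximum(np.linalg.norm(P, axis=0, keepdims=True), 1e-12)
    return np.abs(Pn).max(axis=0)

# Example usage: flag voxels whose score exceeds an (illustrative) threshold.
# detections = mf_scores(M, D) > 0.5
```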

Performance Metrics: We evaluate the performance of these techniques via receiver operating characteristic (ROC) plots. ROC plots are a staple for the analysis of the classification performance of a binary classifier in machine learning; see [50] for details. Specifically, an ROC curve is a plot of the true positive rate (TPR) against the false positive rate (FPR), where a higher TPR (close to 1) and a lower FPR (close to 0) indicate that the classifier detects all the elements in the class while rejecting those outside the class.

A natural metric to gauge good performance is the area under the curve (AUC). It indicates the area under the ROC curve, which is maximized when the TPR equals 1 and the FPR equals 0; therefore, a higher AUC is preferred. Here, an AUC of 0.5 indicates that the performance of the classifier is roughly as good as a coin flip. As a result, if a classifier has an AUC below 0.5, one can improve the performance by simply inverting the result of the classifier. This effectively means that the AUC is evaluated after “flipping” the ROC curve. In other words, the classifier is good at rejecting the class of interest, and taking the complement of the classifier decision can be used to identify the class of interest.

In our experiments, MF-based techniques often exhibit this phenomenon. Specifically, when the dictionary contains element(s) which resemble the average behavior of the spectral signatures, the inner product between the normalized data columns and these dictionary elements may be higher than that with other, more distinguishing dictionary elements. Since MF-based techniques rely on the maximum inner product between the normalized data columns and the dictionary, and since the spectral signatures of even distinct classes are highly correlated (see, for instance, Fig. 2), MF-based approaches can in these cases effectively reject the class of interest. This leads to an AUC below 0.5. Therefore, as discussed above, we invert the result of the classifier (indicated in the tables) to report the best performance. When using MF-based techniques, this issue can potentially be resolved in practice by removing the dictionary elements which tend to resemble the average behavior of the spectral signatures.
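This “flipping” convention can be expressed compactly as follows (an illustrative sketch using scikit-learn's AUC routine).

```python
from sklearn.metrics import roc_auc_score

def auc_with_flip(labels, scores):
    """Return the AUC, inverting the scores if they perform worse than chance."""
    auc = roc_auc_score(labels, scores)
    flipped = auc < 0.5
    return (1.0 - auc if flipped else auc), flipped
```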

V-D Parameter Setup for the Algorithms

Entry-wise sparsity case: We evaluate and compare the performance of the proposed method D-RPCA(E) with the pseudo-inverse based robust PCA approach (RPCA, described in Section I-D), MF on the original data, and MF on the transformed data. Specifically, we evaluate the performance of these techniques via the receiver operating characteristic (ROC) curves for the Indian Pines dataset and the Pavia University dataset, with the results shown in Table I(a)-(d) and Table II(a)-(c), respectively.

Method