Composite Kernel Local Angular Discriminant Analysis for Multi-Sensor Geospatial Image Analysis



With the emergence of passive and active optical sensors for geospatial imaging, information fusion across sensors is becoming ever more important. An important aspect of single- or multi-sensor geospatial image analysis is feature extraction: the process of finding “optimal” lower dimensional subspaces that adequately characterize class-specific information for subsequent analysis tasks, such as classification, change detection, and anomaly detection. In recent work, we proposed and developed an angle-based discriminant analysis approach that projects data onto subspaces with maximal “angular” separability in the input (raw) feature space and in a Reproducing Kernel Hilbert Space (RKHS). We also developed an angular locality preserving variant of this algorithm. In this letter, we advance this work and make it suitable for information fusion: we propose and validate a composite kernel local angular discriminant analysis projection that can operate on an ensemble of feature sources (e.g., from different sensors) and project the data, through composite kernels, onto a unified space in which the data are maximally separated in an angular sense. We validate this method with the multi-sensor University of Houston hyperspectral and LiDAR dataset and demonstrate that the proposed method significantly outperforms other composite kernel approaches to sensor (information) fusion.


1 Introduction

Optical remote sensing has made significant advances in recent years. Among these has been the deployment and widespread use of hyperspectral imagery on a variety of platforms (including manned and unmanned aircraft and satellites) for a wide variety of applications, ranging from environmental monitoring, ecological forecasting, and disaster relief to applications pertaining to national security. With rapid advancements in sensor technology, and the resulting reduction in size, weight, and power requirements of the imagers, it is now common to deploy multiple sensors on the same platform for multi-sensor imaging. As a specific example, it is appealing for a variety of remote sensing applications to acquire hyperspectral imagery and Light Detection and Ranging (LiDAR) data simultaneously: hyperspectral imagery offers a rich characterization of object-specific properties, while LiDAR provides topographic information that complements hyperspectral imagery [1]. Modern LiDAR systems can record entire waveforms for every return signal, as opposed to providing just the point cloud, enabling a richer representation of surface topography.

While feature reduction is an important preprocessing step in the analysis of single-sensor high dimensional passive optical imagery (particularly hyperspectral imagery), it becomes especially important with multi-sensor data, where each sensor contributes high dimensional raw features. A variety of feature projection approaches have been used for feature reduction, including classical approaches such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) and their many variants, as well as manifold learning approaches such as supervised and unsupervised Locality Preserving Projections [6]. Several of these methods are implemented both in the input (raw) feature space and in a Reproducing Kernel Hilbert Space (RKHS) for data that are nonlinearly separable. Further, most traditional approaches to feature extraction are designed for single-sensor data; a unique problem with multi-sensor data is that the feature spaces corresponding to each sensor often have different statistical properties, and a single feature projection may hence be sub-optimal. It is hence desirable to have a projection for feature reduction that preserves the underlying information from each sensor in a lower dimensional subspace.

More recently, we developed a feature projection approach, referred to as Angular Discriminant Analysis (ADA) [7], that was optimized for hyperspectral imagery and demonstrated robustness to spectral variability. Specifically, the approach sought a lower dimensional subspace where classes were maximally separated in an angular sense, preserving important spectral shape related characteristics. We also developed a local variant of the algorithm (LADA) that preserves angular locality in the subspace. In this paper, we propose a composite kernel implementation of this framework and demonstrate its utility for feature projection in multi-sensor frameworks. Specifically, by utilizing a composite kernel (a dedicated kernel for each sensor) together with ADA (or LADA), the resulting projection is highly suitable for classification. The proposed approach serves as a very effective feature reduction algorithm for sensor fusion: it optimally fuses multi-sensor data and projects it to a lower dimensional subspace, after which a traditional classifier can be employed for supervised learning. We validate the method with the University of Houston multi-sensor dataset comprising hyperspectral and LiDAR data and show that the proposed method significantly outperforms other approaches to feature fusion.

The outline of the remainder of this paper is as follows. In Section 2, we review related work. In Section 3, we describe the proposed approach for multi-sensor feature extraction. In Section 4, we describe the experimental setup and present results with the proposed method, comparing it to several state-of-the-art techniques for feature fusion.

2 Related Work

Traditional approaches to feature projection based dimensionality reduction, such as PCA, LDA, and their variants, largely rely on Euclidean measures. Manifold learning approaches [6] seek to preserve manifold structure and neighborhood locality through projections that preserve such structures. Other projection based approaches, such as Locality Preserving Projections (LPP) and Local Fisher’s Discriminant Analysis (LFDA) [6], integrate ideas of local neighborhoods, via affinity matrices, into classical projection based methods such as PCA and LDA. As a general feature extraction approach, Euclidean distance is a reasonable choice, including for remotely sensed image analysis. However, motivated by the well understood benefits of spectral angle for hyperspectral image analysis, in previous work we developed an alternate feature projection paradigm that works with angular instead of Euclidean distance measures [7]; we demonstrated that when projecting hyperspectral data through this class of transformations, the resulting subspaces were very effective for downstream classification and significantly outperformed their Euclidean distance counterparts. In addition to benefits for classification, we also demonstrated other benefits of this class of methods, including robustness to illumination differences, something that is very important for remote sensing. In other previous work, it has been shown that a Reproducing Kernel Hilbert Space (RKHS) generated by composite kernels (a weighted linear combination of basis kernels) is very effective for multi-source fusion [2]. Here, we briefly review the developments related to angular discriminant analysis. This provides context and motivation for the proposed work, which seeks to demonstrate the benefits of composite kernel angular discriminant analysis for multi-source image analysis.

2.1 Angular Discriminant Analysis

Here, we briefly review Angular Discriminant Analysis (ADA) and its locality preserving counterpart, Local Angular Discriminant Analysis (LADA). Consider a $d$-dimensional feature space (e.g., hyperspectral imagery with $d$ spectral channels). Let $\mathbf{x}_i \in \mathbb{R}^d$ be the $i$-th training sample with an associated class label $y_i \in \{1, 2, \dots, C\}$, where $C$ is the number of classes. The total number of training samples in the library is $n = \sum_{c=1}^{C} n_c$, where $n_c$ denotes the number of training samples from class $c$. Let $\mathbf{T} \in \mathbb{R}^{d \times r}$ be the desired projection matrix, where $r$ denotes the reduced dimensionality. We also denote symbols having unit norm with a tilde (e.g., $\tilde{\mathbf{x}} = \mathbf{x}/\lVert \mathbf{x} \rVert_2$); this is useful where we normalize the data to unit norm to focus on angular separability.


Traditional LDA seeks a subspace that maximizes between-class scatter while minimizing within-class scatter, where scatter is measured using Euclidean distances. While similar in philosophy, ADA is an entirely new approach to subspace learning based on angular scatter: it seeks a subspace in which within-class angular scatter is minimized and between-class angular scatter is maximized. Just like LDA, the ADA optimization problem can be posed as a generalized eigenvalue problem. Specifically, ADA seeks a projection in which the ratio of the between-class inner product to the within-class inner product of data samples is minimized. The within-class outer product matrix $\mathbf{O}^{(w)}$ and between-class outer product matrix $\mathbf{O}^{(b)}$ are defined as
$$\mathbf{O}^{(w)} = \sum_{c=1}^{C} \sum_{i:\, y_i = c} \tilde{\mathbf{x}}_i \tilde{\boldsymbol{\mu}}_c^{\top}, \qquad \mathbf{O}^{(b)} = \sum_{c=1}^{C} n_c\, \tilde{\boldsymbol{\mu}}_c \tilde{\boldsymbol{\mu}}^{\top},$$
where $\tilde{\boldsymbol{\mu}}_c$ is the normalized mean of the $c$-th class samples, and $\tilde{\boldsymbol{\mu}}$ is the normalized total mean.

It was shown in [7] that the projection matrix of ADA can be approximated as the solution to the following trace ratio problem
$$\mathbf{T}_{\mathrm{ADA}} = \arg\min_{\mathbf{T}} \operatorname{tr}\!\left[\left(\mathbf{T}^{\top} \mathbf{O}^{(w)} \mathbf{T}\right)^{-1} \left(\mathbf{T}^{\top} \mathbf{O}^{(b)} \mathbf{T}\right)\right].$$
The projection matrix can be obtained by solving the generalized eigenvalue problem involving $\mathbf{O}^{(b)}$ and $\mathbf{O}^{(w)}$.
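As an illustration, the eigenproblem above can be sketched in a few lines of NumPy. This is a minimal sketch under the formulation above; the function and variable names and the small ridge term are ours (for numerical convenience), not from [7].

```python
import numpy as np

def ada_projection(X, y, r):
    """Sketch of Angular Discriminant Analysis (ADA) in the input space.

    X: (n, d) data, y: (n,) integer labels, r: target dimensionality.
    Seeks a projection T that minimizes between-class relative to
    within-class angular scatter via a generalized eigenvalue problem.
    """
    # Unit-normalize so inner products encode angular similarity.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    d = X.shape[1]
    mu_tot = Xn.mean(axis=0)
    mu_tot /= np.linalg.norm(mu_tot)
    Ow = np.zeros((d, d))  # within-class outer-product matrix
    Ob = np.zeros((d, d))  # between-class outer-product matrix
    for c in np.unique(y):
        Xc = Xn[y == c]
        mu_c = Xc.mean(axis=0)
        mu_c /= np.linalg.norm(mu_c)
        Ow += np.outer(Xc.sum(axis=0), mu_c)    # sum_i x_i mu_c^T
        Ob += len(Xc) * np.outer(mu_c, mu_tot)  # n_c mu_c mu^T
    # Generalized eigenproblem O_b t = lambda O_w t; keep the r smallest.
    evals, evecs = np.linalg.eig(np.linalg.solve(Ow + 1e-6 * np.eye(d), Ob))
    order = np.argsort(evals.real)
    return evecs.real[:, order[:r]]
```

Projected features are then obtained as `X @ T` for downstream classification.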


Similar to LDA, ADA is a “global” projection in that it does not specifically promote preservation of local (neighborhood) angular relationships under the projection. We hence developed LADA in [7], which is a local variant of ADA. The within and between-class outer product matrices of LADA are obtained as follows
$$\mathbf{O}^{(w)}_{L} = \sum_{i,j=1}^{n} W^{(w)}_{i,j}\, \tilde{\mathbf{x}}_i \tilde{\mathbf{x}}_j^{\top}, \qquad \mathbf{O}^{(b)}_{L} = \sum_{i,j=1}^{n} W^{(b)}_{i,j}\, \tilde{\mathbf{x}}_i \tilde{\mathbf{x}}_j^{\top},$$
where the normalized weight matrices are defined as
$$W^{(w)}_{i,j} = \begin{cases} A_{i,j}/n_c, & y_i = y_j = c,\\ 0, & y_i \neq y_j, \end{cases} \qquad W^{(b)}_{i,j} = \begin{cases} A_{i,j}\left(1/n - 1/n_c\right), & y_i = y_j = c,\\ 1/n, & y_i \neq y_j. \end{cases}$$
The normalized affinity $A_{i,j}$ between $\tilde{\mathbf{x}}_i$ and $\tilde{\mathbf{x}}_j$ is defined as

$$A_{i,j} = \exp\!\left(-\frac{d_a\!\left(\tilde{\mathbf{x}}_i, \tilde{\mathbf{x}}_j\right)^2}{\gamma_i \gamma_j}\right), \qquad d_a\!\left(\tilde{\mathbf{x}}_i, \tilde{\mathbf{x}}_j\right) = \arccos\!\left(\tilde{\mathbf{x}}_i^{\top} \tilde{\mathbf{x}}_j\right),$$
where $\gamma_i = d_a\!\left(\tilde{\mathbf{x}}_i, \tilde{\mathbf{x}}_i^{(K)}\right)$ denotes the local angular scaling of data samples in the angular neighborhood of $\tilde{\mathbf{x}}_i$, and $\tilde{\mathbf{x}}_i^{(K)}$ is the $K$-th nearest neighbor of $\tilde{\mathbf{x}}_i$.

Similar to ADA, the projection matrix of LADA can be defined as
$$\mathbf{T}_{\mathrm{LADA}} = \arg\min_{\mathbf{T}} \operatorname{tr}\!\left[\left(\mathbf{T}^{\top} \mathbf{O}^{(w)}_{L} \mathbf{T}\right)^{-1} \left(\mathbf{T}^{\top} \mathbf{O}^{(b)}_{L} \mathbf{T}\right)\right].$$
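The locality-scaled angular affinity described above can be sketched as follows. This is our own minimal implementation, assuming an arccos angular distance and $K$-th nearest neighbor local scaling; the exact scaling constants in [7] may differ.

```python
import numpy as np

def angular_affinity(X, K=7):
    """Angular affinity with local scaling (sketch; constants may differ
    from the LADA paper). A[i, j] is large when samples i and j subtend
    a small angle relative to their local angular neighborhoods.
    """
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    cos = np.clip(Xn @ Xn.T, -1.0, 1.0)
    ang = np.arccos(cos)                  # pairwise angular distances
    # Local angular scale: angle to each sample's K-th nearest neighbor.
    K = min(K, len(X) - 1)
    sigma = np.sort(ang, axis=1)[:, K]    # column 0 is the sample itself
    sigma = np.maximum(sigma, 1e-12)      # guard against zero scales
    return np.exp(-ang**2 / np.outer(sigma, sigma))
```

The resulting matrix can then populate the LADA weight matrices above.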
3 Composite Kernel Angular Discriminant Analysis for Image Fusion

In this section, we develop and describe the proposed approach to multi-source feature extraction: composite kernel angular discriminant analysis (CKADA) and its locality preserving counterpart (CKLADA). Our underlying hypothesis is that even when angular information is important for optical image analysis, in a multi-source (e.g., multi-sensor) scenario, dedicated kernels (specific to each source) result in a superior projection that addresses source-specific nonlinearities. With that goal, we extend our previous work on angular discriminant analysis by implementing it in a composite kernel Reproducing Kernel Hilbert Space, and we demonstrate, for a specific application of multi-sensor image analysis, that the resulting subspace is highly discriminative and outperforms other subspace learning approaches.

Consider a nonlinear mapping from the input space to a RKHS as follows:
$$\Phi : \mathbb{R}^{d} \rightarrow \mathcal{H}, \quad \mathbf{x} \mapsto \Phi(\mathbf{x}),$$
and a kernel function defined as:
$$k(\mathbf{x}_i, \mathbf{x}_j) = \left\langle \Phi(\mathbf{x}_i), \Phi(\mathbf{x}_j) \right\rangle,$$
where $\langle \cdot, \cdot \rangle$ is the inner product of two vectors. Consider next a set of $M$ co-registered multi-source images, resulting in the following $M$-tuple of feature vectors for every geolocation (co-registered pixels): $\mathbf{x}_i = \{\mathbf{x}_i^{(1)}, \dots, \mathbf{x}_i^{(M)}\}$, where $\mathbf{x}_i^{(m)} \in \mathbb{R}^{d_m}$. Associated with every pixel (geolocation) for which ground truth is available, there is a class label $y_i$. A composite kernel RKHS can then be constructed as
$$k(\mathbf{x}_i, \mathbf{x}_j) = \sum_{m=1}^{M} d_m\, k_m\!\left(\mathbf{x}_i^{(m)}, \mathbf{x}_j^{(m)}\right), \qquad d_m \geq 0, \;\; \sum_{m=1}^{M} d_m = 1,$$
where $k_m$ is a basis kernel for the $m$-th source, formed by any valid Mercer’s kernel. To implement Composite Kernel ADA (CKADA), note that $\mathbf{O}^{(w)}$ and $\mathbf{O}^{(b)}$ can be reformulated as
$$\mathbf{O}^{(w)} = \sum_{i,j=1}^{n} W^{(w)}_{i,j}\, \tilde{\mathbf{x}}_i \tilde{\mathbf{x}}_j^{\top}, \qquad \mathbf{O}^{(b)} = \sum_{i,j=1}^{n} W^{(b)}_{i,j}\, \tilde{\mathbf{x}}_i \tilde{\mathbf{x}}_j^{\top},$$
where $W^{(w)}$ is given as

$$W^{(w)}_{i,j} = \begin{cases} 1/n_c, & y_i = y_j = c,\\ 0, & \text{otherwise}, \end{cases}$$
and $W^{(b)}$ is given as

$$W^{(b)}_{i,j} = 1/n \quad \text{for all } i, j.$$
ADA can hence be re-expressed as the solution to the following generalized eigenvalue problem
$$\tilde{\mathbf{X}} W^{(b)} \tilde{\mathbf{X}}^{\top} \mathbf{t} = \lambda\, \tilde{\mathbf{X}} W^{(w)} \tilde{\mathbf{X}}^{\top} \mathbf{t}, \qquad \tilde{\mathbf{X}} = [\tilde{\mathbf{x}}_1, \dots, \tilde{\mathbf{x}}_n].$$
Since $\mathbf{t}$ can be represented as a linear combination of the columns of $\tilde{\mathbf{X}}$, it can be formulated using a coefficient vector $\boldsymbol{\alpha} \in \mathbb{R}^{n}$ as

$$\mathbf{t} = \tilde{\mathbf{X}} \boldsymbol{\alpha},$$
Here, $\mathbf{K} = \tilde{\mathbf{X}}^{\top} \tilde{\mathbf{X}}$ is a symmetric kernel (Gram) matrix whose entries $K_{i,j} = \tilde{\mathbf{x}}_i^{\top} \tilde{\mathbf{x}}_j$ represent a simple inner product kernel, but each entry can be replaced by $k(\mathbf{x}_i, \mathbf{x}_j)$ by utilizing the kernel trick. Multiplying $\tilde{\mathbf{X}}^{\top}$ on both sides of the eigenvalue problem and substituting $\mathbf{t} = \tilde{\mathbf{X}} \boldsymbol{\alpha}$ results in the following generalized eigenvalue problem.
$$\mathbf{K} W^{(b)} \mathbf{K} \boldsymbol{\alpha} = \lambda\, \mathbf{K} W^{(w)} \mathbf{K} \boldsymbol{\alpha}.$$
Let $\boldsymbol{\alpha}_1, \dots, \boldsymbol{\alpha}_r$ be the generalized eigenvectors associated with the $r$ smallest eigenvalues $\lambda_1 \leq \dots \leq \lambda_r$. A test sample $\mathbf{x}$ can be embedded in $\mathbb{R}^{r}$ via

$$\mathbf{z} = [\boldsymbol{\alpha}_1, \dots, \boldsymbol{\alpha}_r]^{\top} \mathbf{k}(\mathbf{x}), \qquad \mathbf{k}(\mathbf{x}) = \left[k(\mathbf{x}, \mathbf{x}_1), \dots, k(\mathbf{x}, \mathbf{x}_n)\right]^{\top},$$
where $\mathbf{k}(\mathbf{x})$ is an $n$-dimensional vector. Composite Kernel Local ADA (CKLADA) can likewise be implemented by replacing the weight matrices ($W^{(w)}$ and $W^{(b)}$) above with their local counterparts defined in Section 2.1.

We note that in the proposed approach, the empirical kernel (Gram) matrix, formed as a weighted linear combination over all sources, is used in the generalized eigenvalue problem for CKLADA. The algorithm projects the data from all sources onto a unified RKHS through a bank of kernels individually optimized for each source. The final embedding seeks to optimally separate the data (in an angular sense) in the RKHS. The linear mixture of kernels enables us to optimize each kernel (for example, its kernel parameters) for each source instead of applying a single kernel to all sources, and to specify source importance (via the mixing weights) for the overall analysis task at hand.
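A minimal sketch of this pipeline (composite Gram matrix plus the kernelized eigenproblem) is shown below. All function and variable names are ours, the ridge term is a numerical convenience, and the pairwise weight matrices follow the reformulation sketched above.

```python
import numpy as np

def rbf_gram(A, B, sigma):
    """RBF (Gaussian) Gram matrix between the rows of A and of B."""
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * sigma**2))

def composite_gram(sources_a, sources_b, sigmas, weights):
    """Composite kernel K = sum_m d_m K_m, one RBF kernel per source."""
    K = np.zeros((len(sources_a[0]), len(sources_b[0])))
    for Xa, Xb, s, w in zip(sources_a, sources_b, sigmas, weights):
        K += w * rbf_gram(Xa, Xb, s)
    return K

def ck_ada(K, y, r, ridge=1e-6):
    """Composite kernel ADA sketch: solve K W_b K a = lambda K W_w K a
    and keep the eigenvectors of the r smallest eigenvalues."""
    n = len(y)
    Wb = np.full((n, n), 1.0 / n)   # between-class pairwise weights
    Ww = np.zeros((n, n))           # within-class pairwise weights
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        Ww[np.ix_(idx, idx)] = 1.0 / len(idx)
    Sb = K @ Wb @ K
    Sw = K @ Ww @ K + ridge * np.eye(n)  # ridge for numerical stability
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(evals.real)
    return evecs.real[:, order[:r]]      # columns alpha_1, ..., alpha_r
```

A test pixel is embedded as $\mathbf{z} = \mathbf{A}^{\top}\mathbf{k}(\mathbf{x})$, where its composite kernel column against the training samples is computed with `composite_gram` (test-sample features on one side).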

Practical Considerations:

We note the following free parameters in the overall embedding that affect the subspace that is generated: the embedding dimension $r$, the mixture weights $d_m$ used in the composite kernel, and the choice of kernel and related kernel parameters. Unlike some other embedding techniques, such as LDA and its variants, where the embedding dimension is upper bounded due to rank deficiency of the between-class scatter, with composite kernel local ADA the between-class angular scatter is not rank limited; as a result, the projection matrix resulting from the solution to the generalized eigenvalue problem does not enforce an upper bound on the embedding dimension. Hence, $r$ is a free parameter that represents the dimensionality of the unified subspace generated by all sources. The choice of $r$ should be governed by the information content (as quantified, for example, in the eigenspectra of the decomposition). The choice of weights $d_m$ can be made through cross validation or techniques such as kernel alignment; in our experience, there is often a very wide plateau over a range of weight values, and hence we chose to use simple cross validation to learn the weights from our training data. We utilized a standard radial basis function (RBF) kernel for each source, but the kernel parameter (the width of the RBF kernel) is optimized for each source individually via cross validation.
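As one concrete (hypothetical) way to tune the per-source RBF width, the sketch below scores candidate widths by leave-one-out 1-NN accuracy in the induced RKHS; the actual cross-validation protocol used in our experiments may differ.

```python
import numpy as np

def rbf_gram(X, sigma):
    """RBF Gram matrix among the rows of X."""
    sq = (X**2).sum(1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma**2))

def select_rbf_width(X, y, widths):
    """Pick an RBF width for one source by leave-one-out 1-NN accuracy
    in the RKHS. For an RBF kernel, a larger k(x_i, x_j) means a smaller
    RKHS distance, so the nearest neighbor maximizes the kernel value.
    """
    best_w, best_acc = widths[0], -1.0
    for w in widths:
        K = rbf_gram(X, w)
        np.fill_diagonal(K, -np.inf)      # exclude self-matches
        pred = y[np.argmax(K, axis=1)]    # label of nearest neighbor
        acc = np.mean(pred == y)
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w
```

The same loop applies per source, yielding one width per base kernel.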


We note that following a CKLADA projection, a simple classifier can be utilized for downstream analysis. This follows from the observation that applying kernel projections while simultaneously ensuring preservation of angular locality results in subspaces where class-specific data are compactly clustered. We validate and measure the efficacy of subspaces resulting from CKLADA by utilizing the following classifiers: (1) a K nearest neighbor (KNN) classifier, (2) a Gaussian maximum-likelihood (ML) classifier, and (3) a sparse representation based classifier (SRC) [15]. While the choices of KNN and ML are obvious for subspaces formed by kernel projections, as noted in [2], we make a remark on the choice of SRC as an additional classifier. This choice is motivated not only by the observation that SRC has emerged as a powerful classification approach for high dimensional remote sensing data [16], exploiting the inherent sparsity when representing samples using training dictionaries, but also because popular solvers (e.g., Orthogonal Matching Pursuit, OMP) are driven by inner products when learning the sparse representation and hence essentially exploit angular information. They implicitly seek a representation in which a test sample is represented sparsely in a dictionary of training data, such that the atoms that eventually contribute to the representation (i.e., have significant non-zero coefficients) are angularly similar to the test sample. We hence contend that CKLADA is particularly well suited for SRC and its variants.
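To make the inner product argument concrete, here is a minimal OMP-based SRC sketch. This is our own simplified implementation for illustration, not the solver used in the experiments.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit (sketch): greedily pick atoms of the
    column-normalized dictionary D with maximal inner product against
    the residual, i.e., the angular mechanism discussed above."""
    idx, resid, coef = [], x.copy(), np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ resid)))
        if j not in idx:
            idx.append(j)
        # Least-squares fit on the selected atoms, then update residual.
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        resid = x - D[:, idx] @ coef
    return np.array(idx), coef

def src_classify(D, labels, x, k=3):
    """SRC decision: the class whose selected atoms give the smallest
    reconstruction residual for x."""
    idx, coef = omp(D, x, k)
    best_c, best_r = None, np.inf
    for c in np.unique(labels):
        mask = labels[idx] == c
        r = np.linalg.norm(x - D[:, idx[mask]] @ coef[mask])
        if r < best_r:
            best_c, best_r = c, r
    return best_c
```

In practice the dictionary columns are the CKLADA-projected training samples, unit-normalized.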

4 Experimental Setup and Results


The dataset we utilize represents a sensor fusion scenario, comprising LiDAR pseudo-waveforms and a hyperspectral image cube, and is popular in the remote sensing community as a benchmark. The data were acquired over the University of Houston campus and the neighboring urban area. All the images are at the same spatial resolution of 2.5 m and have the same spatial size. The hyperspectral image was acquired with the ITRES CASI sensor and contains 144 spectral bands ranging from 380 nm to 1050 nm. The LiDAR DSM data were acquired by an Optech Gemini sensor and then co-registered to the hyperspectral image. The laser pulse wavelength and repetition rate were 1064 nm and 167 kHz, respectively. The instrument can make up to 4 range measurements. The total number of ground reference samples is 2832, covering 15 classes of interest, with approximately 200 samples per class; these were determined by photo-interpretation of high resolution optical imagery. The ground truth map is shown along with the hyperspectral image in Figure 1.

Figure 1: True color image of hyperspectral University of Houston data, and the ground truth.

From the dense LiDAR point cloud, a pseudo-waveform was generated for each geolocation. The pseudo-waveform was generated by quantizing the elevation into uniformly sized bins and computing the average intensity of the returns in each elevation bin. This provides a cube of waveform-like LiDAR data that is co-registered with our hyperspectral image. We note that, like spectral reflectance profiles, which have unique shapes depending on the material in the pixel, the shapes of pseudo-waveforms also correlate with the material and topographic properties in the image. Hence, angular measures (such as those provided by CKLADA) are more appropriate for such analysis than Euclidean measures.
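The pseudo-waveform construction described above can be sketched as follows for a single geolocation; the bin count and elevation range below are illustrative assumptions, not the values used in our experiments.

```python
import numpy as np

def pseudo_waveform(elev, intensity, n_bins=16, zmin=0.0, zmax=40.0):
    """Bin the LiDAR returns of one geolocation by elevation and average
    the intensities per bin, yielding a fixed-length waveform-like
    vector. Empty bins contribute zero.
    """
    edges = np.linspace(zmin, zmax, n_bins + 1)
    # Map each return to its elevation bin (clamped to the valid range).
    which = np.clip(np.digitize(elev, edges) - 1, 0, n_bins - 1)
    counts = np.bincount(which, minlength=n_bins)
    sums = np.bincount(which, weights=intensity, minlength=n_bins)
    wf = np.zeros(n_bins)
    nz = counts > 0
    wf[nz] = sums[nz] / counts[nz]    # average intensity per bin
    return wf
```

Stacking these vectors over all geolocations yields the waveform-like LiDAR cube used as one source in the composite kernel.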


To validate the efficacy of the subspaces generated by CKLADA and CKADA, we set up classification experiments using the University of Houston multi-sensor dataset. We used popular and commonly employed embeddings as baselines, including CKLFDA and KPCA. Each of these embeddings was used with three classifiers: KNN, ML, and SRC. We note that CKLFDA is a composite kernel counterpart of LFDA based on Euclidean distance measures and is the closest multi-source embedding against which CKLADA can be compared; a comparison of CKLFDA vs. CKLADA provides a direct understanding of the benefits of angular information for multi-source embeddings, and of the resulting algorithmic framework proposed in Section 3.

With CKLFDA, CKADA, and CKLADA, we treat each sensor (hyperspectral imagery and pseudo-waveform LiDAR) as a source, each getting its own dedicated base kernel. With single-kernel KPCA, we stack features from the two sensors and project them via a single transformation. In all cases, we use RBF kernels as our base kernels, and the width of each kernel is determined via cross validation. Other free parameters, including the sparsity level used in SRC and the number of nearest neighbors used in KNN, are also determined empirically from the training data via cross validation.

4.3 Visualization of Embeddings

To provide a visual demonstration of the power of composite kernel angular discriminant analysis for geospatial image analysis, we provide visualizations of the composite kernel projections CKLADA, CKADA (both angular discriminant analysis), and CKLFDA. The figure depicts false color images generated by projecting the multi-sensor data onto the three most significant eigenvectors learned from CKLADA, CKADA, and CKLFDA, respectively. It can be clearly seen that CKLADA (and CKADA to some degree) preserves object-specific properties throughout the image (for example, highly textured objects such as urban vegetation and residential areas have their spatial context significantly preserved in the lower dimensional subspace). On the contrary, CKLFDA, which can be considered the closest benchmark competitor, does not perform as well. Further, towards the right corner of the image, we point the reader to the substantial benefit of CKLADA under cloud shadows: spatial structures under cloud shadows are visible with CKLADA (and CKADA to some extent), but not with CKLFDA.

CKLADA (Proposed)
CKADA (Proposed)

4.4 Comparative Results

Experimental results comparing the performance of CKADA and CKLADA with baseline embeddings are provided below. As mentioned previously, free parameters were determined empirically via cross validation. The first table depicts overall accuracy as a function of training sample size, ranging from a small to a sufficiently large value, for the proposed and baseline embeddings with various classifiers. We notice that the proposed composite kernel angular discriminant analysis approaches (CKADA and CKLADA) provide a consistent improvement in performance compared to the state of the art (CKLFDA), and provide even higher accuracies compared to a traditional single-kernel baseline, KPCA. Even when using very limited training data, CKLADA substantially outperforms the other composite kernel and single-kernel methods (using just 10 samples per class, we obtain a substantial gain in performance with CKLADA). Likewise, when using a sufficiently large training sample set (e.g., 50 samples per class), CKLADA and CKADA outperform the other methods. In the three subsequent tables, we depict class-specific accuracies, as well as overall and average accuracies, for the proposed methods and baselines using the SRC, ML, and KNN classifiers, respectively. Once again, it is clear that CKADA and CKLADA consistently provide robust classification, particularly for the “difficult” classes (such as residential buildings, commercial buildings, roads, and parking lots). We also provide classification maps using the proposed method (CKLADA) and its closest competitor, CKLFDA, with the SRC classifier. We note that CKLADA results in a map with very little noise and few misclassifications, and it is particularly robust in the very challenging area in the right corner of the image that lies under a cloud shadow (e.g., with CKLFDA, the area under the cloud shadow gets systematically misclassified as water, something that is visibly remedied by CKLADA). The improvements for the difficult classes are even more apparent in these ground cover classification maps.

Comparison of various feature embedding algorithms for multi-sensor image analysis as a function of training sample size (overall accuracy, % ± standard deviation; columns indicate training samples per class)


Method        10           20          30          40          50
CKLADA-SRC    73.2 ± 3.8   87.1 ± 1.1  90.4 ± 1.1  92.3 ± 0.9  93.3 ± 0.9
CKADA-SRC     74.5 ± 2.3   85.8 ± 1.2  89.2 ± 1.2  91.2 ± 1.1  92.3 ± 0.8
CKLFDA-SRC    66.6 ± 2.5   81.6 ± 1.6  86.2 ± 1.3  87.8 ± 1.0  88.9 ± 1.0
KPCA-SRC      67.9 ± 1.4   79.3 ± 1.3  84.1 ± 1.1  86.7 ± 0.8  88.5 ± 0.9
CKLADA-ML     74.33 ± 2.4  85.7 ± 1.6  91.1 ± 1.2  93.3 ± 1.0  94.3 ± 0.7
CKADA-ML      77.2 ± 2.7   86.7 ± 1.6  91.6 ± 1.2  93.0 ± 1.0  94.0 ± 0.8
CKLFDA-ML     70.4 ± 2.4   81.2 ± 1.7  86.7 ± 1.6  88.9 ± 1.1  90.2 ± 1.0
KPCA-ML       72.15 ± 2.7  85.1 ± 1.9  91.2 ± 1.2  93.3 ± 1.0  94.3 ± 0.8
CKLADA-KNN    80.3 ± 1.7   88.1 ± 1.3  91.4 ± 1.0  93.0 ± 0.9  93.9 ± 0.7
CKADA-KNN     80.3 ± 1.6   87.0 ± 1.2  90.1 ± 1.0  91.4 ± 0.9  92.4 ± 0.8
CKLFDA-KNN    70.7 ± 2.2   82.5 ± 1.6  86.5 ± 1.2  88.1 ± 1.0  89.3 ± 0.9
KPCA-KNN      69.7 ± 1.6   79.2 ± 1.1  83.7 ± 1.3  86.4 ± 0.9  87.8 ± 0.9

Using proposed and baseline feature embeddings with SRC (class-specific accuracies, % ± standard deviation; method columns in the same order as the table above: CKLADA, CKADA, CKLFDA, KPCA)

Class            Train  Test  CKLADA      CKADA       CKLFDA      KPCA
Grass-healthy    30     168   99.4 ± 1.0  99.0 ± 2.0  98.6 ± 1.7  98.0 ± 1.8
Grass-stressed   30     160   97.7 ± 1.4  96.0 ± 1.6  98.0 ± 1.2  95.9 ± 3.1
Grass-synthetic  30     162   99.7 ± 0.5  99.7 ± 0.4  95.8 ± 2.5  98.3 ± 1.3
Tree             30     158   98.1 ± 1.2  95.9 ± 2.7  98.8 ± 1.2  99.5 ± 0.5
Soil             30     156   99.8 ± 0.3  98.6 ± 0.9  98.5 ± 1.7  97.3 ± 2.4
Water            30     152   98.7 ± 2.3  95.1 ± 3.0  97.0 ± 2.3  96.9 ± 2.3
Residential      30     166   85.8 ± 5.5  81.3 ± 5.0  80.3 ± 4.6  77.1 ± 6.1
Commercial       30     161   86.8 ± 5.5  82.9 ± 6.4  77.2 ± 6.4  79.5 ± 5.6
Road             30     163   80.2 ± 5.7  78.5 ± 6.1  73.7 ± 5.5  64.4 ± 5.1
Highway          30     161   90.9 ± 3.2  90.8 ± 3.7  86.9 ± 3.9  77.9 ± 5.2
Railway          30     151   88.5 ± 3.9  86.2 ± 4.5  82.6 ± 4.1  76.1 ± 5.8
Parking Lot 1    30     162   75.2 ± 5.7  74.2 ± 6.3  67.8 ± 6.4  62.3 ± 5.0
Parking Lot 2    30     154   65.5 ± 4.7  74.5 ± 4.5  58.4 ± 6.8  50.7 ± 5.7
Tennis Court     30     151   99.5 ± 0.5  98.5 ± 1.1  98.6 ± 2.0  96.6 ± 2.2
Running Track    30     157   98.9 ± 0.5  98.7 ± 0.7  98.4 ± 0.9  99.8 ± 0.4
OA                            91.0 ± 1.0  89.9 ± 0.9  87.3 ± 1.2  84.7 ± 1.1
AA                            91.0 ± 2.8  90.0 ± 3.2  87.4 ± 3.4  84.7 ± 3.5

Using proposed and baseline feature embeddings with Gaussian ML (class-specific accuracies, % ± standard deviation; method columns in the same order as above: CKLADA, CKADA, CKLFDA, KPCA)

Class            Train  Test  CKLADA      CKADA        CKLFDA       KPCA
Grass-healthy    30     168   94.9 ± 5.4  95.1 ± 4.1   92.6 ± 5.2   95.9 ± 4.4
Grass-stressed   30     160   99.4 ± 0.8  97.8 ± 1.9   99.8 ± 0.3   98.6 ± 1.9
Grass-synthetic  30     162   96.5 ± 0.3  96.8 ± 2.7   88.2 ± 4.8   96.7 ± 2.7
Tree             30     158   99.7 ± 0.8  99.5 ± 0.8   98.9 ± 1.7   99.9 ± 0.3
Soil             30     156   97.6 ± 2.7  93.2 ± 4.4   95.0 ± 4.4   97.4 ± 2.5
Water            30     152   95.1 ± 2.9  92.6 ± 4.0   94.0 ± 3.5   96.0 ± 2.8
Residential      30     166   86.1 ± 6.9  80.8 ± 6.9   82.4 ± 7.9   82.2 ± 7.8
Commercial       30     161   86.1 ± 8.3  91.3 ± 6.6   73.5 ± 11.8  86.5 ± 9.7
Road             30     163   83.5 ± 8.6  87.8 ± 6.8   76.8 ± 7.2   82.6 ± 8.9
Highway          30     161   82.8 ± 8.3  83.9 ± 6.9   76.7 ± 7.8   83.4 ± 8.3
Railway          30     151   84.5 ± 7.7  81.8 ± 7.6   80.6 ± 8.0   83.6 ± 8.6
Parking Lot 1    30     162   71.7 ± 8.7  49.8 ± 11.6  64.5 ± 7.6   71.8 ± 10.2
Parking Lot 2    30     154   92.2 ± 3.3  96.7 ± 1.5   84.8 ± 6.7   92.5 ± 4.3
Tennis Court     30     151   98.0 ± 2.0  96.7 ± 3.4   95.5 ± 2.8   98.5 ± 1.9
Running Track    30     157   96.6 ± 2.0  97.1 ± 1.9   94.9 ± 2.9   97.9 ± 1.5
OA                            90.9 ± 1.2  89.3 ± 1.3   86.5 ± 1.3   84.6 ± 1.6
AA                            91.0 ± 4.7  89.4 ± 4.7   86.6 ± 5.5   90.9 ± 5.0

Using proposed and baseline feature embeddings with KNN (class-specific accuracies, % ± standard deviation; method columns in the same order as above: CKLADA, CKADA, CKLFDA, KPCA)

Class            Train  Test  CKLADA      CKADA       CKLFDA      KPCA
Grass-healthy    30     168   98.6 ± 3.0  99.4 ± 2.3  97.4 ± 2.9  98.2 ± 2.8
Grass-stressed   30     160   97.5 ± 1.5  96.9 ± 1.4  97.3 ± 2.0  97.2 ± 2.6
Grass-synthetic  30     162   99.5 ± 0.8  99.7 ± 0.4  96.9 ± 1.9  97.9 ± 1.6
Tree             30     158   98.1 ± 1.8  96.0 ± 2.7  96.9 ± 2.9  99.7 ± 0.4
Soil             30     156   99.7 ± 0.5  98.5 ± 0.8  98.1 ± 2.6  98.1 ± 1.1
Water            30     152   98.4 ± 2.4  94.8 ± 2.4  97.3 ± 2.2  96.4 ± 1.7
Residential      30     166   84.6 ± 5.2  79.6 ± 6.3  73.8 ± 6.0  64.4 ± 6.7
Commercial       30     161   82.1 ± 8.1  76.4 ± 8.2  77.5 ± 7.8  79.2 ± 6.7
Road             30     163   83.1 ± 4.9  78.7 ± 5.1  74.0 ± 5.8  64.3 ± 5.3
Highway          30     161   94.3 ± 2.7  93.3 ± 3.2  89.0 ± 3.4  80.8 ± 3.8
Railway          30     151   93.4 ± 3.2  90.3 ± 4.2  83.9 ± 5.3  76.0 ± 6.3
Parking Lot 1    30     162   78.8 ± 5.9  76.6 ± 6.4  70.7 ± 6.5  67.8 ± 6.5
Parking Lot 2    30     154   65.4 ± 5.0  69.6 ± 4.1  54.2 ± 4.9  41.0 ± 4.7
Tennis Court     30     151   99.6 ± 0.5  99.2 ± 0.8  98.3 ± 1.7  99.3 ± 0.7
Running Track    30     157   99.1 ± 0.4  98.8 ± 0.6  97.7 ± 1.0  99.2 ± 0.7
OA                            91.5 ± 1.1  89.8 ± 1.0  86.8 ± 1.1  83.9 ± 1.1
AA                            91.5 ± 3.1  89.8 ± 3.3  86.7 ± 3.8  84.0 ± 3.4

Classification map generated using CKLADA (Proposed) with 20 training samples per class.
Classification map generated using CKLFDA with 20 training samples per class.


5 Conclusion

We presented a composite kernel variant of angular discriminant analysis and local angular discriminant analysis. Angular discriminant analysis was previously shown to be very beneficial for high dimensional hyperspectral classification. In this paper, we expanded those developments via a composite kernel and demonstrated that this paradigm can be a very useful feature embedding algorithm in multi-source scenarios, such as when fusing multiple geospatial images. We validated our results with a popular multi-sensor benchmark and demonstrated that composite kernel angular discriminant analysis consistently outperforms other feature embeddings.


  1. M. Dalponte, L. Bruzzone, and D. Gianelle, “Fusion of hyperspectral and lidar remote sensing data for classification of complex forest areas,” IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 5, pp. 1416–1427, 2008.
  2. Y. Zhang and S. Prasad, “Locality preserving composite kernel feature extraction for multi-source geospatial image analysis,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 8, no. 3, pp. 1385–1392, March 2015.
  3. R. Brennan and T. Webster, “Object-oriented land cover classification of lidar-derived surfaces,” Canadian Journal of Remote Sensing, vol. 32, no. 2, pp. 162–172, 2006.
  4. M. Shimoni, G. Tolt, C. Perneel, and J. Ahlberg, “Detection of vehicles in shadow areas using combined hyperspectral and lidar data,” in Geoscience and Remote Sensing Symposium (IGARSS). IEEE, 2011, pp. 4427–4430.
  5. M. Pedergnana, P. R. Marpu, M. Dalla Mura, J. A. Benediktsson, and L. Bruzzone, “Fusion of hyperspectral and lidar data using morphological attribute profiles,” in SPIE Remote Sensing. International Society for Optics and Photonics, 2011, pp. 81801G.
  6. D. Lunga, S. Prasad, M. Crawford, and O. Ersoy, “Manifold-learning-based feature extraction for classification of hyperspectral data: A review of advances in manifold learning,” Signal Processing Magazine, IEEE, vol. 31, no. 1, pp. 55–66, 2014.
  7. M. Cui and S. Prasad, “Angular discriminant analysis for hyperspectral image classification,” Selected Topics in Signal Processing, IEEE Journal of, vol. 9, no. 6, pp. 1003–1015, 2015.
  8. S. Prasad and M. Cui, “Sparse representations for classification of high dimensional multi-sensor geospatial data,” in Proceedings of the 2013 Asilomar Conference on Signals, Systems and Computers., November 2013, pp. 811–815.
  9. M. Cui and S. Prasad, “Sparsity promoting dimensionality reduction for classification of high dimensional hyperspectral images,” in 38th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 2013, pp. 2154–2158.
  10. S. Prasad, M. Cui, W. Li, and J. Fowler, “Segmented mixture of Gaussian classification for robust sub-pixel hyperspectral ATR,” IEEE Geoscience and Remote Sensing Letters, vol. 11, no. 1, pp. 138–142, January 2014.
  11. M. Cui, S. Prasad, W. Li, and L. Bruce, “Locality preserving genetic algorithms for spatial-spectral hyperspectral image classification,” Selected Topics in Applied Earth Observations and Remote Sensing, IEEE Journal of, vol. 6, no. 3, pp. 1688–1697, 2013.
  12. Z. Wang and X. Sun, “Multiple kernel local fisher discriminant analysis for face recognition,” Signal Processing, vol. 93, no. 6, pp. 1496–1509, 2013.
  13. D. Tuia, F. Ratle, A. Pozdnoukhov, and G. Camps-Valls, “Multisource composite kernels for urban-image classification,” IEEE Geoscience and Remote Sensing Letters, vol. 7, no. 1, pp. 88–92, 2010.
  14. G. Camps-Valls, L. Gomez-Chova, J. Muñoz-Marí, J. Vila-Francés, and J. Calpe-Maravilla, “Composite kernels for hyperspectral image classification,” IEEE Geoscience and Remote Sensing Letters, vol. 3, no. 1, pp. 93–97, 2006.
  15. J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma, “Robust face recognition via sparse representation,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 31, no. 2, pp. 210–227, 2009.
  16. M. Cui and S. Prasad, “Class dependent sparse representation classifier for robust hyperspectral image classification,” IEEE Transactions on Geosciences and Remote Sensing, vol. 53, no. 5, pp. 2683–2695, May 2015.
  17. X.-T. Yuan, X. Liu, and S. Yan, “Visual classification with multitask joint sparse representation,” Image Processing, IEEE Transactions on, vol. 21, no. 10, pp. 4349–4360, 2012.
  18. Y. Chen, N. M. Nasrabadi, and T. D. Tran, “Hyperspectral image classification using dictionary-based sparse representation,” Geoscience and Remote Sensing, IEEE Transactions on, vol. 49, no. 10, pp. 3973–3985, 2011.
  19. Y. Chen, N. Nasrabadi, and T. Tran, “Sparse representation for target detection in hyperspectral imagery,” Selected Topics in Signal Processing, IEEE Journal of, vol. 5, no. 3, pp. 629–640, 2011.
  20. L. Zhang, M. Yang, and X. Feng, “Sparse representation or collaborative representation: Which helps face recognition?” in Computer Vision (ICCV), 2011 IEEE International Conference on. IEEE, 2011, pp. 471–478.