Low-Shot Learning for the Semantic Segmentation of Remote Sensing Imagery

Ronald Kemker, Ryan Luu, and Christopher Kanan. R. Kemker, R. Luu, and C. Kanan are with the Machine and Neuromorphic Perception Laboratory in the Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY 14623 USA (http://klab.cis.rit.edu/).
Abstract

Recent advances in computer vision using deep learning with RGB imagery (e.g., object recognition and detection) have been made possible thanks to the development of large annotated RGB image datasets. In contrast, multispectral image (MSI) and hyperspectral image (HSI) datasets contain far fewer labeled images, in part due to the wide variety of sensors used. These annotations are especially limited for semantic segmentation, or pixel-wise classification, of remote sensing imagery because it is labor intensive to generate image annotations. Low-shot learning algorithms can make effective inferences despite smaller amounts of annotated data. In this paper, we study low-shot learning using self-taught feature learning for semantic segmentation. We introduce 1) an improved self-taught feature learning framework for HSI and MSI data and 2) a semi-supervised classification algorithm. When these are combined, they achieve state-of-the-art performance on remote sensing datasets that have little annotated training data available. These low-shot learning frameworks will reduce the manual image annotation burden and improve semantic segmentation performance for remote sensing imagery.

Index Terms: Hyperspectral imaging, self-taught learning, feature learning, deep learning, semi-supervised learning, semantic segmentation

I Introduction

Semantic segmentation is a computer vision task that involves assigning a categorical label to each pixel in an image (i.e., pixel-wise classification). For color (RGB) imagery, deep convolutional neural networks (DCNNs) are continually pushing the state-of-the-art for this task. This is enabled by the availability of large annotated RGB datasets. When small amounts of data are used, conventional DCNNs generalize poorly, especially deeper models. This has made it difficult to use models designed for RGB data with multispectral imagery (MSI) and hyperspectral imagery (HSI) that are widely used in remote sensing, since publicly available annotated data is scarce. Due to the limited availability of annotated data for these “non-RGB” sensors, adapting DCNNs to remote sensing problems requires using low-shot learning. Low-shot learning methods seek to accurately make inferences using a small quantity of annotated data. These methods typically build meaningful feature representations using unsupervised or semi-supervised learning to cope with the reduced amount of labeled data.

Many researchers have explored unsupervised feature extraction as a way to boost performance in semantic segmentation of MSI and HSI. They have tried shallow features (e.g., gray-level co-occurrence matrices [20], Gabor [25], sparse coding [26], and extended morphological attribute profiles [9]), and deep-learning models (e.g., autoencoders [17, 15, 33, 18, 28]) that learn spatial-spectral feature extractors directly from the data. Recently, self-taught learning models have been introduced to build feature-extracting frameworks that generalize well across multiple datasets [13]. In self-taught learning, spatial-spectral feature extractors are trained using a large quantity of unlabeled HSI and then used to extract features from other datasets that we may want to classify (i.e. the target datasets). Self-taught learning for HSI semantic segmentation was pioneered in [13].

Fig. 1: Our proposed SuSA architecture for semantic segmentation of remote sensing imagery. For feature extraction, SuSA uses our SMCAE model, a stacked multi-loss convolutional autoencoder that has been trained on unlabeled data using unsupervised learning. For classification, SuSA uses our semi-supervised multi-layer perceptron (SS-MLP) model.

As the dimensionality of each feature vector increases, the performance for many deterministic models (e.g., support vector machine (SVM)) will degrade [1]. The most common method for preventing this is to reduce the dimensionality of the feature space (e.g., using principal component analysis (PCA)); however, this involves tuning at least one more hyperparameter (i.e., number of dimensions to retain) through cross-validation.
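To make the extra tuning step concrete, here is a minimal sketch of the reduce-then-classify idea using an SVD-based PCA; the function name, data, and dimensions are illustrative and not from the paper:

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project features (n_samples x n_dims) onto the top principal
    components. n_components is the extra hyperparameter that must be
    tuned by cross-validation before the classifier ever sees the data."""
    centered = features - features.mean(axis=0)
    # SVD of the centered data matrix; rows of vt are the principal axes,
    # ordered by decreasing explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))              # hypothetical feature vectors
X_reduced = pca_reduce(X, n_components=10)  # ready to pass to an SVM
```

Every candidate value of `n_components` multiplies the cross-validation cost, which is the burden the text describes.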

Multi-layer perceptron (MLP) neural networks can learn which features are the most important for classification; however, they normally require a large quantity of annotated data to generalize well. Semi-supervised learning uses an unsupervised task to regularize classifiers that do not have enough annotated data to work with. For example, the ladder network architecture proposed by Rasmus et al. [22] trains on labeled and unlabeled data simultaneously to boost segmentation performance on smaller training sets. Semi-supervised frameworks give the model the ability to increase the dimensionality in the feature space, which allows them to learn what features are most important for optimal performance, and also enables them to perform well with little annotated data.

In this paper, we describe the semantic segmentation framework SuSA (self-taught semi-supervised autoencoder) shown in Fig. 1. SuSA is designed to perform well on MSI and HSI data where image annotations are scarce. SuSA is made of two modules. The first module is responsible for extracting spatial-spectral features, and the second module classifies these features.

We evaluated SuSA across multiple training/testing paradigms, and we compared our performance against state-of-the-art solutions for each respective paradigm found in literature, including two recent self-taught feature learning frameworks: MICA-SVM and SCAE-SVM [13]. We describe these in more detail in later sections.

This paper’s major contributions are:

  • We describe the stacked multi-loss convolutional auto-encoder (SMCAE) model (Fig. 4) for spatial-spectral feature extraction in non-RGB remote sensing imagery. SMCAE uses unsupervised self-taught learning to acquire a deep bank of feature extractors. SMCAE is used by SuSA for feature extraction.

  • We propose the semi-supervised multi-layer perceptron (SS-MLP) model (Fig. 5) for the semantic segmentation of non-RGB remote sensing imagery. SuSA uses SS-MLP to classify the feature representations from SMCAE, and SS-MLP’s semi-supervised mechanism enables it to perform well at low-shot learning.

  • We demonstrate that SuSA achieves state-of-the-art results on the Indian Pines and Pavia University datasets hosted on the IEEE GRSS Data and Algorithm Standard Evaluation website.

II Related Work

II-A Self-Taught Feature Learning

The self-taught feature learning paradigm was recently introduced as an unsupervised method for improving the performance for the semantic segmentation of HSI [13]. In the past, researchers learned spatial-spectral features directly from the target data and then passed them to a classifier [18, 20, 27, 28]. Learning spatial-spectral features on a per-image basis is computationally expensive, which may not be ideal for near-real-time analysis. Self-taught feature learning uses large quantities of unlabeled image data to build discriminative feature extractors that generalize well across many datasets, so there is no need to re-train these types of feature extracting frameworks [21].

The authors in [13] introduced two self-taught learning frameworks for the semantic segmentation of HSI. The first model, multi-scale independent component analysis (MICA), learned low-level feature-extracting filters corresponding to bar/edge detectors, color opponency, image gradients, etc. The second model, the stacked convolutional autoencoder (SCAE), is a deep learning approach that extracts deep spatial-spectral features from HSI. These pre-trained models extract features from the target image (i.e., the image we want to classify) and pass them to a support vector machine (SVM) classifier. Since MICA-SVM and SCAE-SVM provide state-of-the-art performance across multiple benchmark datasets, we compare our proposed work against them.

The SCAE model consisted of three separate convolutional autoencoder (CAE) modules trained in sequence. The training loss for each CAE was the mean-squared error (MSE) between the input data and the reconstructed output (also known as the data layer). It was shown in [30] that backpropagation is better at optimizing trainable parameters that are closer to where the training loss is computed (i.e., training error signal) than the trainable parameters in deeper layers. The solution was to take a weighted sum of the reconstruction loss for every encoder/decoder pair, which allowed the network to reduce reconstruction errors that occur in deeper layers.

In this paper, we introduce the stacked multi-loss convolutional autoencoder (SMCAE) spatial-spectral feature extracting framework (Fig. 4). It is made up of multiple MCAE modules, where each uses multiple loss functions to incorporate and correct reconstruction errors from both shallow and deeper CAE layers. SMCAE trains, extracts, and concatenates feature responses from the individual MCAEs in the way SCAE is built from individual CAEs. SMCAE allows the user to extract deep spatial-spectral features directly from the image data.

II-B Semi-Supervised Learning

Self-taught feature learning focuses on unsupervised learning of features from additional data, which are then used by a supervised system. Semi-supervised algorithms combine supervised and unsupervised learning to improve generalization on supervised tasks, which in turn improves classification performance on test data [30, 24, 22]. In both cases, unsupervised learning helps these algorithms avoid overfitting when given only a small number of labeled HSI samples.

A number of discriminative semi-supervised methods have been adapted for HSI classification. The transductive support vector machine (TSVM) is a low-density separation algorithm that saw early success. TSVM seeks a decision boundary that maximizes the margin between classes using both labeled and unlabeled data [4]. TSVM outperformed the inductive SVM when evaluated on the Indian Pines HSI dataset [3]; however, TSVM is computationally expensive and has a tendency to fall into local minima.

Camps-Valls et al. [5] trained graph-based models for HSI classification using labeled and unlabeled data. Their model iteratively assigned labels to unlabeled pixels that were clustered near labeled pixels. Their model outperformed a standard SVM on the Indian Pines dataset. Using manifold regularization, the Laplacian support vector machine (LapSVM) expanded the graph-based model and showed promise in MSI classification and cloud screening [11]. LapSVM was later modified to incorporate spatial-spectral information [32] and semi-supervised kernel propagation with sparse coding [31]. Ratle et al. [23] recognized the shortcomings of using an SVM and replaced it with a semi-supervised neural network. This neural network outperformed LapSVM and TSVM on the Indian Pines and Kennedy Space Center HSI datasets in both classification accuracy and computational efficiency.

Dopido et al. [7] introduced a semi-supervised model that jointly learned the classification and spectral unmixing task to help improve classification performance on training sets with only a few labeled samples. Liu et al. [16] used the ladder network architecture proposed by [22] to semantically segment HSI. Their ladder network model used convolutional hidden layers in order to learn spatial-spectral features directly from the image. Both of these frameworks introduce an unsupervised task that is jointly optimized with the classification task to help regularize the model, which helped the model generalize and perform well with smaller training sets.

III Methods

III-A Multi-Loss Convolutional Autoencoder

Fig. 2: MCAE model architecture. Dashed lines indicate where the mean-squared error loss is calculated for layer $\ell$, and solid lines are the feed-forward and lateral network connections where information is passed. The refinement layers (Fig. 3) are responsible for reconstructing the downsampled feature response.

Here, we describe the MCAE model (Fig. 2), a significant improvement over the original CAE model [13]. Formally, an autoencoder is an unsupervised neural network that attempts to reconstruct the input $\mathbf{x}$ (e.g., a sample from HSI) such that $\hat{\mathbf{x}} \approx \mathbf{x}$, where $\hat{\mathbf{x}}$ is the autoencoder's reconstruction of the original input $\mathbf{x}$. An autoencoder can be trained with various constraints to learn a meaningful feature representation that can still be used for reconstruction. Typically, autoencoders include a separate encoder network that learns a compressed feature representation of the data and a symmetrical decoder network that reconstructs the compressed feature representation back into an estimate of the original input. These networks have hidden layers that use trainable weights $\mathbf{W}$ and biases $\mathbf{b}$ to compress and then reconstruct the input. Since this is an unsupervised learning method, the MSE between $\mathbf{x}$ and $\hat{\mathbf{x}}$ is the loss used to train the network. Once trained, we can extract the features $\mathbf{h}$ from an autoencoder with a single hidden layer such that

$$\mathbf{h} = f\left(\mathbf{W}\mathbf{x} + \mathbf{b}\right) \qquad (1)$$

where $f(\cdot)$ is the non-linear activation function (e.g., the CAE used the Rectified Linear Unit (ReLU) activation). A CAE replaces the multiply/add operations with 2-D convolution operations,

$$\mathbf{h} = f\left(\mathbf{W} * \mathbf{X} + \mathbf{b}\right) \qquad (2)$$

where $*$ denotes the 2-D convolution operation and $\mathbf{X}$ is the 2-D image data that will be convolved. 2-D convolution operations learn position-invariant feature representations; that is, the feature response for a given object in an image is independent of the pixel location. A convolution layer slides learned filters across the target image, so the number of trainable parameters is $k^2 N_i N_o$, where $k$ is the number of pixels along the edge of the convolution filter, and $N_i$ and $N_o$ are the numbers of input and output features, respectively. Standard multi-layer perceptron (MLP) neural networks have a trainable parameter relating every pixel to every input/output feature, resulting in $P N_i N_o$ trainable parameters, where $P$ is the number of pixels in the image data. DCNNs almost always have fewer trainable parameters than MLPs of equivalent depth, which can prevent the model from overfitting. In [13], the stacked CAE (SCAE) model is built from several CAEs, where the input $\mathbf{x}_i$ to the $i$-th CAE is the output $\mathbf{h}_{i-1}$ from the last hidden layer of the $(i-1)$-th CAE,

$$\mathbf{x}_i = \mathbf{h}_{i-1} \qquad (3)$$

where $i > 1$. This allowed the model to learn a deeper feature representation from the input data. Each CAE contains multiple hidden layers, and the down-sampled feature response is reconstructed by the refinement layer shown in Fig. 3.
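The parameter-count comparison above can be checked with a short sketch; the layer sizes below are arbitrary examples, not the paper's architecture:

```python
def conv_params(k, n_in, n_out):
    """Trainable weights in a 2-D convolution layer: one k x k filter
    per input/output feature pair, independent of image size."""
    return k * k * n_in * n_out

def mlp_params(n_pixels, n_in, n_out):
    """Fully connected layer relating every pixel to every
    input/output feature pair."""
    return n_pixels * n_in * n_out

# e.g., a 32x32 patch with 16 input and 32 output features:
print(conv_params(3, 16, 32))       # 3x3 filters -> 4608 weights
print(mlp_params(32 * 32, 16, 32))  # fully connected -> 524288 weights
```

The gap grows with image size, since the convolutional count does not depend on the number of pixels at all.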

Fig. 3: Refinement layer used in CAE and MCAE.

Valpola [30] showed that, for an autoencoder with multiple hidden layers, errors in deeper layers are harder to correct during back-propagation because those layers are farther from the training signal. To fix this, we train each CAE using a weighted sum of the reconstruction losses for each hidden layer,

$$\mathcal{L} = \sum_{\ell=1}^{L} \lambda_\ell \, \mathrm{MSE}\left(\mathbf{z}_\ell, \hat{\mathbf{z}}_\ell\right) \qquad (4)$$

where $L$ is the number of hidden layers, $\mathrm{MSE}(\mathbf{z}_\ell, \hat{\mathbf{z}}_\ell)$ is the MSE between the encoder activation and the decoder reconstruction at layer $\ell$, and $\lambda_\ell$ is the loss weight at layer $\ell$. We refer to this new feature-extracting model as MCAE. The SMCAE model (Fig. 4) trains, extracts, and concatenates feature responses from the individual MCAEs in the same manner as SCAE.
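A minimal numerical sketch of the weighted multi-loss in Eq. (4); the function and variable names are assumptions, and in the real model the reconstructions come from the decoder's refinement layers:

```python
import numpy as np

def mcae_loss(recon_pairs, loss_weights):
    """Weighted sum of per-layer reconstruction losses.
    recon_pairs: list of (encoder activation, decoder reconstruction)
    loss_weights: one weight per hidden layer."""
    return sum(lam * np.mean((z - z_hat) ** 2)
               for (z, z_hat), lam in zip(recon_pairs, loss_weights))
```

With all of the weight on the first (data) layer, this reduces to the original CAE loss; non-zero deeper weights let backpropagation correct reconstruction errors in deeper layers directly.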

Fig. 4: The stacked multi-loss convolutional autoencoder (SMCAE) spatial-spectral feature extractor used in this paper consists of two or more MCAE modules. The red lines denote where features are being extracted, transferred to the next MCAE, and concatenated into a final feature response.

III-B Semi-Supervised Multi-Layer Perceptron Neural Network

In [13], a major bottleneck in their self-taught learning model was that it used PCA to reduce the feature dimensionality prior to being classified by an SVM. This was necessary because SVMs can suffer from the curse of dimensionality when the feature dimensionality is too high. The ideal number of principal components varied across datasets and required cross-validation. In contrast, MLP-based neural networks are able to learn what features are most important for semantic segmentation. The downside is that standard MLPs require large quantities of labeled data or they will overfit.

Fig. 5: The semi-supervised multi-layer perceptron (SS-MLP) classification framework used in this paper.

To overcome this problem, we propose a semi-supervised MLP (SS-MLP). As shown in Fig. 5, SS-MLP has a symmetric encoder-decoder framework. The feed-forward encoder network segments the original input, and the decoder reconstructs the compressed feature representation back into the original input. The reconstruction serves as an additional regularizer that can prevent the model from overfitting when only a few training samples are available. SS-MLP is trained by minimizing the total supervised and unsupervised loss

$$\mathcal{L} = \mathcal{L}_{CE} + \sum_{\ell=1}^{L} \lambda_\ell \, \mathrm{MSE}\left(\mathbf{z}_\ell, \hat{\mathbf{z}}_\ell\right) \qquad (5)$$

where $\mathcal{L}_{CE}$ is the cross-entropy loss for classification, $\mathrm{MSE}(\mathbf{z}_\ell, \hat{\mathbf{z}}_\ell)$ is the MSE of the reconstruction at layer $\ell$, $\lambda_\ell$ is the importance of the unsupervised loss term at layer $\ell$, and $L$ is the number of hidden layers in SS-MLP. The weights $\lambda_\ell$ are set empirically. This optimization strategy is similar to the ladder network used in [16], where the network uses convolutional units to learn spatial-spectral features from a single HSI cube. In that case, the learned spatial-spectral features are specific to that one dataset and may not transfer well to other HSI we wish to classify. Self-taught learning features, which are learned from a large quantity of imagery, can be more discriminative and generalize well across multiple datasets. In this paper, we use pre-trained SMCAE models to extract features from the labeled data and then pass them to SS-MLP to generate the final classification map.
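The joint objective in Eq. (5) can be sketched as follows; the argument names are illustrative, and in the real model the reconstruction pairs come from the SS-MLP decoder:

```python
import numpy as np

def susa_joint_loss(probs, labels, recon_pairs, loss_weights):
    """Cross-entropy on the labeled samples plus the weighted
    reconstruction MSE of each hidden layer.
    probs: predicted class probabilities (n_samples x n_classes)
    labels: integer class labels."""
    ce = -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))
    unsup = sum(lam * np.mean((z - z_hat) ** 2)
                for (z, z_hat), lam in zip(recon_pairs, loss_weights))
    return ce + unsup
```

When the reconstruction weights are zero, this is an ordinary supervised MLP; the unsupervised term is what lets unlabeled pixels regularize the classifier.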

III-C Adaptive Non-Linear Activations

Kemker and Kanan [13] showed that classification performance with low-level features could be improved by applying an adaptive non-linearity to the feature response. The SCAE is a deep feature extractor, but it only used a Rectified Linear Unit (ReLU) activation, which sets all negative values to zero. A fixed activation like this may not be the ideal non-linearity for every network layer, so in this paper we use the Parametric Exponential Linear Unit (PELU) activation [29],

$$f(x) = \begin{cases} \frac{a}{b}\,x & x \geq 0 \\ a\left(\exp\left(\frac{x}{b}\right) - 1\right) & x < 0 \end{cases} \qquad (6)$$

where $a$ and $b$ are positive trainable parameters. PELU was shown to increase performance by learning the ideal activation function for each network layer [29]. Depending on the values of $a$ and $b$, PELU can approximate a ReLU activation function or a number of other commonly used activation functions (e.g., LeakyReLU [19] and exponential linear units [6]). In this paper, we use PELU activations with our SMCAE feature extractor and SS-MLP classifier.
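A small sketch of the PELU activation in Eq. (6), assuming the piecewise form from [29]:

```python
import numpy as np

def pelu(x, a, b):
    """Parametric ELU: linear slope a/b for x >= 0 and a saturating
    exponential for x < 0. a and b stay positive and are learned per
    layer alongside the network weights."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, (a / b) * x, a * (np.exp(x / b) - 1.0))

print(pelu([-2.0, 0.0, 3.0], a=1.0, b=1.0))  # ELU-shaped when a = b = 1
```

With $a = b$, the positive slope is 1 (ReLU-like); other settings rescale both halves of the curve, which is how the network adapts the non-linearity per layer.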

IV Experimental Setup

IV-A Data Description

The SCAE and SMCAE frameworks were trained using publicly-available HSI data collected by three different NASA sensors: 1) NASA Jet Propulsion Laboratory’s Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), 2) EO-1 Hyperion imaging spectrometer, and 3) Goddard’s LiDAR, Hyperspectral & Thermal Imager (GLiHT). Relevant technical specifications for each sensor are available in Table I. We attempted to collect data from a wide variety of different locations and climates (e.g., urban, forest, farmland, etc.) so that the frameworks would learn spatial-spectral features that generalize across multiple labeled datasets. Samples from all three sensors can be seen in Fig. 6.

(a) AVIRIS
(b) Hyperion
(c) GLiHT
Fig. 6: RGB visualization of HSI from all three sensors used to train SCAE and SMCAE.
                         AVIRIS         Hyperion        GLiHT           ROSIS
Platform                 Airborne       Satellite       Airborne        Airborne
Spectral Range [nm]      400-2500       400-2500        400-1000        430-838
Spectral Bands [#]       224            220             402             115
FWHM [nm]                10             10              5               5
GSD [m]                  Varies         30              0.3-0.7 (best)
Sensor Type              Whisk Broom    Grating Image   2-D CCD         Grating Image
                                        Spectrometer    Imager          Spectrometer

AVIRIS - Airborne Visible/Infrared Imaging Spectrometer
CCD - Charge-Coupled Device
FWHM - Full-Width, Half-Max
GSD - Ground Sample Distance
GLiHT - Goddard's LiDAR, Hyperspectral & Thermal Imager
ROSIS - Reflective Optics System Imaging Spectrometer
TABLE I: The HSI sensors used in this paper to train and evaluate our SMCAE SS-MLP framework.

The three annotated HSI datasets used to evaluate the SuSA framework are Indian Pines (Fig. 7(a)), Pavia University (Fig. 7(b)), and Salinas Valley (Fig. 7(c)). Indian Pines and Salinas Valley were captured by the AVIRIS HSI sensor and contain mostly agricultural scenes. Pavia University was collected by the Reflective Optics System Imaging Spectrometer (ROSIS) airborne sensor and is an urban scene with several man-made objects. Fig. 7 shows an RGB visualization of all three datasets, and Fig. 8 shows their corresponding ground-truth maps.

                          Indian Pines   Pavia University   Salinas Valley
Sensor                    AVIRIS         ROSIS              AVIRIS
Spatial Dimensions [pix]  145 x 145      610 x 340          512 x 217
GSD [m]                   20             1.3                3.7
Spectral Bands            224            103                224
Spectral Range [nm]       400-2500       430-838            400-2500
Number of Classes         16             9                  16

GSD - Ground Sample Distance
ROSIS - Reflective Optics System Imaging Spectrometer
AVIRIS - Airborne Visible/Infrared Imaging Spectrometer
TABLE II: Benchmark HSI datasets used in this paper to evaluate the algorithms.
(a) Indian Pines
(b) Pavia Univ.
(c) Salinas
Fig. 7: RGB visualization for Indian Pines, Pavia University, and Salinas Valley HSI datasets. See Table II for scale.
(a) Indian Pines
(b) Pavia Univ.
(c) Salinas
Fig. 8: Classification truth maps for Indian Pines, Pavia University, and Salinas Valley HSI datasets.

IV-B Training Parameters

IV-B1 MCAE

The CAE and MCAE frameworks use the parameters listed in Table III throughout this paper. We used the same layer shapes found to work well in [13] for CAE and MCAE to provide a fair comparison between the two models. These networks were trained using the open-source imagery listed in Table I. We randomly sampled a total of 50,000 $32 \times 32 \times B$ image patches from these different HSI images, where $B$ is the number of spectral bands for the corresponding sensor. Of the 50,000 image patches, 45,000 are reserved for training and 5,000 for validation. Bands with low SNR or that correspond to atmospheric absorption are removed. We center each feature in the patch array to zero mean and unit variance prior to training the model. The weights are initialized with Xavier initialization [10] (i.e., drawn from a normal distribution whose variance is chosen based on the number of units), and the biases and PELU parameters are initialized with ones.
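The zero-mean/unit-variance step can be sketched as follows, assuming a patches x height x width x bands array layout (the layout and function name are illustrative):

```python
import numpy as np

def standardize_bands(patches, eps=1e-8):
    """Center every spectral band (feature) to zero mean and unit
    variance, with statistics computed over all patches and pixels."""
    mean = patches.mean(axis=(0, 1, 2), keepdims=True)
    std = patches.std(axis=(0, 1, 2), keepdims=True)
    return (patches - mean) / (std + eps)

rng = np.random.default_rng(0)
patches = 5.0 + 2.0 * rng.normal(size=(100, 32, 32, 4))  # dummy patches
z = standardize_bands(patches)
```

The same statistics computed on the training patches would be reused to standardize any new data fed to the network.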

                        CAE               MCAE
Multi-Loss Weights      None              1, , ,
Convolution Layers      256, 512, 512, 1024
Refinement Layers       512, 512, 256
Activation              ReLU              PELU
Initial Learning Rate
Batch Size              512
TABLE III: Training parameters for CAE and MCAE.

SCAE and SMCAE were trained using the Nadam optimizer, which is a common variant of stochastic gradient descent used to speed up training of deep learning models [8]. During training, the learning rate was dropped by a factor of 10 when the validation loss did not improve for five consecutive epochs. The models were also trained using early stopping, where training terminated when the validation loss did not improve for ten consecutive epochs. The output of the last hidden layer is then fed to the next CAE/MCAE to build the corresponding SCAE/SMCAE frameworks.

After training SCAE and SMCAE, we use them to extract features from the annotated datasets. First, we re-sample the data to match the spectral bands and full-width half-maxima (FWHM) of the data used to train the corresponding feature-extracting framework. Throughout this paper, we use the band-resampling method from [2], which has been made publicly available. This method assumes that the target sensor has a (per-band) Gaussian response. For each target band and its corresponding FWHM, the algorithm finds the source bands that overlap and integrates their responses over the region of overlap in the target sensor.
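A simplified sketch of this style of Gaussian band resampling (not the exact implementation from [2]): each target band becomes a normalized Gaussian-weighted average of the overlapping source bands.

```python
import numpy as np

def resample_bands(spectra, src_centers, tgt_centers, tgt_fwhms):
    """Resample spectra (n_pixels x n_src_bands) onto target bands,
    modeling each target band as a Gaussian response centered at
    tgt_centers[j] with full-width half-max tgt_fwhms[j]."""
    src_centers = np.asarray(src_centers, dtype=float)
    out = np.zeros((spectra.shape[0], len(tgt_centers)))
    for j, (center, fwhm) in enumerate(zip(tgt_centers, tgt_fwhms)):
        sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
        weights = np.exp(-0.5 * ((src_centers - center) / sigma) ** 2)
        out[:, j] = spectra @ (weights / weights.sum())  # weighted average
    return out
```

Normalizing the weights makes the operation flux-preserving for flat spectra; the published method additionally restricts the sum to the region of band overlap.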

Next, we center each feature in the data to zero mean and unit variance. We then pass this data through the first CAE/MCAE and extract the features from the last hidden layer. These features are fed to the second CAE/MCAE, and so on. The output from each CAE/MCAE is concatenated along the feature dimension, and each feature in the concatenated response is centered to zero mean and unit variance. Finally, we incorporate translation invariance into the final feature response by pooling it with a mean-pooling filter. The receptive field of this filter is considerably smaller than the one used in [13], which prevents the mean-pooling operation from blurring out small objects and preserves sharp boundaries between object classes.
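The final mean-pooling step can be sketched with a stride-1, valid-mode filter; the receptive-field size is a free parameter (the paper only states that it is smaller than in [13]):

```python
import numpy as np

def mean_pool(feature_map, size):
    """Slide a size x size mean filter over an H x W x C feature
    response (stride 1, no padding, so each spatial axis shrinks
    by size - 1)."""
    h, w, c = feature_map.shape
    out = np.zeros((h - size + 1, w - size + 1, c))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = feature_map[i:i + size, j:j + size].mean(axis=(0, 1))
    return out

fm = np.arange(16.0).reshape(4, 4, 1)  # toy 4x4 single-feature map
print(mean_pool(fm, 2)[0, 0, 0])       # mean of [0, 1, 4, 5] -> 2.5
```

A small `size` smooths the response only locally, which is the design choice that preserves small objects and class boundaries.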

IV-B2 SS-MLP

The input to the SS-MLP classifier is the features extracted by SMCAE. The HSI cube is reshaped into a 2-dimensional array (i.e., number of pixels $\times$ number of features). The parameters for the SS-MLP classifier used in this paper are shown in Table IV. The relatively high weight-decay term was shown in [29] to work well with the PELU activation.

Hidden Layer Shapes
Activation PELU
Initial Learning Rate
Mini-Batch Size 8
Weight Decay
TABLE IV: Training parameters for SS-MLP.

The weights are initialized with Xavier initialization [10], and the biases and PELU parameters are initialized with ones. We optimize the joint loss function using the Nadam optimizer with its default initial learning rate. We drop the learning rate by a factor of 10 when the validation accuracy plateaus for 25 consecutive epochs, and we stop training when the validation accuracy plateaus for 50 consecutive epochs. The training/validation folds are built by randomly sampling 90%/10% of the available training data, respectively.
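The plateau-based learning-rate schedule and early stopping can be sketched by replaying a validation-accuracy history; this is a simplification, since real training interleaves these checks with weight updates:

```python
def run_schedule(val_accuracies, lr0, drop_patience=25, stop_patience=50):
    """Divide the learning rate by 10 after drop_patience epochs without
    a new best validation accuracy, and stop after stop_patience such
    epochs. Returns the final learning rate and the stopping epoch."""
    lr, best, since_best = lr0, float("-inf"), 0
    for epoch, acc in enumerate(val_accuracies):
        if acc > best:
            best, since_best = acc, 0
        else:
            since_best += 1
            if since_best % drop_patience == 0:
                lr /= 10.0          # plateau: decay the learning rate
            if since_best >= stop_patience:
                return lr, epoch    # plateau too long: early stop
    return lr, len(val_accuracies) - 1
```

Frameworks such as Keras expose this pattern as `ReduceLROnPlateau` and `EarlyStopping` callbacks.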

V Experimental Results and Discussion

We conducted experiments to measure the performance of our proposed SuSA framework. All of the results are reported as the mean and standard deviation of 30 trials. In each trial, we randomly sample $N$ labeled samples per class from the HSI dataset for training. The reported performance is the semantic segmentation result on all available labeled samples. The three metrics reported in this section are overall accuracy (OA), mean-class (average) accuracy (AA), and Cohen's kappa coefficient ($\kappa$).
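All three metrics can be computed from a confusion matrix; a sketch assuming integer class labels (the function name is illustrative):

```python
import numpy as np

def segmentation_metrics(y_true, y_pred, n_classes):
    """Overall accuracy, mean-class accuracy, and Cohen's kappa
    computed from a confusion matrix."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    oa = np.trace(cm) / n                        # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))   # mean per-class accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa
```

AA and $\kappa$ matter here because HSI class distributions are heavily imbalanced: OA alone can be inflated by the largest classes.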

Before giving the results of the full model across three datasets in Section V-C, we first describe preliminary experiments to compare single- vs. multi-loss CAE and study the effect of stacking features using the Pavia University dataset.

V-A Single- vs. Multi-Loss CAE

In this section, we compare the CAE model proposed in [13] to the MCAE model proposed in this paper, using the Pavia University dataset at two different numbers of labeled samples per class. In this experiment, we extracted the features from a single CAE/MCAE trained on unlabeled AVIRIS HSI. The results are given in Table V. We also show performance on the raw spectrum (i.e., passing the original HSI to SS-MLP). MCAE outperforms its CAE predecessor, although the gap is not large. In the next sections, we increase this gap by including features from stacked MCAEs trained on HSI from three different sensors.

OA AA
Raw Spectrum
CAE
MCAE
Raw Spectrum
CAE
MCAE
TABLE V: Classification results on the Pavia University dataset using a single CAE and MCAE model trained on unlabeled AVIRIS data. These results were generated by training SS-MLP on labeled samples per class. Best performance for each experiment is in bold.

V-B Stacked Feature Representations

In this experiment, we examine the impact of stacking MCAE feature representations on classification performance. We extracted features from the SMCAE model, trained on unlabeled AVIRIS HSI, and fed them to four different classifiers: a linear-kernel SVM, a radial basis function (RBF) SVM, a standard MLP, and our SS-MLP. For the SVM experiments, we cross-validate the cost and (RBF only) kernel-width hyperparameters. We use the same hyperparameters in Table IV for the standard and semi-supervised MLP classifiers.

Each model was trained on Pavia University with a fixed number of samples per class. Fig. 9 shows the mean-class test accuracy of each classifier (as a mean of 30 runs) as additional stacked MCAE feature representations are added. The SS-MLP model outperformed the standard classification methods, and its performance improves as more MCAE features are added. Since performance saturates at 4-5 MCAEs, we use 5 MCAEs per sensor for the remainder of this paper. The SVM classifiers' peak performance occurs at 2-3 MCAEs and then decreases as additional MCAE features are added, due to overfitting.

Fig. 9: SMCAE performance on four different classifiers: linear kernel SVM, radial basis function (RBF) SVM, standard MLP, and our SS-MLP. Our SS-MLP model does the best.

V-C Multi-Sensor Fusion

In this section, we show how combining features from SMCAE models trained on HSI collected by different sensors can significantly improve semantic segmentation performance. In this experiment, we evaluate performance on the Pavia University dataset, where our framework is trained with a fixed number of samples per class. We trained three variants of SMCAE, one on HSI from each of the AVIRIS, Hyperion, and GLiHT sensors. We also tested each possible combination of SMCAE frameworks, where the output of each SMCAE is concatenated along the feature axis. Table VI shows the impact that each SMCAE has on performance. Performance across models differs noticeably, and combining features from multiple sensors yields the best performance. This could indicate that each SMCAE model learns novel information that is not available from the SMCAE models trained on other sensors (see Section V-E for more details). The SMCAE model trained on GLiHT outperformed the other two SMCAE models. This is likely because Pavia University and GLiHT share a similar spectral range and band set, whereas AVIRIS and Hyperion extend beyond the range covered by the ROSIS sensor that collected Pavia University.

Data Source(s)           OA      AA      κ
AVIRIS
Hyperion
GLiHT
AVIRIS/Hyperion
AVIRIS/GLiHT
Hyperion/GLiHT
AVIRIS/Hyperion/GLiHT
TABLE VI: Classification performance using features extracted from SMCAE models that were trained with data from different HSI sensors.

V-D State-of-the-Art Comparison

In this section, we use the same SMCAE configuration discussed in Section V-C, where we stacked features from all three sensors listed in Table VI. Table VII shows the classification performance with $N = 10$ samples per class. We compared against models found to work well in this training paradigm. For Indian Pines and Pavia University, we compare against a semi-supervised classification approach that uses spectral unmixing to help improve classification performance [7]. They showed that introducing the unsupervised task helped regularize the model, improving generalization when only small quantities of annotated image data are available. Their results were reported as the mean and standard deviation of 10 separate runs. Imani and Ghassemian [12] proposed a model intended to work well on all three of the annotated HSI datasets evaluated in this paper; however, they showed that an SVM classifier yielded the best results. They only reported the mean (no standard deviation) of the mean-class accuracy over three runs. To generate more detailed results, we reproduced this experiment using an SVM-RBF classifier and report the overall accuracy, mean-class accuracy, and kappa statistic as the mean and standard deviation over 30 trials. Our SuSA framework achieved superior results compared to each of these frameworks.

Model OA AA
Indian Pines
Dopido et al. [7]
MICA-SVM [13]
SCAE-SVM [13]
SuSA
Pavia University
Dopido et al. [7]
MICA-SVM [13]
SCAE-SVM [13]
SuSA
Salinas Valley
SVM-RBF [12]
MICA-SVM [13]
SCAE-SVM [13]
SuSA
TABLE VII: Results of the low-shot learning experiment where the training set contains only 10 samples per class.
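Building a 10-samples-per-class training set like the one used for Table VII can be sketched as follows; the `sample_low_shot` helper and the toy label map are illustrative, not the authors' code:

```python
import numpy as np

def sample_low_shot(labels, n_per_class, seed=0):
    """Return training indices with n_per_class samples drawn per class."""
    rng = np.random.default_rng(seed)
    train_idx = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        chosen = rng.choice(idx, size=min(n_per_class, idx.size),
                            replace=False)
        train_idx.extend(chosen)
    return np.array(train_idx)

# Toy ground-truth map: three classes with 50 labeled pixels each.
labels = np.repeat([0, 1, 2], 50)
train_idx = sample_low_shot(labels, n_per_class=10)
print(train_idx.size)  # 30; the remaining pixels form the test set
```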

Table VIII directly compares SuSA against the previous self-taught and semi-supervised frameworks discussed in this paper. The SCAE-SVM framework introduced in [13] performed well under these low-shot training paradigms. Note that the Indian Pines experiments used the same number of samples per class for all classes except the three with the fewest annotated training samples available, for which fewer samples per class were used. Liu et al. [16] only evaluated their ladder network on Pavia University with a limited number of samples per class. In every case, SuSA outperforms the previous state-of-the-art classification frameworks.

Model OA AA κ
Indian Pines
DAFE [9] 93.27 95.86 0.923
MICA-SVM [13]
SCAE-SVM [13]
SuSA
Pavia University
SSAE [28]
MICA-SVM [13]
SCAE-SVM [13]
SuSA
Pavia University
SS-CNN [16] 98.32 98.47 Unknown
MICA-SVM [13]
SCAE-SVM [13] 0.9812
SuSA
Salinas Valley
GLCM+ [20] Unknown Unknown
MICA-SVM [13]
SCAE-SVM [13]
SuSA
TABLE VIII: Performance comparison of SuSA against the other semi-supervised and self-taught learning frameworks discussed in this paper.

We performed a statistical significance test (using a 99% confidence interval) on the mean-class accuracy results in Table VIII. We chose mean-class accuracy because the class distributions are imbalanced, so this is a more meaningful measurement of model performance. The results for all four training/testing paradigms were shown to be statistically significant.
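The paper does not name the specific test used; purely as an illustration, a Welch's t-test on two sets of per-trial mean-class accuracies at the 99% level could look like the sketch below. The accuracy values are invented, not taken from Table VIII:

```python
import numpy as np

# Hypothetical per-trial mean-class accuracies (%) over 30 trials each;
# illustrative values only, not from the paper.
susa = np.array([95.1, 94.8, 95.3, 95.0, 94.9, 95.2] * 5)
baseline = np.array([92.0, 91.7, 92.3, 92.1, 91.9, 92.2] * 5)

# Welch's t-statistic (does not assume equal variances).
n1, n2 = susa.size, baseline.size
se = np.sqrt(susa.var(ddof=1) / n1 + baseline.var(ddof=1) / n2)
t_stat = (susa.mean() - baseline.mean()) / se

# Two-sided critical value for alpha = 0.01 at ~29 degrees of freedom.
t_crit = 2.756
significant = abs(t_stat) > t_crit
print(significant)  # True for these clearly separated samples
```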

Finally, Table IX shows that SuSA yielded state-of-the-art performance on the Indian Pines and Pavia University HSI datasets hosted on the IEEE GRSS Data and Algorithm Standard Evaluation (DASE) website. The training/testing folds are pre-defined, and the server provides the classification performance on the test set. This dataset is more difficult to perform well on because the training samples are co-located instead of being randomly sampled across the image. At this time, the server only lists the top-10 performers, so we are unable to ascertain the identity of the previous state-of-the-art performer or what method they used. It also only lists their overall accuracy; however, we have provided all of the relevant statistics, including the classification maps in Fig. 10. The main performance degradation for Pavia University occurred when SuSA predicted meadows (largest object class) when it should have predicted bare soil. There was also a problem predicting trees when it should have predicted meadows. For Indian Pines, SuSA mis-predicted corn for corn no-till and corn-min, and pasture/mowed grass was confused for soybeans-min.

Indian Pines Pavia University
State-of-Art Performer:
OA 90.73 73.06
SuSA:
OA 91.32 81.86
AA 81.17 74.09
κ 0.90 0.77
TABLE IX: Classification results for the Indian Pines and Pavia University datasets from the IEEE GRSS Data and Algorithm Standard Evaluation website.
Fig. 10: Classification maps for SuSA on the (a) Indian Pines and (b) Pavia University datasets from the IEEE GRSS Data and Algorithm Standard Evaluation website.
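The OA, AA, and κ statistics reported throughout this section can all be derived from a confusion matrix; a small self-contained sketch, where the 2×2 matrix is invented for illustration:

```python
import numpy as np

def metrics_from_confusion(C):
    """Overall accuracy (OA), mean-class accuracy (AA), and Cohen's kappa
    from a confusion matrix C[i, j] = # of class-i pixels predicted as j."""
    n = C.sum()
    oa = np.trace(C) / n                      # fraction correct overall
    aa = np.mean(np.diag(C) / C.sum(axis=1))  # per-class recall, averaged
    # Chance agreement from the row/column marginals.
    p_chance = (C.sum(axis=0) * C.sum(axis=1)).sum() / n**2
    kappa = (oa - p_chance) / (1 - p_chance)  # agreement beyond chance
    return oa, aa, kappa

# Invented two-class example: 50 pixels per true class.
C = np.array([[40, 10],
              [ 5, 45]])
oa, aa, kappa = metrics_from_confusion(C)
print(round(oa, 2), round(aa, 2), round(kappa, 2))  # 0.85 0.85 0.7
```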

V-E Dissimilarity Between Learned Features

In this paper, we show that our SuSA framework yields state-of-the-art performance when only a few training samples are available. We also show that transferring spatial-spectral features from multiple sensors can improve classification performance, which suggests that SMCAE learns different features from different data and sensor modalities. To quantify the dissimilarity between different SMCAE models, we used the dissimilarity metric proposed by [14], which compares two feature representations such that

$d = 1 - \frac{1}{N}\sum_{i=1}^{N}\max_{j}\,\rho_{ij}$ (7)

where ρ is the Spearman-correlation matrix between the two feature representations and N is the number of rows in ρ. We select a random AVIRIS HSI, generate SMCAE features from all three sensors, and then compute the dissimilarity metric for every pair of feature responses (Table X). Although there is some feature overlap between different SMCAE models, new information is gained by combining learned features from multiple sensors. The SMCAE models trained on Hyperion and AVIRIS are more similar to each other than either is to the SMCAE trained on GLiHT because AVIRIS and Hyperion span the short-wave infrared spectrum, whereas GLiHT only extends through the near infrared.

AVIRIS GLiHT Hyperion
AVIRIS 0.000 0.108 0.093
GLiHT 0.000 0.105
Hyperion 0.000
TABLE X: Dissimilarity between the feature responses from all three SMCAE models. The higher the value, the more dissimilar the two feature representations are.
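One plausible reading of the metric in Eq. (7) is: for each feature in one representation, take its best Spearman correlation with any feature in the other, average over features, and subtract from one (so identical representations score 0, matching the diagonal of Table X). Both this reading and the random inputs below are assumptions, not the authors' code:

```python
import numpy as np

def dissimilarity(feat_a, feat_b):
    """Dissimilarity between two feature responses of shape
    (n_samples, n_features), per one reading of Eq. (7)."""
    # Column-wise rank transform; Pearson correlation of ranks is
    # equivalent to Spearman correlation (assuming no ties).
    ranks_a = np.argsort(np.argsort(feat_a, axis=0), axis=0).astype(float)
    ranks_b = np.argsort(np.argsort(feat_b, axis=0), axis=0).astype(float)
    k = feat_a.shape[1]
    # Cross-block of the full correlation matrix: rho[i, j] compares
    # feature i of A with feature j of B.
    rho = np.corrcoef(ranks_a.T, ranks_b.T)[:k, k:]
    return 1.0 - np.mean(np.max(rho, axis=1))

rng = np.random.default_rng(0)
x = rng.random((100, 8))
y = rng.random((100, 8))
print(dissimilarity(x, x))  # ~0 for identical representations
print(dissimilarity(x, y))  # larger for unrelated representations
```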

The annotated benchmarks evaluated in this paper have dramatically different ground sample distances (GSDs), ranging from 1.3 meters to 20 meters; the state-of-the-art performance achieved on each of these datasets could indicate that SMCAE is learning scale-invariant features. In addition, collecting HSI from different climates, scenes, and weather/atmosphere conditions further improves the generalization of the learned features and could ultimately enable the seamless transfer of spatial-spectral features across different sensors, environments, and machine learning tasks.

VI Conclusion

In this paper, we demonstrated that SMCAE learns more discriminative self-taught features by correcting errors in both shallow and deep layers during training. We also showed that our SS-MLP classifier is effective at low-shot learning and able to handle high-dimensional inputs. Our SuSA framework achieved state-of-the-art performance on both IEEE GRSS benchmarks for HSI semantic segmentation and has established a high bar for low-shot learning on HSI datasets. Future work includes scaling these frameworks to other data modalities (e.g., MSI, thermal, synthetic aperture radar), higher spatial resolution imagery (e.g., centimeter-resolution imagery collected from drones), and other remote sensing tasks (e.g., target detection, crop health estimation).

Acknowledgements

The authors would like to thank Purdue, Pavia University, the HySens project, and NASA/JPL-Caltech for making their remote sensing data publicly available.

References

  • [1] R. Bellman. Dynamic programming. Courier Corporation, 2013.
  • [2] T. Boggs. Spectral python, 2010.
  • [3] L. Bruzzone, M. Chi, and M. Marconcini. Transductive svms for semisupervised classification of hyperspectral data. In Geoscience and Remote Sensing Symposium, volume 1, pages 4–pp. IEEE, 2005.
  • [4] L. Bruzzone, M. Chi, and M. Marconcini. A novel transductive svm for semisupervised classification of remote-sensing images. Transactions on Geoscience and Remote Sensing, 44(11):3363–3373, 2006.
  • [5] G. Camps-Valls, T. V. B. Marsheva, and D. Zhou. Semi-supervised graph-based hyperspectral image classification. Transactions on Geoscience and Remote Sensing, 45(10):3044–3054, 2007.
  • [6] D.-A. Clevert, T. Unterthiner, and S. Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). In Proceedings of the International Conference on Learning Representations, 2016.
  • [7] I. Dópido, J. Li, P. Gamba, and A. Plaza. A new hybrid strategy combining semisupervised classification and unmixing of hyperspectral data. Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 7(8):3619–3629, 2014.
  • [8] T. Dozat. Incorporating nesterov momentum into adam. 2016.
  • [9] P. Ghamisi et al. Automatic framework for spectral–spatial classification based on supervised feature extraction and morphological attribute profiles. Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 7(6):2147–2160, 2014.
  • [10] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics, pages 249–256, 2010.
  • [11] L. Gómez-Chova, G. Camps-Valls, J. Munoz-Mari, and J. Calpe. Semisupervised image classification with laplacian support vector machines. Geoscience and Remote Sensing Letters, 5(3):336–340, 2008.
  • [12] M. Imani and H. Ghassemian. Boundary based supervised classification of hyperspectral images with limited training samples. ISPRS-International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, (3):203–207, 2013.
  • [13] R. Kemker and C. Kanan. Self-taught feature learning for hyperspectral image classification. Transactions on Geoscience and Remote Sensing, 55(5):2693–2705, 2017.
  • [14] N. Kriegeskorte, M. Mur, and P. Bandettini. Representational similarity analysis–connecting the branches of systems neuroscience. Frontiers in systems neuroscience, 2, 2008.
  • [15] Z. Lin, Y. Chen, X. Zhao, and G. Wang. Spectral-spatial classification of hyperspectral image using autoencoders. In Information, Communications and Signal Processing, pages 1–5. IEEE, 2013.
  • [16] B. Liu, X. Yu, P. Zhang, X. Tan, A. Yu, and Z. Xue. A semi-supervised convolutional neural network for hyperspectral image classification. Remote Sensing Letters, 8(9):839–848, 2017.
  • [17] Y. Liu, G. Cao, Q. Sun, and M. Siegel. Hyperspectral classification via deep networks and superpixel segmentation. International Journal of Remote Sensing, 36(13):3459–3482, 2015.
  • [18] X. Ma, J. Geng, and H. Wang. Hyperspectral image classification via contextual deep learning. EURASIP Journal on Image and Video Processing, 2015(1):20, 2015.
  • [19] A. L. Maas, A. Y. Hannun, and A. Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In Proceedings of the International Conference on Machine Learning, volume 30, 2013.
  • [20] F. Mirzapour and H. Ghassemian. Improving hyperspectral image classification by combining spectral, texture, and shape features. International Journal of Remote Sensing, 36(4):1070–1096, 2015.
  • [21] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. Self-taught learning: transfer learning from unlabeled data. In Proceedings of the International Conference on Machine Learning, pages 759–766. ACM, 2007.
  • [22] A. Rasmus, H. Valpola, M. Honkala, M. Berglund, and T. Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pages 3546–3554, 2015.
  • [23] F. Ratle, G. Camps-Valls, and J. Weston. Semisupervised neural networks for efficient hyperspectral image classification. Transactions on Geoscience and Remote Sensing, 48(5):2271–2282, 2010.
  • [24] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. In Advances in Neural Information Processing Systems, pages 2234–2242, 2016.
  • [25] L. Shen et al. Discriminative gabor feature selection for hyperspectral image classification. Geoscience and Remote Sensing Letters, 10(1):29–33, 2013.
  • [26] A. Soltani-Farani and H. R. Rabiee. When pixels team up: spatially weighted sparse coding for hyperspectral image classification. Geoscience and Remote Sensing Letters, 12(1):107–111, 2015.
  • [27] Y. Y. Tang, Y. Lu, and H. Yuan. Hyperspectral image classification based on three-dimensional scattering wavelet transform. Transactions on Geoscience and Remote Sensing, 53(5):2467–2480, 2015.
  • [28] C. Tao, H. Pan, Y. Li, and Z. Zou. Unsupervised spectral–spatial feature learning with stacked sparse autoencoder for hyperspectral imagery classification. Geoscience and Remote Sensing Letters, 12(12):2438–2442, 2015.
  • [29] L. Trottier, P. Giguère, and B. Chaib-draa. Parametric exponential linear unit for deep convolutional neural networks. CoRR, abs/1605.09332, 2016.
  • [30] H. Valpola. From neural PCA to deep unsupervised learning. Advances in Independent Component Analysis and Learning Machines, pages 143–171, 2015.
  • [31] L. Yang, M. Wang, S. Yang, R. Zhang, and P. Zhang. Sparse spatio-spectral lapsvm with semisupervised kernel propagation for hyperspectral image classification. Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 10(5):2046–2054, 2017.
  • [32] L. Yang, S. Yang, P. Jin, and R. Zhang. Semi-supervised hyperspectral image classification using spatio-spectral laplacian support vector machiney. Geoscience and Remote Sensing Letters, 11(3):651–655, 2014.
  • [33] W. Zhao, Z. Guo, J. Yue, X. Zhang, and L. Luo. On combining multiscale deep learning features for the classification of hyperspectral remote sensing imagery. International Journal of Remote Sensing, 36(13):3368–3379, 2015.

Ronald Kemker is a PhD Candidate in the Chester F. Carlson Center for Imaging Science at the Rochester Institute of Technology. His research currently involves applying computer vision and machine learning techniques to tackle remote sensing problems. He received his MS degree in Electrical Engineering from Michigan Technological University.

Ryan Luu is a student at Victor Senior High School and an intern at the Chester F. Carlson Center for Imaging Science at the Rochester Institute of Technology. His current projects involve object detection and robot navigation.

Christopher Kanan is an assistant professor in the Chester F. Carlson Center for Imaging Science at the Rochester Institute of Technology. His lab applies deep learning to problems in computer vision. His recent projects have focused on object recognition, object detection, active vision, visual question answering, semantic segmentation, and lifelong machine learning. He received a PhD in computer science from the University of California at San Diego and a MS in computer science from the University of Southern California.
