Hyperspectral Image Restoration via Multi-mode and Double-weighted Tensor Nuclear Norm Minimization

Abstract

Tensor nuclear norm (TNN), induced by the tensor singular value decomposition, plays an important role in hyperspectral image (HSI) restoration tasks. In this letter, we first consider three inconspicuous but crucial phenomena in TNN. In the Fourier transform domain of HSIs, different frequency components contain different information; within each frequency component, different singular values also represent different information. Moreover, these two physical phenomena lie not only in the spectral dimension but also in the spatial dimensions. Then, to improve the capability and flexibility of TNN for HSI restoration, we propose a multi-mode and double-weighted TNN based on these three crucial phenomena. It can adaptively shrink the frequency components and singular values according to their physical meanings in all modes of HSIs. Within the framework of the alternating direction method of multipliers, we design an effective alternating iterative strategy to optimize our proposed model. Restoration experiments on both synthetic and real HSI datasets demonstrate its superiority against related methods.

Hyperspectral image, tensor nuclear norm, double weighting, frequency components, multi-mode.

I Introduction

Hyperspectral images (HSIs) have been widely used in many fields [4, 1] due to the wealth of spatial and spectral information they capture of a real scene. However, observed HSIs are usually corrupted by different kinds of noise, e.g., Gaussian noise, impulse noise, deadlines, stripes, and their mixtures. Therefore, HSI restoration, as a preprocessing step that removes mixed noise for various subsequent applications, is a valuable and active research topic.

HSIs can be treated as 3rd-order tensors, and their low-rankness is a critical property for HSI restoration tasks. Because the tensor rank has no unique definition, different tensor decompositions and their corresponding tensor ranks have been proposed, such as the Tucker decomposition [18, 12], the PARAFAC decomposition [14, 6], and the tensor singular value decomposition (t-SVD) [15, 24, 9], to exploit the low-rankness of HSIs.

Among them, the tensor tubal rank induced by t-SVD can characterize the low-rank structure of a tensor very well [23]. Its convex relaxation is the tensor nuclear norm (TNN) [10]. TNN is effective at preserving the intrinsic structure of tensors; t-SVD can be calculated easily in the Fourier domain, and the TNN minimization problem can be efficiently solved by convex optimization algorithms. Hence, TNN has attracted extensive attention for HSI restoration problems in recent years [24, 20, 21]. However, in the definition of TNN, three kinds of prior knowledge are underutilized for further exploiting the low-rankness in HSIs. Firstly, in the Fourier transform domain of HSIs, the low-frequency slices carry the profile information of HSIs, while the high-frequency slices mainly carry the detail and noise information. Secondly, in each frequency slice, larger singular values mainly contain information on the clean data and smaller singular values mainly contain information on the noise. Thirdly, low-rankness exists not only in the spectral dimension but also in the spatial dimensions [18]. The classical TNN only takes the Fourier transform along the spectral mode to connect the spatial dimensions with the spectral dimension, and thus lacks flexibility for handling the different correlations along different modes of HSIs [24].

In this letter, to take full advantage of the above prior knowledge and improve the capability and flexibility of TNN, we propose a multi-mode and double-weighted TNN (MDWTNN) for HSI restoration tasks. The merits of our model are four-fold. First, according to the information types of different frequency slices in the Fourier transform domain, we adaptively assign larger weights to slices that mainly contain noise information and smaller weights to slices that mainly contain profile information, which suppresses noise more while better preserving the profile information of clean HSIs. Second, in each frequency slice, we use the partial sum of singular values (PSSV) to shrink only the small singular values, which better protects the clean data information contained in the large singular values. Third, we apply the double-weighted TNN in all modes of HSIs, which achieves a more flexible and accurate characterization of HSI low-rankness. Finally, we develop an alternating direction method of multipliers (ADMM) based algorithm to efficiently solve the proposed model, and obtain the best restoration performance on both synthetic and real HSI datasets in comparison with all competing HSI restoration methods.

II Preliminaries

II-A Notations

In this letter, matrices and tensors are denoted as bold upper-case letters (e.g., $\mathbf{X}$) and calligraphic letters (e.g., $\mathcal{X}$), respectively. For a 3rd-order tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, its $(i,j,k)$-th component is represented as $\mathcal{X}_{ijk}$. For $\mathcal{X}, \mathcal{Y} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, their inner product is defined as $\langle \mathcal{X}, \mathcal{Y} \rangle = \sum_{i,j,k} \mathcal{X}_{ijk} \mathcal{Y}_{ijk}$. Then the Frobenius norm of a tensor is defined as $\|\mathcal{X}\|_F = \sqrt{\langle \mathcal{X}, \mathcal{X} \rangle}$. The $k$-th frontal slice of $\mathcal{X}$ is denoted as $\mathcal{X}^{(k)}$. The fast Fourier transform along the third mode of $\mathcal{X}$ is represented as $\bar{\mathcal{X}} = \mathrm{fft}(\mathcal{X}, [\,], 3)$ and its inverse operation is $\mathcal{X} = \mathrm{ifft}(\bar{\mathcal{X}}, [\,], 3)$. The mode-$n$ permutation of $\mathcal{X}$ is defined as $\mathrm{permute}(\mathcal{X}, n)$, $n = 1, 2, 3$, where the $i$-th mode-3 slice of the permuted tensor is the $i$-th mode-$n$ slice of $\mathcal{X}$. Also, its inverse operation is $\mathrm{ipermute}(\cdot, n)$.
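These tensor operations map directly onto standard array primitives. Below is a minimal NumPy sketch of the mode-3 FFT, frontal slices, and mode-n permutation; the function names and the concrete axis orderings are our illustrative choices, not the letter's notation:

```python
import numpy as np

def fft3(x):
    """FFT along the third (spectral) mode of a 3rd-order tensor."""
    return np.fft.fft(x, axis=2)

def ifft3(xbar):
    """Inverse FFT along the third mode; real part recovers real inputs."""
    return np.fft.ifft(xbar, axis=2)

def frontal_slice(x, k):
    """The k-th frontal slice X^(k) of a tensor of shape (n1, n2, n3)."""
    return x[:, :, k]

def mode_permute(x, n):
    """Mode-n permutation: rotate the axes so that mode n becomes the third mode.
    n = 3 is the identity; n = 1 and n = 2 cycle the axes accordingly."""
    order = {1: (1, 2, 0), 2: (2, 0, 1), 3: (0, 1, 2)}[n]
    return np.transpose(x, order)

def mode_permute_inv(x, n):
    """Inverse of mode_permute (the ipermute operation)."""
    order = {1: (2, 0, 1), 2: (1, 2, 0), 3: (0, 1, 2)}[n]
    return np.transpose(x, order)
```

Round-tripping through `fft3`/`ifft3` and `mode_permute`/`mode_permute_inv` recovers the original tensor, which is what the inverse operations in the notation above require.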

II-B Problem Formulation

An ideal HSI can be viewed as a 3rd-order tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and usually is assumed to be low-rank. Corrupted by mixed noise, its observed version $\mathcal{Y}$ can be modeled as

$$\mathcal{Y} = \mathcal{X} + \mathcal{S} + \mathcal{N}, \tag{1}$$

where $\mathcal{Y}, \mathcal{S}, \mathcal{N} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$; $\mathcal{S}$ denotes the sparse noise; $\mathcal{N}$ denotes the Gaussian white noise.

HSI restoration aims to recover the ideal HSI $\mathcal{X}$ from the observed HSI $\mathcal{Y}$ in (1). Under the framework of regularization theory, it can briefly be formulated as

$$\min_{\mathcal{X}, \mathcal{S}} \ \mathrm{Rank}(\mathcal{X}) + \lambda \|\mathcal{S}\|_1 + \tau \|\mathcal{Y} - \mathcal{X} - \mathcal{S}\|_F^2, \tag{2}$$

where $\|\mathcal{S}\|_1$ is the $\ell_1$ norm used to detect the sparse noise; $\|\mathcal{Y} - \mathcal{X} - \mathcal{S}\|_F^2$ describes the Gaussian noise; $\mathrm{Rank}(\mathcal{X})$ represents the rank of the unknown ideal HSI; $\lambda$ and $\tau$ are non-negative parameters.

In model (2), the regularization term $\mathrm{Rank}(\mathcal{X})$ is approximated by different relaxations. As mentioned above, TNN is a widely used convex relaxation, which can be defined as

$$\|\mathcal{X}\|_{\mathrm{TNN}} = \frac{1}{n_3} \sum_{k=1}^{n_3} \big\|\bar{\mathcal{X}}^{(k)}\big\|_*, \tag{3}$$

where $\bar{\mathcal{X}}^{(k)}$ is the $k$-th frontal slice of $\bar{\mathcal{X}} = \mathrm{fft}(\mathcal{X}, [\,], 3)$ and $\|\cdot\|_*$ is the matrix nuclear norm.
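As a sketch, the TNN in (3) (using the 1/n3-scaled convention of [15]) can be computed by summing the nuclear norms of the Fourier-domain frontal slices:

```python
import numpy as np

def tnn(x):
    """Tensor nuclear norm of a 3rd-order tensor: the (1/n3-scaled) sum of
    the matrix nuclear norms of the frontal slices in the Fourier domain."""
    xbar = np.fft.fft(x, axis=2)
    total = 0.0
    for k in range(x.shape[2]):
        # nuclear norm of one frequency slice = sum of its singular values
        total += np.linalg.svd(xbar[:, :, k], compute_uv=False).sum()
    return total / x.shape[2]
```

For a tensor with a single frontal slice the FFT is the identity, so `tnn` reduces to the ordinary matrix nuclear norm, which makes a convenient sanity check.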

III The Proposed Weighted TNN

III-A Frequency-Weighted TNN

In (3), we notice that one frontal slice of $\bar{\mathcal{X}}$ corresponds to one frequency component of $\mathcal{X}$. Specifically, for $\mathcal{X}$, its profile information is contained in the low-frequency frontal slices, while its detailed information is contained in the high-frequency ones. When $\mathcal{X}$ is distorted by outliers, the effects on the high-frequency frontal slices are more severe. However, different frequency slices of $\bar{\mathcal{X}}$ have the same impact on the TNN in (3), which is obviously inconsistent with the physical meaning of the frequency components. Therefore, we improve the TNN in (3) by assigning different weights to different frequency slices, and propose the frequency-weighted TNN as follows:

$$\|\mathcal{X}\|_{\mathrm{FWTNN}} = \frac{1}{n_3} \sum_{k=1}^{n_3} w_k \big\|\bar{\mathcal{X}}^{(k)}\big\|_*, \tag{4}$$

where $w_k$ is the $k$-th weight parameter. For HSI restoration problems, the lower the frequencies are, the less the corresponding frequency slices should be punished. Guided by extensive data simulations, we set the weights approximately consistent with the frequencies. We let

(5)

where a small positive constant is included to avoid dividing by zero, and two free parameters control the weighting.
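The frequency-weighted TNN in (4) only changes the per-slice contribution. A sketch follows, where `frequency_weights` is merely an illustrative scheme that grows with frequency and uses a small constant to avoid dividing by zero; the letter's exact weight formula in (5) may differ:

```python
import numpy as np

def fw_tnn(x, w):
    """Frequency-weighted TNN: each Fourier-domain frontal slice contributes
    with its own weight w[k] instead of uniformly, as in (4)."""
    xbar = np.fft.fft(x, axis=2)
    return sum(w[k] * np.linalg.svd(xbar[:, :, k], compute_uv=False).sum()
               for k in range(x.shape[2])) / x.shape[2]

def frequency_weights(n3, eps=1e-3, c=1.0):
    """Illustrative weights: small for low frequencies, large for high ones.
    FFT bin k corresponds to circular frequency min(k, n3 - k); eps avoids
    division by zero. This only mirrors the stated monotonicity, not the
    letter's exact formula."""
    freq = np.minimum(np.arange(n3), n3 - np.arange(n3))
    return c * freq / (freq.max() + eps)
```

With uniform weights of one, `fw_tnn` recovers the plain TNN, so the weighted norm is a strict generalization of (3).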

III-B Double-Weighted TNN

For $\|\bar{\mathcal{X}}^{(k)}\|_*$ in (3), the matrix nuclear norm is used as the tightest convex surrogate of the rank. However, its convexity limits the accuracy of the approximation. Recently, a series of improved methods have been proposed for better approximation [17, 5, 3, 8]. To treat the singular values of $\bar{\mathcal{X}}^{(k)}$ differently, we choose the partial sum of singular values (PSSV) to punish only the smaller singular values, which mainly contain the noise information of HSIs. Then, a double-weighted TNN is proposed by replacing the matrix nuclear norm in (4) with the PSSV of $\bar{\mathcal{X}}^{(k)}$, which is defined as

$$\|\mathcal{X}\|_{\mathrm{DWTNN}} = \frac{1}{n_3} \sum_{k=1}^{n_3} w_k \sum_{i=r+1}^{\min(n_1, n_2)} \sigma_i\big(\bar{\mathcal{X}}^{(k)}\big), \tag{6}$$

where $\sigma_i(\bar{\mathcal{X}}^{(k)})$ is the $i$-th largest singular value of the matrix $\bar{\mathcal{X}}^{(k)}$, and $r$ is a parameter indicating the number of main singular values. The double-weighted TNN minimization problem can be solved by the following theorem.

Theorem 1. Assuming that , , for the minimization problem

(7)

its solution is

(8)

where ; ; .
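To make the shrinkage behind Theorem 1 concrete, the sketch below implements partial singular value thresholding in the spirit of the PSSV solver of Oh et al. [17]: the `r` largest singular values are kept intact and only the remaining ones are soft-thresholded. The function name and interface are our illustrative choices, not the letter's notation:

```python
import numpy as np

def psvt(y, tau, r):
    """Partial singular value thresholding (sketch, following the PSSV idea
    of Oh et al.): keep the r largest singular values untouched and
    soft-threshold the remaining ones by tau."""
    u, s, vt = np.linalg.svd(y, full_matrices=False)
    s_shrunk = s.copy()
    s_shrunk[r:] = np.maximum(s_shrunk[r:] - tau, 0.0)  # shrink only the tail
    return (u * s_shrunk) @ vt
```

With `r` equal to the full rank nothing is shrunk; with `r = 0` the operator degenerates to ordinary singular value thresholding.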

III-C Multi-mode and Double-Weighted TNN

The TNN in (3) only captures the correlations connected by the mode-3 Fourier transform between the spatial dimensions and the spectral dimension. It lacks flexibility for describing low-rankness in all modes of HSIs. To connect the $n$-th mode with the other two modes, we can define the double-weighted TNN for each mode-$n$ permutation of HSIs. As the correlations are different along different modes, we use the weighted average of the double-weighted TNNs along all modes to approximate the tensor rank of HSIs. Finally, the multi-mode and double-weighted TNN (MDWTNN) is proposed as follows:

$$\|\mathcal{X}\|_{\mathrm{MDWTNN}} = \sum_{n=1}^{3} \alpha_n \big\|\mathrm{permute}(\mathcal{X}, n)\big\|_{\mathrm{DWTNN}}, \tag{9}$$

where $\alpha_n \geq 0$ is the weight of mode $n$ with $\sum_{n=1}^{3} \alpha_n = 1$; within each mode, the $k$-th frontal slice of the Fourier-transformed permutation is assigned its own weight as in (4)-(6).

IV HSI Restoration via MDWTNN Minimization

MDWTNN in (9) takes full advantage of the physical meanings of the frequency components, singular values, and modes of HSIs, and thus provides a better approximation to the tensor rank. We then use MDWTNN to replace the regularization term $\mathrm{Rank}(\mathcal{X})$ in (2) and propose the HSI restoration model as follows:

$$\min_{\mathcal{X}, \mathcal{S}} \ \|\mathcal{X}\|_{\mathrm{MDWTNN}} + \lambda \|\mathcal{S}\|_1 + \tau \|\mathcal{Y} - \mathcal{X} - \mathcal{S}\|_F^2. \tag{10}$$

Introducing auxiliary variables, model (10) is equivalent to

(11)

By the augmented Lagrangian multiplier method, the Lagrangian function of model (11) can be written as

where the Lagrangian multipliers and the Lagrange penalty parameters appear for each constraint. The resulting minimization problem can be efficiently solved in the framework of ADMM [2]. At each iteration, every variable in the Lagrangian function is updated by solving its corresponding subproblem while the other variables are held fixed at their latest values.

For , , their corresponding subproblems can be written as

(12)

The closed-form solutions of (12), obtained from Theorem 1, are as follows:

(13)

For , its corresponding subproblem can be written as

(14)

It has the closed-form solution as follows:

(15)

For , its corresponding subproblem can be written as

(16)

It can be solved by the soft-thresholding operator [13] as:

(17)
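The soft-thresholding operator used in (17) has a simple element-wise closed form; a sketch:

```python
import numpy as np

def soft_threshold(x, tau):
    """Element-wise soft-thresholding: shrinks magnitudes by tau and zeroes
    out entries smaller than tau in absolute value (the prox of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
```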

For , its corresponding subproblem can be written as

(18)

It has the closed-form solution as follows:

(19)

For multipliers and , they can be updated as follows:

(20)

The proposed algorithm for our HSI restoration model is summarized in Algorithm 1.

Input: The observed tensor ; weight parameters , , ; regularization parameters , ; and stopping criterion .
Output: Denoised image .
  1: Initialize: set all variables and multipliers to zero, and set the remaining parameters.
  2: Repeat until convergence:
  3:   Update the variables via
       step 1: Update by (13)
       step 2: Update by (15)
       step 3: Update by (17)
       step 4: Update by (19)
       step 5: Update by (20)
       step 6: Update the weights by (5)
  4: Check the convergence condition.

Algorithm 1 HSI Restoration via the MDWTNN minimization
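As a rough illustration of the alternating structure in Algorithm 1, the sketch below solves a simplified single-mode variant with ADMM: plain TNN plus an l1 term under the constraint Y = X + S, omitting the frequency weights, the PSSV, and the multi-mode average of the full MDWTNN model. All names and parameter choices are illustrative:

```python
import numpy as np

def tsvt(y, tau):
    """Tensor singular value thresholding: soft-threshold the singular values
    of every Fourier-domain frontal slice (the proximal operator of TNN)."""
    ybar = np.fft.fft(y, axis=2)
    out = np.empty_like(ybar)
    for k in range(y.shape[2]):
        u, s, vt = np.linalg.svd(ybar[:, :, k], full_matrices=False)
        out[:, :, k] = (u * np.maximum(s - tau, 0.0)) @ vt
    return np.fft.ifft(out, axis=2).real

def soft(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def admm_restore(y, lam=0.1, mu=1.0, n_iter=100):
    """Simplified ADMM sketch for min ||X||_TNN + lam*||S||_1 s.t. Y = X + S.
    Only the alternating update structure of Algorithm 1 is illustrated."""
    x = np.zeros_like(y)
    s = np.zeros_like(y)
    m = np.zeros_like(y)                     # Lagrange multiplier
    for _ in range(n_iter):
        x = tsvt(y - s + m / mu, 1.0 / mu)   # low-rank update
        s = soft(y - x + m / mu, lam / mu)   # sparse-noise update
        m = m + mu * (y - x - s)             # multiplier update
    return x, s
```

Each iteration mirrors steps 1-5 of Algorithm 1: a singular-value shrinkage for the low-rank part, a soft-threshold for the sparse part, and a gradient-ascent update of the multiplier.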

V Experiments

Fig. 1: PSNR and SSIM for each band of all restoration results under noise cases 1-5. Panels (a)-(e) show the PSNR and panels (f)-(j) show the SSIM for cases 1-5, respectively.
Fig. 2: The 80-th band of all restoration results for the simulated dataset under noise case 5. (a) Original image; (b) noisy image; (c) BM4D (27.78 dB); (d) LRMR (28.87 dB); (e) LRTDTV (29.44 dB); (f) 3DTNN (30.84 dB); (g) ours (32.98 dB).
Fig. 3: The 150-th band of all restoration results for the real dataset. (a) Noisy image; (b) LRTA; (c) BM4D; (d) LRMR; (e) LRTDTV; (f) 3DTNN; (g) ours.

To verify the effectiveness of our MDWTNN-based HSI restoration model, various experiments are performed on a set of challenging simulated and real HSI datasets. From the Washington DC Mall dataset1, we choose a sub-block as the simulated dataset. From the Indian Pines dataset2, we choose a sub-block as the real dataset. For comparison, four state-of-the-art HSI denoising methods are employed as benchmarks in the experiments, i.e., BM4D [16], LRMR [22], LRTDTV [18], and 3DTNN [24]. Since the BM4D method is only suited to removing Gaussian noise, we apply it to HSIs that have been preprocessed by the RPCA restoration method [15].

In the simulation experiments, mixtures of white Gaussian and impulse noise with 5 different intensity levels are added to the simulated dataset band by band. Let G and P denote the variance of the Gaussian white noise and the percentage of impulse noise, respectively. In noise cases 1-3, noise of the same intensity is added to all the bands: in case 1, G=0.1 and P=0.2; in case 2, G=0.2 and P=0.2; in case 3, G=0.1 and P=0.4. In noise cases 4 and 5, the noise intensities differ from band to band: in case 4, G is randomly selected from 0.1 to 0.2 and P=0.2; in case 5, G=0.1 and P is randomly selected from 0.2 to 0.4.
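A sketch of how such band-wise mixed noise can be generated, assuming the data are scaled to [0, 1]; note that G is treated here as the noise standard deviation purely for illustration, and the salt-and-pepper impulse model is one common choice rather than the letter's exact protocol:

```python
import numpy as np

def add_mixed_noise(x, g_sigma, p_ratio, seed=0):
    """Add zero-mean Gaussian noise with standard deviation g_sigma and
    salt-and-pepper impulse noise affecting a p_ratio fraction of pixels
    (data assumed scaled to [0, 1]); a sketch of the simulated noise cases."""
    rng = np.random.default_rng(seed)
    y = x + rng.normal(0.0, g_sigma, size=x.shape)
    mask = rng.random(x.shape) < p_ratio
    y[mask] = rng.integers(0, 2, size=int(mask.sum())).astype(float)  # 0/1 impulses
    return y
```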

For quantitatively evaluating the restoration results of all the test methods, the CPU times and the means of PSNR [7], SSIM [19], and SAM [11] over all bands, i.e., MPSNR, MSSIM, and MSAM, are listed in Table I. Also, the PSNR and SSIM of each band in all restoration results are presented in Fig. 1. It is clear that our proposed model enjoys superior performance over the other popular approaches. Although the CPU times of our model are not the shortest, the updates in (13) can be computed in parallel to further shorten them. For visual evaluation, Fig. 2 shows the 80-th band of all restoration results for the simulated dataset under noise case 5. Fig. 3 shows the 150-th band of all restoration results for the real dataset. It can be seen that the image restored by our model maintains the best structure and texture information.
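The band-wise PSNR underlying MPSNR can be computed as follows (a sketch assuming data in [0, peak]; SSIM and SAM need their own implementations):

```python
import numpy as np

def band_psnr(ref, est, peak=1.0):
    """PSNR of each band of a (n1, n2, n3) tensor; MPSNR is the mean over bands."""
    mse = ((ref - est) ** 2).mean(axis=(0, 1))   # per-band mean squared error
    return 10.0 * np.log10(peak ** 2 / mse)
```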

Case (noise level)             Index     Noise     BM4D     LRMR   LRTDTV    3DTNN      Our
Case 1 (G=0.1, P=0.2)          MPSNR    11.068   31.014   31.567   32.800   34.270   36.095
                               MSSIM     0.085    0.893    0.867    0.896    0.936    0.950
                               MSAM     43.139    4.576    5.042    4.327    3.481    2.937
                               time          -  547.005  378.898  538.160  270.211  334.656
Case 2 (G=0.2, P=0.2)          MPSNR    10.216   27.192   27.642   29.489   29.055   32.525
                               MSSIM     0.061    0.791    0.743    0.807    0.801    0.894
                               MSAM     45.297    6.830    7.859    6.382    6.726    4.393
                               time          -  529.243  395.461  584.461  309.977  378.728
Case 3 (G=0.1, P=0.4)          MPSNR     8.305   29.691   29.183   30.270   29.572   34.185
                               MSSIM     0.037    0.866    0.798    0.844    0.784    0.928
                               MSAM     50.092    5.264    6.579    6.182    6.542    3.604
                               time          -  528.440  394.272  582.318  316.547  375.719
Case 4 (G in [0.1,0.2], P=0.2) MPSNR    10.648   28.970   29.518   31.073   31.852   34.379
                               MSSIM     0.073    0.848    0.810    0.859    0.887    0.930
                               MSAM     44.265    5.695    6.439    5.468    4.873    3.579
                               time          -  541.960  396.478  537.374  273.466  340.113
Case 5 (G=0.1, P in [0.2,0.4]) MPSNR     9.669   30.419   30.412   31.560   32.783   35.180
                               MSSIM     0.060    0.883    0.836    0.874    0.902    0.942
                               MSAM     47.270    4.865    5.761    5.405    4.311    3.220
                               time          -  546.378  397.474  541.124  277.123  340.162
TABLE I: Quantitative comparison and CPU time (in seconds) of all competing methods under different noise levels on the simulated dataset.

VI Conclusion

In this letter, we propose a multi-mode and double-weighted TNN for HSI restoration tasks. The proposed TNN can efficiently characterize the physical meanings of the frequency components, singular values, and orientations ignored by the standard TNN, and its weight parameters can be obtained adaptively. Together, these properties greatly improve the capability and flexibility of describing low-rankness in HSIs. Experiments conducted on both simulated and real HSI datasets show that our MDWTNN-based HSI restoration model is a competitive method for removing hybrid noise. Besides, the proposed MDWTNN regularization term can also be applied to other low-rankness-based tasks, e.g., hyperspectral image classification, tensor completion, and MRI reconstruction.

Footnotes

  1. http://lesun.weebly.com/hyperspectral-data-set.html
  2. https://engineering.purdue.edu/~biehl/MultiSpec/hyperspectral

References

  1. J. M. Bioucas-Dias, A. Plaza, G. Camps-Valls, P. Scheunders, N. Nasrabadi and J. Chanussot (2013) Hyperspectral remote sensing data analysis and future challenges. IEEE Geoscience and Remote Sensing Magazine 1 (2), pp. 6–36. Cited by: §I.
  2. S. Boyd, N. Parikh and E. Chu (2011) Distributed optimization and statistical learning via the alternating direction method of multipliers. Now Publishers Inc. Cited by: §IV.
  3. Y. Chen, Y. Guo, Y. Wang, D. Wang, C. Peng and G. He (2017) Denoising of hyperspectral images using nonconvex low rank matrix approximation. IEEE Transactions on Geoscience and Remote Sensing 55 (9), pp. 5366–5380. Cited by: §III-B.
  4. A. F. Goetz (2009) Three decades of hyperspectral remote sensing of the earth: a personal view. Remote Sensing of Environment 113, pp. S5–S16. Cited by: §I.
  5. S. Gu, L. Zhang, W. Zuo and X. Feng (2014) Weighted nuclear norm minimization with application to image denoising. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2862–2869. Cited by: §III-B.
  6. X. Huang, S. Xu, C. Zhang and J. Zhang (2020) Robust cp tensor factorization with skew noise. IEEE Signal Processing Letters 27, pp. 785–789. Cited by: §I.
  7. Q. Huynh-Thu and M. Ghanbari (2008) Scope of validity of psnr in image/video quality assessment. Electronics Letters 44 (13), pp. 800–801. Cited by: §V.
  8. T. Ji, T. Huang, X. Zhao, T. Ma and L. Deng (2017) A non-convex tensor rank approximation for tensor completion. Applied Mathematical Modelling 48, pp. 410–422. Cited by: §III-B.
  9. T. Jiang, T. Huang, X. Zhao and L. Deng (2020) Multi-dimensional imaging data recovery via minimizing the partial sum of tubal nuclear norm. Journal of Computational and Applied Mathematics 372, pp. 112680. Cited by: §I.
  10. M. E. Kilmer, K. Braman, N. Hao and R. C. Hoover (2013) Third-order tensors as operators on matrices: a theoretical and computational framework with applications in imaging. SIAM Journal on Matrix Analysis and Applications 34 (1), pp. 148–172. Cited by: §I.
  11. F. Kruse, A. Lefkoff and J. Dietz (1993) Expert system-based mineral mapping in northern death valley, california/nevada, using the airborne visible/infrared imaging spectrometer (aviris). Remote Sensing of Environment 44 (2-3), pp. 309–336. Cited by: §V.
  12. D. Letexier and S. Bourennane (2008) Noise removal from hyperspectral images by multidimensional filtering. IEEE Transactions on Geoscience and Remote Sensing 46 (7), pp. 2061–2069. Cited by: §I.
  13. Z. Lin, M. Chen and Y. Ma (2010) The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv preprint arXiv:1009.5055. Cited by: §IV.
  14. X. Liu, S. Bourennane and C. Fossati (2012) Denoising of hyperspectral images using the parafac model and statistical performance analysis. IEEE Transactions on Geoscience and Remote Sensing 50 (10), pp. 3717–3724. Cited by: §I.
  15. C. Lu, J. Feng, Y. Chen, W. Liu, Z. Lin and S. Yan (2019) Tensor robust principal component analysis with a new tensor nuclear norm. IEEE Transactions on Pattern Analysis and Machine Intelligence 42 (4), pp. 925–938. Cited by: §I, §V.
  16. M. Maggioni and A. Foi (2012) Nonlocal transform-domain denoising of volumetric data with groupwise adaptive variance estimation. In Computational Imaging X, Vol. 8296, pp. 82960O. Cited by: §V.
  17. T. Oh, Y. Tai, J. Bazin, H. Kim and I. S. Kweon (2015) Partial sum minimization of singular values in robust pca: algorithm and applications. IEEE transactions on Pattern Analysis and Machine Intelligence 38 (4), pp. 744–758. Cited by: §III-B.
  18. Y. Wang, J. Peng, Q. Zhao, Y. Leung, X. Zhao and D. Meng (2017) Hyperspectral image restoration via total variation regularized low-rank tensor decomposition. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 11 (4), pp. 1227–1243. Cited by: §I, §I, §V.
  19. Z. Wang, A. C. Bovik, H. R. Sheikh and E. P. Simoncelli (2004) Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13 (4), pp. 600–612. Cited by: §V.
  20. H. Zeng, X. Xie, H. Cui, H. Yin and J. Ning (2020) Hyperspectral image restoration via global spatial-spectral total variation regularized local low-rank tensor recovery. IEEE Transactions on Geoscience and Remote Sensing. Cited by: §I.
  21. H. Zeng, X. Xie and J. Ning (2021) Hyperspectral image denoising via global spatial-spectral total variation regularized nonconvex local low-rank tensor approximation. Signal Processing 178, pp. 107805. External Links: ISSN 0165-1684 Cited by: §I.
  22. H. Zhang, W. He, L. Zhang, H. Shen and Q. Yuan (2013) Hyperspectral image restoration using low-rank matrix recovery. IEEE Transactions on Geoscience and Remote Sensing 52 (8), pp. 4729–4743. Cited by: §V.
  23. Z. Zhang, G. Ely, S. Aeron, N. Hao and M. Kilmer (2014) Novel methods for multilinear data completion and de-noising based on tensor-svd. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3842–3849. Cited by: §I.
  24. Y. Zheng, T. Huang, X. Zhao, T. Jiang, T. Ma and T. Ji (2019) Mixed noise removal in hyperspectral image via low-fibered-rank regularization. IEEE Transactions on Geoscience and Remote Sensing 58 (1), pp. 734–749. Cited by: §I, §I, §V.