Fast Matrix Inversion and Determinant Computation for Polarimetric Synthetic Aperture Radar

D. F. G. Coelho was with the Graduate Program of the Electrical and Computer Engineering Department, University of Calgary, Canada (e-mail: diegofgcoelho@gmail.com). R. J. Cintra is with the Signal Processing Group, Departamento de Estatística, Universidade Federal de Pernambuco, Brazil, and with the Department of Electrical and Computer Engineering, University of Calgary, Calgary, AB, Canada (e-mail: rjdsc@de.ufpe.br). A. C. Frery is with the Laboratório de Computação Científica e Análise Numérica, Universidade Federal de Alagoas, Brazil (e-mail: acfrery@gmail.com). V. S. Dimitrov is with the Department of Electrical and Computer Engineering, University of Calgary, Calgary, AB, Canada.
Abstract

This paper introduces a fast algorithm for the simultaneous inversion and determinant computation of small-sized matrices in the context of fully Polarimetric Synthetic Aperture Radar (PolSAR) image processing and analysis. The proposed fast algorithm is based on the computation of the adjoint matrix and the symmetry of the input matrix. The algorithm is implemented on a general purpose graphical processing unit (GPGPU) and compared to the usual approach based on the Cholesky factorization. The assessment with simulated observations and data from an actual PolSAR sensor shows a speedup factor of about two when compared to the usual Cholesky factorization. Moreover, the expressions provided here can be implemented in any platform.

Keywords

PolSAR, Matrix inversion, Determinant, Fast algorithms

1 Introduction

Microwave remote sensing is fundamental, as it provides information complementary to that of classical sensors operating in the optical spectrum. Longer microwave wavelengths can penetrate clouds and other adverse atmospheric conditions, as well as canopies and soils. There are passive and active microwave sensors; the latter carry their own source of illumination, and can be either imaging or non-imaging. Radar devices belong to the latter class: they transmit a radio signal and detect the returned echo (backscatter), from which an image is formed. Several signal processing techniques are used to enhance the spatial resolution of such imagery. In particular, the use of synthetic antennas leads to synthetic aperture radar (SAR) imaging, as well as to polarimetric SAR (PolSAR) imaging, which employs polarized signals.

The statistical properties of PolSAR were studied in the context of optical polarimetry [1]. The simplest case occurs when the backscatter is constant. In such a case, the targets are characterized by a scattering (or Sinclair) matrix $\mathbf{S}$, which describes the dependence of their scattering properties on the polarization. The scattering matrix is defined on the horizontal (H) and vertical (V) basis as

$$\mathbf{S} = \begin{bmatrix} S_{\mathrm{HH}} & S_{\mathrm{HV}} \\ S_{\mathrm{VH}} & S_{\mathrm{VV}} \end{bmatrix}, \qquad (1)$$

where each element $S_{ij}$ is a complex value representing the amplitude and the phase of the scattered signal. Reciprocity ($S_{\mathrm{HV}} = S_{\mathrm{VH}}$) holds for most natural targets, so one may use the scattering vector $\mathbf{s} = [S_{\mathrm{HH}}, S_{\mathrm{HV}}, S_{\mathrm{VV}}]^{\top}$ without loss of information, where $\top$ represents transposition [2].

More often than not, these single-look complex data are processed in order to improve the signal-to-noise ratio. Multilook fully polarimetric data are formed as

$$\mathbf{Z} = \frac{1}{L} \sum_{l=1}^{L} \mathbf{s}_l \mathbf{s}_l^{H}, \qquad (2)$$

where $H$ denotes the conjugate transposition and $l$ indexes theoretically independent looks of the same scene. This $3\times 3$ complex matrix is Hermitian positive definite with real entries on the diagonal.

As noted by Torres et al. [3] and the references therein, many techniques for PolSAR image processing and analysis rely on the statistical properties of $\mathbf{Z}$. The density of several models (the Wishart [4], the models in [5, 6, 7], and the Kummer-U [8], to name a few) depends solely on a few parameters: the covariance matrix, the number of looks and, sometimes, texture descriptors. The covariance matrix is the expected value of $\mathbf{Z}$, namely $\boldsymbol{\Sigma} = \operatorname{E}[\mathbf{Z}]$, and it enters the expressions only through its determinant and its inverse.

Moreover, divergences among these models (Kullback–Leibler, Hellinger, Rényi, Bhattacharyya, Triangular, Harmonic [9]), test statistics (likelihood ratio) [10], and classification rules [11] only require computing determinants and inverses of such covariance matrices. A typical Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) image may comprise millions of pixels, where each pixel is represented by a $3\times 3$ Hermitian matrix as in (2). As an illustrative example, if one relies on the nonlocal means filter based on stochastic distances [3], one needs to compute inverse and determinant operations for each pixel of each search window, which is typically large. This scales to a long computing time, as it results in a very large total number of matrix inversion and determinant calculations per image. Such amounts of data are likely to increase with the incoming availability of sensors with finer resolution. One thus concludes that accurate and fast procedures are of paramount importance for dealing with PolSAR data.

The Cholesky factorization is the most popular numerical analysis method for the direct solution of linear algebra tasks involving positive definite dense matrices [12, 13, 14]. It is also the algorithm of choice for matrix inversion and determinant calculation in the context of classification of PolSAR images [3]. In this paper, we propose a fast algorithm for the computation of the matrix inverse and determinant of $3\times 3$ Hermitian matrices in the context of PolSAR image classification that outperforms the Cholesky factorization. The introduced algorithm is proven to reduce the overall arithmetic complexity associated with matrix inversion and determinant calculation when compared with the Cholesky factorization approach. Such lower arithmetic cost results in a reduction of the computation time to about a half of that of the Cholesky-based method. The proposed algorithm and the matrix inversion and determinant calculation based on the Cholesky factorization are implemented on a general purpose graphical processing unit (GPGPU) using C/C++ and the open computing language (OpenCL), tools that have been used for accelerating several algorithms in geoscience and remote sensing [15, 16, 17, 18]. The proposed algorithm and the method based on the Cholesky factorization are tested using both simulated and measured PolSAR data [19].

The paper unfolds as follows. Section 2 introduces notation and preliminary considerations; the Cholesky factorization is reviewed and described as a means for matrix inversion and determinant computation. In Section 3, the proposed method and its fast algorithm are introduced, together with an arithmetic complexity assessment and a discussion of the numerical errors of the proposed method compared with the Cholesky factorization approach. Section 4 presents implementation considerations, including software and hardware details, and experimental results showing the effectiveness of the proposed method. Final comments and suggested directions for future works are in Section 5.

2 Mathematical Review

2.1 Preliminaries and Notation

The type of matrix that occurs in the PolSAR problem is the $3\times 3$ correlation matrix with complex entries. Being a correlation matrix, it inherits the Hermitian property [13]. Therefore, it can be represented according to:

$$\mathbf{Z} = \begin{bmatrix} a & d & e \\ \bar{d} & b & f \\ \bar{e} & \bar{f} & c \end{bmatrix},$$

where $a$, $b$, and $c$ are real quantities; $d$, $e$, and $f$ are complex numbers; and the overbar denotes complex conjugation. Because $\mathbf{Z}$ is Hermitian, in actual implementations only the upper or lower triangular part of the matrix is stored for computation.

The $(i,j)$ cofactor of the matrix $\mathbf{Z}$ is the determinant of the submatrix formed with the elimination of the $i$th row and $j$th column, times $(-1)^{i+j}$ [13]. The adjoint matrix is the transpose of the matrix of cofactors [13]:

$$\operatorname{adj}(\mathbf{Z}) = \begin{bmatrix} q_{11} & q_{12} & q_{13} \\ q_{21} & q_{22} & q_{23} \\ q_{31} & q_{32} & q_{33} \end{bmatrix}^{\top},$$

where the element in position $(i,j)$ represents the cofactor of the corresponding element in the original matrix $\mathbf{Z}$. In terms of the entries defined above, the cofactors are given by

$$\begin{aligned}
q_{11} &= bc - |f|^2, & q_{12} &= f\bar{e} - c\bar{d}, & q_{13} &= \bar{d}\bar{f} - b\bar{e},\\
q_{22} &= ac - |e|^2, & q_{23} &= d\bar{e} - a\bar{f}, & q_{33} &= ab - |d|^2,
\end{aligned} \qquad (3)$$

with $q_{21} = \bar{q}_{12}$, $q_{31} = \bar{q}_{13}$, and $q_{32} = \bar{q}_{23}$.

The adjoint matrix $\operatorname{adj}(\mathbf{Z})$ is also Hermitian; thus its diagonal entries $q_{11}$, $q_{22}$, and $q_{33}$ are real numbers.

2.2 Cholesky Factorization

Cholesky factorization is applied in many numerical problems [20, 21, 22]. Let $\mathbf{Z}$ be the input matrix we are interested in inverting and whose determinant we seek. Cholesky factorization decomposes the input matrix into the product $\mathbf{Z} = \mathbf{L}\mathbf{L}^{H}$, where $\mathbf{L}$ is a lower triangular matrix and $H$ represents the transpose conjugate operation. Let $l_{ij}$ be the $(i,j)$ entry of $\mathbf{L}$ and $z_{ij}$ the $(i,j)$ entry of $\mathbf{Z}$. The Cholesky factorization is based on the following relations between the elements of $\mathbf{Z}$ and $\mathbf{L}$:

$$l_{jj} = \sqrt{z_{jj} - \sum_{k=1}^{j-1} |l_{jk}|^2}, \qquad l_{ij} = \frac{1}{l_{jj}}\Big(z_{ij} - \sum_{k=1}^{j-1} l_{ik}\bar{l}_{jk}\Big), \quad i > j,$$

where $|\cdot|$ returns the magnitude of its complex argument [23].

Once the matrix $\mathbf{L}$ is derived, its inverse $\mathbf{M} = \mathbf{L}^{-1}$, also lower triangular, can be directly obtained from the following expressions:

$$m_{jj} = \frac{1}{l_{jj}}, \qquad m_{ij} = -\frac{1}{l_{ii}} \sum_{k=j}^{i-1} l_{ik}\, m_{kj}, \quad i > j.$$

Given $\mathbf{L}^{-1}$, the inverse of the input matrix $\mathbf{Z}$ is furnished by $\mathbf{Z}^{-1} = (\mathbf{L}^{-1})^{H}\mathbf{L}^{-1}$. The determinant of $\mathbf{Z}$, $\det(\mathbf{Z}) = \prod_{j} l_{jj}^2$, is also available as a by-product of the discussed factorization [13]. The detailed procedure for inverting $3\times 3$ matrices based on the Cholesky factorization is described in Algorithm 1.

 

Require: the upper triangular part of the Hermitian matrix $\mathbf{Z}$
Ensure: the upper triangular part of $\mathbf{Z}^{-1}$ and $\det(\mathbf{Z})$
  1: Compute the Cholesky factor $\mathbf{L}$ and auxiliary variables
  2: Compute $\mathbf{L}^{-1}$ by forward substitution
  3: Compute the determinant: $\det(\mathbf{Z}) = (l_{11}\,l_{22}\,l_{33})^2$
  4: Compute $\mathbf{Z}^{-1} = (\mathbf{L}^{-1})^{H}\mathbf{L}^{-1}$
  5: return $\mathbf{Z}^{-1}$ and $\det(\mathbf{Z})$

 

Alg. 1: The matrix inversion and determinant calculation based on the Cholesky factorization.

3 Fast Inversion and Determinant Computation

3.1 Proposed Fast Algorithm

From matrix theory [24, p. 59], we know that

$$\mathbf{Z}^{-1} = \frac{\operatorname{adj}(\mathbf{Z})}{\det(\mathbf{Z})} \qquad (4)$$

and, expanding along the first row of $\mathbf{Z}$ (whose entries are $a$, $d$, and $e$, with cofactors $q_{11}$, $q_{12}$, and $q_{13}$),

$$\det(\mathbf{Z}) = a\,q_{11} + d\,q_{12} + e\,q_{13}. \qquad (5)$$

Notice that (5) is a direct consequence of the fact that the determinant of a matrix can be obtained by the summation of the elements of any row or column weighted by their corresponding cofactors [25]. Notice that the first term in (5), $a\,q_{11}$, is purely real. Also notice that, because $\mathbf{Z}$ is Hermitian, its determinant is real-valued [13]. Therefore, the term $d\,q_{12} + e\,q_{13}$ must be real. As a consequence, we calculate only the real part of $d\,q_{12} + e\,q_{13}$ in order to compute $\det(\mathbf{Z})$ as follows:

$$\det(\mathbf{Z}) = a\,q_{11} + \operatorname{Re}(d)\operatorname{Re}(q_{12}) - \operatorname{Im}(d)\operatorname{Im}(q_{12}) + \operatorname{Re}(e)\operatorname{Re}(q_{13}) - \operatorname{Im}(e)\operatorname{Im}(q_{13}), \qquad (6)$$

where $\operatorname{Re}(\cdot)$ and $\operatorname{Im}(\cdot)$ return the real and imaginary parts of their complex argument, respectively.

Expressions (3), (4), and (6) can be combined to derive a fast algorithm for the computation of $\mathbf{Z}^{-1}$ and $\det(\mathbf{Z})$. Algorithm 2 details the efficient procedure. As input data, it requires only the upper triangular part of the input matrix $\mathbf{Z}$. The proposed algorithm returns (i) the upper triangular part of $\mathbf{Z}^{-1}$, from which the full inverse can be reconstructed due to the Hermitian property, and (ii) the determinant. The inverse of the determinant, which appears in all the aforementioned densities and divergences, is readily available as a by-product of Algorithm 2, where it is computed as an auxiliary variable. Notice that, although not explicitly stated, both Algorithm 1 and Algorithm 2 can be adapted to check whether the matrices subject to calculation are full-rank. This can be done by simply checking whether $\det(\mathbf{Z}) \neq 0$. This does not represent any significant overhead, since $\det(\mathbf{Z})$ is necessary for the computation anyway.

 

Require: the upper triangular part of the Hermitian matrix $\mathbf{Z}$
Ensure: the upper triangular part of $\mathbf{Z}^{-1}$ and $\det(\mathbf{Z})$
  1: Compute the cofactors of the upper triangular entries of $\mathbf{Z}$ as in (3)
  2: Compute the determinant $\det(\mathbf{Z})$ as in (6)
  3: Compute the inverse of the determinant: $1/\det(\mathbf{Z})$
  4: Compute the entries of $\mathbf{Z}^{-1}$: the upper triangular part of $\operatorname{adj}(\mathbf{Z})$ scaled by $1/\det(\mathbf{Z})$
  5: return $\mathbf{Z}^{-1}$ and $\det(\mathbf{Z})$

 

Alg. 2: Proposed algorithm for fast matrix inversion and determinant calculation.

3.2 Complexity Assessment and Discussion

Algorithm 1 requires the computation of the squared norm of six complex quantities, which demands 12 multiplications and six additions of real quantities [26]. Besides these operations, Algorithm 1 demands 44 multiplications, 12 additions, three inversions, three squarings, and three square roots of real quantities. This accounts for a total of 89 operations, as shown in Table 1. Note that the proposed Algorithm 2 outperforms Algorithm 1 not only in the total count of operations: in fact, it reduces the arithmetic cost for every operation listed in Table 1, except for additions, of which it requires only two extra.

Method      Mult.  Add.  Div.  Sqrt.  Total
Cholesky      59    24    3     3      89
Proposed      37    26    1     0      64
Table 1: Operation counts in terms of real arithmetic operations.

In contrast, the proposed Algorithm 2 requires four multiplications and two additions for the computation of each of the real-valued cofactors $q_{11}$, $q_{22}$, and $q_{33}$. It also requires ten multiplications and six additions for the computation of each of the complex-valued cofactors $q_{12}$, $q_{13}$, and $q_{23}$. The computation of $\det(\mathbf{Z})$ and its inverse demands five multiplications, four additions, and only one inversion. Because of that, Algorithm 2 requires a total of 37 multiplications, 26 additions, and one division of real quantities: a total of 64 operations, 28% less than the usual Cholesky factorization approach. The above quantities are summarized in Table 1.

3.3 Numerical Errors

Arithmetic logic units (ALUs) in contemporary processors are built to provide high accuracy for a wide range of arithmetic functions. Under certain conditions, the accuracy obtained for fundamental operations, such as addition and multiplication, can also be achieved for other arithmetic functions, such as division and square root. This is not obvious, since more complex operations require a number of additions and multiplications and are usually implemented through iterative methods [27]. However, in modern GPGPUs that follow the OpenCL and CUDA standards, such as the one employed in our computations, division and square-root operations are correctly rounded to the very last bit, as is also the case for addition and multiplication [28, cf. Table 7] [29, cf. Table 7.2].

The only operations required by Algorithm 1 and Algorithm 2 are addition, multiplication, division, and square root. Because all these operations are computed with the same precision, their numerical accuracies depend on the number of operations each algorithm requires. In particular, the quantities that require the largest number of operations are expected to dominate the numerical error; this is given by the critical path in terms of the number of arithmetic operations. A careful analysis of Algorithm 1 shows that one needs 37 operations along the critical path to obtain the desired output (there is more than one critical path). In the case of Algorithm 2, the critical path requires 22 operations. Because of that, the proposed method in Algorithm 2 can compute the matrix inverse and determinant with higher accuracy when compared with the method in Algorithm 1.

4 Implementation and Results

4.1 Material and Equipment

We implemented the proposed Algorithm 2 and the Cholesky factorization based approach using the open computing language (OpenCL) with the C application programming interface (API) [30], compiled for a general purpose graphical processing unit (GPGPU). The device used is an NVIDIA GeForce 940M, which contains three compute units and allows the data to be scheduled through a maximum of three different dimensions. The dimensions allow the simultaneous use of at most 1024, 1024, and 64 work items, respectively. The host side was implemented in C/C++. The host machine has a Core i7 microprocessor and runs the Ubuntu 16.04 LTS Linux distribution.

The communication between the proposed method and popular platforms for image processing, such as Matlab, ENVI, and NEST, can be performed by means of the interface capabilities offered by each vendor. For Matlab, this can be accomplished with MEX functions; ENVI or NEST can resort to direct calls to the compiled C/C++ code to interface with the proposed algorithm.

4.2 Data Setup

We employed both simulated and measured data for the numerical analysis.

4.2.1 Simulated Data

Simulated data were generated from random matrices obtained with the Eigen C/C++ library [31] through the method Eigen::MatrixXcd::Random. The real and imaginary parts of the matrix coefficients are drawn from the uniform distribution on the closed interval $[-1, 1]$. Each obtained matrix is then converted into a Hermitian matrix by computing its product with its own conjugate transpose [13].

4.2.2 Measured Data

For the measured data, we considered the data available in [19]. Such data were acquired by the Airborne Synthetic Aperture Radar (AIRSAR) sensor over San Francisco, CA, in fully polarimetric mode and L-band. The data were stored in .rdata format, native to the R language [32]. For recovering the data, we employed the RInside C/C++ library [33], which allows the binding of R and C/C++ applications. The data were converted to the appropriate types in C/C++ and then sent to the GPGPU using OpenCL scheduling runtime routines. The data used for this experiment consist of a 450×600-pixel region.

4.3 Methodology and Results

The OpenCL implementations of Algorithm 1 and Algorithm 2 were executed using all the compute units and their respective work items as a single dimension [30, pg. 41][34]. The GPGPU device was allowed to manage and use its preferred work-group size [30]. The computation was replicated several times in order to reduce the effect of operating system fluctuations when estimating the speedup.

For the simulated-data computation, the number of matrices per replicate was carefully selected to use the GPGPU device at the limit imposed by its memory available for allocation. Indeed, finite GPGPU registers (buffers) are dedicated to data interchange with the host device, and the buffer size sets an upper bound on the maximum allocated memory; this is a limiting factor in high performance applications. For the measured data, we submitted the matrices associated with the selected 450×600-pixel image.

Table 2 displays descriptive statistics of the computing time required by Algorithm 1 and Algorithm 2 for simulated and measured data. We also provide the ratio between the measures as a figure of merit for the speed gain. Figure 1 displays boxplots of the runtimes for the Cholesky-based approach and the proposed method, both for measured and simulated data. The time statistics for the simulated data are larger than those for the measured data because the two data sets contain different numbers of matrices. However, the ratios of the computation time statistics for the simulated and measured data are very close.

                 Simulated data             Measured data
Statistic   Cholesky  Proposed  Gain   Cholesky  Proposed  Gain
Minimum       498.89    223.06  2.23     24.68     11.18   2.21
Average       507.54    231.51  2.19     28.35     12.98   2.18
Maximum       556.79    273.08  2.03     62.60     28.59   2.18
Std. dev.       9.34      6.11   --       2.55      1.44    --
Table 2: Run-time execution statistics (minimum, average, maximum, and standard deviation) for matrix inversion and determinant evaluation. Measurements in milliseconds.
Figure 1: Boxplots for the computing time (in milliseconds) of the Cholesky based approach and the proposed method using simulated data (left) and measured data (right).

One may notice that PolSAR images can be larger than the images considered in this work. Indeed, images to be processed may not fit into the global memory of the particular GPGPU adopted for the data processing. In this case, the data can be streamed into the GPU so as to achieve its maximum capability. One can start from the top-left corner of the image and stream the data as it is fed into the GPGPU. After the maximum amount of data is sent, the data can be processed and then pulled out. The process is repeated until the whole image has been processed. The maximum capability naturally depends on the device specifications of the computational platform employed for processing the PolSAR images.

4.4 Discussion

Although Table 1 suggests an improvement of about 28%, i.e., roughly a one-quarter reduction in complexity, the actual improvement is much larger. Indeed, to correctly evaluate the speed improvements, one must take into account that inversion and square-root computations demand more GPU clock cycles than simple additions or multiplications. In particular, reducing the number of calls to the square-root operation proves significant: in modern processors, the square root is not computed directly but through the inverse square root [35], which requires a final division in order to obtain the sought result [36, 37, 38]. Therefore, the total impact of the arithmetic reduction in the proposed algorithm is not just the ratio of the operation counts of the Cholesky-based approach and the proposed fast algorithm; rather, this ratio is a conservative lower bound. Results in Table 2 confirm this analysis, showing a speedup factor of more than two.

5 Final Comments and Future Works

In this paper, we reviewed the Cholesky factorization method for the inversion and determinant calculation of $3\times 3$ Hermitian matrices in the context of PolSAR image processing. As the main contribution, we proposed a fast algorithm for the computation of the matrix inverse and determinant of such matrices. The proposed algorithm is based on the computation of the adjoint matrix and a careful examination of the quantities involved. The derived algorithm was compared to the usual Cholesky-based factorization approach and shown to significantly reduce the total number of arithmetic operations. The proposed algorithm and the Cholesky-based approach were both implemented on a GPGPU using OpenCL and C/C++. The proposed approach was shown to reduce the total computation time by about a half when compared to the Cholesky-based approach on real data; in other words, the proposed method offers a 120.7% improvement in speed, at least.

Future works may include the combination of several matrix inversions into a single matrix inversion call and the application to multifrequency PolSAR images [39]. In this particular case, the matrices whose inverses and determinants must be calculated have a block diagonal structure, where each block is also Hermitian. If the images are obtained with a single look, the inverse and the determinant of the matrices of interest can be obtained by applying the proposed method to each of the diagonal blocks.

Acknowledgments

Authors acknowledge CNPq, FAPEAL, and NSERC for the partial support.

References

  • [1] J. W. Goodman, Statistical Optics. Wiley Series in Pure & Applied Optics, 1985.
  • [2] S. N. Anfinsen, A. P. Doulgeris, and T. Eltoft, “Goodness-of-fit tests for multilook polarimetric radar data based on the Mellin transform,” IEEE Transactions on Geoscience and Remote Sensing, vol. 49, pp. 2764–2781, July 2011.
  • [3] L. Torres, S. J. S. Sant’Anna, C. da Costa Freitas, and A. C. Frery, “Speckle reduction in polarimetric SAR imagery with stochastic distances and nonlocal means,” Pattern Recogn., vol. 47, pp. 141–157, Jan. 2014.
  • [4] N. R. Goodman, “Statistical analysis based on a certain multivariate complex Gaussian distribution (an introduction),” Annals of Mathematical Statistics, vol. 34, pp. 152–177, Mar. 1963.
  • [5] S. H. Yueh, J. A. Kong, J. K. Jao, R. T. Shin, H. A. Zebker, T. L. Toan, and H. Ottl, “K-distribution and polarimetric terrain radar clutter,” Progress in Electromagnetics Research, vol. PIER 03, pp. 237–275, 1990.
  • [6] A. C. Frery, H. J. Muller, C. C. F. Yanasse, and S. J. S. Sant’Anna, “A model for extremely heterogeneous clutter,” IEEE Transactions on Geoscience and Remote Sensing, vol. 35, pp. 648–659, May 1997.
  • [7] C. C. Freitas, A. C. Frery, and A. H. Correia, “The polarimetric G distribution for SAR data analysis,” Environmetrics, vol. 16, pp. 13–31, Feb. 2005.
  • [8] A. P. Doulgeris, “An automatic U-distribution and Markov random field segmentation algorithm for PolSAR images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, pp. 1819–1827, Apr. 2015.
  • [9] A. D. C. Nascimento, R. J. Cintra, and A. C. Frery, “Hypothesis testing in speckled data with stochastic distances,” IEEE Transactions on Geoscience and Remote Sensing, vol. 48, pp. 373–385, Jan. 2010.
  • [10] K. Conradsen, A. A. Nielsen, J. Schou, and H. Skriver, “A test statistic in the complex Wishart distribution and its application to change detection in polarimetric SAR data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 41, pp. 4–19, Jan. 2003.
  • [11] H. Skriver, “Crop classification by multitemporal C- and L-band single- and dual-polarization and fully polarimetric SAR,” IEEE Transactions on Geoscience and Remote Sensing, 2012.
  • [12] Å. Björck, Numerical Methods in Matrix Computations. Texts in Applied Mathematics, Springer International Publishing, 2014.
  • [13] G. H. Golub and C. F. Van Loan, Matrix Computations. Baltimore, MD: Johns Hopkins University Press, 3rd ed., 1996.
  • [14] T. S. Shores, Applied Linear Algebra and Matrix Analysis. Lincoln, NE: Springer, 2007.
  • [15] N. Lukač and B. Žalik, “GPU-based roofs’ solar potential estimation using liDAR data,” Computers & Geosciences, vol. 52, no. Supplement C, pp. 34–41, 2013.
  • [16] M. Steinbach and R. Hemmerling, “Accelerating batch processing of spatial raster analysis using GPU,” Computers & Geosciences, vol. 45, no. Supplement C, pp. 212–220, 2012.
  • [17] X. Li, T. Huang, D.-T. Lu, and C. Niu, “Accelerating experimental high-order spatial statistics calculations using GPUs,” Computers & Geosciences, vol. 70, no. Supplement C, pp. 128–137, 2014.
  • [18] X. Li, C. Song, S. López, Y. Li, and J. F. López, “Fast computation of bare soil surface roughness on a fermi GPU,” Computers & Geosciences, vol. 82, no. Supplement C, pp. 38–44, 2015.
  • [19] The Polarimetric SAR Data Processing and Educational Tool, “PolSARpro database.” https://earth.esa.int/web/polsarpro/home. Information retrieved on Nov 08, 2017.
  • [20] F. Aquilante, P.-Å. Malmqvist, T. B. Pedersen†, A. Ghosh, and B. O. Roos, “Cholesky decomposition–based multiconfiguration second-order perturbation theory (CD-CASPT2): Application to the spin-state energetics of Co (diiminato)(NPh),” Journal of Chemical Theory and Computation, vol. 5, pp. 694–702, Apr. 2008.
  • [21] S. Wilson, “Universal basis sets and Cholesky decomposition of the two-electron integral matrix,” Computer Physics Communications, vol. 58, no. 1, pp. 71–81, 1990.
  • [22] D. S. Kershaw, “The incomplete Cholesky–conjugate gradient method for the iterative solution of systems of linear equations,” Journal of Computational Physics, vol. 26, no. 1, pp. 43–65, 1978.
  • [23] R. L. Graham, D. E. Knuth, and O. Patashnik, Concrete Mathematics. Reading, MA: Addison-Wesley Publishing Company, 1989.
  • [24] G. A. F. Seber, A Matrix Handbook for Statisticians. Wiley Series in Probability and Statistics, Hoboken, NJ: John Wiley & Sons, Inc., 2008.
  • [25] D. S. Watkins, Fundamentals of Matrix Computations. Pure and Applied Mathematics: A Wiley Series of Texts, Monographs and Tracts, Wiley, 2004.
  • [26] R. E. Blahut, Fast Algorithms for Digital Signal Processing. Cambridge University Press, 2010.
  • [27] U. W. Kulisch and W. L. Miranker, Computer Arithmetic in Theory and Practice. Academic Press, 1981.
  • [28] NVIDIA Corporation, CUDA C Programming Guide, 9.2.88 ed., May 2018.
  • [29] A. Munshi, The OpenCL Specification. Khronos Group, Nov. 2012. Version 1.2.
  • [30] Advanced Micro Devices, “Introduction to OpenCL™ programming,” May 2010.
  • [31] G. Guennebaud, B. Jacob, et al., “Eigen v3.” http://eigen.tuxfamily.org, 2010.
  • [32] R Development Core Team, R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2008. http://www.R-project.org/.
  • [33] D. Eddelbuettel, RInside. New York, NY: Springer New York, 2013.
  • [34] Advanced Micro Devices, “AMD APP SDK OpenCL™ user guide,” Aug. 2015.
  • [35] B. Parhami, Computer Arithmetic: Algorithms and Hardware Designs. Oxford, UK: Oxford University Press, 2nd ed., 2010.
  • [36] A. Hasnat, T. Bhattacharyya, A. Dey, S. Halder, and D. Bhattacharjee, “A fast FPGA based architecture for computation of square root and inverse square root,” in Devices for Integrated Circuit (DevIC), pp. 383–387, Mar. 2017.
  • [37] E. Libessart, M. Arzel, C. Lahuec, and F. Andriulli, “A scaling-less Newton–Raphson pipelined implementation for a fixed-point inverse square root operator,” in 15th IEEE International New Circuits and Systems Conference (NEWCAS), pp. 157–160, June 2017.
  • [38] A. H. Karp and P. Markstein, “High-precision division and square root,” ACM Transactions on Mathematical Software, vol. 23, pp. 561–589, Dec. 1997.
  • [39] A. C. Frery, A. H. Correia, and C. C. Freitas, “Classifying multifrequency fully polarimetric imagery with multiple sources of statistical evidence and contextual information,” IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 10, pp. 3098–3109, 2007.