Analytic Expressions for Stochastic Distances Between Relaxed Complex Wishart Distributions
The scaled complex Wishart distribution is a widely used model for multilook full polarimetric SAR data whose adequacy has been attested in the literature. Classification, segmentation, and image analysis techniques which depend on this model have been devised, and many of them employ some type of dissimilarity measure. In this paper we derive analytic expressions for four stochastic distances between relaxed scaled complex Wishart distributions in their most general form and in important particular cases. Using these distances, inequalities are obtained which lead to new ways of deriving the Bartlett and revised Wishart distances. The expressiveness of the four analytic distances is assessed with respect to the variation of parameters. Such distances are then used for deriving new test statistics, which are proved to have an asymptotic chi-square distribution. Adopting the test size as a comparison criterion, a sensitivity study is performed by means of Monte Carlo experiments suggesting that the Bhattacharyya statistic outperforms all the others. The power of the tests is also assessed. Applications to actual data illustrate the discrimination and homogeneity identification capabilities of these distances.
Polarimetric Synthetic Aperture Radar (PolSAR) devices transmit orthogonally polarized pulses towards a target, and the returned echo is recorded with respect to each polarization. Such remote sensing apparatus provides the means for a better capture of scene information when compared to its univariate counterpart, namely the conventional SAR technology, and complementary information with respect to other remote sensing modalities [1, 2].
PolSAR can achieve high spatial resolution due to its coherent processing of the returned echoes. Being multichannel by design, PolSAR also allows individual characterization of the targets in various channels. Moreover, it enables the identification of covariance structures among channels.
Images resulting from coherent systems are prone to a particular interference pattern called speckle. This phenomenon can seriously affect the interpretation of PolSAR imagery. Thus, specialized signal analysis techniques are usually required.
Segmentation, classification, boundary detection [6, 7], and change detection techniques often employ dissimilarity measures for data discrimination. Such measures have been used to quantify the difference between image regions, and are often called 'contrast measures'. The analytical derivation of contrast measures and their properties is an important avenue for image understanding. Methods based on numerical integration have several disadvantages with respect to closed formulas, such as lack of convergence of the iterative procedures and high computational cost. Stochastic distances between models for PolSAR data often require dealing with integrals whose domain is the set of all positive definite Hermitian matrices.
Goudail and Réfrégier applied stochastic measures to characterize the performance of target detection and segmentation algorithms in PolSAR image processing. In that study, both Kullback-Leibler and Bhattacharyya distances were considered as tools for quantifying the dissimilarity between circular complex Gaussian distributions. The Bhattacharyya measure was reported to possess better contrast capabilities than the Kullback-Leibler measure. However, the statistical properties of the measures were not explicitly considered in that work.
Erten et al.  derived a “coherent similarity” between PolSAR images based on the mutual information. Morio et al.  applied the Shannon entropy and Bhattacharyya distance for the characterization of polarimetric interferometric SAR images. They decomposed the Shannon entropy into the sum of three terms with physical meaning.
PolSAR theory prescribes that the returned (backscattered) signal of distributed targets is adequately represented by its complex covariance matrix. Goodman  presents a comprehensive analysis of complex Gaussian models, along with the connection between the class of complex covariance matrices and the Wishart distribution. Indeed, the complex scaled Wishart distribution is widely adopted as a statistical model for multilook full polarimetric data .
Conradsen et al. proposed a methodology based on the likelihood-ratio test for the discrimination of two Wishart distributed targets, leading to a test statistic that takes into account the complex covariance matrices of PolSAR images. Hypothesis tests for monopolarized SAR data have been proposed in a similar fashion.
In this paper, we present analytic expressions for the Kullback-Leibler, Rényi, Bhattacharyya, and Hellinger distances between scaled complex Wishart distributions in their most general form and in important particular cases. Frery et al. obtained analytic expressions for these distances, as well as for an additional one, and showed that the latter is numerically unstable. Therefore, tests based on that distance are not considered in the present work.
We also verify that those distances present scale invariance with respect to their covariance matrices. Using such distances, we derive inequalities which depend on covariance matrices; two among them, obtained from Kullback-Leibler and Hellinger distances, provide alternative forms for deriving the revised Wishart  and Bartlett  distances, respectively.
Besides advancing the comparison of samples by means of their covariance matrices, the proposed distances also provide a means of contrasting images rendered with different numbers of looks.
Considering the hypothesis test methodology proposed by Salicrú et al. , the derived distances are multiplied by a coefficient which involves the sizes of two samples of PolSAR images. The asymptotic and finite-sample behavior of the resulting quantities is studied.
In order to quantify the sensitivity of the distances, we perform Monte Carlo experiments in several possible scenarios. We illustrate the behavior of these distances and their associated hypothesis tests with actual data.
This paper unfolds as follows. Section II presents the scaled and the relaxed complex Wishart distributions and estimators for their parameters. Section III recalls the background of stochastic dissimilarities. Section IV presents the analytic expressions of distances between Wishart models, with a new way to derive the Bartlett and the revised Wishart distances. Section V illustrates the application of these distances in PolSAR image discrimination. Section VI concludes the paper.
II. The complex Wishart distribution
PolSAR sensors record intensity and relative phase data which can be presented as complex scattering matrices. In principle, these matrices consist of four complex elements, where the subscripts H and V refer to the horizontal and vertical wave polarization states, respectively. Under the conditions of the reciprocity theorem [18, 19], the two cross-polarized elements coincide. This scenario is realistic when natural targets are considered.
In general, we may consider systems with polarization elements, which constitute a complex random vector denoted by:
where the superscript ‘’ indicates vector transposition. In PolSAR image processing, is often admitted to obey the multivariate complex circular Gaussian distribution with zero mean , whose probability density function is:
where is the determinant of a matrix or the absolute value of a scalar, the superscript ‘’ denotes the complex conjugate transpose of a vector, is the covariance matrix of given by
and is the statistical expectation operator. Besides being Hermitian and positive definite, the covariance matrix contains all the necessary information to characterize the backscattered data under analysis .
In order to enhance the signal-to-noise ratio, independent and identically distributed (iid) samples are usually averaged in order to form the -looks covariance matrix :
where , , are realizations of (1). Under the aforementioned hypotheses, follows a scaled complex Wishart distribution. Having and as parameters, such law is characterized by the following probability density function:
where , , is the gamma function, and is the trace operator. This situation is denoted , and this distribution satisfies , which is a Hermitian positive definite matrix . In practice, is treated as a parameter and must be estimated. In , Anfinsen et al. removed the restriction . The resulting distribution has the same form as in (2) and is termed the relaxed Wishart distribution denoted as . This model accepts variations of along the image, and will be assumed henceforth.
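The multilook construction above is straightforward to simulate. The following sketch (illustrative names, not from the paper) draws a scaled complex Wishart matrix by averaging outer products of iid zero-mean circular complex Gaussian vectors:

```python
import numpy as np

def sample_scaled_wishart(sigma, looks, rng):
    """Draw one multilook covariance matrix Z = (1/L) sum_k y_k y_k^H,
    where the y_k are iid zero-mean circular complex Gaussian vectors
    with covariance matrix `sigma`."""
    p = sigma.shape[0]
    a = np.linalg.cholesky(sigma)          # sigma = a a^H
    # Standard circular complex Gaussian rows: unit variance per entry,
    # split evenly between real and imaginary parts.
    w = (rng.standard_normal((looks, p))
         + 1j * rng.standard_normal((looks, p))) / np.sqrt(2)
    y = w @ a.T                            # each row has covariance a a^H = sigma
    return (y.T @ y.conj()) / looks        # Hermitian by construction

rng = np.random.default_rng(42)
sigma = np.array([[2.0, 0.5 + 0.5j],
                  [0.5 - 0.5j, 1.0]])
z = sample_scaled_wishart(sigma, looks=8, rng=rng)
```

Each call returns a Hermitian positive definite matrix whose expectation is the covariance matrix, consistent with the property stated above.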
Due to its optimal asymptotic properties, the maximum likelihood (ML) estimation is employed to estimate and . Let be a random sample of size obeying the distribution. If (i) it is assumed that the parameter is a known quantity and (ii) the profile likelihood of is considered in terms of , we establish the following estimator for :
Deriving the profile likelihood from (2) with respect to we obtain:
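Since the resulting estimating equation has no closed-form solution, the estimate of the number of looks is obtained numerically. The sketch below (our names and optimizer choice, not the paper's) maximizes the profile log-likelihood of the scaled complex Wishart density directly, with the covariance fixed at its ML estimate, the sample mean:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

def profile_loglik(looks, n, p, mean_logdet, mean_trace, logdet_sigma):
    """Scaled complex Wishart log-likelihood as a function of the number
    of looks, with the covariance fixed at the sample mean.  Terms that
    do not depend on `looks` are dropped."""
    log_mgamma = sum(gammaln(looks - i) for i in range(p))  # up to a constant
    return n * (p * looks * np.log(looks) + looks * mean_logdet
                - looks * mean_trace - log_mgamma - looks * logdet_sigma)

def estimate_looks(zs, upper=100.0):
    """Numerical ML estimate of the equivalent number of looks."""
    sigma = np.mean(zs, axis=0)            # ML estimate of the covariance
    p = sigma.shape[0]
    sigma_inv = np.linalg.inv(sigma)
    mean_logdet = np.mean([np.linalg.slogdet(z)[1] for z in zs])
    mean_trace = np.mean([np.trace(sigma_inv @ z).real for z in zs])
    logdet_sigma = np.linalg.slogdet(sigma)[1]
    res = minimize_scalar(
        lambda L: -profile_loglik(L, len(zs), p, mean_logdet,
                                  mean_trace, logdet_sigma),
        bounds=(p - 1 + 1e-3, upper), method="bounded")
    return res.x
```

The lower bound reflects the requirement that the number of looks exceed p − 1 for the density to be well defined.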
Fig. 1 presents a polarimetric SAR image obtained by the EMISAR sensor over the surroundings of Foulum, Denmark. The informed (nominal) number of looks is . According to Skriver et al., the area exhibits three types of crops: (i) winter rape (B1), (ii) mixture of winter rape and winter wheat (B2), and (iii) beets (B3). Table I presents the resulting ML parameter estimates, as well as the sample sizes. The closest estimate to the nominal number of looks occurs at the most homogeneous scenario, i.e., with beets. Notice that two out of three ML estimates of the number of looks are higher than the nominal number of looks. Similar overestimation was also noticed by Anfinsen et al., who explained this phenomenon as an effect of the specular reflection on ocean scenarios. In our case, winter rape and, to a lesser extent, beets, appear smoother to the sensor than homogeneous targets.
Fig. 2 depicts the empirical densities of data samples from the selected regions. Additionally, the associated fitted marginal densities and are displayed for comparison. In this case, the scaled Wishart density collapses to a gamma density as demonstrated in :
for , where is the -th entry of , and is the -th entry of the random matrix . In order to assess the data fittings, Table II presents the Akaike information criterion (AIC) values and the sum of squares due to error (SSE) between the histogram of for , and the fitted densities with :
where # pixels denotes the number of considered pixels. This measure was used in . In all cases, the distribution presented the best fit for both measures. Table II also shows the Kolmogorov-Smirnov (KS) statistic and its p-value. It is consistent with the other results, i.e., the scaled Wishart distribution provides better descriptions of the data.
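As a concrete reading of this criterion (illustrative names; the paper's exact binning and normalization may differ), the sketch below compares the histogram of a simulated intensity channel with a fitted gamma marginal density at the bin centers:

```python
import numpy as np
from scipy import stats

def sse_fit(data, density, bins=100):
    """Sum of squared differences between a normalized histogram and a
    fitted density evaluated at the bin centers, divided by the number
    of pixels (one plausible reading of the SSE criterion)."""
    heights, edges = np.histogram(data, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return np.sum((heights - density(centers)) ** 2) / data.size

# Simulated intensity channel: gamma with shape = looks, mean = sigma_ii.
rng = np.random.default_rng(0)
looks, mean_intensity = 8.0, 2.0
sample = rng.gamma(shape=looks, scale=mean_intensity / looks, size=50_000)
good = sse_fit(sample, lambda t: stats.gamma.pdf(t, a=looks,
                                                 scale=mean_intensity / looks))
bad = sse_fit(sample, lambda t: stats.gamma.pdf(t, a=2.0,
                                                scale=mean_intensity / 2.0))
```

A smaller SSE indicates a better fit; here the correctly specified density yields a much smaller value than a badly misspecified one.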
The most accurate fit is in region B2. The equivalent number of looks in this region is slightly smaller than the nominal one, as expected. These samples will be used to validate our proposed methods in Section IV-D.
B1 | 8401.105 | 8354.817 | 58.169 | 76.853 | 0.070 (0.325) | 0.051 (0.006)
B2 | 4973.743 | 4987.579 | 1.245 | 1.487 | 0.018 (0.789) | 0.034 (0.110)
B3 | 13725.650 | 13734.330 | 2362.108 | 2836.599 | 0.063 (2.383) | 0.059 (6.435)
III. Stochastic dissimilarities
In the following we adhere to the convention that a “divergence” is any non-negative function between two probability measures which obeys the identity of definiteness property [27, ch. 11, p. 328]. If the function is also symmetric, it is called a “distance”. Finally, we understand “metric” as a distance which also satisfies the triangular inequality [28, ch. 1 and 14].
An image can be understood as a set of regions, in which the enclosed pixels are observations of random variables following a certain distribution. Therefore, stochastic dissimilarity measures can be used as image processing tools, since they may be able to assess the difference between the distributions that describe different image areas . Dissimilarity measures were submitted to a systematic and comprehensive treatment in [29, 30] and, as a result, the class of -divergences was proposed .
Assume that and are random matrices associated with densities and , respectively, where and are parameter vectors. The densities are assumed to share a common support : the cone of Hermitian positive definite matrices . The -divergence between and is defined by
where is a strictly increasing function with , is a convex function, and indeterminate forms are assigned the value zero (we assume the conventions (i) , (ii) , and, for , (iii) ) [32, pp. 31]. In particular, Ali and Silvey  proposed a detailed discussion about the function . The differential element is given by
where is the -th entry of matrix , and operators and return real and imaginary parts of their arguments, respectively .
Well-known divergences arise from particular choices of these two functions. Among them, we examine the following: (i) Kullback-Leibler, (ii) Rényi, (iii) Bhattacharyya, and (iv) Hellinger. Since the triangular inequality is not necessarily satisfied, not every divergence measure is a metric. Additionally, some of these divergence measures are not symmetric. Nevertheless, such tools are mathematically appropriate for comparing the distributions of random variables. The following symmetrized expression has been suggested as a solution for this issue:
Functions are distances over since, for all , the following properties hold:
Table III shows the pairs of functions which lead to the distances considered in this work.
In the following we discuss integral expressions of these distances. For simplicity, we suppress the explicit dependence on the arguments, recalling that the integration is with respect to the matrix argument over the cone of Hermitian positive definite matrices.
The Rényi distance of order :
The distance proves to be more algebraically tractable than for some manipulations with the complex Wishart density. Thus, we use the former in subsequent analyses.
The Bhattacharyya distance:
Goudail et al.  showed that this distance is an efficient tool for contrast definition in algorithms for image processing.
The Hellinger distance:
Estimation methods based on the minimization of have been successfully employed in the context of stochastic differential equations . This is the only bounded distance among the ones considered in this paper.
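These integral expressions can be evaluated numerically in simple cases. As an illustration (not from the paper), the sketch below computes the two directed Kullback-Leibler divergences between gamma marginals of the scaled Wishart model by numerical integration and symmetrizes them; for equal shapes the logarithmic terms cancel, so the result has the closed form 0.5·L·(s1/s2 + s2/s1 − 2), which serves as a check:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def kl_divergence(d1, d2):
    """Directed Kullback-Leibler divergence between two scipy frozen
    distributions supported on (0, inf), by numerical integration.
    Log-densities keep the tails numerically stable."""
    def integrand(t):
        p = d1.pdf(t)
        return 0.0 if p == 0.0 else p * (d1.logpdf(t) - d2.logpdf(t))
    return quad(integrand, 0.0, np.inf)[0]

def kl_distance(d1, d2):
    """Symmetrized form: the mean of the two directed divergences."""
    return 0.5 * (kl_divergence(d1, d2) + kl_divergence(d2, d1))

# Gamma marginals of the scaled Wishart model:
# shape = number of looks, scale = (diagonal covariance entry) / looks.
looks = 8.0
f = stats.gamma(a=looks, scale=2.0 / looks)
g = stats.gamma(a=looks, scale=3.0 / looks)
```

The symmetrized quantity is nonnegative, vanishes when both arguments coincide, and is symmetric by construction, as required of a distance.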
When considering the distance between particular cases of the same distribution, only the parameters are relevant. In this case, the parameter vectors replace the random variables as arguments of the discussed distances. This notation is in agreement with the literature.
In the following, the hypothesis test based on stochastic distances proposed by Salicrú et al.  is introduced. Let -point vectors and be the ML estimators of parameters and based on independent samples of sizes and , respectively. Under the regularity conditions discussed in [17, p. 380] the following lemma holds:
If and , then
where “” denotes convergence in distribution and represents the chi-square distribution with degrees of freedom.
Based on Lemma 1, statistical hypothesis tests for the null hypothesis can be derived in the form of the following proposition.
Let and be large and , then the null hypothesis can be rejected at level if .
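Proposition 1 translates into a few lines of code. The sketch below assumes the usual normalization 2mn/(m+n) and a distance already divided by the constant h'(0)·φ''(1) appearing in Lemma 1; names are illustrative:

```python
from scipy.stats import chi2

def salicru_test(distance, m, n, dof, level=0.01):
    """Asymptotic chi-square test from a stochastic distance between ML
    estimates computed on two samples of sizes m and n.  The distance is
    assumed already divided by the constant h'(0) * phi''(1) of Lemma 1."""
    statistic = 2.0 * m * n / (m + n) * distance
    p_value = chi2.sf(statistic, df=dof)          # survival function of chi2
    return statistic, p_value, bool(p_value < level)

# e.g. dof = 10 for nine real covariance parameters plus the number of looks
stat, p, reject = salicru_test(distance=0.012, m=121, n=121, dof=10)
```

Large distances (relative to the sample sizes) yield large statistics and small p-values, leading to rejection of the null hypothesis of equal parameters.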
We denote the statistics based on the Kullback-Leibler, Rényi, Bhattacharyya, and Hellinger distances as , , , and , respectively.
IV. Analytic expressions, sensitivity, inequalities, and finite sample size behavior
In the following, analytic expressions for the stochastic distances , , , and between two relaxed complex Wishart distributions are derived (Section IV-A). We examine the special cases in terms of the parameter values: (i) and , which correspond to the most general case, (ii) same equivalent number of looks and different covariance matrices , and (iii) same covariance matrix and different equivalent number of looks . Case (ii) is likely to be the most frequently used in practice since it allows the comparison of two possibly different areas from the same image. Case (iii) allows the assessment of a change in distribution due only to multilook processing on the same area.
The sensitivity of the tests to variations of parameters is qualitatively assessed and discussed in Section IV-B.
In Section IV-C we derive inequalities which these distances must obey. These inequalities lead to the Bartlett and revised Wishart distances in a simpler way than the well-known derivation available in the literature. The distances are also shown to be scale invariant with respect to the covariance matrix.
The performance of the tests for finite size samples is quantified by means of (i) Monte Carlo simulation and (ii) true data analysis in Section IV-D.
IV-A Analytic expressions
IV-A1 Kullback-Leibler distance
IV-A2 Rényi distance of order
- Case (i):
where , for , and .
- Case (ii):
- Case (iii):
IV-A3 Bhattacharyya distance
- Case (i):
- Case (ii):
- Case (iii):
IV-A4 Hellinger distance
- Case (i):
- Case (ii):
- Case (iii):
IV-B Sensitivity analysis
Now we examine the behavior of the statistics presented in Lemma 1 with respect to parameter variations, i.e., under the alternative hypotheses. These statistics are directly comparable since they all have the same asymptotic distribution. Two simple alternative hypotheses are illustrated: changes in an entry of the diagonal of the covariance matrix, and changes in the number of looks.
Firstly, we assumed looks, and , where
Since the covariance matrix is Hermitian, only the upper triangle and the diagonal are displayed. The fixed covariance matrix was previously reported in the literature for PolSAR data of forested areas.
Fig. 3(a) shows the statistics for . They present roughly the same behavior.
Secondly, we considered fixed covariance matrices with varying equivalent number of looks: and , for . Fig. 3(b) shows the statistics. It is noticeable that the test statistics are steeper to the left of the minimum. The number of looks, being a shape parameter, alters the distribution in a nonlinear fashion. Such change is perceived visually and by distance measures, and it is more intense for low values of the parameter. In other words, the difference between and , for any fixed and any , becomes smaller when increases.
IV-C Invariance and inequalities
The derived distances are invariant under scalings of the covariance matrix . In fact, it can be shown that
where is a positive real value and . This fact stems directly from the mathematical definition of these distances.
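This invariance can be checked numerically. The sketch below assumes the equal-looks symmetrized Kullback-Leibler distance has the closed form L [tr(Σ1⁻¹Σ2 + Σ2⁻¹Σ1)/2 − p], which follows from the zero-mean complex Gaussian case and should agree with case (ii) of Section IV-A:

```python
import numpy as np

def kl_distance_wishart(sigma1, sigma2, looks):
    """Symmetrized Kullback-Leibler distance between two scaled complex
    Wishart laws with the same number of looks, under the closed form
    assumed in the lead-in (derived from the complex Gaussian case)."""
    p = sigma1.shape[0]
    cross = (np.trace(np.linalg.solve(sigma1, sigma2)).real
             + np.trace(np.linalg.solve(sigma2, sigma1)).real)
    return looks * (cross / 2.0 - p)

sigma1 = np.array([[2.0, 0.4 + 0.1j], [0.4 - 0.1j, 1.0]])
sigma2 = np.array([[1.5, 0.1 - 0.2j], [0.1 + 0.2j, 1.2]])
d = kl_distance_wishart(sigma1, sigma2, looks=8)
# Scaling both covariance matrices by the same positive factor
# leaves the distance unchanged.
d_scaled = kl_distance_wishart(3.7 * sigma1, 3.7 * sigma2, looks=8)
```

The scale factor cancels inside each matrix product, which is the algebraic reason for the invariance stated above.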
De Maio and Alfano  derived a new estimator for the covariance matrix under the complex Wishart model using inequalities relating the sought parameters. In the following we derive new inequalities for this model. Due to the major role of the covariance matrix in polarimetry , we limit our analysis to inequalities that depend on .
Case (ii) described in previous subsections paved the way for the new inequalities. The following results stem from the nonnegativity of the four distances:
The revised Wishart and Bartlett distances can be obtained in a new and simple manner. Indeed, the revised Wishart distance can be derived after simple manipulations of inequality (11), yielding:
The Bartlett distance arises after taking the logarithm of both sides of inequality (14). Straightforward algebra leads to:
IV-D Finite sample size behavior
We assessed the influence of estimation on the size of the new hypothesis tests using simulated data. To that end, the study was conducted considering the following simulation parameters: number of looks and the forest covariance matrix shown in (10) with . The sample sizes relate to square windows of size , , and pixels, i.e., . Nominal significance levels were verified.
Let be the number of Monte Carlo replicas and the number of cases for which the null hypothesis is rejected at nominal level . The empirical test size is given by . Following the methodology described in , we employed replicas.
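The computation of the empirical size can be sketched as follows; the replica count and degrees of freedom here are illustrative, and exact chi-square draws stand in for the test statistics under the null hypothesis:

```python
import numpy as np
from scipy.stats import chi2

def empirical_size(statistics, dof, level):
    """Fraction of Monte Carlo replicas whose test statistic exceeds the
    chi-square critical value: the empirical test size under H0."""
    critical = chi2.ppf(1.0 - level, df=dof)
    return np.mean(np.asarray(statistics) > critical)

# Under H0 the statistics are asymptotically chi-square; with exact
# chi-square draws the empirical size should sit near the nominal level.
rng = np.random.default_rng(1)
replicas = rng.chisquare(df=10, size=5500)   # illustrative replica count
size_hat = empirical_size(replicas, dof=10, level=0.01)
```

With finite samples and estimated parameters, the empirical size of the actual tests deviates from the nominal level, which is precisely what Table IV quantifies.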
Table IV presents the empirical test sizes at both nominal levels, the execution time in milliseconds, the test statistic mean, and the coefficient of variation (CV). All numerical calculations and the execution time quantification were performed on a PC with an Intel Core 2 Duo processor at 2.10 GHz, 4 GB of RAM, Windows XP, and the R platform v. 2.8.1. For each case, the best obtained empirical sizes and distance means are in boldface. Results for the remaining configurations are consistent with the ones shown, and are omitted for brevity.
We tested ten parameters: nine related to the covariance matrix and one to the number of looks, leading to test statistics which asymptotically follow chi-square distributions with ten degrees of freedom. Thus, the statistics' sample means should converge in probability to ten, as a consequence of the weak law of large numbers. In Table IV, notice that the statistic mean tends to ten as the sample size increases. By fixing the sample size while varying the number of looks, the test sizes obey the ordering illustrated in Fig. 3. These inequalities suggest that, for this study, the statistic based on the Kullback-Leibler distance is the best discrimination measure.
Regarding execution times, the Kullback-Leibler-based test presented the best performance, while the test based on the Hellinger distance showed the best empirical test size in 6 out of 18 cases.
(i) split the sample into disjoint blocks of a given size;
(ii) for each block from (i), split the remaining sample into disjoint blocks of a second size;
(iii) perform the hypothesis test as described in Proposition 1 for each pair of samples so obtained.
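The splitting scheme above can be sketched as follows (illustrative names; the block sizes are parameters):

```python
import numpy as np

def disjoint_pairs(sample, size_a, size_b):
    """Split `sample` into disjoint blocks of `size_a`; for each such
    block, split the remaining observations into disjoint blocks of
    `size_b`, and yield every (block_a, block_b) pair for testing."""
    n = len(sample)
    for sa in range(0, n - size_a + 1, size_a):
        block_a = sample[sa:sa + size_a]
        rest = np.concatenate([sample[:sa], sample[sa + size_a:]])
        for sb in range(0, len(rest) - size_b + 1, size_b):
            yield block_a, rest[sb:sb + size_b]
```

Each yielded pair consists of two disjoint subsamples, so under homogeneity the null hypothesis of Proposition 1 holds for every pair.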
Table V presents the results, omitting some entries as in Table IV. All test sizes were smaller than the nominal level, i.e., the proposed tests do not reject the null hypothesis when similar samples are considered.
We also studied the power of the tests: in each Monte Carlo experiment, two samples of random matrices of equal size were drawn from two relaxed Wishart distributions. The covariance matrix of the first is given in (10); the experiment consists in contrasting samples from the relaxed Wishart distribution it indexes with samples from the law indexed by an arbitrarily chosen scaled version of it. Subsequently, it was verified whether these samples come from similar populations according to Proposition 1. The empirical test power is given by the fraction of replicas for which the null hypothesis is rejected at the nominal level. Fig. 4 presents these estimates of the test power. Notice that the discrimination ability is roughly the same for all the tests.
In general terms, the proposed hypothesis tests presented good power even for small samples, being able to discriminate between covariance matrices which differ only slightly. As the sample size increases, all the tests discriminate increasingly well.