Analytic Expressions for Stochastic Distances Between Relaxed Complex Wishart Distributions
Abstract
The scaled complex Wishart distribution is a widely used model for multilook full polarimetric SAR data whose adequacy has been attested in the literature. Classification, segmentation, and image analysis techniques which depend on this model have been devised, and many of them employ some type of dissimilarity measure. In this paper we derive analytic expressions for four stochastic distances between relaxed scaled complex Wishart distributions in their most general form and in important particular cases. Using these distances, inequalities are obtained which lead to new ways of deriving the Bartlett and revised Wishart distances. The expressiveness of the four analytic distances is assessed with respect to the variation of parameters. Such distances are then used for deriving new test statistics, which are proved to have an asymptotic chi-square distribution. Adopting the test size as a comparison criterion, a sensitivity study is performed by means of Monte Carlo experiments, suggesting that the Bhattacharyya statistic outperforms all the others. The power of the tests is also assessed. Applications to actual data illustrate the discrimination and homogeneity identification capabilities of these distances.
I Introduction
Polarimetric Synthetic Aperture Radar (PolSAR) devices transmit orthogonally polarized pulses towards a target, and the returned echo is recorded with respect to each polarization. Such remote sensing apparatus provides the means for a better capture of scene information when compared to its univariate counterpart, namely the conventional SAR technology, and complementary information with respect to other remote sensing modalities [1, 2].
PolSAR can achieve high spatial resolution due to its coherent processing of the returned echoes [3]. Being multichanneled by design, PolSAR also allows individual characterization of the targets in various channels. Moreover, it enables the identification of covariance structures among channels.
Resulting images from coherent systems are prone to a particular interference pattern called speckle [3]. This phenomenon can seriously affect the interpretation of PolSAR imagery [2]. Thus, specialized signal analysis techniques are usually required.
Segmentation [4], classification [5], boundary detection [6, 7], and change detection [8] techniques often employ dissimilarity measures for data discrimination. Such measures have been used to quantify the difference between image regions, and are often called ‘contrast measures’. The analytical derivation of contrast measures and their properties is an important venue for image understanding. Methods based on numerical integration have several disadvantages with respect to closed formulas, such as lack of convergence of the iterative procedures, and high computational cost. Stochastic distances between models for PolSAR data often require dealing with integrals whose domain is the set of all positive definite Hermitian matrices.
Goudail and Réfrégier [9] applied stochastic measures to characterize the performance of target detection and segmentation algorithms in PolSAR image processing. In that study, both the Kullback-Leibler and Bhattacharyya distances were considered as tools for quantifying the dissimilarity between circular complex Gaussian distributions. The Bhattacharyya measure was reported to possess better contrast capabilities than the Kullback-Leibler measure. However, the statistical properties of the measures were not explicitly considered in that work.
Erten et al. [10] derived a “coherent similarity” between PolSAR images based on the mutual information. Morio et al. [11] applied the Shannon entropy and Bhattacharyya distance for the characterization of polarimetric interferometric SAR images. They decomposed the Shannon entropy into the sum of three terms with physical meaning.
PolSAR theory prescribes that the returned (backscattered) signal of distributed targets is adequately represented by its complex covariance matrix. Goodman [12] presents a comprehensive analysis of complex Gaussian models, along with the connection between the class of complex covariance matrices and the Wishart distribution. Indeed, the complex scaled Wishart distribution is widely adopted as a statistical model for multilook full polarimetric data [2].
Conradsen et al. [13] proposed a methodology based on the likelihood-ratio test for the discrimination of two Wishart distributed targets, leading to a test statistic that takes into account the complex covariance matrices of PolSAR images. In a similar fashion, hypothesis tests for monopolarized SAR data were proposed in [14].
In this paper, we present analytic expressions for the Kullback-Leibler, Rényi (of order β), Bhattacharyya, and Hellinger distances between scaled complex Wishart distributions in their most general form and in important particular cases. Frery et al. [15] obtained analytic expressions for these distances, as well as for the χ² distance, and showed that the last one is numerically unstable. Therefore, in the present work, tests based on the χ² distance were not considered.
We also verify that those distances present scale invariance with respect to their covariance matrices. Using such distances, we derive inequalities which depend on the covariance matrices; two among them, obtained from the Kullback-Leibler and Hellinger distances, provide alternative forms for deriving the revised Wishart [16] and Bartlett [5] distances, respectively.
Besides advancing the comparison of samples by means of their covariance matrices, the proposed distances are a venue for contrasting images rendered by different numbers of looks.
Considering the hypothesis test methodology proposed by Salicrú et al. [17], the derived distances are multiplied by a coefficient which involves the sizes of two samples of PolSAR images. The asymptotic and finitesample behavior of the resulting quantities is studied.
In order to quantify the sensitivity of the distances, we perform Monte Carlo experiments in several possible scenarios. We illustrate the behavior of these distances and their associated hypothesis tests with actual data.
This paper unfolds as follows. Section II presents the scaled and the relaxed complex Wishart distributions and estimators for their parameters. Section III recalls the background of stochastic dissimilarities. Section IV presents the analytic expressions of distances between Wishart models, with a new way to derive the Bartlett and the revised Wishart distances. Section V illustrates the application of these distances in PolSAR image discrimination. Section VI concludes the paper.
II The complex Wishart distribution
PolSAR sensors record intensity and relative phase data which can be presented as complex scattering matrices. In principle, these matrices consist of four complex elements S_HH, S_HV, S_VH, and S_VV, where H and V refer to the horizontal and vertical wave polarization states, respectively. Under the conditions of the reciprocity theorem [18, 19], we have that S_HV = S_VH. This scenario is realistic when natural targets are considered [13].
In general, we may consider systems with p polarization elements, which constitute a complex random vector denoted by:

s = [S_1, S_2, …, S_p]^T,   (1)
where the superscript 'T' indicates vector transposition. In PolSAR image processing, s is often admitted to obey the multivariate complex circular Gaussian distribution with zero mean [12], whose probability density function is:

f(s; Σ) = exp(−s^H Σ^{-1} s) / (π^p |Σ|),
where |·| is the determinant of a matrix or the absolute value of a scalar, the superscript 'H' denotes the complex conjugate transpose of a vector, and Σ is the covariance matrix of s, given by

Σ = E[s s^H],

where E[·] is the statistical expectation operator. Besides being Hermitian and positive definite, the covariance matrix Σ contains all the necessary information to characterize the backscattered data under analysis [2].
In order to enhance the signal-to-noise ratio, L independent and identically distributed (iid) samples are usually averaged in order to form the L-looks covariance matrix [20]:

Z = (1/L) ∑_{k=1}^{L} s_k s_k^H,

where s_k, k = 1, 2, …, L, are realizations of (1). Under the aforementioned hypotheses, Z follows a scaled complex Wishart distribution. Having Σ and L as parameters, such law is characterized by the following probability density function:
f_Z(Z'; Σ, L) = L^{pL} |Z'|^{L−p} exp(−L tr(Σ^{-1} Z')) / (|Σ|^L Γ_p(L)),   (2)

where L ≥ p, Γ_p(L) = π^{p(p−1)/2} ∏_{i=0}^{p−1} Γ(L − i) is the multivariate gamma function, Γ(·) is the gamma function, and tr(·) is the trace operator. This situation is denoted Z ~ W(Σ, L), and this distribution satisfies E[Z] = Σ, which is a Hermitian positive definite matrix [20]. In practice, L is treated as a parameter and must be estimated. In [21], Anfinsen et al. removed the restriction L ≥ p. The resulting distribution has the same form as in (2) and is termed the relaxed Wishart distribution, denoted W_R(Σ, L). This model accepts variations of L along the image, and will be assumed henceforth.
Due to its optimal asymptotic properties, maximum likelihood (ML) estimation is employed to estimate Σ and L. Let {Z_1, Z_2, …, Z_N} be a random sample of size N obeying the W_R(Σ, L) distribution. If (i) it is assumed that the parameter L is a known quantity and (ii) the profile likelihood is considered in terms of Σ, we establish the following estimator for Σ [22]:

Σ̂ = (1/N) ∑_{k=1}^{N} Z_k.
Deriving the profile likelihood from (2) with respect to L, we obtain:

p ln L + (1/N) ∑_{k=1}^{N} ln |Z_k| − ln |Σ̂| − ψ_p^{(0)}(L) = 0,   (3)

where ψ_p^{(0)}(L) = ∑_{i=0}^{p−1} ψ(L − i) and ψ(·) is the digamma function [23, p. 258]. Thus, the solution of the above nonlinear equation provides the ML estimator L̂. Several estimation methods for L are discussed in [20].
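For the single-channel case (p = 1), in which the scaled Wishart law collapses to a gamma distribution, the estimation scheme above can be sketched as follows. The digamma implementation and the bisection solver are illustrative choices of ours, not part of the original method:

```python
import math
import random

def digamma(x):
    """Digamma via the recurrence psi(x) = psi(x + 1) - 1/x plus an asymptotic tail."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252))

def ml_looks(sample):
    """Solve  ln L - psi(L) + mean(ln z) - ln(mean z) = 0  for L by bisection.

    This is the p = 1 reduction of the profile-likelihood equation (3):
    the covariance estimate collapses to the sample mean."""
    mean_log = sum(math.log(z) for z in sample) / len(sample)
    log_mean = math.log(sum(sample) / len(sample))
    def g(L):
        return math.log(L) - digamma(L) + mean_log - log_mean
    lo, hi = 0.1, 200.0          # g is strictly decreasing in L on this bracket
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(42)
L_true, sigma = 8.0, 2.5
# z ~ Gamma(shape = L, scale = sigma/L) has mean sigma: the 1-D scaled Wishart
data = [random.gammavariate(L_true, sigma / L_true) for _ in range(5000)]
L_hat = ml_looks(data)
```

With p = 1 the equation loses the determinants and the multivariate digamma, so the sketch doubles as a check of the general expression in a setting where ground truth is easy to simulate.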
Fig. 1 presents a polarimetric SAR image obtained by the EMISAR sensor over the surroundings of Foulum, Denmark. The informed (nominal) number of looks is 8. According to Skriver et al. [24], the area exhibits three types of crops: (i) winter rape (B1), (ii) a mixture of winter rape and winter wheat (B2), and (iii) beets (B3). Table I presents the resulting ML parameter estimates, as well as the sample sizes. The closest estimate of L to the nominal number of looks occurs at the most homogeneous scenario, i.e., with beets. Notice that two out of three ML estimates of the number of looks are higher than the nominal number of looks. Similar overestimation was also noticed by Anfinsen et al. [20], who explained this phenomenon as an effect of the specular reflection on ocean scenarios. In our case, winter rape and, to a lesser extent, beets, appear smoother to the sensor than homogeneous targets.
Regions   ML estimates        # pixels
B1        9.216     2.507     1131
B2        7.200     5.717     1265
B3        8.555     4.114     1155
Fig. 2 depicts the empirical densities of data samples from the selected regions. Additionally, the associated fitted marginal densities under the scaled and the relaxed Wishart models are displayed for comparison. In this case, the scaled Wishart density collapses to a gamma density, as demonstrated in [25]:

f(z; σ_i, L) = L^L z^{L−1} exp(−L z / σ_i) / (σ_i^L Γ(L)),
for z > 0, where σ_i is the ith diagonal entry of Σ and z is an observation of the ith diagonal entry of the random matrix Z. In order to assess the data fittings, Table II presents the Akaike information criterion (AIC) values and the sum of squares due to error (SSE) between the histogram of the observed diagonal entries and the fitted densities, normalized by the number of considered pixels; this measure was used in [26]. In all cases, the relaxed Wishart distribution presented the best fit for both measures. Table II also shows the Kolmogorov-Smirnov (KS) statistic and its p-value. It is consistent with the other results, i.e., the relaxed model provides better descriptions of the data.
The most accurate fit is in region B2. The equivalent number of looks in this region is slightly smaller than the nominal one, as expected. These samples will be used to validate our proposed methods in Section IV-D.
Regions | AIC                   | SSE                  | KS (p-value)
B1      | 8401.105   8354.817   | 58.169     76.853    | 0.070 (0.325)   0.051 (0.006)
B2      | 4973.743   4987.579   | 1.245      1.487     | 0.018 (0.789)   0.034 (0.110)
B3      | 13725.650  13734.330  | 2362.108   2836.599  | 0.063 (2.383)   0.059 (6.435)
III Stochastic dissimilarities
In the following we adhere to the convention that a “divergence” is any nonnegative function between two probability measures which obeys the identity of definiteness property [27, ch. 11, p. 328]. If the function is also symmetric, it is called a “distance”. Finally, we understand “metric” as a distance which also satisfies the triangular inequality [28, ch. 1 and 14].
An image can be understood as a set of regions, in which the enclosed pixels are observations of random variables following a certain distribution. Therefore, stochastic dissimilarity measures can be used as image processing tools, since they may be able to assess the difference between the distributions that describe different image areas [14]. Dissimilarity measures were submitted to a systematic and comprehensive treatment in [29, 30] and, as a result, the class of (h, φ)-divergences was proposed [17].
Assume that X and Y are random matrices associated with densities f_X(Z'; θ1) and f_Y(Z'; θ2), respectively, where θ1 and θ2 are parameter vectors. The densities are assumed to share a common support A: the cone of Hermitian positive definite matrices [31]. The (h, φ)-divergence between X and Y is defined by

D_φ^h(X, Y) = h( ∫_A φ( f_X(Z'; θ1) / f_Y(Z'; θ2) ) f_Y(Z'; θ2) dZ' ),   (4)
where h: (0, ∞) → [0, ∞) is a strictly increasing function with h(0) = 0, φ: (0, ∞) → [0, ∞) is a convex function, and indeterminate forms are assigned the value zero (we assume the conventions (i) 0 · φ(0/0) = 0, (ii) φ(0) = lim_{x→0+} φ(x), and, for a > 0, (iii) 0 · φ(a/0) = a · lim_{t→∞} φ(t)/t) [32, pp. 31]. In particular, Ali and Silvey [29] proposed a detailed discussion about the function φ. The differential element is given by
dZ' = ∏_{i=1}^{p} dz'_{ii} ∏_{i<j} dRe(z'_{ij}) dIm(z'_{ij}),

where z'_{ij} is the (i, j)th entry of matrix Z', and the operators Re(·) and Im(·) return the real and imaginary parts of their arguments, respectively [12].
Well-known divergences arise after adequate choices of h and φ. Among them, we examined the following: (i) Kullback-Leibler [33], (ii) Rényi, (iii) Bhattacharyya [34], and (iv) Hellinger [14]. As the triangular inequality is not necessarily satisfied, not every divergence measure is a metric [35]. Additionally, the symmetry property is not satisfied by some of these divergence measures. Nevertheless, such tools are mathematically appropriate for comparing the distributions of random variables [36]. The following symmetrization has been suggested as a possible solution for this issue [33]:

d(X, Y) = [D(X, Y) + D(Y, X)] / 2.
Functions d(·, ·) obtained in this manner are distances over the space of distributions supported on A since, for all X and Y, the following properties hold:

- d(X, Y) ≥ 0 (nonnegativity);
- d(X, Y) = d(Y, X) (symmetry);
- d(X, Y) = 0 if and only if X and Y are identically distributed (definiteness).
Table III shows the functions h and φ which lead to the distances considered in this work.

Distance
Kullback-Leibler
Rényi (order β)
Bhattacharyya
Hellinger
In the following we discuss integral expressions of these distances. For simplicity, we suppress the explicit dependence on θ1 and θ2, reminding that the integration is with respect to Z' on A.

The Rényi distance of order β:

d_R^β(X, Y) = (1/(β − 1)) ln[ (∫ f_X^β f_Y^{1−β} dZ' + ∫ f_X^{1−β} f_Y^β dZ') / 2 ],

where 0 < β < 1. The Rényi divergence has been used for analysing geometric characteristics with respect to probability laws [38]. By the Fejér inequality [39], this distance is bounded above by the arithmetic mean of the two directed Rényi divergences. The d_R^β form proves to be more algebraically tractable for some manipulations with the complex Wishart density; thus, we use it in subsequent analyses.

The Bhattacharyya distance:

d_B(X, Y) = − ln ∫ √(f_X f_Y) dZ'.
Goudail et al. [40] showed that this distance is an efficient tool for contrast definition in algorithms for image processing.

The Hellinger distance:

d_H(X, Y) = 1 − ∫ √(f_X f_Y) dZ'.
Estimation methods based on the minimization of have been successfully employed in the context of stochastic differential equations [41]. This is the only bounded distance among the ones considered in this paper.
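For p = 1 the integral expressions above can be evaluated directly, which gives a numerical check of the relation d_H = 1 − exp(−d_B); the grid, the parameter values, and the closed-form reference (our own reduction for the equal-looks case) are arbitrary choices:

```python
import math

def gamma_pdf(z, sigma, L):
    """1-D scaled Wishart density: Gamma with shape L and mean sigma."""
    return (L ** L) * z ** (L - 1) * math.exp(-L * z / sigma) / (sigma ** L * math.gamma(L))

def bhattacharyya_numeric(s1, s2, L, n=100000, zmax=50.0):
    """d_B = -ln integral of sqrt(f_X f_Y), by the midpoint rule on (0, zmax)."""
    h = zmax / n
    acc = sum(math.sqrt(gamma_pdf((i + 0.5) * h, s1, L) *
                        gamma_pdf((i + 0.5) * h, s2, L)) for i in range(n))
    return -math.log(acc * h)

s1, s2, L = 1.0, 2.0, 4.0
dB = bhattacharyya_numeric(s1, s2, L)
dH = 1.0 - math.exp(-dB)          # Hellinger follows from the same integral
# Closed form for the equal-looks case, obtained by completing the gamma integral:
dB_closed = L * ((math.log(s1) + math.log(s2)) / 2.0 + math.log((1.0 / s1 + 1.0 / s2) / 2.0))
```

Agreement between `dB` and `dB_closed` illustrates why closed formulas are preferred over numerical integration: the quadrature needs a fine grid and a truncated domain to reach a few significant digits.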
When considering the distance between particular cases of the same distribution, only the parameters are relevant. In this case, the parameter vectors θ1 and θ2 replace the random variables X and Y as arguments of the discussed distances. This notation is in agreement with that of [17].
In the following, the hypothesis test based on stochastic distances proposed by Salicrú et al. [17] is introduced. Let θ̂1 and θ̂2 be the ML estimators of the parameter vectors θ1 and θ2 based on independent samples of sizes m and n, respectively. Under the regularity conditions discussed in [17, p. 380] the following lemma holds:
Lemma 1
If m/(m + n) → λ ∈ (0, 1) as m, n → ∞ and θ1 = θ2, then

S_φ^h(θ̂1, θ̂2) = (2 m n / (m + n)) · d_φ^h(θ̂1, θ̂2) / (h′(0) φ″(1)) →_D χ²_M,   (5)

where '→_D' denotes convergence in distribution and χ²_M represents the chi-square distribution with M degrees of freedom, M being the dimension of the parameter vectors.
Based on Lemma 1, statistical hypothesis tests for the null hypothesis θ1 = θ2 can be derived in the form of the following proposition.
Proposition 1
Let m and n be large and S_φ^h(θ̂1, θ̂2) = s; then the null hypothesis θ1 = θ2 can be rejected at level α if Pr(χ²_M > s) ≤ α.
We denote the statistics based on the Kullback-Leibler, Rényi, Bhattacharyya, and Hellinger distances as S_KL, S_R^β, S_B, and S_H, respectively.
IV Analytic expressions, sensitivity, inequalities, and finite sample size behavior
In the following, analytic expressions for the stochastic distances d_KL, d_R^β, d_B, and d_H between two relaxed complex Wishart distributions are derived (Section IV-A). We examine the special cases in terms of the parameter values: (i) Σ1 ≠ Σ2 and L1 ≠ L2, which corresponds to the most general case, (ii) same equivalent number of looks (L1 = L2 = L) and different covariance matrices (Σ1 ≠ Σ2), and (iii) same covariance matrix (Σ1 = Σ2 = Σ) and different equivalent numbers of looks (L1 ≠ L2). Case (ii) is likely to be the most frequently used in practice since it allows the comparison of two possibly different areas from the same image. Case (iii) allows the assessment of a change in distribution due only to multilook processing on the same area.
The sensitivity of the tests to variations of the parameters is qualitatively assessed and discussed in Section IV-B.
In Section IV-C we derive inequalities which Σ1 and Σ2 must obey. These inequalities lead to the Bartlett and revised Wishart distances in a different and simpler way when compared to a well-known method available in the literature [42]. The distances are also shown to satisfy scale invariance with respect to Σ.
The performance of the tests for finite-size samples is quantified by means of (i) Monte Carlo simulation and (ii) actual data analysis in Section IV-D.
IV-A Analytic expressions
IV-A1 Kullback-Leibler distance
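As a compact illustration of the derivations involved, the equal-looks case (ii) follows directly from the density in (2): using E[Z] = Σ1 under f_X and tr(Σ1^{-1} Σ1) = p, the directed divergences and their symmetrized version reduce to a trace form (our derivation, shown here as a sketch):

```latex
% Equal number of looks, L_1 = L_2 = L:
% \ln f_X - \ln f_Y = L \ln\frac{|\Sigma_2|}{|\Sigma_1|}
%                     - L \operatorname{tr}\left[(\Sigma_1^{-1} - \Sigma_2^{-1})\mathbf{Z}'\right]
\begin{aligned}
\operatorname{KL}(f_X \| f_Y)
  &= \operatorname{E}_X\left[\ln \frac{f_X}{f_Y}\right]
   = L\left[\ln\frac{|\Sigma_2|}{|\Sigma_1|}
     + \operatorname{tr}\left(\Sigma_2^{-1}\Sigma_1\right) - p\right],\\
d_{\mathrm{KL}}(X, Y)
  &= \frac{\operatorname{KL}(f_X \| f_Y) + \operatorname{KL}(f_Y \| f_X)}{2}
   = L\left[\frac{\operatorname{tr}\left(\Sigma_1^{-1}\Sigma_2\right)
     + \operatorname{tr}\left(\Sigma_2^{-1}\Sigma_1\right)}{2} - p\right].
\end{aligned}
```

The determinant terms cancel under symmetrization, leaving a distance that depends on the covariance matrices only through the traces of the cross products.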
IV-A2 Rényi distance of order β
Case (i), general parameters: the expression is given in (7).

Case (ii): L1 = L2 = L and Σ1 ≠ Σ2.

Case (iii): Σ1 = Σ2 = Σ and L1 ≠ L2.
IV-A3 Bhattacharyya distance
Case (i), general parameters: the expression is given in (8).

Case (ii): L1 = L2 = L and Σ1 ≠ Σ2.

Case (iii): Σ1 = Σ2 = Σ and L1 ≠ L2.
IV-A4 Hellinger distance
Case (i), general parameters: the expression is given in (9).

Case (ii): L1 = L2 = L and Σ1 ≠ Σ2.

Case (iii): Σ1 = Σ2 = Σ and L1 ≠ L2.
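Restricting attention to the equal-looks case (ii) with p = 1, where determinants and traces become scalars, the four distances reduce to short closed forms. The reductions below are our own, derived from the gamma form of the density, and the function names are illustrative:

```python
import math

def kullback_leibler(s1, s2, L):
    """d_KL for p = 1, equal looks: L[(s1/s2 + s2/s1)/2 - 1]."""
    return L * ((s1 / s2 + s2 / s1) / 2.0 - 1.0)

def bhattacharyya(s1, s2, L):
    """d_B for p = 1, equal looks, via the closed gamma integral."""
    return L * ((math.log(s1) + math.log(s2)) / 2.0 +
                math.log((1.0 / s1 + 1.0 / s2) / 2.0))

def hellinger(s1, s2, L):
    """d_H = 1 - exp(-d_B): the only bounded distance of the four."""
    return 1.0 - math.exp(-bhattacharyya(s1, s2, L))

def renyi(s1, s2, L, beta):
    """Symmetrized Rényi distance of order beta in (0, 1), p = 1, equal looks."""
    def integral(a, b, w):
        # closed form of the integral of f_a^w f_b^(1-w) for gamma densities
        s_star = 1.0 / (w / a + (1.0 - w) / b)
        return (s_star ** L) / (a ** (w * L) * b ** ((1.0 - w) * L))
    return math.log((integral(s1, s2, beta) + integral(s2, s1, beta)) / 2.0) / (beta - 1.0)
```

At beta = 1/2 the Rényi distance collapses to twice the Bhattacharyya distance, and all four expressions are unchanged by a common scaling of s1 and s2, the invariance discussed in Section IV-C.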
IV-B Sensitivity analysis
Now we examine the behavior of the statistics presented in Lemma 1 with respect to parameter variations, i.e., under alternative hypotheses. These statistics are directly comparable since they all have the same asymptotic distribution; for the Rényi statistic a fixed order β was used. Two simple alternative hypotheses are illustrated: changes in an entry of the diagonal of the covariance matrix, and changes in the number of looks.
Firstly, we fixed the number of looks and considered covariance matrices which differ in one diagonal entry of Σ, where
(10) 
Since the covariance matrix is Hermitian, only the upper triangle and the diagonal are displayed. The fixed covariance matrix was previously analyzed in [6] in PolSAR data of forested areas.
Fig. 3(a) shows the statistics as the diagonal entry varies. They present roughly the same behavior.
Secondly, we considered fixed covariance matrices with a varying equivalent number of looks: Σ1 = Σ2 = Σ, with one of the numbers of looks held fixed while the other varies. Fig. 3(b) shows the statistics. It is noticeable that the test statistics are steeper to the left of the minimum. The number of looks, being a shape parameter, alters the distribution in a nonlinear fashion. Such change is perceived visually and by distance measures, and it is more intense for low values of the parameter. In other words, the difference between the laws indexed by L and L + δ, for any fixed Σ and any δ > 0, becomes smaller when L increases.
IV-C Invariance and inequalities
The derived distances are invariant under scalings of the covariance matrices. In fact, it can be shown that

d(aΣ1, aΣ2) = d(Σ1, Σ2),

where a is a positive real value and d is any of the four distances. This fact stems directly from the mathematical definition of these distances.
De Maio and Alfano [44] derived a new estimator for the covariance matrix under the complex Wishart model using inequalities relating the sought parameters. In the following we derive new inequalities for this model. Due to the major role of the covariance matrix in polarimetry [13], we limit our analysis to inequalities that depend on Σ1 and Σ2.
Case (ii) described in the previous subsections paved the way for the new inequalities. The following results stem from the nonnegativity of the four distances:

tr(Σ1^{-1} Σ2) + tr(Σ2^{-1} Σ1) ≥ 2p,   (11)

|(β Σ1^{-1} + (1 − β) Σ2^{-1})^{-1}|^L / (|Σ1|^β |Σ2|^{1−β})^L + |(β Σ2^{-1} + (1 − β) Σ1^{-1})^{-1}|^L / (|Σ2|^β |Σ1|^{1−β})^L ≤ 2,   (12)

(ln |Σ1| + ln |Σ2|) / 2 ≥ ln |((Σ1^{-1} + Σ2^{-1}) / 2)^{-1}|,   (13)

and

|((Σ1^{-1} + Σ2^{-1}) / 2)^{-1}|^L ≤ (|Σ1| |Σ2|)^{L/2},   (14)
respectively. Fixing β = 1/2 in (12), we obtain (14) directly; taking the logarithm of both sides of (14) yields (13). This result is justified by the following two relations:

- d_R^{1/2}(X, Y) = 2 d_B(X, Y), and
- d_H(X, Y) = 1 − exp(−d_B(X, Y)).
The revised Wishart [16] and Bartlett [5] distances can be obtained in a new and simple manner. Indeed, the revised Wishart distance (d_RW) can be derived after simple manipulations of inequality (11), yielding:

d_RW(Σ1, Σ2) = tr(Σ1^{-1} Σ2 + Σ2^{-1} Σ1) / 2 − p ≥ 0.

The Bartlett distance arises after taking the logarithm of both sides of inequality (14). Straightforward algebra leads to:

2 ln |(Σ1 + Σ2)/2| − ln |Σ1| − ln |Σ2| ≥ 0.

The leftmost term in the inequality above is referred to as the Bartlett distance [5, 16].
IV-D Finite sample size behavior
We assessed the influence of estimation on the size of the new hypothesis tests using simulated data. To that end, the study was conducted considering several numbers of looks and the forest covariance matrix shown in (10). The sample sizes relate to square windows of 7 × 7, 11 × 11, and 20 × 20 pixels, i.e., n ∈ {49, 121, 400}. Nominal significance levels of 1% and 5% were verified.
Let R be the number of Monte Carlo replicas and C be the number of cases for which the null hypothesis is rejected at nominal level α. The empirical test size is given by α̂ = C/R. Following the methodology described in [14], we employed R = 5500 replicas.
Table IV presents the empirical test sizes at the 1% and 5% nominal levels, the execution time in milliseconds, the test statistic mean (x̄), and the coefficient of variation (CV). All numerical calculations and the execution time quantification were performed on a PC with an Intel Core 2 Duo processor at 2.10 GHz, 4 GB of RAM, Windows XP, and the R platform v. 2.8.1. For each case, the best obtained empirical sizes and distance means are in boldface. Results for the remaining configurations are consistent with the ones shown, and are omitted for brevity.
We tested ten parameters: nine related to the covariance matrix of order 3, and the number of looks L, leading to test statistics which asymptotically follow the chi-square distribution with 10 degrees of freedom. Thus, the sample mean of the statistics should converge to 10, as a consequence of the weak law of large numbers. In Table IV, notice that x̄ tends to 10 as the sample size increases. By fixing the sample size while varying the number of looks, the test sizes obey a consistent ordering across the statistics, as illustrated in Fig. 3. These inequalities suggest that, for this study, the statistic based on the Kullback-Leibler distance is the best discrimination measure.
Regarding execution times, the Kullback-Leibler-based test presented the best performance, while the test based on the Hellinger distance showed the best empirical test size in 6 out of 18 cases.
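The size experiment can be sketched in the p = 1 setting with a known number of looks, so that only the scale parameter is tested and the statistic is referred to a chi-square law with one degree of freedom. Sample size, seed, replica count, and the use of the Kullback-Leibler statistic (with unit normalizing constant) are illustrative choices of ours:

```python
import random

def kl_statistic(x, y, L):
    """S_KL = (2 n1 n2 / (n1 + n2)) * L[(s1/s2 + s2/s1)/2 - 1], scales from sample means."""
    s1, s2 = sum(x) / len(x), sum(y) / len(y)
    d = L * ((s1 / s2 + s2 / s1) / 2.0 - 1.0)
    return 2.0 * len(x) * len(y) / (len(x) + len(y)) * d

random.seed(1234)
L, sigma, n, replicas = 4.0, 1.0, 121, 2000
CHI2_1_95 = 3.841   # 5% critical value of the chi-square law with 1 degree of freedom
rejections = 0
for _ in range(replicas):
    # two samples from the same population: every rejection is a false alarm
    x = [random.gammavariate(L, sigma / L) for _ in range(n)]
    y = [random.gammavariate(L, sigma / L) for _ in range(n)]
    if kl_statistic(x, y, L) > CHI2_1_95:
        rejections += 1
empirical_size = rejections / replicas
```

Under the null hypothesis the empirical size should approach the nominal 5% level as the window grows.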
Factors  | α̂(1%)  α̂(5%)  time (ms)  x̄       CV     | α̂(1%)  α̂(5%)  time (ms)  x̄       CV     | α̂(1%)  α̂(5%)  time (ms)  x̄       CV
49, 49   | 1.309  5.491  0.44       10.189  44.804 | 1.472  6.291  0.49       10.292  45.873 | 1.509  6.364  0.39       10.400  45.828
121, 121 | 0.818  4.545  0.43       9.843   44.743 | 1.255  5.618  0.49       10.052  45.241 | 1.000  4.836  0.48       9.982   44.309
400, 400 | 1.055  4.836  0.42       9.950   45.272 | 1.055  5.327  0.50       10.073  44.895 | 1.036  4.655  0.46       10.051  44.539
49, 49   | 1.255  5.309  0.54       10.157  44.658 | 1.436  6.127  0.51       10.272  45.782 | 1.473  6.291  0.58       10.385  45.763
121, 121 | 0.800  4.473  0.57       9.830   44.687 | 1.236  5.582  0.61       10.044  45.203 | 0.964  4.800  0.56       9.976   44.286
400, 400 | 1.036  4.836  0.57       9.946   45.255 | 1.054  5.327  0.57       10.071  44.885 | 1.036  4.655  0.65       10.049  44.531
49, 49   | 1.164  5.055  1.10       10.101  44.408 | 1.418  5.873  1.12       10.235  45.624 | 1.436  6.164  1.07       10.358  45.650
121, 121 | 0.782  4.418  1.11       9.809   44.588 | 1.218  5.473  0.99       10.030  45.135 | 0.963  4.745  1.12       9.967   44.245
400, 400 | 1.036  4.836  0.99       9.939   45.224 | 1.055  5.323  1.13       10.066  44.866 | 1.036  4.636  1.20       10.046  44.515
49, 49   | 0.655  3.891  1.10       9.797   43.017 | 0.891  4.400  1.12       9.920   44.184 | 1.000  4.782  1.07       10.035  44.225
121, 121 | 0.618  4.018  1.11       9.691   44.054 | 0.927  5.018  0.99       9.906   44.580 | 0.855  4.091  1.12       9.845   43.705
400, 400 | 1.036  4.691  0.99       9.902   45.056 | 0.909  5.145  1.13       10.029  44.698 | 1.018  4.473  1.20       10.009  44.348
The presented methodology for assessing test sizes was also applied to the three samples from the EMISAR image shown in Fig. 1. Each sample was submitted to the following procedure [14]:

(i) split the sample into disjoint blocks of size n1;
(ii) for each block from (i), split the remaining sample into disjoint blocks of size n2;
(iii) perform the hypothesis test as described in Proposition 1 for each pair of samples with sizes n1 and n2.
Table V presents the results, omitting some entries as in Table IV. All test sizes were smaller than the nominal level, i.e., the proposed tests do not reject the null hypothesis when similar samples are considered.
We also made the following study on the tests' power: in each one of the Monte Carlo experiments, pairs of random samples of equal size were drawn, one from the relaxed Wishart distribution indexed by the covariance matrix given in (10), and the other from the law indexed by a scaled version of that matrix, with the scaling factor arbitrarily chosen. Subsequently, it was verified whether these samples come from similar populations according to Proposition 1. Let C be the number of situations for which the null hypothesis is rejected at nominal level α; the empirical test power is given by C/R, where R is the number of replicas. Fig. 4 presents these estimates for the test power. Notice that the discrimination ability is about the same for all tests beyond moderate sample sizes.
In general terms, the proposed hypothesis tests presented good results regarding their power even for small samples, being able to discriminate between covariance matrices which differ only by a scaling. As the sample size increases, all the tests discriminate better and better.
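An analogous sketch estimates the power in the p = 1 setting; the 50% scaling applied to the scale parameter and the other settings are arbitrary choices of ours for illustration:

```python
import random

def kl_statistic(x, y, L):
    """Kullback-Leibler test statistic for p = 1 with known looks L."""
    s1, s2 = sum(x) / len(x), sum(y) / len(y)
    return (2.0 * len(x) * len(y) / (len(x) + len(y))) * L * ((s1 / s2 + s2 / s1) / 2.0 - 1.0)

random.seed(99)
L, sigma, factor, n, replicas = 4.0, 1.0, 1.5, 121, 1000
CHI2_1_95 = 3.841   # 5% critical value of the chi-square law with 1 degree of freedom
rejections = sum(
    1 for _ in range(replicas)
    if kl_statistic([random.gammavariate(L, sigma / L) for _ in range(n)],
                    [random.gammavariate(L, factor * sigma / L) for _ in range(n)],
                    L) > CHI2_1_95)
empirical_power = rejections / replicas
```

Here the alternative hypothesis is true by construction, so the fraction of rejections estimates the power; with this separation and window size the test rejects almost always.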
Factors  | B1                         | B2                         | B3
         | ·     ·     ·      CV      | ·     ·     ·      CV      | ·     ·     ·      CV
49, 49   | 0.00  0.00  59.80  157.36  | 0.00  0.00  45.16  64.18   | 0.00  0.00  47.05  87.32
121, 121 | 0.00  0.00  38.27  100.85  | 0.00  0.00  19.76  61.66   | 0.00  0.00  29.54  87.72
49, 49   | 0.00  0.00  40.77  153.99  | 0.00  0.00  31.40  63.83   | 0.00  0.00