On the efficient estimation of the mean of multivariate truncated normal distributions

(This research has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program “Education and Lifelong Learning” of the National Strategic Reference Framework (NSRF) - Research Funding Program: ARISTEIA-HSI-MARS-1413.)
A non-trivial problem that arises in several applications is the estimation of the mean of a truncated normal distribution. In this paper, an iterative deterministic scheme for approximating this mean is proposed, motivated by an iterative Markov chain Monte Carlo (MCMC) scheme that addresses the same problem. Conditions are provided under which it is proved that the scheme converges to a unique fixed point. The quality of the approximation obtained by the proposed scheme is assessed through the case where the exponential correlation matrix is used as the covariance matrix of the initial (non-truncated) normal distribution. Finally, the theoretical results are also supported by computer simulations, which show the rapid convergence of the method to a solution vector that (under certain conditions) is very close to the mean of the truncated normal distribution under study.
Keywords: Truncated normal distribution, contraction mapping, diagonally dominant matrix, MCMC methods, exponential correlation matrix
A non-trivial problem that appears in several applications, e.g. in multivariate regression and Bayesian statistics, is the estimation of the mean of a truncated normal distribution. The problem arises in cases where a random vector x follows a normal distribution with mean μ and covariance matrix Σ, denoted by N(μ, Σ), but is restricted to a closed subset R of ℝⁿ. In most cases, the restrictions considered are of the form a_i ≤ x_i ≤ b_i, with a_i ∈ ℝ ∪ {−∞} and b_i ∈ ℝ ∪ {+∞}. If a_i ∈ ℝ and b_i = +∞ (a_i = −∞ and b_i ∈ ℝ), we have one-sided truncation from the left (right), while in the case where both a_i and b_i are reals, we have two-sided truncation.
The problem has attracted the attention of several researchers since the 1960s. Since then, several deterministic approaches have been proposed, some of which attempt to estimate not only the mean but also other moments (such as the variance) of a multivariate truncated normal distribution. These approaches can be categorized according to whether they are suitable for one-sided or for two-sided truncated normal distributions, or according to whether they consider the bivariate or the multivariate case. In addition, some of these methods impose additional restrictions on the distribution parameters. Most of these methods either perform direct integration or utilize the moment generating function.
An alternative strategy for dealing with this problem relies on a Markov chain Monte Carlo (MCMC) iterative scheme. According to this scheme, at each iteration, successive samplings take place, one from each of the one-dimensional conditionals of the truncated normal distribution; the i-th such conditional is the distribution of x_i conditioned on the remaining coordinates. After performing several iterations, the estimate of the mean of the truncated normal results from averaging over the produced samples. Convergence of this scheme to the mean of the truncated normal distribution is a subject of Markov chain theory. A related work that accelerates this method has also been reported.
The work presented in this paper for approximating the mean of a multivariate truncated normal distribution has been inspired by the above MCMC scheme. Specifically, instead of selecting a sample from each one of the above one-dimensional distributions, we select its mean. Thus, the proposed scheme departs from the statistical framework and moves to a deterministic one.
This work is an extension of a related scheme used in the framework of spectral unmixing of hyperspectral images. In addition, a convergence proof of the proposed scheme is given when certain conditions are fulfilled. The quality of the approximation of the mean offered by the proposed method is assessed via the case where Σ is the exponential correlation matrix. Experimental results show that the new scheme converges significantly faster than the MCMC approach.
The paper is organized as follows. Section 2 contains some necessary definitions and a brief description of the previous work. In Section 3, the newly proposed method is described and in Section 4 conditions are given under which it is proved to converge. In Section 5, the proposed method is applied to the case where Σ is the exponential correlation matrix. In Section 6, simulation results are provided and a relevant discussion is presented. Finally, Section 7 concludes the paper.
II. Preliminaries and previous work
Let us consider the n-dimensional normal distribution

f(x) = (2π)^{−n/2} |Σ|^{−1/2} exp(−(1/2) (x − μ)ᵀ Σ^{−1} (x − μ)),

where the n-dimensional vector μ is its mean and the n×n matrix Σ is its covariance matrix.
Let R be a subset of ℝⁿ with positive Lebesgue measure. We denote by N_R(μ, Σ) the truncated normal distribution which results from the truncation of N(μ, Σ) in R. Speaking in mathematical terms,

f_R(x) = f(x) I_R(x) / ∫_R f(y) dy.
Note that f_R(x) is proportional to f(x) I_R(x), where I_R(x) = 1, if x ∈ R, and I_R(x) = 0, otherwise.
In the scheme discussed previously, a Markov chain Monte Carlo (MCMC) method is proposed to compute the mean of a singly or doubly truncated (per coordinate) normal distribution N_R(μ, Σ), where R = [a_1, b_1] × ⋯ × [a_n, b_n]. The method relies on the sampling of the one-dimensional conditionals of the truncated normal distribution. More specifically, let m_i and σ_i² denote the expectation and the variance of x_i conditioned on the rest of the coordinates (that is, x_i follows the (non-truncated) i-th conditional of N(μ, Σ)), and let N_{[a_i, b_i]}(m_i, σ_i²) denote the (one-dimensional) truncated normal distribution which results from the truncation of a normal distribution with mean m_i and variance σ_i² in [a_i, b_i]. Then, the iterative sampling scheme can be written as

x_i(t+1) ∼ N_{[a_i, b_i]}(m_i(t), σ_i²),  i = 1, …, n,   (4)
where ∼ denotes the sampling action and t denotes the current iteration. After performing several, say N, iterations (and after discarding the first few, say N₀, ones) the mean of each coordinate is estimated as

x̄_i = (1 / (N − N₀)) Σ_{t = N₀ + 1}^{N} x_i(t).
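The sampling scheme above can be sketched in a few lines of code. The following is a minimal illustration, not the original implementation: it assumes left truncation (x_i ≥ a_i) along every coordinate, works with the precision matrix B = Σ^{−1} (so that the i-th conditional has mean μ_i − (1/b_ii) Σ_{j≠i} b_ij (x_j − μ_j) and variance 1/b_ii), and samples each one-dimensional truncated conditional exactly via the inverse-CDF method. All function and parameter names are illustrative.

```python
import random
from statistics import NormalDist

def gibbs_truncated_mean(mu, B, a, n_iter=3000, burn_in=300, seed=0):
    """Gibbs sampler for the mean of N(mu, Sigma) truncated to {x : x_i >= a_i}.
    B is the precision matrix Sigma^{-1}; each one-dimensional conditional is
    sampled exactly via inverse-CDF sampling."""
    rng = random.Random(seed)
    nd = NormalDist()
    n = len(mu)
    x = list(mu)                      # any feasible starting point works
    sums = [0.0] * n
    kept = 0
    for t in range(n_iter):
        for i in range(n):
            # conditional (non-truncated) mean and std of coordinate i given the rest
            m = mu[i] - sum(B[i][j] * (x[j] - mu[j]) for j in range(n) if j != i) / B[i][i]
            s = B[i][i] ** -0.5
            # sample from N(m, s^2) truncated to [a_i, +inf)
            lo = nd.cdf((a[i] - m) / s)
            u = lo + (1.0 - lo) * rng.random()
            u = min(max(u, 1e-12), 1.0 - 1e-12)   # keep inv_cdf's argument in (0, 1)
            x[i] = m + s * nd.inv_cdf(u)
        if t >= burn_in:
            kept += 1
            for i in range(n):
                sums[i] += x[i]
    return [v / kept for v in sums]
```

For a diagonal Σ the coordinates decouple, and each coordinate estimate approaches the analytic one-dimensional truncated-normal mean.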
The quantities m_i and σ_i² of each one of the above one-dimensional conditionals are expressed in terms of the parameters μ and Σ of the non-truncated multidimensional normal distribution as follows:

m_i = μ_i + Σ_{−i,i}ᵀ Σ_{−i,−i}^{−1} (x_{−i} − μ_{−i}),  σ_i² = σ_ii − Σ_{−i,i}ᵀ Σ_{−i,−i}^{−1} Σ_{−i,i},

with Σ_{−i,−i} being the matrix that results from Σ after removing its i-th column and its i-th row, Σ_{−i,i} being the i-th column of Σ excluding its i-th element σ_ii, and x_{−i}, μ_{−i} being the ((n−1)-dimensional) vectors that result from x and μ, respectively, after removing their i-th coordinates. Note that m_i depends on all x_j's except x_i.
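As a quick illustration of these standard Schur-complement formulas, the sketch below (hypothetical names, plain Python) computes m_i and σ_i² from a partition of Σ; it solves the (n−1)×(n−1) systems by Gaussian elimination instead of forming Σ_{−i,−i}^{−1} explicitly.

```python
def gauss_solve(A, b):
    """Solve the linear system A z = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[r]] for r, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    z = [0.0] * n
    for r in range(n - 1, -1, -1):
        z[r] = (M[r][n] - sum(M[r][k] * z[k] for k in range(r + 1, n))) / M[r][r]
    return z

def conditional_params(mu, Sigma, i, x_rest):
    """Mean and variance of coordinate i of N(mu, Sigma), given that the
    remaining coordinates are fixed at x_rest (Schur-complement formulas)."""
    n = len(mu)
    rest = [j for j in range(n) if j != i]
    A = [[Sigma[r][c] for c in rest] for r in rest]   # Sigma_{-i,-i}
    cross = [Sigma[i][j] for j in rest]               # i-th row/column of Sigma, i-th entry removed
    z = gauss_solve(A, [x_rest[k] - mu[rest[k]] for k in range(n - 1)])
    m = mu[i] + sum(cross[k] * z[k] for k in range(n - 1))
    w = gauss_solve(A, cross)
    s2 = Sigma[i][i] - sum(cross[k] * w[k] for k in range(n - 1))
    return m, s2
```

For a bivariate normal with unit variances and correlation 0.5, conditioning the first coordinate on x₂ = 1 gives m = 0.5 and σ² = 0.75, in agreement with the familiar bivariate formulas.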
III. The proposed model
In the sequel, we focus on the case where R is a set of the form [a_1, b_1] × ⋯ × [a_n, b_n], where for each interval it is either (i) a_i = −∞ and b_i ∈ ℝ, or (ii) a_i ∈ ℝ and b_i = +∞. This means that, along each dimension, the truncation is one-sided; more specifically, case (i) corresponds to right truncation, while case (ii) corresponds to left truncation.
The proposed model for estimating the mean of N_R(μ, Σ) is of iterative nature and, at each iteration, it requires the computation of one-dimensional (truncated normal) mean values. This model has a close conceptual affinity with the one (briefly) presented in the previous section. More specifically, instead of utilizing the samples produced by the (one-dimensional) distributions N_{[a_i, b_i]}(m_i, σ_i²), we use the corresponding mean values.
As is well known, the mean of a truncated one-dimensional normal distribution N_{[a,b]}(μ, σ²), which has resulted from the (non-truncated) normal distribution with mean μ and variance σ², is expressed as

E[x] = μ + σ (φ(α) − φ(β)) / (Φ(β) − Φ(α)),   (6)

where α = (a − μ)/σ, β = (b − μ)/σ, and φ, Φ denote the pdf and the cdf of the standard normal distribution, respectively.
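For the one-sided truncations considered in this paper, eq. (6) reduces to closed forms involving the ratio φ/Φ. A small sketch (illustrative names, using the standard normal pdf/cdf from Python's statistics module):

```python
from statistics import NormalDist

_nd = NormalDist()            # standard normal distribution
phi, Phi = _nd.pdf, _nd.cdf   # pdf and cdf

def truncated_mean_left(mu, sigma, a):
    """Mean of N(mu, sigma^2) truncated to [a, +inf)."""
    alpha = (a - mu) / sigma
    return mu + sigma * phi(alpha) / (1.0 - Phi(alpha))

def truncated_mean_right(mu, sigma, b):
    """Mean of N(mu, sigma^2) truncated to (-inf, b]."""
    beta = (b - mu) / sigma
    return mu - sigma * phi(beta) / Phi(beta)
```

For instance, a standard normal truncated to [0, +∞) has mean φ(0)/(1 − Φ(0)) = √(2/π) ≈ 0.798, and the left-truncated mean always exceeds the truncation point a.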
However, since in the present paper we consider the cases where either (i) a = −∞ and b ∈ ℝ, or (ii) a ∈ ℝ and b = +∞, let us see now how eq. (6) reads for each of these cases.
(i) a = −∞, b ∈ ℝ. In this case it is α = −∞ and, as a consequence, φ(α) = 0, Φ(α) = 0. Thus, taking also into account the definitions of α and β and the fact that Φ(β) > 0, eq. (6) gives

E[x] = μ − σ φ(β) / Φ(β).
(ii) a ∈ ℝ, b = +∞. In this case it is β = +∞ and, as a consequence, φ(β) = 0, Φ(β) = 1. Working as in case (i), eq. (6) gives

E[x] = μ + σ φ(α) / (1 − Φ(α)).
The two cases can be written compactly as

E[x] = μ + σ [δ_l g(α) − δ_r g(−β)],   (11)

where δ_l (δ_r) is an indicator function which equals 1 if the truncation is from the left (right) and 0 otherwise, and

g(t) = φ(t) / (1 − Φ(t)).   (10)
Let us now return to the multidimensional case. Since from the (truncated) conditional one-dimensional normals we no longer perform sampling but, instead, we consider their means, eq. (4) is altered to

x_i(t+1) = m_i(t) + σ_i [δ_{l,i} g(α_i(t)) − δ_{r,i} g(−β_i(t))],  i = 1, …, n,   (12)

with α_i(t) = (a_i − m_i(t))/σ_i and β_i(t) = (b_i − m_i(t))/σ_i, where x_{−i}(t) is the ((n−1)-dimensional) vector that results from the current estimate of the mean vector of the truncated normal distribution, after removing its i-th coordinate,
with the mean values in (12) being computed via eq. (11), where m_i (the only parameter in (11) that varies through the iterations) is defined as in Section 2, with x_{−i} replaced by the current estimate x_{−i}(t); that is, the most recent information about the x_j's is utilized. More formally, we can say that the above scheme performs sequential updating and it is a Gauss-Seidel updating scheme.
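A minimal sketch of this deterministic Gauss-Seidel scheme is given below, under the assumption of left truncation along every coordinate and with the precision matrix B = Σ^{−1} as input; the names are illustrative, not taken from the original work.

```python
from statistics import NormalDist

def fixed_point_mean(mu, B, a, max_iter=200, tol=1e-12):
    """Deterministic Gauss-Seidel iteration: coordinate i is replaced by the
    mean of its one-dimensional conditional truncated to [a_i, +inf).
    B is the precision matrix Sigma^{-1}; left truncation is assumed."""
    nd = NormalDist()
    n = len(mu)
    x = list(mu)
    for _ in range(max_iter):
        delta = 0.0
        for i in range(n):
            # conditional (non-truncated) mean and std, using the freshest x_j's
            m = mu[i] - sum(B[i][j] * (x[j] - mu[j]) for j in range(n) if j != i) / B[i][i]
            s = B[i][i] ** -0.5
            alpha = (a[i] - m) / s
            new = m + s * nd.pdf(alpha) / (1.0 - nd.cdf(alpha))  # truncated mean
            delta = max(delta, abs(new - x[i]))
            x[i] = new
        if delta < tol:
            break
    return x
```

For a diagonal Σ the coordinates decouple and the iteration settles after a single sweep; for coupled coordinates with a diagonally dominant B, it converges geometrically, as proved in the next section.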
It is reminded that, due to the type of truncation considered here (only left truncation or only right truncation per coordinate), the bracketed expression in each equation of (12) contains only one term that is not identically zero. In the sequel, we consider separately the cases corresponding to δ_{l,i} = 1 and δ_{r,i} = 1, i.e.,

x_i(t+1) = m_i(t) + σ_i g(α_i(t))  and  x_i(t+1) = m_i(t) − σ_i g(−β_i(t)),

respectively, where g is defined as in eq. (10).
IV. Convergence issues
In this section, we provide sufficient conditions under which the proposed scheme is proved to converge. (However, it is noted that even if these conditions are slightly violated, the algorithm still works, as is verified by the experiments presented in Section 5.) Before we proceed, we give some propositions and recall some concepts that will prove useful in the sequel.
Proposition 1: Assume that Σ is a symmetric positive definite n×n matrix, Σ_{−i,i} is the i-th column of Σ after removing its i-th element, and Σ_{−i,−i} results from Σ after removing its i-th row and its i-th column. Also, let b_ii be the (i, i) element of B = Σ^{−1} and b_i be the (n−1)-dimensional vector resulting from the i-th row of B after (i) removing its i-th element b_ii, and (ii) multiplying the remaining elements by −1/b_ii. Then, it holds

Σ_{−i,i}ᵀ Σ_{−i,−i}^{−1} = b_iᵀ  and  σ_ii − Σ_{−i,i}ᵀ Σ_{−i,−i}^{−1} Σ_{−i,i} = 1/b_ii.
The proof of this proposition follows in a straightforward manner from the inversion lemma for block-partitioned matrices (p. 53) and the use of permutation matrices, in order to define the Schur complement for each row of Σ.
Proposition 2: It is

0 < g′(t) < 1,  ∀ t ∈ ℝ,

where g′ denotes the derivative of the function g, which is defined in eq. (10).
The proof of Proposition 2 is given in the appendix.

Definition 1: A mapping T : X → ℝⁿ, where X ⊆ ℝⁿ, is called a contraction if, for some norm ‖·‖, there exists a constant γ ∈ [0, 1) (called the modulus) such that

‖T(x) − T(y)‖ ≤ γ ‖x − y‖,  ∀ x, y ∈ X.

The corresponding iteration x(t+1) = T(x(t)) is called a contracting iteration.
Proposition 3 ([3], pp. 182-183): Suppose that T : X → X is a contraction with modulus γ ∈ [0, 1) and that X is a closed subset of ℝⁿ. Then

(a) The mapping T has a unique fixed point x* ∈ X. (A point x* is called a fixed point of a mapping T if it is T(x*) = x*.)

(b) For every initial vector x(0) ∈ X, the sequence {x(t)} generated by x(t+1) = T(x(t)) converges to x* geometrically. In particular,

‖x(t) − x*‖ ≤ γᵗ ‖x(0) − x*‖,  ∀ t ≥ 0.
Let us define the mappings T_i : ℝⁿ → ℝ, i = 1, …, n, as the functions that map the current vector x to the mean of the i-th one-dimensional truncated conditional, as in eq. (12), and the mapping T : ℝⁿ → ℝⁿ as

T(x) = (T_1(x), T_2(x), …, T_n(x)).

Let us define next the mappings T̂_i : ℝⁿ → ℝⁿ as

T̂_i(x) = (x_1, …, x_{i−1}, T_i(x), x_{i+1}, …, x_n),

i.e., T̂_i updates only the i-th coordinate of x. Performing the sequential updating as described by eq. (12) (one coordinate at a time and in increasing order) is equivalent to applying the mapping S, defined as

S = T̂_n ∘ T̂_{n−1} ∘ ⋯ ∘ T̂_1,

where ∘ denotes function composition. Following the terminology given in [3], S is called the Gauss-Seidel mapping based on the mapping T, and the iteration x(t+1) = S(x(t)) is called the Gauss-Seidel algorithm based on the mapping T.
A direct consequence of [3, Prop. 1.4, pp. 186] is the following proposition:

Proposition 4: If T is a contraction with respect to the ℓ∞ norm, then the Gauss-Seidel mapping S is also a contraction (with respect to the ℓ∞ norm), with the same modulus as T. In particular, if X is closed, the sequence of vectors generated by the Gauss-Seidel algorithm based on the mapping T converges to the unique fixed point of T geometrically.

Having given all the necessary definitions and results, we will proceed by proving that (a) for each mapping T_i it holds |T_i(x) − T_i(y)| ≤ ‖b_i‖₁ ‖x − y‖_∞, where ‖·‖₁ and ‖·‖_∞ are the ℓ1 and ℓ∞ norms, respectively, (b) if Σ^{−1} is diagonally dominant, then T is a contraction, and (c) provided that T is a contraction, the algorithm converges geometrically to the unique fixed point of T. We remind here that the (n−1)-dimensional vector b_i results from the i-th row of B = Σ^{−1}, excluding its i-th element b_ii and dividing each element by the negative of b_ii.
Proposition 5: For the mappings T_i, i = 1, …, n, it holds

|T_i(x) − T_i(y)| ≤ ‖b_i‖₁ ‖x − y‖_∞,  ∀ x, y ∈ ℝⁿ.
Proof: (a) We consider first the case of left truncation. Let us consider the vectors x, y ∈ ℝⁿ. Since σ_i is constant, utilizing eq. (20) it follows that
Also, it is
Taking the difference we have
Since g is continuous and differentiable in ℝ, the mean value theorem guarantees that there exists a point ξ such that
Taking absolute values in eq. (29) and applying Hölder’s inequality, |uᵀ v| ≤ ‖u‖_p ‖v‖_q, for p = 1 and q = ∞, it follows that
Taking into account that 0 < g′(ξ) < 1 (from Proposition 2), and the (trivial) fact that ‖x_{−i} − y_{−i}‖_∞ ≤ ‖x − y‖_∞, it follows that
Thus, the claim has been proved.
(b) We consider now the case of right truncation. Working similarly to the previous case, the difference is
while the difference is expressed as
Utilizing the mean value theorem, we have that there exists a point ξ such that
From this point on, the proof is exactly the same as that of (a). Q.E.D.

Proposition 6: The mapping T is a contraction in ℝⁿ, with respect to the ℓ∞ norm, provided that Σ^{−1} is diagonally dominant.

Proof: Let x, y ∈ ℝⁿ. Taking into account Proposition 5, it easily follows that
Now, (a) taking into account that the (n−1)-dimensional vector b_i results from the i-th row of B = Σ^{−1}, excluding its i-th element b_ii and dividing each element by the negative of b_ii, and (b) recalling that B_{i,−i} is the i-th row of B excluding its i-th element b_ii, it is
Provided that B is diagonally dominant, it is

‖b_i‖₁ = (1 / |b_ii|) Σ_{j ≠ i} |b_ij| < 1,  i = 1, …, n,

which proves the claim. Q.E.D.
Theorem 1: The algorithm x(t+1) = S(x(t)) converges geometrically to the unique fixed point of T, provided that Σ^{−1} is diagonally dominant.

Proof: The proof is a direct consequence of the Propositions 3, 4 and 6 exposed before, applied for X = ℝⁿ. Q.E.D.
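The diagonal dominance condition of Theorem 1 is easy to check numerically: the row-wise ratio below is exactly max_i ‖b_i‖₁, the quantity that bounds the contraction modulus in the proof. This is a small sketch with an illustrative name.

```python
def dominance_ratio(B):
    """max_i sum_{j != i} |B_ij| / |B_ii|.  The proposed scheme is proved to
    converge when this ratio is < 1, i.e. when B = Sigma^{-1} is strictly
    diagonally dominant."""
    n = len(B)
    return max(
        sum(abs(B[i][j]) for j in range(n) if j != i) / abs(B[i][i])
        for i in range(n)
    )
```

A smaller ratio means a tighter contraction modulus and hence faster geometric convergence of the Gauss-Seidel iteration.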
V. Assessment of the accuracy of the proposed method
An issue that naturally arises with the proposed method is how accurate the resulting estimate of the mean is. Since it is very difficult to give a theoretical analysis of this issue, mainly due to the highly complex nature of the proposed iterative scheme (see eq. (12)), we will try to gain some insight into this subject via experimentation. To this end, we set Σ equal to the exponential correlation matrix, which is frequently met in various fields of application, e.g., in signal processing. Its general form is

Σ_ij = ρ^{|i − j|},  i, j = 1, …, n,  0 < ρ < 1.   (38)
It is easy to verify that the inverse of Σ is the tridiagonal matrix

Σ^{−1} = (1 / (1 − ρ²)) ·
[  1    −ρ    0   ⋯    0
  −ρ  1+ρ²  −ρ   ⋯    0
   ⋮    ⋱     ⋱    ⋱    ⋮
   0    ⋯   −ρ  1+ρ²  −ρ
   0    ⋯    0   −ρ    1 ].
Also, it is straightforward to see that Σ^{−1} is diagonally dominant for all values of ρ ∈ (0, 1). Thus, Σ is a suitable candidate for our case. In addition, it is “controlled” by just a single parameter (ρ), which facilitates the extraction of conclusions. Note that for ρ = 0, Σ becomes the identity matrix, while as ρ increases towards 1, the diagonal dominance of Σ^{−1} decreases (while its condition number increases). For ρ close to 1, Σ is almost singular.
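Both the matrix and its closed-form tridiagonal inverse are easy to generate programmatically; the sketch below (illustrative names) can be used to verify the inverse and the diagonal dominance claim numerically.

```python
def exp_corr(n, rho):
    """Exponential correlation matrix: Sigma_ij = rho ** |i - j|."""
    return [[rho ** abs(i - j) for j in range(n)] for i in range(n)]

def exp_corr_inv(n, rho):
    """Closed-form tridiagonal inverse of the exponential correlation matrix:
    diagonal 1, 1+rho^2, ..., 1+rho^2, 1 and off-diagonals -rho,
    all scaled by 1 / (1 - rho^2)."""
    c = 1.0 / (1.0 - rho * rho)
    B = [[0.0] * n for _ in range(n)]
    for i in range(n):
        B[i][i] = c * (1.0 + rho * rho if 0 < i < n - 1 else 1.0)
        if i > 0:
            B[i][i - 1] = -c * rho
        if i < n - 1:
            B[i][i + 1] = -c * rho
    return B
```

For the interior rows, the diagonal excess is c((1 + ρ²) − 2ρ) = c(1 − ρ)², which is positive for every ρ ∈ (0, 1) and shrinks as ρ approaches 1, consistent with the discussion above.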
In the sequel, we consider the case of a zero-mean normal distribution with covariance matrix as in (38), which is truncated so that the truncation point a is the same along all dimensions. This choice has been made deliberately, in order to keep our experimental framework controlled by just two parameters, ρ and a. Performing simulations for various combinations of the values of ρ and a (and for various dimensions n), we can gain some insight into the accuracy of the approximation of the mean provided by the proposed method. In the sequel, we use as benchmark the estimate of the mean provided by the (widely accepted as reliable) MCMC method.
Figure 1 shows a three-dimensional graph of the difference (assessed by its Euclidean norm divided by n) between the estimates of the truncated mean obtained by the MCMC and the proposed methods, against ρ and a. It is worth noting that for smaller values of ρ the difference remains low, independently of the value of the cutting point a. On the other hand, for larger values of ρ the difference increases, and it does so mostly for intermediate values of a.
In figure 2, the shaded regions in the (ρ, a) space correspond to a low difference per dimension, for various values of n. It can be deduced that the behaviour of the proposed method is only slightly affected by the dimensionality of the space.
As a general conclusion, one can argue that the “more diagonally dominant” Σ^{−1} is (i.e., the smaller ρ is), the more accurate the estimate of the mean provided by the proposed method is. From another perspective, the closer Σ is to a diagonal matrix (again, as ρ becomes smaller), the more accurate the obtained estimates are. The latter is also supported by the fact that, in the extreme case of a diagonal covariance matrix, one has to solve n independent one-dimensional problems, for each of which an analytic formula exists. In this case, it is easy to verify that the proposed method terminates after a single iteration (see also the comments in the next section).
VI. Simulation results
Having gained some insight into the capability of the proposed method to approximate the mean of a multivariate truncated normal distribution, we proceed in this section with experiments where the involved covariance matrices have no specific structure. As in the previous section, the MCMC method is used as a benchmark. (In the sequel, all results are rounded to the third decimal.)
1st experiment: The purpose of this experiment is to compare the estimates of the mean of a truncated normal distribution obtained by the proposed and the MCMC methods, for various dimensions n. To this end, for each dimension n, a number of different truncated normal distributions (defined by the means and the covariance matrices of the corresponding untruncated normals, as well as their truncation points) have been randomly generated, such that the corresponding inverse covariance matrix is diagonally dominant. For each such distribution, both the proposed and the MCMC methods have been applied and the mean difference per coordinate between the two resulting estimates is computed; averaging these differences over all generated distributions gives an overall figure of merit, which is plotted versus n in figure 3. From this figure, it can be concluded that the proposed scheme gives estimates that are very close to those given by the MCMC method. Thus, the fixed point of the proposed scheme (when the diagonal dominance condition is fulfilled) can be considered a reliable estimate of the mean of the truncated normal distribution.
Next, in order to demonstrate the rapid convergence of the proposed scheme compared with the MCMC method, we focus on a single example (however, the resulting conclusions are generally applicable). More specifically, Table 1 shows the values of the norm of the difference between the two estimates, divided by n, as the number of iterations evolves, for a left-truncated normal distribution defined by