Secure Lossy Transmission of Vector Gaussian Sources†

†This work was supported by NSF Grants CCF 0729127, CNS 0964632, CCF 0964645 and CCF 1018185.
Abstract
We study the secure lossy transmission of a vector Gaussian source to a legitimate user in the presence of an eavesdropper, where both the legitimate user and the eavesdropper have vector Gaussian side information. The aim of the transmitter is to describe the source to the legitimate user in a way that the legitimate user can reconstruct the source within a certain distortion level while the eavesdropper is kept ignorant of the source as much as possible as measured by the equivocation. We obtain an outer bound for the rate, equivocation and distortion region of this secure lossy transmission problem. This outer bound is tight when the transmission rate constraint is removed. In other words, we obtain the maximum equivocation at the eavesdropper when the legitimate user needs to reconstruct the source within a fixed distortion level while there is no constraint on the transmission rate. This characterization of the maximum equivocation involves two auxiliary random variables. We show that a nontrivial selection for both random variables may be necessary in general. The necessity of two auxiliary random variables also implies that, in general, Wyner-Ziv coding is suboptimal in the presence of an eavesdropper. In addition, we show that, even when there is no rate constraint on the legitimate link, uncoded transmission (deterministic or stochastic) is suboptimal; the presence of an eavesdropper necessitates the use of a coded scheme to attain the maximum equivocation.
1 Introduction
Information theoretic secrecy was initiated by Wyner in [1], where he studied the secure lossless transmission of a source over a degraded wiretap channel, and obtained the necessary and sufficient conditions. Later, his result was generalized to arbitrary, i.e., not necessarily degraded, wiretap channels in [2]. In recent years, information theoretic secrecy has gathered renewed interest, with the focus mostly on channel coding aspects of secure transmission; in other words, the secure transmission of uniformly distributed messages is studied.
The secure source coding problem has been studied for both the lossless and lossy reconstruction cases in [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. The secure lossless source coding problem is studied in [3, 4, 5, 6, 7, 8, 9]. The common theme of these works is that the legitimate receiver wants to reconstruct the source in a lossless fashion by using the information it gets from the transmitter in conjunction with its side information, while the eavesdropper is kept ignorant of the source as much as possible. The secure lossy source coding problem is studied in [10, 11, 12, 13, 14, 15, 16]. In these works, unlike the ones focusing on secure lossless source coding, the legitimate receiver does not want to reconstruct the source in a lossless fashion, but only within a certain distortion level.
The works most closely related to ours are [15, 16]. In [15], the author considers the secure lossy transmission of a source over a degraded wiretap channel while both the legitimate receiver and the eavesdropper have side information about the source. In [15], in addition to the degradedness that the wiretap channel exhibits, the source and side information also have a degradedness structure such that given the legitimate user's side information, the source and the eavesdropper's side information are independent. For this setting, in [15], a single-letter characterization of the distortion and equivocation region is provided. In particular, the optimality of a separation-based approach, i.e., the optimality of a code that concatenates a rate-distortion code and a wiretap channel code, is shown. In [16], the setting of [15] is partially generalized such that in [16], the source and side information do not have any degradedness structure. On the other hand, as opposed to the noisy wiretap channel of [15], in [16], the channel between the transmitter and the receivers is assumed to be noiseless. For this setting, in [16], a single-letter characterization of the rate, equivocation and distortion region is provided.
Here, we consider the setting of [16] for jointly Gaussian source and side information. In particular, we consider the model where the transmitter has a vector Gaussian source which is jointly Gaussian with the vector Gaussian side information of both the legitimate receiver and the eavesdropper. In this model, the transmitter wants to convey information to the legitimate user in a way that the legitimate user can reconstruct the source within a distortion level while the eavesdropper is kept ignorant of the source as much as possible as measured by the equivocation. A single-letter characterization of the rate, equivocation and distortion region for this setting exists due to [16]. Although we are unable to evaluate this single-letter characterization for the vector Gaussian source and side information case to obtain the corresponding rate, equivocation and distortion region explicitly, we obtain an outer bound for this region. We obtain this outer bound by optimizing the rate and equivocation constraints separately. We note that a joint optimization of the rate and equivocation constraints for a fixed distortion level would yield the exact achievable rate and equivocation region for this fixed distortion level. Thus, optimizing the rate and equivocation constraints separately yields a larger region, i.e., an outer bound. We show that this outer bound is tight when we remove the rate constraint at the transmitter. In other words, we obtain the maximum achievable equivocation at the eavesdropper when the legitimate user needs to reconstruct the vector Gaussian source within a fixed distortion while there is no constraint on the transmission rate.
We note some implications of this result. First, since there is no rate constraint on the transmitter, it can use an uncoded scheme to describe the source to the legitimate user, and, indeed, it can use any instantaneous (deterministic or stochastic) encoding scheme for this purpose. However, we show through an example that even when there is no rate constraint on the transmitter, to attain the maximum equivocation at the eavesdropper, in general, the transmitter needs to use a coded scheme. Hence, the presence of an eavesdropper necessitates the use of a coded scheme even in the absence of a rate constraint on the transmitter. Second, we note that the maximum equivocation expression involves two different covariance matrices originating from the presence of two auxiliary random variables in the single-letter expression. We show through another example that both of these covariance matrices, in other words, both auxiliary random variables, are needed in general to attain the maximum equivocation at the eavesdropper. The necessity of two covariance matrices, and hence two auxiliary random variables, implies that, in general, the Wyner-Ziv coding scheme [17] is not sufficient to attain the maximum equivocation at the eavesdropper.
2 Secure Lossy Source Coding
Here, we describe the secure lossy source coding problem and state the existing results. Let $\{(X_i, Y_i, Z_i)\}_{i=1}^{n}$ denote i.i.d. tuples drawn from a distribution $p(x, y, z)$. The transmitter, the legitimate user and the eavesdropper observe $X^n$, $Y^n$ and $Z^n$, respectively. The transmitter wants to convey information to the legitimate user in a way that the legitimate user can reconstruct the source within a certain distortion, and meanwhile the eavesdropper is kept ignorant of the source as much as possible as measured by the equivocation. We note that if there were no eavesdropper, this setting would reduce to the Wyner-Ziv problem [17], for which a single-letter characterization of the minimum transmission rate of the transmitter for each distortion level exists.
The distortion of the reconstructed sequence at the legitimate user is measured by the function $d(X^n, \hat{X}^n)$, where $\hat{X}^n$ denotes the legitimate user's reconstruction of the source $X^n$. We consider the function $d(\cdot, \cdot)$ that has the following form
$$d(X^n, \hat{X}^n) = \frac{1}{n} \sum_{i=1}^{n} d(X_i, \hat{X}_i) \qquad (1)$$
where $d(\cdot, \cdot)$ is a nonnegative finite-valued function. The confusion of the eavesdropper is measured by the following equivocation term
$$\frac{1}{n} H(X^n \mid W, Z^n) \qquad (2)$$
where $W$, which is a function of the source $X^n$, denotes the signal sent by the transmitter.
A $(2^{nR}, n)$ code for secure lossy source coding consists of an encoding function $f: \mathcal{X}^n \rightarrow \{1, \ldots, 2^{nR}\}$ at the transmitter and a decoding function $g: \{1, \ldots, 2^{nR}\} \times \mathcal{Y}^n \rightarrow \hat{\mathcal{X}}^n$ at the legitimate user. A rate, equivocation and distortion tuple $(R, R_e, D)$ is achievable if there exists a $(2^{nR}, n)$ code satisfying
$$\lim_{n \rightarrow \infty} \frac{1}{n} H(X^n \mid W, Z^n) \geq R_e \qquad (3)$$
$$\lim_{n \rightarrow \infty} E\left[ d(X^n, \hat{X}^n) \right] \leq D \qquad (4)$$
The set of all achievable $(R, R_e, D)$ tuples is denoted by $\mathcal{R}$, which is given by the following theorem.
Theorem 1
([16, Theorem 1]) iff
(5)  
(6)  
(7) 
for some satisfying the following Markov chain
(8) 
and a function .
The achievable scheme that attains the region has the same spirit as the Wyner-Ziv scheme [17] in the sense that both achievable schemes use binning to exploit the side information at the legitimate user, and consequently, to reduce the rate requirement. The difference in the achievable scheme that attains comes from the additional binning necessitated by the presence of an eavesdropper. In particular, the transmitter generates sequences and bins both sequences. The transmitter sends these two bin indices. Using these bin indices, the legitimate user identifies the right sequences, and reconstructs within the required distortion. On the other hand, using the bin indices of , the eavesdropper identifies only the right sequence, which consequently does not contribute to the equivocation; see (6).¹ Indeed, this achievable scheme can be viewed as using a rate-splitting technique to send the message , since has two coordinates, one for the bin index of , and one for the bin index of . This perspective reveals the similarity between the achievable scheme that attains and the one that attains the capacity-equivocation region of the wiretap channel [2], where rate-splitting is also used. In particular, in the latter case, the message is divided into two parts such that is sent by the sequence and is sent by the sequence . The eavesdropper decodes , whereas the other message contributes to the secrecy.

¹The fact that the eavesdropper can decode the sequence can be obtained by observing that for a selection, if , there is no loss of optimality in setting , which will yield a larger region.
We note that Theorem 1 holds for continuous random variables by replacing the discrete entropy term with the corresponding differential entropy term. To avoid the negative equivocation that might arise from the use of differential entropy, we replace the equivocation with the mutual information leakage to the eavesdropper, defined by
$$\frac{1}{n} I(X^n; W, Z^n) \qquad (9)$$
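Since the region is expressed through mutual information terms, the following short sketch (our own helper, not part of [16]) evaluates Gaussian mutual information directly from a joint covariance matrix; this is the quantity computed repeatedly in the Gaussian examples of the later sections.

```python
import numpy as np

def gaussian_mutual_info_bits(Sigma, dx):
    """I(X; Z) in bits for a jointly Gaussian vector with covariance Sigma,
    where X is the first dx coordinates and Z is the rest:
    I(X; Z) = 0.5 * log2( det(Sigma_X) * det(Sigma_Z) / det(Sigma) )."""
    logdet = lambda M: np.linalg.slogdet(M)[1]
    ld_x = logdet(Sigma[:dx, :dx])  # marginal covariance of X
    ld_z = logdet(Sigma[dx:, dx:])  # marginal covariance of Z
    return 0.5 * (ld_x + ld_z - logdet(Sigma)) / np.log(2.0)
```

For instance, for scalar $X \sim \mathcal{N}(0,1)$ and $Z = X + N$ with unit noise variance, the joint covariance is $[[1,1],[1,2]]$ and the helper returns $0.5$ bits.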
When we consider the mutual information leakage to the eavesdropper, a rate, mutual information leakage and distortion tuple is said to be achievable if there exists a code such that
(10)  
(11) 
The set of all achievable tuples is denoted by . Using Theorem 1, the region can be stated as follows.
Theorem 2
3 Vector Gaussian Sources
Now we study the secure lossy source coding problem for jointly Gaussian sources, where the tuples are independent across time, i.e., across the index , and each tuple is drawn from the same jointly Gaussian distribution . In other words, we consider the case where is a zero-mean Gaussian random vector with covariance matrix , and the side information at the legitimate user and the eavesdropper are jointly Gaussian with the source . In particular, we assume that the side information has the following form
$$\mathbf{Y} = \mathbf{X} + \mathbf{N}_Y \qquad (16)$$
$$\mathbf{Z} = \mathbf{X} + \mathbf{N}_Z \qquad (17)$$
where $\mathbf{N}_Y$ and $\mathbf{N}_Z$ are independent zero-mean Gaussian random vectors with covariance matrices $\Sigma_{N_Y}$ and $\Sigma_{N_Z}$, respectively, and $\mathbf{X}$ and $(\mathbf{N}_Y, \mathbf{N}_Z)$ are independent. We note that the side information given by (16)-(17) is not in the most general form. In the most general case, we have
$$\mathbf{Y} = H_Y \mathbf{X} + \mathbf{N}_Y \qquad (18)$$
$$\mathbf{Z} = H_Z \mathbf{X} + \mathbf{N}_Z \qquad (19)$$
for some matrices $H_Y$ and $H_Z$. However, until Section 5, we consider the form of side information given by (16)-(17), and obtain our results for this model. In Section 5, we generalize our results to the most general case given by (18)-(19). We note that since the rate, information leakage and distortion region is invariant with respect to the correlation between $\mathbf{N}_Y$ and $\mathbf{N}_Z$, this correlation is immaterial.
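For concreteness, the aligned side information model of (16)-(17) can be simulated as follows; the function name and the use of NumPy are our own illustration choices, not part of the paper.

```python
import numpy as np

def sample_aligned_model(Sigma_X, Sigma_NY, Sigma_NZ, n, seed=0):
    """Draw n i.i.d. samples from the aligned model of (16)-(17):
    Y = X + N_Y and Z = X + N_Z, with X, N_Y, N_Z mutually
    independent zero-mean Gaussian vectors."""
    rng = np.random.default_rng(seed)
    m = Sigma_X.shape[0]
    X = rng.multivariate_normal(np.zeros(m), Sigma_X, size=n)
    Y = X + rng.multivariate_normal(np.zeros(m), Sigma_NY, size=n)
    Z = X + rng.multivariate_normal(np.zeros(m), Sigma_NZ, size=n)
    return X, Y, Z
```

The general model of (18)-(19) would only differ by multiplying `X` with the observation matrices before adding the noise.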
The distortion of the reconstructed sequence is measured by the mean square error matrix:
$$\frac{1}{n} \sum_{i=1}^{n} E\left[ (\mathbf{X}_i - \hat{\mathbf{X}}_i)(\mathbf{X}_i - \hat{\mathbf{X}}_i)^\top \right] \qquad (20)$$
Hence, the distortion constraint is represented by a positive semidefinite matrix , which is achievable if there is an code such that
(21) 
Throughout the paper, we assume that . Since the mean square error is minimized by the minimum mean square error (MMSE) estimator which is given by the conditional mean, we assume that the legitimate user applies this optimal estimator, i.e., the legitimate user selects its reconstruction function as
$$\hat{\mathbf{X}}_i = E\left[ \mathbf{X}_i \mid W, \mathbf{Y}^n \right], \quad i = 1, \ldots, n \qquad (22)$$
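As an aside, the conditional-mean (MMSE) estimator in (22) has a familiar closed form for an additive Gaussian model. The sketch below is our own helper and assumes the simple single-observation model Y = X + N; it returns both the estimate and the resulting error covariance.

```python
import numpy as np

def mmse_estimate(Sigma_X, Sigma_N, y):
    """Conditional-mean (MMSE) estimate of X from Y = X + N, with X and N
    independent zero-mean Gaussians. Returns the estimate
    Sigma_X (Sigma_X + Sigma_N)^{-1} y and the error covariance
    Sigma_X - Sigma_X (Sigma_X + Sigma_N)^{-1} Sigma_X."""
    gain = Sigma_X @ np.linalg.inv(Sigma_X + Sigma_N)
    return gain @ y, Sigma_X - gain @ Sigma_X
```

In the scalar unit-variance case with unit noise, the gain is 1/2 and the error variance is 1/2, matching the MMSE of X given Y.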
Once the estimator of the legitimate user is set as (22), using Theorem 2, a singleletter description of the region for a vector Gaussian source can be given as follows.
Theorem 3
iff
(23)  
(24)  
(25) 
for some satisfying the following Markov chain
(26) 
We also define the region as the union of the pairs that are achievable when the distortion constraint matrix is set to . Our main result is an outer bound for the region , hence for the region .
Theorem 4
When , we have
(27) 
where is given by the union of that satisfy
(28)  
(29) 
and .
We will prove Theorem 4 in Section 4. In the remainder of this section, we provide interpretations and discuss some implications of Theorem 4.
The outer bound in Theorem 4 is obtained by minimizing the constraints on the rate and the mutual information leakage individually, i.e., the rate lower bound in (28) is obtained by minimizing the rate constraint in (23), and the mutual information leakage lower bound in (29) is obtained by minimizing the mutual information leakage constraint in (24), separately. However, to characterize the rate and mutual information leakage region , one needs to minimize the rate constraint in (23) and the mutual information leakage constraint in (24) jointly, not separately. In particular, since the region is convex in the pairs as per a time-sharing argument, joint optimization of the rate constraint in (23) and the mutual information leakage constraint in (24) can be carried out by considering the tangent lines to the region , i.e., by solving the following optimization problem
(30)  
(31) 
for all values of , where . So far, we have been unable to solve this optimization problem for all values of . However, as stated in Theorem 4, we solve the optimization problems and by showing that jointly Gaussian is optimal for evaluating the corresponding cost functions. In other words, our outer bound in Theorem 4 can be written as follows.
(32)  
(33) 
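The tangent-line argument above can be illustrated numerically: sweeping a nonnegative weight over achievable (rate, leakage) pairs traces the supporting lines of the convex region. The helper below is our own sketch under the assumption that a finite list of achievable pairs is available.

```python
import numpy as np

def tangent_lower_bound(points, lam):
    """Value of the supporting line with slope parameter lam >= 0:
    the minimum of R + lam * I over a list of achievable (R, I) pairs.
    Sweeping lam traces the lower boundary of the convex region, which is
    the joint optimization of the rate and leakage constraints."""
    pts = np.asarray(points, dtype=float)
    return np.min(pts[:, 0] + lam * pts[:, 1])
```

Setting the weight to zero recovers the separate rate minimization, and letting it grow recovers the separate leakage minimization; intermediate weights are exactly the joint optimization that we have been unable to solve in closed form.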
We note that the constraint in (28), and hence , gives us the Wyner-Ziv rate-distortion function [17] for vector Gaussian sources. Moreover, we note that gives us the minimum mutual information leakage to the eavesdropper when the legitimate user wants to reconstruct the source within a fixed distortion constraint while there is no constraint on the transmission rate . Denoting the minimum mutual information leakage to the eavesdropper when the legitimate user needs to reconstruct the source within a fixed distortion constraint by , the corresponding result can be stated as follows.
Theorem 5
When , we have
(34) 
where .
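For intuition, the scalar specialization of the Wyner-Ziv rate-distortion function mentioned above can be sketched as follows; the helper name and the scalar setting are our own illustration assumptions, not the paper's vector statement.

```python
import numpy as np

def wyner_ziv_rate_bits(var_x, var_ny, D):
    """Scalar-Gaussian Wyner-Ziv rate-distortion function with side
    information Y = X + N_Y at the decoder:
    R_WZ(D) = max(0, 0.5 * log2(mmse / D)), where
    mmse = Var(X | Y) = var_x * var_ny / (var_x + var_ny)."""
    mmse = var_x * var_ny / (var_x + var_ny)
    return max(0.0, 0.5 * np.log2(mmse / D))
```

As expected, once the distortion target exceeds the MMSE of the source given the side information, no rate is needed at all.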
Theorem 5 implies that if the transmitter's aim is to minimize the mutual information leakage to the eavesdropper without regard to the rate this costs, as long as the legitimate receiver is able to reconstruct the source within a distortion constraint , the use of jointly Gaussian is optimal. Since there is no rate constraint in Theorem 5, one natural question to ask is whether can be achieved by an uncoded transmission scheme. We now address this question in a broader context by letting the encoder use any instantaneous encoding function of the form , where can be a deterministic or a stochastic mapping. When is chosen to be stochastic, we assume it to be independent across time. We note that uncoded transmission can be obtained from instantaneous encoding by selecting to be a linear function. Similarly, uncoded transmission with artificial noise can be obtained from instantaneous encoding by selecting , where denotes the noise. Hence, if the encoder uses an instantaneous encoding scheme, the transmitted signal is given by . Let be the minimum information leakage to the eavesdropper when the legitimate user is able to reconstruct the source within a distortion constraint while the encoder uses instantaneous encoding. The following example demonstrates that, in general, cannot be achieved by instantaneous encoding.
Example 1
Consider the scalar case, where the side information at the legitimate user and the eavesdropper are given as follows
$$Y = X + N_Y \qquad (35)$$
$$Z = X + N_Z \qquad (36)$$
where $N_Y$ and $N_Z$ are zero-mean Gaussian random variables with variances $\sigma_Y^2$ and $\sigma_Z^2$, respectively, and $X$, $N_Y$ and $N_Z$ are mutually independent. We assume that $\sigma_Z^2 \geq \sigma_Y^2$, which implies that we can assume the Markov chain $X \rightarrow Y \rightarrow Z$, since the scalar model in (35)-(36) is statistically degraded; in other words, the correlation between $N_Y$ and $N_Z$ does not affect the achievable region. Using Theorem 3, for the scalar Gaussian model under consideration, can be found as follows
(37)  
(38) 
where in (38), we used the Markov chain .
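The degradedness assumption of this example can be sanity-checked numerically: with the larger noise variance at the eavesdropper, the raw leakage through the eavesdropper's side information alone is no larger than that through the legitimate user's. The helper below is our own sketch, not part of the example's derivation.

```python
import numpy as np

def side_info_leakage_bits(var_x, var_n):
    """Leakage through side information alone for the scalar model:
    I(X; X + N) = 0.5 * log2(1 + var_x / var_n) in bits."""
    return 0.5 * np.log2(1.0 + var_x / var_n)

# With sigma_Z^2 >= sigma_Y^2, the eavesdropper's observation is a
# degraded version of the legitimate user's, so it leaks no more:
assert side_info_leakage_bits(1.0, 2.0) <= side_info_leakage_bits(1.0, 1.0)
```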
As shown in Appendix A, the information leakage to the eavesdropper when the encoder uses an instantaneous mapping is given by
(39)  
(40) 
where (40) is obtained by using the Markov chain .
(41)  
(42)  
(43) 
where (43) comes from the Markov chain . Next, we note the following lemma.
Lemma 1
For jointly Gaussian satisfying the Markov chain and , if , we have
(44) 
The proof of Lemma 1 can be found in Appendix B. It starts with the observation that (44) is zero iff we have the Markov chain . On the other hand, since we already have the Markov chain , and and are not identical, we show in Appendix B that the Markov chain is possible iff and are independent. However, if , any that is independent of is not feasible. Hence, Lemma 1 follows. Lemma 1 implies that, in general, we have , i.e., cannot be achieved by instantaneous encoding.
This example shows that uncoded transmission is not optimal even when there is no rate constraint. This is due to the presence of an eavesdropper, which necessitates the use of a coded scheme.
Another question that Theorem 5 raises is whether the minimum in (34) is achieved by a nontrivial . By a trivial selection for , we mean either or . The former corresponds to the selection and the latter corresponds to the selection . We note that although (34) is monotonically decreasing in in the positive semidefinite sense, (34) is neither monotonically increasing nor monotonically decreasing in in the positive semidefinite sense. Hence, due to this lack of monotonicity of in , we expect that, in general, both and may be necessary to attain the minimum in (34). The following example demonstrates that, in general, both and may indeed be necessary.
Example 2
Consider the Gaussian source where and are independent. The side information at the legitimate receiver and the eavesdropper are given by
(45)  
(46) 
where and are zero-mean Gaussian random variables with variances and , respectively. Moreover, and are independent, and so are and . We assume that the noise variances satisfy
(47)  
(48) 
which, in view of the fact that the correlation between the noise at the legitimate receiver and the noise at the eavesdropper does not affect the rate, distortion and information leakage region, lets us assume the following Markov chains
(49)  
(50) 
Moreover, we assume that the distortion constraint is a diagonal matrix with diagonal entries and . In this case, the minimum information leakage is given by
(51) 
whose proof can be found in Appendix C. The minimum information leakage in (51) corresponds to the selections and , where and are independent. This selection of corresponds to neither nor .
Next, we obtain the minimum information leakage that arises when we set either or , and show that the minimum information leakage arising from these selections is strictly larger than that in (51), which implies the suboptimality of and . When we set , the minimum information leakage is given by
(52) 
whose proof is given in Appendix D. When we set , the minimum information leakage is given by
(53) 
whose proof can be found in Appendix D.
Now, we compare the minimum information leakage in (51) with (52) and (53) to show that the selections and are suboptimal in general. Using (51) and (52), we get
(54)  
(55)  
(56)  
(57)  
(58) 
where (56)-(57) follow from the Markov chain
(59) 
and (58) comes from Lemma 1. Thus, in general, we have , or in other words, is suboptimal in general.
Example 2 shows that, in general, we may need two covariance matrices, and hence two different auxiliary random variables, to attain the minimum information leakage. Indeed, if we have either or , the corresponding achievable scheme is identical to the Wyner-Ziv scheme [17]. Hence, the necessity of two different auxiliary random variables implies that, in general, the Wyner-Ziv scheme [17] is suboptimal.
4 Proof of Theorem 4
We now provide the proof of Theorem 4. As mentioned in the previous section, this outer bound is obtained by minimizing the rate constraint in (23) and the mutual information leakage constraint in (24) separately. We first consider the rate constraint in (23) as follows
(66)  
(67)  
(68)  
(69)  
(70)  
(71) 
where (70) comes from the fact that is maximized by jointly Gaussian , and (71) comes from the monotonicity of in positive semidefinite matrices. Now we introduce the following lemma.
Lemma 2
(72) 
Next, we consider the mutual information leakage constraint in (24) as follows
(73) 
We note that the cost function of can be rewritten as follows
(74)  
(75) 
where (74) comes from the Markov chain and (75) comes from the Markov chain . We note that the first term in (75) is minimized by a jointly Gaussian , as we already showed in obtaining the lower bound for the rate given by (28) above in (66)-(71). On the other hand, the remaining term of (75) in brackets is maximized by a jointly Gaussian , as shown in [18]. Thus, a tension between these two terms arises if is selected to be jointly Gaussian. In spite of this tension, we will still show that a jointly Gaussian is the minimizer of . Instead of showing this directly, we first characterize the minimum mutual information leakage when is restricted to be jointly Gaussian, and then show that this minimum cannot be improved upon by any other distribution for . We note that any jointly Gaussian can be written as
(76)  
(77) 
where are zero-mean Gaussian random vectors with covariance matrices , respectively. Moreover, are independent of , but can be dependent on each other. Before characterizing the minimum mutual information leakage when is restricted to be jointly Gaussian, we introduce the following lemma.
Lemma 3
When and is Gaussian, we have the following facts.

, i.e., is positive definite, and hence, nonsingular.

We have the following equivalence:
(78)