Secret-Key Generation using Correlated Sources and Channels

Ashish Khisti, Student Member, IEEE, Suhas N. Diggavi, Member, IEEE, and Gregory W. Wornell, Fellow, IEEE. Part of the material in this paper was presented at the 2008 Information Theory and its Applications Workshop [11] and the 2008 International Symposium on Information Theory [12]. Ashish Khisti was with the EECS Department, MIT (ashish.khisti@gmail.com). Suhas Diggavi is with the faculty of the School of Computer and Communication Sciences at EPFL (suhas.diggavi@epfl.ch). Gregory Wornell is with the faculty of the EECS Dept., MIT (gww@mit.edu). The work of Ashish Khisti and Gregory Wornell was supported in part by NSF Grant No. CCF-0515109. The work of Suhas Diggavi was supported in part by the Swiss National Science Foundation through NCCR-MICS.
Abstract

We study the problem of generating a shared secret key between two terminals in a joint source-channel setup: the sender communicates to the receiver over a discrete memoryless wiretap channel, and in addition the terminals have access to correlated discrete memoryless source sequences. We establish lower and upper bounds on the secret-key capacity. These bounds coincide, establishing the capacity, when the underlying channel consists of independent, parallel and reversely degraded wiretap channels. In the lower bound, the equivocation terms of the source and channel components are functionally additive. The secret-key rate is maximized by optimally balancing the source and channel contributions. This tradeoff is illustrated in detail for the Gaussian case, where it is also shown that Gaussian codebooks achieve the capacity. When the eavesdropper also observes a source sequence, the secret-key capacity is established when the source and channel of the eavesdropper are degraded versions of those of the legitimate receiver. Finally, the case when the terminals also have access to a public discussion channel is studied. We propose generating separate keys from the source and channel components and establish the optimality of this approach when the channel outputs of the receiver and the eavesdropper are conditionally independent given the input.


1 Introduction

Many applications in cryptography require that the legitimate terminals have shared secret keys that are not available to unauthorized parties. Information-theoretic security encompasses the study of source and channel coding techniques to generate secret keys between legitimate terminals. In the channel coding literature, an early work in this area is the wiretap channel model [19]. It consists of three terminals: one sender, one receiver, and one eavesdropper. The sender communicates to the receiver and the eavesdropper over a discrete memoryless broadcast channel. A notion of equivocation rate, the normalized conditional entropy of the transmitted message given the observation at the eavesdropper, is introduced, and the tradeoff between information rate and equivocation rate is studied. Of particular interest is the perfect secrecy capacity, defined as the maximum information rate under the constraint that the equivocation rate approaches the information rate asymptotically in the block length. Information transmitted at this rate can naturally be used as a shared secret key between the sender and the receiver.

In the source coding setup [1, 15], the two terminals observe correlated source sequences and use a public discussion channel for communication. Any information sent over this channel is available to an eavesdropper. The terminals generate a common secret key that is concealed from the eavesdropper in the same sense as in the wiretap channel: the equivocation rate asymptotically equals the secret-key rate. Several multiuser extensions of this problem have subsequently been studied; see, e.g., [5, 6].

Motivated by the above works, we study a problem where the legitimate terminals observe correlated source sequences, communicate over a wiretap channel, and are required to generate a common secret key. One application of this setup is sensor networks, where terminals measure correlated physical processes. It is natural to investigate how these measurements can be used for secrecy. In addition, the sensor nodes communicate over a wireless channel, where an eavesdropper could hear the transmission, albeit through a different channel. Another application is secret-key generation using biometric measurements [7]. During the registration phase, an enrollment biometric is stored in a database. To generate a secret key subsequently, the user is required to provide another measurement of the same biometric. This new measurement differs from the enrollment biometric due to factors such as measurement noise and hence can be modeled as a correlated signal. Again, when the database is remotely located, the communication happens over a channel that could be wiretapped.

The secret-key agreement schemes of [15, 1] generate a secret key using only the source sequences. On the other hand, the wiretap coding scheme [19] generates a secret key by exploiting the structure of the underlying broadcast channel. Clearly, in the present setup we should consider schemes that take into account both the source and channel contributions. One simple approach is time-sharing: for a certain fraction of time the wiretap channel is used as a (rate-limited) transmission channel, whereas for the remaining time a wiretap code is used to transmit information at the secrecy capacity. However, such an approach is in general suboptimal. As we will see, a better approach involves simultaneously exploiting both the source and channel uncertainties at the eavesdropper. As our main result we present lower and upper bounds on the secret-key capacity. The lower bound is developed by providing a coding theorem that combines a Wyner-Ziv codebook, a wiretap codebook and a secret-key generation codebook. Our upper and lower bounds coincide, establishing the secret-key capacity, when the wiretap channel consists of parallel, independent and reversely degraded channels.

We also study the case when the eavesdropper observes a source sequence correlated with those of the legitimate terminals. The secret-key capacity is established when the source sequence of the eavesdropper is a degraded version of that of the legitimate receiver and the channel of the eavesdropper is a degraded version of the channel of the legitimate receiver. Another variation, in which a public discussion channel is available for interactive communication, is also discussed, and the secret-key capacity is established when the channel output symbols of the legitimate receiver and eavesdropper are conditionally independent given the input.

The problem studied in this paper also provides an operational significance for the rate-equivocation region of the wiretap channel. Recall that the rate-equivocation region captures the tradeoff between the conflicting requirements of maximizing the information rate to the legitimate receiver and the equivocation level at the eavesdropper [3]. To maximize the contribution of the correlated sources, we must operate at the Shannon capacity of the underlying channel. In contrast, to maximize the contribution of the wiretap channel, we operate at a point of maximum equivocation. In general, the optimal operating point lies in between these extremes. We illustrate this tradeoff in detail for the case of Gaussian sources and channels.

In related work, [16, 20, 10] study a setup involving sources and channels, but require that a source sequence be reproduced at the destination subject to an equivocation level at the eavesdropper. In contrast, our paper does not impose any requirement on reproduction of a source sequence, but instead requires that the terminals generate a common secret key. A recent work [18] considers transmitting an independent confidential message using correlated sources and noisy channels. This problem is different from the secret-key generation problem, since the secret key, by definition, is an arbitrary function of the source sequence, while the message is required to be independent of the source sequences. Independently and concurrently with our work, the authors of [17] consider the scenario of joint secret-message transmission and secret-key generation, which, when specialized to the case of no secret message, reduces to the scenario treated in this paper. While the expression for the achievable rate in [17] appears consistent with the expression in this paper, the optimality claims in [17] are limited to the case when either the sources or the channel do not provide any secrecy.

The rest of the paper is organized as follows. The problem of interest is formally introduced in section 2 and the main results of this work are summarized in section 3. Proofs of the lower and upper bound appear in sections 4 and 5 respectively. The secrecy capacity for the case of independent parallel reversely degraded channels is provided in section 6. The case when the wiretapper has access to a degraded source and observes transmission through a degraded channel is treated in section 7 while section 8 considers the case when a public discussion channel allows interactive communication between the sender and the receiver. The conclusions appear in section 9.

2 Problem Statement

Fig. 1 shows the setup of interest. The sender and receiver communicate over a wiretap channel and have access to correlated sources. They can interact over a public discussion channel. We consider two extreme scenarios: (a) the discussion channel does not exist, and (b) the discussion channel has unlimited capacity.

Figure 1: Secret-key agreement over the wiretap channel with correlated sources. The sender and receiver communicate over a wiretap channel and have access to correlated sources. They communicate interactively over a public discussion channel of rate , if it is available.

The channel from the sender to the receiver and the wiretapper is a discrete memoryless channel (DMC). The sender and intended receiver observe a discrete memoryless multiple source (DMMS) of length and communicate over uses of the DMC. We separately consider the cases when no public discussion is allowed and when unlimited discussion is allowed.

2.1 No discussion channel is available

A secrecy code is defined as follows. The sender samples a random variable from the conditional distribution . (The alphabets associated with random variables will be denoted by calligraphic letters. Random variables are denoted by sans-serif font, while their realizations are denoted by standard font. A length- sequence is denoted by .) The encoding function maps the observed source sequence to the channel input. In addition, two key-generation functions and , at the sender and the receiver respectively, are used for secret-key generation. A secret-key rate is achievable with bandwidth expansion factor if there exists a sequence of codes such that, for a sequence that approaches zero as , we have (i) , (ii) , and (iii) . The secret-key capacity is the supremum of all achievable rates.
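For orientation, the three requirements can be written schematically as follows. This is a hedged sketch with assumed notation (not the paper's own symbols): $\mathsf{k}$ and $\hat{\mathsf{k}}$ denote the keys computed at the sender and the receiver, $R$ the target key rate, $\mathsf{z}^{m}$ the eavesdropper's observation over the (bandwidth-expanded) block of channel uses, and $\epsilon_n$ a vanishing sequence.

```latex
% Hedged sketch of the standard requirements (assumed notation):
% (i) reliability, (ii) key rate / near-uniformity, (iii) secrecy.
\Pr\bigl(\mathsf{k} \neq \hat{\mathsf{k}}\bigr) \le \epsilon_n, \qquad
\frac{1}{n} H(\mathsf{k}) \ge R - \epsilon_n, \qquad
\frac{1}{n} I\bigl(\mathsf{k}; \mathsf{z}^{m}\bigr) \le \epsilon_n .
```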

For some of our results, we will also consider the case when the wiretapper observes a side information sequence sampled i.i.d. . In this case, the secrecy condition in (iii) above is replaced with

(1)

In addition, for some of our results we will consider the special case when the wiretap channel consists of parallel and independent channels each of which is degraded.

2.1.1 Parallel Channels

Definition 1

A product broadcast channel is one in which the constituent subchannels have finite input and output alphabets, are memoryless and independent of each other, and are characterized by their transition probabilities

(2)

where denotes the sequence of symbols transmitted on subchannel , where denotes the sequence of symbols obtained by the legitimate receiver on subchannel , and where denotes the sequence of symbols received by the eavesdropper on subchannel .
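As a concrete reading of Definition 1, the transition law of a product broadcast channel with $M$ subchannels factors across the subchannels (and, by memorylessness, across channel uses). The symbols below ($x_m$, $y_m$, $z_m$ for the input and the two outputs of subchannel $m$) are assumed for this sketch and need not match the paper's notation.

```latex
% Hedged sketch (assumed notation) of the product transition law in (2):
p\bigl(y_1,\ldots,y_M,\, z_1,\ldots,z_M \,\big|\, x_1,\ldots,x_M\bigr)
  = \prod_{m=1}^{M} p_m\bigl(y_m, z_m \,\big|\, x_m\bigr).
```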

A special class of product broadcast channels, known as reversely degraded broadcast channels [8], is defined as follows.

Definition 2

A product broadcast channel is reversely-degraded when each of the constituent subchannels is degraded in a prescribed order. In particular, for each subchannel , one of or holds.

Note that in Def. 2 the order of degradation need not be the same for all subchannels, so the overall channel need not be degraded. We also emphasize that in any subchannel the receiver and eavesdropper are physically degraded. Our capacity results, however, only depend on the marginal distributions of the receivers in each subchannel. (However, when we consider the presence of a public discussion channel and interactive communication, the capacity does depend on the joint distribution.) Accordingly, our results in fact hold for the larger class of channels in which there is only stochastic degradation in the subchannels.

We obtain further results when the channel is Gaussian.

2.1.2 Parallel Gaussian Channels and Gaussian Sources

Definition 3

A reversely-degraded product broadcast channel is Gaussian when it takes the form

(3)

where the noise variables are all mutually independent, and and . For this channel, there is also an average power constraint

Furthermore we assume that and are jointly Gaussian (scalar valued) random variables, and without loss of generality we assume that and , where is independent of .
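One consistent way to write the Gaussian model of Definition 3 and the accompanying scalar source model is sketched below. The symbols ($x_m$, the noise terms, the power budget $P$, and the source noise variance) are placeholders rather than the paper's notation, and the noise terms on each subchannel are understood to be coupled so that one output is a degraded version of the other.

```latex
% Hedged sketch (assumed notation) of the Gaussian reversely degraded model (3):
y_{r,m} = x_m + n_{r,m}, \qquad y_{e,m} = x_m + n_{e,m}, \qquad m = 1,\ldots,M,
\qquad \sum_{m=1}^{M} \mathbb{E}\bigl[x_m^2\bigr] \le P,
% and of the jointly Gaussian scalar source pair:
\qquad v = u + w, \quad w \sim \mathcal{N}(0,\sigma_w^2), \quad w \perp u .
```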

2.2 Presence of a public discussion channel

We will also consider a variation on the original setup when a public discussion channel is available for communication. This setup was first introduced in the pioneering works [15, 1] where the secret-key capacity was bounded for source and channel models. The sender and receiver can interactively exchange messages on the public discussion channel.

The sender transmits symbols at times over the wiretap channel. At these times the receiver and the eavesdropper observe symbols and respectively. In the remaining times the sender and receiver exchange messages and where . For convenience we let . The eavesdropper observes both and . More formally,

  • At time the sender and receiver sample random variables and respectively from conditional distributions and . Note that holds.

  • At times the sender generates and the receiver generates . These messages are exchanged over the public channel.

  • At times , , the sender generates and sends it over the channel. The receiver and eavesdropper observe and respectively. For these times we set .

  • For times , where , the sender and receiver compute and respectively and exchange them over the public channel.

  • At time , the sender computes and the receiver computes .

We require that for some sequence that vanishes as , and

(4)

3 Statement of Main Results

It is convenient to define the following quantities which will be used in the sequel. Suppose that is a random variable such that , and and are random variables such that holds and . Furthermore define

(5a)
(5b)
(5c)
(5d)
(5e)
(5f)

We establish the following lower and upper bounds on the secret-key rate in Sections 4 and 5, respectively.

Lemma 1

A lower bound on the secret-key rate is given by

(6)

where the random variables and defined above additionally satisfy the condition

(7)

and the quantities , , and are defined in (5d), (5c), (5b) and (5a) respectively.

Lemma 2

An upper bound on the secret-key rate is given by,

(8)

where the supremum is over all distributions over the random variables that satisfy , the cardinality of is at most the cardinality of plus one, and

(9)

The quantities , , and are defined in (5c), (5d), (5e) and (5f) respectively.

Furthermore, it suffices to consider only those distributions where are independent.

3.1 Reversely degraded parallel independent channels

The bounds in Lemmas 1 and 2 coincide for the case of reversely degraded channels as shown in section 6.1 and stated in the following theorem.

Theorem 1

The secret-key-capacity for the reversely degraded parallel independent channels in Def. 2 is given by

(10)

where the random variables are mutually independent, , and

(11)

Furthermore, the cardinality of obeys the same bounds as in Lemma 2.

3.2 Gaussian Channels and Sources


Figure 2: An example of independent parallel and reversely degraded Gaussian channels. On the first channel, the eavesdropper channel is noisier than the legitimate receiver’s channel while on the second channel the order of degradation is reversed.

For the case of Gaussian sources and Gaussian channels, the secret-key capacity can be achieved by Gaussian codebooks as established in section 6.2 and stated below.

Corollary 1

The secret-key capacity for the case of Gaussian parallel channels and Gaussian sources in subsection 2.1.2 is obtained by optimizing (10) and (11) over independent Gaussian distributions i.e., by selecting and , for some , independent of and , , and .

(12)

where also satisfy the following relation:

(13)

3.3 Remarks

  1. Note that the secret-key capacity expression (10) exploits both the source and channel uncertainties at the wiretapper. By setting either uncertainty to zero, one can recover known results. When , i.e., there is no secrecy from the source, the secret-key rate equals the wiretap capacity [19]. If , i.e., there is no secrecy from the channel, then our result essentially reduces to the result of Csiszár and Narayan [5], who consider the case when the channel is a noiseless bit-pipe of finite rate.

  2. In general, the setup of the wiretap channel involves a tradeoff between information rate and equivocation. The secret-key generation setup provides an operational significance to this tradeoff. Note that the capacity expression (10) in Theorem 1 involves two terms. The first term is the contribution from the correlated sources. In general, this quantity increases with the information rate, as seen from (11). The second term is the equivocation term, and increasing it often comes at the expense of the information rate. Maximizing the secret-key rate involves operating at an intermediate point on the rate-equivocation tradeoff curve, as illustrated by the example below.

    Consider a pair of Gaussian parallel channels,

    (14)

    where , , and . Furthermore, and , where is independent of . The noise variables are all sampled from the distribution and appropriately correlated so that the users are degraded on each channel. A total power constraint is selected and the bandwidth expansion factor equals unity.

    From Theorem 1,

    (15)
    (16)
    (17)
    (18)
    Figure 3: Tradeoff inherent in the secret-key-capacity formulation. The solid curve is the secret-key-rate, which is the sum of the two other curves. The dotted curve represents the source equivocation, while the dashed curve represents the channel equivocation  (18). The secret-key-capacity is obtained at a point between the maximum equivocation and maximum rate.

    Fig. 3 illustrates the (fundamental) tradeoff between rate and equivocation for this channel, which is obtained as we vary the power allocation between the two subchannels. We also plot the source term, which monotonically increases with the rate, since the larger the rate, the smaller the distortion in the source quantization. The optimal operating point lies between the point of maximum equivocation and the point of maximum rate, as indicated by the maximum of the solid line in Fig. 3. This corresponds to a power allocation and the maximum value is . A numerical sketch of this computation is given below.
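The following minimal numerical sketch reproduces the flavor of this computation under assumed parameter values; the noise variances, power budget, source correlation, and the closed-form expressions used here (the Gaussian secrecy rate per subchannel and a Gaussian Wyner-Ziv test channel matched to the available channel rate) are illustrative assumptions, not values or formulas taken from the paper.

```python
import numpy as np

# Illustrative (assumed) parameters: total power, noise variances on the two
# subchannels, and the source correlation.  On subchannel 1 the eavesdropper is
# noisier than the receiver; on subchannel 2 the order is reversed.
P_total = 10.0
N_r = np.array([1.0, 1.0])    # receiver noise variances (assumed)
N_e = np.array([4.0, 0.5])    # eavesdropper noise variances (assumed)
rho2 = 0.5                    # squared source correlation, so I(u;v) = 0.5 bit

def C(snr):
    """Gaussian capacity in bits per symbol."""
    return 0.5 * np.log2(1.0 + snr)

def rates(p1):
    """Channel rate, channel equivocation, and source key term for a power split."""
    p = np.array([p1, P_total - p1])
    R_ch = C(p / N_r).sum()                                  # rate to the receiver
    R_eq = np.maximum(C(p / N_r) - C(p / N_e), 0.0).sum()    # channel secrecy term
    # Gaussian Wyner-Ziv test channel: pick the distortion D so that the
    # Wyner-Ziv rate equals the available channel rate R_ch; the source key
    # term is then the mutual information between the quantized description
    # and the receiver's source.
    D = (1.0 - rho2) / (2.0 ** (2.0 * R_ch) - 1.0 + 1e-12)
    R_src = 0.5 * np.log2((1.0 + D) / (1.0 - rho2 + D))
    return R_ch, R_eq, R_src

best = max(((p1,) + rates(p1) for p1 in np.linspace(0.0, P_total, 401)),
           key=lambda t: t[2] + t[3])
print("best split P1 = %.2f: key rate ~ %.3f bits/symbol (channel %.3f + source %.3f)"
      % (best[0], best[2] + best[3], best[2], best[3]))
```

Sweeping the power split and printing the maximizer mirrors the behavior in Fig. 3, where the optimum lies between the maximum-equivocation and maximum-rate allocations.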

3.4 Side information at the wiretapper

So far, we have focused on the case when there is no side information at the wiretapper. This assumption is valid for certain applications such as biometrics, where the correlated sources constitute successive measurements of a person’s biometric. In other applications, such as sensor networks, it is more realistic to assume that the wiretapper also has access to a side-information sequence.

We consider the setup described in Fig. 1, but with a modification that the wiretapper observes a source sequence , obtained by independent samples of a random variable . In this case the secrecy condition takes the form in (1). We only consider the case when the sources and channels satisfy a degradedness condition.

Theorem 2

Suppose that the random variables satisfy the degradedness condition and the broadcast channel is also degraded i.e., . Then, the secret-key-capacity is given by

(19)

where the maximization is over all random variables that are mutually independent, and

(20)

holds. Furthermore, it suffices to optimize over random variables whose cardinality does not exceed that of plus two.

3.5 Secret-key capacity with a public discussion channel

When public interactive communication is allowed as described in section 2.2, we have the following upper bound on the secret-key capacity.

Theorem 3

An upper bound on the secret-key capacity for source-channel setup with a public discussion channel is

(21)

The upper bound is tight when the channel satisfies either or .

Figure 4: Secret-key-rate in the presence of a public discussion channel in the Gaussian example (14). The solid curve is the secret-key-rate, which is the sum of the two other curves. The horizontal line is the key rate from the source components. Regardless of the channel rate, the rate is 0.5 bits/symbol. The dashed-dotted curve is the key-rate using the channel .

The presence of a public discussion channel allows us to decouple the source and channel codebooks. We generate two separate keys: one from the source component using a Slepian-Wolf codebook, and one from the channel component using the key-agreement protocol described in [1, 15].

The upper bound expression (21) in Theorem 3 is established using techniques similar to the proof of the upper bound on the secret-key rate for the channel model [1, Theorem 3]. A derivation is provided in section 8.

Fig. 4 illustrates the contributions of the source and channel coding components for the case of the Gaussian parallel channels (14) consisting of (physically) degraded component channels. The term is independent of the channel coding rate and is shown by the horizontal line. The channel equivocation rate is maximized at the secrecy capacity. The overall key rate is the sum of the two components. Note that, unlike Fig. 3, there is no inherent tradeoff between the source and channel coding contributions in the presence of a public discussion channel, and the design of the source and channel codebooks is decoupled.
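Schematically, and with assumed notation ($\beta$ for the bandwidth expansion factor, $\mathsf{u},\mathsf{v}$ for the source pair, and $C^{\mathrm{ch}}_{\mathrm{SK}}$ for the secret-key rate extracted from the channel with public discussion as in [1, 15]), the decoupled scheme yields a key rate of roughly

```latex
% Hedged schematic of the decoupled source/channel key rates (assumed notation):
R_{\mathrm{key}} \;\approx\;
  \underbrace{I(\mathsf{u};\mathsf{v})}_{\text{source key (Slepian-Wolf)}}
  \;+\; \beta \,
  \underbrace{C^{\mathrm{ch}}_{\mathrm{SK}}}_{\text{channel key ([1],[15] protocol)}},
```

which is why the source contribution in Fig. 4 appears as a horizontal line, independent of the operating point chosen for the channel.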

4 Achievability: Coding Theorem

We demonstrate the coding theorem in the special case when and in Lemma 1. Accordingly we have that (5a) and (5b) reduce to

(22a)
(22b)

The more general case can be incorporated by introducing an auxiliary channel and superposition coding [4], as outlined in Appendix 10. Furthermore, in our discussion below we will assume that the distributions and are selected such that, for a sufficiently small but fixed , we have

(23)

We note that the optimization over the joint distributions in Lemma 1 is over the region . If the joint distributions satisfy that for some , one can use the code construction below for a block-length and then transmit an independent message at rate using a perfect-secrecy wiretap code. This provides a rate of

as required.

4.1 Codebook Construction

Our codebook construction is shown in Fig. 5.

We first describe the intuition behind the codebook construction. The wiretap channel carries an ambiguity of at the eavesdropper for each transmitted message. Furthermore, each message only reveals the bin index; hence it carries an additional ambiguity of codeword sequences. Combining these two effects, the total ambiguity is . Thus a secret key can be produced at rate . This heuristic intuition is made precise below.
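In symbols, and with assumed notation ($\mathsf{a}$ for the auxiliary Wyner-Ziv codeword, $\mathsf{u},\mathsf{v}$ for the sources, $\mathsf{x},\mathsf{y},\mathsf{z}$ for the channel input and the two outputs, $\beta$ for the bandwidth expansion factor), the counting argument reads roughly as follows; this is a heuristic restatement of the paragraph above, not a verbatim reconstruction of the paper's expressions.

```latex
% Hedged heuristic count of the eavesdropper's ambiguity (assumed notation):
\underbrace{2^{\,n\beta\,[I(\mathsf{x};\mathsf{y}) - I(\mathsf{x};\mathsf{z})]}}_{\text{channel ambiguity per message}}
\;\times\;
\underbrace{2^{\,n\,I(\mathsf{a};\mathsf{v})}}_{\text{codewords per revealed bin}}
\;\;\Longrightarrow\;\;
R_{\mathrm{key}} \;\approx\; I(\mathsf{a};\mathsf{v}) \;+\; \beta\,[I(\mathsf{x};\mathsf{y}) - I(\mathsf{x};\mathsf{z})].
```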


Figure 5: Source-Channel Code Design for secret-key distillation problem. The source sequence is mapped to a codeword in a Wyner-Ziv codebook. This codeword determines the secret-key via the secret-key codebook. The bin index of the codeword constitutes a message in the wiretap codebook.

Figure 6: Equivocation at the eavesdropper through the source-channel codebook. The channel codebook induces an ambiguity of among the codeword sequences when the decoder observes . Each sequence only reveals the bin index of the Wyner-Ziv codeword. This induces an ambiguity of at the eavesdropper, resulting in a total ambiguity of .

The coding scheme consists of three codebooks: a Wyner-Ziv codebook, a secret-key codebook and a wiretap codebook, which are constructed via a random coding construction. In our discussion below we will be using the notion of strong typicality. Given a random variable , the set of all sequences of length and type that coincides with the distribution is denoted by . The set of all sequences whose empirical type is in an -shell of is denoted by . The set of jointly typical sequences is defined in an analogous manner. Given a sequence of type , the set of all sequences that have a joint type of is denoted by . We will be using the following properties of typical sequences

(24a)
(24b)
(24c)

where is a term that approaches zero as and .

For fixed, but sufficiently small constants and , let,

(25a)
(25b)
(25c)
(25d)

Substituting (5a)-(5d) and (23) into (25a)-(25d) we have that

(26)

We construct the Wyner-Ziv and secret-key codebooks as follows. Randomly and independently select sequences from the set of typical sequences . Denote this set . Randomly and independently partition this set into the following codebooks (as will be apparent in the analysis, only pairwise independence is required between the codebooks, i.e., , ):

  • Wyner-Ziv codebook with bins consisting of sequences. The sequence in bin is denoted by .

  • Secret-key codebook with bins consisting of sequences. The sequence in bin is denoted by .

We define two functions and as follows.

Definition 4

Given a codeword sequence , define two mappings

  1. , if , such that .

  2. , if such that .

The channel codebook consists of sequences uniformly and independently selected from the set of typical sequences . The channel encoding function maps message into the sequence , i.e., is defined as .
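The bookkeeping in this construction can be sketched with a toy script. Everything below is hypothetical scaffolding: the integer pool stands in for the randomly selected typical sequences, `wz_bin` and `sk_bin` stand in for the two independent random partitions, and the `is_typical_with_v` predicate stands in for the joint-typicality test used by the decoder.

```python
import random

random.seed(0)

# Toy parameters (assumed): a small pool of integer "codewords" stands in for
# the randomly selected typical sequences of the Wyner-Ziv / secret-key set.
NUM_CODEWORDS = 1 << 12   # total codewords in the selected set
NUM_WZ_BINS = 1 << 5      # Wyner-Ziv bins: the bin index is sent over the channel
NUM_SK_BINS = 1 << 6      # secret-key bins: the bin index is declared as the key

codewords = list(range(NUM_CODEWORDS))

# Two random, independent partitions of the same codeword set (Section 4.1).
wz_bin = {c: random.randrange(NUM_WZ_BINS) for c in codewords}
sk_bin = {c: random.randrange(NUM_SK_BINS) for c in codewords}

def encode(chosen_codeword):
    """Encoder: the codeword jointly typical with the source determines both outputs."""
    key = sk_bin[chosen_codeword]              # secret key = secret-key bin index
    channel_message = wz_bin[chosen_codeword]  # channel message = Wyner-Ziv bin index
    return key, channel_message

def decode(channel_message, is_typical_with_v):
    """Decoder: recover the Wyner-Ziv bin from the channel, then resolve it.

    `is_typical_with_v(c)` is a hypothetical predicate standing in for the
    joint-typicality test between codeword c and the receiver's source sequence.
    """
    candidates = [c for c in codewords
                  if wz_bin[c] == channel_message and is_typical_with_v(c)]
    if len(candidates) != 1:
        return None                            # error event: no unique codeword
    return sk_bin[candidates[0]]

# Usage: when the typicality test singles out the transmitted codeword,
# the decoder recovers exactly the key the encoder declared.
c_star = random.choice(codewords)
key, msg = encode(c_star)
assert decode(msg, lambda c: c == c_star) == key
```

The point of the sketch is the structure: the same codeword is looked up in two independent partitions, one index is sent over the channel and the other is kept as the key.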

4.2 Encoding

Given a source sequence , the encoder produces a secret-key and a transmit sequence as shown in Fig. 5.

  • Find a sequence such that . Let be the event that no such exists.

  • Compute and . Declare as the secret-key.

  • Compute , and transmit this sequence over uses of the DMC.

4.3 Decoding

The main steps of decoding at the legitimate receiver are shown in Fig. 5 and described below.

  • Given a received sequence , the decoder looks for a unique index such that . An error event happens if is not the transmitted codeword.

  • Given the observed source sequence , the decoder then searches for a unique index such that . An error event is declared if a unique index does not exist.

  • The decoder computes and declares as the secret key.

4.4 Error Probability Analysis

The error event of interest is . We argue that selecting leads to .

In particular, note that . We argue that each of the terms vanishes with .

Recall that is the event that the encoder does not find a sequence in typical with . Since has sequences randomly and uniformly selected from the set , we have that .

Since the number of channel codewords equals , and the codewords are selected uniformly at random from the set , the error event .

Finally, since the number of sequences in each bin satisfies , joint typical decoding guarantees that .

4.5 Secrecy Analysis

In this section, we show that for the coding scheme discussed above, the equivocation at the eavesdropper is close (in an asymptotic sense) to .

First we establish some uniformity properties which will be used in the subsequent analysis.

4.5.1 Uniformity Properties

Our code construction satisfies some useful properties which will be used in the sequel.

Lemma 3

The random variable in Def. 4 satisfies the following relations

(27a)
(27b)
(27c)

where vanishes to zero as we take and for each .

Proof: Relations (27a) and (27b) are established below by using the properties of typical sequences (c.f. (24a)-(24c)). Relation (27c) follows from the secrecy analysis of the channel codebook when the message is . The details can be found in e.g., [19].

To establish (27a), define the function to identify the position of the sequence in a given bin i.e., and note that,

(28)
(29)
(30)
(31)

where (28) follows from the construction of the joint-typicality encoder, (29) from (24b) and (30) from (24a). Marginalizing (28), we have that

(32)

Eq. (27a) follows from (32) and the continuity of the entropy function. Furthermore, we have from (31) that

(33)

The relation (27b) follows by substituting (27a), since

(34)

 

Lemma 4

The construction of the secret-key codebook and Wyner-Ziv codebook is such that the eavesdropper can decode the sequence if it is revealed the secret-key in addition to its observed sequence . In particular

(35)

Proof: We show that there exists a decoding function such that as . In particular, the decoding function searches for the sequences in the bin associated with in the secret-key codebook whose bin index in the Wyner-Ziv codebook maps to a sequence jointly typical with the received sequence . More formally,

  • Given , the decoder constructs the set of indices