Composably secure time-frequency quantum key distribution

Nathan Walk, Jonathan Barrett, and Joshua Nunn
Department of Computer Science, University of Oxford, Wolfson Building, Parks Road, Oxford OX1 3QD, United Kingdom
Clarendon Laboratory, University of Oxford, Oxford OX1 3PU, United Kingdom
July 15, 2019

We present a composable security proof, valid against arbitrary attacks and including finite-size effects, for a high dimensional time-frequency quantum key distribution (TFQKD) protocol based upon spectrally entangled photons. Previous works have focused on TFQKD schemes because they combine the impressive loss tolerance of single-photon QKD with the large alphabets of continuous variable (CV) schemes, which enable the potential for more than one bit of secret key per transmission. However, the finite-size security of such schemes has only been proven under the assumption of collective Gaussian attacks. Here, by combining recent advances in entropic uncertainty relations for CVQKD with decoy-state analysis, we derive a composable security proof that predicts key rates on the order of Mbits/s over metropolitan distances (40 km or less) and maximum transmission distances of up to 140 km.

I Introduction

Arguably the most promising short term application of quantum information technology is in the field of cryptography, with quantum key distribution (QKD) the canonical example Bennett and Brassard (1984); Ekert (1991). In the years since its inception, researchers have worked to improve the rigour and generality of security proofs, design protocols that maximise performance and bridge the gap between theoretical proposal and experimental implementation Lo et al. (2014); Scarani et al. (2009). On the security side, one looks to derive a security proof that is composably secure against arbitrary eavesdropping attacks whilst including all finite-size statistical effects Renner (2005) (see also Tomamichel et al. (2012)). Practically, one searches for schemes that maximise both the raw clock-rate (the number of transmissions per second) and the number of secure bits per transmission to achieve the largest overall secret key rate at a given transmission distance.

Most photonic QKD implementations fall into one of two regimes. Traditional discrete variable (DV) schemes encode the secret key in a two-dimensional Hilbert space such as the polarisation degrees of freedom of a single photon. Extending from the original works Bennett and Brassard (1984); Ekert (1991), these protocols now enjoy universal security proofs Tomamichel et al. (2012) that function with reasonably small finite-size data blocks, and converge to the ideal Devetak-Winter rates for collective attacks Devetak and Winter (2005) in the asymptotic limit. Continuous variable (CV) schemes utilise an infinite-dimensional Hilbert space, commonly the quadratures of the optical field Reid (2000); Grosshans and Grangier (2002). Whilst the finite range and precision of real-life detectors ensures the key is never perfectly continuous, CVQKD nevertheless has the capability to achieve greater than one bit per transmission. Furthermore, composable, general, finite-size CVQKD security proofs have also appeared, although the present results either require extremely large block sizes Leverrier (2015), or are very sensitive to losses Furrer et al. (2012); Furrer (2014) and fail to converge to the Devetak-Winter rates.

This behaviour is in large part due to the different way loss manifests itself in DV and CV systems. If a single photon is sent through an extremely lossy channel, it will only be detected with very low probability. However, in the instances where a detection does take place, the quantum state is largely preserved and the security is unaffected. Therefore, one can in principle achieve high rates over lossy channels by improving the repetition rate of the photon source or multiplexing. But for coherent or squeezed states commonly used in CVQKD, the loss degrades the signal for all transmissions, rendering the information advantage so small that even modest experimental imperfections will eventually prohibit key extraction.

An alternative approach is to encode the key in the continuous degrees of freedom of single photons, inheriting both the loss tolerance of DVQKD and the larger encoding space of CV protocols Zhang et al. (2008). These time-frequency schemes are primarily pursued via the temporal and spectral correlations of single photons emitted during spontaneous parametric down conversion (SPDC), and the security stems from the conjugate nature of frequency and arrival-time measurements. One can use fast time-resolving detectors to directly measure photon arrival times and a grating spectrometer to measure frequency. It is also possible to adopt just the former detection scheme and convert to frequency measurements via dispersive optics Mower et al. (2013), or solely the latter and convert to time via phase modulation Nunn et al. (2013). Significant progress has been made on the theoretical Zhang et al. (2014) and experimental fronts Lee et al. (2014); Zhong et al. (2015); however, a general composable security proof is lacking. Exploiting techniques from traditional CVQKD Navascués et al. (2006); García-Patrón and Cerf (2006); Leverrier et al. (2010), security proofs have been derived against Gaussian collective attacks and extended to incorporate finite-size effects Lee et al. (2015) and decoy states Bunandar et al. (2015), culminating in a result including both Bao et al. ().

In this work we present a finite-size, composable security proof for TFQKD by combining the entropic uncertainty proofs for CVQKD Furrer et al. (2012) with efficient, finite-size decoy-state analysis Ma et al. (2005); Lim et al. (2014) for DVQKD. The resultant proofs allow high rates of key to be distributed over urban and inter-city distances with reasonable block sizes.

II Security Proof

II.1 Generic protocol

A fairly generic TFQKD decoy-state protocol can be summarised as follows.

  1. Quantum transmission and measurement: Quantum states are distributed from Alice to Bob through a potentially eavesdropper-controlled quantum channel. In particular, using a pulsed SPDC source she prepares time-frequency entangled photons. Each round of transmission is defined by a time frame of fixed duration, centred about the peak of each pump pulse. Alice randomly varies her pump power between three values, chosen according to preset probabilities. Immediately after the channel, we make the worst-case assumption, which is that Eve completely purifies the shared state, such that the overall tripartite state is pure. Alice and Bob then randomly switch between measuring the frequency or arrival time of the photons. They choose either the time or frequency measurement for key generation and use the other to check for an eavesdropper's presence. To analyse both possibilities, we will write the two incompatible observables as positive operator valued measurements (POVMs) for Alice and for Bob. Here we will always denote one as the key-generating observable and the other as the check.

  2. Parameter Estimation: Alice and Bob first announce their measurement choices in each round over a public, but authenticated, classical channel and discard all instances where they differ, as well as any instances where two or more detections occur in the same frame. This results in raw, correlated variables whose values are strings of a given length, distributed according to a joint probability distribution, and similarly for the check observables. Throughout, we will use uppercase to denote random variables and lowercase to denote a corresponding string that is an instantiation of that variable. Alice then announces which intensity was used in each transmission and the results are further partitioned into substrings, one per intensity, and similarly for the other strings. Using the number of detections for each pump power and decoy-state analysis, Alice and Bob lower bound the number of signals that originated from a single-photon transmission. They then announce all outcomes for the check observables and evaluate the quality of their correlations. If the quality is sufficiently high (in a way we will make precise later) they proceed, otherwise they abort; call this the passing probability. Conditional on passing, they are left with raw keys which are partially correlated between Alice and Bob as well as the eavesdropper. The overall conditional state between Alice, Bob and Eve is a classical-quantum state of the form,

  3. Reconciliation: Either Alice or Bob is designated the reference partner, which means that their string is designated as the ‘correct’ string. The reference partner then sends information to the other party to correct any errors between the two strings. If the reference partner is Alice, so that the reconciliation information flows in the same direction as the quantum transmission, this is called direct reconciliation (DR). The converse is called reverse reconciliation (RR). Here we will consider the DR case. If the reconciliation is successful, Alice and Bob will now have perfectly correlated strings which are still partially known to Eve. In fact, Eve will usually have learned some more information about the strings during the reconciliation process. The amount of ‘leaked’ information is denoted leak_EC. There is also an additional loss from a reconciliation check procedure, where Alice announces a further short hash string to ensure the strings are identical except with some small failure probability.

  4. Privacy Amplification: Alice and Bob now apply a function, f, drawn randomly from a family, F, of two-universal hashing functions to their measurement strings. The final state is now


    This ideally results in strings of length ℓ which are perfectly correlated, uniformly random, and completely independent of Eve. These are the final secret keys. The goal of a security analysis is to find a lower bound on the number of extractable bits, ℓ, for any given protocol.
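The classical sifting and partitioning in step 2 can be sketched in a few lines of code. This is our own illustration of the bookkeeping, not part of the protocol specification; all names and the toy data are invented.

```python
# Illustrative sketch of the sifting step: keep only rounds where Alice
# and Bob chose the same basis and exactly one detection occurred, then
# partition the surviving outcomes by Alice's pump intensity.

def sift(rounds):
    """rounds: list of dicts with keys 'basis_a', 'basis_b' ('time' or
    'freq'), 'detections' (count in the frame), 'intensity', 'outcome'."""
    kept = [r for r in rounds
            if r['basis_a'] == r['basis_b'] and r['detections'] == 1]
    by_intensity = {}
    for r in kept:
        by_intensity.setdefault(r['intensity'], []).append(r['outcome'])
    return by_intensity

rounds = [
    {'basis_a': 'time', 'basis_b': 'time', 'detections': 1, 'intensity': 'mu1', 'outcome': 3},
    {'basis_a': 'time', 'basis_b': 'freq', 'detections': 1, 'intensity': 'mu1', 'outcome': 5},  # basis mismatch
    {'basis_a': 'freq', 'basis_b': 'freq', 'detections': 2, 'intensity': 'mu2', 'outcome': 1},  # double detection
    {'basis_a': 'freq', 'basis_b': 'freq', 'detections': 1, 'intensity': 'mu2', 'outcome': 7},
]
print(sift(rounds))  # only the first and last rounds survive
```

The per-intensity substrings produced here are exactly what the decoy-state analysis below consumes.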

II.2 Composable security

We now formally state the definitions of composable security and a formalism to quantitatively relax from the ideal case Renner (2005); Tomamichel et al. (2012).

Definition 1

A protocol that outputs a state of the form (4) is

  • ε_c-correct if the probability that Alice's and Bob's final keys differ is at most ε_c, and correct if the condition holds for ε_c = 0.

  • ε_s-secret if

    (1/2) ‖ρ_SE − τ_S ⊗ ρ_E‖₁ ≤ ε_s,

    where ‖·‖₁ is the trace norm and τ_S is the uniform (i.e. maximally mixed) state over the key alphabet S. It is secret if the condition holds for ε_s = 0.

The protocol is ideal if it is both correct and secret, and ε-secure if it is ε-indistinguishable from an ideal protocol. This means that there is no device or procedure that can distinguish between the actual protocol and an ideal protocol with probability higher than ε. If the protocol is ε_s-secret and ε_c-correct then it is ε-secure for any ε ≥ ε_s + ε_c.

The choice of error reconciliation fixes ε_c, so the goal is now to find a method to bound ε_s. First, we briefly introduce the entropic quantities appropriate for finite-size analysis. For a random variable X coupled to a quantum system E associated with a Hilbert space H_E, with the joint system described by a classical-quantum state ρ_XE, the conditional min-entropy of X given E can be defined as the negative logarithm of the optimal probability of successfully guessing X given E Konig et al. (2009), that is,

H_min(X|E) = −log p_guess(X|E),   p_guess(X|E) = sup_{{E_x}} Σ_x P_X(x) tr(E_x ρ_E^x),

where the supremum is taken over all POVMs {E_x} and the logarithm here and throughout is taken to be base 2. A related quantity is the conditional max-entropy

H_max(X|E) = 2 log sup_{σ_E} F(ρ_XE, 1_X ⊗ σ_E),

where F(ρ, σ) = ‖√ρ √σ‖₁ is the quantum fidelity and the supremum is over all physical states on H_E, that is σ_E ≥ 0 with tr σ_E = 1. One can also define smoothed versions of these quantities that consider ε-regions in the state space. Concretely we have,

H_min^ε(X|E) = sup_{ρ̃} H_min(X|E)_{ρ̃},   H_max^ε(X|E) = inf_{ρ̃} H_max(X|E)_{ρ̃},

where the supremum and infimum are taken over all states ρ̃ that are ε-close to ρ in the purified distance, defined as P(ρ, ρ̃) := √(1 − F(ρ, ρ̃)²). We again emphasise that throughout this work we will be considering the classical-quantum states conditioned on the parameter estimation test having been passed. For the rest of this work we will suppress the state subscript in the entropies.
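For purely classical X with trivial side information, the guessing probability is just the largest outcome probability, and the max-entropy reduces to the Rényi entropy of order 1/2. A quick numerical illustration (the distribution is made up):

```python
import math

def h_min(p):
    # with no side information, the optimal guess is the most likely outcome
    return -math.log2(max(p))

def h_max(p):
    # for trivial side information the max-entropy is the Renyi entropy of
    # order 1/2: 2*log2 of the sum of square-root probabilities
    return 2 * math.log2(sum(math.sqrt(x) for x in p))

p = [0.5, 0.25, 0.25]
print(h_min(p))  # 1.0
print(h_max(p))  # ~1.54, always >= h_min
```

The gap between the two quantifies how far the distribution is from uniform, which is exactly what the smoothing absorbs in the finite-size analysis.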

If the guessing probability is low then the variable X must have a high degree of randomness with respect to an observer holding E. Intuitively then, we might expect the conditional smooth min-entropy to be related to the number of secret bits extractable from X with failure probability ε as described in Definition 1. This intuition is usefully formalised in the Leftover Hash Lemma (with quantum side information) Tomamichel et al. (2011a); Berta et al. (2011).

Lemma 1

Let ρ be a state of the form (2) where X is defined over a discrete-valued and finite alphabet, E is a finite or infinite dimensional system and C is a register containing the classical information learnt by Eve during information reconciliation. If Alice applies a hashing function, drawn at random from a family of two-universal hash functions (a family F of functions from a finite set X to a finite set Z is two-universal if, for any distinct x ≠ x′, the probability over a uniformly random f ∈ F that f(x) = f(x′) is at most 1/|Z|), that maps X to a string of length ℓ, then


where the conditional smooth min-entropy on the right-hand side is that of the raw measurement data given Eve's quantum system and the information reconciliation leakage.
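A two-universal family commonly used in QKD implementations is the set of binary Toeplitz matrices: multiplying the raw bit string by a random ℓ×n Toeplitz matrix over GF(2) is a two-universal hash. The sketch below is our own illustration of that construction, not the paper's specification.

```python
import random

def toeplitz_hash(bits, ell, seed):
    """Hash an n-bit string down to ell bits with a random binary Toeplitz
    matrix, fully specified by n + ell - 1 random bits (its first row and
    first column), drawn deterministically from `seed`."""
    n = len(bits)
    rng = random.Random(seed)
    diag = [rng.randrange(2) for _ in range(n + ell - 1)]
    # entry (i, j) of the Toeplitz matrix is diag[i - j + n - 1]
    return [sum(diag[i - j + n - 1] & bits[j] for j in range(n)) % 2
            for i in range(ell)]

raw = [1, 0, 1, 1, 0, 0, 1, 0]
key = toeplitz_hash(raw, 3, seed=42)
print(key)  # a 3-bit hashed key
```

In practice the seed is public randomness shared by Alice and Bob, so both parties compress to the same final key while Eve's information is diluted as per the lemma.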

Comparing (6) and (14), we see that with an appropriate choice of ℓ we can ensure the security condition is met. In particular, we see that the smooth min-entropy is a lower bound on the extractable key length. Suppose that we are only able to bound the smooth min-entropy with a certain probability (in this work this will be due to the use of Hoeffding's bound in the decoy-state analysis). To get a more exact expression, notice that if we choose


for some ε_PA > 0, then the r.h.s. of (14) is at most ε_PA. Then, provided


the convexity and boundedness of the trace distance implies we will satisfy (6) for any secrecy parameter ε_s. Recalling that by assumption Eve learns at most leak_EC bits during information reconciliation, we have that,


Finally, combining these bounds, we have the following result Tomamichel et al. (2012); Furrer et al. (2012)

Theorem 1

Let ρ describe the state between Alice and Eve conditioned on the parameter estimation test succeeding, such that the Leftover Hash lemma is applicable. For an error correction scheme as defined above we may extract an ε_c-correct and ε_s-secret key of length


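In the form used by Furrer et al. (2012), the extractable length is essentially the smooth min-entropy bound minus the reconciliation leakage minus a security penalty of order log(1/ε). The sketch below assembles these terms numerically; the exact constants in the penalty and all symbol names are our own illustrative choices, not the theorem's precise statement.

```python
import math

def key_length(h_min_bound, leak_ec, eps_pa, eps_hash):
    """Hedged sketch of the extractable key length: a bound on the smooth
    min-entropy, minus error-correction leakage, minus a Leftover-Hash
    security penalty of order log(1/eps). Constants are illustrative."""
    penalty = 2 * math.log2(1.0 / (2 * eps_pa)) + math.log2(2.0 / eps_hash**2)
    return max(0, math.floor(h_min_bound - leak_ec - penalty))

# illustrative numbers: 5e5 bits of min-entropy, 2e5 bits leaked
print(key_length(h_min_bound=5e5, leak_ec=2e5, eps_pa=1e-10, eps_hash=1e-10))
```

The key message is that for realistic block sizes the log(1/ε) penalty is of order a hundred bits, negligible next to the entropy and leakage terms.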
So the problem has essentially condensed to bounding the conditional smooth min-entropy. The central idea is to quantify the smooth min-entropy in one observable by observing the statistics of another, incompatible, observable. This is nothing more than a manifestation of Heisenberg's uncertainty principle, which has long underpinned quantum cryptographic protocols. Specifically, this notion is quantitatively expressed via an uncertainty relation for the smooth min- and max-entropies Tomamichel and Renner (2011) and its extension to the infinite dimensional setting in Berta et al. (2011); Furrer et al. (2014). These relations can be formulated as follows Tomamichel et al. (2011b); Furrer et al. (2012). Let ρ_ABC be an n-mode state shared between Alice, Bob and Charlie, and let Alice's measurements be described by POVMs X and Z with elements {E_x} and {F_z} respectively. Let X^n be the random variable describing the measurement outcome and ρ_{X^n C} be the joint state of the measurement register and the C system given that Alice measured X on each of the n modes. Further, let Z^n describe the measurement outcome and ρ_{Z^n B} be the joint state of the measurement register and the B system given the counterfactual scenario where Alice instead measured Z upon each mode. The sum of the corresponding smooth entropies satisfies the relation


where c = max_{x,z} ‖√E_x √F_z‖_∞² quantifies the compatibility of the measurements, with ‖·‖_∞ the operator norm, i.e. the largest singular value.

We now turn to our specific measurement setup, where we identify the conjugate measurements with arrival time and frequency.

II.3 Time-frequency measurement uncertainty relation

Following Delgado and Muga (1997); Zhang et al. (2014) we describe the arrival time and conjugate frequency detuning measurements by the following operators,


where the integrals run over time and frequency respectively. If we restrict the field operators to the Hilbert space spanned by the single-photon time or frequency domain states, then the arrival-time and detuning operators act as a conjugate pair on the single-photon subspace, so that we can write,


These operators can be shown to be maximally complementary, self-adjoint operators describing arrival-time and frequency measurements that satisfy the canonical commutation relation, and hence can be considered equivalent to the canonical position and momentum operators Delgado and Muga (1997).

Fortunately, the smooth min-entropy uncertainty relations have recently been extended to allow for observables and eavesdroppers living in infinite dimensional Hilbert spaces Furrer et al. (2011, 2012, 2014). However, only in the instances where Alice's source emitted exactly one photon will the POVMs be restricted as per (28) and result in a useful uncertainty relation. To this end, consider the restriction of the POVM to the single-photon subspace, such that it is described as per (28). We can now consider the decomposition of the measurement record into variables describing the single-photon, vacuum and multi-photon components. In order to apply the uncertainty relation directly, we consider the case where Eve is assumed to know the multi-photon and vacuum measurement outcomes and is left solely with estimating the single-photon components. The following section explains how to relate the entropy of the full record to that of the single-photon components, and also how to estimate the number of single-photon events in a given set of detections. Even though Alice never knows in a given run how many photons are emitted, the number of single-photon events in a collection of runs can be bounded via decoy-state analysis, which involves using states with known average photon numbers. For now we turn to computing the overlap c for measurements described by (28).

In fact, Alice and Bob actually measure coarse grained, finite versions of these measurements. This is a practical necessity in ordinary CVQKD (all homodyne measurements have a finite precision and dynamic range) and in this case, measuring precisely an arrival time operator as defined in (28) would require a detector that has been turned on in the infinite past. Furthermore, a finite alphabet is necessary in order to apply the leftover hash lemma. In standard CVQKD the quadrature observables can usually be treated symmetrically. In this work we must consider the conjugate observables individually, partly because in practice they have different achievable measurement resolutions and partly because they are physically different quantities. For instance, for arrival time measurements the maximum value is equal to the time frame duration for each measurement round, which in turn puts immediate limits on the maximum overall clock rate of the protocol.

Alice's measurements are divided into evenly spaced bins of width δ up to a maximum absolute value, such that the resulting alphabet sizes are assumed to be integers for simplicity. We can write binned observables corresponding to intervals on the real line, and the measurement outcome range is then a finite, discrete alphabet. Thus the POVM elements of the binned measurement are the projectors in (28) integrated over the bin intervals,


and similarly for the frequency measurements. Notice that this is something of a problem, as the two infinite end intervals of these binned measurements actually have a large overlap. In fact, the overlap of the end intervals approaches unity, which would mean that for these particular measurements the r.h.s. of (24) is approximately zero and the relation becomes useless.
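Concretely, the coarse graining amounts to histogramming outcomes into width-δ bins with two unbounded end bins collecting everything beyond the cutoff. A minimal sketch, with invented bin parameters and data:

```python
def discretise(value, delta, m_max):
    """Map a continuous outcome to one of 2*m_max bins of width delta;
    outcomes beyond +/- m_max*delta fall into the two (formally infinite)
    end bins, indexed -m_max and m_max - 1."""
    b = int(value // delta)                  # floor to a bin index
    return max(-m_max, min(m_max - 1, b))    # clamp into the end bins

# arrival times (illustrative units) binned with delta = 0.5 over [-2, 2)
times = [-3.1, -0.7, 0.2, 1.9, 2.6]
print([discretise(t, 0.5, 4) for t in times])  # [-4, -2, 0, 3, 3]
```

The first and last samples land in the end bins, which is exactly the overlap problem described above: very different physical outcomes become indistinguishable there.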

To avoid this problem, instead consider a second, hypothetical set of discrete measurements which are defined as per (30) but over a new interval set, which is simply the infinite collection of intervals of width δ covering the whole real line. For these measurements the maximum overlap is given by Furrer et al. (2012),


where S_0^{(1)} is the radial prolate spheroidal wavefunction of the first kind. Thus, for sufficiently small bin sizes, we can always recover a nontrivial value of c and thus a useful uncertainty relation. The idea is that, for a state that mostly lives in the phase-space region spanned by the finite measurement ranges, the classical-quantum states after Alice applies the actual and the hypothetical discretised measurements will be very close. We will use our knowledge of Alice's state preparation to quantify this ‘closeness’. In particular, we will assume that for all states used in the protocol Alice's source produces a tensor product state. Moreover, our knowledge of Alice's state allows us to lower bound the probability of measuring a value within the finite range on any given run such that,


This in turn means that the probability of measuring an absolute value larger than the cutoff at any point in the whole protocol, given the parameter test was passed, is bounded by a quantity of the form,


and a similar relation holds for the conjugate measurements.

We then finally have a relation between the entropies of the two discretised measurements conditional on a quantum system, namely Furrer et al. (2012)




(recall that the scripted variable denotes the hypothetical situation where the infinite-range discretised measurement was performed on the key-generating modes instead). Putting all this together with the uncertainty relation (24) finally allows us to write,


where n_1 is the number of instances where Alice and Bob measured in the same basis and only a single photon was created. In reality, however, the measurement record will also include contributions from vacuum and multi-photon terms, so we will need a way to determine a lower bound on the min-entropy of the whole string in terms of the single-photon contribution, so that we can apply (42). We will also require a lower bound on n_1 and an upper bound upon the max-entropy term based upon the correlations in the measurements of the check observables. Fortunately, all of these can be achieved via decoy-state analysis.

II.4 Decoy state analysis

We employ the decoy-state analysis of Lim et al. (2014), which we recapitulate in our notation for completeness. Recall the decomposition of the measurements into vacuum, single-photon and multi-photon components. Applying a generalisation of the chain rule for smooth entropies Vitanov et al. (2013) gives,


for suitably chosen smoothing parameters. Applying the same chain rule to the second term on the r.h.s. gives,


where n_0 is the number of key-basis measurements that resulted when the source produced a vacuum state. In the second inequality we have used that the min-entropy of the multi-photon contribution vanishes, which is equivalent to assuming all multi-photon events are insecure, and also that the vacuum contribution is maximal, where the inequality is true by definition and the final equality comes from assuming that vacuum contributions are uncorrelated with the chosen bit values and uniformly distributed across the measurement range. Note that since the corresponding smoothing parameters no longer feature directly, we can set them arbitrarily small and neglect them in further calculations. Putting this together gives,


which we can now bound according to (42) to get


Now, we also need to derive lower bounds upon the number of vacuum and single-photon contributions. Recall that in the protocol, Alice probabilistically selects a pump power μ_k with probability p_k, which in turn probabilistically results in an n-photon state with conditional probability


assuming a Poissonian source. Although we cannot directly know how many detections are due to a particular photon-number emission, we do know how many detections are due to a particular pump power. The main idea of a decoy-state analysis is to use the latter information to place bounds on the former. Following Ma et al. (2005); Lim et al. (2014), we first note that from the eavesdropper's perspective it could just as well be a counterfactual scenario where Alice instead creates n-photon states and merely probabilistically partitions them so that each subset has the appropriate mean photon number. Indeed, Bayes' rule allows us to write down the probability of pump power μ_k given an n-photon emission as,




where τ_n is the total probability of an n-photon emission. Note that technically all of these probabilities should also be conditioned on the parameter estimation test passing. However, when considering the key basis Alice can be sure that this conditioning will make no difference. To see this, consider the counterfactual case where she prepares n-photon states. By simply not assigning values in the key basis until after the parameter test on the check basis is completed, she can ensure that probabilities like (52) are unchanged by conditioning. In the asymptotic limit of large statistics, (52) allows us to relate the number of coincidences given a certain pump power to the number given an n-photon emission via


where the starred quantity is the asymptotic value of the observed count and we have substituted in from (52) and (50). We can then use Hoeffding's inequality for independent events, which says that the difference between observed statistics and their asymptotic values is bounded by


and hence the asymptotic count is sandwiched as n_k^- ≤ n_k^* ≤ n_k^+, where,


with probability at least 1 − 2ε₁, where ε₁ is the failure probability of each application of Hoeffding's bound. Now consider the following expression:

Notice that in the above expression the summand vanishes when n = 1. This means we can split up the sum as,


where the inequality holds provided μ₂ > μ₃. Rearranging gives a lower bound on the vacuum coincidences,


which holds with probability at least 1 − 2ε₁.
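Numerically, the Hoeffding correction and the vacuum bound can be assembled as follows. This sketch uses the standard two-decoy vacuum bound of Lim et al. (2014) in our own hedged notation; all intensities, probabilities and counts are illustrative.

```python
import math

def hoeffding_delta(n_total, eps):
    # deviation term from Hoeffding's inequality for n_total trials
    return math.sqrt(0.5 * n_total * math.log(1.0 / eps))

def tau(n, mus, ps):
    # total probability of an n-photon emission for a Poissonian source
    return sum(p * math.exp(-mu) * mu**n / math.factorial(n)
               for mu, p in zip(mus, ps))

def vacuum_lower_bound(n2, n3, mus, ps, n_total, eps):
    """Two-decoy vacuum bound (Lim et al. 2014):
    s0 >= tau0 * (mu2 * n3^- - mu3 * n2^+) / (mu2 - mu3),
    with n_k^+/- = (e^mu_k / p_k)(n_k +/- delta) the Hoeffding-corrected,
    rescaled per-intensity counts. Numbers here are illustrative."""
    mu1, mu2, mu3 = mus
    d = hoeffding_delta(n_total, eps)
    n2_plus = (math.exp(mu2) / ps[1]) * (n2 + d)
    n3_minus = (math.exp(mu3) / ps[2]) * max(n3 - d, 0.0)
    s0 = tau(0, mus, ps) * (mu2 * n3_minus - mu3 * n2_plus) / (mu2 - mu3)
    return max(s0, 0.0)

mus, ps = (0.5, 0.1, 0.01), (0.6, 0.3, 0.1)
s0 = vacuum_lower_bound(n2=5e4, n3=1e4, mus=mus, ps=ps, n_total=1e6, eps=1e-10)
print(s0)  # strictly positive for these illustrative numbers
```

Note how the finite-size correction d enters with opposite signs for the two intensities, always in the pessimistic direction.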

The single photon bound is somewhat more involved. First, by similar reasoning as above, we have:


since now the n = 0 term vanishes. Now, using an elementary bound on the difference of powers, we have


which, combined with the preceding inequality, gives


where the second-to-last equality results in a tighter bound when we apply the condition μ₁ > μ₂ + μ₃ to obtain the last inequality. Substituting back in (67) yields:


Rewriting the sum as


and substituting back into (73), we can solve for the single-photon count and, using the Hoeffding bounds, arrive at the following lower bound for the single-photon detections:


which holds with a probability given by a union bound over the constituent Hoeffding estimates.
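The single-photon bound can likewise be evaluated numerically. Below is the three-intensity bound of Lim et al. (2014) in hedged, illustrative notation, assuming μ₁ > μ₂ + μ₃ and μ₂ > μ₃ as required by the derivation above.

```python
import math

def hoeffding_delta(n_total, eps):
    return math.sqrt(0.5 * n_total * math.log(1.0 / eps))

def single_photon_lower_bound(counts, mus, ps, tau0, tau1, s0_low, n_total, eps):
    """Three-intensity single-photon bound (Lim et al. 2014):
    s1 >= tau1*mu1*( n2^- - n3^+ - (mu2^2 - mu3^2)/mu1^2 * (n1^+ - s0/tau0) )
          / ( mu1*(mu2 - mu3) - mu2^2 + mu3^2 ),
    valid when mu1 > mu2 + mu3 and mu2 > mu3. All inputs illustrative."""
    (n1, n2, n3), (mu1, mu2, mu3), (p1, p2, p3) = counts, mus, ps
    assert mu1 > mu2 + mu3 and mu2 > mu3
    d = hoeffding_delta(n_total, eps)
    n1_plus = (math.exp(mu1) / p1) * (n1 + d)
    n2_minus = (math.exp(mu2) / p2) * max(n2 - d, 0.0)
    n3_plus = (math.exp(mu3) / p3) * (n3 + d)
    num = n2_minus - n3_plus - ((mu2**2 - mu3**2) / mu1**2) * (n1_plus - s0_low / tau0)
    den = mu1 * (mu2 - mu3) - mu2**2 + mu3**2
    return max(tau1 * mu1 * num / den, 0.0)

s1 = single_photon_lower_bound(counts=(3e5, 5e4, 1e4), mus=(0.5, 0.1, 0.01),
                               ps=(0.6, 0.3, 0.1), tau0=0.73, tau1=0.21,
                               s0_low=3e4, n_total=1e6, eps=1e-10)
print(s1)
```

The vacuum bound feeds in through `s0_low`, which is why it is derived first.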

Now the only unbounded term in the key rate formula is the max-entropy term. Firstly, by the data processing inequality, conditioning on Bob's classical measurement record rather than his quantum system can only increase the max-entropy. We again use the results of Furrer et al. (2012), where a statistical bound on the smooth max-entropy over a classical probability distribution is found based on the observed correlations. Alice and Bob quantify the correlations by computing the average distance (essentially the Hamming distance, but for non-binary strings), which for two strings x and y taking values in the discretised measurement alphabet is defined as:


In order to bound the max-entropy we proceed in three steps. Firstly, we use decoy-state arguments to upper bound the average distance on just the single-photon terms. Then, following Furrer et al. (2012), we use this upper bound and a result by Serfling Serfling (1974) to upper bound the average distance that could be observed on the counterfactual variables. Finally, we use this quantity to upper bound the smooth max-entropy.
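The average distance in (79) is straightforward to compute; for binary strings it reduces to the Hamming distance divided by the string length. A small example with invented binned records:

```python
def avg_distance(x, y):
    """Average absolute bin difference between two equal-length strings
    of discretised measurement outcomes (eq. (79))."""
    assert len(x) == len(y)
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

# binned arrival-time records (illustrative)
alice = [3, 5, 2, 8, 1]
bob   = [3, 6, 2, 8, 0]
print(avg_distance(alice, bob))  # 0.4
```

Unlike a bit-error rate, this metric weights a two-bin discrepancy twice as heavily as a one-bin discrepancy, reflecting the metric structure of the time-frequency alphabet.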

The quantity in (79) is just counting the number of bins separating Alice and Bob's measurements. Considering the substring corresponding to pump power μ_k, in the asymptotic limit we expect the observed errors to decompose over photon numbers as


where the n-photon term is the number of errors in the check basis resulting from n-photon states. Just as when we were bounding the number of single-photon terms, we can use Hoeffding's result to bound the difference between this unknown asymptotic quantity and the observed value,


except with some probability, where the Hoeffding deviation is now rescaled to account for the non-binary nature of entries in the error strings. Hence we expect in the asymptotic limit to have


Rearranging gives,


with a probability calculated in the same manner as (77). Now, say that Alice and Bob abort the protocol whenever the observed average distance exceeds a preset tolerance.

Now, we again consider bounding the counterfactual average distance