Cryptographic security of quantum key distribution


Christopher Portmann, Institute for Theoretical Physics, ETH Zurich, 8093 Zurich, Switzerland. Renato Renner, Institute for Theoretical Physics, ETH Zurich, 8093 Zurich, Switzerland.
August 19, 2019

This work is intended as an introduction to cryptographic security and a motivation for the widely used Quantum Key Distribution (QKD) security definition. We review the notion of security necessary for a protocol to be usable in a larger cryptographic context, i.e., for it to remain secure when composed with other secure protocols. We then derive the corresponding security criterion for QKD. We provide several examples of QKD composed in sequence and parallel with different cryptographic schemes to illustrate how the error of a composed protocol is the sum of the errors of the individual protocols. We also discuss the operational interpretations of the distance metric used to quantify these errors.



1 Introduction

1.1 Background

The first Quantum Key Distribution (QKD) protocols were proposed independently by Bennett and Brassard [BB84] in 1984 – inspired by early work on quantum money by Wiesner [Wie83] – and by Ekert [Eke91] in 1991. The original papers discussed security in the presence of an eavesdropper that could perform only limited operations on the quantum channel. The first security proofs that considered an unbounded adversary were given more than a decade later [May96, BBB00, SP00, May01, BBB06]. Another decade after the first such proof, König et al. [KRBM07] showed that the security criterion used was insufficient: even though it guarantees that an eavesdropper cannot guess the key, this only holds if the key is never used. If part of the key is revealed to the eavesdropper – for example, by using it to encrypt a message known to her – the rest becomes insecure. A new security criterion for QKD was introduced, along with a new proof of security for BB84 [RK05, BHL05, Ren05]. It was argued that $\rho_{KE}$, the joint state of the final key ($K$) and quantum information gathered by an eavesdropper ($E$), must be close to an ideal key, $\tau_K$, that is perfectly uniform and independent from the adversary's information $\rho_E$:


$(1 - p_{\text{abort}})\, D\big(\rho_{KE},\ \tau_K \otimes \rho_E\big) \leq \varepsilon,$

where $p_{\text{abort}}$ is the probability that the protocol aborts,111In [Ren05], \eqnrefeq:d was introduced with a subnormalized state $\tilde{\rho}_{KE}$, with $\operatorname{tr} \tilde{\rho}_{KE} = 1 - p_{\text{abort}}$, instead of explicitly writing the factor $(1 - p_{\text{abort}})$. The two formulations are however mathematically equivalent. $D(\cdot,\cdot)$ is the trace distance222This metric is defined and discussed in detail in \appendixrefapp:op. and $\varepsilon$ is a (small) real number.333Another formulation of this security criterion has also been proposed in the literature. We discuss this alternative in \appendixrefapp:alternative.
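When both the key and the eavesdropper's information are classical, the trace distance in \eqnrefeq:d reduces to the total variation distance, and the secrecy term can be evaluated directly. The following sketch computes it for a made-up toy distribution (all numbers, including $p_{\text{abort}}$, are purely illustrative and do not come from any actual protocol):

```python
# Toy evaluation of the secrecy criterion for classical K, E:
# the trace distance D reduces to the total variation distance.

def trace_distance_classical(p, q):
    """D(p, q) = 0.5 * sum_x |p(x) - q(x)| for classical distributions."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0) - q.get(x, 0)) for x in support)

# Joint distribution rho_KE of a slightly imperfect 1-bit key K and Eve's bit E.
rho_KE = {(0, 0): 0.26, (0, 1): 0.25, (1, 0): 0.24, (1, 1): 0.25}

# Ideal state tau_K (x) rho_E: uniform key, independent of Eve's marginal.
rho_E = {e: sum(v for (_, e2), v in rho_KE.items() if e2 == e) for e in (0, 1)}
ideal = {(k, e): 0.5 * rho_E[e] for k in (0, 1) for e in (0, 1)}

p_abort = 0.1  # illustrative abort probability
secrecy = (1 - p_abort) * trace_distance_classical(rho_KE, ideal)
print(secrecy)  # the key is epsilon-secret for any epsilon >= this value
```
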

The type of security flaw suffered by the early QKD security criteria is well known in classical cryptography. It was addressed independently by Pfitzmann and Waidner [PW00, PW01, BPW04, BPW07] and Canetti [Can01, CDPW07, Can13], who introduced general frameworks to define cryptographic security, which they dubbed reactive simulatability and universal composability, respectively. These frameworks were adapted to quantum cryptography by Ben-Or and Mayers [BM04] and Unruh [Unr04, Unr10], and the security of QKD was discussed within these frameworks by Ben-Or et al. [BHL05] and Müller-Quade and Renner [MQR09]. Recently, Maurer and Renner [MR11] introduced a new cryptographic security framework, Abstract Cryptography (AC), which both simplifies and generalizes previous frameworks, and applies equally to the classical and quantum settings.

The core idea of all these security frameworks is to prove that the functionality constructed by the real protocol is indistinguishable from the functionality of an ideal resource that fulfills in a perfect way whatever task is expected of the cryptographic protocol – in the case of QKD, this ideal resource provides the two players with a perfect key, unknown to the adversary. If this ideal system is indistinguishable from the real one, then one can be substituted for the other in any context. Players who run a QKD protocol can thus treat the resulting key as if it were perfect, which trivially implies that it can be safely used and composed arbitrarily with other (secure) protocols.

1.2 Contributions

Since the security criterion of \eqnrefeq:d provides the aforementioned compositional guarantees, it is widely used in the QKD literature and generally introduced as the correct security definition (see, e.g., the QKD review paper [SBPC09]). A more detailed explanation as to why this is the case is however usually omitted, owing to the highly involved security frameworks. Even the technical works [RK05, BHL05, Ren05, MQR09] that introduced and discuss \eqnrefeq:d do not provide a self-contained justification of this security notion. The current paper aims to fill this gap by revisiting the security of QKD using the AC framework.

Our goals are twofold. Firstly, we provide an introduction to cryptographic security. We do not discuss the AC framework in detail, but explain the main ideas underlying cryptographic security and illustrate protocol composition with many examples. Secondly, we use this framework to show how \eqnrefeq:d can be derived. We also provide in \appendixrefapp:op an extensive discussion of the interpretation and operational meaning of the trace distance used in \eqnrefeq:d.

1.3 Abstract cryptography

The traditional approach to defining security [PW00, PW01, Can01] can be seen as bottom-up. One first defines (at a low level) a computational model (e.g., a Turing machine). One then defines how the machines communicate (e.g., by writing to and reading from shared tapes) and some form of scheduling. Next, one can define notions of complexity and efficiency. Finally, the security of a cryptosystem can be defined.

Abstract cryptography (AC) on the other hand uses a top-down approach. In order to state definitions and develop a theory, one starts from the other end, the highest possible level of abstraction – the composition of abstract systems – and proceeds downwards, introducing in each new lower level only the minimal necessary specializations. The (in)distinguishability of the real and ideal systems is defined as a metric on abstract systems, which, at a lower level, can be chosen to capture the distinguishing power of a computationally bounded or unbounded environment. The abstract systems are instantiated with, e.g., a synchronous or asynchronous network of (abstract) machines. These machines can be instantiated with either classical or quantum processes.

An analogy can be drawn with group theory as used to describe matrix multiplication. In the bottom-up approach, one would first explain how matrices are multiplied, and then derive properties of matrix multiplication from this definition. The top-down approach would instead first define the (abstract) group and prove theorems already at this level; matrix multiplication would then be introduced as a special case of a group. This greatly simplifies the framework by avoiding unnecessary specifics from lower levels, and does not hard-code a computation or communication model (e.g., classical or quantum, synchronous or asynchronous) in the security framework.

1.4 Structure of this paper

In \secrefsec:ac we start by introducing a simplified version of the AC framework [MR11], which is sufficient for the specific adversarial structure relevant to QKD, namely honest Alice and Bob, and dishonest Eve. In \secrefsec:qkd we model the real and ideal systems of a generic QKD protocol, and plug it in the AC security framework, obtaining a security definition for QKD. In \secrefsec:security we then prove that this can be reduced to \eqnrefeq:d.444More precisely, the security definition of QKD is reduced to a combination of two criteria, secrecy (captured by \eqnrefeq:d) and correctness. In \secrefsec:ex we illustrate the composition of protocols in AC with examples of QKD composed in various settings. We emphasize that this section does not prove that the QKD security criterion is composable – the proof of this follows from the generic proof that the AC framework is composable [MR11] – but illustrates how the security of composed protocols results from the security of individual protocols and the triangle inequality. Further examples can be found in \appendixrefapp:ex.auth, where we model the security of authentication and compose it with QKD, resulting in a key expansion protocol. We also provide a substantial review of the trace distance and its operational interpretations in \appendixrefapp:op. In particular, we prove that it corresponds to the probability a distinguisher has of correctly guessing whether it is interacting with the real or ideal QKD system – the measure used in the AC framework – and discuss how to interpret this. An overview of the other appendices is given at the beginning of the appendix.

2 Cryptographic security

A central element in modeling security is that of resources – resources used in protocols and resources constructed by protocols. For example, a QKD protocol constructs a functionality which shares a secret key between two players. This functionality is a resource, which can be used by other protocols, e.g., to encrypt a message. To construct this secret key resource, a QKD protocol typically uses two other resources, an authentic classical channel555An authentic channel guarantees that the message received comes from the legitimate sender, and has not been tampered with or generated by an adversary. and an insecure quantum channel. The authentic channel resource can in turn be constructed from an insecure channel resource and a password\footnoterememberfn:passwordA short key with min-entropy linear in the key length is sufficient for authentication [RW03]. We refer to such a weak key as a password. Composing the authentication protocol with the QKD protocol results in a scheme which constructs a secret key from a password and insecure channels. Part of the resulting secret key can be used in further rounds of authentication and QKD to produce even more secret key. This is illustrated in \figreffig:construction.

[Figure: a short password and an insecure classical channel are used to construct an authentic channel; the authentic channel and an insecure quantum channel are used by QKD to construct a long secret key; parts of this key authenticate further insecure classical channels for subsequent QKD rounds, and one part is used by the one-time pad to construct a secure channel.]
Figure 2.1: A cryptographic protocol uses (weak) resources to construct other (stronger) resources. These resources are depicted in the boxes, and the arrows are protocols. Each box is a one-time-use resource, so the same resource appears in multiple boxes if different protocols require it. The long secret key resource in the center of the figure is split into three shorter keys, and each protocol uses one of these keys.

For any cryptographic task one can define an ideal resource which fulfils this task in a perfect way. A protocol is then considered secure if the real resource actually constructed is indistinguishable from a system running the ideal resource.\footnoterememberfn:relativeNote that we use the notions real and ideal in a relative sense: the ideal resource that we wish to construct with one protocol might be considered a real resource available to another protocol. This notion of security based on distinguishing real and ideal systems is explained informally in \secrefsec:ac.view. It is then illustrated with the one-time pad666The one-time pad is an encryption scheme that XORs every bit of a message $m$ with a bit of a key $k$, and transmits the resulting ciphertext $c = m \oplus k$ to the receiver. The message, which can be decrypted by performing the reverse operation $m = c \oplus k$, is hidden from any player who intercepts the ciphertext but has no knowledge of the key $k$. in \secrefsec:ac.otp. In \secrefsec:ac.definition we give a formal security definition in the Abstract Cryptography (AC) framework for the special case of three-party protocols with honest Alice and Bob, and dishonest Eve. Finally, in \secrefsec:ac.interpretation we discuss how the metric used to quantify the (in)distinguishability between the real and ideal settings should be interpreted.

2.1 Real-world ideal-world paradigm

Cryptography aims at providing security guarantees in the presence of an adversary. Traditionally, security has been defined with respect to the information gathered by this adversary – but, as we shall see, this can be insufficient to achieve the desired security guarantees. A typical example of this is the security criterion used in early papers on QKD, e.g., [May96, BBB00, SP00, May01]. Let $K$ be the secret key produced by a run of a QKD protocol, and $Z$ be a random variable obtained by an adversary attacking the scheme and measuring her quantum system $E$. It can be argued that the key is unknown to the adversary if she gains only negligible information about it, i.e., if for all attacks and measurements of the resulting quantum system,


$I(K; Z) \leq \varepsilon,$

where $I(K;Z)$ is the mutual information777This information measure, the maximum mutual information over all measurements of the quantum system, is called accessible information. between $K$ and $Z$.

However, even if a key obtained from a protocol satisfying \eqnrefeq:localqkd is used in a perfectly secure encryption scheme like the one-time pad, it can leak information about the message. König et al. [KRBM07] give such an example: they find a quantum state $\rho_{KE}$ which satisfies \eqnrefeq:localqkd, but which cannot be used to encrypt a message partly known to an adversary. They show that if the key is split into two parts, $K = (K_1, K_2)$, and the adversary delays measuring her system until the first part, $K_1$, is revealed to her – e.g., because a known message was encrypted by the one-time pad with $K_1$ – she can obtain information about the rest of the key. More precisely, they prove that for this state $\rho_{KE}$,

$I(K_2; Z') \gg \varepsilon,$

where $Z'$ is a random variable obtained by a measurement of the joint state consisting of the partial key $K_1$ and the quantum information $E$ gathered during the QKD protocol.888This phenomenon is called information locking [DHL04, Win14]. Even though the key obtained from the QKD protocol is approximately uniform and independent from the adversary's information in the sense of \eqnrefeq:localqkd, it is unusable in a cryptographic context, and another approach than the adversarial viewpoint is necessary for defining cryptographic security.

This new approach was proposed independently by Canetti [Can01] and Pfitzmann and Waidner [PW00, PW01] for classical cryptography. The gist of their global security paradigm lies in measuring how well some real protocol can be distinguished from some ideal system that fulfils the task in an ideal way, and is often referred to as the “real-world ideal-world” paradigm.999As already noted in \footnotereffn:relative, we use the notions real and ideal in a relative sense.

To do this, the notion of an adversary is dropped in favor of a distinguisher. Apart from having the capabilities of the adversary, this distinguisher also encompasses any protocol that is run before, after, and during the protocol being analyzed. The role of the distinguisher is to capture “the rest of the world”, everything that exists outside of the honest players and the resources they share. A distinguisher is defined as an entity that can choose the inputs of the honest players (that might come from a previously run protocol), receives their outputs (that could be used in a subsequent protocol), and simultaneously fullfils the role of the adversary, possibly eavesdropping on the communication channels and tampering with messages. This distinguisher is given a black box access to either the real or an ideal system, and must decide with which of the two it is interacting. A protocol is then considered secure if the real system constructed is indistinguishable from the ideal one. This is illustrated in \figreffig:distinguisher.

Figure 2.2: A distinguisher has a complete description of two systems, and is given black-box access to one of the two. After interacting with the system, it must guess which one it is holding.

In the case of QKD, this means that the distinguisher does not only obtain the system $E$ of the eavesdropper, but also receives the final key $K$ generated by Alice and Bob. In the real world, this key is potentially correlated to $E$; in an ideal system, it is uniformly random and independent from $E$. The distinguisher can then run the attack of König et al. [KRBM07] to distinguish between the real and ideal systems: if $Z'$, the result of the measurement of $K_1$ and $E$, is correlated to $K_2$, it knows that it was given the real system, otherwise it must have the ideal one. This specific attack is illustrated in more detail in \secrefsec:ex.leak.

2.2 Example: one-time pad

In this section, we illustrate with the one-time pad how security is defined in the real-world ideal-world paradigm. The one-time pad protocol uses a secret key $k$ to encrypt a message $m$ as $c = m \oplus k$. The ciphertext $c$ is then sent on an authentic channel to the receiver, who decrypts it, obtaining $m = c \oplus k$. The ciphertext is however also leaked to the adversary that is eavesdropping on the authentic channel. This is depicted in \figreffig:otp.real.
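The two XOR operations just described can be sketched in a few lines of Python (a minimal illustration, not part of the original protocol description; byte-wise XOR plays the role of the bit-wise $\oplus$):

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    # c = m XOR k; the key must be as long as the message and used only once
    assert len(key) == len(message)
    return bytes(mb ^ kb for mb, kb in zip(message, key))

otp_decrypt = otp_encrypt  # decryption is the same operation: m = c XOR k

key = secrets.token_bytes(5)           # the secret key resource
ciphertext = otp_encrypt(b"hello", key)
assert otp_decrypt(ciphertext, key) == b"hello"
```
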






Figure 2.3: The real one-time pad system – Alice has access to the left interface, Bob to the right interface and Eve to the lower interface – consists of the one-time pad protocol and the secret key and authentic channel resources. The combination of these resources and the protocol constructs a system that takes a message $m$ at Alice's interface, outputs a ciphertext $c$ at Eve's interface and the original message $m$ at Bob's interface.

The one-time pad protocol thus uses two resources, a secret key and an authentic channel. The resource we wish to construct with this encryption scheme is a secure channel: a resource which transmits a message from the sender to the receiver, and leaks only information about the message size at the adversary’s interface, but not the contents of the message. This is illustrated in \figreffig:otp.ideal.resource.




Figure 2.4: A secure channel from Alice to Bob leaks only the message size at Eve’s interface.

Since an ideal resource “magically” solves the cryptographic task considered, e.g., by producing perfect secret keys or transmitting a message directly from Alice to Bob, the adversary's interface of the ideal resource is usually quite different from her interface of the real system, which gives her access to the resources used. For the one-time pad, the real system from \figreffig:otp.real outputs a string at Eve's interface, but the ideal secure channel from \figreffig:otp.ideal.resource outputs an integer, the message length $|m|$. To make the comparison between real and ideal systems possible, we define the ideal system to consist of the ideal resource as well as a simulator plugged into the adversary's interface of the ideal resource, that recreates the communication occurring in the real system. For the one-time pad, this simulator must generate a ciphertext given the message length $|m|$. This is simply done by generating a random string of the appropriate length, as depicted in \figreffig:otp.ideal. Note that putting such a simulator between the ideal resource and the adversary can only weaken her, since any operation performed by the simulator could equivalently be performed by an adversary connected directly to the interface of the ideal resource.

Figure 2.5: The ideal one-time pad system – Alice has access to the left interface, Bob to the right interface and Eve to the lower interface – consists of the ideal secure channel and a simulator that generates a random string of length $|m|$.

To prove that the one-time pad constructs a secure channel from an authentic channel and a secret key, we view the real and ideal one-time pad systems of \figreffig:otp.real and \figreffig:otp.ideal as black boxes, and need to show that no distinguisher can tell with which of the two it has been connected. For both black boxes, if the distinguisher inputs a message $m$ at Alice's interface, the same string is output at Bob's interface and a uniformly random string of length $|m|$ is output at Eve's interface. The two systems are thus completely indistinguishable – if the distinguisher were to take a guess, it would be right with probability exactly $\frac{1}{2}$ – and we say that the one-time pad has perfect security.
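This perfect indistinguishability can be checked exhaustively for short messages: for every message, the ciphertext distribution in the real system (XOR with a uniform key) is identical to the simulator's output in the ideal system (a uniform random string). A small sanity check, assuming 2-bit messages:

```python
from collections import Counter

n = 2  # message length in bits

def real_eve_output(m):
    # Real system: Eve sees c = m XOR k for a uniformly random n-bit key k.
    return Counter(m ^ k for k in range(2 ** n))

def ideal_eve_output(m):
    # Ideal system: the simulator outputs a uniformly random n-bit string,
    # independently of the message m.
    return Counter(range(2 ** n))

for m in range(2 ** n):
    assert real_eve_output(m) == ideal_eve_output(m)
print("real and ideal Eve-outputs are identically distributed")
```
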

If two systems are indistinguishable, they can be used interchangeably in any setting. For example, let some protocol $\pi'$ be proven secure if Alice and Bob are connected by a secure channel. Since the one-time pad constructs such a channel, it can be used in lieu of the secure channel, and composed with $\pi'$. Or equivalently, the contrapositive: if composing the one-time pad and $\pi'$ were to leak some vital information, which would not happen with a secure channel, a distinguisher that is either given the real or ideal system could run $\pi'$ internally and check whether this leak occurs to know with which of the two it is interacting.

2.3 General security definition

The previous sections introduced the concepts of resources, protocols and simulators in an informal manner. In the AC framework these elements are defined in an abstract way. For example, a resource is an abstract system that is shared between all players and provides each one with an interface that allows in- and outputs. AC does not define the internal workings of a resource. It postulates axioms that these abstract systems must fulfill – e.g., there must exist a metric and a parallel composition operator on the space of resources – and is valid for any instantiation which respects these axioms. In the group theory analogy introduced above, these axioms correspond to the group axioms (closure, associativity, identity and invertibility). Any set and operation that respects these group axioms is an instantiation of a group, and any theorem proven for groups applies to this instantiation.

Thus, AC defines cryptographic security for abstract systems which fulfill certain basic properties. In the following we briefly sketch what these are. Note that examples – such as the model of the one-time pad given in Figures 2.3 and 2.5 – necessarily assume some instantiation of the abstract systems. Since we consider only simple examples in this work, we do not provide formal generic definitions of these lower levels, and refer to the discussions in [MR11, Mau12, DFPR14] on how this can be modeled.


An $\mathcal{I}$-resource is an (abstract) system with interfaces specified by a set $\mathcal{I}$ (e.g., $\mathcal{I} = \{A, B, E\}$). Each interface $i \in \mathcal{I}$ is accessible to a user and provides her or him with certain controls (the possibility of reading outputs and providing inputs). Resources are equipped with a parallel composition operator, $\|$, that maps two resources to another resource.


To transform one resource into another, we use converters. These are (abstract) systems with two interfaces, an inside interface and an outside interface. The inside interface connects to an interface of a resource, and the outside interface becomes the new interface of the constructed resource. We write either $\alpha_i R$ or $R \alpha_i$ to denote the new resource with the converter $\alpha$ connected at the interface $i$ of $R$,101010There is no mathematical difference between $\alpha_i R$ and $R \alpha_i$. It sometimes simplifies the notation to have the converters for some players written on the right of the resource and the ones for other players on the left, instead of all on the same side, hence the two notations. and $\alpha R$ or $R \alpha$ for a set of converters $\alpha = \{\alpha_i\}_i$, for which it is clear to which interface they connect.

A protocol is a set of converters (one for every honest player) and a simulator is also a converter. Another type of converter that we need is a filter, which we often denote by $\flat$ or $\sharp$. When placed over a dishonest player's interface, a filter prevents access to the corresponding controls and emulates an honest behavior.

Serial and parallel composition of converters is defined as follows: $(\alpha \beta)_i R := \alpha_i (\beta_i R)$ and $(\alpha \| \beta)_i (R \| S) := (\alpha_i R) \| (\beta_i S)$.

Filtered resource.

A pair of a resource $R$ and a filter $\flat$ together specify the (reactive) behavior of a system both when no adversary is present – with the filter plugged in the adversarial interface, $R \flat$ – and in the case of a cheating player that removes the filter and has full access to her interface of $R$. We call such a pair a filtered resource, and usually denote it by $\mathcal{R} = (R, \flat)$.


There must exist a pseudo-metric $d$ on the space of resources, i.e., for any three resources $R, S, T$, it satisfies the following conditions:111111If additionally $d(R, S) = 0 \implies R = S$, then $d$ is a metric.

$d(R, R) = 0$, (identity) (4)
$d(R, S) = d(S, R)$, (symmetry) (5)
$d(R, T) \leq d(R, S) + d(S, T)$. (triangle inequality) (6)

Furthermore, this pseudo-metric must be non-increasing under composition with resources and converters: for any converter $\alpha$ and resources $R, S, T$, we require

$d(\alpha R, \alpha S) \leq d(R, S)$ and $d(R \| T, S \| T) \leq d(R, S)$.
We are now ready to define the security of a cryptographic protocol. We do so in the three-player setting, for honest Alice and Bob, and dishonest Eve. Thus, in the following, all resources have three interfaces, denoted $A$, $B$ and $E$, and we only consider honest behaviors (given by a protocol $\pi = (\pi_A, \pi_B)$) at the $A$- and $B$-interfaces, but arbitrary behavior at the $E$-interface. We refer to [MR11] for the general case, when arbitrary players can be dishonest.


[Cryptographic security [MR11]] Let $\pi = (\pi_A, \pi_B)$ be a protocol and $\mathcal{R} = (R, \flat)$ and $\mathcal{S} = (S, \sharp)$ denote two filtered resources. We say that $\pi$ constructs $\mathcal{S}$ from $\mathcal{R}$ within $\varepsilon$, which we write $\mathcal{R} \xrightarrow{\pi,\,\varepsilon} \mathcal{S}$, if the two following conditions hold:

  1. We have $d(\pi_A \pi_B R \flat, S \sharp) \leq \varepsilon$.

  2. There exists a converter $\sigma_E$ – which we call simulator – such that $d(\pi_A \pi_B R, S \sigma_E) \leq \varepsilon$.

If it is clear from the context what filtered resources $\mathcal{R}$ and $\mathcal{S}$ are meant, we simply say that $\pi$ is $\varepsilon$-secure.

The first of these two conditions measures how close the constructed resource is to the ideal resource in the case where no malicious player is intervening, which we call availability.121212This is sometimes referred to as the correctness of the protocol in the cryptographic literature. But in QKD, correctness has another meaning – namely the probability that Alice and Bob end up with different keys when Eve is active. Instead, the term robustness is traditionally used to denote the performance of a QKD protocol under honest (noisy) conditions. We refer to \secrefsec:security.rob for a discussion of the relation between availability and robustness. The second condition captures security in the presence of an adversary. These two equations are illustrated in \figreffig:security.

(a) Condition (1) from \defrefdef:security. If Eve’s interfaces are blocked by filters emulating honest behavior, the functionality constructed by the protocol should be indistinguishable from the ideal resource.

(b) Condition (2) from \defrefdef:security. If Eve accesses her cheating interface of $R$, the resulting system must be simulatable in the ideal world by a converter $\sigma_E$ that only accesses Eve's interface of the ideal resource $S$.
Figure 2.6: A protocol $\pi$ constructs $\mathcal{S}$ from $\mathcal{R}$ within $\varepsilon$ if the two conditions illustrated in this figure hold. The sequences of arrows at the interfaces between the objects represent (arbitrary) rounds of communication.

It follows from the AC framework [MR11] that if two protocols $\pi_1$ and $\pi_2$ are $\varepsilon_1$- and $\varepsilon_2$-secure, the composition of the two is $(\varepsilon_1 + \varepsilon_2)$-secure. We illustrate this with several examples in \secrefsec:ex and \appendixrefapp:ex.auth, and sketch a generic proof in \appendixrefapp:generic.

2.4 The distinguishing metric

The usual pseudo-metric used to define security in the real-world ideal-world paradigm is the distinguishing advantage, defined as follows. If a distinguisher $\mathcal{D}$ can guess correctly with probability $p$ with which of two systems $R$ and $S$ it is interacting, we define its advantage as

$d^{\mathcal{D}}(R, S) := 2p - 1.$

Changing the power of the distinguisher (e.g., computationally bounded or unbounded) results in different metrics and different levels of security. In this work we are interested only in information-theoretic security, we therefore consider only a computationally unbounded distinguisher, and drop the superscript $\mathcal{D}$. We write

$d(R, S) \leq \varepsilon$

if two systems $R$ and $S$ can be distinguished with advantage at most $\varepsilon$, and in the following, the distance between two resources always refers to the distinguishing advantage of an unbounded distinguisher. A more extensive discussion of distinguishers is given in \appendixrefapp:moreAC.dist.

Although any pseudo-metric which satisfies the basic axioms can be used in \defrefdef:security, the distinguishing advantage is of particular importance, because it has an operational definition – the advantage a distinguisher has in guessing whether it is interacting with the real or ideal system. If the distinguisher notices a difference between the two, then something in the real setting did not behave ideally. This can be loosely interpreted as a failure occurring. If the distinguisher can guess correctly with probability $1$ with which system it is interacting, a failure must occur systematically. If it can only guess correctly with probability $\frac{1}{2}$, no failure occurs at all. If it can guess correctly with probability $\frac{1}{2}(1 + \varepsilon)$, this can be seen as a failure occurring with probability $\varepsilon$. The distinguishing advantage can thus be interpreted as the probability that a failure occurs in the real protocol.131313A formal derivation of this interpretation is given in \appendixrefapp:op.failure for the trace distance – the distinguishing advantage between two quantum states. And in any practical implementation, the value $\varepsilon$ can be chosen accordingly.
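The translation between guessing probability and failure probability used above is just the affine relation $\varepsilon = 2p - 1$, which can be sketched as:

```python
def advantage(p_guess):
    # distinguishing advantage epsilon = 2p - 1: p = 1/2 gives 0, p = 1 gives 1
    return 2 * p_guess - 1

assert advantage(0.5) == 0.0   # never noticed: no failure occurs
assert advantage(1.0) == 1.0   # always noticed: failure occurs systematically
assert abs(advantage(0.5 * (1 + 0.01)) - 0.01) < 1e-12  # failure prob. 0.01
```
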

A bound $\varepsilon$ on the security of a protocol does however not tell us how “bad” this failure is. For example, a key distribution protocol which produces a perfectly uniform key, but with probability $\varepsilon$ Alice and Bob end up with different keys, is $\varepsilon$-secure. Likewise, a protocol which gives one bit of the key to Eve with probability $\varepsilon$, but is perfect otherwise, and another protocol which gives the entire key to Eve with probability $\varepsilon$, but is perfect otherwise, are both $\varepsilon$-secure as well. One could argue that leaking the entire key is worse than leaking one bit, which is worse than not leaking anything but generating mismatching keys, and this should be reflected in the level of security of the protocol. However, leaking one bit can be as bad as leaking the entire key if only one bit of the message is vital, and this happens to be the bit obtained by Eve. Having mismatching keys and therefore misinterpreting a message could have more dire consequences than leaking the message to Eve. How bad a failure is depends on the use of the protocol, and since the purpose of cryptographic security is to make a security statement that is valid for all contexts, bounding the probability that a failure occurs is the best it can do.

Since such a security bound gives no idea of the gravity of a failure – a faulty QKD protocol might not only leak the current key, but all future keys as well if the current key is used to authenticate messages in future rounds – the probability of a failure occurring must be chosen small enough that the accumulation of all possible failure probabilities over a lifetime is still small enough. For example, if an implementation of a QKD protocol produces a key at a rate of $r$ bits per second with a failure per bit of $\varepsilon$, the accumulated failure after running for a time $t$ is at most $r t \varepsilon$; for a sufficiently small $\varepsilon$ per bit, the protocol can be run for the age of the universe and still have an accumulated failure strictly less than $1$.
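The accumulation argument is a union bound over every key bit produced during the device's lifetime. With illustrative numbers (the 1 Mbit/s key rate and the per-bit failure of $10^{-24}$ are assumptions chosen for the sake of the example):

```python
rate_bits_per_s = 1e6        # assumed key rate: 1 Mbit/s
eps_per_bit = 1e-24          # assumed failure probability per key bit
age_of_universe_s = 4.4e17   # roughly 14 billion years, in seconds

total_bits = rate_bits_per_s * age_of_universe_s
accumulated_failure = total_bits * eps_per_bit  # union bound over all bits
print(accumulated_failure)  # still below 1 after the age of the universe
```
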

3 Quantum key distribution

In order to apply the general AC security definition to QKD, we need to specify the ideal key filtered resource, which we do in \secrefsec:qkd.ideal. Likewise, we specify in \secrefsec:qkd.protocol the real QKD system consisting of the protocol, an authentic classical channel and an insecure quantum channel. Plugging these systems into \defrefdef:security, we obtain the security criteria for QKD.

3.1 Ideal key

The goal of a key distribution protocol is to generate a secret key shared between two players. One can represent such a resource by a box, one end of which is in Alice’s lab, and another in Bob’s. It provides each of them with a secret key of a given length, but does not give Eve any information about the key. This is illustrated in \figreffig:qkd.resource.simple, and is the key resource we used in the one-time pad construction (\figreffig:otp.real).





(a) A resource that always gives a key to Alice and Bob, and nothing to Eve.





(b) A resource that allows Eve to decide whether Alice and Bob get a key or an error $\perp$.


Secret key

(c) The resource from \figreffig:qkd.resource.switch with a filter, modeling the case with no adversary.


Secret key

(d) The resource from \figreffig:qkd.resource.switch with a simulator.
Figure 3.1: Some depictions of shared secret key resources, with filter and simulator converters in the last two.

However, if we wish to realize such a functionality with QKD, there is a caveat: an eavesdropper can always prevent any real QKD protocol from generating a key by cutting or jumbling the communication lines between Alice and Bob, and this must be reflected in the definition of the ideal resource. The box thus also has an interface accessible to Eve, which provides her with a switch that, when pressed, prevents the box from generating a key. We depict this in \figreffig:qkd.resource.switch.

If modeled with the secret key resource of \figreffig:qkd.resource.switch, the one-time pad is trivially secure conditioned on Eve preventing a key from being distributed: in this case, Alice and Bob do not have a key and do not run the one-time pad. The security of the one-time pad thus reduces to the case where a key is generated, which corresponds to \figreffig:qkd.resource.simple and is the situation analyzed in \secrefsec:ac.otp.

If no adversary is present, a filter covers Eve’s interface of the resource, making it inaccessible to the distinguisher. This filter emulates the honest behavior that one expects from a non-malicious noisy channel. For a protocol and noisy channel that together produce a key with probability $p$, the filter should flip the switch on the $E$-interface of the ideal key with probability $1-p$. This is illustrated in \figreffig:qkd.resource.filter, and discussed in more detail in \secrefsec:security.rob.


[Adaptive key length] For a protocol to construct the shared secret key resource of \figreffig:qkd.resource.switch, it must either abort or produce a key of a fixed length. A more practical protocol could adapt the secret key length to the noise level on the quantum channel. This provides the adversary with the ability to control the key length (not only whether a key gets generated at all), and can be modeled by allowing the key length to be input at Eve’s interface of the ideal key resource.

3.2 Real protocol

To construct the secret key resource of \figreffig:qkd.resource.switch, a QKD protocol uses some other resources: a two-way authentic classical channel and an insecure quantum channel. An authentic channel faithfully transmits messages between Alice and Bob, but provides Eve with a copy as well. An insecure channel is completely under the control of Eve: she can apply any operation allowed by physics to the message on the channel. If Eve does not intervene, some noise might still be present on the channel, which is modeled by a filter that prevents Eve from reading the message, but introduces honest noise instead. Since an authentic channel can be constructed from an insecure channel and a short shared secret key (in fact, a short non-uniform key is sufficient for authentication [RW03], see \footnotereffn:password), QKD is sometimes referred to as a key expansion protocol; we model QKD this way in \appendixrefapp:ex.auth.qkd.

A QKD protocol typically has three phases: quantum state distribution, error estimation and classical post-processing (for a detailed review of QKD, see [SBPC09]). In the first, Alice sends quantum states over the insecure channel to Bob, who measures them upon reception, obtaining a classical string. In the error estimation phase, they communicate over the (two-way) authentic classical channel to sample bits at random positions in the string and estimate the noise on the quantum channel by comparing these values to what Bob should have obtained. If the noise level is above a certain threshold, they abort the protocol and output an error message. If the noise is low enough, they move on to the third phase, and use the authentic channel to perform error correction and privacy amplification on their respective strings, resulting in keys $K_A$ and $K_B$ (which, ideally, should be equal). We sketch this in \figreffig:qkd.real.

Authentic channel

Insecure channel

(a) When Eve is present, her interface gives her complete control of the insecure channel and allows her to read the messages on the authentic channel.

Authentic channel

Insecure channel

(b) When no eavesdropper is present, filters forward Alice’s quantum messages to Bob and block the authentic channel’s output at the $E$-interface. The filter might introduce non-malicious noise that modifies the transmitted states, modeling an (honest) noisy channel.
Figure 3.2: The real QKD system – Alice has access to the left interface, Bob to the right interface and Eve to the lower interface – consists of the protocol, the insecure quantum channel and the two-way authentic classical channel. Alice and Bob abort if the insecure channel is too noisy, i.e., if the received states are not similar enough to those sent to obtain a secret key of the desired length. They run the classical post-processing over the authentic channel, obtaining keys $K_A$ and $K_B$. The message depicted on the two-way authentic channel represents the entire transcript of the classical post-processing.

[Source of entanglement] In this work we use an insecure quantum channel from Alice to Bob to construct the shared secret key resource. An alternative resource frequently used in QKD instead of this insecure channel is a source of entangled states under the control of Eve. The source sends one half of an entangled state to Alice and the other half to Bob. It can be modeled similarly to the insecure channel depicted in \figreffig:qkd.real, but with the first arrow reversed: the states are sent from Eve to Alice and from Eve to Bob.

3.3 Security

Consider the QKD protocol together with the insecure quantum channel and the authentic classical channel, each with its filter, and let the ideal resource be the secret key resource of \figreffig:qkd.resource.switch with its filter. Applying \defrefdef:security, we find that the protocol constructs the secret key resource from the two channels within $\varepsilon$ if


The left- and right-hand sides of \eqnrefeq:qkd.robust are illustrated in Figures 3.2(b) and 3.1(c), and the left- and right-hand sides of the security condition are illustrated in Figures 3.2(a) and 3.1(d). These two conditions are decomposed into simpler criteria in \secrefsec:security.

4 Security reduction

By applying the general AC security definition to QKD, we obtained two criteria, capturing availability and security, respectively. In this section we derive \eqnrefeq:d, the trace distance criterion discussed in the introduction, from the security condition. We first show in \secrefsec:security.dist that the distinguishing advantage used in the previous sections reduces to the trace distance between the quantum states gathered by the distinguisher interacting with the real and ideal systems. Then, in \secrefsec:security.simulator, we fix the simulator of the ideal system. In \secrefsec:security.simple we decompose the resulting security criterion into a combination of secrecy (\eqnrefeq:d) and correctness, the probability that Alice’s and Bob’s keys differ. In the last section, \secrefsec:security.rob, we consider the availability condition of \eqnrefeq:qkd.robust, which captures whether, in the absence of a malicious adversary, the protocol behaves as specified by the ideal resource and corresponding filter. We show how this condition can be used to model the robustness of the protocol, i.e., the probability that the protocol aborts under non-malicious noise.

4.1 Trace distance

The security criteria given in the previous section are defined in terms of the distinguishing advantage between resources. To simplify these equations, we rewrite them in terms of the trace distance. A formal definition of this metric is given in \appendixrefapp:op.definitions, along with a discussion of how to interpret it in the rest of \appendixrefapp:op. We start with the simpler case of \eqnrefeq:qkd.robust in the next paragraph, then deal with the security condition after that.

The two resources on the left- and right-hand sides of \eqnrefeq:qkd.robust simply output classical strings (a key or an error message) at Alice’s and Bob’s interfaces. The distinguishing advantage between these systems is thus simply the distinguishing advantage between the two joint probability distributions of these pairs of strings: a distinguisher is given a pair of strings sampled according to one of the two distributions and has to guess from which it was sampled.

The distinguishing advantage between two probability distributions is equal to their total variation distance (the total variation distance between two probability distributions is equivalent to the trace distance between the corresponding diagonal quantum states; we use the same notation for both metrics, since the former is a special case of the latter), which we prove in \appendixrefapp:op.distadv. Putting the two together, the availability criterion of \eqnrefeq:qkd.robust reduces to a bound on the total variation distance between the distributions of the strings output by the real and ideal systems.
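The distinguishing advantage for classical outputs can be computed directly. The following sketch evaluates the total variation distance between two distributions over two-bit keys and the corresponding optimal guessing probability $\frac{1}{2}(1+\delta)$; the distributions are hypothetical, chosen only for illustration.

```python
def total_variation(p: dict, q: dict) -> float:
    """Total variation distance: half the L1 distance between distributions."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

# A "real" system with slightly biased keys vs. an ideal, uniform one.
real  = {"00": 0.30, "01": 0.20, "10": 0.25, "11": 0.25}
ideal = {"00": 0.25, "01": 0.25, "10": 0.25, "11": 0.25}

delta = total_variation(real, ideal)
# An optimal distinguisher guesses "real" exactly on the outcomes where
# real assigns more weight than ideal; it succeeds with prob (1 + delta) / 2.
print(round(delta, 3))            # 0.05
print(round((1 + delta) / 2, 3))  # 0.525
```

A distance of $0$ means the distinguisher can do no better than a random guess; a distance of $1$ means it can always tell the systems apart.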

The resources on the left- and right-hand sides of the security condition are slightly more complex. They first output a state at the $E$-interface, namely the quantum states prepared by Alice, which she sends on the insecure quantum channel. Without loss of generality, the distinguisher now applies some map allowed by quantum physics to this state, puts one register back on the insecure channel for Bob, and keeps the other part. Finally, the systems output keys (or error messages) at the $A$- and $B$-interfaces, and a transcript of the post-processing at the $E$-interface. Let $\rho_{K_A K_B E}$ denote the tripartite state held by a distinguisher interacting with the real system, and let $\sigma_{K_A K_B E}$ denote the state held after interacting with the ideal system, where the registers $K_A$ and $K_B$ contain the final keys or error messages, and the register $E$ holds both the state obtained from tampering with the quantum channel and the post-processing transcript. Distinguishing between these two systems thus reduces to maximizing over the distinguisher's strategies (the choice of map) and distinguishing between the resulting states:

The advantage a distinguisher has in guessing which of the two states it holds is given by the trace distance between these states, i.e.,

This was first proven by Helstrom [Hel76]. For completeness, we provide a proof in \appendixrefapp:op.distadv, \thmrefthm:op.distinguishing.
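Helstrom's bound is easy to evaluate numerically. The sketch below, a toy single-qubit example unrelated to any specific QKD protocol, computes the trace distance $\frac{1}{2}\|\rho-\sigma\|_1$ as half the sum of the absolute eigenvalues of the Hermitian difference $\rho-\sigma$, and the resulting optimal distinguishing probability $\frac{1}{2}(1+D)$.

```python
import numpy as np

def trace_distance(rho: np.ndarray, sigma: np.ndarray) -> float:
    """D(rho, sigma) = (1/2) * sum of |eigenvalues| of (rho - sigma)."""
    eigvals = np.linalg.eigvalsh(rho - sigma)  # difference is Hermitian
    return 0.5 * float(np.sum(np.abs(eigvals)))

# Two single-qubit states: the pure state |0><0| and the fully mixed state.
rho = np.array([[1.0, 0.0], [0.0, 0.0]])
tau = np.array([[0.5, 0.0], [0.0, 0.5]])

D = trace_distance(rho, tau)
# Helstrom: an optimal measurement guesses correctly with prob (1 + D) / 2.
print(D)             # 0.5
print((1 + D) / 2)   # 0.75
```

For equal states the distance is $0$ and the distinguisher is reduced to a random guess; for orthogonal states it is $1$ and the guess is always correct.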

The distinguishing advantage between the real and ideal systems of the security condition thus reduces to the trace distance between the quantum states gathered by the distinguisher. In the following, we usually leave the maximization over distinguisher strategies implicit where it is clear from context, and simply express the security criterion as a bound on the trace distance between the quantum states gathered by the distinguisher interacting with the real and ideal systems, respectively.

4.2 Simulator

In the real setting (\figreffig:qkd.real.adv), Eve has full control over the quantum channel and obtains the entire classical transcript of the protocol. So for the real and ideal settings to be indistinguishable, a simulator must generate the same communication as in the real setting. This can be done by internally running Alice’s and Bob’s protocol, producing the same messages at Eve’s interface as the real system. However, instead of letting this (simulated) protocol decide the value of the key as in the real setting, the simulator only checks whether it produces a key or an error message, and presses the switch on the secret key resource accordingly. We illustrate this in \figreffig:qkd.ideal.


Secret key

Figure 4.1: The ideal QKD system – Alice has access to the left interface, Bob to the right interface and Eve to the lower interface – consists of the ideal secret key resource and a simulator.

The security criterion can now be simplified by noting that, with this simulator, the states of the ideal and real systems are identical when no key is produced. The outputs at Alice’s and Bob’s interfaces are classical, elements of the set $\mathcal{K} \cup \{\perp\}$, where $\perp$ symbolizes an error and $\mathcal{K}$ is the set of possible keys. The states of the real and ideal systems can be written as

Plugging these into the security criterion, we get




Here, the first state is the renormalized state of the system conditioned on not aborting, and the second is a perfectly uniform shared key.

4.3 Correctness & secrecy

We now break the security criterion down into two components, often referred to as correctness and secrecy, and recover the security definition for QKD introduced in [RK05, BHL05, Ren05]. The correctness of a QKD protocol refers to the probability that Alice and Bob end up holding different keys. We say that a protocol is $\varepsilon_{\rm cor}$-correct if, for all adversarial strategies,
\[ \Pr[K_A \neq K_B] \leq \varepsilon_{\rm cor}, \]
where $K_A$ and $K_B$ are random variables over the alphabet $\mathcal{K} \cup \{\perp\}$ describing Alice’s and Bob’s outputs. (This can equivalently be written as a bound on the probability that the keys differ conditioned on not aborting, weighted by the probability of not aborting.) The secrecy of a QKD protocol measures how close the final key is to a distribution that is uniform and independent of the adversary’s system. Let $p_\perp$ be the probability that the protocol aborts, and $\rho_{K_A E}$ the resulting state of the key and adversary subsystems conditioned on not aborting. A protocol is $\varepsilon_{\rm sec}$-secret if, for all adversarial strategies,
\[ (1 - p_\perp)\, d\big(\rho_{K_A E},\, \tau_{K_A} \otimes \rho_E\big) \leq \varepsilon_{\rm sec}, \]
where the distance $d$ is the trace distance and $\tau_{K_A}$ is the fully mixed state. (\eqnrefeq:qkd.sec is a reformulation of \eqnrefeq:d.)


If a QKD protocol is $\varepsilon_{\rm cor}$-correct and $\varepsilon_{\rm sec}$-secret, then the security criterion is satisfied for $\varepsilon = \varepsilon_{\rm cor} + \varepsilon_{\rm sec}$.


Let us define a new state, obtained from the real state by throwing away the $K_B$ system and replacing it with a copy of $K_A$, i.e.,

From the triangle inequality we get

Since in both of these states the $K_B$ system is a copy of the $K_A$ system, this copy does not modify the distance. Furthermore, by the $\varepsilon_{\rm sec}$-secrecy of the protocol, the remaining distance is bounded by $\varepsilon_{\rm sec}$. Hence

For the other term, note that the real state and the state defined above can differ only on the event $K_A \neq K_B$, whose probability is bounded by $\varepsilon_{\rm cor}$ by the correctness of the protocol.
Putting the above together, we get


[Tightness of the security criteria] In \thmrefthm:qkd we prove a bound on the second security condition of \defrefdef:security for QKD in terms of the correctness and secrecy of the protocol. The converse can also be shown: if the security condition holds for some $\varepsilon$, then the corresponding QKD protocol is both $2\varepsilon$-correct and $2\varepsilon$-secret. (The factor $2$ is a result of the existence of the simulator in the security definition. We cannot exclude that for some specific QKD protocol there exists a different simulator – different from the one used in this proof – generating a state that is closer to the real one when interacting with the distinguisher. However, by the triangle inequality, the failure of the generic simulator used in this proof is at most twice the optimal one.)
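The chain of inequalities in the proof of \thmrefthm:qkd can be summarized compactly. In the following sketch, $\rho_{K_A K_B E}$ denotes the real state, $\tilde{\rho}_{K_A K_B E}$ the hybrid state in which $K_B$ is replaced by a copy of $K_A$, and $\rho_{\mathrm{ideal}}$ the ideal state with a uniform key; the notation is ours, chosen for this summary.

```latex
\begin{align*}
  d\bigl(\rho_{K_A K_B E},\, \rho_{\mathrm{ideal}}\bigr)
    &\leq d\bigl(\rho_{K_A K_B E},\, \tilde{\rho}_{K_A K_B E}\bigr)
        + d\bigl(\tilde{\rho}_{K_A K_B E},\, \rho_{\mathrm{ideal}}\bigr)
        && \text{(triangle inequality)} \\
    &\leq \Pr[K_A \neq K_B]
        + d\bigl(\rho_{K_A E},\, \tau_{K_A} \otimes \rho_E\bigr)
        && \text{(copying $K_A$ preserves $d$)} \\
    &\leq \varepsilon_{\mathrm{cor}} + \varepsilon_{\mathrm{sec}}.
\end{align*}
```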

4.4 Robustness

So far in this section we have discussed the security of a QKD protocol with respect to a malicious Eve, using the second condition from \defrefdef:security. A QKD protocol which always aborts without producing any key trivially satisfies this condition with $\varepsilon = 0$, but is not a useful protocol at all! The real system must not only be indistinguishable from the ideal one when an adversary is present, but also when the adversarial interfaces are covered by filters emulating honest behavior. This is modeled by the first condition from \defrefdef:security, namely \eqnrefeq:qkd.robust for QKD. If no adversary is tampering with the quantum channel – only natural, non-malicious noise is present – we expect a secret key to be generated with high probability. This can be captured by designing the filter to allow a key to be produced with high probability: if the real system does not generate a key with the same probability, this immediately results in a gap noticeable by the distinguisher.

The probability of a key being generated depends on the noise introduced by the filter covering the adversarial interface of the insecure quantum channel in the real system (illustrated in \figreffig:qkd.real.filter). Suppose that this noise is parametrized by a value $p$, e.g., a depolarizing channel with probability $p$. For every $p$, the protocol has a probability of aborting, which is called the robustness. Let a filter of the channel model this noise, and let the filter of the ideal key resource flip the switch, preventing a key from being generated, with the corresponding probability. \eqnrefeq:qkd.robust thus becomes


where varying $p$ and the corresponding filters results in a family of real and ideal systems.
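As a toy illustration of how the robustness depends on the noise parameter, the sketch below estimates the abort probability of an error-estimation step run over a depolarizing channel, which flips each sampled test bit with probability $p/2$. The sample size, abort threshold and trial count are assumptions chosen for this example, not parameters of any specific protocol.

```python
import random

def abort_probability(p: float, n_sample: int = 500,
                      threshold: float = 0.11, trials: int = 400) -> float:
    """Monte Carlo estimate of the robustness: the probability that the
    error rate estimated on n_sample test bits exceeds the abort threshold.
    A depolarizing channel with probability p flips each test bit w.p. p/2."""
    qber = p / 2
    aborts = 0
    for _ in range(trials):
        errors = sum(random.random() < qber for _ in range(n_sample))
        if errors / n_sample > threshold:
            aborts += 1
    return aborts / trials

random.seed(0)
print(abort_probability(0.0))  # 0.0 (noiseless channel: never aborts)
print(abort_probability(0.5))  # 1.0 (error rate 25% >> threshold: aborts)
print(abort_probability(0.2))  # intermediate: error rate 10% near threshold
```

For noise well below the threshold the protocol almost never aborts, and for noise well above it the protocol almost always does; only near the threshold does the finite sample size make the outcome uncertain.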

We now prove that in this case the failure from \eqnrefeq:robustness is bounded by the security failure $\varepsilon$. Note that this statement is only useful if the probability of aborting is small for reasonable noise models.


If the filters from \eqnrefeq:robustness are parametrized such that the filtered ideal key resource aborts with exactly the same probability as the protocol run on the noisy channel, then the availability of the protocol is bounded by the security, i.e.,

where the simulator is the one used in the previous sections, introduced in \secrefsec:security.simulator, \figreffig:qkd.ideal.


Since the filtered ideal key resource aborts with exactly the same probability as the real system, and since the simulator simulates the real system, we can substitute one system for the other. The result then follows, because applying a converter to both the real and ideal systems can only decrease their distance (\eqnrefeq:axioms.nonincrease). ∎

5 Examples of composition

It is immediate from the AC framework [MR11] that the composition of two protocols satisfying \defrefdef:security is still secure (see \appendixrefapp:generic for a proof sketch). In this section we attempt to provide a better feeling for protocol composition by illustrating it with several examples. We compose QKD in series and in parallel, and show that – as a result of the triangle inequality and the security of the individual protocols – the corresponding composed real systems are indistinguishable from the composed ideal systems.

In \secrefsec:ex.leak we first look at a situation in which part of the key is known to the adversary. In \secrefsec:ex.otp we compose QKD with a one-time pad. And in \secrefsec:ex.qkd we compose two runs of a QKD protocol in parallel. We provide a more extensive example of protocol composition in \appendixrefapp:ex.auth, where we model the security of authentication and compose it with QKD, resulting in a key expansion protocol.

To simplify the examples, we only consider security in the presence of an adversary and ignore the first condition from \defrefdef:security. For the same reason, when writing out the security condition with the trace distance, we hard-code the simulator used in \secrefsec:security into the security criterion. Furthermore, as shown in \secrefsec:security.simulator, conditioned on aborting, the real and ideal systems of QKD are identical, so the security criterion can be reduced to the case in which the QKD protocol terminates with a shared key between Alice and Bob. With these simplifications, a QKD protocol is $\varepsilon$-secure if


where the ideal system outputs a perfect shared key, and the two states compared are the final states, conditioned on producing a key, that the distinguisher holds after interacting with the real and ideal systems, respectively.

5.1 Partially known key

The accessible information criterion given in \eqnrefeq:localqkd is shown to be insufficient as a security definition for QKD by considering a setting in which part of the key is made available to Eve [KRBM07]. This allows her to guess the remaining bits of the key, which would not have been possible had the key been distributed using an ideal resource. We analyze exactly this setting here, and argue that it does not affect the security of a QKD scheme that satisfies \defrefdef:security.

To model this partial knowledge of the key, let Alice run a protocol that takes part of the secret key – generated either by a QKD protocol or by an ideal resource – and sends it on a channel to Eve. Plugging this into the real and ideal QKD systems from Figures 3.2(a) and 4.1, we get \figreffig:ex.leak.

(a) The QKD protocol generates a pair of keys. Alice then runs a protocol which provides the first part of her key to Eve. The drawings of the insecure quantum channel and authentic classical channel have been removed to simplify the figure.


Secret key

(b) The ideal secret key resource generates a key, part of which is provided to Eve by Alice’s protocol. A simulator pads the ideal key resource to generate the same communication as in the real setting.
Figure 5.1: Alice runs a protocol which reveals the first half of her key to Eve. In each figure, Alice and Bob have access to the left and right interfaces, and Eve to the lower interface. If we remove the parts in gray we recover the real and ideal systems of QKD.

It is immediate from \figreffig:ex.leak that this leaking protocol cannot increase the distance between the real and ideal systems and therefore cannot compromise security: the systems drawn in gray can be run internally by a distinguisher attempting to guess whether it is interacting with the real or ideal QKD system, so this case is already bounded by the security of QKD.

This reasoning is summed up in the following inequality, which can be derived directly from \eqnrefeq:axioms.nonincrease:

The same can be obtained from the properties of the trace distance if we write out explicitly the states gathered by the distinguisher. If the QKD protocol is $\varepsilon$-secure, we have from the security criterion that

where the first state is gathered by a distinguisher interacting with the real QKD system (\figreffig:qkd.real.adv) and the second by interacting with the ideal system (\figreffig:qkd.ideal), conditioned on the protocol not aborting. A distinguisher interacting with either of the two systems from \figreffig:ex.leak gets extra information at Eve’s interface, namely the first part of Alice’s key, and only the second part of that key at Alice’s interface. The complete states gathered by interacting with \figreffig:ex.leak.real and \figreffig:ex.leak.ideal can be obtained from the previous ones by a unitary map which simply splits the original system containing Alice’s key in two and permutes the registers. Thus, the trace distance does not increase, and we have

If we analyze the same situation from the perspective of an adversary that can access only the $E$-interface, composing QKD with a protocol that reveals part of the key results in a net gain of information for this adversary. But as shown above, for a distinguisher that also receives the outputs of the honest players – the generated secret keys – there is no gain.
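The step used above, that splitting and permuting registers is a unitary operation and therefore leaves the trace distance unchanged, can be checked numerically. The sketch below uses toy two-qubit density matrices, hypothetical and unrelated to any specific key distribution, and the SWAP unitary as the register permutation.

```python
import numpy as np

def trace_distance(rho: np.ndarray, sigma: np.ndarray) -> float:
    """Half the sum of absolute eigenvalues of the Hermitian difference."""
    return 0.5 * float(np.sum(np.abs(np.linalg.eigvalsh(rho - sigma))))

# SWAP unitary on two qubits: permutes the two registers.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

rng = np.random.default_rng(1)

def random_state(dim: int) -> np.ndarray:
    """A random density matrix (positive semidefinite, unit trace)."""
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

rho, sigma = random_state(4), random_state(4)
d_before = trace_distance(rho, sigma)
d_after = trace_distance(SWAP @ rho @ SWAP.T, SWAP @ sigma @ SWAP.T)
print(np.isclose(d_before, d_after))  # True: unitaries preserve the distance
```

The same invariance holds for any unitary, which is exactly why relabeling or splitting registers cannot give the distinguisher any additional advantage.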

5.2 Sequential composition of key distribution and one-time pad

If we compose a one-time pad (depicted in \figreffig:otp.real) and a QKD protocol (depicted in \figreffig:qkd.real.adv), we obtain \figreffig:ex.otp.real, where the secret key resource used by the one-time pad is replaced by the QKD protocol. We showed in \secrefsec:ac.otp that a one-time pad constructs a secure channel (\figreffig:otp.ideal.resource), which provides Eve with only one functionality: learning the length of the message. However, this assumed that the one-time pad protocol had access to a secret key resource with a blank $E$-interface, as in \figreffig:qkd.resource.simple. In reality, QKD constructs a resource that allows Eve to prevent a key from being generated, as in \figreffig:qkd.resource.switch. It can easily be shown that, with access to this resource, a one-time pad constructs a secure channel with two controls at Eve’s interface: one for preventing any message from being sent, and a second for learning the length of the message if she did not activate the first. This resource is illustrated in \figreffig:ex.otp.ideal, along with the appropriate simulator for constructing it with a one-time pad and a QKD protocol: the combination of the two simulators used in the individual proofs of the one-time pad (\figreffig:otp.ideal) and QKD (\figreffig:qkd.ideal).

(a) The composition of a one-time pad and a QKD protocol. The authentic channels and insecure quantum channels used have not been depicted as boxes to simplify the figure.


Secret key