
Relativistic quantum cryptography


JĘDRZEJ KANIEWSKI

(MMath, University of Cambridge)

A thesis submitted in fulfilment of the requirements

for the degree of Doctor of Philosophy

in the

Centre for Quantum Technologies

National University of Singapore

2015

Declaration

I hereby declare that this thesis is my original work and has been written by me in its entirety. I have duly acknowledged all the sources of information which have been used in the thesis.

This thesis has also not been submitted for any degree in any university previously.

 

Jędrzej Kaniewski

9 September 2015

Acknowledgements

I would like to thank my supervisor, Stephanie Wehner, for the opportunity to conduct a PhD in quantum information. I am grateful for her time, effort and resources invested in my education. Working with her and being part of her active and diverse research group made the last four years a great learning experience.

The fact that I was even able to apply for PhD positions is largely thanks to my brilliant and inspiring undergraduate supervisor, mentor and friend, Dr Peter Wothers MBE. I am particularly grateful for his supportive attitude when I decided to dedicate myself to quantum information. I am grateful to St. Catharine’s College for a wonderful university experience and several long-lasting friendships.

I would like to thank my collaborators, Félix Bussières, Patrick J. Coles, Serge Fehr, Nicolas Gisin, Esther Hänggi, Raphael Houlmann, Adrian Kent, Troy Lee, Tommaso Lunghi, Atul Mantri, Nathan McMahon, Gerard Milburn, Corsin Pfister, Robin Schmucker, Marco Tomamichel, Ronald de Wolf and Hugo Zbinden, who made research enjoyable and from whom I have learnt a lot.

I am also indebted to Corsin Pfister and Le Phuc Thinh, who have read a preliminary version of this thesis, and Tommaso Lunghi and Laura Mančinska, who have given comments on parts of it.

Special thanks go to Evon Tan for being the omnipresent good spirit of CQT. Her incredible problem-solving skills allowed me to focus on research and contributed greatly to the scientific output of this thesis.

I would like to thank Valerio Scarani for being approachable and always happy to talk about various aspects of quantum information and the scientific world in general.

I am grateful to my examiners: Anne Broadbent, Marcin Pawłowski and Miklos Santha for the careful reading of this thesis and providing stimulating feedback. I would like to thank Alexandre Roulet, Jamie Sikora, Marco Tomamichel and Marek Wajs for useful comments on the defence presentation.

I thank Marek Wajs for his invaluable help with printing and typesetting this thesis.

I would like to thank Artur Ekert for his time, support and concrete guidance in moments of doubt.

Although a few years have passed, I would also like to warmly thank Krzysztof Kuśmierczyk, Anna Mazurkiewicz and Bogna Lubańska for the time and effort they put into my education and for being a source of motivation and inspiration. Everything I have managed to achieve rests on the solid foundations laid in high school and would not have been possible without their contribution. I also want to thank Poniatówka for three unforgettable years and many friendships that last to this day.

I would like to thank Jacek Jemielity for my first encounter with real science, for his remarkable perseverance and patience, and above all for his world-class sense of humour, which I often lack.

I dedicate this thesis in its entirety to my Mum, Dad, Sister and Brother, without whose support this groundbreaking work would never have come into being.

Abstract

Special relativity states that information cannot travel faster than the speed of light, which means that communication between agents occupying distinct locations incurs some minimal delay. Alternatively, we can view this as imposing temporary communication constraints between distinct agents, and such constraints turn out to be useful for cryptographic purposes. In relativistic cryptography we consider protocols in which interactions occur at distinct locations at well-defined times and we investigate how such a setting allows us to implement primitives which would not be possible otherwise.

Relativistic cryptography is closely related to non-communicating models, which have been extensively studied in theoretical computer science. Therefore, we start by discussing non-communicating models and their applications in the context of interactive proofs and cryptography. We identify which non-communicating models might be useful for the purpose of bit commitment, propose suitable bit commitment protocols and investigate their limitations. We explain how some non-communicating models can be justified by special relativity and study what consequences such a translation brings about. In particular, we present a framework for analysing the security of multiround relativistic protocols. We show that while the analysis of classical protocols against classical adversaries is tractable, the case of quantum protocols or quantum adversaries in a classical protocol constitutes a significantly harder task.

The second part of the thesis is dedicated to analysing specific protocols. We start with a recently proposed two-round quantum bit commitment protocol, first proving security under the assumption that idealised devices (a single-photon source, perfect detectors) are available. Then, we propose a fault-tolerant variant of the protocol which can be implemented using realistic devices (a weak-coherent source, noisy and inefficient detectors) and present a security analysis which takes into account losses, errors, multiphoton pulses, etc. We also report on an experimental implementation performed in collaboration with an experimental group at the University of Geneva.

In the last part we focus on classical schemes. We start by analysing a known two-round classical protocol and we show that successful cheating is equivalent to winning a certain non-local game. This is interesting as it demonstrates that even if the protocol is entirely classical, it might be advantageous for the adversary to use quantum systems. We also propose a new, multiround classical bit commitment protocol and prove its security against classical adversaries. The advantage of the multiround protocol is that it allows us to increase the commitment time without changing the locations of the agents. This demonstrates that in the classical world an arbitrarily long commitment can be achieved even if the agents are restricted to occupy a finite region of space. Moreover, the protocol is easy to implement and we discuss an experiment performed in collaboration with the Geneva group.

We conclude with a brief summary of the current state of knowledge on relativistic cryptography and some interesting open questions that might lead to a better understanding of the exact power of relativistic models.

List of publications

This thesis is based on three publications.

Chapters 3 and 4 are based on

  • Secure bit commitment from relativistic constraints [arXiv:1206.1740]
    J. Kaniewski, M. Tomamichel, E. Hänggi and S. Wehner
    IEEE Transactions on Information Theory 59, 7 (2013).
    (presented at QCrypt ’12)

Chapter 5 is based on

  • Experimental bit commitment based on quantum communication and special relativity [arXiv:1306.4801]
    T. Lunghi, J. Kaniewski, F. Bussières, R. Houlmann, M. Tomamichel, A. Kent, N. Gisin, S. Wehner and H. Zbinden
    Physical Review Letters 111, 180504 (2013).
    (presented at QCrypt ’13)

Chapter 6 is based on

  • Practical relativistic bit commitment [arXiv:1411.4917]
    T. Lunghi, J. Kaniewski, F. Bussières, R. Houlmann, M. Tomamichel, S. Wehner and H. Zbinden
    Physical Review Letters 115, 030502 (2015).
    (presented at QCrypt ’14)

During his graduate studies the author has also contributed to the following publications.

  1. Query complexity in expectation [arXiv:1411.7280]
    J. Kaniewski, T. Lee and R. de Wolf
    Automata, Languages, and Programming: Proceedings of ICALP ’15,
    Lecture Notes in Computer Science 9134 (2015).

  2. Equivalence of wave-particle duality to entropic uncertainty [arXiv:1403.4687]
    P. J. Coles, J. Kaniewski and S. Wehner
    Nature Communications 5, 5814 (2014).
    (presented at AQIS ’14)

  3. Entropic uncertainty from effective anticommutators [arXiv:1402.5722]
    J. Kaniewski, M. Tomamichel and S. Wehner
    Physical Review A 90, 012332 (2014).
    (presented at AQIS ’14 and QCrypt ’14)

  4. A monogamy-of-entanglement game with applications to device-independent quantum cryptography [arXiv:1210.4359]
    M. Tomamichel, S. Fehr, J. Kaniewski and S. Wehner
    New Journal of Physics 15, 103002 (2013).
    (presented at Eurocrypt ’13 and QCrypt ’13)


Notation and list of symbols

Symbol Meaning
$[n]$ — set of integers from $1$ to $n$
$|\cdot|$ — cardinality of a set or modulus of a number
$\mathcal{H}$ — a Hilbert space
$\dim(\mathcal{H})$ — dimension of $\mathcal{H}$
$\mathcal{H}^{*}$ — dual space of $\mathcal{H}$
$\mathcal{L}(\mathcal{H})$ — linear operators acting on $\mathcal{H}$
$\mathrm{Herm}(\mathcal{H})$ — Hermitian operators acting on $\mathcal{H}$
$\mathbb{1}$ — identity matrix
$A^{*}$ — complex conjugate of $A$
$A^{T}$ — transpose of $A$ (with respect to the standard basis)
$A^{\dagger}$ — Hermitian conjugate of $A$
$|\psi\rangle, |\varphi\rangle$ — pure quantum states
$\rho, \sigma$ — mixed quantum states
$|\Phi_{d}\rangle$ — maximally entangled state of dimension $d$
$H$ — Hadamard matrix
$\|\cdot\|_{1}$ — Schatten $1$-norm
$\|\cdot\|_{\infty}$ — Schatten $\infty$-norm
$\operatorname{tr}$ — trace
$\operatorname{tr}_{A}$ — partial trace over subsystem $A$
$\mathcal{E}$ — quantum channel
$\mathcal{I}$ — identity channel
$w(\cdot)$ — Hamming weight
$d(\cdot, \cdot)$ — Hamming distance
$\oplus$ — exclusive-OR (XOR)
$\cdot$ — finite-field multiplication
$\Pr$ — probability
$\langle \cdot, \cdot \rangle$ — inner product
$\mathcal{X}, \mathcal{Y}$ — finite alphabets
$\mathbb{F}_{q}$ — finite field of order $q$
$P_{k}$ — player (in a multiplayer game)

Chapter 1 Introduction

Quantum cryptography lies at the intersection of physics and computer science. It brings together different communities and makes for a lively and exciting environment. It demonstrates that the fundamental principles of quantum physics can be cast and studied using the operational approach of cryptography. Moreover, thanks to recent technological advances, practical applications are just around the corner.

Due to the interdisciplinary nature of quantum cryptography the relevant background knowledge spans multiple fields, which makes it particularly difficult to provide an introduction which would be both complete and concise. We have, therefore, chosen to focus on the topics which are directly related to quantum cryptography and skip over the less relevant areas.

This chapter starts with a short introduction to cryptography, which is the study of exchanging and processing information in a secure fashion. We focus on two-party (or mistrustful) cryptography, whose goal is to protect the privacy of an honest party interacting with potentially dishonest partners. Then, we introduce quantum information theory, which studies how quantum systems can be used to store and process information. We discuss the main features that distinguish it from classical information theory and briefly describe the early history of the field. The next part of this chapter brings the two topics together under the name of quantum cryptography. We give a brief account of its early days, again, with a particular focus on two-party cryptography. We finish by giving a brief outline of this thesis.

1.1 Cryptography

Cryptography has been around ever since rulers of ancient tribes realised the need to send secret (or private) messages. Ideally, such messages should reveal no information if intercepted by an unauthorised party. The solution to this problem is known as a cipher, which is simply a procedure for converting a secret message (called the plaintext) into another message (called the ciphertext), which should be intelligible to a friend (who knows the particular cipher we are using) but should give no information to an enemy. The first confirmed accounts of simple ciphers come from ancient Greece and Rome; for example, Julius Caesar used a simple shift cipher (now also known as a Caesar cipher) to ensure privacy of his correspondence. Until modern times designing practical (i.e. easy to implement and difficult to break) ciphers was essentially the only branch of cryptography. One such cipher known as the one-time pad was invented by Gilbert S. Vernam and Joseph O. Mauborgne in 1917. While the one-time pad guarantees (provably) secure communication, it requires the two parties to share a random string of bits, known as a key, which is as long as the message they want to send. This quickly becomes impractical if the parties want to exchange large amounts of data.
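The one-time pad is simple enough to state in a few lines of code. The sketch below (function and variable names are ours, for illustration) encrypts by XORing each message byte with a fresh, uniformly random key byte; decryption is the very same operation, since XORing with the key twice cancels out.

```python
import secrets

def otp(text: bytes, key: bytes) -> bytes:
    """XOR each byte of the text with the corresponding key byte.
    Applying the function twice with the same key recovers the text."""
    assert len(key) == len(text), "the key must be as long as the message"
    return bytes(t ^ k for t, k in zip(text, key))

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))  # uniform random key, used only once
ciphertext = otp(message, key)
recovered = otp(ciphertext, key)
assert recovered == message
```

Note that reusing a key even once breaks the scheme (XORing two ciphertexts cancels the key and exposes the XOR of the two messages), which is why the key material must be as long as the total amount of data ever sent.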

A report presented by Claude Shannon in 1945 marks the birth of modern cryptography [Sha49]. Shannon proposed a formal definition of a (perfectly) secure cipher and proved that the one-time pad satisfies such a stringent requirement. Moreover, he proved that any cipher that guarantees perfect security requires the key to be as long as the message (which essentially means that the one-time pad cannot be improved). But the contributions of this work go well beyond encryption and the analysis of the one-time pad, as it was the first time that cryptography was phrased in the rigorous language of mathematics. This put cryptography on equal footing with other established sciences and set the stage for information theory (developed by Shannon a couple of years later).
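In modern notation (a standard restatement rather than Shannon's original formulation), perfect secrecy and the resulting key-length bound can be written compactly:

```latex
% Perfect secrecy: the ciphertext C reveals nothing about the message M,
\Pr[M = m \mid C = c] = \Pr[M = m] \quad \text{for all } m, c,
% equivalently I(M;C) = 0. Any cipher achieving this obeys Shannon's bound
H(K) \ge H(M),
% i.e. the key K must be at least as unpredictable as the message itself.
```

The one-time pad saturates this bound, which is the precise sense in which it cannot be improved.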

Nowadays cryptography is a mature field within which hundreds of cryptographic tasks (or primitives) have been defined and studied (and encryption, while obviously very important, is just one of them). Beyond the purely practical reasons for studying these tasks, there is also a deeper motivation. Certain questions in cryptography (e.g. finding sufficient assumptions to perform a given task or proving impossibility results) give us valuable and operational insight into the underlying information theory. While classical information theory is relatively well understood, its quantum counterpart is not. That is why studying quantum cryptography is an important pursuit and contributes towards our understanding of the quantum world we (probably) live in.

In this thesis we only consider a branch of cryptography known as two-party or mistrustful cryptography, in which two parties, usually referred to as Alice and Bob, want to perform a certain task together but since they do not fully trust each other they want to minimise the amount of information revealed during the protocol. A simple example of such a scenario is the millionaires’ problem introduced by Andrew Yao [Yao82], in which two millionaires want to find out who is richer without revealing their actual wealth. This is certainly an interesting problem and, in fact, one that we often face in our everyday lives. Below we present and motivate some other natural two-party tasks.

  • Example 1: Alice uses an online movie service called Bob, which charges separately for every downloaded movie. Alice has paid for one movie and wants to download it but being paranoid about privacy she is reluctant to reveal her choice to Bob. On the other hand, Bob wants to make sure that Alice only downloads one movie (and not more) so he is not keen on giving her access to the entire database. This problem, called oblivious transfer, turns out to be a convenient building block for two-party cryptography. In fact, it can be used to construct any other two-party primitive [Kil88].

  • Example 2: Alice has supernatural powers that allow her to predict the future, for example the results of tomorrow’s draw of the national lottery. She wants to impress Bob (she likes to be admired) but she does not want him to get rich (she knows that money does not bring happiness). Hence, the goal is to commit to a message without actually revealing it until some later time. Such primitives are known as commitment schemes [Blu81, BCC88] and the simplest one, in which the committed message is just one bit, is called bit commitment and constitutes one of the main topics of this thesis.

  • Example 3: Alice is a quantum hacker and throughout the years she has exposed dozens of improperly formulated security proofs and misguided calculations. Having realised the damage done to the quantum community she has contacted a law enforcement agency represented by Bob to negotiate turning herself in. Alice and Bob want to schedule a secret meeting but for obvious security reasons they want to make sure that the location is chosen in a truly random fashion. In other words, Alice and Bob want to agree on a random choice, which neither of them can bias (or predict in advance). This primitive known as coin tossing (or coin flipping) was introduced by Blum [Blu81].

All these tasks produce conflicting interests between Alice and Bob. It is clear that security for either party can be ensured at the cost of leaving the other party completely unprotected. In the case of oblivious transfer, for example, Alice could give up her privacy and simply announce which movie she wants to watch. Alternatively, Bob could provide Alice with the entire database, hoping that she will not abuse his trust.

The goal of two-party cryptography is to first come up with the right mathematical definition of these primitives and then find in what circumstances and under what assumptions they can (or cannot) be implemented. It is also interesting to study reductions between different primitives, i.e. how to use one primitive to implement another one, which leads to a resource theory for cryptography. For example, oblivious transfer can be used to implement commitment schemes because choosing a particular message can be interpreted as committing to its label. Commitment schemes, on the other hand, can be used to generate trusted randomness. For example, in order to generate one trusted bit we use a commitment scheme with two possible values (such a primitive is known as bit commitment). Alice commits to a bit $a$, then Bob announces a bit $b$ and finally Alice reveals $a$; the outcome of the coin toss is declared to be $a \oplus b$. As long as at least one of the parties is honest the resulting bit is uniform. The use of a commitment scheme ensures that $b$ does not depend on $a$ (which would allow Bob to cheat).
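As an illustration of this reduction, the sketch below instantiates the coin toss with a salted-hash commitment. This toy commitment is only computationally hiding and binding, unlike the information-theoretic schemes studied in this thesis, and all names are ours.

```python
import hashlib
import secrets

def commit(bit: int) -> tuple[bytes, bytes]:
    """Commit to a bit via a salted hash. Hiding and binding only against
    computationally bounded parties -- a toy stand-in for a real scheme."""
    opening = secrets.token_bytes(32)
    commitment = hashlib.sha256(opening + bytes([bit])).digest()
    return commitment, opening

def verify(commitment: bytes, opening: bytes, bit: int) -> bool:
    """Check that (opening, bit) is consistent with the commitment."""
    return hashlib.sha256(opening + bytes([bit])).digest() == commitment

# Coin toss: Alice commits to a, Bob announces b, Alice opens a.
a = secrets.randbelow(2)
c, opening = commit(a)        # Alice -> Bob: the commitment c
b = secrets.randbelow(2)      # Bob -> Alice: b, chosen without knowing a
assert verify(c, opening, a)  # Alice -> Bob: the opening; Bob checks it
outcome = a ^ b               # uniform as long as one party is honest
```

The commitment ensures exactly the property stated above: Bob must announce his bit before learning Alice's, and Alice cannot change her bit after seeing his.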

The holy grail of the field is so-called information-theoretic security. There, the basic assumption is that the dishonest party is restricted by the underlying information theory, which is arguably the weakest assumption that one needs to perform security analysis. The term information-theoretic security goes back to Shannon (e.g. see his definition of secure encryption [Sha49]).

Unfortunately, it turns out that two-party primitives cannot be implemented with information-theoretic security (for both parties) unless we make some further assumptions. Below we give a brief overview of various (reasonable) assumptions that make information-theoretically secure two-party cryptography possible.

  • Trusted third party: The trivial solution is to introduce a trusted third party, which implements the primitive for Alice and Bob. In the paranoid world, in which Alice and Bob trust nobody but themselves, this is not a satisfactory solution. Moreover, it makes all tasks trivially possible, which leaves nothing interesting to study.

  • Pre-shared resources: Another solution that allows for two-party cryptography is to equip Alice and Bob with some shared correlations. This could be either shared randomness [Riv99] or access to a source of inherent and unpredictable noise that allows them to generate such correlations during the protocol [Cré97, WNI03].

  • Technological limitations: The standard real-world solution to the commitment task is for Alice to lock her message in a safe box, which she then hands over to Bob while keeping the key. Whenever Alice wants to reveal the message, she gives the key to Bob, who opens the safe box and reads the message. This is secure as long as Alice has no way of remotely modifying the message and Bob has no tools to open the safe box, i.e. we must assume that they are subject to certain technological limitations. One can also assume that their “digital technology” is limited, e.g. by restricting their computational power or storage capabilities, which again makes secure two-party cryptography possible. The former leads to the rich and practically important field of computational security, while the latter leads to the bounded storage model [Mau91].

  • Communication constraints: It is well known that interrogating suspects one by one leads to better results than dealing with the whole group at the same time. In the cryptographic language this corresponds to forcing one (or more) parties to delegate agents, who perform certain parts of the protocol without communicating. Such a setting was originally introduced in complexity theory under the name of multiprover models to evade certain impossibility results [BGKW88]. These models are interesting from the cryptographic point of view but we must be explicit about how they are adjusted to fit the framework of standard two-party cryptography (in which there are only two parties interacting and not more). On the bright side, some types of non-communicating models can (with subtle adjustments) be implemented by requiring multiple agents to interact simultaneously at multiple locations (under the assumption that the speed of light is finite). The first explicit examples of such relativistic protocols came from Adrian Kent [Ken99, Ken05]. This field, now known as relativistic cryptography, constitutes the main topic of this thesis.

1.2 Quantum information theory

As mentioned before, the report written by Shannon in 1945 marks the beginning of modern cryptography [Sha49]. Thinking about encryption and the one-time pad led him to questions about the nature of information. Shannon’s next paper investigating fundamental limits of compression and transmission [Sha48] is considered the beginning of (classical) information theory, which became an active field of research with a wide range of practical implications. While the basic framework of quantum mechanics already existed at the time (introduced in the 1920s and 30s by Bohr, Born, de Broglie, Dirac, Einstein, Heisenberg, Planck, Schrödinger and others), rigorous connections between the two were not established until much later.

In 1935 Einstein, Podolsky and Rosen wrote a paper in which they argue that quantum mechanics cannot be considered a complete theory [EPR35]. They postulate that for every measurement whose outcome is certain there exists an “element of reality” and deduce that due to the uncertainty principle incompatible observables cannot have simultaneous elements of reality. On the other hand, they note that in the case of entangled particles the elements of reality of one system depend on the measurements performed on the other. Since they perceive the elements of reality as something objective, independent of any measurement process, they conclude that the quantum-mechanical description must be incomplete. This idea was further developed by John Bell [Bel64] who realised that the assumptions of Einstein, Podolsky and Rosen boil down to the existence of local hidden variables, which completely determine the outcome of all possible measurements. Bell showed that any theory satisfying these requirements (like the classical theory) is subject to certain restrictions (now known as Bell inequalities) and demonstrated that quantum mechanics violates such restrictions. The first explicit Bell inequality proposed by Clauser, Horne, Shimony and Holt [CHSH69] is clear-cut evidence that the set of quantum correlations is strictly bigger than its classical counterpart. Realising that quantum mechanics gives rise to an information theory which is qualitatively different from the classical version opened a new, fruitful research direction. Questions concerning storing or transmitting information using quantum systems have the appealing feature of being operational and fundamental at the same time. In the 1970s Holevo determined how many classical bits can be reliably stored in a quantum system [Hol73] and Helstrom showed how to optimally distinguish two quantum states [Hel76].
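The gap between classical and quantum correlations can be made concrete with a few lines of arithmetic. For projective measurements at angles $\alpha$ and $\beta$ on a maximally entangled pair of qubits, the correlator is $\cos 2(\alpha - \beta)$; the sketch below (a standard textbook computation, with names of our choosing) evaluates the CHSH combination at the optimal angles and recovers the quantum value $2\sqrt{2}$, above the classical bound of $2$.

```python
import math

def correlator(alpha: float, beta: float) -> float:
    """E(alpha, beta) for measurements at the given angles on a
    maximally entangled pair of qubits."""
    return math.cos(2 * (alpha - beta))

# Optimal measurement angles for the CHSH inequality.
alice = [0.0, math.pi / 4]
bob = [math.pi / 8, -math.pi / 8]

S = (correlator(alice[0], bob[0]) + correlator(alice[0], bob[1])
     + correlator(alice[1], bob[0]) - correlator(alice[1], bob[1]))

assert S > 2                               # violates the classical bound |S| <= 2
assert abs(S - 2 * math.sqrt(2)) < 1e-12   # saturates Tsirelson's bound
```

No assignment of predetermined (local hidden variable) outcomes can push $S$ above $2$, which is exactly the restriction Bell identified.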

In 1980 Boris Tsirelson published a breakthrough paper, which exactly characterises the set of correlations achievable using quantum systems (in a restricted class of scenarios) [Tsi80]. Another important result concerning quantum correlations comes from Reinhard Werner, who showed that entanglement, while necessary, is not a sufficient condition for observing stronger-than-classical correlations [Wer89]. In 1982 Wootters and Żurek proved the celebrated no-cloning theorem, which states that given a single copy of an unknown quantum state, there does not exist a physical procedure that produces two (perfect) copies [WŻ82]. While the result itself is rather simple (including the proof), it has far-reaching consequences and shows that one should be rather careful when applying classical intuition to quantum systems. Around the same time the first ideas to use quantum systems to perform computation came about. Richard Feynman proposed the concept of quantum simulation, i.e. using one quantum system to simulate another [Fey82], while David Deutsch initiated the study of quantum computation by introducing the concept of a quantum Turing machine and presenting a simple problem which can be solved more efficiently using quantum systems [Deu85]. While the problem introduced by Deutsch is of little practical use, it is important as the first demonstration that quantum computing is strictly more powerful than its classical counterpart.

In 1994 Peter Shor published a paper that changed the status of quantum computation from an exercise in linear algebra to a field of potentially enormous practical impact [Sho94]. Shor proposed an algorithm that can efficiently factor large numbers and solve the discrete logarithm problem, which, as a consequence, allows one to break all commonly used public-key cryptosystems. In 1996 Lov Grover published an algorithm that gives a quadratic speed-up when searching an unstructured database [Gro96]. These two results sparked enormous interest as they showed that quantum computation might be important from the practical point of view. Since then the task of finding new quantum algorithms and building an actual quantum computer has been a full-time job of hundreds of computer scientists, physicists and engineers.
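Only the core of Shor's algorithm, finding the order $r$ of a random $a$ modulo $N$, requires a quantum computer; the surrounding number theory is classical. The toy sketch below (function names are ours, for illustration) shows the classical post-processing step: once $r$ is known, nontrivial factors of $N$ fall out of a gcd computation.

```python
import math

def factor_from_order(a: int, r: int, N: int) -> tuple[int, int]:
    """Given the order r of a modulo N (the part Shor's algorithm
    computes quantumly), recover nontrivial factors of N classically."""
    assert pow(a, r, N) == 1 and r % 2 == 0
    x = pow(a, r // 2, N)
    assert x != N - 1  # if x = -1 (mod N) the run is simply retried
    # x^2 = 1 (mod N) with x != +-1, so N divides (x-1)(x+1) but
    # neither factor alone; each gcd yields a nontrivial divisor.
    return math.gcd(x - 1, N), math.gcd(x + 1, N)

# Toy example: the order of 7 modulo 15 is 4, since 7^4 = 2401 = 1 (mod 15).
assert sorted(factor_from_order(7, 4, 15)) == [3, 5]
```

The hard part, of course, is that no efficient classical method for computing $r$ is known; the quantum period-finding subroutine supplies it.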

It seems fair to say that it is the breakthroughs in quantum computation that gave the whole field a significant push and encouraged many brilliant researchers to work on quantum information. Since then the field has developed rapidly and this includes aspects closely related to quantum computation, like quantum error correction or quantum computer architecture, but also areas which are not directly related, like quantum correlations, quantum foundations, quantum Shannon theory or quantum cryptography. For more information we refer to a brief survey on early quantum information written by Bennett and Shor in 1998 [BS98] or to a book by Nielsen and Chuang [NC00], which became the primary textbook in the field (in particular for quantum computation). For a detailed introduction to the information-theoretic aspects (the quantum Shannon theory) see Chapter 1 of Mark M. Wilde’s book [Wil13].

1.3 Quantum cryptography

In the late 1960s Stephen Wiesner wrote a paper on how to use quantum particles of spin-1/2 to produce money that is “physically impossible to counterfeit”. The paper got rejected from a journal and ended up in Wiesner’s drawer (the paper was eventually published in ACM SIGACT News [Wie83] about fifteen years later). These ideas were further pursued by Bennett, Brassard, Breidbart and Wiesner [BBBW83] and led to a groundbreaking paper proposing the first quantum key distribution protocol, which allows two distant parties to communicate securely through an insecure quantum channel [BB84]. In 1991 Artur Ekert proposed a quantum key distribution protocol that relied on entanglement and Bell’s theorem [Eke91]. Another protocol (which relies on entanglement but not Bell’s theorem) was presented in Ref. [BBM92] and soon the first experimental demonstration of quantum key distribution was reported together with concrete solutions for the classical post-processing phase and explicit security estimates [BBB92]. Since then an enormous amount of progress has been made in both theoretical and practical aspects of quantum key distribution and it is well beyond the scope of this introduction to discuss it. A recent article by Ekert and Renner is an excellent account of the current state of quantum key distribution [ER14].

Before we go into the details let us state very clearly that throughout this thesis we work under the (implicit) assumption that Alice and Bob trust their own devices. In other words, if the protocol requires Alice to generate a certain quantum state, she is capable of constructing a device that does just that and she may rest assured that the source does not accumulate information about the previous uses or leak secret data through extra degrees of freedom. While this assumption seems natural and easy to ensure in the classical world, it becomes more of a challenge in the quantum world simply because our understanding and expertise in quantum technologies are limited. These considerations gave rise to the field of device-independent cryptography, which aims to design protocols which remain secure even if executed using faulty or malicious devices. The fact that such strong security guarantees are even possible is clearly remarkable and this topic has received a lot of interest in the last couple of years. Due to the large volume of works on this topic we do not attempt to list the relevant references and instead point the interested reader to the comprehensible and accessible lecture notes by Valerio Scarani [Sca12] as well as Sections IV.C and IV.D of a recent review on Bell nonlocality [BCP14].

While quantum key distribution was and still remains the predominant area of research in quantum cryptography, other applications have been present from the very beginning, as exemplified by Wiesner’s unforgeable quantum money. The original paper of Bennett and Brassard contains a bit-commitment-based coin tossing protocol [BB84]. As pointed out by the authors, the protocol is insecure if one of the parties leaves the quantum states untouched (instead of performing the prescribed measurements) but they consider it a “merely theoretical threat” due to the technological difficulty of implementing such a strategy. In 1991 Brassard and Crépeau proposed a different quantum bit commitment protocol [BC91], which does not suffer from the previous problem but is vulnerable to an adversary who can perform coherent measurements, i.e. joint measurements on multiple quantum particles, which, again, is considered difficult. By combining the two quantum bit commitment protocols they obtain a coin tossing protocol which can only be broken by an adversary who can both keep entanglement and perform coherent measurements. In the meantime a quantum protocol for oblivious transfer was proposed, whose security, again, relies on the adversary being technologically limited [BBCS92]. In 1993 Brassard, Crépeau, Jozsa and Langlois [BCJL93] proposed a new bit commitment protocol which comes with a complete security proof that does not rely on any technological assumptions. In other words, the protocol is claimed to be secure against all attacks compatible with quantum physics. In 1992 Bennett et al. suggested how bit commitment and quantum communication can be used to construct oblivious transfer [BBCS92].
This construction was formalised and proven secure by Yao [Yao95], who refers to it as the “canonical construction”, which gave the optimistic impression that quantum mechanics allows for secure two-party cryptography without any extra assumptions. Unfortunately, it was later discovered that the protocol proposed in Ref. [BCJL93] is insecure, which soon led to a complete impossibility result [May97, LC97]. For a detailed account of quantum cryptography until that point please consult Refs. [BC96, Cré96, BCMS97].

The initial results of Mayers, Lo and Chau began a sequence of negative results. Impossibility of bit commitment immediately rules out oblivious transfer and, in fact, the same techniques can be used to rule out any one-sided two-party computation (i.e. a primitive in which inputs from two parties produce output which is only given to one of them) [Lo97]. The more complicated case of two-sided computation was first considered by Colbeck (for a restricted class of functions) [Col07], while the general impossibility result was proven by Buhrman, Christandl and Schaffner [BCS12]. In the case of string commitment (i.e. when we simultaneously commit to multiple bits) it is clear that the perfect primitive cannot be implemented, but the situation becomes slightly more involved when it comes to imperfect primitives, as the results depend on the exact security criteria used [BCH06, BCH08]. For more recent impossibility proofs of bit commitment see Refs. [DKSW07, WTHR11, CDP13].

While perfect quantum bit commitment is not possible, it is interesting to know what security trade-offs are permitted by quantum mechanics. In the classical case the trade-offs are trivial: in any classical protocol at least one of the parties can cheat with certainty. Preliminary results on the quantum security trade-offs were proven by Spekkens and Rudolph [SR01], while the optimal trade-off curve was found by Chailloux and Kerenidis [CK11]. Interestingly enough, the achievability is argued through a construction that uses a complicated and rather poorly understood weak coin flipping protocol by Mochon [Moc07].

Another direction (similar to what was done previously in the classical world) is to identify the minimal assumptions that would make two-party cryptography possible in the quantum world.

One solution available in the classical world is to give Alice and Bob access to some trusted randomness. The quantum generalisation of this assumption would be to give Alice and Bob access to quantum systems or some other source of stronger-than-classical correlations [BCU06, WWW11]. Such correlations indeed allow us to implement secure bit commitment. The advantage of this assumption over the classical counterpart is that in the classical case we had to trust whoever distributed the randomness (in the original paper referred to as the trusted initialiser [Riv99]). On the other hand, stronger-than-classical correlations guarantee security regardless of where they came from.

A natural quantum extension of the bounded storage model proposed by Maurer [Mau91] is the quantum bounded storage model [DFSS05, DFR07, Sch10] and its generalisation to the case of noisy quantum storage [WST08, KWW12, BFW14]. While storing classical information seems easy and cheap (which makes the assumption of the adversary’s bounded storage not particularly convincing), reliable storage of quantum information continues to pose a significant challenge and, hence, makes for a reasonable assumption. Another technological limitation that leads to secure bit commitment is the restriction on the class of quantum measurements that the dishonest party can perform [Sal98].

The proposal to combine quantum mechanics with relativistic communication constraints (attributed to Louis Salvail) was already mentioned in 1996 [BC96, Cré96]. The early papers of Kent [Ken99, Ken05] consider security against quantum adversaries but the actual protocols are completely classical. To the best of our knowledge, the first quantum relativistic protocol was proposed by Colbeck and Kent for a certain variant of coin tossing [CK06]. This marks the beginning of quantum relativistic cryptography.

1.4 Outline

The main theme of this thesis is relativistic quantum cryptography with a particular focus on commitment schemes. Chapter 2 contains the necessary background in quantum information theory and cryptography.

In Chapter 3 we introduce non-communicating models as they originally appeared in the context of interactive proofs. We show why they are useful in cryptography and determine the exact communication constraints that might allow for secure commitment schemes. For each of these models we present a provably secure bit commitment protocol.

Chapter 4 introduces the framework for relativistic protocols. We start with a couple of simple examples and then present a procedure which maps a relativistic protocol onto a model with partial communication constraints. We show that at least in some scenarios the analysis of such models is tractable.

In Chapter 5 we focus on a particular quantum bit commitment protocol. We analyse its security by mapping it onto a simple quantum guessing game. Moreover, we adapt the original protocol to make it robust against experimental errors and we extend the security analysis appropriately. We briefly report on an implementation of the protocol done in collaboration with an experimental group at the University of Geneva.

In Chapter 6 we propose a new, classical multiround bit commitment protocol and analyse its security against classical adversaries. The multiround protocol allows us to achieve arbitrarily long commitments (at the cost of growing resources) with explicit and easily computable security guarantees. Again, we briefly discuss an experiment performed in collaboration with the Geneva group.

Chapter 7 summarises the content of this thesis and outlines a couple of interesting directions for future research in quantum relativistic cryptography.

Chapter 2 Preliminaries

In this chapter we establish the notation, nomenclature and some basic concepts used throughout this thesis.

2.1 Notation and miscellaneous lemmas

2.1.1 Strings of bits

Given two bits $x, y \in \{0, 1\}$ we use “$\oplus$” to denote their exclusive-OR (XOR)
$$x \oplus y = x + y \bmod 2.$$

For an $n$-bit string $x \in \{0, 1\}^n$, let $x_j$ be the $j^{\text{th}}$ bit of $x$ and the XOR of two strings (of equal length) is defined bitwise. The fractional Hamming weight of $x$ is the fraction of ones in the string
$$w(x) = \frac{|\{j : x_j = 1\}|}{n},$$

where $|\cdot|$ denotes the cardinality of the set. The fractional Hamming distance between $x$ and $y$ is the fraction of positions at which the two strings differ
$$d(x, y) = \frac{|\{j : x_j \neq y_j\}|}{n}.$$

Note that the Hamming weight can be interpreted as the distance from the string of all zeroes $0^n$: $w(x) = d(x, 0^n)$. For an index set $S \subseteq \{1, 2, \ldots, n\}$, we use $x_S$ to denote the substring of $x$ specified by the indices in $S$. If $c$ is a bit, we define $x \oplus c$ to be the string obtained by XORing every bit of $x$ with $c$.
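To make the definitions concrete, here is a minimal sketch in Python (the helper names are illustrative, not from the thesis):

```python
# Fractional Hamming weight and distance for bit strings,
# represented as lists/tuples of 0s and 1s.

def hamming_weight(x):
    """Fraction of ones in the string x."""
    return sum(x) / len(x)

def hamming_distance(x, y):
    """Fraction of positions at which x and y differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y)) / len(x)

# The weight equals the distance from the all-zero string.
x = [1, 0, 1, 1]
assert hamming_weight(x) == hamming_distance(x, [0, 0, 0, 0]) == 0.75
```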

2.1.2 Cauchy-Schwarz inequality for probabilities

When dealing with probabilities we use uppercase letters to denote random variables and lowercase letters to denote values they might take, e.g. $\Pr[X = x]$. For brevity we use $\Pr[x]$ as a shorthand notation for $\Pr[X = x]$.

Lemma 2.1.

Let $X$ be a uniform random variable over $[N]$, i.e. $\Pr[X = x] = 1/N$ for all $x \in [N]$, and let $\{A_j\}_{j=1}^{t}$ be a family of events defined on $X$. Let
$$p = \frac{1}{t} \sum_{j=1}^{t} \Pr[A_j]$$

be the average probability (of these events)
$$c = \sum_{1 \le j < k \le t} |A_j \cap A_k|$$

and $c$ be the cumulative size of the pairwise intersections. Then the following inequality holds
$$p \le \frac{1}{2t} \Bigl( 1 + \sqrt{1 + \frac{8c}{N}} \Bigr).$$

Proof.

Each event $A_j$ can be represented by a vector $v_j \in \mathbb{R}^N$ whose entries are labelled by integers from $[N]$. If a particular outcome belongs to the event, we set the corresponding component to $1$ and if it does not we set it to $0$:
$$(v_j)_x = \begin{cases} 1 & \text{if } x \in A_j, \\ 0 & \text{otherwise.} \end{cases}$$

Moreover, let $u \in \mathbb{R}^N$ be the normalised, uniform vector: $u_x = 1/\sqrt{N}$ for all $x \in [N]$. It is straightforward to check that with these definitions we have
$$\langle v_j, u \rangle = \sqrt{N} \Pr[A_j] \quad \text{and} \quad \langle v_j, v_k \rangle = |A_j \cap A_k|,$$

where $\langle \cdot, \cdot \rangle$ denotes the standard inner product on $\mathbb{R}^N$ and since the vectors are non-negative we have $\langle v_j, v_k \rangle \ge 0$. Since the inner product is linear we have
$$\Bigl\langle \sum_{j=1}^{t} v_j, u \Bigr\rangle = \sqrt{N}\, t p,$$

which can be upper bounded using the Cauchy-Schwarz inequality. Since $\|u\| = 1$ we have
$$\Bigl\langle \sum_{j=1}^{t} v_j, u \Bigr\rangle^2 \le \Bigl\| \sum_{j=1}^{t} v_j \Bigr\|^2 = \sum_{j=1}^{t} |A_j| + 2c = N t p + 2c,$$

which gives the following quadratic constraint
$$t^2 p^2 - t p - \frac{2c}{N} \le 0.$$

Solving for $p$ gives the desired bound. ∎
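A bound of this type is easy to sanity-check numerically. The sketch below assumes the bound in the form $p \le \frac{1}{2t}\bigl(1 + \sqrt{1 + 8c/N}\bigr)$ (the form consistent with the quadratic constraint in the proof) and tests it on randomly generated events over a small sample space:

```python
import itertools
import random

def check_lemma(N, events):
    """Verify p <= (1 + sqrt(1 + 8c/N)) / (2t) for events given as
    subsets of the uniform sample space {0, ..., N-1}."""
    t = len(events)
    p = sum(len(A) for A in events) / (N * t)          # average probability
    c = sum(len(A & B) for A, B in itertools.combinations(events, 2))
    bound = (1 + (1 + 8 * c / N) ** 0.5) / (2 * t)
    return p <= bound + 1e-12                          # tolerance for rounding

random.seed(0)
N = 20
for _ in range(100):
    events = [set(x for x in range(N) if random.random() < 0.4)
              for _ in range(4)]
    assert check_lemma(N, events)
```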

2.1.3 Chernoff bound for the binomial distribution

Lemma 2.2 ([Che52]).
Let $X_1, \ldots, X_n$ be independent random variables taking on values 0 or 1. Let $X = \sum_{j=1}^{n} X_j$ and let $\mu = \mathbb{E}[X]$ be the expectation value of $X$. Then for any $\delta > 0$ the following inequality holds
$$\Pr[X \ge (1 + \delta)\mu] \le \exp\Bigl( -\frac{\delta^2 \mu}{2 + \delta} \Bigr).$$
Alternatively, setting $\delta = k/\mu - 1$ (for $k > \mu$) gives
$$\Pr[X \ge k] \le \exp\Bigl( -\frac{(k - \mu)^2}{k + \mu} \Bigr).$$
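As a sanity check, a Chernoff-type bound can be compared against the exact binomial tail. The sketch below uses the standard multiplicative form $\Pr[X \ge (1+\delta)\mu] \le \exp\bigl(-\delta^2\mu/(2+\delta)\bigr)$; the exact constants in the statement used here may differ, but the comparison technique is the same:

```python
import math

def binom_tail_upper(n, p, k):
    """Exact Pr[X >= k] for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

def chernoff_upper(n, p, delta):
    """Chernoff bound Pr[X >= (1+delta)*mu] <= exp(-delta^2 mu / (2+delta))."""
    mu = n * p
    return math.exp(-delta**2 * mu / (2 + delta))

n, p, delta = 100, 0.1, 0.5
mu = n * p
k = math.ceil((1 + delta) * mu)        # threshold (1+delta)*mu = 15
assert binom_tail_upper(n, p, k) <= chernoff_upper(n, p, delta)
```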

2.2 Quantum mechanics

Quantum mechanics, despite its mysterious nature, admits a relatively simple mathematical description. While it is an interesting question to ask why quantum mechanics is the way it is, instead of being more (or less) powerful (and indeed such questions constitute the main topic of quantum foundations), we take a more hands-on approach. Namely, we accept the standard textbook formulation of quantum mechanics as it is and investigate its consequences. Section 2.2.1 defines the basic notions of linear algebra (and, hence, can be skipped by most readers), which will be necessary to describe the quantum formalism in Section 2.2.2.

2.2.1 Linear algebra

The following section contains the bare minimum of linear algebra necessary to understand this thesis and serves primarily the purpose of establishing consistent notation and nomenclature. For a complete and detailed introduction to linear algebra we refer to the excellent textbooks by Rajendra Bhatia [Bha97, Bha09]. In this thesis we restrict our attention to finite-dimensional systems. Let $\mathcal{H}$ be a Hilbert space of finite dimension $d$ over the complex numbers. Let $\mathcal{H}^*$ denote the dual space of $\mathcal{H}$, i.e. the space of linear functionals on $\mathcal{H}$. We employ the bra-ket notation proposed by Paul Dirac [Dir39], in which elements of $\mathcal{H}$ are written as kets $|\psi\rangle$ and each ket has an associated bra, denoted by $\langle\psi|$, such that applying the linear functional $\langle\psi|$ to an arbitrary vector $|\phi\rangle$, written as a bra-ket $\langle\psi|\phi\rangle$, corresponds exactly to evaluating the inner product between $|\psi\rangle$ and $|\phi\rangle$. A set of vectors $\{|e_j\rangle\}_j$ constitutes an orthonormal basis if the vectors are orthogonal and normalised, i.e. $\langle e_j | e_k \rangle = \delta_{jk}$, where $\delta_{jk}$ is the Kronecker delta. Let $\mathcal{L}(\mathcal{H})$ be the set of linear operators acting on $\mathcal{H}$. The identity operator, denoted by $\mathbb{1}$, is the unique operator that satisfies $\mathbb{1}|\psi\rangle = |\psi\rangle$ for all $|\psi\rangle \in \mathcal{H}$. Writing a linear operator $A$ in a particular basis leads to a (complex) matrix whose entries equal $A_{jk} = \langle e_j | A | e_k \rangle$, where the expression should be understood as $\langle e_j | (A | e_k \rangle)$. Note that while the operator and its matrix representation are not the same object (the former is basis-independent, while the latter corresponds to a particular basis), for the purpose of this thesis this distinction may be ignored and we will use the two terms interchangeably. The trace of a square matrix is the sum of its diagonal entries
$$\operatorname{tr} A = \sum_j A_{jj}.$$
The Hermitian conjugate of an operator $A$, denoted by $A^\dagger$, is defined to satisfy $(A^\dagger)_{jk} = (A_{kj})^*$, where $(\cdot)^*$ denotes the complex conjugate. An operator satisfying $A = A^\dagger$ is called Hermitian (or self-adjoint) and we denote the set of Hermitian operators acting on $\mathcal{H}$ by $\mathrm{Herm}(\mathcal{H})$. Operators satisfying $UU^\dagger = U^\dagger U = \mathbb{1}$ are called unitary operators or unitaries. It is easy to verify that for a Hermitian operator $A$ we have $\langle\psi|A|\psi\rangle \in \mathbb{R}$ for all vectors $|\psi\rangle$. 
A Hermitian operator $A$ is called positive semidefinite if $\langle\psi|A|\psi\rangle \ge 0$ for all vectors $|\psi\rangle$, which is often written as $A \ge 0$. Every linear operator $A$ admits a singular value decomposition, i.e. it can be written in the form $A = U \Sigma V^\dagger$, where $U$ and $V$ are unitary operators and $\Sigma$ is a diagonal matrix of real, non-negative entries known as the singular values of $A$. Let $\sigma(A)$ be the vector of singular values. For $p \ge 1$ the Schatten $p$-norm of $A$, denoted by $\|A\|_p$, is defined as the vector $p$-norm of $\sigma(A)$:
$$\|A\|_p = \Bigl( \sum_j \sigma_j(A)^p \Bigr)^{1/p}.$$
For the purpose of this thesis we will only need the limit $p \to \infty$ so let us define
$$\|A\| := \|A\|_\infty = \max_j \sigma_j(A),$$
which is the largest singular value of $A$.
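Since the Schatten $\infty$-norm is just the largest singular value, it is straightforward to compute numerically. The following sketch (illustrative names, using NumPy) checks it on a small matrix that is both Hermitian and unitary, so that all its singular values equal 1:

```python
import numpy as np

def schatten_norm(A, p=np.inf):
    """Schatten p-norm: the vector p-norm of the singular values of A."""
    s = np.linalg.svd(A, compute_uv=False)   # singular values, non-negative
    if p == np.inf:
        return s.max()                       # operator norm: largest singular value
    return (s**p).sum() ** (1.0 / p)

# This matrix is Hermitian and unitary, so both singular values are 1.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
assert np.allclose(H, H.conj().T)                   # Hermitian
assert np.allclose(H @ H.conj().T, np.eye(2))       # unitary
assert np.isclose(schatten_norm(H, np.inf), 1.0)    # largest singular value
assert np.isclose(schatten_norm(H, 1), 2.0)         # trace norm = 1 + 1
```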

2.2.2 Quantum formalism

A pure state of a quantum system is described by a normalised vector, i.e. $|\psi\rangle \in \mathcal{H}$ such that $\langle\psi|\psi\rangle = 1$. We adopt the convention that every $d$-dimensional Hilbert space is equipped with an orthonormal basis $\{|0\rangle, |1\rangle, \ldots, |d-1\rangle\}$, which we call the computational (or standard) basis. Writing $|\psi\rangle$ in this basis allows us to interpret it as a $d$-dimensional complex unit vector. The global phase of a state is inconsequential, i.e. quantum mechanics tells us that the vectors $|\psi\rangle$ and $e^{i\varphi}|\psi\rangle$ (for $\varphi \in \mathbb{R}$) correspond to the same physical state. The smallest non-trivial quantum system corresponds to $d = 2$ and is called a qubit (a term coined by Schumacher and Wootters [Sch95]). The Hadamard operator is defined as
$$H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$$
or $H|0\rangle = (|0\rangle + |1\rangle)/\sqrt{2}$, $H|1\rangle = (|0\rangle - |1\rangle)/\sqrt{2}$. It is easy to verify that $H$ is simultaneously Hermitian ($H = H^\dagger$) and unitary ($HH^\dagger = \mathbb{1}$). Define $|+\rangle = H|0\rangle$, $|-\rangle = H|1\rangle$ and let us call $\{|+\rangle, |-\rangle\}$ the Hadamard (or diagonal) basis. The computational and Hadamard bases are widely used in cryptography because they are an example of mutually unbiased bases (for $d = 2$), i.e. they satisfy
$$|\langle a | H | b \rangle|^2 = \frac{1}{2} \quad \text{for all } a, b \in \{0, 1\},$$
which captures the notion of being maximally incompatible. A mixed quantum state on $\mathcal{H}$ is a Hermitian operator, which is positive semidefinite and of unit trace. We define the set of (mixed) quantum states on $\mathcal{H}$ as
$$\mathcal{S}(\mathcal{H}) = \{ \rho \in \mathrm{Herm}(\mathcal{H}) : \rho \ge 0 \text{ and } \operatorname{tr}\rho = 1 \}.$$
The operator $\rho$ describing a mixed state is called the density matrix. Mixed states are a generalisation of pure states: an arbitrary pure state $|\psi\rangle$ can be represented as a density matrix $\rho = |\psi\rangle\langle\psi|$. Mixed states arise naturally when dealing with composite systems. Suppose we have two systems (or registers) $A$ and $B$ described by Hilbert spaces $\mathcal{H}_A$ and $\mathcal{H}_B$, respectively, and we want to describe the global state of the system. What are the allowed states on $A$ and $B$ taken together? In case of pure states, quantum mechanics tells us to take the tensor product of the two Hilbert spaces, i.e. $\mathcal{H}_{AB} = \mathcal{H}_A \otimes \mathcal{H}_B$. Therefore, an arbitrary pure bipartite state can be written as
$$|\psi\rangle_{AB} = \sum_{jk} c_{jk}\, |j\rangle_A |k\rangle_B,$$
where $|j\rangle_A |k\rangle_B$ should be understood as $|j\rangle_A \otimes |k\rangle_B$ (the tensor product symbol is commonly omitted to avoid notational clutter). 
Given a bipartite system one might wonder what can be said about the marginal states on $A$ and $B$ (similar to the concept of the marginals of a probability distribution). In particular, one would expect that if we restrict ourselves to measurements on $A$ alone then it should be possible to “truncate” $|\psi\rangle_{AB}$ to a state on $A$ by disregarding any information about $B$. This intuition leads to the concept of reduced states. Let us first write the density matrix corresponding to $|\psi\rangle_{AB}$:
$$\rho_{AB} = |\psi\rangle\langle\psi|_{AB} = \sum_{jklm} c_{jk} c_{lm}^*\, |j\rangle\langle l|_A \otimes |k\rangle\langle m|_B.$$
Given an operator acting on multiple registers we define the operation of partial trace which “traces out” a particular register, e.g.
$$\operatorname{tr}_B(\rho_{AB}) = \sum_{n} \bigl( \mathbb{1}_A \otimes \langle n|_B \bigr)\, \rho_{AB}\, \bigl( \mathbb{1}_A \otimes |n\rangle_B \bigr).$$
Note that the standard trace operation corresponds to tracing out all the registers. It is easy to verify that partial traces commute so we can without ambiguity write $\operatorname{tr}_{AB} = \operatorname{tr}_A \circ \operatorname{tr}_B = \operatorname{tr}_B \circ \operatorname{tr}_A$. Tracing out the $B$ register from the density matrix $\rho_{AB}$ gives
$$\rho_A = \operatorname{tr}_B(\rho_{AB}) = \sum_{jlk} c_{jk} c_{lk}^*\, |j\rangle\langle l|_A,$$
which is easily verified to be a valid quantum state and which we call the reduced state on $A$. It is easy to verify that the knowledge of $\rho_A$ suffices to make all possible predictions about operations or measurements that act solely upon subsystem $A$. In cryptography reduced states are important because they allow us to quantify the amount of knowledge that a particular subsystem provides to its holder. Once we know how to describe the state of a quantum system we would like to know how we can interact with it. To extract any information from a quantum state one needs to measure it. Note that this is one of the aspects in which quantum theory differs significantly from its classical counterpart. In the classical world the object and its (complete) description are operationally equivalent: given the description one can construct the object and given the object one can determine (to an arbitrary precision) its description. In the quantum world a single copy of an object gives us significantly less information than its complete description as demonstrated by the no-cloning theorem [WŻ82]. 
In contrast to the classical world, every quantum system can be measured in multiple ways, which means that the measurement process must be described explicitly. A measurement on a $d$-dimensional quantum state which yields outcomes from a finite alphabet $\mathcal{X}$ is a collection of positive semidefinite operators $\{M_x\}_{x \in \mathcal{X}}$ that add up to the ($d$-dimensional) identity
$$\sum_{x \in \mathcal{X}} M_x = \mathbb{1}. \qquad (2.1)$$
Quantum mechanics is a probabilistic theory, i.e. it only allows us to calculate probabilities of observing different outcomes. According to Born’s rule [Bor26] measuring the state $\rho$ yields outcome $x$ with probability
$$\Pr[x] = \operatorname{tr}(M_x \rho).$$
It is easy to see that the condition (2.1) is imposed to ensure that the resulting probability distribution is non-negative and normalised for every state. Note that such an information-theoretic formulation of the measurement process does not necessarily coincide with the notion of measuring a physical quantity, e.g. the outcome might not be a number so one cannot talk about the expectation value or the standard deviation of the measurement. The process of measuring a quantum state can be seen as a map that takes a quantum state and outputs a probability distribution. This naturally generalises to maps in which the output remains quantum and such maps are known as quantum channels. The identity channel (i.e. the unique channel that leaves every state unaffected) is denoted by $\mathcal{I}$. Generally, a map $\Lambda : \mathcal{L}(\mathcal{H}_A) \to \mathcal{L}(\mathcal{H}_B)$ is a quantum channel iff: $\Lambda$ is linear, i.e. $\Lambda(\alpha\rho + \beta\sigma) = \alpha\Lambda(\rho) + \beta\Lambda(\sigma)$ for any $\alpha, \beta \in \mathbb{C}$ and $\rho, \sigma \in \mathcal{L}(\mathcal{H}_A)$; $\Lambda$ is completely positive, i.e. $(\mathcal{I}_C \otimes \Lambda)(\rho) \ge 0$ for any $\rho \ge 0$, where $\mathcal{H}_C$ is an auxiliary Hilbert space of arbitrary dimension; $\Lambda$ is trace-preserving, i.e. $\operatorname{tr}[\Lambda(\rho)] = \operatorname{tr}(\rho)$ for any $\rho \in \mathcal{L}(\mathcal{H}_A)$. These properties can be rigorously derived from the assumption that a channel is a result of a unitary evolution acting on a larger Hilbert space. On a more pragmatic level, these rules ensure that the channel is a linear map that takes quantum states on $\mathcal{H}_A$ into valid quantum states on $\mathcal{H}_B$. When dealing with multipartite states it might be useful to explicitly write out the input and output registers, e.g. $\Lambda_{A \to B}$.
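The Born rule and the normalisation condition (2.1) translate directly into a few lines of code. The following sketch (illustrative, using NumPy) builds a two-outcome measurement on a qubit in the Hadamard basis and computes the outcome probabilities:

```python
import numpy as np

def born_probabilities(rho, povm):
    """Outcome probabilities Pr[x] = tr(M_x rho) for a measurement {M_x}."""
    return np.array([np.trace(M @ rho).real for M in povm])

# A qubit measured in the Hadamard basis {|+><+|, |-><-|}.
plus  = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)
povm = [np.outer(plus, plus), np.outer(minus, minus)]
assert np.allclose(sum(povm), np.eye(2))      # completeness, Eq. (2.1)

rho = np.array([[1, 0], [0, 0]])              # the state |0><0|
p = born_probabilities(rho, povm)
assert np.allclose(p, [0.5, 0.5])             # |<+|0>|^2 = |<-|0>|^2 = 1/2
assert np.isclose(p.sum(), 1.0)               # normalisation for every state
```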

2.2.3 Remote state preparation

A state of the form
$$\rho_{XB} = \sum_{x} p_x\, |x\rangle\langle x|_X \otimes \rho_x \qquad (2.2)$$
is called classical-quantum (cq) since the first register represents a classical random variable while the second is a general quantum system. Such states describe how a quantum system can be correlated with some classical data. One way of obtaining such a state is to sample the classical random variable and prepare subsystem $B$ in a particular state conditional on the outcome. Here, we show how to use entanglement to remotely prepare a certain class of such states, a phenomenon also known as steering. Define the maximally entangled state of dimension $d$ as
$$|\Phi\rangle_{AB} = \frac{1}{\sqrt{d}} \sum_{j=0}^{d-1} |j\rangle_A |j\rangle_B.$$
It is easy to verify that in this case both marginals are maximally mixed, i.e. proportional to the identity matrix
$$\rho_A = \rho_B = \frac{\mathbb{1}}{d}.$$
Moreover, for an arbitrary linear operator $K$, we have
$$(K \otimes \mathbb{1}) |\Phi\rangle = (\mathbb{1} \otimes K^T) |\Phi\rangle,$$
where $K^T$ denotes the transpose with respect to the computational basis. If we replace $K$ with a measurement operator this implies that observing a particular outcome on $A$ results in a particular subnormalised quantum state on $B$. Hence, we have remotely prepared a state on $B$ by performing a measurement on $A$. It is easy to see that any cq-state of the form (2.2) which satisfies
$$\sum_x p_x \rho_x = \frac{\mathbb{1}}{d} \qquad (2.3)$$
can be obtained by performing the right measurement on one half of the $d$-dimensional maximally entangled state. More specifically, the appropriate measurement is described by measurement operators $M_x = d\, p_x (\rho_x)^T$. The restriction (2.3) expresses the rule that the reduced state on $A$ must remain unchanged, i.e. it must remain maximally mixed. This phenomenon turns out to be important in quantum cryptography. An essential feature of quantum information is the ability to encode information in two (or more) incompatible bases. The most common example was originally introduced by Wiesner [Wie83] but goes under the name of BB84 states (after Bennett and Brassard who popularised the term [BB84]). In this case Alice uses either the computational or the Hadamard basis to encode a logical bit in a qubit which she later sends to Bob. 
If the logical bit is uniform the two encodings lead to
$$\rho_{XB} = \frac{1}{2}\bigl( |0\rangle\langle 0|_X \otimes |0\rangle\langle 0|_B + |1\rangle\langle 1|_X \otimes |1\rangle\langle 1|_B \bigr)$$
and
$$\rho_{XB} = \frac{1}{2}\bigl( |0\rangle\langle 0|_X \otimes |+\rangle\langle +|_B + |1\rangle\langle 1|_X \otimes |-\rangle\langle -|_B \bigr),$$
respectively. It is easy to verify that both of these satisfy relation (2.3) with $d = 2$. This leads to an important observation (in this particular cryptographic context due to Bennett, Brassard and Mermin [BBM92]) that such states can be prepared by first generating the maximally entangled state of two qubits and then measuring subsystem $A$ in the right basis. In fact, Alice simply makes a measurement in either the computational or the Hadamard basis. Since measurements on Alice’s side commute with any operations on Bob’s side, they can be delayed until some later point in the protocol, which means that now Alice and Bob share entanglement during the protocol. In other words, we have turned a prepare-and-measure scheme (Alice prepares a state and sends it to Bob, who performs a measurement), in which there is no entanglement between Alice and Bob, into an equivalent (from the security point of view) entanglement-based scheme (Alice and Bob simultaneously perform measurements on a shared entangled state). Often the entanglement-based schemes are easier to analyse, which we will take advantage of to prove security of a quantum relativistic bit commitment protocol in Chapter 5.
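The steering argument can be verified directly for the qubit case: measuring the $A$ half of the two-qubit maximally entangled state remotely prepares the corresponding subnormalised state on $B$. A minimal NumPy sketch (illustrative names, not from the thesis):

```python
import numpy as np

# Two-qubit maximally entangled state |Phi> = (|00> + |11>)/sqrt(2).
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho_ab = np.outer(phi, phi)

def steer(rho_ab, M):
    """Subnormalised post-measurement state on B after outcome M on A."""
    K = np.kron(M, np.eye(2))
    post = K @ rho_ab @ K.conj().T
    # Trace out A: sum the two 2x2 diagonal blocks.
    return post[:2, :2] + post[2:, 2:]

plus = np.array([1, 1]) / np.sqrt(2)
M_plus = np.outer(plus, plus)                 # projector onto |+> on A
sigma = steer(rho_ab, M_plus)
p = np.trace(sigma).real                      # probability of this outcome
assert np.isclose(p, 0.5)
# Conditional state on B is |+><+| (the transpose of M_plus, real here).
assert np.allclose(sigma / p, M_plus)
```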

2.3 Multiplayer games

For the purpose of this thesis, a game is an interaction between a referee and one or more players. The referee asks each player a question and the player must give an answer. In most cases the players are not allowed to communicate during the game. A strategy is a procedure that the players follow to generate their answers. At the end of the game, the referee decides whether the game is won or lost.

2.3.1 Classical and quantum strategies

For concreteness, we consider a game of $n$ non-communicating players. Each player receives an input from $\mathcal{X}$ and is required to output a symbol from $\mathcal{Y}$ ($\mathcal{X}$ and $\mathcal{Y}$ are arbitrary finite alphabets). A game is defined by the input distribution $p(x_1, \ldots, x_n)$ and a predicate function $V(y_1, \ldots, y_n \,|\, x_1, \ldots, x_n) \in \{0, 1\}$ which specifies whether the players win or lose for a particular combination of inputs and outputs. Every strategy available to classical players can be written as a convex combination of deterministic strategies. Hence, the maximum winning probability, denoted by $\omega_c$ and referred to as the classical value of the game, can be achieved by a deterministic strategy. A deterministic strategy is a collection of functions $f_j : \mathcal{X} \to \mathcal{Y}$, $j \in \{1, \ldots, n\}$, which determine each player’s response. Therefore,
$$\omega_c = \max_{f_1, \ldots, f_n} \sum_{x_1, \ldots, x_n} p(x_1, \ldots, x_n)\, V\bigl( f_1(x_1), \ldots, f_n(x_n) \,\big|\, x_1, \ldots, x_n \bigr),$$
where the maximum is taken over all combinations of functions. Quantum players, in turn, are allowed to share a quantum state and perform measurements that depend on the inputs. For simplicity in the quantum setting we only describe two-player games ($n = 2$) but these concepts extend in a straightforward way to an arbitrary number of players (see for example Ref. [Vid13]). A quantum strategy consists of a bipartite pure quantum state $|\psi\rangle_{AB}$ (of finite dimension) and measurements that each player will perform for every possible input, denoted by $\{A^{x_1}_{y_1}\}$ and $\{B^{x_2}_{y_2}\}$. The maximum winning probability achievable by quantum players, denoted by $\omega_q$, is called the quantum value
$$\omega_q = \sup \sum_{x_1, x_2} p(x_1, x_2) \sum_{y_1, y_2} V(y_1, y_2 \,|\, x_1, x_2)\, \langle\psi|\, A^{x_1}_{y_1} \otimes B^{x_2}_{y_2} \,|\psi\rangle,$$
where the optimisation is taken over all quantum strategies. Calculating the classical value of a game can be done by iterating over all possible strategies. While this is clearly not efficient (the number of strategies to check is exponential in the number of inputs), at least in principle it can be done. On the other hand, computing the quantum value is a more difficult problem and no generic procedure is known. The problem stems from the fact that we do not have a convenient description of the quantum set of correlations (i.e. there is no efficient procedure to decide whether a given point belongs to the set or not). 
To establish an upper bound on the quantum value of a game it is common to consider a larger set of correlations known as the no-signalling correlations, which does admit a simple description. Intuitively, this is the largest set of correlations that does not allow the parties to send messages to one another, and the simplest example is the so-called Popescu-Rohrlich box [PR94]. Because the no-signalling set is a polytope (i.e. the convex hull of a finite set of extreme points) we know how to optimise over it (at least in principle; efficiency considerations apply as before). For a detailed characterisation of the different sets of correlations refer to a recent review paper on Bell nonlocality [BCP14].
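The brute-force computation of the classical value described above is easy to realise for small games. The sketch below (illustrative, not from the thesis) enumerates all deterministic strategies of the CHSH game and recovers its classical value of $3/4$:

```python
import itertools

def classical_value(n_inputs, n_outputs, predicate):
    """Classical value of a two-player game with uniform inputs,
    by enumerating all deterministic strategies f, g : inputs -> outputs."""
    best = 0.0
    strategies = list(itertools.product(range(n_outputs), repeat=n_inputs))
    for f in strategies:
        for g in strategies:
            wins = sum(predicate(x, y, f[x], g[y])
                       for x in range(n_inputs) for y in range(n_inputs))
            best = max(best, wins / n_inputs**2)
    return best

# CHSH: win iff a XOR b == x AND y.  Classical value is 3/4.
chsh = lambda x, y, a, b: (a ^ b) == (x & y)
assert classical_value(2, 2, chsh) == 0.75
```

Quantum players can win CHSH with probability $\cos^2(\pi/8) \approx 0.85$, which is precisely why no such enumeration exists for the quantum value.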

2.3.2 Finite fields

A field is a set with two operations: addition and multiplication, which satisfy the usual properties listed below:
- the field is closed under addition and multiplication;
- both operations are associative;
- both operations are commutative;
- there exist additive and multiplicative identity elements;
- there exist additive and multiplicative inverses (except for the additive identity, which does not have a multiplicative inverse);
- multiplication is distributive over addition.
It is easy to see that the real or complex numbers with the standard addition and multiplication form fields. We call a field finite (the name Galois field is also used, after Évariste Galois) if the set of elements is finite. The order of a finite field is the number of elements in the set and a finite field of order $q$ exists iff $q$ is a prime power, i.e. $q = p^k$ for some prime $p$ and positive integer $k$. Since all finite fields of a given order are isomorphic (i.e. they are identical up to relabelling of the elements), we speak of the finite field of order $q$, denoted by $\mathbb{F}_q$. For a thorough introduction to finite fields please consult the excellent book by Mullen and Mummert [MM07]. Finite fields appear often in coding theory and cryptography since they are finite sets closed under (appropriately defined) addition, multiplication and their inverses. Moreover, all these operations can be implemented efficiently on a computer. Fields of order $q = 2^k$ are a common choice since their elements have a natural representation as strings of $k$ bits. The protocol proposed in Chapter 6 uses finite-field arithmetic and its security hinges on the difficulty of a certain family of multiplayer games. In this section we prove upper bounds on the classical value of such games and discuss the connection to a natural algebraic problem concerning multivariate polynomials over finite fields.
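As an illustration of finite-field arithmetic on a computer, the following sketch implements $\mathbb{F}_8 = \mathbb{F}_{2^3}$ with elements stored as 3-bit integers and multiplication reduced modulo the irreducible polynomial $x^3 + x + 1$ (a standard choice; any irreducible cubic over $\mathbb{F}_2$ would do):

```python
# Arithmetic in GF(8) = GF(2^3): elements are 3-bit integers,
# multiplication is polynomial multiplication over GF(2) reduced
# modulo the irreducible polynomial x^3 + x + 1 (binary 0b1011).

IRRED = 0b1011

def gf8_add(a, b):
    return a ^ b              # addition = bitwise XOR in characteristic 2

def gf8_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a            # add the current shifted copy of a
        b >>= 1
        a <<= 1
        if a & 0b1000:        # degree reached 3: reduce modulo x^3 + x + 1
            a ^= IRRED
    return r

# Every non-zero element has a multiplicative inverse.
for a in range(1, 8):
    assert any(gf8_mul(a, b) == 1 for b in range(1, 8))
# Distributivity spot-check.
assert gf8_mul(3, gf8_add(5, 6)) == gf8_add(gf8_mul(3, 5), gf8_mul(3, 6))
```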

2.3.3 Definition of the game

Fig. 2.1: The “Number on the Forehead” model. Vertical lines remind us that the players are not allowed to communicate.
Buhrman and Massar [BM05] proposed a generalisation of the CHSH game [CHSH69], which was further studied by Bavarian and Shor [BS15]. A natural multiplayer generalisation of this game arises in the security analysis of the multiround bit commitment protocol in Chapter 6. Since the analysis does not require familiarity with the actual bit commitment protocol and might be of independent interest, we have decided to make it a stand-alone component of the Preliminaries (rather than incorporating it in Chapter 6). Consider a game with $n$ players, denoted by $P_1, P_2, \ldots, P_n$, and let $x_1, x_2, \ldots, x_n$ be random variables drawn independently, uniformly at random from $\mathbb{F}_q$. We use $[n]$ to denote the set of integers between $1$ and $n$, $[n] = \{1, 2, \ldots, n\}$. In the “Number on the Forehead” model [CFL83] $P_j$ receives all the random variables except for the $j^{\text{th}}$ one, which we denote by $x_{-j} = (x_1, \ldots, x_{j-1}, x_{j+1}, \ldots, x_n)$, and is required to output an element of $\mathbb{F}_q$, which we denote by $a_j$ (see Fig. 2.1). The game is won if the sum of the outputs equals the product of the inputs (all the operations are performed in the finite field), i.e. the predicate function is
$$V(a_1, \ldots, a_n \,|\, x_1, \ldots, x_n) = \begin{cases} 1 & \text{if } \sum_{j=1}^{n} a_j = \prod_{j=1}^{n} x_j, \\ 0 & \text{otherwise.} \end{cases}$$
If the $j^{\text{th}}$ player employs a deterministic strategy described by $f_j$, i.e. he outputs $a_j = f_j(x_{-j})$, then the winning probability equals
$$\Pr\Bigl[\, \sum_{j=1}^{n} f_j(x_{-j}) = \prod_{j=1}^{n} x_j \,\Bigr].$$
As described in Section 2.3 the classical value of the game equals
$$\omega_n(q) = \max_{f_1, \ldots, f_n} \Pr\Bigl[\, \sum_{j=1}^{n} f_j(x_{-j}) = \prod_{j=1}^{n} x_j \,\Bigr], \qquad (2.4)$$
where the maximisation is taken over all combinations of functions from $\mathbb{F}_q^{n-1}$ to $\mathbb{F}_q$. Our goal is to find an upper bound on $\omega_n(q)$ as a function of $n$ and $q$.
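For small $n$ and $q$ the classical value defined in Eq. (2.4) can be computed by brute force. The sketch below (illustrative, not from the thesis) evaluates the two-player case, where each player sees only the other's input and the winning condition is $a_1 + a_2 = x_1 x_2$ in $\mathbb{F}_q$ with $q$ prime:

```python
import itertools

def two_player_value(q):
    """Classical value of the two-player game over F_q (q prime):
    P1 sees x2, P2 sees x1, and they win iff a1 + a2 = x1 * x2 (mod q).
    Brute force over all deterministic strategies."""
    inputs = list(itertools.product(range(q), repeat=2))
    best = 0
    for f1 in itertools.product(range(q), repeat=q):      # f1 : x2 -> a1
        for f2 in itertools.product(range(q), repeat=q):  # f2 : x1 -> a2
            wins = sum((f1[x2] + f2[x1]) % q == (x1 * x2) % q
                       for x1, x2 in inputs)
            best = max(best, wins)
    return best / q**2

assert two_player_value(2) == 0.75   # the F_2 case is exactly CHSH
v3 = two_player_value(3)
assert 1/3 <= v3 < 1                 # non-trivial for q = 3 as well
```

For larger $n$ or $q$ the number of strategies ($q^{q^{n-1}}$ per player) explodes, which is precisely why the analytic bounds derived below are needed.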

2.3.4 Relation to multivariate polynomials over finite fields

As the probability distribution of inputs is uniform, the winning probability of a particular deterministic strategy (defined by a collection of functions $f_1, \ldots, f_n$) is proportional to the number of inputs on which the following equality holds
$$\sum_{j=1}^{n} f_j(x_{-j}) = \prod_{j=1}^{n} x_j. \qquad (2.5)$$
Alternatively, we can count the zeroes of the following function
$$g(x_1, \ldots, x_n) = \prod_{j=1}^{n} x_j - \sum_{j=1}^{n} f_j(x_{-j}).$$
By the Lagrange interpolation method every function from $\mathbb{F}_q^{m}$ to $\mathbb{F}_q$ (for arbitrary $m$) can be written as a polynomial. Therefore, the question concerns the number of zeroes of the polynomial $g$. Different strategies employed by the players give rise to different polynomials and we need to characterise what polynomials are “reachable” in this scenario. The output of $P_j$ is an arbitrary polynomial of $x_{-j}$, hence, it only contains terms that depend on at most $n - 1$ variables. This means that the part of $g$ that depends on all $n$ variables comes solely from the first term and equals $\prod_{j=1}^{n} x_j$. Therefore, finding the classical value of the game is equivalent to finding the polynomial with the largest number of zeroes, whose only term that depends on all $n$ variables equals $\prod_{j=1}^{n} x_j$. This shows that the optimal strategy for our game is closely related to purely algebraic properties of polynomials over finite fields.

2.3.5 A recursive upper bound on the classical value

Here, we find explicit upper bounds on $\omega_n(q)$ through an induction argument. First, note that for $n = 1$ there is only one term on the right-hand side of Eq. (2.5) and since this term takes no arguments it is actually a constant. Since $x_1$ is uniform we have
$$\omega_1(q) = \frac{1}{q}.$$
Now, we derive an upper bound on $\omega_n(q)$ in terms of $\omega_{n-1}(q)$. For a fixed strategy $f_1, \ldots, f_n$ the winning probability can be written as
$$p = \Pr\Bigl[\, \sum_{j=1}^{n} f_j(x_{-j}) = \prod_{j=1}^{n} x_j \,\Bigr].$$
Conditioning on a particular value of $x_n$ leads to events that only depend on $x_1, \ldots, x_{n-1}$. In particular, setting $x_n = \hat{x}$ defines the event $A_{\hat{x}}$ given by
$$\sum_{j=1}^{n} f_j(x_{-j})\Big|_{x_n = \hat{x}} = \hat{x} \prod_{j=1}^{n-1} x_j, \qquad (2.6)$$
which satisfies
$$p = \frac{1}{q} \sum_{\hat{x} \in \mathbb{F}_q} \Pr[A_{\hat{x}}].$$
We can use Lemma 2.1 to find a bound on $p$ as long as we are given bounds on $\Pr[A_{\hat{x}} \cap A_{\hat{y}}]$ for $\hat{x} \neq \hat{y}$.
Proposition 2.1.
For $\hat{x} \neq \hat{y}$ we have $\Pr[A_{\hat{x}} \cap A_{\hat{y}}] \le \omega_{n-1}(q)$.
Proof.
Eq. (2.6) defines $A_{\hat{x}}$ through a certain equation in the finite field. If the equations corresponding to $A_{\hat{x}}$ and $A_{\hat{y}}$ are satisfied simultaneously then any linear combination of these equations is also satisfied. More specifically, we define a new event $B$ given by
$$\sum_{j=1}^{n-1} \Bigl[ f_j(x_{-j})\Big|_{x_n = \hat{x}} - f_j(x_{-j})\Big|_{x_n = \hat{y}} \Bigr] = (\hat{x} - \hat{y}) \prod_{j=1}^{n-1} x_j \qquad (2.7)$$
and since $A_{\hat{x}} \cap A_{\hat{y}} \subseteq B$ we are guaranteed that $\Pr[A_{\hat{x}} \cap A_{\hat{y}}] \le \Pr[B]$ (the term of the $n^{\text{th}}$ player cancels because $f_n$ does not depend on $x_n$). To find an upper bound on $\Pr[B]$ we give the players more power by allowing a more general expression on the left-hand side. In Eq. (2.7) the term in square brackets is a particular function of $x_1, \ldots, x_{j-1}, x_{j+1}, \ldots, x_{n-1}$, so let us replace it by an arbitrary function $g_j$ of these variables. Under this relaxation, we arrive at the following equality
$$\sum_{j=1}^{n-1} g_j(x_{-j}) = (\hat{x} - \hat{y}) \prod_{j=1}^{n-1} x_j.$$
Clearly, $\hat{x} - \hat{y}$ is a constant, non-zero multiplicative factor known to each player. Dividing the equation through by $\hat{x} - \hat{y}$ leads to the same game as considered before but one player has been eliminated (there are only $n - 1$ players now). Therefore, $\Pr[B] \le \omega_{n-1}(q)$. ∎
This allows us to prove the main technical result.
Proposition 2.2.
The classical value of the game defined in Section 2.3.3 satisfies the following recursive relation
$$\omega_n(q) \le \frac{1}{2q} \Bigl( 1 + \sqrt{1 + 4q(q-1)\,\omega_{n-1}(q)} \Bigr). \qquad (2.8)$$
Proof.
The statement follows directly from combining Lemma 2.1 with Proposition 2.1. ∎
Since we know that $\omega_1(q) = 1/q$, we can obtain a bound on $\omega_n(q)$ by recursive evaluation of Eq. (2.8). Note that the resulting bound is always non-trivial, i.e. $\omega_n(q) < 1$ for all values of $n$ and $q$. To obtain a slightly weaker but simpler form, presented in Eq. (6.9) in Chapter 6, we note that $\sqrt{1 + z} \le 1 + \sqrt{z}$ for $z \ge 0$, which gives
$$\omega_n(q) \le \frac{1}{q} + \sqrt{\frac{(q-1)\,\omega_{n-1}(q)}{q}}.$$

2.4 Cryptographic protocols and implementations

Cryptography is a field driven by applications, i.e. the starting point is a particular task that two (or more) parties want to perform. Formulating a task in a rigorous, mathematical language gives rise to a cryptographic primitive. In the case of two-party cryptography two aspects must be specified. Correctness: the expected behaviour when the protocol is executed by honest parties. Security: a list of behaviours that are forbidden regardless of the strategies that the dishonest parties might employ. Defining correctness is straightforward because what we want to achieve is clear from the beginning. Finding the right definition of security, on the other hand, might be a challenging task. Converting our intuition about what the primitive should not allow for into a mathematical statement is not always straightforward and often multiple security definitions are simultaneously in use depending on the exact context. Sometimes security is perfect (cf. the hiding property of bit commitment in Definition 2.2), but more often it is quantified by a (small) number usually denoted by $\varepsilon$ (cf. the binding property in Definition 2.3), which can (usually) be understood as an upper bound on the probability that a cheating attempt is successful. It is worth emphasising that no meaningful statements can be made if all involved parties decide to cheat, simply because if they collectively deviate in the “right” way they can produce any imaginable output. If all the dishonest parties form a coalition whose only goal is to enforce a certain output, nothing can stop them from achieving it. In particular, in the two-party case Alice and Bob could, instead of executing the protocol, decide to play a game of chess and then the output of the interaction would be a complete account of a chess game. Clearly, no cryptographic statements can be made about a chess game. 
Therefore, we only consider scenarios in which at least one party is honest and that is why in the two-party setting we prefer to talk about security for honest Alice (Bob) instead of security against dishonest Bob (Alice). Once the primitive has been defined we propose a protocol (i.e. a sequence of interactions between the players) that implements it. Verifying the correctness of a cryptographic protocol is simple since the honest parties behave in a well-defined manner. Showing security, on the other hand, is more complicated because we need to characterise all possible ways in which the dishonest parties might deviate from the protocol and argue that none of them violates the security requirements of the primitive. In a protocol that does not achieve perfect security, the final outcome of a security proof is an upper bound on how well the dishonest party can cheat. Since the level of security that we are happy to accept depends on the precise circumstances, protocols usually come in families parametrised by an integer $n$ and the security guarantee is a function $\varepsilon(n)$, ideally satisfying $\varepsilon(n) \to 0$ as $n \to \infty$. Increasing the value of $n$ leads to protocols that use more resources (e.g. computation, communication or randomness) but achieve better security. Ideally, we would like $\varepsilon(n)$ to decay exponentially but inverse polynomial decay might also be acceptable. Security analysis of such a family of protocols aims to find the tightest bound, i.e. the lowest $\varepsilon(n)$, as a function of $n$. Having performed the theoretical analysis of a protocol, the last step is to actually implement it. In case of mature technologies (e.g. modern digital devices) fault-tolerance (the capability of terminating correctly even in the presence of errors) is ensured at the hardware level so there is no need to introduce any extra measures in the actual protocol. 
The multiround classical relativistic bit commitment protocol discussed in Chapter 6 is a prime example: the simplest theoretical protocol is already suitable for implementation and no modifications are necessary. On the other hand, in the case of less developed fields like quantum technologies the situation is a bit more complicated. Since we have not yet found a way to (generically) eliminate all the errors, we must consider how they will affect our protocol. What happens when honest parties follow the protocol but their communication or storage suffers from noise? Depending on how severe the errors are, the protocol either terminates with the wrong output or it aborts. To prevent such an undesirable outcome the protocol must be modified to become fault-tolerant. The exact modifications that need to be made depend on what type of noise we want to protect ourselves against. More specifically, we need a model of noise that is simple enough to analyse but remains a reasonably faithful description of the experimental setup. As a consequence, turning a theoretical protocol into an experimental proposal is not so straightforward and usually requires multiple rounds of communication between the theoretician and the experimentalist. The new fault-tolerant protocol admits a couple of parameters, which determine its error tolerance, and these should be chosen to ensure that the protocol terminates successfully (with high probability) when performed by the honest parties. In this case asymptotic analysis is sufficient, since it is the actual experiment that demonstrates correctness (while calculations simply give us an indication of whether the experiment is worth setting up). Having modified our protocol we need to reassess its security, and it is clear that introducing fault-tolerant features makes a protocol more vulnerable to cheating.
Moreover, since the security analysis is supposed to please the most paranoid cryptographers, we must make minimal assumptions about the adversary. In particular, we do not want to impose any technological restrictions on him. Our devices are imperfect due to our lack of skills and knowledge, but we should not assume the same about the adversary. The standard approach to quantum cryptography is to assume that the devices used by the honest party are trusted (i.e. their precise description, including potential imperfections, is known) but the devices used by the adversary might be arbitrary (i.e. they are only limited by the laws of physics). Clearly, requiring that our protocol is correct for honest parties with imperfect devices and remains secure against an all-powerful adversary puts us in a difficult situation. As mentioned before, the fault-tolerant protocol takes a couple of parameters which we can try adjusting, but we might nevertheless reach the conclusion that guaranteeing correctness and security simultaneously is not possible. This means that the quality of the devices available to the honest parties is not sufficient to allow for a secure execution of the protocol. We can turn this statement around and ask about the minimal requirements on the honest devices. How much noise can we tolerate before the protocol becomes insecure? Note that this is now a property of the protocol alone and we should aim to design protocols with the highest possible noise tolerance. In the case of quantum technologies a successful implementation of a cryptographic protocol often requires a collective effort of the experimentalist (who attempts to reduce the experimental noise to the absolute minimum) and the theoretician (who improves the theoretical security analysis). An example of such an analysis for a quantum bit commitment protocol is presented in Chapter 5.

2.5 Bit commitment

Recall Example 2 from Section 1.1, in which Alice wants to commit to a certain message without actually revealing it. Commitment schemes have multiple applications; for example, they allow us to prove that we know something, or that we are able to predict some future event, without revealing any information in advance. They are also a useful tool to force different parties to act simultaneously, even if the communication model is inherently sequential. Consider two bidders who want to take part in an auction but have no trusted auctioneer at hand. In the usual, sequential communication model one of them has to announce his bid first, which gives an unfair advantage to the other bidder. This can be rectified if the first bidder commits to his bid (instead of announcing it) and opens it only after the second bidder has announced his price. Hence, given access to a commitment functionality, one can perform a fair auction without a trusted third party. Moreover, commitment schemes are often used in reductions to construct other cryptographic primitives. For the purpose of this section we restrict ourselves to schemes in which the committed message is just one bit. As explained in Section 2.4 the protocol should be correct (it should succeed if executed by honest parties) and secure (the honest party should be protected even if the other party deviates arbitrarily from the protocol). To make precise mathematical statements, we need a formal description of the protocol in the quantum language.
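As a point of reference, the classical world realises this functionality under computational assumptions. The sketch below (our own illustration, not a protocol from this thesis) implements a hash-based commitment whose hiding and binding properties rest on the assumed security of SHA-256, in contrast to the information-theoretic guarantees pursued here; all function names are ours.

```python
# Illustrative sketch only: a *classical* hash-based commitment scheme,
# computationally hiding and binding under standard assumptions about
# the hash function -- unlike the schemes studied in this thesis.
import hashlib
import secrets

def commit(bit: int) -> tuple[bytes, bytes]:
    """Commit phase: return (commitment, opening_key)."""
    r = secrets.token_bytes(32)                    # blinding randomness
    c = hashlib.sha256(bytes([bit]) + r).digest()  # hides the bit
    return c, r

def verify(commitment: bytes, bit: int, r: bytes) -> bool:
    """Verify phase: Bob checks the proof (bit, r) against the commitment."""
    return hashlib.sha256(bytes([bit]) + r).digest() == commitment

# Fair auction: both bidders commit before either one opens.
c, r = commit(1)
assert verify(c, 1, r)      # honest opening is accepted
assert not verify(c, 0, r)  # unveiling the other bit is rejected
```

A computationally bounded Bob cannot invert the hash (hiding), and finding a second preimage that opens to the other bit would break the hash function (binding).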

2.5.1 Formal definition

The primitive of bit commitment is usually split into two phases: the commit phase and the open phase. In the commit phase Alice interacts with Bob and at the end of it she should be committed, i.e. she should no longer have the freedom to choose (or change) her commitment. Nevertheless, Bob should remain ignorant about Alice’s commitment. In the open phase Alice sends to Bob the bit she has committed to, along with a proof of her commitment, which he examines to decide whether to accept the opening or not. While this description is sufficient for most purposes, it has some undesirable features. First of all, since it does not explicitly mention the period between the two phases, it might create the impression that there is no interval in between, i.e. it might lead to the false conclusion that the end of the commit phase and the beginning of the open phase correspond to the same point in time. This is clearly misleading, as the whole point of a commitment scheme is to obtain a finite interval between the two, i.e. a period in which Alice is committed to a message about which Bob remains ignorant. The distinction is usually not made explicit because in most protocols nothing happens between the two phases (Alice and Bob just savour the moment of being securely committed), which means that the two points are operationally equivalent (e.g. any information that Bob might extract about Alice’s commitment just before the open phase he might also extract immediately after the commit phase). This is not true for protocols in which communication continues between the two phases, and there the distinction is important. Therefore, we explicitly introduce the sustain phase, i.e. the period during which the commitment is valid. For reasons which will become clear soon, we call the beginning of the sustain phase the commitment point and the end the opening point.
We also split up what is usually called the open phase into two separate parts: in the open phase Alice unveils $d$ to Bob and sends him a proof of her commitment (which we assume to be a single message), while in the verify phase Bob decides whether to accept the opening or not. The phase structure is shown in Fig. 2.2. We use $A$ and $B$ to denote the subsystems of Alice and Bob, respectively. We use $P$ to denote the proof, which is generated (in the open phase) by Alice and sent to Bob. We implicitly assume that $P$ contains the information about the value $d$ that Alice is trying to unveil. Since the commit and sustain phases are interactive, they do not admit a compact description in the quantum formalism. The open phase can be described as a quantum channel $\mathcal{O}_{A \to P}$, which acts on Alice’s subsystem ($A$) to produce a proof ($P$). Bob’s decision whether to accept or reject the commitment in the verify phase can be described by a binary measurement $\{Q_{\mathrm{acc}}, Q_{\mathrm{rej}}\}$ performed jointly on subsystems $P$ and $B$.

Fig. 2.2: The phase structure of a generic bit commitment protocol: 1. commit, 2. sustain (beginning at the commitment point and ending at the opening point), 3. open, 4. verify.
The honest scenario is relatively straightforward to analyse. The protocol specifies uniquely (for each value of Alice’s commitment $d$) the state shared between Alice and Bob at every stage of the protocol and finding it explicitly is a matter of simple calculation.
Definition 2.1.
Let $\rho^{d}_{AB}$ be the state shared between Alice and Bob at the opening point, $\mathcal{O}_{A \to P}$ be the opening map and $\{Q_{\mathrm{acc}}, Q_{\mathrm{rej}}\}$ be the final measurement. A bit commitment protocol is $\varepsilon$-correct if for $d \in \{0, 1\}$ we have
\[ \operatorname{Tr}\bigl[ Q_{\mathrm{acc}} \, (\mathcal{O}_{A \to P} \otimes \mathbb{I}_B)(\rho^{d}_{AB}) \bigr] \geq 1 - \varepsilon. \]
In the dishonest scenario the situation becomes a bit more complicated because the state shared between Alice and Bob is no longer uniquely specified. For example, if Alice is dishonest then the state of her subsystem might be completely arbitrary. For the purpose of defining security it is convenient to talk about the set of states that dishonest Alice (or Bob) can enforce during the protocol and we will use $\sigma$ to denote such states (to distinguish them from the honest states denoted by $\rho$). These sets are then used to quantify security. As discussed before, coming up with the right security definition is not trivial because it requires us to turn the intuitive notion of security into a mathematical statement. It is useful to realise that the dishonest scenario is operationally equivalent to a game between the honest party (acting as a referee, since their behaviour is determined by the protocol) and the dishonest party (a player who is allowed to adopt an arbitrary strategy). Thus, defining security is equivalent to specifying the exact rules of such a “cheating game”. To look at a concrete example let us start with the case of honest Alice and dishonest Bob. Bob’s goal is to find out the value of Alice’s commitment before the open phase begins, i.e. at the opening point, and to achieve this he might deviate arbitrarily from the protocol. This admits a natural formulation as a game in which Alice (the referee) chooses $d \in \{0, 1\}$ uniformly at random and follows the honest protocol until the opening point. Then, Bob is challenged to guess $d$ at the opening point and the probability of guessing correctly is a natural measure of his cheating abilities. To phrase this in terms of quantum states, let $\sigma^{d}_{B}$ be the state of Bob’s subsystem at the opening point and note that a particular strategy of dishonest Bob enforces two distinct states $\sigma^{0}_{B}$ and $\sigma^{1}_{B}$.
Definition 2.2.
A bit commitment protocol is hiding if all pairs of states that Bob can enforce at the opening point satisfy
\[ \sigma^{0}_{B} = \sigma^{1}_{B}, \tag{2.9} \]
where $\sigma^{d}_{B}$ denotes the state of Bob’s subsystem at the opening point when Alice has committed to $d$.
This definition implies that whatever strategy Bob employs, he obtains no information about Alice’s commitment. Note that this property is sometimes referred to as being perfectly hiding, in contrast to schemes that only guarantee Alice partial security. Since all protocols considered in this thesis are perfectly hiding, we always use hiding to mean perfect security. The case of dishonest Alice and honest Bob is a bit more complex. In order to claim that Alice’s commitment begins at the commitment point, we must show that at that point she no longer has the freedom to unveil both values, regardless of the strategy adopted prior to that. In other words, the dishonest behaviour of Alice can be seen as two distinct strategies (corresponding to $d = 0$ and $d = 1$) which are identical until the commitment point; let us call such strategies compatible. Intuitively, this means that she can delay the choice of which strategy to follow until the commitment point.
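As a toy numeric illustration of perfect hiding (our own example, not a protocol from the thesis): if Alice encodes $d = 0$ as a uniform mixture of $|0\rangle, |1\rangle$ and $d = 1$ as a uniform mixture of $|+\rangle, |-\rangle$, Bob's reduced states coincide, so no measurement he performs can reveal anything about $d$.

```python
# Toy check of the hiding condition sigma_B^0 = sigma_B^1 (our own
# example): both encodings below leave Bob with the maximally mixed
# state, so their trace distance vanishes.
import numpy as np

def dm(psi):
    """Density matrix |psi><psi| of a pure state."""
    psi = np.asarray(psi, dtype=complex)
    return np.outer(psi, psi.conj())

ket0, ket1 = [1, 0], [0, 1]
plus = [1 / np.sqrt(2), 1 / np.sqrt(2)]
minus = [1 / np.sqrt(2), -1 / np.sqrt(2)]

rho0 = 0.5 * dm(ket0) + 0.5 * dm(ket1)   # Bob's state for d = 0
rho1 = 0.5 * dm(plus) + 0.5 * dm(minus)  # Bob's state for d = 1

# Trace distance D = (1/2) * (sum of singular values of rho0 - rho1);
# the nuclear norm 'nuc' is exactly that sum.
D = 0.5 * np.linalg.norm(rho0 - rho1, ord='nuc')
print(D)  # 0 up to floating-point error: Bob learns nothing about d
```

A vanishing trace distance means the two states are indistinguishable by any measurement, which is exactly the operational content of Definition 2.2.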
Definition 2.3.
Let $(\sigma^{0}_{AB}, \sigma^{1}_{AB})$ be a pair of states that Alice can enforce at the opening point using compatible strategies and let $\mathcal{O}^{0}_{A \to P}, \mathcal{O}^{1}_{A \to P}$ be the corresponding opening maps. Define
\[ p_{d} = \operatorname{Tr}\bigl[ Q_{\mathrm{acc}} \, (\mathcal{O}^{d}_{A \to P} \otimes \mathbb{I}_B)(\sigma^{d}_{AB}) \bigr] \]
to be the probability that Alice’s attempt to unveil $d$ is accepted by Bob. A bit commitment protocol is called $\varepsilon$-binding if for all such states and for all opening maps we have
\[ p_{0} + p_{1} \leq 1 + \varepsilon. \]
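To make the quantity $p_0 + p_1$ concrete, consider a toy example (our own illustration, not a protocol from the thesis): suppose dishonest Alice's entire "commitment" is a single qubit that Bob later measures in the computational basis, accepting an unveiling of $d$ whenever the outcome is $d$. Then for any state she prepares, $p_0 + p_1 = 1$, i.e. this toy scheme is $0$-binding (though of course not hiding).

```python
# Toy illustration of the sum-binding quantity p_0 + p_1 (our own
# example): Alice sends one qubit |psi>, Bob verifies an unveiling of d
# by measuring in the computational basis and accepting outcome d.
# For every |psi>, p_0 + p_1 = |<0|psi>|^2 + |<1|psi>|^2 = 1.
import numpy as np

rng = np.random.default_rng(0)
worst = 0.0
for _ in range(1000):                      # sample random cheating states
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)             # normalise to a valid state
    p0 = abs(psi[0]) ** 2                  # Bob accepts unveiling d = 0
    p1 = abs(psi[1]) ** 2                  # Bob accepts unveiling d = 1
    worst = max(worst, p0 + p1)
print(worst)  # 1.0 up to floating point: p_0 + p_1 <= 1 + eps with eps = 0
```

In general the optimisation over Alice's states and opening maps cannot be done by sampling; as noted below Definition 2.3, for a fixed intermediate state it is a semidefinite program.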
Note that finding the optimal opening map for a particular intermediate state is a semidefinite program, so it can be solved efficiently. Therefore, the cheating strategy is essentially specified by a pair of compatible strategies. It is clear that the restriction that the two strategies are compatible is crucial. Clearly, Alice can enforce the honest pair of states