Testing random-detector-efficiency countermeasure in a commercial system reveals a breakable unrealistic assumption

Anqi Huang (angelhuang.hn@gmail.com), Institute for Quantum Computing and Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
Shihan Sajeed, Institute for Quantum Computing and Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
Poompong Chaiwongkhot, Institute for Quantum Computing and Department of Physics and Astronomy, University of Waterloo, Waterloo, ON N2L 3G1, Canada
Mathilde Soucarros, ID Quantique SA, Chemin de la Marbrerie 3, 1227 Carouge, Geneva, Switzerland
Matthieu Legré, ID Quantique SA, Chemin de la Marbrerie 3, 1227 Carouge, Geneva, Switzerland
Vadim Makarov, Institute for Quantum Computing, Department of Physics and Astronomy, and Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
Abstract

In the last decade, efforts have been made to reconcile theoretical security with realistic, imperfect implementations of quantum key distribution (QKD). Practical countermeasures have been proposed to patch the discovered loopholes. However, certain countermeasures are not as robust as one would expect. In this paper, we study a concrete example: ID Quantique's random-detector-efficiency countermeasure against detector blinding attacks. As a third-party tester, we have found that the first industrial implementation of this countermeasure is effective against the original blinding attack, but not immune to a modified blinding attack. We then implement and test a later, full version of this countermeasure accompanied by a security proof [C. C. W. Lim et al., IEEE J. Sel. Top. Quantum Electron. 21, 6601305 (2015)]. We find that it is still vulnerable to the modified blinding attack, because an assumption about hardware characteristics on which the proof relies fails in practice.

I Introduction

Currently deployed cryptography relies on the assumed hardness of certain mathematical problems, which provides only computational security Naor (2003); ETS (2015). Once an eavesdropper has enough computing power, such as a quantum computer, the security of these classical encryption algorithms will be broken Bennett and DiVincenzo (2000); Shor (1994). However, quantum key distribution (QKD) allows two parties, Alice and Bob, to share a secret key based on the laws of quantum mechanics Bennett and Brassard (1984); Ekert (1991); Gisin et al. (2002); Scarani et al. (2009). Because of the no-cloning theorem Wootters and Zurek (1982), an eavesdropper with arbitrary computing power cannot copy the information sent by Alice without leaving a trace, which guarantees the unconditional security of the communication Lo and Chau (1999); Shor and Preskill (2000); Lütkenhaus (2000); Mayers (2001); Gottesman et al. (2004); Renner et al. (2005).

As this technology matures, practical QKD systems have been realised in laboratories Bennett (1992); Schmitt-Manderbach et al. (2007); Stucki et al. (2009); Tang et al. (2015), and several companies provide commercial QKD systems to general customers QKD (). However, imperfect components used in the implementations lead to security issues that have attracted increasing attention in the last decade Vakhitov et al. (2001); Makarov et al. (2006); Gisin et al. (2006); Qi et al. (2007); Zhao et al. (2008); Lydersen et al. (2010a); Sun et al. (2011); Lydersen et al. (2011a); Jouguet et al. (2013); Sajeed et al. (2015). Since a growing number of quantum attacks has been demonstrated, the academic community is well aware of the security threat posed by practical loopholes. The next step, therefore, is to devise loophole-free countermeasures. Importantly, the security of these countermeasures should itself be verified.

In this paper, an example of testing the security of an implemented countermeasure is given. We examine ID Quantique's attempted countermeasure to the earlier discovered bright-light detector control attacks Lydersen et al. (2010a, b); Wiechers et al. (2011) that were demonstrated 6 years ago on ID Quantique's and MagiQ Technologies' QKD products. The countermeasure is to randomly remove some detector gates to force the effective detection efficiency to zero during those slots leg (). The idea is that when an eavesdropper performs the blinding attack, she will produce clicks during these removed gates and thus get caught. This countermeasure has been implemented in the commercial system Clavis2 by two authors of this paper working at ID Quantique (M.S. and M.L.), then provided as-is, in the form of a firmware update, to the remaining four authors from the University of Waterloo, who played the role of a third-party testing team. The authors from ID Quantique did not participate in the test; however, the results produced by the testing team were discussed by all authors and agreed upon.

The experimental results produced by the testing team show that although this countermeasure is effective against the original detector blinding attack Lydersen et al. (2010a), it is no longer effective if the eavesdropper modifies her attack slightly. We note that the countermeasure implemented by ID Quantique is the simplest possible version of the original countermeasure proposal leg (), and has already been criticised as unreliable in a later theoretical work Lim et al. (2015). Hence, the testing team went further and manually implemented a full version of the countermeasure using two non-zero detection efficiency levels leg (); Lim et al. (2015), and tested it as well. Our testing shows that even the full countermeasure is vulnerable to the modified blinding attack. Specifically, we experimentally disprove the assumption, on which the theoretical analysis in Ref. Lim et al., 2015 relies, that Bob's detection probability under a blinding attack cannot be proportional to his single-photon detection efficiency.

The paper is organized as follows. Section II reviews the hacking-and-patching timeline of ID Quantique's Clavis2 QKD system and introduces the countermeasure. In Section III, testing results for ID Quantique's first countermeasure implementation are reported and our modified blinding attack is introduced. Section IV theoretically analyses the conditions of a successful attack and shows that the modified blinding attack satisfies them. In Section V, based on certain assumptions about a future implementation of the full countermeasure Lim et al. (2015), we demonstrate two possible methods to defeat that full implementation. We discuss the practicality of our attacks against installed commercial QKD lines in Section VI and conclude in Section VII.

II From loophole discovery to countermeasure implementation

In 2009, the vulnerability of the commercial QKD system Clavis2 idq () to detector blinding attacks was identified and a confidential report was submitted to ID Quantique (the work was published shortly afterwards Lydersen et al. (2010a)). Since then, ID Quantique has worked on an experimental countermeasure against these attacks. The timeline of this security problem is shown in Fig. 1. In 2010, ID Quantique proposed a countermeasure that randomizes the efficiency of a gated avalanche photodiode (APD) by randomly choosing one out of two different gate voltages, and filed this idea for a patent leg (). This way, an eavesdropper Eve does not know Bob's exact efficiency in any given gated slot and thus cannot maintain his detection statistics. During sifting, if the observed detection rates differ from the expected values, Alice and Bob become aware of Eve's presence and discard their raw keys.

Figure 1: Timeline of hacking-countermeasure-hacking for the bright-light detector control class of attacks.

In 2014, Lim et al. proposed a specific protocol to realize this countermeasure Lim et al. (2015), which analyses the security mathematically for blinding attacks that obey a certain assumption on their behavior. In the protocol, Bob randomly applies two non-zero detection efficiencies $\eta_1$ and $\eta_2$, and measures the detection rates $R_1$ and $R_2$ conditioned on these efficiencies. The effect of a detector blinding attack is accounted for via a factor that quantifies the deviation of the ratio $R_1/R_2$ from $\eta_1/\eta_2$. Without the blinding attack, the detection rate is proportional to the efficiency, making this factor zero. The analysis makes the crucial assumption that the detection rate under a blinding attack satisfies $R_1 = R_2$, i.e., it is independent of Bob's choice of efficiency. Then, under attack, the factor becomes greater than zero and reduces the secure key rate. This solution intends to introduce an information gap between Eve and Bob, for Eve has no information about Bob's random efficiency choice.
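The statistics behind this test can be sketched numerically. The following toy model is our own illustration, not code from Ref. Lim et al., 2015; the efficiency values, the mean photon number `mu` and the naive 0.5 blinded click probability are all assumed placeholders. It shows how the conditional rates distinguish honest detection from a naive blinding attack:

```python
import random

def run_session(eta1, eta2, n, click_prob):
    """Bob randomly gates with efficiency eta1 or eta2; click_prob(eta)
    is the probability of a click given his choice. Returns the two
    conditional detection rates R1 and R2."""
    clicks = {eta1: 0, eta2: 0}
    trials = {eta1: 0, eta2: 0}
    for _ in range(n):
        eta = random.choice((eta1, eta2))   # Bob's secret random choice
        trials[eta] += 1
        if random.random() < click_prob(eta):
            clicks[eta] += 1
    return clicks[eta1] / trials[eta1], clicks[eta2] / trials[eta2]

random.seed(1)
mu = 0.1  # mean photon number per pulse reaching Bob (assumed value)
# Honest signals: rate proportional to efficiency, so R1/R2 = eta1/eta2.
R1, R2 = run_session(0.2, 0.1, 500_000, lambda eta: mu * eta)
# Naive blinding: click probability independent of eta, so R1 = R2.
B1, B2 = run_session(0.2, 0.1, 500_000, lambda eta: 0.5)
```

In the honest case the ratio R1/R2 tracks the ratio of the efficiencies, while the naive blinded detector yields equal conditional rates, which is exactly the deviation the countermeasure flags.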

Later in 2014, ID Quantique implemented the countermeasure as a firmware patch. The hardware in Clavis2 is not capable of generating two non-zero efficiency levels that switch randomly between adjacent detector gates. As a result, the implementation takes a simpler form: gates are randomly suppressed with a certain probability. The suppressed gates represent zero efficiency, while the remaining gates retain the calibrated efficiency. Ideally, in the updated system, there should be no clicks in the absence of the gate. In practice, transient electromagnetic interference may extremely infrequently lead to a click without a gate. Therefore, an alarm counter is used, with a system-lifetime limit of 15 clicks in the absence of the gate. If this limit is reached, the firmware bricks the system, requiring factory maintenance. This implementation assumes that under a blinding attack Lydersen et al. (2010a), the click probability does not depend on the gate voltage, so the attack should cause clicks in the slots where the gate is absent.

III Testing the countermeasure

Figure 2: Click probability under the original blinding attack Lydersen et al. (2010a) versus energy of the trigger pulse. The blinding power is the same as used in the published original attack Lydersen et al. (2010a). The trigger pulse arrives after the centre of the gate signal, which should roughly reproduce the original attack Lydersen et al. (2010a).

In this section, we demonstrate that the countermeasure presently implemented by ID Quantique is effective against the original blinding attack Lydersen et al. (2010a), but not sufficient against the general class of attacks attempting to take control of Bob’s single-photon detectors.

Let us briefly remind the reader how Clavis2 and the original blinding attack against it work. Clavis2 is a bidirectional phase-encoding QKD system Stucki et al. (2002); idq (). After Bob sends multi-photon bright pulses to Alice, Alice randomly modulates one of the four BB84 phase states Bennett and Brassard (1984), attenuates the pulses and sends them back to Bob. Bob randomly chooses one of two measurement bases. Interference occurs between pulses from the longer and shorter paths of an interferometer at Bob's side, and the outcome of the interference depends on the phase difference between Alice's and Bob's modulations Muller et al. (1997). However, Eve is able to control the outcomes with the following strategy. She shines bright light to blind the detectors and then intercepts Alice's states Lydersen et al. (2010a). Based on her interception results, she re-sends faked states as multi-photon pulses to Bob's blinded detectors. If Bob chooses the same measurement basis as Eve, the pulses interfere at Bob's interferometer, so that all the power of the pulse goes to one detector and triggers a click. If the measurement bases chosen by Bob and Eve are mismatched, there is no interference, and the power of the pulse is split equally between Bob's two detectors. In this case, neither detector clicks. In this attack, Eve fully controls Bob's detectors and obtains the whole key tracelessly Lydersen et al. (2010a).

In the original blinding attack, Eve sends bright continuous-wave (c.w.) laser light to blind Bob's detectors. A trigger pulse is then sent slightly after the gate to produce a click. We repeat this attack on the improved Clavis2 system and measure the trigger pulse energy required to cause a click; the result is shown in Fig. 2. The trigger pulse energy for gate presence (solid curves) is lower than that for gate absence (dashed curves), because minute electrical fluctuations of the APD voltage following the gate signal lower the click threshold slightly.

However, if Eve tries to trigger a click with unit probability when the gate is applied, this amount of trigger pulse energy (marked by a dotted vertical line in Fig. 2) might also trigger a click with non-zero probability when the gate is suppressed, which is monitored and results in an alarm. Therefore, Eve cannot hack the system with full controllability. To avoid clicks in the slots of gate suppression, Eve could in theory decrease the trigger pulse energy so that it sometimes triggers a click with the gate present, but never with the gate absent. This would also satisfy a necessary condition of a successful attack, which we discuss in Section IV. Unfortunately for Eve, our testing shows that the trigger pulse energy required to trigger D0 without the gate is only slightly above the energy that produces a click when the gate is present. The difference between these two energy levels is likely not big enough for a reliable attack that avoids triggering the countermeasure. Also, D1 will always trigger at these energy levels, revealing the attack. Eve could target D1 using a slightly lower energy level, but the relative precision required is similar there. Routine fluctuations of temperature and other equipment parameters may lead to some instability of these trigger pulse energy levels, risking a few clicks in the gate absence that would brick the system under attack. From this point of view, we think this first implementation of the countermeasure is effective against the original blinding attack.

Figure 3: Idealized APD gate signal and real oscillogram of the optical trigger pulse. The relative time between the gate voltage transitions and the optical pulse is approximate. The c.w. signal is generated by a laser diode; the trigger pulse is obtained by modulating the pump current of a separate laser diode, using an electrical pulse generator Lydersen et al. (2010a).
Figure 4: Oscillograms at the comparator input in the detector circuit, proportional to the APD current. (a) Geiger mode. The small positive and negative pulses are due to gate signal leakage through the APD capacitance. (b) Geiger mode, single-photon avalanche. (c–f) The detector is blinded with c.w. illumination, with (c) no trigger pulse applied, (d) the trigger pulse applied after the gate, (e) the trigger pulse applied in the gate, and (f) a trigger pulse of half the energy applied in the gate.

We can slightly modify our blinding attack to break the security of this countermeasure. Similarly to the original blinding attack, Bob’s detectors are blinded by a bright-light laser first. Then, instead of sending a trigger pulse slightly after the gate as in the original attacks Lydersen et al. (2010a), we send a long trigger pulse on top of the c.w. illumination during the detector gate, as shown in Fig. 3. This trigger pulse produces a click in one of Bob’s two detectors only if Bob applies the gate and his basis choice matches that of Eve; otherwise there is no click.

To explain why this modified attack succeeds, let us remind the reader of the normal operation of an avalanche photodiode (APD). The detectors in Clavis2 are gated APDs. When the gate signal is applied, the voltage across the APD exceeds its breakdown voltage $V_{\rm br}$. If a single photon arrives during the gated time, an avalanche happens and causes a large current. This current is converted into a voltage by the detector electronic circuit. If the peak voltage is larger than a threshold $V_{\rm th}$, the detector registers a photon detection (a 'click'). Fig. 4(a) and (b) show the cases of no photon arriving and of a photon causing an avalanche. Appendix A explains the detector operation principle and the blinding attack in more detail.

A bright laser is able to blind the APDs. Under c.w. illumination, the APD produces a constant photocurrent that overloads the high-voltage supply and lowers the bias voltage. Then, even when the gate signal is applied, the voltage across the APD does not exceed $V_{\rm br}$, and the APD remains in the linear mode as a classical photodetector that is no longer sensitive to single photons. The detectors are then blinded.

Under the blinding attack, Fig. 4(c–e) shows the detector voltages in different cases: when no trigger pulse is applied, and when the trigger pulse is applied either after or in the gate. Since in the linear mode the gain of secondary electron-hole pair generation in the APD depends on the voltage across it, the gate applied to the APD increases the gain. This larger gain during the gated time lets the APD generate a larger photocurrent than outside the gate. Therefore the gate signal causes a positive pulse, as shown in Fig. 4(c). A trigger pulse applied after the gate produces a second pulse, but neither pulse's peak voltage exceeds $V_{\rm th}$ [Fig. 4(d)]. However, when the trigger pulse is shifted inside the gate, the two pulse amplitudes add up, reach $V_{\rm th}$ and produce a detector click [Fig. 4(e)]. If Bob chooses a different measurement basis than Eve, only half of the trigger pulse energy arrives at each detector Lydersen et al. (2010a). In this case, the peak voltage does not reach $V_{\rm th}$ [Fig. 4(f)]. Overall, only when the trigger pulse is applied during the gate time and Bob chooses the same basis as Eve does the blinded detector click. As a result, Eve can control Bob's detectors so that Bob obtains the same measurement result as her, without introducing extra errors Lydersen et al. (2010a).

Figure 5: Output of a blinded detector in Clavis2 under control of trigger pulses of different energy. The top graph shows a gate applied at the first slot, but suppressed at the second slot. An optical trigger pulse is sent to the detector in both slots. Graphs A–E show the detector output versus trigger pulse energy $E$. In graph A, the energy is insufficient to produce a click. As the energy is increased above $E^i_{\rm never}$, clicks intermittently appear in the presence of the gate, as shown in graph B. At energies above $E^i_{\rm always}$, the gate always produces a click, as shown in graph C; however, there is never a click when there is no gate. At yet higher energies above $E^i_{\rm ng,never}$, clicks in the gate absence appear intermittently (graph D) or always (graph E).

Contrary to most previously demonstrated attacks attempting to take control of single-photon detectors Lydersen et al. (2010a, b, 2011a), in the present demonstration the timing of the trigger pulse has to be aligned with the gate. Besides the timing alignment, another important factor of the attack is the trigger pulse energy $E$. To test its effect, we gradually increase the energy and observe the detection outcomes. Figure 5 shows schematically in which order clicks appear in Clavis2 as $E$ is increased. We observe three thresholds.

  • If $E < E^i_{\rm never}$ (where $i$ is the detector number), the detector never clicks when the gate is applied.

  • If $E > E^i_{\rm always}$, the detector always clicks when the gate is applied.

  • If $E < E^i_{\rm ng,never}$, the detector never clicks when the gate is suppressed.
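These three regimes can be captured in a toy decision function. The sketch below uses our threshold notation; the numerical threshold values are arbitrary placeholders, not the measured values of Fig. 6:

```python
# Toy model of the blinded-detector click behaviour summarized above.
def click_outcome(E, gate_applied, E_never, E_always, E_ng_never):
    """Return 'never', 'sometimes' or 'always' for trigger energy E."""
    if gate_applied:
        if E < E_never:
            return 'never'
        return 'always' if E > E_always else 'sometimes'
    # gate suppressed: any click here increments Bob's alarm counter
    return 'never' if E < E_ng_never else 'sometimes'

thr = dict(E_never=0.4, E_always=0.6, E_ng_never=2.0)  # placeholder values
```

With these placeholder thresholds, an energy between `E_always` and `E_ng_never` always clicks in the gate yet never clicks without it, which is the regime the attack exploits.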

Figure 6: Energy thresholds of trigger pulse versus c.w. blinding power. Shaded area shows the range of trigger pulse energies of the perfect attack.

Figure 6 shows these detection thresholds measured for a range of c.w. blinding powers. All the thresholds rise with the blinding power, because higher blinding power leads to a larger photocurrent and a lower bias voltage. The decreased voltage leads to smaller gain and thus lower sensitivity to the trigger pulse. (Appendix B contains a more detailed investigation of the processes inside the detector.) As can be seen, for any given blinding power, $E^i_{\rm ng,never}$ is much higher than the other click thresholds. This easily allows the detector control attack Lydersen et al. (2010a) to proceed undetected by the countermeasure. A more formal analysis follows in the next section.

IV Conditions of a successful attack

The experimental results of the previous section show that the attack of Ref. Lydersen et al., 2010a is possible in Clavis2. However, general conditions for a successful attack should be analysed theoretically. In this section, we first consider strong conditions for a perfect attack, in which Eve induces a click in Bob with unit probability if their bases match and the gate is applied, and with zero probability otherwise. These conditions are definitely sufficient for a successful attack Lydersen et al. (2010a). However, as we remark later in this section, even if these strong conditions are not satisfied, an attack may still be possible.

Strong conditions. If the detection outcome varies with increasing trigger pulse energy as in Fig. 5, the order of the three thresholds is

$E^i_{\rm never} < E^i_{\rm always} < E^i_{\rm ng,never}$.   (1)

If Eve and Bob select opposite bases, half of the trigger pulse energy $E$ goes to each of Bob's detectors. In this case, neither detector should click despite the gate presence, while the full energy must still make the target detector $i$ click. This is achieved if Lydersen et al. (2010a)

$E^i_{\rm always} < E < 2E^j_{\rm never}, \quad j = 0, 1$.   (2)

The random gate suppression imposes additional conditions. In case of basis mismatch, half of the trigger pulse energy arrives at each detector. It should induce a click in neither detector when the gate signal is absent; i.e., for each detector $j$, half of the trigger pulse energy must stay below the no-gate detection threshold. That is,

$E < 2E^j_{\rm ng,never}, \quad j = 0, 1$.   (3)

If the bases match, we also need no click when the gate is suppressed, but always a click in the expected detector in the gate presence. This is achieved if $E < E^i_{\rm ng,never}$; a suitable energy range exists owing to inequality (1). Although inequality (3) has a physical meaning, it mathematically follows from inequalities (1) and (2). Thus satisfying inequalities (1) and (2) represents the strong attack conditions and guarantees the same performance as in Ref. Lydersen et al., 2010a. The shaded area in Fig. 6 indicates the range of trigger pulse energies Eve can apply for the perfect attack. The range is sufficiently wide to allow a robust implementation, only requiring Eve to set the correct energy with modest precision.
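These conditions can be checked mechanically. The sketch below (our own illustration; the threshold values are placeholders, not the measured ones of Fig. 6) computes the perfect-attack energy window whose existence inequalities (1) and (2) guarantee:

```python
# Numerical check of the strong conditions (1)-(2) and the resulting
# perfect-attack energy window.
def perfect_attack_window(thr, target):
    """thr[i] = (E_never, E_always, E_ng_never) for detector i.
    Returns the (lo, hi) range of trigger pulse energies E satisfying:
    E above E_always of the target, E/2 below E_never of both detectors,
    and E below E_ng_never of the target; None if the window is empty."""
    lo = thr[target][1]
    hi = min(thr[target][2], 2 * min(t[0] for t in thr))
    return (lo, hi) if lo < hi else None

thr = [(0.40, 0.60, 2.0),   # detector D0: placeholder thresholds
       (0.45, 0.65, 2.2)]   # detector D1: placeholder thresholds
window = perfect_attack_window(thr, target=0)
```

With these placeholder numbers the window is non-empty, mirroring the wide shaded area in Fig. 6; a detector whose thresholds violated inequality (2) would return `None`, meaning no perfect attack exists.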

Necessary condition. An attack may still be possible even if Eve's trigger pulse does not always cause a click in Bob when their bases match, and/or sometimes causes a click when their bases do not match Lydersen et al. (2011b). The latter introduces some additional QBER, but as long as it is below the protocol abort threshold, Alice and Bob may still produce a key. The random gate removal countermeasure imposes the condition

$E^i_{\rm never} < E^i_{\rm ng,never}$,   (4)

which means Eve should be able to at least sometimes cause a click in the gate while never causing a click without the gate (lest the alarm counter be incremented). This is a necessary condition for an attack. As the present paper details, there are strong engineering reasons why this condition is likely to be satisfied in a detector. Additional conditions will depend on exact system characteristics Lydersen et al. (2011b).

V Will a full implementation of the countermeasure be robust?

We have shown so far that the current countermeasure with gate suppression cannot defeat the detector blinding attack. However, the paper of Lim et al. Lim et al. (2015) claims that the full version of the countermeasure, with two non-zero detection efficiencies, is effective against a large class of detector side-channel attacks, including the blinding attack Lydersen et al. (2010a). Even though this full countermeasure has not been implemented by ID Quantique, we have tested some properties of the detectors in Clavis2 to show two possible methods to hack the full countermeasure, based on certain assumptions about a future implementation.

Bob could choose randomly between two detection efficiencies by changing either the gate voltage amplitude or the high-voltage bias supply Lim et al. (2015). Since in Clavis2 the gate amplitude is fixed in hardware (see Appendix A), we assume an engineer would change the bias voltage to obtain different non-zero detection efficiencies. To obtain half the original detection efficiency, we lower the bias voltage manually. Lowering the bias of D0 reduces its detection efficiency by about a factor of 2; we similarly halve the detection efficiency of D1. After that, we test Eve's controllability of these two detectors.

First, we blind the detectors and then measure the relation between the trigger pulse energy and the probability to cause a click. The position of the trigger pulse is fixed in the middle of the gate signal. Figure 7 shows the testing result, which indicates a transition range between zero and unity click probability.

Figure 7: Click probabilities under blinding attack versus energy of the trigger pulse. Solid curves are taken at the original bias voltage; dashed curves at the reduced bias voltage that lowers the photon detection efficiency by about a factor of 2. The timing of the trigger pulse is aligned to the middle of the gate by minimizing the energy required to make a click.

From the measurement result, Eve can randomly select between two levels of trigger pulse energy, $E_1 > E_2$ (shown as dotted lines in Fig. 7), to attack the full version of the countermeasure. Recall that only when Bob chooses the same measurement basis as Eve does all the trigger pulse energy arrive at the targeted detector to produce a click. For target D0, if energy $E_1$ is chosen, D0 always clicks, while at $E_2$ the detector clicks only if the higher efficiency (original bias) is applied. When $E_1$ and $E_2$ are chosen randomly with equal probability 1/2, the detection probability at the higher efficiency is unity, while that at the lower efficiency is only 1/2. Therefore, the attack reproduces the correct detection probabilities that the protocol requires. Similarly, for target D1, Eve can choose one energy that always triggers a click and a second energy that produces a click only at the higher efficiency, again reproducing the correct detection probabilities. At the same time, both energies remain safely below $E^i_{\rm ng,never}$ shown in Fig. 6, so clicks are never produced in the absence of the gate and the alarm is not triggered. This allows Eve to defeat the countermeasure tracelessly.
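A minimal simulation of this two-energy strategy illustrates the point. This is a toy model with placeholder energies and click thresholds (in the experiment they are read off Fig. 7), not a reproduction of the measured detector behaviour:

```python
import random

# Eve's two-energy strategy against the two-efficiency countermeasure.
E_hi, E_lo = 1.0, 0.7     # E_hi always clicks; E_lo clicks only at full bias

def blinded_click(E, full_efficiency):
    # blinded detector: deterministic click threshold, lower at full bias
    return E > (0.6 if full_efficiency else 0.9)

random.seed(2)
stats = {True: [0, 0], False: [0, 0]}        # [clicks, trials] per setting
for _ in range(100_000):
    full = random.random() < 0.5             # Bob's secret efficiency choice
    E = random.choice((E_hi, E_lo))          # Eve's independent 50/50 choice
    stats[full][1] += 1
    stats[full][0] += blinded_click(E, full)
rate_full = stats[True][0] / stats[True][1]    # ~1: mimics efficiency eta
rate_half = stats[False][0] / stats[False][1]  # ~1/2: mimics eta/2
```

Even though Eve never learns Bob's efficiency choice, the conditional click rates come out proportional to the two efficiencies, which is exactly the statistic the countermeasure monitors.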

Second, we test the dependence of the click probability of the blinded detector on the time shift of the trigger pulse. The trigger pulse energy used in this test for D1 is slightly lower than that for D0; both energy levels are above $E^i_{\rm always}$ and are marked in red in Fig. 6. The measurement result is shown in Fig. 8.

Figure 8: Click probabilities under blinding attack versus relative time shift of the trigger pulse. Solid curves give the detection probability at the original bias voltage, and dashed curves at the lower bias voltage. Note that the latter extends over a relatively narrower time window. The trigger pulse energies for D0 and D1 are marked in red in Fig. 6.

This testing result illustrates another method to attack the countermeasure: randomly adjusting the time shift of the trigger pulse. For D0, after fixing a suitable trigger pulse energy, Eve can always trigger a click by choosing a time shift $t_1$ close to the middle of the gate, but only trigger a click at the higher efficiency by choosing a time shift $t_2$ nearer the gate edge. Similarly, a pair of time shifts with the same properties exists for D1. Then, when Eve sends a trigger pulse to control D0, she randomly selects $t_1$ and $t_2$ with equal probability 1/2 to reproduce the correct detection efficiencies of D0. Eve uses the same strategy for D1. In this way, Eve also hacks the Clavis2 system tracelessly.

Generally, a finite set of decoy detection efficiency levels can be hacked by properly setting the probabilities of different attacking energy levels or time shifts. Take the trigger pulse energy as an example. According to the result in Fig. 7, it is reasonable to extrapolate that for $K$ decoy efficiency levels $\eta_1 > \eta_2 > \dots > \eta_K$, Eve can find $K$ distinct trigger pulse energies $E_1 < E_2 < \dots < E_K$. Then Eve applies $E_k$ ($k = 1, \dots, K$) with probability $q_k$, satisfying $\sum_k q_k = 1$ and chosen so as to reproduce every expected value of $\eta_j$, and hacks the system. We have so far assumed that applying energy $E_k$ causes 100% click probability at the $k$ highest decoy efficiency levels, and zero click probability at the remaining levels. However, this is not a necessary condition. More generally, under energy $E_k$, the click probability at efficiency level $\eta_j$ is some $p_{kj}$. To reproduce the expected efficiencies, we need to satisfy the following set of equations:

$\sum_{k=1}^{K} q_k\, p_{kj} = c\,\eta_j \quad (j = 1, \dots, K), \qquad \sum_{k=1}^{K} q_k = 1$,   (5)

where $c$ is a proportionality constant.

Solving these equations yields the values $q_k$. A worse case for Eve would be if no valid solution for all $q_k$ exists, which means she may only have partial control of Bob's detection statistics. However, even partial control breaks the assumption in the security proof Lim et al. (2015) that Eve cannot form faked states whose click probability is conditioned on Bob's randomly chosen efficiency. A quantitative analysis would first require an updated security proof.
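In the step-like special case described above, Eq. (5) has a closed-form solution by telescoping, sketched below with illustrative decoy efficiencies (our own construction, not measured data):

```python
# Solving Eq. (5) in the step-like special case: energies E_1 < ... < E_K
# such that E_k clicks with certainty at the k highest decoy efficiency
# levels and never otherwise, i.e. p_kj = 1 for j <= k and 0 otherwise
# (1-indexed, efficiencies sorted in decreasing order).
def solve_q(eta):
    """eta: decoy efficiencies in decreasing order. Returns q_k with
    sum_k q_k = 1 and click probability at level j equal to eta_j/eta_1."""
    ext = list(eta) + [0.0]
    return [(ext[k] - ext[k + 1]) / eta[0] for k in range(len(eta))]

eta = [1.0, 2 / 3, 1 / 3]   # illustrative decoy efficiency levels
q = solve_q(eta)
# click probability at level j is the total weight of energies E_k, k >= j
probs = [sum(q[j:]) for j in range(len(eta))]
```

The recovered click probabilities are proportional to the decoy efficiencies, so Eve's mixture of energies mimics honest detection statistics at every decoy level simultaneously.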

From the above testing and analysis of the implementation that changes the bias voltage, we expect that an alternative implementation that changes the gate voltage amplitude Lim et al. (2015), or that adds an intensity modulator in front of the detectors Moroder et al. (2009), may leave a similar loophole. With an intensity modulator, the energy of the trigger pulse arriving at the detector is not constant but depends on the modulation. However, this case is similar to gate voltage modulation, as only the combined effect of the gate signal and the trigger pulse energy matters. Therefore, we expect results similar to Figs. 7 and 8, although the required trigger pulse energies and time shifts might differ.

The reason for this practical loophole is a wrong assumption made by Lim and colleagues Lim et al. (2015). They assume that Eve cannot generate faked states that trigger detections with probabilities proportional to the original photon detection efficiency. Here we have shown this is in fact possible. Therefore, the model of a practical detector in a security analysis should be more precise, if one wishes to close the detector control loophole without resorting to measurement-device-independent QKD.

VI Our attacks in a black-box setting

According to Kerckhoffs' principle Kerckhoffs (1883), Eve knows everything about the algorithms and hardware of Alice's and Bob's boxes, including the precise values of equipment parameters. The classical security community has practiced Kerckhoffs' principle since the 1970s, and widely agrees that this is a good approach to implementation security Naor (2003). This is supported by many examples of cryptographic systems that did not follow this principle and were compromised Singh (1999). The quantum academic community certainly agrees that QKD should be made secure in this setting, which is necessary for QKD to be unconditionally secure Lo and Chau (1999); Shor and Preskill (2000); Lütkenhaus (2000); Mayers (2001); Gottesman et al. (2004); Renner et al. (2005).

However, it is also a practically interesting question whether any proposed attack can be mounted on today's commercial QKD systems in a black-box setting, where Eve only has access to the public communication lines but cannot directly measure signals and analog parameter values inside Alice's and Bob's boxes Gisin (). In this realistic scenario, Eve may purchase (or acquire by other means) a sample of the system hardware, open it, make internal measurements and rehearse her attacks on it. Then she has to eavesdrop on her actual target, an installed system sample to whose boxes she has not had physical access. Although the latter sample can be of the same model and design, it will generally have different values of internal analog parameters, owing to sample-to-sample variation in system components. A full implementation of our attacks in this scenario remains to be tested. In this setting it is of utmost importance for Eve to avoid triggering clicks in the absence of the gate, because this would very quickly brick the system and risk revealing her attack attempt. The original blinding attack, which applies the trigger after the gate, becomes very sensitive to the precise values of thresholds in the presence of the first version of the countermeasure (Fig. 2). For this reason, we think the countermeasure would likely be triggered by the original attack in the realistic black-box setting.

Our modified attack, which applies the trigger inside the gate, will likely avoid triggering the alarm, because the no-gate threshold energies are much higher than the energies required for detector control (Fig. 6). It also tolerates some fluctuation in the experimental parameters for detector control. For example, when Eve applies blinding power and trigger pulse energy and times her trigger pulse at the middle of the gate, we have verified that the attack still works perfectly for up to change in the trigger energy (see Fig. 6) or up to change in the trigger timing. This makes it robust against reasonably expected fluctuations and imprecision of the system parameters. In particular, the timing accuracy required for our attack is much coarser than the several-tens-of-picoseconds precision Alice and Bob use in normal operation Jain et al. (2011). The trigger energy setting precision is similar to that of the original attack Lydersen et al. (2010a).

Eve may need a few attempts to set a correct trigger energy when attacking a new copy of the system. She can do this by starting at a low trigger energy and attempting several increasing values of energy while watching the classical traffic between Alice and Bob for the success or failure of the QKD session she has attacked Makarov and Hjelme (2005). A QKD session that fails because of too low a detection efficiency is a naturally occurring event that is part of normal system operation; it does not raise an alarm and is recovered from automatically in Clavis2 Jain et al. (2011); Makarov et al. ().
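The energy-scan strategy above can be sketched as a simple ramp search. This is a hypothetical sketch, not the authors' code: the predicate `session_succeeds` stands in for Eve observing the classical Alice–Bob traffic, and all names and units are ours.

```python
def find_trigger_energy(session_succeeds, start, step, max_tries):
    """Ramp the trigger-pulse energy upwards until an attacked QKD
    session completes successfully (hypothetical search strategy).

    A failed session merely looks like naturally low detection
    efficiency to Alice and Bob, so the scan raises no alarm."""
    energy = start
    for _ in range(max_tries):
        if session_succeeds(energy):
            return energy  # working attack parameters found
        energy += step
    return None  # gave up within the allowed number of attempts
```

For example, if sessions only succeed once the trigger energy reaches 5 (arbitrary units), `find_trigger_energy(lambda e: e >= 5, 1, 1, 10)` returns 5.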

A full two-level implementation of the countermeasure may require Eve to run more attempts, because a finer degree of control is required over the trigger pulse energy and timing. Yet, similarly to the first countermeasure implementation, the no-gate trigger energy that would raise an alarm remains safely above the energies required for detector control. The practicality of the attack in the black-box setting is thus difficult to predict without the actual industrial implementation of the full countermeasure and a demonstration of the full attack, which can be a future study.

VII. Conclusion

We have tested the first implementation of the countermeasure against the blinding attack in the commercial QKD system Clavis2. Our testing demonstrates that the presently implemented countermeasure is effective against the original blinding attack but not against a modified blinding attack. The modified attack fully controls Bob’s single-photon detectors but does not trigger the security alarm. It is similar to the original detector blinding attack Lydersen et al. (2010a), with the only difference that the trigger pulses are time-aligned to coincide with the detector gates instead of following them. We argue that this attack should be implementable in practice against an installed QKD communication line where Eve has no physical access to Alice’s and Bob’s devices for characterisation; however, such a full demonstration has not yet been done, to our knowledge.

We have also tested the full proposed implementation of the countermeasure with two non-zero efficiency levels, and found its security to be unreliable despite the predictions of the theory proposal Lim et al. (2015). From the current testing results, the bright-pulse triggering probabilities of the blinded detectors depend on several factors, including the timing and energy of the trigger pulse (see Section V). This in principle allows Eve to compromise the full countermeasure implementation.

We have tested the countermeasure implemented with gated single-photon detectors (SPDs). The idea of random detection efficiency can be applied to other types of SPDs that are also vulnerable to the blinding attack: free-running SPDs Sauge et al. (2011) and superconducting nanowire SPDs Lydersen et al. (2011a). However, the countermeasure based on these detectors might still be hackable. Since the efficiencies of these types of SPDs depend on the bias voltage or current, varying these bias signals likely changes other parameters inside the SPD and its electronics. Therefore, when the detection efficiency is randomized, other degrees of freedom might change as well. Eve may be able to exploit these side channels to hack the countermeasure. Of course, the exact outcome cannot be known until the countermeasures in different types of detectors are experimentally tested.

According to our testing results, this countermeasure is not as reliable as would be expected in the high-security environment of QKD. Although an ideal industrial countermeasure has not been achieved, everybody now has a clearer understanding of the detector loopholes. This process emphasizes the necessity of security testing every time practical QKD systems are developed or updated. We can only reach the final practical security of any QKD system after several iterations of implementation development and testing verification. Our countermeasure testing also illustrates that patching a loophole remains time-consuming and difficult. However, addressing practical vulnerabilities at the design stage of a QKD system is both cheaper and less messy than trying to retrofit patches onto an existing deployed solution. Addressing security at the design stage should be the goal whenever possible.

Acknowledgment

We thank C. C. W. Lim, N. Gisin, and E. Anisimova for discussions. This work was supported by Industry Canada, NSERC (programs Discovery and CryptoWorks21), CFI, Ontario MRI, US Office of Naval Research, ID Quantique, European Commission’s FET QICT SIQS and EMPIR 14IND05 MIQC2 projects. P.C. was supported by a Thai DPST scholarship.

Appendix A Background

In this section, we recap the operating principle of the single-photon detector, its implementation in Clavis2, and the original blinding attack Lydersen et al. (2010a). Most available single-photon detectors are APDs operating in Geiger mode, in which they are sensitive to single photons Cova et al. (2004). As shown in Fig. 9, when the APD is reverse-biased above its breakdown voltage, a single photon can cause a large current. If this current exceeds the comparator threshold, the electronics registers this as a photon detection (a ‘click’). After that, an external circuit quenches the avalanche by lowering the bias voltage below the breakdown voltage, and the APD returns to the linear mode. If the APD is illuminated by bright light (which does not happen in normal single-photon operation but can happen during an eavesdropping attack), the current in the linear mode is proportional to the incident bright optical power. The current threshold then becomes a threshold on the incident optical power that causes a click.
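The linear-mode click condition described above can be written as a one-line model: the photocurrent is proportional to the optical power, so the comparator's current threshold maps onto an optical power threshold. The sketch below is illustrative only; the responsivity, gain, and threshold values are hypothetical placeholders, not the measured parameters of the Clavis2 detectors.

```python
def apd_current(power_w, responsivity_a_per_w, gain):
    # Linear-mode photocurrent: I = R * M * P (simplified model)
    return responsivity_a_per_w * gain * power_w

def clicks(power_w, responsivity_a_per_w, gain, i_th_a):
    # The comparator current threshold I_th is equivalent to an
    # optical power threshold P_th = I_th / (R * M) in linear mode
    return apd_current(power_w, responsivity_a_per_w, gain) > i_th_a
```

With a hypothetical responsivity of 0.8 A/W, gain 10, and a 10 mA threshold, a 2 mW pulse produces 16 mA and clicks, while a 1 mW pulse produces 8 mA and does not.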

Figure 9: Linear-mode and Geiger-mode APD operation (reprinted from Lydersen et al. (2010a)).

From an engineering view, the detector can be analyzed via its circuit. Figure 10 shows an equivalent circuit diagram of the two detectors used in Clavis2. When no gate signal is applied, the APDs are biased slightly below their breakdown voltage by the negative high-voltage supply (using values from the sample of Clavis2 tested in our present study at the University of Waterloo, which is a different sample than in Refs. Lydersen et al., 2010a, b; Wiechers et al., 2011). To bring the APD into Geiger mode, an additional gate pulse is applied through a logic level converter DD1. The anode of the APD is AC-coupled to a fast comparator DA1. Since the capacitor C1 blocks the DC component, a pulse appears at the input of DA1 only when the current flowing through the APD changes. If the peak voltage of this pulse is greater than the positive threshold, the comparator produces a logic output signal indicating a click. Once a click in either of Bob’s two detectors is registered, the next 50 gates are not applied to either detector, which constitutes a deadtime to reduce afterpulsing.

If Eve sends bright c.w. illumination to the gated detectors, the bright light makes the APD generate a significant photocurrent that monotonically increases with the optical power. When we consider the effects of this current on the whole detector circuit (Fig. 10), the most relevant one is a reduction of the voltage across the APD. Although the high-voltage supply stays constant, the photocurrent causes a significant voltage drop across the bias resistor, and thus the voltage across the APD drops. If enough illumination power is applied, the APD voltage stays below the breakdown voltage even inside the gate, and the APD then always remains in the linear mode. The detector becomes blind to single photons. In our testing, we measure the voltage at test point T2 in Fig. 10 and refer to this voltage as the bias voltage in the text; it is close to the real voltage across the APD.
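The bias-droop mechanism reduces to Ohm's law across the bias resistor. The following sketch is a simplified numerical model under hypothetical component values (supply, breakdown voltage, resistor, and gate amplitude are placeholders, not the Clavis2 values); magnitudes of the negative supply are used for readability.

```python
def apd_bias(v_hv, i_photo_a, r_bias_ohm, v_gate=0.0):
    # Photocurrent through the bias resistor pulls the APD voltage down:
    # V_APD = V_HV + V_gate - I_photo * R_bias  (magnitudes, simplified)
    return v_hv + v_gate - i_photo_a * r_bias_ohm

def is_blinded(v_hv, v_br, i_photo_a, r_bias_ohm, v_gate):
    # The detector is blind when even the gated bias stays below breakdown,
    # so the APD never leaves the linear mode
    return apd_bias(v_hv, i_photo_a, r_bias_ohm, v_gate) < v_br
```

For instance, with a 50 V supply, 51 V breakdown, a 100 kΩ bias resistor, and a 3 V gate, no photocurrent leaves the gated APD above breakdown, while a 50 µA blinding photocurrent drops it to 48 V and blinds the detector.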

Figure 10: Equivalent detector bias and comparator circuit, as implemented in Clavis2 (reprinted from Lydersen et al. (2010a)).

After blinding Bob’s detectors, Eve can conduct a faked-state attack. Eve first intercepts all photons sent by Alice. Whenever Eve detects a photon, she resends the same state to Bob as a bright trigger pulse of a certain energy, superimposed on her blinding illumination. Only if Bob chooses the same measurement basis as Eve and applies the gate does one of Bob’s detectors click, and he then gets the same bit value as Eve. Otherwise, there is no click at Bob’s side. During the sifting procedure, Alice and Bob keep the bit values for which they have chosen the same basis, and so does Eve. Therefore Eve has identical bit values with Bob, introduces no extra QBER, and does not increase the alarm counter. Eve then listens to the public communication between Alice and Bob and performs the same error correction and privacy amplification procedures as them, to obtain an identical copy of their secret key Lydersen et al. (2010a).
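The sifting logic of the faked-state attack can be captured in a toy Monte Carlo simulation. This is purely illustrative (it models only basis choices and forced clicks, not the optics); all function and variable names are ours.

```python
import random

def faked_state_attack(n, rng):
    """Toy model of the faked-state attack on blinded detectors:
    Bob's detector clicks only when his basis matches Eve's, and the
    forced click reproduces Eve's bit, so the sifted key has no
    extra errors."""
    errors = n_clicks = 0
    for _ in range(n):
        eve_basis, eve_bit = rng.randint(0, 1), rng.randint(0, 1)
        bob_basis = rng.randint(0, 1)
        if bob_basis == eve_basis:   # bright trigger crosses the threshold
            n_clicks += 1
            bob_bit = eve_bit        # detector forced to Eve's outcome
            if bob_bit != eve_bit:
                errors += 1          # never happens in this idealized model
    return n_clicks, errors
```

Running it over many rounds shows roughly half of Bob's gates producing clicks (the matched-basis fraction) and zero errors in the sifted bits, consistent with the attack adding no QBER.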

Appendix B Analysis of processes in the detector

For further understanding of the detector behaviour under successful blinding attack, we attempt to quantitatively model electrical and thermal processes in it. As we mentioned previously, the bias voltage decreases when the blinding power is applied. A measured relationship between and continuous blinding power is shown in Fig. 11. Detector 0 is blinded at and detector 1 is blinded at . Higher blinding illumination leads to lower bias voltage. This is consistent with the same measurement done for the original blinding attack Lydersen et al. (2010a).

In a detector blinded by c.w. laser illumination, the gain factor is affected not only by the power of the blinding laser, but also by the gate signal. When the APD is blinded and forced to work in the linear mode, it can be treated as an ordinary photodiode with a finite internal gain. Photoelectrons and holes are accelerated by a high electric field and initiate a chain of impact ionizations that generates secondary electron-hole pairs. Thus, the APD has an internal multiplication gain factor M, since one photon can yield many electrons of photocurrent flowing in the circuit. When the bias voltage is much lower than the breakdown voltage, M will be close to 1. However, the APD may not have any significant photosensitivity below the so-called punch-through voltage, below which the electrical field does not extend into the absorption layer of the InGaAs/InP heterostructure Hiskett et al. (2000).

Figure 11: Bias voltage of APDs versus c.w. blinding power.

We have done a measurement of small-signal gain of the APDs in Clavis2 by measuring their photocurrent response to a short optical pulse input. The results are shown in Fig. 12. There is virtually no photosensitivity below the punch-through voltage of about . Above that voltage starts at (corresponding to quantum efficiency assuming ), then rises above closer to . The gain values measured at are and , which is consistent with values from data sheets of commercial APDs. From the above measurements, we know that Eve can vary the amount of blinding power to the detectors to control the bias voltage and thus the gain factor.

After we blind Bob’s detectors in Clavis2, the gain factor is greater during the gate duration, because the gate signal raises the bias voltage. Thus the electrical charge generated by the APD in response to a trigger pulse applied in the gate is greater than when it is applied outside the gate. For example, in Fig. 4(c), the gate pulse alone contributes extra charge on top of the current that would be generated without the gate. When the trigger pulse is applied after the gate [Fig. 4(d)], the total charge of the two pulses is ; however, when the trigger pulse is moved into the gate [Fig. 4(e)], the total charge rises to . Therefore, a greater gain factor during the gated time helps the pulse cross the threshold.
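The charge difference between a trigger pulse inside and outside the gate follows directly from the gain change. The sketch below uses hypothetical gain values (the measured charges are given in Fig. 4 and are not reproduced here); it only illustrates the proportionality Q = R · M · E.

```python
def pulse_charge(trigger_energy_j, responsivity_a_per_w, gain):
    # Charge collected from a trigger pulse of energy E:
    # Q = R * M * E; the gate raises the bias and hence the gain M
    return responsivity_a_per_w * gain * trigger_energy_j

# Same trigger pulse, outside vs. inside the gate (hypothetical gains):
q_out = pulse_charge(1e-12, 0.8, 3)    # lower gain outside the gate
q_in  = pulse_charge(1e-12, 0.8, 12)   # higher gain inside the gate
```

In this toy example the in-gate charge is four times the out-of-gate charge, which is why the same trigger energy can cross the comparator threshold inside the gate but not outside it.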

Figure 12: Gain versus APD bias voltage. Values of gain for bias voltages below were negligibly low for a practical attack, and below the sensitivity of our measurement method.

We have attempted to model the increased gain due to the gate. In our model, we consider a thermal effect and an internal resistance of the APD. On the one hand, an increased temperature raises the breakdown voltage Sze and Ng (2007). Electrical heating and the absorption of the blinding power result in a heat dissipation for detector 0 and for detector 1. Then, an estimated thermal resistance Lydersen et al. (2010b) between each APD chip and the cold plate converts the power dissipation into an increased temperature. The temperature-dependent breakdown voltage increases with a coefficient of about  Lydersen et al. (2010b). As a result, the breakdown voltage increases for detector 0 (1). Figure 12 shows the relation between the gain factor and the actual bias voltage in the linear mode. When the bias voltage is close to the breakdown voltage, the gain factor increases rapidly. On the other hand, we suppose the APD has a passive internal resistance, so the internal bias voltage across the ideal photodiode is less than the value we measure. By measuring the voltage of a stable avalanche pulse and calculating the current through the detector circuit when the avalanche happens, we obtain the internal resistance in detector 0 and in detector 1. Therefore, the real bias voltage under the blinding attack shown in Fig. 4(c–f) corresponds to a certain gain in detector 0, as shown in Fig. 12. When the gate is applied, the bias voltage rises and corresponds to a higher gain in Fig. 12. However, the measured charges in Fig. 4(d) and (e) indicate a much smaller gain change (when a trigger pulse is applied after the gate, it contributes the charge difference between the total charges in Fig. 4(c) and (d); when it is applied during the gate, it contributes the difference between the total charges in Fig. 4(c) and (e)). The discrepancy may be explained by a larger actual thermal resistance between the APD and the cold plate than we estimate, which should be verified in future research.
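The thermal part of this model can be sketched numerically: dissipated power times thermal resistance gives the temperature rise, which shifts the breakdown voltage through a temperature coefficient. All numeric values below are hypothetical placeholders, not the measured Clavis2 parameters, and the function name is ours.

```python
def breakdown_shift(p_blind_w, i_apd_a, v_apd_v, r_th_k_per_w, dvbr_dt_v_per_k):
    # Heat dissipated in the APD chip: absorbed optical power plus
    # electrical heating I * V
    p_diss = p_blind_w + i_apd_a * v_apd_v
    delta_t = p_diss * r_th_k_per_w      # temperature rise over the cold plate
    return dvbr_dt_v_per_k * delta_t     # resulting breakdown-voltage increase
```

For example, 1 mW of absorbed blinding power plus 1 mA flowing at 50 V dissipates 51 mW; with a hypothetical thermal resistance of 100 K/W and a coefficient of 0.1 V/K, the breakdown voltage rises by about 0.51 V.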

References

  • Naor (2003) M. Naor, in Advances in Cryptology – CRYPTO 2003 (Springer, Berlin, 2003) pp. 96–109.
  • ETS (2015) ETSI white paper no. 8: Quantum safe cryptography and security (ETSI, Sophia Antipolis, France, 2015).
  • Bennett and DiVincenzo (2000) C. H. Bennett and D. P. DiVincenzo, Nature 404, 247 (2000).
  • Shor (1994) P. W. Shor, in Proceedings of 35th Annual Symposium on Foundations of Computer Science (IEEE, 1994) pp. 124–134.
  • Bennett and Brassard (1984) C. H. Bennett and G. Brassard, in Proceedings of IEEE International Conference on Computers, Systems, and Signal Processing, Bangalore, India (IEEE Press, New York, 1984) pp. 175–179.
  • Ekert (1991) A. K. Ekert, Phys. Rev. Lett. 67, 661 (1991).
  • Gisin et al. (2002) N. Gisin, G. Ribordy, W. Tittel,  and H. Zbinden, Rev. Mod. Phys. 74, 145 (2002).
  • Scarani et al. (2009) V. Scarani, H. Bechmann-Pasquinucci, N. J. Cerf, M. Dušek, N. Lütkenhaus,  and M. Peev, Rev. Mod. Phys. 81, 1301 (2009).
  • Wootters and Zurek (1982) W. K. Wootters and W. H. Zurek, Nature 299, 802 (1982).
  • Lo and Chau (1999) H.-K. Lo and H. F. Chau, Science 283, 2050 (1999).
  • Shor and Preskill (2000) P. W. Shor and J. Preskill, Phys. Rev. Lett. 85, 441 (2000).
  • Lütkenhaus (2000) N. Lütkenhaus, Phys. Rev. A 61, 052304 (2000).
  • Mayers (2001) D. Mayers, J. ACM 48, 351 (2001).
  • Gottesman et al. (2004) D. Gottesman, H.-K. Lo, N. Lütkenhaus,  and J. Preskill, Quantum Inf. Comput. 4, 325 (2004).
  • Renner et al. (2005) R. Renner, N. Gisin,  and B. Kraus, Phys. Rev. A 72, 012332 (2005).
  • Bennett (1992) C. H. Bennett, Phys. Rev. Lett. 68, 3121 (1992).
  • Schmitt-Manderbach et al. (2007) T. Schmitt-Manderbach, H. Weier, M. Fürst, R. Ursin, F. Tiefenbacher, T. Scheidl, J. Perdigues, Z. Sodnik, C. Kurtsiefer, J. G. Rarity, A. Zeilinger,  and H. Weinfurter, Phys. Rev. Lett. 98, 010504 (2007).
  • Stucki et al. (2009) D. Stucki, N. Walenta, F. Vannel, R. T. Thew, N. Gisin, H. Zbinden, S. Gray, C. R. Towery,  and S. Ten, New J. Phys. 11, 075003 (2009).
  • Tang et al. (2015) Y.-L. Tang, H.-L. Yin, S.-J. Chen, Y. Liu, W.-J. Zhang, X. Jiang, L. Zhang, J. Wang, L.-X. You, J.-Y. Guan, D.-X. Yang, Z. Wang, H. Liang, Z. Zhang, N. Zhou, X. Ma, T.-Y. Chen, Q. Zhang,  and J.-W. Pan, IEEE J. Sel. Top. Quantum Electron. 21, 1 (2015).
  • (20) Several companies sell QKD systems: ID Quantique (Switzerland), http://www.idquantique.com/; the Austrian Institute of Technology (Austria), http://www.ait.ac.at/; QuantumCTek (China), http://www.quantum-info.com/; and Qasky (China), http://www.qasky.com/.
  • Vakhitov et al. (2001) A. Vakhitov, V. Makarov,  and D. R. Hjelme, J. Mod. Opt. 48, 2023 (2001).
  • Makarov et al. (2006) V. Makarov, A. Anisimov,  and J. Skaar, Phys. Rev. A 74, 022313 (2006), erratum ibid. 78, 019905 (2008).
  • Gisin et al. (2006) N. Gisin, S. Fasel, B. Kraus, H. Zbinden,  and G. Ribordy, Phys. Rev. A 73, 022320 (2006).
  • Qi et al. (2007) B. Qi, C.-H. F. Fung, H.-K. Lo,  and X. Ma, Quantum Inf. Comput. 7, 73 (2007).
  • Zhao et al. (2008) Y. Zhao, C.-H. Fung, B. Qi, C. Chen,  and H.-K. Lo, Phys. Rev. A 78, 042333 (2008).
  • Lydersen et al. (2010a) L. Lydersen, C. Wiechers, C. Wittmann, D. Elser, J. Skaar,  and V. Makarov, Nat. Photonics 4, 686 (2010a).
  • Sun et al. (2011) S.-H. Sun, M.-S. Jiang,  and L.-M. Liang, Phys. Rev. A 83, 062331 (2011).
  • Lydersen et al. (2011a) L. Lydersen, M. K. Akhlaghi, A. H. Majedi, J. Skaar,  and V. Makarov, New J. Phys. 13, 113042 (2011a).
  • Jouguet et al. (2013) P. Jouguet, S. Kunz-Jacques,  and E. Diamanti, Phys. Rev. A 87, 062313 (2013).
  • Sajeed et al. (2015) S. Sajeed, P. Chaiwongkhot, J.-P. Bourgoin, T. Jennewein, N. Lütkenhaus,  and V. Makarov, Phys. Rev. A 91, 062301 (2015).
  • Lydersen et al. (2010b) L. Lydersen, C. Wiechers, C. Wittmann, D. Elser, J. Skaar,  and V. Makarov, Opt. Express 18, 27938 (2010b).
  • Wiechers et al. (2011) C. Wiechers, L. Lydersen, C. Wittmann, D. Elser, J. Skaar, C. Marquardt, V. Makarov,  and G. Leuchs, New J. Phys. 13, 013043 (2011).
  • (33) M. Legre and G. Ribordy, “Apparatus and method for the detection of attacks taking control of the single photon detectors of a quantum cryptography apparatus by randomly changing their efficiency”, international patent appl. WO 2012/046135 A2 (filed 2010-10-10, published 2012-04-12).
  • Lim et al. (2015) C. C. W. Lim, N. Walenta, M. Legré, N. Gisin,  and H. Zbinden, IEEE J. Sel. Top. Quantum Electron. 21, 6601305 (2015).
  • (35) Clavis2 specification sheet, http://www.idquantique.com/images/stories/PDF/clavis2-quantum-key-distribution/clavis2-specs.pdf.
  • Stucki et al. (2002) D. Stucki, N. Gisin, O. Guinnard, G. Ribordy,  and H. Zbinden, New J. Phys. 4, 41 (2002).
  • Muller et al. (1997) A. Muller, T. Herzog, B. Huttner, W. Tittel, H. Zbinden,  and N. Gisin, Appl. Phys. Lett. 70, 793 (1997).
  • Lydersen et al. (2011b) L. Lydersen, N. Jain, C. Wittmann, Ø. Marøy, J. Skaar, C. Marquardt, V. Makarov,  and G. Leuchs, Phys. Rev. A 84, 032320 (2011b).
  • Moroder et al. (2009) T. Moroder, M. Curty,  and N. Lütkenhaus, New J. Phys. 11, 045008 (2009).
  • Kerckhoffs (1883) A. Kerckhoffs, J. des Sciences Militaires IX, 5 (1883).
  • Singh (1999) S. Singh, The code book: The Secret History of Codes and Code-breaking (Four Estate, London, 1999).
  • (42) N. Gisin,  abstract of keynote talk at QCrypt 2015, Tokyo, September 28 – October 2, 2015, arXiv:1508.00341 [quant-ph] .
  • Jain et al. (2011) N. Jain, C. Wittmann, L. Lydersen, C. Wiechers, D. Elser, C. Marquardt, V. Makarov,  and G. Leuchs, Phys. Rev. Lett. 107, 110501 (2011).
  • Makarov and Hjelme (2005) V. Makarov and D. R. Hjelme, J. Mod. Opt. 52, 691 (2005).
  • (45) V. Makarov, J.-P. Bourgoin, P. Chaiwongkhot, M. Gagné, T. Jennewein, S. Kaiser, R. Kashyap, M. Legré, C. Minshull,  and S. Sajeed, Phys. Rev. A .
  • Sauge et al. (2011) S. Sauge, L. Lydersen, A. Anisimov, J. Skaar,  and V. Makarov, Opt. Express 19, 23590 (2011).
  • Cova et al. (2004) S. Cova, M. Ghioni, A. Lotito, I. Rech,  and F. Zappa, J. Mod. Opt. 51, 1267 (2004).
  • Hiskett et al. (2000) P. A. Hiskett, G. S. Buller, A. Y. Loudon, J. M. Smith, I. Gontijo, A. C. Walker, P. D. Townsend,  and M. J. Robertson, Appl. Opt. 39, 6818 (2000).
  • Sze and Ng (2007) S. M. Sze and K. K. Ng, Physics of Semiconductor Devices (Wiley-Interscience, 2007).