Scalable repeater architectures for multi-party states
The vision of developing quantum networks entails multi-user applications, which require the generation of long-distance multi-party entangled states. The current rapid experimental progress in building prototype networks calls for new design concepts to guide future developments. Here we describe an experimentally feasible scheme implementing a two-dimensional repeater network for the robust distribution of three-party entangled states of GHZ type in the presence of excitation losses and detector dark counts — the main sources of errors in real-world hardware. Our approach is based on atomic or solid-state ensembles and employs built-in error-filtering mechanisms peculiar to intrinsically two-dimensional networks. This allows us to overcome the performance limitations of conventional one-dimensional ensemble-based networks distributing multi-party entangled states and provides an efficient design for future experiments with a clear perspective in terms of scalability.
The development of quantum networks holds the promise to realize quantum technologies such as secure communication schemes Bruss2000 (), distributed quantum computing Beals2013 () and metrological applications Komar14 (); Eldredge2016 (); Proctor2017 (); Ge2017 (). This prospect has resulted in intense efforts to build prototype networks with a few nodes. The creation of long-range bipartite entangled states has been demonstrated ExpDetTransfer (); TracyReview (); Chou2005 (); Yuan2008 (); Olmschenk2009 (); Lee2011 (); Krauter2013 (); Pfaff2014 (); Delteil2016 (); Hofmann2012 (); BellTest2 (), and multi-party entangled states shared between several remote parties may be realized in the near future SciNet (). The ability to distribute such multi-party entangled states over long distances Epping2016 () will be an essential prerequisite for future quantum networks KimbleReview (); Ying2012 (); Pant2017 (); Siddhartha2018 (); Wehnereaam9288 () consisting of large two-dimensional (2D) structures that allow for multi-user applications. This need and the current experimental possibilities raise the question of which type of architecture can distribute long-range multi-party entanglement in a practical and scalable fashion. We address this problem from an implementation-oriented point of view and propose a networking scheme that allows for a robust and efficient realization with current and near-future experimental means.
Future quantum networks will require different types of multi-party entangled states suitable for different tasks. We concentrate on distributing three-party GHZ states GHZ1 () between remote parties, which can be used for clock synchronization Komar14 (), quantum secret sharing VotingAndSecretSharing (); SecretSharing3GHZ (), quantum secret voting Voting3GHZ (), or for fundamental tests of nature Mermin1990 (). A key challenge for distributing quantum states over long distances is the fact that losses and decoherence scale exponentially with the distance Takeoka2014 (). To solve this problem, quantum repeaters Briegel98 (); ReviewJiang (); ReviewSangouard (); Muralidharan2015 () have been introduced, which however are inherently one-dimensional (1D) schemes aiming at generating a bipartite entangled state connecting two remote parties. An intrinsically 2D network can therefore be preferable for multipartite entanglement distribution: it was shown Wallnofer2016 () to have a higher error threshold than its 1D counterpart, under the assumptions of a generic noise model and the availability of full Bell-state analysis (or a universal gate set).
In this work, we propose an implementation-oriented 2D repeater scheme that, in contrast to Wallnofer2016 (), (i) can be realized using the constrained set of quantum operations available in atomic or solid-state ensembles and (ii) has a mechanism to mitigate excitation loss errors and detector dark counts, the major imperfections for this type of system. The resulting robust architecture is custom-tailored to implementations based on atomic or solid-state ensembles, as these systems are promising for realizing long-lived quantum memories deRiedmatten2015 (); QAP_review2010 (). While atomic ensembles allow only a limited set of operations for processing quantum information, record coherence lifetimes of up to six hours have been observed in rare-earth doped crystals Zhong2015 (). This makes them particularly promising for entanglement distribution over continental distances using repeater schemes, which inevitably requires memory times comparable to the duration of classical communication between remote nodes.
Our scheme is inspired by the seminal DLCZ proposal for generating Bell pairs between two nodes Duan2001 (), and requires optical cavities with good cooperativities, linear optical elements, and photodetectors. GHZ states are first generated over moderate distances and then merged to form GHZ states connecting increasingly remote parties [see inset in Fig. 1(a) and Fig. 2(a)]. Similar to the DLCZ proposal, our protocol does not require a universal gate set or a full Bell-state analysis. However, unlike the DLCZ scheme, the new protocol suppresses the propagation of so-called vacuum and multi-excitation errors. These errors result from excitation loss or from the detection of a dark count during elementary-state generation or a merging operation, and lead to the preparation of a state containing fewer or more excitations than expected. In the DLCZ scheme, the vacuum and multi-excitation errors propagate freely, dramatically decreasing the fidelity as the nesting level increases and thereby rendering the scheme unscalable (ReviewSangouard () and Sec. A1 of Supplemental Material). In comparison, our 2D scheme results in a truly scalable architecture. As shown in Fig. 1 and explained below, this feature allows our repeater scheme to cover longer distances than networks based on combining conventional (one-dimensional) quantum links.
We propose a nested quantum network structure Wallnofer2016 () consisting of segments in three-party entangled states of increasing size, as shown in Fig. 2(a). The scheme requires the ability to generate GHZ states on elementary segments of moderate length, and a generalized entanglement-swapping procedure that merges GHZ states, doubling the covered distance. We detail the generation and swapping operations separately in the next two paragraphs.
The generation of GHZ states at the elementary level is illustrated in Fig. 2(b). We consider two types of ensembles: (i) ensembles with a Λ-level scheme at nodes A and C, allowing for the efficient storage and read-out of photons Duan2001 (); SimonJ2007 (); Gorshkov2007 (), and (ii) ensembles with a double-Λ configuration placed in a cavity at node B. Information is encoded in the absence or presence of a collective spin excitation in the ensembles, i.e., in logical states defined by zero or one emitter transferred between two ground-state levels [see Fig. 2(b)]. Ensembles A and C are driven by weak coherent laser pulses, resulting in entangled states of the ensembles and the corresponding forward-scattered photons Duan2001 () (the role of the excitation parameter is explained in Sec. B of Supplemental Material; our analysis also includes higher-order terms). The capital and lowercase subscripts refer to the states of the ensemble and the corresponding emitted light field. Node B performs a gate operation on the incoming light field; the working mechanism and imperfections are detailed in Supplemental Material, Sec. G. The light fields emitted from nodes B and C are synchronously directed to a swapping station equipped with a 50/50 beamsplitter and two single-photon detectors, as illustrated in Fig. 2(b). Conditioning on the detection of a single photon allows for a probabilistic projection onto the desired three-party entangled state.
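The heralded projection at the swapping station can be illustrated with a minimal numerical sketch for just two emitting nodes, ignoring node B's gate, losses, and dark counts. The emission probability `p` and the hard truncation to at most one photon per mode are simplifying assumptions of this sketch, not parameters of the actual protocol:

```python
import numpy as np

p = 0.05  # assumed per-node emission probability (illustrative only)
dims = (2, 2, 2, 2)  # (memory A, photon a, memory C, photon c), 0/1 excitations

def ket(*idx):
    v = np.zeros(np.prod(dims))
    v[np.ravel_multi_index(idx, dims)] = 1.0
    return v

# Lowest-order DLCZ-type emission at each node: |0>_M |0>_p + sqrt(p) |1>_M |1>_p
psi = (ket(0, 0, 0, 0)
       + np.sqrt(p) * (ket(1, 1, 0, 0) + ket(0, 0, 1, 1))
       + p * ket(1, 1, 1, 1))

# 50/50 beamsplitter on the photonic pair, truncated to <=1 photon per mode:
# |10> -> (|10> + |01>)/sqrt2, |01> -> (|10> - |01>)/sqrt2; the |11> component
# bunches out of this subspace (Hong-Ou-Mandel) and is dropped here.
s = 1 / np.sqrt(2)
bs = np.zeros((4, 4))            # flat photon index = 2*n_a + n_c
bs[0, 0] = 1.0
bs[:, 2] = [0, s, s, 0]          # input |10>
bs[:, 1] = [0, -s, s, 0]         # input |01>
B = bs.reshape(2, 2, 2, 2)       # (out_a, out_c, in_a, in_c)

psi4 = np.einsum('xyab,iajb->ixjy', B, psi.reshape(dims))

# Herald on a single click in detector 1: photon pattern (1, 0) after the BS.
mem = psi4[:, 1, :, 0]
mem = mem / np.linalg.norm(mem)
print(mem)  # ~ (|10> + |01>)/sqrt2 over the two memories
```

Conditioning on a click in the other detector instead yields the orthogonal superposition with a relative minus sign, as in the DLCZ analysis.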
For merging three GHZ states, the states of ensembles at adjacent nodes are projected onto the one-excitation subspace. This operation is realized by reading out the atomic excitations Duan2001 (); SimonJ2007 (); Gorshkov2007 () and directing the emitted light fields to the swapping station described above [see also inset of Fig. 2(a)], where success is heralded by the detection of a single photon (see Sec. A of Supplemental Material for details), otherwise the resulting state is discarded.
Apart from the major challenge of mitigating photon transmission losses, experimental limitations at the individual nodes have to be taken into account. The most important local error sources are read-out inefficiency and imperfect photon detectors (with a finite detection inefficiency and a nonzero dark-count probability per photon-pulse detection). Moreover, quantum states stored in the ensembles degrade over time. Due to the encoding used here, the relevant physical mechanism — dephasing of individual emitters — leads to an effective decay of the stored collective excitations (Supplemental Material, Sec. F). In the considered ensemble-based setting, the main imperfections are therefore errors of loss type and detector dark counts.
The proposed 2D repeater protocol is designed to prevent the propagation of errors of this kind by employing the intrinsic redundancy of the native 2D network to filter out errors in the merging process at each nesting level. The filtering mechanism, illustrated in Fig. 3, works as follows. At each nesting level, one has to perform three merging operations to complete the entanglement generation between the three outermost nodes (X, Y, Z in Fig. 3). The third merging operation is redundant: if it fails, it reveals a deficit or an excess of excitations in the generated state, indicating that either a loss-type error or a multi-excitation error occurred at an earlier stage of the protocol. In our protocol, errors therefore have to conspire within one nesting level to pass undetected, which leads to an increased GHZ state preparation fidelity.
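A toy Monte Carlo model makes this suppression quantitative. Suppose each of the three segments entering a nesting level independently carries an excitation-number defect (−1 for a loss, +1 for a dark-count-induced extra excitation) with an assumed probability `eps`, and the redundant third merging heralds failure whenever the defects do not sum to zero. This caricature ignores the actual measurement statistics, but it shows how a first-order error rate is reduced to second order:

```python
import random

random.seed(0)
eps = 0.05        # assumed per-segment error probability (illustrative)
trials = 200_000

accepted = undetected = 0
for _ in range(trials):
    # defect per segment: -1 (lost excitation), 0 (ok), +1 (extra excitation)
    defects = random.choices([-1, 0, 1], weights=[eps / 2, 1 - eps, eps / 2], k=3)
    if sum(defects) != 0:
        continue              # redundant third merging fails: state discarded
    accepted += 1
    if any(defects):
        undetected += 1       # defects conspired to cancel within the level

raw_error = 1 - (1 - eps) ** 3            # error rate without the filtering
filtered_error = undetected / accepted    # ~ 1.5 * eps**2: quadratic suppression
print(round(raw_error, 4), round(filtered_error, 4))
```

Only defect patterns that cancel within the level (e.g., one loss plus one extra excitation) survive post-selection, which is exactly the conspiracy mentioned above.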
However, the filtering is a passive operation; it is therefore important to start the entanglement distribution with initial states of high fidelity. As detailed in Sec. A and B of Supplemental Material, the protocol for generating entanglement at the elementary level and the relative orientation of GHZ states in the scheme are chosen such that the involved measurements allow for (i) identifying imperfect merging operations for ideal input states and (ii) filtering out imperfect states for ideal merging operations. The high-fidelity initial states at the elementary level and the subsequent error filtering at each nesting level result in a scheme robust against vacuum and multi-excitation errors.
In the following, we highlight some of the main properties of our scheme and summarize the effect of limited memory coherence times in the presence of experimental imperfections. Figure 1(a) shows the fidelity of the generated state with respect to the target GHZ state for the proposed architecture. In Fig. 1(b), for comparison, we introduce a fidelity benchmark representing the performance of a 1D scheme based on long-distance bipartite entanglement distribution. The 1D benchmark uses a conventional ensemble-based repeater scheme that creates three long-distance Bell pairs, which are subsequently merged to generate the desired target state. The benchmark estimates an upper bound on the fidelity, as ideal local GHZ states [shown with double-lined triangles in the inset of Fig. 1(b)] are used for the final merging. In contrast to the 2D approach, which relies on multi-party entanglement at all network levels, the 1D scheme exploits the distribution of long-distance Bell pairs and involves the error-filtering mechanism only at the end (see Methods).
The most striking feature of Fig. 1 is that the 2D protocol can distribute GHZ states over increasingly large distances as the memory coherence time increases (or, alternatively, by using so-called multiplexing approaches Multiplexing (); Simon2007 (); Bonarota2011 (); Abruzzo2014 (); vanDam2017 (), in which several quantum memories or memory modes are used in parallel to compensate for limited coherence times). An increase of the coherence time significantly extends the distance that can be covered, since our protocol profits from using higher nesting levels for larger distances. This property makes the scheme practical and scalable. This is not the case for the 1D benchmark based on a regular DLCZ scheme, which is hampered by fundamental difficulties associated with the creation of long-range bipartite links (see Supplemental Material, Sec. A3 and Sec. C for details). The 1D strategy therefore reaches its performance limit at much shorter distances than the 2D scheme, as can be seen by comparing Fig. 1(b) with Fig. 1(a).
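The multiplexing argument can be made concrete with a one-liner: for an assumed per-mode heralding probability within the allowed storage window, the chance that at least one of m parallel modes succeeds approaches unity exponentially in m. All numbers below are illustrative, not taken from the analysis:

```python
# Probability that at least one of m parallel memory modes succeeds,
# given an assumed per-mode success probability p within the storage window.
p = 0.01

def p_any(m: int) -> float:
    return 1 - (1 - p) ** m

print(round(p_any(1), 3), round(p_any(100), 3))  # a hundredfold multiplexing
```

This is why additional memory modes can substitute, to a point, for longer coherence times: they raise the odds of a heralded success before the stored excitations decay.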
To further characterize the potential of our approach, we analyze the case of an infinite memory coherence time. Figure 4 shows the fidelity achievable by a network in which the number of nesting levels is optimal for a given distance in the presence of typical experimental imperfections. A network with more nesting levels divides the target distance into shorter segments, which reduces the probability of losing photons during transmission, at the cost of additional errors generated by the repeater scheme itself. For each target distance, there is therefore an ideal number of nesting levels beyond which adding repeater stations becomes counterproductive.
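The tradeoff behind this optimum can be caricatured in a few lines. In the sketch below, the elementary fidelity is degraded by dark counts when fiber transmission over a segment is poor, while each merging operation costs a constant fidelity factor. All parameter values and the loss-to-infidelity conversion are rough assumptions rather than the paper's model, and the swap count 3(3^n − 1)/2 assumes three swaps per three-to-one merge:

```python
import math

# All parameter values below are illustrative assumptions, not the paper's.
L_att = 22.0     # fiber attenuation length (km)
p_emit = 0.05    # per-node emission probability
p_dark = 1e-4    # dark-count probability per detection window
f_swap = 0.995   # fidelity retained per merging (swapping) operation

def toy_fidelity(L, n):
    """Crude figure of merit for an n-level 2D network spanning distance L:
    dark counts dilute the heralded elementary state when fiber transmission
    is poor, while every merging operation costs a constant factor."""
    L0 = L / 2 ** n                          # segment length (distance doubles per level)
    eta = math.exp(-L0 / (2 * L_att))        # photon survives half a segment
    F0 = p_emit * eta / (p_emit * eta + p_dark)  # signal vs dark-count odds
    n_swaps = 3 * (3 ** n - 1) // 2          # three swaps per three-to-one merge
    return F0 * f_swap ** n_swaps

L = 600.0
best = max(range(1, 7), key=lambda n: toy_fidelity(L, n))
print(best, round(toy_fidelity(L, best), 3))
```

For the assumed numbers and a 600 km target distance, the toy optimum sits at two nesting levels; beyond that, the multiplicative swap penalty outweighs the gain in elementary fidelity.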
The comparison of the 2D approach (black line) with the 1D benchmark (green line) for generating three-party GHZ states shows that the latter suffers from severe limitations in the achievable fidelities for large distances even for perfect quantum memories.
Here we emphasize that although the 1D benchmark uses fewer resources (memory cells) than our 2D scheme (see Supplemental Material Sec. A2), its performance saturates at much shorter distances and cannot be improved by simply increasing the resources invested in multiplexing. The presented 2D architecture allows us to convert the extra resources into longer entanglement distribution distances with higher target fidelities. Moreover, the additional memory cells improve the coverage of intrinsically 2D networks by allowing multi-party entanglement generation between arbitrary nodes of the network (see Supplemental Material Sec. A1), while 1D-based schemes entangle only the outermost ensembles.
A detailed analysis of the 2D protocol performance, including an assessment of the achievable rates, is provided in Supplemental Material (Sec. A3 and Sec. C). In the following, we discuss means to further improve the performance by temporal filtering.
Temporal filtering protocol
Limited coherence times of quantum memories are generally an important restricting factor for quantum repeaters. To mitigate their effect, we introduce a temporal filtering mechanism by defining a time window after which quantum states are discarded. During the probabilistic generation and merging procedure shown in Fig. 2, ensembles storing excitations for longer than this window are reset to their ground state, and the entanglement generation process involving these nodes starts anew. As a result, the influence of decoherence is decreased at the expense of a reduced rate. The time window can be changed dynamically by the control software without changing the hardware of a quantum network. Due to this added flexibility, different types of applications — requiring either high fidelities or high rates — can be accommodated. The corresponding tradeoff has been calculated semi-analytically (as described below) and is shown in Fig. 5.
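The fidelity-rate tradeoff of the time window can be reproduced with a toy Monte Carlo model of two links: the earlier link decays exponentially (with coherence time tau) while waiting for the later one, and a wait longer than the window forces a restart. The rate and coherence-time values below are arbitrary assumptions:

```python
import math
import random

random.seed(1)

# Illustrative parameters (assumed): link generation rate r, coherence time tau.
r, tau = 1.0, 2.0

def trial(t_window):
    """Generate two links; the earlier one decoheres while waiting for the
    later one. If the wait exceeds t_window, reset and start over."""
    t_total = 0.0
    while True:
        t1 = random.expovariate(r)
        t2 = random.expovariate(r)
        wait = abs(t1 - t2)
        t_total += max(t1, t2)
        if wait <= t_window:
            return math.exp(-wait / tau), t_total  # (fidelity factor, elapsed time)

def average(t_window, n=20000):
    fs, ts = zip(*(trial(t_window) for _ in range(n)))
    return sum(fs) / n, sum(ts) / n

f_tight, t_tight = average(0.5)   # tight window
f_loose, t_loose = average(5.0)   # loose window
print(f_tight > f_loose, t_tight > t_loose)
```

Tightening the window raises the average fidelity factor while increasing the average elapsed time, which is precisely the tradeoff plotted in Fig. 5.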
We used a numerical Monte Carlo algorithm and developed a new semi-analytical technique to analyze the performance of our scheme in the presence of realistic imperfections. The Monte Carlo method is very flexible and simulates the full repeater protocol step by step Cody2016 (); VanMeter2017 (). The strength of this exact simulation is also its weakness — the runtime is proportional to the entanglement generation time in the quantum network, growing quickly with the network’s scale.
The semi-analytical method overcomes this difficulty while still incorporating all relevant error sources, including finite memory coherence times. It takes time delays due to classical communication into account and can be used to analyze a large class of repeater schemes, including 1D protocols. The main idea is to determine the density-matrix distribution for the ensemble of states generated by the network up to a given time. More precisely, we obtain the Laplace image of this distribution, which fully describes the statistics of the network and allows one to infer the average generated state, the corresponding generation time, and other relevant statistical quantities.
We assume that the probability to generate an entangled state of an elementary segment within each time step is small, so that a continuous probability density with a constant generation rate can be introduced. As an example, we consider a linear network consisting of two links, where the first link is generated in some state before the second link is completed at a later time; the density-matrix distribution of the resulting joint state is obtained by summing over all intermediate creation times of the first link. The degradation of the earlier link during the waiting time due to finite memory lifetimes is taken into account using a decay superoperator. The corresponding Laplace image of the distribution then follows directly from this convolution structure.
The probability to generate and successfully merge the states of two segments during the time window is obtained by applying the merging superoperator to this distribution. The summation over all possible combinations of unsuccessful mergings preceding the generation of an entangled state results in a sum of multiple convolutions in the time domain. In the Laplace domain, the sum converges to a closed-form expression, Eq. (1), involving a sum over all possible orders in which the links are generated as well as the unit superoperator. The Laplace image (1) is used to find the average density matrix and generation time for the given nesting level of the network, as described above. To address the next network level, we apply the approximation that the segments are generated time-independently in the corresponding average state at the corresponding rate. In a recursive procedure, the state and generation time of an arbitrary network level can be found. As detailed in Sec. E of Supplemental Material, we introduce diagrams to conveniently handle probabilistic processes in networks of arbitrary complexity.
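The use of the Laplace image can be checked on the simplest instance: two independent links with scalar "states" and no decoherence. The waiting-time density of the later of two exponential arrivals has a closed-form Laplace image, and its derivative at the origin reproduces the mean generation time 3/(2r). The rate value is an arbitrary assumption:

```python
import random

random.seed(2)
r = 0.8  # link generation rate (illustrative)

# Laplace image of the waiting-time density for max(T1, T2), T_i ~ Exp(r):
# f(t) = 2 r e^{-rt} (1 - e^{-rt})  =>  F(s) = 2r/(r+s) - 2r/(2r+s)
def F(s):
    return 2 * r / (r + s) - 2 * r / (2 * r + s)

# Mean generation time from the Laplace image, E[T] = -F'(0),
# here via a central finite difference.
h = 1e-5
mean_from_laplace = -(F(h) - F(-h)) / (2 * h)

# Monte Carlo check of the same quantity.
n = 200_000
mean_mc = sum(max(random.expovariate(r), random.expovariate(r)) for _ in range(n)) / n

print(round(mean_from_laplace, 3))  # analytic value: 3/(2r) = 1.875
```

In the full method, the scalar density is replaced by an operator-valued one, and the decay and merging superoperators enter the convolution, but the bookkeeping in the Laplace domain is the same.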
We have proposed a scalable 2D architecture for generating long-distance multi-party entanglement and provided a full performance analysis with emphasis on the effect of finite memory lifetimes. Covering increasingly long distances, as shown in Fig. 1, can be achieved either by using memories with long coherence times or by using multiplexing approaches Multiplexing (); Simon2007 (); Bonarota2011 (); Abruzzo2014 (); vanDam2017 (), in which several memory modes are used in parallel. The presented scheme provides a flexible structure for creating GHZ states between arbitrary nodes of the 2D network, not only between the outermost ensembles. The logical topology of the network [shown in Fig. 2(a)] is thereby not identical to the topology of the required fiber links, which can easily be adjusted to accommodate urban constraints. We are primarily interested in metropolitan distances and applications requiring only moderate bit rates. Important examples include secret voting and the protection of classified information that requires several parties for decryption VotingAndSecretSharing (); SecretSharing3GHZ ().
The presented scheme can be modified to work without a cavity at the expense of using a larger number of ensembles (see Supplemental Material, Sec. H). Similarly, it can be adapted to accommodate polarization-type qubits Duan2001 (); ReviewSangouard () and two-click conditioning Sangouard2008 (), enhancing its resilience to fiber-length fluctuations. It will be interesting to develop similarly robust schemes for other platforms such as trapped ions TracyReview (); Duan2010 (), in which a universal gate set can be implemented. In this case, more elaborate 2D repeater protocols involving quantum error correction ReviewJiang () instead of error filtering could be envisaged.
Parameters of the simulated network
The results of the simulations presented in Figs. 1, 4 and 5 are obtained with fixed values of the following network parameters: the detection inefficiency, the read-out inefficiency, the fiber attenuation length, the gate efficiency (see text), the signal duration, and the dark-count probability in the signal measurement operation.
One-dimensional benchmark scheme
Here we present an alternative, one-dimensional (1D) repeater approach for the distribution of multi-partite GHZ states, which provides the fidelity benchmark used in the present work. This 1D benchmark scheme is based on the generation of long-range bipartite quantum states created using regular DLCZ quantum repeaters. A comparison of the nesting-level structures of the 1D benchmark scheme and the proposed 2D scheme is presented in Fig. 6(a).
While two links are enough to distribute entanglement between three parties, we equip the 1D scheme with a third link, which is used to implement an error-filtering procedure in the final step, similar to the original DLCZ protocol, as explained below. To ensure the robustness of the benchmark, we assume that the distant parties X, Y, and Z of the 1D scheme have access to ideal GHZ states that can be generated on demand. This provides an upper bound on the performance of 1D schemes of DLCZ type for the distribution of tripartite GHZ states. The corresponding fidelity of the 1D scheme therefore serves as a benchmark for our 2D scheme.
Figure 6(b) shows the final merging step of the 1D scheme, which acts as an error-filtering operation. In this step, five probabilistic swapping operations are followed by a final one, which is deterministic in the ideal case. A failure of this last swapping operation indicates an erroneous state. More details about the 1D benchmark scheme are presented in Sec. A of Supplemental Material.
Acknowledgements. We thank M. Afzelius and P. Jobez for fruitful discussions on solid-state ensemble-based implementations and J. Wallnöfer for his input on 2D architectures. Research was sponsored by the Swiss National Foundation (SNSF) through grant number PP00P2-150579, by the Austrian Science Fund (FWF): P28000-N27, P30937-N27, and by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-15-2-0060. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
The supplementary material contains eight sections. In Sec. A, we explain the built-in error-filtering mechanism of the two-dimensional (2D) quantum repeater scheme put forward in the main text, introduce the 1D repeater scheme used in the main text as a benchmark, and analyze the performance of the 2D scheme. Sec. B discusses the generation of the elementary GHZ states in detail. In Sec. C, we provide results for the average generation time of long-range GHZ states using the proposed repeater architecture, and in Sec. D, we briefly review the distillability criterion used in the main text Dur1999 (). In Sec. E, we explain the numerical methods that have been used to study the performance of our scheme. In Sec. F, the model used to describe decoherence in quantum memories is set forth. In Sec. G, we provide details on the nodes involving a double-Λ level scheme. In Sec. H, we describe an alternative version of our quantum repeater scheme that does not require cavities but instead relies on a larger number of resources and longer memory coherence times.
- A Error filtering mechanism
- B Generation of elementary GHZ states
- C Network generation time
- D Distillability of a GHZ state
- E Methods
- F Memory decoherence
- G Double-Λ ensemble scheme
- H 2D repeater scheme without cavities
Appendix A Error filtering mechanism
In this section we describe the basic working principles of the two-dimensional (2D) quantum repeater scheme put forward in the paper and explain its built-in error filtering mechanism. To demonstrate the effectiveness of this architecture, we compare our new 2D repeater scheme with a benchmark based on long-range quantum links that are generated using one-dimensional (1D) repeaters. In particular, we analyze the scaling of the respective achievable fidelities with increasing repeater nesting levels.
a.1 Two-dimensional quantum repeater scheme
In the main text, we propose a 2D repeater architecture based on Wallnofer2016 () and consider its application for the distribution of long-range tripartite GHZ states
shared between three parties X, Y, and Z. Our 2D scheme is inspired by the original (one-dimensional) DLCZ proposal Duan2001 () and utilizes similar ingredients: linear optical elements, photodetectors and atomic or solid state ensembles. As an additional element we introduce an optical cavity with a good cooperativity, which is used for the generation of GHZ states at the elementary level of the repeater protocol (see Sec. B). We note that the cavity can be dispensed with at the expense of using a large number of ensembles, as explained in Sec. H.
Like the original DLCZ protocol, our 2D repeater scheme relies on probabilistic merging operations (explained below). Due to the probabilistic character of the entanglement creation and merging procedures, it is advantageous to employ a nested structure, as shown in Fig. 2a of the main text. Figure 7(a) shows a single nesting level of our 2D scheme, in which three network segments in tripartite GHZ states are merged by three entanglement swapping operations, thereby extending the distance over which entanglement is shared. The merging operation for bipartite states is illustrated in Fig. 8(a): the states of two quantum memories are mapped to light fields which subsequently interfere at a balanced beamsplitter. Each of the two output ports of the beamsplitter is equipped with a single-photon detector (SPD), and the detection of a single photon in the measured light fields projects the joint system onto the desired entangled state.
Imperfections in realistic setups, such as fiber losses, memory read-out losses, memory decay, detector inefficiencies and dark counts — all of which we consider in our analysis — lead to errors in the states distributed by quantum repeaters. Figure 8(b) illustrates how loss errors or multi-excitation errors (see Sec. B.2) can freely propagate through the merging of 1D repeater links by passing unnoticed through the entanglement swapping operations. The original DLCZ repeater protocol therefore entails an error-filtering procedure as the final step of its implementation Duan2001 (). However, this filtering step at the end of the protocol is ineffective against errors accumulated during the distribution of the quantum states ReviewSangouard (). Our 2D repeater scheme therefore uses a filtering mechanism at each nesting level, as illustrated in Fig. 7(b). At each nesting level, two entanglement swapping operations probabilistically merge two tripartite GHZ states into a five-partite GHZ state. In the ideal case, the third entanglement swapping operation deterministically prepares the desired state, as the measurement of two anti-correlated qubits always results in a single detector click. An unsuccessful third merging operation, in which zero or more than one detector click is obtained, indicates that an error has occurred and leads to rejection of the produced state.
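The determinism of the third merging for ideal inputs, and its failure for a loss error, can be checked directly on the two to-be-merged qubits. This is a minimal sketch restricted to the single-photon subspace of the beamsplitter, with losses and dark counts omitted:

```python
import numpy as np

s = 1 / np.sqrt(2)

# Ideal five-partite GHZ-type state after two successful mergings: the two
# middle (to-be-merged) qubits are anti-correlated. The remaining qubits
# factor out for the purpose of counting detector clicks, so we keep only
# the middle pair, in the photon-number basis |00>, |01>, |10>, |11>.
psi_mid = s * np.array([0.0, 1.0, 1.0, 0.0])   # (|01> + |10>)/sqrt2

# Read-out maps each excitation to a photon; a 50/50 beamsplitter mixes the
# two photonic modes (truncated to <=1 photon per mode, as in the main text).
bs = np.array([[1, 0, 0, 0],
               [0, -s, s, 0],
               [0,  s, s, 0],
               [0,  0, 0, 0]])
out = bs @ psi_mid

p_one_click = abs(out[1]) ** 2 + abs(out[2]) ** 2   # exactly one detector fires
print(p_one_click)  # 1.0 for the ideal state: the third merging is deterministic

psi_loss = np.array([1.0, 0.0, 0.0, 0.0])  # an excitation was lost earlier
loss_clicks = abs((bs @ psi_loss)[1]) ** 2 + abs((bs @ psi_loss)[2]) ** 2
print(loss_clicks)  # 0.0: no click, so the loss error is heralded
```

With this phase convention, an anti-correlated input always places exactly one photon in the constructive output port, whereas a vacuum component produced by an earlier loss yields no click and is thereby heralded.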
The filtering mechanism of the 2D scheme adds an extra swapping operation per nesting level, leading to a faster growth of the total number of swapping operations compared to the 1D benchmark scheme presented in the next section. Due to the inevitable imperfections of the swapping process, the probability of generating errors grows with the number of swapping operations. Nevertheless, in Sec. A.3 we show that the fidelity drops more slowly with increasing nesting level for the 2D scheme than for the 1D benchmark. This means that the filtering mechanism eliminates more errors than it adds.
Apart from error mitigation, the intermediate nodes of the 2D architecture serve the important purpose of allowing for the generation of GHZ states between arbitrary communication partners in a 2D network, as shown in Fig. 9.
a.2 One-dimensional benchmark scheme
Here we present an alternative repeater approach for the distribution of multi-partite GHZ states, shown in Fig. 8(c), which provides the fidelity benchmark used in the main text. This one-dimensional (1D) benchmark scheme is based on the generation of long-range bipartite quantum states created using regular DLCZ quantum repeaters. While two links are enough to distribute entanglement between three parties, we equip the 1D scheme with a third link, which is used to implement an error-filtering procedure in the final step, similar to the original DLCZ protocol, as explained below. To provide a clear comparison and to restrict the discussion to the errors that occur within the different types of repeaters, we consider the 1D scheme assuming that the distant parties X, Y, and Z have access to ideal GHZ states that can be generated on demand. This provides an upper bound on the performance of 1D schemes of DLCZ type for the distribution of tripartite GHZ states. The corresponding fidelity of the 1D scheme therefore serves as a benchmark for our 2D scheme.
Figure 8(c) shows the final merging step of the 1D scheme, which acts as an error-filtering operation. In this step, five probabilistic swapping operations are followed by a final one, which is deterministic in the ideal case. A failure of this last swapping operation indicates an erroneous state. Due to the probabilistic nature of entanglement swapping operations, it is statistically faster to perform this final merging step in two successive stages. In the first stage, each link is merged with a corresponding ideal GHZ state, and in the second stage the resulting tripartite states are merged together, as indicated in Fig. 8(c) by gray dashed lines. A comparison of the nesting-level structures of the 2D scheme and the 1D benchmark scheme is presented in Fig. 8(d).
Prior to the final filtering step of the 1D benchmark approach, errors propagate through the whole process during which the long-range bipartite entangled quantum links are established, as noted in the previous section [see Fig. 8(b)]. Combinations of different types of errors often render them undetectable in the final filtering step. By comparison, the filtering at each nesting level used in the 2D approach reduces the probability of obtaining an undetectable combination of errors, at the expense of a more rapid growth of resources.
The required resources as a function of the network's linear size and nesting level can be estimated as follows: the 2D strategy requires a larger number of memory cells than the 1D benchmark scheme, including the ideal local GHZ states at the outermost nodes. (At the elementary level of the two repeater schemes, the basic segments consist of tripartite GHZ states for the 2D strategy and bipartite links (Bell states) for the 1D protocol. In the 1D case, three bipartite long-range links generated by DLCZ repeaters are merged into a tripartite GHZ state in the last step, as shown in Fig. 8(c); this final merging step represents the highest nesting level of the 1D scheme.)
a.3 Performance of the two-dimensional scheme
In this subsection we numerically and analytically analyze the fidelity scaling of the 2D repeater and the 1D benchmark schemes discussed above. More specifically, we calculate the drop of the fidelity as a function of the total number of imperfect swapping operations required for the nesting level,
where we have and for the 1D and 2D cases, respectively. (The 1D scheme has no meaningful tripartite entanglement at the elementary level, since it contains only a bipartite link; see also Fig. 8[d].) The fidelity associated with the network state at the repeater level is given by
with defined in Eq. (2). The main sources of errors reducing the efficiency of entanglement swapping operations are (i) imperfect memory read-out, (ii) imperfect photodetector efficiency, and (iii) dark counts. These imperfections are associated with an excitation loss probability during the read-out process, an excitation loss probability during the photon detection process, and a dark count probability given by the product of the dark count rate and the duration of the photon pulses. The model used for simulating imperfect entanglement swapping operations is explained in detail in Sec. E.
To examine the accumulation of errors caused by imperfect swapping operations, we consider the drop of the fidelity of the entangled states generated by the networks, relative to ideal states of the elementary segments (triangles and links for the 2D and 1D schemes, respectively), as a function of the total number of swapping operations in the limit of perfect quantum memory. The effect of memory imperfections is discussed in detail below.
The results, shown in Fig. 10, clearly indicate that the built-in filtering mechanism of the 2D repeater scheme leads to a significantly slower growth of the infidelity with an increasing number of swapping operations than the 1D benchmark strategy. These results have been obtained numerically by truncating the considered Hilbert space to include up to four excitations, i.e. up to the Fock state . To illustrate the operation of the filtering mechanisms for the analyzed 1D and 2D strategies, we show the initial density matrix (corresponding to an ideal GHZ state) in the left inset of Fig. 10, and the density matrices of the final states for the 1D (green line inset) and the 2D (black line inset) approach. All density matrices in the insets are projected onto the qubit subspace, which is a good approximation for representing the final network states. For the 2D repeater scheme, we mainly observe a decay of the coherences (dephasing), while the 1D benchmark strategy leads not only to dephasing, but also adds significant population to the diagonal elements of the density matrix. Therefore, the fidelity is higher in the 2D case at each nesting level, even though the number of required entanglement swapping operations grows faster than in the 1D benchmark approach. Figure 10 also demonstrates that the 2D scheme allows one to implement more repeater nesting levels before the generated entangled states cease to be distillable (see Sec. D for details on the distillation criterion).
The effect of imperfect swapping operations on the entanglement distribution can be described analytically in the limit in which the corresponding error probabilities are small, . In this case, the entangled states distributed by the network, assuming ideal elementary input states, are faithfully represented in the qubit basis ( and ). However, the excited state still needs to be considered perturbatively to account for the scattering of two photons (each from a different memory cell) into one output port of the beamsplitter, which can lead to a single click due to the finite detector efficiency.
Under these assumptions, the resulting states generated by the repeater networks can be presented in terms of the target GHZ state , the corresponding classically correlated state
and the diagonal density matrix , where is the identity matrix in the qubit basis.
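For concreteness, the three reference states appearing in this decomposition can be written down explicitly in the three-qubit computational basis. The following minimal numpy sketch builds the GHZ projector, its classically correlated counterpart, and the fully mixed state; the mixture weights at the end are arbitrary illustration values, not the coefficients of Eqs. (4) and (5):

```python
import numpy as np

# Three-qubit computational basis: index b in 0..7 labels |b> in binary.
dim = 8

# Target GHZ state |GHZ> = (|000> + |111>)/sqrt(2).
ghz = np.zeros(dim)
ghz[0b000] = ghz[0b111] = 1 / np.sqrt(2)
rho_ghz = np.outer(ghz, ghz)

# Classically correlated counterpart: equal mixture of |000> and |111>
# (same populations as rho_ghz, but all coherences removed).
rho_cc = np.zeros((dim, dim))
rho_cc[0b000, 0b000] = rho_cc[0b111, 0b111] = 0.5

# Fully mixed (identity) state in the qubit basis.
rho_id = np.eye(dim) / dim

def mixture(f, c):
    """rho = f*GHZ + c*classical + (1 - f - c)*identity (all trace one)."""
    return f * rho_ghz + c * rho_cc + (1 - f - c) * rho_id

rho = mixture(0.8, 0.1)
fidelity = ghz @ rho @ ghz        # <GHZ| rho |GHZ> = 0.8 + 0.05 + 0.0125
```

The GHZ fidelity of such a mixture is simply the weighted sum of the overlaps of the three components with the target state.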
The state generated by the 1D benchmark strategy at the nesting level is given by
The 2D repeater scheme generates the following state:
where we used Eqs. (3) to obtain the dominant scaling of the infidelities with the nesting level . The analytical results [Eqs. (6) and (7)] are in good agreement with the numerical data, as shown in Fig. 10 by the green (1D benchmark scheme) and black (2D scheme) dashed lines. The small deviation from the numerics for the 2D repeater curve is due to terms of higher order in the error probabilities and .
The error filtering mechanism is evident from the analytical expressions [Eqs. (4) and (5)] for the states generated by both schemes. The absence of contributions to first order in the error probabilities indicates the perfect filtering of single errors. In order to pass undetected, the errors have to conspire during the swapping procedure, as represented by the terms describing the excitation loss during memory read-out and the photon loss in the detector, respectively, each followed by a dark count during the detection. The contribution of these errors grows linearly with the number of swapping operations for the 2D repeater and quadratically for the 1D benchmark scheme.
The structure of the generated states is also different for the two network protocols. Dephasing, leading to the classically correlated state, and processes resembling thermal noise, resulting in the diagonal state, contribute comparably in the 1D benchmark strategy of entanglement generation. In contrast, at high nesting levels, the 2D repeater scheme suffers mainly from the loss of coherence (dephasing), while the process leading to the diagonal state is significantly suppressed by the built-in error filtering. The resilience of dephasing errors originates from the fact that swapping operations applied to the ideal state and to the dephased one yield the same outcomes, which renders such errors undetectable.
The numerical and analytical results of this section show that the proposed 2D repeater, in contrast to the 1D benchmark approach, can efficiently filter out errors introduced by the imperfect swapping operations and, particularly, by photon losses in the memory read-out operations. As shown in Sec. F, the decoherence in the memory cells leads to the same kind of excitation loss as the read-out inefficiency, and thus is also efficiently filtered out in the 2D scheme. This explains the main result of the paper: in the presence of realistic imperfections, the proposed 2D scheme scales significantly better with the network size than its analogs based on the 1D repeaters.
Appendix B Generation of elementary GHZ states
In this section we present a scheme for the probabilistic generation of the initial entangled states that constitute the elementary segments of the proposed 2D repeater protocol. We analyze the segment state and its generation time. It is shown that the use of a nonlinear node B (see Fig. 11) is beneficial for the creation of initial states of 2D networks with high rate and fidelity.
Our protocol for generating the elementary GHZ states is inspired by the original DLCZ proposal. The elementary network segment, shown in Fig. 11(a), consists of three nodes A, B and C. For simplicity we assume an equilateral triangle with side length . The nodes A and C contain atomic or solid state ensembles with a Λ-type level structure, and node B, considered in detail in Sec. G, employs a cold ensemble with a double-Λ configuration placed in a cavity. Other resources for the elementary state generation include lasers, fibers, a balanced beamsplitter and two single photon detectors (SPDs).
The ensembles store quantum information encoded in the absence or presence of a collective excitation, i.e. in the logical states and , where is the number of emitters per ensemble and , denote the emitter ground states, as shown in Fig. 11(a). The ensembles can be used to efficiently generate and retrieve excitations, as shown in Duan2001 (); Hammerer2010 ().
First, we consider an idealized case of the GHZ state generation to illustrate the working principle of the protocol. The effect of realistic imperfections is discussed in the next subsection. The target state for the elementary segment preparation reads , where the subscripts refer to the corresponding ensembles. The protocol is probabilistic and works as follows. For each generation attempt, the ensembles are initialized in the logical state . Then, weak laser pulses drive the ensembles at nodes A and C, coupling them to the outgoing photonic modes and , respectively, via an off-resonant Raman transition. Similarly to Duan2001 (), the resulting states are two-mode squeezed vacuum states
where the squeezing parameter is controlled by the driving laser and enters through the usual sech and cosh factors of the two-mode squeezed vacuum. To simplify the notation, we introduce the abbreviation . In the ideal case we choose .
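The two-mode squeezed vacuum state can be constructed explicitly in a truncated Fock space, as done in the paper's numerics. The sketch below is a minimal illustration; the truncation level and the value of the squeezing parameter are our own choices:

```python
import numpy as np

def tmsv(r, nmax=4):
    """Two-mode squeezed vacuum |psi> = sech(r) * sum_n tanh(r)^n |n, n>,
    truncated at nmax excitations per mode."""
    psi = np.zeros((nmax + 1, nmax + 1))
    for n in range(nmax + 1):
        psi[n, n] = np.tanh(r) ** n / np.cosh(r)
    return psi  # psi[n_memory, n_photon] amplitude

psi = tmsv(0.3)
norm = np.sum(psi ** 2)      # slightly below 1 due to truncation
p1 = psi[1, 1] ** 2          # probability of a single excitation pair
```

For small squeezing the single-pair probability scales quadratically with the squeezing parameter, while the truncation error is exponentially small, which justifies the low truncation used in the simulations.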
Next, both outgoing photon pulses propagate over the distance towards node B, such that mode is directed to the central swapping station, while mode enters node B. As illustrated in Fig. 11(a), node B contains an ensemble with a double-Λ configuration, which makes it possible to implement a nonlinear gate operation on the single-photon component of the incoming mode (see Sec. G). The gate action transfers the whole system to a state , where
Finally, mode interferes with mode at the balanced beamsplitter with the output fields measured by two SPDs. Upon the detection of a single photon, the elementary segment state is projected onto the GHZ state , otherwise the generation step is repeated. In the ideal case, the success probability of a generation attempt is .
A finite efficiency of the nonlinear gate leads to an optimal relation between the squeezing parameters that maximizes the fidelity of the generated state . This relation ensures equal probabilities of detecting a photon coming from the nodes A and C, provided that the photon-loss probabilities during propagation are also equal. For higher nesting levels there are more factors affecting the final state, so a numerical optimization of the squeezing parameters is necessary to achieve the maximum fidelity shown in Fig. 1 of the main text.
B.2 Elementary state generation with realistic imperfections
In this subsection we study the state of the elementary segment generated in the presence of realistic imperfections. The most important imperfections in the considered scenario are photon losses in the fiber with attenuation length , photon losses in the detectors with probability , dark counts with probability , and the finite efficiency of the nonlinear gate at node B (see Sec. G). Since the memory decay does not significantly affect the elementary segment preparation in the considered parameter regime ( and ), we neglect it in this subsection. This assumption allows us to use the simple optimal relation for the squeezing parameters that maximizes the fidelity of the elementary segment. In the numerical results of the main text, however, we fully account for the state decay in the memory during the propagation time , the detection time given by the pulse duration , and the propagation time of the classical communication , where is the speed of light in the fiber.
Below we provide an analytical calculation of the state generated by the protocol presented in the previous subsection, with the realistic imperfections modeled according to Sec. E. The probability of a dark count and the probability of a loss in a detector are treated as small parameters and are taken into account up to first order. The squeezing parameter [see Eq. (8)] is also assumed to be small and is kept only to low order. Under these assumptions, the density matrix of the elementary segment reads
where the states are given by
Here the vacuum state is generated by a dark count, the state is a result of a photon loss in the fiber AB or in the node B (to the left of the swapping station), and the state is produced by a photon loss between the node C and the swapping station. The states and contain on average one excitation more than the target GHZ state and thus result from so-called multi-excitation errors.
The coefficient in Eq. (11) is the success probability of a generation attempt. The coefficients in front of the states in the brackets read
One can infer from Eqs. (12) (assuming ) that the conditional probability of the multi-excitation errors grows with the squeezing parameter and could thus be suppressed by decreasing it. However, once the driving becomes too weak, the generated state is deteriorated by the vacuum component, as the coefficient starts to dominate. Physically, this corresponds to a laser driving the ensembles so weakly that most of the detector clicks are caused by dark counts while the ensembles stay in the vacuum state . Therefore, there is an optimal value of the squeezing parameter corresponding to a compromise between the two sources of errors.
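This compromise can be made quantitative with a toy model: write the infidelity as the sum of a multi-excitation term growing linearly with the squeezing parameter q and a dark-count term growing as p_dc/q. The coefficients a and b below are placeholders for illustration, not the actual coefficients of Eqs. (12); the point is the resulting scaling of the optimum with the square root of the dark count probability:

```python
import numpy as np

# Toy error budget (illustrative constants): multi-excitation errors grow
# with the squeezing parameter q, while the relative weight of
# dark-count-induced vacuum grows as p_dc / q.
a, b, p_dc = 1.0, 1.0, 1e-4

def infidelity(q):
    return a * q + b * p_dc / q

q_grid = np.logspace(-4, -1, 1000)
q_opt_num = q_grid[np.argmin(infidelity(q_grid))]
q_opt_an = np.sqrt(b * p_dc / a)   # stationary point of the toy model
```

Minimizing a*q + b*p_dc/q analytically gives q_opt = sqrt(b*p_dc/a), reproducing the compromise described in the text.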
In Fig. 12 we show the fidelity of the generated state and the average number of generation attempts as functions of the squeezing parameter . The analytical results are shown as dashed lines. To verify the analytical calculations, we present numerical results (solid lines) obtained by a method identical to the analytical analysis, but for the truncated Hilbert space including Fock states up to and without neglecting terms of higher order in the probabilities and . This is necessary to study the region of not-too-small squeezing parameters. As can be seen, the analytical data agree with the numerics in the relevant regime of moderate squeezing, which provides the maximal fidelity. The red dot indicates the optimal value with the corresponding maximal fidelity , which can be found analytically from Eqs. (11) and (12):
where we also neglected higher orders of .
The state of the elementary segments [given by Eq. (11)] that is generated by the scheme with the nonlinear gate (node B) contains three types of errors. During the merging of the elementary segments into a segment of a higher nesting level, these errors can be partially filtered, depending on the relative orientations of the elementary segments [two possible configurations are shown in Fig. 11(b)]. A proper orientation of the segments leads to the optimal filtering of these errors. Thus, the fidelity of the nesting-level state () is generally higher than the fidelity of the corresponding elementary segments, as shown in Fig. 12 by the dotted line. The relative orientation of the elementary segments is optimized numerically for each set of parameters in order to obtain the fidelity plot shown in Fig. 1 of the main text.
As can be seen from Fig. 12, the maximum fidelity of the nesting-level state is achieved for a squeezing parameter that is smaller than the one corresponding to the maximum of the elementary-segment fidelity. This indicates that the vacuum component of the elementary-segment states is filtered out more efficiently than the multi-excitation error. As presented in Fig. 5 of the main text, one can achieve an efficient rate/fidelity trade-off in the limit of long memory coherence times by varying this parameter. However, in the case of a short coherence time, as shown in the same plot, the time filtering protocol [V. Kuzmin et al., in preparation] allows for a more efficient trade-off.
Appendix C Network generation time
Figure 13 shows the average time it takes to generate the network state with the maximum fidelity shown in Fig. 1 of the main text. The corresponding calculation is explained in Sec. E. One can see that the 2D repeater scheme reaches higher fidelities and greater distribution distances than the 1D benchmark scheme, in exchange for a longer generation time. This is a consequence of the superior filtering mechanism built into the 2D scheme; however, being a passive technique, it still faces a rate-fidelity trade-off.
Multiplexing of the 2D network structure Collins2007 () makes it possible to significantly reduce the generation time and, simultaneously, to relax the requirements on the memory coherence time. Meanwhile, even with multiplexing, the 1D benchmark scheme is unable to achieve the distribution range and fidelity of the 2D repeater network.
Appendix D Distillability of a GHZ state
One of the characteristics considered in the main text is the maximum distance at which the repeater network can distribute quantum states that are distillable to the perfect GHZ state. This maximum distance, the distillation threshold, is obtained via the distillation criterion Dur1999 () and is indicated by red dash-dotted lines in Fig. 13 and in Fig. 1 of the main text.
The idea of the criterion is the following. We consider a family of three-qubit (A, B, and C) states of the form
where the parameters and are positive numbers restricted by the trace normalization, and
is the orthonormal tripartite GHZ basis, where with in binary notation. If the state has a negative partial transpose with respect to the qubits A and B, the maximally entangled bipartite state can be distilled from many copies of . This automatically means that if all three partial transposes are non-positive, one can distill the corresponding bipartite states and generate the target GHZ state using them [i.e., as shown in Fig. 8(c)]. Since arbitrary states can be depolarized to the form (14) without changing the diagonal elements Dur1999 (), this gives a sufficient condition for distillability of general states.
Explicitly, to decide whether the GHZ state is distillable from an ensemble of states , one needs to calculate the values
The state has a negative partial transpose with respect to qubit A (B, C) iff . Thus, a sufficient criterion for distillability is
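The negative-partial-transpose check can be sketched numerically for a simple family of GHZ-diagonal states mixed with white noise. This is a minimal numpy illustration of the criterion's mechanics; the specific mixture below is our example, not the general form (14):

```python
import numpy as np

def partial_transpose_first(rho):
    """Partial transpose of a 3-qubit density matrix w.r.t. the first qubit."""
    r = rho.reshape(2, 4, 2, 4)
    return r.transpose(2, 1, 0, 3).reshape(8, 8)

ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)   # |GHZ> = (|000> + |111>)/sqrt(2)

def is_npt(F):
    """Negative partial transpose of rho = F|GHZ><GHZ| + (1-F) I/8
    with respect to the first qubit."""
    rho = F * np.outer(ghz, ghz) + (1 - F) * np.eye(8) / 8
    return np.linalg.eigvalsh(partial_transpose_first(rho)).min() < -1e-12
```

For this particular family, the smallest eigenvalue of the partial transpose is (1-F)/8 - F/2, so the state becomes NPT (and hence distillable by the criterion above) as soon as the GHZ fraction exceeds 1/5.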
Appendix E Methods
This section explains the two methods we use for simulating the considered quantum networks. First, we describe the Monte Carlo (MC) method in subsection E.1; then, in subsection E.2, we provide an overview of the diagrammatic technique developed in this work.
E.1 Monte Carlo method
The idea of the MC method is to simulate numerically the full repeater protocol step by step. Each run of the program generates a trajectory with the corresponding total generation time and the final pure state of the network. Averaging over many runs we obtain the density matrix of the ensemble of states generated by the network and the corresponding mean generation time.
Throughout the simulation, the pure state of the network evolves in discrete steps describing elementary-segment preparation attempts and merging operations. The imperfections of the merging operations, along with the dissipative processes between the merging steps, are described by the corresponding superoperators acting on the system density matrix. A superoperator can be decomposed into a set of Kraus operators satisfying the completeness relation Nielsen2010 (). The result of a superoperator acting on a pure state is an ensemble of states with corresponding probabilities . The MC algorithm randomly chooses a state from this ensemble according to its probability .
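The sampling step can be illustrated for the photon-loss channel, whose Kraus decomposition on a truncated Fock space is standard. The sketch below is our own minimal illustration (the truncation level and loss probability are arbitrary):

```python
import numpy as np
from math import comb

def loss_kraus(p, nmax):
    """Kraus operators of a bosonic loss channel with loss probability p
    on a Fock space truncated at nmax: K_k maps |n> -> |n-k>."""
    ops = []
    for k in range(nmax + 1):
        K = np.zeros((nmax + 1, nmax + 1))
        for n in range(k, nmax + 1):
            K[n - k, n] = np.sqrt(comb(n, k) * (1 - p) ** (n - k) * p ** k)
        ops.append(K)
    return ops

def mc_step(psi, kraus, rng):
    """One Monte Carlo step: pick a Kraus branch with its Born probability
    and return the corresponding normalized post-measurement state."""
    probs = np.array([np.vdot(K @ psi, K @ psi).real for K in kraus])
    k = rng.choice(len(kraus), p=probs / probs.sum())
    out = kraus[k] @ psi
    return out / np.linalg.norm(out)

rng = np.random.default_rng(0)
kraus = loss_kraus(0.25, nmax=4)
comp = sum(K.T @ K for K in kraus)   # completeness: sum_k K†K = identity
```

Averaging the sampled pure states over many runs reproduces the action of the full superoperator on the density matrix, which is exactly how the MC trajectories are combined in the text.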
For the network simulation we take imperfections such as excitation losses and dark counts into account, which are described by the following superoperators
Here and are the operators of annihilation and creation of an excitation in the corresponding mode, and is the Lindblad superoperator. The process of photon losses in the fiber is described by with the coefficient , where is the fiber length and is the fiber attenuation length; the memory decay process is also described by the loss superoperator, with the coefficient , where is the decay time and is the memory coherence time.
Using the introduced imperfections, one can define a superoperator for the process of merging segments. As shown in Fig. 8(a), the merging operation is applied to two adjacent memory modes with indices and . The states of the two memories are read out with the loss probability . Subsequently, the photons interfere at the perfect balanced beamsplitter and are measured by two photon-number-resolving single-photon detectors (SPDs) with the loss probability and the dark count probability . The detection of a single photon projects the joint system onto an entangled state. Altogether, the merging superoperator reads
where the memory read-out process is described by . The dark counts and the inefficiency of the SPD at the output port of the beamsplitter are represented by , with the subsequent projection describing the detection of exactly one photon. The prefactor accounts for photon detection in the second mode, which yields the opposite phase of the resulting state and can be reduced to the first case by a corresponding phase-flip operation. The superoperator for the balanced beamsplitter reads
where is the annihilation operator of the mode .
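The beamsplitter transformation underlying this superoperator can be built explicitly in a truncated two-mode Fock space by exponentiating the Hermitian generator of the 50:50 beamsplitter. The following is a minimal numpy sketch (the truncation at two excitations per mode is our choice); the Hong-Ou-Mandel check at the end confirms the balanced splitting:

```python
import numpy as np

nmax = 2                     # Fock truncation per mode
d = nmax + 1
a = np.diag(np.sqrt(np.arange(1, d)), 1)   # annihilation operator, a|n> = sqrt(n)|n-1>
I = np.eye(d)
A, B = np.kron(a, I), np.kron(I, a)        # the two input modes

# Balanced (50:50) beamsplitter: U = exp(i*pi/4*(a†b + ab†)).
H = A.conj().T @ B + A @ B.conj().T        # Hermitian generator
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(1j * np.pi / 4 * w)) @ V.conj().T

def fock(n, m):
    v = np.zeros(d * d)
    v[n * d + m] = 1.0
    return v

# Hong-Ou-Mandel: |1,1> never yields one photon in each output port.
out = U @ fock(1, 1)
p_coincidence = abs(fock(1, 1) @ out) ** 2   # vanishes for a 50:50 splitter
p_bunched = abs(fock(2, 0) @ out) ** 2       # both photons exit one port
```

Because the generator is Hermitian even after truncation, the resulting matrix is exactly unitary on the truncated space, which keeps the simulated merging operation trace-preserving.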
The numerical calculations are performed in a truncated Hilbert space. The results presented in the main text account for Fock states up to , which is sufficient to capture the first-order term of the multi-excitation error.
To start the simulations, one needs the density matrix of the elementary segment and its generation probability for one attempt. The calculation of both quantities is done in terms of density matrices and is detailed in Sec. B. The probability for the elementary segment to be generated after a given number of attempts follows a geometric distribution. The MC calculations are initialized with each elementary segment having a corresponding 'age' and being in one of the eigenstates of the elementary density matrix, with the conditional probability given by the corresponding eigenvalue. Here, the duration of one attempt is set by the speed of light in the fiber and the photon pulse duration : it consists of the time of the photon-pulse propagation over the distance between a node and a swapping station, plus the time for the response via classical communication. For simplicity, we consider a network scheme with identical elementary segments having a certain fixed relative orientation [two possible orientations are shown in Fig. 11(b)]. Thus, the MC simulation starts with all elementary segments initialized in the same ensemble state and with the same generation probability.
The simulation proceeds as follows. If one segment is created at time and an adjacent one at time , the nodes of the first one experience decoherence (decay) for the intervening duration . Then the two segments are probabilistically merged by the superoperator . If the merging is successful, the state and generation time of the resulting network segment are saved, and the adjacent segment of the same nesting level is evaluated. After that, the older segment decays for the difference of the generation times, and the segments are merged again. If a swapping fails, the states of the corresponding segments have to be regenerated anew, while the global time keeps running.
The network simulation is performed recursively. To evaluate a segment state at a given nesting level, segments of the previous level are required, and so on down to the elementary segments. All states and ages of the intermediate segments are collected for the reconstruction of their density matrices and generation rates. Thus, as a result of the MC simulation, the statistics of all nesting levels up to the highest one are obtained.
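The recursive structure of the simulation can be sketched in a few lines. The toy model below tracks only generation times (quantum states and memory decay are omitted for clarity), and the success probabilities are our own illustrative values, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

P_ELEM = 0.1     # per-attempt success of elementary-segment generation (toy value)
P_MERGE = 0.5    # success probability of a merging operation (toy value)
T_ATTEMPT = 1.0  # duration of one elementary attempt (arbitrary units)

def gen_elementary():
    """Time to generate one elementary segment: geometric number of attempts."""
    return T_ATTEMPT * rng.geometric(P_ELEM)

def gen_segment(level):
    """Recursive generation time of a segment at the given nesting level,
    built by probabilistically merging two segments of the previous level."""
    if level == 0:
        return gen_elementary()
    total = 0.0
    while True:
        t1, t2 = gen_segment(level - 1), gen_segment(level - 1)
        total += max(t1, t2)          # wait for the slower sub-segment
        if rng.random() < P_MERGE:
            return total              # merging succeeded
        # on failure both sub-segments are regenerated while time keeps running

times = [gen_segment(2) for _ in range(2000)]
mean_t = np.mean(times)
```

Even in this stripped-down form, one sees the characteristic exponential growth of the generation time with the nesting level that makes the full state-resolved MC simulation expensive.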
Although the MC method is very flexible, it is extremely demanding in terms of time and computational resources: one needs to collect a large number of trajectories in order to achieve convergence to the true average values. In fact, the time for simulating one trajectory is proportional to the real network generation time and grows exponentially with the nesting level . Certain initial parameters can lead to arbitrarily long trajectories, and therefore to the occupation of a large amount of computational resources. This can be avoided by using the so-called Russian-roulette method KahnHerman (), which consistently terminates such trajectories and accounts for them in an unbiased way.
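A minimal illustration of such unbiased early termination is the following toy example, which estimates the mean number of attempts of a geometric process. The survival probability is a free parameter of our choosing (it must be large enough to keep the estimator variance bounded); killed trajectories score zero, and survivors are reweighted to compensate:

```python
import numpy as np

rng = np.random.default_rng(2)

def roulette_mean(p_success=0.1, survive=0.95, n_traj=100_000):
    """Unbiased Russian-roulette estimate of the mean number of attempts
    (true value 1/p_success) with early termination of long trajectories."""
    total = 0.0
    for _ in range(n_traj):
        attempts, weight = 0, 1.0
        while True:
            attempts += 1
            if rng.random() < p_success:
                total += weight * attempts   # successful trajectory scores
                break
            if rng.random() > survive:       # roulette kills the trajectory;
                break                        # it contributes zero
            weight /= survive                # survivors are reweighted
    return total / n_traj

est = roulette_mean()   # ≈ 1/p_success = 10
```

Each trajectory that survives k roulette checks carries weight survive**(-k), which exactly cancels the survival probability, so the estimator remains unbiased while no trajectory runs unboundedly long in practice.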
In view of the large computational time, the MC method becomes impractical for the simulation and subsequent optimization of scalable and complex networks, such as the 2D repeater network. We solve this problem by developing a new semi-analytical diagrammatic method, which significantly simplifies the simulation of probabilistic quantum repeaters.
E.2 Diagrammatic method
The idea of the diagrammatic method is to determine the full repeater-network statistics, which can then be used to obtain the average network state and the generation time . The statistics of the states generated by the network can be described by the density matrix distribution , such that is the unnormalized density matrix of the ensemble of states generated within the time period and is the probability to generate a state within this period. In what follows we find the Laplace transform of the density matrix distribution, denoted . Then, one can infer the average state generated by the network and the generation time as
To illustrate the approach, we provide a simple example of the network evaluation, without taking into account the communication time and the time filtering protocol. We consider the two-segment repeater shown in Fig. 8(a), where two links with states and are merged into a longer link with the state . We assume that the links are generated probabilistically in discrete time steps and that the success probability in each step is small . Under this assumption one can define the link generation rate and the continuous probability density to generate the link at time . Then, we can introduce the elementary diagram representing the density matrix distribution for the generation of the elementary link in the state at time
Here the vertical line of length represents the total link generation time and the circle denotes the successful generation event. Time flows upward.
Using diagram (20), one can represent the density matrix distribution for two links, and , prepared for merging at time as
where we sum over the two possible generation orders. In this expression we integrate over all possible preparation times of the earlier-generated link before the later one is prepared at time . The degradation of the earlier link during the waiting time, due to the finite memory lifetime, is taken into account using the superoperator , where are the annihilation operators for the two memory nodes of the link and is the Lindblad superoperator defined in the previous subsection.
The Laplace image of the distribution reads . Therefore, the average density matrix and the average time for preparation of the two links for merging can be found according to Eqs. as
Once both links are prepared, they are probabilistically merged into the link . This merging can be represented by the modified diagrams as
where the operator of the probabilistic merging is defined in Eq. .
The probability densities of the successful and unsuccessful merging events are and , respectively, where is the unit superoperator. To fully describe the network, one has to consider all possible trajectories consisting of different numbers of unsuccessful mergings followed by the final, successful one. The density matrix distribution for the network state generated at time after one unsuccessful merging is represented by the following diagram
where we integrate over all intermediate times of the unsuccessful merging. The convolution in Eq. (24) becomes a product in the Laplace domain: . The density matrix distribution of the final state generated by the network is described by the following infinite sum of diagrams
In the Laplace domain the sum becomes a geometric series which converges to
According to Eqs. the average density matrix and the average generation time for the merging of two links are found from Eq. (25) as
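As a consistency check of this construction, consider a toy two-segment repeater with per-step link-generation probability p and merging probability q (illustrative values of our choosing; memory decay and communication time are neglected, as in the example above). In this simplified model the geometric series yields a mean generation time E[T] = E[max(G1, G2)]/q, with G_i geometric waiting times, which can be verified directly against a Monte Carlo simulation:

```python
import numpy as np

rng = np.random.default_rng(3)
p, q = 0.2, 0.5   # per-step link generation and merging success (toy values)

# Analytic mean generation time (in steps) for the simplified model:
# E[max(G1, G2)] = 2/p - 1/(2p - p^2), since min(G1, G2) ~ Geometric(2p - p^2),
# and each failed merging restarts the generation of both links.
e_max = 2 / p - 1 / (2 * p - p * p)
t_analytic = e_max / q

def simulate():
    total = 0
    while True:
        total += max(rng.geometric(p), rng.geometric(p))  # wait for both links
        if rng.random() < q:                              # merging succeeds
            return total

t_mc = np.mean([simulate() for _ in range(100_000)])
```

The agreement between the two numbers illustrates how the Laplace-domain geometric series compactly resums all trajectories with arbitrarily many failed merging attempts.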
The next nesting level of the network can be evaluated by repeating the described procedure using the obtained state and rate as initial parameters for the new “elementary” links.
The application of the diagrammatic method for the evaluation of the proposed 2D repeater architecture, as well as the calculation of the communication time and the temporal filtering, shown in Fig. 5 of the main text, will be presented in detail in [V. Kuzmin et al., in preparation].
Appendix F Memory decoherence
As discussed in Section B.1, the quantum information in the ensembles is encoded in the collective excitations and . In this section we study the effect of memory decoherence on these collective excitations. Ensemble-based memories (both atomic and solid-state) decay mainly due to individual dephasing of the atoms (or impurities), which can be caused by atomic collisions or by fluctuating magnetic fields Hammerer2010 ().
Here we describe the evolution of the collective excitations in terms of Heisenberg-Langevin equations Barnett2002 () and show that the individual dephasing results simply in a decay of the collective variables
where we introduce canonical variables for spin polarized ensembles , , with the projections of collective spin. The Langevin noise operators are given by the correlators
Here is a single spin value, and is the number of spins (atoms).
To obtain the result above, we consider an ensemble of spins (atoms) which experience individual random rotations around a certain direction (-axis) due to perturbations caused by collisions and fluctuating magnetic fields. The preferred direction can be chosen by applying an extra magnetic field which creates an energy splitting of the atomic ground levels larger than the energy scale of the perturbation. The Hamiltonian describing the dephasing effect reads
where are Langevin forces with . The individual spin components obey the commutation relations (plus cyclic permutations). We are interested in the effective equations of motion for the collective spin components . Using the Heisenberg equations of motion for the individual spins (given by the Hamiltonian (28)) we obtain the equations for the collective spin,
Next, we formally solve the equations for the individual spins and substitute the solutions into the equations for the collective variables (29)-(31). Using the -correlated property of the Langevin noise operators we obtain the effective equations of motion for the collective spin components,
Here are the initial states of atomic spins. For an initially polarized ensemble and using and one arrives at the effective equations of motion for the canonical collective variables (26), (27). These equations describe a decay of continuous variables with the minimal quantum noise and corresponding decay rate . Our numerical simulations employ this model for the quantum memory decoherence.
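The statement that independent single-atom dephasing produces a simple exponential decay of the collective transverse spin can be checked with a direct Monte Carlo over individual spins. The sketch below is a minimal illustration with our own rate convention (each phase is a Wiener process with variance 2*gamma*t, so the mean transverse polarization decays as exp(-gamma*t)):

```python
import numpy as np

rng = np.random.default_rng(4)

N, gamma, dt, steps = 5000, 0.2, 0.01, 500   # 5000 spins, final time t = 5

# Each spin accumulates an independent random phase (rotation about z).
phi = np.zeros(N)
for _ in range(steps):
    phi += np.sqrt(2 * gamma * dt) * rng.normal(size=N)

t_final = steps * dt
sx_collective = np.mean(np.cos(phi))   # collective transverse spin per atom
expected = np.exp(-gamma * t_final)    # Gaussian phase average: exp(-gamma*t)
```

Averaging exp(i*phi) over the Gaussian phase distribution gives exp(-Var(phi)/2) = exp(-gamma*t), reproducing the collective decay obtained from the Heisenberg-Langevin treatment above.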