Distributed Quantum Computation Architecture Using Semiconductor Nanophotonics
In a large-scale quantum computer, the cost of communications will dominate performance and resource requirements, place severe demands on the technology, and constrain the architecture. Unfortunately, fault-tolerant computers based entirely on photons with probabilistic gates, though equipped with “built-in” communication, have very large resource overheads; likewise, computers with reliable probabilistic gates between photons or quantum memories may lack sufficient communication resources in the presence of realistic optical losses. Here, we consider a compromise architecture, in which semiconductor spin qubits are coupled by bright laser pulses through nanophotonic waveguides and cavities using a combination of frequent probabilistic and sparse deterministic entanglement mechanisms. The large photonic resource requirements incurred by the use of probabilistic gates for quantum communication are mitigated in part by the potentially high-speed operation of the semiconductor nanophotonic hardware. The system employs topological cluster-state quantum error correction to achieve fault tolerance. Our results suggest that such an architecture/technology combination has the potential to scale to a system capable of attacking classically intractable computational problems.
Keywords: distributed quantum computation; topological fault tolerance; quantum multicomputer; nanophotonics.
Small quantum computers are not easy to build, but are certainly possible. For these, it is sufficient to consider the five basic DiVincenzo criteria: the ability to add qubits, high-fidelity initialization and measurement, low decoherence, and a universal set of quantum gates. However, these criteria are insufficient for a large-scale quantum computer. DiVincenzo’s two additional communications criteria — the ability to convert between stationary and mobile qubit representations, and to faithfully transport the mobile ones from one location to another and convert back to the stationary representation — are also critical, but so are gate speed (“clock rate”), the parallel execution of gates, the necessity for feasible large-scale classical control systems and feed-forward control, and the overriding issues of manufacturing, including the reproducibility of structures that affect key tuning parameters. In light of these considerations, the prospects for large-scale quantum computing are less certain.
Advances in understanding what constitutes an attractive technology for a quantum computer are married to advances in quantum error correction. These improvements include the theoretical thresholds below which the application of quantum error correction actually improves the error rate of the system, increases in the applicability of known classical techniques, understanding of feasible implementation of error correcting codes, design of error suppression techniques suited to particular technologies or error models, advances in purification techniques, and experimental advances toward implementation. Among the most important, and radical, new ideas in quantum error correction is topological quantum error correction (tQEC), for example surface codes. These codes are attracting attention due to their high error thresholds and their minimal demands on interconnect geometries, but work has just begun on understanding the impact of tQEC on quantum computer architecture, including determining the hardware resources necessary and the performance to be expected.
The effective fault tolerance threshold in tQEC depends critically on the microarchitecture of a system, principally the set of qubits which can be regarded as direct neighbors of each qubit. As connectivity between qubits increases, both the number of operations required to execute error correction and the opportunities for “crosstalk” created when sensitive qubits are moved past one another decline, allowing the system to approach theoretical limits more closely.
Here, we argue that even for tQEC schemes that require only nearest-neighbor quantum gates in a two-dimensional lattice geometry, communication resources will continue to be critical. We present an architecture sketch in which efficient quantum communication is used to compensate for architectural inhomogeneities, such as physical qubits that must be separated by large effective distances due to hardware constraints, as well as qubits missing from the lattice due to manufacturing defects. Assuming a homogeneous architecture may be acceptable for small-scale systems, but to create a system that will grow to solve practical, real-world problems, distributed computation and a focus on the necessary communications are required. Further, our design explicitly recognizes that not all communications channels are identical; they vary in the fidelity of the entanglement created and in the physical and temporal resources required. This philosophy borrows heavily from established principles in classical computer architecture. Classically, satisfying the demands of data communication is one of the key activities of system architects. Our design process incorporates this philosophy.
No computing system can be designed without first considering its target workload and performance goals. The level of imperfection we allow for quantum operations depends heavily on the application workload of the computer. Our goal is the detailed design (and ultimately implementation) of a large-scale system: more than ten thousand logical qubits capable of running Toffoli gates within a reasonable time (days or at most a few months). For example, such a system could factor a 2,000-bit number using Shor’s algorithm. This choice of scale affects the amount of error in quantum operations that we can tolerate. Steane analyzes the strength of error resilience in a system in terms of KQ, the product of the number of logical qubits in an application (K) and the depth (execution time, measured in Toffoli gate times) of the application (Q). Our goal is to tune the error management system of our computer so that the logical error per Toffoli gate executed is small compared to 1/KQ.
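To make the scale concrete, the resource criterion can be checked with a few lines of arithmetic. The workload numbers below are illustrative assumptions — only the ten-thousand-logical-qubit scale comes from the text; the depth Q and the safety margin are invented for this sketch:

```python
# Illustrative (assumed) workload parameters, not figures from the paper.
K = 10_000          # logical qubits (the paper's stated scale)
Q = 10**9           # circuit depth in Toffoli gate times (assumption)

KQ = K * Q          # Steane's KQ measure of algorithm size
# For the whole run to succeed with high probability, the logical error
# per Toffoli gate must be small compared to 1/KQ; the 0.1 margin is an
# arbitrary illustrative choice.
eps_target = 0.1 / KQ

print(f"KQ = {KQ:.1e}, target logical error per Toffoli < {eps_target:.1e}")
```

Under these assumptions, any logical gate error much above 10^-14 would make the full computation unlikely to succeed, which is why the error management machinery of the following sections is so demanding.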
Under most realistic technological assumptions, the resources required to reach adequate KQ values are huge. Nearly all proposed matter qubits are at least microns in size, when control hardware is included. For chip-based systems, a simple counting argument demonstrates that more qubits are required than will fit in a single die, or even a single wafer. This argument forces the implementation to adopt a distributed architecture, and so we require that a useful technology have the ability to entangle qubits between chips.
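The counting argument can be sketched numerically. All of the numbers below — die size, per-qubit footprint, and the total physical qubit count — are assumptions chosen for illustration, not figures from the text:

```python
# Back-of-envelope version of the counting argument (assumed numbers).
die_side_mm = 20        # edge of a large die (assumption)
pitch_um    = 10        # per-qubit footprint incl. control (assumption)
phys_needed = 10**8     # e.g. 10^4 logical qubits x 10^4 physical each (assumption)

qubits_per_die = (die_side_mm * 1000 // pitch_um) ** 2
dies_required  = -(-phys_needed // qubits_per_die)   # ceiling division

print(qubits_per_die, dies_required)
```

Even with these optimistic footprints, tens of dies are needed, so inter-chip entanglement is unavoidable at this scale.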
As an example architecture supporting rich communications, we are designing a device based on semiconductor nanophotonics, using the spin of an unpaired electron in a semiconductor quantum dot as our qubit, with two-qubit interactions mediated via cavity QED. We plan to use tQEC to manage run-time soft faults, and to design the architecture to be inherently tolerant of manufactured and grown defects in most components.
Our overall architecture is a quantum multicomputer, a distributed-memory system with a large number of nodes that communicate through a multi-level interconnect. The distributed nature will allow the system to scale, circumventing a number of issues that would otherwise place severe constraints on the maximum size and speed of the system, hence limiting problems for which the system will be suitable.
Within this idiom, many designs will be possible. The work we present here represents a solid step toward a complete design, giving a framework for moving from the overall multicomputer architecture toward detailed node design. We can now begin to estimate the actual hardware resources required, as well as establish goals (such as the necessary gate fidelity and memory lifetimes) for the development of the underlying technology.
Section 2 presents background on the techniques for handling errors in a quantum computer that we propose to use. Section 3 qualitatively presents our hardware building blocks: semiconductor quantum dots, nanophotonic cavities and waveguides, and the optical schemes for executing gates. Section 4 presents a qualitative description of the resources employed in the complete system. In particular, it describes how some quantum dots, used for communication, are arranged for deterministic quantum logic mediated by coupled cavity modes, while other quantum dots are indirectly coupled via straight, cavity-coupled waveguides for purification-enhanced entanglement creation. Long columns of these basic building blocks span the surface of a chip, and many chips are coupled together to create the complete multicomputer. Preliminary quantitative resource counts appear in Section 5.
2 Multi-level Error Management
A computer system is subject to both soft faults and hard faults; in the quantum computing literature, “fault tolerance” refers to soft faults. A soft fault is an error in the operation of a normally reliable component. Soft faults can be further divided into errors on the quantum state (managed through dynamically-executed quantum error correction or purification), and the loss of the qubit carrier (e.g., loss of a photon, ion, or the electron in a quantum dot, depending on the qubit technology). Qubit loss may be addressed by using erasure codes, or, in the case of tQEC, through special techniques for rebuilding the lattice state. In this section, we introduce our approach to managing these multiple levels of errors, which will be further developed in the following sections.
2.1 Defect Tolerance and Quantum Communication
Hard faults are either manufacturing defects or “grown” defects (devices that stop working during the operational lifetime of the system). With adequate hardware connectivity, flexible software-based assignment of roles to qubits will add hard fault tolerance, allowing the system to deal with both kinds.
The percentage of devices that work properly is called the yield. In our system, most of the components are expected to have high yields, but the quantum dots themselves will likely have low yields, at least in initial fabrication runs and possibly in ultimate devices. These faults occur in part due to the difficulty of growing optically active quantum dots in prescribed locations, but more due to the difficulty of assuring each dot is appropriately charged and tuned near the optical wavelength of the surrounding nanophotonic hardware, to be further discussed in Sec. 3.3.
The presence of hard faults means that the connectivity of the quantum computer begins in a random configuration, which we can determine by device testing. As a result, the architecture will have an inhomogeneous combination of high-fidelity connections where pairs of neighboring qubits are good and low-fidelity connections between more distant qubits. To compensate for the low-fidelity connections, we choose to use entanglement purification to bring long-distance entangled states up to the fidelity we desire for building our complete tQEC lattice. This choice means that the system will naturally use many of the techniques developed for quantum repeaters, and portions of the system will require similar computation and communication resources, used in a continuous fashion. Details of these procedures are presented in Sec. 4.
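As a sketch of why purification is effective, the simplified recurrence below — which assumes perfect local gates and is an illustrative stand-in, not the specific protocol used in this architecture — shows low-fidelity long-distance pairs converging rapidly toward unit fidelity:

```python
def purify(F):
    """One round of a simplified recurrence purification protocol: two
    pairs of fidelity F are consumed to yield (on a successful herald)
    one pair of higher fidelity.  Local gates assumed perfect."""
    return F**2 / (F**2 + (1 - F)**2)

F = 0.80   # assumed fidelity of a raw long-distance entangled pair
for rnd in range(3):
    F = purify(F)
    print(rnd + 1, round(F, 6))
```

Three rounds take an assumed 0.80-fidelity pair above 0.999, at the cost of consuming several raw pairs per output pair — the communication overhead discussed in Sec. 4.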
2.2 Topological Fault Tolerance
On top of purified states, we employ topological error correction (tQEC), in particular the two-dimensional scheme introduced by Raussendorf and Harrington. In this scheme, the action of the quantum computer is the sequential generation and detection of a cluster state, and error correction proceeds by checking against expected quantum correlations for that state. Logical qubits are defined by deliberately altering these correlations at a pair of boundaries in an effectively three-dimensional lattice of physical qubits. These boundaries may be the extremities of the lattice or holes (commonly called “defects” in the topological computing literature, as they are similar to defects in a crystal; in this paper, we reserve the term “defect” for a qubit that does not function properly, i.e. a manufacturing defect) of various shapes “cut” into the lattice by choosing not to entangle some qubits. The qubits in the interior of the lattice have their state tightly constrained, whereas pairs of boundaries are associated with a degree of freedom that is used as the logical qubit.
The simplicity of the gate sequences used to constrain the qubits in the lattice interior and the independence of these gate sequences from the size of the system are directly responsible for tQEC’s high threshold error rate of approximately 0.8% for preparation, gate, storage and measurement errors, the highest threshold found to date for a system with only nearest-neighbor interactions.
In 2-D, we choose to make holes that are squares of side length d. Logical operators take the form of rings and chains of single-qubit operators — chains connect pairs of holes, rings encircle one of the holes. If we associate logical Z with chains and logical X with rings (or vice versa), it can be seen that these operators will always intersect an odd number of times, ensuring anticommutation. Braiding holes around one another can implement logical CNOT, as shown in Figure 1.
tQEC offers important architectural advantages over other error-suppression schemes, such as concatenated codes. Most importantly, unlike tQEC, many concatenated codes lose much of their effectiveness when long-distance gates are precluded by the underlying technology. In addition, the amount of error correction applied in tQEC can be controlled more finely than with concatenated codes, which have the property that each time an additional level of error correction is added, the number of physical qubits grows by at least an order of magnitude. tQEC’s error-protection strength, in contrast, improves incrementally with each additional row and column added to the lattice.
Logical errors are exponentially suppressed by increasing the circumference and separation of holes. This can be inferred directly from Figure 1 — the number of physical qubit errors required to form an unwanted logical operation grows linearly with circumference and separation. The threshold error rate is defined to be the error rate at which increasing the resources devoted to error correction neither increases nor decreases the logical error — the error rate at which the errors corrected are balanced by the errors introduced by the error correction circuitry. Assuming a hole circumference and separation of d, for physical error rates p below the threshold p_th, error suppression of order (p/p_th)^(d/2) will be observed. The prefactor depends on the details of the error correction circuits. Assuming the error correction circuits do not copy single errors to multiple locations, a circumference of d implies that a chain of approximately d/2 errors must occur before our error correction system will mis-correct the state and give a logical error.
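A rough sizing exercise follows from this scaling. Assuming a logical error of order (p/p_th)^(d/2) with a prefactor of one — both assumptions for illustration, along with the physical error rate and target below — we can ask how large the hole dimension d must be:

```python
# Illustrative sizing: p and the target are assumptions; p_th is the
# ~0.8% threshold quoted in the text; the prefactor A is assumed to be 1.
p, p_th, A, target = 1e-3, 8e-3, 1.0, 1e-14

def logical_error(d):
    # a chain of ~d/2 physical errors suffices to cause a logical error
    return A * (p / p_th) ** (d / 2)

d = 2
while logical_error(d) > target:
    d += 2
print(d)  # smallest (even) hole dimension meeting the target
```

With p an order of magnitude below threshold, a hole dimension of a few tens of qubits already meets a very demanding logical error target, illustrating the exponential suppression.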
Related tQEC schemes exist in 3-D and 2-D. The 3-D scheme makes use of a 3-D cluster state and the measurement-based approach to computing — all qubits are measured in various bases, and the measurement results are processed to determine both the bases of future measurements and the final result of the computation. This approach is well-suited to a technology with short-lived qubits (e.g., photons, which are easily lost) or slow measurement. The 2-D scheme requires a 2-D square lattice of qubits that are not easily lost plus fast measurement. Given these two properties, the threshold is slightly higher than in the 3-D case and certain operations, such as logical measurement, can be performed more quickly. Aside from these minor differences, the 2-D scheme is a simulation of the 3-D scheme, in which one dimension of the 3-D lattice becomes time.
2.3 Logical Gates in Topological Error-Corrected Systems
When making use of topological error correction, only a small number of single logical qubit gates are possible — namely logical X and Z, and logical initialization and measurement in the X and Z bases. Logical initialization and measurement can be implemented using initialization and measurement of the regions of single qubits encompassing the defects in the corresponding bases. The only possible multiple logical qubit gate, logical CNOT, can be implemented by braiding the correct type of defects in a prescribed manner as shown in Figure 1. This set of gates is not universal.
To achieve universality, rotations by π/2 and π/4 around the X and Z axes can be added to the logical gate set. These gates, however, require the use of the specially-prepared states |Y⟩ = (|0⟩ + i|1⟩)/√2 and |A⟩ = (|0⟩ + e^{iπ/4}|1⟩)/√2. Fault-tolerant creation of these states involves use of the concatenated decoding circuits for the 7-qubit Steane code and the 15-qubit Reed-Muller code, respectively, to distill a set of low-fidelity states into a single higher-fidelity one. Convergence is rapid — if the input states have average probability of error p, the output states will have error probabilities of approximately 7p³ and 35p³, respectively.
This implies that for most input error rates, two levels of concatenation will be more than sufficient. Nevertheless, this still represents a large number of logical qubits, implying the need for state factories throughout the computer and the dedication of most of the qubits in the computer to generating the necessary states at a sufficient rate. This will impact the resource counting for our target application, as we discuss in Section 5.
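The rapid convergence can be checked directly. The map below iterates the leading-order 15-to-1 distillation scaling p → 35p³ for |A⟩ states (the |Y⟩ map, p → 7p³, behaves similarly); the injected error rate is an illustrative assumption:

```python
def distill_A(p, levels):
    """Iterate the leading-order 15-to-1 distillation map p -> 35*p**3
    for |A> states (local gates assumed perfect)."""
    for _ in range(levels):
        p = 35 * p**3
    return p

p_in = 0.01  # assumed error rate of injected states
print(distill_A(p_in, 1))  # one level:  ~3.5e-05
print(distill_A(p_in, 2))  # two levels: ~1.5e-12
```

One level already improves the state by nearly three orders of magnitude, and a second level brings the error far below plausible targets, consistent with two levels being more than sufficient.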
When using an |A⟩ state, the actual gate applied will be a random rotation by either +π/4 or −π/4 about the Z axis. Error-corrected logical measurement must be used to determine which gate was applied and hence whether a corrective gate also needs to be applied. If the undesired rotation occurred, the correction must be applied before further gates are applied, introducing a temporal gate ordering. This time ordering prevents arbitrary quantum circuits involving non-Clifford-group gates from being implemented in constant time.
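The bookkeeping of the random-sign rotation and its correction can be verified with 2×2 matrices. The conventions here are assumptions for illustration: Rz(θ) = diag(e^{−iθ/2}, e^{+iθ/2}), with the desired non-Clifford gate taken to be a π/4 rotation about Z:

```python
import numpy as np

def Rz(theta):
    """Rotation about Z: diag(exp(-i*theta/2), exp(+i*theta/2))."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

desired = Rz(np.pi / 4)            # the intended non-Clifford rotation
wrong   = Rz(-np.pi / 4)           # the random outcome with the wrong sign
fixed   = Rz(np.pi / 2) @ wrong    # Clifford correction applied afterwards

print(np.allclose(fixed, desired)) # True: the correction restores the gate
```

The correction Rz(π/2) is itself a Clifford gate, but it can only be chosen after the measurement outcome is known — precisely the temporal ordering described above.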
3 Hardware Elements
In considering the hardware in which to implement this architecture, by far the most important open question is the choice of quantum dot type, which will also determine the semiconductor substrate and operational wavelengths.
3.1 Quantum Dots
The best type of quantum dot to employ remains an open question. Charged, self-assembled InGaAs quantum dots in GaAs are appealing due to their high oscillator strength and near-IR wavelength. These dots have been engineered into cavities in the strong coupling regime, and recent experiments have demonstrated complete ultrafast optical control of a single electron spin qubit trapped in the dot. However, it is challenging to make high-yield CQED devices from these dots due to their high inhomogeneous broadening and the challenges of site selectivity, although progress continues in designing tunable quantum dots in prescribed locations. A scalable system, however, may require a more homogeneous kind of quantum dot, such as those defined by a single donor impurity and its associated donor-bound-exciton state. Donor-bound excitons in high-quality silicon and GaAs are remarkably homogeneous, both in their optical transitions and in the Larmor frequencies of the bound spin providing the qubit. However, the isolation of single donors in these systems has been challenging. Donor impurities in silicon would seem almost ideal, since isotopic purification can give long spin coherence times and extremely homogeneous optical transitions, but optical control in this system is hindered by silicon’s indirect band gap. A II-VI semiconductor such as ZnSe may provide a nearly ideal compromise – single fluorine impurities in ZnSe have been isolated, shown to have an oscillator strength comparable to quantum dots, and incorporated into microcavities. Recently, sufficient homogeneity has been available to observe interference between photons from independent devices. However, this system comes with its own challenges, such as the less convenient blue emission wavelength.
Nitrogen-Vacancy centers in diamond have also attracted heavy attention recently, but the diamond substrate remains a challenging one for implementing the nanophotonic hardware that supports the quantum computer.
Regardless of the type of quantum dot, there are several common physical features to be employed for quantum information processing. The dot has a two-level ground state, provided by the spin of a trapped electron in a globally applied magnetic field. This spin provides the physical qubit. The dot also has several optical excited states formed by the addition of an exciton to the dot. One of these excited states forms an optical Λ-system with the two ground states, allowing not only single-qubit control via stimulated Raman transitions, but also selective optical phase shifts of dispersive light (to be discussed in Sec. 3.3) or state-selective scattering. These enable several possible means of achieving entanglement mediated by photons.
3.2 Cavities, Waveguides, and Switches
The quantum dots will be incorporated in small cavities to enhance their interaction with weak optical fields. Cavities may be made from a variety of technologies, including photonic crystal defects and microdisks. Here, we will focus on suspended microdisk cavities.
The small microdisks are in turn coupled to larger waveguides arranged as disks, rings, or straight ridges, which carry qubit-to-qubit communication signals. These waveguides can be ridges topographically raised above the chip surface, or line-defects in photonic crystals. Our present focus is on ridge-type waveguides. Waveguide technology is well-advanced and relatively low-loss, although it is best to make the waveguides as straight as possible, and to avoid crossing two waveguides in the floor plan. Silicon at telecom wavelengths, for example, makes a good waveguide for our purposes, as it is almost transparent at these wavelengths, with a loss of about 0.1 dB/cm. The coherent processing of single photons in on-chip waveguides has recently been well demonstrated for ridge-type silica waveguides.
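The quoted ~0.1 dB/cm figure translates directly into photon survival probabilities; the conversion below is straightforward, with the path lengths chosen arbitrarily for illustration:

```python
def transmission(length_cm, loss_db_per_cm=0.1):
    """Fraction of light surviving a waveguide of the given length,
    at the ~0.1 dB/cm loss quoted for silicon at telecom wavelengths."""
    return 10 ** (-loss_db_per_cm * length_cm / 10)

print(transmission(1.0))   # ~0.977 over 1 cm
print(transmission(10.0))  # ~0.794 over 10 cm
```

Centimeter-scale routes are thus nearly lossless, while chip-spanning or chip-to-chip routes lose an appreciable fraction of the light — the regime where the heralded gates of Sec. 3.3 become necessary.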
The “no crossing waveguides” restriction is one of the two key issues driving device layout. The other is the need to route signals to more than one possible destination, for which high-speed, low-loss optical switching is required. Good optical switches are difficult to build: many designs have poor transmission of the desired signals and poor extinction of the undesired ones, and tend to be large and slow. In our architecture, we focus on microdisk-type or microring-type add/drop filters. In suspended silica systems, these switches have been shown to have insertion losses as low as 0.001 dB for the “bus” when the microdisk is off-resonant; optical loss from the bus to the drop port can be as low as 0.3 dB when the system is resonant. On-chip switches in semiconductor platforms do not typically feature such nearly ideal behavior, but continue to improve. For example, micron-scale multi-ring add/drop switches with a loss of a few dB were recently demonstrated in a silicon platform.
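Losses along a complete route simply add in dB. The helper below combines the component figures quoted above (the suspended-silica values; semiconductor devices are worse); the example route — number of pass-throughs, drops, and guide length — is an arbitrary illustration:

```python
def path_loss_db(n_bus_passes, n_drops, wg_cm,
                 bus_db=0.001, drop_db=0.3, wg_db_per_cm=0.1):
    """Total optical loss along a route, summing the quoted component
    losses: off-resonant bus pass-throughs, resonant drops, and
    waveguide propagation."""
    return n_bus_passes * bus_db + n_drops * drop_db + wg_cm * wg_db_per_cm

# e.g. a signal passing 50 off-resonant switches, one drop, 2 cm of guide
print(round(path_loss_db(50, 1, 2.0), 3))
```

At these figures the single resonant drop dominates the budget, which is why the layout favors routes with many pass-throughs but few drops.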
We need to individually control the resonance of every optical microdisk in the circuit; these microdisks provide the add/drop switches and qubit-hosting cavities. Ultimately, it is the ability to rapidly move these microdisk resonators into and out of near-resonance with the waveguided control light that provides the quantum networking capability. A candidate method for this is to employ the optical nonlinearity of the semiconductor substrate. A strong, below-gap laser beam focused from above onto one of the cavities will shift its index of refraction through a combination of heating, carrier creation, and intrinsic optical nonlinearities. The laser pulses for this may be carried through free space from a micromirror array.
To complete the architecture, we will also need mode-locked lasers for single-qubit control, modulated CW lasers for quantum non-demolition (QND) measurements as well as deterministic and heralded entanglement gates, and photodiodes to measure the intensity of the control light. Lasers and photodiodes are expensive in both space and manufacturing cost, so an ideal system will be carefully engineered to minimize the number required. Mode-locked lasers with repetition frequency tuned to the Larmor frequency of the spin qubits will be used for fast single-qubit rotations. These lasers may be directed by the same micromirror used for switching. More slowly modulated single-frequency lasers will be used for qubit initialization, measurement, and entanglement operations. These lasers may be incorporated into the chip, or injected via a variety of coupling technologies. The photodiodes are intended to measure intensity of pulses with thousands to millions of photons, rather than single-photon counting, which allows the possibility of fast, on-chip, cavity-enhanced photodiodes; however, off-chip detectors may be more practical depending on the semiconductor employed.
These resources are crucial, as they are needed for every single-qubit measurement and heralded entangling operation. These operations dominate the operation of a cluster-state-based quantum computer. However, these same technologies are evolving rapidly for classical optoelectronic interconnects, and are expected to continue to improve in coming years.
3.3 Executing Physical Gates
Four types of physical gates are employed in this architecture.
The first type of gate is arbitrary single-qubit rotation, which may be performed efficiently using picosecond pulses from a semiconductor mode-locked laser with pulse repetition frequency tuned to the qubit’s Larmor frequency. A cavity is not needed for this operation, and the pulses used are sufficiently far detuned from the qubit and the cavity resonance that the cavity plays little role. The phase and angle of each rotation are determined by switching pulses through fixed delay routes, as described in Ref. ?. The performance of this gate is limited by spurious excitations created in the vicinity of the quantum dot by the pulse, and not by optical loss or other architectural considerations.
The next type of gate is the quantum non-demolition (QND) measurement of a single qubit. This gate is critical, since every qubit in our tQEC architecture is initialized and measured frequently, and the QND gate allows both. A QND measurement makes use of the optical microcavity containing the dot, and operates with the cavity well detuned from the dot’s optical transitions. In such a configuration, the optical transition to one qubit ground state may present a different effective index of refraction for a cavity mode than the optical transition to the other qubit ground state. This results in a qubit-dependent optical phase shift on a slow optical pulse coupled in and out of the waveguide. This optical pulse may then be mixed with an unshifted pulse from the same laser to accomplish a homodyne measurement of the phase shift. In one variation of this scheme, this phase is detected as a change in the polarization direction of a linearly polarized optical probe beam; this has been demonstrated for quantum dots both with and without a microcavity; larger phase shifts have also been observed in neutral dots in improved photonic crystal cavities. Simulations indicate that pulses with a timescale of about 100 ps may be used for this gate.
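A toy model conveys why brighter or more strongly phase-shifted pulses improve this measurement. The noise convention and the linear signal model below are assumptions chosen for illustration, not the simulations referenced in the text:

```python
import math

def qnd_error(alpha, theta):
    """Toy model of the dispersive QND readout: a qubit-dependent phase
    of +/-theta rotates a coherent pulse of amplitude alpha, separating
    the homodyne signal means by ~2*alpha*sin(theta).  With unit-variance
    Gaussian noise (an assumed convention), misidentification is a
    Gaussian tail probability."""
    d = alpha * math.sin(theta)
    return 0.5 * math.erfc(d / math.sqrt(2))

print(qnd_error(100, 0.01))   # weak pulse: appreciable error
print(qnd_error(1000, 0.01))  # 10x brighter: error falls off sharply
```

The error falls super-exponentially in the signal separation, which is why modest cavity-enhanced phase shifts combined with bright pulses can yield the high-fidelity measurements tQEC demands.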
These first two gate types are single-qubit gates. For generating entanglement between distant qubits, two further gates are employed: a deterministic, nearest-neighbor gate, and a non-deterministic gate for heralded entanglement generation for distant qubits.
The deterministic, nearest-neighbor gate will be mediated by a common microdisk mode connecting the cavities of nearby qubits. The phase or amplitude of this cavity mode may be altered by the state of the qubits with which it interacts, which in turn changes the phase or population of those qubits. The gate is achieved by driving the coupled cavity mode with one or more appropriately modulated optical pulses from a CW laser. The light is allowed to leak out of the cavity and may then be discarded. The amplitude version of such a gate was proposed in 1999 by Imamoglu et al., and may be viewed as a pair of stimulated Raman transitions for two qubits driven by two CW lasers and their common cavity mode. This gate is known to require high-Q cavities. The phase version of this gate, described in Ref. ?, is an adaptation of the “qubus” gates proposed by Spiller et al. in 2006; more detailed design and simulation of this gate in the present context is in progress.
If such deterministic gates are available, one may naturally ask whether a fully two-dimensional architecture of coupled qubits is more viable than the communication-based architecture we present here. Indeed, if truly reliable cavity QED systems can be developed at large scale, deterministic photonic gates may enable highly promising single-photon-based architectures for tQEC. However, the devices that will enable deterministic CQED gates in solid-state systems are unlikely to be fully reliable.
In particular, high-fidelity deterministic gates require extremely low optical loss between qubits, and therefore cannot easily survive coupling to straight waveguides or to other elements in the photonic circuit such as switches and fibers. For generating entanglement through these elements, stochastic but heralded entanglement schemes are used, similar to gates in linear optics except with physical quantum memory. Combined with local single-qubit rotations, QND measurements, and deterministic nearest-neighbor gates, this heralded entanglement allows quantum teleportation. Heralded entanglement is the bottleneck resource in quantum wiring. Heralded entanglement gates come in several flavors, but fortunately each type requires the same basic qubit and cavity resource; they vary in the strength of the optical field used and the method of optical detection. Which type to employ depends on the amount of loss between the qubits to be entangled.
For qubits with relatively low loss between them, such as those coupled to a common waveguide without traversing to the drop port of a switch, so-called “hybrid” schemes are attractive. In these schemes, the QND measurement discussed above is extended to two qubits, distinguishing odd-parity qubit subspaces from even-parity ones. For some detection schemes, such as homodyne detection of an appropriate quadrature, this parity gate may be deterministic, up to single-qubit operations which depend on measurement results. If such parity gates are available, “repeat-until-success” schemes for quantum computation are very attractive, and have been proposed for use in multicomputer-like distributed systems. However, if weak CQED nonlinearities are employed with lossy waveguides, these detection schemes fail. In this case, homodyne detection of the conjugate quadrature may still show strong performance, but the parity gate is incomplete. The heralded measurement of an odd-parity state may project qubits into an entangled state with useful probability, but when the heralding fails no entanglement is present. As in schemes using linear optics, this allows probabilistic quantum logic. With the addition of an extra ancilla qubit, this partial parity gate may be combined into a probabilistic CNOT gate for entanglement purification.
This scheme is attractive due to its use of relatively bright laser light and near-ideal probability of successful heralding. However, it is strongly subject to loss, as has been discussed previously. More complex measurement schemes may improve the fidelity of such gates at the expense of their probability of heralding success. For very lossy connections, the number of photons in the optical pulse might be reduced to an average of less than one photon, in which case single-photon scattering schemes would be employed. These schemes succeed much less frequently, as they rely on the click of a single-photon detector projecting the combined qubit/photon system into one in which no photons were lost, a possibility whose probability decreases with loss. Here, we consider only many-photon qubus gates using homodyne detection as discussed in Ref. ?; we compensate for connections with different loss rates only by changing the intensity of the optical pulses employed, whose optimum varies with loss. The detection scheme remains constant across the architecture.
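The loss-dependent optimum can be illustrated with a toy model: brighter pulses reduce heralding errors, but photons lost from a brighter pulse carry away more which-state information and so dephase the pair more strongly. Both scalings below are illustrative stand-ins, not the paper's formulas:

```python
import math

def qubus_tradeoff(n_photons, loss_frac, theta=0.01):
    """Toy model of the bright-pulse trade-off: the heralding error falls
    as the homodyne signal (~sqrt(n)*theta) grows, while lost photons
    carrying phase information dephase the pair (~exp(-loss*n*theta^2)).
    Illustrative scalings only."""
    signal = math.sqrt(n_photons) * theta
    herald_err = 0.5 * math.erfc(signal / math.sqrt(2))
    coherence = math.exp(-loss_frac * n_photons * theta**2)
    return herald_err, coherence

for n in (10**4, 10**5, 10**6):
    print(n, qubus_tradeoff(n, loss_frac=0.1))
```

Neither extreme is optimal: the best pulse intensity shifts downward as loss grows, which is exactly the per-connection tuning knob described above.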
Although proposals for nonlocal, deterministic gates exist, their performance is always hindered by optical loss. This is inevitable: if photons mediate information between qubits, the loss of those photons into the environment reveals some information about the quantum states of the qubits, causing decoherence. A well-designed photon-mediated architecture should therefore use a hierarchy of photon-mediation schemes, providing high-success-probability gates over short distances and highly loss-tolerant gates over longer distances; the qubus mechanisms allow some degree of hierarchical tuning without adding extra physical resources.
In the present discussion, we characterize performance entirely in terms of optical loss. Photons may be lost in waveguides, from cavities, from the cavity–waveguide interfaces, and through spontaneous emission. When hybrid CQED-based gates are run optimally, the amount of decoherence-causing loss at a quantum-dot-loaded cavity and its cavity/waveguide interface is approximately the inverse of the cooperativity factor. This factor arises from the ratio of spontaneous emission into a cavity mode (assumed to be overcoupled to the waveguide) to spontaneous emission into other modes. It scales as the quality factor of the cavity divided by its mode volume, so the cavities containing qubits are designed small to maximize this factor. When we discuss qubit-to-qubit optical loss, this loss should be considered as the linear loss in the waveguide connecting the qubits plus approximately the inverse of the cooperativity factor at each cavity. Cooperativity factors between self-assembled quantum dots and the whispering-gallery modes of suspended microdisks have been shown to approach 100, corresponding to a cavity-induced loss limit of 0.04 dB.
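As a consistency check on the numbers above, the inverse-cooperativity loss model can be evaluated directly; the sketch below assumes the decoherence-causing loss fraction per cavity interaction is simply 1/C.

```python
import math

def cavity_loss_db(cooperativity):
    """Cavity-induced optical loss in dB, taking the decoherence-causing
    loss fraction per cavity interaction to be the inverse cooperativity."""
    return -10.0 * math.log10(1.0 - 1.0 / cooperativity)

# For C = 100 this gives ~0.04 dB, matching the limit quoted in the text;
# halving the cooperativity roughly doubles the loss.
```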
4 Architecture: Layout and Operational Basics
In this section, we qualitatively describe our architecture and its operation. Many of the design decisions described here will be justified numerically in Section 5.
4.1 Architecture Axes
The basic structural element of our system is one-dimensional: a waveguide with a tangent series of microdisks, each connected to one or more smaller microdisks containing quantum dots, as in Fig. 2. The shared bus nature of a single waveguide offers the advantage that the qubit at one end can communicate quickly and easily with the qubit at the other end; this long-distance interaction has the potential to accelerate some algorithms and aids in defect tolerance, as we will show below. However, that shared nature makes the bus itself a performance bottleneck in the system, as contention for access to the bus and the measurement device forces some actions to be postponed.
This limitation on concurrent operation makes it natural to consider using multiple columns. Columns are connected by teleportation, aided by heralded entanglement and purification. The resulting structure, developed in Figures 2 to 5, is a set of many columns, defined by long, vertical waveguides, interspersed with smaller, circular and oval waveguides, and qubits in cavities tangential to the waveguides. The vertical waveguides are of two types: logic waveguides, which are used to execute operations between qubits within one column, and teleportation waveguides, which are used to create and purify connections between columns within a single chip or between chips. The small, colored circles represent the smallest microcavities containing quantum-dot qubits. The different colors represent different roles for particular qubits, which we describe in Section 4.2. The teleportation columns do not use the smaller, higher-Q circular waveguides to couple qubits deterministically. Instead, as in Figures 3 and 4, they use larger racetrack-shaped waveguides that can support a larger number of qubits, called transceiver qubits, which are only stochastically entangled. The qubits along one racetrack can be used to purify ancilla qubits, allowing us to connect qubits in potentially distant parts of the chip, or to connect to off-chip resources.
The architecture in Fig. 5 is designed to minimize both the length of waveguides and the number of switches traversed by pulses carrying quantum information. Note that signals introduced onto the waveguide snaking through the chip will not be perfectly switched into the detectors, implying some accumulated noise; however, this effect can be mitigated with appropriate detector time binning and sufficiently large microdisk Q-factors in the switches.
A single node has two axes of growth. The length of a logic waveguide column and the number of columns provide the basic rectangular layout, which will have some flexibility but is ultimately limited by the size of chip that can be practically fabricated, packaged, and used. To give a concrete example, if we set the vertical spacing of the red lattice qubits to 50 µm and the column-to-column spacing to 100 µm, then 100 qubits in each vertical column and 100 columns result in an active chip area of 5 mm by 10 mm.
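The active-area arithmetic is easy to verify; the pitches used below (50 µm vertical, 100 µm column-to-column) are the values consistent with the quoted 5 mm × 10 mm active area for a 100 × 100 grid.

```python
def active_area_mm(n_rows, n_cols, row_pitch_um, col_pitch_um):
    """Active chip area (height_mm, width_mm) for a rectangular
    grid of lattice qubits, given the qubit pitches in micrometers."""
    return (n_rows * row_pitch_um / 1000.0, n_cols * col_pitch_um / 1000.0)

height_mm, width_mm = active_area_mm(100, 100, 50, 100)  # 5.0 mm by 10.0 mm
```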
A third axis of growth is the number of chips that are connected into the overall system – the number of nodes in our multicomputer. In previous work, we have been concerned with the topology and richness of the interconnection network between the nodes of a multicomputer using CSS codes, finding that a linear network is adequate for many purposes. The extension of nodes into the serpentine teleportation waveguide in Fig. 5 enables such a linear-network multicomputer, although the additional necessary resources for bridging lossier chip-to-chip connections will not be considered here.
The structures in our architecture are large by modern VLSI standards; the principal fabrication difficulty is the accurate creation of the gap between the cavities and the waveguides. That spacing must be 10–100 nm, depending on the microdisk and waveguide size and quality factors. The roughness of the cavity edge is a key fabrication characteristic that determines the quality of the cavity, and ultimately the success of our device.
Although the device architecture and quantum dot technology are not yet fixed, we include images of test devices fabricated using e-beam lithography following the methodology described in Ref. ?, to help visualize future devices. Figures 2 and 3 include scanning electron microscope images of a device created in a GaAs wafer containing a layer of self-assembled InAs quantum dots. Fabrication techniques more scalable than e-beam lithography must ultimately be developed; promising routes include nanoimprint lithography and deep sub-wavelength photolithography.
4.2 Qubit Roles and Basic Circuits
The different colors for the qubit quantum dots in Figure 3 represent different roles within the system. Physically, the cavities are identical, but they are coupled to different waveguides, allowing them to interact directly with different sets of qubits. Within those connectivity constraints, their roles are software-defined and flexible. Finding the correct hardware balance among the separate roles is a key engineering problem. The answer will depend on many parameters of the physical system, including the losses in switches and couplers, and will no doubt change with each successive technological generation.
The red qubits in the figures, in the column vertically placed between the larger circles, are the lattice qubits. Those that are functional are assigned an effective position in the 2-D lattice used to implement tQEC. These are subsequently divided into code qubits, which are never directly measured, and syndrome qubits, which are regularly measured following connections to code qubits in order to maintain the topologically protected surface code. The ideal number and density of syndrome qubits among code qubits depends on the yield. Within a column, all functional nearest neighbor pairs of qubits can be coupled in parallel. Non-nearest-neighbor couplings can only occur sequentially. For very low yields, in which code qubits rarely have nearest-neighbor couplings, only a few syndrome qubits per column are required as the syndrome circuits must largely be implemented sequentially, implying the syndrome qubits can be reused.
The blue qubits, or transceiver qubits, are aligned with the racetracks and the long purification waveguides. These qubits are used to create Bell pairs between column groups within the same device, or between devices. Because purification is a very resource-intensive process, the transceiver qubits are numerically the dominant type.
The green qubits, sandwiched between the column of circles and the column of racetracks, are ancilla qubits, used to deterministically connect stochastically created entangled states among (blue) transceiver qubits to (red) lattice qubits. The green qubits also play an auxiliary role during the purification of the blue qubits.
The circuit, or program, for executing purification on the blue qubits is shown in Figure 3. The blue qubits have previously been measured and are thus initialized to a known state. Then, qubits in a given teleportation column of Figure 5 are entangled with qubits in either the same column or the neighboring column to its right, using the heralded entanglement generation technique discussed in Sec. 3.3. Note that waveguide loss prevents the efficient entangling of qubits in widely separated teleportation columns. In general, a laser pulse is inserted into the teleportation waveguide at a given column, coupled to a qubit in that column, coupled to a second qubit either in that column or the neighboring column to its right, and then switched out of the teleportation waveguide and measured. This process is repeated in rapid succession, building a pool of low-fidelity entangled pairs, creating the states at the left edge of Figure 3.
Once the base-level entangled pairs are created, the circuit in Figure 3 is executed within each column, employing two probabilistic parity gates to achieve the controlled-NOT operations used in entanglement purification. Purification proceeds until the entangled-state fidelities are considered sufficient for computation. At that time, the purified entanglement between blue transceiver qubits is used to prepare appropriately entangled (green) ancillae, which are then connected to the target lattice qubits.
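The purification step can be illustrated with a deliberately simplified recurrence model (perfect local gates, a single dominant error type, symmetric input pairs); the real protocol and fidelities follow the formalism cited in the text, so the starting fidelity of 0.90 and the numbers below are illustrative only.

```python
def purify(f):
    """One recurrence-purification round on two pairs of fidelity f:
    returns (fidelity of the surviving pair, heralded success probability),
    in an idealized model with perfect local gates and one error type."""
    p_success = f * f + (1.0 - f) * (1.0 - f)
    return f * f / p_success, p_success

# Purify symmetrically until the 99.5% target used in the text is reached.
f, rounds = 0.90, 0
while f < 0.995:
    f, _ = purify(f)
    rounds += 1
# In this idealized model, two rounds suffice from f = 0.90.
```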
Finally, the high-fidelity Bell pairs are used to create the tQEC lattice, using the clustering circuit shown in Fig. 4.
The most important issue in the generation of a cluster state in our geometry is the physical asymmetry between connections within a column, those with other columns, and those between dies. The hierarchy of connection distances in our system will be characterized in terms of the number of laser pulses and measurements required to achieve entanglement of a particular fidelity.
Entangling two qubits connected to the same circular waveguide is straightforward; we can refer to these as “cavity connected” or “C-connected.” Racetracks are a longer, and slightly lower-fidelity, form of cavity; we refer to two ancillae or two transceiver qubits on the same racetrack as “R-connected”, or racetrack-connected. Two lattice qubits connected through an R-connected Bell pair are said to be indirectly connected, or “I-connected”.
Within a logic column, many deterministic gates on C-connected qubits can be performed without purification, and a high level of parallelism may be employed. The pulses that execute deterministic gates on the logic waveguide couple into the cavities only weakly, and do not need to be measured after the gate, making it possible for the same strong pulse to execute several gates concurrently. If we label the qubits with the alternating pattern ABAB…, we may be able to couple all of the A–B pairs in one entangling time slot, then all of the B–A pairs in a second time slot.
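The two-slot nearest-neighbor scheduling described above can be made concrete; in this sketch, qubits are indexed 0..n−1 along the column, with even/odd indices playing the role of the A/B labels.

```python
def two_slot_schedule(n):
    """Nearest-neighbor pairs coupled in two entangling time slots:
    slot 1 couples (0,1), (2,3), ...; slot 2 couples (1,2), (3,4), ...
    Together the two slots cover every nearest-neighbor pair exactly once."""
    slot1 = [(i, i + 1) for i in range(0, n - 1, 2)]
    slot2 = [(i, i + 1) for i in range(1, n - 1, 2)]
    return slot1, slot2

s1, s2 = two_slot_schedule(6)
# s1 == [(0, 1), (2, 3), (4, 5)], s2 == [(1, 2), (3, 4)]
```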
The fidelity of W connections is dominated by the efficiency of coupling pulses into and out of cavities, as the loss in the waveguide will be negligible. When connecting two lattice qubits in columns separated by a purification waveguide, we require moderate amounts of purification. The purification ancillae are themselves W-connected; the post-purification lattice connection we refer to as “-connected”.
Finally, qubits that do not share the same purification waveguide must be connected using a pulse that transits one or more switches. We classify these physical connections by the number of switches and the number of I/O ports that the pulse must transit; lattice qubits connected after purification over such paths form the most distant class of connection in our hierarchy.
The purified inter-column and switched connections will be most strongly subject to bottlenecks from the limited number of laser pulses and detection events in our architecture, and are therefore the focus of our numerical studies in the next section.
5 Resource Estimates
Given a set of technological constraints (pulse rate, error rate, qubit size, maximum die size), a complete architecture will balance a set of tradeoffs to find a sweet spot that efficiently meets the system requirements (application performance, success probability, cost). Minimizing lattice refresh time is the key to both application-level performance and fault tolerance, but demands increased parallelism (hence cost); in our system, this favors a very wide, shallow lattice, which is more difficult to use effectively at the application level. Increasing the number of application qubits increases the parallelism of many applications (including the modular exponentiation that is the bottleneck for Shor’s algorithm), but if the space dedicated to the singular factory does not increase proportionally, performance will not improve.
We begin by describing the communication costs and the impact of loss on the lattice refresh cycle time in a generic 2-D multicomputer layout, from which we can calculate the effective logical clock cycle time for executing gates on application qubits. With these concepts in hand, we then propose an architecture, and calculate its prospective performance.
5.1 Communications and Lattice Refresh
Figure 6 shows the residual infidelity and the cost in teleportation waveguide pulses as a function of the loss in the probe beam from qubit to qubit through the waveguides. Purification is performed using only Bell pairs of symmetric fidelities, and is run until final fidelity saturates or until fidelity is better than 99.5%. The two curves represent two values of round-trip loss in the racetrack waveguides used for local parity gates; with local loss of 0.2%, we cannot achieve a final fidelity above the threshold for tQEC. Thus, we establish an engineering goal of 0.02% loss or better.
The values in Fig. 6 are calculated by generating a Markov probability matrix for the protocol of symmetric purification, where each matrix transition requires the generation and detection of an optical pulse in the teleportation waveguide. Probabilities and fidelities for each step are found using the formalism presented in Ref. ?. Many of these transitions are deterministic, but some are probabilistic, owing to the chance that a parity gate or a purification step fails. Exponentiation of this matrix allows the direct calculation of the probability of completing the protocol in a given number of steps, yielding the probability density function for completion of purification versus the number of optical pulses. These probability distributions are strongly Poissonian. They are used to calculate the average and root-mean-square number of pulses plotted in Fig. 6.
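A toy version of this Markov calculation can be sketched as follows. The step probabilities are assumed values, and the model simplifies the real protocol in one important way: a failed step is merely retried, whereas (as discussed below) real purification failures can force regeneration of entangled pairs.

```python
def completion_cdf(step_probs, max_pulses):
    """Probability that a sequence of heralded steps has completed within
    n pulses, for n = 1..max_pulses.  Each pulse attempts the current step,
    which succeeds with that step's probability and is otherwise retried.
    A toy stand-in for the full Markov-matrix analysis described above."""
    done = len(step_probs)                 # absorbing "finished" state
    dist = [1.0] + [0.0] * done            # dist[k]: probability of being at step k
    cdf = []
    for _ in range(max_pulses):
        new = [0.0] * (done + 1)
        new[done] = dist[done]             # finished stays finished
        for k, p in enumerate(step_probs):
            new[k] += dist[k] * (1 - p)    # retry the same step
            new[k + 1] += dist[k] * p      # advance to the next step
        dist = new
        cdf.append(dist[done])
    return cdf

# Two heralded steps, each succeeding half the time: completion probability
# climbs toward 1 as more pulses are spent.
cdf = completion_cdf([0.5, 0.5], 20)
```

From such a cumulative distribution, the mean and root-mean-square pulse counts follow directly, mirroring the quantities plotted in Fig. 6.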
This Markov analysis is useful for estimating performance, but overestimates the required spatial and temporal resources considerably. The strictly symmetric purification routine assumed here makes less than ideal use of qubit memory; alternative resource management strategies can lead to order-of-magnitude improvements in speed without a comparable increase in size, as considered, for example, in Ref. ?. Also, the calculation we have performed assumes that when parity gates fail in the circuit shown in Fig. 3(a), the entire procedure fails and entangled pairs must be regenerated and repurified. In fact, if one parity gate succeeds and the other fails, then one Bell pair preserves some of its entanglement and may be kept, possibly with a Pauli correction, for subsequent purification rounds. Optimizing the purification procedure to account for such possibilities is difficult to do analytically; Monte Carlo simulations such as those in Ref. ? may estimate the worth of these strategies, but we leave such simulations for future work.
With the proper layout, we can connect multiple chips into a two-dimensional structure. With rows of chips each, and a chip that consists of columns each containing rows of lattice qubits, we have a physical structure capable of supporting an lattice. In such a multicomputer, entangling pulses may be destined for another qubit in the same column in the same chip, another qubit in the same column but the chip below, or in the neighboring column to the left or right. With multiple possible destinations, switching is naturally required; we can arrange the switching so that vertical connections are connections and horizontal ones are connections. Assessing the scalability of such a system and establishing guidelines for configuring the system depend on understanding these connections.
Table 1 lists the costs for the lattice-building operations on such a switched multicomputer architecture. We compare two logical lattices: a direct-mapped logical lattice, and a sub-lattice-organized logical lattice in which each physical column is used as a small lattice. (The table assumes a constraint on the sub-lattice dimensions; although that constraint is not a requirement, the expressions become more complex without it, and without careful structuring potentially as many as half of the connections may require additional switching.) The physical yield affects the probability that two neighboring lattice qubits and their shared ancilla are good, and hence the probability that a connection can be used. Additionally, for low yields, we assign only a few qubits per column as tQEC syndrome qubits, forcing all lattice-cycle operations to use longer-range connections.
We observe several qualitative facts about this architecture:
The lattice cycle time is constant as increases, but the number of lasers and measurement devices must increase proportionally.
To first order, the lattice cycle time scales linearly with , but second-order effects will likely make it worse than linear.
The number of connections favors a sub-lattice with a large , but the minimum size of the logical lattice limits ; we require .
Increasing lattice cycle time hurts fidelity due to memory degradation.
Increasing lattice cycle time hurts application performance.
The total lattice refresh cycle time is , where is the number of pulse time steps in the complete cycle. The final, logical clock rate for application gates depends on both the refresh cycle and the temporal extent of the lattice holes as they move through the system to execute logical gates. We can visualize the movement of the holes through the temporal dimension as “pipes” routed in a pseudo-3-D space. To maintain the same perimeter and spacing about the hole as it extends into the temporal dimension, each hole movement will also have to extend for lattice refresh cycles. We have used as the length of one side of each square hole. The temporal spacing must be , implying that the fastest rate at which hole braiding can occur is lattice refresh cycles.
In our architecture, the logical clock rate is . The number of refresh cycles per logical gate is . The refresh time itself is ; because we must choose , the number of pulses grows at least linearly in . As the columns lengthen, fidelity falls and the number of pulses per cycle grows, creating a positive feedback in and cycle time.
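To make the timing relationships above concrete, the sketch below computes a logical gate time from three inputs: pulses per lattice refresh cycle, the optical pulse period, and refresh cycles per logical gate. All three example numbers are assumptions for illustration, not the paper's values.

```python
def logical_gate_time_s(pulses_per_cycle, pulse_period_s, cycles_per_gate):
    """Time per logical gate = (refresh cycles per logical gate) x (refresh
    cycle time), where one refresh cycle consumes pulses_per_cycle pulses."""
    refresh_s = pulses_per_cycle * pulse_period_s
    return cycles_per_gate * refresh_s

# Assumed example: 1000 pulses per refresh at a 10 GHz pulse rate and
# 40 refresh cycles per logical gate -> about 4 microseconds per gate.
t_gate = logical_gate_time_s(1000, 1e-10, 40)
```

The positive feedback noted above appears here directly: lengthening columns raises pulses_per_cycle, which stretches the refresh cycle, which in turn demands a larger code distance and more cycles per gate.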
5.2 Proposed Architecture and Performance
Table 2 summarizes our initial strawman architecture, depicted in Fig. 5. To factor an n-bit number using Shor’s algorithm, we would like to have 6n logical qubits. Having established a goal of factoring a 2,048-bit number, we need 12,288 logical qubits.
Ultimately, the execution of application algorithms in tQEC requires, as at the physical level, two components: communication and computation. Logical communication consists of routing the pipes through the pseudo-3-D lattice. These pipes can route through the space with only a fixed temporal extent, allowing the equivalent of “long distance” gates in the circuit model. They do, however, consume space in the lattice, creating a direct tradeoff between the physical size of the system and the time consumed. Additionally, the shape of the logical lattice determines how efficiently logical qubits can be placed and routed. We assign 25% of the logical-qubit space to wiring and hole movement.
Computation, for many algorithms, will be dominated by Toffoli gates; as some of the operations are probabilistic, an average of more than ten special ancilla states is required for each. Shor’s algorithm requires a large number of Toffoli gates: many adder calls (after optimizations to the modulo arithmetic and one level of indirection in the arithmetic), each of which itself requires many Toffoli gates, and hence a correspondingly large number of ancilla states. Again, a direct tradeoff can be made between space and time, as the states can be built in parallel. For our system and this size of problem, rough balance is achieved with about 65% of the logical qubits dedicated to the factory.
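The headline resource figures can be reproduced arithmetically. The 6n logical-qubit count follows from the quoted 12,288 qubits at n = 2048; the Toffoli count below uses a generic n adder calls × n² Toffolis-per-adder scaling as a placeholder, since the paper's optimized constants are not reproduced here.

```python
def shor_resources(n_bits, adder_calls=None, toffolis_per_adder=None):
    """Logical-qubit count (6n, consistent with 12,288 for n = 2048) and a
    placeholder Toffoli-gate estimate for factoring an n-bit number.
    The defaults adder_calls = n and toffolis_per_adder = n^2 are generic
    scalings assumed for illustration, not the paper's optimized figures."""
    adder_calls = n_bits if adder_calls is None else adder_calls
    toffolis_per_adder = n_bits ** 2 if toffolis_per_adder is None else toffolis_per_adder
    return 6 * n_bits, adder_calls * toffolis_per_adder

logical_qubits, toffoli_count = shor_resources(2048)
# logical_qubits == 12288, matching the target quoted above.
```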
The multicomputer organization is wide and shallow, to minimize refresh cycle time. Once we have decided to limit to 1, the detailed chip layout simplifies, allowing the serpentine waveguide shown in Fig. 5. In this architecture, W connections are high fidelity, there are no neighbors ( connections), and connections to neighboring columns need not leave the chip except at chip boundaries. The from Table 1 is still , but physical connections are connections with a loss of only about 0.4 dB. The vertical height of a single chip will only accommodate enough cavities for a direct-mapped lattice, .
Figure 7a shows the execution time for our proposed system. A 2048-bit number should be factorable in just over 400 days, if the technological characteristics in Table 2 can be met. The system is large, requiring more than six billion lattice qubits and several times that total number when ancillae and transceivers are included. At the application level, much more parallelism is available if a larger system is built. A system one hundred times larger would factor the number in about five days.
Figure 7b shows execution time as a function of the loss in our two key connection types, the intra-column W connections and the inter-column X connections. Minimizing the additional loss incurred in inter-column travel helps hold execution time within reasonable bounds.
Reaching toward the desirable lower left corner of Fig. 7a requires improving the base-level entanglement fidelity or reducing the number of pulses used to purify Bell pairs. Our system is fairly robust to yield. Below 40% it is difficult to build a system capable of running tQEC, but above that level, increasing yield has only minor effects on temporal and spatial resources. This gives a clear message: pursue fidelity and quality of components at the expense of yield.
Our design focuses on the communications within a quantum computer, building on a natural hierarchy of connectivity: direct coupling of neighbors along one physical axis of our chip; medium-fidelity, waveguide-based purification coupling along the other axis; and distant, switched connections requiring substantial purification. Thus, while we refer to our design as a quantum multicomputer with each node consisting of a single chip, it is more accurate to regard the connections between qubits as occurring on a set of levels rather than across a simple internal/external distinction. Founded on quantum dots connected via cavity QED and nanophotonic waveguides, and using topological error correction, this proposal represents progress toward a practical quantum computer architecture. The physical technologies are maturing rapidly, and tQEC offers both operational flexibility and a high threshold on realistic architectures such as ours.
While the overall architecture (multicomputer) and the system building blocks (tQEC, purification circuits, etc.) have been established, much work remains to be done. The most important pending decision is the actual choice of semiconductor and quantum dot type. The cavity and memory lifetime, which dramatically affect our ability to build and maintain the lattice cluster state, will be critical factors in this decision. The yield of functional qubits will ultimately drive the types of experiments that are feasible.
With the decision of semiconductor and the key technical parameters in hand, it will become possible to more quantitatively analyze the mid-level design choices of node size, layout tradeoffs, and the numbers of required lasers and photodiodes. The control system for managing the qubits and cavity coupling will be a large engineering effort involving optics, electronic circuits, and possibly micromechanical elements. Finally, application algorithms need to be implemented and optimized and run-time systems deployed, which will require the creation of large software tool suites.
One of our goals in this work is to establish target values for experimental parameters that must be achieved for such a large system to work. For the chip design and system configuration we present here, we estimate that the yield of functional quantum dots must be at least 40%, the local optical loss must be better than 0.02%, the adjusted gate error rate must be below the tQEC threshold, and the memory coherence time must be on the order of milliseconds or more. The exact values of these goals depend on the architecture, system scale, and application; the entire system is summarized in Table 2.
As a final comment, the physical resources demanded by this architecture are daunting; other architectures for quantum computers are comparably daunting. The current work is intended in large part to reveal the scope of the problem. With realistic resources such as lossy waveguides, finite-yield qubits, and finite chip sizes, the added overhead for error correction makes quantum computers very expensive by current standards. We must rely on engineering advancements to improve nanophotonic and quantum dot devices, as well as VLSI-like manufacturing capabilities, to realize a quantum computer at realistic cost. Indeed, our current understanding of how to make very large quantum computers is often likened to that of classical computers before VLSI techniques were developed. The technologies enabling practical approaches to building large quantum computers are likely yet to be discovered, but architectures such as the one we have presented, and the defect-tolerant, communication-oriented design principles we have used, are expected to provide the guiding context for these new technologies.
This work was supported by NSF, with partial support by MEXT and NICT. We acknowledge the support of the Australian Research Council, the Australian Government, and the US National Security Agency (NSA) and the Army Research Office (ARO) under contract number W911NF-08-1-0527. The authors thank Shinichi Koseki for fabricating and photographing the test structure and Shota Nagayama for help with the figures. We thank Jim Harrington, Robert Raussendorf, Ray Beausoliel, Kae Nemoto, Bill Munro, and the QIS groups at HP Labs and NII, for many useful technical discussions. We also would like to thank Skype, Ltd. for providing the classical networking software that enabled the tri-continental writing of this manuscript.
- 1. D.P. DiVincenzo. The physical implementation of quantum computation. Fortschritte der Physik, 48(9-11):771–783, 2000.
- 2. David P. DiVincenzo. Quantum Computation. Science, 270(5234):255–261, 1995.
- 3. Timothy P. Spiller, William J. Munro, Sean D. Barrett, and Pieter Kok. An introduction to quantum information processing: applications and realisations. Contemporary Physics, 46:406, 2005.
- 4. Rodney Van Meter and Mark Oskin. Architectural implications of quantum computing technologies. ACM Journal of Emerging Technologies in Computing Systems, 2(1):31–63, January 2006.
- 5. Dorit Aharonov and Michael Ben-Or. Fault-tolerant quantum computation with constant error rate. http://arXiv.org/quant-ph/9906129, June 1999. extended version of STOC 1997 paper.
- 6. Todd Brun, Igor Devetak, and Min-Hsiu Hsieh. Correcting quantum errors with entanglement. Science, 314(5798):436–439, 2006.
- 7. Dave Bacon and Andrea Casaccino. Quantum error correcting subsystem codes from two classical linear codes. quant-ph/0610088, October 2006.
- 8. D. Bacon. Operator quantum error-correcting subsystems for self-correcting quantum memories. Physical Review A, 73(1):12340, 2006.
- 9. D.J.C. MacKay, G. Mitchison, and P.L. McFadden. Sparse-graph codes for quantum error correction. IEEE Transactions on Information Theory, 50(10):2315, 2004.
- 10. Andrew M. Steane. Overhead and noise threshold of fault-tolerant quantum error correction. Physical Review A, 68:042322, 2003.
- 11. Andrew M. Steane. Quantum computer architecture for fast entropy extraction. Quantum Information and Computation, 2(4):297–306, 2002. http://arxiv.org/quant-ph/0203047.
- 12. Simon J. Devitt, Austin G. Fowler, and Lloyd C. Hollenberg. Simulations of Shor’s algorithm with implications to scaling and quantum error correction. http://arXiv.org/quant-ph/0408081, August 2004.
- 13. Dean Copsey, Mark Oskin, Tzvetan Metodiev, Frederic T. Chong, Isaac Chuang, and John Kubiatowicz. The effect of communication costs in solid-state quantum computing architectures. In Proceedings of the fifteenth annual ACM Symposium on Parallel Algorithms and Architectures, pages 65–74, 2003.
- 14. T. Szkopek, P. O. Boykin, H. Fan, V. P. Roychowdhury, E. Yablonovitch, G. Simms, M. Gyure, and B. Fong. Threshold error penalty for fault-tolerant quantum computation with nearest neighbor communication. IEEE Trans. on Nanotech., 5:42, 2006.
- 15. Darshan D. Thaker, Tzvetan Metodi, Andrew Cross, Isaac Chuang, and Frederic T. Chong. CQLA: Matching density to exploitable parallelism in quantum computing. In Computer Architecture News, Proc. 33rd Annual International Symposium on Computer Architecture. ACM, June 2006.
- 16. Mark G. Whitney, Nemanja Isailovic, Yatish Patel, and John Kubiatowicz. A fault tolerant, area efficient architecture for Shor’s factoring algorithm. In Proc. 36th Annual International Symposium on Computer Architecture, June 2009.
- 17. E. Collin, G. Ithier, A. Aassime, P. Joyez, D. Vion, and D. Esteve. NMR-like control of a quantum bit superconducting circuit. Physical Review Letters, 93:157005, October 2004.
- 18. Lieven M.K. Vandersypen and Isaac Chuang. NMR techniques for quantum computation and control. Rev. Modern Phys., 76:1037, 2004.
- 19. D. A. Lidar, I. L. Chuang, and K. B. Whaley. Decoherence-free subspaces for quantum computation. Physical Review Letters, 81(12):2594–2597, September 1998.
- 20. Daniel A. Lidar and K. Birgitta Whaley. Irreversible Quantum Dynamics, volume 622 of Lecture Notes in Physics, chapter Decoherence-Free Subspaces and Subsystems. Springer, 2003.
- 21. W. Dür and H.J. Briegel. Entanglement purification and quantum error correction. Rep. Prog. Phys., 70:1381–1424, 2007.
- 22. Z. W. E. Evans, A. M. Stephens, J. H. Cole, and L. C. L. Hollenberg. Error correction optimisation in the presence of X/Z asymmetry, 2007.
- 23. Austin G. Fowler, Charles D. Hill, and Lloyd C. L. Hollenberg. Quantum error correction on linear nearest neighbor qubit arrays. Physical Review A, 69:042314, 2004.
- 24. A. Yu. Kitaev. Quantum computations: algorithms and error correction. Russian Math. Surveys, 52(6):1191–1249, 1997.
- 25. C.H. Bennett, G. Brassard, S. Popescu, B. Schumacher, J.A. Smolin, and W.K. Wootters. Purification of noisy entanglement and faithful teleportation via noisy channels. Physical Review Letters, 76(5):722–725, 1996.
- 26. J.I. Cirac, A. Ekert, S.F. Huelga, and C. Macchiavello. Distributed quantum computation over noisy channels. Physical Review A, 59:4249, 1999.
- 27. J. Dehaene, M. Van den Nest, B. De Moor, and F. Verstraete. Local permutations of products of Bell states and entanglement distillation. Physical Review A, 67(2):22310, 2003.
- 28. C. Kruszynska, A. Miyake, H.J. Briegel, and W. Dür. Entanglement purification protocols for all graph states. Physical Review A, 74(5):52316, 2006.
- 29. E.N. Maneva and J.A. Smolin. Improved two-party and multi-party purification protocols. Contemporary Mathematics Series, 305:203–212, 2000.
- 30. Rodney Van Meter, Thaddeus D. Ladd, W. J. Munro, and Kae Nemoto. System design for a long-line quantum repeater. IEEE/ACM Transactions on Networking, 17(3):1002–1013, June 2009.
- 31. E. Knill, R. Laflamme, R. Martinez, and C. Negrevergne. Benchmarking quantum computers: the five-qubit error correcting code. Physical Review Letters, 86(25):5811–5814, June 2001.
- 32. J. Chiaverini, D. Leibfried, T. Schaetz, M. D. Barrett, R. B. Blakestad, J. Britton, W. M. Itano, J. D. Jost, E. Knill, C. Langer, R. Ozeri, and D. J. Wineland. Realization of quantum error correction. Nature, 432:602–605, 2004.
- 33. T.B. Pittman, B.C. Jacobs, and J.D. Franson. Demonstration of quantum error correction using linear optics. Physical Review A, 71:052332, May 2005.
- 34. Eric Dennis, Alexei Kitaev, Andrew Landahl, and John Preskill. Topological quantum memory. J. Math. Phys., 43:4452–4505, 2002.
- 35. Robert Raussendorf, Jim Harrington, and Kovid Goyal. Topological fault-tolerance in cluster state quantum computation. New Journal of Physics, 9:199, 2007.
- 36. A.Y. Kitaev. Fault-tolerant quantum computation by anyons. Annals of Physics, 303(1):2–30, 2003.
- 37. M.H. Freedman, A. Kitaev, M.J. Larsen, and Z. Wang. Topological quantum computation. Bulletin of the American Mathematical Society, 40(1):31–38, October 2002.
- 38. Robert Raussendorf and Jim Harrington. Fault-tolerant quantum computation with high threshold in two dimensions. Physical Review Letters, 98:190504, 2007.
- 39. Simon J. Devitt, Austin G. Fowler, Todd Tilma, William J. Munro, and Kae Nemoto. Classical processing requirements for a topological quantum computing system, 2009.
- 40. S. J. Devitt, A. G. Fowler, A. M. Stephens, A. D. Greentree, L. C. L. Hollenberg, W. J. Munro, and Kae Nemoto. Architectural design for a topological cluster state quantum computer. arXiv:0808.1782, 2008.
- 41. D. P. DiVincenzo. Fault tolerant architectures for superconducting qubits. arXiv:0905.4839, 2009.
- 42. René Stock and Daniel F. V. James. Scalable, high-speed measurement-based quantum computer using trapped ions. Physical Review Letters, 102(17):170501, 2009.
- 43. John L. Hennessy and David A. Patterson. Computer Architecture: A Quantitative Approach. Morgan Kaufmann, 4th edition, 2006.
- 44. William James Dally and Brian Towles. Principles and Practices of Interconnection Networks. Elsevier, 2004.
- 45. Rodney Van Meter, Kohei M. Itoh, and Thaddeus D. Ladd. Architecture-dependent execution time of Shor’s algorithm. In Proc. Int. Symp. on Mesoscopic Superconductivity and Spintronics (MS+S2006), February 2006.
- 46. Peter W. Shor. Algorithms for quantum computation: Discrete logarithms and factoring. In Proc. 35th Symposium on Foundations of Computer Science, pages 124–134, Los Alamitos, CA, 1994. IEEE Computer Society Press.
- 47. Rodney Doyle Van Meter III. Architecture of a Quantum Multicomputer Optimized for Shor’s Factoring Algorithm. PhD thesis, Keio University, 2006. available as arXiv:quant-ph/0607065.
- 48. Jeffrey Yepez. Type-II quantum computers. International Journal of Modern Physics C, 12(9):1273–1284, 2001.
- 49. Thomas M. Stace, Sean D. Barrett, and Andrew C. Doherty. Thresholds for topological codes in the presence of loss. Physical Review Letters, 102(20):200501, 2009.
- 50. H.-J. Briegel, W. Dür, J.I. Cirac, and P. Zoller. Quantum repeaters: the role of imperfect local operations in quantum communication. Physical Review Letters, 81:5932–5935, 1998.
- 51. A.G. Fowler, A.M. Stephens, and P. Groszkowski. High threshold universal quantum computation on the surface code. arXiv:0803.0272, 2008.
- 52. Austin G. Fowler and Kovid Goyal. Topological cluster state quantum computing, 2008.
- 53. D.S. Wang, A.G. Fowler, A.M. Stephens, and L.C.L. Hollenberg. Threshold error rates for the toric and surface codes. arXiv:0905.0531, 2009.
- 54. J.P. Reithmaier, G. Sęk, A. Löffler, C. Hofmann, S. Kuhn, S. Reitzenstein, L.V. Keldysh, V.D. Kulakovskii, T.L. Reinecke, and A. Forchel. Strong coupling in a single quantum dot–semiconductor microcavity system. Nature, 432(7014):197–200, 2004.
- 55. J. Berezovsky, M. H. Mikkelsen, N. G. Stoltz, L. A. Coldren, and D. D. Awschalom. Picosecond coherent optical manipulation of a single electron spin in a quantum dot. Science, 320:349, 2008.
- 56. D. Press, T. D. Ladd, B. Y. Zhang, and Y. Yamamoto. Complete quantum control of a single quantum dot spin using ultrafast optical pulses. Nature, 456:218–221, 2008.
- 57. C. Kistner, T. Heindel, C. Schneider, A. Rahimi-Iman, S. Reitzenstein, S. Höfling, and A. Forchel. Demonstration of strong coupling via electro-optical tuning in high-quality QD-micropillar systems. Optics Express, 16(19):15006, 2008.
- 58. I. Fushman, D. Englund, A. Faraon, N. Stoltz, P. Petroff, and J. Vuckovic. Controlled phase shifts with a single quantum dot. Science, 320(5877):769–772, 2008.
- 59. C. Schneider, M. Strauß, T. Sünner, A. Huggenberger, D. Wiener, S. Reitzenstein, M. Kamp, S. Höfling, and A. Forchel. Lithographic alignment to site-controlled quantum dots for device integration. Appl. Phys. Lett., 92(18):183101, 2008.
- 60. A.M. Tyryshkin, S. A. Lyon, A. V. Astashkin, and A. M. Raitsimring. Electron spin-relaxation times of phosphorus donors in silicon. Phys. Rev. B, 68:193207, 2003.
- 61. A. Yang, M. Steger, D. Karaiskaj, M. L. W. Thewalt, M. Cardona, K. M. Itoh, H. Riemann, N. V. Abrosimov, M. F. Churbanov, A. V. Gusev, A. D. Bulanov, A. K. Kaliteevskii, O. N. Godisov, P. Becker, H.-J. Pohl, J. W. Ager III, and E. E. Haller. Optical detection and ionization of donors in specific electronic and nuclear spin states. Phys. Rev. Lett., 97:227401, 2006.
- 62. A. Pawlis, M. Panfilova, D. J. As, K. Lischka, K. Sanaka, T. D. Ladd, and Y. Yamamoto. Lasing of donor-bound excitons in ZnSe microdisks. Phys. Rev. B, 77:153304, 2008.
- 63. K. Sanaka, A. Pawlis, T. D. Ladd, K. Lischka, and Y. Yamamoto. Indistinguishable photons from independent semiconductor nanostructures. Phys. Rev. Lett., 2009. in press.
- 64. M. V. G. Dutt, L. Childress, L. Jiang, E. Togan, J. Maze, F. Jelezko, A. S. Zibrov, P. R. Hemmer, and M. D. Lukin. Quantum register based on individual electronic and nuclear spin qubits in diamond. Science, 316(5829):1312–1316, 2007.
- 65. C. Santori, Ph. Tamarat, P. Neumann, J. Wrachtrup, D. Fattal, R. G. Beausoleil, J. Rabeau, P. Olivero, A. D. Greentree, S. Prawer, F. Jelezko, and P. Hemmer. Coherent population trapping of single spins in diamond under optical excitation. Phys. Rev. Lett., 97(24):247401, 2006.
- 66. G. Balasubramanian et al. Ultralong spin coherence time in isotopically engineered diamond. Nature Mater., 8:383–387, 2009.
- 67. S.M. Clark, K.M.C. Fu, T.D. Ladd, and Y. Yamamoto. Quantum computers based on electron spins controlled by ultra-fast, off-resonant, single optical pulses. Physical Review Letters, 99:040501, 2007.
- 68. T. D. Ladd, P. van Loock, K. Nemoto, W. J. Munro, and Y. Yamamoto. Hybrid quantum repeater based on dispersive CQED interactions between matter qubits and bright coherent light. New J. Phys., 8:184, 2006.
- 69. C. Cabrillo, J. I. Cirac, P. García-Fernández, and P. Zoller. Creation of entangled states of distant atoms by interference. Phys. Rev. A, 59(2):1025, 1999.
- 70. L. Childress, J.M. Taylor, A.S. Sørensen, and M.D. Lukin. Fault-tolerant quantum repeaters with minimal physical resources and implementations based on single-photon emitters. Physical Review A, 72(5):52330, 2005.
- 71. E. Waks and J. Vuckovic. Dipole induced transparency in drop filter cavity-waveguide systems. Phys. Rev. Lett., 96:153601, 2006.
- 72. A. Politi, M. J. Cryan, J. G. Rarity, S. Y. Yu, and J. L. O’Brien. Silica-on-silicon waveguide quantum circuits. Science, 320(5876):646, 2008.
- 73. H. Rokhsari and K. J. Vahala. Ultralow loss, high Q, four port resonant couplers for quantum optics and photonics. Phys. Rev. Lett., 92(25):253905, Jun 2004.
- 74. Yurii Vlasov, William M. J. Green, and Fengnian Xia. High-throughput silicon nanophotonic wavelength-insensitive switch for on-chip optical networks. Nature Photonics, March 2008. doi:10.1038/nphoton.2008.31.
- 75. J. Kim et al. System design for large-scale ion trap quantum information processor. Quantum Information and Computation, 5(7):515–537, 2005.
- 76. S. M. Clark, K.-M. C. Fu, Q. Zhang, T D. Ladd, C. Stanley, and Y. Yamamoto. Ultrafast optical spin echo for electron spins in semiconductors. Phys. Rev. Lett., 2009. in press.
- 77. J. Berezovsky, M. H. Mikkelsen, O. Gywat, N. G. Stoltz, L. A. Coldren, and D. D. Awschalom. Nondestructive optical measurements of a single electron spin in a quantum dot. Science, 314:1916, 2006.
- 78. M. Atatüre, J. Dreiser, A. Badolato, and A. Imamoglu. Observation of Faraday rotation from a single confined spin. Nat. Phys., 3:101, 2007.
- 79. I. Fushman, D. Englund, A. Faraon, N. Stoltz, P. Petroff, and J. Vuckovic. Controlled phase shifts with a single quantum dot. Science, 320(5877):769, 2008.
- 80. A. Imamoglu, D. D. Awschalom, G. Burkard, D. P. DiVincenzo, D. Loss, M. Sherwin, and A. Small. Quantum information processing using quantum dot spins and cavity QED. Phys. Rev. Lett., 83:4204, 1999.
- 81. Shi-Biao Zheng. Unconventional geometric quantum phase gates with a cavity QED system. Phys. Rev. A, 70(5):052320, Nov 2004.
- 82. T. P. Spiller, K. Nemoto, S. L. Braunstein, W. J. Munro, P. van Loock, and G. J. Milburn. Quantum computation by communication. New J. Phys., 8:30, 2006.
- 83. Thaddeus D. Ladd and Yoshihisa Yamamoto. In preparation.
- 84. L. M. Duan and H. J. Kimble. Scalable photonic quantum computation through cavity-assisted interactions. Phys. Rev. Lett., 92:127902, 2004.
- 85. A. M. Stephens, Z. W. E. Evans, S. J. Devitt, A. D. Greentree, A. G. Fowler, W. J. Munro, J. L. O’Brien, K. Nemoto, and L. C. L. Hollenberg. Deterministic optical quantum computer using photonic modules. Physical Review A, 78(3), 2008.
- 86. P. van Loock, T. D. Ladd, K. Sanaka, F. Yamaguchi, K. Nemoto, W. J. Munro, and Y. Yamamoto. Hybrid quantum repeater using bright coherent light. Phys. Rev. Lett., 96:240501, 2006.
- 87. S. D. Barrett, Pieter Kok, Kae Nemoto, R. G. Beausoleil, W. J. Munro, and T. P. Spiller. Symmetry analyzer for nondestructive Bell-state detection using weak nonlinearities. Phys. Rev. A, 71(6):060302, Jun 2005.
- 88. W.J. Munro, K. Nemoto, and T.P. Spiller. Weak nonlinearities: a new route to optical quantum computation. New Journal of Physics, 7:137, May 2005.
- 89. Sean D. Barrett and Pieter Kok. Efficient high-fidelity quantum computation using matter qubits and linear optics. Phys. Rev. A, 71(6):060310, Jun 2005.
- 90. Yuan Liang Lim, Sean D. Barrett, Almut Beige, Pieter Kok, and Leong Chuan Kwek. Repeat-Until-Success quantum computing using stationary and flying qubits. Physical Review Letters, 95(3):30505, 2005.
- 91. P. van Loock, N. Lütkenhaus, W. J. Munro, and K. Nemoto. Quantum repeaters using coherent-state communication. Physical Review A, 78(6), 2008.
- 92. E. Peter, P. Senellart, D. Martrou, A. Lemaître, J. Hours, J. M. Gérard, and J. Bloch. Exciton-photon strong-coupling regime for a single quantum dot embedded in a microcavity. Phys. Rev. Lett., 95:067401, 2005.
- 93. Shinichi Koseki, Bingyang Zhang, Kristiaan De Greve, and Yoshihisa Yamamoto. Monolithic integration of quantum dot containing microdisk microcavities coupled to air-suspended waveguides. Applied Physics Letters, 94:051110, February 2009.
- 94. Rodney Van Meter, W. J. Munro, Kae Nemoto, and Kohei M. Itoh. Arithmetic on a distributed-memory quantum multicomputer. ACM Journal of Emerging Technologies in Computing Systems, 3(4):17, January 2008.
- 95. Rodney Van Meter, Kae Nemoto, and William J. Munro. Communication links for distributed quantum computation. IEEE Transactions on Computers, 56(12):1643–1653, December 2007.
- 96. Stephen Y. Chou, Peter R. Krauss, and Preston J. Renstrom. Imprint lithography with 25-nanometer resolution. Science, 272(5258):85–87, 1996.
- 97. Linjie Li, Rafael R. Gattass, Erez Gershgoren, Hana Hwang, and John T. Fourkas. Achieving λ/20 resolution by one-color initiation and deactivation of polymerization. Science, 324(5929):910–913, 2009.
- 98. Trisha L. Andrew, Hsin-Yu Tsai, and Rajesh Menon. Confining light to deep subwavelength dimensions to enable optical nanopatterning. Science, 324(5929):917–921, 2009.
- 99. Timothy F. Scott, Benjamin A. Kowalski, Amy C. Sullivan, Christopher N. Bowman, and Robert R. McLeod. Two-color single-photon photoinitiation and photoinhibition for subdiffraction photolithography. Science, 324(5929):913–917, 2009.
- 100. W. Dür, H.-J. Briegel, J. I. Cirac, and P. Zoller. Quantum repeaters based on entanglement purification. Physical Review A, 59(1):169–181, Jan 1999.
- 101. Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2000.
- 102. Vlatko Vedral, Adriano Barenco, and Artur Ekert. Quantum networks for elementary arithmetic operations. Phys. Rev. A, 54:147–153, 1996. http://arXiv.org/quant-ph/9511018.
- 103. Rodney Van Meter and Kohei M. Itoh. Fast quantum modular exponentiation. Physical Review A, 71(5):052320, May 2005.
- 104. Thomas G. Draper, Samuel A. Kutin, Eric M. Rains, and Krysta M. Svore. A logarithmic-depth quantum carry-lookahead adder. Quantum Information and Computation, 6(4&5):351–369, July 2006.