The Measurement Calculus
Measurement-based quantum computation has emerged from the physics community as a new approach to quantum computation where the notion of measurement is the main driving force of computation. This is in contrast with the more traditional circuit model which is based on unitary operations. Among measurement-based quantum computation methods, the recently introduced one-way quantum computer [RB01] stands out as fundamental.
We develop a rigorous mathematical model underlying the one-way quantum computer and present a concrete syntax and operational semantics for programs, which we call patterns, and an algebra of these patterns derived from a denotational semantics. More importantly, we present a calculus for reasoning locally and compositionally about these patterns. We present a rewrite theory and prove a general standardization theorem which allows all patterns to be put in a semantically equivalent standard form. Standardization has far-reaching consequences: a new physical architecture based on performing all the entanglement in the beginning, parallelization by exposing the dependency structure of measurements and expressiveness theorems.
Furthermore we formalize several other measurement-based models e.g. Teleportation, Phase and Pauli models and present compositional embeddings of them into and from the one-way model. This allows us to transfer all the theory we develop for the one-way model to these models. This shows that the framework we have developed has a general impact on measurement-based computation and is not just particular to the one-way quantum computer.
The emergence of quantum computation has changed our perspective on many fundamental aspects of computing: the nature of information and how it flows, new algorithmic design strategies and complexity classes and the very structure of computational models [NC00]. New challenges have been raised in the physical implementation of quantum computers. This paper is a contribution to a nascent discipline: quantum programming languages.
This is more than a search for convenient notation, it is an investigation into the structure, scope and limits of quantum computation. The main issues are questions about how quantum processes are defined, how quantum algorithms compose, how quantum resources are used and how classical and quantum information interact.
Quantum computation emerged in the early 1980s with Feynman’s observations about the difficulty of simulating quantum systems on a classical computer. This hinted at the possibility of turning around the issue and exploiting the power of quantum systems to perform computational tasks more efficiently than was classically possible. In the mid 1980s Deutsch [Deu87] and later Deutsch and Jozsa [DJ92] showed how to use superposition – the ability to produce linear combinations of quantum states – to obtain computational speedup. This led to interest in algorithm design and the complexity aspects of quantum computation by computer scientists. The most dramatic results were Shor’s celebrated polytime factorization algorithm [Sho94] and Grover’s sublinear search algorithm [Gro98]. Remarkably one of the problematic aspects of quantum theory, the presence of non-local correlation – an example of which is called “entanglement” – turned out to be crucial for these algorithmic developments.
If efficient factorization is indeed possible in practice, then much of cryptography becomes insecure as it is based on the difficulty of factorization. However, entanglement makes it possible to design unconditionally secure key distribution [BB84, Eke91]. Furthermore, entanglement led to the remarkable – but simple – protocol for transferring quantum states using only classical communication [BBC93]; this is the famous so-called “teleportation” protocol. There continues to be tremendous activity in quantum cryptography, algorithmic design, complexity and information theory. Parallel to all this work there has been intense interest from the physics community to explore possible implementations, see, for example, [NC00] for a textbook account of some of these ideas.
On the other hand, only recently has there been significant interest in quantum programming languages; i.e. the development of formal syntax and semantics and the use of standard machinery for reasoning about quantum information processing. The first quantum programming languages were variations on imperative probabilistic languages and emphasized logic and program development based on weakest preconditions [SZ00, Ö01]. The first definitive treatment of a quantum programming language was the flowchart language of Selinger [Sel04b]. It was based on combining classical control, as traditionally seen in flowcharts, with quantum data. It also gave a denotational semantics based on completely positive linear maps. The notion of quantum weakest preconditions was developed in [DP06]. Later people proposed languages based on quantum control [AG05]. The search for a sensible notion of higher-type computation [SV05, vT04] continues, but is problematic [Sel04c].
A related recent development is the work of Abramsky and Coecke [AC04, Coe04] where they develop a categorical axiomatization of quantum mechanics. This can be used to verify the correctness of quantum communication protocols. It is very interesting from a foundational point of view and allows one to explore exactly what mathematical ingredients are required to carry out certain quantum protocols. This has also led to work on a categorical quantum logic [AD04].
The study of quantum communication protocols has led to formalizations based on process algebras [GN05, JL04] and to proposals to use model checking for verifying quantum protocols. A survey and a complete list of references on this subject up to 2005 is available [Gay05].
These ideas have proven to be of great utility in the world of classical computation. The use of logics, type systems, operational semantics, denotational semantics and semantic-based inference mechanisms have led to notable advances such as: the use of model checking for verification, reasoning compositionally about security protocols, refinement-based programming methodology and flow analysis.
The present paper applies this paradigm to a very recent development: measurement-based quantum computation. None of the cited research on quantum programming languages is aimed at measurement-based computation. On the other hand, the work in the physics literature does not clearly separate the conceptual layers of the subject from implementation issues. A formal treatment is necessary to analyze the foundations of measurement-based computation.
So far the main framework to explore quantum computation has been the circuit model [Deu89], based on unitary evolution. This is very useful for algorithmic development and complexity analysis [BV97]. There are other models such as quantum Turing machines [Deu85] and quantum cellular automata [Wat95, vD96, DS96, SW04]. Although they are all proved to be equivalent from the point of view of expressive power, there is no agreement on what is the canonical model for exposing the key aspects of quantum computation.
Recently physicists have introduced novel ideas based on the use of measurement and entanglement to perform computation [GC99, RB01, RBB03, Nie03]. This is very different from the circuit model where measurement is done only at the end to extract classical output. In measurement-based computation the main operation to manipulate information and control computation is measurement. This is surprising because measurement creates indeterminacy, yet it is used to express deterministic computation defined by a unitary evolution.
The idea of computing based on measurements emerged from the teleportation protocol [BBC93]. The goal of this protocol is for an agent to transmit an unknown qubit to a remote agent without actually sending the qubit. This protocol works by having the two parties share a maximally entangled state called a Bell pair. The parties perform local operations – measurements and unitaries – and communicate only classical bits. Remarkably, from this classical information the second party can reconstruct the unknown quantum state. In fact one can actually use this to compute via teleportation by choosing an appropriate measurement [GC99]. This is the key idea of measurement-based computation.
It turns out that the above method of computing is actually universal. This was first shown by Gottesman and Chuang [GC99] who used two-qubit measurements and given Bell pairs. Later Nielsen [Nie03] showed that one could do this with only 4-qubit measurements with no prior Bell pairs, however this works only probabilistically. Leung [Leu04] improved this to two qubits, but her method also works only probabilistically. Later Perdrix and Jorrand [Per03, PJ04] gave the minimal set of measurements to perform universal quantum computing – but still in the probabilistic setting – and introduced the state-transfer and measurement-based quantum Turing machine. Finally the one-way computer was invented by Raussendorf and Briegel [RB01, RB02] which used only single-qubit measurements with a particular multi-party entangled state, the cluster state.
More precisely, a computation consists of a phase in which a collection of qubits are set up in a standard entangled state. Then measurements are applied to individual qubits and the outcomes of the measurements may be used to determine further measurements. Finally – again depending on measurement outcomes – local unitary operators, called corrections, are applied to some qubits; this allows the elimination of the indeterminacy introduced by measurements. The phrase “one-way” is used to emphasize that the computation is driven by irreversible measurements.
There are at least two reasons to take measurement-based models seriously: one conceptual and one pragmatic. The main pragmatic reason is that the one-way model is believed by physicists to lend itself to easier implementations [Nie04, CAJ05, BR05, TPKV04, TPKV06, WkJRR05, KPA06, BES05, CCWD06, BBFM06]. Physicists have investigated various properties of the cluster state and have accrued evidence that the physical implementation is scalable and robust against decoherence [Sch03, HEB04, DAB03, dNDM04b, dNDM04a, MP04, GHW05, HDB05, DHN06]. Conceptually the measurement-based model highlights the role of entanglement and separates the quantum and classical aspects of computation; thus it clarifies, in particular, the interplay between classical control and the quantum evolution process.
Our approach to understanding the structural features of measurement-based computation is to develop a formal calculus. One can think of this as an "assembly language" for measurement-based computation. Ours is the first programming framework specifically based on the one-way model. We first develop a notation for such classically correlated sequences of entanglements, measurements, and local corrections. Computations are organized in patterns.
So far, this is primarily a clarification of what was already known from the series of papers introducing and investigating the properties of the one-way model [RB01, RB02, RBB03]. However, we work here with an extended notion of pattern, where inputs and outputs may overlap in any way one wants them to, and this results in more efficient – in the sense of using fewer qubits – implementations of unitaries. Specifically, our universal set consists of patterns using only 2 qubits. From it we obtain a 3 qubit realization of the rotations and a 14 qubit realization for the controlled-U family: a significant reduction over the hitherto known implementations.
The main point of this paper is to introduce a calculus of local equations over patterns that exploits some special algebraic properties of the entanglement, measurement and correction operators. More precisely, we use the fact that 1-qubit measurements are closed under conjugation by Pauli operators and the entanglement command belongs to the normalizer of the Pauli group; these terms are explained in the appendix. We show that this calculus is sound in that it preserves the interpretation of patterns. Most importantly, we derive from it a simple algorithm by which any general pattern can be put into a standard form where entanglement is done first, then measurements, then corrections. We call this standardization.
The consequences of the existence of such a procedure are far-reaching. Since entangling comes first, one can prepare the entire entangled state needed during the computation right at the start: one never has to do “on the fly” entanglements. Furthermore, the rewriting of a pattern to standard form reveals parallelism in the pattern computation. In a general pattern, one is forced to compute sequentially and to strictly obey the command sequence, whereas, after standardization, the dependency structure is relaxed, resulting in lower computational depth complexity. Last, the existence of a standard form for any pattern also has interesting corollaries beyond implementation and complexity matters, as it follows from it that patterns using no dependencies, or using only the restricted class of Pauli measurements, can only realize a unitary belonging to the Clifford group, and hence can be efficiently simulated by a classical computer [Got97].
As we have noted before, there are other methods for measurement-based quantum computing: the teleportation technique based on two-qubit measurements and the state-transfer approach based on single qubit measurements and incomplete two-qubit measurements. We will analyze the teleportation model and its relation to the one-way model. We will show how our calculus can be smoothly extended to cover this case as well as new models that we introduce in this paper. We get several benefits from our treatment. We get a workable syntax for handling the dependencies of operators on previous measurement outcomes just by mimicking the one obtained in the one-way model. This has never been done before for the teleportation model. Furthermore, we can use this embedding to obtain a standardization procedure for the models. Finally these extended calculi can be compositionally embedded back in the original one-way model. This clarifies the relation between different measurement-based models and shows that the one-way model of Raussendorf and Briegel is the canonical one.
This paper develops the one-way model ab initio but certain concepts that the reader may be unfamiliar with: qubits, unitaries, measurements, Pauli operators and the Clifford group are in an appendix. These are also readily accessible through the very thorough book of Nielsen and Chuang [NC00].
In the next section we define the basic model, followed by its operational and denotational semantics. For completeness, a simple proof of universality is given in Section 4; this has appeared earlier in the physics literature [DKP05]. In Section 5 we develop the rewrite theory and prove the fundamental standardization theorem. In Section 6 we develop several examples that illustrate the use of our calculus in designing efficient patterns. In Section 7 we prove some theorems about the expressive power of the calculus in the absence of adaptive measurements. In Section 8 we discuss other measurement-based models and their compositional embeddings to and from the one-way model. In Section 9 we discuss further directions and some more related work. In the appendix we review basic notions of quantum mechanics and quantum computation.
2 Measurement Patterns
We first develop a notation for 1-qubit measurement based computations. The basic commands one can use in a pattern are:

1-qubit auxiliary preparation N_i
2-qubit entanglement operators E_ij
1-qubit measurements M_i^α
1-qubit Pauli corrections X_i and Z_i

The indices i, j represent the qubits on which each of these operations applies, and α is a parameter in [0, 2π). Expressions involving angles are always evaluated modulo 2π. These types of command will be referred to as N, E, M and C. Sequences of such commands, together with two distinguished – possibly overlapping – sets of qubits corresponding to inputs and outputs, will be called measurement patterns, or simply patterns. These patterns can be combined by composition and tensor product.
Importantly, corrections and measurements are allowed to depend on previous measurement outcomes. We shall prove later that patterns without these classical dependencies can only realize unitaries that are in the Clifford group. Thus, dependencies are crucial if one wants to define a universal computing model; that is to say, a model where all unitaries over ⊗^n C^2 can be realized. It is also crucial to develop a notation that will handle these dependencies. This is what we do now.
Preparation N_i prepares qubit i in the state |+⟩_i := (|0⟩_i + |1⟩_i)/√2. The entanglement command E_ij is defined as ∧Z_ij (controlled-Z), while the correction commands X_i and Z_i are the Pauli operators X and Z applied at qubit i.
Measurement M_i^α is defined by orthogonal projections on

    |±_α⟩ := (|0⟩ ± e^{iα}|1⟩)/√2

followed by a trace-out operator. The parameter α ∈ [0, 2π) is called the angle of the measurement. For α = 0 and α = π/2, one obtains the X and Y Pauli measurements. Operationally, measurements will be understood as destructive measurements, consuming their qubit. The outcome of a measurement done at qubit i will be denoted by s_i ∈ Z_2. Since one only deals here with patterns where qubits are measured at most once (see condition (D1) below), this is unambiguous. We take the specific convention that s_i = 0 if under the corresponding measurement the state collapses to |+_α⟩, and s_i = 1 if to |−_α⟩.
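As a concrete illustration, the projective part of M^α can be sketched in a few lines of NumPy. The helper names below are ours, not the paper's; states are kept unnormalized so that squared norms give branch probabilities, matching the convention adopted in the operational semantics later.

```python
import numpy as np

def ket_alpha(alpha, sign=+1):
    # |+_a> = (|0> + e^{i a}|1>)/sqrt(2); sign=-1 gives |-_a>
    return np.array([1, sign * np.exp(1j * alpha)]) / np.sqrt(2)

def measure_destructive(state, alpha):
    """Destructively measure a 1-qubit state in the {|+_a>, |-_a>} basis.
    Returns the two unnormalized branch amplitudes <+_a|state> and <-_a|state>;
    their squared moduli are the probabilities of outcomes 0 and 1."""
    b0 = ket_alpha(alpha, +1).conj() @ state
    b1 = ket_alpha(alpha, -1).conj() @ state
    return b0, b1
```

For instance, measuring |+⟩ at angle α = 0 yields outcome 0 with certainty.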
Outcomes can be summed together, resulting in expressions of the form

    s = Σ_{i∈I} s_i

which we call signals, and where the summation is understood as being done in Z_2. We define the domain of a signal as the set of qubits on which it depends.
As we have said before, both corrections and measurements may depend on signals. Dependent corrections will be written X_i^s and Z_i^s, and dependent measurements will be written ^t[M_i^α]^s, where s, t ∈ Z_2. The meaning of dependencies for corrections is straightforward: X_i^0 = Z_i^0 = I, no correction is applied, while X_i^1 = X_i and Z_i^1 = Z_i. In the case of dependent measurements, the measurement angle will depend on s, t and α as follows:

    ^t[M_i^α]^s := M_i^{(−1)^s α + tπ}    (1)
so that, depending on the parities of s and t, one may have to modify the angle α to one of −α, α + π and −α + π. These modifications correspond to conjugations of measurements under X and Z:

    X_i M_i^α X_i = M_i^{−α}    (2)
    Z_i M_i^α Z_i = M_i^{α+π}    (3)

accordingly, we will refer to them as the X and Z-actions. Note that these two actions commute, since −(α + π) = −α + π up to 2π, and hence the order in which one applies them does not matter.
As we will see later, relations (2) and (3) are key to the propagation of dependent corrections, and to obtaining patterns in the standard entanglement, measurement and correction form. Since the measurements considered here are destructive, the above equations actually simplify to

    M_i^α X_i = M_i^{−α}
    M_i^α Z_i = M_i^{α+π}
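These absorption rules are easy to check numerically. In the sketch below (helper names ours), two measurement bras define the same measurement precisely when they agree up to a global phase, i.e. when the modulus of their inner product is 1.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def bra_plus(alpha):
    # the measurement bra <+_alpha| = (<0| + e^{-i alpha}<1|)/sqrt(2)
    return np.array([1, np.exp(-1j * alpha)]) / np.sqrt(2)

def proportional(u, v):
    # unit vectors define the same projection iff |<u|v>| = 1
    return np.isclose(abs(np.vdot(u, v)), 1.0)

alpha = 0.7
assert proportional(bra_plus(alpha) @ X, bra_plus(-alpha))         # M^a X = M^{-a}
assert proportional(bra_plus(alpha) @ Z, bra_plus(alpha + np.pi))  # M^a Z = M^{a+pi}
```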
Another point worth noticing is that the domain of the signals of a dependent command, be it a measurement or a correction, represents the set of measurements which one has to do before one can determine the actual value of the command.
We have completed our catalog of basic commands, including dependent ones, and we turn now to the definition of measurement patterns. For convenient reference, the language syntax is summarized in Figure 1.
A pattern consists of three finite sets V, I, O, together with two injective maps ι : I → V and o : O → V and a finite sequence of commands A_n … A_1, read from right to left, applying to qubits in V in that order, i.e. A_1 first and A_n last, such that:

(D0) no command depends on an outcome not yet measured;
(D1) no command acts on a qubit already measured;
(D2) no command acts on a qubit not yet prepared, unless it is an input qubit;
(D3) a qubit i is measured if and only if i is not an output.
The set V is called the pattern computation space, and we write H_V for the associated quantum state space ⊗_{i∈V} C^2. To ease notation, we will omit the maps ι and o, and write simply I, O instead of ι(I) and o(O). Note, however, that these maps are useful to define classical manipulations of the quantum states, such as permutations of the qubits. The sets I, O are called respectively the pattern inputs and outputs, and we write H_I and H_O for the associated quantum state spaces. The sequence A_n … A_1 is called the pattern command sequence, while the triple (V, I, O) is called the pattern type.
To run a pattern, one prepares the input qubits in some input state ψ ∈ H_I, while the non-input qubits are all set to the |+⟩ state, then the commands are executed in sequence, and finally the result of the pattern computation is read back from the outputs as some φ ∈ H_O. Clearly, for this procedure to succeed, we had to impose the (D0), (D1), (D2) and (D3) conditions. Indeed if (D0) fails, then at some point of the computation, one will want to execute a command which depends on outcomes that are not known yet. Likewise, if (D1) fails, one will try to apply a command to a qubit that has been consumed by a measurement (recall that we use destructive measurements). Similarly, if (D2) fails, one will try to apply a command to a non-existent qubit. Condition (D3) is there to make sure that the final state belongs to the output space H_O, i.e., that all non-output qubits, and only non-output qubits, will have been consumed by a measurement when the computation ends.
We write (D) for the conjunction of our definiteness conditions (D0), (D1), (D2) and (D3). Whether a given pattern satisfies (D) or not is statically verifiable on the pattern command sequence. We could have imposed a simple type system to enforce these constraints but, in the interests of notational simplicity, we chose not to do so.
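Since (D) is statically verifiable, such a checker is easy to write. In the sketch below a command is a (kind, qubits, deps) triple; commands are listed left to right as in the notation above, and therefore applied in reverse order. This encoding is our own illustrative choice, not the paper's concrete syntax.

```python
def satisfies_D(V, I, O, commands):
    """Check the definiteness conditions (D0)-(D3) for a command sequence,
    given as (kind, qubits, deps) triples applied right to left."""
    prepared, measured = set(I), set()
    for kind, qubits, deps in reversed(commands):
        if not deps <= measured:            # (D0) depends on unknown outcome
            return False
        if qubits & measured:               # (D1) qubit already consumed
            return False
        if kind == 'N':
            prepared |= qubits
        elif not qubits <= prepared:        # (D2) acts on unprepared qubit
            return False
        if kind == 'M':
            measured |= qubits
    return measured == set(V) - set(O)      # (D3) exactly non-outputs measured
```

For instance, the sequence X_2^{s_1} M_1^0 E_12 N_2 with V = {1, 2}, I = {1}, O = {2} passes the check, while dropping the preparation of qubit 2 violates (D2).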
Here is a concrete example:

    H := ({1, 2}, {1}, {2}, X_2^{s_1} M_1^0 E_12)

with computation space {1, 2}, inputs {1}, and outputs {2}. To run H, one first prepares the first qubit in some input state ψ, and the second qubit in state |+⟩, then these are entangled to obtain E_12(ψ_1 ⊗ |+⟩_2). Once this is done, the first qubit is measured in the |+⟩, |−⟩ basis. Finally an X correction is applied on the output qubit if the measurement outcome was s_1 = 1. We will do this calculation in detail later, and prove that this pattern implements the Hadamard operator H.
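This example can also be checked mechanically. The sketch below (helper names ours; qubit 1 is the major tensor factor) runs both measurement branches and applies the X correction on the s_1 = 1 branch:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
CZ = np.diag([1, 1, 1, -1])
plus = np.array([1, 1]) / np.sqrt(2)

def run_H_pattern(psi):
    """Run X_2^{s1} M_1^0 E_12 on input psi (on qubit 1); return the
    renormalized output of each measurement branch s1 = 0, 1."""
    q = (CZ @ np.kron(psi, plus)).reshape(2, 2)   # axis 0 is qubit 1
    bra0 = np.array([1, 1]) / np.sqrt(2)          # <+_0|, outcome s1 = 0
    bra1 = np.array([1, -1]) / np.sqrt(2)         # <-_0|, outcome s1 = 1
    out0 = bra0 @ q                               # no correction needed
    out1 = X @ (bra1 @ q)                         # X correction on qubit 2
    return out0 / np.linalg.norm(out0), out1 / np.linalg.norm(out1)
```

Both branches return Hψ, confirming that the pattern implements the Hadamard operator.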
In general, a given pattern may use auxiliary qubits that are neither input nor output qubits. Usually one tries to use as few such qubits as possible, since these contribute to the space complexity of the computation.
A last thing to note is that one does not require inputs and outputs to be disjoint subsets of V. This seemingly innocuous additional flexibility is actually quite useful to give parsimonious implementations of unitaries [DKP05]. While the restriction to disjoint inputs and outputs is unnecessary, it has been discussed whether imposing it results in patterns that are easier to realize physically. Recent work [HEB04, BR05, CAJ05], however, seems to indicate this is not the case.
2.3 Pattern combination
We are interested in how one can combine patterns in order to obtain bigger ones.
The first way to combine patterns is by composing them. Two patterns P_1 and P_2 may be composed if V_1 ∩ V_2 = O_1 = I_2. Provided that P_1 has as many outputs as P_2 has inputs, by renaming the pattern qubits one can always make them composable.

The composite pattern P_2 P_1 is defined as:
— V := V_1 ∪ V_2, I := I_1, O := O_2,
— commands are concatenated.
The other way of combining patterns is to tensor them. Two patterns P_1 and P_2 may be tensored if V_1 ∩ V_2 = ∅. Again one can always meet this condition by renaming qubits so that these sets are made disjoint.

The tensor pattern P_1 ⊗ P_2 is defined as:
— V := V_1 ∪ V_2, I := I_1 ∪ I_2, and O := O_1 ∪ O_2,
— commands are concatenated.
In contrast to the composition case, all the unions involved here are disjoint. Therefore commands from distinct patterns freely commute, since they apply to disjoint qubits, and when we say that commands have to be concatenated, this is only for definiteness. It is routine to verify that the definiteness conditions (D) are preserved under composition and tensor product.
Before turning to this matter, we need a clean definition of what it means for a pattern to implement or to realize a unitary operator, together with a proof that the way one can combine patterns is reflected in their interpretations. This is key to our proof of universality.
3 The semantics of patterns
In this section we give a formal operational semantics for the pattern language as a probabilistic labeled transition system. We define deterministic patterns and thereafter concentrate on them. We show that deterministic patterns compose. We give a denotational semantics of deterministic patterns; from the construction it will be clear that these two semantics are equivalent.
Besides quantum states, which are non-zero vectors in some Hilbert space H_V, one needs a classical state recording the outcomes of the successive measurements one does in a pattern. If we let V stand for the finite set of qubits that are still active (i.e. not yet measured) and W stand for the set of qubits that have been measured (i.e. they are now just classical bits recording the measurement outcomes), it is natural to define the computation state space as:

    Σ := ∪_{V,W} H_V × Z_2^W

In other words, the computation states form a (V, W)-indexed family of pairs (q, Γ), where q is a quantum state from H_V and Γ is a map from W to the outcomes Z_2, called an outcome map.
3.1 Operational semantics
We need some preliminary notation. For any signal s and classical state Γ ∈ Z_2^W, such that the domain of s is included in W, we take s_Γ to be the value of s given by the outcome map Γ. That is to say, if s = Σ_{i∈I} s_i, then s_Γ := Σ_{i∈I} Γ(i), where the sum is taken in Z_2. Also, if x ∈ Z_2 and i ∈ W, we define Γ[x/i] to be the extension of Γ mapping i to x, which is a map in Z_2^{W ∪ {i}}.
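In code, with the outcome map represented as a dictionary (an illustrative encoding, not the paper's notation), evaluating s_Γ is one line:

```python
def signal_value(domain, gamma):
    """Value s_Gamma of the signal sum_{i in domain} s_i under the outcome
    map gamma (a dict from measured qubits to outcomes), computed in Z_2."""
    return sum(gamma[i] for i in domain) % 2
```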
We may now view each of our commands as acting on the state space Σ; we have suppressed V and W in the first four commands:

    (q, Γ) →^{N_i} (q ⊗ |+⟩_i, Γ)
    (q, Γ) →^{E_ij} (∧Z_ij q, Γ)
    (q, Γ) →^{X_i^s} (X_i^{s_Γ} q, Γ)
    (q, Γ) →^{Z_i^s} (Z_i^{s_Γ} q, Γ)
    (q, Γ) →^{^t[M_i^α]^s} (⟨+_{α_Γ}|_i q, Γ[0/i])
    (q, Γ) →^{^t[M_i^α]^s} (⟨−_{α_Γ}|_i q, Γ[1/i])

where α_Γ := (−1)^{s_Γ} α + t_Γ π, following equation (1). Note how the measurement moves an index from V to W; a qubit once measured cannot be measured again. Suppose q ∈ H_V; for the above relations to be defined, one needs the indices i, j on which the various commands apply to be in V. One also needs Γ to contain the domains of s and t, so that s_Γ and t_Γ are well-defined. This will always be the case during the run of a pattern because of condition (D).
All commands except measurements are deterministic and only modify the quantum part of the state. The measurement actions on are not deterministic, so that these are actually binary relations on , and modify both the quantum and classical parts of the state. The usual convention has it that when one does a measurement the resulting state is renormalized and the probabilities are associated with the transition. We do not adhere to this convention here, instead we leave the states unnormalized. The reason for this choice of convention is that this way, the probability of reaching a given state can be read off its norm, and the overall treatment is simpler. As we will show later, all the patterns implementing unitary operators will have the same probability for all the branches and hence we will not need to carry these probabilities explicitly.
We introduce an additional command called signal shifting:

    (q, Γ) →^{S_i^s} (q, Γ[Γ(i) + s_Γ / i])

It consists in shifting the measurement outcome at i by the amount s_Γ. Note that the Z-action leaves measurements globally invariant, in the sense that changing α to α + π merely exchanges the two projections |+_α⟩ and |−_α⟩ (up to phase). Thus changing α to α + π amounts to swapping the outcomes of the measurements, and one has:

    ^t[M_i^α]^s = S_i^t [M_i^α]^s

and signal shifting allows one to dispose of the Z-action of a measurement, resulting sometimes in convenient optimizations of standard forms.
3.2 Denotational semantics
Let P be a pattern with computation space V, inputs I, outputs O and command sequence A_n … A_1. To execute a pattern, one starts with some input state q in H_I, together with the empty outcome map ∅. The input state q is then tensored with as many |+⟩s as there are non-inputs in V (the N commands), so as to obtain a state in the full space H_V. Then the E, M and C commands in P are applied in sequence from right to left.
If m is the number of measurements, which is also the number of non-outputs, then the run may follow 2^m different branches. Each branch is associated with a unique binary string s of length m, representing the classical outcomes of the measurements along that branch, and a unique branch map A_s representing the linear transformation from H_I to H_O along that branch. This map is obtained from the operational semantics via the sequence (q_i, Γ_i), with q_1 = q ⊗ |+ … +⟩ and Γ_1 = ∅, such that:

    (q_i, Γ_i) →^{A_i} (q_{i+1}, Γ_{i+1})
A pattern P realizes a map on density matrices given by ρ ↦ Σ_s A_s ρ A_s^†. We write [[P]] for the map realized by P.
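For the Hadamard pattern H above, the branch maps and the induced map on density matrices can be computed explicitly. The decomposition below into a preparation-and-entanglement isometry followed by branch projections anticipates the proof of the next proposition; the helper names are ours.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
CZ = np.diag([1, 1, 1, -1])
plus = np.array([[1], [1]]) / np.sqrt(2)

# U embeds H_I into H_V: prepare qubit 2 in |+> and entangle (a 4x2 isometry).
U = CZ @ np.kron(np.eye(2), plus)

def proj(sign):
    # <+_0| (sign=+1) or <-_0| (sign=-1) applied to qubit 1, a 2x4 map to H_O
    return np.kron(np.array([1, sign]) / np.sqrt(2), np.eye(2))

A0 = proj(+1) @ U            # branch s1 = 0: no correction
A1 = X @ (proj(-1) @ U)      # branch s1 = 1: X correction on the output

# The branch maps are equal here, and sum_s A_s^† A_s = I (trace preservation).
assert np.allclose(A0, A1)
assert np.allclose(A0.conj().T @ A0 + A1.conj().T @ A1, np.eye(2))

# The realized map rho -> sum_s A_s rho A_s^† is conjugation by H.
rho = np.array([[0.7, 0.2], [0.2, 0.3]])
assert np.allclose(A0 @ rho @ A0.conj().T + A1 @ rho @ A1.conj().T, H @ rho @ H)
```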
Each pattern realizes a completely positive trace preserving map.
Proof. Later on we will show that every pattern can be put in a semantically equivalent form where all the preparations and entanglements appear first, followed by a sequence of measurements and finally local Pauli corrections. Assuming this, the branch maps decompose as A_s = C_s Π_s U, where C_s is a unitary map over H_O collecting all corrections on outputs, Π_s is a projection from H_V to H_O representing the particular measurements performed along the branch, and U is a unitary embedding from H_I to H_V collecting the branch preparations and entanglements. Note that U is the same on all branches. Thus:

    Σ_s A_s^† A_s = Σ_s U^† Π_s^† C_s^† C_s Π_s U = U^† (Σ_s Π_s^† Π_s) U = U^† U = I

where we have used the fact that C_s is unitary, Π_s is a projection, and U is independent of the branches and is also unitary. Therefore the map ρ ↦ Σ_s A_s ρ A_s^† is a trace-preserving completely-positive map (cptp-map), explicitly given as a Kraus decomposition. Hence the denotational semantics of a pattern is a cptp-map. In our denotational semantics we view the pattern as defining a map from the input qubits to the output qubits. We do not explicitly represent the result of measuring the final qubits; these may be of interest in some cases. Techniques for dealing with classical output explicitly are given by Selinger [Sel04b] and Unruh [Unr05].
A pattern is said to be deterministic if it realizes a cptp-map that sends pure states to pure states. A pattern is said to be strongly deterministic when all the branch maps are equal.
This is equivalent to saying that, for a deterministic pattern, branch maps are proportional: for all q ∈ H_I and all branches s_1 and s_2, A_{s_1} q and A_{s_2} q differ only up to a scalar. For a strongly deterministic pattern we have A_{s_1} = A_{s_2} for all branches s_1 and s_2.
If a pattern is strongly deterministic, then it realizes a unitary embedding.

Proof. Define T to be the map realized by the pattern. We have T(ρ) = Σ_s A_s ρ A_s^†. Since the pattern is strongly deterministic, all the branch maps are the same. Define U := 2^{m/2} A_s, where m is the number of measurements; then U must be a unitary embedding, because U^† U = 2^m A_s^† A_s = Σ_s A_s^† A_s = I.
3.3 Short examples
For the rest of the paper we assume that all the non-input qubits are prepared in the |+⟩ state and hence, for simplicity, we omit the preparation commands N_i.
First we give a quick example of a deterministic pattern that has branches with different probabilities. Its type is , , and its command sequence is . Therefore, starting with input , one gets two branches:
Thus this pattern is indeed deterministic, and implements the identity up to a global phase, and yet the two branches have respective probabilities and , which are not equal in general and hence this pattern is not strongly deterministic.
There is an interesting variation on this first example. The pattern of interest, call it , has the same type as above with command sequence . Again, is deterministic, but not strongly deterministic: the branches have different probabilities, as in the preceding example. Now, however, these probabilities may depend on the input. The associated transformation is a cptp-map, with:
One has , so is indeed a completely positive and trace-preserving linear map, and clearly for no unitary does one have .
For our final example, we return to the pattern H, already defined above. Consider the pattern with the same qubit space {1, 2}, and the same inputs and outputs, I = {1}, O = {2}, as H, but with a shorter command sequence, namely M_1^0 E_12. Starting with input ψ = a|0⟩ + b|1⟩, one has two computation branches, branching at M_1^0:

    E_12(ψ ⊗ |+⟩) = a|0⟩|+⟩ + b|1⟩|−⟩ →^{M_1^0}  (a|+⟩ + b|−⟩)/√2   (outcome s_1 = 0)
                                       →^{M_1^0}  (a|+⟩ − b|−⟩)/√2   (outcome s_1 = 1)

and since ‖a|+⟩ + b|−⟩‖ = ‖a|+⟩ − b|−⟩‖, both transitions happen with equal probabilities 1/2. The two branches end up with non-proportional outputs, so the pattern is not deterministic. However, if one applies the local correction X_2 at the end of either branch, both outputs can be made to coincide. If we choose to let the correction apply to the second branch, we obtain the pattern H, already defined. We have just proved that H realizes the Hadamard operator.
3.4 Compositionality of the Denotational Semantics
With our definitions in place, we will show that the denotational semantics is compositional.
For two patterns P_1 and P_2 we have [[P_2 P_1]] = [[P_2]] ∘ [[P_1]] and [[P_1 ⊗ P_2]] = [[P_1]] ⊗ [[P_2]].
Proof. Recall that two patterns P_1 and P_2 may be combined by composition provided P_1 has as many outputs as P_2 has inputs. Suppose this is the case, and suppose further that P_1 and P_2 respectively realize some cptp-maps T_1 and T_2. We need to show that the composite pattern P_2 P_1 realizes T_2 ∘ T_1.
Indeed, the two diagrams representing branches in and :
can be pasted together, since , and . But then, it is enough to notice 1) that preparation steps in commute with all actions in since they apply on disjoint sets of qubits, and 2) that no action taken in depends on the measurement outcomes in . It follows that the pasted diagram describes the same branches as the one associated with the composite .
A similar argument applies to the case of a tensor combination, and one has that realizes .
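Compositionality can be illustrated concretely: composing two patterns composes their branch (Kraus) decompositions branch-wise. The sketch below composes the Hadamard pattern with itself, using the corrected branch maps (each branch realizes Hadamard with weight 1/2, as in the example above; this Kraus form is an assumption matching that example), and checks that the composite channel is the identity:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Corrected branch maps of the Hadamard pattern: both branches realize H,
# each with probability 1/2, i.e. Kraus operators K_s = H / sqrt(2).
K = [H / np.sqrt(2), H / np.sqrt(2)]

def channel(kraus, rho):
    # Apply the cptp map with the given Kraus operators to the state rho.
    return sum(A @ rho @ A.conj().T for A in kraus)

# Composing two patterns composes their Kraus decompositions branch-wise.
composite = [Kt @ Ks for Kt in K for Ks in K]

rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])    # arbitrary test state
assert np.allclose(channel(composite, rho), rho)           # H then H = identity
assert np.allclose(channel(K, channel(K, rho)), channel(composite, rho))
```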
One could also give a categorical treatment of this compositionality, but we do not pursue it here.
Define the two following patterns on :
with , in the first pattern, and in the second. Note that the second pattern does have overlapping inputs and outputs.
The patterns and are universal.
Proof. First, we claim and respectively realize and , with:
We have already seen in our example that implements , thus we already know this in the particular case where . The general case follows by the same kind of computation.
Second, we know that these unitaries form a universal set for [DKP05]. Therefore, from the preceding section, we infer that combining the corresponding patterns will generate patterns realizing any unitary in .
These patterns are indeed among the simplest possible. As a consequence, in the section devoted to examples, we will find that our implementations often have lower space complexity than the traditional implementations.
Remarkably, in our set of generators, one finds a single measurement and a single dependency, which occurs in the correction phase of . Clearly one needs at least one measurement, since patterns without measurements can only implement unitaries in the Clifford group. It is also true that dependencies are needed for universality, but we have to wait for the development of the measurement calculus in the next section to give a proof of this fact.
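The universality claim can be probed numerically. The sketch below uses the convention of [DKP05] for the one-qubit generator (assumed here, since the formula is not reproduced above) and checks two representative facts: conjugating controlled-Z by the generator at angle zero (which is Hadamard) yields CNOT, and products of two generators give every phase gate.

```python
import numpy as np

def J(alpha):
    # One-qubit generator in the convention of [DKP05] (an assumption of this
    # sketch): J(alpha) = 1/sqrt(2) [[1, e^{i alpha}], [1, -e^{i alpha}]].
    return np.array([[1,  np.exp(1j * alpha)],
                     [1, -np.exp(1j * alpha)]]) / np.sqrt(2)

CZ = np.diag([1, 1, 1, -1.0])
H = J(0)  # J(0) is the Hadamard gate

# CZ together with J generates e.g. CNOT: conjugating CZ by Hadamard on the
# target qubit turns the Z-control into an X-control.
CNOT = np.kron(np.eye(2), H) @ CZ @ np.kron(np.eye(2), H)
assert np.allclose(CNOT, [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

# Products of two J's give all phase gates: J(0) J(alpha) = diag(1, e^{i alpha}).
alpha = 0.3
assert np.allclose(J(0) @ J(alpha), np.diag([1, np.exp(1j * alpha)]))
```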
5 The measurement calculus
We turn to the next important matter of the paper, namely standardization. The idea is quite simple. It is enough to provide local pattern-rewrite rules pushing s to the beginning of the pattern and s to the end. The crucial point is to justify using the equations as rewrite rules.
5.1 The equations
The expressions appearing as commands are all linear operators on Hilbert space. At first glance, the appropriate equality between commands is equality as operators. For the deterministic commands, the equality that we consider is indeed equality as operators. This equality implies equality in the denotational semantics. However, for measurement commands one needs a stricter definition of equality in order to be able to use them as rewrite rules. Essentially we have to take into account the effect of the different branches that may result from the measurement process. The precise definition is below.
Consider two patterns and : we define if and only if, for any branch , we have , where and are the branch maps defined in Section 3.2.
The first set of equations gives the means to propagate local Pauli corrections through the entangling operator .
These equations are easy to verify and are natural since belongs to the Clifford group, and therefore maps the Pauli group to itself under conjugation. Note that, despite the symmetry of the operator qua operator, we have to consider all the cases, since the rewrite system defined below does not allow one to rewrite to . If we allowed this, the rewrite process could loop forever.
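The Clifford property behind these equations can be checked directly, assuming the usual identification of the entangling command with the controlled-Z gate: conjugating each local Pauli correction by it yields another Pauli product.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1.0])
I = np.eye(2)
CZ = np.diag([1, 1, 1, -1.0])   # the entangling command; note CZ is self-inverse

# The entangling command conjugates Pauli corrections to Pauli corrections:
assert np.allclose(CZ @ np.kron(X, I) @ CZ, np.kron(X, Z))   # X on 1 picks up Z on 2
assert np.allclose(CZ @ np.kron(I, X) @ CZ, np.kron(Z, X))   # X on 2 picks up Z on 1
assert np.allclose(CZ @ np.kron(Z, I) @ CZ, np.kron(Z, I))   # Z corrections commute
assert np.allclose(CZ @ np.kron(I, Z) @ CZ, np.kron(I, Z))
```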
A second set of equations allows one to push corrections through measurements acting on the same qubit. Again there are two cases:
These equations follow easily from equations (4) and (5). They express the fact that the measurements are closed under conjugation by the Pauli group, very much like equations (9),(10),(11) and (12) express the fact that the Pauli group is closed under conjugation by the entanglements .
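These conjugation facts can be verified directly on the measurement bras. The sketch below assumes the standard equatorial-measurement convention (an assumption of the sketch): an X conjugation flips the sign of the angle up to a global phase, and a Z conjugation shifts the angle by pi, which is exactly how dependent corrections get absorbed into measurements.

```python
import numpy as np

def bra_plus(alpha):
    # <+_alpha| for an equatorial measurement, standard convention (assumed):
    # <+_alpha| = (<0| + e^{-i alpha} <1|) / sqrt(2), as a row vector.
    return np.array([1, np.exp(-1j * alpha)]) / np.sqrt(2)

X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1.0])
alpha = 0.4

# X conjugation flips the sign of the angle (up to a global phase):
assert np.allclose(bra_plus(alpha) @ X, np.exp(-1j * alpha) * bra_plus(-alpha))

# Z conjugation shifts the angle by pi:
assert np.allclose(bra_plus(alpha) @ Z, bra_plus(alpha + np.pi))
```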
Define the following convenient abbreviations:
Particular cases of the equations above are:
The first equation follows from the fact that , so the action on is trivial; the second holds because is equal modulo , and therefore the and actions coincide on . So we obtain the following:
which we will use later to prove that patterns with measurements of the form and may only realize unitaries in the Clifford group.
5.2 The rewrite rules
We now define a set of rewrite rules, obtained by orienting the equations
to which we need to add the free commutation rules, obtained when commands operate on disjoint sets of qubits:
where represent the qubits acted upon by command , and are supposed to be distinct from and . Clearly these rules could be reversed since they hold as equations but we are orienting them this way in order to obtain termination.
Condition (D) is easily seen to be preserved under rewriting.
Under rewriting, the computation space, inputs and outputs remain the same, and so do the entanglement commands. Measurements might be modified, but there is still the same number of them, and they still act on the same qubits. The only induced modifications concern local corrections and dependencies. If there was no dependency at the start, none will be created in the rewriting process.
In order to obtain rewrite rules, it was essential that the entangling command () belongs to the normalizer of the Pauli group. The point is that the Pauli operators are the correction operators and they can be dependent, thus we can commute the entangling commands to the beginning without inheriting any dependency. Therefore the entanglement resource can indeed be prepared at the outset of the computation.
Write , respectively , if both patterns have the same type, and one obtains the command sequence of from the command sequence of by applying one, respectively any number, of the rewrite rules of the previous section. We say that is standard if for no , and the procedure of writing a pattern to standard form is called standardization.
One of the most important results about the rewrite system is that it has the desirable properties of determinacy (confluence) and termination (standardization). In other words, we will show that for all , there exists a unique standard , such that . It is, of course, crucial that the standardization process leaves the semantics of patterns invariant. This is the subject of the next simple, but important, proposition.
Whenever , .
Proof. It is enough to prove it when . The first group of rewrites has been proved to be sound in the preceding subsections, while the free commutation rules are obviously sound.
We now begin the main proof of this section. First, we prove termination.
Theorem 2 (Termination)
All rewriting sequences beginning with a pattern terminate after finitely many steps. For our rewrite system, this implies that for all there exist finitely many such that where the are standard.
Proof. Suppose has command sequence ; so the number of commands is . Let be the number of commands in . As we have noted earlier, this number is invariant under . Moreover commands in can be ordered by increasing depth, read from right to left, and this order, written , is also invariant, since commutations are forbidden explicitly in the free commutation rules.
Define the following depth function on and commands in :
Define further the following sequence of length : is the depth of the -command of rank according to . By construction this sequence is strictly increasing. Finally, we define the measure with:
We claim the measure we just defined decreases lexicographically under rewriting, in other words implies , where is the lexicographic ordering on .
To clarify these definitions, consider the following example. Suppose ’s command sequence is of the form , then , , and . For the command sequence we get that , and . Now, if one considers the rewrite , the measure of the left hand side is , while the measure of the right hand side, as said, is , and indeed . Intuitively the reason is clear: the s are being pushed to the left, thus decreasing the depths of s, and concomitantly, the value of .
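The decrease of the measure can be prototyped on symbolic command sequences. The toy below is a reconstruction under explicit assumptions (depth is position counted from the right, and the measure compares the tuple of entanglement-command depths lexicographically, then the number of corrections); the proof's exact bookkeeping may differ in detail, so this is illustrative only. It checks that both an EC-type rewrite (the entangling command moves toward the start, even though a new correction is created) and an MC-type rewrite (a correction is absorbed) strictly decrease the measure.

```python
def measure(seq):
    # seq is written left to right; execution, and hence depth, is right to left.
    n = len(seq)
    e_depths = sorted(n - k for k, cmd in enumerate(seq) if cmd[0] == 'E')
    n_corr = sum(1 for cmd in seq if cmd[0] in ('X', 'Z'))
    return (tuple(e_depths), n_corr)

# EC-type rewrite: the entangling command moves toward the start of the pattern,
# creating a new Z correction; its depth strictly decreases.
ec_before = [('E', 1, 2), ('X', 1, 's')]
ec_after  = [('X', 1, 's'), ('Z', 2, 's'), ('E', 1, 2)]
assert measure(ec_after) < measure(ec_before)

# MC-type rewrite: a correction is absorbed into the measurement's signal
# domain; E-depths are untouched and the correction count drops.
mc_before = [('M', 1, 0.0), ('X', 1, 's'), ('E', 1, 2)]
mc_after  = [('M', 1, 0.0, 's'), ('E', 1, 2)]
assert measure(mc_after) < measure(mc_before)
```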
Let us now consider all cases starting with an rewrite. Suppose the command under rewrite has depth and rank in the order . Then all s of smaller rank have the same depth in the right hand side, while now has depth and still rank . So the right hand side has a strictly smaller measure. Note that when , because of the creation of a (see the example above), the last element of may increase, and for the same reason all elements of index in may increase. This is why we are working with a lexicographic ordering.
Suppose now one does an rewrite, then strictly decreases, since one correction is absorbed, while all commands have equal or smaller depths. Again the measure strictly decreases.
Next, suppose one does an rewrite, and the command under rewrite has depth and rank . Then it has depth in the right hand side, and all other commands have invariant depths, since we forbade the case when is itself an . It follows that the measure strictly decreases.
Finally, upon an rewrite, all commands have invariant depth, except possibly one which has smaller depth in the case , and decreases strictly because we forbade the case where . Again the claim follows.
So all rewrites decrease our ordinal measure, and therefore all sequences of rewrites are finite, and since the system is finitely branching (there are no more than possible single step rewrites on a given sequence of length ), we get the statement of the theorem.
The final statement of the theorem follows from the fact that we have finitely many rules so the system is finitely branching. In any finitely branching rewrite system with the property that every rewrite sequence terminates, it is clearly true that there can be only finitely many standard forms.
The next theorem establishes the important determinacy property and furthermore shows that the standard patterns have a certain canonical form which we call the NEMC form. The precise definition is:
A pattern is in NEMC form if its commands occur in the order: preparations first, then entanglement commands, then measurements, and finally corrections.
We will usually just say "EMC" form: since we may assume that all the auxiliary qubits are prepared in the state , we usually elide the preparation commands.
Theorem 3 (Confluence)
For all , there exists a unique standard , such that , and is in EMC form.
Proof. Since the rewriting system is terminating, confluence follows from local confluence by Newman's Lemma [Bar84].
We look for critical pairs, that is, occurrences of three successive commands to which two rules can be applied simultaneously. One finds that there are only five types of critical pairs: three of these involve the command and are of the form , , and ; the remaining two are with , and all distinct, and with and distinct. In all cases local confluence is easily verified.
Suppose now does not satisfy the EMC form conditions. Then, either there is a pattern with not of type , or there is a pattern with not of type . In the former case, and must operate on overlapping qubits, else one may apply a free commutation rule, and may not be a since in this case one may apply an rewrite. The only remaining case is when is of type , overlapping ’s qubits, but this is what condition (D1) forbids, and since (D1) is preserved under rewriting, this contradicts the assumption. The latter case is even simpler.
We have shown that under rewriting any pattern can be put in EMC form, which is what we wanted. We actually proved more, namely that the standard form obtained is unique. However, one has to be a bit careful about the significance of this additional piece of information. Note first that uniqueness is obtained because we dropped the and free commutations, thus having a rigid notion of command sequence. One cannot put them back as rewrite rules, since they obviously ruin termination and uniqueness of standard forms.
A reasonable thing to do, would be to take this set of equations as generating an equivalence relation on command sequences, call it , and hope to strengthen the results obtained so far, by proving that all reachable standard forms are equivalent.
But this is too naive a strategy, since , and:
obtaining an expression which is not symmetric in and . To conclude, one has to extend to include the additional equivalence , which fortunately is sound since these two operators are equal up to a global phase. Thus, these are all equivalent in our semantics of patterns. We summarize this discussion as follows.
We define an equivalence relation on patterns by taking all the rewrite rules as equations and adding the equation and generating the smallest equivalence relation.
With this definition we can state the following proposition.
All patterns that are equivalent by are equal in the denotational semantics.
This relation preserves both the type (the triple) and the underlying entanglement graph. So clearly semantic equality does not entail equality up to . In fact, by composing teleportation patterns one obtains infinitely many patterns for the identity which are all different up to . One may wonder whether two patterns with same semantics, type and underlying entanglement graph are necessarily equal up to . This is not true either. One has (where is defined in Section 4), and this readily gives a counter-example.
We can now formally describe a simple standardization algorithm.
Input: A pattern on qubits with command sequence
Output: An equivalent pattern in NEMC form.
Commute all the preparation commands (new qubits) to the right side.
Commute all the correction commands to the left side using the EC and MC rewriting rules.
Commute all the entanglement commands to the right side after the preparation commands.
Note that since each qubit can be entangled with at most other qubits, and can be measured or corrected only once, we have entanglement commands and measurement commands. According to the definiteness condition, no command acts on a qubit not yet prepared; hence the first step of the above algorithm uses only trivial commutation rules, and the same is true for the last step, as no entanglement command can act on a qubit that has already been measured. Both steps can be done in . The real complexity of the algorithm comes from the second step and the commutation rule. In the worst case, commuting an correction to the left might create other corrections, each of which then has to be commuted to the left itself. Thus one can have at most new corrections, each of which has to be commuted past measurement or entanglement commands. Therefore the second step, and hence the algorithm, has a worst-case complexity of .
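The algorithm above can be sketched as a small symbolic standardizer. This is a toy under simplifying assumptions: commands are tuples written in execution order (leftmost runs first), signals are frozensets of qubit labels (mod-2 sums of outcomes), and measurement angles are omitted, so absorbing corrections only updates the signal domains. It repeatedly applies free commutation, the EC rule (a correction passing the entangling command spawns a Z on the other qubit), and the MC absorption rules, until the sequence is in NEMC order; the test standardizes a composite of two Hadamard-style patterns.

```python
PRIO = {'N': 0, 'E': 1, 'M': 2, 'X': 3, 'Z': 3}

def qubits(c):
    return set(c[1:3]) if c[0] == 'E' else {c[1]}

def standardize(seq):
    seq = list(seq)
    changed = True
    while changed:
        changed = False
        for k in range(len(seq) - 1):
            a, b = seq[k], seq[k + 1]
            if PRIO[a[0]] <= PRIO[b[0]]:
                continue
            if qubits(a).isdisjoint(qubits(b)):
                seq[k], seq[k + 1] = b, a                # free commutation
            elif a[0] == 'X' and b[0] == 'E':
                j = (qubits(b) - qubits(a)).pop()        # EC rule: X_i then E_ij
                seq[k:k + 2] = [b, ('Z', j, a[2]), a]    # becomes E_ij, Z_j, X_i
            elif a[0] == 'Z' and b[0] == 'E':
                seq[k], seq[k + 1] = b, a                # Z commutes with E
            elif a[0] == 'X' and b[0] == 'M':
                seq[k:k + 2] = [('M', b[1], b[2] ^ a[2], b[3])]  # into s-domain
            elif a[0] == 'Z' and b[0] == 'M':
                seq[k:k + 2] = [('M', b[1], b[2], b[3] ^ a[2])]  # into t-domain
            else:
                continue
            changed = True
            break
    return seq

# Composing the Hadamard pattern with itself: qubit 1 is the input, 3 the output.
s1, s2, none = frozenset({1}), frozenset({2}), frozenset()
composite = [('N', 2), ('E', 1, 2), ('M', 1, none, none), ('X', 2, s1),
             ('N', 3), ('E', 2, 3), ('M', 2, none, none), ('X', 3, s2)]
standard = standardize(composite)

assert [c[0] for c in standard] == ['N', 'N', 'E', 'E', 'M', 'M', 'Z', 'X']
assert standard[5] == ('M', 2, frozenset({1}), frozenset())  # absorbed s-dependency
```

Note how the correction on qubit 2 splits when it crosses the second entangling command: it is absorbed into the second measurement's s-domain, leaving a residual Z dependency on the output qubit, as in the standardized composite discussed in the examples section.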
We conclude this subsection by emphasizing the importance of the EMC form. Since the entanglement can always be done first, we can always derive the entanglement resource needed for the whole computation right at the beginning. After that only local operations will be performed. This separates the analysis of entanglement resource requirements from the classical control. Furthermore, this makes it possible to extract the maximal parallelism for the execution of the pattern, since the necessary dependencies are explicitly expressed; see the example in section 6 for further discussion. Finally, the EMC form provides us with tools to prove general theorems about patterns, such as the fact that they always compute cptp-maps and the expressiveness theorems of section 7.
5.4 Signal shifting
One can extend the calculus to include the signal shifting command . This allows one to dispose of dependencies induced by the -action, and obtain sometimes standard patterns with smaller computational depth complexity, as we will see in the next section which is devoted to examples.
where denotes the substitution of with in , , being signals. Note that when we write a explicitly on the upper left of an , we mean that . The first additional rewrite rule was already introduced as equation (6), while the other ones merely propagate the signal shift. Clearly one can dispose of when it hits the end of the pattern command sequence. We will refer to this new set of rules as . Note that we always apply first the standardization rules and then signal shifting, hence we do not need any commutation rule for and commands.
It is important to note that both Theorems 2 and 3 still hold for this extended rewriting system. In order to prove termination one can start with the EMC form and then adapt the proof of Theorem 2 by defining a depth function for a signal shift similar to the depth of a correction command. As with the corrections, signal shifts can also be commuted to the left hand side of a command sequence. Our measure can then be modified to account for the new signal-shifting terms and shown to decrease under each step of signal shifting. Confluence can also be proved from local confluence, using again Newman's Lemma [Bar84]. One typical critical pair is where appears in the domain of signal and hence the signal-shifting command affects the measurement. There are two possible ways to rewrite this pair: first, commute the signal-shifting command and then replace the left signal of the measurement with its own signal-shifting command:
The other way is to first replace the left signal of the measurement and then commute the signal shifting command:
Now one more step of rewriting on the last equation will give us the same result for both choices.
All other critical terms can be dealt with similarly.
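The substitution performed by the signal-shifting rules can be prototyped with signals as mod-2 sums of outcomes. In the toy below (the frozenset representation is an assumption of the sketch, not taken from the text), substituting a signal for an outcome variable is an XOR-style operation; in particular, outcomes cancel in pairs, which is the mechanism by which the two rewrite orders of a critical pair converge.

```python
def subst(sig, i, t):
    # sig[t / s_i]: replace the occurrence of outcome s_i in the signal sig
    # by the signal t, working mod 2 (signals are frozensets of qubit labels).
    return (sig - {i}) ^ t if i in sig else sig

# Example: s = s_1 + s_2; substituting s_3 + s_4 for s_2 gives s_1 + s_3 + s_4.
s = frozenset({1, 2})
assert subst(s, 2, frozenset({3, 4})) == frozenset({1, 3, 4})

# Substituting a signal that itself contains s_1 cancels the existing s_1
# mod 2: (s_1 + s_2)[s_1 + s_3 / s_2] = s_3.
assert subst(s, 2, frozenset({1, 3})) == frozenset({3})

# Substitution at a variable absent from the signal leaves it unchanged.
assert subst(s, 5, frozenset({9})) == s
```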
In this section we develop some examples illustrating pattern composition, pattern standardization, and signal shifting. We compare our implementations with the implementations given in the reference paper [RBB03]. To combine patterns one needs to rename their qubits, as we already noted. We use the following concrete notation: if is a pattern over , and is an injection, we write for the same pattern with qubits renamed according to . We also write for pattern composition, in order to make it more readable. Finally we define the computational depth complexity to be the number of measurement rounds plus one final correction round. More details on depth complexity, especially on the preparation depth, i.e. the depth of the entanglement commands, can be found in [BK06].
Consider the composite pattern with computation space , inputs , and outputs . We run our standardization procedure so as to obtain an equivalent standard pattern: