# Geometric analysis of minimum time trajectories for a two-level quantum system

###### Abstract

We consider the problem of controlling in minimum time a two-level quantum system which can be subject to a drift. The control is assumed to be bounded in magnitude, and to affect two or three independent generators of the dynamics. We describe the time-optimal trajectories in SU(2), the Lie group of possible evolutions for the system, by means of a particularly simple parametrization of the group. A key ingredient of our analysis is the introduction of the optimal front line. This tool allows us to fully characterize the time evolution of the reachable sets, and to derive the worst-case operators and the corresponding times. The analysis is performed in every regime: controlled dynamics stronger than, of the same magnitude as, or weaker than the drift term. It also gives a method to synthesize quantum logic operations on a two-level system in minimum time.

###### pacs:

02.30.Yy, 03.65.Aa, 03.67.-a^{†}

^{†}thanks: I am grateful to D. D’Alessandro for many helpful discussions. Work supported by ARO MURI under Grant W911NF-11-1-0268

## I Introduction

Control theory studies how the dynamics of a system can be modified through suitable external actions called controls miko (). When applied to quantum systems, it provides tools for the study of the feasibility and optimization of particular operations, for instance, in quantum information processing nielsen (), in atomic and molecular physics, and in Nuclear Magnetic Resonance (NMR) levitt (). In this work, we explore the time-optimal control miko2 (); khaneja () of the dynamics of a two-level system (or qubit), the basic unit in quantum information and quantum computation. Because of its fundamental role, the control of this system has been studied in several works, under many different assumptions (see for example wu (); boscain (); wenin (); kirillova (); garon (); hegerfeldt (); albertini () and references therein). Here, we provide a complete characterization of the time-optimal trajectories, assuming that the dynamics can contain a non-controllable part (the drift), and that the controllable part depends on two or three independent control functions.

From a mathematical point of view, we introduce some new key tools which enable a simple and comprehensive treatment of the system, and the extension of the results in albertini (). In particular, our analysis holds for any relative strength between controllable and non-controllable dynamics. The drift might be a dominant contribution, a perturbation, or a comparable term with respect to the controlled part. Our analysis is relevant whenever it is not accurate to assume that quantum operations can be performed in null time, that is, through infinitely strong controls.

The system dynamics is expressed through the Schrödinger operator equation

(1) |

with initial condition . The operator , an element of the special unitary group SU(2), realizes the time evolution as , where is the statistical operator associated with the system. The three control functions of time are the control parameters, which we assume to be bounded by

(2) |

Later, we will assume that only and can be used to affect the dynamics, i.e., we will set . The generators , , are given by , where

(3) \(\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\)

are the Pauli matrices, with commutation relations \([\sigma_j, \sigma_k] = 2i\,\sigma_l\), where \((j,k,l)\) is a cyclic permutation of \((x,y,z)\). The first contribution might be an arbitrary, static drift term, which can always be written as in (1) by a suitable redefinition of the Pauli matrices and of the control functions.
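Since the explicit matrices in (3) are the standard Pauli matrices, the commutation relations quoted above can be checked numerically. The sketch below is purely illustrative and uses the conventional representation:

```python
import numpy as np

# Standard Pauli matrices (sigma_x, sigma_y, sigma_z)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def commutator(a, b):
    return a @ b - b @ a

# Cyclic commutation relations: [sigma_j, sigma_k] = 2i sigma_l
for a, b, c in [(sx, sy, sz), (sy, sz, sx), (sz, sx, sy)]:
    assert np.allclose(commutator(a, b), 2j * c)
```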

In this work, we characterize the time optimal trajectories in SU(2) for any final operator , where is the minimum time for the transition . We derive the corresponding optimal controls , and for arbitrary values of and , and provide a complete description of the reachable sets in SU(2) at any time , that is, the family of operators the system evolution can be mapped to in the given time (see Definition 2 below). In particular, we derive the worst-case operators and times. It follows from standard results in geometric control theory that the system is controllable, so every final operator can be reached at some finite time. An important ingredient of our analysis is a representation of the elements of the special unitary group which relies on only two parameters, providing a clear description of the optimal trajectories and of the reachable sets in terms of the evolution of the boundary of the reachable sets themselves.

The plan of this work is as follows. We start our analysis by considering a system with three arbitrary controls (in control-theoretical jargon, a fully actuated system). In Section II we review the Pontryagin maximum principle of optimal control pontryagin (), which is the starting point of our analysis, and we derive the necessary conditions for optimal controls. By using them, in Section III we explicitly compute the candidate optimal trajectories in SU(2), and represent them in the chosen parametrization of the special unitary group. We introduce the notion of the optimal front line, which describes the evolution of the boundary of the reachable set. By using it, in Section IV we characterize the evolution of the reachable sets in the three cases , and , and provide the optimal times whenever an analytical expression is possible. We also derive the worst-case operators and the corresponding times. In Section V we use the same ideas and formalism to fully characterize the reachable sets (and related quantities) in the case where only two controls affect the dynamics. In that case, we find that the reachable sets evolve differently in the cases , and . In Section VI we provide examples of applications by particularizing our results to some special target operators: diagonal operators and the SWAP operator. This is done in both scenarios of two or three controls. In Section VII we compare our work to existing results on optimal control on SU(2), describe possible extensions of the approach, and finally conclude.

## II The Pontryagin maximum principle

Given an arbitrary final operator , we consider a trajectory in , determined by control functions , , such that and . A basic tool for the study of optimal control problems is given by the Pontryagin maximum principle, which, in the context of control of system (1) on the Lie group , takes the following form.

###### Definition 1

The Pontryagin Hamiltonian is defined as

(4) |

where , and .

###### Proposition II.1

(Pontryagin maximum principle) Assume that a control strategy , , with , and the corresponding trajectory are optimal (that is, the final time is minimal). Then there exists , , such that for every such that .

Define the coefficients

(5) |

By using the Lagrange multipliers method to maximize (4) with the bound (2), we find that the optimal controls satisfy

(6) |

Arcs where the Pontryagin Hamiltonian is independent of the control functions are called singular and, on them, the controls are not constrained by equations like (6). In general, an optimal trajectory is a concatenation of singular and non-singular arcs; usually, the presence of singular arcs makes the solution of the optimal control problem more difficult, and more sophisticated mathematical tools are required to tackle it (see for instance boscain2 () for a general analysis on 2-dimensional manifolds, or wu2 (); lapert () for some recent applications of the Pontryagin maximum principle when singular arcs are present). In our scenario, a singular trajectory would require in some interval, but this is impossible: in that case would vanish, which is excluded by the Pontryagin principle. Therefore, we can conclude that trajectories containing singular arcs are never optimal.
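The Lagrange-multiplier maximization leading to (6) can be sketched numerically. We assume here, for the sake of the illustration, that the bound (2) constrains the Euclidean norm of the control vector; under that assumption, the maximizer of the linear form with coefficients (5) points along the coefficient vector with maximal magnitude:

```python
import numpy as np

def optimal_controls(b, gamma=1.0):
    # Maximize b1*u1 + b2*u2 + b3*u3 subject to u1^2 + u2^2 + u3^2 <= gamma^2:
    # the maximizer points along b with maximal allowed magnitude.
    # Assumes b != 0 (the singular case b = 0 is excluded by the
    # Pontryagin principle, as argued in the text).
    b = np.asarray(b, dtype=float)
    return gamma * b / np.linalg.norm(b)

u = optimal_controls([3.0, 0.0, 4.0], gamma=2.0)
assert np.allclose(u, [1.2, 0.0, 1.6])
assert abs(np.linalg.norm(u) - 2.0) < 1e-12
```

The same construction with only two components applies verbatim to the two-control case of Section V.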

The dynamics of the coefficients can be derived by differentiating (5) with respect to , and by using the commutation relations among Pauli matrices. We find

(7) | |||||

By considering the form of optimal controls (6), we obtain that , that is, is constant. Moreover, using (6) in (II),

(8) |

where and are constants. Therefore, the candidate optimal controls are given by

(9) | |||||

and is given by

(10) |

Because of the special form of the candidate optimal controls, the dynamics (1) can be integrated, as we prove in the next section.

## III Extremal trajectories in SU(2)

We substitute the extremals (candidate optimal controls and trajectories are called extremals in the language of optimal control theory) in (1), and find the corresponding extremal trajectories in SU(2). To proceed, it is convenient to counter-evolve the drift of the system, by passing to the interaction picture of the dynamics,

(11) |

With this substitution the differential evolution for is given by

(12) |

which is simpler to integrate because the generator is time-independent. In the adopted representation we find that

(13) |

where we have defined to simplify the notation. Throughout this paper, we will switch between and the re-scaled time , possibly with subscripts, without further comment. By using (11), we compute

(14) |

This is the form of the extremal trajectories in SU(2). They depend on the two parameters and (which can be tuned via , , ) as well as on the fixed parameters and . To find the optimal trajectory for a given final state , one has to determine the values of and such that the transition takes the minimal time. This is conveniently done by choosing a suitable representation of SU(2), described in the following.
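As a numerical illustration of dynamics of the form (1), one can propagate the evolution by exact exponential steps and verify that it remains in SU(2). The drift strength k, the constant controls, and the identification of the generators with half the Pauli matrices are assumptions made for the sake of this sketch, not quantities taken from the original equations:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def step(U, H, dt):
    # Exact step for a constant Hermitian generator: U <- exp(-i H dt) U,
    # via the eigendecomposition of H.
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * dt)) @ V.conj().T @ U

# Hypothetical drift strength k and constant controls (u1, u2, u3)
k, u = 0.7, (0.3, 0.4, 0.0)
H = 0.5 * (k * sz + u[0] * sx + u[1] * sy + u[2] * sz)

U = np.eye(2, dtype=complex)
for _ in range(1000):
    U = step(U, H, 0.001)

# The evolution stays in SU(2): unitary with unit determinant
assert np.allclose(U.conj().T @ U, np.eye(2))
assert abs(np.linalg.det(U) - 1) < 1e-9
```

For time-dependent (e.g., sinusoidal) candidate optimal controls, the same stepping routine applies with H re-evaluated at each step.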

###### Remark III.1

An arbitrary operator can be given the following representation:

(15) |

Therefore, is described in terms of three parameters: , and . It turns out that, in the control scenario at hand, the optimal time does not depend on the parameter . In other words, all the operators in SU(2) which differ only in the value of are reached in the same optimal time. In fact, in (14) it is possible to arbitrarily change the phase of the off-diagonal terms by suitably choosing . This parameter enters the analysis only through the phase of the off-diagonal terms, and it is independent of the choice of . Therefore, to fully characterize the optimal trajectories in SU(2) and the reachable sets, we can limit our attention to the upper diagonal element of , which is sufficient to determine and .

This result can also be proven by adopting the argument of Proposition 2.1 in albertini (), where it is shown that the minimum time to reach and is the same for all real when there are two controls and . The two operators differ only in the phase of the off-diagonal entries. The proof is also valid for a fully actuated system.

According to the previous remark, we shall parameterize solely by and , or and in the equivalent representation . A point in the unit disk in the plane represents a family of matrices in SU(2) which differ only by the phase of the anti-diagonal elements. These matrices are reached in the same minimum time. Moreover, every candidate optimal trajectory can be represented by its projection onto the unit disk, with the understanding that any such trajectory corresponds to a family of trajectories differing only by the phase . Points on the border of the unit disk () correspond to diagonal matrices, and the initial point, the identity matrix, corresponds to the point , .
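Under the reading suggested by the remark above, where the two retained parameters are the real and imaginary parts of the upper diagonal element of the evolution operator, the projection onto the unit disk can be sketched as follows (the function name disk_point is ours, not the paper's):

```python
import numpy as np

def disk_point(U):
    # Upper diagonal element z = U[0, 0] = x + i y; since the rows of a
    # unitary matrix are normalized, |z| <= 1, so (x, y) lies in the unit disk.
    z = U[0, 0]
    return z.real, z.imag

# The identity maps to (1, 0), consistent with the stated initial point
x, y = disk_point(np.eye(2, dtype=complex))
assert (x, y) == (1.0, 0.0)

# A diagonal SU(2) matrix lands on the border of the unit disk
phi = 0.3
D = np.diag([np.exp(1j * phi), np.exp(-1j * phi)])
x, y = disk_point(D)
assert abs(x**2 + y**2 - 1) < 1e-12
```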

By direct inspection of (14) we have

(16) |

with . For we find

(17) |

and for

(18) |

These trajectories lie on the border of the unit disk. Moreover, by multiplying the first equation in (III) by , and the second equation by , and subtracting the results, we eliminate the parameter , and obtain

(19) |

This relation is a constraint on the terminal points of the candidate optimal trajectories at time , for arbitrary : they lie on a line with time-dependent slope and intercept. This can also be seen by noticing that we can recast (III) in the form

(20) |

which explicitly shows the special role played by the trajectories with .

As varies in , Eq. (20) describes a segment connecting two points on the unit circle. The end points rotate with uniform speed on the disk border, unless , in which case one of them is fixed in . Extremal trajectories are parameterized by . In general, since there is a one-to-one correspondence between points of the segment and , two extremal trajectories cannot reach the same point in exactly the same time. Consequently, there are no overlap points where two (or more) different extremal trajectories intersect. The only exception to this behavior arises when the aforementioned segment collapses to a point. This scenario occurs only when , at the worst-case time.

The segment we have just described, and its generalization to the case of two controls only in Sec. V, will be a fundamental ingredient in the analysis of reachable sets and optimal times. Therefore, we find it convenient to assign a specific name to it: the optimal front-line . More precisely, we can write and we will use this notation to represent subsets of the front line, as for instance .

Given an arbitrary final state , represented in the unit disk by , in order to find the optimal trajectory leading to it, we have to require that and . The minimal time is the smallest such that is in the optimal front line. The corresponding determines the optimal control strategy. The optimal minimum time can also be calculated analytically or numerically as follows. From (III) we find that

(21) |

In (III), if we multiply the first equation by , the second by , and then we subtract them, we obtain

(22) |

The minimum time is the smallest for which this equation is valid. Furthermore, by squaring the two equations in (III) and summing them, we find

(23) |

from which can be found, given the prior knowledge of . In principle, this approach can be used to find the optimal strategy for any final target operation. However, a geometrical analysis of the optimal front line provides much more information on how the states are reached, and further insight into the optimal times and the geometry of the reachable sets.

## IV Properties of the reachable sets and optimal times

###### Definition 2

The two sets are related by

(24) |

The structure of these sets is a direct consequence of the evolution of the aforementioned optimal front-line . It is a known fact in optimal control theory that, if a trajectory is optimal for at time , then belongs to the boundary of the reachable set until time , i.e., . Therefore, if a point of the unit disk is reached by for the first time at time , it belongs to . However, in general not all points of the optimal front line belong to , because they might be included in front lines corresponding to earlier times. Therefore, our strategy is to study the evolution of the front lines, and an important role will be played by the curve where intersects . This curve contains the points where optimal trajectories lose their optimality. We illustrate the procedure in the three different scenarios, depending on the relative values of and . A generalization of this idea will be used in Sec. V as well, with the difference that there we will find several intersection curves between and .

### IV.1 The case

This is the case where the control action is assumed to be more powerful than the natural evolution of the system. In this case, and are both positive. Therefore, following (17) and (18), the extremal points of the optimal front-line rotate in opposite directions along the unit circle, with constant angular speed. This shows that and do not intersect on the unit disk, a fact which could also be proved by a direct computation. Therefore, all the trajectories ending on the front line are optimal.

During its evolution, sweeps the whole unit disk, and eventually collapses to a point on the border of the disk, defined by the condition

(25) |

The corresponding worst-case time is , and is the collapsing point. The worst-case time is independent of because the relative angular velocity between the extremal points of the optimal front-line depends only on . Notice that and the worst-case operator are conjugate points, since there is a one-parameter family of geodesics connecting them (the parameter is ).

All the points in the unit disk are reached in an optimal time . See Fig. 1 for a graphical representation of the evolution of the reachable sets in a specific case. Notice that, as a special case, we can consider , that is, there is no drift in the dynamics of the system. The corresponding worst-case operator is represented by the point .

### IV.2 The case

In this case, the strength of the control action is the same as that of the free evolution of the system. One of the extremal points of the optimal front-line is fixed at , and the optimal front-line rotates about it. This point corresponds to when , respectively. The analysis is analogous to the previous case, with the optimal trajectories ending on when , and on when . The worst-case time is again , and the worst-case operator is represented by . However, in this case it is possible to analytically derive the values of and for a given final state , since the optimal trajectories are circles. In fact, for , a direct computation shows that (III) is consistent with

(26) |

and then, for any value of , the trajectory is a circle of radius , centered in . In the two cases, the optimal controls for a target , represented by , are given by

(27) |

and the optimal times are

(28) |

for , and

(29) |

for . See Fig. 2 for a pictorial representation of the evolution of the reachable sets in a special case.

### IV.3 The case

In this case, the strength of the control action is smaller than that of the free evolution. In the limit of small, the control can be seen as a perturbation to the dynamics. The analysis is more complicated, because the optimal front-lines at time have a self-intersection during their evolution. Therefore, some trajectories ending on the optimal front-line will not be optimal. The geometric explanation of this behavior is that, in this case, the end points of rotate in the same direction, generating at each time a rotation of this segment about one of its points. To determine this point at time we have to require that it is in both and .

According to (19), we have to impose that satisfies

(30) |

and the second condition can be replaced by

(31) |

We find that the unique solution at time is given by

(32) |

and, by comparing (IV.3) and (III), we notice that the locus of self-intersections of the optimal front-line, described by (IV.3), is itself an extremal trajectory for the system, corresponding to , which we call the critical trajectory. The value is critical, in the sense that trajectories can be optimal only for when , and when . This can be understood by considering that for the end point of the optimal front-line corresponding to runs ahead of the other end point, and similarly for . Fig. 3 shows how the optimal front lines generate the critical trajectory during their evolution.

The critical trajectory is a well-known concept in geometric optimal control theory, where it is called the cut locus. In fact, for a given initial point, the cut locus is defined as the set of points where the extremal trajectories lose their optimality. We will shortly see that, in the regime under investigation, all the optimal trajectories lose their optimality on the critical trajectory. Therefore, our analysis of the optimal front-line represents a simple approach for determining the cut locus. Notice that, when , the cut locus reduces to a point, corresponding to the worst-case operator. This is the conjugate point to the initial point .

The critical trajectory has a singular point when . This point is a cusp singularity, whose appearance can be geometrically understood by considering the evolution of the optimal front-line (generally, the optimal front line undergoes a time-dependent roto-translation in the plane; the cusp singularity appears when the translational contribution vanishes, so it represents the instantaneous rotation center of ). From

(33) |

we find that , and the singular point of the critical trajectory is

(34) |

It turns out that this is the point where the critical trajectory loses optimality. In fact, when , the points of the critical trajectory are in the reachable set until time , and therefore they have already been reached at an earlier time.

Any other optimal trajectory loses optimality at some time, when it intersects the reachable set until that time. The boundary of the reachable set until time , , is given by the optimal front-line and the critical trajectory. Since the self-intersections of the optimal front line themselves form an extremal trajectory for the system, an optimal trajectory can lose optimality only by intersecting the critical trajectory. For this reason, as mentioned before, the critical trajectory is a cut locus for this system.

If we denote by the time when the optimal front-line comes back to the point , we can conclude that, for , describes the terminal points of the optimal trajectories when . Analogously, these terminal points are given by when . For , the extremal trajectories which are still optimal end on , where and are determined by the intersection of and the critical trajectory. In general, their analytical derivation is not possible. However, we can determine the worst-case time and the corresponding : these are obtained by requiring that becomes tangent to the critical trajectory at some point. If we assume that this point is reached at time , we can write it as . The tangent to the critical trajectory at this point is given by

(35) |

which, considering the explicit expressions of , , and from (IV.3) and (IV.3), can be recast in the form

(36) |

with . We require that this line coincides with the optimal front-line (19) at some later time . Therefore

(37) |

which, with the further constraint , is solved by

(38) |

Therefore, the worst-case time for is

(39) |

which is consistent with the result found when . The worst-case point in the unit disk is arbitrarily close to , and it is approached through the optimal trajectory characterized by . This can be seen by requiring that

(40) |

and using that

(41) |

a direct consequence of (38) and (39). It turns out that (40) is equivalent to , and is not admitted since it corresponds to the critical optimal trajectory. In Fig. 4 we provide a graphical representation of the evolution of the reachable sets in a special case.

As decreases, the critical trajectory stretches and spirals around the center of the unit disk. Eventually, when , the singular point of the critical trajectory approaches the center of the unit disk. In this limit, this point represents the worst-case operator, which is reached only asymptotically ().

## V The case with two controls

In this section we consider the case where in (1), that is, the control action enters only through and . This is not the most general case of dynamics with two controls and a drift term, which could also contain contributions along and . However, the general scenario cannot be described with the representation adopted in this work, since, in that case, operators differing by the phase of the off-diagonal elements generally correspond to different optimal times.

### V.1 Optimal controls and trajectories

This problem has been recently considered in albertini () and, by using a different approach, the optimal trajectories have been derived under the condition . Following the procedure outlined in the previous sections, we are able to fully characterize the reachable sets (and related properties) for arbitrary values of and . In particular, can be positive, negative, or null. Under the constraint , we find that the optimal controls must satisfy

(42) |

and , are defined as in (5). Their dynamics is given by

(43) | |||||

and, by using (42) in (V.1), we obtain that is constant. Moreover, we find

(44) |

where and are two constants, and is given by

(45) |

The candidate optimal controls have the form

(46) |

Since is unconstrained, can assume any real value. Singular arcs are given by on some interval, which implies and in that interval. Following the argument of albertini (), it is possible to prove that, also in this case, singular arcs can never contribute to an optimal trajectory.

Integration of the dynamics (1) follows the same lines outlined before (with the intermediate operator ), and the final result is

(47) |

where we have defined , , and . The candidate optimal trajectories, in the adopted representation of (see Remark III.1), are obtained by taking the real and imaginary parts of the upper diagonal element in (47):

(48) |

In analogy with the case of a fully actuated system, one could numerically solve these equations for an arbitrary final operator reached in minimal time . However, in this work we are mainly interested in studying the evolution of the reachable sets by introducing the optimal front line and studying its evolution.

### V.2 The optimal front-line

As before, we define the optimal front-line as the set of terminal points for a candidate optimal trajectory at time :

(49) |

It is possible to verify that there is a one-to-one correspondence between and points on . This can be seen, for instance, by rewriting (V.1) in polar coordinates

(50) |

and

(51) |

Although it is possible to obtain with , this necessarily implies . Therefore, the correspondence is one-to-one at any .

Not all the extremal trajectories are optimal. Following the discussion of the previous sections, we have to consider the self-intersections of , as well as the intersections of and , in order to determine the critical values of for which the trajectories lose optimality. In this case we must use the parametric expressions for the points of , since it is not possible to solve one of the two equations in (V.1) for and obtain a closed-form expression of the optimal front line in terms of and alone. Therefore, we cannot directly rely on the procedure developed in the previous sections. However, the optimal front line can be considered as the envelope of its tangent lines. Therefore, if has a self-intersection at some point, the tangent lines to at that point must intersect there as well. Consequently, we can find the intersections of and by considering, for each , the intersections of the tangent lines to the optimal front-line at times and . If they are on , they correspond to the desired intersection of and , and the corresponding is a critical value, relevant for determining where the trajectories are optimal.
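The envelope-of-tangent-lines argument can be illustrated on a toy family where the answer is known in closed form: the tangent lines to the unit circle, whose envelope is the circle itself. The family below is purely illustrative and is not the actual tangent-line family (55):

```python
import numpy as np

def intersect(t1, t2):
    # Tangent lines to the unit circle: x cos t + y sin t = 1.
    # Solve the 2x2 linear system for two parameter values t1, t2.
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    b = np.array([1.0, 1.0])
    return np.linalg.solve(A, b)

# Intersections of nearby tangents approach the envelope (the circle itself)
for t in np.linspace(0.1, 3.0, 5):
    p = intersect(t, t + 1e-4)
    assert abs(np.linalg.norm(p) - 1.0) < 1e-6
```

In the same spirit, intersecting the tangent lines to the front line at nearby times, and keeping only the intersections that lie on the front line itself, yields the critical values described above.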

Again, by means of this simple analysis we are able to fully characterize the cut loci for this system. We will find non-trivial cut loci for any values of and .

The slope of the tangent line to the optimal front line at time , in the point labeled by , is given by

(52) |

Since

(53) |

we find that

(54) |

Therefore, the tangent line to in the point , at time , is given by

(55) |

The intersections of tangent lines to and are obtained by solving the system

(56) |

whose solution follows the same steps which have been detailed in the previous section.
When , we find the unique solution (for , the only solution to (56) is given by , , with the constraint ; this solution is already accounted for in the case )

(57) |

However, since the intersection point must be on , also (V.1) must be satisfied. Therefore

(58) |

which has several solutions. If , we find that , solved by

(59) |

Since this critical value is time-independent, this locus of self-intersections of the optimal front-line is itself a critical optimal trajectory . It loses its optimality at a critical time such that . Since

(60) |

we find that the critical time is

(61) |

This trajectory is a cut locus for the system, analogous to that described in the case of three controls, when .

Additional solutions to (V.2) are found when . In this case the critical frequencies are implicitly defined by , where is an integer. The corresponding points are on the boundary of the unit disk: , . These cut loci are not optimal trajectories for the system since the critical frequencies are time-dependent. The explicit expressions of these critical frequencies are

(62) |

and they are defined for , that is, . It turns out that , and equality holds only when . If we write , and require that , we have