Semi-global approximate stabilization of an infinite-dimensional quantum stochastic system
Abstract
In this paper we study the semi-global (approximate) state-feedback stabilization of an infinite-dimensional quantum stochastic system towards a target state. A discrete-time Markov chain on an infinite-dimensional Hilbert space is used to model the dynamics of a quantum optical cavity. We choose an (unbounded) strict Lyapunov function that is minimized at each time-step in order to prove (weak-*) convergence of probability measures to a final state that is concentrated on the target state with (a pre-specified) probability that may be made arbitrarily close to one. The feedback parameters and the Lyapunov function are chosen so that the stochastic flow that describes the Markov process may be shown to be tight (concentrated on a compact set with probability arbitrarily close to one). We then use Prohorov's theorem and properties of the Lyapunov function to prove the desired convergence result.
keywords:
Quantum control, Lyapunov stabilization, Stochastic stability, One-parameter semigroups
1 Introduction
In this paper we consider the stabilization of a discrete-time Markov process (Equations (3) and (4)), defined on the unit sphere of an infinite-dimensional Hilbert (Fock) space, at a target state that is a specific unit vector corresponding to a photon-number state. We consider a Lyapunov-function-based state-feedback controller that drives our quantum system to the target state with probability greater than any pre-specified threshold below one^{1}^{1}1The problem of output feedback control has been examined in the finite-dimensional context in Mirrahimi et al. (2010); Dotsenko et al. (2009) using a quantum adaptation of the Kalman filter. We do not discuss the problem of estimating the state of the system and refer the reader to Mirrahimi et al. (2010) for further details on designing a state estimator..
The specific physical system under consideration uses Quantum Non-Demolition (QND) measurements to detect and/or produce highly non-classical states of light in trapped superconducting cavities Deléglise et al. (2008); Gleyzes et al. (2007); Guerlin et al. (2007) (see (Haroche and Raimond, 2006, Ch. 5) for a description of such quantum electrodynamical systems and Brune et al. (1992) for detailed physical models with QND measurements of light using atoms). In this paper we examine the feedback stabilization of such experimental setups near a pre-specified target photon-number state. Such photon-number states, with a precisely defined number of photons, are highly non-classical and have potential applications in quantum information and computation.
As the Hilbert space is infinite-dimensional, it is difficult to design feedback controllers that drive the system towards a target state (because closed and bounded subsets of an infinite-dimensional space need not be compact). In Mirrahimi et al. (2010); Dotsenko et al. (2009) a controller was designed by approximating the underlying Hilbert space with a finite-dimensional Galerkin approximation. Physically, this approximation leads to an artificial bound on the maximum number of photons that may be inside the cavity. In this paper we wish to design a controller for the full Hilbert space without using the finite-dimensional approximation. Simulations (see Somaraju et al. (2011)) indicate that the controller in Theorem 3.1 below performs better than the one designed using the finite-dimensional approximation in Mirrahimi et al. (2010); Dotsenko et al. (2009).
The control of infinite-dimensional quantum systems has previously been examined in the deterministic setting of partial differential equations, which does not involve quantum measurements. Various approaches have been used to overcome the non-compactness of closed and bounded sets. One approach consists of proving approximate convergence results, which show convergence to a neighborhood of the target state, for example in Beauchard and Mirrahimi (2009); Mirrahimi (2009). Alternatively, one examines weak convergence, for example in Beauchard and Nersesyan (2010). Other approaches, such as using strict Lyapunov functions or strong convergence under restriction of the possible trajectories to compact sets, have also been used in the context of infinite-dimensional state spaces, for example in Coron and d'Andréa-Novel (1998); Coron et al. (2007).
The situation in our paper is different in the sense that the system under consideration is inherently stochastic due to quantum measurements. The system we consider may be described using a discrete-time Markov process on the set of unit vectors in the state Hilbert space, as explained in Subsection 2.3. We use a strict Lyapunov function that restricts the system trajectories, with high probability, to compact sets, as explained in Section 3. We use the properties of weak-* convergence of measures to show approximate convergence (i.e. with probability of convergence approaching one) of the discrete-time Markov process towards the target state.
1.1 Outline
The remainder of the paper is organised as follows: in the following Section 2 we introduce some notation and the system model of the discrete-time Markov process. We also recall some results concerning the weak-* convergence of probability measures. In Section 3 we state the main result of our paper (Theorem 3.1), concerning the approximate semi-global stabilizability of the Markov process at our target state. We also provide a proof of the main result using several Lemmas. We then present our conclusions in the final Section.
2 Definitions and System Description
We introduce some notation that will be used to describe the discrete-time Markov process that characterizes the system.
2.1 Notation
In this paper, we use Dirac's bra-ket notation, commonly used in the physics literature^{2}^{2}2See e.g. (Teschl, 2009, Sec. 8.3) for more details of the quantum harmonic oscillator model and notation used here.. The system Hilbert space associated with the quantum cavity is a Fock space, with the usual inner product and norm. We drop the subscript, for ease of notation, if this causes no confusion. Let the set^{3}^{3}3ℤ, ℤ⁺ and ℤ⁰⁺ denote the sets of integers, positive integers and non-negative integers, respectively. of Fock states {|n⟩ : n = 0, 1, 2, …} denote the canonical basis of the Fock space. Physically, the state |n⟩ represents a cavity state with precisely n photons.
Let a and a† be the annihilation and creation operators, defined on their respective domains, and let N = a†a be the number operator with its domain. These unbounded operators satisfy the relations
(1) a|n⟩ = √n |n−1⟩,  a†|n⟩ = √(n+1) |n+1⟩,  N|n⟩ = n|n⟩
for all n ≥ 0 (with the convention a|0⟩ = 0).
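As a concrete illustration, the actions in (1) can be checked numerically on a finite-dimensional truncation of the Fock space. This is a sketch only: truncation is exactly the approximation this paper avoids in its analysis, and the commutation relation [a, a†] = I acquires an artifact at the truncation boundary.

```python
import numpy as np

def fock_ops(dim):
    """Annihilation a, creation a† (= a.T, real entries) and number
    operator N = a†a on a dim-dimensional truncated Fock space."""
    a = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # a|n> = sqrt(n)|n-1>
    return a, a.T, a.T @ a

dim = 8
a, adag, N = fock_ops(dim)

# Check the actions in (1) away from the truncation edge:
n = 3
ket_n = np.eye(dim)[:, n]
assert np.allclose(a @ ket_n, np.sqrt(n) * np.eye(dim)[:, n - 1])
assert np.allclose(adag @ ket_n, np.sqrt(n + 1) * np.eye(dim)[:, n + 1])
assert np.allclose(N @ ket_n, n * ket_n)

# [a, a†] = I holds except in the last row/column, an artifact of
# truncating the genuinely unbounded infinite-dimensional operators.
comm = a @ adag - adag @ a
assert np.allclose(comm[:-1, :-1], np.eye(dim - 1))
```

The failure of the commutation relation at the last basis element is one concrete symptom of why the paper works on the full space rather than a Galerkin truncation.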
For every state φ and radius r > 0, denote by B(φ, r) the open ball centered at φ and of radius r. Also denote by S the closed set of unit vectors, with the topology inherited from the ambient norm. Let B_S(φ, r) denote the set B(φ, r) ∩ S for all φ and r > 0. Let C(S) be the Banach space of continuous functions on S with the supremum norm.
We denote by B(S) the Borel σ-algebra of S and by P(S) the set of all probability measures on the measure space (S, B(S)). For every μ ∈ P(S) and measurable function f defined on S, we denote by E_μ[f]
the expectation value of the function f with respect to the measure μ.
The support of a probability measure is defined to be the set
2.2 Topology on the set of probability measures
In this paper we study the weak-* convergence of probability measures. It can be shown (see e.g. Merkle (2000)) that the set of probability measures is a subset of the unit ball in the (continuous linear) dual space of the space of continuous functions on the unit sphere, through the pairing that sends each measure to the corresponding expectation functional on continuous functions. When we refer to the convergence of a sequence of measures, we mean convergence with respect to the weak-* topology of this dual space.
Definition 2.1.
We say that a sequence of probability measures μ_n converges (weak-*) to a probability measure μ if, for all continuous functions f on the unit sphere, E_{μ_n}[f] → E_μ[f],
and we write μ_n → μ.
In the weak-* topology on the set of probability measures, compactness is related to the notion of tightness of measures. A set of probability measures is said to be tight (Billingsley, 1999, p. 9) if for every ε > 0 there exists a compact set K_ε such that μ(K_ε) > 1 − ε for every measure μ in the set.
We recall below Prohorov’s theorem (see e.g. Merkle (2000)).
Theorem 2.1 (Prohorov’s theorem).
Any tight sequence of probability measures has a (weak-*) convergent subsequence.
2.3 Discrete-time Markov process
We now describe the evolution of our quantum system, which is governed by a Markov process on the unit sphere of the Fock space. We introduce below the Markov process model with minimal reference to the actual physical system under consideration. We refer the interested reader to (Haroche and Raimond, 2006, Ch. 5) and references therein for a description of the physical system and the approximations involved in deriving the Markov process model (see also Mirrahimi et al. (2010); Dotsenko et al. (2009)).
Define the displacement operator D_α, for real α, and the measurement operators M_g and M_e as
D_α = exp(α(a† − a)),  M_g = cos(φ₀ + Nθ),  M_e = sin(φ₀ + Nθ).
Here, φ₀ and θ are experimentally determined real numbers, and the operators a, a† and N are defined in Equation (1). Recall that because the operator i(a − a†) is self-adjoint, we may conclude from Stone's theorem that the set of operators {D_α}_{α∈ℝ} forms a strongly continuous one-parameter unitary group (see e.g. (Reed and Simon, 1980, Sec. VIII.4)), i.e.
(2) D_{α₁} D_{α₂} = D_{α₁+α₂} for all real α₁, α₂, with α ↦ D_α ψ continuous for every state ψ.
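A minimal numerical sketch of the group law (2), assuming the standard displacement form D_α = exp(α(a† − a)) for real α. On a truncated space the group law holds exactly, because all D_α are exponentials of real multiples of the same (skew-symmetric) generator.

```python
import numpy as np

def expm(M, terms=60):
    """Matrix exponential via its (everywhere-convergent) power series."""
    out = term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

dim = 12
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)
G = a.T - a            # skew-symmetric generator; iG is self-adjoint (Stone)

def D(alpha):
    return expm(alpha * G)

alpha, beta = 0.4, -0.7
# One-parameter group law (2): D(alpha) D(beta) = D(alpha + beta)
assert np.allclose(D(alpha) @ D(beta), D(alpha + beta))
# Unitarity (orthogonality, since everything here is real):
assert np.allclose(D(alpha) @ D(alpha).T, np.eye(dim))
```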
Denote by ψ_k the state of the system at time-step k. Given the state ψ_k at time-step k, the state ψ_{k+1} at time-step k+1 is a random variable whose distribution is given by the following two-step equation
(3) ψ_{k+1/2} = M_s ψ_k / ‖M_s ψ_k‖, with probability p_s = ‖M_s ψ_k‖² for s ∈ {g, e};
(4) ψ_{k+1} = D_{α_k} ψ_{k+1/2}.
Here s labels the random measurement outcome and the control α_k is a real number.
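The two-step dynamics of measurement followed by control can be sketched in simulation. This is only an illustration: the Fock space is truncated (an approximation the paper deliberately avoids in its analysis), and the cosine/sine forms of M_g and M_e, the values of φ₀ and θ, and the form D_α = exp(α(a† − a)) are assumptions modeled on the cited QND-cavity literature.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 15
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)

# Assumed measurement-operator model (offset phi0, phase theta per photon):
phi0, theta = 0.3, np.sqrt(2.0)        # theta chosen "irrational enough"
Mg = np.diag(np.cos(phi0 + theta * np.arange(dim)))
Me = np.diag(np.sin(phi0 + theta * np.arange(dim)))
assert np.allclose(Mg @ Mg + Me @ Me, np.eye(dim))   # outcome probs sum to 1

def expm(M, terms=60):
    out = term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def D(alpha):                           # displacement, real alpha
    return expm(alpha * (a.T - a))

def markov_step(psi, alpha):
    """One step of the two-step dynamics: QND measurement, then control."""
    p_g = np.linalg.norm(Mg @ psi) ** 2          # probability of outcome g
    M = Mg if rng.random() < p_g else Me         # sample the outcome
    psi_half = M @ psi / np.linalg.norm(M @ psi) # projective collapse
    return D(alpha) @ psi_half                   # control displacement

psi = np.zeros(dim); psi[0] = 1.0                # start in the vacuum |0>
psi = markov_step(psi, alpha=0.1)
assert np.isclose(np.linalg.norm(psi), 1.0)      # unit norm is preserved
```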
Remark 2.1.
The time evolution from step k to k+1 consists of two types of evolution: a projective measurement by the operators M_g and M_e, and a control part involving the operator D_{α_k}. For the sake of simplicity, we use the notation ψ_{k+1/2} to denote this intermediate step.
The Markov jump probabilities may also be written in terms of density operators. Given any unit vector ψ, denote by ρ = |ψ⟩⟨ψ| the associated density operator. Then the jump probabilities take the form p_s = tr(M_s ρ M_s†) for s ∈ {g, e}.
Here tr(·) is the trace of a trace-class operator. If we set ρ_k = |ψ_k⟩⟨ψ_k| then we can write the Markov jump probabilities (3), (4) using the equivalent density-operator description.
Here, the measurement and the control act as superoperators, i.e. linear maps on density operators. We switch between the two equivalent descriptions throughout the paper depending on convenience.
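The equivalence of the two descriptions is easy to check numerically; the diagonal measurement operator below is an assumed stand-in for M_g, and the state is an arbitrary normalized vector.

```python
import numpy as np

dim = 10
rng = np.random.default_rng(0)
psi = rng.normal(size=dim); psi /= np.linalg.norm(psi)
M = np.diag(np.cos(0.3 + np.sqrt(2.0) * np.arange(dim)))  # assumed M_g form

# Vector description: psi -> M psi / ||M psi||, with probability ||M psi||^2
p = np.linalg.norm(M @ psi) ** 2
psi_new = M @ psi / np.sqrt(p)

# Density-operator description: rho -> M rho M† / tr(M rho M†)
rho = np.outer(psi, psi)
rho_new = M @ rho @ M.T / np.trace(M @ rho @ M.T)

# The two agree: rho_new is the projector onto psi_new, and the jump
# probability equals tr(M rho M†).
assert np.allclose(rho_new, np.outer(psi_new, psi_new))
assert np.isclose(p, np.trace(M @ rho @ M.T))
```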
2.4 Some useful formulas
We recall below some useful results. The Baker-Campbell-Hausdorff formula, which will be used to evaluate the derivatives of our Lyapunov function, states (see e.g. (Nielsen and Chuang, 2000, p. 291))
(5) e^X Y e^{−X} = Σ_{n≥0} Y_n / n!,
where X and Y are linear operators. The Y_n are defined recursively with Y_0 = Y and Y_{n+1} = [X, Y_n] for n ≥ 0.
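The recursion in (5) can be verified numerically on small matrices; the matrix sizes, the scaling of X, and the truncation depth below are arbitrary illustrative choices.

```python
import numpy as np

def expm(M, terms=60):
    out = term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(2)
X = 0.3 * rng.normal(size=(4, 4))   # scaled so the series converges fast
Y = rng.normal(size=(4, 4))

# Recursive commutators: Y_0 = Y, Y_{n+1} = [X, Y_n]
Yn, series = Y.copy(), np.zeros((4, 4))
fact = 1.0
for n in range(25):
    series = series + Yn / fact     # accumulate Y_n / n!
    Yn = X @ Yn - Yn @ X            # next commutator
    fact *= n + 1

# e^X Y e^{-X} equals the commutator series (truncated here at 25 terms)
lhs = expm(X) @ Y @ expm(-X)
assert np.allclose(lhs, series)
```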
Let (X_n) be a Markov process on some state space X. Suppose that there is a nonnegative function V on X satisfying E[V(X_{n+1}) | X_n] ≤ V(X_n); then Doob's inequality states
(6) P( sup_{n≥0} V(X_n) ≥ λ | X_0 = x ) ≤ V(x)/λ for all λ > 0.
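Inequality (6) is easy to probe by Monte Carlo on a toy nonnegative supermartingale; the multiplicative walk below (in fact a martingale) is purely illustrative.

```python
import random

random.seed(7)

# A nonnegative martingale: a product of i.i.d. factors 0.5 or 1.5 with
# equal probability, so E[V(X_{n+1}) | X_n] = V(X_n) with V the identity.
def running_max(horizon):
    v = vmax = 1.0
    for _ in range(horizon):
        v *= 0.5 if random.random() < 0.5 else 1.5
        vmax = max(vmax, v)
    return vmax

lam, trials = 4.0, 20000
freq = sum(running_max(200) >= lam for _ in range(trials)) / trials

# Doob's inequality (6): P(sup_n V(X_n) >= lam | X_0 = 1) <= 1/lam = 0.25;
# the empirical frequency should respect the bound (up to sampling noise).
assert freq <= 1.0 / lam + 0.02
```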
3 Main Results
We prove the main results of our paper in this Section. We wish to use the control α_k to drive the system into a pre-specified target state: the Fock state with a given photon number.
We use a strict Lyapunov function, defined^{4}^{4}4We choose this Lyapunov function under a mild assumption on the target photon number; it may easily be modified for the remaining case, and all the proofs in this paper apply to that case as well. as
(7) 
Here ε is a small positive number and the remaining coefficients are given by
(8)
We define the domain of the Lyapunov function to be the set of all states on which it is finite.
Remark 3.1.
We note that coherent states belong to this domain. These coherent states naturally occur in optical cavities, and in experiments the initial condition is generally a coherent state.
We choose a feedback that minimizes the expectation value of the Lyapunov function at every time-step:
(9) α_k = argmin over |α| ≤ c̄ of E[ V(ψ_{k+1}) | ψ_k, α ],
for some positive constant c̄.
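The greedy feedback rule can be sketched as a grid search over the control amplitude. Everything quantitative here is an assumption for illustration: the truncated space, the assumed forms of M_g, M_e and D_α, and a placeholder quadratic cost standing in for the paper's Lyapunov function (7).

```python
import numpy as np

dim, nbar = 12, 3                      # target Fock state |nbar>
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)
Mg = np.diag(np.cos(0.3 + np.sqrt(2.0) * np.arange(dim)))
Me = np.diag(np.sin(0.3 + np.sqrt(2.0) * np.arange(dim)))

def expm(M, terms=60):
    out = term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def D(alpha):
    return expm(alpha * (a.T - a))

def V(psi):
    # Placeholder Lyapunov-type cost, minimized at |nbar>: a stand-in for
    # the paper's V in Eq. (7), used only to illustrate the feedback rule.
    return np.sum((np.arange(dim) - nbar) ** 2 * np.abs(psi) ** 2)

def expected_V(psi, alpha):
    # E[V(psi_{k+1}) | psi_k, alpha], summed over the measurement outcomes
    total = 0.0
    for M in (Mg, Me):
        p = np.linalg.norm(M @ psi) ** 2
        if p > 1e-12:
            total += p * V(D(alpha) @ (M @ psi) / np.sqrt(p))
    return total

def feedback(psi, c=1.0):
    # Greedy rule: search a grid in [-c, c] for the minimizer of E[V]
    grid = np.linspace(-c, c, 81)
    return min(grid, key=lambda al: expected_V(psi, al))

psi = np.zeros(dim); psi[0] = 1.0      # vacuum: far from the target |3>
alpha = feedback(psi)
# The chosen control does at least as well as applying no control at all:
assert expected_V(psi, alpha) <= expected_V(psi, 0.0)
```

In practice the minimization in (9) is over a continuum; the grid search is only a convenient discretized surrogate.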
Remark 3.2.
The Lyapunov function and feedback are chosen in this specific form to serve three purposes:

We choose the coefficient sequence in (8) to be increasing and unbounded. This guarantees that if we choose α_k to minimize the expectation value of the Lyapunov function, then the trajectories of the Markov process are restricted to a compact set with probability arbitrarily close to 1. This implies that the limit set of the process is non-empty (see Lemma 3.11).

The second term is chosen such that the Lyapunov function is a strict Lyapunov function for the Fock states. This implies that the support of any limit measure only contains Fock states (see Lemma 3.12).

The relative magnitudes of the coefficients have been chosen such that the target state is a strict global minimum of the Lyapunov function. Moreover, for any other Fock state, the parameters can be chosen so that the Lyapunov function does not have a local minimum there. This implies that if the state is in a neighborhood of an undesired Fock state, then we can choose a control to decrease the Lyapunov function and move away from that Fock state by some finite distance, with probability that can be made arbitrarily close to 1 by an appropriate choice of the parameters (see proof of Lemma 3.13).
We make the following assumption^{5}^{5}5See Remark 3.3 below to see how this assumption may be weakened..

A1. The eigenvalues of M_g and M_e are non-degenerate. This is equivalent to the assumption that θ/π is not a rational number. It implies that the only eigenvectors of M_g and M_e are the Fock states.
The following Theorem is our main result.
Theorem 3.1.
Suppose Assumption A1 is true. For any initial measure, let the sequence of measures be the Markov flow induced by Equations (3), (4) and the control given in Equation (9), with the Lyapunov function in Equation (7). Here, one constant determines the control signal and another determines the Lyapunov function.
Given any desired probability below one and any bound on the initial Lyapunov value, these constants may be chosen such that, for all initial measures satisfying the bound, the flow converges (weak-*) to a limit set. Moreover, every measure in this limit set is supported only on Fock states, and it assigns the target state a probability at least as large as the desired one.
3.1 Overview of Proof
The proof of the Theorem uses several Lemmas that are proven in the next two subsections. We outline the central idea of the proof now.
The Lyapunov function is such that an appropriate choice of control ensures that its expectation value is non-increasing. Therefore, by choosing α_k to be the control that minimizes the expectation value of the Lyapunov function at each step, we ensure that the sequence of Lyapunov values is a supermartingale (Lemma 3.10).
Sub-level sets of the Lyapunov function are compact subsets of the unit sphere (Lemma 3.4). We use this fact and the supermartingale property to show that, with probability approaching 1, the state remains in a compact set for all time. This implies the tightness of the sequence of measures. Hence, we can use Prohorov's Theorem 2.1 to prove the existence of a convergent subsequence (Lemma 3.11).
Suppose some subsequence of measures converges to a limit measure. We show that the second term in the Lyapunov function is a strict Lyapunov function for Fock states; that is, its expected decrease vanishes if and only if the state is a Fock state. This implies that the support of any limit measure consists only of Fock states (Lemma 3.12).
Finally, we note that the first part of the Lyapunov function is chosen so that every Fock state other than the target is a local maximum of the Lyapunov function. Therefore, we can find a feedback that moves the system away from any undesired Fock state with high probability (Lemma 3.13). This implies that we converge to our target Fock state with high probability.
We prove the convergence result rigorously in the following two subsections. We first establish some properties of the Lyapunov function.
3.2 Properties of the Lyapunov function
Lemma 3.2.
For a small enough choice of the parameter in (7), the Lyapunov function is nonnegative on its domain.
Proof.
We have,
Using a similar analysis for we get
Here, δ is the Kronecker delta. Because
for all , for small enough, . ∎
For the remainder of this paper, we assume that the parameter has been chosen small enough to ensure that the Lyapunov function is nonnegative. Also note that, for a small enough parameter, the Lyapunov function vanishes if and only if the state is the target state. Therefore the target state is a strict global minimum of the Lyapunov function.
Lemma 3.3.
The function is lower semicontinuous on .
Proof.
Because the operators appearing in the first part of the Lyapunov function are bounded, the corresponding terms are continuous functions on the unit sphere. So we only need to prove the lower semicontinuity of the remaining (unbounded) part.
Let . Let be given. Then the finiteness of implies that there exists an such that
We can choose small enough such that for all satisfying ,
Therefore, for all
∎
Lemma 3.4.
For every level r > 0, the sub-level set of the Lyapunov function at level r is compact with respect to the topology inherited from the ambient Hilbert space.
Proof.
Let be a sequence in and let
Because the coefficient sequence in (8) is strictly increasing for large indices, we have
for all and . Therefore, using a diagonalization argument we know that there exists a subsequence and a set of numbers , such that
(10)  
(11)  
(12) 
We claim that . To see this suppose is given. Then because as , there exists an such that
(13) 
Also because as , there exists a large enough such that for all ,
(14) 
Combining Equations (10)-(14), we get for
The lower semicontinuity of the Lyapunov function implies that the sub-level set is closed. Therefore the limit belongs to it. Hence the sub-level set is compact. ∎
Let (c_n) be any sequence of positive numbers with a prescribed limit as n → ∞. Define an inner product weighted by this sequence.
Here the inner product is defined on the linear subspace on which the corresponding weighted norm is finite.
Given a linear operator, denote its resolvent operator in the usual way. We recall below a theorem from (Renardy and Rogers, 2004, Th. 12.31).
Theorem 3.5.
A closed, densely defined operator on some Hilbert space is the generator of an analytic semigroup (w.r.t. the uniform topology on the set of bounded operators) if and only if there exists a real number ω such that the half-plane Re(λ) > ω is contained in the resolvent set of the operator and, moreover, there is a constant C such that the resolvent norm is bounded by C/|λ − ω| on this half-plane.
We show that if we consider the relevant operator on a suitable domain in the weighted space, then the associated semigroup is analytic on that space.
Lemma 3.6.
The operator is symmetric on .
Proof.
Let and let and . Then,
Therefore,
Similarly,
Therefore the operator is symmetric. ∎
The operator is well defined on the finite linear span of the Fock basis, and this finite linear span is obviously dense in the space. The fact that the operator is closable follows from the fact that it is symmetric (see e.g. (Kato, 1966, p. 270)). Therefore, we can assume that it is a closed operator (if it is not closed then we can always replace it by its closure).
Lemma 3.7.
The semigroup is analytic with respect to the uniform operator topology on the set of bounded operators on the weighted space.
Proof.
Lemma 3.8.
Let and be given. Then there exists a constant such that for all and satisfying ,
(15) 
for . Moreover, for some constant .
Proof.
Because and are bounded operators on and is an analytic semigroup on , we have
where .
We know from Lemma 3.7 that the semigroup is analytic with respect to the uniform operator topology induced by the seminorm^{6}^{6}6Even though Lemma 3.7 was proven for sequences of strictly positive weights, we can apply it to the case where one of the weights is zero. In fact, small changes in the weights do not change the analyticity of the semigroup. We can prove the same Lemma by considering the quotient space associated with the seminorm.. Hence, for all arguments in a suitable range we can write, using the Taylor series expansion,
for and only depends on .
We now prove the required bound using the Baker-Campbell-Hausdorff formula (5). By noting that
and setting and in Equation (5), we get
(16)  
where we have set . If we let , then
(17)  
and
(18)  
Substituting Equations (17) and (18) into (16) and rearranging terms, we get
(19)  
If then substituting for from Equation (8) we get
and for we get
Substituting this into (19), we have for ,
(20)  
where
For all , the first term in Equation (20) will be less than for small enough. We show that for small enough, for all such that ,
(21)  
For all we have and hence we can choose a small enough such that
(22) 
We have
The function defined in the above equation is of the indicated order. Because the terms are of this order and the comparison series converges, we know that the series converges. Hence, its tail can be made arbitrarily small for large indices. By Cauchy-Schwarz, we can therefore choose the index large enough such that, for all states satisfying the stated condition, we have
(23) 
Because , we can choose small enough so that for all ,