# Identification of open quantum systems from observable time traces

Jun Zhang Joint Institute of UMich-SJTU and Key Laboratory of System Control and Information Processing (MOE), Shanghai Jiao Tong University, Shanghai, 200240, China    Mohan Sarovar Department of Scalable & Secure Systems Research (08961), Sandia National Laboratories, Livermore, CA 94550, USA
July 16, 2019
###### Abstract

Estimating the parameters that dictate the dynamics of a quantum system is an important task for quantum information processing and quantum metrology, as well as fundamental physics. In this paper we develop a method for parameter estimation for Markovian open quantum systems using a temporal record of measurements on the system. The method is based on system realization theory and is a generalization of our previous work on identification of Hamiltonian parameters [Phys. Rev. Lett. 113, 080401 (2014)].

###### pacs:
03.65.Wj, 02.30.Yy, 03.67.-a

## I Introduction

Recent years have witnessed the rapid progress of quantum information processing technologies, including convincing demonstrations of a quantum advantage in several applications including communication and sensing. Such technologies require the precise fabrication and manipulation of quantum degrees of freedom, and as a result, much effort is invested into understanding and precisely identifying the quantum dynamics.

In Ref. Zhang and Sarovar (2014) we developed a technique for identifying a parameterized Hamiltonian from time traces of expectation values of a small set of observables. This technique was recently experimentally demonstrated and validated in Ref. Hou et al. (2014). In the current paper we generalize this work to enable identification of parameterized open system evolution that can be described by a Lindblad master equation. This expands the applicability of this type of system identification approach that utilizes time traces of observables. As in Ref. Zhang and Sarovar (2014) we consider only finite dimensional systems, and assume that the system can be reliably prepared in a small number of initial states, and possesses observables whose expectation value can be sampled over a period of time. Nuclear magnetic resonance Slichter (1996); Hou et al. (2014) and ensembles of neutral atoms Smith et al. (2013) are two typical examples of physical systems for which these assumptions are valid.

Our approach can be considered a generalization of traditional spectroscopic methods such as Ramsey interferometry in which spectral features of time-dependent data are used to infer values of underlying system parameters Slichter (1996); Gerry and Knight (2005). This inference is simple in the case of Ramsey or Rabi measurements where the relation between spectral features and the parameters is straightforward. In more complex situations, the relationship can be too complex to know a priori. Moreover, it may not even be known whether the measurements performed can identify a parameter of interest. Such complex situations beyond conventional spectroscopy can occur even in small systems such as a few atoms or spins. We develop a systematic way to perform parameter estimation in such complex situations, providing a criterion of whether the unknown parameters are estimable given the set of measurements available, and if so, prescribing a data-driven algorithm to identify them.

In addition to Ref. Zhang and Sarovar (2014), previous studies that have examined this kind of generalized spectroscopy to estimate Hamiltonian or open system parameters from time traces include Refs. Boulant et al. (2003); Cole et al. (2005); Devitt et al. (2006); Burgarth et al. (2009); Burgarth and Maruyama (2009); Di Franco et al. (2009); Bellomo et al. (2010a, b); Granade et al. (2012); Schirmer et al. (2012); Kato and Yamamoto (2013); Dominy et al. (2013); Jagadish and Shaji (2014). We draw particular attention to Ref. Schirmer et al. (2012), which studied the problem of identifying open quantum systems in the same setting that we do. The solution constructed in that work is related to the one that we will present below in the sense that both consider observable time traces in the Laplace domain and attempt to solve equations encoding the relation between the unknown parameters and the measured signal. The principal difference is that whereas Ref. Schirmer et al. (2012) expresses the unknown parameters directly in terms of the measured signal, we use model realization theory to first construct a minimal model of the system from the measurements, and then estimate the parameters from this minimal model.

The remainder of this paper is organized as follows: in section II we formulate the system identification problem in the Markovian open quantum systems context. Then in section III we describe the identification algorithm based on model realization, and highlight some of the distinctions between this algorithm and the corresponding algorithm for closed systems developed in Ref. Zhang and Sarovar (2014). We illustrate the open system algorithm with an example in section IV and examine its robustness to measurement noise in section V; in addition, we present several case studies in Appendix A that exemplify the types of equations that must be solved in order to identify the parameters. Finally, section VI concludes the paper with a discussion of results and future directions.

## II Problem formulation

The problem of interest is the identification of unknown parameters that dictate the dynamical behavior of a quantum system. The system state is represented by a density matrix $\rho$, where $\rho = \rho^\dagger$, $\rho \geq 0$ ($\rho$ is a positive semidefinite matrix), and $\operatorname{tr}\rho = 1$. We assume the system dynamics is governed by a quantum Markovian master equation in Lindblad form Lendi (1987); Alicki and Lendi (2007):

$$\dot{\rho} = \mathcal{L}\rho \equiv \mathcal{L}_H\rho + \mathcal{L}_D\rho, \tag{1}$$

where $\mathcal{L}_H$ is the generator of Hamiltonian dynamics

$$\mathcal{L}_H\rho = -i[H(\theta), \rho], \tag{2}$$

with $\theta$ a vector of unknown parameters, and $\mathcal{L}_D$ represents a general Lindblad dissipative generator

$$\mathcal{L}_D\rho = \frac{1}{2}\sum_{j,k=1}^{N^2-1} g_{jk}\left([F_j, \rho F_k^\dagger] + [F_j\rho, F_k^\dagger]\right). \tag{3}$$

Here $\{F_j\}_{j=1}^{N^2-1}$ is an orthonormal basis for the Lie algebra $\mathfrak{su}(N)$, and the Hilbert-Schmidt inner product is defined as $\langle A, B\rangle = \operatorname{tr}(A^\dagger B)$. Letting $G$ denote the matrix with elements $g_{jk}$, we have that $G \geq 0$ Alicki and Lendi (2007). Expanding the Hamiltonian in this orthonormal basis, we obtain a parameterized Hamiltonian in the form:

$$H(\theta) = \sum_{m=1}^{M} h_m(\theta) F_m, \tag{4}$$

where the $h_m(\theta)$ are known functions of $\theta$. We take the $h_m$ as the unknown parameters from now on, since solving for $\theta$ from the $h_m$ is an algebraic problem. The unknown parameters that we want to estimate are thus the Hamiltonian parameters $h_m$ and the Lindblad coefficients $g_{jk}$.

First we define the structure constants $C_{jkl}$ of the Lie algebra $\mathfrak{su}(N)$ with respect to the orthonormal basis through the commutator of basis elements:

$$[iF_j, iF_k] = \sum_{l=1}^{N^2-1} C_{jkl}\, iF_l. \tag{5}$$

For future use we also define the constants $D_{jkl}$ through the anticommutator of basis elements Alicki and Lendi (2007):

$$\{iF_j, iF_k\} = -\frac{2}{N}\delta_{jk} I + \sum_{l=1}^{N^2-1} D_{jkl}\, iF_l. \tag{6}$$

The dynamics of the expectation value of an observable $F_n$, i.e., $x_n(t) \equiv \operatorname{tr}(F_n\rho(t))$, can be written as

$$\dot{x}_n(t) = \operatorname{tr}\left(F_n\left(\mathcal{L}_H\rho(t) + \mathcal{L}_D\rho(t)\right)\right). \tag{7}$$

From Lendi (1987); Zhang and Sarovar (2014), we derive that

$$\dot{x}_n(t) = \sum_{p=1}^{N^2-1}(Q_{np} + R_{np})\, x_p(t) + b_n, \tag{8}$$

where

$$\begin{aligned} Q_{np} &= \sum_{m=1}^{M} C_{mnp}\, h_m, \\ R_{np} &= -\frac{1}{4}\sum_{l,j,k=1}^{N^2-1} g_{jk}\left(C_{njl}(C_{klp} + D_{klp}) + C_{knl}(C_{ljp} + D_{ljp})\right), \\ b_n &= \frac{1}{N}\sum_{j,k=1}^{N^2-1}\operatorname{Im}(g_{jk})\, C_{njk}. \end{aligned} \tag{9}$$

Collecting the $x_n$ in a vector $x(t)$, we obtain a linear differential equation describing the complete dynamics:

$$\dot{x}(t) = Ax(t) + b, \qquad x_n(0) = \operatorname{tr}(F_n\rho(0)), \tag{10}$$

where the matrix $A$ has elements $A_{np} = Q_{np} + R_{np}$, and the vector $b$ has elements $b_n$.

The vector $x(t)$ is often called the coherence vector, and it is a complete representation of the quantum state Lendi (1987). Eq. (10) gives an explicit description of the quantum dynamics as a linear time invariant (LTI) system, and hence it enables application of results from classical linear systems theory.
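To make the coherence-vector construction concrete, the following Python sketch (ours, not from the paper) builds the matrix $A$ for a single qubit with the orthonormal $\mathfrak{su}(2)$ basis $F_j = \sigma_j/\sqrt{2}$ and a purely Hamiltonian generator; the value $\omega = 1$ is an illustrative choice.

```python
import numpy as np

# Orthonormal basis for su(2): F_j = sigma_j / sqrt(2), so tr(F_j F_k) = delta_jk.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
F = [s / np.sqrt(2) for s in (sx, sy, sz)]

omega = 1.0                    # illustrative qubit frequency (our choice)
H = 0.5 * omega * sz           # H = (omega/2) sigma_z

def L_H(rho):
    """Hamiltonian generator, L_H rho = -i [H, rho]."""
    return -1j * (H @ rho - rho @ H)

# Coherence-vector generator: A_np = tr(F_n L_H(F_p)); here b = 0 since L_D = 0.
A = np.array([[np.trace(Fn @ L_H(Fp)).real for Fp in F] for Fn in F])
# For this H the coherence vector obeys xdot = A x with
# A = [[0, -omega, 0], [omega, 0, 0], [0, 0, 0]]: Larmor precession about z.
```

The same loop over basis elements computes $A$ and $b$ for any Lindblad generator by replacing `L_H` with the full generator.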

Next we turn to the observables being monitored. The measurement output consists of a vector of time-dependent observable expectation values:

$$y(t) = \left[\langle O_1(t)\rangle, \langle O_2(t)\rangle, \ldots\right]^T. \tag{11}$$

In most physical systems one only has access to a limited set of observables (e.g., local observables in a many-body system). However, as a result of the dynamics, it is possible that all the parameters defining Eq. (1) are imprinted in the time evolution of the monitored observables. This is the notion we wish to formalize and exploit. We begin by expanding each monitored observable $O_k$ in the basis $\{F_j\}$ of the Lie algebra $\mathfrak{su}(N)$, i.e., $O_k = \sum_j c_{kj} F_j$. With this expansion we can define the output vector as

$$y(t) = c\,x(t), \tag{12}$$

with the $k$-th row of the matrix $c$ being the elements $c_{kj}$. Also define the set $\mathcal{M}$ as the collection of unique basis elements that appear in the expansion of all the measured observables.

We want to use the dynamical equation governing the time evolution of only the observables being monitored to estimate the unknown parameters. It is possible that the evolution does not couple the elements in $\mathcal{M}$ to all elements of the basis. This is equivalent to $A$ possessing block diagonalizable structure, with the elements of $\mathcal{M}$ being coupled only through a proper subblock. A constructive procedure to find the relevant basis elements that couple to the measured observables is the following generalization of the filtration procedure in geometric control theory Sastry (1999). First we define the adjoint generator $\mathcal{L}^\dagger$ of the dynamics through $\operatorname{tr}(X\mathcal{L}\rho) = \operatorname{tr}((\mathcal{L}^\dagger X)\rho)$. Explicitly, $\mathcal{L}^\dagger = \mathcal{L}_H^\dagger + \mathcal{L}_D^\dagger$, with

$$\mathcal{L}_H^\dagger X = -i[X, H(\theta)]$$

and

$$\mathcal{L}_D^\dagger X = \frac{1}{2}\sum_{j,k=1}^{N^2-1} g_{jk}\left([F_j, F_k^\dagger X] + [X F_j, F_k^\dagger]\right).$$

Then for the Lindblad dynamics in Eq. (1) define an iterative procedure as

$$\mathcal{G}_0 = \mathcal{M}, \qquad \mathcal{G}_i = \mathcal{L}^\dagger[\mathcal{G}_{i-1}] \cup \mathcal{G}_{i-1}, \tag{13}$$

where

$$\mathcal{L}^\dagger[\mathcal{G}_{i-1}] = \left\{F_j : \operatorname{tr}(F_j^\dagger \mathcal{L}^\dagger g) \neq 0 \text{ for some } g \in \mathcal{G}_{i-1}\right\}.$$

That is, at each iteration we compute the adjoint evolution of each of the elements of $\mathcal{G}_{i-1}$, and if the result has nonzero inner product with a basis element not already in $\mathcal{G}_{i-1}$, this basis element is added to $\mathcal{G}_i$. Since the dynamical system is finite dimensional, this iteration will saturate at a maximal set $\bar{\mathcal{G}}$ after finitely many steps, and we refer to this set as the accessible set. Now writing all the $x_j$ with $F_j \in \bar{\mathcal{G}}$ in a vector $x_a$, the dynamics for this vector is given by
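As an illustration of the filtration (13), the following Python sketch (our own, restricted to the Hamiltonian part of the generator, with an assumed single-qubit Hamiltonian $H = \sigma_z/2$) computes the accessible set for a monitored Pauli observable:

```python
import numpy as np

# Pauli matrices; orthogonality is with respect to tr(X^dag Y).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
basis = {"sx": sx, "sy": sy, "sz": sz}

H = 0.5 * sz  # assumed Hamiltonian H = sigma_z / 2, purely for illustration

def L_dag(X):
    """Adjoint Hamiltonian generator, L_H^dag X = -i [X, H]."""
    return -1j * (X @ H - H @ X)

def accessible_set(monitored):
    """Filtration G_0 = M, G_i = L^dag[G_{i-1}] union G_{i-1}, to saturation."""
    G = set(monitored)
    while True:
        new = set()
        for name in G:
            img = L_dag(basis[name])
            for cand, Fc in basis.items():
                if cand not in G and abs(np.trace(Fc.conj().T @ img)) > 1e-12:
                    new.add(cand)
        if not new:
            return G
        G |= new

# accessible_set({"sx"}) == {"sx", "sy"}: sigma_x couples only to sigma_y here,
# while accessible_set({"sz"}) == {"sz"}: sigma_z is conserved by this H.
```

The same routine applies to the full Lindblad generator by adding the dissipative adjoint term to `L_dag`.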

$$\frac{d}{dt}x_a = \tilde{A}x_a + \tilde{b}, \tag{14}$$
$$y(t) = \tilde{c}\,x_a(t), \tag{15}$$

where $\tilde{A}$ is a sub-matrix of $A$, $\tilde{b}$ is a sub-vector of $b$, and $\tilde{c}$ is a sub-matrix of $c$; i.e., all three contain only the elements necessary to describe the evolution of the subset of observable averages collected in $x_a$, and how these define the measurement traces.

Finally, we assume that the system is prepared in a fixed, known initial state $\rho(0)$, and the corresponding initial state for Eq. (14) is $x_a(0)$.

## III Identification algorithm

Suppose that we can measure the expectation values of some observables in $\mathcal{M}$ at regular time instants $t_j = j\Delta t$, $j = 0, 1, 2, \ldots$, where $\Delta t$ is the sampling period. Denote the measured expectation values of the observables as $y(j)$ (note that these expectation values may have to be collected by averaging measurements on several runs of the experiment under the same initial state), which is the output of the following discretized form of Eq. (14):

$$x_a(j+1) = A_d\,x_a(j) + b_d, \qquad y(j) = \tilde{c}\,x_a(j), \tag{16}$$

where $A_d = e^{\tilde{A}\Delta t}$ and $b_d = \int_0^{\Delta t} e^{\tilde{A}\tau}\,d\tau\; \tilde{b}$, and for brevity of notation we use $x_a(j) \equiv x_a(j\Delta t)$ and $y(j) \equiv y(j\Delta t)$.

Eq. (16) defines a discrete time LTI system, and we will now use invariants of different realizations of an LTI system to identify the unknown Hamiltonian and Lindblad parameters that generate the dynamics. To this end, we need to construct a realization from the measurement time traces. There are many methods for constructing a realization of a linear dynamical system from input-output data in linear systems theory Callier and Desoer (1991). In Zhang and Sarovar (2014), we presented and utilized a method called the eigensystem realization algorithm (ERA) Juang and Pappa (1985) for identifying Hamiltonian dynamics, and in the following we show that this method can be used in this case of open system dynamics in Lindblad form as well. For completeness, we include the specification of ERA here.

### III.1 Eigensystem realization algorithm

The first stage of the estimation algorithm is to construct a minimal realization of the system based on input/output information. This is achieved by ERA in three steps as follows.

Step 1: Collect the measured data into an $r \times s$ block matrix (generalized Hankel matrix) as:

$$H_{rs}(k) = \begin{bmatrix} y(k) & y(k+t_1) & \cdots & y(k+t_{s-1}) \\ y(j_1+k) & y(j_1+k+t_1) & \cdots & y(j_1+k+t_{s-1}) \\ \vdots & \vdots & & \vdots \\ y(j_{r-1}+k) & y(j_{r-1}+k+t_1) & \cdots & y(j_{r-1}+k+t_{s-1}) \end{bmatrix}$$

with arbitrary integers $j_i$ ($i = 1, \ldots, r-1$) and $t_i$ ($i = 1, \ldots, s-1$).

Step 2: Find the singular value decomposition (SVD) of $H_{rs}(0)$ as

$$H_{rs}(0) = P\begin{bmatrix}\Sigma & 0 \\ 0 & 0\end{bmatrix}Q^T = \begin{bmatrix}P_1 & P_2\end{bmatrix}\begin{bmatrix}\Sigma & 0 \\ 0 & 0\end{bmatrix}\begin{bmatrix}Q_1^T \\ Q_2^T\end{bmatrix},$$

where $P$, $Q$ are both orthogonal, and $\Sigma$ is a diagonal matrix containing the non-zero singular values of $H_{rs}(0)$ determined up to a numerical accuracy $\epsilon$, i.e., $\sigma_i \geq \epsilon$ for all $i \leq n$, where $n$ is the dimension of $\Sigma$. The matrices $P_1$, $P_2$, $Q_1$, $Q_2$ are partitions with compatible dimensions. When the measurement time traces are noisy, determining the cutoff parameter $\epsilon$ (and hence $n$) can be difficult since the noise can lead to non-decaying singular values. In this case, one can choose $n$ by demanding that the realization produced by ERA is of the correct order. We will return to this issue when we specify how the ERA realization will be used (see discussion after Eq. (18)).

Step 3: Form a realization of the system (16) as $\hat{A}_d = \Sigma^{-1/2}P_1^T H_{rs}(1) Q_1 \Sigma^{-1/2}$, $\hat{c}$ given by the first row block of $P_1\Sigma^{1/2}$, and $\hat{x}(0)$ given by the first column of $\Sigma^{1/2}Q_1^T$. The triple $(\hat{A}_d, \hat{x}(0), \hat{c})$ forms a realization of Eq. (16), and since the output data is an invariant of realizations, this triple generates the same output as the original system:

$$y(j) = \hat{c}\,\hat{A}_d^j\,\hat{x}(0), \quad \text{for all } j \geq 0. \tag{17}$$

Note that although the original system response is composed of a response to a non-zero initial state and a response to the forcing vector $\tilde{b}$ (i.e., Eq. (16)), the ERA realization is composed of only an initial state response. This is because at the level of input-output realizations it is not possible to distinguish between the response to initial states and the response to forcing inputs, and therefore ERA lumps all responses into one type.

This completes the specification of the ERA algorithm. It results in a realization of the discrete time dynamical system (16) in the form of the triple $(\hat{A}_d, \hat{x}(0), \hat{c})$.
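The three steps above can be condensed into a short Python sketch (a simplified version for a scalar output and consecutive samples, i.e., $j_i = t_i = i$; not the authors' code), verified here on data generated from a known second-order system:

```python
import numpy as np

def era(Y, r, s, n):
    """Eigensystem Realization Algorithm for a scalar output sequence Y.
    Returns (Ad_hat, x0_hat, c_hat) with Y[j] ~ c_hat @ Ad_hat^j @ x0_hat."""
    H0 = np.array([[Y[i + j] for j in range(s)] for i in range(r)])      # H_rs(0)
    H1 = np.array([[Y[i + j + 1] for j in range(s)] for i in range(r)])  # H_rs(1)
    P, sig, Qt = np.linalg.svd(H0)
    P1, Q1t = P[:, :n], Qt[:n, :]
    Srt = np.diag(np.sqrt(sig[:n]))
    Sinv = np.diag(1.0 / np.sqrt(sig[:n]))
    Ad_hat = Sinv @ P1.T @ H1 @ Q1t.T @ Sinv
    c_hat = (P1 @ Srt)[0, :]    # first row block (scalar output)
    x0_hat = (Srt @ Q1t)[:, 0]  # first column
    return Ad_hat, x0_hat, c_hat

# Generate noiseless data from a known second-order discrete-time LTI system.
Ad_true = np.array([[0.9, 0.2], [-0.2, 0.9]])
x0_true = np.array([1.0, 0.0])
c_true = np.array([1.0, 0.0])
Y = [c_true @ np.linalg.matrix_power(Ad_true, j) @ x0_true for j in range(40)]

Ad, x0, c = era(Y, r=10, s=10, n=2)
# The realization reproduces the output (Eq. (17)) even though (Ad, x0, c) is
# only determined up to a similarity transformation of the true system.
Y_hat = [c @ np.linalg.matrix_power(Ad, j) @ x0 for j in range(20)]
```

The similarity-transformation ambiguity is harmless here, since the estimation step below only uses realization invariants.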

We note that the measurement results do not have to be uniformly spaced in Step 1 of the algorithm; in particular, if it is known that some measurement results are particularly noisy or corrupted, those data points can be discarded by choosing appropriate integers $j_i$ ($i = 1, \ldots, r-1$) and $t_i$ ($i = 1, \ldots, s-1$). This data filtering can reduce the estimation error caused by measurement noise and outliers.

### III.2 Estimation algorithm

In order to estimate the parameters in the original dynamical system we now convert the discrete-time LTI system realization obtained from ERA, $(\hat{A}_d, \hat{x}(0), \hat{c})$, to a continuous time realization by letting $\hat{A} = \ln(\hat{A}_d)/\Delta t$, where the logarithm in this definition is the principal branch of the matrix logarithm. The accurate conversion from a discrete time system to a continuous time system relies on the sampling time being sufficiently small to capture all continuous time dynamics. For the Hamiltonian system identification case we were able to specify conditions on the sampling time based on the Shannon-Nyquist criterion, since in that case the output time traces are guaranteed to be band-limited Zhang and Sarovar (2014). However, there is no such guarantee in the Markovian open system case since now the time traces are generally decaying oscillations, or time-limited signals. Therefore, we do not currently have conditions on the sampling time for this conversion to accurately provide a continuous-time realization (i.e., to avoid aliasing effects), but note that one can generally estimate a valid sampling time from knowledge of the intrinsic frequencies in the system. In engineering, it is typical practice to sample such time-limited signals at many times the fastest frequency in the signal Franklin et al. (1997), and such a heuristic suffices in our setting as well, as the example in section IV and the case studies in Appendix A bear out. Additionally, we note that although the Shannon-Nyquist criterion enables a formal specification of the minimum sampling time in the Hamiltonian estimation scenario, to apply it requires some knowledge of the system's spectrum. In the absence of this knowledge, the situation is the same in the closed and open systems cases: one needs to estimate a valid sampling time and possibly also try multiple sampling times.
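The discrete-to-continuous conversion is a matrix logarithm; in Python it can be sketched with `scipy.linalg.logm` (illustrative generator and sampling period, assumed here rather than taken from the paper):

```python
import numpy as np
from scipy.linalg import expm, logm

dt = 0.01                                  # assumed sampling period
A = np.array([[0.0, -2.0], [2.0, -0.5]])   # continuous-time generator (known here)
Ad = expm(A * dt)                          # exact discretization of xdot = A x

# The principal matrix logarithm recovers A when the eigenvalues of A*dt have
# imaginary parts inside (-pi, pi), i.e., when dt is small enough (no aliasing).
A_rec = logm(Ad).real / dt
```

When $\Delta t$ is too large the principal branch returns a different (aliased) generator, which is the failure mode discussed above.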

Now, to estimate the unknown parameters, we use the fact that the system input-output relations are invariants of different realizations. We work in the Laplace domain in the following in order to form algebraic equations for the unknown parameters. By equating the Laplace transform of the outputs of the original system and the ERA realization we get:

$$\tilde{c}(sI - \tilde{A})^{-1}x_a(0) + \tilde{c}(sI - \tilde{A})^{-1}\tilde{b}/s = \hat{c}(sI - \hat{A})^{-1}\hat{x}(0). \tag{18}$$

This equation relates the unknown parameters to the measured data through the ERA realization. Explicitly, the right hand side of Eq. (18) is completely determined by the measured data, and the left hand side is in terms of the Hamiltonian parameters $h_m$ and Lindblad coefficients $g_{jk}$. The resolvent expressions on the right and left hand sides of Eq. (18), $(sI - \hat{A})^{-1}$ and $(sI - \tilde{A})^{-1}$, can be computed symbolically, or alternatively the expressions can be expanded in powers of $1/s$, and the coefficients in this expansion can be equated to yield polynomial equations for the unknown parameters. Solving these multivariate polynomial equations leads to the identification of $h_m$ and $g_{jk}$. In the case studies encountered in Appendix A we are able to express the resolvents exactly by computing the matrix inverses symbolically, but one may need to resort to the expansion in $1/s$ in more complicated cases. We discuss this issue further in Section VI.
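As a toy illustration of forming the left hand side of Eq. (18) symbolically, the sketch below uses `sympy` (our choice of symbolic tool, not the paper's) on a two-level, purely Hamiltonian example:

```python
import sympy as sp

s, w = sp.symbols("s omega", positive=True)

# Toy closed-system example: coherence vector x = (<sx>, <sy>) precessing under
# H = (omega/2) sigma_z, monitored observable <sx>, initial state x(0) = (1, 0).
A = sp.Matrix([[0, -w], [w, 0]])
x0 = sp.Matrix([1, 0])
c = sp.Matrix([[1, 0]])

# Left hand side of Eq. (18) with b = 0: the symbolic resolvent expression.
Z = sp.simplify((c * (s * sp.eye(2) - A).inv() * x0)[0, 0])
# Z == s / (s**2 + omega**2); equating its coefficients with the ERA-derived
# transfer function gives a polynomial equation for omega**2. Note that only
# omega**2 appears, so the sign of omega is not identifiable from this trace.
```

The same pattern, with $\tilde{A}$ carrying symbolic Hamiltonian and Lindblad parameters, produces the polynomial systems solved in section IV and Appendix A.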

Note that $n$ in Step 2 of ERA dictates the maximum order of the polynomial on the right hand side of Eq. (18). This suggests that we should choose $n$ to be the order of the denominator polynomial on the left hand side of Eq. (18), which can be calculated from symbolic computations and obtained as an irreducible rational function. This choice coincides with the rank of $H_{rs}(0)$ when there is no noise in the measurement time traces.

We highlight a crucial difference here between parameter estimation in the closed system and Markovian open system scenarios. For the former, $\tilde{b} = 0$, and if the triple $(\tilde{A}, x_a(0), \tilde{c})$ forms a minimal realization (i.e., is controllable and observable), then the Laplace transform on the left hand side of Eq. (18) is guaranteed to have a canonical form as a ratio of polynomials $Q(s)/P(s)$ Callier and Desoer (1991), with

$$\begin{aligned} P(s) &= \det(sI - \tilde{A}), \\ Q(s) &= \det\left(s\begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix} - \begin{bmatrix} \tilde{A} & x_a(0) \\ \tilde{c} & 0 \end{bmatrix}\right). \end{aligned} \tag{19}$$

Having this form enabled us in Ref. Zhang and Sarovar (2014) to avoid explicitly computing the inverse $(sI - \tilde{A})^{-1}$ or its expansion in a power series. These are both computationally intensive to compute since the matrix $\tilde{A}$ is a symbolic matrix containing the unknown parameters. However, in the open system case, where the left hand side of Eq. (18) has no known canonical form, one cannot avoid performing this symbolic inverse or power series computation. This is a critical difference in computational difficulty between the closed and open system parameter identification problems.

We note that by converting back into the continuous time domain and formulating the Laplace transform relation in Eq. (18), we have converted the problem of estimating parameters to one of solving polynomial equations. This is in contrast to directly estimating parameters using Eq. (17), which would involve solving transcendental equations for the unknown parameters. This simplification of the equations relating the unknown parameters to the measured data is one of the primary advantages of our approach.

We conclude with two further comments on the above estimation algorithm:

1. The initial state may have to be chosen carefully. For example, if $x_a(0)$ is zero or an eigenvector of $\tilde{A}$, it leads to no sensitivity in the output to any of the unknown parameters. Care must be taken to avoid such degenerate cases. In fact, running the algorithm with multiple initial states leads to more polynomial equations of low order and thus can help to solve for the unknown parameters more efficiently.

2. This system identification algorithm can result in multiple estimates of the unknown parameters, all of which satisfy the input/output relations captured by Eq. (18). This is because several Markovian generators can generate the same map between an input state and measurement time trace, and hence are equivalent from an input/output perspective Burgarth and Yuasa (2012). When the algorithm results in multiple parameter estimates, one has to appeal to prior information, or add resources such as additional input states or observable time traces in order to uniquely specify a parameter set.

The following section explicitly demonstrates the algorithm developed above and Appendix A presents additional case studies.

## IV Dissipative energy transfer

In this section we apply the estimation algorithm developed above to a physically relevant example: energy transfer between two qubits at finite temperature. The qubits could represent the relevant energy levels of an atomic or molecular system, or spin-1/2 systems. The Markovian master equation for the dynamics takes the form

$$\begin{aligned} \frac{d\rho}{dt} = {} & -i[H,\rho] + \sum_{k=1}^{2}\frac{\gamma_k}{2}\left(\sigma_z^k\rho\sigma_z^k - \rho\right) \\ & + \sum_{k=1}^{2}\Big[ 2g_k^-\left(\sigma_-^k\rho\sigma_+^k - \tfrac{1}{2}\sigma_+^k\sigma_-^k\rho - \tfrac{1}{2}\rho\sigma_+^k\sigma_-^k\right) \\ & \qquad\; + 2g_k^+\left(\sigma_+^k\rho\sigma_-^k - \tfrac{1}{2}\sigma_-^k\sigma_+^k\rho - \tfrac{1}{2}\rho\sigma_-^k\sigma_+^k\right)\Big], \end{aligned} \tag{20}$$

where the Hamiltonian for the system is given by

$$H = \frac{\omega_1}{2}\sigma_z^1 + \frac{\omega_2}{2}\sigma_z^2 + \delta_1\left(\sigma_+^1\sigma_-^2 + \sigma_-^1\sigma_+^2\right). \tag{21}$$

This is a model for two possibly detuned qubits with coherent energy transfer dynamics, independent dephasing, and incoherent excitation-relaxation. The rates of excitation/relaxation ($g_k^{\pm}$) are functions of the temperature of the environment of the qubits Breuer and Petruccione (2002). There are nine parameters that dictate the dynamics in this model: $\omega_1, \omega_2, \delta_1, \gamma_1, \gamma_2, g_1^{\pm}, g_2^{\pm}$. For convenience, we define

$$\begin{aligned} \nu_1 &= g_1^+ + g_1^-, & \mu_1 &= g_1^+ - g_1^-, \\ \nu_2 &= g_2^+ + g_2^-, & \mu_2 &= g_2^+ - g_2^-. \end{aligned} \tag{22}$$

Assume that the observable being measured is $\sigma_z^1$, or $\sigma_z^2$, or both, since we will be interested in exploring the benefits of measuring multiple observables. The state vector $x_a$ determined by the accessible set is the same regardless of whether one or both of those observables are measured; it consists of $\langle\sigma_z^1\rangle$, $\langle\sigma_z^2\rangle$, and four two-qubit coherence expectation values, and its dynamics is determined by Eq. (14) with

$$\tilde{A} = \begin{bmatrix} -2\nu_1 & 0 & 0 & \delta_1 & -\delta_1 & 0 \\ 0 & -2\nu_2 & 0 & -\delta_1 & \delta_1 & 0 \\ 0 & 0 & -\nu_s-\gamma_s & \omega_2 & \omega_1 & 0 \\ -\delta_1 & \delta_1 & -\omega_2 & -\nu_s-\gamma_s & 0 & \omega_1 \\ \delta_1 & -\delta_1 & -\omega_1 & 0 & -\nu_s-\gamma_s & \omega_2 \\ 0 & 0 & 0 & -\omega_1 & -\omega_2 & -\nu_s-\gamma_s \end{bmatrix}, \qquad \tilde{b} = [\mu_1, \mu_2, 0, 0, 0, 0]^T,$$

where $\nu_s = \nu_1 + \nu_2$ and $\gamma_s = \gamma_1 + \gamma_2$. The matrix $\tilde{c}$ defining the output depends on which observable is being monitored and is given by

$$\langle\sigma_z^1\rangle:\ \tilde{c}_{z1} = [1, 0, 0, 0, 0, 0], \quad\text{or}\quad \langle\sigma_z^2\rangle:\ \tilde{c}_{z2} = [0, 1, 0, 0, 0, 0], \tag{23}$$

or the concatenation of these two row vectors into a matrix, if both observables are being monitored.

For the chosen initial state $\rho(0)$, we have that

$$x_a(0) = [1, 0, 0, 0, 0, 0]^T. \tag{24}$$

Since the matrix $\tilde{A}$ is small, we can symbolically calculate the resolvents in Eq. (18) to obtain the Laplace transform of the two possible measurement traces, $\bar{Z}_1(s)$ or $\bar{Z}_2(s)$:

$$\bar{Z}_1(s) = \tilde{c}_{z1}(sI - \tilde{A})^{-1}x_a(0) + \tilde{c}_{z1}(sI - \tilde{A})^{-1}\tilde{b}/s = \frac{s^4 + q_3s^3 + q_2s^2 + q_1s + q_0}{s^5 + p_4s^4 + p_3s^3 + p_2s^2 + p_1s}, \tag{25}$$

with

$$\begin{aligned} q_3 = {} & \mu_1 + 2\nu_1 + 4\nu_2 + 2\gamma_s, \\ q_2 = {} & 2\delta_1^2 + \nu_1^2 + 6\nu_1\nu_2 + 2\nu_1\gamma_s + 2\mu_1\nu_1 + 5\nu_2^2 + 6\nu_2\gamma_s + 4\mu_1\nu_2 + \gamma_s^2 + 2\mu_1\gamma_s + \omega_d^2, \\ q_1 = {} & 2\delta_1^2\mu_1 + 2\delta_1^2\mu_2 + 2\delta_1^2\nu_1 + 2\delta_1^2\nu_2 + 2\delta_1^2\gamma_s + \mu_1\nu_1^2 + 5\mu_1\nu_2^2 + 4\nu_1\nu_2^2 + 2\nu_1^2\nu_2 + \mu_1\gamma_s^2 \\ & + 2\nu_2\gamma_s^2 + 4\nu_2^2\gamma_s + \mu_1\omega_d^2 + 2\nu_2\omega_d^2 + 2\nu_2^3 + 6\mu_1\nu_1\nu_2 + 2\mu_1\nu_1\gamma_s + 6\mu_1\nu_2\gamma_s + 4\nu_1\nu_2\gamma_s, \\ q_0 = {} & 2\mu_1\nu_2^3 + 2\delta_1^2\mu_1\gamma_s + 2\delta_1^2\mu_2\gamma_s + 4\mu_1\nu_1\nu_2^2 + 2\mu_1\nu_1^2\nu_2 + 2\mu_1\nu_2\gamma_s^2 + 4\mu_1\nu_2^2\gamma_s \\ & + 2\mu_1\nu_2\omega_d^2 + 2\delta_1^2\mu_1\nu_1 + 2\delta_1^2\mu_1\nu_2 + 2\delta_1^2\mu_2\nu_1 + 2\delta_1^2\mu_2\nu_2 + 4\mu_1\nu_1\nu_2\gamma_s, \end{aligned} \tag{26}$$

and

$$\begin{aligned} p_4 = {} & 4\nu_1 + 4\nu_2 + 2\gamma_s, \\ p_3 = {} & 4\delta_1^2 + 5\nu_1^2 + 14\nu_1\nu_2 + 6\nu_1\gamma_s + 5\nu_2^2 + 6\nu_2\gamma_s + \gamma_s^2 + \omega_d^2, \\ p_2 = {} & 8\delta_1^2\nu_1 + 8\delta_1^2\nu_2 + 4\delta_1^2\gamma_s + 2\nu_1^3 + 14\nu_1^2\nu_2 + 4\nu_1^2\gamma_s + 14\nu_1\nu_2^2 + 16\nu_1\nu_2\gamma_s \\ & + 2\nu_1\gamma_s^2 + 2\nu_1\omega_d^2 + 2\nu_2^3 + 4\nu_2^2\gamma_s + 2\nu_2\gamma_s^2 + 2\nu_2\omega_d^2, \\ p_1 = {} & 4\delta_1^2\nu_1^2 + 8\delta_1^2\nu_1\nu_2 + 4\delta_1^2\nu_1\gamma_s + 4\delta_1^2\nu_2^2 + 4\delta_1^2\nu_2\gamma_s + 4\nu_1^3\nu_2 + 8\nu_1^2\nu_2^2 \\ & + 8\nu_1^2\nu_2\gamma_s + 4\nu_1\nu_2^3 + 8\nu_1\nu_2^2\gamma_s + 4\nu_1\nu_2\gamma_s^2 + 4\nu_1\nu_2\omega_d^2, \end{aligned} \tag{27}$$

where $\omega_d = \omega_1 - \omega_2$. And,

$$\bar{Z}_2(s) = \tilde{c}_{z2}(sI - \tilde{A})^{-1}x_a(0) + \tilde{c}_{z2}(sI - \tilde{A})^{-1}\tilde{b}/s = \frac{r_3s^3 + r_2s^2 + r_1s + r_0}{s^5 + p_4s^4 + p_3s^3 + p_2s^2 + p_1s}, \tag{28}$$

with

$$\begin{aligned} r_3 = {} & \mu_2, \\ r_2 = {} & 2\delta_1^2 + 4\mu_2\nu_1 + 2\mu_2\nu_2 + 2\mu_2\gamma_s, \\ r_1 = {} & 2\delta_1^2\mu_1 + 2\delta_1^2\mu_2 + 2\delta_1^2\nu_1 + 2\delta_1^2\nu_2 + 2\delta_1^2\gamma_s + 5\mu_2\nu_1^2 + \mu_2\nu_2^2 + \mu_2\gamma_s^2 + \mu_2\omega_d^2 \\ & + 6\mu_2\nu_1\nu_2 + 6\mu_2\nu_1\gamma_s + 2\mu_2\nu_2\gamma_s, \\ r_0 = {} & 2\mu_2\nu_1^3 + 2\delta_1^2\mu_1\gamma_s + 2\delta_1^2\mu_2\gamma_s + 2\mu_2\nu_1\nu_2^2 + 4\mu_2\nu_1^2\nu_2 + 2\mu_2\nu_1\gamma_s^2 + 4\mu_2\nu_1^2\gamma_s \\ & + 2\mu_2\nu_1\omega_d^2 + 2\delta_1^2\mu_1\nu_1 + 2\delta_1^2\mu_1\nu_2 + 2\delta_1^2\mu_2\nu_1 + 2\delta_1^2\mu_2\nu_2 + 4\mu_2\nu_1\nu_2\gamma_s, \end{aligned} \tag{29}$$

and $p_4, \ldots, p_1$ defined as in Eq. (27).

It is clear from these expressions that we can identify $\nu_k$ and $\mu_k$, which then allows identification of $g_k^+$ and $g_k^-$. At the same time, note that only the linear combinations $\gamma_s = \gamma_1 + \gamma_2$ and $\omega_d = \omega_1 - \omega_2$ occur in the above equations, but not the individual parameters $\gamma_k$ and $\omega_k$ ($k = 1, 2$). This implies that only these linear combinations can be determined from the measurements and the initial state, but not the individual parameters that enter them. These linear combinations describe the energy difference between the qubits and the dephasing-induced broadening of this energy difference. The physical observables encoding average population of the excited state of either qubit only allow determination of these collective (in the case of $\gamma_s$) or relative (in the case of $\omega_d$) properties of the system. Furthermore, another restriction that we can immediately observe is that only even powers of $\delta_1$ and $\omega_d$ occur in all of the polynomials above, and therefore we only expect to determine these parameters up to a sign difference. The signs of these parameters are not estimable by the local measurements we have chosen, and prior knowledge, or additional initial states or measurements, are necessary to identify these signs.

Before we present the results of the estimation simulation, we outline four possible modes of estimation that could be performed given that there are two possible observables:

1. In mode 1, only the measurement trace $\langle\sigma_z^1\rangle(t)$ is used to construct the Hankel matrix and system realization, and the polynomial system used to solve for the unknown parameters is constructed using only $\bar{Z}_1(s)$.

2. In mode 2, only the measurement trace $\langle\sigma_z^2\rangle(t)$ is used to construct the Hankel matrix and system realization, and the polynomial system used to solve for the unknown parameters is constructed using only $\bar{Z}_2(s)$.

3. In mode 3, both measurement traces $\langle\sigma_z^1\rangle(t)$ and $\langle\sigma_z^2\rangle(t)$ are used to construct the Hankel matrix and system realization, and the polynomial system used to solve for the unknown parameters is constructed using only $\bar{Z}_1(s)$.

4. In mode 4, both measurement traces $\langle\sigma_z^1\rangle(t)$ and $\langle\sigma_z^2\rangle(t)$ are used to construct the Hankel matrix and system realization, and the polynomial system used to solve for the unknown parameters is constructed using only $\bar{Z}_2(s)$.

Obviously, modes 1 and 2 are practically more attractive since they only involve collecting one observable's time trace. However, there may be a benefit to considering modes 3 and 4, since in these modes of estimation one is providing the algorithm with more data with which to construct the realization (although the accessible vector $x_a$ does not change, and so some of this additional data is redundant). One could also imagine a fifth mode where one uses both measurement traces $\langle\sigma_z^1\rangle(t)$ and $\langle\sigma_z^2\rangle(t)$ to construct the Hankel matrix and system realization, and then constructs the polynomial system from the definitions of both $\bar{Z}_1(s)$ and $\bar{Z}_2(s)$, i.e., some polynomial equations from the definition of the coefficients in $\bar{Z}_1(s)$ and some from the definition of the coefficients in $\bar{Z}_2(s)$. For our example, where the number of parameters is small enough that one can get enough equations from the definition of just one of the Laplace transforms, we did not find any advantage to using this fifth mode of estimation, and so do not investigate it further.
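Before turning to the numerical illustration, we note that the measurement traces themselves can be simulated by integrating Eq. (14) with the matrix $\tilde{A}$ given above. The Python sketch below does so with illustrative parameter values (not the paper's nominal values) and the assumed shorthands $\nu_s = \nu_1 + \nu_2$, $\gamma_s = \gamma_1 + \gamma_2$:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative parameter values only (the paper's nominal MHz-scale values are
# not reproduced here); nu_s = nu1 + nu2 and gamma_s = gamma1 + gamma2 assumed.
w1, w2, d1 = 1.0, 1.2, 0.3
nu1, nu2, gs = 0.05, 0.04, 0.10
mu1, mu2 = -0.02, -0.015
ns = nu1 + nu2

A = np.array([
    [-2*nu1,  0,      0,      d1,     -d1,     0],
    [0,      -2*nu2,  0,     -d1,      d1,     0],
    [0,       0,     -ns-gs,  w2,      w1,     0],
    [-d1,     d1,    -w2,    -ns-gs,   0,      w1],
    [d1,     -d1,    -w1,     0,      -ns-gs,  w2],
    [0,       0,      0,     -w1,     -w2,    -ns-gs],
])
b = np.array([mu1, mu2, 0, 0, 0, 0])
x0 = np.array([1.0, 0, 0, 0, 0, 0])  # x_a(0) of Eq. (24)

def trace_z1(t):
    """<sigma_z^1>(t): solve xdot = A x + b via the augmented (affine) exponential."""
    M = np.zeros((7, 7))
    M[:6, :6], M[:6, 6] = A, b
    return (expm(M * t) @ np.append(x0, 1.0))[0]
```

Sampling `trace_z1` (and the analogous second component) at $t = j\Delta t$ produces the data entering the Hankel matrix in each estimation mode.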

To illustrate the algorithm we fix nominal values for the system parameters, all on the MHz scale, with

$$\bar{n}(\omega) = \frac{1}{e^{\omega/k_BT} - 1}$$

being the Bose-Einstein distribution at temperature $T$. In the following we fix the temperature. For a system with these parameters, we generate time traces of $\langle\sigma_z^1\rangle$ and $\langle\sigma_z^2\rangle$ with sampling period $\Delta t$. The resulting time traces are shown in Fig. 1. We form the Hankel matrix using either one or both of these time traces, depending on the mode of estimation, with $j_i = i$ and $t_i = i$ (i.e., using every data point). The ERA realization of the discrete time system is then formed, and the corresponding continuous time system is obtained using the prescription given in section III.2. Finally, the Laplace transform expression on the right-hand side of Eq. (18) is computed (using the Matlab Control System Toolbox function tf) to obtain

$$\bar{Z}_1^{\text{est}}(s) = \frac{s^4 + 0.2702s^3 + 1.7302s^2 + 0.072s - 0.0034}{s^5 + 0.3624s^4 + 2.2569s^3 + 0.3243s^2 + 0.011s}$$

and

$$\bar{Z}_2^{\text{est}}(s) = \frac{-0.0176s^3 + 0.4944s^2 + 0.0209s - 0.0039}{s^5 + 0.3624s^4 + 2.2569s^3 + 0.3243s^2 + 0.011s}.$$

Combining these expressions with the corresponding Laplace transforms in Eqs. (25) and (28), we obtain a system of polynomial equations for the unknown parameters (we can choose the seven simplest equations since there are seven unknown parameters). We solved these equations using PHClab, the Matlab interface to the PHCpack libraries for solving polynomial systems Guan and Verschelde (2008).

Table 1 shows the results of solving for the parameters when the measurements are noiseless. We found no difference between modes 1 and 3, or between modes 2 and 4, when the measurements are noiseless, and therefore we only present results from modes 1 and 2 of estimation in Table 1. For mode 1, the first observation is that there exist several estimates that deviate from the nominal values, as shown on the first line. This is because in this case we choose the lowest order polynomials defined by the coefficients of $\bar{Z}_1(s)$, and the resulting polynomial system has multiple solutions, among which the nominal set is only one. Secondly, as expected from the above observation that only even powers of $\delta_1$ and $\omega_d$ enter the polynomial system, the signs of these parameters are indeterminate. In estimation mode 2, the estimate quality increases and, in fact, the only uncertainty is in the signs of $\delta_1$ and $\omega_d$. This is an example of how the choice of observable dictates the quality of the estimation.

In summary, we see that given noiseless measurement records, the above estimation algorithm can determine the unknown Hamiltonian and Lindblad parameters to within the limitations of the data (e.g., in the above example, the limitations were that the individual $\gamma_k$ and $\omega_k$ are not estimable and that the signs of $\delta_1$ and $\omega_d$ are not estimable).

## V Noisy measurements

In this section we investigate the robustness of the system identification algorithm by reexamining the two-qubit dissipative energy transfer example with noisy measurement traces. In principle, the noise on expectation values of observables can be made arbitrarily small, since the noise decreases as $1/\sqrt{M}$, where $M$ is the number of measurements that are averaged to estimate the expectation value of the observable. Because of this, measurement noise is especially small in ensemble systems like NMR Hou et al. (2014); however, in systems without natural access to ensembles, e.g., a single superconducting qubit, noise on time traces cannot be neglected in practical situations and we must assess the robustness of the above system identification algorithm to noise.

We consider the same system as in section IV, with access to time traces of one or both of the observables $\langle\sigma_z^1\rangle$ and $\langle\sigma_z^2\rangle$. Suppose that these time traces are corrupted by additive Gaussian noise

$$z_i^{\text{noisy}}(j) = \bar{z}_i(j) + \xi_i(j),$$

where the $\xi_i(j)$ are independent zero-mean Gaussian random variables with standard deviation $\sigma$, for all $i, j$. Since the expectation values are formed by averaging many independent measurement outcomes, this is a reasonable model for the noise by the Central Limit Theorem. We construct the Hankel matrix and perform the estimation in exactly the same way as in section IV, with the only difference being that $n$ is fixed to be 5 instead of the rank of the Hankel matrix $H_{rs}(0)$, since the order of the denominator polynomial on the left hand side of Eq. (18) (which takes the form in Eqs. (25) or (28) for this example) is 5.

To assess the quality of estimation, we compute the relative error in estimation for each of the seven parameters as

$$e_i = \left|\frac{\hat{\theta}_i - \theta_i}{\theta_i}\right| \times 100\%,$$

where $\theta_i$ and $\hat{\theta}_i$ are the nominal and estimated values of the $i$-th parameter, respectively. If the estimation produces multiple solutions, then we choose the one with the least sum of errors $\sum_i e_i$. We generate a number of instances of noise with given standard deviation $\sigma$, and calculate the estimation errors for each instance. These errors are then averaged to yield a mean relative error $\bar{e}_i$, which captures the performance of the algorithm. We evaluate these mean relative errors for several values of the noise standard deviation, as well as for the noiseless case ($\sigma = 0$). The initial state in all instances is the same as in section IV, and the only difference in parameters is that the duration of the measurement time traces is longer.

The first observation from this simulation (data not shown) is that there is a difference between modes 1 and 3 of estimation (and between modes 2 and 4) when the measurements are noisy; it is beneficial to use data from both measurement time traces, $\langle\sigma_z^1\rangle$ and $\langle\sigma_z^2\rangle$, to form the realization even if the polynomial system is formed from the Laplace transform of only one of the time traces. Therefore we present only estimation modes 3 and 4.

Figure 2 shows the mean relative error for each parameter as a function of the standard deviation $\sigma$ of the measurement noise. The left (right) column plots errors under mode 3 (mode 4) of estimation. The figure shows that the average estimation error is small for small $\sigma$, but quickly becomes quite large. An interesting feature is that the performance can be very different under modes 3 and 4 of estimation, with one performing better for some parameters and worse for others. The performance also varies significantly among parameters. Estimates of the Hamiltonian parameters exhibit the greatest robustness to measurement noise, while estimates of the open system parameters are more sensitive. In particular, estimates of the two parameters of smallest magnitude suffer the most from measurement noise. It is clear that this open system identification algorithm is not as robust as the Hamiltonian version Zhang and Sarovar (2014). However, in the low-noise regime relevant for NMR and other ensemble experimental platforms, the performance is acceptable, especially for the Hamiltonian parameters, which can be estimated well even in the presence of dissipation and dephasing.

## VI Discussion

We have extended the quantum system identification approach developed in Ref. Zhang and Sarovar (2014) for Hamiltonian systems to Markovian open quantum systems. The approach proceeds by forming a realization of the quantum system from (time-dependent) input-output data and then using this realization to form a system of polynomial equations for the parameters that define the system. The strengths of the approach are its ability to incorporate prior information and to produce parameter estimates even from time-dependent measurements of only a few observables.

As with the Hamiltonian case dealt with in Ref. Zhang and Sarovar (2014), having access to time-dependent data enables us to directly estimate the generator of dynamics, which has a significant advantage in that it is specified by fewer parameters than the dynamical map at a fixed time (e.g., Kraus map or unitary). In contrast to the Hamiltonian case, the Markovian open system identification scenario ideally requires the symbolic computation of a resolvent as expressed in Eq. (18), which can be computationally expensive for large systems. Alternatively, as mentioned in Section III.2 one could avoid the resolvent computation by using a power series expansion of the resolvent. This approach is equivalent to using the Markov parameters of an LTI system as the model realization invariant, as opposed to the Laplace transform of the output. Although this is computationally advantageous, we have found that the Markov parameters are more susceptible to measurement noise and therefore this approach is expected to yield less robust estimates of the parameters. As a consequence, we expect that symbolic computation of the resolvent will be the fundamental limitation to applying this approach to identification of large Markovian open quantum systems. However, as demonstrated in Sections A and IV, the required symbolic computation can be easily performed for small open systems, which are still difficult to identify using other approaches.
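The relationship between the resolvent and the Markov parameters can be made concrete with a short symbolic computation. The sketch below uses a generic two-dimensional LTI realization (an invented example, not the paper's two-qubit system): it computes the resolvent $(sI - A)^{-1}$ with SymPy and checks that the Markov parameters $CA^kB$ are the coefficients of the $1/s$ expansion of the transfer function $G(s) = C(sI - A)^{-1}B = \sum_k CA^kB\, s^{-(k+1)}$.

```python
import sympy as sp

s = sp.symbols('s')

# Small illustrative LTI realization (hypothetical 2x2 system).
A = sp.Matrix([[0, 1], [-2, -3]])
B = sp.Matrix([[0], [1]])
C = sp.Matrix([[1, 0]])

# Symbolic resolvent (s I - A)^{-1}, the object entering Eq. (18).
resolvent = (s * sp.eye(2) - A).inv()
G = sp.simplify((C * resolvent * B)[0, 0])  # transfer function C (sI-A)^{-1} B

# Markov parameters C A^k B are the coefficients of the 1/s expansion of G:
# G(s) = sum_k C A^k B / s^{k+1}.
markov = [(C * A**k * B)[0, 0] for k in range(3)]
m0 = sp.limit(s * G, s, sp.oo)             # coefficient of 1/s
m1 = sp.limit(s * (s * G - m0), s, sp.oo)  # coefficient of 1/s^2
print(G, markov, m0, m1)
```

The Markov parameters are obtained without ever inverting $(sI - A)$ symbolically, which is the computational advantage noted above; the trade-off is that these coefficients, extracted from sampled data, are more noise-sensitive than the full resolvent-based invariant.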

The noise robustness results in section V suggest that a direction for future work is to understand exactly why this realization-based system identification algorithm is more robust to measurement noise in the Hamiltonian (closed system) case than in the open system case. One clue is that Markovian open system parameters almost always dictate exponential rates of decay of measurement traces, and fitting parameters to decaying exponentials is a notoriously ill-conditioned problem Pereyra and Scherer (