Newton algorithm for Hamiltonian characterization in quantum control
Abstract
We propose a Newton algorithm to characterize the Hamiltonian of a quantum system interacting with a given laser field. The algorithm is based on the assumption that the evolution operator of the system is perfectly known at a fixed time. The computational scheme uses the Crank-Nicolson approximation to explicitly determine the derivatives of the propagator with respect to the Hamiltonians of the system. In order to globalize this algorithm, we use a continuation method that improves its convergence properties. This technique is applied to a two-level quantum system and to a molecular one with a double-well potential. The numerical tests show that accurate estimates of the unknown parameters are obtained in some cases. We discuss the numerical limits of the algorithm in terms of the basin of convergence and the non-uniqueness of the solution.
1 Introduction
The control of quantum systems by means of intense laser pulses has been a topic of increasing interest in the past decades [1, 2, 3]. It has now become a well-recognized field of research with applications ranging from chemistry and physics to material science and nanotechnology. In this context, several advances have been achieved, extending, on the theoretical side, from the discovery of elementary basic mechanisms of field-induced dynamics [1, 2] to optimal control algorithms [4, 5, 6, 7, 8, 9, 10, 11]. The recent progress of numerical optimization techniques has made possible the design of control fields able to manipulate quantum systems of growing complexity. However, to be efficient, such open-loop control methods require an accurate knowledge of the dynamics of the system [12, 13, 14, 15, 16, 17]. In this framework, on the basis of different measurements of the system prepared in different quantum states, quantum process tomography (QPT) is a set of techniques allowing one to identify the dynamical map which relates the initial states to the final ones (see, e.g., [25, 26, 27] and references therein). Such data can then be used to characterize the Hamiltonian or other parameters of the system [21, 22, 23, 24, 29, 30, 31, 32, 33]. In this paper, we assume that a full and ideal QPT is available on the system under consideration. This leads to a perfect knowledge of the evolution operator at a given fixed time. Starting from this identification, the goal of our approach is thus to determine the Hamiltonians of the system, i.e. the field-free Hamiltonian and the interaction operator. For that purpose, we also assume that a known time-dependent external field is applied to the system during the identification process. The framework of the method can be described more precisely as follows. We study a quantum system whose dynamics is ruled by the time-dependent Schrödinger equation. The control field is known at any time, only one control field being used.
From the final evolution operator of the system, we propose a Newton algorithm to compute the Hamiltonian of the system and the operator describing the interaction with the field. Note that the computational scheme uses the Crank-Nicolson approximation to explicitly determine the derivatives of the propagator with respect to the Hamiltonians of the system. A continuation method on the target state is used to improve the convergence properties of the algorithm. We also discuss the singularities that can occur in our method.
The paper is organized as follows. In Sec. 2, we present the theoretical framework and we introduce the numerical procedures that will be used. Section 3 is devoted to the application of this approach to two basic quantum systems, a two-level system and a molecular one with a double-well potential. Conclusion and prospective views are given in Sec. 4. Technical computations about the singularities of the Newton algorithm are reported in Appendix A.
2 Theoretical framework
We start with a presentation of the model system and we introduce the different numerical algorithms.
2.1 Hamiltonian characterization problem
Let us consider a quantum system interacting with an electromagnetic field, whose dynamics is governed by the timedependent Schrödinger equation. We consider one (scalar) control field but the method could be generalized to several fields. We assume that the field enters linearly in the Hamiltonian through a dipole coupling, the non linear interaction being neglected [34, 35]. In this framework, the evolution equation reads:
(1) $i\,\dot{U}(t) = \big(H_0 + E(t)\,H_1\big)\,U(t),$
where $U(t)$ is a unitary operator that describes the state of the system at time $t$, and $H_0$ and $H_1$ are the field-free and the coupling Hamiltonians, respectively. Note that the formalism presented here can be extended to pure or mixed quantum states. Without loss of generality, we assume that the entries on the diagonal of $H_1$ are zero, which is the case for a dipole coupling. The matrix $U(0)$ is the initial condition of the dynamical system. Units such that $\hbar = 1$ are used throughout the paper. The Hamiltonian characterization is an inverse engineering control problem where, for given initial and target states, $U(0)$ and $U_{\text{target}}$ respectively, and a known electromagnetic field $E(t)$, the goal consists in identifying the pair $(H_0, H_1)$ such that:
(2) $U(T) = U_{\text{target}},$
where $T$ is the fixed final time of the control. We introduce the mapping
(3) $F : (H_0, H_1) \mapsto U(T),$
where $H_0$ and $H_1$ belong to the sets of hermitian matrices and of hermitian matrices with diagonal entries equal to zero, respectively, and where $U(T)$ belongs to the set $\mathcal{U}(n)$ of unitary matrices of size $n \times n$. From a mathematical point of view, the operator characterization problem is equivalent to the investigation of the surjectivity of $F$, i.e. to determining the preimage of $U_{\text{target}}$ by the mapping $F$. Here, we focus on the case of real Hamiltonians, which covers a wide range of applications.
The Newton method used to solve the corresponding equations requires differentiating the function under consideration. For the sake of completeness, we recall that in the case of the mapping $F$ defined in Eq. (3), the differential is given by:
(4) $dF_{(H_0, H_1)} : (\delta H_0, \delta H_1) \mapsto \delta U(T),$
where, for a given pair $(H_0, H_1)$, $\delta U(T)$ belongs to $T_{U(T)}\,\mathcal{U}(n)$, the tangent space of $\mathcal{U}(n)$ at $U(T)$, the final state associated with the trajectory corresponding to $(H_0, H_1)$. This space is of dimension $n^2$ and is defined by $T_{U}\,\mathcal{U}(n) = \{ M U \mid M^\dagger = -M \}$. The evolution equation of $\delta U$ is obtained by differentiating Eq. (1):
(5) $i\,\delta\dot{U}(t) = \big(H_0 + E(t) H_1\big)\,\delta U(t) + \big(\delta H_0 + E(t)\,\delta H_1\big)\,U(t), \qquad \delta U(0) = 0.$
2.2 Numerical algorithms
Before defining the Newton solver, we introduce a relevant time discretization of Eq. (1).
2.2.1 Crank-Nicolson scheme
The approach is based on a Crank-Nicolson time discretization of Eq. (1). We now give some details about this numerical scheme. We consider an equidistant discretization grid of the time interval $[0, T]$ with $N$ sampling points, $\Delta t = T/N$ being the time step, and we denote by $U_n$, $0 \le n \le N$, the approximation of $U(t_n)$ at the grid point $t_n = n\Delta t$. The Crank-Nicolson algorithm is based on the following recursive relation:
(6) $i\,\dfrac{U_{n+1} - U_n}{\Delta t} = H_n\,\dfrac{U_{n+1} + U_n}{2},$

with $H_n = H_0 + E_n H_1$, where $E_n$ denotes the value of the field associated with the time step $[t_n, t_{n+1}]$. Equation (6) can be rewritten in a more compact form as follows:
(7) $\Big(\mathrm{Id} + \dfrac{i\Delta t}{2} H_n\Big)\, U_{n+1} = \Big(\mathrm{Id} - \dfrac{i\Delta t}{2} H_n\Big)\, U_n,$
where $\mathrm{Id}$ is the identity operator. This scheme provides a second-order approximation with respect to time, which enables the computation of an accurate approximation of the trajectory $t \mapsto U(t)$. In addition, note that the Crank-Nicolson propagator preserves unitarity, and hence the norm.
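As an illustration, the propagation step of Eqs. (6)-(7) can be sketched in a few lines of NumPy. The two-level Hamiltonians and the field below are placeholder values, not those used in the paper.

```python
import numpy as np

def crank_nicolson_step(U, H, dt):
    """One Crank-Nicolson step: (Id + i*dt/2*H) U_next = (Id - i*dt/2*H) U."""
    Id = np.eye(H.shape[0], dtype=complex)
    A = Id + 0.5j * dt * H
    B = Id - 0.5j * dt * H
    return np.linalg.solve(A, B @ U)

# Illustrative two-level example (H0, H1 and the field are placeholders).
H0 = np.array([[0.5, 0.0], [0.0, -0.5]])
H1 = np.array([[0.0, 1.0], [1.0, 0.0]])
field = lambda t: 0.1 * np.sin(t)

T, N = 10.0, 1000
dt = T / N
U = np.eye(2, dtype=complex)
for n in range(N):
    t_mid = (n + 0.5) * dt                       # field at the midpoint
    U = crank_nicolson_step(U, H0 + field(t_mid) * H1, dt)

# The Cayley transform is unitary, so the propagator stays unitary
# up to round-off error.
print(np.linalg.norm(U.conj().T @ U - np.eye(2)))
```

Since each step is an exact Cayley transform of a hermitian matrix, unitarity is preserved to machine precision regardless of the step size.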
Differentiating Eq. (7) with respect to $(H_0, H_1)$, we obtain:

(8) $\Big(\mathrm{Id} + \dfrac{i\Delta t}{2} H_n\Big)\,\delta U_{n+1} + \dfrac{i\Delta t}{2}\,\delta H_n\,U_{n+1} = \Big(\mathrm{Id} - \dfrac{i\Delta t}{2} H_n\Big)\,\delta U_n - \dfrac{i\Delta t}{2}\,\delta H_n\,U_n,$

where $\delta H_n = \delta H_0 + E_n\,\delta H_1$. Combining Eq. (6) and Eq. (8) provides:

(9) $\delta U_{n+1} = \Big(\mathrm{Id} + \dfrac{i\Delta t}{2} H_n\Big)^{-1}\Big[\Big(\mathrm{Id} - \dfrac{i\Delta t}{2} H_n\Big)\,\delta U_n - \dfrac{i\Delta t}{2}\,\delta H_n\,\big(U_n + U_{n+1}\big)\Big].$

The initial unitary operator being fixed, $\delta U_0 = 0$, and we get from Eq. (9):

(10) $\delta U_N = -\dfrac{i\Delta t}{2}\,\displaystyle\sum_{n=0}^{N-1}\, C_{N-1}\cdots C_{n+1}\,\Big(\mathrm{Id} + \dfrac{i\Delta t}{2} H_n\Big)^{-1}\,\delta H_n\,\big(U_n + U_{n+1}\big), \qquad C_m = \Big(\mathrm{Id} + \dfrac{i\Delta t}{2} H_m\Big)^{-1}\Big(\mathrm{Id} - \dfrac{i\Delta t}{2} H_m\Big).$
Note that the properties of the Crank-Nicolson scheme are crucial in the latter computation, since Eq. (10), which represents the central point of the algorithm used here, is based on the Crank-Nicolson relation, i.e. Eq. (6).
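The derivative obtained by differentiating the Crank-Nicolson relation can be checked against a finite-difference approximation. Below is a minimal sketch for a single scalar parameter theta scaling the coupling, an illustrative simplification of the full $(\delta H_0, \delta H_1)$ computation; all numerical values are placeholders.

```python
import numpy as np

def cn_matrices(H, dt):
    Id = np.eye(H.shape[0], dtype=complex)
    return Id + 0.5j * dt * H, Id - 0.5j * dt * H

def propagate_with_derivative(H0, H1, field, theta, T, N):
    """Propagate U and dU/dtheta for H(t) = H0 + theta*field(t)*H1.
    The recursion for dU comes from differentiating A U_{n+1} = B U_n
    with respect to theta (cf. Eqs. (8)-(9))."""
    dt = T / N
    dim = H0.shape[0]
    U = np.eye(dim, dtype=complex)
    dU = np.zeros((dim, dim), dtype=complex)
    for n in range(N):
        e = field((n + 0.5) * dt)
        H = H0 + theta * e * H1
        dH = e * H1                                  # dH/dtheta
        A, B = cn_matrices(H, dt)
        U_next = np.linalg.solve(A, B @ U)
        rhs = B @ dU - 0.5j * dt * dH @ (U + U_next)
        dU = np.linalg.solve(A, rhs)
        U = U_next
    return U, dU

H0 = np.diag([0.0, 1.0]).astype(complex)
H1 = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
field = lambda t: np.sin(t)
T, N, theta, eps = 5.0, 2000, 0.3, 1e-5

_, dU = propagate_with_derivative(H0, H1, field, theta, T, N)
Up, _ = propagate_with_derivative(H0, H1, field, theta + eps, T, N)
Um, _ = propagate_with_derivative(H0, H1, field, theta - eps, T, N)
fd = (Up - Um) / (2 * eps)            # central finite difference
print(np.linalg.norm(dU - fd))        # small, of order eps**2
```

The agreement with the central finite difference confirms that the derivative recursion is consistent with the discretized propagator itself, which is the point of basing Eq. (10) on the Crank-Nicolson relation.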
2.2.2 Newton method
In this section, we define the Newton method used to solve the characterization problem. Denoting by $F_{\Delta t}$ the time-discretized version of $F$ (see Eq. (3)) and by $(H_0^k, H_1^k)$ the iterate at step $k$, the Newton iteration applied to the equation

(11) $F_{\Delta t}(H_0, H_1) = U_{\text{target}}$

reads:

(12) $dF_{\Delta t}\big|_{(H_0^k, H_1^k)}\big(\delta H_0^k, \delta H_1^k\big) = U_{\text{target}} - U_N^k,$

where $\delta H_0^k$ and $\delta H_1^k$ are the correction terms added to $H_0^k$ and $H_1^k$ at step $k$ to define the next iterate $(H_0^{k+1}, H_1^{k+1}) = (H_0^k + \delta H_0^k,\, H_1^k + \delta H_1^k)$, and $U_N^k = F_{\Delta t}(H_0^k, H_1^k)$. Since the initial operator $U_0$ is fixed, we deduce that $\delta U_0 = 0$.
As in Eq. (4), the left-hand side of Eq. (12) corresponds to $\delta U_N$, and the latter equation gives rise to:

(13) $\delta U_N = U_{\text{target}} - U_N^k.$
Combining Eq. (10) and Eq. (13), we obtain
(14) 
Although the left-hand side of Eq. (14) is a hermitian operator, its right-hand side may not be hermitian. The right-hand side is therefore approximated by its hermitian part, defined by

(15) $M \mapsto \dfrac{1}{2}\big(M + M^\dagger\big).$
In spite of this approximation, numerical simulations reveal that the Hamiltonians of the system can be determined with a very good accuracy by this approach. The resulting equation is
(16) 
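The hermitian approximation of Eq. (15) is simply the hermitian part of a matrix, which is also the closest hermitian matrix in the Frobenius norm; a one-line NumPy sketch:

```python
import numpy as np

def hermitian_part(M):
    """Closest hermitian matrix to M in the Frobenius norm, Eq. (15)."""
    return 0.5 * (M + M.conj().T)

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Mh = hermitian_part(M)
print(np.linalg.norm(Mh - Mh.conj().T))   # Mh is hermitian
```

The optimality in the Frobenius norm follows from the orthogonality of the hermitian and anti-hermitian subspaces.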
Solving Eq. (16) with respect to $(\delta H_0, \delta H_1)$ requires the inversion of a linear system. In view of practical implementations, we rewrite Eq. (16) in a more explicit form. Denoting by $\mathrm{vec}(M)$ the vector representation of a given matrix $M$ in column-major order, Eq. (16) reads:
(17) 
where $\otimes$ denotes the Kronecker product and $M^T$ is the transpose of the matrix $M$.
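The vectorized form of Eq. (17) relies on the standard identity $\mathrm{vec}(AXB) = (B^T \otimes A)\,\mathrm{vec}(X)$ for column-major vectorization, which can be checked numerically:

```python
import numpy as np

def vec(M):
    # Column-major (Fortran-order) vectorization, as used in Eq. (17).
    return M.flatten(order="F")

rng = np.random.default_rng(1)
A, X, B = (rng.standard_normal((3, 3)) for _ in range(3))
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
print(np.linalg.norm(lhs - rhs))   # identity holds up to round-off
```

This identity is what turns the matrix equation (16) into a linear system on the vectorized unknowns.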
The symmetry properties of $\delta H_0$, $\delta H_1$ and of the matrices entering Eq. (16) induce redundancies in Eq. (17), so that the linear system cannot be directly solved in this form. Indeed, the equations deal with symmetric matrices (see Eq. (16)), so that one shall only consider the scalar equations that correspond, e.g., to the entries above the diagonal in Eq. (16). Since the matrices $\delta H_0$ and $\delta H_1$ are symmetric, there are also redundancies in the coefficients of the unknowns. As a consequence, the columns of the matrices involved in Eq. (17) have to be merged (by summation) when they correspond to the same unknown coefficient of $\delta H_0$ or $\delta H_1$. We denote by
(18) 
the system obtained from Eq. (17) after these reductions. In this reduced form, note that the number of equations is equal to the number of unknowns, i.e. $n(n+1)/2$ entries for $\delta H_0$ plus $n(n-1)/2$ entries for $\delta H_1$, that is $n^2$ in total. The Newton algorithm for the operator characterization problem can then be summarized as follows.
Algorithm 1
Given $U_{\text{target}}$ and an initial guess $(H_0^0, H_1^0)$:

Set $k = 0$ and compute the associated propagator $U_N^0$.

While $\|U_N^k - U_{\text{target}}\|$ is larger than a prescribed tolerance, do

Compute $(\delta H_0^k, \delta H_1^k)$ as the solution of Eq. (18).

Define the Hamiltonians for the next iteration by
$H_0^{k+1} = H_0^k + \delta H_0^k$ and $H_1^{k+1} = H_1^k + \delta H_1^k$.

Set $k = k + 1$ and compute $U_N^k$.

Set $(H_0, H_1) = (H_0^k, H_1^k)$.
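Algorithm 1 can be illustrated on a deliberately simplified variant of the characterization problem: recovering a time-independent real symmetric 2x2 Hamiltonian from its propagator at time $T$ (no control field). The sketch below uses a finite-difference Jacobian and a least-squares (Gauss-Newton) step instead of the analytic derivatives of Sec. 2.2.1; all numerical values are illustrative.

```python
import numpy as np

def sym(h):
    # h = (a, b, c) parametrizes the real symmetric matrix [[a, c], [c, b]].
    a, b, c = h
    return np.array([[a, c], [c, b]])

def U_of(h, T):
    # expm(-i H T) for real symmetric H, via eigendecomposition.
    w, V = np.linalg.eigh(sym(h))
    return (V * np.exp(-1j * w * T)) @ V.T

def residual(h, U_target, T):
    d = U_of(h, T) - U_target
    return np.concatenate([d.real.ravel(), d.imag.ravel()])

T = 1.0
h_true = np.array([0.4, -0.2, 0.3])
U_target = U_of(h_true, T)

h = h_true + 0.05 * np.array([1.0, -1.0, 0.5])   # perturbed initial guess
eps = 1e-7
for _ in range(20):
    r = residual(h, U_target, T)
    # Finite-difference Jacobian, column by column.
    J = np.column_stack([
        (residual(h + eps * e, U_target, T) - r) / eps
        for e in np.eye(3)])
    h = h + np.linalg.lstsq(J, -r, rcond=None)[0]   # Gauss-Newton step

print(np.abs(h - h_true).max())   # the Hamiltonian is recovered
```

Starting close enough to the solution, the iteration recovers the parameters to high accuracy, mirroring the quadratic convergence reported below for the full algorithm.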
2.2.3 Continuation method
It is well known that the convergence of the Newton algorithm is guaranteed only under restrictive conditions [36]. In particular, the method may not converge when the initial guess is not close enough to the solution. If no accurate approximate solution of the problem is known, obtaining convergence of the Newton algorithm can be difficult. To bypass this difficulty, we propose a globalization strategy based on a continuation method [38, 39].
Before summarizing the continuation method itself, we first introduce the key idea of the strategy. Any unitary target can be written as $U_{\text{target}} = \exp(i H_{\text{target}})$ with $H_{\text{target}}$ hermitian. For such a hermitian operator, there exist a real symmetric matrix $A$ and a real antisymmetric matrix $B$ such that

(19) $H_{\text{target}} = A + iB.$
The matrices $A$ and $B$ can be computed easily by means of a standard eigenvector solver. The continuation method consists in solving iteratively the operator identification problem for intermediate target states defined as:

(20) $U_{\text{target}}^{(j)} = \exp\Big(i\,\dfrac{j}{N}\,(A + iB)\Big), \qquad j = 0, \dots, N,$

where $N$ is the number of intermediate targets. Figure 1 illustrates this decomposition of the target operator into intermediate targets.
Each step uses the Hamiltonians obtained at the previous step as an initial guess in the Newton solver.
The crucial point is that a solution is known analytically for the target state corresponding to $j = 0$. Indeed, the pair $(H_0, H_1) = (0, 0)$ is a solution of Eq. (2) when the right-hand side coincides with the identity, $0$ denoting here the zero matrix.
We summarize the iterative algorithm of the continuation method as follows:
Algorithm 2
Given $U_{\text{target}}$, $A$, $B$ and $N$, set $j = 0$ and $(H_0, H_1) = (0, 0)$. While $j \le N$, do

Define the intermediate target $U_{\text{target}}^{(j)} = \exp\big(i\,\tfrac{j}{N}(A + iB)\big)$.

Solve the operator identification problem for this target by means of Algorithm 1 with the initial guess $(H_0, H_1)$, and denote the solution again by $(H_0, H_1)$.

Set $j = j + 1$.
Note that only the knowledge of the initial and target operators and of the final time $T$ is required. Through this strategy, the problem is decomposed into a set of smaller problems of the same nature which are solved sequentially with the Newton algorithm.
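The construction of the intermediate targets of Eq. (20) amounts to taking fractional powers of the unitary target, which can be done with a standard eigensolver. A sketch is given below; the splitting into symmetric and antisymmetric parts is not needed for this step and is omitted, and the target matrix is an illustrative placeholder.

```python
import numpy as np

def intermediate_targets(U_target, N):
    """Targets U_j = exp(i*(j/N)*H) with H = -i*log(U_target) hermitian,
    obtained from the eigendecomposition of the unitary target."""
    w, V = np.linalg.eig(U_target)       # eigenvalues lie on the unit circle
    phases = np.angle(w)                 # eigenvalues of H
    return [(V * np.exp(1j * (j / N) * phases)) @ np.linalg.inv(V)
            for j in range(N + 1)]

# Illustrative unitary target (not taken from the paper).
H = np.array([[0.3, 0.2 - 0.1j], [0.2 + 0.1j, -0.4]])
w, V = np.linalg.eigh(H)
U_target = (V * np.exp(1j * w)) @ V.conj().T

targets = intermediate_targets(U_target, 5)
print(np.linalg.norm(targets[0] - np.eye(2)))    # first target: identity
print(np.linalg.norm(targets[-1] - U_target))    # last target: U_target
```

The chain starts at the identity, for which the zero Hamiltonians are an exact solution, and ends at the actual target.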
Remark 1
The pair $(0, 0)$ is an exact solution of Eq. (2) for the first intermediate target (the identity). As a consequence, these matrices may not solve Eq. (11), which includes an approximation due to the time discretization. Instead, one may use Algorithm 1 with this target to compute accurately a solution associated with the discretized setting, the pair $(0, 0)$ being used as an initial guess. Moreover, this strategy accelerates the solving of the following step by providing a better initial guess.
Finally, note that the possible singularities of the Newton method are discussed in Appendix A.
3 Application to quantum systems
To test the efficiency of our procedures, we consider the problem of the Hamiltonian identification on two key examples.
3.1 A driven two-level quantum system
As a first example, we consider a two-level quantum system. The Hamiltonian for a two-level atom driven resonantly by a laser field in the rotating-wave approximation reads [37]
(21) 
where $\Delta$ is the detuning term, i.e. the difference between the laser frequency and the frequency of the two-level transition. The envelope of the control field is assumed to be of the form
(22) 
where the pulse shape entering Eq. (22) is defined by
(23) 
The strength of the dipole coupling $\mu$ and the final propagation time $T$ are fixed in atomic units.
3.1.1 Test of the Newton algorithm
The Hamiltonian considered in Eq. (21) can be decomposed as

(24) $H(t) = H_0 + E(t)\,H_1,$
with
(25) 
The test of the Newton algorithm is performed as follows:

We consider the Hamiltonian given in Eq. (21) and the initial state
(26) $U(0) = \mathrm{Id}.$
We compute the corresponding final state $U(T)$ for the given final time $T$. The states $U(0)$ and $U(T)$ are respectively chosen as the initial and target states of the Newton algorithm, so that $(H_0, H_1)$ is the solution of the characterization problem.

Secondly, we start the Newton algorithm from an initial guess of the form:
(27) $\big(H_0 + \alpha P_0,\; H_1 + \alpha P_1\big),$
where $P_0$ and $P_1$ are random matrices. More precisely, these matrices are determined such that they have the same symmetry properties as $H_0$ and $H_1$, i.e. $P_0$ and $P_1$ are respectively real symmetric and real symmetric with zero diagonal. The parameter $\alpha$ represents the magnitude of the perturbation.
Since for $\alpha = 0$ the solution $(H_0, H_1)$ is known, the goal here is to analyze the convergence of the Newton procedure with respect to $\alpha$ when the initial guess is given by Eq. (27). In the following, the detuning $\Delta$ is set to a fixed value in atomic units. The convergence of the algorithm is analyzed by averaging over 15 random initial guesses.
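The randomly perturbed initial guesses of Eq. (27) must respect the symmetry classes of $(H_0, H_1)$. A possible implementation is sketched below; the interval of the random entries is an assumption.

```python
import numpy as np

def random_perturbations(n, alpha, rng):
    """Random perturbations sharing the symmetry classes of (H0, H1):
    P0 real symmetric, P1 real symmetric with zero diagonal."""
    P0 = rng.uniform(-1.0, 1.0, (n, n))
    P0 = 0.5 * (P0 + P0.T)              # real symmetric
    P1 = rng.uniform(-1.0, 1.0, (n, n))
    P1 = 0.5 * (P1 + P1.T)
    np.fill_diagonal(P1, 0.0)           # zero diagonal, as for a dipole coupling
    return alpha * P0, alpha * P1

rng = np.random.default_rng(42)
P0, P1 = random_perturbations(2, 0.1, rng)
```

Enforcing the symmetry classes keeps the perturbed guess inside the search space of the Newton solver.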
We first investigate the convergence of the Newton algorithm with respect to the number of iterations for a fixed value of $\alpha$. At each iteration of the Newton algorithm, the deviation from the solution $(H_0, H_1)$ is measured by taking $-\log_{10}$ of the norm of the difference between $H$ and $H^k$, where $k$ is the iteration index (see Algorithm 1) and $H$ stands for $H_0$ or $H_1$. The matrix norm is defined here as the maximum absolute value of the eigenvalues of the matrix. For the chosen value of $\alpha$, Table 1 illustrates the typical quadratic convergence of a Newton algorithm [36].
k | $-\log_{10}$ of the error norms (three convergence measures, see text)
1 | 3.5924012132 | 0.9529042109 | 0.2100714376
2 | 3.7008867006 | 0.9439569383 | 1.0991134448
3 | 3.7090187513 | 0.9508178308 | 2.6736586007
4 | 4.8848725497 | 2.1267494911 | 5.4770040955
5 | 7.2658303406 | 4.5077075456 | 7.2259624695
6 | 11.5917321736 | 8.8341960474 | 9.3609642102
7 | 14.3721229804 | 11.6141682339 | 12.7681301304
8 | 15.4905077048 | 12.9092171963 | 14.3451244509
9 | 14.6314758067 | 13.8960673510 | 14.5478814535
Starting from a randomly perturbed Hamiltonian, the initial pair $(H_0, H_1)$ is recovered after a few iterations.
Figure 2 shows the convergence behavior of the Newton algorithm as a function of the parameter $\alpha$. Here again, the accuracy is evaluated by taking $-\log_{10}$ of the norm of the difference between the Hamiltonians obtained after $k$ iterations of Algorithm 1 and the solution of the problem. As could be expected, the algorithm is efficient for small values of $\alpha$. In Fig. 2, we observe three cases of convergence behavior:

Convergence to the initial solution $(H_0, H_1)$ for the smallest values of $\alpha$: for small perturbations, the algorithm is able to find the expected solution.

Convergence to another pair of Hamiltonians for intermediate values of $\alpha$. It can be seen in Fig. 2 that the target is reached with a fidelity close to 1. However, the Hamiltonians found by the algorithm are not exactly the same as the initial ones. This result is a signature of the non-uniqueness of the solution.

Convergence failure for larger perturbations: the Newton algorithm fails to converge. This indicates that the basin of convergence is very narrow.
3.1.2 Test of the continuation procedure
In this section, the convergence of the continuation algorithm is tested on the previous two-level quantum system. We keep the same target state and the same pair of Hamiltonian solutions as in Sec. 3.1. The convergence behavior is illustrated in Fig. 3 for $N$ intermediate target states.
At each iteration $j$ of the continuation algorithm, a very good convergence of the initial state to the corresponding intermediate target is reached, as can be seen in Fig. 3(a). For each iteration $j$, the number of required iterations of the Newton algorithm is of the order of 12. In the lower panel, except for the last iterations, the deviation of the field-free and interaction Hamiltonians from the solutions of the problem remains large. However, at the final step, the solution obtained is approximately the expected one.
3.2 Driven double-well potential
As a second example, we consider a particle of mass $m$ (in atomic units) moving in an asymmetric double-well potential. Figure 4 displays the potential energy curve of the one-dimensional molecular system used in the computations.
The timedependent Hamiltonian governing the dynamics of the system is given by
(28) 
where $x$ is the reaction coordinate and $E(t)$ is the electric field. The final time $T$, of the order of a picosecond, is fixed. To solve the Schrödinger equation, we use as basis set the eigenvectors of the field-free Hamiltonian $H_0$ defined in Eq. (28). We consider the control field that transfers the population from the ground eigenstate to a given excited eigenstate:
(29) 
where $\mu_{01}$ is the matrix element of the coupling Hamiltonian associated with the two eigenstates involved in the transfer and $\omega_{01}$ is the corresponding energy difference. Figure 5 shows the time evolution of the populations of the two states and the corresponding control field. As in the case of the two-level quantum system, we apply the Newton algorithm by considering a target operator $U_{\text{target}} = U(T)$, where $U(T)$ is the final state obtained with the field of Eq. (29). We consider a finite Hilbert space of dimension $n$, which is sufficient for the intensity of the electric field used here. Table 2 illustrates the convergence behavior of the Newton algorithm.
k | $-\log_{10}$ of the error norms (three convergence measures, see text)
1 | 1.1548028126 | 5.5961244426 | 2.1831389602
2 | 3.1579618260 | 6.3171771473 | 2.1983453573
3 | 3.1952040375 | 7.9458560471 | 4.2173783130
4 | 6.9904587028 | 10.7407123305 | 6.3911179031
5 | 9.9359223447 | 11.7810549956 | 7.2643429329
6 | 11.4100541376 | 12.2935308612 | 7.6346232913
7 | 11.7710430315 | 12.2099673745 | 7.6105889426
8 | 11.8271010726 | 12.3011508009 | 7.6735489057
9 | 11.8027650635 | 12.2728029350 | 7.6204277008
10 | 11.9229382795 | 12.2763162205 | 7.5877013449
11 | 11.8190661799 | 12.2092613111 | 7.5493066079
After a few iterations, the target is reached with a high accuracy. However, the Hamiltonians found by the algorithm are slightly different from the expected ones.
Note that the convergence to the target state is obtained only for very small values of the perturbation strength $\alpha$. We have also observed that the convergence behavior of the Newton algorithm depends on the size of the Hilbert space: the admissible perturbation strength varies with the dimension $n$.
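The truncation to a finite eigenbasis used above can be sketched as follows for a generic one-dimensional model. The potential, the mass and the dipole operator below are illustrative placeholders, not the paper's values.

```python
import numpy as np

# Illustrative asymmetric double well (placeholder potential and mass).
m = 1.0
V = lambda x: x**4 - 4.0 * x**2 + 0.2 * x

n_grid, L = 400, 4.0
x = np.linspace(-L, L, n_grid)
dx = x[1] - x[0]

# Field-free Hamiltonian H0 = p^2/(2m) + V(x), second-order finite differences.
kin = (-np.eye(n_grid, k=1) + 2 * np.eye(n_grid)
       - np.eye(n_grid, k=-1)) / (2 * m * dx**2)
H0_grid = kin + np.diag(V(x))

w, Phi = np.linalg.eigh(H0_grid)        # eigenvalues in ascending order

# Truncation: keep the n lowest eigenstates; project the dipole operator
# (assumed here to be the position x) onto this subspace.
n = 6
H0 = np.diag(w[:n])                     # H0 is diagonal in its eigenbasis
H1 = Phi[:, :n].T @ np.diag(x) @ Phi[:, :n]

print(np.allclose(H1, H1.T))            # symmetric coupling matrix
```

The dynamics is then solved in this $n$-dimensional basis, which is legitimate as long as the field does not populate the discarded levels.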
The convergence and the efficiency of the continuation method are also analyzed in the case of the double-well model, for which we have considered the first $n$ eigenvectors. For the given initial operator $U(0)$, the goal is to identify a field-free and a coupling Hamiltonian which drive the system to $U_{\text{target}}$ under the interaction with the electric field. The target is decomposed into $N$ intermediate target operators. Each intermediate target is reached after about 15 iterations of the Newton algorithm. As in Fig. 3, the upper panel of Fig. 6 shows the deviation of the propagator from the intermediate targets as a function of the number of iterations $j$. Very good results are observed, showing the efficiency of the approach. The middle and lower panels of Fig. 6 display the deviations of the solutions $H_0$ and $H_1$ obtained with the continuation algorithm from the expected Hamiltonians.
At each iteration of the continuation algorithm, an accurate convergence to the intermediate target states is reached. As opposed to the previous example, the Hamiltonian solution obtained with the continuation method is in this case different from the expected one. Here again, this behavior illustrates the non-uniqueness of the solution.
3.3 Discussion
We discuss in this final paragraph the numerical cost of the algorithm with respect to the complexity of the quantum system under consideration. To illustrate this analysis, we have plotted in Fig. 7 the computer (CPU) time used. More precisely, Fig. 7 displays the CPU time for the two models as a function of the number of iterations of the continuation method. For a given number of iterations, we see that the computational time roughly increases by a factor of 3 from two to six quantum energy levels, that is a quasi-linear increase of the time with the number of levels. However, note that the total duration of the computation is multiplied by a factor of 6 when going from 6 to 12 energy levels. This rapid growth of the CPU time shows that the algorithm cannot, as it stands, be applied to more complex systems having, for instance, several dozens of energy levels. The current numerical procedure is therefore too costly, and improvements will be required in order to extend this approach to a wider family of quantum systems.
4 Conclusion
We have proposed a Newton algorithm to characterize the Hamiltonians of a quantum system in a dynamical setting, i.e. when a control field is applied to the system. A crucial prerequisite of the computational scheme is the perfect knowledge of the evolution operator of the system at a given time. The procedure can be combined with a continuation method in order to enlarge its basin of convergence. We have demonstrated the efficiency of this technique on two key examples, namely a two-level quantum system and a simple molecular model described by a double-well potential. This work also provides important insights into the different features of this algorithm, such as the size of the basin of convergence and the non-uniqueness of the solution. Such drawbacks could be removed by considering overdetermined data, which is not the case in this paper. The general applicability of the method makes it an interesting and possibly useful tool to complement techniques of quantum process tomography.
In this work, we have assumed that the evolution operator is perfectly known at a given time, without any error. This ideal model has allowed us to highlight the properties of the numerical algorithm. At this point, in view of applications to more realistic physical systems, the question which naturally arises is the generalization of this approach to a non-ideal situation in which the propagator can only be estimated up to a given accuracy. Another open question would be to consider more complicated quantum systems, such as molecular ones with a large number of energy levels. This would require a modification of the present algorithm in order to avoid the explosion of the computational time shown in Fig. 7. We are currently working on these open questions.
Acknowledgments
Financial support from the Conseil Régional de Bourgogne, the QUAINT coordination action (EC FET-Open) and the Agence Nationale de la Recherche (ANR), Projet Blanc EMAQS number ANR2011BS0101701, is gratefully acknowledged.
Appendix A Singularity problem of the Newton method
The Newton method is based on the inversion of a Jacobian matrix, and consequently can lead to a numerical explosion if the rank of the latter becomes deficient. For the continuation method discussed in this paper, we can find examples of intermediate targets for which singularities occur. We describe such an example. Consider the case of the two-level quantum system presented in Sec. 3.1 with a target defined by
(30) 
After decomposition into intermediate targets, the continuation algorithm starts with the pair $(H_0, H_1) = (0, 0)$ (see Step 1 of the summary of the continuation method). For the target given by Eq. (30), the corresponding Hamiltonian is
(31) 
where $V$ denotes the matrix of eigenvectors and $V^\dagger$ its adjoint. At each time, the system is described by
(32) 
In the next step of the continuation algorithm (see Step 2 of the summary of the continuation method), the Newton algorithm is used to determine $\delta H_0$ and $\delta H_1$ from Eq. (16). In the following, we prove that the Newton algorithm cannot be applied because of the occurrence of singularities. First, let us recall that the operator identification problem is locally equivalent to the study of the surjectivity of the mapping $F$ of Eq. (3). As proved in Ref. [33], this property can be deduced from the surjectivity of its differential, i.e. for a given $\delta U$, the equation
(33) 
has a solution [33]. By inserting Eq. (32) into Eq. (33), we obtain:
(34) 
with the corresponding shorthand notations. Using the symmetry properties of $\delta H_0$ and $\delta H_1$, we can express them as:
(35) 
Here, we consider that the electric field is given by Eq. (23). With this assumption, plugging Eq. (35) into Eq. (34) and evaluating the integral in the latter equation, we arrive at:
(36) 
It is possible to rewrite the left- and right-hand sides of Eq. (36) in column-vector form:
(37) 
where $\mathrm{vec}(M)$ denotes the vector representation of the matrix $M$. We can then rewrite Eq. (37) as:
(38) 
The matrix on the left-hand side of Eq. (38) has rank 3 and hence is not invertible. Consequently, the solution of Eq. (33) cannot be obtained by a matrix inversion. We conclude that the use of the Newton approach to compute $\delta H_0$ and $\delta H_1$ leads to singularities in this example.
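In a practical implementation, such singular cases can be detected by monitoring the rank of the Jacobian before inverting it, for instance by falling back to a minimum-norm least-squares step. Below is a sketch with an illustrative rank-3 matrix (not the matrix of Eq. (38)).

```python
import numpy as np

def newton_step(J, r, tol=1e-10):
    """Solve J*delta = -r; fall back to a minimum-norm least-squares step
    when the Jacobian is rank-deficient."""
    rank = np.linalg.matrix_rank(J, tol=tol)
    if rank < J.shape[1]:
        # Rank-deficient: pseudo-inverse step instead of a direct inversion.
        return np.linalg.lstsq(J, -r, rcond=None)[0]
    return np.linalg.solve(J, -r)

# An illustrative rank-3 4x4 matrix with a consistent right-hand side.
J = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 1., 1.],
              [0., 0., 1., 1.]])
r = np.array([1., 2., 3., 3.])
delta = newton_step(J, r)
print(np.linalg.norm(J @ delta + r))   # consistent system: residual vanishes
```

Such a guard does not remove the underlying singularity, but it prevents the numerical explosion mentioned above.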
References
 [1] S. Rice and M. Zhao: Optimal Control of Quantum Dynamics (Wiley, New York, 2000).
 [2] M. Shapiro and P. Brumer: Principles of Quantum Control of Molecular Processes (Wiley, New York, 2003).
 [3] C. Brif, R. Chakrabarti and H. Rabitz, New J. Phys. 12, 075008 (2010).
 [4] M. Lapert, Y. Zhang, M. Braun, S. J. Glaser and D. Sugny, Phys. Rev. Lett. 104, 083001 (2010).
 [5] J. Salomon, M2AN 41, 77 (2007).
 [6] L. Baudouin, J. Salomon, Syst. Cont. Lett. 57, 453 (2008).
 [7] Y. Maday, J. Salomon, G. Turinici, Num. Math. 103, 323 (2006).
 [8] L. Pontryagin et al., Mathematical Theory of Optimal Processes (Mir, Moscow, 1974).
 [9] R. Kosloff, S. A. Rice, P. Gaspard, S. Tersigni and D. Tannor, Chem. Phys. 139, 201 (1989).
 [10] Y. Maday and G. Turinici, J. Chem. Phys. 118, 8191 (2003).
 [11] W. Zhu and H. Rabitz, J. Chem. Phys. 111, 472 (1999).
 [12] N. I. Gershenzon, K. Kobzar, B. Luy, S. J. Glaser and T. E. Skinner, J. Magn. Reson. 188, 330 (2007).
 [13] Y. Zhang, M. Lapert, M. Braun, D. Sugny and S. J. Glaser, J. Chem. Phys. 134, 054103 (2011).
 [14] E. Assémat, M. Lapert, Y. Zhang, M. Braun, S. J. Glaser and D. Sugny, Phys. Rev. A 82, 013415 (2010).
 [15] G. Turinici and H. Rabitz, Phys. Rev. A 70, 063412 (2004).
 [16] T. Viellard, F. Chaussard, D. Sugny, B. Lavorel and O. Faucher, J. Raman Spec. 39, 694 (2008).
 [17] D. Pentlehner, J. H. Nielsen, A. Slenczka, K. Molmer and H. Stapelfeldt, Phys. Rev. Lett. 110, 093002 (2013).
 [18] J. B. Altepeter, D. Branning, E. Jeffrey, T. C. Wei, P. G. Kwiat, R. T. Thew, J. L. O'Brien, M. A. Nielsen and A. G. White, Phys. Rev. Lett. 90, 193601 (2003).
 [19] M. Mohseni, A. T. Rezakhani and D. A. Lidar, Phys. Rev. A 77, 032322 (2008).
 [20] A. Shabani, R. L. Kosut, M. Mohseni, H. Rabitz, M. A. Broome, M. P. Almeida, A. Fedrizzi and A. G. White, Phys. Rev. Lett. 106, 100401 (2011).
 [21] S. G. Schirmer, A. Kolli and D. K. L. Oi, Phys. Rev. A 69, 050306(R) (2004).
 [22] J. M. Geremia and H. Rabitz, J. Chem. Phys. 118, 5369 (2003).
 [23] X.J. Feng, H. Rabitz, G. Turinici and C. Le Bris, J. Phys. Chem. A 110, 7755 (2006).
 [24] J. M. Geremia and H. Rabitz, Phys. Rev. Lett. 89, 263902 (2002).
 [25] J. B. Altepeter, D. Branning, E. Jeffrey, T. C. Wei, P. G. Kwiat, R. T. Thew, J. L. O'Brien, M. A. Nielsen and A. G. White, Phys. Rev. Lett. 90, 193601 (2003).
 [26] M. Mohseni, A. T. Rezakhani and D. A. Lidar, Phys. Rev. A 77, 032322 (2008).
 [27] A. Shabani, R. L. Kosut, M. Mohseni, H. Rabitz, M. A. Broome, M. P. Almeida, A. Fedrizzi and A. G. White, Phys. Rev. Lett. 106, 100401 (2011).
 [28] N. Boulant, T. F. Havel, M. A. Pravia and D. G. Cory, Phys. Rev. A 67, 042322 (2003).
 [29] C. Le Bris, M. Mirrahimi, H. Rabitz and G. Turinici, ESAIM: COCV 13, 378 (2007).
 [30] O. F. Alis, H. Rabitz, M. Q. Phan, C. Rosenthal and M. Pence, J. Math. Chem. 35, 65 (2004).
 [31] N. Shenvi, J. M. Geremia and H. Rabitz, J. Phys. Chem. A 106, 12315 (2002).
 [32] M. Tadi and H. Rabitz, J. Guid. Control Dyn. 20, 486 (1997).
 [33] P. Laurent, H. Rabitz, J. Salomon and G. Turinici, Proceedings of the 51st IEEE Conference on Decision and Control, Maui, 10-13 December 2012.
 [34] T. Kanai and H. Sakai, J. Chem. Phys. 115, 5492 (2001).
 [35] R. Tehini and D. Sugny, Phys. Rev. A 77, 023407 (2008).
 [36] A. E. Bryson and Y. C. Ho, Applied Optimal Control (1975).
 [37] L. Allen and J. H. Eberly, Optical Resonance and Two-Level Atoms (Dover, New York, 1988).
 [38] B. Bonnard, N. Shcherbakova and D. Sugny, ESAIM: COCV 17, 262 (2011).
 [39] E. Trélat, J. Optim. Theory Appl. 154, 713 (2012).