
# Time-Optimal Collaborative Guidance Using the Generalized Hopf Formula

Matthew R. Kirchner, Robert Mar, Gary Hewer, Jérôme Darbon, Stanley Osher, Y. T. Chow

This research was supported by the Office of Naval Research, ILIRs 4764 and 5100, and Office of Naval Research grants N00014-16-12119 and N00014-16-12157.

M. Kirchner and G. Hewer are with the Image and Signal Processing Branch, Research Directorate, Code 4F0000D, Naval Air Warfare Center Weapons Division, China Lake, CA 93555, USA ({matthew.kirchner, gary.hewer}@navy.mil). R. Mar is with the Guidance, Navigation, and Control Branch, Weapons and Energetics Department, Code 472100D, Naval Air Warfare Center Weapons Division, China Lake, CA 93555, USA (robert.t.mar@navy.mil). J. Darbon is with the Division of Applied Mathematics, Brown University, Providence, RI 02912, USA (jerome_darbon@brown.edu). S. Osher and Y. T. Chow are with the Department of Mathematics, University of California, Los Angeles, CA 90095, USA ({sjo, ytchow}@math.ucla.edu).
###### Abstract

Presented is a new method for calculating the time-optimal guidance control for a multiple vehicle pursuit-evasion system. A joint differential game of pursuing vehicles relative to the evader is constructed, and a Hamilton–Jacobi–Isaacs (HJI) equation that describes the evolution of the value function is formulated. The value function is built such that the terminal cost is the squared distance from the boundary of the terminal surface. Additionally, all vehicles are assumed to have bounded controls. Typically, a joint state space constructed in this way would have too large a dimension to be solved with existing grid-based approaches. The value function is computed efficiently in high-dimensional space, without a discrete grid, using the generalized Hopf formula. The optimal time-to-reach is iteratively solved, and the optimal control is inferred from the gradient of the value function.

## I Introduction

One of the first successful implementations of control laws for pursuit problems is proportional navigation (PN) [29], which attempts to drive the rate of the line-of-sight vector between the pursuer and the evading target vehicle to zero. In this derivation, the target vehicle is assumed to be moving, but not maneuvering (turning). Generalizations of this concept attempt to estimate the vehicle maneuver [30], but these methods are not optimal since the evasion strategy is not considered, i.e. the problem is not formulated as a differential game [20]. Additionally, this family of control laws does not account for control saturation. PN typically requires the magnitude of the control bound of the pursuer to be much greater than that of the evader to be successful, on the order of 3-5 times greater [30]. These guidance laws are strictly one-on-one in nature, and do not readily generalize to collaborative systems of multiple vehicles where the desired pursuit guidance is to "team" together to capture a target. These early pursuit problems typically referred to controller designs as guidance laws, and in this letter we use the terms controller and guidance interchangeably.

More recently, [31] proposed a solution to multi-vehicle pursuit evasion in a plane. In this case the problem was solved sub-optimally with heuristics in an effort to avoid the computational burden of direct solution to the Hamilton-Jacobi equation. Additionally, the method was based on simplified, single-integrator dynamics that require the vehicles to maneuver instantaneously to ensure capture.

A general alternative is to formulate the pursuit-evasion problem as a differential game, and derive a Hamilton–Jacobi–Isaacs (HJI) equation representing the optimal cost-to-go of the system. Traditionally, numerical solutions to HJI equations require a dense, discrete grid of the solution space [28, 26, 27]. Computing the elements of this grid scales poorly with dimension and has limited use for problems of dimension greater than four. The exponential dimensional scaling in optimization is sometimes referred to as the "curse of dimensionality" [5, 4]. This phenomenon is seen clearly in [19], which formulated a differential game for a capture-the-flag problem and solved it numerically on a four-dimensional grid with the level set toolbox [25]. The computational time was as much as 4 minutes, too slow for real-time application, even with a coarsely sampled grid of 30 points in each dimension and with low numeric accuracy. When the grid is increased to 45 points in each dimension with high numeric accuracy, the computation time jumps to an hour.

Recent research [11] has discovered numerical solutions based on the generalized Hopf formula that do not require a grid and can be used to efficiently compute solutions to a certain class of Hamilton–Jacobi equations that arise in linear control theory and differential games. This readily allows the generalization with pursuit-evasion to collaborative guidance of multiple pursuing vehicles.

This letter presents a new method for multi-vehicle collaborative pursuit guidance of a maneuvering target, showing that teams of vehicles can intercept the target without requiring a drastically higher control bound as in the family of methods in [30]. A joint system state space representing the kinematics of all pursuing vehicles relative to the target was constructed, the dimension of which makes it infeasible for traditional grid-based methods. This high-dimensional problem was then efficiently solved using the generalized Hopf formula, and included the constraint of time-varying bounds on the magnitude of available vehicle control, while ensuring intercept when starting within the reachable set.

The rest of the paper is organized as follows. We derive the models used in the study in Sec. II, followed by the presentation of efficient solution techniques that employ the generalized Hopf formula to solve the Hamilton–Jacobi equations for optimal control and differential games in Sec. III. The application of these methods to collaborative guidance is given in Sec. IV, followed by results on a planar, multiple vehicle pursuit-evasion game in Sec. V.

## II Pursuit-Evasion Model

### II-A Single Vehicle Model

First consider the pursuit-evasion game with only a single pursuer. We construct a state space representation of the position and orientation of the pursuer relative to the evader, with geometry shown in Fig. 1. With relative state $x=(\delta_x,\delta_y,\delta_\theta)$, the relative system becomes

$$\dot{x}(t)=\begin{bmatrix}V_p\cos(\delta_\theta)-V_e+\delta_y\dfrac{a_e}{V_e}\\[4pt] V_p\sin(\delta_\theta)-\delta_x\dfrac{a_e}{V_e}\\[4pt] \dfrac{a_p}{V_p}-\dfrac{a_e}{V_e}\end{bmatrix}, \tag{1}$$

with $V_p$ and $V_e$ representing the forward speed of the pursuer and evader, respectively. The terms $a_p$ and $a_e$ are the lateral acceleration inputs to the system for each vehicle. These accelerations are limited by the physical maneuver capabilities of the vehicles. This system is based on the relative state [27] of two modified Dubins car [33, 12] models, with acceleration instead of the more common turn rate input. Additionally, we constructed this system to be evader centric, allowing for the addition of multiple pursuers. Denoting by $\dagger$ the transpose of a matrix, we introduce the new state vector $x=(\delta_x,\delta_y,\delta_{v_x},\delta_{v_y})^{\dagger}$, where $\delta_x$ and $\delta_y$ are the positional displacements separating the vehicles (see Figure 1), and $\delta_{v_x}$ and $\delta_{v_y}$ are the relative horizontal and vertical velocities. We proceed to linearize the system with

$$\dot{x}(t)=\begin{bmatrix}0_2 & I_2\\ 0_2 & 0_2\end{bmatrix}x(t)+\begin{bmatrix}0\\0\\0\\\pm 1\end{bmatrix}a_p+\begin{bmatrix}0\\0\\0\\-1\end{bmatrix}a_e = Ax(t)+Ba_p+Da_e, \tag{2}$$

with the sign in $B$ depending on whether the engagement is tail-chase or head-on. The linearization may at first glance seem extreme, but this same linearization strategy is used when deriving proportional navigation, or its variants such as augmented proportional guidance and extended proportional guidance, using linear quadratic control techniques [30]. The controls for the pursuer are constrained to the set $\mathcal{C}=\{a_p:\|Q_p(t)^{-1}a_p\|_\infty\le 1\}$ and the controls for the evader are constrained to the set $\mathcal{D}=\{a_e:\|Q_e^{-1}a_e\|_\infty\le 1\}$. The infinity norm with diagonal matrix $Q_p$ (respectively $Q_e$) scales the control limit independently in orthogonal directions. $Q_p$ is a function of time since some systems have control bounds that vary with time, which is needed to model aerodynamic control surfaces on decelerating vehicles. Both controls are considered symmetric (centered at zero) for this paper and all simulations.

We represent the capture set, , as an ellipsoid

$$\Omega=\{x:\langle x,W^{-1}x\rangle\le 1\}, \tag{3}$$

where $W$ is the ellipsoid shape matrix. The elements of $W$ are selected such that the pursuing vehicle must be within a distance

$$\left\|\begin{bmatrix}\delta_x\\\delta_y\end{bmatrix}\right\|\le r,$$

and the velocity at intercept is within some large bound $V_{\max}$ (we don't care what the velocity is at capture, just that capture has occurred). This gives

$$W=\mathrm{diag}\left(r^2,\;r^2,\;V_{\max}^2,\;V_{\max}^2\right).$$
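As a quick sketch, the shape matrix and the membership test for $\Omega$ can be written in a few lines of numpy; the radius and velocity bound below are illustrative values, not the paper's:

```python
import numpy as np

# Hypothetical values: 5 m capture radius, generous velocity bound at intercept.
r, V_max = 5.0, 100.0

# Shape matrix W = diag(r^2, r^2, V_max^2, V_max^2) from the text above.
W = np.diag([r**2, r**2, V_max**2, V_max**2])

def in_capture_set(x, W):
    """Membership test for the ellipsoid Omega = {x : <x, W^{-1} x> <= 1}."""
    return x @ np.linalg.solve(W, x) <= 1.0

x_close = np.array([3.0, 2.0, 50.0, -20.0])  # within 5 m laterally: captured
x_far = np.array([10.0, 0.0, 0.0, 0.0])      # 10 m away: outside the set
print(in_capture_set(x_close, W), in_capture_set(x_far, W))
```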

### II-B Multi-Vehicle Model

For a multi-vehicle problem with $k$ pursuers against a single evader, the joint state space with state vector $\chi=(x_1^{\dagger},x_2^{\dagger},\dots,x_k^{\dagger})^{\dagger}$ can be constructed as follows

$$\dot{\chi}=\begin{bmatrix}\dot{x}_1\\\dot{x}_2\\\vdots\\\dot{x}_k\end{bmatrix}=\begin{bmatrix}A & & \\ & \ddots & \\ & & A\end{bmatrix}\begin{bmatrix}x_1\\x_2\\\vdots\\x_k\end{bmatrix}+\begin{bmatrix}B_1 & & \\ & \ddots & \\ & & B_k\end{bmatrix}\begin{bmatrix}a_{p_1}\\a_{p_2}\\\vdots\\a_{p_k}\end{bmatrix}+\begin{bmatrix}D\\D\\\vdots\\D\end{bmatrix}a_e \tag{4}$$

$$\implies \dot{\chi}=\hat{A}\chi+\hat{B}a_p+\hat{D}a_e. \tag{5}$$
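The joint matrices can be assembled with Kronecker products; a numpy sketch using illustrative per-vehicle matrices of the double-integrator form above (tail-chase sign assumed for $B$):

```python
import numpy as np

# Per-vehicle matrices in the form of (2); the sign of B is illustrative.
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [np.zeros((2, 2)), np.zeros((2, 2))]])
B = np.array([[0.0], [0.0], [0.0], [1.0]])
D = np.array([[0.0], [0.0], [0.0], [-1.0]])

k = 3  # number of pursuers, arbitrary for this sketch
A_hat = np.kron(np.eye(k), A)   # block-diagonal joint dynamics
B_hat = np.kron(np.eye(k), B)   # each pursuer drives only its own block
D_hat = np.vstack([D] * k)      # the evader's input enters every block

print(A_hat.shape, B_hat.shape, D_hat.shape)  # (12, 12) (12, 3) (12, 1)
```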

Collaborative guidance is induced by noticing that capture can happen by any single vehicle of the $k$ vehicles in the system. The capture set for the $i$-th vehicle in the joint system is denoted as

$$\Omega_i=\{\chi:\langle\chi,W_i^{-1}\chi\rangle\le 1\},$$

with the shape matrix $W_i$ defined as the block-diagonal matrix with $W$ on the $i$-th block and a suitably large bounding matrix occupying all other blocks, so that the states of the other vehicles are unconstrained at capture. This implies that the capture set for the joint system is

$$\Omega=\bigcup_i \Omega_i. \tag{6}$$

## III Hamilton–Jacobi Equations with Bounded Control

### III-A Viscosity Solutions with the Hopf Formula

To compute optimal guidance, we use the generalized Hopf formula [11, 18, 22]. Consider system dynamics represented as

$$\dot{x}(t)=f(u(t)), \tag{7}$$

where $x(t)\in\mathbb{R}^n$ is the system state and $u(t)\in\mathbb{R}^m$ is the control input, constrained to lie in the convex admissible control set $\mathcal{C}\subset\mathbb{R}^m$. We consider a cost functional for a given initial time $t$ and terminal time $T$

$$K(x,t,u)=\int_t^T L(u(s))\,ds+J(x(T)), \tag{8}$$

where $x(T)$ is the solution of (7) at the terminal time $T$. We assume that the terminal cost function $J:\mathbb{R}^n\to\mathbb{R}$ is convex. The function $L:\mathbb{R}^m\to\mathbb{R}\cup\{+\infty\}$ is the running cost, and is assumed proper, lower semicontinuous, convex, and 1-coercive. The value function $v(x,t)$ is defined as the minimum cost among all admissible controls for a given state $x$ and time $t$, with

$$v(x,t)=\inf_{u\in\mathcal{C}}K(x,t,u). \tag{9}$$

The value function in (9) satisfies the dynamic programming principle [7, 15] and also satisfies an initial value Hamilton–Jacobi (HJ) equation: defining the function $\varphi(x,t)=v(x,T-t)$, $\varphi$ is the viscosity solution of

$$\begin{cases}\dfrac{\partial\varphi}{\partial t}(x,t)+H\big(t,\nabla_x\varphi(x,t)\big)=0 & \text{in }\mathbb{R}^n\times(0,+\infty),\\[4pt] \varphi(x,0)=J(x) & \forall x\in\mathbb{R}^n,\end{cases} \tag{10}$$

where the Hamiltonian is defined by

$$H(p)=\sup_{c\in\mathbb{R}^m}\{\langle -f(c),p\rangle-L(c)\}. \tag{11}$$

To apply the constraint that the control must be bounded, we introduce the running cost $L(c)=I_{\mathcal{C}}(c)$, where

$$I_{\mathcal{C}}(c)=\begin{cases}0 & \text{if }c\in\mathcal{C},\\ +\infty & \text{otherwise},\end{cases}$$

is the indicator function for the set $\mathcal{C}$. This induces a time-optimal control formulation and reduces the Hamiltonian to

$$H(p)=\max_{c\in\mathcal{C}}\langle -f(c),p\rangle.$$
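For box-bounded controls of the kind used later in this letter ($\|Q^{-1}c\|_\infty\le 1$ with diagonal $Q$) and linear dynamics $f(c)=Bc$, this maximization has a closed form: the weighted 1-norm, i.e., the dual norm. A small numpy check with arbitrary illustrative $B$ and $Q$:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 2))
Q = np.diag([2.0, 0.5])          # per-axis control bounds (assumed diagonal)
p = rng.standard_normal(4)

# Closed form: H(p) = max over {||Q^{-1} c||_inf <= 1} of <-B c, p>
#            = ||Q B^T p||_1  (support function of the box = dual norm).
H_closed = np.abs(Q @ B.T @ p).sum()

# Brute force over the vertices c = Q s, s in {-1, +1}^2, where a linear
# objective over a box attains its maximum.
H_brute = max((-B @ (Q @ np.array(s))) @ p for s in product([-1, 1], repeat=2))

print(np.isclose(H_closed, H_brute))
```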

Solving the HJ equation describes how the value function evolves with time at any point in the state space and from this, optimal control policies can be found.

It was shown in [11] that an exact, pointwise viscosity solution to (10) can be found using the Hopf formula [18]:

$$\varphi(x,t)=-\min_{p\in\mathbb{R}^n}\{J^{\star}(p)+tH(p)-\langle x,p\rangle\}, \tag{12}$$

where the Fenchel–Legendre transform $J^{\star}$ of a convex, proper, lower semicontinuous function $J$ is defined by [13]

$$J^{\star}(p)=\sup_{x\in\mathbb{R}^n}\{\langle p,x\rangle-J(x)\}. \tag{13}$$

Following the basic definition of the Fenchel–Legendre transform, (12) can be written [22] as

$$\varphi(x,t)=(J^{\star}+tH)^{\star}(x).$$

This shows that the value function is itself a Fenchel–Legendre transform. It follows from a well-known property of the Fenchel–Legendre transform [10] that the unique minimizer of (12) is the gradient of the value function

$$\nabla_x\varphi(x,t)=\operatorname*{arg\,min}_{p\in\mathbb{R}^n}\{J^{\star}(p)+tH(p)-\langle x,p\rangle\},$$

provided the gradient exists. So by solving for the value function using (12), we automatically solve for the gradient.
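To make this concrete, the sketch below evaluates the Hopf formula numerically for a toy instance, single-integrator dynamics with $|u_i|\le 1$ (so $H(p)=\|p\|_1$) and $J(x)=\|x\|^2/2$, and checks both the value and its minimizer against the closed form. The toy problem is an assumption for illustration, not the paper's system:

```python
import numpy as np
from scipy.optimize import minimize

# Hopf objective: J*(p) + t H(p) - <x, p>, with J*(p) = ||p||^2 / 2 and
# H(p) = ||p||_1 for this toy instance.
def hopf_objective(p, x, t):
    return 0.5 * p @ p + t * np.abs(p).sum() - x @ p

x, t = np.array([3.0, -0.5]), 1.0
res = minimize(hopf_objective, x, args=(x, t), method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-12})
phi = -res.fun   # value function via the Hopf formula
grad = res.x     # the minimizer is the gradient of the value function

# Closed form for this instance: soft-thresholding of x by t.
soft = np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
print(np.isclose(phi, 0.5 * soft @ soft, atol=1e-6),
      np.allclose(grad, soft, atol=1e-4))
```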

### III-B General Linear Models

Now consider the following linear state space model

$$\dot{x}(t)=Ax(t)+B(t)u(t), \tag{14}$$

with $A\in\mathbb{R}^{n\times n}$, $B(t)\in\mathbb{R}^{n\times m}$, state vector $x(t)\in\mathbb{R}^n$, and control input $u(t)\in\mathbb{R}^m$. We can make the change of variables

$$z(t)=e^{-tA}x(t), \tag{15}$$

which results in the following system

$$\dot{z}(t)=e^{-tA}B(t)u(t), \tag{16}$$

with the terminal cost function now defined in the new variable as $J_z(z,T)=J\big(e^{TA}z\big)$, which depends on the terminal time $T$. Notice that the system (16) is of the form presented in (7), with the exception that the system is now time-varying. It was shown in [21, Section 5.3.2, p. 215] that the Hopf formula in (12) can be generalized for a time-varying Hamiltonian to find the value function of the system in (16) with

$$\varphi(z,t)=-\min_{p\in\mathbb{R}^n}\left\{J_z^{\star}(p,t)+\int_0^t H(p,s)\,ds-\langle z,p\rangle\right\}, \tag{17}$$

with the time-varying Hamiltonian defined as

$$H(p,s)=\max_{c\in\mathcal{C}}\big\langle -e^{-(T-s)A}B(T-s)\,c,\;p\big\rangle.$$

The change of variable $s\mapsto T-s$ is required for time since the problem was converted to an initial value formulation from the terminal value formulation in (8).

### III-C Linear Differential Games

Now consider the system

$$\dot{x}(t)=Ax(t)+B(t)u(t)+D(t)w(t), \tag{18}$$

with $D(t)\in\mathbb{R}^{n\times\ell}$, which is equal to (14) with an extra term, $D(t)w(t)$, added. We assume that the additional control input $w(t)\in\mathbb{R}^{\ell}$ is adversarial and bounded by the convex set $\mathcal{D}$. The cost functional becomes

$$G(x,t,u,w)=\int_t^T L(u(s),w(s))\,ds+J(x(T)), \tag{19}$$

where $x(T)$ is the solution of (18) at the terminal time $T$. We assume that the goal of the adversarial control input $w$ is to increase the cost functional (19), in direct opposition to the input $u$, which we design in an attempt to minimize the cost. This system forms a differential game [20], and has a corresponding lower value function

$$V(x,t)=\inf_{u\in\mathcal{C}}\sup_{w\in\mathcal{D}}G(x,t,u,w),$$

and upper value function

$$U(x,t)=\sup_{w\in\mathcal{D}}\inf_{u\in\mathcal{C}}G(x,t,u,w).$$

As derived in [14], the upper and lower value functions are viscosity solutions of possibly nonconvex HJ equations. We can define the following upper and lower Hamiltonians as

$$H^{+}(p,t)=\min_{d\in\mathcal{D}}\max_{c\in\mathcal{C}}\{\langle -f(c,d),p\rangle-L(c,d)\},\qquad H^{-}(p,t)=\max_{c\in\mathcal{C}}\min_{d\in\mathcal{D}}\{\langle -f(c,d),p\rangle-L(c,d)\}.$$

The running cost becomes

$$L(u,w)=I_{\mathcal{C}}(u)-I_{\mathcal{D}}(w),$$

where $I_{\mathcal{D}}$ is the indicator function of the convex set $\mathcal{D}$. If the Hamiltonians $H^{+}$ and $H^{-}$ coincide, then from [14]

$$H^{+}(p,t)=H^{-}(p,t)=H^{\pm}(p,t)\implies U(x,t)=V(x,t).$$

We can apply the same change of variables from (15) to get

$$\dot{z}(t)=e^{-tA}B(t)u(t)+e^{-tA}D(t)w(t),$$

and then we can find a candidate solution of the value function with the generalized Hopf formula

$$\varphi(z,t)=-\min_{p\in\mathbb{R}^n}\left\{J_z^{\star}(p,t)+\int_0^t H^{\pm}(p,s)\,ds-\langle z,p\rangle\right\},$$

with the time-varying, nonconvex Hamiltonian given by

$$H^{\pm}(p,s)=\max_{c\in\mathcal{C}}\big\langle -e^{-(T-s)A}B\,c,\;p\big\rangle+\min_{d\in\mathcal{D}}\big\langle -e^{-(T-s)A}D\,d,\;p\big\rangle.$$

In general, if $H^{+}\neq H^{-}$, then the Hopf formula in (17) does not hold.

## IV Time-Optimal Control with the Hopf Formula

Following the methods presented above in Sec. III, we have the transformed system as

$$\dot{z}(t)=e^{-t\hat{A}}\hat{B}a_p(t)+e^{-t\hat{A}}\hat{D}a_e(t),$$

and the Hamiltonian is formed from the dual norms of the control sets

$$H(p,t)=\left\|Q_p\hat{B}^{\dagger}e^{-(T-t)\hat{A}^{\dagger}}p\right\|_1-\left\|Q_e\hat{D}^{\dagger}e^{-(T-t)\hat{A}^{\dagger}}p\right\|_1, \tag{22}$$

where $\|\cdot\|_1$ denotes the 1-norm. We choose a convex terminal cost function such that

$$\begin{cases}J(z,0)<0 & \text{for any }z\in\operatorname{int}\Omega,\\ J(z,0)>0 & \text{for any }z\in\mathbb{R}^n\setminus\Omega,\\ J(z,0)=0 & \text{for any }z\in\Omega\setminus\operatorname{int}\Omega,\end{cases} \tag{23}$$

where $\operatorname{int}\Omega$ denotes the interior of $\Omega$. The intuition behind defining the terminal cost function this way is simple. If the value function satisfies $\varphi(x,t)\le 0$ for some $x$ and $t$, then there exists a control that drives the state from the initial condition $x$ to a final state inside the set $\Omega$. The smallest value of time $t$ such that $\varphi(x,t)\le 0$ is the minimum time to reach the set $\Omega$, starting at state $x$. The control associated with the minimum time to reach is the time-optimal control. The ellipsoidal terminal set defined in (3) results in a quadratic terminal cost function

$$J_x(x)=\langle x,W^{-1}x\rangle-1.$$

After variable substitution the cost function becomes

$$J_z(z)=\langle z,V(0)z\rangle-1,$$

with $V(0)=e^{T\hat{A}^{\dagger}}W^{-1}e^{T\hat{A}}$. Computing the Fenchel–Legendre transform of this quadratic function [6], we have

$$J_z^{\star}(p,t)=1+\tfrac{1}{4}\langle p,V(0)^{-1}p\rangle.$$
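This conjugate can be sanity-checked numerically; the sketch below maximizes $\langle p,z\rangle-J_z(z)$ directly for an arbitrary positive-definite stand-in for $V(0)$:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
V = M @ M.T + 3.0 * np.eye(3)   # arbitrary positive-definite stand-in for V(0)
p = rng.standard_normal(3)

# J_z(z) = <z, V z> - 1, so J*(p) = sup_z {<p, z> - J_z(z)}.
neg = lambda z: -(p @ z - (z @ V @ z - 1.0))
sup = -minimize(neg, np.zeros(3)).fun

# Closed form from the text: 1 + 0.25 <p, V^{-1} p>.
closed = 1.0 + 0.25 * p @ np.linalg.solve(V, p)
print(np.isclose(sup, closed, atol=1e-6))
```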

The generalized Hopf formula requires the integration of the Hamiltonian, which is approximated by Riemann-sum quadrature [3] with step size $h$:

$$\int_0^t H(p,s)\,ds \approx h\sum_{s_k\in S}H(p,s_k),$$

where $S$ denotes the set of discrete time samples. Rectangular quadrature with fixed step size was used to pre-compute the time samples from time $0$ to $t$, which requires only a simple sum at run time to evaluate the integral. We can approximate the matrix exponential terms efficiently at fixed time intervals, with bounded error, using [2].
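The precompute-then-sum pattern can be sketched as follows, with small illustrative $A$, $B$, and $Q_p$ (not the engagement model) and only the pursuer term of the Hamiltonian:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative matrices for the sketch (not the paper's system).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Qp = np.array([[2.0]])
T, h = 1.0, 0.01
s = np.arange(0.0, T, h)

# Pre-computed (offline) matrix-exponential samples B^T exp(-(T - s_k) A^T).
E = [B.T @ expm(-(T - sk) * A.T) for sk in s]

def H_sample(p, Ek):
    # One pursuer term of the Hamiltonian: ||Q_p B^T e^{-(T-s)A^T} p||_1.
    return np.abs(Qp @ (Ek @ p)).sum()

# At run time the integral is a simple sum over the cached samples.
p = np.array([1.0, -2.0])
integral = h * sum(H_sample(p, Ek) for Ek in E)
print(integral)  # close to the analytic value 5.0 for this instance
```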

To solve the Hopf formula in (17), we perform an unconstrained minimization of a non-smooth objective function. Non-smooth unconstrained minimization problems can be solved in a variety of ways; however, because we can explicitly derive the gradient and Hessian, this directs the use of a relaxed Newton's method [9]. We chose as the initial guess for Newton's method the minimizer of the Hopf objective without the Hamiltonian integral. The initial step size is 1 (full Newton), and is halved whenever the function value increases during an iteration (without updating the search direction). The minimization is terminated when the norm of the change between iterations is small. Most importantly for efficient implementation, the gradient and Hessian (ignoring discontinuities), denoted as $\nabla_p\varphi$ and $\mathbf{H}_p(\varphi)$, respectively, can be found directly. The gradient is

$$\nabla_p\varphi(z,t)=\frac{V(0)^{-1}p}{2}-z+h\sum_{s_k\in S}\Big(E_p(s_k)Q_p(s_k)\,\mathrm{sgn}\big(E_p(s_k)^{\dagger}p\big)-E_e(s_k)Q_e\,\mathrm{sgn}\big(E_e(s_k)^{\dagger}p\big)\Big),$$

with $E_p(s_k)=e^{-(T-s_k)\hat{A}}\hat{B}$ and $E_e(s_k)=e^{-(T-s_k)\hat{A}}\hat{D}$. Additionally, the Hessian is

$$\mathbf{H}_p\big(\varphi(z,t)\big)=\frac{V(0)^{-1}}{2}.$$
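Since the Hessian is constant, each Newton direction is a single matrix-vector product. A minimal sketch of the relaxed (step-halving) iteration described above, exercised on an illustrative quadratic objective (not the authors' code):

```python
import numpy as np

def relaxed_newton(grad, hess_inv, p0, f, tol=1e-9, max_iter=100):
    """Damped Newton: full step initially, halved whenever the objective
    increases, keeping the same search direction. A sketch of the scheme
    described in the text."""
    p = p0.astype(float).copy()
    step = 1.0
    for _ in range(max_iter):
        d = hess_inv @ grad(p)                 # Newton search direction
        while f(p - step * d) > f(p) and step > 1e-12:
            step *= 0.5                        # halve the step only
        p_new = p - step * d
        if np.linalg.norm(p_new - p) < tol:
            return p_new
        p = p_new
    return p

# Illustrative objective with V(0) = I: the Hopf objective without the
# Hamiltonian integral, f(p) = 1 + p'p/4 - <z, p>, minimized at p = 2z.
z = np.array([1.0, -3.0])
f = lambda p: 1.0 + 0.25 * p @ p - z @ p
g = lambda p: 0.5 * p - z
p_star = relaxed_newton(g, 2.0 * np.eye(2), np.zeros(2), f)
print(p_star)  # converges to 2z = [2, -6]
```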

To find the optimal control to the desired convex terminal set $\Omega$, we proceed by solving for $T^{*}$, the minimum time to reach the boundary of the set $\Omega$. This is solved numerically with

$$T^{*}=\operatorname*{arg\,min}_{t\ge 0}\left\{t:\varphi(z_0,t)\le 0\right\}.$$

If the minimum time to reach is greater than the total available flight time, then the set $\Omega$ is not reachable in that time. The optimal control can then be found from the following relation

$$\nabla_p H\big(\nabla_z\varphi(z_0,T^{*}),T^{*}\big)=e^{-t\hat{A}}\hat{B}(t)a_p^{*}+e^{-t\hat{A}}\hat{D}(t)a_e^{*}.$$
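The search for $T^{*}$ can be sketched as a scan for the first candidate time at which the value function becomes non-positive. Here an assumed closed-form $\varphi$ for a 1D toy target (single integrator, $|u|\le 1$, target $\{|x|\le 1\}$) stands in for the Hopf-formula evaluation:

```python
import numpy as np

def phi(x, t):
    # Toy stand-in value function: squared distance still to cover after
    # moving at unit speed for time t toward the target {|x| <= 1}.
    return 0.5 * max(abs(x) - 1.0 - t, 0.0) ** 2

def min_time_to_reach(x0, t_max, n=10_000):
    """Smallest sampled t with phi(x0, t) <= 0; None if unreachable by t_max."""
    for t in np.linspace(0.0, t_max, n):
        if phi(x0, t) <= 0.0:
            return t
    return None

T_star = min_time_to_reach(4.0, t_max=10.0)
print(T_star)  # about 3: the state must cover 3 units at unit speed
```

In practice a bisection or Newton iteration on the scalar map $t \mapsto \varphi(z_0, t)$ would replace the linear scan.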

To induce collaborative guidance we proceed to solve with the joint terminal set in (6). Let $J_i$ represent the terminal cost function of vehicle $i$ with shape matrix $W_i$; then the terminal cost function of the collaborative system is

$$J(z,t)=\min_{i=1,\dots,k}J_i(z,t). \tag{24}$$

It was shown in [11] that max/min-plus algebra [1, 16, 23] can be used to generalize the Hopf formula to solve for non-convex initial data that can be formed as the union of convex sets, such as the terminal cost considered in (24). This is true provided that the Hamiltonian is convex. In general, the Hamiltonian of the differential game given in (22) is non-convex. But in the case where the Hamiltonian in (22) is convex and the system is constrained to the linear form above, max/min-plus algebra holds. To find the value function with the terminal set given by (24), we solve initial value problems of the form

$$\begin{cases}\dfrac{\partial\phi_i}{\partial t}(z,t)+H\big(t,\nabla_z\phi_i(z,t)\big)=0 & \text{in }\mathbb{R}^n\times(0,+\infty),\\[4pt] \phi_i(z,0)=J_i(z) & \forall z\in\mathbb{R}^n,\end{cases} \tag{25}$$

and take the pointwise minimum over the solutions $\phi_i$, each of which has convex initial data, with

$$\varphi(z,t)=\min_{i=1,\dots,k}\phi_i(z,t).$$

Each $\phi_i$ in (25) is independent of the others, and they can be computed in parallel. In the case where the Hamiltonian is non-convex, the pointwise minimum is only an upper bound of the true value function; see [24] for more details.
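The min-plus combination is simply a pointwise minimum over the independent solves; a toy sketch with illustrative closed-form $\phi_i$ (distance-squared stand-ins, not the paper's system):

```python
import numpy as np

# Two illustrative capture-set "centers"; one phi_i per set.
centers = [np.array([5.0, 0.0]), np.array([-2.0, 1.0])]

def phi_i(z, t, c):
    # Stand-in solution with convex initial data J_i(z) = ||z - c||^2 / 2.
    return 0.5 * max(np.linalg.norm(z - c) - t, 0.0) ** 2

def phi(z, t):
    # Each phi_i solve is independent, so this loop could run in parallel.
    return min(phi_i(z, t, c) for c in centers)

z0 = np.array([0.0, 0.0])
print(phi(z0, 1.0) == phi_i(z0, 1.0, centers[1]))  # the nearer set attains the min
```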

## V Results

The above control solution has been integrated into a closed-loop, 2-on-1 pursuit-evasion, 3 degree-of-freedom (3DOF) simulation using MATLAB R2016a and Simulink at 120 Hz with Euler integration. This included using a third-order autopilot for each pursuer, and using the gradient of the value function to find the optimal evader control. Preliminary results solved for the optimal control on average on a 3 GHz Intel Core i7 950.

As a post-process, the evader’s inertial state is found by solving the modified Dubin’s car initial-value problem relative to a fixed origin with zero initial conditions and known inputs. Adding the evader’s inertial state to the vehicle’s relative state and correcting for the induced rotational motion provides the vehicle’s inertial state.

The first example uses a simple geometry in the tail-chase scenario and the engagement trajectory is shown in Figure 2. The capture radius is , evader control is limited to , and both pursuers have control bounds that decrease in time with

$$\|a_p\|\le\frac{(t-40)^2}{40}\ \mathrm{m/s^2},$$

when , and otherwise. The evader is assumed to travel at speed and the pursuers at speed ( Mach). Both pursuing vehicles, initially launched at from the evader, are simultaneously traveling directly at the evader. Notice that both pursuers separate so as to surround and contain the evader. The miss distance was and time to intercept was seconds. In this example, and the Hamiltonian remained convex for the duration of the simulated engagement.

The second example utilized a similar engagement, but with a head-on aspect configuration. The parameters are the same as example 1, but with a different initial separation. In this case, the initial conditions are such that during simulation, the linearization error in (2) is large. When this occurs, the solution for the zero-level-set time $T^{*}$ may be higher than the available flight time. This indicates the set is not reachable (due to the linearization error), and in our simulations the guidance reverts to proportional navigation (PN) until the set is considered reachable. This can easily be countered by increasing the control bound of the evader to account for linearization error. Additionally, the convexity assumption of the Hamiltonian is violated in this example, but only for the last seconds, or about of the engagement. With both vehicles launched simultaneously, intercept still occurred, with a miss distance of . Time to intercept was seconds and the flyout paths are given in Figure 3.

## VI Conclusions and Future Work

The generalized Hopf formula provides new capabilities for solving high-dimensional optimal control and differential games, such as the pursuit-evasion guidance presented here. Additionally, the above work can be used for evasion strategies that could be of interest in collision avoidance problems. Future work will focus on extending the generalized Hopf formula to certain classes of non-linear systems, such as feedback linearizable systems [32], and on applying splitting algorithms [11, 17, 8] for efficient optimization when the gradient and Hessian are not explicitly known.

## Acknowledgments

The authors would like to thank the anonymous reviewers. Their comments and suggestions greatly improved the accuracy and clarity of this paper.

## References

• [1] M. Akian, R. Bapat, and S. Gaubert. Max-plus algebra. Handbook of Linear Algebra (Discrete Mathematics and its Applications), 39:10–14, 2006.
• [2] A. H. Al-Mohy and N. J. Higham. Computing the action of the matrix exponential, with an application to exponential integrators. SIAM Journal on Scientific Computing, 33(2):488–511, 2011.
• [3] H. Anton, S. Davis, and I. Bivens. Calculus: A New Horizon. Wiley New York, 1999.
• [4] R. E. Bellman. Dynamic Programming, volume 1. Princeton University Press, 1957.
• [5] R. E. Bellman. Adaptive Control Processes: A Guided Tour. Princeton University Press, 2015.
• [6] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
• [7] A. R. Bryson and Y.-C. Ho. Applied Optimal Control: Optimization, Estimation and Control. CRC Press, 1975.
• [8] A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1):120–145, 2011.
• [9] S. C. Chapra and R. P. Canale. Numerical Methods for Engineers, volume 2. McGraw-Hill New York, 1998.
• [10] J. Darbon. On convex finite-dimensional variational methods in imaging sciences and Hamilton-Jacobi equations. SIAM Journal on Imaging Sciences, 8(4):2268–2293, 2015.
• [11] J. Darbon and S. Osher. Algorithms for overcoming the curse of dimensionality for certain Hamilton-Jacobi equations arising in control theory and elsewhere. Research in the Mathematical Sciences, 3(1):19, 2016.
• [12] L. E. Dubins. On curves of minimal length with a constraint on average curvature, and with prescribed initial and terminal positions and tangents. American Journal of Mathematics, 79(3):497–516, 1957.
• [13] I. Ekeland and R. Temam. Convex Analysis and Variational Problems. SIAM, 1999.
• [14] L. C. Evans and P. E. Souganidis. Differential games and representation formulas for solutions of Hamilton-Jacobi-Isaacs equations. Technical report, DTIC Document, 1983.
• [15] Lawrence C. Evans. Partial differential equations. American Mathematical Society, Providence, R.I., 2010.
• [16] W. H. Fleming. Deterministic nonlinear filtering. Annali della Scuola Normale Superiore di Pisa-Classe di Scienze, 25(3-4):435–454, 1997.
• [17] T. Goldstein and S. Osher. The split Bregman method for l1-regularized problems. SIAM Journal on Imaging Sciences, 2(2):323–343, 2009.
• [18] E. Hopf. Generalized solutions of non-linear equations of first order. Journal of Mathematics and Mechanics, 14:951–973, 1965.
• [19] H. Huang, J. Ding, W. Zhang, and C. J. Tomlin. A differential game approach to planning in adversarial scenarios: A case study on capture-the-flag. In 2011 IEEE International Conference on Robotics and Automation (ICRA), pages 1451–1456. IEEE, 2011.
• [20] R. Isaacs. Differential Games: A Mathematical Theory with Applications to Warfare and Pursuit, Control and Optimization. Courier Corporation, 1999.
• [21] A. B. Kurzhanski and P. Varaiya. Dynamics and Control of Trajectory Tubes: Theory and Computation, volume 85. Springer, 2014.
• [22] P. L. Lions and J.-C. Rochet. Hopf formula and multitime Hamilton-Jacobi equations. Proceedings of the American Mathematical Society, 96(1):79–84, 1986.
• [23] W. M. McEneaney. Max-Plus Methods for Nonlinear Control and Estimation. Springer Science & Business Media, 2006.
• [24] W. M. McEneaney and A. Pandey. An idempotent algorithm for a class of network-disruption games. Kybernetika, 52(5):666–695, 2016.
• [25] I. Mitchell. A toolbox of level set methods. Dept. Comput. Sci., Univ. British Columbia, Vancouver, BC, Canada, http://www.cs.ubc.ca/~mitchell/ToolboxLS/toolboxLS.pdf, Tech. Rep. TR-2004-09, 2004.
• [26] I. Mitchell. The flexible, extensible and efficient toolbox of level set methods. Journal of Scientific Computing, 35(2):300–329, 2008.
• [27] I. Mitchell, A. M. Bayen, and C. J. Tomlin. A time-dependent Hamilton-Jacobi formulation of reachable sets for continuous dynamic games. IEEE Transactions on Automatic Control, 50(7):947–957, 2005.
• [28] S. Osher and R. Fedkiw. Level Set Methods and Dynamic Implicit Surfaces, volume 153. Springer Science & Business Media, 2006.
• [29] N. F. Palumbo, R. A. Blauwkamp, and J. M. Lloyd. Basic principles of homing guidance. Johns Hopkins APL Technical Digest, 29(1):25–41, 2010.
• [30] N. F. Palumbo, R. A. Blauwkamp, and J. M. Lloyd. Modern homing missile guidance theory and techniques. Johns Hopkins APL Technical Digest, 29(1):42–59, 2010.
• [31] S. Pan, H. Huang, J. Ding, W. Zhang, and C. J. Tomlin. Pursuit, evasion and defense in the plane. In American Control Conference (ACC), pages 4167–4173. IEEE, 2012.
• [32] J.-J. Slotine and W. Li. Applied Nonlinear Control, volume 199. Prentice-Hall Englewood Cliffs, NJ, 1991.
• [33] D. M. Stipanović, G. Inalhan, R. Teo, and C. J. Tomlin. Decentralized overlapping control of a formation of unmanned aerial vehicles. Automatica, 40(8):1285–1296, 2004.