# Visual-Inertial Target Tracking and Motion Planning for UAV-based Radiation Detection

###### Abstract

This paper addresses the problem of detecting radioactive material in transit using an unmanned aerial vehicle (uav) with minimal sensing capability, where the objective is to classify the target’s radioactivity as the vehicle plans its path through the workspace while tracking the target for a short time interval. To this end, we propose a motion planning framework that integrates tightly-coupled visual-inertial localization and target tracking. In this framework, the 3D workspace is known, and this information, together with the uav dynamics, is used to construct a navigation function that generates dynamically feasible, safe paths which avoid obstacles and provably converge to the moving target. The efficacy of the proposed approach is validated through realistic simulations in Gazebo.

## I Introduction

The ultimate goal of this work is to identify whether a target moving in a cluttered environment is radioactive or not, in a matter of a few minutes, using an airborne radiation sensor (e.g. a Geiger counter or neutron detector). As the potential radioactivity of the target is assumed to be known, the problem is essentially an instance of binary hypothesis testing – does the target have the assumed intensity or not? Earlier work [1] has established that the optimal way, in a Neyman-Pearson sense [2], of answering this question using a controllable mobile radiation detector is to drive the detector as fast as possible to the moving target. Intuitively, this strategy maximizes over time the solid angle of the radiation sensor, boosting its signal-to-noise ratio (snr) [3], thus improving the accuracy of decision-making.

The ability of a uav equipped with a radiation detector to complete this task autonomously in a cluttered and possibly gps-denied environment rests on two critical capabilities: (i) estimating consistently and accurately its own state and that of its target, and (ii) planning motion in a way that maximizes the probability of accurate detection while avoiding collisions with obstacles and the target. In isolation, both of these technical problems have received some attention in the literature.

In the context of target tracking, most methods rest on assumptions that limit their applicability to real-world scenarios. For instance, many algorithms exist for robot-based target tracking in 2D [4, 5], without straightforward extensions to 3D cases. One idea explored in the context of active tracking [6] is to choose robot motion in ways that minimize future target uncertainty. This methodology, again, was developed for 2D problems and requires that the state of the sensor is perfectly known. Other approaches assume the ability to directly measure the relative position between sensor and target [7], and estimate robot and target states separately; because of this separation, however, little can be said about the optimality of either estimate, or about the uncertainty associated with them. Compounding these technical challenges in the application considered here, the (unknown) relative distance between sensor and target is a significant source of uncertainty when it comes to the accuracy of the final radiation classification decision that needs to be made.

The proposed approach tightly couples 3D visual-inertial navigation and target tracking, to ensure that the error in estimating the relative distance between the robot and the target can be minimized. In the application at hand, this is particularly important because this relative distance affects directly the information content of radiation measurements, and consequently the radiation detection accuracy. Estimating the relative position between detector and target is achieved with exclusive use of an inertial measurement unit (imu) and a (stereo) camera, on the hardware side, and a lightweight multi-state constraint Kalman filter (msckf) [8] on the algorithmic side. The particular estimator utilizes stochastic cloning to estimate a sliding window of past robot poses along with the current imu state. Features are linearly marginalized to utilize their motion information without the need to store them in the state vector. The marginalization allows for the creation of constraints between the window poses while bounding the problem size, resulting in a computationally efficient estimation algorithm. This approach has been extended in many directions, e.g., including camera-to-imu spatial and temporal calibration [9], handling degenerate motions [10], and enforcing the correct observability properties on the system [11].

Aerial target tracking and navigation based on on-board estimation has recently been demonstrated in a case of tracking a spherical rolling target [12]. It employed a geometric approach similar to visual servoing, a receding horizon strategy that penalizes velocity and position errors, and a uav motion control scheme based on minimum-snap trajectory generation [13]. It is not clear to what degree this tracking algorithm depends on the target being spherical, but the overall estimation and motion control scheme was engineered to run impressively fast.

The work reported here involves an additional layer of complexity. In addition to target tracking and obstacle avoidance, the mission calls for fast and accurate target classification based on a different sensing modality. Here target classification, estimation, and tracking are coupled, because the accuracy of estimates and the performance of tracking have a direct impact on the accuracy of classification. The size of tracking errors reported in [7] can be detrimental to the detection of low-intensity mobile sources of radioactivity.

In the application context of radiation detection, (binary) classification accuracy can be quantified probabilistically in terms of probabilities of false alarm and missed detection. The computation of the latter two probabilities is in general intractable; however, analytical Chernoff bounds on these have recently been derived [2] in the form of highly non-convex, non-linear integral functions. While these analytic expressions provide a handle for manipulating the accuracy of decision-making through mobility, they are still not particularly amenable to direct incorporation into standard motion planning and optimal estimation frameworks. Yet we do know [1] that in the absence of constraints in terms of vehicle dynamics or workspace topology, the optimal motion strategy for maximizing detection accuracy is to close the distance between radiation detector and source as quickly as possible.

This insight can be exploited by a variety of motion planning strategies [14]. Not all of them, however, can offer provable collision avoidance and target convergence guarantees, especially when considering nontrivial sensor platform dynamics. The approach in this paper achieves this by combining a new type of navigation function developed for moving targets [1] with a geometric position and attitude tracking uav controller. The navigation function creates an almost globally attractive (to the target) vector field, which is utilized as a velocity reference for the uav’s controller. Vector field tracking has been entertained in earlier work [15], but not for the purpose of intercepting moving targets, nor in non-spherical workspace topologies, nor without full knowledge of robot and target states [16].

In particular, the main contributions of this paper consist of (i) the extension of visual-inertial navigation systems (vins) to tightly-coupled 3D visual-inertial localization and target tracking; (ii) the integration of geometric uav control with time-varying navigation functions in star forests; and (iii) the validation of the proposed system in realistic ROS/Gazebo simulations.

## II Radiation Detection

To a radiation detector, gamma rays/neutrons emanating from nuclear material are indistinguishable from naturally occurring (background) radiation. In the sensor’s input stream, the two signals are essentially superimposed. To (optimally) determine whether a source is present based on measurements from such a radiation counter, one performs a binary hypothesis test.

Pahlajani et al. [2] formulate this problem as a Likelihood Ratio Test (lrt) in the Neyman-Pearson framework. In this framework, one of the following competing hypotheses is chosen: H₀, which asserts that the target is benign, or H₁, which states that the target is radioactive. In making this decision, two types of errors can occur. A false alarm occurs if the target is falsely classified as radioactive; a missed detection occurs when the target is falsely classified as benign.

Analytically computing the probability of making either error is in general intractable, except for (simplistic) Gaussian counting processes. Not only is the counting process here Poisson, but it is also a time-inhomogeneous one, since it explicitly depends on the distance between sensor and source which changes (deterministically) with time. Yet, analytical Chernoff (upper) bounds for the errors involved in this particular binary hypothesis test have been derived [2], and these can serve as surrogates for the otherwise unknown true error probabilities.

Consider the time-inhomogeneous counting process observed at a detector of given cross-section (in square meters), monitoring a source of known activity, in counts per second (cps), in an environment with background radiation of known activity. The positions of the detector (hereafter assumed identical to that of the uav) and of the target are expressed in the inertial map frame. The perceived source activity incident to the detector depends on the source-sensor distance, while a dimensionless parameter quantifies the relative strength of this perceived activity over background and is conceptually related to snr (norms are Euclidean)

(1)

Let an upper bound on the acceptable probability of false alarm be given, along with a constant parameter. Then, after defining

(2a)

the logarithms of the analytically derived bounds on the probabilities of false alarm and missed detection are [2]

(2b)

(2c)

The corresponding parameter is specified by solving (2b) for a given false-alarm bound.

Let the jump times of the counting process (the instants when the detector registers a count) be given; the process takes the form of a piecewise constant random signal over a finite time window. The lrt to decide between the two hypotheses H₀ and H₁ involves comparing a statistic to a constant threshold

(3)

The bound on probability of missed detection (2c) serves as a performance measure of decision-making accuracy when performing the lrt.
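To make the test concrete, the following is a minimal sketch of the log-likelihood-ratio statistic for a time-inhomogeneous Poisson counting process; the function names, the simple Riemann-sum quadrature, and the rate models are illustrative assumptions, not the paper's implementation:

```python
import math

def log_likelihood_ratio(jump_times, intensity_source, intensity_bg, T):
    """Log-LRT statistic for Poisson counts on [0, T].

    Under H1 counts arrive at rate b(t) + s(t); under H0 at rate b(t).
    jump_times: recorded count arrival times.
    intensity_source, intensity_bg: callables t -> rate (counts/s).
    Illustrative sketch of the statistic compared in (3), cf. [2].
    """
    # Sum of log intensity ratios at the registered counts.
    log_sum = sum(
        math.log(1.0 + intensity_source(t) / intensity_bg(t))
        for t in jump_times
    )
    # Integrated excess intensity over the window (crude Riemann sum).
    n = 1000
    dt = T / n
    integral = sum(intensity_source(i * dt) * dt for i in range(n))
    return log_sum - integral
```

Deciding between the hypotheses then amounts to comparing this statistic against the constant threshold in (3).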

## III Motion Planning

### III-A Navigation Function

A navigation function [17] is a real-valued map, constructed on the uav’s free configuration space, which, when tuned appropriately, has a unique minimum at the desired goal configuration and is uniformly maximal over the boundary of the free space. The construction has been extended to the case of goal manifolds that take the form of a sphere (bubble) around a moving target [1]. Because these moving manifolds are (almost) globally attractive, the use of this motion planning technique guarantees the convergence of the uav to the moving target within a specified distance, without hitting either this target or the surrounding obstacles.

Let the positions of the uav and of the target be given, both assumed in the free space. Take the radius of the spherical bubble around the target as given, and define the goal for navigation as minimizing the function

(4)

Let an index set enumerate the spherical obstacles, including the workspace’s outer boundary, and denote the center and radius of each spherical obstacle accordingly. Define a clearance function for the workspace boundary and one for every other interior obstacle, and take their product.

It can be shown [1] that there exists a positive number such that, for a suitably large tuning parameter,

(5)

would be a navigation function on a sphere world with a suitably chosen (analytic switch) parameter.

For a diffeomorphism, parameterized by a suitably chosen positive parameter, mapping a star world to the forest of stars in the form of three-dimensional squircles [18], the composition can be shown to have navigation function properties on the star world [18], in the sense that for any position of the target satisfying some reasonable conditions, all (unstable) critical points outside the destination manifold are either nondegenerate with an attraction region of measure zero, or inside the target’s bubble.
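The sphere-world construction in (5) can be sketched in code. Below is a hedged example of a Koditschek-Rimon style navigation function, assuming the common analytic-switch form gamma/(gamma^kappa + beta)^(1/kappa); the obstacle representation, the bubble radius, and all names are illustrative:

```python
import numpy as np

def navigation_function(p, p_target, obstacles, kappa, r_bubble=0.5):
    """Koditschek-Rimon style navigation function on a sphere world.

    gamma: (clipped) squared clearance from the target bubble;
    beta: product of obstacle clearances, with the workspace boundary
    using the inverted sign convention. For kappa large enough the
    function acquires navigation properties [17], [1].
    obstacles: list of (center, radius, is_boundary) tuples.
    """
    d2_goal = np.dot(p - p_target, p - p_target)
    gamma = max(d2_goal - r_bubble**2, 0.0)
    beta = 1.0
    for c, r, is_boundary in obstacles:
        d2 = np.dot(p - c, p - c)
        beta *= (r**2 - d2) if is_boundary else (d2 - r**2)
    # Analytic switch: 0 on the target bubble, uniformly 1 where beta = 0.
    return gamma / (gamma**kappa + beta) ** (1.0 / kappa)
```

By construction the function vanishes on the target bubble and attains its maximum of one on the workspace boundary.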

### III-B UAV Control

Let the mass of the quadrotor uav be given, along with its moment of inertia about a frame aligned with the principal axes and attached at the center of mass (cg), and the acceleration of gravity in the inertial frame. The relative orientation between the inertial frame and the principal frame at the uav’s center of mass is captured by a rotation matrix. (Premultiplication with this matrix rotates a vector by as much as the cg frame is rotated relative to the map frame M.) The linear and angular velocities of the uav relative to the inertial frame are denoted accordingly, and so is the angular velocity vector relative to the body cg frame. Let the (wedge) operation map a vector in ℝ³ to a member of the Lie algebra so(3). With the magnitude of the total thrust produced by the thrusters and the total moment relative to the body-fixed cg frame as control inputs, the dynamics of the quadrotor can finally be expressed as

(6a)

(6b)

(6c)

(6d)

The desired velocity for the uav is determined using (5). Specifically, given the vehicle’s maximum speed, dictated by the capabilities of its actuators or by safety specifications, and a positive control gain, the velocity reference relative to the inertial frame is

(7)
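A minimal sketch of a gradient-descent velocity reference with speed saturation, in the spirit of (7); the particular saturation form and the gain are assumptions, as the paper's exact expression may differ:

```python
import numpy as np

def velocity_reference(grad_phi, v_max, k):
    """Velocity reference from the navigation-function gradient.

    Descend the gradient of the navigation function, scaled by gain k,
    and saturate the commanded speed at v_max so the reference respects
    actuator limits. Hedged sketch; not the paper's exact law.
    """
    v = -k * grad_phi
    speed = np.linalg.norm(v)
    if speed > v_max:
        v = v * (v_max / speed)
    return v
```

The direction of the command is always along the negative gradient; only the magnitude is clipped.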

Consequently, the velocity error for the uav is simply the difference between the actual and reference velocities. With a desired uav orientation and a desired angular velocity denoted accordingly, the orientation and angular velocity errors are defined as

(8a)

(8b)

With these definitions in place, and with positive control gains for velocity, orientation, and angular velocity, the control inputs for the uav are set as (cf. [19])

(9a)

(9b)

In the closed-loop system (6)–(9), the velocity, orientation, and angular velocity errors converge exponentially to zero [19].
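The attitude part of the geometric controller in (8)-(9b) can be sketched as follows; the gain structure and frame conventions here are assumptions in the spirit of [19], not the paper's exact control law:

```python
import numpy as np

def hat(w):
    """Map a vector in R^3 to a skew-symmetric matrix in so(3)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def vee(W):
    """Inverse of hat: extract the vector from a skew-symmetric matrix."""
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

def attitude_control(R, Omega, R_d, Omega_d, J, k_R, k_Omega):
    """Geometric attitude moment law (sketch, cf. [19]).

    R, R_d: current and desired rotation matrices;
    Omega, Omega_d: body angular velocities; J: inertia matrix.
    """
    e_R = 0.5 * vee(R_d.T @ R - R.T @ R_d)     # orientation error, as in (8a)
    e_Omega = Omega - R.T @ R_d @ Omega_d      # angular velocity error, (8b)
    # PD feedback plus the gyroscopic feedforward term.
    return -k_R * e_R - k_Omega * e_Omega + np.cross(Omega, J @ Omega)
```

At the desired attitude and angular velocity both errors vanish and the commanded moment reduces to the gyroscopic term.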

## IV State Estimation

### IV-A State Definition

Both control and detection rely on accurate estimates of the states of the uav and its target. It is assumed that the uav is only equipped with an imu, which provides linear acceleration and angular velocity readings, and a set of stereo cameras. The camera images provide bearing measurements to three classes of objects: the target, known landmarks (which can be obtained, e.g., through a previous mapping session), and unknown features. To deal with the limited computational resources on the uav, and because low latency is required, we extend a popular filtering-based vins solution, the msckf [8], to estimate the uav and target states, and the transformation between the uav and map frames, through fusion of inertial (imu) and visual (camera) data.

Typically within a visual-inertial localization framework, the state of the uav, parameterized by that of the imu, is represented by an element of a product manifold involving the group of unit quaternions. This state representation for the uav’s imu at a given time step, involving the imu’s global frame and the imu’s local frame at that step, can be written explicitly as

(10)

which includes the unit quaternion parameterizing the rotation from the global frame to the local imu frame, the gyro bias, the global velocity, the accelerometer bias, and the global position, respectively. The convention for quaternions and their corresponding rotation matrices is that the left subscript denotes the starting frame of the rotation, while the left superscript denotes the end frame. When used with vector quantities, the left superscript indicates the frame in which the vector is represented.

The state of the target is parameterized using a constant derivative model of a given order, stacking the target’s position and its derivatives up to that order.

Static quantities also need to be estimated. The first of these is a set of stochastic clones of the imu poses at past imaging times.

These clones are maintained as per the msckf framework, wherein bearing measurements to unknown features are transformed into relative constraints between historical poses (see Section IV-C). The second static term is the transformation between the imu’s “global” frame and the frame of the prior map. This term is required because the estimator initializes its own frame during startup without any knowledge of its pose relative to the prior map. With the prior map’s origin expressed in the global frame, the relative transformation between the frames is parameterized by this origin and the relative orientation.

The full state vector of the system at a given time step thus comprises the imu state, the target state, and the static states.

It is important to note that the true state does not belong to a vector space, making its direct estimation difficult with standard techniques. Instead, as in many recent vins algorithms, these quantities are estimated indirectly by performing filtering on a minimal-representation error-state vector. The relationship between the true state, the estimated state, and the error state is defined by a generalized update operation: for vector terms, this operation is simple addition of the error, while quaternion errors are applied through quaternion multiplication [20].
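The generalized update operation can be illustrated for the quaternion block of the error state; the following sketch assumes the JPL quaternion convention with q = [x, y, z, w], and the small-angle error parameterization is an assumption consistent with common msckf practice:

```python
import numpy as np

def quat_mult(q, p):
    """Quaternion product in the JPL convention, q = [x, y, z, w]."""
    qv, qw = q[:3], q[3]
    pv, pw = p[:3], p[3]
    vec = qw * pv + pw * qv - np.cross(qv, pv)
    sca = qw * pw - np.dot(qv, pv)
    return np.append(vec, sca)

def boxplus(q_hat, dtheta):
    """Apply a small-angle error dtheta in R^3 to a quaternion estimate.

    q = dq (x) q_hat, with dq approximately [dtheta/2, 1], renormalized
    so the result stays on the unit sphere.
    """
    dq = np.append(0.5 * dtheta, 1.0)
    dq /= np.linalg.norm(dq)
    return quat_mult(dq, q_hat)
```

A zero error leaves the estimate unchanged, and the update always returns a unit quaternion, which is precisely why filtering is done on the minimal error state rather than on the quaternion directly.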

A working assumption is that the prior map and global frame align along their z-direction [21], due to the observability of orientation in vins up to rotations about the gravity vector. As such, the transformation only has four degrees of freedom: a relative position between the respective origins and a relative yaw. Parameterizing the relative orientation using a single degree-of-freedom quaternion, representing a rotation about the common gravity vector, allows expressing the relative orientation state compactly.

All updates to this quaternion are achieved by further rotations about the z-axis.

### IV-B Filter Propagation

As the uav moves through the environment, the imu’s gyroscope and accelerometer collect measurements relating to the evolution of the imu state. The local angular velocity and linear acceleration measurements equal the true local angular velocity and linear acceleration, corrupted by the respective biases and by continuous-time white Gaussian noise vectors.

Measurement biases are modeled as continuous-time random walks, driven by Gaussian white noise. The dynamics of the state can then be expressed compactly,

while the target, driven by a Gaussian noise vector, is assumed to evolve according to the constant derivative model,

giving rise to a linear target evolution model involving a system matrix of all zeros except for an identity in its last block entry, together with the target state’s noise Jacobian.

Applying the expectation operator on these evolution equations yields the dynamics of the state estimate (the hat notation for estimates should not be confused with the slightly wider differential-geometric wedge mapping).

These equations allow propagating the state estimate by analytically solving the differential equations across the measurement time interval.

In order to quantify the covariance of the propagated estimates, the state equations are linearized about the current estimates. With the Jacobians of the imu state and noise dynamics, the stacked vector of imu noise, and the dimensions of the target and static states, this linearized dynamics takes the form

(11)

Let the state-transition matrices of the imu and target between consecutive measurement times be computed based on (11), and let the noise covariances for the imu and target state evolution be associated with a discrete-time noise characterization [22, 20]. After defining the following two matrices,

the propagation of the covariance of the state estimator can be compactly expressed in block form.
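A sketch of this block-wise covariance propagation follows, assuming a state ordering of [imu, target, static]; the ordering, dimensions, and names are illustrative assumptions:

```python
import numpy as np

def propagate_covariance(P, Phi_I, Phi_T, Q_I, Q_T, dim_I, dim_T):
    """Block covariance propagation, Section IV-B sketch.

    The imu and target blocks evolve with their own state-transition
    matrices and discrete-time noise covariances; the static states
    (clones, map transform) are unchanged, so their block of the full
    transition matrix is the identity with zero process noise.
    """
    n = P.shape[0]
    Phi = np.eye(n)
    Phi[:dim_I, :dim_I] = Phi_I
    Phi[dim_I:dim_I + dim_T, dim_I:dim_I + dim_T] = Phi_T
    Q = np.zeros((n, n))
    Q[:dim_I, :dim_I] = Q_I
    Q[dim_I:dim_I + dim_T, dim_I:dim_I + dim_T] = Q_T
    # Standard EKF propagation: P <- Phi P Phi^T + Q.
    return Phi @ P @ Phi.T + Q
```

Because the static block of the transition matrix is the identity, the covariance of the clones and map transform only changes through its cross-correlations with the propagated blocks.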

### IV-C Filter Update

Let the (fixed) rotation matrix and displacement between the imu and the camera be given, and consider the pixel measurement associated with a 3D feature; set

(12)

Expand and define the map

(13)

The pixel measurement is assumed corrupted by noise. The measurement projected into the camera associated with a given timestep is generated by a measurement function.

This function transforms the 3D position of the feature into the camera’s local frame, and then projects it onto the image plane. Similarly, the projection of the target into the plane can be described by replacing the unknown feature position in (13) with the target’s position.
A similar procedure is followed for a known landmark (assumed perfectly known, with no noise; to include such noise in the system, one could use the computationally efficient Cholesky-Schmidt-Kalman filter proposed in [21]), expressed in the (inertial) map frame and transformed to the current camera frame,

resulting in a projected measurement

These three different bearing measurement types define the filter updates. For the target and map localization measurements, a standard Extended Kalman Filter (ekf) update can be utilized, since all the quantities involved in the measurement are contained in the state.

At this stage, in order to use the unknown-feature bearing measurements, linear marginalization through the msckf update step [8] is performed, since a standard ekf storing all the measured features in the state vector would otherwise lead to unbounded computation. Let the measurement Jacobians with respect to the estimated state and the unknown feature be given, together with the corresponding error states and the stacked measurement noise. Given a vector of pixel measurements associated with a feature that has been tracked across the window, and the corresponding stacked measurement generation function, the linearized measurement residual is expressed as the sum of the state and feature Jacobians acting on the respective errors, plus noise.

The linearization point for the feature position is found through triangulation, using the corresponding bearing measurements and the current state estimates. By performing a QR decomposition on the feature Jacobian, it is possible to define a matrix whose columns span its left nullspace. With the covariance of the original stacked measurement given,

then multiplying through by the transpose of this nullspace basis removes the dependency on the feature errors.

The msckf therefore creates a new residual, with a corresponding measurement Jacobian, that does not require storing features in the state vector, as the transformed measurement relates only to the current imu state and the clones in the window.
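The nullspace projection at the core of this msckf update can be sketched directly; the matrix names below are assumed for illustration:

```python
import numpy as np

def msckf_nullspace_project(H_x, H_f, r, R):
    """Project a linearized feature measurement onto the left nullspace
    of the feature Jacobian H_f, removing the dependence on the feature
    error (msckf update [8]).

    H_x: Jacobian w.r.t. the estimated state; r: stacked residual;
    R: covariance of the stacked measurement noise.
    Returns the transformed (H_x', r', R').
    """
    # Full QR of H_f: the trailing columns of Q span its left nullspace.
    Q, _ = np.linalg.qr(H_f, mode='complete')
    N = Q[:, H_f.shape[1]:]
    return N.T @ H_x, N.T @ r, N.T @ R @ N
```

After the projection, the transformed residual depends only on the state error, so a standard ekf update can be applied without augmenting the state with the feature.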

### IV-D State Initialization

#### IV-D1 Map Initialization

As the filter does not start with a prior estimate of the map transformation parameters, it is important to initialize these quantities. In particular, bearing measurements to known landmarks are collected until the transformation between the global and prior-map frames can be estimated. An estimate of the landmarks in the imu’s global frame can be recovered through multi-view triangulation. Using the positions of the landmarks expressed in both frames, the 6 DOF relative pose is estimated [23], from which the relative position and yaw are extracted. With the stacked residual associated with these bearing measurements and the stacked measurement noise, the linearized system for the landmarks’ bearing measurements is built using this estimate

(14)

The Jacobians have been separated with respect to the already-initialized state and the new map parameters, respectively. Using delayed initialization, these new map parameters can be added to the state. All future bearing measurements to known landmarks can then be processed using a standard ekf update.

#### IV-D2 Target Initialization

The target is initialized in a similar manner, by collecting bearing measurements starting from the first time that the target is seen. The quantities involved are the initial derivatives of the target’s position, the position of the uav’s measuring camera (left or right) in the global frame at each measurement time, the depth of the target in each image, the measured bearing vector from the uav to the target expressed in the global frame, and the time elapsed since the initial target measurement. Assuming a constant derivative model over the interval, each measurement time yields a constraint.

By stacking all the relevant camera poses, collecting the initial target state and the vector of target depths as unknowns, and encoding the constraints in a matrix, one can separate out the unknowns and obtain a least-squares solution.
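A sketch of this least-squares initialization under a constant-velocity model; the per-image constraint p_cam_j + d_j b_j = p0 + t_j v and all variable names are assumptions consistent with the description above:

```python
import numpy as np

def init_target(cam_positions, bearings, times):
    """Recover the initial target position/velocity and per-image depths
    from global-frame bearing measurements (Section IV-D2 sketch).

    Constraint per image j:  p_cam_j + d_j * b_j = p0 + t_j * v
    Unknowns: p0 (3), v (3), and one depth d_j per image; with J images
    this gives 3J equations in 6 + J unknowns, solvable for J >= 3.
    """
    J = len(times)
    A = np.zeros((3 * J, 6 + J))
    rhs = np.zeros(3 * J)
    for j, (p_c, b, t) in enumerate(zip(cam_positions, bearings, times)):
        rows = slice(3 * j, 3 * j + 3)
        A[rows, 0:3] = np.eye(3)       # coefficient of p0
        A[rows, 3:6] = t * np.eye(3)   # coefficient of v
        A[rows, 6 + j] = -b            # coefficient of depth d_j
        rhs[rows] = p_c
    x, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return x[0:3], x[3:6], x[6:]       # p0, v, depths
```

With generic camera motion the stacked system is full rank, and propagating the recovered state gives the target estimates at each imaging time used for delayed initialization.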

Propagating this initial state yields estimates of the target at each of the measuring times. In order to add the target states into the filter, the motion model is treated as a measurement constraining the target states between two consecutive times.

The target evolution and target bearing measurements are stacked into a single vector and used to perform delayed initialization as in (14). The filter will then contain estimates for the target at each of the imaging times, and so all but the active target state are marginalized.

## V Simulations

### V-A Estimation Validation

As a significant new addition, the estimation module of the system is first validated using simulated sensors from the ROS/Gazebo package RotorS [24], without incorporating motion planning or the in-house developed controllers. In a simulated environment, a Firefly uav equipped with a forward-facing vi-sensor executes a sinusoidal trajectory of approximately 60 m at a speed of 1 m/s. A Pelican uav, flying in front of the Firefly at constant velocity, serves as the target, and the estimator assumes this target to follow a constant velocity motion model. Unknown features are simulated along the walls and floor of the hallway (Fig. 1). This test is performed without known landmarks, in order to show that the proposed estimator also works in unmapped environments. Over a path of length 58 m, the robot position estimate has a root mean square error (rmse) and ending error of 0.2379 m and 0.55 m, respectively, while the target position rmse and ending error are 0.2835 m and 0.52 m (Fig. 1).

### V-B System Validation

The workspace in which the simulated detection scenario evolves is shown in Fig. 2. There are three (interior) obstacles in this workspace: two houses and a radio tower (not to scale). The workspace boundary (not shown in Fig. 2) is modeled as a squircle (rounded box) centered in the workspace, with all dimensions in m; the interior obstacles are also modeled as squircles with dimensions equal to their bounding boxes. Figure 2 shows a section of the constructed navigation function for a given target location. (The variation of the function in the vertical direction for the same target location can be seen in the submitted video.) The navigation function parameters were tuned for this setup. The gradient of the function gives a desired velocity orientation for the uav, while the yaw angle is chosen so that the uav always faces the estimated target location.

A dense set of 1550 unknown features and a sparse set of 160 known landmarks are randomly placed over different surfaces in the environment. The estimator subscribes to the ground-truth uav and target odometry, as well as the noisy imu at 200 Hz, while publishing the resulting uav and target estimates at the imu rate. Bearing measurements are simulated at a rate of 20 Hz, and are corrupted by simulated noise following a Gaussian distribution with one pixel standard deviation. These measurements are then processed using the proposed algorithm with an msckf window size of 8. Occlusions are also simulated by checking whether each bearing ray intersects any of the obstacle boundaries, which are approximated as rectangular boxes.

In this scenario, a uav (Firefly) equipped with a downward-facing vi-sensor intercepts a target (realized by a Turtlebot) assumed to carry an isotropic Cf-252 neutron source.
Radiation emitted by this source is assumed to be picked up by a Domino thermal neutron detector (Radiation Detection Technologies Inc., http://radectech.com/products/rdt-domino-v5-4) mounted on the uav.
The radioactivity is simulated using the thinning algorithm [25], benchmarked against an available μCi (micro-Curie) source, to guide the simulated realization of higher-activity sources.
(The activity of a μCi source, 1.5 m away from the target, essentially blends completely into background.)
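The thinning algorithm [25] used to simulate the radioactivity admits a compact sketch (in the Lewis-Shedler style); the function names here are illustrative:

```python
import random

def thinning(rate, rate_max, T, rng=random):
    """Simulate an inhomogeneous Poisson process on [0, T] by thinning.

    Candidate events are drawn from a homogeneous process of rate
    rate_max (which must dominate rate(t) everywhere); each candidate
    at time t is kept with probability rate(t) / rate_max.
    """
    events, t = [], 0.0
    while True:
        t += rng.expovariate(rate_max)  # next candidate arrival
        if t > T:
            return events
        if rng.random() <= rate(t) / rate_max:
            events.append(t)
```

When the true rate equals the dominating rate, every candidate is kept and the procedure reduces to sampling a homogeneous Poisson process.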

The case of Maneuver-1 in the companion video (https://www.youtube.com/watch?v=zur04_avhJg) illustrates the motion planning approach, while Maneuver-2 showcases a sharp turn by the uav to avoid an obstacle.
The results reported below refer exclusively to Maneuver-1.
There, the target moves in a straight line with a velocity of 1 m/s and the uav intercepts it with a maximum speed of 6 m/s.
The motion planning process consists of three phases:
(i) hovering at 2 m using a position controller to initialize the tracking,
(ii) target chase using the navigation function based planner and velocity controller, and
(iii) hovering again using the position controller once the target crosses the detection area.

The uav starts chasing the target at 6 m/s, and eventually follows the target maintaining a constant distance of about 0.7 m. The true and estimated relative distances between the uav and the target are shown in Fig. 3. The error between the two eventually converges to approximately 15–20 mm. Figure 3 also shows the x, y, and z components of the error between the estimated and ground-truth positions of the uav and the target. The estimation error in the target’s position starts out large, due to initialization at a far distance, but eventually converges to acceptable values. The uav state estimation error, thanks to access to highly informative known-landmark bearing measurements, remains bounded within 10 cm over the path.

Figure 4 presents the desired and achieved velocities of the uav. The fluctuations in achieved velocities at the start of phases (i) and (iii) are due to commanding the uav to hover. Figure 4 also shows the x, y, and z components of the trajectories of the uav and the target during phase (ii), which have been used in the subsequent radiation detection calculations.

Figure 4 justifies tightly coupling state estimates to the target tracking as well as using a navigation function based planner.
It presents the variation of the bound on the probability of missed detection with the radius of the safety bubble around the target, for a given bound on the probability of false alarm. Multiple trajectories during the chase phase were generated with different safety bubble radii around the target to produce Fig. 4. Color-coded lines show bounds calculated assuming three different source strengths (in μCi), with their corresponding perceived activities. (The source activity is calculated assuming a fixed number of neutrons per Curie per second emitted by a Cf-252 source; see http://www.logwell.com/tech/nuclear/Californium-252.html.)

The background radioactivity and the detector’s radiation cross-section used in the calculations were fixed, and the total time allocated to make the decision was capped. A bound on the probability of missed detection equal to 1 indicates the inability to make any accurate decision. It can be seen that the uav needs to be within a 1–1.5 m radius of the target in order to detect a low-intensity source using commercial-off-the-shelf (cots) detectors, which motivated the design of the proposed approach.

## VI Conclusions and Future Work

An integrated motion-planning framework that tightly couples visual-inertial navigation and target tracking with a 3D navigation function-based planner and a geometric controller is shown to be appropriate for uav-based radiation detection. Such an integrated system offers close tracking for accurate radiation detection while avoiding collisions with obstacles in the environment. Future effort will be directed towards extending this methodology to more realistic and efficient sensing configurations, such as monocular camera deployments with noisy prior landmarks, handling more complicated target motion models, experimentally validating the system, and investigating the effect of navigation function tuning on the uav maneuvers.

## References

- [1] J. Sun and H. G. Tanner, “Constrained decision-making for low-count radiation detection by mobile sensors,” Auton. Robots, vol. 39, no. 4, pp. 519–536, 2015.
- [2] C. D. Pahlajani, J. Sun, I. Poulakakis, and H. G. Tanner, “Error probability bounds for nuclear detection: Improving accuracy through controlled mobility,” Automatica, vol. 50, no. 10, pp. 2470–2481, 2014.
- [3] R. J. Nemzek, J. S. Dreicer, D. C. Torney, and T. T. Warnock, “Distributed sensor networks for detection of mobile radioactive sources,” IEEE Transactions on Nuclear Science, vol. 51, no. 4, pp. 1693–1700, Aug 2004.
- [4] G. Huang, M. Kaess, and J. Leonard, “Consistent unscented incremental smoothing for multi-robot cooperative target tracking,” Robotics and Autonomous Systems, Sep. 2014, (In press).
- [5] F. M. Mirzaei, A. I. Mourikis, and S. I. Roumeliotis, “On the performance of multi-robot target tracking,” in Proc. of the IEEE International Conference on Robotics and Automation, Rome, Italy, Apr. 10–14, 2007, pp. 3482–3489.
- [6] K. Zhou and S. Roumeliotis, “Multirobot active target tracking with combinations of relative observations,” IEEE Transactions on Robotics, vol. 27, no. 4, pp. 678 –695, Aug. 2011.
- [7] J. Chen, T. Liu, and S. Shen, “Tracking a moving target in cluttered environments using a quadrotor,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct 2016, pp. 446–453.
- [8] A. I. Mourikis and S. I. Roumeliotis, “A multi-state constraint Kalman filter for vision-aided inertial navigation,” in Proceedings of the IEEE International Conference on Robotics and Automation, Rome, Italy, Apr. 10–14, 2007, pp. 3565–3572.
- [9] M. Li and A. I. Mourikis, “Online temporal calibration for Camera-IMU systems: Theory and algorithms,” International Journal of Robotics Research, vol. 33, no. 7, pp. 947–964, Jun. 2014.
- [10] D. Kottas, K. Wu, and S. Roumeliotis, “Detecting and dealing with hovering maneuvers in vision-aided inertial navigation systems,” in Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on, Nov. 2013, pp. 3172–3179.
- [11] J. Hesch, D. Kottas, S. Bowman, and S. Roumeliotis, “Consistency analysis and improvement of vision-aided inertial navigation,” IEEE Transactions on Robotics, vol. PP, no. 99, pp. 1–19, 2013.
- [12] J. Thomas, J. Welde, G. Loianno, K. Daniilidis, and V. Kumar, “Autonomous flight for detection, localization, and tracking of moving targets with a small quadrotor,” IEEE Robotics and Automation Letters, vol. 2, no. 3, pp. 1762–1769, July 2017.
- [13] D. Mellinger and V. Kumar, “Minimum snap trajectory generation and control for quadrotors,” in Proc. of the IEEE International Conference on Robotics and Automation, Shanghai, China, May 9–13, 2011, pp. 2520–2525.
- [14] S. Karaman and E. Frazzoli, “Optimal kinodynamic motion planning using incremental sampling-based methods,” in 49th IEEE Conference on Decision and Control (CDC), Dec 2010, pp. 7681–7687.
- [15] D. Zhou and M. Schwager, “Vector field following for quadrotors using differential flatness,” in 2014 IEEE International Conference on Robotics and Automation (ICRA), May 2014, pp. 6567–6572.
- [16] G. P. Roussos, D. V. Dimarogonas, and K. J. Kyriakopoulos, “3D navigation and collision avoidance for a non-holonomic vehicle,” in 2008 American Control Conference, June 2008, pp. 3512–3517.
- [17] E. Rimon and D. E. Koditschek, “Exact robot navigation using artificial potential functions,” IEEE Transactions on Robotics and Automation, vol. 8, no. 5, pp. 501–518, Oct 1992.
- [18] C. Li, “Star-world navigation functions for convergence to a time-varying destination manifold,” Master’s thesis, University of Delaware, 2017.
- [19] T. Lee, M. Leoky, and N. H. McClamroch, “Geometric tracking control of a quadrotor UAV on SE(3),” in 49th IEEE Conference on Decision and Control (CDC), Dec 2010, pp. 5420–5425.
- [20] N. Trawny and S. I. Roumeliotis, “Indirect Kalman filter for 3D attitude estimation,” University of Minnesota, Dept. of Comp. Sci. & Eng., Tech. Rep., Mar. 2005.
- [21] R. C. DuToit, J. A. Hesch, E. D. Nerurkar, and S. I. Roumeliotis, “Consistent map-based 3D localization on mobile devices,” in 2017 IEEE International Conference on Robotics and Automation (ICRA), May 2017, pp. 6253–6260.
- [22] P. S. Maybeck, Stochastic Models, Estimation and Control, ser. Mathematics in Science and Engineering. London: Academic Press, 1982, vol. 141-2.
- [23] B. K. P. Horn, “Closed-form solution of the absolute orientation using quaternions,” J. Optical Soc. Am. A, vol. 4, pp. 629–642, Apr. 1987.
- [24] F. Furrer, M. Burri, M. Achtelik, and R. Siegwart, Robot Operating System (ROS): The Complete Reference (Volume 1). Cham: Springer International Publishing, 2016, ch. RotorS—A Modular Gazebo MAV Simulator Framework, pp. 595–625.
- [25] R. Pasupathy, “Generating nonhomogeneous Poisson processes,” in Wiley Encyclopedia of Operations Research and Management Science. Wiley, 2009.