# Vision-based Real-Time Aerial Object Localization and Tracking for UAV Sensing System

Yuanwei Wu, Yao Sui,  and Guanghui Wang,  Y. Wu, Y. Sui, and G. Wang are with the Department of Electrical Engineering and Computer Science, University of Kansas, 1520 West 15th Street, Lawrence, KS 66045. Email: ghwang@ku.edu, wuyuanwei2010@gmail.com.
###### Abstract

The paper focuses on the problem of vision-based obstacle detection and tracking for unmanned aerial vehicle navigation. A real-time object localization and tracking strategy from monocular image sequences is developed by effectively integrating the object detection and tracking into a dynamic Kalman model. At the detection stage, the object of interest is automatically detected and localized from a saliency map computed via the image background connectivity cue at each frame; at the tracking stage, a Kalman filter is employed to provide a coarse prediction of the object state, which is further refined via a local detector incorporating the saliency map and the temporal information between two consecutive frames. Compared to existing methods, the proposed approach does not require any manual initialization for tracking, runs much faster than the state-of-the-art trackers of its kind, and achieves competitive tracking performance on a large number of image sequences. Extensive experiments demonstrate the effectiveness and superior performance of the proposed approach.

Salient object detection; visual tracking; Kalman filter; object localization; real-time tracking.

## I Introduction

In the last two decades, we have seen rapid growth in the applications of unmanned aerial vehicles (UAVs). In the military, UAVs have been demonstrated to be an effective mobile platform in future combat scenarios. In civil applications, numerous UAV platforms have mushroomed and been applied to surveillance, disaster monitoring and rescue [9], package delivery [1], and aerial photography [22]. A number of companies are developing their own UAV systems, such as Amazon Prime Air [1], Google’s Project Wing [11], and DHL’s Parcelcopter [8]. To increase flight safety, a UAV must be able to adequately sense and avoid other aircraft or intruders during its flight.

The ability of sense and avoid (SAA) enables UAVs to detect potential collision threats and make the necessary avoidance maneuvers. This technique has attracted significant attention in recent years. Among all available approaches, vision-based SAA systems [22, 25] are becoming more and more attractive since cameras are lightweight and low-cost and, most importantly, provide richer information than other sensors. A successful SAA system should be able to automatically detect and track obstacles. The study of these problems, a central theme in computer vision, has been active for decades and has achieved great progress.

Salient object detection in computer vision is interpreted as the process of computing a saliency map of a scene that highlights the visually distinct regions and suppresses the background. Most salient object detection methods rely on assumptions about the properties of the objects and the background. The most widely used assumption is contrast prior [5, 14, 15], which assumes that the appearance contrast between the objects and the background is very high. Several recent approaches exploit the image background connectivity prior [45, 40], which assumes that background regions are usually connected to the image boundary. However, those methods lack the capability to utilize the contextual information between consecutive frames of an image sequence.

On the other hand, given the position of the object of interest at the first frame, the goal of visual tracking is to estimate the trajectory of the object in every frame of an image sequence. The tracking-by-detection methods have become increasingly popular for real-time applications [42] in visual tracking. The correlation filter-based trackers have attracted more attention in recent years due to their high-speed performance [4]. However, those conventional tracking methods [42, 41, 7, 20, 6, 44, 12, 32, 30, 19, 43] require manual initialization with the ground truth at the first frame. Moreover, they are sensitive to initialization variations caused by scale and position errors, and return useless information once they fail during tracking [36].

Combining a detector with a tracker is a feasible solution for automatic initialization [2]. The detector, however, needs to be trained with a large number of training samples, while prior information about the object of interest is usually not available in advance. In [24], Mahadevan et al. proposed a saliency-based discriminative tracker with automatic initialization, which builds a motion saliency map using optical flow. This technique, however, is computationally intensive and not suitable for real-time applications. Some recent techniques on salient object detection and visual tracking [23, 13] have achieved superior performance by using deep learning. However, these methods require a large number of samples for training.

In [40], Zhang et al. proposed a fast salient object detection method based on the minimum barrier distance transform. Since the saliency map effectively captures the spatial information of the distinct objects in a scene, it enables us to improve the localization accuracy of the salient objects during tracking. In this paper, we propose a scale-adaptive object tracking framework that integrates two complementary processes: salient object detection and visual tracking. A Kalman filter is employed to predict a coarse location of the object of interest, and a detector is then used to refine the location. The optimal state of the object in each frame is estimated through a recursive process of prediction, refinement, and correction.

The proposed approach has been compared with the state-of-the-art detector and trackers on a sequence with challenging situations, including scale variation, rotation, illumination change, and out-of-view and re-appearance. As shown in Fig. 1, for object detection, the single-view saliency detection algorithm (MB+) [40] is not able to provide high-quality saliency maps for the image sequence because it does not manage to use the contextual information between consecutive frames; from the tracking perspective, the existing trackers are not capable of handling out-of-view and re-appearance challenges.

In summary, our contributions are threefold: 1) the proposed algorithm integrates a saliency map into a dynamic model and adopts the target-specific saliency map as the observation for tracking; 2) we develop a tracker with automatic initialization for a UAV sensing system; and 3) the proposed technique achieves better performance than the state-of-the-art trackers, as demonstrated by extensive experimental evaluations.

The remainder of this paper is organized as follows: related work is briefly reviewed in Section II; in Section III, the proposed approach is discussed thoroughly; in Section IV, we present quantitative and qualitative evaluation results as well as some limitations; finally, the paper is concluded in Section V.

## II Related Work

Salient object detection and visual tracking play important roles in many computer vision applications, such as traffic monitoring, surveillance, video understanding, face recognition, and human-computer interaction [37].

The task of salient object detection is to compute a saliency map and segment an accurate boundary of the salient object. For natural images, assumptions on the background and the object have been shown to be effective for salient object detection [5, 45]. One of the most widely used assumptions is called contrast prior, which assumes a high appearance difference between the object and the background. Region-based salient object detection has become increasingly popular with the development of superpixel algorithms [3]. In addition to the contrast prior and region-based methods, several recent approaches exploit boundary connectivity [40, 45]. Wei et al. [33] proposed a geodesic saliency detection method based on contrast, image boundary, and background priors, in which the salient object is extracted by finding the shortest path to a virtual background node. Zhu et al. [45] formulated saliency detection as an optimization problem and solved it by a combination of superpixel and background measurements. Cheng et al. [5] computed the global contrast using the histogram and color statistics of input images. These state-of-the-art saliency detection methods achieve pixel-level resolution. Readers may refer to [3] for a comprehensive review of salient object detection.

The goal of visual tracking is to estimate the boundary and trajectory of the object in every frame of an image sequence. Designing an efficient and robust tracker is a critical issue, especially in challenging situations such as illumination variation, in-plane and out-of-plane rotation, scale variation, occlusion, and background clutter [36]. Over the past decades, various tracking algorithms have been proposed to cope with these challenges. According to the models adopted, these approaches can generally be classified into generative models [27, 31, 38, 39] and discriminative models [42, 24, 16]. Ross et al. [27] applied incremental subspace learning to visual tracking, assuming that the obtained temporal targets reside in a low-dimensional subspace. Sui et al. [31] proposed a sparsity-induced subspace learning method that selects effective features to construct the target subspace. Yin et al. [38] proposed a hierarchical tracking method based on subspace representation and the Kalman filter. Yu et al. [39] introduced a large-scale fiber tracking approach based on the Kalman filter and group-wise thin-plate spline point matching.

The discriminative tracking-by-detection approaches have become increasingly popular in recent years. Zhang et al. [42] proposed a real-time tracker based on compressive sensing. Mahadevan et al. [24] proposed a saliency-based discriminative tracker, which learns the salient features within a Bayesian framework. Kalal et al. [16] introduced a long-term tracker that enables re-initialization in case of tracking failures.

In particular, the correlation filter-based discriminative tracking methods have attracted much attention and achieved significant progress [4]. Henriques et al. [12] proposed a tracker using kernelized correlation filters (KCF). Zhu et al. [44] extended the KCF to a multi-scale kernelized tracker in order to deal with the scale variation. Zhang et al. [41] proposed a tracker via dense spatio-temporal context learning. Danelljan et al. [6] introduced a discriminative tracker using a scale pyramid representation. Li et al. [20] proposed to tackle the scale variation by integrating different low-level features. Danelljan et al. [7] designed a tracker by adaptive extension of color attributes. Readers can refer to [37, 28] and the references therein for details about visual tracking.

## III The Proposed Approach

The proposed fast object localization and tracking (FOLT) algorithm is formulated within a robust Kalman filter framework [38, 35] to estimate the optimal state of the salient object from the saliency map in every frame of a given image sequence. Our tracking approach relies on a recursive process of prediction, object detection, and correction, as shown in Fig. 2. A linear dynamic model (with constant velocity) is employed to represent the transition of the motion state of the salient object in a scene [35]. The tracker is initialized on the first frame using the saliency map computed over the entire image. The motion state in each frame is predicted according to the motion states of the previously obtained objects of interest. Under the constraint of natural scenes, the prediction is not far from the ground truth [38]; however, it only provides a coarse state estimation (the “predicted bounding box” shown in Fig. 2) of the target location. We take this coarse prediction as an initial region for further estimation during object tracking. The refined target is marked by a bounding box on that frame according to its motion state. Finally, the Kalman gain and the a posteriori error covariance in the dynamic model are updated. The details of the prediction, object localization, and correction schemes are discussed in the following.

### III-A Dynamic model formulation

In the dynamic model, the object of interest is defined by a motion state variable $s_t = (x, y, \dot{x}, \dot{y}, w, h)^\top$, where $(x, y)$ denotes the center coordinates, $(\dot{x}, \dot{y})$ denotes its velocities, and $(w, h)$ denotes its width and height. The state at each frame is estimated using a linear stochastic difference equation, where the prediction noise $w_t$ is Gaussian distributed with covariance $Q$. The vector $s_t$ describes the motion state of the salient object. The state transition matrix $H$ evolves the state from the previous frame $t$ to the state at the current frame $t+1$. The vector $y_t$ denotes the observation (or measurement) in frame $t$. With the driving function removed from our model, the autoregressive model of the salient object in a frame is built on the following linear stochastic model

$$s_{t+1} = H s_t + w_t, \quad w_t \sim N(0, Q), \tag{1}$$
$$y_t = C s_t + v_t, \quad v_t \sim N(0, R), \tag{2}$$

where $C$ is the measurement matrix, and the measurement noise $v_t$ is Gaussian distributed with covariance $R$. The diagonal elements of the prediction noise covariance matrix $Q$ and the measurement noise covariance matrix $R$ represent the covariances of the size and position.
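To make the dynamic model concrete, the following sketch (plain Python, no external libraries) shows one plausible instantiation of the constant-velocity transition matrix $H$ and the measurement matrix $C$; the matrix entries and the sample state are illustrative assumptions, not the paper's exact parameter values.

```python
# Minimal sketch of the constant-velocity dynamic model in Eqs. (1)-(2).
# State s = [x, y, vx, vy, w, h]; the concrete matrix entries below are
# illustrative assumptions (one-frame time step, constant size).

def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Transition matrix H: position advances by one-frame velocity; size is constant.
H = [
    [1, 0, 1, 0, 0, 0],   # x' = x + vx
    [0, 1, 0, 1, 0, 0],   # y' = y + vy
    [0, 0, 1, 0, 0, 0],   # vx' = vx
    [0, 0, 0, 1, 0, 0],   # vy' = vy
    [0, 0, 0, 0, 1, 0],   # w' = w
    [0, 0, 0, 0, 0, 1],   # h' = h
]

# Measurement matrix C: the detector observes center position and size only.
C = [
    [1, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 1],
]

s = [100.0, 50.0, 2.0, -1.0, 40.0, 30.0]  # current state (illustrative)
s_pred = matvec(H, s)                      # a priori state prediction
y_pred = matvec(C, s_pred)                 # predicted observation
print(s_pred)  # [102.0, 49.0, 2.0, -1.0, 40.0, 30.0]
print(y_pred)  # [102.0, 49.0, 40.0, 30.0]
```

With this layout, the predicted bounding box used to seed the local detector is simply the position/size part of $H s_t$.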

In the $t$-th frame, given the observation $y_t$ and all the previously obtained states from the first to the $(t-1)$-th frame, the optimal motion state of the target in the $t$-th frame, denoted as $\hat{s}_t$, is predicted by maximizing the posterior probability $p(s_t \mid y_t, s_{1:t-1})$. To simplify the model, we adopt the Markov assumption, which states that the current state depends only on the previous state. Therefore, the objective function becomes

$$\hat{s}_t = \arg\max_{s_t} \; p(s_t \mid y_t, s_{t-1}). \tag{3}$$

Using Bayes' formula, the posterior probability becomes:

$$p(s_t \mid y_t, s_{t-1}) = \frac{p(y_t \mid s_t, s_{t-1})\, p(s_t \mid s_{t-1})}{p(y_t)}, \tag{4}$$

where the denominator $p(y_t)$ is a normalization constant representing the prior distribution of the observation $y_t$. The observation $y_t$ does not depend on the previous state $s_{t-1}$ given $s_t$, since $y_t$ is generated only by the state $s_t$. Thus equation (4) is reduced to:

$$p(s_t \mid y_t, s_{t-1}) \propto p(y_t \mid s_t)\, p(s_t \mid s_{t-1}), \tag{5}$$

where the observation model $p(y_t \mid s_t)$ measures the likelihood that the measurement $y_t$ corresponds to the target with motion state $s_t$. Finally, we formulate the objective function as

$$\hat{s}_t = \arg\max_{s_t} \; p(y_t \mid s_t)\, p(s_t \mid s_{t-1}). \tag{6}$$

The state estimation can be converted to the standard Kalman filter framework [34] when we assume that the state transition model and the observation model follow Gaussian distributions.

### III-B Object tracking

The object tracking step consists of two procedures: object localization in the search region, and inference of the target state after post-processing the saliency map.

#### III-B1 Object localization

A background prior, i.e., the image boundary connectivity cue [40], is applied to locate the object in each frame. However, the proposed localization method differs in two respects. First, by integrating the contextual information, the proposed approach is capable of localizing the salient object in both individual images and video sequences. Second, to leverage this cue, the saliency map, which represents the probability of a region in an image being a salient object or background, is updated locally based on the coarse prediction (the “predicted bounding box” shown in Fig. 3).

In this paper, we denote $I_t$ as the image of the $t$-th frame. First, on frame $t$, under the constraint of natural images [35], it is reasonable to define a search region, in which the salient object is guaranteed to exist, by expanding the predicted bounding box by a certain percentage. Next, the saliency map within the search region is updated by computing the minimum barrier distance [40, 29] with respect to a set of background seed pixels $B$ (see the illustration in Fig. 3), while the values of the pixels outside the search region are kept unchanged. Through these two steps, the position and scale of the object are estimated on frame $t$.

The image background connectivity cue assumes that background regions are normally connected to the background seed set $B$. In this paper, a path $p = \{p(1), p(2), \ldots, p(n)\}$ from pixel $p(1)$ to pixel $p(n)$ consists of a sequence of pixels, in which every two consecutive pixels are neighbors. Each pixel in a 2-D single-channel image is treated as a vertex, and neighboring pixels are connected by edges. In this work, we consider 4-adjacent neighbors, as illustrated in Fig. 3. For the image $I_t$, the cost of a path from a pixel to the background seed set $B$ is defined as the difference between the maximum and minimum intensity values along the path. The cost function is

$$F(p) = \max_{j=1}^{n} I_t(p(j)) - \min_{j=1}^{n} I_t(p(j)), \tag{7}$$

where $I_t(\cdot)$ denotes the intensity value of a pixel in frame $t$. The saliency map $D$ is obtained by minimizing the cost function $F$,

$$D(z) = \min_{p \in \Theta_{B,z}} F(p), \tag{8}$$

where $\Theta_{B,z}$ denotes the set of all possible paths from pixel $z$ to the background seed set $B$. This formulates the computation of the saliency map as the problem of finding the shortest path for each pixel in the image $I_t$, which can be solved by scanning each pixel with a Dijkstra-like algorithm. We denote $e(m, z)$ as the edge between two connected pixels $m$ and $z$, $p_m$ as the path assigned to pixel $m$, and $p_m(z)$ as the path connecting pixel $z$ to $p_m$ through the edge $e(m, z)$. Therefore, the cost of $p_m(z)$ is evaluated using

$$F(p_m(z)) = \max(U_t(m), I_t(z)) - \min(L_t(m), I_t(z)), \tag{9}$$

where $U_t$ and $L_t$ are matrices holding the highest and lowest pixel values along the path $p_m$, respectively; their initial values are identical to the image $I_t$. In the initial saliency map $D$, the region corresponding to the image boundary seed set $B$ is initialized with zeros, and the remaining pixels are initialized with infinity. Raster scanning passes and inverse-raster scanning passes are applied alternately to update the saliency map $D$, as shown in Fig. 3; the number of passes is fixed to a small value in this paper. In a raster scanning pass, each pixel in the search region is visited line by line, and the value at pixel $z$ is updated using its two already-visited neighbors (as illustrated in Fig. 3). The inverse-raster scanning pass applies the same update in the opposite order. The update rule of the saliency map is

$$D(z) \leftarrow \min\{D(z),\; F(p_m(z))\}. \tag{10}$$
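The raster-scan update of Eqs. (7)-(10) can be sketched as follows (plain Python on a toy grayscale image). Seeding the entire image border as background and fixing three passes are illustrative assumptions; the paper applies the same update only within the search region.

```python
# Compact sketch of the raster-scan minimum barrier distance (MBD) update
# of Eqs. (7)-(10), seeding every border pixel as background.
INF = float("inf")

def mbd_saliency(I, n_passes=3):
    h, w = len(I), len(I[0])
    # D: barrier distance; U/L: running max/min intensity along the best path.
    D = [[INF] * w for _ in range(h)]
    U = [[row[j] for j in range(w)] for row in I]
    L = [[row[j] for j in range(w)] for row in I]
    for i in range(h):            # image-boundary pixels are background seeds
        for j in range(w):
            if i in (0, h - 1) or j in (0, w - 1):
                D[i][j] = 0.0

    def relax(i, j, mi, mj):      # try extending the path from neighbor (mi, mj)
        if D[mi][mj] == INF:      # neighbor has no assigned path yet
            return
        cost = max(U[mi][mj], I[i][j]) - min(L[mi][mj], I[i][j])  # Eq. (9)
        if cost < D[i][j]:                                        # Eq. (10)
            D[i][j] = cost
            U[i][j] = max(U[mi][mj], I[i][j])
            L[i][j] = min(L[mi][mj], I[i][j])

    for k in range(n_passes):
        if k % 2 == 0:            # raster pass: top-left to bottom-right
            for i in range(h):
                for j in range(w):
                    if i > 0: relax(i, j, i - 1, j)
                    if j > 0: relax(i, j, i, j - 1)
        else:                     # inverse-raster pass: bottom-right to top-left
            for i in range(h - 1, -1, -1):
                for j in range(w - 1, -1, -1):
                    if i < h - 1: relax(i, j, i + 1, j)
                    if j < w - 1: relax(i, j, i, j + 1)
    return D

# A bright pixel surrounded by a dark background becomes salient (large D).
I = [[0, 0, 0, 0, 0],
     [0, 0, 9, 0, 0],
     [0, 0, 0, 0, 0]]
D = mbd_saliency(I)
print(D[1][2])  # 9: every path to the border must cross the full intensity barrier
```

Each pass only propagates from already-visited neighbors, which is why alternating scan directions converge quickly in practice.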

#### III-B2 Post-processing

Two efficient post-processing operations are applied to improve the quality of the saliency map and to segment the salient object in every frame. A threshold first transforms the saliency map into a binary image; then the tracking bounding box is extracted after dilation (see Fig. 2). A global threshold is not effective when the image has non-uniform illumination and lighting conditions; hence, we employ an adaptive threshold [10]. We denote $m_{ab}$ as the mean value of the set of pixels contained in a neighboring block centered at coordinates $(a, b)$ in the image. The local threshold is defined as

$$T_{ab} = m_{ab} - \lambda, \tag{11}$$

where $\lambda$ is a nonnegative offset. The segmented image is computed as

$$f(a,b) = \begin{cases} 1, & \text{if } g(a,b) \ge T_{ab} \\ 0, & \text{otherwise}, \end{cases} \tag{12}$$

where $g$ is the input saliency map image. Equation (12) is evaluated for all pixels in the image, and a threshold $T_{ab}$ is calculated at each position using the pixels in the neighboring block of $(a, b)$. Dilation is then applied to enhance the quality of the thresholded image $f$. The dilation of $A$ by $S$, denoted by $A \oplus S$, is defined as

$$A \oplus S = \{\, q \mid (\hat{S})_q \cap A \neq \emptyset \,\}, \tag{13}$$

where $S$ is a structuring element, $\hat{S}$ is its reflection, and $(\hat{S})_q$ is the translation of $\hat{S}$ by $q$. After dilation, the minimum bounding box of the extracted region gives the state estimation of the target. Fig. 4 shows examples on two sequences (motorcycle_011 and airplane_016). The tracking bounding box is estimated by passing the original image through the salient object detection and post-processing steps.
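The two post-processing operations can be sketched as follows (plain Python). The block size, offset, and cross-shaped structuring element are illustrative choices, not the paper's tuned parameters.

```python
# Sketch of the post-processing in Eqs. (11)-(13): local mean thresholding
# followed by binary dilation. Block size and offset are illustrative.

def adaptive_threshold(g, block=3, lam=1.0):
    """f(a,b) = 1 if g(a,b) >= mean(block) - lam, else 0 (Eqs. 11-12)."""
    h, w = len(g), len(g[0])
    r = block // 2
    f = [[0] * w for _ in range(h)]
    for a in range(h):
        for b in range(w):
            vals = [g[i][j]
                    for i in range(max(0, a - r), min(h, a + r + 1))
                    for j in range(max(0, b - r), min(w, b + r + 1))]
            T = sum(vals) / len(vals) - lam      # local threshold T_ab (Eq. 11)
            f[a][b] = 1 if g[a][b] >= T else 0
    return f

def dilate(A, S=((0, 0), (0, 1), (0, -1), (1, 0), (-1, 0))):
    """Binary dilation by a symmetric structuring element S (Eq. 13)."""
    h, w = len(A), len(A[0])
    out = [[0] * w for _ in range(h)]
    for a in range(h):
        for b in range(w):
            # out[a][b] is set if S placed at (a, b) overlaps a foreground pixel
            if any(0 <= a + di < h and 0 <= b + dj < w and A[a + di][b + dj]
                   for di, dj in S):
                out[a][b] = 1
    return out

seg = adaptive_threshold([[0, 0, 0], [0, 9, 0], [0, 0, 0]])
print(dilate(seg))  # [[0, 1, 0], [1, 1, 1], [0, 1, 0]]
```

The minimum bounding box of the nonzero pixels of the dilated image then gives the tracking box.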

### III-C Fast object localization tracking

In this section, we denote $G_t^-$ and $G_t$ as the a priori and a posteriori error covariance, respectively. In the correction stage of frame $t$, the posterior error covariance and the Kalman gain $K_t$ are updated as follows.

1. Compute the Kalman gain

$$K_t = G_t^- C^\top \left( C\, G_t^- C^\top + R \right)^{-1} \tag{14}$$
2. Update the estimate with the measurement

$$\hat{s}_t = \hat{s}_t^- + K_t \left( y_t - C \hat{s}_t^- \right) \tag{15}$$
3. Compute the a posteriori error covariance

$$G_t = (I - K_t C)\, G_t^- \tag{16}$$

In summary, through the recursive prediction, object localization, and correction, the salient object in an image sequence is automatically detected and tracked. The details of the fast object localization tracking are summarized in Alg. 1. The saliency map is updated over the entire image every few frames as a trade-off between accuracy and speed.
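The predict/correct recursion is easiest to see in a scalar (1-D) reduction with $H = C = 1$; the noise covariances below are arbitrary illustrative values, not the paper's settings.

```python
# Scalar illustration of the predict/correct recursion (Eqs. 14-16),
# with H = C = 1: a 1-D position smoothed by a Kalman filter.
def kalman_1d(measurements, Q=1e-3, R=0.25, s0=0.0, G0=1.0):
    s, G = s0, G0
    estimates = []
    for y in measurements:
        # predict: a priori state and error covariance
        s_prior, G_prior = s, G + Q
        # correct: Kalman gain, state update, a posteriori covariance
        K = G_prior / (G_prior + R)          # Eq. (14) with C = 1
        s = s_prior + K * (y - s_prior)      # Eq. (15)
        G = (1 - K) * G_prior                # Eq. (16)
        estimates.append(s)
    return estimates

est = kalman_1d([1.0, 1.2, 0.9, 1.1, 1.0])
print(est[-1])  # converges toward the measurement mean (~1.0)
```

In the full tracker, the measurement $y_t$ is the bounding box extracted from the locally updated saliency map rather than a raw sensor reading.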

## IV Experimental Evaluations

The proposed approach is implemented in C++ with OpenCV 3.0.0 on a PC with an Intel Xeon W3250 2.67 GHz CPU and 8 GB RAM. The dataset and source code of the proposed approach will be available on the author’s homepage. The proposed tracker is evaluated on 15 popular image sequences collected from [18, 21, 17, 36], with a total of over 6700 frames. The highest and lowest frame sizes are and , respectively. The dataset, from a tracking perspective, includes different scenarios with challenging situations, such as scale variation, occlusions, in-plane and out-of-plane rotations, illumination changes, and background interference, as shown in Table I. The image sequences without sky-region scenarios, i.e., Skyjumping_ce, motorcycle_006, surfing, Surfer, Skater, Sylvester, ball, and Dog, were selected to test the robustness and generality of the proposed approach in different scenarios. In each frame of these video sequences, we manually labeled the target with a bounding box, which is used as the ground truth in the quantitative evaluations.

In our implementation, input images are first resized so that the maximum dimension is 300 pixels. The transition state matrix and measurement matrix are fixed during the experiment. The diagonal values corresponding to the position (i.e., ) and scale (i.e., ) covariance in prediction noise covariance matrix and measurement covariance matrix are set to and , respectively. Some other parameters for all image sequences are as follows. The percentage value , the size of block is , the offset , and the structuring element is .

We compared the proposed approach with seven state-of-the-art trackers. The seven competing trackers are manually initialized at the first frame using the ground truth of the target object; once initialized, they automatically track the target object in the remaining frames. In contrast, the proposed tracker runs automatically from the first frame to the last. Three experiments are designed to evaluate trackers, as discussed in [36]: one-pass evaluation (OPE), temporal robustness evaluation (TRE), and spatial robustness evaluation (SRE). For TRE, we randomly select the starting frame and run a tracker to the end of the sequence. SRE initializes the bounding box in the first frame by shifting or scaling. As discussed in Section III, the proposed method automatically initializes the tracker and is not sensitive to spatial fluctuation. Therefore, we applied OPE and TRE in this section, using the same temporal randomization as in [36]; readers may refer to [36] for more details.

### IV-A Speed performance

For salient object detection, the fast state-of-the-art detector MB+ [40] attains a speed of 49 frames per second (fps). In contrast, the proposed method achieves 149 fps, three times faster than MB+, with better detection performance. For object tracking, the average speeds of the proposed tracker and the seven state-of-the-art competing trackers are tabulated in Table II. The average speed of our tracker is 141 fps, on the same level as the fastest tracker, KCF [12]; however, KCF adopts a fixed tracking box, which cannot reflect the scale changes of the target object during tracking. On average, our method is more than ten times faster than CT [42] and SAMF [20], five times faster than DSST [6] and CCT [44], and about two times faster than STC [41] and CN [7].

### IV-B Comparison with the state-of-the-art trackers

The performance of our approach is quantitatively evaluated following the metrics used in [36]. We present the results using precision, center location error (CLE), and success rate (SR). The CLE is defined as the Euclidean distance between the centers of the tracking and ground-truth bounding boxes. The precision is the percentage of frames whose CLEs are smaller than a threshold; following [36], the threshold is set to 20 pixels in our evaluations. A tracking result in a frame is considered successful if the overlap rate $\frac{|B_T \cap B_G|}{|B_T \cup B_G|}$ exceeds a threshold $t_0$, where $|B_T|$ and $|B_G|$ denote the areas of the bounding boxes of the tracking result and the ground truth, respectively. Thus, SR is defined as the percentage of frames whose overlap rates are greater than the threshold $t_0$, normally set to 0.5. We evaluate the proposed method against seven state-of-the-art trackers: CT, STC, CN, SAMF, DSST, CCT, and KCF.
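These metrics can be sketched as follows (plain Python; boxes are assumed to be (x, y, w, h) tuples with (x, y) the top-left corner, a convention adopted here for illustration).

```python
# Sketch of the evaluation metrics: center location error (CLE) and the
# bounding-box overlap (IoU) used for the success rate.
import math

def cle(box_t, box_g):
    """Euclidean distance between the two box centers."""
    cx_t, cy_t = box_t[0] + box_t[2] / 2, box_t[1] + box_t[3] / 2
    cx_g, cy_g = box_g[0] + box_g[2] / 2, box_g[1] + box_g[3] / 2
    return math.hypot(cx_t - cx_g, cy_t - cy_g)

def iou(box_t, box_g):
    """area(intersection) / area(union) of two axis-aligned boxes."""
    x1 = max(box_t[0], box_g[0])
    y1 = max(box_t[1], box_g[1])
    x2 = min(box_t[0] + box_t[2], box_g[0] + box_g[2])
    y2 = min(box_t[1] + box_t[3], box_g[1] + box_g[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = box_t[2] * box_t[3] + box_g[2] * box_g[3] - inter
    return inter / union if union > 0 else 0.0

def success_rate(tracks, truths, t0=0.5):
    """Fraction of frames whose overlap rate exceeds the threshold t0."""
    ok = sum(1 for bt, bg in zip(tracks, truths) if iou(bt, bg) > t0)
    return ok / len(truths)

print(iou((0, 0, 10, 10), (5, 0, 10, 10)))  # 50/150, i.e. about 0.333
```

Precision is computed analogously to `success_rate`, with `cle(bt, bg) < 20` as the per-frame test.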

The comparison results on the 15 sequences are shown in Table II. We present the results under the one-pass evaluation and the temporal robustness evaluation using the average precision, success rate, and CLE over all sequences. As shown in the table, the proposed method outperforms all seven competing trackers. In the one-pass evaluation, the proposed tracker obtains the best performance in CLE (14.5 pixels) and precision (0.83), which are 8.7 pixels lower and 0.17 higher than the second best tracker, CCT (23.2 pixels in CLE and 0.66 in precision). Meanwhile, in success rate, the proposed tracker achieves the best result, an improvement over the second best tracker, SAMF. Please note that, for the seven competing trackers, the average performance in TRE is higher than in OPE, while for the proposed tracker, the average precision and success rates in TRE are lower than those in OPE. One possible reason is that the proposed tracker tends to perform well on longer sequences, while the seven competing trackers work better on shorter ones [36].

We also report the comparison results of the one-pass evaluation against the seven competing trackers on all 15 video sequences in Table III and Table IV, respectively. Our approach obtains the best or second best performance on 14 of the 15 sequences in precision and 9 in success rate. Fig. 5 plots the average precision and success plots of the one-pass evaluation and the temporal robustness evaluation over all 15 sequences. In both evaluations, according to both precision and success rate, our approach significantly outperforms the seven competing trackers. In summary, the precision plot demonstrates that our approach is more robust than its counterparts in the experiments, and the success rate shows that our method estimates the scale changes of the target more accurately.

### IV-C Qualitative evaluation

In this section, we present qualitative comparisons of our approach with the seven competing trackers. The proposed approach is generic and can be applied to track any object of interest, including non-rigid and articulated objects. We present qualitative results of our tracker on eight representative image sequences from the dataset described in the previous section. We consider the target object to be in low resolution when more than one ground-truth bounding box contains fewer than 400 pixels. The eight image sequences are categorized into four groups based on their scenarios and tracking challenges, as shown in Table I.

The first group has clear sky as the background and includes three image sequences, Aircraft, airplane_001, and airplane_006, shown in the first to third rows of Fig. 6. All three image sequences exhibit scale variation and in-plane and out-of-plane rotations. The propeller plane in the image sequence Aircraft is in high resolution. The jet planes in the other two image sequences are in low resolution, which increases the difficulty of tracking. Moreover, the background near the jet plane in the image sequence airplane_006 has an appearance similar to the target. The competing trackers STC, SAMF, DSST, and CCT were designed to deal with scale variation, but they failed in these three image sequences: the predicted bounding box is either too large or too small, because their scaling strategy depends on a hard-coded scaling ratio that is not adaptive to rotation and scale variation of the target. In contrast, our tracker is based on a saliency map, which leads to an accurate localization of the salient object in each frame. It is therefore adaptive to scale and rotation variations, yielding more accurate estimates of both the scale and the position of the target object.

The second group, with the target object in high resolution, includes three image sequences, airplane_011, airplane_016, and big_2, illustrated in the fourth to sixth rows of Fig. 6. The three image sequences present scale variation and in-plane and out-of-plane rotation challenges. Moreover, there is background interference caused by the clouds in the image sequences airplane_016 and big_2; the interference is more severe in big_2 because the jet plane flies at a lower altitude. Only the proposed tracker tracks the flying propeller plane accurately, giving the minimum bounding box of the target object in the image sequence airplane_011. While most of the competing trackers are able to track the aircraft in the image sequences airplane_016 and big_2, only the proposed approach is capable of giving the optimal tracking bounding box in sky-region scenarios with cloud interference.

The target object in the third group is a person sky jumping, as shown in the seventh row of Fig. 6. There are small scale variations and large in-plane and out-of-plane rotations. Moreover, the clouds and the terrain of the land in the background interfere with the tracking. The competing trackers STC, SAMF, DSST, and CCT are designed to handle scale changes, but they failed in this image sequence because they cannot handle the significant appearance changes caused by rotating motions and fast scale variations. In contrast, our tracker is robust to large and fast scale variations.

The proposed tracker can also be used to track an object of interest on the ground with large scale variation and out-of-plane rotation, as shown in the eighth row of Fig. 6. Since the rider drives the motorcycle with large rotations, the competing trackers can only track the head of the motorcycle; the proposed tracker, however, is still able to give the minimum tracking bounding box of the object of interest throughout the sequence.

In summary, the proposed tracker has better performance than the seven competing trackers in handling large scale variation, in-plane and out-of-plane rotations with acute angle, and background cloud interference.

### IV-D Limitation

Although the proposed tracking approach outperforms its competitors in most experiments, it has a key limitation in handling occlusion. As shown in Fig. 7 for the image sequence airplane_005, the aircraft is partially or severely occluded by its ejected smoke and the clouds in the sky, which causes a tracking failure. The failure is automatically corrected by the tracker after a few frames, as also shown in Fig. 7. However, this limitation degrades the overall performance of the tracker.

## V Conclusion

In this paper, we have proposed an effective and efficient approach for real-time visual object localization and tracking, which can be applied to UAV navigation tasks such as obstacle sensing and avoidance. Our method integrates a fast salient object detector within the Kalman filtering framework. Compared to the state-of-the-art trackers, our approach not only initializes automatically but also achieves the fastest speed and better overall performance than the competing trackers.
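To make the detector-within-Kalman integration concrete, the following minimal Python sketch couples a constant-velocity Kalman filter over a bounding-box state with an external detection; the `KalmanBoxTracker` class, the state layout `[cx, cy, w, h, vx, vy]`, and the noise values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class KalmanBoxTracker:
    """Constant-velocity Kalman filter over a bounding-box state
    [cx, cy, w, h, vx, vy]. Hypothetical sketch: the detector supplies
    the measurement; when it fails, the tracker coasts on prediction."""

    def __init__(self, init_box, dt=1.0):
        cx, cy, w, h = init_box
        self.x = np.array([cx, cy, w, h, 0.0, 0.0])  # state estimate
        self.P = np.eye(6) * 10.0                    # state covariance
        self.F = np.eye(6)                           # motion model
        self.F[0, 4] = dt                            # cx += vx * dt
        self.F[1, 5] = dt                            # cy += vy * dt
        self.H = np.zeros((4, 6))                    # observe box only
        self.H[:4, :4] = np.eye(4)
        self.Q = np.eye(6) * 0.01                    # process noise
        self.R = np.eye(4) * 1.0                     # measurement noise

    def predict(self):
        # Coarse prediction of the object state for the next frame.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:4]

    def update(self, meas_box):
        # Refine the prediction with the detector's bounding box;
        # if the detector failed (e.g., occlusion), keep the prediction.
        if meas_box is None:
            return self.x[:4]
        z = np.asarray(meas_box, dtype=float)
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:4]
```

In use, the saliency-based detector would provide `meas_box` each frame; passing `None` on a missed detection lets the filter carry the object forward on its learned velocity, which mirrors the predict-then-refine loop described above.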

Although the proposed tracker performs very well on most image sequences in our experiments, it cannot handle occluded scenes very well. However, it is able to automatically re-localize and track the salient object of interest when the object re-appears in the field of view.

## Acknowledgment

This work is partly supported by the National Aeronautics and Space Administration (NASA) LEARN II program under grant number NNX15AN94N. The authors would like to thank Mr. Arjan Gupta at the University of Kansas for labeling the test data.

## References

• [1] Amazon, “Amazon prime air,” https://www.youtube.com/watch?v=98BIu9dpwHU, 2013.
• [2] M. Andriluka, S. Roth, and B. Schiele, “People-tracking-by-detection and people-detection-by-tracking,” in IEEE Computer Soc. Conf. Computer Vision and Pattern Recognition (CVPR), 2008, pp. 1–8.
• [3] A. Borji, M. Cheng, H. Jiang, and J. Li, “Salient object detection: A survey,” arXiv preprint, pp. 1–26, 2015.
• [4] Z. Chen, Z. Hong, and D. Tao, “An experimental survey on correlation filter-based tracking,” arXiv preprint, pp. 1–13, 2015.
• [5] M. Cheng, N. J. Mitra, X. Huang, P. Torr, and S. Hu, “Global contrast based salient region detection,” IEEE Trans. Patt. Anal. Mach. Intell., vol. 37, no. 3, pp. 569–582, 2015.
• [6] M. Danelljan, G. Häger, F. Khan, and M. Felsberg, “Accurate scale estimation for robust visual tracking,” in Proc. Br. Mach. Conf. (BMVC), 2014, pp. 1–11.
• [7] M. Danelljan, F. Shahbaz Khan, M. Felsberg, and J. Van de Weijer, “Adaptive color attributes for real-time visual tracking,” in IEEE Computer Soc. Conf. Computer Vision and Pattern Recognition (CVPR), 2014, pp. 1090–1097.
• [8] DHL, “Dhl parcelcopter,” http://www.dpdhl.com/en/media_relations/specials/parcelcopter.html, 2013.
• [9] G. Fasano, D. Accardo, A. E. Tirri, A. Moccia, and E. De L., “Morphological filtering and target tracking for vision-based uas sense and avoid,” in Int. Conf. on Unma. Airc. Syst. (ICUAS).   IEEE, 2014, pp. 430–440.
• [10] R. Gonzalez and R. Woods, Digital Image Processing (3rd Edition).   Upper Saddle River, NJ, USA: Prentice-Hall, Inc., 2006.
• [12] J. Henriques, R. Caseiro, P. Martins, and J. Batista, “High-speed tracking with kernelized correlation filters,” IEEE Trans. Patt. Anal. Mach. Intell., vol. 37, no. 3, pp. 583–596, 2015.
• [13] S. Hong, T. You, S. Kwak, and B. Han, “Online tracking by learning discriminative saliency map with convolutional neural network,” arXiv preprint, 2015.
• [14] H. Jiang, J. Wang, Z. Yuan, Y. Wu, N. Zheng, and S. Li, “Salient object detection: A discriminative regional feature integration approach,” in IEEE Computer Soc. Conf. Computer Vision and Pattern Recognition (CVPR), 2013, pp. 2083–2090.
• [15] Z. Jiang and L. Davis, “Submodular salient region detection,” in IEEE Computer Soc. Conf. Computer Vision and Pattern Recognition (CVPR), 2013, pp. 2043–2050.
• [16] Z. Kalal, K. Mikolajczyk, and J. Matas, “Tracking-learning-detection,” IEEE Trans. Patt. Anal. Mach. Intell., vol. 34, no. 7, pp. 1409–1422, 2012.
• [17] M. Kristan and et al., “The visual object tracking vot2014 challenge results,” in Eur. Conf. Computer Vision (ECCV) Workshop.   Springer, 2014, pp. 191–217.
• [18] A. Li, M. Lin, Y. Wu, M. Yang, and S. Yan, “Nus-pro: A new visual tracking challenge,” IEEE Trans. Patt. Anal. Mach. Intell., vol. 38, no. 2, pp. 335–349, 2016.
• [19] F. Li, H. Lu, D. Wang, Y. Wu, and K. Zhang, “Dual group structured tracking,” IEEE Trans. Circuits Syst. Video Technol., vol. 26, no. 9, pp. 1697–1708.
• [20] Y. Li and J. Zhu, “A scale adaptive kernel correlation filter tracker with feature integration,” in Eur. Conf. Computer Vision (ECCV) Workshops.   Springer, 2014, pp. 254–265.
• [21] P. Liang, E. Blasch, and H. Ling, “Encoding color information for visual tracking: Algorithms and benchmark,” IEEE Trans. Image Process., vol. 24, no. 12, pp. 5630–5644, 2015.
• [22] Y. Lyu, Q. Pan, C. Zhao, Y. Zhang, and J. Hu, “Vision-based uav collision avoidance with 2d dynamic safety envelope,” IEEE A & E Systems Magazine, vol. 31, no. 7, pp. 16–26, 2016.
• [23] C. Ma, J. Huang, X. Yang, and M. Yang, “Hierarchical convolutional features for visual tracking,” in IEEE Int. Conf. Computer Vision (ICCV), 2015, pp. 3074–3082.
• [24] V. Mahadevan and N. Vasconcelos, “Saliency-based discriminant tracking,” in IEEE Computer Soc. Conf. Computer Vision and Pattern Recognition (CVPR), 2009, pp. 1007–1013.
• [25] L. Mejias, A. McFadyen, and J. Ford, “Sense and avoid technology developments at queensland university of technology,” IEEE A & E Systems Magazine, vol. 31, no. 7, pp. 28–37, 2016.
• [26] A. Mian, “Realtime visual tracking of aircrafts,” in Digital Image Computing: Techniques and Applications (DICTA).   IEEE, 2008, pp. 351–356.
• [27] D. Ross, J. Lim, R. Lin, and M. Yang, “Incremental learning for robust visual tracking,” Int. J. Comput. Vis., vol. 77, no. 1-3, pp. 125–141, 2008.
• [28] A. Smeulders, D. Chu, R. Cucchiara, S. Calderara, A. Dehghan, and M. Shah, “Visual tracking: An experimental survey,” IEEE Trans. Patt. Anal. Mach. Intell., vol. 36, no. 7, pp. 1442–1468, 2014.
• [29] R. Strand, K. Ciesielski, F. Malmberg, and P. Saha, “The minimum barrier distance,” Comput. Vis. Image Underst., vol. 117, no. 4, pp. 429–437, 2013.
• [30] Y. Sui and L. Zhang, “Visual tracking via locally structured gaussian process regression,” IEEE Signal Process. Lett., vol. 22, no. 9, pp. 1331–1335, 2015.
• [31] Y. Sui, S. Zhang, and L. Zhang, “Robust visual tracking via sparsity-induced subspace learning,” IEEE Trans. Image Process., vol. 24, no. 12, pp. 4686–4700, 2015.
• [32] Y. Sui, Z. Zhang, G. Wang, Y. Tang, and L. Zhang, “Real-time visual tracking: Promoting the robustness of correlation filter learning,” in Eur. Conf. Computer Vision (ECCV).   Springer, 2016.
• [33] Y. Wei, F. Wen, W. Zhu, and J. Sun, “Geodesic saliency using background priors,” in Eur. Conf. Computer Vision (ECCV).   Springer, 2012, pp. 29–42.
• [34] G. Welch and G. Bishop, “An introduction to the kalman filter,” in University of North Carolina at Chapel Hill, NC, USA, Technique report, 2006, pp. 1–16.
• [35] S. Weng, C. Kuo, and S. Tu, “Video object tracking using adaptive kalman filter,” J. Vis. Commun. Image R., vol. 17, no. 6, pp. 1190–1208, 2006.
• [36] Y. Wu, J. Lim, and M. Yang, “Online object tracking: A benchmark,” in IEEE Computer Soc. Conf. Computer Vision and Pattern Recognition (CVPR), 2013, pp. 2411–2418.
• [37] A. Yilmaz, O. Javed, and M. Shah, “Object tracking: A survey,” ACM Comput. Surv., vol. 38, no. 4, pp. 1–45, 2006.
• [38] S. Yin, J. Na, J. Choi, and S. Oh, “Hierarchical kalman-particle filter with adaptation to motion changes for object tracking,” Comput. Vis. Image Underst., vol. 115, no. 6, pp. 885–900, 2011.
• [39] H. Yu, Y. Zhou, J. Simmons, C. Przybyla, Y. Lin, X. Fan, Y. Mi, and S. Wang, “Groupwise tracking of crowded similar-appearance targets from low-continuity image sequences,” in IEEE Computer Soc. Conf. Computer Vision and Pattern Recognition (CVPR), 2016.
• [40] J. Zhang, S. Sclaroff, Z. Lin, X. Shen, B. Price, and R. Mech, “Minimum barrier salient object detection at 80 fps,” in IEEE Int. Conf. Computer Vision (ICCV), 2015, pp. 1404–1412.
• [41] K. Zhang, L. Zhang, Q. Liu, D. Zhang, and M. Yang, “Fast visual tracking via dense spatio-temporal context learning,” in Eur. Conf. Computer Vision (ECCV).   Springer, 2014, pp. 127–141.
• [42] K. Zhang, L. Zhang, and M. Yang, “Real-time compressive tracking,” in Eur. Conf. Computer Vision (ECCV).   Springer, 2012, pp. 864–877.
• [43] K. Zhang, L. Zhang, M. Yang, and Q. Hu, “Robust object tracking via active feature selection,” IEEE Trans. Circuits Syst. Video Technol., vol. 23, no. 11, pp. 1957–1967, 2013.
• [44] G. Zhu, J. Wang, Y. Wu, and H. Lu, “Collaborative correlation tracking,” in Proc. Br. Mach. Conf. (BMVC), 2015, pp. 1–12.
• [45] W. Zhu, S. Liang, Y. Wei, and J. Sun, “Saliency optimization from robust background detection,” in IEEE Computer Soc. Conf. Computer Vision and Pattern Recognition (CVPR), 2014, pp. 2814–2821.