Visual Tracking via Boolean Map Representations

Figure 1: Boolean map representation. For clarity, only the BMRs of a positive and negative sample are demonstrated. Note that the Boolean maps contain more connected structures than the LAB+HOG representations.

1 Introduction

Object tracking is a fundamental problem in computer vision and image processing with numerous applications. Despite significant progress over the past decades, it remains a challenging task due to large appearance variations caused by illumination changes, partial occlusion, deformation, and cluttered backgrounds. To address these challenges, a robust representation plays a critical role in the success of a visual tracker and has attracted much attention in recent years [1].

Numerous representation schemes have been developed for visual tracking based on holistic and local features. Lucas and Kanade [2] leverage holistic templates based on raw pixel values to represent target appearance. Matthews et al. [3] design an effective template update scheme that uses stable information from the first frame for visual tracking. In [4], Henriques et al. propose a correlation filter based template (trained with raw intensity) for visual tracking with promising performance. Zhang et al. [5] propose a multi-expert restoration scheme to address the drift problem in tracking, in which each base tracker leverages an explicit feature map representation obtained by quantizing the CIE LAB color channels of spatially sampled image patches. To deal with appearance changes, subspace learning based trackers have been proposed. Black and Jepson [6] develop a pre-learned view-based eigenbasis representation for visual tracking. However, the pre-trained representation cannot adapt well to significant target appearance variations. In [7], Ross et al. propose an incremental update scheme to learn a low-dimensional subspace representation. More recently, numerous tracking algorithms based on sparse representation have been proposed. Mei and Ling [8] devise a dictionary of holistic intensity templates composed of target and trivial templates, and then find the location of the object with minimal reconstruction error by solving an ℓ1 minimization problem. Zhang et al. [9] formulate visual tracking as a multi-task sparse learning problem, which learns particle representations jointly. In [10], Wang et al. introduce ℓ1 regularization into the eigen-reconstruction to develop an effective representation that combines the merits of both subspace and sparse representations.

In spite of the demonstrated success of global representations for visual tracking, existing methods are less effective in dealing with heavy occlusion and large deformation because local visual cues are not taken into account. Consequently, local representations have been developed to handle occlusion and deformation. Adam et al. [11] propose a fragment-based tracking method that divides a target object into a set of local regions and represents each region with a histogram. In [12], He et al. present a locality sensitive histogram for visual tracking by considering the contributions of local regions at each pixel, which models target appearance well. Babenko et al. [13] formulate the tracking task as a multiple instance learning problem, in which Haar-like features are used to represent target appearance. Hare et al. [14] pose visual tracking as a structured learning task and leverage Haar-like features to describe target appearance. In [15], Henriques et al. propose an algorithm based on a kernelized correlation filter (KCF) that describes target templates with feature maps based on histograms of oriented gradients (HOG) [16]. This method has been shown to achieve promising performance on the recent tracking benchmark dataset [17] in terms of accuracy and efficiency. Kwon and Lee [18] present a tracking method that represents target appearance with a set of local patches whose topology is updated to account for large shape deformation. Jia et al. [19] propose a structural sparse representation scheme that divides a target object into local image patches on a regular grid and uses the coefficients to analyze occlusion and deformation.

Hierarchical representation methods that capture holistic and local object appearance have been developed for visual tracking [20]. Zhong et al. [20] propose a sparse collaborative appearance model for visual tracking in which both holistic templates and local representations are used. Li and Zhu [21] extend the KCF tracker [15] with a scale adaptive scheme and effective color features. In [22], Wang et al. demonstrate that a simple tracker based on logistic regression with a representation composed of HOG and raw color channels performs favorably on the benchmark dataset [17]. Ma et al. [23] exploit features from hierarchical layers of a convolutional neural network and learn an effective KCF that takes into account spatial details and semantics of target objects for visual tracking.

In biological vision, it has been suggested that object tracking is carried out by attention mechanisms [24]. Global topological structure such as connectivity has also been used to model tasks related to visual attention [26]. However, none of the aforementioned representations considers topological structure for visual tracking.

In this work, we propose a Boolean map based representation (BMR) that leverages connectivity cues for visual tracking. One example of connectivity is the enclosure topological relationship between the (foreground) figure and the ground, which defines the boundaries of figures. Recent Gestalt psychological studies suggest that enclosure topological cues play an important role in figure-ground segregation, and such cues have been successfully applied to saliency detection [28] and measuring objectness [29]. The proposed BMR scheme characterizes target appearance by concatenating multiple layers of Boolean maps at different granularities, obtained by uniformly thresholding HOG and color feature maps. The fine-grained Boolean maps capture local spatial structural details that are effective for precise localization, while the coarse-grained ones encode global shape information that accounts for significant appearance variations. The Boolean maps are then concatenated and normalized to form the BMR, which can be approximated by an explicit feature map. We learn a logistic regression classifier with online updates on the BMR to estimate target locations within a particle filter framework. The effectiveness of the proposed algorithm is demonstrated on a large tracking benchmark dataset with 50 challenging videos [17] against state-of-the-art approaches.

The main contributions of this work are summarized as follows:

  • We demonstrate that the connectivity cues can be effectively used for robust visual tracking.

  • We show that the BMR scheme can be approximated as an explicit feature map of the intersection kernel, which enables a linear classifier to find a nonlinear classification boundary. In addition, this makes training and detection efficient for robust visual tracking.

  • The proposed tracking algorithm based on the BMR scheme performs favorably on the benchmark dataset with 50 challenging videos [17] against 35 methods, including the state-of-the-art trackers based on hierarchical features from deep networks [23] and multiple experts using entropy minimization (MEEM) [5], in terms of accuracy and robustness to initialization.

2 Tracking via Boolean Map Representations

We present the BMR scheme and a logistic regression classifier with online update for visual tracking.

2.1 Boolean Map Representation

The proposed image representation is based on recent findings on human visual attention [31], which show that momentary conscious awareness of a scene can be represented by Boolean maps. The Boolean maps are concerned with center-surround contrast, which mimics the sensitivity of neurons either to dark centers on bright surrounds or vice versa [32]. Specifically, we exploit the connectivity cues inside a target measured by the Boolean maps, which can be used to separate the foreground object from the background effectively [26]. As demonstrated in Figure 1, the connectivity inside a target can be well captured by the Boolean maps at different scales.

Neurobiological studies have demonstrated that the human visual system is sensitive to color and edge orientations [33], which provide useful cues to discriminate the foreground object from the background. In this work, we use color features in the CIE LAB color space and HOG features to represent objects. To extract the perceptually uniform color features, we first normalize each sample to a canonical size ( in our experiments), then subsample it to a half size to reduce appearance variations, and finally transform the sample into the CIE LAB color space, denoted as ( in this work). Furthermore, we leverage the HOG features to capture edge orientation information of a target object, denoted as ( in this work). Figure 1 demonstrates that most color and HOG feature maps of the target exhibit center-surround patterns that are similar to the biologically plausible architecture of primates in [32]. We normalize both and to range from 0 to 1, and concatenate and to form a feature vector with . The feature vector is rescaled to by

where and denote the maximal and minimal operators, respectively.
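As a rough illustration of this feature extraction step, the following Python sketch builds the concatenated LAB+HOG vector with scikit-image (rgb2lab, hog, resize; scikit-image 0.19 or later is assumed for the channel_axis argument). The canonical patch size, the HOG cell settings, and the helper name extract_features are illustrative choices, not the exact values used in the paper.

    import numpy as np
    from skimage.color import rgb2lab
    from skimage.feature import hog
    from skimage.transform import resize

    def extract_features(patch_rgb, canonical_size=(32, 32)):
        # Resize the sample to a canonical size (the size here is illustrative).
        patch = resize(patch_rgb, canonical_size, anti_aliasing=True)
        # Color cue: CIE LAB on a half-size patch to reduce appearance variations.
        half = resize(patch, (canonical_size[0] // 2, canonical_size[1] // 2),
                      anti_aliasing=True)
        lab = rgb2lab(half).ravel()
        # Edge-orientation cue: HOG on the canonical patch.
        hog_feat = hog(patch, orientations=9, pixels_per_cell=(4, 4),
                       cells_per_block=(2, 2), channel_axis=-1)
        # Normalize each cue to [0, 1], concatenate, and rescale (Equation 1).
        def unit_range(v):
            return (v - v.min()) / (v.max() - v.min() + 1e-12)
        return unit_range(np.concatenate([unit_range(lab), unit_range(hog_feat)]))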

Next, in (Equation 1) is encoded into a set of vectorized Boolean maps by

where is a threshold drawn from a uniform distribution over , and the symbol denotes elementwise inequality. In this work, we set such that it is simply sampled at a fixed step size , and fixed-step sampling is equivalent to uniform sampling in the limit [28]. Hence, we have . It is easy to show that

Figure 2: Right two columns: reconstructed LAB+HOG representations of the target by BMRs in our experiments. Left two columns: the corresponding prototypes shown in Figure 1. Some reconstructed maps with more connected structures than their prototypes are highlighted in yellow.

where and are the -th entries of and , respectively.

In (Equation 3), when (i.e., ) we have

In this work, we set . Although (Equation 4) may not be strictly satisfied, empirical results show that most distinct structures in can be reconstructed, as demonstrated in Figure 2. Furthermore, the reconstructed representations contain more connected structures than the original ones (see the regions highlighted in yellow in Figure 2), which shows that the Boolean maps facilitate capturing global geometric information of target objects.
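A minimal numerical sketch of the encoding in (Equation 2) and of the reconstruction property in (Equation 4), assuming a fixed-step threshold set over [0, 1); the number of maps and the strict comparison are illustrative choices.

    import numpy as np

    def boolean_maps(x, n_maps=16):
        # Thresholds sampled at a fixed step over [0, 1); fixed-step sampling
        # approximates uniform sampling as the step shrinks.
        thresholds = np.arange(n_maps) / n_maps
        # Each row is one vectorized Boolean map: 1 where x exceeds the threshold.
        return (x[None, :] > thresholds[:, None]).astype(np.float32)

    def reconstruct(maps):
        # Averaging the Boolean maps recovers x up to one quantization step.
        return maps.mean(axis=0)

    x = np.random.rand(1000)            # stand-in for a normalized feature vector
    err = np.abs(reconstruct(boolean_maps(x)) - x).max()
    assert err <= 1.0 / 16 + 1e-6       # reconstruction error bounded by the step size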

Based on (Equation 4), to measure the similarity between two samples and , we use the intersection function [34]

where .

To avoid favoring larger input sets [34], we normalize in (Equation 5) and define the kernel as

where is an explicit feature map function. In this work, the feature map function is defined by

where is an norm operator. We use to train a linear classifier, which is able to address the nonlinear classification problem in the feature space of for visual tracking with favorable performance. The proposed tracking algorithm based on BMR is summarized in Algorithm 1.
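The equivalence between the intersection kernel and a linear inner product on the explicit feature map can be checked with a short sketch. The L1 normalization below follows the description above; the exact scaling used in the paper may differ slightly, so this is an illustration rather than the paper's implementation.

    import numpy as np

    def feature_map(maps):
        # Vectorize the Boolean maps and L1-normalize them (explicit feature map).
        b = maps.ravel()
        return b / (b.sum() + 1e-12)

    def intersection(b1, b2):
        # Histogram-intersection kernel on two Boolean vectors.
        return np.minimum(b1, b2).sum()

    # For binary vectors, sum(min(b1, b2)) equals the inner product b1.dot(b2), so a
    # linear classifier on the L1-normalized maps behaves like a (normalized)
    # intersection-kernel classifier on the Boolean maps.
    b1 = (np.random.rand(500) > 0.5).astype(float)
    b2 = (np.random.rand(500) > 0.5).astype(float)
    assert np.isclose(intersection(b1, b2), b1 @ b2)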

2.2 Learning a Linear Classifier with BMRs

We pose visual tracking as a binary classification problem with local search, in which a linear classifier is learned in the Boolean map feature space to separate the target from the background. Specifically, we use logistic regression to learn a classifier for measuring the similarity scores of samples.

Let denote the location of the -th sample at frame . We assume that is the object location, and densely draw samples within a search radius centered at the current object location, and label them as positive samples. Next, we uniformly sample some patches from set , and label them as negative samples. After representing these samples with BMRs, we obtain a set of training data , where is the class label and is the number of samples. The cost function at frame is defined as the negative log-likelihood for logistic regression,

where is the classifier parameter vector, and the corresponding classifier is denoted as

We use a gradient descent method to minimize by iterating

where . In this work, we use the parameter obtained at frame to initialize in (Equation 10) and iterate 20 times for updates.
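The sampling and training procedure described above can be sketched as follows. The radii, the learning rate, and the labels in {0, 1} are illustrative assumptions; as stated above, the training is warm-started from the previous frame's parameters and iterated 20 times.

    import numpy as np

    def sample_locations(center, r_pos, r_in, r_out, n_neg, rng=np.random):
        # Dense positive locations within radius r_pos of the current target location.
        cx, cy = center
        xs, ys = np.meshgrid(np.arange(-r_pos, r_pos + 1), np.arange(-r_pos, r_pos + 1))
        keep = xs ** 2 + ys ** 2 <= r_pos ** 2
        pos = np.stack([cx + xs[keep], cy + ys[keep]], axis=1)
        # Negative locations drawn uniformly from the annulus r_in < d <= r_out.
        ang = rng.uniform(0.0, 2.0 * np.pi, n_neg)
        rad = rng.uniform(r_in, r_out, n_neg)
        neg = np.stack([cx + rad * np.cos(ang), cy + rad * np.sin(ang)], axis=1)
        return pos, neg

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_classifier(X, y, w_prev, lr=0.01, n_iter=20):
        # Minimize the negative log-likelihood (Equation 8) by gradient descent
        # (Equation 10), warm-started from the previous frame's parameters;
        # lr is an illustrative step size, y holds labels in {0, 1}.
        w = w_prev.copy()
        for _ in range(n_iter):
            p = sigmoid(X @ w)
            w -= lr * (X.T @ (p - y)) / len(y)
        return w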

2.3 Proposed Tracking Algorithm

We estimate the target states sequentially within a particle filter framework. Given the observation set up to frame , the target state is obtained by maximizing the posterior probability

where , is the target state with translations and , and scale , is a dynamic model that describes the temporal correlation of the target states in two consecutive frames, and is the observation model that estimates the likelihood of a state given an observation. In the proposed algorithm, we assume that the target state parameters are independent and modeled by three scalar Gaussian distributions between two consecutive frames, i.e., , where is a diagonal covariance matrix whose elements are the standard deviations of the target state parameters. In visual tracking, the posterior probability in (Equation 11) is approximated by a finite set of particles that are sampled with corresponding importance weights , where . Therefore, (Equation 11) can be approximated as

In our method, the observation model is defined as

where is the logistic regression classifier defined by (Equation 9).
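A compact sketch of one particle-filter step under the assumptions above (independent Gaussian dynamics on the translation and scale parameters, and the classifier confidence as the observation likelihood). Here score_fn and sigmas are placeholders for the classifier of Section 2.2 and the standard deviations of the state parameters.

    import numpy as np

    def particle_filter_step(particles, weights, score_fn, sigmas, rng=np.random):
        # particles: (N, 3) array of states (x translation, y translation, scale).
        # Dynamic model: independent Gaussian perturbation of each state parameter.
        particles = particles + rng.normal(scale=sigmas, size=particles.shape)
        # Observation model: classifier confidence of the patch cropped at each state.
        weights = weights * score_fn(particles)
        weights = weights / (weights.sum() + 1e-12)
        # MAP estimate of the target state.
        return particles[np.argmax(weights)], particles, weights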

To adapt to target appearance variations while preserving the stable information that helps prevent the tracker from drifting to the background, we update the classifier parameters in a conservative way. We update by (Equation 10) only when the confidence of the target falls below a threshold . This ensures that the target states always have high confidence scores and alleviates the problem of including noisy samples when updating the classifier [22]. The main steps of the proposed algorithm are summarized in Algorithm 1.
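A sketch of this conservative update rule; the threshold value and the function names are placeholders rather than the paper's settings.

    def maybe_update(w, target_confidence, retrain_fn, threshold=0.5):
        # Re-train (warm-started gradient descent of Section 2.2) only when the
        # target confidence drops below the threshold; otherwise keep the model.
        return retrain_fn(w) if target_confidence < threshold else w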

3 Experimental Results

We first present implementation details of the proposed algorithm and discuss the dataset and metrics for performance evaluation. Next, we analyze the empirical results using widely adopted metrics. We then present an ablation study to examine the effectiveness of each key component of the proposed BMR scheme. Finally, we show and analyze some failure cases.

3.1 Implementation Details

All images are resized to a fixed size of pixels [22] for experiments, and each patch is resized to a canonical size of pixels. In addition, each canonical patch is subsampled to a half size with for color representations. The HOG features are extracted from the canonical patches with an implementation that supports both grayscale and color images, and the sizes of the HOG feature maps are the same as (as implemented in http://github.com/pdollar/toolbox).

For grayscale videos, the original image patches are used to extract raw intensity and HOG features, and the feature dimension is . For color videos, the image patches are transformed to the CIE LAB color space to extract raw color features, and the original RGB image patches are used to extract HOG features; the corresponding total dimension is . The number of Boolean maps is set to , and the total dimension of BMRs is for grayscale videos and for color videos, with sampling step . The search radius for positive samples is set to . The inner search radius for negative samples is set to , where and are the width and height of the target, respectively, and the outer search radius is , where the search step is set to , which generates a small subset of negative samples. The target state parameter set for the particle filter is set to , and the number of particles is set to . The confidence threshold is set to . All parameter values are fixed for all sequences, and the source code will be made available to the public. More results and videos are available at http://kaihuazhang.net/bmr/bmr.htm.

3.2 Dataset and Evaluation Metrics

For performance evaluation, we use the tracking benchmark dataset and code library [17], which includes 29 trackers and 50 fully annotated videos. In addition, we add the corresponding results of 6 recent trackers: DLT [35], DSST [36], KCF [15], TGPR [37], MEEM [5], and HCF [23]. For detailed analysis, the sequences are annotated with 11 attributes based on different challenging factors, including low resolution (LR), in-plane rotation (IPR), out-of-plane rotation (OPR), scale variation (SV), occlusion (OCC), deformation (DEF), background clutter (BC), illumination variation (IV), motion blur (MB), fast motion (FM), and out-of-view (OV).

We quantitatively evaluate the trackers with success and precision plots [17]. Given the tracked bounding box and the ground-truth bounding box , the overlap score is defined as the ratio of the area of their intersection to the area of their union. Hence, the overlap score lies in [0, 1], and a larger value indicates better performance of the evaluated tracker. The success plot shows the percentage of frames whose overlap score exceeds a threshold, as the threshold varies from 0 to 1. Furthermore, the area under the curve (AUC) of each success plot serves as a measure to rank the evaluated trackers. On the other hand, the precision plot shows the percentage of frames whose tracked locations are within a given threshold distance (i.e., 20 pixels in [17]) of the ground truth. Both success and precision plots are used in the one-pass evaluation (OPE), temporal robustness evaluation (TRE), and spatial robustness evaluation (SRE), where OPE reports the average precision or success rate by running the trackers through a test sequence with initialization from the ground-truth position, and TRE and SRE measure a tracker's robustness to initialization with temporal and spatial perturbations, respectively [17]. We report the OPE, TRE, and SRE results. For presentation clarity, we only present the top 10 algorithms in each plot.
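For reference, the overlap score and the two evaluation curves can be computed as in the following sketch; the (x, y, w, h) box layout and the 20-pixel precision threshold follow the benchmark protocol [17], while the function names are illustrative.

    import numpy as np

    def overlap_score(box_a, box_b):
        # Intersection-over-union of two boxes given as (x, y, w, h).
        ax, ay, aw, ah = box_a
        bx, by, bw, bh = box_b
        iw = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
        ih = max(0.0, min(ay + ah, by + bh) - max(ay, by))
        inter = iw * ih
        union = aw * ah + bw * bh - inter
        return inter / union if union > 0 else 0.0

    def success_rate(overlaps, threshold):
        # Fraction of frames whose overlap exceeds the threshold; sweeping the
        # threshold over [0, 1] gives the success plot, and its AUC ranks trackers.
        return float(np.mean(np.asarray(overlaps) > threshold))

    def precision_rate(center_errors, threshold=20.0):
        # Fraction of frames whose center location error is within the threshold.
        return float(np.mean(np.asarray(center_errors) <= threshold))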

Figure 3: Success and precision plots of OPE, TRE, and SRE for the top 10 trackers. The trackers are ranked by the AUC scores (shown in the legends) for the success plots, and by the precision scores at a threshold of 20 pixels for the precision plots.

3.3 Empirical Results

1) Overall Performance: Figure 3 shows the overall performance of the top 10 trackers in terms of success and precision plots. The BMR-based tracking algorithm ranks first in success rate for OPE, and second for TRE and SRE. Furthermore, the BMR-based method ranks third in precision rate for OPE, TRE, and SRE. Overall, the proposed BMR-based tracker performs favorably against the state-of-the-art methods in terms of all metrics, except against MEEM [5] and HCF [23]. The MEEM tracker exploits a multi-expert restoration scheme to handle the drift problem, which combines a tracker and its historical snapshots as experts. In contrast, even with only a logistic regression classifier and no restoration strategy, the proposed BMR-based method performs well against MEEM on most metrics (i.e., the success rates of the BMR-based method are higher than those of MEEM while the precision rates are comparable), which shows the effectiveness of the proposed representation scheme for visual tracking. In addition, the HCF method is based on deep learning, which leverages complex hierarchical convolutional features learned offline from a large dataset together with correlation filters for visual tracking. Nevertheless, the proposed BMR-based algorithm performs comparably to HCF in terms of success rates on all metrics.

2) Attribute-based Performance: To demonstrate the strengths and weaknesses of BMR, we further evaluate the 35 trackers on videos with the 11 attributes categorized in [17].

Figure 4: Sampled tracking results on six long sequences: sylvester, mhyang, dog1, lemming, liquor, and doll.

Tables ? and ? summarize the success and precision scores of OPE with different attributes. The BMR-based method ranks among the top 3 for most attributes. Specifically, in terms of the success rate of OPE, the BMR-based method ranks first on 4 out of 11 attributes and second on 6 out of 11 attributes. For the sequences with the BC attribute, the BMR-based method ranks third and its score is close to that of the second-ranked MEEM scheme (0.555 vs. 0.569). In terms of the precision scores of OPE, the BMR-based method ranks second on 4 out of 11 attributes and third on 3 out of 11 attributes. For the sequences with the OV attribute, the BMR-based tracker ranks first, and for the videos with the IPR and BC attributes, the proposed tracking algorithm ranks fourth with performance comparable to the third-ranked DSST and KCF methods.

Tables ? and ? show the TRE results with different attributes. The BMR-based method ranks among the top 3 for most attributes. In terms of success rates, the BMR-based method ranks first on 2 attributes, second on 3 attributes, and third on 6 attributes. In terms of precision rates, the BMR-based tracker ranks third on 7 attributes, and first and second on the OV and OCC attributes, respectively. Furthermore, for other attributes such as LR and BC, the BMR-based tracking algorithm ranks fourth, but its scores are close to those of MEEM and KCF, which rank third (0.581 vs. 0.598, and 0.772 vs. 0.776).

Tables ? and ? show the SRE results with different attributes. In terms of success rates, the rankings of the BMR-based method are similar to those for TRE except on the IPR and OPR attributes, for which the BMR-based tracker ranks third in SRE and second in TRE. Furthermore, although the MEEM method ranks higher than the BMR-based tracker on most attributes, the differences in scores are within . In terms of precision rates, the BMR-based algorithm ranks among the top 3 for most attributes except for the LR, DEF, and IV attributes.

The AUC score of the success rate measures the overall performance of each tracking method [17]. Figure 3 shows that the BMR-based method achieves better results in terms of success rates than precision rates across all metrics (OPE, SRE, TRE) and attributes. The tracking performance can be attributed to two factors. First, the proposed method exploits a logistic regression classifier with explicit feature maps, which efficiently determines the nonlinear decision boundary through online training. Second, the online classifier parameter update scheme in (Equation 10) facilitates recovery from tracking drift.

Figure 5: Overlap score plots for the six long sequences shown in Figure 4.

Figure 4 shows sampled tracking results on six long sequences (each with more than 1000 frames). The total number of frames in these sequences is , which accounts for about of the total number of frames (about ) in the benchmark, and hence the performance on these sequences plays an important role in the evaluation. For clarity, only the results of the top-performing BMR, HCF, and MEEM methods are shown. In all sequences, the BMR-based tracker is able to track the targets stably over almost all frames. In contrast, the HCF scheme drifts away from the target objects after a few frames in the sylvester () and lemming () sequences, and the MEEM method drifts to the background when severe occlusion occurs in the liquor sequence (). To further illustrate the results over all frames, Figure 5 shows the overlap score of each frame. Overall, the BMR-based tracker performs well against the HCF and MEEM methods in most frames of these sequences.

Figure 6: Success and precision plots of OPE, TRE, and SRE for BMR, BMR only with LAB+HOG representations, and KCF (KCF is used as a baseline for comparisons).

3.4 Analysis of BMR

To demonstrate the effectiveness of BMRs, we remove the Boolean map component from the proposed tracking algorithm and leverage only the LAB+HOG representations for visual tracking. In addition, we use KCF as a baseline as it adopts the same HOG representations as the proposed tracking method. Figure 6 shows quantitative comparisons on the benchmark dataset. Without the proposed Boolean maps, the AUC score of the success rate in OPE of the proposed method is reduced by . For TRE and SRE, the AUC scores of the proposed method are reduced by and , respectively, without the Boolean maps. It is worth noting that the proposed method, even without the Boolean maps, still outperforms KCF in terms of success rates on all metrics, which shows the effectiveness of the LAB color features in BMR. These experimental results show that the BMRs in the proposed method play a key role in robust visual tracking.

Figure 7: Failure cases of the BMR-based tracker in the singer2 and motorRolling sequences. The results of HCF and MEEM are also illustrated.

3.5 Failure Cases

Figure 7 shows failure cases of the proposed BMR-based method on two sequences, singer2 and motorRolling. In the singer2 sequence, the foreground object and background scene are similar due to the dim stage lighting at the beginning (), and the HCF, MEEM, and proposed methods all drift to the background. Furthermore, as the target in the motorRolling sequence undergoes 360-degree in-plane rotation in early frames (), the MEEM and proposed methods do not adapt well to the drastic appearance variations due to limited training samples. In contrast, only the HCF tracker performs well in this sequence because it leverages dense sampling and high-dimensional convolutional features.

4 Conclusions

In this paper, we propose a Boolean map based representation that exploits connectivity cues for visual tracking. In the BMR scheme, the HOG and raw color feature maps are decomposed into a set of Boolean maps by uniformly thresholding the respective channels. These Boolean maps are concatenated and normalized to form a robust representation, which approximates an explicit feature map of the intersection kernel. A logistic regression classifier with this explicit feature map is trained in an online manner to determine the nonlinear decision boundary for visual tracking. Extensive evaluations on a large tracking benchmark dataset demonstrate that the proposed tracking algorithm performs favorably against state-of-the-art algorithms in terms of accuracy and robustness.

References

  1. X. Li, W. Hu, C. Shen, Z. Zhang, A. Dick, and A. V. D. Hengel, “A survey of appearance models in visual object tracking,” ACM Transactions on Intelligent Systems and Technology, vol. 4, no. 4, p. 58, 2013.
  2. B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision.,” in International Joint Conference on Artificial Intelligence, vol. 81, pp. 674–679, 1981.
  3. I. Matthews, T. Ishikawa, and S. Baker, “The template update problem,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 6, pp. 810–815, 2004.
  4. J. Henriques, R. Caseiro, P. Martins, and J. Batista, “Exploiting the circulant structure of tracking-by-detection with kernels,” in Proceedings of European Conference on Computer Vision, pp. 702–715, 2012.
  5. J. Zhang, S. Ma, and S. Sclaroff, “MEEM: Robust tracking via multiple experts using entropy minimization,” in Proceedings of European Conference on Computer Vision, pp. 188–203, 2014.
  6. M. J. Black and A. D. Jepson, “Eigentracking: Robust matching and tracking of articulated objects using a view-based representation,” International Journal of Computer Vision, vol. 26, no. 1, pp. 63–84, 1998.
  7. D. Ross, J. Lim, R. Lin, and M.-H. Yang, “Incremental learning for robust visual tracking,” International Journal of Computer Vision, vol. 77, no. 1, pp. 125–141, 2008.
  8. X. Mei and H. Ling, “Robust visual tracking and vehicle classification via sparse representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 11, pp. 2259–2272, 2011.
  9. T. Zhang, B. Ghanem, S. Liu, and N. Ahuja, “Robust visual tracking via multi-task sparse learning,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 2042–2049, 2012.
  10. D. Wang, H. Lu, and M.-H. Yang, “Online object tracking with sparse prototypes,” IEEE Transactions on Image Processing, vol. 22, no. 1, pp. 314–325, 2013.
  11. A. Adam, E. Rivlin, and I. Shimshoni, “Robust fragments-based tracking using the integral histogram,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 798–805, 2006.
  12. S. He, Q. Yang, R. Lau, J. Wang, and M.-H. Yang, “Visual tracking via locality sensitive histograms,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 2427–2434, 2013.
  13. B. Babenko, M.-H. Yang, and S. Belongie, “Robust object tracking with online multiple instance learning,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 8, pp. 1619–1632, 2011.
  14. S. Hare, A. Saffari, and P. H. Torr, “Struck: Structured output tracking with kernels,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 263–270, 2011.
  15. J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, “High-speed tracking with kernelized correlation filters,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 3, pp. 583–596, 2015.
  16. N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 886–893, 2005.
  17. Y. Wu, J. Lim, and M.-H. Yang, “Online object tracking: A benchmark,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 2411–2418, 2013.
  18. J. Kwon and K. M. Lee, “Tracking of a non-rigid object via patch-based dynamic appearance modeling and adaptive basin hopping monte carlo sampling,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1208–1215, 2009.
  19. X. Jia, H. Lu, and M.-H. Yang, “Visual tracking via adaptive structural local sparse appearance model,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1822–1829, 2012.
  20. W. Zhong, H. Lu, and M.-H. Yang, “Robust object tracking via sparsity-based collaborative model,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1838–1845, 2012.
  21. Y. Li and J. Zhu, “A scale adaptive kernel correlation filter tracker with feature integration,” in European Conference on Computer Vision-Workshops, pp. 254–265, 2014.
  22. N. Wang, J. Shi, D.-Y. Yeung, and J. Jia, “Understanding and diagnosing visual tracking systems,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 3101–3109, 2015.
  23. C. Ma, J.-B. Huang, X. Yang, and M.-H. Yang, “Hierarchical convolutional features for visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 3074–3082, 2015.
  24. R. Allen, P. Mcgeorge, D. Pearson, and A. B. Milne, “Attention and expertise in multiple target tracking,” Applied Cognitive Psychology, vol. 18, no. 3, pp. 337–347, 2004.
  25. P. Cavanagh and G. A. Alvarez, “Tracking multiple targets with multifocal attention,” Trends in Cognitive Sciences, vol. 9, no. 7, pp. 349–354, 2005.
  26. L. Chen, “Topological structure in visual perception,” Science, vol. 218, p. 699, 1982.
  27. S. E. Palmer, Vision Science: Photons to Phenomenology, vol. 1. MIT Press, 1999.
  28. J. Zhang and S. Sclaroff, “Saliency detection: A boolean map approach,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 153–160, 2013.
  29. B. Alexe, T. Deselaers, and V. Ferrari, “Measuring the objectness of image windows,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2189–2202, 2012.
  30. M.-M. Cheng, Z. Zhang, W.-Y. Lin, and P. Torr, “Bing: Binarized normed gradients for objectness estimation at 300fps,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 3286–3293, 2014.
  31. L. Huang and H. Pashler, “A boolean map theory of visual attention.,” Psychological review, vol. 114, no. 3, p. 599, 2007.
  32. L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, no. 11, pp. 1254–1259, 1998.
  33. M. S. Livingstone and D. H. Hubel, “Anatomy and physiology of a color system in the primate visual cortex,” The Journal of Neuroscience, vol. 4, no. 1, pp. 309–356, 1984.
  34. K. Grauman and T. Darrell, “The pyramid match kernel: Efficient learning with sets of features,” The Journal of Machine Learning Research, vol. 8, pp. 725–760, 2007.
  35. N. Wang and D.-Y. Yeung, “Learning a deep compact image representation for visual tracking,” in Advances in Neural Information Processing Systems, pp. 809–817, 2013.
  36. M. Danelljan, G. Häger, F. Khan, and M. Felsberg, “Accurate scale estimation for robust visual tracking,” in Proceedings of British Machine Vision Conference, 2014.
  37. J. Gao, H. Ling, W. Hu, and J. Xing, “Transfer learning based visual tracking with Gaussian processes regression,” in Proceedings of European Conference on Computer Vision, pp. 188–203, 2014.
  38. Z. Kalal, J. Matas, and K. Mikolajczyk, “P-N learning: Bootstrapping binary classifiers by structural constraints,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 49–56, 2010.