Patch-based Adaptive Weighting with Segmentation and Scale (PAWSS) for visual tracking


Xiaofei Du
xiaofei.du.13@ucl.ac.uk
   Alessio Dore
alessio.dore@deliveroo.co.uk
   Danail Stoyanov
danail.stoyanov@ucl.ac.uk
Surgical Vision Group, University College London, UK
Deliveroo, London, UK
Abstract

Tracking-by-detection algorithms are widely used for visual tracking, where the problem is treated as a classification task and an object model is updated over time using online learning techniques. In challenging conditions where an object undergoes deformation or scale variations, the update step is prone to including background information in the model appearance or lacks the ability to estimate the scale change, which degrades the performance of the classifier. In this paper, we propose a Patch-based Adaptive Weighting with Segmentation and Scale (PAWSS) tracking framework that tackles both the background and scale problems. A simple but effective colour-based segmentation model is used to suppress background information, and multi-scale samples are extracted to enrich the training pool, which allows the tracker to handle both incremental and abrupt scale variations between frames. Experimentally, we evaluate our approach on the online tracking benchmark (OTB) dataset and the Visual Object Tracking (VOT) challenge datasets. The results show that our approach outperforms recent state-of-the-art trackers; in particular, it improves the success rate score on the OTB dataset, while on the VOT datasets PAWSS ranks among the top trackers while operating at real-time frame rates.

1 Introduction

Tracking-by-detection is one of the most successful paradigms for visual object tracking [1, 2, 3, 4]. A typical tracking-by-detection algorithm treats tracking as a classification task: it begins with a detector initialized with a bounding region in the first frame and updates the detection model over time with collected positive and negative samples. The choice of samples used to update the classifier is critical for robust tracking and for maintaining the model's reliability, but as the object moves, background information within the bounding box is falsely included in the sample descriptors, which corrupts the classifier. Additionally, real-world objects usually undergo different transformations, such as deformation, scale change, occlusion, or all at the same time, which makes robust estimation of scale difficult.

To address these problems, different methods have been proposed to decrease the influence of background information in the model template, such as using patch-based descriptors and assigning weights based on pixel spatial location or appearance similarity [5, 6, 7]. Directly integrating a segmentation step into the tracking update has also been effective [8, 9]. In this paper, we follow a similar idea and incorporate Patch-based Adaptive Weighting with Segmentation and Scale (PAWSS) into the tracking framework. A simple but effective colour-based segmentation model assigns weights to the patch-based descriptor, which decreases the influence of background information within the bounding box, and a two-level sampling strategy extracts multi-scale samples, which enables the tracker to handle both incremental and abrupt scale variations between frames. Our method is evaluated and compared with state-of-the-art methods on the online tracking benchmark (OTB) [10] and the VOT challenge datasets, with promising results demonstrating that PAWSS is among the best performing real-time trackers without any specific code optimisation.

2 Proposed Algorithm

2.1 Probabilistic Segmentation Model for Patch Weighting

We use a patch-based descriptor to represent the appearance of the object. In frame $t$, the bounding box $B_t$ is evenly decomposed into $N$ non-overlapping patches $\{P_i\}_{i=1}^{N}$, and the descriptor $\mathbf{f}_t$ is constructed by concatenating the low-level feature vectors of all the patches in their spatial order. Since background information is potentially included in the bounding box, we incorporate a global probabilistic segmentation model [11, 8] to assign weights to the patches based on their colour appearance:

$\mathbf{f}_t = \left[\, w_{t,1}\mathbf{f}_{t,1}^{\top},\; w_{t,2}\mathbf{f}_{t,2}^{\top},\; \dots,\; w_{t,N}\mathbf{f}_{t,N}^{\top} \,\right]^{\top}$   (1)

where $w_{t,i}$ is the weight of the feature vector $\mathbf{f}_{t,i}$ of the $i$-th patch $P_i$. The global segmentation model is based on colour histograms and uses a recursive Bayesian formulation to discriminate foreground from background.
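Before detailing the segmentation model, the descriptor construction of Eq. 1 can be made concrete with a minimal sketch; this is not the released implementation, and the `Feature` type and function name are illustrative assumptions.

```cpp
#include <vector>

// Hypothetical low-level feature of a single patch (e.g. HSV colour + gradient bins).
using Feature = std::vector<float>;

// Build the weighted descriptor of Eq. (1): concatenate w_i * f_i in spatial order.
// patchFeatures[i] is the feature vector of patch P_i, weights[i] its weight w_{t,i}.
std::vector<float> buildWeightedDescriptor(const std::vector<Feature>& patchFeatures,
                                           const std::vector<float>& weights)
{
    std::vector<float> descriptor;
    for (std::size_t i = 0; i < patchFeatures.size(); ++i)
        for (float v : patchFeatures[i])
            descriptor.push_back(weights[i] * v);  // down-weight likely background patches
    return descriptor;
}
```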

Let $o_{1:t}$ be the colour observations of a pixel from frame $1$ to $t$. The foreground probability of that pixel at frame $t$ is based on the tracked results from the previous frames:

$p(c_t = 1 \mid o_{1:t}) = \frac{1}{Z}\, p(o_t \mid c_t = 1) \sum_{c_{t-1} \in \{0,1\}} p(c_t = 1 \mid c_{t-1})\, p(c_{t-1} \mid o_{1:t-1})$   (2)

where $c_t$ is the class of the pixel at frame $t$: 0 for background and 1 for foreground, and $Z$ is a normalization constant that keeps the probabilities summing to 1. The transition probabilities for foreground and background, $p(c_t = 1 \mid c_{t-1} = 1)$ and $p(c_t = 0 \mid c_{t-1} = 0)$, are empirical choices as in [8]. The foreground histogram $H^f$ and the background histogram $H^b$ are initialized in the first frame from the pixels inside the bounding box and from those surrounding the bounding box (with some margin between them), respectively.
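A minimal sketch of the per-pixel recursive Bayesian update of Eq. 2 is given below, assuming the colour observation has already been quantised into a histogram bin; the transition probabilities and the struct layout are illustrative, not the exact constants or data structures used in the paper.

```cpp
#include <vector>

struct SegmentationModel {
    std::vector<float> fgHist, bgHist;     // normalised colour histograms H^f, H^b
    float pStayFg = 0.7f, pStayBg = 0.7f;  // p(c_t=1|c_{t-1}=1), p(c_t=0|c_{t-1}=0), illustrative values

    // Eq. (2): recursive Bayesian foreground probability of one pixel.
    // `bin` indexes the pixel's colour observation; `priorFg` is p(c_{t-1}=1 | o_{1:t-1}).
    float foregroundProb(int bin, float priorFg) const {
        const float priorBg = 1.0f - priorFg;
        // Prediction: propagate the previous posterior through the transition model.
        const float predFg = pStayFg * priorFg + (1.0f - pStayBg) * priorBg;
        const float predBg = (1.0f - pStayFg) * priorFg + pStayBg * priorBg;
        // Update: weight by the colour likelihoods and normalise (the 1/Z factor).
        const float likeFg = fgHist[bin] * predFg;
        const float likeBg = bgHist[bin] * predBg;
        const float z = likeFg + likeBg;
        return z > 0.0f ? likeFg / z : 0.5f;
    }
};
```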

For the following frames, the colour histogram distributions are updated using the tracked result:

$H^f_t(o_t) = (1 - \lambda)\, H^f_{t-1}(o_t) + \lambda\, \hat{H}^f_t(o_t)$   (3)

where $\lambda$ is the model update factor and $\hat{H}^f_t$ is estimated from the tracked bounding box $B_t$ in frame $t$. Instead of treating every pixel equally, the contribution of a pixel also depends on the patch in which it is located: patches with higher weight are more likely to contain object pixels, and vice versa. So the colour histogram update for a colour observation $o_t$ of the current frame $t$ is defined as

$\hat{H}^f_t(o_t) = \frac{\sum_{i=1}^{N} w_{t,i}\, n_{t,i}(o_t)}{\sum_{o'_t}\sum_{i=1}^{N} w_{t,i}\, n_{t,i}(o'_t)}$   (4)

where $n_{t,i}(o_t)$ represents the number of pixels with colour observation $o_t$ in the $i$-th patch $P_i$ in frame $t$, and $o'_t$ represents any colour observation in frame $t$, so the denominator is the weighted number of all the pixel colour observations in the bounding box $B_t$.
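The weighted histogram accumulation of Eq. 4 and the blending of Eq. 3 could be sketched as follows; the bin quantisation and the update factor value are placeholders, not the paper's settings.

```cpp
#include <cstddef>
#include <vector>

// Eq. (4): each pixel contributes the weight of the patch it belongs to, then normalise.
// patchBins[i] holds the quantised colour bins of the pixels inside patch P_i.
std::vector<float> weightedHistogram(const std::vector<std::vector<int>>& patchBins,
                                     const std::vector<float>& weights, int numBins)
{
    std::vector<float> hist(numBins, 0.0f);
    float total = 0.0f;
    for (std::size_t i = 0; i < patchBins.size(); ++i)
        for (int bin : patchBins[i]) {
            hist[bin] += weights[i];
            total += weights[i];
        }
    if (total > 0.0f)
        for (float& h : hist) h /= total;
    return hist;
}

// Eq. (3): blend the new frame histogram into the model with update factor lambda.
void updateHistogram(std::vector<float>& modelHist, const std::vector<float>& frameHist,
                     float lambda /* illustrative, e.g. 0.1 */)
{
    for (std::size_t b = 0; b < modelHist.size(); ++b)
        modelHist[b] = (1.0f - lambda) * modelHist[b] + lambda * frameHist[b];
}
```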

The weights $w_{t,i}$ of all the patches are initialized to 1 in the first frame and are then updated based on the segmentation model:

$\bar{p}_{t,i} = \frac{1}{|P_i|} \sum_{x \in P_i} p\big(c_t(x) = 1 \mid o_{1:t}(x)\big)$   (5)
$\hat{w}_{t,i} = \frac{\bar{p}_{t,i}}{\max_j \bar{p}_{t,j}}$   (6)
$w_{t,i} = (1 - \lambda)\, w_{t-1,i} + \lambda\, \hat{w}_{t,i}$   (7)

where $\bar{p}_{t,i}$ denotes the average foreground probability of all pixels in the patch $P_i$ in the current frame $t$; it is normalized so that the highest weight update $\hat{w}_{t,i}$ equals 1. The patch weight $w_{t,i}$ is then updated gradually over time. We omit the background probability $p(c_t = 0 \mid o_{1:t})$ since its formulation is similar to Eq. 2.
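The weight update of Eqs. 5-7 then reduces to averaging the per-pixel foreground probabilities over each patch, normalising by the maximum, and blending into the previous weights. A minimal sketch, with an illustrative update factor, is given below.

```cpp
#include <algorithm>
#include <vector>

// Eqs. (5)-(7): update patch weights from per-pixel foreground probabilities.
// patchFgProbs[i] holds p(c_t=1 | o_{1:t}) for every pixel of patch P_i.
void updatePatchWeights(std::vector<float>& weights,
                        const std::vector<std::vector<float>>& patchFgProbs,
                        float updateFactor /* illustrative, e.g. 0.1 */)
{
    std::vector<float> avg(weights.size(), 0.0f);
    for (std::size_t i = 0; i < patchFgProbs.size(); ++i) {
        for (float p : patchFgProbs[i]) avg[i] += p;
        if (!patchFgProbs[i].empty()) avg[i] /= patchFgProbs[i].size();   // Eq. (5)
    }
    if (avg.empty()) return;
    const float maxAvg = *std::max_element(avg.begin(), avg.end());
    for (std::size_t i = 0; i < weights.size(); ++i) {
        const float target = maxAvg > 0.0f ? avg[i] / maxAvg : 1.0f;      // Eq. (6)
        weights[i] = (1.0f - updateFactor) * weights[i] + updateFactor * target;  // Eq. (7)
    }
}
```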

Unlike the weighting strategies in [12, 3], which analyse the similarities between neighbouring patches, our patch weighting method is simple and straightforward to implement: the weight update for each patch is independent of the others and relies only on the colour-histogram-based segmentation model. We show examples of the patch weight evolution in Figure 1. The patch weight thumbnails are displayed in the top corner of each frame; they indicate the objectness within the bounding box and also reflect the deformation of the object over time. We update the segmentation model based on the previous patch weights, and in turn the segmentation model facilitates updating the patch weights. This co-training strategy enhances the weight contrast between foreground and occluded patches, which suppresses the background information efficiently.

Figure 1: Example patch weights for the highlighted bounding box, displayed in the top corner of each image. Warmer colours indicate higher foreground probability.

2.2 Scale Estimation

The tracked object often undergoes complicated transformations during tracking, for example deformation, scale variations and occlusion, as shown in Figure 2. Fixed-scale bounding box estimation is ill-equipped to capture the accurate extent of the object, which degrades the classifier performance by providing samples that are either partially cropped or include background information.

Figure 2: Examples of objects undergoing challenging transformations for tracking; the inclusion of background information or of only part of the object within the bounding box usually degrades the classifier.

When locating the object in a new frame, all the bounding box candidates are collected within a search window, and the bounding box with the maximum classification score is selected to update the object location. Rather than making a suboptimal decision by choosing from fixed-scale samples, we augment the training sample pool with multi-scale candidates. Obviously, the scales of the augmented samples are critical. We consider two complementary strategies that handle both incremental and abrupt scale variations.

Firstly, to deal with relatively small scale changes between frames, we build a scale set

$S_a = \left\{ s_{t-1}\, a^{k} \;\middle|\; k = -\tfrac{N_a - 1}{2}, \dots, \tfrac{N_a - 1}{2} \right\}$   (8)

where $a$ is a fixed value slightly larger than 1, set to search the scale change accurately, $N_a$ is the number of scales in the scale set $S_a$, and $s_{t-1}$ is the scale of the object in frame $t-1$ relative to the initial bounding box in the first frame. Since the object scale usually does not vary much between frames, the scale set $S_a$ includes scales which are close to that of the previous frame.
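Building this incremental scale set is a one-liner in practice; the sketch below uses illustrative values for the step factor and the number of scales, which are not the paper's settings.

```cpp
#include <cmath>
#include <vector>

// Build the incremental scale set S_a of Eq. (8): N_a scales centred on the previous
// scale s_{t-1}, spaced by a factor a slightly larger than 1 (values are illustrative).
std::vector<double> buildIncrementalScaleSet(double prevScale, double a = 1.02, int Na = 5)
{
    std::vector<double> scales;
    const int half = Na / 2;               // assumes N_a is odd
    for (int k = -half; k <= half; ++k)
        scales.push_back(prevScale * std::pow(a, k));
    return scales;
}
```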

Secondly, when the object undergoes abrupt scale changes between frames, the scale set $S_a$ is unable to keep pace with the scale variations. To address this problem, we build an additional scale set $S_b$ by incorporating the Lucas-Kanade (KLT) tracker [13, 14], which helps us estimate the scale change explicitly. We randomly pick points from each patch in the bounding box of frame $t-1$ and track all these points in the next frame $t$. With sufficient well-tracked points, we can estimate the scale variation between frames by comparing the distance changes of the tracked point pairs.

We illustrate the scale estimation by the KLT tracker in Figure 3. Let $x^{i}_{t-1}$ denote one picked point in the previous frame $t-1$ and $x^{i}_{t}$ its matched point in the current frame $t$. For each pair of points we compute the distance $d_{ij}$ between the point pair $(x^{i}_{t-1}, x^{j}_{t-1})$ and the distance $d'_{ij}$ between the matched point pair $(x^{i}_{t}, x^{j}_{t})$.

Figure 3: Illustration of the scale estimation using the KLT tracker. Random points located on the patches are picked in frame $t-1$ and are tracked in the next frame $t$ by the KLT tracker; the distance ratios of point pairs between the two frames are used for scale estimation.

For all the matched point pairs, we compute the distance ratios between the two frames

$R = \left\{ r_{ij} = \frac{d'_{ij}}{d_{ij}} \right\}$   (9)

where $R$ is the set of all the distance ratios. We sort $R$ by value and pick the median element $r_m$ as the potential scale change of the object. To make the scale estimation more robust, we uniformly sample scales in the range between $s_{t-1}$ and $r_m\, s_{t-1}$ to construct the scale set $S_b$:

$S_b = \left\{ s_{t-1} + \tfrac{k}{N_b - 1}\,(r_m\, s_{t-1} - s_{t-1}) \;\middle|\; k = 0, \dots, N_b - 1 \right\}$   (10)

where $N_b$ is the number of scales in the scale set $S_b$. When the object is out of view, occluded or deforms abruptly, the ratio of well-tracked points will be low, and in that case the estimation from the KLT tracker is unreliable. In our implementation, when this ratio is lower than 0.5 we set $r_m = 1$, so the scale set $S_b$ only adds samples with the previous scale into the candidate pool. Only when enough points are well tracked is the estimation from the KLT tracker trusted. We fuse the two complementary scale sets $S_a$ and $S_b$ into $S = S_a \cup S_b$ to enrich our sample candidate pool. To show the effectiveness of this step, in Section 3 we evaluate our proposed tracker with and without the scale set $S_b$ estimated by the KLT tracker.
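The KLT-based estimate of Eqs. 9-10 could be implemented as in the sketch below, which uses OpenCV's pyramidal Lucas-Kanade routine. This is not the released code: the function name, the 0.5 reliability check placement and the pairwise-distance loop are an interpretation of the description above.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>
#include <opencv2/video/tracking.hpp>

// Estimate the frame-to-frame scale ratio r_m of Eq. (9) with the KLT tracker.
// `prevPts` are the points randomly picked from the patches of B_{t-1}.
// Returns 1.0 when fewer than half of the points are tracked reliably.
double estimateScaleKLT(const cv::Mat& prevGray, const cv::Mat& currGray,
                        const std::vector<cv::Point2f>& prevPts)
{
    std::vector<cv::Point2f> currPts;
    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, currPts, status, err);

    // Keep only the well-tracked point pairs.
    std::vector<cv::Point2f> p0, p1;
    for (std::size_t i = 0; i < prevPts.size(); ++i)
        if (status[i]) { p0.push_back(prevPts[i]); p1.push_back(currPts[i]); }
    if (p0.size() < prevPts.size() / 2 || p0.size() < 2)
        return 1.0;  // unreliable: fall back to the previous scale

    // Distance ratios of all point pairs between the two frames.
    std::vector<double> ratios;
    for (std::size_t i = 0; i < p0.size(); ++i)
        for (std::size_t j = i + 1; j < p0.size(); ++j) {
            const double d0 = std::hypot(p0[i].x - p0[j].x, p0[i].y - p0[j].y);
            const double d1 = std::hypot(p1[i].x - p1[j].x, p1[i].y - p1[j].y);
            if (d0 > 1e-3) ratios.push_back(d1 / d0);
        }
    if (ratios.empty()) return 1.0;
    std::nth_element(ratios.begin(), ratios.begin() + ratios.size() / 2, ratios.end());
    return ratios[ratios.size() / 2];  // median ratio as the scale change
}
```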

2.3 Tracking Framework

We incorporate PAWSS into Struck [1]. The algorithm relies on an online structured output SVM learning framework which integrates learning and tracking. It directly predicts the location displacement between frames, avoiding the heuristic intermediate step of assigning binary labels to training samples, and achieves top performance on the OTB dataset [10].

Given the bounding box in the previous frame $t-1$, sample candidates are extracted within a search window centred at the previous location in the current frame $t$. Unlike other tracking-by-detection approaches, we adopt a two-level sampling strategy: on the first level, all bounding box samples are extracted at the fixed previous scale; on the second level, multi-scale samples are extracted to enrich the sample pool.

First, the search window is centred at the previous bounding box as above. Since the second level makes the final decision, rather than extracting a sample at every pixel we extract samples with a down-sampling factor of 2, which decreases the number of candidates by a factor of 4. The weighted patch-based descriptor of each candidate is then constructed, and the bounding box with the maximum classification score is selected, not as the final decision, but as the search centre for the second level. After this step, the rough location of the object has been narrowed down to a smaller area. As discussed in Section 2.2, given the scale $s_{t-1}$ in the previous frame, we construct the scale set $S_a$ to handle small scale variations between frames, which includes scales close to $s_{t-1}$. Additionally, to deal with potential abrupt scale changes, we randomly pick points from each patch of the bounding box $B_{t-1}$ and pass all these points to the KLT tracker to generate the scale set $S_b$, which is estimated explicitly and augments the scale estimation; the two scale sets $S_a$ and $S_b$ are complementary and handle different scenarios. We then use the fused scale set $S$ to extract bounding box candidates: a smaller search window is centred at the bounding box selected in the first level, and multiple candidates are constructed at each pixel within this window, one for each scale in the fused scale set $S$. We evaluate all the multi-scale samples and select the bounding box sample with the maximum score as the final location of the object. When multiple bounding box samples have the same score, the sample whose scale is closer to 1.0 is selected to prevent potential gradual shrinking or enlargement of the bounding box.
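The two-level search could be organised as in the following sketch. The `Box` struct, the `score` callback standing in for the structured SVM evaluation, and the grid-based enumeration of candidates are assumptions made for illustration only.

```cpp
#include <cmath>
#include <vector>

struct Box { float cx, cy, w, h; double scale; };

// Sketch of the two-level search: coarse fixed-scale pass, then a fine multi-scale pass.
template <typename ScoreFn>
Box twoLevelSearch(const Box& prev, const std::vector<double>& fusedScales,
                   float radius1, float radius2, ScoreFn score)
{
    // Level 1: fixed-scale candidates on a coarse grid (stride 2) inside radius1.
    Box best1 = prev;
    double bestScore1 = score(prev);
    for (float dy = -radius1; dy <= radius1; dy += 2.0f)
        for (float dx = -radius1; dx <= radius1; dx += 2.0f) {
            Box cand{prev.cx + dx, prev.cy + dy, prev.w, prev.h, prev.scale};
            const double s = score(cand);
            if (s > bestScore1) { bestScore1 = s; best1 = cand; }
        }

    // Level 2: multi-scale candidates on a dense grid inside the smaller radius2.
    Box best2 = best1;
    double bestScore2 = bestScore1;
    for (float dy = -radius2; dy <= radius2; dy += 1.0f)
        for (float dx = -radius2; dx <= radius2; dx += 1.0f)
            for (double sc : fusedScales) {
                Box cand{best1.cx + dx, best1.cy + dy,
                         static_cast<float>(prev.w / prev.scale * sc),
                         static_cast<float>(prev.h / prev.scale * sc), sc};
                const double s = score(cand);
                // On ties, prefer the scale closer to 1.0, as described in the text.
                const bool tieCloserToOne =
                    (s == bestScore2) && (std::abs(sc - 1.0) < std::abs(best2.scale - 1.0));
                if (s > bestScore2 || tieCloserToOne) { bestScore2 = s; best2 = cand; }
            }
    return best2;
}
```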

Once the new location is selected, the classifier, the colour-based segmentation model and the weights of all patches are updated as discussed in Section 2.1, and the whole process starts again for the next frame. Additionally, to prevent introducing potentially corrupt samples to the classifier, the classifier is only updated when the similarity between the tracked object and the positive support vectors is above a certain threshold.

3 Results

Implementation Details  Our algorithm is publicly available online (https://github.com/surgical-vision/PAWSS), is implemented in C++ and runs at real-time frame rates on an i7 2.5 GHz CPU without any optimisation. For the structured output SVM, we use a linear kernel. The model update factor in Eq. 3 and Eq. 7, the numbers of scales in the scale sets of Eq. 8 and Eq. 10, the number of points extracted from each patch, and the updating threshold for the classifier are set empirically. For each sequence, we scale the frame so that the minimum side length of the bounding box is larger than 32 pixels; the first-level search window radius is set as a function of the width and height of the scaled bounding box, and the second-level search window radius is fixed to 5 pixels. Selecting the right features to describe the object appearance plays a critical role in differentiating the object from the background. We tested different low-level features and found that the combination of HSV colour and gradient features achieves the best results. The patch number also affects the tracking performance: too many patches increase the computation, while too few patches do not robustly reflect the local appearance of the object. We tested different patch numbers and selected a value that strikes a performance balance.

3.1 Online Tracking Benchmark (OTB)

The OTB dataset [10] includes 50 sequences tagged with 11 attributes, which represent challenging aspects of tracking such as illumination variation, occlusion and deformation. The tracking performance is quantitatively evaluated using both the precision rate (PR) and the success rate (SR), as defined in [10]. PR and SR scores are depicted using the precision plot and the success plot, respectively. The precision plot shows the percentage of frames whose tracked centre is within a certain Euclidean distance (20 pixels) of the centre of the ground truth. The success plot shows the percentage of frames whose intersection-over-union overlap with the ground truth annotation exceeds a threshold varied between 0 and 1, and the area under the curve (AUC) is used as the SR score. To evaluate the effectiveness of incorporating the scale set proposed by the KLT tracker, we provide two versions of our tracker, PAWSSa and PAWSSb: PAWSSa only includes the scale set $S_a$, while PAWSSb includes both $S_a$ and $S_b$ for scale estimation.
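For reference, the two OTB metrics can be computed from per-frame predicted and ground-truth boxes as in the sketch below; this is an illustrative re-implementation of the definitions above, not the official MATLAB toolkit, and the `Rect` layout is an assumption.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Rect { float x, y, w, h; };

static float iou(const Rect& a, const Rect& b) {
    const float x1 = std::max(a.x, b.x), y1 = std::max(a.y, b.y);
    const float x2 = std::min(a.x + a.w, b.x + b.w), y2 = std::min(a.y + a.h, b.y + b.h);
    const float inter = std::max(0.0f, x2 - x1) * std::max(0.0f, y2 - y1);
    return inter / (a.w * a.h + b.w * b.h - inter);
}

// Precision rate: fraction of frames whose centre error is within 20 pixels.
double precisionRate(const std::vector<Rect>& pred, const std::vector<Rect>& gt) {
    int hits = 0;
    for (std::size_t i = 0; i < pred.size(); ++i) {
        const float dx = (pred[i].x + pred[i].w / 2) - (gt[i].x + gt[i].w / 2);
        const float dy = (pred[i].y + pred[i].h / 2) - (gt[i].y + gt[i].h / 2);
        if (std::hypot(dx, dy) <= 20.0f) ++hits;
    }
    return static_cast<double>(hits) / pred.size();
}

// Success rate (AUC): average success ratio over overlap thresholds sampled in [0, 1].
double successAUC(const std::vector<Rect>& pred, const std::vector<Rect>& gt, int steps = 100) {
    double auc = 0.0;
    for (int s = 0; s <= steps; ++s) {
        const float thr = static_cast<float>(s) / steps;
        int hits = 0;
        for (std::size_t i = 0; i < pred.size(); ++i)
            if (iou(pred[i], gt[i]) > thr) ++hits;
        auc += static_cast<double>(hits) / pred.size();
    }
    return auc / (steps + 1);
}
```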

We use the evaluation toolkit provided by Wu et al. [10] to generate the precision and success plots for the one pass evaluation (OPE) of the top 10 algorithms in Figure 4. The toolkit includes 29 benchmark trackers, and in addition we also include the SOWP tracker [3]. PAWSSb achieves the best PR/SR scores among all the trackers. For a more detailed evaluation, we also compare our tracker with the state-of-the-art trackers in Table 1. Notice that for every attribute, our tracker achieves either the best or the second best PR/SR score. Our tracker achieves a 36.7% gain in PR and a 36.9% gain in SR over Struck [1]. By using a simple patch weighting strategy and training with adaptive-scale samples, our tracker provides comparable PR scores and a higher SR score compared with SOWP [3]: PAWSSa improves the SR score by 2.6% by considering gradual scale changes between frames, and PAWSSb improves it by 4.8% by additionally incorporating scales estimated by the external KLT tracker. Specifically, when the object undergoes scale variation, PAWSS achieves a performance gain of 10.3% in SR over SOWP.

Figure 4: Comparison of the precision and success plots on the OTB with the top 10 trackers; the PR scores are illustrated with the threshold at 20 pixels and the SR scores with the average overlap (AUC) in the legend.
Attribute Struck [1] DSST [15] SAMF [16] FCNT [4] MTA [17] MEEM [18] SOWP [3] PAWSSa PAWSSb
IV(25) 0.558 / 0.428 0.727 / 0.534 0.735 / 0.563 0.830 / 0.598 0.738 / 0.547 0.778 / 0.548 0.842 / 0.596 0.860 / 0.616 0.880 / 0.648
SV(28) 0.639 / 0.425 0.723 / 0.516 0.730 / 0.541 0.830 / 0.558 0.721 / 0.478 0.809 / 0.506 0.849 / 0.523 0.849 / 0.564 0.849 / 0.577
OCC(29) 0.564 / 0.413 0.845 / 0.619 0.716 / 0.534 0.797 / 0.571 0.772 / 0.563 0.815 / 0.560 0.867 / 0.603 0.859 / 0.618 0.872 / 0.634
DEF(19) 0.521 / 0.393 0.813 / 0.622 0.660 / 0.510 0.917 / 0.644 0.851 / 0.622 0.859 / 0.582 0.918 / 0.666 0.908 / 0.656 0.934 / 0.688
MB(12) 0.551 / 0.433 0.651 / 0.519 0.547 / 0.464 0.789 / 0.580 0.695 / 0.540 0.740 / 0.565 0.716 / 0.567 0.786 / 0.593 0.783 / 0.603
FM(17) 0.604 / 0.462 0.663 / 0.515 0.517 / 0.435 0.767 / 0.565 0.677 / 0.524 0.757 / 0.568 0.744 / 0.575 0.784 / 0.572 0.792 / 0.587
IPR(31) 0.617 / 0.444 0.691 / 0.507 0.765 / 0.560 0.811 / 0.555 0.773 / 0.547 0.810 / 0.531 0.847 / 0.584 0.860 / 0.594 0.852 / 0.600
OPR(39) 0.597 / 0.432 0.763 / 0.554 0.733 / 0.535 0.831 / 0.581 0.777 / 0.557 0.854 / 0.566 0.896 / 0.615 0.898 / 0.623 0.901 / 0.635
OV(6) 0.539 / 0.459 0.708 / 0.609 0.515 / 0.459 0.741 / 0.592 0.612 / 0.534 0.730 / 0.597 0.802 / 0.635 0.771 / 0.611 0.828 / 0.645
BC(21) 0.585 / 0.458 0.708 / 0.524 0.694 / 0.517 0.799 / 0.564 0.795 / 0.592 0.808 / 0.578 0.839 / 0.618 0.847 / 0.632 0.859 / 0.647
LR(4) 0.545 / 0.372 0.459 / 0.361 0.497 / 0.409 0.765 / 0.514 0.579 / 0.397 0.494 / 0.367 0.606 / 0.410 0.679 / 0.504 0.669 / 0.500
Avg.(50) 0.656 / 0.474 0.777 / 0.570 0.737 / 0.554 0.856 / 0.599 0.812 / 0.583 0.840 / 0.570 0.894 / 0.619 0.889 / 0.635 0.897 / 0.649
Table 1: Comparison of the PR/SR scores with state-of-the-art trackers in the OPE based on the 11 sequence attributes: illumination variation (IV), scale variation (SV), occlusion (OCC), deformation (DEF), motion blur (MB), fast motion (FM), in-plane rotation (IPR), out-of-plane rotation (OPR), out-of-view (OV), background clutter (BC) and low resolution (LR). The best and second best results are shown in red and blue, respectively.

We show tracking results in Figure 5 and Figure 6 for the top trackers, including TLD [2], SCM [19], Struck [1], SOWP [3] and the proposed PAWSSa and PAWSSb. In Figure 5, five challenging sequences are selected from the benchmark dataset, which include illumination variation, scale variations, deformation, occlusion and background clutter. PAWSS can adapt when the object deforms in a complicated scene and track the target accurately. In Figure 6, we select five representative sequences with different scale variations. PAWSS tracks the object well under scale variation, while the other trackers drift away. The results show that our proposed tracking framework PAWSS can track the object robustly throughout a sequence by using the weighting strategy to suppress background information within the bounding box, and by incorporating scale estimation that allows the classifier to train with adaptive-scale samples. Please see the supplementary video for more sequence tracking results.

Figure 5: Comparison of the tracking results of our proposed tracker PAWSS with SOWP [3] and three conventional trackers: TLD [2], SCM [19] and Struck [1] on some especially challenging sequences in the benchmark.
Figure 6: Comparison of the tracking results of our proposed tracker PAWSS with SOWP [3] and three conventional trackers: TLD [2], SCM [19] and Struck [1] on some sequences with scale variations in the benchmark.

3.2 Visual Object Tracking (VOT) Challenges

For completeness, we also validated our algorithm on the VOT2014 (25 sequences) and VOT2015 (60 sequences) datasets. The VOT datasets use a ranking-based evaluation methodology with two measures: accuracy and robustness. Similar to the SR score for the OTB dataset, the accuracy measures the overlap between the predicted result and the ground truth bounding box, while the robustness measures how many times the tracker fails during tracking. A failure is flagged whenever the tracker loses the target object, i.e. the overlap becomes zero, and the tracker is re-initialized afterwards. All the trackers are evaluated, compared and ranked with respect to each measure separately using the official evaluation toolkit from the challenge (http://www.votchallenge.net/).
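As an illustration only, the two VOT measures can be derived from per-frame overlaps roughly as follows; the official toolkit additionally ignores a burn-in period after each re-initialisation and performs ranking across trackers, which this sketch omits.

```cpp
#include <vector>

struct VotResult { double accuracy; int failures; };

// Illustrative computation: accuracy is the mean overlap over frames where the target
// is tracked, and a failure is counted whenever the overlap drops to zero (after which
// the tracker would be re-initialised by the toolkit).
VotResult votMeasures(const std::vector<double>& overlaps) {
    double sum = 0.0;
    int valid = 0, failures = 0;
    for (double o : overlaps) {
        if (o <= 0.0) { ++failures; continue; }  // target lost
        sum += o; ++valid;
    }
    return { valid > 0 ? sum / valid : 0.0, failures };
}
```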


VOT2014  The VOT2014 challenge includes two experiments: a baseline experiment and a region-noise experiment. In the baseline experiment, a tracker runs on all the sequences initialized with the ground truth bounding box in the first frame, while in the region-noise experiment the tracker is initialized with a randomly perturbed bounding box whose noise is within 10% of the ground truth bounding box size [20]. The ranking plots over 38 trackers are shown in Figure 7, and PAWSS is compared with the top three trackers, DSST [15], SAMF [16] and KCF [21], in Table 2. For both experiments, PAWSS has a lower accuracy score but fewer failures, and it has the second best average rank. However, under the experiment protocol the tracker is re-initialized once a failure is detected, so a higher accuracy score can be obtained through more re-initialization steps. To eliminate this effect, we also performed the experiments without re-initialization, also shown in Table 2. The results show that PAWSS has the highest accuracy score among all the trackers without re-initialization, which means it is more robust than the other trackers.

Figure 7: The accuracy-robustness score and ranking plots with respect to the baseline and region-noise experiments of VOT2014 dataset. Tracker is better if its result is closer to the top-right corner of the plot.
Baseline Region-noise Avg Rank
Accuracy Robustness Accuracy (w/o) Accuracy Robustness Accuracy (w/o)
Score Rank Failure Rank Score Score Rank Failure Rank Score
DSST [15] 0.62 5.16 1.16 8.2 0.47 0.57 4.32 1.28 7.4 0.43 6.27
SAMF [16] 0.61 4.32 1.28 8.68 0.50 0.57 4.2 1.43 8.44 0.48 6.41
KCF [21] 0.62 3.68 1.32 8.68 0.40 0.57 4.84 1.51 9.00 0.36 6.92
PAWSSb 0.58 5.80 0.88 8.00 0.51 0.55 6.08 0.78 5.4 0.48 6.32
Table 2: The results of the VOT2014 baseline and region-noise experiments with and without re-initialization. The best and second best results are shown in red and blue, respectively.

VOT2015  Finally, we evaluated and compared PAWSS with 62 trackers on the VOT2015 dataset. The VOT2015 challenge only includes the baseline experiment, and the ranking plots are shown in Figure 8. In VOT2015 [22], the expected average overlap measure is introduced, which combines both per-frame accuracies and failures in a principled manner. Compared with the average rank used in VOT2014, the expected overlap has a clearer practical interpretation. In Table 3, we list the score/rank and expected overlap of the top trackers from VOT2015 [22], which are either quite robust or accurate, together with the VOT2014 top three trackers DSST [15], SAMF [16] and KCF [21] (an improved version of the original tracker), and the baseline NCC tracker. It can be seen that the average rank is not always consistent with the expected overlap. Our tracker PAWSS is among these top trackers, and it achieves a better expected overlap than any of the VOT2014 top trackers on the VOT2015 dataset.

Figure 8: The accuracy-robustness ranking plots of VOT2015 dataset. Tracker is better if its result is closer to the top-right corner of the plot.
Baseline Avg Rank Exp Overlap
Accuracy Robustness
Score Rank Failure Rank
MDNet [23] 0.59 2.03 0.77 5.68 3.86 0.378
DeepSRDCF [24] 0.56 5.92 1.00 8.38 7.15 0.318
EBT [25] 0.45 15.48 0.81 7.23 11.36 0.313
SRDCF [26] 0.55 5.25 1.18 9.83 7.54 0.288
LDP [27] 0.49 12.08 1.30 13.07 12.58 0.279
sPST [28] 0.54 6.57 1.42 12.57 9.57 0.277
PAWSSb 0.53 7.75 1.28 11.22 9.49 0.266
NSAMF 0.53 7.02 1.45 10.1 8.56 0.254
RAJSSC [29] 0.57 4.23 1.75 13.87 9.05 0.242
RobStruck 0.49 11.45 1.58 14.82 13.14 0.220
DSST [15] 0.53 8.05 2.72 26.02 17.04 0.172
SAMF [16] 0.51 7.98 2.08 18.08 13.03 0.202
KCF [21] 0.47 12.83 2.43 21.85 17.34 0.171
NCC* 0.48 12.47 8.18 50.33 31.4 0.080
Table 3: VOT2015 score/ranking and expected overlap results for the top trackers of VOT2014 and VOT2015 and the baseline tracker. The NCC tracker is the VOT2015 baseline tracker. Trackers marked with * were submitted to VOT2015 without an accompanying publication.

4 Conclusions

In this paper, we propose a tracking-by-detection framework, called PAWSS, for online object tracking. It uses a colour-based segmentation model to suppress background information by assigning weights to the patch-wise descriptor. We incorporate scale estimation into the framework, allowing the tracker to handle both incremental and abrupt scale variations between frames. The learning component in our framework is based on Struck, but we would like to point out that our proposed method can, in principle, also support other online learning techniques with effective background suppression and scale adaptation. The performance of our tracker is thoroughly evaluated on the OTB, VOT2014 and VOT2015 datasets and compared with recent state-of-the-art trackers. The results demonstrate that PAWSS achieves the best performance in both PR and SR in the OPE on the OTB dataset: it outperforms Struck by 36.7% and 36.9% in PR and SR scores, respectively, and it provides a comparable PR score while improving the SR score by 4.8% over SOWP. On the VOT2014 and VOT2015 datasets, PAWSS has relatively lower accuracy but the lowest failure rate among the top trackers, and when evaluated without re-initialization on VOT2014 it achieves the highest accuracy score.

Acknowledgements

Xiaofei Du is supported by the China Scholarship Council (CSC) scholarship. The work has been carried out as part of an internship at Wirewax Ltd, London, UK. The work was supported by the EPSRC (EP/N013220/1, EP/N022750/1, EP/N027078/1, NS/A000027/1, EP/P012841/1), The Wellcome Trust (WT101957, 201080/Z/16/Z) and the EU-Horizon2020 project EndoVESPA (H2020-ICT-2015-688592).

References

  • [1] S. Hare, A. Saffari, and P. H. Torr, “Struck: Structured output tracking with kernels,” in Computer Vision (ICCV), 2011 IEEE International Conference on.   IEEE, 2011, pp. 263–270.
  • [2] Z. Kalal, K. Mikolajczyk, and J. Matas, “Tracking-learning-detection,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 34, no. 7, pp. 1409–1422, 2012.
  • [3] H.-U. Kim, D.-Y. Lee, J.-Y. Sim, and C.-S. Kim, “Sowp: Spatially ordered and weighted patch descriptor for visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 3011–3019.
  • [4] L. Wang, W. Ouyang, X. Wang, and H. Lu, “Visual tracking with fully convolutional networks,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 3119–3127.
  • [5] D. Comaniciu, V. Ramesh, and P. Meer, “Kernel-based object tracking,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 25, no. 5, pp. 564–577, 2003.
  • [6] S. He, Q. Yang, R. Lau, J. Wang, and M.-H. Yang, “Visual tracking via locality sensitive histograms,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 2427–2434.
  • [7] D.-Y. Lee, J.-Y. Sim, and C.-S. Kim, “Visual tracking using pertinent patch selection and masking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 3486–3493.
  • [8] S. Duffner and C. Garcia, “Pixeltrack: a fast adaptive algorithm for tracking non-rigid objects,” in Computer Vision (ICCV), 2013 IEEE International Conference on.   IEEE, 2013, pp. 2480–2487.
  • [9] M. Godec, P. M. Roth, and H. Bischof, “Hough-based tracking of non-rigid objects,” Computer Vision and Image Understanding, vol. 117, no. 10, pp. 1245–1256, 2013.
  • [10] Y. Wu, J. Lim, and M.-H. Yang, “Online object tracking: A benchmark,” in Computer vision and pattern recognition (CVPR), 2013 IEEE Conference on.   IEEE, 2013, pp. 2411–2418.
  • [11] R. T. Collins, Y. Liu, and M. Leordeanu, “Online selection of discriminative tracking features,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 27, no. 10, pp. 1631–1643, 2005.
  • [12] D. Chen, Z. Yuan, Y. Wu, G. Zhang, and N. Zheng, “Constructing adaptive complex cells for robust visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 1113–1120.
  • [13] J.-Y. Bouguet, “Pyramidal implementation of the affine lucas kanade feature tracker description of the algorithm,” Intel Corporation, vol. 5, no. 1-10, p. 4, 2001.
  • [14] J. Shi et al., “Good features to track,” in Computer Vision and Pattern Recognition, 1994. Proceedings CVPR’94., 1994 IEEE Computer Society Conference on.   IEEE, 1994, pp. 593–600.
  • [15] M. Danelljan, G. Häger, F. Khan, and M. Felsberg, “Accurate scale estimation for robust visual tracking,” in British Machine Vision Conference, Nottingham, September 1-5, 2014.   BMVA Press, 2014.
  • [16] Y. Li and J. Zhu, “A scale adaptive kernel correlation filter tracker with feature integration,” in Computer Vision-ECCV 2014 Workshops.   Springer, 2014, pp. 254–265.
  • [17] D.-Y. Lee, J.-Y. Sim, and C.-S. Kim, “Multihypothesis trajectory analysis for robust visual tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 5088–5096.
  • [18] J. Zhang, S. Ma, and S. Sclaroff, “Meem: Robust tracking via multiple experts using entropy minimization,” in Computer Vision–ECCV 2014.   Springer, 2014, pp. 188–203.
  • [19] W. Zhong, H. Lu, and M.-H. Yang, “Robust object tracking via sparsity-based collaborative model,” in Computer vision and pattern recognition (CVPR), 2012 IEEE Conference on.   IEEE, 2012, pp. 1838–1845.
  • [20] M. Kristan, R. Pflugfelder, A. Leonardis, J. Matas, L. Čehovin, G. Nebehay, T. Vojíř, G. Fernández, and A. Lukežič, “The visual object tracking vot2014 challenge results,” in Computer Vision - ECCV 2014 Workshops: Zurich, Switzerland, September 6-7 and 12, 2014, Proceedings, Part II, 2015, pp. 191–217.
  • [21] J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, “High-speed tracking with kernelized correlation filters,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 3, pp. 583–596, 2015.
  • [22] M. Kristan, J. Matas, A. Leonardis, M. Felsberg, L. Cehovin, G. Fernández, T. Vojir, G. Hager, G. Nebehay, and R. Pflugfelder, “The visual object tracking vot2015 challenge results,” in Proceedings of the IEEE international conference on computer vision workshops, 2015, pp. 1–23.
  • [23] H. Nam and B. Han, “Learning multi-domain convolutional neural networks for visual tracking,” arXiv preprint arXiv:1510.07945, 2015.
  • [24] M. Danelljan, G. Hager, F. Shahbaz Khan, and M. Felsberg, “Convolutional features for correlation filter based visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision Workshops, 2015, pp. 58–66.
  • [25] N. Wang and D.-Y. Yeung, “Ensemble-based tracking: Aggregating crowdsourced structured time series data.” in ICML, 2014, pp. 1107–1115.
  • [26] M. Danelljan, G. Hager, F. Shahbaz Khan, and M. Felsberg, “Learning spatially regularized correlation filters for visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 4310–4318.
  • [27] A. Lukežič, L. Čehovin, and M. Kristan, “Deformable parts correlation filters for robust visual tracking,” arXiv preprint arXiv:1605.03720, 2016.
  • [28] Y. Hua, K. Alahari, and C. Schmid, “Online object tracking with proposal selection,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 3092–3100.
  • [29] M. Zhang, J. Xing, J. Gao, X. Shi, Q. Wang, and W. Hu, “Joint scale-spatial correlation tracking with adaptive rotation estimation,” in Proceedings of the IEEE International Conference on Computer Vision Workshops, 2015, pp. 32–40.