Learning Dynamic Memory Networks for Object Tracking

Tianyu Yang Antoni B. Chan Department of Computer Science, City University of Hong Kong
tianyyang8-c@my.cityu.edu.hk, abchan@cityu.edu.hk
Abstract

Template-matching methods for visual tracking have gained popularity recently due to their comparable performance and fast speed. However, they lack effective ways to adapt to changes in the target object’s appearance, making their tracking accuracy still far from state-of-the-art. In this paper, we propose a dynamic memory network to adapt the template to the target’s appearance variations during tracking. An LSTM is used as a memory controller, where the input is the search feature map and the outputs are the control signals for the reading and writing process of the memory block. As the location of the target is at first unknown in the search feature map, an attention mechanism is applied to concentrate the LSTM input on the potential target. To prevent aggressive model adaptivity, we apply gated residual template learning to control the amount of retrieved memory that is combined with the initial template. Unlike tracking-by-detection methods, where the object’s information is maintained by the weight parameters of neural networks and thus requires expensive online fine-tuning to be adaptable, our tracker runs completely feed-forward and adapts to the target’s appearance changes by updating the external memory. Moreover, the capacity of our model is not determined by the network size as with other trackers – the capacity can be easily enlarged as the memory requirements of a task increase, which is favorable for memorizing long-term object information. Extensive experiments on OTB and VOT demonstrate that our tracker MemTrack performs favorably against state-of-the-art tracking methods while retaining a real-time speed of 50 fps.

Keywords:
Addressable Memory, Gated Residual Template Learning

1 Introduction

Along with the success of convolutional neural networks in object recognition and detection, an increasing number of trackers [1, 2, 3, 4, 5] have adopted deep learning models for visual object tracking. Among them are two dominant tracking strategies. One is the tracking-by-detection scheme, which trains an object appearance classifier online [1, 2] to distinguish the target from the background. The model is first learned using the initial frame, and then fine-tuned using training samples generated in subsequent frames based on the newly predicted bounding box. The other scheme is template matching, which adopts either the target patch in the first frame [4, 6] or the previous frame [7] to construct the matching model. To handle changes in target appearance, the template built in the first frame may be interpolated with the recently generated object template using a small learning rate [8].

The main difference between these two strategies is that tracking-by-detection maintains the target’s appearance information in the weights of the deep neural network, thus requiring online fine-tuning with stochastic gradient descent (SGD) to make the model adaptable, while in contrast, template matching stores the target’s appearance in the object template, which is generated by a feed-forward computation. Due to the computationally expensive model updating required in tracking-by-detection, the speed of such methods is usually slow, e.g. [1, 2, 9] run at about 1 fps, although they do achieve state-of-the-art tracking accuracy. Template-matching methods, however, are fast because there is no need to update the parameters of the neural networks. Recently, several trackers [4, 5, 10] have adopted fully convolutional Siamese networks as the matching model, which demonstrate promising results and real-time speed. However, there is still a large performance gap between template-matching models and tracking-by-detection, due to the lack of an effective method for adapting to appearance variations online.

In this paper, we propose a dynamic memory network, where the target information is stored and recalled from external memory, to maintain the variations of object appearance for template-matching. Unlike tracking-by-detection where the target’s information is stored in the weights of neural networks and therefore the capacity of the model is limited by the number of parameters, the model capacity of our memory networks can be easily enlarged by increasing the size of external memory, which is useful for memorizing long-term appearance variations. Since aggressive template updating is prone to overfit recent frames and the initial template is the most reliable one, we use the initial template as a conservative reference of the object and a residual template, obtained from retrieved memory, to adapt to the appearance variations. During tracking, the residual template is gated channel-wise and combined with the initial template to form the final matching template, which is then convolved with the search image features to get the response map. The channel-wise gating of the residual template controls how much each channel of the retrieved template should be added to the initial template, which can be interpreted as a feature/part selector for adapting the template. An LSTM (Long Short-Term Memory) is used to control the reading and writing process of external memory, as well as the channel-wise gate vector for the residual template. In addition, as the target position is at first unknown in the search image, we adopt an attention mechanism to locate the object roughly in the search image, thus leading to a soft representation of the target for the input to the LSTM controller. This helps to retrieve the most-related template in the memory. The whole framework is differentiable and therefore can be trained end-to-end with SGD. In summary, the contributions of our work are:

  • We design a dynamic memory network for visual tracking. An external memory block, which is controlled by an LSTM with attention mechanism, allows adaptation to appearance variations.

  • We propose gated residual template learning to generate the final matching template, which effectively controls the amount of appearance variations in retrieved memory that is added to each channel of the initial matching template. This prevents excessive model updating, while retaining the conservative information of the target.

  • We extensively evaluate our algorithm on the large-scale OTB and VOT datasets. Our tracker performs favorably against state-of-the-art tracking methods while running at a real-time speed of 50 fps.

2 Related Work

Template-Matching Trackers. Matching-based methods have recently gained popularity due to their fast speed and comparable performance. The most notable is the fully convolutional Siamese network (SiamFC) [4]. Although it only uses the first frame as the template, SiamFC achieves competitive results and fast speed. The key deficiency of SiamFC is that it lacks an effective model for online updating. To address this, [8] proposes model updating using linear interpolation of new templates with a small learning rate, but only sees modest improvements in accuracy. Recently, the RFL (Recurrent Filter Learning) tracker [10] adopts a convolutional LSTM for model updating, where the forget and input gates automatically control the linear combination of the historical target information (i.e., the LSTM memory states) and the incoming object template. Guo et al. [5] propose a dynamic Siamese network with two general transformations for target appearance variation and background suppression. To further improve the speed of SiamFC, [11] reduces the feature computation cost for easy frames by using deep reinforcement learning to train policies for early stopping the feed-forward calculations of the CNN when the response confidence is high enough. SINT [6] also uses Siamese networks for visual tracking and has higher accuracy, but runs much slower than SiamFC (2 fps vs 86 fps) due to the use of a deeper CNN (VGG16) for feature extraction and optical flow for its candidate sampling strategy. Unlike other template-matching models that use sliding windows or random sampling to generate candidate image patches for testing, GOTURN [7] directly regresses the coordinates of the target’s bounding box by comparing the previous and current image patches. Despite its advantages in handling scale and aspect ratio changes and its fast speed, its tracking accuracy is much lower than other state-of-the-art trackers.

Different from existing matching-based trackers, whose capacity for adaptation is limited by the size of the neural network, we use SiamFC [4] as the baseline feature extractor and extend it with an addressable memory. The memory size is independent of the network and thus can be easily enlarged as the memory requirements of a task increase, allowing the tracker to adapt to variations of object appearance.

Memory Networks. Recent use of convolutional LSTM for visual tracking [10] shows that memory states are useful for object template management over long timescales. Memory networks are typically used to solve simple logical reasoning problems in natural language processing, such as question answering and sentiment analysis. The pioneering works include NTM (Neural Turing Machine) [12] and MemNN (Memory Neural Networks) [13]. They both propose an addressable external memory with reading and writing mechanisms – NTM focuses on problems of sorting, copying and recall, while MemNN aims at language and reasoning tasks. MemN2N [14] further improves MemNN by removing the supervision of supporting facts, which makes it trainable in an end-to-end fashion. Based on their predecessor NTM, [15] proposes a new framework called DNC (Differentiable Neural Computer), which uses a different access mechanism to alleviate memory overlap and interference problems. Recently, NTM has also been applied to one-shot learning [16] by redesigning the methods for reading and writing memory, and has shown promising results at encoding and retrieving new information quickly.

Our proposed memory model differs from the aforementioned memory networks in the following aspects. Firstly, for the question answering problem, the input at each time step is a sentence, i.e., a sequence of feature vectors (each word corresponds to one vector), which needs an embedding layer (usually an RNN) to obtain an internal state, while for object tracking the input is a search image, which needs a feature extraction process (usually a CNN) to obtain a more abstract representation. Furthermore, for object tracking, the target’s position in the search image patch is unknown, and here we propose an attention mechanism to highlight the target’s information when generating the read key for memory retrieval. Secondly, the dimension of the feature vector stored in memory for natural language processing is relatively small (50 in MemN2N vs. 6×6×256 = 9216 in our case). Directly using the original template for address calculation is time-consuming. Therefore, we apply average pooling on the feature map to generate a template key for addressing, which is efficient and effective in our experiments. Furthermore, we apply channel-wise gated residual template learning for model updating, and redesign the memory writing operation to be more suitable for visual tracking.

3 Dynamic Memory Networks for Tracking

Figure 1: The pipeline of our tracking algorithm. The green rectangle is the candidate region for target search. The feature extraction blocks for the object image and search image share the same architecture and parameters. An attentional LSTM extracts the target’s information on the search feature map, which guides the memory reading process to retrieve a matching template. The residual template is combined with the initial template to obtain a final template for generating the response score. The newly predicted bounding box is then used to crop the object’s image patch for memory writing.

In this section we propose a dynamic memory network with reading and writing mechanisms for visual tracking. The whole framework is shown in Figure 1. Given the search image, features are first extracted with a CNN. The image features are input into an attentional LSTM, which controls the memory reading and writing. A residual template is read from the memory and combined with the initial template learned from the first frame, forming the final template. The final template is convolved with the search image features to obtain the response map, and the target bounding box is predicted. The new target template is cropped using the predicted bounding box, its features are extracted, and it is then written into memory for model updating.

3.1 Feature Extraction

Given an input image $I_t$ at time $t$, we first crop the frame into a search image patch $S_t$ using a rectangle computed from the previously predicted bounding box. The patch is then encoded into a high-level representation $f(S_t)$, which is a spatial feature map, via a fully convolutional neural network (FCNN). In this work we use the FCNN structure from SiamFC [4]. After obtaining the predicted bounding box, we use the same feature extractor to compute the new object template for memory writing.
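To make the cropping step concrete, below is a minimal NumPy sketch (not the released implementation) that crops a square search region centered on the previous bounding box with a context margin and resizes it to the network input size; the margin factor, output size, and nearest-neighbor resizing are illustrative assumptions rather than the exact SiamFC-style procedure.

import numpy as np

def crop_search_patch(frame, box, out_size=255, context=2.0):
    """Crop a square search region around the previous box (cx, cy, w, h)
    and resize it to out_size x out_size with nearest-neighbor sampling.
    'context' enlarges the region so that surrounding background is included."""
    H, W, _ = frame.shape
    cx, cy, w, h = box
    side = int(round(context * max(w, h)))          # square crop side length
    xs = np.clip(np.round(cx - side / 2 + np.arange(side)), 0, W - 1).astype(int)
    ys = np.clip(np.round(cy - side / 2 + np.arange(side)), 0, H - 1).astype(int)
    crop = frame[np.ix_(ys, xs)]                    # clamped square crop
    # nearest-neighbor resize to the network input size
    idx = np.clip(np.round(np.linspace(0, side - 1, out_size)), 0, side - 1).astype(int)
    return crop[np.ix_(idx, idx)]

# toy usage: a random 480x640 RGB frame and a previous box at (320, 240) of size 80x60
frame = np.random.rand(480, 640, 3)
patch = crop_search_patch(frame, (320, 240, 80, 60))
print(patch.shape)  # (255, 255, 3)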

3.2 Attention Scheme

Since the object information in the search image is needed to retrieve the related template for matching, but the object location is unknown at first, we apply an attention mechanism to make the input of the LSTM concentrate more on the target. We define $f^{*}_{t,i} \in \mathbb{R}^{n \times n \times c}$ as the $i$-th square patch of $f_t$, extracted in a sliding-window fashion (we use $n \times n \times c = 6 \times 6 \times 256$, which is the same size as the matching template). Each square patch covers a certain part of the search image. An attention-based weighted sum of these square patches can be regarded as a soft representation of the object, which can then be fed into the LSTM to generate a proper read key for memory retrieval. However, the size of this soft feature map is still too large to directly feed into the LSTM. To further reduce the size of each square patch, we first apply average pooling with an $n \times n$ filter on $f^{*}_{t,i}$,

$$ f_{t,i} = \text{AvgPooling}_{n \times n}(f^{*}_{t,i}), \qquad (1) $$

where $f_{t,i} \in \mathbb{R}^{c}$ is the feature vector for the $i$-th patch.

The attended feature vector is then computed as the weighted sum of the feature vectors,

$$ a_t = \sum_{i=1}^{L} \alpha_{t,i} f_{t,i}, \qquad (2) $$

where $L$ is the number of square patches, and the attention weights $\alpha_{t,i}$ are calculated by a softmax,

$$ \alpha_{t,i} = \frac{\exp(r_{t,i})}{\sum_{k=1}^{L} \exp(r_{t,k})}, \qquad (3) $$

where

$$ r_{t,i} = W^{a} \tanh(W^{h} h_{t-1} + W^{f} f_{t,i} + b) \qquad (4) $$

is an attention network which takes the previous hidden state $h_{t-1}$ of the LSTM controller and a square patch $f_{t,i}$ as input. $W^{a}$, $W^{h}$, $W^{f}$ and $b$ are the weight matrices and biases for the network.

By comparing the target’s historical information in the previous hidden state with each square patch, the attention network can generate attention weights that have higher values on the target and smaller values for surrounding regions. Figure 2 shows example search images with attention weight maps. We can see that our attention network can always focus on the target, which is beneficial when retrieving memory for template matching.
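For illustration, the following NumPy sketch implements the attention computation of Eqs. (1)-(4) on a toy feature map; the attention hidden size, parameter shapes, and feature-map dimensions are assumptions for the example, not values taken from the released code.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attended_vector(feat_map, h_prev, params, n=6, stride=1):
    """Soft target representation a_t from Eqs. (1)-(4).
    feat_map: search feature map of shape (H, W, C); h_prev: previous LSTM hidden state.
    Each n x n patch is average-pooled to a C-dim vector, scored against h_prev,
    and the patch vectors are combined with softmax attention weights."""
    H, W, C = feat_map.shape
    Wh, Wf, Wa, b = params['Wh'], params['Wf'], params['Wa'], params['b']
    vecs, scores = [], []
    for y in range(0, H - n + 1, stride):
        for x in range(0, W - n + 1, stride):
            f_i = feat_map[y:y + n, x:x + n].mean(axis=(0, 1))   # Eq. (1): n x n average pooling
            r_i = Wa @ np.tanh(Wh @ h_prev + Wf @ f_i + b)       # Eq. (4): attention score
            vecs.append(f_i)
            scores.append(r_i)
    alpha = softmax(np.array(scores))                            # Eq. (3): attention weights
    return (alpha[:, None] * np.array(vecs)).sum(axis=0)         # Eq. (2): attended feature vector

# toy usage with random parameters (dimensions are assumptions for illustration)
C, D = 256, 512                      # feature channels, LSTM hidden size
rng = np.random.default_rng(0)
params = {'Wh': rng.normal(size=(64, D)) * 0.01,
          'Wf': rng.normal(size=(64, C)) * 0.01,
          'Wa': rng.normal(size=(64,)) * 0.01,
          'b':  np.zeros(64)}
a_t = attended_vector(rng.normal(size=(22, 22, C)), rng.normal(size=D), params)
print(a_t.shape)  # (256,)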

Figure 2: Visualization of attention weight maps: for each pair, (left) search image and ground-truth target box, and (right) attention map over the search image. For visualization, the attention maps are resized using bicubic interpolation to match the size of the original image.

3.3 LSTM Memory Controller

For each time step, the LSTM controller takes the attended feature vector $a_t$, obtained by the attention module, and the previous hidden state $h_{t-1}$ as input, and outputs the new hidden state $h_t$, which is used to calculate the memory control signals, including the read key, read strength, bias gates, and decay rate (discussed later). The internal architecture of the LSTM uses the standard model (details in the Supplemental), while the output layer is modified to generate the control signals. In addition, we also use layer normalization [17] and dropout regularization [18] for the LSTM. The initial hidden state $h_0$ and cell state $c_0$ are obtained by passing the initial target’s feature map through an average pooling layer and two separate fully-connected layers with tanh activation functions, respectively.
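The sketch below shows one plausible way the LSTM hidden state can be projected into the control signals used in the following subsections (read key, read strength, residual gate, write gates, and decay rate); the parameter shapes are assumptions, and the LSTM recurrence itself is omitted for brevity.

import numpy as np

def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))
def softplus(x): return np.log1p(np.exp(x))

def control_signals(h_t, p):
    """Map the LSTM hidden state h_t to the memory control signals (Secs. 3.4-3.6).
    Parameter matrices in 'p' are illustrative placeholders."""
    gate_logits = p['Wg'] @ h_t + p['bg']
    return {
        'read_key':      p['Wk'] @ h_t + p['bk'],                      # Eq. (5)
        'read_strength': 1.0 + softplus(p['Wb'] @ h_t + p['bb']),      # Eq. (6): >= 1
        'residual_gate': sigmoid(p['Wr'] @ h_t + p['br']),             # Eq. (10): channel-wise in (0,1)
        'write_gates':   np.exp(gate_logits) / np.exp(gate_logits).sum(),  # Eq. (12): [g_w, g_r, g_a]
        'decay_rate':    sigmoid(p['Wd'] @ h_t + p['bd']),             # Eq. (17)
    }

# toy usage (C channels, D hidden units are assumptions)
C, D = 256, 512
rng = np.random.default_rng(1)
p = {'Wk': rng.normal(size=(C, D)) * 0.01, 'bk': np.zeros(C),
     'Wb': rng.normal(size=(D,)) * 0.01,  'bb': 0.0,
     'Wr': rng.normal(size=(C, D)) * 0.01, 'br': np.zeros(C),
     'Wg': rng.normal(size=(3, D)) * 0.01, 'bg': np.zeros(3),
     'Wd': rng.normal(size=(D,)) * 0.01,  'bd': 0.0}
sig = control_signals(rng.normal(size=D), p)
print(sig['write_gates'].sum())  # 1.0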

3.4 Memory Reading

Figure 3: Diagram of memory access mechanism.

Memory is retrieved by computing a weighted sum of all memory slots with a read weight vector, which is determined by the cosine similarity between a read key and the memory keys. This aims at retrieving the most related template stored in memory. Suppose $M_t \in \mathbb{R}^{N \times n \times n \times c}$ represents the memory module, such that $M_t(j)$ is the template stored in the $j$-th memory slot and $N$ is the number of memory slots. The LSTM controller outputs the read key $k_t \in \mathbb{R}^{c}$ and read strength $\beta_t \in [1, \infty)$,

$$ k_t = W^{k} h_t + b^{k}, \qquad (5) $$
$$ \beta_t = 1 + \log\bigl(1 + \exp(W^{\beta} h_t + b^{\beta})\bigr), \qquad (6) $$

where $W^{k}, W^{\beta}, b^{k}, b^{\beta}$ are the corresponding weight matrices and biases. The read key $k_t$ is used for matching the contents in the memory, while the read strength $\beta_t$ indicates the reliability of the generated read key. Given the read key and read strength, a read weight $w^{r}_t \in \mathbb{R}^{N}$ is computed for memory retrieval,

$$ w^{r}_t(j) = \frac{\exp\{ C(k_t, k_{M_t(j)})\, \beta_t \}}{\sum_{j'} \exp\{ C(k_t, k_{M_t(j')})\, \beta_t \}}, \qquad (7) $$

where $k_{M_t(j)} \in \mathbb{R}^{c}$ is the memory key generated by average pooling on $M_t(j)$, and $C(x, y)$ is the cosine similarity between vectors, $C(x, y) = \frac{x \cdot y}{\|x\|\,\|y\|}$. Finally, the template is retrieved from memory as a weighted sum,

$$ T^{\text{retr}}_t = \sum_{j=1}^{N} w^{r}_t(j)\, M_t(j). \qquad (8) $$
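A compact NumPy sketch of the reading operation in Eqs. (7)-(8) is given below; the memory keys are obtained by average pooling the stored templates, as described above, and the slot count and template size are only illustrative.

import numpy as np

def read_memory(memory, read_key, read_strength):
    """Retrieve a template by cosine-similarity addressing, Eqs. (7)-(8).
    memory: (N, n, n, C) array of stored templates."""
    keys = memory.mean(axis=(1, 2))                                 # memory keys: (N, C), average pooling
    cos = (keys @ read_key) / (np.linalg.norm(keys, axis=1) *
                               np.linalg.norm(read_key) + 1e-8)     # cosine similarity C(k_t, k_M(j))
    logits = read_strength * cos
    w_r = np.exp(logits - logits.max())
    w_r = w_r / w_r.sum()                                           # Eq. (7): read weight over slots
    T_retr = np.tensordot(w_r, memory, axes=1)                      # Eq. (8): weighted sum of templates
    return T_retr, w_r

# toy usage: 8 memory slots holding 6x6x256 templates
rng = np.random.default_rng(2)
memory = rng.normal(size=(8, 6, 6, 256))
T_retr, w_r = read_memory(memory, rng.normal(size=256), read_strength=5.0)
print(T_retr.shape, w_r.round(2))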

3.5 Residual Template Learning

Directly using the retrieved template for similarity matching is prone to overfitting recent frames. Instead, we learn a residual template by multiplying the retrieved template with a channel-wise gate vector, and add it to the initial template to capture the appearance changes. Therefore, our final template is formulated as,

$$ T^{\text{final}}_t = T_0 + r_t \odot T^{\text{retr}}_t, \qquad (9) $$

where $T_0$ is the initial template and $\odot$ is channel-wise multiplication. $r_t \in \mathbb{R}^{c}$ is the residual gate produced by the LSTM controller,

$$ r_t = \sigma(W^{r} h_t + b^{r}), \qquad (10) $$

where $W^{r}, b^{r}$ are the corresponding weights and biases, and $\sigma$ represents the sigmoid function. The residual gate controls how much of each channel of the retrieved template is added to the initial one, which can be regarded as a form of feature selection.
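The gated residual combination of Eqs. (9)-(10) amounts to a channel-wise blend, as the short sketch below shows on toy templates; the template dimensions are illustrative.

import numpy as np

def final_template(T0, T_retr, residual_gate):
    """Gated residual template, Eq. (9): each channel of the retrieved template is scaled
    by its gate value before being added to the (fixed) initial template T0."""
    return T0 + residual_gate[None, None, :] * T_retr   # broadcast the gate over spatial dims

# toy usage: 6x6x256 templates, channel-wise gates in (0, 1)
rng = np.random.default_rng(3)
T0, T_retr = rng.normal(size=(6, 6, 256)), rng.normal(size=(6, 6, 256))
gate = 1.0 / (1.0 + np.exp(-rng.normal(size=256)))      # Eq. (10): sigmoid-activated gate
T_final = final_template(T0, T_retr, gate)
print(T_final.shape)  # (6, 6, 256)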

Figure 4: The feature channels respond to target parts: images are reconstructed from conv5 of the CNN used in our tracker. Each image is generated by accumulating reconstructed pixels from the same channel. The input image is shown in the top-left.

By projecting different channels of a target feature map to pixel-space using deconvolution, as in [19], we find that the channels focus on different object parts (see Figure 4). Thus, the channel-wise feature residual learning has the advantage of updating different object parts separately. Experiments in Section 5.1 show that this yields a big performance improvement.

3.6 Memory Writing

The image patch with the new position of the target is used for model updating, i.e., memory writing. The new object template $T^{\text{new}}_t$ is computed using the feature extraction CNN. There are three cases for memory writing: 1) when the new object template is not reliable (e.g. contains a lot of background), there is no need to write new information into memory; 2) when the new object appearance does not change much compared with the previous frame, the memory slot that was previously read should be updated; 3) when the new target has a large appearance change, a new memory slot should be overwritten. To handle these three cases, we define the write weight as

$$ w^{w}_t = g^{w} \mathbf{0} + g^{r} w^{r}_t + g^{a} w^{a}_t, \qquad (11) $$

where $\mathbf{0}$ is the zero vector, $w^{r}_t$ is the read weight, and $w^{a}_t$ is the allocation weight, which is responsible for allocating a new position for memory writing. The write gate $g^{w}$, read gate $g^{r}$ and allocation gate $g^{a}$ are produced by the LSTM controller with a softmax function,

$$ [g^{w}, g^{r}, g^{a}] = \text{softmax}(W^{g} h_t + b^{g}), \qquad (12) $$

where $W^{g}, b^{g}$ are the weights and biases. Since $g^{w} + g^{r} + g^{a} = 1$, these three gates govern the interpolation between the three cases. If $g^{w} = 1$, then $w^{w}_t = \mathbf{0}$ and nothing is written. If $g^{r}$ or $g^{a}$ have higher values, then the new template is either used to update the old template (using $w^{r}_t$) or written into a newly allocated position (using $w^{a}_t$). The allocation weight is calculated by,

$$ w^{a}_t(j) = \begin{cases} 1, & \text{if } j = \arg\min_{j'} u_{t-1}(j') \\ 0, & \text{otherwise} \end{cases} \qquad (13) $$

where $u_t \in \mathbb{R}^{N}$ is the access vector,

$$ u_t = \lambda\, u_{t-1} + w^{r}_t + w^{w}_t, \qquad (14) $$

which indicates the frequency of memory access (both reading and writing), and $\lambda$ is a decay factor. Memory slots that are accessed infrequently will be assigned new templates.
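The following sketch computes the write weight and the access-vector update of Eqs. (11)-(14); the decay factor value and the use of the previous access vector for allocation are assumptions made for the example.

import numpy as np

def write_weight(w_r, u_prev, gates, lam=0.99):
    """Write weight and access-vector update, Eqs. (11)-(14).
    gates = [g_w, g_r, g_a] from the controller softmax; lam is the decay factor
    (its value here is an assumption)."""
    g_w, g_r, g_a = gates
    w_a = np.zeros_like(w_r)
    w_a[np.argmin(u_prev)] = 1.0                              # Eq. (13): allocate the least-used slot
    w_w = g_w * np.zeros_like(w_r) + g_r * w_r + g_a * w_a    # Eq. (11): interpolate the three cases
    u = lam * u_prev + w_r + w_w                              # Eq. (14): decayed access frequency
    return w_w, u

# toy usage with 8 slots: a strong read gate mostly re-uses the previously read slot
w_r = np.array([0.7, 0.1, 0.05, 0.05, 0.05, 0.02, 0.02, 0.01])
w_w, u = write_weight(w_r, u_prev=np.ones(8), gates=[0.1, 0.8, 0.1])
print(w_w.round(3))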

The writing process is performed with the write weight in conjunction with an erase factor for clearing the memory,

$$ M_{t+1}(j) = M_t(j)\,\bigl(1 - w^{w}_t(j)\, e^{w}_t\bigr) + w^{w}_t(j)\, T^{\text{new}}_t, \qquad (15) $$

where $e^{w}_t$ is the erase factor computed by

$$ e^{w}_t = d^{r}_t\, g^{r} + g^{a}, \qquad (16) $$

and $d^{r}_t \in (0, 1)$ is the decay rate produced by the LSTM controller,

$$ d^{r}_t = \sigma(W^{d} h_t + b^{d}), \qquad (17) $$

where $\sigma$ is the sigmoid function, and $W^{d}, b^{d}$ are the corresponding weights and biases. If $g^{r} = 1$ (and thus $g^{a} = 0$), then $d^{r}_t$ serves as the decay rate for updating the template in the memory slot (case 2). If $g^{a} = 1$ (and $g^{r} = 0$), $d^{r}_t$ has no effect on $e^{w}_t$, and thus the memory slot will be erased before writing the new template (case 3). Figure 3 shows the detailed diagram of the memory reading and writing process.
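A minimal sketch of the write operation in Eqs. (15)-(17) is shown below; with a pure allocation gate the selected slot is fully erased and replaced by the new template, matching case 3 above. The slot count and template size are illustrative.

import numpy as np

def write_memory(memory, T_new, w_w, gates, decay_rate):
    """Memory update, Eqs. (15)-(17). Each slot is partially erased according to the
    write weight and erase factor, then blended with the new template."""
    g_w, g_r, g_a = gates
    e_w = decay_rate * g_r + g_a                              # Eq. (16): erase factor
    keep = 1.0 - w_w[:, None, None, None] * e_w               # per-slot retention coefficient
    return memory * keep + w_w[:, None, None, None] * T_new   # Eq. (15): erase then write

# toy usage: overwrite the slot selected by w_w
rng = np.random.default_rng(4)
memory = rng.normal(size=(8, 6, 6, 256))
T_new = rng.normal(size=(6, 6, 256))
w_w = np.eye(8)[2]                                            # write exclusively into slot 2
new_memory = write_memory(memory, T_new, w_w, gates=[0.0, 0.0, 1.0], decay_rate=0.5)
print(np.allclose(new_memory[2], T_new), np.allclose(new_memory[0], memory[0]))  # True True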

4 Implementation Details

We adopt an AlexNet-like CNN as in SiamFC [4] for feature extraction, where the input sizes of the object image and search image are 127×127×3 and 255×255×3, respectively. The whole network is trained offline from scratch on the VID dataset (object detection from video) of ILSVRC [20], which takes about a day. Adam [21] optimization is used with mini-batches of 8 video clips of length 16. The initial learning rate is 1e-4 and is multiplied by 0.8 every 10k iterations. Each video clip is constructed by uniformly sampling frames (keeping the temporal order) from a video. This aims to diversify the appearance variations in one episode for training, which can simulate fast motion, fast background change, jittering objects, and low frame rate. We use data augmentation, including small image stretching and translation, for the target image and search image. The dimension of the memory states in the LSTM controller is 512 and the retain probability used in dropout for the LSTM is 0.8. The number of memory slots is N = 8 (see Section 5.1). The access vector in Eq. (14) uses a fixed decay factor λ. At test time, the tracker runs completely feed-forward and no online fine-tuning is needed. We locate the target based on the upsampled response map as in SiamFC [4], and handle scale changes by searching for the target over three scales. To smooth the scale estimation and penalize large displacements, we update the object scale with the new one using a decay rate of 0.6, and dampen the response map with a cosine window using a decay rate of 0.15.
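As an illustration of the clip construction used for training, the snippet below uniformly samples a fixed-length clip from a video while preserving temporal order; picking one random frame per equal-length segment is an assumption, since the exact sampling scheme is not specified above.

import numpy as np

def sample_clip(num_frames, clip_len=16, rng=None):
    """Uniformly sample clip_len frame indices from a video of num_frames frames,
    keeping temporal order, so that one training episode spans large appearance changes."""
    rng = rng or np.random.default_rng()
    if num_frames <= clip_len:
        return np.arange(num_frames)
    # split the video into clip_len equal segments and pick one random frame per segment
    edges = np.linspace(0, num_frames, clip_len + 1).astype(int)
    return np.array([rng.integers(lo, hi) for lo, hi in zip(edges[:-1], edges[1:])])

print(sample_clip(300, rng=np.random.default_rng(0)))  # 16 increasing frame indices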

Our algorithm is implemented in Python with the TensorFlow toolbox [22]. It runs at about 50 fps on a computer with a four-core Intel Core i7-7700 CPU @ 3.60GHz and a single NVIDIA GTX 1080 Ti GPU with 11GB of memory.

5 Experiments

We evaluate our proposed tracker, denoted as MemTrack, on three challenging datasets: OTB-2013 [23], OTB-2015 [24] and VOT-2016 [25]. We follow the standard protocols, and evaluate using precision and success plots, as well as area-under-the-curve (AUC).

5.1 Ablation Studies

Figure 5: Ablation studies: (left) success plots of different variants of our tracker on OTB-2015; (right) success plots for different memory sizes {1, 2, 4, 8, 16} on OTB-2015.

Our MemTrack tracker contains three important components: 1) an attention mechanism, which calculates the attended feature vector for memory reading; 2) a dynamic memory network, which maintains the target’s appearance variations; and 3) residual template learning, which controls the amount of model updating for each channel of the template. To evaluate their separate contributions to our tracker, we implement several variants of our method and verify them on the OTB-2015 dataset.

We first design a variant of MemTrack without the attention mechanism (MemTrack-NoAtt), which averages all feature vectors to obtain the input to the LSTM. Mathematically, it changes (2) to $a_t = \frac{1}{L}\sum_{i=1}^{L} f_{t,i}$. As we can see in Figure 5 (left), MemTrack without attention decreases performance, which shows the benefit of using attention to roughly localize the target in the search image. We also design a naive strategy that simply writes the new target template sequentially into the memory slots as a queue (MemTrack-Queue). When the memory is fully occupied, the oldest template is replaced with the new template. The retrieved template is generated by averaging all templates stored in the memory slots. As seen in Fig. 5 (left), such a simple approach cannot produce good performance, which shows the necessity of our dynamic memory network. To verify the effectiveness of gated residual template learning, we design another variant of MemTrack that removes the channel-wise residual gates (MemTrack-NoRes), i.e. it directly adds the retrieved and initial templates to get the final template. From Fig. 5 (left), our gated residual template learning mechanism boosts the performance as it helps to select the correct residual channel features for template updating.

We also investigate the effect of memory size on tracking performance. Figure 5 (right) shows success plots on OTB-2015 using different numbers of memory slots. Tracking accuracy increases along with the memory size and saturates at 8 memory slots. Considering the runtime and memory usage, we choose 8 as the default number.

5.2 Comparison Results

We compare our method MemTrack with 9 recent real-time trackers (≥ 15 fps), including CFNet [8], LMCF [26], ACFN [27], RFL [10], SiamFC [4], SiamFC_U [8], Staple [28], DSST [29], and KCF [30], on both OTB-2013 and OTB-2015. To further demonstrate our tracking accuracy, we also compare with another 8 recent state-of-the-art trackers that do not run in real time, including CREST [1], CSR-DCF [31], MCPF [32], SRDCFdecon [33], SINT [6], SRDCF [34], HDT [35], and HCF [36], on OTB-2015.

Figure 6: Precision and success plot on OTB-2013 for recent real-time trackers.

OTB-2013 Results: The OTB-2013 [23] dataset contains 51 sequences annotated with 11 video attributes and uses two evaluation metrics: center location error and overlap ratio. Figure 6 shows the one-pass comparison results with recent real-time trackers on OTB-2013. Our tracker achieves the best AUC on the success plot and second place on the precision plot. Compared with SiamFC [4], which is the baseline for matching-based methods without online updating, our tracker achieves an improvement of 4.9% on the precision plot and 5.8% on the success plot. Our method also outperforms SiamFC_U, the improved version of SiamFC [8] that uses simple linear interpolation of the old and new filters with a small learning rate for online updating. This indicates that our dynamic memory network can handle object appearance changes better than simply interpolating new templates with old ones.

Figure 7: Precision and success plot on OTB-2015 for recent real-time trackers.

OTB-2015 Results: The OTB-2015 [24] dataset is the extension of OTB-2013 to 100 sequences, and is thus more challenging. Figure 7 presents the precision plot and success plot for recent real-time trackers. Our tracker outperforms all other methods in both measures. Specifically, our method performs much better than RFL [10], which uses the memory states of an LSTM to maintain the object appearance variations. This demonstrates the effectiveness of using an external addressable memory to manage object appearance changes, compared with using LSTM memory, which is limited by the size of the hidden states. Furthermore, MemTrack improves on the baseline template-based method SiamFC [4] by 6.4% on the precision plot and 7.6% on the success plot, respectively. Our tracker also outperforms two recently proposed trackers, LMCF [26] and ACFN [27], on the AUC score by a large margin.

Figure 8: (left) Success plot on OTB-2015 comparing our real-time MemTrack with recent non-real-time trackers. (right) AUC score vs speed with recent trackers.

Figure 8 presents the comparison results of 8 recent state-of-the-art non-real-time trackers in terms of AUC score (left plot), and the AUC score vs speed (right plot) of all trackers. Our MemTrack, which runs in real time, has similar AUC performance to CREST [1], MCPF [32] and SRDCFdecon [33], which all run at about 1 fps. Moreover, our MemTrack also surpasses SINT, another matching-based method that uses optical flow as motion information, in terms of both accuracy and speed.

Figure 9: The success plot of OTB-2015 on eight challenging attributes: illumination variation, out-of-plane rotation, scale variation, occlusion, motion blur, fast motion, in-plane rotation and low resolution

Figure 9 further shows the AUC scores of real-time trackers on OTB-2015 under different video attributes, including illumination variation, out-of-plane rotation, scale variation, occlusion, motion blur, fast motion, in-plane rotation, and low resolution. Our tracker outperforms all other trackers on these attributes. In particular, for the low-resolution attribute, our MemTrack surpasses the second-best tracker (SiamFC) by 10.7% in AUC score. In addition, our tracker also works well under out-of-plane rotation and scale variation. Fig. 10 shows qualitative results of our tracker compared with 6 real-time trackers.

Figure 10: Qualitative results of our MemTrack, along with SiamFC [4], RFL [10], CFNet [8], Staple [28], LMCF [26], and ACFN [27], on eight challenging sequences. From left to right, top to bottom: board, bolt2, dragonbaby, lemming, matrix, skiing, biker, girl2.
Trackers    MemTrack  SiamFC  RFL     Staple  SRDCF   KCF     HCF     DSST
EAO (↑)     0.2729    0.2352  0.2230  0.2952  0.2471  0.1924  0.2203  0.1814
A (↑)       0.51      0.50    0.50    0.54    0.52    0.48    0.46    0.48
R (↓)       1.34      1.65    2.10    1.35    1.50    2.03    1.47    2.52
Ar (↓)      2.07      2.32    2.22    1.67    1.93    2.68    2.72    2.32
Rr (↓)      2.70      3.07    3.87    2.72    2.80    3.67    2.72    4.07
Table 1: Comparison results on VOT-2016. The evaluation metrics include expected average overlap (EAO), accuracy and robustness values (A and R), and accuracy and robustness ranks (Ar and Rr). Best results are bolded, and second best is underlined. The up arrows (↑) indicate that higher values are better for that metric, while down arrows (↓) mean lower values are better.

VOT-2016 Results: The VOT-2016 dataset contains 60 video sequences with per-frame annotated visual attributes. Objects are marked with rotated bounding boxes to better fit their shapes. We compare our tracker with 7 trackers on this benchmark, including SiamFC [4], RFL [10], Staple [28], SRDCF [34], KCF [30], HCF [36] and DSST [29]. Table 1 summarizes the detailed comparison results. Overall, Staple achieves the best results on EAO and accuracy. Our MemTrack takes second place on EAO and achieves the best robustness. As reported in VOT-2016, the state-of-the-art bound is EAO 0.251, which MemTrack exceeds (0.273). In addition, our tracker outperforms the baseline matching-based trackers SiamFC and RFL on all evaluation metrics.

6 Conclusion

In this paper, we propose a dynamic memory network with an external addressable memory block for visual tracking, aiming to adapt matching templates to object appearance variations. An LSTM with an attention scheme controls the memory access by parameterizing the memory interactions. We develop channel-wise gated residual template learning to form the final matching model, which preserves the conservative information present in the initial target, while providing online adaptability for each feature channel. Once the offline training process is finished, no online fine-tuning is needed, which leads to a real-time speed of 50 fps. Extensive experiments on standard tracking benchmarks demonstrate the effectiveness of our MemTrack.

References

  • [1] Song, Y., Ma, C., Gong, L., Zhang, J., Lau, R., Yang, M.H.: CREST: Convolutional Residual Learning for Visual Tracking. In: ICCV. (2017)
  • [2] Nam, H., Han, B.: Learning Multi-Domain Convolutional Neural Networks for Visual Tracking. In: CVPR. (2016)
  • [3] Wang, L., Ouyang, W., Wang, X., Lu, H.: Visual Tracking with Fully Convolutional Networks. In: ICCV. (2015)
  • [4] Bertinetto, L., Valmadre, J., Henriques, J.F., Vedaldi, A., Torr, P.H.S.: Fully-Convolutional Siamese Networks for Object Tracking. In: ECCVW. (2016)
  • [5] Guo, Q., Feng, W., Zhou, C., Huang, R., Wan, L., Wang, S.: Learning Dynamic Siamese Network for Visual Object Tracking. In: ICCV. (2017)
  • [6] Tao, R., Gavves, E., Smeulders, A.W.M.: Siamese Instance Search for Tracking. In: CVPR. (2016)
  • [7] Held, D., Thrun, S., Savarese, S.: Learning to Track at 100 FPS with Deep Regression Networks. In: ECCV. (2016)
  • [8] Valmadre, J., Bertinetto, L., Henriques, F., Vedaldi, A., Torr, P.H.S.: End-to-end representation learning for Correlation Filter based tracking. In: CVPR. (2017)
  • [9] Nam, H., Baek, M., Han, B.: Modeling and Propagating CNNs in a Tree Structure for Visual Tracking. In: ECCV. (2016)
  • [10] Yang, T., Chan, A.B.: Recurrent Filter Learning for Visual Tracking. In: ICCVW. (2017)
  • [11] Huang, C., Lucey, S., Ramanan, D.: Learning Policies for Adaptive Tracking with Deep Feature Cascades. In: ICCV. (2017)
  • [12] Graves, A., Wayne, G., Danihelka, I.: Neural Turing Machines. Arxiv (2014)
  • [13] Weston, J., Chopra, S., Bordes, A.: Memory Networks. In: ICLR. (2015)
  • [14] Sukhbaatar, S., Szlam, A., Weston, J., Fergus, R.: End-To-End Memory Networks. In: NIPS. (2015)
  • [15] Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., Grabska-Barwińska, A., Gómez Colmenarejo, S., Grefenstette, E., Ramalho, T., Agapiou, J., Badia, A.P., Moritz Hermann, K., Zwols, Y., Ostrovski, G., Cain, A., King, H., Summerfield, C., Blunsom, P., Kavukcuoglu, K., Hassabis, D.: Hybrid computing using a neural network with dynamic external memory. Nature (2016)
  • [16] Santoro, A., Bartunov, S., Botvinick, M., Wierstra, D., Lillicrap, T.: One-shot Learning with Memory-Augmented Neural Networks. In: ICML. (2016)
  • [17] Ba, J.L., Kiros, J.R., Hinton, G.E.: Layer Normalization. arXiv (2016)
  • [18] Srivastava, N., Hinton, G.E., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout : A Simple Way to Prevent Neural Networks from Overfitting. JMLR (2014)
  • [19] Zeiler, M.D., Fergus, R.: Visualizing and Understanding Convolutional Networks. In: ECCV. (2014)
  • [20] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV) (2015) 211–252
  • [21] Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv (2014)
  • [22] Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G.S., Davis, A., Dean, J., Devin, M., et al.: Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv (2016)
  • [23] Wu, Y., Lim, J., Yang, M.H.: Online Object Tracking: A Benchmark. In: CVPR. (2013)
  • [24] Wu, Y., Lim, J., Yang, M.H.: Object Tracking Benchmark. PAMI (2015)
  • [25] Kristan, M., Leonardis, A., Matas, J., Felsberg, M.: The Visual Object Tracking VOT2016 Challenge Results. In: ECCVW. (2016)
  • [26] Wang, M., Liu, Y., Huang, Z.: Large Margin Object Tracking with Circulant Feature Maps. In: CVPR. (2017)
  • [27] Choi, J., Chang, H.J., Yun, S., Fischer, T., Demiris, Y., Jin Young Choi: Attentional Correlation Filter Network for Adaptive Visual Tracking. In: CVPR. (2017)
  • [28] Bertinetto, L., Valmadre, J., Golodetz, S., Miksik, O., Torr, P.: Staple: Complementary Learners for Real-Time Tracking. In: CVPR. (2016)
  • [29] Danelljan, M., Häger, G., Khan, F., Felsberg, M.: Accurate Scale Estimation for Robust Visual Tracking. In: BMVC. (2014)
  • [30] Henriques, J.F., Caseiro, R., Martins, P., Batista, J.: High-speed tracking with kernelized correlation filters. TPAMI (2015)
  • [31] Lukežič, A., Vojíř, T., Čehovin, L., Matas, J., Kristan, M.: Discriminative Correlation Filter with Channel and Spatial Reliability. In: CVPR. (2017)
  • [32] Zhang, T., Xu, C., Yang, M.h.: Multi-task Correlation Particle Filter for Robust Object Tracking. In: CVPR. (2017)
  • [33] Danelljan, M., Häger, G., Khan, F.S., Felsberg, M.: Adaptive Decontamination of the Training Set: A Unified Formulation for Discriminative Visual Tracking. In: CVPR. (2016)
  • [34] Danelljan, M., Gustav, H., Khan, F.S., Felsberg, M.: Learning Spatially Regularized Correlation Filters for Visual Tracking. In: ICCV. (2015)
  • [35] Qi, Y., Zhang, S., Qin, L., Yao, H., Huang, Q., Yang, J.L.M.h.: Hedged Deep Tracking. In: CVPR. (2016)
  • [36] Ma, C., Huang, J.b., Yang, X., Yang, M.h.: Hierarchical Convolutional Features for Visual Tracking. In: ICCV. (2015)