Human Following for Wheeled Robot with Monocular Pan-tilt Camera


Zheng Zhu, Hongxuan Ma, and Wei Zou are with the Institute of Automation, Chinese Academy of Sciences and the University of Chinese Academy of Sciences.
Abstract

Human following on mobile robots has witnessed significant advances due to its potential for real-world applications. Currently, most human following systems are equipped with depth sensors to obtain the distance between the human and the robot, which suffer from perception requirements and noise. In this paper, we design a wheeled mobile robot system with a monocular pan-tilt camera to follow a human, which can keep the target in the field of view and perform following simultaneously. The system consists of a fast human detector, a real-time and accurate visual tracker, and a unified controller for the mobile robot and the pan-tilt camera. In the visual tracking algorithm, both Siamese networks and optical flow information are exploited to locate and regress the human simultaneously. In order to perform following with a monocular camera, a constraint on the human height is introduced to design the controller. In experiments, human following is conducted and analysed in simulations and on a real robot platform, which demonstrates the effectiveness and robustness of the overall system.

I Introduction

Tracking and following humans with cameras on mobile robots is of significance for human-computer interaction [1, 2], autonomous driving [3, 4], personal assistant robots [5, 6, 7] and service robots [8, 9, 10, 11, 12]. The core technology for a following robot mainly consists of a human visual tracker and a robot controller. The former automatically tracks a specified human in a changing video sequence given a detected bounding box in the first frame, while the latter generates the motion commands necessary for the robot to follow the target human.

The core problem of visual tracking is how to detect and locate the object accurately and quickly in human following scenarios with occlusions, shape deformation and illumination variations [13, 14, 15, 16, 17, 18]. As in other computer vision tasks [19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30], deep convolutional networks have achieved favourable performance on recent tracking benchmarks [31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49]. In this paper, both Siamese networks [47, 48] and optical flow information [44, 50, 51] are exploited to locate and regress the human simultaneously. Besides, negative pairs of humans are emphasised to suppress the responses of distractors. A simple yet effective failure recovery strategy is proposed to handle occlusion and out-of-view during tracking.

Since distance information of the target is essential for robot perception and control, depth sensors are usually employed for the following task [52, 53, 54, 55, 56, 57, 58, 59]. Laser scanners combined with cameras are often used to detect and track the legs of a human at a fixed height [59], which cannot provide robust features for discriminating different persons in the following task. Kinect cameras are frequently adopted in the robotics community [53, 55, 57], but their minimum-distance requirement and sensitivity to illumination variations limit their applications. Some robots [56] are equipped with stereo vision systems to reconstruct depth information, which suffer from baseline configuration, camera calibration and field-of-view problems. In [60], a monocular camera with an ultrasonic sensor is adopted to implement a human tracking system; however, the accuracy of the ultrasonic sensor is affected by reflection problems and noise. The above sensors are fixed to the robots, which limits the range of perception. In this paper, we develop a wheeled mobile robot system with a monocular pan-tilt camera, which does not need the distance information provided by depth sensors. Besides, the camera can actively track the human using the pan-tilt motors. In order to perform following with a monocular camera, a constraint on the human height is introduced to design a unified controller for the wheeled robot and the pan-tilt camera.

The rest of this paper is organized as follows: Section II describes our human following system, including the robot configuration, the fast human detector, the real-time and accurate visual tracker, and the unified controller for the mobile robot with a pan-tilt camera. Section III presents experimental results on human following scenarios. Section IV concludes the paper with a summary.

II Following Human Utilizing Mobile Robot with Monocular Pan-tilt Camera

In this section, we first introduce the overall framework of the human following system. Then the separate parts, including the robot system and coordinate definition, the fast human detector, the real-time and accurate visual tracker, and the unified controller for the mobile robot with a pan-tilt camera, are described in detail.

II-A Overall Human Following Framework

The overall human following framework consists of human detection and tracking in the captured video stream, and a controller for the mobile robot with a pan-tilt camera. At the beginning of human following, the human bounding box is provided by a Tiny YOLO [61] detector. Once the initial box is given, the human tracking loop is performed using the proposed FlowTrack++ algorithm. Tracking results are fed into the controller for the mobile robot and the pan-tilt platform, which results in the motion of the camera. The visual tracker and the robot controller together form a closed loop for the human following task, as shown in Figure 1.
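For concreteness, the sketch below outlines this closed loop in Python; the `detector`, `tracker`, `controller`, and `robot` interfaces are hypothetical placeholders for the modules described in the following subsections.

```python
# Minimal sketch of the detect -> track -> control loop (hypothetical interfaces).
import cv2

def follow_human(camera_id, detector, tracker, controller, robot):
    cap = cv2.VideoCapture(camera_id)
    box = None                       # (x, y, w, h) of the tracked human
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if box is None:
            # Initialization: run the person detector until a stable box is found.
            box = detector.detect_person(frame)
        else:
            # Tracking loop: the FlowTrack++-style tracker updates the box each frame.
            box, _score = tracker.update(frame)
            # Controller maps the box to mobile-robot and pan-tilt velocity commands.
            v, w_robot, w_pan, w_tilt = controller.compute(box, frame.shape)
            robot.send_velocities(v, w_robot, w_pan, w_tilt)
    cap.release()
```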

Fig. 1: The overall framework for human following.

II-B Robot System and Coordinate Definition

As shown in Figure 2, a wheeled mobile robot with a monocular pan-tilt camera is adopted in this paper to achieve the human following task. The robot system consists of a wheeled mobile platform and a pan-tilt camera platform. The wheeled mobile platform is equipped with a microcomputer (including a GTX1050Ti GPU). The pan-tilt camera platform contains pan/tilt motors as well as their corresponding encoders. Compared with conventional human tracking systems that are equipped with fixed depth sensors, our hardware structure has a larger field of view and suffers less from perception requirements and noise.

Fig. 2: Left part is the designed robot platform for human following task. Right part is corresponding coordinate systems.

The right part of Figure 2 illustrates the coordinate systems used in this paper. The mobile robot coordinate system has its origin at the midpoint of the axis between the two wheels, and its forward axis is aligned with the forward direction of the mobile robot. Pan and tilt coordinate systems are defined for the pan-tilt platform, and a camera coordinate system is attached to the camera. At the beginning, the axes of the pan coordinate system are aligned with those of the robot coordinate system, and the axes of the tilt coordinate system are aligned with those of the pan coordinate system. The origin of the pan coordinate system coincides with that of the robot coordinate system, and the origin of the tilt coordinate system coincides with that of the pan coordinate system. The orientation of the pan coordinate system changes as the pan motor rotates, and the orientation of the tilt coordinate system changes as the tilt motor rotates. All axes of the camera coordinate system are always aligned with the corresponding axes of the tilt coordinate system.
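As an illustration of how the pan and tilt angles relate these frames, the following sketch composes the corresponding elementary rotations; the specific axis conventions (pan about the vertical z-axis, tilt about the resulting y-axis) are assumptions for illustration rather than the paper's exact definition.

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def rot_y(theta):
    """Rotation about the y-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def camera_to_robot_rotation(pan, tilt):
    """Rotation expressing camera-frame vectors in the robot frame.

    Assumes the pan motor rotates about the robot's vertical (z) axis and the
    tilt motor about the resulting horizontal (y) axis, with the camera frame
    aligned with the tilt frame; these conventions are assumptions.
    """
    # Successive rotations about moving axes compose by right-multiplication.
    return rot_z(pan) @ rot_y(tilt)
```

Its transpose maps robot-frame quantities into the camera frame, which is the kind of transform used when expressing robot and pan-tilt velocities in the camera coordinate system.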

II-C Fast Human Detector

YOLO [61] is utilized as the detector in the human following task because of its superior speed and accuracy. In this framework, a single neural network predicts bounding boxes and class probabilities directly from full images, which casts object detection as a regression problem. Specifically, YOLO divides the full image into a grid and, for each grid cell, predicts two bounding boxes, confidences for those boxes, and their class probabilities. For our following task, only the person class is used while other classes are ignored. We deploy the Tiny YOLO version on the mobile robot platform, where it runs at 80 FPS. The human bounding box is initialized when the detected position shifts by less than 10 pixels across three consecutive frames.
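The initialization rule can be implemented along the following lines; `run_tiny_yolo` is a hypothetical detector call returning class labels, scores, and boxes, and only the 10-pixel / three-frame consistency check is taken from the text.

```python
import math

def try_initialize(frames, run_tiny_yolo, max_shift=10.0):
    """Return a person box if its center moves less than max_shift pixels
    across three consecutive frames, otherwise None."""
    centers, boxes = [], []
    for frame in frames[-3:]:                      # last three frames
        detections = run_tiny_yolo(frame)          # hypothetical detector call
        persons = [d for d in detections if d["class"] == "person"]
        if not persons:
            return None
        best = max(persons, key=lambda d: d["score"])
        x, y, w, h = best["box"]
        centers.append((x + w / 2.0, y + h / 2.0))
        boxes.append(best["box"])
    shifts = [math.hypot(centers[i + 1][0] - centers[i][0],
                         centers[i + 1][1] - centers[i][1])
              for i in range(len(centers) - 1)]
    if len(centers) == 3 and all(s < max_shift for s in shifts):
        return boxes[-1]                           # stable detection: initialize tracker
    return None
```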

Fig. 3: The overall framework of the FlowTrack++ algorithm. The kernel branch (top) takes the template image as input and the search branch (bottom) takes the search image as input. The FeatureNet output of the kernel branch is transformed into a classification kernel and a regression kernel (with a fixed number of anchors at each position) by two convolution layers. Similarly, the attention-guided fusion output of the search branch is extended to two feature maps by convolution layers. Finally, the feature maps and the kernels are correlated to produce the classification map and the regression map.

II-D Real-time and Accurate Visual Tracker

In this subsection, the overall architecture of the proposed FlowTrack++ algorithm is introduced, which gracefully combines a flow aggregation module [44] and a high-quality Siamese network [47, 48]. As shown in Figure 3, the Siamese network contains a kernel branch and a search branch. In the kernel branch, the feature maps of the template frame are extracted by the FeatureNet. In the search branch, the flow aggregation module contains the FeatureNet (feature extraction sub-network), FlowNet [62], a warping module, and an attention-guided fusion module. Appearance features and flow information are first extracted by the FeatureNet and FlowNet. Then the features of previous frames at predefined intervals are warped to the current frame guided by the flow information. Meanwhile, an attention-guided fusion module is designed to weight the warped feature maps. More details about the flow aggregation module can be found in [44]. Finally, the outputs of the two branches are fed into the subsequent high-quality Siamese network for simultaneous classification and regression [47]. All the modules are differentiable and trained end-to-end.
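The PyTorch-style sketch below illustrates the core idea of warping features of previous frames with optical flow and fusing them with attention weights; the tensor shapes, the `attention_net` module, and the (dx, dy) flow ordering are illustrative assumptions rather than the exact FlowTrack++ implementation.

```python
import torch
import torch.nn.functional as F

def warp(feat, flow):
    """Backward-warp a feature map (N,C,H,W) with a flow field (N,2,H,W) in (dx, dy) order."""
    n, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(feat.device)          # (2,H,W), x then y
    coords = grid.unsqueeze(0) + flow                                    # shifted sampling positions
    # Normalize to [-1, 1] as required by grid_sample.
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)              # (N,H,W,2)
    return F.grid_sample(feat, sample_grid, align_corners=True)

def aggregate(current_feat, past_feats, flows, attention_net):
    """Fuse warped past features with attention weights (softmax over frames)."""
    warped = [warp(f, fl) for f, fl in zip(past_feats, flows)]
    candidates = [current_feat] + warped
    scores = torch.stack([attention_net(c) for c in candidates], dim=0)  # (T,N,1,H,W)
    weights = torch.softmax(scores, dim=0)
    fused = sum(w * c for w, c in zip(weights, candidates))
    return fused
```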

There are often other people and objects in human following scenarios, which may cause the tracker to drift. Besides, conventional visual tracking algorithms lack consideration for occlusion and out-of-view, which occur frequently in human following. To address these problems, inspired by [48], we adopt a hard-negative sample mining strategy and a failure recovery strategy.

Hard-negative Samples Mining

The hard-negative samples contain intra-class pairs and inter-class pairs. In our implementation, intra-class pairs are sampled from different videos that are labelled as person (i.e. different persons from different videos). Similarly, inter-class pairs are sampled from different videos labelled with different classes, such as a person and a car from different videos. All the image pairs are sent to the two branches in Figure 3 to train the FlowTrack++ algorithm. After the hard-negative samples are addressed in the training process, the response map of the Siamese network becomes high-quality: a high response only appears at the desired target, while the responses at other positions (including other humans and objects) are suppressed by the proposed training strategy.
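A simplified sketch of how such positive, intra-class negative, and inter-class negative pairs might be sampled is given below; the dataset layout and the sampling ratios are illustrative assumptions.

```python
import random

def sample_training_pair(videos_by_class):
    """Sample a (template, search, label) pair for Siamese training.

    videos_by_class: dict mapping a class name (e.g. "person") to a list of
    videos, where each video is a list of cropped object images.
    Positive pairs come from the same video; hard negatives come from a
    different person video (intra-class) or from a different class (inter-class).
    """
    mode = random.choices(["positive", "intra_neg", "inter_neg"],
                          weights=[0.5, 0.25, 0.25])[0]
    person_videos = videos_by_class["person"]
    if mode == "positive":
        video = random.choice(person_videos)
        template, search = random.sample(video, 2)   # same person, two frames
        label = 1
    elif mode == "intra_neg":
        v1, v2 = random.sample(person_videos, 2)     # two different persons
        template, search = random.choice(v1), random.choice(v2)
        label = 0
    else:
        other_classes = [c for c in videos_by_class if c != "person"]
        v1 = random.choice(person_videos)
        v2 = random.choice(videos_by_class[random.choice(other_classes)])
        template, search = random.choice(v1), random.choice(v2)
        label = 0
    return template, search, label
```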

Failure Recovering

The human following task frequently encounters occlusion and out-of-view because of the unconstrained environments and drastic camera motions. Conventional trackers lack handling mechanisms for these challenges, which may cause permanent tracking failure. In this paper, a simple yet efficient failure recovery strategy is designed based on the high-quality Siamese network output. Specifically, when a tracking failure is indicated (the highest score of the response map is lower than a threshold), the size of the search region is iteratively increased with a constant step size until the target is re-detected. This module significantly improves the performance under out-of-view and occlusion challenges. The iterative local-to-global search strategy does not cover the entire image in most cases, which is more efficient than the version of SINT [63] that samples over the whole image and adopts a time-consuming multi-scale test strategy. The proposed FlowTrack++ algorithm runs at 40 FPS in human following scenarios. The detailed process of our failure recovery strategy is described in Algorithm 1.

0:  Input: thresholds T_e and T_q to enter and quit failure cases.
0:  Output: target position p_t and tracking score s_t during the sequence.
1:  set failed = false
2:  perform normal tracking in the first frame, get the position p_1 and the score s_1.
3:  repeat
4:     if s_{t-1} < T_e then
5:        set failed = true
6:     else
7:        if s_{t-1} > T_q then
8:           set failed = false
9:        end if
10:     end if
11:     if failed = true then
12:        increase the search region by the iterative local-to-global strategy, perform tracking with this larger region, get the position p_t and the score s_t.
13:     else
14:        perform normal tracking, get the position p_t and the score s_t.
15:     end if
16:  until end of video sequence.
Algorithm 1 Algorithm for recovering from tracking failure
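A minimal Python sketch of Algorithm 1 is given below; the `tracker.track` interface, the threshold values, and the search-region growth step are hypothetical placeholders.

```python
def track_with_recovery(frames, tracker, t_enter=0.3, t_quit=0.6,
                        scale_step=0.5, max_scale=4.0):
    """Iterative local-to-global search when the tracking score drops."""
    failed = False
    position, score = tracker.track(frames[0], search_scale=1.0)
    results = [(position, score)]
    for frame in frames[1:]:
        # Update the failure flag from the previous score (Algorithm 1, lines 4-10).
        if score < t_enter:
            failed = True
        elif score > t_quit:
            failed = False
        if failed:
            # Grow the search region until the target is re-detected
            # or the region reaches its maximum size (lines 11-12).
            scale = 1.0
            while True:
                position, score = tracker.track(frame, search_scale=scale)
                if score > t_quit or scale >= max_scale:
                    break
                scale += scale_step
        else:
            position, score = tracker.track(frame, search_scale=1.0)  # normal tracking
        results.append((position, score))
    return results
```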

II-E Unified Controller

Fig. 4: The control objective is to keep the human bounding box near the center of the view and keep the half height of the box near a pre-defined constant.

In this paper, the controller is designed to keep the human bounding box near the center of the view and keep the half height of the tracking box near a pre-defined constant, as illustrated in Figure 4. The visual servo formulation is adopted to derive our controller:

$\dot{\mathbf{P}} = -\mathbf{v}_{c} - \boldsymbol{\omega}_{c} \times \mathbf{P}$   (1)

where $\mathbf{P}$ is the coordinate of the human target in the camera coordinate system, and $\mathbf{v}_{c}$ and $\boldsymbol{\omega}_{c}$ are the linear and angular velocity of the camera expressed in the camera frame, respectively. $\mathbf{v}_{c}$ and $\boldsymbol{\omega}_{c}$ can be obtained as follows:

(2)
(3)

where the involved quantities are the rotation matrix from the robot frame to the camera frame, the linear velocity of the mobile robot, the pan and tilt angular velocities of the camera, and the robot angular velocity.

Substituting (2) and (3) into (1), we can rewrite equation (1) as follows:

(4)

The control objective is to keep the human bounding box near the center of the view and keep the half height of the box near a pre-defined constant. To this end, three image errors at the pixel level are defined as:

(5)

where the involved quantities are the center point of the human bounding box, the middle point of its top border, the center point of the captured image, and a pre-defined constant equal to half of the desired height of the tracking box. The relationship between the coordinate systems is:

(6)

Since the human and the mobile robot move on flat ground, we can assume that the height of the human remains constant. According to equations (5) and (6) and the pin-hole camera model (in which the camera intrinsic parameters map a point in the camera coordinate system to its corresponding image coordinate), the following relationship can be obtained:

(7)
(13)
(14)
(15)
(16)

Differentiating both sides of equation (5) with respect to time and taking (7) into consideration, we obtain equation (8).

(8)

where the two additional quantities are the y-coordinate of the center point of the human body and the y-coordinate of the middle point of the top border, as shown in Figure 4. The meanings of the other symbols are given in equation (9).

(9)

To make the errors converge to zero, the following equations should be satisfied:

(10)

where the three coefficients are positive gains. Substituting (8) and (9) into (10), the robot linear velocity and the pan and tilt angular velocities of the camera can be obtained from equations (13), (14) and (15) respectively. Besides, the strategy for controlling the robot angular velocity is given by equation (16).
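Since the exact forms of equations (5) and (8)-(16) are not fully recoverable here, the sketch below only illustrates the error-driven structure described in the text: pixel-level errors are computed from the tracked box and driven to zero with proportional gains as in equation (10); all names and gain values are assumptions.

```python
import numpy as np

def image_errors_and_rates(box, image_size, h_star, gains=(1.0, 1.0, 1.0)):
    """Pixel-level errors from the tracked box and their desired decay rates.

    box: (x, y, w, h) of the human bounding box in pixels.
    image_size: (width, height) of the captured image.
    h_star: desired half height of the box in pixels.
    The returned desired rates (de/dt = -k * e, in the spirit of equation (10))
    are what the paper's equations (13)-(16) then convert into the robot linear
    velocity and the pan/tilt/robot angular velocities.
    """
    x, y, w, h = box
    u_m, v_m = x + w / 2.0, y + h / 2.0                  # box center
    u_0, v_0 = image_size[0] / 2.0, image_size[1] / 2.0  # image center
    errors = np.array([u_m - u_0,          # horizontal offset of the box center
                       v_m - v_0,          # vertical offset of the box center
                       h / 2.0 - h_star])  # half-height error
    rates = -np.asarray(gains) * errors
    return errors, rates
```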

III Experiments

III-A Simulation Results of Control Law

In this section, simulations are performed to verify the effectiveness of the proposed controller. A rectangle is utilized to simulate the tracked human, and the control objective is to keep the bounding box near the center of the view and keep the half height of the box near a pre-defined constant. In our simulation, the human moves along a circle in the world coordinate system, which is described by the following equation:

(17)

The robot starts from a given initial position in the world coordinate system, and the desired half height of the box is set to 100 pixels. As shown in Figure 5, the proposed controller results in a smooth trajectory. Because the target moves along a circle, the horizontal image error changes as a sinusoidal wave. The height of the human in the image changes little, so the corresponding error changes only slightly. As the human moves, the mobile robot moves forward and backward to adjust the size of the rectangle in the image, and its linear velocity fluctuates slightly around 0. Due to the motion of the human, the pan, tilt and robot angular velocities keep changing. Besides, the half height of the box stays around 100 pixels under the designed controller. From the above results, one can conclude that the control law is effective.

Fig. 5: The simulation results when the target moves along a circle, including the curves of the image errors, the robot linear velocity, the pan, tilt and robot angular velocities, and the half height of the box.

III-B Human Following Results

In this section, the effectiveness of the human following system is verified by indoor and outdoor experiments on a real-world robot platform. The indoor experiment is conducted in our laboratory, and the outdoor experiment is performed outside the building of the Institute of Automation, Chinese Academy of Sciences. The height of the mounted camera is 0.7 m, and the height of the person is about 1.8 m, so the two corresponding constants in equation (8) are 5 and 0.91, respectively. We compare three different tracking algorithms with our designed controller, including the proposed FlowTrack++, ECO [43] and GOTURN [64].

Fig. 6: The results of indoor experiments. (a)(b) Curves of the image errors. (c) Curves of the tracked box height.
Fig. 7: The results of outdoor experiments. (a)(b) Curves of the image errors. (c) Curves of the tracked box height.

In the indoor experiments, the desired box height is set to 500 pixels. Figure 6 shows the influence of the different tracking algorithms on the controller. When adopting the proposed FlowTrack++, the image error initially increases to 211 pixels due to the relative motion between the human and the robot, and then decreases to 0 quickly. Besides, the tracked human height stays around 500 pixels, while with the other trackers it fluctuates violently. From these results, it can be seen that the FlowTrack++ tracking algorithm achieves better performance on the human following task.

Fig. 8: Human tracking results in indoor experiments. (a) Pose changes challenge. (b) Scale challenge.

In the outdoor experiments shown in Figure 7, the desired box height is set to 300 pixels. With the FlowTrack++ tracking algorithm, the image errors can be kept around 0, and the tracked box height fluctuates stably around 300 pixels. Compared with the FlowTrack++ algorithm, both the ECO and GOTURN trackers result in larger tracking errors and unstable following.

Fig. 9: Human tracking results in outdoor experiments. (a) Occlusion and distractor challenge. (b) Background changes challenge.

Figure 8 and Figure 9 further visualize the tracking results in the human following task. Due to the flow aggregation and box regression modules, FlowTrack++ can handle the pose change and scale challenges in Figure 8. In Figure 9, the targets undergo occlusion, distractors and background changes. Since the hard-negative sample mining and failure recovery strategies are adopted in the FlowTrack++ algorithm, these challenges can be handled.

IV Conclusion

In this paper, we propose a human following system on a mobile robot with a monocular pan-tilt camera, which mainly consists of a visual tracker and a motion controller. In the visual tracking algorithm, both Siamese networks and optical flow information are exploited to locate and regress the human simultaneously. Besides, a motion controller is derived to keep the target in the field of view and perform following simultaneously, without the need for depth sensors. In experiments, the overall system obtains accurate and robust following results both in simulations and on a real robot platform. Future work will explore multi-object tracking and re-identification for the human following task.

References

  • [1] W. Luo, P. Sun, F. Zhong, W. Liu, T. Zhang, and Y. Wang, “End-to-end active object tracking and its real-world deployment via reinforcement learning,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
  • [2] Q. Wang, W. Zou, D. Xu, and Z. Zhu, “Motion control in saccade and smooth pursuit for bionic eye based on three-dimensional coordinates,” Journal of Bionic Engineering, vol. 14, no. 2, pp. 336–347, 2017.
  • [3] S. Verma, Y. H. Eng, H. X. Kong, H. Andersen, M. Meghjani, W. K. Leong, X. Shen, C. Zhang, M. H. Ang, and D. Rus, “Vehicle detection, tracking and behavior analysis in urban driving environments using road context,” in 2018 IEEE International Conference on Robotics and Automation.    IEEE, 2018, pp. 1413–1420.
  • [4] A. Buyval, A. Gabdullin, R. Mustafin, and I. Shimchik, “Realtime vehicle and pedestrian tracking for didi udacity self-driving car challenge,” in 2018 IEEE International Conference on Robotics and Automation.    IEEE, 2018, pp. 2064–2069.
  • [5] N. Hirose, R. Tajima, and K. Sukigara, “Personal robot assisting transportation to support active human life—human-following method based on model predictive control for adjacency without collision,” in IEEE International Conference on Mechatronics.    IEEE, 2015, pp. 76–81.
  • [6] R. Zhang, Z. Zhu, P. Li, R. Wu, C. Guo, G. Huang, and H. Xia, “Exploiting offset-guided network for pose estimation and tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2019.
  • [7] P. Li, J. Zhang, Z. Zhu, Y. Li, L. Jiang, and G. Huang, “State-aware re-identification feature for multi-target multi-camera tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2019.
  • [8] A. Bajcsy, S. L. Herbert, D. Fridovich-Keil, J. F. Fisac, S. Deglurkar, A. D. Dragan, and C. J. Tomlin, “A scalable framework for real-time multi-robot, multi-human collision avoidance,” in 2019 International Conference on Robotics and Automation.    IEEE, 2019, pp. 936–943.
  • [9] N. Bellotto and H. Hu, “Multisensor-based human detection and tracking for mobile service robots,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 1, pp. 167–181, 2009.
  • [10] N. Yao, E. Anaya, Q. Tao, S. Cho, H. Zheng, and F. Zhang, “Monocular vision-based human following on miniature robotic blimp,” in IEEE International Conference on Robotics and Automation.    IEEE, 2017, pp. 3244–3249.
  • [11] H.-X. Ma, W. Zou, Z. Zhu, C. Zhang, and Z.-B. Kang, “Selection of observation position and orientation in visual servoing with eye-in-vehicle configuration for manipulator,” International Journal of Automation and Computing, pp. 1–14.
  • [12] Z. Kang, W. Zou, H. Ma, and Z. Zhu, “Adaptive trajectory tracking of wheeled mobile robots based on a fish-eye camera,” International Journal of Control, Automation and Systems, vol. 17, no. 9, pp. 2297–2309, 2019.
  • [13] P. Liang, Y. Wu, H. Lu, L. Wang, C. Liao, and H. Ling, “Planar object tracking in the wild: A benchmark,” in 2018 IEEE International Conference on Robotics and Automation.    IEEE, 2018, pp. 651–658.
  • [14] Y. Wu, J. Lim, and M. H. Yang, “Online object tracking: A benchmark,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 2411–2418.
  • [15] L. Dressel and M. J. Kochenderfer, “Hunting drones with other drones: Tracking a moving radio target,” in 2019 International Conference on Robotics and Automation.    IEEE, 2019, pp. 1905–1912.
  • [16] Y. Wu and J. Lim, “Object tracking benchmark,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1834–1848, 2015.
  • [17] M. Kristan, A. Leonardis, J. Matas, M. Felsberg, R. Pflugfelder, L. Cehovin Zajc, T. Vojir, G. Hager, A. Lukezic, A. Eldesokey, and G. Fernandez, “The visual object tracking vot2017 challenge results,” in Proceedings of the The IEEE International Conference on Computer Vision Workshop, Oct 2017.
  • [18] M. Kristan, A. Leonardis, J. Matas, M. Felsberg, R. Pflugfelder, L. Č. Zajc, T. Vojír̃, G. Bhat, A. Lukežič, A. Eldesokey et al., “The sixth visual object tracking vot2018 challenge results,” in European Conference on Computer Vision.    Springer, Cham, 2018, pp. 3–53.
  • [19] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Proceedings of the Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
  • [20] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
  • [21] Y. Li, X. Chen, Z. Zhu, L. Xie, G. Huang, D. Du, and X. Wang, “Attention-guided unified network for panoptic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 7026–7035.
  • [22] T. Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [23] J. Zhang, Z. Zhu, W. Zou, P. Li, Y. Li, H. Su, and G. Huang, “Fastpose: Towards real-time pose estimation and tracking via scale-normalized multi-task networks,” arXiv preprint arXiv:1908.05593, 2019.
  • [24] J. Zhu, Z. Zhu, and W. Zou, “End-to-end video-level representation learning for action recognition,” in 2018 24th International Conference on Pattern Recognition (ICPR).    IEEE, 2018, pp. 645–650.
  • [25] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, “Deepface: Closing the gap to human-level performance in face verification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1701–1708.
  • [26] F. Schroff, D. Kalenichenko, and J. Philbin, “Facenet: A unified embedding for face recognition and clustering,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
  • [27] J. Zhu, W. Zou, and Z. Zhu, “Two-stream gated fusion convnets for action recognition,” in 2018 24th International Conference on Pattern Recognition (ICPR).    IEEE, 2018, pp. 597–602.
  • [28] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: towards real-time object detection with region proposal networks,” in Proceedings of the Advances in Neural Information Processing Systems, 2015, pp. 91–99.
  • [29] J. Zhu, W. Zou, L. Xu, Y. Hu, Z. Zhu, M. Chang, J. Huang, G. Huang, and D. Du, “Action machine: Rethinking action recognition in trimmed videos,” arXiv preprint arXiv:1812.05770, 2018.
  • [30] J. Zhu, W. Zou, Z. Zhu, and Y. Hu, “Convolutional relation network for skeleton-based action recognition,” Neurocomputing, 2019.
  • [31] C. Wang, H. K. Galoogahi, C.-H. Lin, and S. Lucey, “Deep-lk for efficient adaptive object tracking,” in 2018 IEEE International Conference on Robotics and Automation.    IEEE, 2018, pp. 627–634.
  • [32] N. Wang and D.-Y. Yeung, “Learning a deep compact image representation for visual tracking,” in Proceedings of the Advances in Neural Information Processing Systems, 2013, pp. 809–817.
  • [33] H. Li, Y. Li, and F. Porikli, “Deeptrack: Learning discriminative feature representations online for robust visual tracking,” IEEE Transactions on Image Processing, vol. 25, no. 4, pp. 1834–1848, 2016.
  • [34] M. Danelljan, G. Hager, F. S. Khan, and M. Felsberg, “Convolutional features for correlation filter based visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision Workshop, 2015, pp. 621–629.
  • [35] C. Ma, J.-B. Huang, X. Yang, and M.-H. Yang, “Hierarchical convolutional features for visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision, December 2015.
  • [36] Y. Qi, S. Zhang, L. Qin, H. Yao, Q. Huang, J. Lim, and M.-H. Yang, “Hedged deep tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 2016.
  • [37] Y. Song, C. Ma, X. Wu, L. Gong, L. Bao, W. Zuo, C. Shen, L. Rynson, and M.-H. Yang, “Vital: Visual tracking via adversarial learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  • [38] J. Valmadre, L. Bertinetto, J. F. Henriques, A. Vedaldi, and P. H. S. Torr, “End-to-end representation learning for correlation filter based tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [39] Y. Song, C. Ma, L. Gong, J. Zhang, R. Lau, and M. H. Yang, “Crest: Convolutional residual learning for visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision, 2017.
  • [40] H. Fan and H. Ling, “Parallel tracking and verifying: A framework for real-time and high accuracy visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision, 2017.
  • [41] H. Nam and B. Han, “Learning multi-domain convolutional neural networks for visual tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 2016.
  • [42] M. Danelljan, A. Robinson, F. S. Khan, and M. Felsberg, “Beyond correlation filters: Learning continuous convolution operators for visual tracking,” in Proceedings of the European Conference on Computer Vision, 2016, pp. 472–488.
  • [43] M. Danelljan, G. Bhat, F. Shahbaz Khan, and M. Felsberg, “Eco: Efficient convolution operators for tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, July 2017.
  • [44] Z. Zhu, W. Wu, W. Zou, and J. Yan, “End-to-end flow correlation tracking with spatial-temporal attention,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.    IEEE, 2018.
  • [45] Z. Zhu, G. Huang, W. Zou, D. Du, and C. Huang, “Uct: Learning unified convolutional networks for real-time visual tracking,” in Proceedings of the IEEE International Conference on Computer Vision Workshops, Oct 2017.
  • [46] L. Bertinetto, J. Valmadre, J. F. Henriques, A. Vedaldi, and P. H. S. Torr, “Fully-convolutional siamese networks for object tracking,” in Proceedings of the European Conference on Computer Vision Workshop, 2016, pp. 850–865.
  • [47] B. Li, W. Wu, Z. Zhu, and J. Yan, “High performance visual tracking with siamese region proposal network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  • [48] Z. Zhu, Q. Wang, B. Li, W. Wu, J. Yan, and W. Hu, “Distractor-aware siamese networks for visual object tracking,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 101–117.
  • [49] S. Bai, Z. He, T.-B. Xu, Z. Zhu, Y. Dong, and H. Bai, “Multi-hierarchical independent correlation filters for visual tracking,” arXiv preprint arXiv:1811.10302, 2018.
  • [50] J. Huang, W. Zou, J. Zhu, and Z. Zhu, “Optical flow based real-time moving object detection in unconstrained scenes,” arXiv preprint arXiv:1807.04890, 2018.
  • [51] J. Huang, W. Zou, Z. Zhu, and J. Zhu, “An efficient optical flow based motion detection method for non-stationary scenes,” arXiv preprint arXiv:1811.08290, 2018.
  • [52] J. Razlaw, J. Quenzel, and S. Behnke, “Detection and tracking of small objects in sparse 3d laser range data,” in 2019 International Conference on Robotics and Automation.    IEEE, 2019.
  • [53] A. V. Gulalkari, G. Hoang, P. S. Pratama, H. K. Kim, S. B. Kim, and B. H. Jun, “Object following control of six-legged robot using kinect camera,” in International Conference on Advances in Computing, Communications and Informatics.    IEEE, 2014, pp. 758–764.
  • [54] Z. Zhu, W. Zou, Q. Wang, and F. Zhang, “Std: A stereo tracking dataset for evaluating binocular tracking algorithms,” in 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO).    IEEE, 2016, pp. 2215–2220.
  • [55] A. V. Gulalkari, P. S. Pratama, G. Hoang, D. H. Kim, B. H. Jun, and S. B. Kim, “Object tracking and following six-legged robot system using kinect camera based on kalman filter and backstepping controller,” Journal of Mechanical Science and Technology, vol. 29, no. 12, pp. 5425–5436, 2015.
  • [56] A. Mohamed, C. Yang, and A. Cangelosi, “Stereo vision based object tracking control for a movable robot head,” IFAC-PapersOnLine, vol. 49, no. 5, pp. 155–162, 2016.
  • [57] M. Gupta, S. Kumar, L. Behera, and V. K. Subramanian, “A novel vision-based tracking algorithm for a human-following mobile robot,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 47, no. 7, pp. 1415–1427, 2017.
  • [58] Z. Zhu, W. Zou, Q. Wang, and F. Zhang, “A velocity compensation visual servo method for oculomotor control of bionic eyes,” International Journal of Robotics and Automation, vol. 33, no. 1, 2018.
  • [59] M. Kobilarov, G. Sukhatme, J. Hyams, and P. Batavia, “People tracking and following with mobile robot using an omnidirectional camera and a laser,” in IEEE International Conference on Robotics and Automation.    IEEE, 2006, pp. 557–562.
  • [60] M. Wang, Y. Liu, D. Su, Y. Liao, L. Shi, and J. Xu, “Accurate and real-time 3d tracking for the following robots by fusing vision and ultra-sonar information,” IEEE/ASME Transactions on Mechatronics, 2018.
  • [61] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 779–788.
  • [62] A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. van der Smagt, D. Cremers, and T. Brox, “Flownet: Learning optical flow with convolutional networks,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 2758–2766.
  • [63] R. Tao, E. Gavves, and A. W. M. Smeulders, “Siamese instance search for tracking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1420–1429.
  • [64] D. Held, S. Thrun, and S. Savarese, “Learning to track at 100 fps with deep regression networks,” in Proceedings of the European Conference on Computer Vision, 2016, pp. 749–765.