One-Click Annotation with Guided Hierarchical Object Detection
The growth in data collection has made data annotation a valuable task in the contemporary world. This paper presents a new methodology for quickly annotating data using click supervision and hierarchical object detection. The proposed work is semi-automatic in nature: the annotation task is split between a human and a neural network. We show that our method of annotation reduces the time, cost and mental stress on a human annotator. We also show that our method outperforms the current approach under varying circumstances such as the number of objects, object size and different datasets. Our approach further proposes a new way of using object detectors that makes them suitable for the data annotation task. Experiments on the PASCAL VOC dataset reveal that annotations created with our approach achieve a mAP of 0.995 and a recall of 0.903. Our approach shows an overall improvement of 8.5% and 18.6% in mean average precision and recall on KITTI, and of 69.6% and 36% on CITYSCAPES. The proposed framework is 3-4 times faster than the standard annotation method.
Annotated data is an extremely valuable asset in both academia and industry, and the growing availability of raw data has given the annotation task greater importance. A lot of deep learning research has focused on improving the generalizability of deep neural networks with only a small set of training samples, which is known as few-shot learning [23, 8, 10, 24]. In contrast, little attention has been paid to improving the annotation process itself so that more labelled data is available for models to generalize well.
The current annotation strategy for object detection involves clicking on the top-left and bottom-right corners of each object, but this task puts the user under heavy mental stress, and a large amount of time is consumed in drawing an extremely tight bounding box. The same has been shown in multiple research papers [12, 17, 16]. To avoid both the mental stress and the time consumption, we propose a semi-automatic approach which combines the best of both worlds, i.e., the accuracy of the human eye and the speed of neural networks.
Current object detectors cannot be used for annotating data as they have low recall and low mean average precision scores. Low precision leads to unreliable results: the detector may misclassify an object, or make a completely wrong prediction in both classification and localization. Low recall suffers from the same issue, but the problem becomes severe because the detector classifies an object as background. When a new network is trained with such annotations, it will not be able to generalize well for the incorrectly classified, incorrectly localized and missed objects.
Our approach is robust to these issues. Being semi-automatic, the proposed framework only acquires a partial annotation from the annotator, who clicks on the object centers. At the same time, the framework predicts intermediate detections with the object detector. These detections are then refined using the human-annotated object centers, removing incorrect classifications and localizations. When the detector misses an object at an annotated center, that center is used to create object proposals, which are fed back into the detector as input. This process iterates hierarchically until the object is detected.
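The refinement step described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the `(box, label, score)` tuples and the `point_in_box` helper are our own simplifications, and the hierarchical search triggered by missed clicks is only indicated by a comment.

```python
def point_in_box(pt, box):
    """box = (x1, y1, x2, y2); pt = (x, y)."""
    x, y = pt
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2

def refine_with_clicks(detections, clicks):
    """Filter and relabel detector output using human-clicked centers.

    detections: list of (box, label, score) predicted by the detector.
    clicks:     list of ((x, y), label) from the annotator.
    Returns (kept annotations, clicks the detector missed).
    """
    kept, missed = [], []
    used = [False] * len(detections)
    for pt, label in clicks:
        hit = None
        for i, (box, _, _) in enumerate(detections):
            if not used[i] and point_in_box(pt, box):
                hit = i
                break
        if hit is None:
            missed.append((pt, label))        # would trigger hierarchical search
        else:
            used[hit] = True
            box, _, score = detections[hit]
            kept.append((box, label, score))  # trust the human label
    # Detections with no click inside are discarded as false positives.
    return kept, missed
```

Clicks that land inside no predicted box are exactly the cases handed over to hierarchical object detection in Section 3.2.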
The remainder of the paper is organized as follows: Section 2 highlights the related work, Section 3 describes the proposed work, experimental results are discussed in Section 4 and the conclusion is drawn in Section 5. The references for this work are listed after the conclusion.
2 Related work
Research in deep learning has focused on using unlabelled or partially labelled data by developing models in the semi-supervised, unsupervised or weakly supervised learning paradigms, to reduce the model's dependence on annotated data. In the semi-supervised paradigm, Tan et al.  propose a novel algorithm for large-scale semi-supervised object detection that transfers knowledge from visual and semantic cues. Rhee et al.  develop a semi-supervised object detector which is initially trained on a set of perfectly labelled examples and then uses active learning to batch imperfect and unlabelled samples. Weakly supervised learning has also been in the spotlight lately. Li et al.  use progressive domain adaptation to address model initialization and convergence to local minima, a common issue in the weakly supervised paradigm. Zhang et al.  propose a state-of-the-art weakly supervised model which combines saliency detection and weakly supervised object detection based on self-paced curriculum learning. There has also been work on models that operate in both the weakly supervised and semi-supervised paradigms, such as .
The fundamental problem with all these approaches is again the amount of data required to generalize well. Semi-supervised and weakly supervised approaches barely match the performance of fully supervised object detectors such as Faster R-CNN , Single Shot MultiBox Detector , YOLO9000  and RetinaNet . The availability of data thus remains the easiest route to state-of-the-art performance in deep-learning-based object detection.
Researchers have recognized this and started to develop tools and automated algorithms to annotate data efficiently and reduce human effort, but progress in this direction is scant. Bianco et al.  developed a tool which uses algorithms such as linear interpolation, template matching and a supervised object detector depending on the mode of operation (manual, semi-automatic or fully automatic), aiding the annotator to speed up annotation and allowing deep networks to learn from considerably larger annotated data. Bouquet et al.  annotate videos by propagating annotations across frames using an offline tracker followed by dynamic programming and a distance transform to penalize displacement between frames. Konyushkova et al.  show a different perspective on human-computer interaction for data annotation by choosing the best sequence of actions to annotate images in the least amount of time; this is learnt from previous experience, using Q-learning to approximate an optimal policy. Similar interactive annotation methods have also been explored for semantic segmentation [2, 21, 3, 5, 11, 22].
On the other hand, recent research by Papadopoulos et al.  explores using object centers as supervision for multiple-instance-learning frameworks in the visual detection task, which can make use of the data available on the Internet. Papadopoulos et al.  have also worked on creating bounding boxes from four clicks on the extreme left, top, right and bottom points of an object, a more intuitive method that results in 7 times faster annotation.
3 Proposed Work
The proposed work is divided into two subsections. The first discusses the method followed to obtain the object centers, which requires minimal user interaction while capturing maximum information; it also details the complete workflow of every step the annotator takes to annotate the data. The second explains the methodology followed to achieve state-of-the-art performance in the click-guided object detection task.
3.1 Mechanism to capture annotations
This section explains the steps followed to acquire a one-click annotation, i.e., the centers of the objects in the image. The first step is to display the input image to the annotator, who selects the class to be annotated as shown in Fig 1. Once the class is selected, the user clicks on the center point of each object belonging to that class as shown in Fig 2. The object centers are stored along with the associated class information, and instant feedback is provided by a red dot, making the user aware of the clicks made. The user then switches to the next class to be annotated. These steps are repeated until all object centers are captured. Once the process is finished, the annotation information is passed to the network and the annotation results are returned, which the annotator can use to improve clicking accuracy as shown in Fig 3.
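The capture workflow above can be modelled by a small session object. The class and method names here are hypothetical, standing in for the actual annotation GUI; only the stored data (center, class) pairs correspond to what the framework consumes.

```python
class ClickSession:
    """Collects one-click annotations as ((x, y), class_label) pairs."""

    def __init__(self):
        self.current_class = None
        self.clicks = []  # list of ((x, y), class_label)

    def select_class(self, label):
        """Step 1: the annotator picks the class to annotate next."""
        self.current_class = label

    def on_click(self, x, y):
        """Step 2: a click on an object center is recorded with the class.

        In the GUI a red dot would be drawn at (x, y) as instant feedback.
        """
        if self.current_class is None:
            raise ValueError("select a class before clicking")
        self.clicks.append(((x, y), self.current_class))
```

After the annotator cycles through all classes, `session.clicks` is the partial annotation handed to the detector.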
3.2 Hierarchical Object Detection
Hierarchical object detection consists of a base detector trained on a dataset with the same labels as the data to be annotated; the base detector used in our experiments is YOLO9000, trained to a loss of 1.3332. The working pipeline of the detector can be broken down into a sequence of steps as seen in Figs 4 and 5, and the pseudocode for the same can be seen in Algorithm 1.
The framework uses the click data collected from the annotator, consisting of object centers and the class each belongs to. These object centers and their associated class information are used to validate the results obtained from the standard object detector, and depending on its accuracy, hierarchical object detection is performed. If the standard detector successfully detects all objects in the image, the predicted object centers are replaced with the annotated object centers, and if an object is classified wrongly, its class is replaced with the acquired class information. Similarly, any background region classified as an object is filtered out with the help of the human annotations. Only when the detector fails to detect objects in the image does hierarchical object detection come into play; this keeps the time consumption of the annotation process low. Hierarchical object detection plays an important role in improving the recall of the model, as the human annotations alone can only improve the mean average precision, not the recall.
The framework first finds the locations where the standard object detector failed to detect an object by comparing the human and model annotations. Object proposals are created at these locations using the width and height of the anchor boxes located in the same grid cell. These proposals are then fed back into the object detector. The process of generating object proposals is applied repeatedly until all missed objects are detected, which makes the detection task hierarchical in nature. At any level, the best detection among the object proposals is chosen based on its probability, and the result is propagated back to the higher level of the hierarchy. The probability score of a detection transitioning from a lower to a higher level is multiplied by the confidence value at the higher level; the intuition behind this step is to let the annotator know how difficult the detector found the object, which can be used for post-processing the coarse annotations. A particular branch in the object proposal tree is expanded only when the resulting detection probability from that branch would be higher than from the neighbouring branches, as shown in Fig 6; this reduces computation time and memory consumption, removing the framework's dependency on high-compute devices.
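The proposal step can be sketched as follows. Here `detect` is a hypothetical stand-in for a forward pass of the detector on a crop, returning `(detection, score)` or `(None, 0.0)`; the anchor shapes are given as (width, height) pairs. The full framework would additionally recurse into the most promising crop and multiply child scores by the parent confidence, as described above, which this single-level sketch only notes in a comment.

```python
def proposals_at(center, anchors, image_size):
    """Anchor-shaped crop windows centered on a missed click."""
    cx, cy = center
    W, H = image_size
    boxes = []
    for w, h in anchors:
        x1, y1 = max(0, cx - w // 2), max(0, cy - h // 2)
        x2, y2 = min(W, cx + w // 2), min(H, cy + h // 2)
        boxes.append((x1, y1, x2, y2))
    return boxes

def best_detection(center, anchors, image_size, detect):
    """Run the detector on every proposal and keep the highest-scoring hit.

    In the full framework a miss would recurse: new proposals are generated
    inside the most promising crop, and the child score is multiplied by
    the confidence at the parent level before comparison.
    """
    best, best_score = None, 0.0
    for box in proposals_at(center, anchors, image_size):
        det, score = detect(box)
        if det is not None and score > best_score:
            best, best_score = det, score
    return best, best_score
```

The greedy choice of the highest-scoring branch is what keeps the proposal tree narrow and the computation cheap.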
4 Experimental Results
This section analyzes multiple aspects of both segments discussed in the proposed work section, i.e., the one-click method and the hierarchical object detection.
4.1 One Click Annotation
This section briefly discusses how our annotation approach differs from the standard approach in terms of computational power, object scale, number of objects and different datasets. Feedback from users with little domain knowledge of deep learning and data annotation reported the following about our approach:
Our approach saves an incredible amount of time.
Our approach is easier to use when there are multiple objects to be annotated in an image.
Our approach causes much less mental stress when objects are far from the point of capture, i.e., when objects appear extremely small.
4.1.1 Computational Power
Table 1 shows the average time taken to annotate an image on GPU and CPU using our approach, alongside the time taken using the standard approach. The results clearly show that our approach is advantageous on both CPU and GPU, as it is 3-4 times faster than the standard annotation process. Table 2 shows the time consumed by our approach as the type of GPU changes. The results show that a modest GPU is enough to annotate the data, with only a minute difference in annotation time.
| Method | NVIDIA TITAN X | GTX 1080 Ti | GTX 1050 Ti |
4.1.2 Object Scale
Table 3 compares our approach with the standard approach as object size varies. The results clearly indicate that our approach is the better option when annotating objects at a smaller scale, reducing both the annotation time and the mental load the annotator has to carry.
4.1.3 Number of Objects
This section analyses the effectiveness of our approach as the number of objects in an image increases; the results can be viewed in Table 4. The table shows that the time difference grows rapidly as the number of objects in the image increases.
4.1.4 Different Datasets
Our approach is applicable to datasets such as PASCAL VOC , KITTI  and CITYSCAPES . Table 5 shows the consistency of the framework over multiple datasets, proving that our approach is robust to changes in the distribution of the data.
4.2 Hierarchical Object Detection
Hierarchical object detection also depends on multiple parameters that influence its accuracy and computational time; this section gives a brief overview of these parameters.
4.2.1 Number of Anchor Boxes
Anchor boxes play a vital role in detecting objects missed in the first iteration of detection, but at the same time they take a heavy toll on computational time. Table 6 shows the trade-off between accuracy and time as the number of anchor boxes varies. The results show that with more anchor boxes the mean average precision increases, but so does the annotation time.
| Number of Anchors | Mean average precision | Recall | Time taken (seconds) |
4.2.2 Hierarchy count
The hierarchy count is the number of iterations the model runs over the anchor-box-based object proposals; Table 7 shows the accuracy vs. computational time trade-off. The results show that as the hierarchy grows deeper, accuracy improves only marginally while the time consumption keeps increasing.
| Hierarchy count | Mean average precision | Recall | Time taken (seconds) |
4.3 Detection Results
| Data set | Hierarchical detector: mean average precision | Hierarchical detector: recall | Standard detector: mean average precision | Standard detector: recall |
Table 8 describes the performance of the hierarchical object detector on multiple datasets. It can be observed that the hierarchical object detector boosts the performance of annotation in both mean average precision and recall. The detection results for different scenarios are listed below, comparing the detections of the standard detector against those of our approach.
4.3.1 Correctly labelled and localized
This is the case where the standard object detector performs optimally, detecting all objects of interest in the image. The human-annotated object centers come in handy here: the precise human-clicked center coordinates are used instead of the network-predicted centers to re-localize the detected objects. The resulting change in the bounding boxes can be viewed in Figs 7 and 8. In Fig 8 the objects are much better centered than in Fig 7; the offset in Fig 7 arises from the loss of spatial information incurred when the image is squeezed into a smaller grid, trading the exact object location for denser features that aid accurate classification.
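The re-localization step itself is simple: keep the predicted box dimensions and shift the box onto the human-clicked center. A minimal sketch, with boxes as `(x1, y1, x2, y2)` tuples (our own convention, not the framework's internal representation):

```python
def recenter_box(box, click):
    """Shift a predicted box so it is centered on the human click,
    preserving the predicted width and height."""
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    cx, cy = click
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
```

This trusts the human for position and the network for extent, which is exactly the division of labour the framework relies on.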
4.3.2 Incorrectly labelled but correctly localized
There are cases where the network wrongly labels objects in the image, as seen in Fig 9. Such spurious detections are found by comparing the predicted object labels with the human-annotated labels, and are corrected using the human label and object center; the rest of the predicted information remains the same. The result after this correction can be seen in Fig 10.
4.3.3 Incorrectly labelled and localized
Such detections are popularly termed false positives, and they play an important role in determining mean average precision. The object detector has a low mean average precision due to incorrect classification and localization of objects in the image; for annotated data, a very high mean average precision is desired. Figs 11 and 12 show the results after removing these spurious detections.
4.3.4 Not labelled and localized
This subsection covers the case of a missed object, where hierarchical object detection comes into play. The detections from the standard object detector can be seen in Fig 13, and the results from hierarchical object detection in Fig 14.
5 Conclusion and Future work
The proposed framework provides a novel solution to the problem of annotating data with the least human effort, both mental and physical, by harnessing maximum information with minimum interaction with the computer. The framework achieves state-of-the-art performance on the guided object detection task, attaining mean average precision scores of 99.95, 99.7 and 99.23 and recall scores of 90.38, 80.1 and 45.02 on PASCAL VOC, KITTI and CITYSCAPES respectively. The framework reduces annotation time, cost and mental stress; it also removes spurious human object-center annotations, reducing the time required to refine the annotation, and has proven to be 3-4 times faster than the standard annotation procedure. A lot of unexploited potential remains in the framework for future research. One issue is that the network finds it difficult to annotate similar objects placed close together. In the first image of Fig 15 we can observe that although both parrots were clicked by the annotator, one parrot masks the presence of its clicked neighbour because it is more prominent, so a single bounding box is created around the prominent one, leaving the other clicked object center outside the box and yielding only one detection. The same trend can be observed in Fig 15 for the dogs in the second image and the cows in the third.
Another direction for future work lies in improving the object detection by allowing the user to click anywhere on the object, as there are many situations where an unwanted object occludes the visual and spatial characteristics of the object of interest, i.e., an unwanted object may occlude its center point, resulting in poor-quality anchor boxes being used as object proposals. An example can be seen in Fig 16, where an object of interest is occluded by an uninteresting object.
-  S. Bianco, G. Ciocca, P. Napoletano, and R. Schettini. An interactive tool for manual, semi-automatic and automatic video annotation. Computer Vision and Image Understanding, 131:88–99, 2015.
-  Y. Y. Boykov and M.-P. Jolly. Interactive graph cuts for optimal boundary & region segmentation of objects in nd images. In Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on, volume 1, pages 105–112. IEEE, 2001.
-  L. Castrejón, K. Kundu, R. Urtasun, and S. Fidler. Annotating object instances with a polygon-rnn. In CVPR, volume 1, page 2, 2017.
-  M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3213–3223, 2016.
-  S. Dutt Jain and K. Grauman. Predicting sufficient annotation strength for interactive foreground segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pages 1313–1320, 2013.
-  M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. International journal of computer vision, 88(2):303–338, 2010.
-  L. Fagot-Bouquet, J. Rabarisoa, and Q. C. Pham. Fast and accurate video annotation using dense motion hypotheses. In Image Processing (ICIP), 2014 IEEE International Conference on, pages 3122–3126. IEEE, 2014.
-  V. Garcia and J. Bruna. Few-shot learning with graph neural networks. arXiv preprint arXiv:1711.04043, 2017.
-  A. Geiger, P. Lenz, C. Stiller, and R. Urtasun. Vision meets robotics: The kitti dataset. The International Journal of Robotics Research, 32(11):1231–1237, 2013.
-  N. Hilliard, L. Phillips, S. Howland, A. Yankov, C. D. Corley, and N. O. Hodas. Few-shot learning with metric-agnostic conditional embeddings. arXiv preprint arXiv:1802.04376, 2018.
-  S. D. Jain and K. Grauman. Click carving: Segmenting objects in video with point clicks. arXiv preprint arXiv:1607.01115, 2016.
-  K. Konyushkova, J. Uijlings, C. Lampert, and V. Ferrari. Learning intelligent dialogs for bounding box annotation. arXiv preprint arXiv:1712.08087, 2017.
-  D. Li, J.-B. Huang, Y. Li, S. Wang, and M.-H. Yang. Weakly supervised object localization with progressive domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3512–3520, 2016.
-  T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár. Focal loss for dense object detection. arXiv preprint arXiv:1708.02002, 2017.
-  W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. Ssd: Single shot multibox detector. In European conference on computer vision, pages 21–37. Springer, 2016.
-  D. P. Papadopoulos, J. R. Uijlings, F. Keller, and V. Ferrari. Extreme clicking for efficient object annotation. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 4940–4949. IEEE, 2017.
-  D. P. Papadopoulos, J. R. Uijlings, F. Keller, and V. Ferrari. Training object class detectors with click supervision. arXiv preprint arXiv:1704.06189, 2017.
-  J. Redmon and A. Farhadi. Yolo9000: better, faster, stronger. arXiv preprint, 2017.
-  S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91–99, 2015.
-  P. K. Rhee, E. Erdenee, S. D. Kyun, M. U. Ahmed, and S. Jin. Active and semi-supervised learning for object detection with imperfect data. Cognitive Systems Research, 45:109–123, 2017.
-  C. Rother, V. Kolmogorov, and A. Blake. Grabcut: Interactive foreground extraction using iterated graph cuts. In ACM transactions on graphics (TOG), volume 23, pages 309–314. ACM, 2004.
-  N. Shankar Nagaraja, F. R. Schmidt, and T. Brox. Video segmentation with just a few strokes. In Proceedings of the IEEE International Conference on Computer Vision, pages 3235–3243, 2015.
-  J. Snell, K. Swersky, and R. Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pages 4080–4090, 2017.
-  F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. Torr, and T. M. Hospedales. Learning to compare: Relation network for few-shot learning. arXiv preprint arXiv:1711.06025, 2017.
-  Y. Tang, J. Wang, X. Wang, B. Gao, E. Dellandrea, R. Gaizauskas, and L. Chen. Visual and semantic knowledge transfer for large scale semi-supervised object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
-  Z. Yan, J. Liang, W. Pan, J. Li, and C. Zhang. Weakly-and semi-supervised object detection with expectation-maximization algorithm. arXiv preprint arXiv:1702.08740, 2017.
-  D. Zhang, D. Meng, L. Zhao, and J. Han. Bridging saliency detection to weakly supervised object detection based on self-paced curriculum learning. arXiv preprint arXiv:1703.01290, 2017.