Collaborative Annotation of Semantic Objects in Images with Multi-granularity Supervisions

Lishi Zhang, Chenghan Fu, Jia Li State Key Laboratory of Virtual Reality Technology and Systems, SCSE, Beihang University jiali@buaa.edu.cn
Abstract.

Per-pixel masks of semantic objects are very useful in many applications but tedious to annotate. In this paper, we propose a human-agent collaborative annotation approach that can efficiently generate per-pixel masks of semantic objects in tagged images with multi-granularity supervisions. Given a set of tagged images, a computer agent is first dynamically generated to roughly localize the semantic objects described by the tag. The agent first extracts massive object proposals from an image and then infers the tag-related ones under the weak and strong supervisions from linguistically and visually similar images and previously annotated object masks. By representing such supervisions with over-complete dictionaries, the tag-related object proposals can pop out according to their sparse coding lengths, and they are then converted to superpixels with binary labels. After that, human annotators participate in the annotation process by flipping labels and dividing superpixels with mouse clicks, which are used as click supervisions that teach the agent to recover false positives/negatives in processing images with the same tags. Experimental results show that our approach can facilitate the annotation process and generate object masks that are highly consistent with those generated by the LabelMe toolbox.

Human-agent collaboration, sparse coding length, per-pixel annotation, object proposal, superpixel
footnotetext: *Corresponding author: Jia Li

1. Introduction

In the past decade, large-scale datasets have greatly boosted the development of computer vision techniques. Typically, images in such datasets are annotated with one or several tags to depict the semantic categories of primary objects. In recent applications such as autonomous driving (Chen et al., 2014; Mousavian et al., 2017; Geiger et al., 2012; Chen et al., 2015), robot navigation (Chen et al., 2017b; Kim and Park, 2017; Mao et al., 2017; Fankhauser et al., 2015; Zhu et al., 2017) and visual question answering (Goyal et al., 2017; Xu et al., 2017; Zhao et al., 2017; Zhou et al., 2017), however, knowing only the what attributes represented by such tags becomes insufficient. To interact with the real-world scenarios, these applications often have to train and test models based on per-pixel masks of semantic objects, within which both the what and where attributes are simultaneously annotated.

Figure 1. Two time-consuming tasks in annotating semantic objects. a) Approximating obvious non-polygonal boundaries with polygons; b) Separating ambiguous boundaries where targets and distractors have similar visual appearance. In most cases, a computer agent performs well at the former task, while the human annotator is skilled at the latter. This motivates our human-agent collaborative annotation approach.

Despite the overwhelming demand for per-pixel masks of semantic objects, pixel-wise annotation is unfortunately inefficient and tedious. It is difficult to construct datasets with per-pixel masks (Cordts et al., 2016; Gould, 2012) of semantic objects that are as large as existing datasets of tagged images. For example, ImageNet (Deng et al., 2009; Russakovsky et al., 2015) contains 14.2 million tagged images from 21,841 object categories, while MS COCO (Lin et al., 2014), a large dataset with per-pixel masks of 80 object categories, contains only 328k images. As shown in Fig. 1, two of the most time-consuming tasks in annotating semantic objects are 1) approximating obvious non-polygonal boundaries by drawing polygons and 2) separating ambiguous boundaries where targets and distractors have similar visual appearance. In most cases, a computer agent performs well at the former task, while the human annotator is skilled at the latter. Therefore, it is necessary to develop a human-agent collaborative annotation approach that speeds up the annotation process so that large-scale datasets with per-pixel masks of semantic objects can be efficiently generated.

Figure 2. Framework of the proposed approach. Given a set of images tagged with “Cat,” a computer agent is dynamically generated with weak, strong and click dictionaries. It first extracts object proposals and superpixels, and the tag-related object proposals are then inferred by measuring their sparse coding lengths over the weak and strong dictionaries. After the tag-related objects are converted into binary superpixel labels, the human annotator can participate by flipping a superpixel label or dividing a coarse superpixel into finer ones via mouse clicks. Such clicks are then used to form flip dictionaries that supervise the automatic refinement of subsequent images.

The problem of human-agent collaborative annotation (Su et al., 2010) has been studied for many years. Based on the collaboration strategy, existing approaches can be divided into two major categories, which can be denoted as agent-decision and human-decision, respectively. Approaches in the agent-decision group require human interactions like scribbles (Russell et al., 2008) and bounding boxes (Dai et al., 2015) at the beginning of the annotation process, followed by the automatic boundary refinement algorithms such as GrabCut (Rother et al., 2004; Chen et al., 2014) or Convolutional Neural Network (CNN) (Lin et al., 2016; Uhrig et al., 2016; Castrejón et al., 2017) that can be viewed as an agent. Such human-agent interactions can be repeated several times until satisfactory results are generated. A major drawback of the agent-decision approaches is that the final decision is made by the agent. Since the correlation between the input human interactions and the output annotation results is not very intuitive, these annotation results are rarely used as the official ground-truth.

For the human-decision group, the annotation process starts with an agent that automatically generates coarse annotation results in terms of superpixels (Li et al., 2016; Shi et al., 2017), bounding boxes (Kuettel et al., 2012; Jain and Grauman, 2016; Su et al., 2012) or polygons (Castrejón et al., 2017). These results are then manually refined by the human annotators through dragging and dropping anchor points (Castrejón et al., 2017) or clicking superpixels (Kim et al., 2011). Since the human annotator makes the final decision in these approaches, the generated masks can be used as the official ground-truth. Typically, the more powerful an agent is, the more annotation time can be saved. However, one drawback of existing human-decision approaches is that the agents are often static with pre-trained (or pre-defined) parameters, while for certain unusual object categories there may be insufficient data to train such an agent. Moreover, the agents may repeatedly make the same mistakes even in segmenting the same categories of semantic objects. Therefore, it is necessary to develop a human-agent collaborative annotation approach in which the agent is dynamically formed and gradually evolves to make fewer mistakes, while the human makes the final decision.

Toward this end, this paper proposes a human-agent collaborative approach that can efficiently generate per-pixel masks for tagged images. The main objective is to rapidly convert tagged images in previous datasets (e.g., ImageNet) into per-pixel masks of semantic objects. In this manner, a large-scale image dataset with per-pixel masks of semantic objects can be efficiently created. The system framework of the proposed approach is shown in Fig. 2, which takes a set of images with the same tags as the input. In this framework, a dynamic agent is first created from tagged images and previously annotated semantic objects that have high linguistic and visual similarities with the input images. After that, a weak dictionary is constructed from images with similar tags, and a strong dictionary is constructed from previously annotated semantic objects. Based on the weak and strong supervisions from the two dictionaries, the agent first extracts a set of object proposals from each image and then measures their relationship with image tags according to the sparse coding length  (Lee et al., 2007)(i.e., the difficulty of encoding the features of object proposals from the two dictionaries). Such object-level relationships are then used to initialize the binary labels of coarse superpixels, based on which a human annotator interacts with the superpixels with mouse clicks. In the human annotation stage, simple mouse clicks are used to invert the label of a superpixel and divide a large superpixel into much smaller ones that are better aligned with ambiguous object boundaries. Meanwhile, superpixels whose labels are corrected by the annotators are recorded to update a flip dictionary which records false positives/negatives. In annotating subsequent images with the same tags, the initialization results can be automatically refined by the agent that gradually learns how to recover false positives/negatives from the flip dictionary. 
Experimental results show that our approach can facilitate the annotation process of semantic objects, and the collaborative masks have 91.21% agreement with the masks generated by the LabelMe toolbox (Russell et al., 2008) in terms of F-Measure.

The main contributions of this paper include: 1) We propose a human-agent collaborative approach to convert tagged images in previous datasets into per-pixel masks of semantic objects; 2) We create a dynamic agent that gradually evolves to facilitate the annotation process; and 3) We conduct extensive experiments that not only validate the effectiveness of the proposed approach in speeding up the annotation but also show that the collaborative annotation results have high agreement with the masks generated by the LabelMe toolbox.

2. Automatic Annotation Initialization with Weak and Strong Supervisions

The main objective of this work is to develop a collaborative tool that can rapidly convert the semantic tags of images in previous large-scale datasets into per-pixel masks of semantic objects. Without loss of generality, we assume there exists an image dataset in which only the primary object(s) of each image is annotated with several nouns (as in ImageNet). Given a set of images annotated with the same set of tags, we first retrieve images and previously annotated object masks with relevant tags to construct two dictionaries, which are then used as the weak and strong supervisions to automatically initialize the masks of semantic objects.

2.1. Weak and Strong Dictionaries

Consider the collection of all image sets, where each image set is associated with a set of tags. The similarity between two image sets is defined as

(1)

where the two terms denote the linguistic and visual similarities, respectively. The linguistic similarity between two tag sets is computed as

(2)

where each tag is represented by a normalized feature vector formed with the pre-trained word2vec model (Mikolov et al., 2013; Kim, 2014), and the similarity between two tags is computed as the cosine similarity of their vectors. The result is normalized by the numbers of tags in the two sets.
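As a concrete sketch of this linguistic similarity, the snippet below computes a tag-set similarity from toy word2vec-style embeddings. The symmetric average of per-tag best matches is our assumption about the aggregation, and the `embed` lookup table stands in for a real pre-trained word2vec model.

```python
import numpy as np

def cosine(u, v):
    # cosine similarity of two vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def linguistic_similarity(tags_a, tags_b, embed):
    """Tag-set similarity as the symmetric average of per-tag best matches.

    `embed` maps a tag to its word2vec vector; a real system would query a
    pre-trained word2vec model instead of this lookup table.
    """
    def one_way(src, dst):
        return np.mean([max(cosine(embed[s], embed[d]) for d in dst) for s in src])
    return 0.5 * (one_way(tags_a, tags_b) + one_way(tags_b, tags_a))
```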

To compute the visual similarity, we adopt the DPN model (Chen et al., 2017a), a pre-trained Convolutional Neural Network, to convert each image in the two sets into a 2688D feature vector. These vectors are also normalized to unit length, and the visual similarity is computed as

(3)

where each image is represented by its feature vector, and the result is normalized by the numbers of images in the two sets.

By incorporating the linguistic similarity (2) and the visual similarity (3) into (1), we can create a list of image sets ranked in decreasing order of the overall similarity. From this list, we select the top-ranked image sets whose similarity values exceed a predefined threshold. We assume that the per-pixel masks of some selected image sets are already annotated, while the others are not. As a result, we can construct a weak dictionary and a strong dictionary for these two groups of image sets. The weak dictionary is represented by a matrix whose columns are the feature vectors of the selected images whose per-pixel masks are not yet annotated. The strong dictionary is constructed similarly from previously annotated objects by setting the image pixels outside the per-pixel masks to zero when extracting the object-based feature descriptors. The difference between the two dictionaries is that the features in the strong dictionary are more precise in representing the attributes of semantic objects. To speed up the computation, we compress the 2688D feature vectors into 100D via Principal Component Analysis when constructing the two dictionaries.
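The dictionary construction step can be sketched as follows, assuming the DPN features have already been extracted. The scikit-learn PCA call is a stand-in for the paper's compression step, and the extra unit-normalization of the compressed descriptors is our assumption.

```python
import numpy as np
from sklearn.decomposition import PCA

def build_dictionary(features_2688d, n_components=100):
    """Compress CNN descriptors with PCA and stack them as dictionary columns.

    `features_2688d` is an (n_samples, 2688) array of DPN features; atoms
    (columns of the returned matrix) are L2-normalized so that coding
    lengths over the weak and strong dictionaries are comparable.
    """
    pca = PCA(n_components=min(n_components, len(features_2688d) - 1))
    compressed = pca.fit_transform(features_2688d)            # (n, d')
    compressed /= np.linalg.norm(compressed, axis=1, keepdims=True) + 1e-12
    return compressed.T                                       # (d', n), atoms as columns
```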

Figure 3. Representative examples of initialization with weak and strong supervisions. We can see that the majority of semantic objects can be successfully detected, while some superpixels near ambiguous boundaries and in object-like regions may be assigned with wrong labels or have inappropriate scales.

2.2. Object Mask Initialization with Weak and Strong Supervisions

Given the two dictionaries related to an image set, we can automatically initialize object masks for all images in the set. For each image, we first use the MCG model (Arbeláez et al., 2014) to extract a number of object proposals and keep those with the highest objectness scores for subsequent analysis (we empirically keep 200 proposals per image). We extract their visual features with the pre-trained DPN model (Chen et al., 2017a) and compress them into 100D descriptors via Principal Component Analysis. For each proposal, its probability of being related to the image tags can be computed according to the difficulty of encoding the proposal's features with the weak and strong dictionaries. That is, the easier a proposal can be encoded from the images and objects of the same category, the more likely it belongs to the desired semantic object. The sparse representations of a proposal's feature vector can be estimated by solving

(4)

where the two sparse codes represent the input feature vector over the weak and strong dictionaries, respectively. The optimization objective of (4) is to minimize the two coding lengths of a feature vector over the two dictionaries, which can be solved with the Orthogonal Matching Pursuit algorithm (Donoho et al., 2012), subject to a predefined bound on the reconstruction error that is fixed empirically in all experiments.
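A minimal sketch of this sparse coding step using scikit-learn's Orthogonal Matching Pursuit; the tolerance value and the L0 definition of coding length are assumptions consistent with the surrounding text.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def sparse_code(D, x, tol=1e-2):
    """Encode feature vector x over dictionary D (atoms as columns) with OMP.

    `tol` bounds the squared norm of the reconstruction residual, playing
    the role of the predefined error threshold in (4).
    """
    omp = OrthogonalMatchingPursuit(tol=tol, fit_intercept=False)
    omp.fit(D, x)
    return omp.coef_

def coding_length(alpha):
    # coding length = number of selected atoms, i.e. the L0 norm of the code
    return int(np.count_nonzero(alpha))
```

The same routine is run once per dictionary, yielding one code (and one coding length) for the weak dictionary and one for the strong dictionary.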

Given the two sparse representations of an object proposal, its coding length can be estimated as

(5)

Note that we only use the minimum of the two coding lengths to measure the difficulty of reconstructing an object proposal from tagged images and clean objects. With this coding length, the probability that a proposal is related to the semantic object described by the image tags can be heuristically estimated as

(6)

where a constant, empirically set to 2, is used to suppress proposals with large coding lengths.
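The heuristic in (6) can be sketched as below; the exponential-decay form `theta ** (-L)` is an assumed reading of how large coding lengths are suppressed, with theta = 2 as in the text.

```python
def tag_relatedness(len_weak, len_strong, theta=2.0):
    """Probability that a proposal matches the image tag, from coding lengths.

    Takes the minimum of the weak- and strong-dictionary coding lengths
    (Eq. 5) and decays it as theta ** (-L); the decay form is an assumption.
    """
    L = min(len_weak, len_strong)
    return theta ** (-float(L))
```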

After estimating such probabilities for all object proposals, we convert them into binary labels of superpixels for subsequent human interactions. The annotation process can then be formulated as finding a subset of superpixels such that 1) their union covers the semantic objects as completely as possible, and 2) their number is small so as to reduce the time cost of human decisions. Toward this end, we first extract a set of non-overlapping superpixels (Achanta and Süsstrunk, 2017) at a coarse scale. The probability that a coarse superpixel is related to the image tags can be computed as

(7)

where the sum runs over the pixels of the superpixel, and an indicator function equals 1 if a pixel falls inside a proposal and 0 otherwise. As a result, the binary label of a superpixel can be initialized by thresholding this probability:

(8)

where the threshold is empirically set to 0.4. As shown in Fig. 3, the initialized annotations can roughly cover the majority of primary objects, but may fail at ambiguous boundaries and certain object-like regions. Therefore, such superpixels should be refined by removing false positives, recovering false negatives and splitting coarse superpixels into smaller ones.
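A sketch of the superpixel label initialization in (7) and (8); taking the maximum proposal probability per pixel before averaging over a superpixel is our assumption about how the indicator-weighted aggregation resolves overlapping proposals.

```python
import numpy as np

def init_superpixel_labels(sp_map, proposal_masks, proposal_probs, tau=0.4):
    """Initialize binary superpixel labels from proposal probabilities.

    sp_map is an (H, W) array of superpixel ids; each proposal is a boolean
    (H, W) mask with a tag-relatedness probability. Each pixel takes the
    maximum probability over the proposals that contain it, and a superpixel
    is labeled 1 when its mean pixel score exceeds tau (0.4 in the paper).
    """
    pixel_score = np.zeros(sp_map.shape, dtype=float)
    for mask, prob in zip(proposal_masks, proposal_probs):
        pixel_score[mask] = np.maximum(pixel_score[mask], prob)
    return {int(sp): int(pixel_score[sp_map == sp].mean() > tau)
            for sp in np.unique(sp_map)}
```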

3. Human-Agent Collaborative Refinement with Click Supervision

Based on the object masks initialized by the agent, a human annotator participates in the collaborative annotation refinement. Such refinement can take two forms when processing an image set. For the first several images, only manual refinement is used, and the revisions made by the annotator are recorded to create and update an additional flip dictionary. After that, the flip dictionary is used to provide automatic refinement before the manual stage, preventing the agent from making the same mistakes when processing subsequent images.

3.1. The Flip Dictionary From Clicks

To speed up the manual refinement, we divide an image into massive superpixels at a much finer scale and further divide these superpixels by the boundaries of the coarse superpixels. As a result, we can safely assume that superpixels at this scale align well with object boundaries. Based on the coarse and fine superpixels, only two click actions are required to refine object masks: 1) a left click flips the superpixel label between 0 and 1; and 2) a right click converts a coarse superpixel into much finer ones with the same labels. With these clicks, we can efficiently generate a per-pixel mask for an image. Compared with dragging anchor points to form polygons, the superpixel-based interactions are often easier to conduct and can make use of the impressive capability of agents in segmenting non-polygonal boundaries.

Meanwhile, each coarse superpixel is embedded into multiple local contexts at several scales. The local context at the k-th scale is represented by a bounding box covering the superpixel and all its neighbors up to order k. As shown in Fig. 4, the order-k neighbors are adjacent to the order-(k-1) neighbors, and the order-1 neighbors are adjacent to the superpixel itself. By embedding a superpixel into multiple local contexts, we can extract its visual features at multiple scales with the DPN model (Chen et al., 2017a). Then, for the coarse superpixels that are initialized with labels of 1 and receive left clicks, we use their multi-scale features to form a flip dictionary that records false positives. Similarly, another flip dictionary is formed to record false negatives.
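The order-k neighborhoods can be computed by breadth-first expansion over the superpixel adjacency graph, as sketched below; the `adjacency` mapping is assumed to be precomputed from the superpixel segmentation.

```python
import numpy as np

def context_box(sp_map, sp_id, order, adjacency):
    """Bounding box covering a superpixel and all neighbors up to `order`.

    `adjacency` maps each superpixel id to the set of directly adjacent ids;
    the region is grown by breadth-first expansion, so order-k neighbors are
    reached from the order-(k-1) ring, as described in the text.
    """
    region, frontier = {sp_id}, {sp_id}
    for _ in range(order):
        frontier = {n for s in frontier for n in adjacency[s]} - region
        region |= frontier
    ys, xs = np.where(np.isin(sp_map, list(region)))
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())
```

Cropping the image to each box and feeding the crops to the feature extractor yields the multi-scale descriptors used for the flip dictionaries.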

Figure 4. A superpixel is embedded into multiple local contexts for multi-scale visual feature extraction. (a) A coarse superpixel from an input image (marked with the red contour and a black dot); (b)-(d) the local contexts (bounding boxes) with order-1, order-2 and order-3 neighbors. By embedding the superpixel into multiple contexts, its visual attributes can be described from local, regional and global perspectives.

3.2. Automatic Annotation Refinement Supervised by The Flip Dictionary

The flip dictionaries record the failures made by the agent in annotating the first several images of an input image set. In annotating subsequent images, we expect the agent to gradually evolve by learning from such failures how to recover false positives/negatives. In practice, we focus on the coarse superpixels near the classification boundary, i.e., those whose probabilities deviate from the labeling threshold by no more than a margin that is empirically set to 0.15. For such uncertain coarse superpixels with initial labels of 1, we first adopt the same sparse coding scheme to recognize probable false positives by solving

(9)

After that, we refer to the resulting coding length to selectively invert the labels of probable false positives:

(10)

where a predefined threshold of 0.1 is used to select coarse superpixels that are highly similar to the false positives recorded in the flip dictionary. Similarly, false negatives can be detected among the coarse superpixels with initial labels of 0 by using the other flip dictionary. Such a simple strategy provides a rough refinement of superpixel labels so that the workload of the human annotator can be gradually reduced.

4. Experiments

Figure 5. Representative images from the four groups: (a) Group-1, (b) Group-2, (c) Group-3 and (d) Group-4. By measuring the inter-group and intra-group performance variation, we can provide a comprehensive evaluation.

4.1. Settings

The agent in our approach is dynamic and evolves along with the annotation process. To validate the effectiveness of such a dynamic agent, we build a synthetic dataset that consists of 120 image sets selected from:

MS-COCO (Lin et al., 2014). This dataset contains 328k images with objects annotated at the instance level. We crop a tight bounding box around each semantic object to generate a new image, which is assigned the same tag as the semantic object. In total, we collect 80 image sets with 123,287 images. These images are used to simulate tagged images with previously annotated per-pixel masks.

ImageNet (Deng et al., 2009). This dataset contains tagged image sets. From this dataset, we randomly select 20 image sets from the synsets that were previously used to train the DPN model (Chen et al., 2017a), and another 20 image sets from the remaining synsets. From each set, we randomly select 10% of the images, collecting 2,389 images within 40 image sets that form four groups:

  1. Group-1: 10 sets (1,096 images) that are used in training DPN and share tags with the 80 MS-COCO image sets.

  2. Group-2: 10 sets (574 images) that are used in training DPN and have tags different from the 80 MS-COCO image sets.

  3. Group-3: 10 sets (659 images) that are not used in training DPN and share tags with the 80 MS-COCO image sets.

  4. Group-4: 10 sets (510 images) that are not used in training DPN and have tags different from the 80 MS-COCO image sets.

The images from these four groups are used to simulate tagged images whose per-pixel masks need to be annotated. Some representative images from the four groups can be found in Fig. 5. Images from Group-1 and Group-3 contain semantic objects that also appear in certain images from MS-COCO, indicating that these images can be annotated under strong supervision. Images from Group-2 and Group-4 have no corresponding images in MS-COCO, indicating that they need to be annotated mainly with weak and click supervisions. In this manner, we can explore the influence of the various types of supervision. In addition, the DPN model is trained to best extract representative features of primary objects from images of Group-1 and Group-2. As a result, the performance of our approach on these groups, when compared with that on Group-3 and Group-4, can be used to depict the generalization ability of the proposed approach to new images.

Given the four groups, ten subjects (eight males and two females, aged from 20 to 28) are requested to annotate the semantic objects in the 40 image sets (four image sets per subject) by using the LabelMe toolbox (Russell et al., 2008). In this manner, we obtain a binary object mask for each image in the 40 image sets. These binary masks are used as the ground-truth in all experiments. Given the ground-truth mask of an image, we assess the quality of a binary mask generated by another approach with the metrics of Precision and Recall. Correspondingly, the overall performance of an approach, i.e., its agreement with the masks generated by the LabelMe toolbox, is measured by the F-measure:

(11)
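The agreement metrics can be computed directly from two binary masks; the balanced F-measure below is the standard definition and is assumed to match (11).

```python
import numpy as np

def mask_agreement(pred, gt):
    """Precision, recall and balanced F-measure between two binary masks."""
    tp = np.logical_and(pred, gt).sum()      # true-positive pixel count
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    f = 2 * precision * recall / max(precision + recall, 1e-12)
    return float(precision), float(recall), float(f)
```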

4.2. Annotation Quality Test

A successful annotation approach should provide satisfactory per-pixel masks while greatly reducing the time cost of the human annotator. To validate our approach from these two perspectives, we request eight additional subjects to annotate the semantic objects in all 40 image sets with our annotation tool. The annotation results are compared with the masks generated by the ten subjects with the LabelMe toolbox to show the annotation quality. Moreover, the flip dictionary is disabled in this annotation process, so that the quality of the initialization results provided by the agent indirectly reflects the speed-up factor. An inherent assumption here is: the better the initialization results, the faster the manual annotation process.

Figure 6. Representative annotation results. (a,g) Tagged images, (b,h) DeepMask, (c,i) SharpMask, (d,j) Our initialization results, (e,k) Our final annotation results, (f,l) LabelMe results (ground-truth).

The performance of our automatic initialization and manual annotation results are shown in Table 1, and some representative results can be found in Fig. 6. In Table 1 and Fig. 6, we also demonstrate the results from two state-of-the-art models, DeepMask (Pinheiro et al., 2015) and SharpMask (Pinheiro et al., 2016), to provide an intuitive comparison with our initialization results. For an input image, DeepMask uses a CNN model to output a pixel labeling of an object. SharpMask provides a refinement module to obtain more accurate results. Note that both DeepMask and SharpMask are trained on the MS-COCO dataset and we use their top-1 proposals as the labeling results.

Model Precision Recall F-Measure
Random 31.36 49.96 38.53
DeepMask 52.13 62.34 56.78
SharpMask 54.34 61.71 57.79
Our-Init 60.35 72.42 65.83
Our-Final 91.33 91.09 91.21
Table 1. Annotation qualities (in %) of four approaches on the 40 image sets. The top two approaches are highlighted with bold and underline, respectively.
Categories  Precision Recall F-Measure
Top 5  colour supplement 99.11 94.83 96.93
 comic book 96.57 95.78 96.17
 Irish water spaniel 94.79 97.30 96.03
 mustache cup 95.15 96.11 95.63
 Egyptian cat 94.31 96.36 95.32
Bottom 5  banded gecko 88.69 84.90 86.75
 bucking bronco 83.86 85.35 84.60
 lockage 92.17 75.71 83.14
 pirate ship 75.87 85.31 80.31
 fireboat 77.67 74.25 75.93
Table 2. The best and worst image sets in terms of annotation quality (in %) of our approach.

From Table 1, we can see that the agreement between the annotation results of our approach and LabelMe reaches up to 91.21% in terms of F-measure. To further investigate this, we show in Table 2 the top five and bottom five image sets selected according to the F-measure of our approach.

For the top 5 image sets such as comic book and mustache cup, the annotation results of the two annotation approaches match each other almost perfectly. Even for challenging image sets such as Irish water spaniel, we still have an agreement above 96.03%. Interestingly, most of the top 5 image sets have corresponding semantic objects in MS-COCO (except Irish water spaniel), indicating that strong supervision plays an important role in improving the annotation quality. In contrast, the bottom 5 categories usually have no such strong supervision (except fireboat). Moreover, the objects in the bottom five sets may have diverse appearances, which may prevent them from being easily annotated and may lead to annotation ambiguity even between different subjects.

Beyond the annotation quality, we find that the time cost of the annotator may also be remarkably reduced due to the impressive performance of the computer agent in initializing the object masks. On average, the F-measure of our initialization results reaches up to 65.83%, which outperforms DeepMask (56.78%) and SharpMask (57.79%) by 9.05% and 8.04%, respectively. Note that the time-consuming pre-training processes are not required by our approach, in which the agent is dynamically generated by retrieving related images and objects. This implies that our approach can provide better initialization results and can be efficiently deployed for annotating various types of semantic objects.

In addition, we compare the initialization results with the final annotation results of our approach and find that, on average, only 25.02 coarse superpixels per image have wrong labels or inaccurate boundaries that need to be flipped or divided through left or right mouse clicks. Considering that such mouse clicks are much easier to conduct than dragging and dropping anchor points, we can safely claim that the proposed approach greatly reduces the human time cost of annotating semantic objects.

4.3. Performance Analysis

We further analyze the performance of our approach from the perspectives of generalization ability and agent evolvability. To verify the generalization ability, we list in Table 3 the performance of our initialization results on the four groups. From Table 3, we can find that our annotation approach has comparable performance for the various types of images inside or outside the training set of DPN and the MS-COCO dataset, implying an impressive generalization ability.

To validate the evolvability of the agent, we equally divide each of the 40 image sets into two click collection sets and a click verification set. On the click collection sets, we collect the mouse clicks of annotators to form a flip dictionary, and compare the agent performance on the click verification set with and without the flip dictionaries. We find that by using the flip dictionaries formed on zero, one or two click collection sets, the agent obtains an F-measure of 64.53%, 65.14% and 65.27% on the images from the click verification set, respectively. Note that for each of the 40 image sets, the flip dictionaries are formed from only hundreds of clicks (i.e., 59.72 images per image set and 25.02 manually clicked coarse superpixels per image). Some representative results before and after the automatic agent refinement with the flip dictionaries can be found in Fig. 7. These results indicate that the proposed approach is capable of memorizing past failures and can gradually evolve to avoid them when facing similar images. In this manner, the agent gradually becomes smarter and thus saves more of the annotator's time.

Group Precision Recall F-Measure
Group-1 64.76 67.42 66.06
Group-2 56.98 76.80 65.42
Group-3 61.02 71.80 65.97
Group-4 53.78 79.03 64.00
Table 3. Agent performances (in %) over four groups with only weak and strong supervisions

5. Conclusion

Per-pixel masks of semantic image objects are difficult to obtain due to the tedious and inefficient annotation process. In this paper, we propose a human-agent collaborative annotation approach that aims to convert existing large-scale datasets of tagged images into per-pixel masks of semantic objects. Multi-granularity supervisions from images with similar tags, previously annotated objects and annotator-clicked superpixels are incorporated into the collaborative framework in the form of weak, strong and click supervisions. Compared with previous annotation tools, the proposed agent is dynamically formed and can gradually evolve to recover from failures. Experimental results on a synthetic dataset show that the proposed approach performs impressively in annotating tagged images. We believe this approach can be helpful for the construction of large-scale image datasets with per-pixel masks of semantic objects.

Figure 7. Effects of automatic annotation refinement with click supervision. The second row shows the initialization results, and the third row shows the object masks after automatic refinement with the flip dictionaries.

References

  • Achanta and Süsstrunk (2017) Radhakrishna Achanta and Sabine Süsstrunk. 2017. Superpixels and polygons using simple non-iterative clustering. In CVPR. 4895–4904.
  • Arbeláez et al. (2014) Pablo Arbeláez, Jordi Pont-Tuset, Jonathan T Barron, Ferran Marques, and Jitendra Malik. 2014. Multiscale combinatorial grouping. In CVPR. 328–335.
  • Castrejón et al. (2017) Lluıs Castrejón, Kaustav Kundu, Raquel Urtasun, and Sanja Fidler. 2017. Annotating object instances with a polygon-rnn. In CVPR, Vol. 1. 2.
  • Chen et al. (2015) Chenyi Chen, Ari Seff, Alain Kornhauser, and Jianxiong Xiao. 2015. Deepdriving: Learning affordance for direct perception in autonomous driving. In ICCV. 2722–2730.
  • Chen et al. (2014) Liang-Chieh Chen, Sanja Fidler, Alan L Yuille, and Raquel Urtasun. 2014. Beat the mturkers: Automatic image labeling from weak 3d supervision. In CVPR. 3198–3205.
  • Chen et al. (2017a) Yunpeng Chen, Jianan Li, Huaxin Xiao, Xiaojie Jin, Shuicheng Yan, and Jiashi Feng. 2017a. Dual path networks. In NIPS. 4470–4478.
  • Chen et al. (2017b) Zetao Chen, Fabiola Maffra, Inkyu Sa, and Margarita Chli. 2017b. Only look once, mining distinctive landmarks from ConvNet for visual place recognition. In IROS. 9–16.
  • Cordts et al. (2016) Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. 2016. The cityscapes dataset for semantic urban scene understanding. In CVPR. 3213–3223.
  • Dai et al. (2015) Jifeng Dai, Kaiming He, and Jian Sun. 2015. Boxsup: Exploiting bounding boxes to supervise convolutional networks for semantic segmentation. In ICCV. 1635–1643.
  • Deng et al. (2009) Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In CVPR. 248–255.
  • Donoho et al. (2012) David L Donoho, Yaakov Tsaig, Iddo Drori, and Jean-Luc Starck. 2012. Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit. IEEE transactions on Information Theory (2012), 1094–1121.
  • Fankhauser et al. (2015) Péter Fankhauser, Michael Bloesch, Diego Rodriguez, Ralf Kaestner, Marco Hutter, and Roland Siegwart. 2015. Kinect v2 for mobile robot navigation: Evaluation and modeling. In IEEE ICRA. 388–394.
  • Geiger et al. (2012) Andreas Geiger, Philip Lenz, and Raquel Urtasun. 2012. Are we ready for autonomous driving? the kitti vision benchmark suite. In CVPR. 3354–3361.
  • Gould (2012) Stephen Gould. 2012. Multiclass pixel labeling with non-local matching constraints. In CVPR. 2783–2790.
  • Goyal et al. (2017) Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In CVPR. 9.
  • Jain and Grauman (2016) Suyog Dutt Jain and Kristen Grauman. 2016. Active Image Segmentation Propagation.. In CVPR. 4.
  • Kim et al. (2011) Edward Kim, Xiaolei Huang, and Gang Tan. 2011. Markup SVG: An Online Content-Aware Image Abstraction and Annotation Tool. IEEE TMM (2011), 993–1006.
  • Kim and Park (2017) Jun-Sik Kim and Jung-Min Park. 2017. Direct hand manipulation of constrained virtual objects. In IROS. 357–362.
  • Kim (2014) Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882 (2014).
  • Kuettel et al. (2012) Daniel Kuettel, Matthieu Guillaumin, and Vittorio Ferrari. 2012. Segmentation propagation in imagenet. In ECCV. 459–473.
  • Lee et al. (2007) Honglak Lee, Alexis Battle, Rajat Raina, and Andrew Y Ng. 2007. Efficient sparse coding algorithms. In Advances in neural information processing systems. 801–808.
  • Li et al. (2016) Ke Li, Bharath Hariharan, and Jitendra Malik. 2016. Iterative instance segmentation. In CVPR. 3659–3667.
  • Lin et al. (2016) Di Lin, Jifeng Dai, Jiaya Jia, Kaiming He, and Jian Sun. 2016. Scribblesup: Scribble-supervised convolutional networks for semantic segmentation. In CVPR. 3159–3167.
  • Lin et al. (2014) Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In ECCV. 740–755.
  • Mao et al. (2017) Huitan Mao, Jing Xiao, Mabel M Zhang, and Kostas Daniilidis. 2017. Shape-based object classification and recognition through continuum manipulation. In IROS. 456–463.
  • Mikolov et al. (2013) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013).
  • Mousavian et al. (2017) Arsalan Mousavian, Dragomir Anguelov, John Flynn, and Jana Košecká. 2017. 3d bounding box estimation using deep learning and geometry. In CVPR. 5632–5640.
  • Pinheiro et al. (2015) Pedro O Pinheiro, Ronan Collobert, and Piotr Dollár. 2015. Learning to segment object candidates. In NIPS. 1990–1998.
  • Pinheiro et al. (2016) Pedro O Pinheiro, Tsung-Yi Lin, Ronan Collobert, and Piotr Dollár. 2016. Learning to refine object segments. In ECCV. 75–91.
  • Rother et al. (2004) Carsten Rother, Vladimir Kolmogorov, and Andrew Blake. 2004. Grabcut: Interactive foreground extraction using iterated graph cuts. In ACM TOG. 309–314.
  • Russakovsky et al. (2015) Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. 2015. Imagenet large scale visual recognition challenge. IJCV (2015), 211–252.
  • Russell et al. (2008) Bryan C Russell, Antonio Torralba, Kevin P Murphy, and William T Freeman. 2008. LabelMe: a database and web-based tool for image annotation. IJCV (2008), 157–173.
  • Shi et al. (2017) Zhiyuan Shi, Yongxin Yang, Timothy M Hospedales, and Tao Xiang. 2017. Weakly-supervised image annotation and segmentation with objects and attributes. IEEE TPAMI (2017), 2525–2538.
  • Su et al. (2010) Addison YS Su, Stephen JH Yang, Wu-Yuin Hwang, and Jia Zhang. 2010. A Web 2.0-based collaborative annotation system for enhancing knowledge sharing in collaborative learning environments. Computers & Education (2010), 752–766.
  • Su et al. (2012) Hao Su, Jia Deng, and Li Fei-Fei. 2012. Crowdsourcing annotations for visual object detection. In AAAI, Vol. 1.
  • Uhrig et al. (2016) Jonas Uhrig, Marius Cordts, Uwe Franke, and Thomas Brox. 2016. Pixel-level encoding and depth layering for instance-level semantic labeling. 14–25.
  • Xu et al. (2017) Dejing Xu, Zhou Zhao, Jun Xiao, Fei Wu, Hanwang Zhang, Xiangnan He, and Yueting Zhuang. 2017. Video Question Answering via Gradually Refined Attention over Appearance and Motion. In ACM MM. 1645–1653.
  • Zhao et al. (2017) Zhou Zhao, Jinghao Lin, Xinghua Jiang, Deng Cai, Xiaofei He, and Yueting Zhuang. 2017. Video Question Answering via Hierarchical Dual-Level Attention Network Learning. In ACM MM. 1050–1058.
  • Zhou et al. (2017) Yiyi Zhou, Rongrong Ji, Jinsong Su, Yongjian Wu, and Yunsheng Wu. 2017. More Than An Answer: Neural Pivot Network for Visual Question Answering. In ACM MM. 681–689.
  • Zhu et al. (2017) Yuke Zhu, Roozbeh Mottaghi, Eric Kolve, Joseph J Lim, Abhinav Gupta, Li Fei-Fei, and Ali Farhadi. 2017. Target-driven visual navigation in indoor scenes using deep reinforcement learning. In IEEE ICRA. 3357–3364.