SSAP: Single-Shot Instance Segmentation With Affinity Pyramid

Naiyu Gao, Yanhu Shan, Yupei Wang, Xin Zhao, Yinan Yu, Ming Yang, Kaiqi Huang
CRISE, Institute of Automation, Chinese Academy of Sciences
University of Chinese Academy of Sciences
Horizon Robotics, Inc.
CAS Center for Excellence in Brain Science and Intelligence Technology
{gaonaiyu2017,wangyupei2014}@ia.ac.cn,{xzhao,kaiqi.huang}@nlpr.ia.ac.cn
{yanhu.shan,yinan.yu}@horizon.ai, m-yang4@u.northwestern.edu
Corresponding author
Abstract

Recently, proposal-free instance segmentation has received increasing attention due to its concise and efficient pipeline. Generally, proposal-free methods generate instance-agnostic semantic segmentation labels and instance-aware features to group pixels into different object instances. However, previous methods mostly employ separate modules for these two sub-tasks and require multiple passes for inference. We argue that treating the two sub-tasks separately is suboptimal: employing multiple separate modules significantly limits practical applicability, and the mutual benefits between the two complementary sub-tasks are left unexplored. To this end, this work proposes a single-shot proposal-free instance segmentation method that requires only one pass for prediction. Our method is based on a pixel-pair affinity pyramid, which computes the probability that two pixels belong to the same instance in a hierarchical manner. The affinity pyramid can also be jointly learned with the semantic class labeling, and the two tasks mutually benefit. Moreover, incorporating the learned affinity pyramid, a novel cascaded graph partition module is presented to sequentially generate instances from coarse to fine. Unlike previous time-consuming graph partition methods, this module achieves a speedup together with a 9% relative improvement in Average Precision (AP). Our approach achieves state-of-the-art results on the challenging Cityscapes dataset.

1 Introduction

The rapid development of convolutional networks [30, 29] has revolutionized various vision tasks, enabling a move towards more fine-grained understanding of images. Instead of classic bounding-box-level object detection [18, 19, 46, 39, 44, 15] or class-level semantic segmentation [41, 8, 49], instance segmentation provides in-depth understanding by segmenting all objects and distinguishing different object instances. Researchers have thus shown increasing interest in instance segmentation recently.

Current state-of-the-art solutions to this challenging problem can be classified into proposal-based and proposal-free approaches [34, 28, 40]. The proposal-based approaches regard it as an extension of the classic object detection task [46, 39, 44, 15]: after localizing each object with a bounding box, a foreground mask is predicted within each bounding box proposal. However, the performance of these proposal-based methods is heavily limited by the quality of the bounding box predictions, and the two-stage pipeline also limits the speed of the systems. By contrast, the proposal-free approach has the advantage of a simple and efficient design. This work also focuses on the proposal-free paradigm.

Figure 1: Overview of the proposed method. The per-pixel semantic class and pixel-pair affinities are generated with a single pass of a fully-convolutional network. The final instance segmentation result is then derived from these predictions by the proposed cascaded graph partition module.

The proposal-free methods mostly start by producing instance-agnostic pixel-level semantic class labels [41, 8, 7, 49], followed by clustering them into different object instances with particularly designed instance-aware features. However, previous methods mainly treat the two sub-processes as two separate stages and employ multiple modules, which is suboptimal. In fact, the mutual benefits between the two sub-tasks can be exploited, which will further improve the performance of instance segmentation. Moreover, employing multiple modules may result in additional computational costs for real-world applications.

To cope with the above issues, this work proposes a single-shot proposal-free instance segmentation method, which jointly learns pixel-level semantic class segmentation and object instance differentiation in a unified model with a single backbone network, as shown in Fig. 1. Specifically, for distinguishing different object instances, an affinity pyramid is proposed, which can be jointly learned with the labeling of semantic classes. The pixel-pair affinity computes the probability that two pixels belong to the same instance. In this work, the short-range affinities for pixels close to each other are derived with dense small learning windows. Simultaneously, the long-range affinities for pixels distant from each other are also required to group objects with large scales or nonadjacent parts. Instead of enlarging the windows, the multi-range affinities are decoupled, and long-range affinities are sparsely derived from instance maps with lower resolutions. We then propose learning the affinity pyramid at multiple scales along the hierarchy of a U-shape network, where the short-range and long-range affinities are effectively learned from the feature levels with higher and lower resolutions respectively. Experiments in Table 3 show that the pixel-level semantic segmentation and the pixel-pair affinity pyramid based grouping indeed mutually benefit from the proposed joint learning scheme. The overall instance segmentation is thus further improved.

Then, in order to utilize the cues about global context reasoning, this work employs a graph partition method [26] to derive instances from the learned affinities. Unlike previous time-consuming methods, a cascaded graph partition module is presented to incorporate the graph partition process with the hierarchical manner of the affinity pyramid and finally provides both acceleration and performance improvements. Concretely, with the learned pixel-pair affinity pyramid, a graph is constructed by regarding each pixel as a node and transforming affinities into the edge scores. Graph partition is then employed from higher-level lower-resolution layers to lower-level higher-resolution layers progressively. Instance segmentation predictions from lower resolutions produce confident proposals, which significantly reduce node numbers at higher resolutions. Thus the whole process is accelerated.

The main contributions of this paper are as follows:

  • A novel instance-aware pixel-pair affinity pyramid is proposed to distinguish instances, which can be jointly learned with the pixel-level labeling of semantic class. The mutual benefits between the two sub-tasks are explored by encouraging bidirectional interactions, which further boosts instance segmentation.

  • A single-shot, proposal-free instance segmentation method is proposed based on the affinity pyramid. Unlike most previous methods, our approach requires only one pass to generate instances. On the challenging Cityscapes dataset, our method achieves state-of-the-art results with 37.3% AP (val) / 32.7% AP (test) and 61.1% PQ (val).

  • Incorporating the hierarchical structure of the affinity pyramid, a novel cascaded graph partition module is proposed to gradually segment an image into instances from coarse to fine. Compared with the non-cascaded way, this module achieves a speedup together with a 9% relative improvement in AP.

2 Related Work

2.1 Instance Segmentation

Existing approaches to instance segmentation can be divided into two paradigms: proposal-based methods and proposal-free methods.

Proposal-based methods recognize object instances with bounding boxes generated by detectors [46, 39, 15]. MNC [14] decomposes instance segmentation into a cascade of sub-tasks, including box localization, mask refinement and instance classification. Other works [2, 32] combine the predictions of detection and semantic segmentation with a CRFasRNN [50] to generate instances. FCIS [33] develops the position-sensitive score map [13]. Mask R-CNN [20] extends Faster R-CNN [46] by adding a segmentation mask prediction branch on each Region of Interest (RoI). Follow-up works extend Mask R-CNN by modifying the feature layers [37] or the mask prediction head [6].

Proposal-free methods mainly solve instance segmentation based on the success of semantic segmentation [8, 49, 7]. These segmentation-based methods learn instance-aware features and use corresponding grouping methods to cluster pixels into instances. DWT [3] learns boundary-aware energy for each pixel followed by a watershed transform. Several methods [5, 17, 43] adopt instance-level embeddings to differentiate instances. SGN [36] sequentially groups instances with three sub-networks. Recurrent Neural Networks (RNNs) are adopted in several approaches [47, 45] to generate one instance mask at a time. Graph-based algorithms [26] are also utilized for post-processing [31, 28], segmenting an image into instances with global reasoning. However, graph-based algorithms are usually time-consuming. To speed up, Levinkov et al. [31] down-sample the outputs before graph optimization, while Kirillov et al. [28] only derive edges for adjacent neighbors; both accelerate at the expense of performance. Recently, Yang et al. [48] propose a single-shot image parser that achieves a balance between accuracy and efficiency.

2.2 Pixel-Pair Affinity

The concept of learning pixel-pair affinity has been developed in many previous works [38, 24, 1, 4, 42] to facilitate semantic segmentation during training or post-processing. Recently, Liu et al. [40] propose learning instance-aware affinity and grouping pixels into instances with agglomerative hierarchical clustering. Our approach also utilizes instance-aware affinity to distinguish object instances, but both the way affinities are derived and the way pixels are grouped are significantly different. Importantly, Liu et al. [40] employ two models and require multiple passes over the RoIs generated from semantic segmentation results. Instead, our approach is single-shot, requiring only one pass to generate the final instance segmentation result.

3 Proposed Approach

This work proposes a single-shot proposal-free instance segmentation model based on jointly learned semantic segmentation and a pixel-pair affinity pyramid, equipped with a cascaded graph partition module to differentiate object instances. As shown in Fig. 3, our model consists of two parts: (a) a unified network to learn the semantic segmentation and affinity pyramid with a single backbone network, and (b) a cascaded graph partition module to sequentially generate multi-scale instance predictions using the jointly learned affinity pyramid and semantic segmentation. In this section, the affinity pyramid is first explained in Subsection 3.1, then the cascaded graph partition module is described in Subsection 3.2.

3.1 Affinity Pyramid

With the instance-agnostic semantic segmentation, grouping pixels into individual object instances is critical for instance segmentation. This work proposes distinguishing different object instances based on instance-aware pixel-pair affinity, which specifies whether two pixels belong to the same instance or not. As shown in the second column of Fig. 2, for each pixel, the short-range affinities to neighboring pixels within a small window are learned. In this way, an affinity response map is obtained. For training, the average L2 loss is calculated over the predicted affinities of each pixel:

$$\mathcal{L}_{\mathrm{aff}} = \frac{1}{N}\sum_{i}\frac{1}{|W|}\sum_{j\in W}\left(a_{i,j}-\hat{a}_{i,j}\right)^{2} \qquad (1)$$

where $N$ is the number of pixels, $W$ is the set of positions in the affinity window, and $a_{i,j}$ is the predicted affinity between the current pixel $i$ and the $j$-th pixel in its affinity window, representing the probability that the two pixels belong to the same instance. The sigmoid activation is used so that $a_{i,j}\in(0,1)$. Here, $\hat{a}_{i,j}$ represents the ground truth affinity for $a_{i,j}$: it is set to 1 if the two pixels are from the same instance, and 0 if they are from different instances. Importantly, the training data generated in this way is unbalanced. Specifically, the ground truth affinities of most pixels are all 1, as most pixels lie in the inner regions of instances. To this end, pixels whose ground truth affinities are all 1 are randomly dropped during training. Additionally, the affinity loss is weighted by a factor of 3 for pixels belonging to object instances.
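The affinity targets and the balanced loss described above can be sketched as follows. This is an illustrative NumPy version; the function names, the padding convention, and the exclusion of background pixels from positive targets are our own assumptions, not the paper's code:

```python
import numpy as np

def affinity_targets(inst_ids, window=5):
    """Ground-truth pixel-pair affinities over a square window.

    inst_ids: (H, W) integer instance-id map (0 = background, assumed here).
    Returns (H, W, window*window): 1 where the centre pixel and the offset
    pixel belong to the same object instance, else 0.
    """
    H, W = inst_ids.shape
    r = window // 2
    # Pad with an id (-1) that matches nothing, so border pairs become 0.
    padded = np.pad(inst_ids, r, mode="constant", constant_values=-1)
    targets = np.zeros((H, W, window * window), dtype=np.float32)
    k = 0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            shifted = padded[r + dy:r + dy + H, r + dx:r + dx + W]
            targets[..., k] = (shifted == inst_ids) & (inst_ids > 0)
            k += 1
    return targets

def affinity_loss(pred, target, inst_ids, drop_prob=0.8, obj_weight=3.0,
                  rng=None):
    """Average L2 loss (Eq. 1) with the two balancing tricks from the text:
    randomly drop pixels whose ground-truth affinities are all 1, and weight
    the loss of object pixels by a factor of 3."""
    rng = rng or np.random.default_rng(0)
    per_pixel = ((pred - target) ** 2).mean(axis=-1)        # (H, W)
    all_ones = target.min(axis=-1) == 1.0
    keep = ~(all_ones & (rng.random(per_pixel.shape) < drop_prob))
    weight = np.where(inst_ids > 0, obj_weight, 1.0)
    return float((per_pixel * weight * keep).sum() / max(keep.sum(), 1))
```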

Figure 2: Illustration of the affinity pyramid. Pixel-pair affinity specifies whether two pixels belong to the same instance or not. For each current pixel, the affinities to neighboring pixels within a small window are predicted. The short-range and long-range affinities are decoupled and derived from instance maps with higher and lower resolutions respectively. In practice, the ground truth affinity is set to 1 if two pixels are from the same instance, otherwise 0. Best viewed in color and zoom.
Figure 3: Our instance segmentation model consists of two parts: (a) a unified U-shape framework that jointly learns the semantic segmentation and affinity pyramid. The affinity pyramid is constructed by learning multi-range affinities from feature levels with different resolutions separately. (b) a cascaded graph partition module that utilizes the jointly learned affinity pyramid and semantic segmentation to progressively refine instance predictions starting from the deepest layer. Instance predictions in the lower-level layers with higher resolution are guided by the instance proposals generated from the deeper layers with lower resolution. Best viewed in color and zoom.

Moreover, apart from the short-range affinities above, long-range affinities are also required to handle objects of larger scales or nonadjacent object parts. A simple solution is to utilize a large affinity window. However, besides the GPU memory cost, a large affinity window would inevitably conflict with the semantic segmentation during training, which severely hinders the joint learning of the two sub-tasks. As shown in experiments (see Table 2), jointly learning the short-range affinities with semantic segmentation obtains mutual benefits for the two tasks, whereas the long-range affinities are clearly more difficult to learn jointly with the pixel-level semantic class labeling. A similar observation is reported by Ke et al. [23].

Instead of enlarging the affinity window, we propose to learn multi-scale affinities as an affinity pyramid, where the short-range and long-range affinities are decoupled and the latter are sparsely derived from instance maps with lower resolutions. More concretely, as shown in Fig. 2, the long-range affinities are obtained with the same small affinity window applied at the lower resolutions. Note that the window sizes could differ across levels; however, they are kept fixed in this work for simplicity. In this way, the windows at the 1/64 resolution can produce affinities between pixels up to 128 pixels apart at full resolution. With the constructed affinity pyramid, the finer short-range and coarser long-range affinities are learned from the higher and lower resolutions, respectively. Consequently, multi-scale instance predictions are generated by affinities at the corresponding resolutions. As shown in Fig. 2, the predictions of larger instances are proposed by the lower resolution affinities, and are further detailed by higher resolution affinities. Meanwhile, although smaller instances have responses too weak to be proposed at lower resolutions, they can be generated by the affinities at higher resolutions.
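As a small numeric check of this decoupling, the reach of a fixed small window grows linearly with the stride of the pyramid level it is applied at. A sketch, assuming a 5×5 window and the strides {4, 8, 16, 32, 64} listed in the paper's tables (helper names are ours):

```python
import numpy as np

def downsample_ids(inst_ids, factor):
    """Nearest-neighbour downsampling of an instance-id map, as a stand-in
    for deriving lower-resolution instance maps."""
    return inst_ids[::factor, ::factor]

def max_pair_distance(window, stride):
    """Largest full-resolution pixel distance (per axis) that a
    (window x window) affinity window covers when applied at a given stride."""
    return (window // 2) * stride
```

With a 5×5 window, half-width 2, the 1/64 level links pixels up to 2 × 64 = 128 pixels apart, matching the reach quoted in the text.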

After that, the affinity pyramid can be easily learned by adding affinity branches in parallel with the existing branches for semantic segmentation along the hierarchy of the decoder network. As shown in Fig. 3 (a), affinities are predicted at the 1/4, 1/8, 1/16, 1/32 and 1/64 resolutions of the original image. In this way, the short-range and long-range affinities can be effectively learned at different feature levels in the feature pyramid of the U-shape architecture. The formed affinity pyramid can thus be jointly learned with the semantic segmentation in a unified model, resulting in mutual benefits.

3.2 Cascaded Graph Partition

With the jointly learned semantic segmentation and affinity pyramid, a graph-based partition mechanism is employed in this work to differentiate object instances. In particular, incorporating the hierarchical structure of the affinity pyramid, a cascaded graph partition module is presented. This module sequentially generates instances at multiple scales, guided by the cues encoded in the deeper-level layers of the affinity pyramid.

Graph Partition With the learned pixel-pair affinity pyramid, an undirected graph $G=(V,E)$ is constructed, where $V$ is the set of pixels and $E$ is the set of pixel pairs within affinity windows. $e_{uv}\in E$ represents the edge between the pixels $u$ and $v$. Furthermore, $a_{uv}$ and $a_{vu}$ are the affinities for the pair $(u,v)$, predicted at pixels $u$ and $v$ respectively. The average affinity $\bar{a}_{uv}$ is then calculated and transformed into the score $c_{uv}$ of edge $e_{uv}$ by:

$$\bar{a}_{uv} = \tfrac{1}{2}\left(a_{uv}+a_{vu}\right) \qquad (2)$$
$$c_{uv} = \log\frac{\bar{a}_{uv}}{1-\bar{a}_{uv}} \qquad (3)$$

As the affinities predict how likely two pixels are to belong to the same instance, average affinities higher than 0.5 are transformed into positive edge scores, and into negative scores otherwise. In this way, instance segmentation is transformed into a graph partition problem [11] and can be addressed by solving the following optimization problem [26]:

$$\max_{x\in\{0,1\}^{|E|}} \; \sum_{e\in E} c_{e}\,(1-x_{e}) \qquad (4)$$
$$\text{s.t.}\quad x_{e} \le \sum_{e'\in C\setminus\{e\}} x_{e'}, \qquad \forall\, C\in\mathcal{C},\; \forall\, e\in C \qquad (5)$$

Here, $x_{e}$ is a binary variable, and $x_{e_{uv}}=1$ represents that nodes $u$ and $v$ belong to different partitions. $\mathcal{C}$ is the set of all cycles of the graph $G$. The objective in formulation (4) maximizes the total score of the selected intra-partition edges, and the inequalities in (5) constrain each feasible solution to represent a valid partition. A search-based algorithm [26] is developed to solve this optimization problem. However, when this algorithm is employed to segment instances, the inference time is not only long but also rises significantly w.r.t. the number of nodes, which brings potential problems for real-world applications.
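The edge-score transform and the effect of the partition objective can be illustrated with a toy solver. Note that the greedy agglomeration below is a deliberately simplified stand-in for the search-based multicut algorithm of [26], not a reimplementation of it:

```python
import math

def edge_score(avg_affinity, eps=1e-6):
    """Eq. 3: logit transform; positive iff the average affinity > 0.5."""
    a = min(max(avg_affinity, eps), 1 - eps)
    return math.log(a / (1 - a))

def greedy_partition(n_nodes, edges):
    """Toy partitioner: repeatedly merge the pair of components whose total
    inter-component edge score is largest, while that score is positive.

    edges: dict {(u, v): avg_affinity} with u < v.
    Returns one representative label per node."""
    parent = list(range(n_nodes))

    def find(x):                       # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    scores = {k: edge_score(a) for k, a in edges.items()}
    merged = True
    while merged:
        merged = False
        comp = {}                      # total score between components
        for (u, v), s in scores.items():
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            key = (min(ru, rv), max(ru, rv))
            comp[key] = comp.get(key, 0.0) + s
        if comp:
            (ru, rv), best = max(comp.items(), key=lambda kv: kv[1])
            if best > 0:
                parent[rv] = ru
                merged = True
    return [find(i) for i in range(n_nodes)]
```

Negative-score (low-affinity) edges keep components apart, which is exactly the cut behaviour the multicut objective encodes.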

Cascade Scheme The sizes of instances in the Cityscapes dataset vary significantly. For large instances, most pixels lie in inner regions, which are easy to segment yet dominate the inference time. Motivated by this observation, a cascaded strategy is developed to incorporate the graph partition mechanism with the hierarchical structure of the affinity pyramid. As shown in Fig. 3 (b), graph partition is first performed at a low resolution, where there are fewer pixels and the partition runs quickly. Although only coarse segments for large instances are generated there, the inner regions of these segments are still reliable. These inner regions can therefore be up-sampled and regarded as proposals for the next higher resolution. At the higher resolution, the pixels in each proposal are combined into a single node, and each remaining pixel is treated as its own node. To construct a graph with these nodes, the edge score between nodes $U$ and $V$ is calculated by summing all pixel-pair edge scores between the two nodes: $c_{UV}=\sum_{u\in U,\,v\in V,\,e_{uv}\in E} c_{uv}$. In this way, the instance proposals are progressively refined. Because the number of nodes decreases significantly at each step, the entire graph partition is accelerated.
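The node-contraction step of the cascade — summing pixel-pair edge scores between two proposal regions into a single super-node edge — can be sketched as follows (hypothetical helper; edges are keyed by node pairs):

```python
def contract(edges, node_group):
    """Build the coarser graph of one cascade step.

    edges: dict {(u, v): score} of pixel-pair edge scores, u < v.
    node_group: mapping pixel-node -> group id; pixels covered by the same
    up-sampled proposal share a group id.
    Returns the super-node edge scores c_UV = sum of c_uv between groups.
    """
    coarse = {}
    for (u, v), s in edges.items():
        gu, gv = node_group[u], node_group[v]
        if gu == gv:
            continue  # internal to one proposal: already decided at the coarser level
        key = (min(gu, gv), max(gu, gv))
        coarse[key] = coarse.get(key, 0.0) + s
    return coarse
```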

Figure 4: Influence of segmentation refinement (SR). (a) Input image. (b) Semantic segmentation. (c) Instance segmentation without SR. (d) Instance segmentation with SR. (e) Ground truth. SR significantly reduces the errors in instance segmentation caused by semantic segmentation failures. Best viewed in color and zoom.

Segmentation Refinement In the previous steps, the partition is performed within each semantic class to speed up. At this step, the cues from both the semantic segmentation and affinity branches are integrated to segment instances from all pixels classified as foreground. In practice, the average affinity $\bar{a}_{uv}$ for a pixel pair $(u,v)$ is refined to $\tilde{a}_{uv}$ by:

$$\tilde{a}_{uv} = \bar{a}_{uv}\left(1-\mathrm{JS}(p_{u}\,\|\,p_{v})\right) \qquad (6)$$
$$\mathrm{JS}(p_{u}\,\|\,p_{v}) = \tfrac{1}{2}\,\mathrm{KL}(p_{u}\,\|\,m)+\tfrac{1}{2}\,\mathrm{KL}(p_{v}\,\|\,m),\quad m=\tfrac{1}{2}(p_{u}+p_{v}) \qquad (7)$$
$$\mathrm{KL}(p\,\|\,q) = \sum_{k} p_{k}\log\frac{p_{k}}{q_{k}} \qquad (8)$$

Here, $p_{u}$ and $p_{v}$ are the semantic segmentation scores over the object classes at pixels $u$ and $v$, which represent the classification probability distributions over the object classes. The distance between the two distributions is measured with the popular Jensen-Shannon divergence, as described in Eqs. 7-8. After this refinement of the initial affinities, graph partition is conducted for all the foreground pixels at the 1/4 resolution. By combining the information from the semantic segmentation and affinity branches, the errors in instance segmentation caused by semantic segmentation failures are significantly reduced, as shown in Fig. 4.
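A sketch of the refinement under one plausible reading of Eq. 6 (the exact combination of affinity and divergence is not recoverable from this copy, so the multiplicative, log-2-normalized form below is an assumption; function names are ours):

```python
import math
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two class distributions (Eqs. 7-8),
    computed with natural logarithms (maximum value log 2)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log((a + eps) / (b + eps))))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def refine_affinity(avg_affinity, p_u, p_v):
    """Down-weight the affinity of a pixel pair whose semantic class
    distributions disagree; JS is normalized to [0, 1] by log 2."""
    return avg_affinity * (1.0 - js_divergence(p_u, p_v) / math.log(2))
```

Identical distributions leave the affinity unchanged; fully disagreeing distributions drive it towards zero, which is what lets the refinement suppress affinities across semantic-segmentation errors.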

Finally, the class label for each instance is obtained by voting among all of its pixels based on the semantic segmentation labels. Following DWT [3], small instances are removed, and the scores from the semantic segmentation branch are used to rank predictions.
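The final labeling step — a majority vote per instance plus removal of small instances — can be sketched as follows (the size threshold below is an arbitrary placeholder; the paper's exact threshold is not stated here):

```python
import numpy as np
from collections import Counter

def label_instances(instance_map, semantic_map, min_size=10):
    """Assign each instance the majority semantic label of its pixels and
    drop instances smaller than min_size pixels.

    instance_map: (H, W) integer instance ids, 0 = no instance.
    semantic_map: (H, W) integer semantic class labels.
    Returns {instance_id: class_label}.
    """
    out = {}
    for inst in np.unique(instance_map):
        if inst == 0:
            continue
        mask = instance_map == inst
        if mask.sum() < min_size:
            continue  # following DWT, small instances are removed
        out[int(inst)] = int(Counter(semantic_map[mask].tolist()).most_common(1)[0][0])
    return out
```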

4 Experiments

Dataset Our model is evaluated on the challenging urban street scene dataset Cityscapes [12]. In this dataset, each image has a high resolution of 1,024 × 2,048 pixels. There are 5,000 images with high-quality dense pixel annotations and 20,000 images with coarse annotations. Note that only the finely annotated set is used to train our model. The Cityscapes benchmark evaluates 8 classes for instance segmentation. Together with another 11 background classes, 19 classes are evaluated for semantic segmentation.

Metrics The main metric for evaluation is Average Precision (AP), which is calculated by averaging the precisions under IoU (Intersection over Union) thresholds from 0.50 to 0.95 in steps of 0.05. Our results are also reported with three sub-metrics from Cityscapes: AP50%, AP100m and AP50m, which are calculated at a 0.5 IoU threshold or only for objects within the specified distances.
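The threshold averaging is straightforward to state in code; `precision_at` below is a stand-in for the full matching pipeline, which is not reproduced here:

```python
def cityscapes_ap(precision_at):
    """Average Precision as defined in the text: the mean of the
    per-threshold precisions over IoU thresholds 0.50, 0.55, ..., 0.95.

    precision_at: callable mapping an IoU threshold to a precision value.
    """
    thresholds = [0.5 + 0.05 * i for i in range(10)]
    return sum(precision_at(t) for t in thresholds) / len(thresholds)
```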

This paper also evaluates the results with a newer metric, Panoptic Quality (PQ) [27], which factors into Segmentation Quality (SQ) and Recognition Quality (RQ) to measure segmentation and recognition performance respectively. PQ is defined as:

$$\mathrm{PQ} = \frac{\sum_{(p,g)\in \mathit{TP}} \mathrm{IoU}(p,g)}{|\mathit{TP}|+\frac{1}{2}|\mathit{FP}|+\frac{1}{2}|\mathit{FN}|} \qquad (9)$$

where $p$ and $g$ are the predicted and ground truth segments, while TP, FP and FN represent matched pairs of segments (IoU > 0.5), unmatched predicted segments and unmatched ground truth segments, respectively. Moreover, both countable objects (thing) and uncountable regions (stuff) are evaluated in PQ and are separately reported as PQ^Th and PQ^St. As stuff is not the concern of this work, only PQ and PQ^Th are reported.
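The PQ definition, and its SQ × RQ factorization, can be checked with a few lines; here matched pairs are represented only by their IoUs:

```python
def pq(matched_ious, n_fp, n_fn):
    """Panoptic Quality (Eq. 9). A pair counts as TP only when IoU > 0.5."""
    tp = [i for i in matched_ious if i > 0.5]
    denom = len(tp) + 0.5 * n_fp + 0.5 * n_fn
    return sum(tp) / denom if denom else 0.0

def sq_rq(matched_ious, n_fp, n_fn):
    """SQ is the mean IoU over TPs; RQ is an F1-style detection term.
    Their product equals PQ."""
    tp = [i for i in matched_ious if i > 0.5]
    sq = sum(tp) / len(tp) if tp else 0.0
    denom = len(tp) + 0.5 * n_fp + 0.5 * n_fn
    rq = len(tp) / denom if denom else 0.0
    return sq, rq
```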

AP (%) PQ (%) PQ^Th (%)
diff. 27.5 45.0 54.6
diff. 29.5 48.0 55.8
diff. 31.5 49.2 56.6
same 31.0 48.7 56.2
diff. 31.0 49.2 56.3
diff. 28.1 46.4 53.4
Table 1: Influence of the balancing parameters.
AP (%) PQ (%) PQ^Th (%) mIoU (%)
0 - - - 74.5
3 30.5 48.5 56.4 75.0
5 31.3 49.0 56.5 75.0
7 31.2 48.1 56.0 75.1
9 30.0 46.2 55.0 74.3
Table 2: Influence of the affinity window size. mIoU for semantic segmentation evaluation is also provided. A window size of 0 means training semantic segmentation only.
Feature JL AP (%) PQ (%) PQ^Th (%) mIoU (%)
Single (w/o dilation) — 29.4 46.9 54.9 74.5
Single (w/o dilation) ✓ 30.2 47.6 55.0 74.2
Single (w/ dilation) — 30.6 48.2 55.5 74.5
Single (w/ dilation) ✓ 30.8 48.8 55.8 74.5
Hierarchical — 30.0 47.7 55.2 74.5
Hierarchical ✓ 31.3 49.0 56.5 75.0
Table 3: JL: joint learning. Compared with learning all layers of the affinity pyramid from the single 1/4 resolution feature map, our hierarchical manner with joint learning performs better.

Implementation Details Our model predicts semantic segmentation and pixel-pair affinity with a unified U-shape framework based on ResNet-50 [21]. The training loss is defined as:

$$\mathcal{L} = \sum_{i}\lambda_{i}\left(\mathcal{L}^{i}_{\mathrm{sem}} + w\,\mathcal{L}^{i}_{\mathrm{aff}}\right) \qquad (10)$$

where $\mathcal{L}^{i}_{\mathrm{sem}}$ and $\mathcal{L}^{i}_{\mathrm{aff}}$ are the multi-class focal loss [35] and the average L2 loss (see Eq. 1) for the semantic segmentation and affinity branches at the $i$-th resolution, respectively. To combine the losses from the different scales, we first tune the balancing parameters $\lambda_{i}$ so that the losses of all scales are of the same order of magnitude. After that, $w$ is set to 0.003 to balance the losses of the affinity pyramid and the semantic segmentation. The influence of $\lambda_{i}$ and $w$ is shown in Table 1. We run all experiments using the MXNet framework [9]. Our model is trained with Nadam [16] for 70,000 iterations using synchronized batch normalization [22] over 8 GTX 1080 Ti GPUs with a batch size of 24. The learning rate is divided by 10 at the 30,000th and 50,000th iterations.
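The loss combination of Eq. 10, as reconstructed here, amounts to a per-scale weighted sum. A minimal sketch with made-up weights (the paper's tuned per-scale values are not recoverable from this copy):

```python
def total_loss(sem_losses, aff_losses, lambdas, w=0.003):
    """Per-scale balancing weights lambda_i put the scales on the same order
    of magnitude; w (0.003 in the paper) balances the affinity term against
    the semantic segmentation term."""
    assert len(sem_losses) == len(aff_losses) == len(lambdas)
    return sum(l * (s + w * a)
               for l, s, a in zip(lambdas, sem_losses, aff_losses))
```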

Influence of Joint Learning Our separately trained semantic segmentation model achieves 74.5% mIoU. This result is significantly improved after joint training with the affinity pyramid, as shown in Table 2. However, the performance on both instance and semantic segmentation is affected by the affinity window size. A similar phenomenon is observed by Ke et al. [23], who explain that small windows benefit small objects while large windows benefit large objects. Due to GPU memory limits, window sizes from 3 to 9 are tested. Among them, a window size of 5 best balances this conflict and achieves the highest performance, and is used in the other experiments. Furthermore, in our proposed model, the semantic segmentation and affinity pyramid are jointly learned along the hierarchy of the U-shape network. We compare this approach with generating all layers of the affinity pyramid from the single 1/4 resolution feature map with corresponding strides. The use of dilated convolution [8] is also tested. Table 3 shows that our approach performs best: the mutual benefits of the two tasks are exploited and finally improve instance segmentation.

Figure 5: Running time for the cascaded graph partition module under different object sizes. The cascade scheme significantly reduces the time for large objects. Best viewed in color.
Init. Res. GP time (s) AP (%) PQ (%) PQ^Th (%)
1/4 1.26 28.9 45.1 54.9
1/8 0.33 31.3 49.2 56.6
1/16 0.26 31.5 49.2 56.6
1/32 0.26 30.9 48.8 56.5
1/64 0.26 30.9 48.7 56.5
Table 4: Influence of the initial resolution for the cascaded graph partition. As the initial resolution decreases, the GP time (running time of the cascaded graph partition per image) keeps decreasing. Compared with initialization at the 1/4 resolution, initializing the cascaded graph partition from the 1/16 resolution achieves a nearly 5× speedup with a 9% relative AP improvement.

Influence of Cascaded Graph Partition In this part, the proposed cascaded graph partition module is analyzed by initializing it from each resolution. As shown in Fig. 5, the running time for graph partition increases rapidly w.r.t. the size of object regions when conducting the partition at the 1/4 resolution directly, without the guidance of instance proposals. However, the time reduces significantly when initializing the cascaded graph partition from lower resolutions, such as the 1/16 resolution, where graph partition is conducted at the 1/16, 1/8 and 1/4 resolutions sequentially and the latter two stages are guided by the proposals from the previous stage. The quantitative results are shown in Table 4. Compared with the 1/4 resolution initialization (non-cascaded), the 1/16 initialization achieves a nearly 5× acceleration (1.26 s to 0.26 s per image). Importantly, the cascaded approach speeds up without sacrificing precision. As shown in Table 4, initializing from the 1/16 resolution brings a 2.6% absolute improvement on AP (28.9% to 31.5%), because the proposals from lower resolutions reduce the disturbing information for prediction. Meanwhile, the 1/16 initialization performs better than the 1/32 and 1/64 ones, which indicates that proposals from too low resolutions still bring errors into the prediction. In the other experiments, the cascaded graph partition is initialized from the 1/16 resolution.

Affinities Used AP (%) PQ (%) PQ^Th (%)
A1 only 25.7 41.2 53.2
A1–A2 29.8 46.5 55.4
A1–A3 30.8 48.6 56.3
A1–A4 31.4 49.2 56.5
A1–A5 31.5 49.2 56.6
Table 5: Effectiveness of the long-range affinities. A1–A5 denote the affinities at the 1/4, 1/8, 1/16, 1/32 and 1/64 resolutions respectively. Affinities with longer range are gradually added.
BD OL Kernel AP (%) PQ (%) PQ^Th (%)
— — 3 29.1 46.4 55.8
✓ — 3 30.0 48.8 56.0
✓ ✓ 3 31.3 49.0 56.5
✓ ✓ 5 31.5 49.2 56.6
Table 6: BD: balance the training data by randomly dropping 80% of the pixels whose ground truth affinities are all 1. OL: weight the affinity loss of pixels belonging to object instances by a factor of 3. Kernel: kernel size.

Quantitative Results First, to show the effectiveness of the long-range affinities, we start with only the affinities from the 1/4 resolution and gradually add longer-range affinities. Results are shown in Table 5. Then, the influences of balancing the training data, setting a larger affinity loss and employing a larger kernel are evaluated in Table 6. After that, as shown in Table 7, the segmentation refinement improves the performance by 2.8% AP. With test-time tricks (horizontal flipping and multi-scale testing), our model achieves 34.4% AP and 58.4% PQ on the validation set.

Backbone SR HF MS AP (%) PQ (%) PQ^Th (%)
ResNet-50 — — — 28.7 45.4 55.1
ResNet-50 ✓ — — 31.5 49.2 56.6
ResNet-50 ✓ ✓ — 32.8 50.4 57.6
ResNet-50 ✓ ✓ ✓ 34.4 50.6 58.4
ResNet-101 ✓ ✓ ✓ 37.3 55.0 61.1
Table 7: SR: segmentation refinement. HF: horizontal flipping test. MS: multi-scale test.
Method AP (%) PQ (%) PQ^Th (%) Backbone
Li \etal[32] 28.6 42.5 53.8 ResNet-101
SGN [36] 29.2 - - -
Mask R-CNN [20] 31.5 49.6* - ResNet-50
GMIS [40] 34.1 - - ResNet-101
Deeperlab [48] - - 56.5 Xception-71 [10]
PANet [37] 36.5 - - ResNet-50
SSAP (ours) 34.4 50.6 58.4 ResNet-50
SSAP (ours) 37.3 55.0 61.1 ResNet-101
Table 8: Results on Cityscapes val set. All results are trained with Cityscapes data only.
Method PQ [val] PQ SQ RQ PQ^Th SQ^Th RQ^Th PQ^St SQ^St RQ^St
DeeperLab [48] 33.8 34.3 77.1 43.1 37.5 77.5 46.8 29.6 76.4 37.4
SSAP (ours) 36.5 36.9 80.7 44.8 40.1 81.6 48.5 32.0 79.4 39.3
Table 9: Results on COCO val (‘PQ [val]’ column ) and test-dev (remaining columns) sets. Results are reported as percentages.
* This result is reported by Kirillov et al. [27].

Our model is also trained with ResNet-101, which achieves 37.3% AP and 61.1% PQ, as shown in Table 8. For the test set, our model attains a performance of 32.7% AP, which exceeds all previous methods. Details are in Table 10.

Visual Results The proposals generated from the lower resolutions are visualized in Fig. 6. A few sample results on the validation set are visualized in Fig. 7, where fine details are precisely captured. As shown in the second column, cars occluded by persons or poles and thus separated into parts are successfully grouped.

Results on COCO To show the effectiveness of our method in scenarios other than street scenes, we evaluate it on the COCO dataset. The annotations for COCO instance segmentation contain overlaps, making it unsuitable for training and testing a proposal-free method like ours, so our method is evaluated on the panoptic segmentation task. To train on COCO, we resize the longer edge to 640 pixels and train the model on crops. The number of iterations is 80,000, and the learning rate is divided by 10 at the 60,000th and 70,000th iterations. Other experimental settings remain the same. The performance of our ResNet-101-based model is summarized in Table 9. To the best of our knowledge, DeeperLab [48] is currently the only other proposal-free method to report COCO results. Our method outperforms DeeperLab (Xception-71 based) in all sub-metrics.

Method Training data AP AP50% AP50m AP100m person rider car truck bus train motor bicycle
InstanceCut [28] fine+coarse 13.0 27.9 26.1 22.1 10.0 8.0 23.7 14.0 19.5 15.2 9.3 4.7
Multi-task [25] fine 21.6 39.0 37.0 35.0 19.2 21.4 36.6 18.8 26.8 15.9 19.4 14.5
SGN [36] fine+coarse 25.0 44.9 44.5 38.9 21.8 20.1 39.4 24.8 33.2 30.8 17.7 12.4
Mask R-CNN [20] fine 26.2 49.9 40.1 37.6 30.5 23.7 46.9 22.8 32.2 18.6 19.1 16.0
GMIS [40] fine+coarse 27.3 45.6 - - 31.5 25.2 42.3 21.8 37.2 28.9 18.8 12.8
Neven \etal[43] fine 27.6 50.9 - - 34.5 26.1 52.4 21.7 31.2 16.4 20.1 18.9
PANet [37] fine 31.8 57.1 46.0 44.2 36.8 30.4 54.8 27.0 36.3 25.5 22.6 20.8
SSAP (ours) fine 32.7 51.8 51.4 47.3 35.4 25.5 55.9 33.2 43.9 31.9 19.5 16.2
Table 10: Results on Cityscapes test set. All results are trained with Cityscapes data only. Results are reported as percentages.
Image Proposals from 1/16 Res. Proposals from 1/8 Res. Instance Seg. Ground Truth
Figure 6: Visualizations of proposals generated from lower resolutions within the cascaded graph partition module and the final instance segmentation results. Best viewed in color and zoom.
Figure 7: Visualizations of sampled results on the validation set (alternating semantic and instance segmentation columns). Best viewed in color and zoom.

5 Conclusion

This work has proposed a single-shot proposal-free instance segmentation method that requires only a single pass to generate instances. Our method is based on a novel affinity pyramid for distinguishing instances, which can be jointly learned with the pixel-level semantic class labels using a single backbone network. Experimental results have shown that the two sub-tasks mutually benefit from our joint learning scheme, which further boosts instance segmentation. Moreover, a cascaded graph partition module has been developed to segment instances from the affinity pyramid and semantic segmentation results. Compared with the non-cascaded approach, this module achieves a speedup and a 9% relative improvement on AP. Our approach has achieved a new state of the art on the challenging Cityscapes dataset.
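The grouping idea behind the affinity pyramid can be illustrated with a minimal sketch. This is not the paper's cascaded graph partition algorithm; it only shows the core notion that pixel pairs with a high predicted same-instance affinity are merged into one instance, here via a simple union-find over a toy affinity list.

```python
# Minimal illustration (not the paper's method): merge pixel pairs whose
# predicted same-instance affinity exceeds a threshold, using union-find,
# so connected high-affinity pixels end up sharing one instance label.
def find(parent, x):
    """Find the root of x with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def group_by_affinity(num_pixels, affinities, threshold=0.5):
    """affinities: iterable of (pixel_i, pixel_j, prob_same_instance)."""
    parent = list(range(num_pixels))
    for i, j, p in affinities:
        if p > threshold:
            ri, rj = find(parent, i), find(parent, j)
            if ri != rj:
                parent[ri] = rj  # union the two groups
    # Assign each pixel the label of its group's root.
    return [find(parent, x) for x in range(num_pixels)]

# Toy example: 4 pixels; pairs (0,1) and (2,3) have high affinity.
labels = group_by_affinity(4, [(0, 1, 0.9), (1, 2, 0.2), (2, 3, 0.8)])
```

In this toy run, pixels 0 and 1 receive one instance label and pixels 2 and 3 another, since the low-affinity pair (1, 2) is never merged.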

Acknowledgment

This work is supported in part by the National Key Research and Development Program of China (Grant No. 2016YFB1001005), the National Natural Science Foundation of China (Grant No. 61673375 and No. 61602485), and the Projects of the Chinese Academy of Sciences (Grant No. QYZDB-SSW-JSC006).

References

  • [1] J. Ahn and S. Kwak (2018) Learning pixel-level semantic affinity with image-level supervision for weakly supervised semantic segmentation. In CVPR, Cited by: §2.2.
  • [2] A. Arnab and P. H. S. Torr (2017) Pixelwise instance segmentation with a dynamically instantiated network. In CVPR, Cited by: §2.1.
  • [3] M. Bai and R. Urtasun (2017) Deep watershed transform for instance segmentation. In CVPR, Cited by: §2.1, §3.2.
  • [4] G. Bertasius, L. Torresani, S. X. Yu, and J. Shi (2017) Convolutional random walk networks for semantic image segmentation. In CVPR, Cited by: §2.2.
  • [5] B. D. Brabandere, D. Neven, and L. V. Gool (2017) Semantic instance segmentation with a discriminative loss function. arXiv:1708.02551. External Links: Link, 1708.02551 Cited by: §2.1.
  • [6] L. Chen, A. Hermans, G. Papandreou, F. Schroff, P. Wang, and H. Adam (2018) MaskLab: instance segmentation by refining object detection with semantic and direction features. In CVPR, Cited by: §2.1.
  • [7] L. Chen, G. Papandreou, F. Schroff, and H. Adam (2017) Rethinking atrous convolution for semantic image segmentation. arXiv:1706.05587. Cited by: §1, §2.1.
  • [8] L. Chen, G. Papandreou, I. Kokkinos, K. P. Murphy, and A. L. Yuille (2018) DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. TPAMI 40 (4). Cited by: §1, §1, §2.1, §4.
  • [9] T. Chen, M. Li, Y. Li, M. Lin, N. Wang, M. Wang, T. Xiao, B. Xu, C. Zhang, and Z. Zhang (2015) Mxnet: a flexible and efficient machine learning library for heterogeneous distributed systems. arXiv:1512.01274. Cited by: §4.
  • [10] F. Chollet (2017) Xception: deep learning with depthwise separable convolutions. In CVPR, Cited by: Table 8.
  • [11] S. Chopra and M. R. Rao (1993) The partition problem. Mathematical Programming 59 (1). Cited by: §3.2.
  • [12] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele (2016) The cityscapes dataset for semantic urban scene understanding. In CVPR, Cited by: §4.
  • [13] J. Dai, K. He, Y. Li, S. Ren, and J. Sun (2016) Instance-sensitive fully convolutional networks. In ECCV, Cited by: §2.1.
  • [14] J. Dai, K. He, and J. Sun (2016) Instance-aware semantic segmentation via multi-task network cascades. In CVPR, Cited by: §2.1.
  • [15] J. Dai, Y. Li, K. He, and J. Sun (2016) R-fcn: object detection via region-based fully convolutional networks. In NIPS, Cited by: §1, §1, §2.1.
  • [16] T. Dozat (2016) Incorporating nesterov momentum into adam. Cited by: §4.
  • [17] A. Fathi, Z. Wojna, V. Rathod, P. Wang, H. O. Song, S. Guadarrama, and K. P. Murphy (2017) Semantic instance segmentation via deep metric learning. arXiv:1703.10277. Cited by: §2.1.
  • [18] R. Girshick, J. Donahue, T. Darrell, and J. Malik (2014) Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, Cited by: §1.
  • [19] R. Girshick (2015) Fast r-cnn. In ICCV, Cited by: §1.
  • [20] K. He, G. Gkioxari, P. Dollar, and R. Girshick (2017) Mask r-cnn. In ICCV, Cited by: §2.1, Table 10, Table 8.
  • [21] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, Cited by: §4.
  • [22] S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In ICML, Cited by: §4.
  • [23] T. Ke, J. Hwang, Z. Liu, and S. X. Yu (2018) Adaptive affinity fields for semantic segmentation. In ECCV, Cited by: §3.1, §4.
  • [24] T. Ke, J. Hwang, Z. Liu, and S. X. Yu (2018) Adaptive affinity fields for semantic segmentation. In ECCV, Cited by: §2.2.
  • [25] A. Kendall, Y. Gal, and R. Cipolla (2018) Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In CVPR, Cited by: Table 10.
  • [26] M. Keuper, E. Levinkov, N. Bonneel, G. Lavoue, T. Brox, and B. Andres (2015) Efficient decomposition of image and mesh graphs by lifted multicuts. In ICCV, Cited by: §1, §2.1, §3.2.
  • [27] A. Kirillov, K. He, R. Girshick, C. Rother, and P. Dollar (2019) Panoptic segmentation. In CVPR, Cited by: §4, §4.
  • [28] A. Kirillov, E. Levinkov, B. Andres, B. Savchynskyy, and C. Rother (2017) InstanceCut: from edges to instances with multicut. In CVPR, Cited by: §1, §2.1, Table 10.
  • [29] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. In NIPS, Cited by: §1.
  • [30] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11). Cited by: §1.
  • [31] E. Levinkov, J. Uhrig, S. Tang, M. Omran, E. Insafutdinov, A. Kirillov, C. Rother, T. Brox, B. Schiele, and B. Andres (2017) Joint graph decomposition & node labeling: problem, algorithms, applications. In CVPR, Cited by: §2.1.
  • [32] Q. Li, A. Arnab, and P. H.S. Torr (2018) Weakly- and semi-supervised panoptic segmentation. In ECCV, Cited by: §2.1, Table 8.
  • [33] Y. Li, H. Qi, J. Dai, X. Ji, and Y. Wei (2017) Fully convolutional instance-aware semantic segmentation. In CVPR, Cited by: §2.1.
  • [34] X. Liang, Y. Wei, X. Shen, J. Yang, L. Lin, and S. Yan (2015) Proposal-free network for instance-level object segmentation. arXiv:1509.02636. Cited by: §1.
  • [35] T. Lin, P. Goyal, R. B. Girshick, K. He, and P. Dollar (2017) Focal loss for dense object detection. In ICCV, Cited by: §4.
  • [36] S. Liu, J. Jia, S. Fidler, and R. Urtasun (2017) SGN: sequential grouping networks for instance segmentation. In ICCV, Cited by: §2.1, Table 10, Table 8.
  • [37] S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia (2018) Path aggregation network for instance segmentation. In CVPR, Cited by: §2.1, Table 10, Table 8.
  • [38] S. Liu, S. De Mello, J. Gu, G. Zhong, M. Yang, and J. Kautz (2017) Learning affinity via spatial propagation networks. In NIPS, Cited by: §2.2.
  • [39] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. E. Reed, C. Fu, and A. C. Berg (2016) SSD: single shot multibox detector. In ECCV, Cited by: §1, §1, §2.1.
  • [40] Y. Liu, S. Yang, B. Li, W. Zhou, J. Xu, H. Li, and Y. Lu (2018) Affinity derivation and graph merge for instance segmentation. In ECCV, Cited by: §1, §2.2, Table 10, Table 8.
  • [41] J. Long, E. Shelhamer, and T. Darrell (2015) Fully convolutional networks for semantic segmentation. In CVPR, Cited by: §1, §1.
  • [42] M. Maire, T. Narihira, and S. X. Yu (2016) Affinity cnn: learning pixel-centric pairwise relations for figure/ground embedding. In CVPR, Cited by: §2.2.
  • [43] D. Neven, B. D. Brabandere, M. Proesmans, and L. V. Gool (2019) Instance segmentation by jointly optimizing spatial embeddings and clustering bandwidth. In CVPR, Cited by: §2.1, Table 10.
  • [44] J. Redmon and A. Farhadi (2017) YOLO9000: better, faster, stronger. In CVPR, Cited by: §1, §1.
  • [45] M. Ren and R. S. Zemel (2017) End-to-end instance segmentation with recurrent attention. In CVPR, Cited by: §2.1.
  • [46] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster r-cnn: towards real-time object detection with region proposal networks. In NIPS, Cited by: §1, §1, §2.1.
  • [47] B. Romeraparedes and P. H. S. Torr (2016) Recurrent instance segmentation. In ECCV, Cited by: §2.1.
  • [48] T. Yang, M. D. Collins, Y. Zhu, J. Hwang, T. Liu, X. Zhang, V. Sze, G. Papandreou, and L. Chen (2019) DeeperLab: single-shot image parser. arXiv:1902.05093. Cited by: §2.1, Table 8, Table 9, §4.
  • [49] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia (2017) Pyramid scene parsing network. In CVPR, Cited by: §1, §1, §2.1.
  • [50] S. Zheng, S. Jayasumana, B. Romeraparedes, V. Vineet, Z. Su, D. Du, C. Huang, and P. H. S. Torr (2015) Conditional random fields as recurrent neural networks. In ICCV, Cited by: §2.1.