Online Adaptation of Convolutional Neural Networks for Video Object Segmentation


Abstract

We tackle the task of semi-supervised video object segmentation, i.e. segmenting the pixels belonging to an object in a video using the ground truth pixel mask for the first frame. We build on the recently introduced one-shot video object segmentation (OSVOS) approach, which uses a pretrained network and fine-tunes it on the first frame. While achieving impressive performance, at test time OSVOS uses the fine-tuned network in unchanged form and is not able to adapt to large changes in object appearance. To overcome this limitation, we propose Online Adaptive Video Object Segmentation (OnAVOS), which updates the network online using training examples selected based on the confidence of the network and the spatial configuration. Additionally, we add a pretraining step based on objectness, which is learned on PASCAL. Our experiments show that both extensions are highly effective and improve the state of the art on DAVIS in terms of the intersection-over-union score.

Paul Voigtlaender (voigtlaender@vision.rwth-aachen.de) and Bastian Leibe (leibe@vision.rwth-aachen.de)
Computer Vision Group, Visual Computing Institute, RWTH Aachen University, Germany

1 Introduction

Figure 1: Qualitative results on two sequences of the DAVIS validation set (columns: un-adapted baseline, adaptation targets, online adapted, ground truth). The second row shows the pixels selected as positive (red) and negative (blue) training examples. It can be seen that after online adaptation, the network can deal better with changes in viewpoint (left) and new objects appearing in the scene (the car in the right sequence).

Visual object tracking is a fundamental problem in computer vision with many applications including video editing, autonomous cars, and robotics. Recently, there has been a trend to move from bounding box level to pixel level tracking, mainly driven by the availability of new datasets, in particular DAVIS [Perazzi et al.(2016)Perazzi, Pont-Tuset, McWilliams, Van Gool, Gross, and Sorkine-Hornung]. In our work, we focus on semi-supervised video object segmentation (VOS), i.e. the task of segmenting the pixels belonging to a generic object in the video using the ground truth pixel mask of the first frame.

Recently, deep learning based approaches, which often utilize large classification datasets for pretraining, have shown extremely good performance for VOS [Caelles et al.(2017)Caelles, Maninis, Pont-Tuset, Leal-Taixé, Cremers, and Van Gool, Perazzi et al.(2017)Perazzi, Khoreva, Benenson, Schiele, and Sorkine-Hornung, Khoreva et al.(2017)Khoreva, Benenson, Ilg, Brox, and Schiele, Jain et al.(2017a)Jain, Xiong, and Grauman] and for the related tasks of single-object tracking [Nam and Han(2016), Held et al.(2016)Held, Thrun, and Savarese, Bertinetto et al.(2016)Bertinetto, Valmadre, Henriques, Vedaldi, and Torr] and background modeling [Babaee et al.(2017)Babaee, Dinh, and Rigoll, Braham and Droogenbroeck(2016), Wang et al.(2016)Wang, Luo, and Jodoin]. In particular, the one-shot video object segmentation (OSVOS) approach introduced by Caelles et al. [Caelles et al.(2017)Caelles, Maninis, Pont-Tuset, Leal-Taixé, Cremers, and Van Gool] has shown very promising results for VOS. This approach fine-tunes a pretrained convolutional neural network on the first frame of the target video. However, since at test time OSVOS only learns from the first frame of the sequence, it is not able to adapt to large changes in appearance, which might for example be caused by drastic changes in viewpoint.

While online adaptation has been used with success for bounding box level tracking (e.g. [Kalal et al.(2012)Kalal, Mikolajczyk, and Matas, Grabner and Bischof(2006), Nam and Han(2016), Wang and Yeung(2013), Li et al.(2016)Li, Li, and Porikli]), its use for VOS [Papoutsakis and Argyros(2013), Ellis and Zografos(2012), Bai et al.(2009)Bai, Wang, Simons, and Sapiro, Bai et al.(2010)Bai, Wang, and Sapiro] has received less attention, especially in the context of deep learning. We thus propose Online Adaptive Video Object Segmentation (OnAVOS), which updates a convolutional neural network based on online-selected training examples. In order to avoid drift, we carefully select training examples: pixels for which the network is very certain that they belong to the object of interest become positive examples, and pixels which are far away from the last assumed pixel mask become negative examples (see Fig. 1, second row). We further show that naively performing online updates on every frame quickly leads to drift, which manifests itself in strongly degraded performance. As a countermeasure, we propose to mix in the first frame (for which the ground truth pixel mask is known) as additional training examples during online updates.

Our contributions are the following: We introduce OnAVOS, which uses online updates to adapt to changes in appearance. Furthermore, we adopt a more recent network architecture and an additional objectness pretraining step [Jain et al.(2017b)Jain, Xiong, and Grauman, Jain et al.(2017a)Jain, Xiong, and Grauman] and demonstrate their effectiveness for the semi-supervised setup. We further show that OnAVOS significantly improves the state of the art on two datasets.

2 Related Work

Video Object Segmentation.  A common approach of many classical video object segmentation (VOS) methods is to reduce the granularity of the input space, e.g. by using superpixels [Chang et al.(2013)Chang, Wei, and III, Grundmann et al.(2010)Grundmann, Kwatra, Han, and Essa], patches [Ramakanth and Babu(2014), Fan et al.(2015)Fan, Zhong, Lischinski, Cohen-Or, and Chen], or object proposals [Perazzi et al.(2015)Perazzi, Wang, Gross, and Sorkine-Hornung]. While these methods significantly reduce the complexity of subsequent optimization steps, they can introduce unrecoverable errors early in the pipeline. The obtained intermediate representations (or directly the pixels [Maerki et al.(2016)Maerki, Perazzi, Wang, and Sorkine-Hornung]) are then used for either a global optimization over the whole video [Maerki et al.(2016)Maerki, Perazzi, Wang, and Sorkine-Hornung, Perazzi et al.(2015)Perazzi, Wang, Gross, and Sorkine-Hornung], over parts of it [Grundmann et al.(2010)Grundmann, Kwatra, Han, and Essa], or using only the current and the preceding frame [Chang et al.(2013)Chang, Wei, and III, Fan et al.(2015)Fan, Zhong, Lischinski, Cohen-Or, and Chen, Ramakanth and Babu(2014)].

Recently, neural network based approaches [Caelles et al.(2017)Caelles, Maninis, Pont-Tuset, Leal-Taixé, Cremers, and Van Gool, Perazzi et al.(2017)Perazzi, Khoreva, Benenson, Schiele, and Sorkine-Hornung, Khoreva et al.(2017)Khoreva, Benenson, Ilg, Brox, and Schiele, Jain et al.(2017a)Jain, Xiong, and Grauman] including OSVOS [Caelles et al.(2017)Caelles, Maninis, Pont-Tuset, Leal-Taixé, Cremers, and Van Gool] have become the state of the art for VOS. Since OnAVOS is built on top of OSVOS, we include a detailed description in Section 3. While OSVOS handles every video frame in isolation, we expect that incorporating temporal context should be helpful. As a step in this direction, Perazzi et al. [Perazzi et al.(2017)Perazzi, Khoreva, Benenson, Schiele, and Sorkine-Hornung] propose the MaskTrack method, in which the estimated segmentation mask from the last frame is used as an additional input channel to the neural network, enabling it to use temporal context. Jampani et al. [Jampani et al.(2017)Jampani, Gadde, and Gehler] propose a video propagation network (VPN) which applies learned bilateral filtering operations to propagate information across video frames. Furthermore, optical flow has been used as an additional temporal cue in conjunction with deep learning in the semi-supervised [Perazzi et al.(2017)Perazzi, Khoreva, Benenson, Schiele, and Sorkine-Hornung, Khoreva et al.(2017)Khoreva, Benenson, Ilg, Brox, and Schiele] and unsupervised setting [Tokmakov et al.(2017)Tokmakov, Alahari, and Schmid], in which the ground truth for the first frame is not available. In our work, we focus on including context information implicitly by adapting the network online, i.e. we store temporal context information in the adapted weights of the network.

Recently, Jain et al. [Jain et al.(2017b)Jain, Xiong, and Grauman] proposed to train a convolutional neural network for pixel objectness, i.e. for deciding for each pixel whether it belongs to an object-like region. In another paper, Jain et al. [Jain et al.(2017a)Jain, Xiong, and Grauman] showed that using pixel objectness is helpful in the unsupervised VOS setting. We adopt pixel objectness as a pretraining step for the semi-supervised setting based on the one-shot approach.

The current best result on DAVIS is obtained by LucidTracker from Khoreva et al. [Khoreva et al.(2017)Khoreva, Benenson, Ilg, Brox, and Schiele], which extends MaskTrack with an elaborate data augmentation method that creates a large number of training examples from the first annotated frame and thereby reduces the dependence on large datasets for pretraining. Our experiments show that our approach achieves better performance using only conventional data augmentation methods.

Online Adaptation.  For bounding box level tracking, Kalal et al. [Kalal et al.(2012)Kalal, Mikolajczyk, and Matas] introduced the Tracking-Learning-Detection (TLD) framework, which tries to detect errors of the used object detector and to update the detector online to avoid these errors in the future. Grabner and Bischof [Grabner and Bischof(2006)] used an online version of AdaBoost [Freund and Schapire(1995)] for multiple computer vision tasks including tracking. Nam and Han [Nam and Han(2016)] proposed the Multi-Domain Network (MDNet) for bounding box level tracking. MDNet trains a separate domain-specific output layer for each training sequence and at test time initializes a new output layer, which is updated online together with two fully-connected layers. To this end, training examples are randomly sampled close to the current assumed object position and used as either positive or negative targets based on their classification scores. This scheme of sampling training examples online has some similarities to our approach. However, our method works on the pixel level instead of the bounding box level and, in order to avoid drift, we take special care to only select training examples online for which we are very certain that they are positive or negative. For VOS, online adaptation is less well explored; mainly classical methods such as online-updated color and/or shape models [Bai et al.(2010)Bai, Wang, and Sapiro, Bai et al.(2009)Bai, Wang, Simons, and Sapiro, Papoutsakis and Argyros(2013)] and online random forests [Ellis and Zografos(2012)] have been proposed.

Fully Convolutional Networks for Semantic Segmentation.  Fully Convolutional Networks (FCNs) for semantic segmentation were introduced by Long et al. [Long et al.(2015)Long, Shelhamer, and Darrell]. The main idea is to repurpose a network initially designed for classification for semantic segmentation by replacing the fully-connected layers with convolutions and by introducing skip connections which help capture higher-resolution details. Variants of this approach have since been widely adopted for semantic segmentation with great success (e.g. ResNets by He et al. [He et al.(2016)He, Zhang, Ren, and Sun]).

Recently, Wu et al. [Wu et al.(2016a)Wu, Shen, and Hengel] introduced a ResNet variant with fewer but wider layers than the original ResNet architectures [He et al.(2016)He, Zhang, Ren, and Sun] and a simple approach for segmentation, which avoids some of the subsampling steps by replacing them with dilated convolutions [Yu and Koltun(2016)] and which does not use any skip connections. Despite the simplicity of their architecture for segmentation, they obtained outstanding results across multiple classification and semantic segmentation datasets, which motivates us to adopt their architecture.

3 One-Shot Video Object Segmentation

Figure 2: The pipeline of OnAVOS. Starting from pretrained weights, the network is first pretrained for objectness on PASCAL (a). Afterwards, we pretrain on DAVIS to incorporate domain specific information (b). At test time, we fine-tune on the first frame to obtain the test network (c). On the following frames, the network is then fine-tuned online to adapt to changes in appearance (d).

OnAVOS (see Fig. 2 for an overview) builds upon the recently introduced one-shot video object segmentation (OSVOS) approach [Caelles et al.(2017)Caelles, Maninis, Pont-Tuset, Leal-Taixé, Cremers, and Van Gool], but introduces pretraining for pixel objectness [Jain et al.(2017b)Jain, Xiong, and Grauman] as a new component, adopts a more recent network architecture, and incorporates a novel online adaptation scheme, which is described in detail in Section 4.

Base Network.  The first step of OnAVOS is to pretrain a base network on large datasets (e.g. ImageNet [Deng et al.(2009)Deng, Dong, Socher, Li, Li, and Fei-Fei] for image classification) in order to learn a powerful representation of objects, which can later be used as a starting point for the video object segmentation (VOS) task.

Objectness Network.  In a second step, the network is further pretrained for pixel objectness [Jain et al.(2017b)Jain, Xiong, and Grauman] using a binary cross-entropy loss. In order to obtain targets for foreground and background, we use the PASCAL [Everingham et al.(2015)Everingham, Eslami, Van Gool, Williams, Winn, and Zisserman] dataset, map all 20 annotated classes to foreground, and treat all other image regions as background. As demonstrated by Jain et al. [Jain et al.(2017a)Jain, Xiong, and Grauman], the resulting objectness network alone already performs well on DAVIS, but here we use objectness only as a pretraining step.
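The foreground/background mapping described above can be sketched as follows (a minimal illustration; the function name and the treatment of PASCAL's void label 255 as an ignore region are our assumptions, not part of the paper):

```python
import numpy as np

def pascal_to_objectness(label_map, ignore_label=255):
    """Map a PASCAL VOC annotation map to binary pixel-objectness targets.

    All 20 object classes (ids 1..20) become foreground (1), class 0 stays
    background (0), and the void label is kept as an ignore region.
    """
    targets = np.zeros_like(label_map, dtype=np.int64)
    targets[(label_map >= 1) & (label_map <= 20)] = 1  # any object class -> foreground
    targets[label_map == ignore_label] = ignore_label  # keep void pixels ignored
    return targets
```

The resulting target map can be fed directly into a binary cross-entropy loss, with the ignore label masked out.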

Domain Specific Objectness Network.  The objectness network was trained on the PASCAL dataset. However, the target dataset on which the VOS should be performed may exhibit different characteristics, e.g. a higher resolution and less noise in the case of DAVIS. Hence, we fine-tune the objectness network using the DAVIS training data and obtain a domain specific objectness network. The DAVIS annotations do not directly correspond to objectness, as usually only one object out of possibly multiple is annotated. However, we argue that the learned task here is still similar to general objectness, since in most sequences of DAVIS the number of visible objects is relatively low and the object of interest is usually relatively large and salient. Note that OSVOS trained the base network directly on DAVIS without objectness pretraining on PASCAL. Our experiments show that both steps are complementary.

Test Network.  After the preceding pretraining steps, the network has learned a domain specific notion of objectness, but during test time, it does not know yet which of the possibly multiple objects of the target sequence it should segment. Hence, we fine-tune the pretrained network on the ground truth mask of the first frame, which provides it with the identity and specific appearance of the object of interest and allows it to learn to ignore the background. This one-shot step has been shown to be very effective for VOS [Caelles et al.(2017)Caelles, Maninis, Pont-Tuset, Leal-Taixé, Cremers, and Van Gool], which we also confirm in our experiments. However, the first frame does not provide enough information for the network to adapt to drastic changes in appearance or viewpoint. In these cases, our online adaptation approach (see Section 4) is needed.

Network Architecture.  While OSVOS used a variant of the well-known VGG network [Simonyan and Zisserman(2015)], we choose to adopt a more recent network architecture which incorporates residual connections. In particular, we adopt model A from Wu et al. [Wu et al.(2016a)Wu, Shen, and Hengel], which is a very wide ResNet [He et al.(2016)He, Zhang, Ren, and Sun] variant with 38 hidden layers and roughly 124 million parameters. The approach for segmentation is very simple, as no upsampling mechanism or skip connections are used. Instead, downsampling by a factor of two using strided convolutions is performed only three times. This leads to a loss of resolution by a factor of eight in each dimension, after which the receptive field is increased using dilated convolutions [Yu and Koltun(2016)] at no additional loss of resolution. Despite its simplicity, this architecture has shown excellent results both for classification (ImageNet) and segmentation (PASCAL) tasks [Wu et al.(2016a)Wu, Shen, and Hengel]. When applying it for segmentation, we bilinearly upsample the pixelwise posterior probabilities to the initial resolution before applying a threshold.
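The final upsampling and thresholding step can be sketched as follows (a minimal illustration using SciPy; the threshold value of 0.5 and the function name are our assumptions):

```python
import numpy as np
from scipy.ndimage import zoom

def posteriors_to_mask(posteriors_low_res, factor=8, threshold=0.5):
    """Bilinearly upsample low-resolution foreground posteriors to the
    input resolution and threshold them into a binary mask.

    `factor` matches the eightfold downsampling of the network; order=1
    selects bilinear interpolation.
    """
    up = zoom(posteriors_low_res, factor, order=1)
    return up > threshold
```

Note that thresholding the upsampled posteriors (rather than upsampling a hard mask) preserves smooth object boundaries.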

We use the weights provided by Wu et al. [Wu et al.(2016a)Wu, Shen, and Hengel], which were obtained by pretraining on ImageNet [Deng et al.(2009)Deng, Dong, Socher, Li, Li, and Fei-Fei], Microsoft COCO [Lin et al.(2014)Lin, Maire, Belongie, Hays, Perona, Ramanan, Dollár, and Zitnick], and PASCAL [Everingham et al.(2015)Everingham, Eslami, Van Gool, Williams, Winn, and Zisserman], as a very strong initialization for the base network. We then replace the output layer with a two-class softmax. As loss function, we use the bootstrapped cross-entropy loss [Wu et al.(2016b)Wu, Shen, and v. d. Hengel], which averages the cross-entropy values only over a fraction of the hardest pixels, i.e. the pixels which are predicted worst by the network, instead of over all pixels. This loss function has been shown to work well for unbalanced class distributions, which commonly occur in VOS due to the dominant background class. In all our experiments, we use a fraction of 25% of the hardest pixels and optimize this loss using the Adam optimizer [Kingma and Ba(2015)]. In our evaluations, we separate the effect of the network architecture from the effect of the algorithmic improvements.
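The bootstrapped cross-entropy loss can be sketched as follows (a minimal NumPy illustration of averaging only over the hardest 25% of pixels; the function name and the clipping constant are our assumptions):

```python
import numpy as np

def bootstrapped_xent(probs, targets, fraction=0.25):
    """Bootstrapped binary cross-entropy: average the per-pixel loss only
    over the `fraction` hardest pixels, i.e. those with the highest loss.
    """
    probs = np.clip(probs, 1e-7, 1 - 1e-7)  # numerical stability
    losses = -(targets * np.log(probs) + (1 - targets) * np.log(1 - probs))
    flat = np.sort(losses.ravel())[::-1]      # hardest pixels first
    k = max(1, int(fraction * flat.size))     # number of pixels kept
    return flat[:k].mean()
```

Because only the hardest pixels contribute, the abundant easy background pixels cannot dominate the gradient, which is the motivation given above for unbalanced class distributions.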

4 Online Adaptation

Input: objectness network N, frames f_1, ..., f_T with ground truth mask y_1 for the first frame, positive threshold α, distance threshold d, total online steps n_online, current frame steps n_curr
1: Fine-tune N for 50 steps on (f_1, y_1)
2: mask_1 ← y_1
3: for t = 2, ..., T do
4:       posteriors ← forward pass of N on f_t
5:       positives ← pixels with posteriors > α
6:       dists ← distance transform of the eroded mask_{t-1}
7:       negatives ← pixels with dists > d
8:       targets ← positives and negatives; all remaining pixels are "don't care"
9:       if mask_{t-1} is non-empty then
10:            interleaved:
11:             Fine-tune N for n_curr steps on f_t using targets and a reduced loss weight
12:             Fine-tune N for n_online − n_curr steps on (f_1, y_1)
13:      end if
14:      posteriors ← forward pass of N on f_t
15:      mask_t ← thresholded posteriors with hard negatives removed
16:      Output mask_t for frame t
17:end for
Algorithm 1 Online Adaptive Video Object Segmentation (OnAVOS)

Since the appearance of the object of interest changes over time and new background objects can appear, we introduce an online adaptation scheme to adapt to these changes (see Algorithm 1). New objects entering the scene are especially problematic when pretraining for objectness, since they were never used as negative training examples and are thus assigned a high probability (see Fig. 1 (right) for an example).

The basic idea of our online adaptation scheme is to use pixels with very confident predictions as training examples. We select the pixels for which the predicted foreground probability exceeds a certain threshold as positive examples. One could argue that using these pixels as positive examples is useless, since the network already gives very confident predictions for them. However, it is important that the adaptation retains a memory of the positive class in order to create a counterweight to the many negative examples being added. In our experiments, leaving out this step resulted in holes in the foreground mask.

We initially selected negative training examples in the same way, i.e. using pixels with a very low foreground probability. However, this led to degraded performance, probably because during large appearance changes, false negative pixels will be selected as negative training examples, effectively destroying all chances to adapt to these changes. We thus select negative training examples in a different way, based on the assumption that the movement between two frames is small. The idea is to select all pixels which are very far away from the last predicted object mask. In order to deal with noise, the last mask can first be shrunk by an erosion operation. For our experiments, we use a square structuring element, and we found that its exact size is not critical. Afterwards, we compute a distance transform, which provides for each pixel the Euclidean distance to the closest foreground pixel of the mask. Finally, we treat all pixels whose distance exceeds a threshold d as negative examples.
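The selection of positive and negative training examples can be sketched as follows (a minimal SciPy illustration; the default values of alpha, dist_thresh, and erosion_size are illustrative placeholders, not the tuned values from the paper):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def select_training_examples(posteriors, last_mask, alpha=0.97,
                             dist_thresh=50, erosion_size=3):
    """Select online training examples from the current frame.

    Positives: pixels whose predicted foreground probability exceeds alpha.
    Negatives: pixels far away (> dist_thresh) from the eroded previous mask.
    Returns a label map: 1 = positive, 0 = negative, -1 = don't care.
    """
    eroded = binary_erosion(last_mask,
                            structure=np.ones((erosion_size, erosion_size)))
    # distance_transform_edt gives the distance to the nearest zero entry,
    # so invert the mask to measure distance to the nearest foreground pixel.
    dists = distance_transform_edt(~eroded)
    labels = np.full(posteriors.shape, -1, dtype=np.int64)
    labels[posteriors > alpha] = 1
    # Far-away pixels become negatives even when confidently predicted as
    # foreground -- these are exactly the "hard negatives".
    labels[dists > dist_thresh] = 0
    return labels
```

The ordering of the two assignments implements the precedence described in the text: a confidently predicted pixel far from the previous mask is still used as a negative example.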

Pixels which are neither marked as positive nor as negative examples are assigned a “don’t care” label and are ignored during the online updates. We can now fine-tune the network on the current frame, since every pixel has a label for training. However, in practice, we found that naively fine-tuning using the obtained training examples quickly leads to drift. To circumvent this problem, we propose to mix in the first frame as additional training examples during the online updates, since for the first frame the ground truth is available. We found that in order to obtain good results, the first frame should be sampled more often than the current frame, i.e. during online adaptation we perform a total of n_online update steps per frame, of which only n_curr are performed on the current frame, while the rest are performed on the first frame. Additionally, we reduce the weight of the loss for the current frame by a factor β. A small value of β might seem surprising, but one has to keep in mind that the first frame is used very often for updates, quickly leading to smaller gradients, while the current frame is only selected a few times.
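The interleaved update scheme can be sketched as follows (a minimal illustration; train_step stands in for one gradient step of the segmentation network, and the default values of n_online, n_curr, and curr_weight are our assumptions, not the paper's tuned values):

```python
def online_update(train_step, first_frame, current_frame,
                  n_online=15, n_curr=3, curr_weight=0.05):
    """Interleave online update steps: of n_online total steps per frame,
    only n_curr use the current frame with its online-selected labels and a
    down-weighted loss; the remaining steps revisit the ground-truth first
    frame to prevent drift.

    train_step(image, labels, loss_weight) performs one gradient step.
    """
    # Spread the n_curr current-frame steps evenly among the others.
    stride = max(1, n_online // n_curr)
    current_steps = set(sorted(range(0, n_online, stride))[:n_curr])
    for step in range(n_online):
        if step in current_steps:
            image, labels = current_frame
            train_step(image, labels, curr_weight)
        else:
            image, labels = first_frame
            train_step(image, labels, 1.0)
```

Revisiting the first frame far more often than the current frame is what keeps the online-selected (and possibly noisy) labels from pulling the network away from the known object identity.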

During online adaptation, the negative training examples are selected based on the mask of the preceding frame. Hence, it can happen that a pixel is selected as a negative example and that it is predicted as foreground at the same time. We call such pixels hard negatives. A common case in which hard negatives occur is when a previously unseen object enters the scene far away from the object of interest (see Fig. 1 (right)), which will then usually be detected as foreground by the network. We found it helpful to remove hard negatives from the foreground mask which is used in the next frame to determine negative training examples. This step allows selecting the hard negatives in the next frame again as negative examples. Additionally, we tried to adapt the network more strongly to hard negatives by increasing the number of update steps and/or the loss scale for the current frame in the presence of hard negatives. However, this did not improve the results further.

In addition to the previously described steps, we propose a simple heuristic which makes our method more robust against difficulties like occlusion: If (after the optional erosion) nothing is left of the last assumed foreground mask, we assume that the object of interest is lost and do not apply any online updates until the network again finds a non-empty foreground mask.

5 Experiments

Datasets.  For objectness pretraining (cf. Section 3), we used the 1,464 training images of the PASCAL VOC 2012 dataset [Everingham et al.(2015)Everingham, Eslami, Van Gool, Williams, Winn, and Zisserman] plus the additional annotations provided by Hariharan et al. [Hariharan et al.(2011)Hariharan, Arbelaez, Bourdev, Maji, and Malik], leading to a total of 10,582 training images with 20 classes, all of which we mapped to a single foreground class. For video object segmentation (VOS), we conducted most experiments on the recently introduced DAVIS dataset [Perazzi et al.(2016)Perazzi, Pont-Tuset, McWilliams, Van Gool, Gross, and Sorkine-Hornung], which consists of 50 short full-HD video sequences, of which 30 are taken for training and 20 for validation. Consistent with most prior work, we conduct all experiments on the subsampled 480p version of the dataset. In order to show that our method generalizes, we also conducted experiments on the YouTube-Objects [Prest et al.(2012)Prest, Leistner, Civera, Schmid, and Ferrari, Jain and Grauman(2014)] dataset for VOS, consisting of 126 sequences.

Experimental Setup.  We pretrain on PASCAL and DAVIS for 10 epochs each. For the baseline one-shot approach, we found 50 update steps on the first frame to work well. For simplicity, we used a mini-batch size of only one image. Since DAVIS only has a training and a validation set, we tuned all hyperparameters on the training set of 30 sequences using three-fold cross validation, i.e. for each fold, 20 sequences are used for training and 10 for validation. As is standard practice, we augmented the training data by random flipping, scaling with a randomly sampled factor, and gamma augmentations [Pohlen et al.(2017)Pohlen, Hermans, Mathias, and Leibe].

For evaluation, we used the Jaccard index, i.e. the mean intersection-over-union (mIoU) between the predicted foreground masks and the ground truth masks. Results for additional evaluation measures suggested by Perazzi et al. [Perazzi et al.(2016)Perazzi, Pont-Tuset, McWilliams, Van Gool, Gross, and Sorkine-Hornung] are shown in the supplementary material. We noticed that, especially for fine-tuning on the first frame, the random augmentations introduce non-negligible variations in the results. Hence, for these experiments, we conducted three runs and report mean and standard deviation values. All experiments were performed with our TensorFlow [Abadi et al.(2015)Abadi, Agarwal, Barham, Brevdo, Chen, Citro, Corrado, et al.] based implementation, which we will make available together with pretrained models at https://www.vision.rwth-aachen.de/software/OnAVOS.
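The evaluation measure can be sketched as follows (a minimal NumPy illustration of the per-frame Jaccard index averaged over a sequence; the handling of empty unions is our assumption):

```python
import numpy as np

def mean_iou(pred_masks, gt_masks):
    """Jaccard index: per-frame intersection-over-union of binary masks,
    averaged over all frames of a sequence."""
    ious = []
    for pred, gt in zip(pred_masks, gt_masks):
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        # If both masks are empty, count the frame as a perfect prediction.
        ious.append(inter / union if union > 0 else 1.0)
    return float(np.mean(ious))
```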

5.1 Baseline Systems

Table 1: Effect of the three (pre-)training steps (PASCAL, DAVIS, first frame) on the DAVIS validation set, reported in mIoU [%] for all combinations of enabled steps. As can be seen, each of the three training steps is useful. The objectness pretraining step on PASCAL significantly improves the results.

Effect of Pretraining Steps.  Starting from the base network (cf. Section 3), our full baseline system (i.e. without adaptation) includes a first pretraining step on PASCAL for objectness, then pretraining on the training sequences of DAVIS, and finally a one-shot fine-tuning on the first frame. Each of these three steps can be enabled or disabled individually. Table 1 shows the results on DAVIS for all resulting combinations. As can be seen, each of these steps is useful, since removing any step always deteriorates the results.

The base network was trained for a different task than binary segmentation, so a new output layer needs to be learned at the same time as fine-tuning the rest of the network. Without pretraining on either PASCAL or DAVIS, the randomly initialized output layer is learned only from the first frame of the target sequence, which leads to a strongly degraded performance of only 65.2% mIoU. However, when either PASCAL or DAVIS is used for pretraining, the result is greatly improved to 77.6% mIoU and 78.0% mIoU, respectively. While both results are very similar, PASCAL and DAVIS do provide complementary information, since using both datasets together further improves the result to 80.3%. We argue that the relatively large PASCAL dataset is useful for learning general objectness, while the limited amount of DAVIS data is useful to adapt to the characteristics (e.g. relatively high image quality) of DAVIS, which provides an advantage when evaluating on DAVIS sequences.

Interestingly, even without looking at the segmentation mask of the first frame, i.e. in the unsupervised setup, we already obtain a result of 72.7% mIoU; slightly better than the current best unsupervised method FusionSeg [Jain et al.(2017a)Jain, Xiong, and Grauman], which uses objectness and optical flow as an additional cue and obtains 70.7% mIoU on the DAVIS validation set. (FusionSeg reports results over all sequences including the training set; for better comparability, we calculated the average only over the validation sequences.)

Comparison to OSVOS.  Without including its boundary snapping post-processing step, OSVOS achieves 77.4% mIoU on DAVIS. Our system without objectness pretraining on PASCAL is directly comparable to this result and achieves 78.0% mIoU. We attribute this moderate improvement to the more recent network architecture we adopted. Including PASCAL for objectness pretraining improves this result by a further 2.3% to 80.3%.

5.2 Online Adaptation

Hyperparameter Study.  As described in Section 4, OnAVOS involves relatively many hyperparameters. After some coarse manual tuning on the DAVIS training set, we found a well-working operating point for the positive threshold α, the distance threshold d, the number of online steps n_online, the number of current frame steps n_curr, and the loss weight β. While the initial update steps on the first frame are performed with a fixed learning rate, it proved useful to use a different learning rate for the online updates on the current and the first frame. Starting from this operating point, we conducted a more detailed study by changing one hyperparameter at a time while keeping the others constant. We found that OnAVOS is not very sensitive to the choice of most hyperparameters: each configuration we tried performed better than the non-adapted baseline, and we achieved only small improvements compared to the operating point (detailed plots are shown in the supplementary material). To avoid overfitting to the small DAVIS training set, we kept the values from the operating point for all further experiments.

Ablation Study.  Table 2 shows the results of the proposed online adaptation scheme, together with multiple variants in which parts of the algorithm are disabled, on the DAVIS validation set. Using the full method, we obtain the best mIoU score. When disabling all adaptation steps, the performance degrades significantly, which demonstrates the effectiveness of the online adaptation method. The table further shows that negative training examples are more important than positive ones. If we do not mix in the first frame during online updates, the result is significantly degraded to 69.1% due to drift.

Timing Information.  For the initial fine-tuning stage on the first frame, we used 50 update steps. Including the time for the forward pass over all further frames, this leads to a total runtime of around 90 seconds per sequence (roughly 1.3 seconds per frame) of the DAVIS validation set using an NVIDIA Titan X (Pascal) GPU. With online adaptation enabled, the runtime increases to around 15 minutes per sequence (roughly 13 seconds per frame). However, our hyperparameter analysis revealed that this runtime can be decreased significantly by reducing the number of online update steps without much loss of accuracy. Note that for best results, OSVOS uses a higher number of update steps on the first frame and needs about 10 minutes per sequence (roughly 9 seconds per frame).

Method mIoU [%]
No adaptation
Full adaptation
Only negatives
Only positives
No first frame during online adaptation
Table 2: Online adaptation ablation experiments on the DAVIS validation set. As can be seen, mixing in the first frame during online updates is essential, and negative examples are more important than positive ones.

5.3 Comparison to State of the Art

Method DAVIS YouTube-Objects
mIoU [%] mIoU [%]
OnAVOS (ours), no adaptation
+CRF
+CRF +Test time augmentations
OnAVOS (ours), online adaptation
+CRF
+CRF +Test time augmentations
OSVOS [Caelles et al.(2017)Caelles, Maninis, Pont-Tuset, Leal-Taixé, Cremers, and Van Gool]
MaskTrack [Perazzi et al.(2017)Perazzi, Khoreva, Benenson, Schiele, and Sorkine-Hornung]
LucidTracker [Khoreva et al.(2017)Khoreva, Benenson, Ilg, Brox, and Schiele]
VPN [Jampani et al.(2017)Jampani, Gadde, and Gehler] -
Table 3: Comparison to the state of the art on the DAVIS validation set and the YouTube-Objects dataset. : Concurrent work only published on arXiv. More results are shown in the supplementary material.

Current state-of-the-art methods use post-processing steps such as boundary snapping [Caelles et al.(2017)Caelles, Maninis, Pont-Tuset, Leal-Taixé, Cremers, and Van Gool] or conditional random field (CRF) smoothing [Perazzi et al.(2017)Perazzi, Khoreva, Benenson, Schiele, and Sorkine-Hornung, Khoreva et al.(2017)Khoreva, Benenson, Ilg, Brox, and Schiele] to improve the contours. In order to compare with them, we include per-frame post-processing using DenseCRF [Krähenbühl and Koltun(2011)]. This might be especially useful since our network only provides one output for each pixel block. Additionally, we apply data augmentation at test time: we create 10 variants of each test image by random flipping, zooming, and gamma augmentations, and average the posterior probabilities over all 10 images.
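The test-time augmentation step can be sketched as follows. This is an illustrative snippet rather than the paper's implementation: it covers only random flips and gamma changes (zooming is omitted for brevity), assumes images normalized to [0, 1], and the augmentation ranges are placeholders.

```python
import numpy as np

def augmented_posteriors(image, predict, n_augment=10, rng=None):
    """Average per-pixel posteriors over randomly augmented test images.

    `predict` maps an H x W x C image to an H x W array of foreground
    posteriors. Each augmentation must be undone on the prediction so
    that the averaged posteriors stay pixel-aligned with the input.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    acc = np.zeros(image.shape[:2])
    for _ in range(n_augment):
        flip = rng.random() < 0.5
        gamma = rng.uniform(0.7, 1.5)
        aug = image[:, ::-1] if flip else image
        aug = np.clip(aug, 0.0, 1.0) ** gamma  # gamma augmentation
        post = predict(aug)
        if flip:
            post = post[:, ::-1]  # undo the flip so pixels line up
        acc += post
    return acc / n_augment
```

Averaging posteriors (rather than hard masks) lets confident predictions dominate, and, when combined with online adaptation, the smoothed posteriors also yield cleaner adaptation targets, which is consistent with the stronger effect observed in Table 3.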

In order to demonstrate the generalization ability of OnAVOS and since there is no separate training set for YouTube-Objects, we conducted our experiments on this dataset using the same hyperparameter values as for DAVIS, including the CRF parameters. Additionally, we omitted the pretraining step on DAVIS. Note that for YouTube-Objects, the evaluation protocols in prior publications sometimes differed by not including frames in which the object of interest is not present [Khoreva et al.(2017)Khoreva, Benenson, Ilg, Brox, and Schiele]. Here, we report results following the DAVIS evaluation protocol, i.e. including these frames, consistent with Khoreva et al. [Khoreva et al.(2017)Khoreva, Benenson, Ilg, Brox, and Schiele].

Table 3 shows the effect of our post-processing steps and compares our results on DAVIS and YouTube-Objects to other methods. Note that the effect of the test-time augmentations is stronger when combined with online adaptation. We argue that this is because, in this case, the augmentations not only directly improve the end result as a post-processing step, but also deliver better adaptation targets. On DAVIS, we achieve an mIoU of , which is, to the best of our knowledge, significantly higher than any previously published result. Compared to OSVOS, this is an improvement of almost 6%. On YouTube-Objects, we achieve an mIoU of , which is also a significant improvement over the second best result, obtained by LucidTracker with .

6 Conclusion

In this work, we have proposed OnAVOS, which builds on the OSVOS approach. We have demonstrated that the inclusion of an objectness pretraining step and our online adaptation scheme for semi-supervised video object segmentation are highly effective. We have further shown that our online adaptation scheme is robust to the choice of hyperparameters and generalizes to another dataset. We expect that, in the future, more methods will adopt adaptation schemes which make them more robust against large changes in appearance. For future work, we plan to explicitly incorporate temporal context information into our method.

Acknowledgements.  The work in this paper is funded by the EU project STRANDS (ICT-2011-600623) and the ERC Starting Grant project CV-SUPER (ERC-2012-StG-307432). We would like to thank István Sárándi, Jörg Stückler, Lucas Beyer, and Aljoša Ošep for helpful discussions and proofreading.

References

  • [Abadi et al.(2015)Abadi, Agarwal, Barham, Brevdo, Chen, Citro, Corrado, et al.] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015.
  • [Babaee et al.(2017)Babaee, Dinh, and Rigoll] M. Babaee, D. T. Dinh, and G. Rigoll. A deep convolutional neural network for background subtraction. arXiv preprint arXiv:1702.01731, 2017.
  • [Bai et al.(2009)Bai, Wang, Simons, and Sapiro] X. Bai, J. Wang, D. Simons, and G. Sapiro. Video snapcut: Robust video object cutout using localized classifiers. ACM Trans. Graphics, 28(3):70:1–70:11, 2009.
  • [Bai et al.(2010)Bai, Wang, and Sapiro] X. Bai, J. Wang, and G. Sapiro. Dynamic color flow: A motion-adaptive color model for object segmentation in video. In ECCV, 2010.
  • [Bertinetto et al.(2016)Bertinetto, Valmadre, Henriques, Vedaldi, and Torr] L. Bertinetto, J. Valmadre, J. F. Henriques, A. Vedaldi, and P. H. S. Torr. Fully-convolutional siamese networks for object tracking. In ECCV Workshops, 2016.
  • [Braham and Droogenbroeck(2016)] M. Braham and M. Van Droogenbroeck. Deep background subtraction with scene-specific convolutional neural networks. In Int. Conf. on Systems, Signals and Image Proc., 2016.
  • [Caelles et al.(2017)Caelles, Maninis, Pont-Tuset, Leal-Taixé, Cremers, and Van Gool] S. Caelles, K.-K. Maninis, J. Pont-Tuset, L. Leal-Taixé, D. Cremers, and L. Van Gool. One-shot video object segmentation. In CVPR, 2017.
  • [Chang et al.(2013)Chang, Wei, and III] J. Chang, D. Wei, and J. W. Fisher III. A video representation using temporal superpixels. In CVPR, 2013.
  • [Deng et al.(2009)Deng, Dong, Socher, Li, Li, and Fei-Fei] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, 2009.
  • [Ellis and Zografos(2012)] L. F. Ellis and V. Zografos. Online learning for fast segmentation of moving objects. In ACCV, 2012.
  • [Everingham et al.(2015)Everingham, Eslami, Van Gool, Williams, Winn, and Zisserman] M. Everingham, S. M. A. Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL visual object classes challenge: A retrospective. IJCV, 111(1):98–136, 2015.
  • [Fan et al.(2015)Fan, Zhong, Lischinski, Cohen-Or, and Chen] Q. Fan, F. Zhong, D. Lischinski, D. Cohen-Or, and B. Chen. Jumpcut: Non-successive mask transfer and interpolation for video cutout. SIGGRAPH Asia, 34(6), 2015.
  • [Freund and Schapire(1995)] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In Europ. Conf. on Comput. Learning Theory, 1995.
  • [Grabner and Bischof(2006)] H. Grabner and H. Bischof. On-line boosting and vision. In CVPR, 2006.
  • [Grundmann et al.(2010)Grundmann, Kwatra, Han, and Essa] M. Grundmann, V. Kwatra, M. Han, and I. Essa. Efficient hierarchical graph-based video segmentation. In CVPR, 2010.
  • [Hariharan et al.(2011)Hariharan, Arbelaez, Bourdev, Maji, and Malik] B. Hariharan, P. Arbelaez, L. D. Bourdev, S. Maji, and J. Malik. Semantic contours from inverse detectors. In ICCV, 2011.
  • [He et al.(2016)He, Zhang, Ren, and Sun] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
  • [Held et al.(2016)Held, Thrun, and Savarese] D. Held, S. Thrun, and S. Savarese. Learning to track at 100 fps with deep regression networks. In ECCV, 2016.
  • [Jain and Grauman(2014)] S. D. Jain and K. Grauman. Supervoxel-consistent foreground propagation in video. In ECCV, 2014.
  • [Jain et al.(2017a)Jain, Xiong, and Grauman] S. D. Jain, B. Xiong, and K. Grauman. Fusionseg: Learning to combine motion and appearance for fully automatic segmention of generic objects in videos. In CVPR, 2017a.
  • [Jain et al.(2017b)Jain, Xiong, and Grauman] S. D. Jain, B. Xiong, and K. Grauman. Pixel objectness. arXiv preprint arXiv:1701.05349, 2017b.
  • [Jampani et al.(2017)Jampani, Gadde, and Gehler] V. Jampani, R. Gadde, and P. V. Gehler. Video propagation networks. In CVPR, 2017.
  • [Kalal et al.(2012)Kalal, Mikolajczyk, and Matas] Z. Kalal, K. Mikolajczyk, and J. Matas. Tracking-learning-detection. PAMI, 34(7):1409–1422, 2012.
  • [Khoreva et al.(2017)Khoreva, Benenson, Ilg, Brox, and Schiele] A. Khoreva, R. Benenson, E. Ilg, T. Brox, and B. Schiele. Lucid data dreaming for object tracking. arXiv preprint arXiv:1703.09554, 2017.
  • [Kingma and Ba(2015)] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
  • [Krähenbühl and Koltun(2011)] P. Krähenbühl and V. Koltun. Efficient inference in fully connected CRFs with gaussian edge potentials. In NIPS, 2011.
  • [Li et al.(2016)Li, Li, and Porikli] H. Li, Y. Li, and F. Porikli. Deeptrack: Learning discriminative feature representations online for robust visual tracking. Trans. Image Proc., 25(4):1834–1848, 2016.
  • [Lin et al.(2014)Lin, Maire, Belongie, Hays, Perona, Ramanan, Dollár, and Zitnick] T-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
  • [Long et al.(2015)Long, Shelhamer, and Darrell] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
  • [Maerki et al.(2016)Maerki, Perazzi, Wang, and Sorkine-Hornung] N. Maerki, F. Perazzi, O. Wang, and A. Sorkine-Hornung. Bilateral space video segmentation. In CVPR, 2016.
  • [Nam and Han(2016)] H. Nam and B. Han. Learning multi-domain convolutional neural networks for visual tracking. In CVPR, 2016.
  • [Papoutsakis and Argyros(2013)] K. E. Papoutsakis and A. A. Argyros. Integrating tracking with fine object segmentation. Image and Vision Computing, 31(10):771–785, 2013.
  • [Perazzi et al.(2015)Perazzi, Wang, Gross, and Sorkine-Hornung] F. Perazzi, O. Wang, M. Gross, and A. Sorkine-Hornung. Fully connected object proposals for video segmentation. In ICCV, 2015.
  • [Perazzi et al.(2016)Perazzi, Pont-Tuset, McWilliams, Van Gool, Gross, and Sorkine-Hornung] F. Perazzi, J. Pont-Tuset, B. McWilliams, L. Van Gool, M. Gross, and A. Sorkine-Hornung. A benchmark dataset and evaluation methodology for video object segmentation. In CVPR, 2016.
  • [Perazzi et al.(2017)Perazzi, Khoreva, Benenson, Schiele, and Sorkine-Hornung] F. Perazzi, A. Khoreva, R. Benenson, B. Schiele, and A. Sorkine-Hornung. Learning video object segmentation from static images. In CVPR, 2017.
  • [Pohlen et al.(2017)Pohlen, Hermans, Mathias, and Leibe] T. Pohlen, A. Hermans, M. Mathias, and B. Leibe. Full-resolution residual networks for semantic segmentation in street scenes. In CVPR, 2017.
  • [Prest et al.(2012)Prest, Leistner, Civera, Schmid, and Ferrari] A. Prest, C. Leistner, J. Civera, C. Schmid, and V. Ferrari. Learning object class detectors from weakly annotated video. In CVPR, 2012.
  • [Ramakanth and Babu(2014)] S. A. Ramakanth and R. V. Babu. Seamseg: Video object segmentation using patch seams. In CVPR, 2014.
  • [Simonyan and Zisserman(2015)] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
  • [Tokmakov et al.(2017)Tokmakov, Alahari, and Schmid] P. Tokmakov, K. Alahari, and C. Schmid. Learning motion patterns in videos. In CVPR, 2017.
  • [Tsai et al.(2016)Tsai, Yang, and Black] Y-H. Tsai, M-H. Yang, and M. J. Black. Video segmentation via object flow. In CVPR, 2016.
  • [W. and B.(2017)] W. Wang and J. Shen. Super-trajectory for video segmentation. arXiv preprint arXiv:1702.08634, 2017.
  • [Wang and Yeung(2013)] N. Wang and D-Y. Yeung. Learning a deep compact image representation for visual tracking. In NIPS, 2013.
  • [Wang et al.(2016)Wang, Luo, and Jodoin] Y. Wang, Z. Luo, and P.-M. Jodoin. Interactive deep learning method for segmenting moving objects. Pattern Recognition Letters, 2016.
  • [Wu et al.(2016a)Wu, Shen, and Hengel] Z. Wu, C. Shen, and A. v. d. Hengel. Wider or deeper: Revisiting the resnet model for visual recognition. arXiv preprint arXiv:1611.10080, 2016a.
  • [Wu et al.(2016b)Wu, Shen, and v. d. Hengel] Z. Wu, C. Shen, and A. v. d. Hengel. Bridging category-level and instance-level semantic image segmentation. arXiv preprint arXiv:1605.06885, 2016b.
  • [Yu and Koltun(2016)] F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. In ICLR, 2016.

Supplementary Material

Appendix A More Comprehensive Comparison to Other Methods

Table 4 shows a more comprehensive comparison of our results to the results obtained by other methods.

Method DAVIS YouTube-Objects
mIoU [%] mIoU [%]
OnAVOS (ours), no adaptation
OnAVOS (ours), online adaptation
OSVOS [Caelles et al.(2017)Caelles, Maninis, Pont-Tuset, Leal-Taixé, Cremers, and Van Gool]
MaskTrack [Perazzi et al.(2017)Perazzi, Khoreva, Benenson, Schiele, and Sorkine-Hornung]
LucidTracker [Khoreva et al.(2017)Khoreva, Benenson, Ilg, Brox, and Schiele]
VPN [Jampani et al.(2017)Jampani, Gadde, and Gehler] -
FCP [Perazzi et al.(2015)Perazzi, Wang, Gross, and Sorkine-Hornung] 63.1 -
BVS [Maerki et al.(2016)Maerki, Perazzi, Wang, and Sorkine-Hornung] 66.5 59.7
OFL [Tsai et al.(2016)Tsai, Yang, and Black] 71.1 70.1
STV [W. and B.(2017)] 73.6 -

Table 4: Comparison to other methods on the DAVIS validation set and the YouTube-Objects dataset. Note that MaskTrack [Perazzi et al.(2017)Perazzi, Khoreva, Benenson, Schiele, and Sorkine-Hornung] and LucidTracker [Khoreva et al.(2017)Khoreva, Benenson, Ilg, Brox, and Schiele] report results on DAVIS for all sequences including the training set, but here we show their results for the validation set only. : Concurrent work only published on arXiv.

Appendix B Additional Evaluation Measures for DAVIS

Table 5 shows a more detailed evaluation on the DAVIS validation set using the evaluation measures suggested by Perazzi et al. [Perazzi et al.(2016)Perazzi, Pont-Tuset, McWilliams, Van Gool, Gross, and Sorkine-Hornung]. The measures used here are the Jaccard index , defined as the mean intersection-over-union (mIoU) between the predicted foreground masks and the ground truth masks; the contour accuracy measure , which measures how well the segmentation boundaries agree; and the temporal stability measure , which measures the consistency of the predicted masks over time. For more details on these measures, we refer the interested reader to Perazzi et al. [Perazzi et al.(2016)Perazzi, Pont-Tuset, McWilliams, Van Gool, Gross, and Sorkine-Hornung]. Note that the results for the additional measures are missing for LucidTracker [Khoreva et al.(2017)Khoreva, Benenson, Ilg, Brox, and Schiele], since they are only reported averaged over all 50 sequences of DAVIS and not for the validation set.
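The Jaccard index used throughout this evaluation can be sketched in a few lines. This is an illustrative implementation; the convention of scoring an empty prediction against an empty ground-truth mask as 1 is an assumption for frames where the object is absent, not something the benchmark definition above spells out.

```python
import numpy as np

def jaccard(pred, gt):
    """Intersection-over-union between two binary masks (the J measure)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # assumed convention: empty vs. empty counts as perfect
    return float(np.logical_and(pred, gt).sum() / union)

def mean_jaccard(preds, gts):
    """Mean IoU over the frames of a sequence."""
    return float(np.mean([jaccard(p, g) for p, g in zip(preds, gts)]))
```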

The table shows that each evaluation measure is significantly improved by the proposed online adaptation scheme. OnAVOS obtains the best mean results for all three measures. It is surprising that our result for the temporal stability is better than the result by MaskTrack [Perazzi et al.(2017)Perazzi, Khoreva, Benenson, Schiele, and Sorkine-Hornung], although in contrast to our method, they explicitly incorporate temporal context by propagating masks.

Measure OnAVOS (ours) OSVOS [Caelles et al.(2017)Caelles, Maninis, Pont-Tuset, Leal-Taixé, Cremers, and Van Gool] MaskTrack [Perazzi et al.(2017)Perazzi, Khoreva, Benenson, Schiele, and Sorkine-Hornung] LucidTracker [Khoreva et al.(2017)Khoreva, Benenson, Ilg, Brox, and Schiele]
Un-adapted Adapted
mean
recall -
decay -
mean -
recall -
decay -
mean -
Table 5: Additional evaluation measures on the DAVIS validation set. Best and second best results are highlighted with bold and italic fonts, respectively.

Appendix C Per-Sequence Results for DAVIS

Table 6 shows mIoU results for each of the 20 sequences of the DAVIS validation set. On 18 out of 20 sequences, OnAVOS obtains either the best or the second best result.

Sequence Method, mIoU [%]
OnAVOS (ours) OSVOS [Caelles et al.(2017)Caelles, Maninis, Pont-Tuset, Leal-Taixé, Cremers, and Van Gool] MaskTrack [Perazzi et al.(2017)Perazzi, Khoreva, Benenson, Schiele, and Sorkine-Hornung] LucidTracker [Khoreva et al.(2017)Khoreva, Benenson, Ilg, Brox, and Schiele]
Un-adapted Adapted
blackswan
bmx-trees
breakdance
camel
car-roundabout
car-shadow
cows
dance-twirl
dog
drift-chicane
drift-straight
goat
horsejump-high
kite-surf
libby
motocross-jump
paragliding-launch
parkour
scooter-black
soapbox
mean
Table 6: Per-sequence results on the DAVIS validation set. Best and second best results are highlighted with bold and italic fonts, respectively.

Appendix D Hyperparameter Study on DAVIS

(a) online learning rate
(b) online loss scale
(c) distance threshold
(d) total steps
(e) update steps
(f) positive threshold
(g) erosion size
Figure 3: Influence of online adaptation hyperparameters on the DAVIS training set. The blue circle marks the operating point, based on which one parameter is changed at a time. The dashed line marks the un-adapted baseline. The plots show that overall our method is very robust against the exact choice of hyperparameters, except for the online learning rate . The standard deviations estimated by three runs are shown as error bars. In some cases, including the operating point, the estimated standard deviation is so small that it is hardly visible.

As described in the main paper, we found suitable values on DAVIS for the online learning rate, the online loss scale, the distance threshold, the numbers of update steps, the positive threshold, and the erosion size. Starting from these values as the operating point, we conducted a more detailed hyperparameter study by changing one hyperparameter at a time while keeping all others constant (see Fig. 3). The plots show that the performance of OnAVOS is in general very stable with respect to the choice of most of its hyperparameters, and for every configuration we tried, the result was better than the un-adapted baseline (the dashed line in the plots). The single most important hyperparameter is the online learning rate , as is common for deep learning approaches. The online loss scale and the positive threshold have a moderate influence on performance, while changing the distance threshold and the numbers of steps within a reasonable range only leads to minor changes in accuracy. For the erosion size, the optimum is achieved at 1, i.e. when no erosion is applied. This result suggests that the erosion operation is not helpful for DAVIS. The plots show that there is still some potential for improving the results by further tuning the hyperparameters. However, this study was meant as a characterization of our method rather than a systematic tuning.
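The interaction of these hyperparameters in one round of online adaptation can be sketched as follows. This is an illustrative skeleton, not the paper's code: the split between current-frame and first-frame steps, the default values, and the `train_step(example, weight, lr)` signature are all assumptions, and the exact interleaving used in OnAVOS may differ.

```python
def online_update(train_step, first_frame, current_frame,
                  n_total=15, n_curr=3, beta=0.05, lr_online=1e-5):
    """One round of online adaptation for a new frame (sketch).

    Of `n_total` update steps, `n_curr` train on the current frame's
    selected targets, down-weighted by the online loss scale `beta` to
    limit drift; the remaining steps revisit the first frame's ground
    truth, which anchors the network to the annotated appearance.
    """
    for step in range(n_total):
        if step < n_curr:
            train_step(current_frame, weight=beta, lr=lr_online)
        else:
            train_step(first_frame, weight=1.0, lr=lr_online)
```

This structure makes the roles of the swept hyperparameters explicit: the learning rate and loss scale control how aggressively the network moves toward the (noisy) current-frame targets, while the step counts trade runtime against adaptation strength.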

The generalizability and the robustness of OnAVOS with respect to the choice of hyperparameters is further confirmed by the experiments on YouTube-Objects, which used the same hyperparameter settings as on DAVIS.
