Discriminative Online Learning for Fast Video Object Segmentation

Andreas Robinson*¹, Felix Järemo Lawin*¹, Martin Danelljan¹·², Fahad Shahbaz Khan¹·³, Michael Felsberg¹
*Authors contributed equally. ¹Computer Vision Laboratory, Linköping University, Sweden. ²Computer Vision Laboratory, ETH Zürich, Switzerland. ³Inception Institute of Artificial Intelligence, Abu Dhabi, UAE.
Abstract

We address the highly challenging problem of video object segmentation. Given only the initial mask, the task is to segment the target in the subsequent frames. In order to effectively handle appearance changes and similar background objects, a robust representation of the target is required. Previous approaches either rely on fine-tuning a segmentation network on the first frame, or employ generative appearance models. Although partially successful, these methods often suffer from impractically low frame rates or unsatisfactory robustness.

We propose a novel approach, based on a dedicated target appearance model that is exclusively learned online to discriminate between the target and background image regions. Importantly, we design a specialized loss and customized optimization techniques to enable highly efficient online training. Our light-weight target model is integrated into a carefully designed segmentation network, trained offline to enhance the predictions generated by the target model. Extensive experiments are performed on three datasets. Our approach achieves an overall score of over 70 on YouTube-VOS, while operating at 25 frames per second.

Figure 1: In our video segmentation approach (top), we first extract image features using a pre-trained ResNet-101. These features are then processed by our target appearance model, which is trained online to generate a robust coarse segmentation of the target. Guided by the image features, these segmentation scores are enhanced and upsampled by the refinement network. Our approach demonstrates significant robustness in the presence of background distractors (bottom left) and appearance changes (bottom right), owing to the discriminative and adaptive capabilities of the proposed target model.

1 Introduction

Video object segmentation is one of the fundamental problems within the field of computer vision, with numerous applications in robotics, surveillance, autonomous driving and action recognition. The task is to predict pixel-accurate masks of the region occupied by a specific target object, in every frame of a given video sequence. In this work we focus on the semi-supervised setting, where the ground truth mask of the target is provided in the first frame. Challenges arise in dynamic environments with similar background objects and when the target undergoes considerable appearance changes or occlusions. Successful video object segmentation therefore requires both robust and accurate pixel classification of the target region.

In the pursuit of robustness, several methods choose to fine-tune a segmentation network on the first frame, given the ground-truth mask. Although this has the potential to generate accurate segmentation masks under favorable circumstances, these methods suffer from extremely low frame-rates. Moreover, such extensive fine-tuning is prone to overfit to a single view of the scene, while degrading generic segmentation functionality learned during offline training. This limits performance in more challenging videos involving drastic appearance changes, occlusions and distractor objects in the background [36].

In this work, we propose a novel approach to address the challenges involved in video object segmentation. We introduce a dedicated target appearance model that is exclusively learned online, to discriminate the target region from the background. This model is integrated into our final segmentation architecture by designing a refinement network that produces high-quality segmentations based on predictions generated by the target model.

Unlike the methods relying on first-frame fine-tuning, our refinement network is trained offline to be target agnostic. Consequently, it retains powerful and generic object segmentation functionality, while target-specific learning is performed entirely by the target appearance module. A simplified illustration of our network is provided in figure 1.

We propose a discriminative online target model, capable of robustly differentiating between the target and background, by addressing the following challenges: (i) The model must be inferred solely from online data. (ii) The learning procedure needs to be highly efficient in order to maintain low computational complexity. (iii) The model should be capable of robust and controlled online updates to handle extensive appearance changes and distractor objects. All these aspects are accommodated by learning a light-weight fully-convolutional network head, with the goal to provide coarse segmentation scores of the target. We design a suitable objective and utilize specialized optimization techniques in order to efficiently learn a highly discriminative target model during online operation.

With the aim of constructing a simple framework, we refrain from using optical flow, post-processing and other additional components. Our final approach, consisting of a single deep network for segmentation and the target appearance model, is easily trained on video segmentation data in a single phase. Comprehensive experiments are performed on three benchmarks: DAVIS 2016 [30], DAVIS 2017 [30] and YouTube-VOS [36]. We first analyze our approach in a series of ablative comparisons and then compare it to several state-of-the-art approaches. Our method achieves an overall score of 73.4 on DAVIS 2017 and 71.0 on YouTube-VOS, while operating at 15 frames per second. In addition, a faster version of our approach achieves 25 frames per second, with only slight degradation in segmentation accuracy.

2 Related work

The task of video object segmentation has seen extensive study and rapid development in recent years, largely driven by the introduction and evolution of benchmarks such as DAVIS [30] and YouTube-VOS [35].

First-frame fine-tuning: Most state-of-the-art approaches train a segmentation network offline, and then fine-tune it on the first frame [29, 3, 26, 36] to learn the target-specific appearance. This philosophy was extended [32] by additionally fine-tuning on subsequent video frames. Other approaches [7, 17, 25] further integrate optical flow as an additional cue. While obtaining impressive results on the DAVIS 2016 dataset, the extensive fine-tuning leads to impractically long running times. Furthermore, such extensive fine-tuning is prone to over-fitting, a problem only partially addressed by heavy data augmentation [21].

Non-causal methods: Another line of research tackles the video object segmentation problem by allowing non-causal processing, by e.g. fine-tuning over blocks of video frames [23]. Other methods [1, 19] adopt spatio-temporal Markov Random Fields (MRFs), or infer discriminative location specific embeddings [8]. In this work, we focus on the causal setting in order to accommodate real-time applications.

Mask propagation: Several recent methods [29, 27, 37, 20] employ a mask-propagation module to improve spatio-temporal consistency of the segmentation. In [29], the model is learned offline to predict the target mask through refinement of the previous frame’s segmentation output. RGMP [27] attempts to further avoid first-frame fine-tuning by concatenating the current frame features with a target representation generated in the first frame. A slightly different approach is proposed in [37], where the mask of the previous frame is represented as a spatial Gaussian and a modulation vector. Unlike these methods, we do not rely on the spatio-temporal consistency assumptions imposed by mask propagation. Instead, we use previous segmentation masks to train the discriminative model.

Generative approaches: Another group of methods [2, 34, 5, 18, 33] incorporate light-weight generative models of the target object. Rather than fine-tuning the network on the first frame, these methods first construct appearance models from features corresponding to the initial target labels. Features from incoming frames are then evaluated by the appearance model with techniques inspired by classical machine learning clustering methods [5, 20] or feature matching approaches [18, 33].

Tracking: Visual object tracking is similar to video object segmentation in that both problems involve following a specific object, although the former outputs bounding boxes rather than a pixel-wise segmentation. The problem of efficient online learning of discriminative target-specific appearance models has been extensively explored in visual tracking [15, 12]. Recently, optimization-based trackers, such as ECO [10] and ATOM [9], have achieved impressive results on benchmarks. These methods train convolution filters to discriminate between target and background.

The close relation between the two problem domains is made explicit in video object segmentation methods such as [6], where object trackers are used as external components to locate the target. In contrast, we do not employ off-the-shelf trackers to predict the target location. We instead take inspiration from optimization-based learning employed in recently introduced trackers [10, 9], to train a discriminative target model online, combining it with a refinement network trained offline.

3 Method

Figure 2: Overview of our video segmentation architecture, consisting of the feature extractor F, the refinement network R and the target appearance model D. The pre-trained feature extractor F first produces feature maps from the input image (top left), at five depths. The online-trained target model D then generates coarse target segmentation scores (bottom right) from the second-deepest feature map. The scores and all five feature maps are finally passed to the offline-trained refinement network R, to produce the output segmentation mask (bottom left). R consists of the refinement modules R_d and target segmentation encoder (TSE) blocks T_d (detailed in the gray inset, right).

In this work we propose a method for video object segmentation, integrating a powerful target appearance model into a deep neural network. The target model is trained online on the given sequence to differentiate between target and background appearance. We employ a specialized optimization technique, which is crucial for efficiently training the target model online and in real-time. All other parts of the network are fully trained offline, thereby avoiding any online fine-tuning.

Our approach has three main building blocks: a feature-extraction network F, a target model D and a refinement network R. The target model generates a coarse prediction of the target location given deep features as input. These predictions are then improved and up-sampled by R, which is trained offline. We train the target model D online on a dataset M of target labels and corresponding features. The full architecture is illustrated in figure 2.

During inference, we train D on the first frame with the given ground-truth target mask and features extracted by F. In the subsequent frames, the target model first predicts a coarse target segmentation score map s. This score map together with features from F are processed by R, which outputs a high-quality segmentation mask. The mask and associated features are stored in M and are used to update D before the next incoming frame.

3.1 Target model

We aim to develop a powerful and discriminative model of the target appearance, capable of differentiating between the target and background image regions. To successfully accommodate the video object segmentation problem, the model must be robust to significant appearance changes and distractor objects. Moreover, it needs to be easily updated with new data and efficiently trainable. Since these aspects have successfully been addressed in visual tracking, we take inspiration from recent research in this related field in the development of our approach. We employ a light-weight discriminative model that is exclusively trained online. Our target model D, parameterized by θ, takes an input feature map x and outputs coarse segmentation scores s. For our purpose, the target model is realized as two fully-convolutional layers,

$$D_\theta(x) = w_2 \ast (w_1 \ast x) \qquad (1)$$

We use a factorized formulation for efficiency, where the initial layer w_1 reduces the feature dimensionality while the second layer w_2 computes the segmentation scores.
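As an illustration, the factorized two-layer model of eq. (1) can be sketched in NumPy. The 1x1 kernel for the first layer and the zero padding are assumptions for this sketch, not choices specified above:

```python
import numpy as np

def conv1x1(x, w):
    # x: (C_in, H, W), w: (C_out, C_in); a 1x1 convolution is a
    # per-pixel linear map over channels
    return np.einsum('oc,chw->ohw', w, x)

def conv3x3(x, w):
    # x: (C_in, H, W), w: (C_out, C_in, 3, 3), zero padding of 1
    c_in, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((w.shape[0], h, wd))
    for i in range(3):
        for j in range(3):
            out += np.einsum('oc,chw->ohw', w[:, :, i, j],
                             xp[:, i:i + h, j:j + wd])
    return out

def target_model(x, w1, w2):
    # D(x) = w2 * (w1 * x): channel reduction, then a 3x3 scoring layer
    return conv3x3(conv1x1(x, w1), w2)
```

A call such as `target_model(x, w1, w2)` with `x` of shape `(1024, H/16, W/16)` would produce a single-channel score map at the same spatial resolution.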

The target model is trained online based on image features x_k and the corresponding target segmentation masks y_k. Fundamental to our approach, the target model parameters θ need to be learned with minimal computational impact. To enable the deployment of specific fast-converging optimization techniques, we adopt an L2 loss. Our online learning loss is given by,

$$L(\theta) = \sum_{k} \gamma_k \left\| v_k \cdot \big( U(D_\theta(x_k)) - y_k \big) \right\|^2 + \sum_{j} \lambda_j \| \theta_j \|^2 \qquad (2)$$

Here, the parameters λ_j control the regularization term and the v_k are weight masks balancing the impact of target and background pixels. U denotes bilinear up-sampling of the output from the target model to the spatial resolution of the labels y_k. The dataset memory M consists of sample feature maps x_k, target labels y_k and sample weights γ_k. During inference, M can easily be updated with new samples from the video sequence. Compared to blindly fine-tuning on the latest frame, the dataset provides a controlled way of adding new data while keeping past frames in memory by setting appropriate sample weights γ_k.
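The loss of eq. (2) can be sketched as follows, with two simplifications: nearest-neighbour upsampling stands in for the bilinear U, and a single shared regularization weight replaces the per-group λ_j:

```python
import numpy as np

def upsample(s, factor):
    # nearest-neighbour stand-in for the bilinear upsampling U in eq. (2)
    return s.repeat(factor, axis=0).repeat(factor, axis=1)

def online_loss(samples, theta, lam):
    # samples: list of (score_map, label, weight_mask, gamma) tuples, where
    # score_map = D(x_k) at low resolution and the label y_k is full-res
    loss = 0.0
    for s_k, y_k, v_k, gamma_k in samples:
        factor = y_k.shape[0] // s_k.shape[0]
        res = v_k * (upsample(s_k, factor) - y_k)
        loss += gamma_k * np.sum(res ** 2)
    # simplified uniform regularization over all parameter tensors
    return loss + lam * sum(np.sum(t ** 2) for t in theta)
```

Because every term is a squared residual, the loss fits the least-squares form required by the Gauss-Newton optimization described next.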

Training: To minimize the target model loss (2) over θ, we employ the Gauss-Newton (GN) optimization strategy proposed in [9]. This is an iterative method, requiring an initial estimate θ. In each iteration, the optimal increment Δθ is found using a quadratic approximation of the loss,

$$L(\theta + \Delta\theta) \approx \| J_\theta \Delta\theta + r_\theta \|^2 = \Delta\theta^{\mathrm{T}} J_\theta^{\mathrm{T}} J_\theta \Delta\theta + 2 \Delta\theta^{\mathrm{T}} J_\theta^{\mathrm{T}} r_\theta + r_\theta^{\mathrm{T}} r_\theta \qquad (3)$$

Here, J_θ is the Jacobian of the residuals at θ, and r_θ contains the data residuals γ_k^{1/2} v_k · (U(D_θ(x_k)) − y_k) and the regularization residuals λ_j^{1/2} θ_j. The loss in (3) results in a positive definite quadratic problem, which is minimized over Δθ with Conjugate Gradient (CG) descent [16]. We then update θ ← θ + Δθ and execute the next GN iteration.
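A minimal sketch of one such step: form the normal equations of the quadratic model in eq. (3) and solve them with Conjugate Gradient. Materializing the full Jacobian and normal-equations matrix is for illustration only; in practice one would use Jacobian-vector products:

```python
import numpy as np

def conjugate_gradient(A, b, n_iter):
    # minimize 0.5 x^T A x - b^T x for symmetric positive definite A
    x = np.zeros_like(b)
    r = b.copy()                 # residual b - A x
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def gauss_newton_step(J, r, lam):
    # quadratic model of eq. (3): solve (J^T J + lam I) dtheta = -J^T r
    A = J.T @ J + lam * np.eye(J.shape[1])
    return conjugate_gradient(A, -J.T @ r, n_iter=J.shape[1])
```

In exact arithmetic, CG reaches the minimizer in at most as many iterations as there are unknowns, which is why a small, fixed iteration count suffices for the low-dimensional target model.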

Pixel weighting: To address the imbalance between target and background, we employ a weight mask v in (2) to ensure that the target influence is not too small relative to the usually much larger background region. We define the target influence κ as the fraction of target pixels in the image, κ = (1/N) Σ_p y(p), where p is the pixel index and N the total number of pixels. The weight mask is then defined as

$$v(p) = \begin{cases} \tilde\kappa / \kappa, & y(p) = 1 \\ (1 - \tilde\kappa)/(1 - \kappa), & y(p) = 0 \end{cases}, \qquad \tilde\kappa = \max(\kappa, \hat\kappa) \qquad (4)$$

where κ̂ is the desired minimum target influence and κ the actual one. We set κ̂ to a fixed value in our approach.
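One way to realize this re-weighting (the exact form of eq. (4) is not fully legible in this version of the text, so the formula below is a consistent reconstruction from the surrounding definitions) is to scale target pixels up and background pixels down whenever the target fraction falls below the desired influence:

```python
import numpy as np

def weight_mask(y, kappa_hat=0.1):
    # y: binary target mask; re-weight pixels so the target's share of the
    # total loss is at least kappa_hat. The default value 0.1 is an
    # assumption, not a value stated in the text.
    kappa = y.mean()
    if kappa >= kappa_hat:
        # target already large enough: no re-weighting needed
        return np.ones_like(y, dtype=float)
    v = np.empty_like(y, dtype=float)
    v[y == 1] = kappa_hat / kappa
    v[y == 0] = (1 - kappa_hat) / (1 - kappa)
    return v
```

With this choice, the weighted target influence sum(v * y) / sum(v) equals exactly kappa_hat whenever the raw fraction is smaller.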

Initial sample generation: In the first frame of the sequence, we train the target model on the initial dataset M, created from the given target mask and the extracted features. To add more variety to the initial frame, we generate additional augmented samples. Based on the initial label, we first cut out the target object and apply a fast inpainting method [31] to restore the background. We then apply a random affine warp and blur before pasting the target object back onto the image, creating a set of augmented images and corresponding label masks. After a feature extraction step, the unmodified first frame and the augmented frames are combined into the dataset M. We initialize the sample weights such that the original sample carries twice the weight of the other samples, due to its higher importance. Finally, the weights are normalized to sum to one.
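The warp-and-paste step might look as follows in NumPy. The rotation-only warp, nearest-neighbour sampling, and single-channel image are simplifications of the affine warp and blur described above, and the function name is hypothetical:

```python
import numpy as np

def affine_paste(image, mask, angle):
    # rotate the target region by `angle` radians about the image center
    # (nearest-neighbour sampling) and paste it onto `image`, whose hole
    # is assumed to have been inpainted already
    h, w = mask.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    cos, sin = np.cos(angle), np.sin(angle)
    ys, xs = np.mgrid[0:h, 0:w]
    # inverse-map each output pixel to its source coordinate
    sy = np.round(cos * (ys - cy) + sin * (xs - cx) + cy).astype(int)
    sx = np.round(-sin * (ys - cy) + cos * (xs - cx) + cx).astype(int)
    valid = (sy >= 0) & (sy < h) & (sx >= 0) & (sx < w)
    syc, sxc = np.clip(sy, 0, h - 1), np.clip(sx, 0, w - 1)
    tgt = valid & (mask[syc, sxc] == 1)      # warped target pixels
    out_img, out_mask = image.copy(), np.zeros_like(mask)
    out_img[tgt] = image[sy[tgt], sx[tgt]]
    out_mask[tgt] = 1
    return out_img, out_mask
```

Repeating this with random angles (and, in the full method, scalings, translations and blur) yields the augmented image and label pairs for the initial dataset.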

3.2 Refinement Network

While the target model provides robust but coarse segmentation scores, the final aim is to generate an accurate segmentation mask of the target at the original image resolution. To this end, we introduce a refinement network that processes the coarse scores along with backbone features. The network consists of two types of building blocks: a target segmentation encoder (TSE) and a refinement module. From these we construct a U-Net based architecture for object segmentation as in [38]. Unlike most state-of-the-art methods for semantic segmentation [39, 4], the U-Net structure does not rely on dilated convolutions, but effectively integrates low-resolution deep feature maps. This is crucial for reducing the computational complexity of our target model during inference.

The refinement network takes feature maps x_d from multiple depths d in the backbone network as input. The resolution is decreased by a factor of 2 at each depth. For each layer, the features x_d along with the coarse scores s are first processed by the corresponding TSE block T_d. The refinement module R_d then inputs the resulting segmentation encoding generated by T_d and the refined output z_{d+1} from the preceding deeper layer,

$$z_d = R_d\big( T_d(x_d, s),\; z_{d+1} \big) \qquad (5)$$

The refinement modules R_d are comprised of two residual blocks and a channel attention block, as in [38]. For the deepest refinement module, the input from the preceding layer is replaced by an intermediate projection of the deepest features inside the corresponding TSE block. The output z_1 at the shallowest layer is processed by a residual block providing the final refined segmentation output.

Target segmentation encoder: Seeking to integrate features and scores, we introduce the target segmentation encoder (TSE). It processes features in two steps, as visualized in figure 2 (gray inset, right). We first project the backbone features to 64 channels to reduce the subsequent computational complexity. Additionally, we maintain 64 channels throughout the refinement network, keeping the number of parameters low. After projection, the features are concatenated with the segmentation score and encoded by three convolutional layers.
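A shape-level sketch of the TSE and the refinement recursion of eq. (5), with toy stand-ins: the projection uses a small channel count instead of 64, the three encoding convolutions are omitted, and the refinement module R_d is reduced to summation in place of its residual and channel-attention blocks:

```python
import numpy as np

def tse(features, scores, proj):
    # project features to a reduced channel count (64 in the full method),
    # then concatenate the coarse score map as an extra channel
    f = np.einsum('oc,chw->ohw', proj, features)
    return np.concatenate([f, scores[None]], axis=0)

def upsample2(z):
    # nearest-neighbour 2x spatial upsampling
    return z.repeat(2, axis=-2).repeat(2, axis=-1)

def refine(feature_pyramid, scores, projections):
    # z_d = R_d(T_d(x_d, s), z_{d+1}), evaluated from the deepest level up;
    # feature_pyramid is ordered deepest first, each level doubling in size
    z = None
    for x, proj in zip(feature_pyramid, projections):
        fy = x.shape[1] // scores.shape[0]
        fx = x.shape[2] // scores.shape[1]
        s = scores.repeat(fy, axis=0).repeat(fx, axis=1)
        enc = tse(x, s, proj)
        z = enc if z is None else enc + upsample2(z)
    return z
```

The sketch illustrates the key property of the design: the coarse scores are injected at every depth, while the spatial resolution doubles at each level until the shallowest output is reached.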

3.3 Offline training

During the offline training phase, we learn the parameters of our refinement network on segmentation data. The network is trained on samples consisting of one reference frame and one or more validation frames, all randomly selected from the same video sequence. A training iteration is performed as follows: we first learn the target model weights θ, as described in section 3.1, based on the reference frame only. We then apply our full network, along with the learned target model, on the validation frames to predict the target segmentations. The parameters of the refinement network are trained by minimizing the binary cross-entropy loss with respect to the ground-truth masks.

The backbone network is a ResNet-101, pre-trained on ImageNet. During offline training, we only learn the parameters of the refinement module, and freeze the weights of the backbone. Since the target model only receives backbone features, we can pre-learn and store the target model weights for each sequence. The offline training time is therefore not significantly affected by the learning of the target model.

The network is trained on a combination of the YouTube-VOS [36] and DAVIS 2017 [30] train splits. These are balanced such that DAVIS 2017 is traversed eight times per epoch, and YouTube-VOS once. We select one reference frame and two validation frames per sample and train the refinement network with the ADAM optimizer [22].

We train for 120 epochs at the initial learning rate with weight decay, then reduce the learning rate and train for another 60 epochs.

3.4 Inference

Algorithm 1: Inference
  Input: images I_1, ..., I_N, initial target mask y_1
  x_1 = F(I_1)
  M = init_dataset(x_1, y_1)          # initialize dataset with augmentations, sec 3.1
  theta = optimize(theta, M)          # init target model D, sec 3.1
  for i = 2, ..., N do
      x_i = F(I_i)                    # extract features
      s_i = D(x_i; theta)             # predict coarse target scores, sec 3.1
      y_i = R(s_i, x_i)               # refine into segmentation mask, sec 3.2
      M = M ∪ {(x_i, y_i, gamma_i)}   # extend dataset
      if i mod n = 0 then             # update every n-th frame
          theta = optimize(theta, M)

During inference, we apply our video segmentation procedure, summarized in algorithm 1. We first generate the augmentation dataset M, as described in section 3.1, given the initial image and the corresponding target mask.

We then optimize the target model D on this dataset. In the subsequent frames, we first predict the coarse segmentation scores using D. Next, we refine the scores with the network R (section 3.2). The resulting segmentation output, along with the input features, is added to the dataset. Each new sample is first given a weight γ_k determined by an update-rate parameter, after which we normalize the sample weights to sum to unity. This parameter controls the update rate, such that the most recent samples in the sequence are prioritized in the re-optimization of the target model D. For practical purposes, we limit the maximum capacity of the dataset. When the maximum capacity is reached, we remove the sample with the smallest weight from M before inserting a new one. The update rate and maximum capacity are fixed in all our experiments.
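A minimal sketch of this memory bookkeeping. The class and parameter names (`DatasetMemory`, `eta` for the update rate, `capacity` for the maximum size) are hypothetical, and the exponential decay of older weights is an assumption consistent with the description above:

```python
class DatasetMemory:
    # rolling memory of (features, mask) samples with decayed,
    # normalized weights; smallest-weight samples are evicted first
    def __init__(self, capacity, eta):
        self.capacity, self.eta = capacity, eta
        self.samples, self.weights = [], []

    def add(self, features, mask):
        if len(self.samples) >= self.capacity:
            # evict the sample with the smallest weight
            i = self.weights.index(min(self.weights))
            del self.samples[i]
            del self.weights[i]
        # decay old weights so recent frames dominate re-optimization
        new_w = self.eta if self.weights else 1.0
        self.weights = [w * (1 - self.eta) for w in self.weights] + [new_w]
        self.samples.append((features, mask))
        total = sum(self.weights)
        self.weights = [w / total for w in self.weights]
```

With a small `eta`, the first-frame sample retains a large weight for many frames, which matches the controlled update behaviour described above.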

During inference, we re-optimize the target model parameters on the current dataset every n-th frame. For efficiency, we keep the first layer w_1 of the target model fixed during updates. Setting n to a large value reduces the inference time and regularizes the update of the target model. On the other hand, it is important that the target model is updated frequently for objects that undergo rapid appearance changes. In our approach we set n = 8, i.e. we update D every eighth frame.

The algorithm can be trivially extended to handle multiple object segmentation. We then employ one online target model for each object and fuse the final refined predictions with softmax aggregation as proposed in [27]. Note that we only require one feature extraction per frame, since the image features are common to all target objects.
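A simplified stand-in for this merging step: the fixed zero background logit is an assumption here, not necessarily the exact aggregation scheme of [27]:

```python
import numpy as np

def aggregate(logits):
    # logits: (num_objects, H, W) per-object segmentation scores;
    # a background logit fixed at 0 is appended before the softmax
    bg = np.zeros((1,) + logits.shape[1:])
    z = np.concatenate([bg, logits], axis=0)
    e = np.exp(z - z.max(axis=0, keepdims=True))  # stable softmax
    p = e / e.sum(axis=0, keepdims=True)
    return p.argmax(axis=0)       # 0 = background, k = object k
```

Because the softmax is applied jointly over all objects at each pixel, a pixel can be claimed by at most one object, which resolves conflicts between the per-object target models.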

3.5 Implementation details

We implement our method in the PyTorch framework [28] and use its pre-trained ResNet-101 as the basis of the feature extractor F. Following the naming convention in table 1 in [13], we extract five feature maps from the max pooling output in "conv2_x" and the outputs of blocks "conv2_x" through "conv5_x". The target model accepts 1024-channel features from "conv4_x" and produces 1-channel score maps. Both the input features and output scores have a spatial resolution of 1/16th of the input image.

Target model: The first layer w_1 has kernels reducing the input features to a smaller number of channels, while w_2 has a 3×3 kernel with one output channel. During first-frame optimization, w_1 and w_2 are randomly initialized with Kaiming Normal [14]. Using the data augmentation in section 3.1, we generate an initial dataset of 20 image and label pairs. We then optimize w_1 and w_2 with the Gauss-Newton algorithm outlined in section 3.1. As the starting solution is randomly initialized, we use fewer CG iterations in the first GN step than in the subsequent ones. In the target model update step we apply a small number of CG iterations every eighth frame, while keeping w_1 fixed. We employ the aforementioned settings in our final approach, denoted Ours in the following sections.

The architecture allows us to alter the settings of the target model without retraining the refinement network. Taking advantage of this, we additionally develop a fast version, termed Ours (fast), by reducing the number of optimization steps and the number of filters in w_1, and by increasing the update interval n.

4 Experiments

We perform experiments on three benchmarks: DAVIS 2016 [30], DAVIS 2017 [30] and YouTube-VOS [36]. For YouTube-VOS, we compare on the official test-dev set, with withheld ground-truth. For ablative experiments, we also show results on a separate validation split of the YouTube-VOS train set, consisting of 300 videos not used for training. Following the standard DAVIS protocol, we report both the mean Jaccard index J and the mean boundary score F, along with the overall score J&F, which is the mean of the two. For comparisons on YouTube-VOS, we report J and F scores for classes included in the training set (seen) and those that are not (unseen). The overall score G is computed as the average over all four scores, as defined in YouTube-VOS. In addition, we compare the computational speed of the methods in terms of frames per second (FPS), computed by taking the average over the DAVIS 2016 validation set. For our approach, we computed frame-rates on a single GPU. Further results are provided in the supplement.

4.1 Ablation

We analyze the contribution of all key components in our approach. All compared approaches are trained using the same data and settings.

Base net: First, we construct a baseline network to analyze the impact of our discriminative appearance model D. This is performed by replacing D with an offline-trained target encoder, while retraining the refinement network R. As for our proposed network, we keep the backbone parameters fixed. The target encoder comprises two convolutional layers, taking reference frame features from F and the corresponding target mask as input. Features extracted from the test frame are concatenated with the output of the target encoder and processed with two additional convolutional layers. The output is then passed to the refinement network in the same manner as the coarse segmentation scores (see section 3.2). We train this model with the same methodology as our proposed network.

Base net + F.-T: To compare our discriminative online learning with first-frame fine-tuning of the network, we further integrate a fine-tuning strategy into the Base net above. For this purpose, we create an initial dataset with 20 samples using the same sample generation procedure employed for our approach (section 3.1). We then fine-tune all components in the architecture (refinement network and the target-encoder), except the backbone feature extractor. For fine-tuning we use the ADAM optimizer with 100 iterations and a batch size of four.

D-only: To analyze the impact of the refinement network R, we simply remove it from our architecture and instead let the target-specific model output the final segmentations. The target model’s coarse predictions are upsampled to full image resolution through bilinear interpolation. We learn the segmentation threshold by training the scale and offset parameters prior to the output sigmoid operation. In this version, we only train the target model on the first frame, and refrain from subsequent updates.

Ours - no update: To provide a fair comparison with the aforementioned versions, we evaluate a variant of our approach that does not perform any update of the target model during inference. The model is thus only trained on the first frame of the video.

Ours: Finally, we enable target model updates (as described in section 3.4) to obtain our final approach.

In table 1, we present the results in terms of the J score on the separate validation split of the YouTube-VOS dataset. The base network, not employing the target model, achieves a score of 52.0. Adding fine-tuning on the first frame leads to an absolute improvement of 5.2 points. Using only the target model (58.8) outperforms online fine-tuning. This is particularly notable, considering that the D-only version contains only two offline-trained parameters in the network. Our target model thus possesses powerful discriminative capabilities, achieving high robustness. Further adding the refinement network (Ours - no update) leads to a major absolute gain of 8.8 points. This improvement stems from the offline-learned processing of the coarse segmentations, yielding more accurate mask predictions. Finally, the proposed online updating strategy additionally improves the score to 70.6.

Version            Update   F.-T.   Score
Base net                            52.0
Base net + F.-T.            ✓       57.2
D-only                              58.8
Ours - no update                    67.6
Ours               ✓                70.6
Table 1: Ablative study on a validation split of 300 sequences from YouTube-VOS. We analyze the different components of our approach, where D and R denote the target model and refinement module respectively. Further, "Update" indicates whether the target model update is enabled and "F.-T." denotes first-frame fine-tuning. Our target appearance model D outperforms the Base net methods. Further, the refinement network R significantly improves the raw predictions from the target model D. Finally, the best performance is obtained when additionally updating the target model.

4.2 Comparison to state-of-the-art

We compare our method to a variety of recent approaches on the YouTube-VOS, DAVIS 2017 and DAVIS 2016 benchmarks. We provide results for two versions of our approach: Ours and Ours (fast) (see section 3.5).

YouTube-VOS [36]: The official YouTube-VOS test-dev dataset has 474 sequences with objects from 91 classes. Of these, 26 classes are not present in the training set. We compare our method with the results reported in [35], which were obtained by retraining the methods on the YouTube-VOS training set. Additionally, we compare to the recent AGAME [20] method, also trained on YouTube-VOS.

The results are reported in table 2. OSVOS [3], OnAVOS [32] and S2S [35] all employ first-frame fine-tuning, obtaining inferior frame-rates below 1 FPS. Among these methods, the recent S2S achieves the best overall performance with a G-score of 64.4. The AGAME method, employing a generative appearance model and no fine-tuning, obtains a G-score of 66.0. Our approach significantly outperforms all previous methods, improving over the best previous result by 5 points and yielding a final G-score of 71.0. Moreover, our approaches achieve the fastest inference speeds. Notably, Ours (fast) maintains an impressive G-score of 70.3, with an average segmentation speed of 25 FPS.

Method        G (overall)  J (seen)  J (unseen)  F (seen)  F (unseen)  FPS
Ours          71.0         71.6      65.0        74.7      72.5        14.6
Ours (fast)   70.3         69.9      64.9        73.2      73.1        25.3
AGAME [20]    66.0         66.9      61.2        68.6      67.3        14.3
S2S [35]      64.4         71.0      55.5        70.0      61.2        0.11
OnAVOS [32]   55.2         60.1      46.1        62.7      51.4        0.08
OSVOS [3]     58.8         59.8      54.2        60.5      60.7        0.22
Table 2: State-of-the-art comparison on the large-scale YouTube-VOS test-dev dataset, containing 474 videos. The results of our approach were obtained through the official evaluation server. We report the mean Jaccard (J) and boundary (F) scores for object classes that are seen and unseen in the training set, along with the overall mean (G). Our approaches achieve superior performance, with G-scores over 70, while operating at high frame-rates.
                    DAVIS17                 DAVIS16
Method              J&F    J      F         J&F    J      F         FPS
Ours                73.4   71.3   75.5      84.8   85.0   84.5      14.6
Ours (fast)         70.9   68.4   73.3      82.5   82.7   82.4      25.3
Ours (DV17)         67.5   65.8   69.2      83.2   82.7   83.7      14.6
AGAME [20]          70.0   67.2   72.7      -      82.0   -         14.3
RGMP [27]           66.7   64.8   68.6      81.8   81.5   82.0      7.7
FEELVOS [33]        71.5   69.1   74.0      81.7   81.1   82.2      2.22
VM [18]             -      56.6   -         -      81.0   -         8.33
FAVOS [6]           58.2   54.6   61.8      80.8   82.4   79.5      0.56
OSNM [37]           54.8   52.5   57.1      -      74.0   -         7.14
PReMVOS [25]        77.8   73.9   81.7      86.8   84.9   88.6      0.03
OSVOS-S [26]        68.0   64.7   71.3      86.5   85.6   87.5      0.22
OnAVOS [32]         67.9   64.5   71.2      85.5   86.1   84.9      0.08
MGCRN [17]          -      -      -         85.1   84.4   85.7      1.37
CINM [1]            67.5   64.5   70.5      -      84.2   -         0.00
Table 3: Comparison of our method with current state-of-the-art approaches on the DAVIS 2017 and DAVIS 2016 validation sets. The best and second best entries are shown in red and blue respectively. In addition to Ours and Ours (fast), we report the results of our approach when trained on only DAVIS 2017, in Ours (DV17). Our segmentation method outperforms the compared methods with practical frame-rates. Furthermore, we achieve competitive results even when trained with only DAVIS 2017, owing to the discriminative nature of our target model.

DAVIS 2017 [30]: The validation set for DAVIS 2017 contains 30 sequences. In addition to Ours and Ours (fast), we now include a third version, Ours (DV17). To create this version, we train a separate refinement network employing only the DAVIS 2017 train split. No other datasets, such as PASCAL VOC, are employed. The target model in this version has the same parameters as Ours (see section 3.5).

We report the results on DAVIS 2017 in table 3. The methods OnAVOS [32], OSVOS-S [26], MGCRN [17] and CINM [1] employ extensive fine-tuning on the first frame, suffering from impractically slow segmentation speeds. FAVOS [6] utilizes part-based trackers and a segmentation network to track and segment the target. The methods VM [18], RGMP [27], OSNM [37], AGAME [20] and FEELVOS [33] all employ mask propagation. In addition, AGAME, VM and FEELVOS utilize generative appearance models. Our approach outperforms all aforementioned methods with a J&F average of 73.4, while simultaneously reaching the highest frame-rate.

When trained on only DAVIS 2017 data, Ours (DV17) achieves an average of . This is comparable to the performance of OnAVOS [32], OSVOS-S [26] and RGMP [27], which utilize substantial additional training data, such as PASCAL VOC [11] and MS COCO [24]. This demonstrates that our refinement network can be trained robustly with limited data, and illustrates the strong generalization capabilities of our target model.

Among the compared methods, PReMVOS [25] was not discussed above, since it constitutes a major framework encompassing multiple components and cues: extensive fine-tuning, mask-region proposals, optical-flow-based mask predictions, multiple refinement steps, merging and tracking modules, and re-identification. While this approach obtains an average of , it is approximately 500 times slower than ours. In contrast, our approach is simple, consisting of a single network together with a light-weight target model. Many of the components in PReMVOS are, however, complementary to our work and could be directly integrated. Our aim is to provide a simple and general framework that stimulates future research.

DAVIS 2016 [30]: Finally, we evaluate our method on the 20 validation sequences of DAVIS 2016, a subset of DAVIS 2017, and report the results in table 3. Our method performs comparably to the fine-tuning based approaches PReMVOS [25], MGCRN [17], OnAVOS [32] and OSVOS-S [26]. Furthermore, all three versions of our method outperform AGAME [20], RGMP [27], OSNM [37], FAVOS [6], VM [18] and FEELVOS [33].

Figure 3: Examples of three sequences from DAVIS 2017 (columns: ground truth, Ours, OSVOS-S [26], AGAME [20], FEELVOS [33], RGMP [27]), demonstrating how our method performs under significant appearance changes compared to these state-of-the-art methods.
Figure 4: Qualitative results of our method (columns: image, ground truth, final output, coarse target-model scores), including both segmentation masks and target model score maps. The top three examples are from the DAVIS 2017 validation set, followed by three examples from our YouTube-VOS validation set. The last two examples are representative failure cases from both datasets.

4.3 Qualitative Analysis

Target model: In figure 4, we visualize the segmentation scores provided by our target model along with the final segmentation output, analyzing sequences from the DAVIS 2017 and YouTube-VOS validation sets. In the first six rows of figure 4, our approach accurately segments the target object. The target model (right) provides discriminative coarse segmentation scores that are robust to distractor objects, even in the most challenging cases (third row). Moreover, our refinement network successfully enhances the already accurate predictions from the target model, generating final segmentation masks with an impressive level of detail. Additionally, the refinement network is able to suppress erroneous predictions generated by the target model, e.g. the second camel in the second row, adding further robustness.
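To make the role of the online-learned discriminator concrete, the following is a minimal, self-contained sketch of the idea: a linear pixel classifier trained on deep features to separate target from background, producing a coarse score map. This is an illustration only, not the paper's actual model; the paper uses a specialized loss and customized optimization techniques, whereas the sketch below uses plain logistic-regression gradient steps on a synthetic toy feature map.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C = 8, 8, 16
feats = rng.normal(size=(H, W, C))      # stand-in for backbone features
mask = np.zeros((H, W))
mask[2:6, 2:6] = 1.0                    # toy first-frame target mask

# Shift target features so the toy problem is linearly separable.
feats[mask == 1] += 2.0

X = feats.reshape(-1, C)
y = mask.reshape(-1)
w, b = np.zeros(C), 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A few online gradient steps on a logistic (cross-entropy) loss.
for _ in range(200):
    p = sigmoid(X @ w + b)
    g = p - y                           # gradient w.r.t. the logits
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

# Coarse per-pixel target scores, analogous to the score maps in figure 4.
coarse_scores = sigmoid(X @ w + b).reshape(H, W)
print(coarse_scores[mask == 1].mean() > coarse_scores[mask == 0].mean())
```

In the full system, such coarse scores are not thresholded directly; they are passed, together with the image features, to the offline-trained refinement network, which produces the detailed final mask.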

The last two rows in figure 4 demonstrate two cases where our method struggles. First, in the kite-surfing sequence, our method detects the thin kite lines properly but fails to separate them from the background; accurately segmenting such thin structures is a highly challenging task. Second, in the bottom row, our target model fails to distinguish between instances of sheep due to their very similar appearance.

State-of-the-art: In figure 3 we visually compare our approach, on three challenging sequences, with state-of-the-art methods employing fine-tuning (OSVOS-S [26]), mask propagation (RGMP [27]), generative modeling (AGAME [20]) and feature matching (FEELVOS [33]).

In the first sequence (first and second rows), the extensively fine-tuned network in OSVOS-S fails to generalize to the changing target poses. While the mask propagation in RGMP and the generative appearance model in AGAME are more accurate, both fail to segment the red target (a weapon). Further, the occlusions imposed by the twirling dancer in the middle sequence (third and fourth rows) cause the feature-matching approach in FEELVOS to lose track of the target. Finally, in the last sequence (fifth and sixth rows), all of the above methods fail to robustly segment the three targets. In contrast, our method accurately segments all targets in all of the selected sequences, demonstrating the robustness of our appearance model.

5 Conclusion

We present a novel approach to video object segmentation that integrates a light-weight but highly discriminative target appearance model with a segmentation network. The discriminator produces coarse but robust target predictions, which the segmentation network subsequently refines into high-quality target segmentation masks. The target appearance model is efficiently trained online, while the completely target-agnostic segmentation network is trained offline. Our method achieves state-of-the-art performance on the YouTube-VOS dataset and competitive results on DAVIS 2017, while operating at frame rates superior to previous methods.

References

  • [1] L. Bao, B. Wu, and W. Liu. CNN in MRF: Video object segmentation via inference in a CNN-based higher-order spatio-temporal MRF. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5977–5986, 2018.
  • [2] H. S. Behl, M. Najafi, and P. H. Torr. Meta learning deep visual words for fast video object segmentation. arXiv preprint arXiv:1812.01397, 2018.
  • [3] S. Caelles, K.-K. Maninis, J. Pont-Tuset, L. Leal-Taixé, D. Cremers, and L. Van Gool. One-shot video object segmentation. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5320–5329. IEEE, 2017.
  • [4] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence, 40(4):834–848, 2018.
  • [5] Y. Chen, J. Pont-Tuset, A. Montes, and L. Van Gool. Blazingly fast video object segmentation with pixel-wise metric learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1189–1198, 2018.
  • [6] J. Cheng, Y.-H. Tsai, W.-C. Hung, S. Wang, and M.-H. Yang. Fast and accurate online video object segmentation via tracking parts. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [7] J. Cheng, Y.-H. Tsai, S. Wang, and M.-H. Yang. Segflow: Joint learning for video object segmentation and optical flow. In Computer Vision (ICCV), 2017 IEEE International Conference on, pages 686–695. IEEE, 2017.
  • [8] H. Ci, C. Wang, and Y. Wang. Video object segmentation by learning location-sensitive embeddings. In European Conference on Computer Vision, pages 524–539. Springer, 2018.
  • [9] M. Danelljan, G. Bhat, F. S. Khan, and M. Felsberg. ATOM: Accurate tracking by overlap maximization. In IEEE Conference on Computer Vision and Pattern Recognition, 2019.
  • [10] M. Danelljan, G. Bhat, F. Shahbaz Khan, and M. Felsberg. Eco: Efficient convolution operators for tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6638–6646, 2017.
  • [11] M. Everingham, S. M. A. Eslami, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal visual object classes challenge: A retrospective. International Journal of Computer Vision, 111(1):98–136, Jan. 2015.
  • [12] S. Hare, S. Golodetz, A. Saffari, V. Vineet, M.-M. Cheng, S. L. Hicks, and P. H. Torr. Struck: Structured output tracking with kernels. IEEE transactions on pattern analysis and machine intelligence, 38(10):2096–2109, 2016.
  • [13] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
  • [14] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pages 1026–1034, 2015.
  • [15] J. F. Henriques, R. Caseiro, P. Martins, and J. Batista. High-speed tracking with kernelized correlation filters. IEEE transactions on pattern analysis and machine intelligence, 37(3):583–596, 2015.
  • [16] M. R. Hestenes and E. Stiefel. Methods of conjugate gradients for solving linear systems, volume 49. NBS Washington, DC, 1952.
  • [17] P. Hu, G. Wang, X. Kong, J. Kuen, and Y.-P. Tan. Motion-guided cascaded refinement network for video object segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1400–1409, 2018.
  • [18] Y.-T. Hu, J.-B. Huang, and A. G. Schwing. Videomatch: Matching based video object segmentation. In European Conference on Computer Vision, pages 56–73. Springer, 2018.
  • [19] W.-D. Jang and C.-S. Kim. Online video object segmentation via convolutional trident network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5849–5858, 2017.
  • [20] J. Johnander, M. Danelljan, E. Brissman, F. S. Khan, and M. Felsberg. A generative appearance model for end-to-end video object segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
  • [21] A. Khoreva, R. Benenson, E. Ilg, T. Brox, and B. Schiele. Lucid data dreaming for multiple object tracking. arXiv preprint arXiv:1703.09554, 2017.
  • [22] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • [23] X. Li and C. Change Loy. Video object segmentation with joint re-identification and attention-aware mask propagation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 90–105, 2018.
  • [24] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pages 740–755. Springer, 2014.
  • [25] J. Luiten, P. Voigtlaender, and B. Leibe. Premvos: Proposal-generation, refinement and merging for video object segmentation. arXiv preprint arXiv:1807.09190, 2018.
  • [26] K.-K. Maninis, S. Caelles, Y. Chen, J. Pont-Tuset, L. Leal-Taixé, D. Cremers, and L. Van Gool. Video object segmentation without temporal information. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2018.
  • [27] S. W. Oh, J.-Y. Lee, K. Sunkavalli, and S. J. Kim. Fast video object segmentation by reference-guided mask propagation. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7376–7385. IEEE, 2018.
  • [28] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in pytorch. 2017.
  • [29] F. Perazzi, A. Khoreva, R. Benenson, B. Schiele, and A. Sorkine-Hornung. Learning video object segmentation from static images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2663–2672, 2017.
  • [30] F. Perazzi, J. Pont-Tuset, B. McWilliams, L. Van Gool, M. Gross, and A. Sorkine-Hornung. A benchmark dataset and evaluation methodology for video object segmentation. In Computer Vision and Pattern Recognition, 2016.
  • [31] A. Telea. An image inpainting technique based on the fast marching method. Journal of graphics tools, 9(1):23–34, 2004.
  • [32] P. Voigtlaender and B. Leibe. Online adaptation of convolutional neural networks for video object segmentation. arXiv preprint arXiv:1706.09364, 2017.
  • [33] P. Voigtlaender and B. Leibe. Feelvos: Fast end-to-end embedding learning for video object segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
  • [34] C. Vondrick, A. Shrivastava, A. Fathi, S. Guadarrama, and K. Murphy. Tracking emerges by colorizing videos. In European Conference on Computer Vision, pages 402–419. Springer, 2018.
  • [35] N. Xu, L. Yang, Y. Fan, J. Yang, D. Yue, Y. Liang, B. Price, S. Cohen, and T. Huang. Youtube-vos: Sequence-to-sequence video object segmentation. In European Conference on Computer Vision, pages 603–619. Springer, 2018.
  • [36] N. Xu, L. Yang, Y. Fan, D. Yue, Y. Liang, J. Yang, and T. Huang. Youtube-vos: A large-scale video object segmentation benchmark. arXiv preprint arXiv:1809.03327, 2018.
  • [37] L. Yang, Y. Wang, X. Xiong, J. Yang, and A. K. Katsaggelos. Efficient video object segmentation via network modulation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  • [38] C. Yu, J. Wang, C. Peng, C. Gao, G. Yu, and N. Sang. Learning a discriminative feature network for semantic segmentation. arXiv preprint arXiv:1804.09337, 2018.
  • [39] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid scene parsing network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2881–2890, 2017.