Super-Trajectory for Video Segmentation

Wenguan Wang, Beijing Lab of Intelligent Information Technology, School of Computer Science, Beijing Institute of Technology, China
Jianbing Shen (corresponding author: shenjianbing@bit.edu.cn), Beijing Lab of Intelligent Information Technology, School of Computer Science, Beijing Institute of Technology, China
Jianwen Xie, University of California, Los Angeles, USA
Fatih Porikli, Research School of Engineering, Australian National University, Australia

This work was supported in part by the National Basic Research Program of China (973 Program) (No. 2013CB328805), the National Natural Science Foundation of China (61272359), the Australian Research Council’s Discovery Projects funding scheme (project DP150104645), the Fok Ying-Tong Education Foundation for Young Teachers, and the Specialized Fund for Joint Building Program of Beijing Municipal Education Commission.
Abstract

We introduce a novel semi-supervised video segmentation approach based on an efficient video representation, called “super-trajectory”. Each super-trajectory corresponds to a group of compact trajectories that exhibit consistent motion patterns, similar appearance, and close spatiotemporal relationships. We generate trajectories using a probabilistic model, which handles occlusions and drifts in a robust and natural way. To reliably group trajectories, we adopt a modified version of the density peaks based clustering algorithm that captures rich spatiotemporal relations among trajectories during clustering. The presented video representation is discriminative enough to accurately propagate the initial annotations in the first frame onto the remaining video frames. Extensive experimental analysis on challenging benchmarks demonstrates that our method is capable of distinguishing target objects from complex backgrounds and even re-identifying them after occlusions.

1 Introduction

Figure 1: Our video segmentation method takes the first-frame annotation as initialization (left). Leveraging super-trajectories, the segmentation process achieves superior results even for challenging scenarios including heavy occlusions, complex appearance variations, and large shape deformations (middle, right).

We state the problem of semi-supervised video object segmentation as the partitioning of objects in a given video sequence with annotations available only in the first frame. For this task, we introduce an efficient video representation, the super-trajectory, to capture the underlying spatiotemporal structure that is intrinsic to real-world scenes. Each super-trajectory corresponds to a group of trajectories that are similar in nature and share common characteristics. A point trajectory, i.e., the tracked positions of an individual point across multiple frames, is the constituent of a super-trajectory. This representation captures several aspects of a video:

  • Long-term motion information is explicitly modeled as it consists of trajectories over extended periods;

  • Spatiotemporal location information is implicitly interpreted by clustering nearby trajectories; and

  • Features, such as color and motion pattern, are described in a conveniently compact form.

With these properties, super-trajectories simplify and reduce the complexity of propagating human-provided labels during segmentation. We first generate trajectories based on a Markovian process, which handles occlusions and drifts naturally and efficiently. Then a density peaks based clustering (DPC) algorithm [31] is modified to obtain a reasonable division of the trajectories, which in turn offers a proper split of the video along the space and time axes. The design of our super-trajectory is motivated by the following two aspects.

Firstly, for the task of video segmentation, it is desirable to have a powerful abstraction of videos that is robust to structure variations and deformations in image space and time. As demonstrated on the recently released DAVIS dataset [28], most existing approaches exhibit severe limitations under occlusions, motion blur, and appearance changes. The proposed super-trajectory, endowed with the properties listed above, is able to handle such cases (see Fig. 1).

Secondly, from the perspective of feature generation, point trajectories need to be improved to meet the above requirements. Merging and splitting video segments (and corresponding trajectories) into atomic spatiotemporal components is essential for handling occlusions and temporal discontinuities. However, classical clustering methods (e.g., k-means and spectral clustering), which are widely adopted by previous trajectory methods, have not even reached a consensus on the definition of a cluster. Here, we modify the DPC algorithm [31] for grouping trajectories, exploiting its advantage of choosing cluster centers based on a reasonable criterion.

We conduct video segmentation by operating on trajectories as unified super-trajectory groups. To eliminate the adverse effects of camera motion, we introduce a reverse-tracking strategy that excludes objects originating outside the frame. To re-identify objects after occlusions, we exploit object re-occurrence information, which reflects the spatiotemporal relations of objects across the entire video sequence.

The remainder of the article is organized as follows. A summary of related work is given in Sec. 2. Our approach for super-trajectory generation is presented in detail in Sec. 3. In Sec. 4, we describe our super-trajectory based video segmentation algorithm. We then experimentally demonstrate its robustness, effectiveness, and efficiency in Sec. 5. Finally, we draw conclusions in Sec. 6.

2 Related Work

We provide a brief overview of recent works in video object segmentation and point trajectory extraction.

2.1 Video Object Segmentation

According to the level of supervision required, video segmentation techniques can be broadly categorized as unsupervised, semi-supervised and supervised methods.

Unsupervised algorithms [37, 49, 26, 44, 42] do not require manual annotations but often rely on certain limiting assumptions about the application scenario. Some techniques [5, 14, 27] emphasize the importance of motion information. More specifically, [5, 14] analyze long-term motion information via trajectories and then solve segmentation as a trajectory clustering problem. The works [10, 43, 46] introduce saliency information [45] as prior knowledge to infer the object. Recently, [19, 22, 50, 12, 48] generate object segments by ranking several object candidates.

Semi-supervised video segmentation, also referred to as label propagation, is usually achieved by propagating human annotations specified on one or a few key-frames onto the entire video sequence [4, 2, 35, 7, 21, 17, 32, 36]. These methods mainly use flow-based random field propagation models [38], patch-seams based propagation strategies [30], energy optimization over graph models [29], joint segmentation and detection frameworks [47], or pixel segmentation in bilateral space [23].

Supervised methods [3, 51, 11] require tedious user interaction and iterative human corrections. These methods can attain high-quality boundaries, but at the cost of extensive and time-consuming human supervision.

2.2 Point Trajectory

Point trajectories are generated by tracking points over multiple frames and have the advantage of representing long-term motion information. Kanade-Lucas-Tomasi (KLT) [33] is among the most popular methods that track a small number of feature points. Inspiring several follow-up studies in video segmentation and action recognition, optical flow based dense trajectories [34] improve over sparse interest point tracking. In particular, [39, 40, 41] introduce dense trajectories for action recognition. Other methods [5, 13, 20, 15, 14, 24, 25, 18, 9] address the problem of unsupervised video segmentation, in which case the problem can also be described as motion segmentation. These methods usually track points via dense optical flow and perform segmentation by clustering trajectories.

Existing approaches often handle trajectories in pairs or individually and directly group all the trajectories into a few clusters as segments, easily ignoring the inner coherence of a group of similar trajectories. Instead, we operate on trajectories as united super-trajectory groups rather than individual entities, thus offering a compact and atomic video representation and fully exploiting the spatiotemporal relations among trajectories.

3 Super-Trajectory via Grouping Trajectories

3.1 Trajectory Generation

Given a sequence of video frames within time range , each pixel can be tracked to the next frame using optical flow. This tracking process is executed frame-by-frame until some termination condition (e.g., occlusion, incorrect motion estimates, etc.) is reached. The tracked points are composed into a trajectory, and a new tracker is initialized where the prior tracker terminated. We build our trajectory generation on a unified probabilistic model which naturally accounts for various termination conditions.

Let w denote a flow field indexed by pixel positions that returns a 2D flow vector at a given point. Using LDOF [6], we compute the forward-flow field from frame  to , and the backward-flow field from  to . We track a pixel position to the consecutive frames in both directions. The tracked points of consecutive frames are concatenated to form a trajectory :

(1)

where  indicates the length of the trajectory and . We model the point tracking process as a first-order Markovian process, and denote the probability that the -th point of trajectory  is correctly tracked from frame  as . The prediction model is defined by:

(2)

where and is formulated as:

(3)

The energy functions penalize various potential tracking errors. The former energy is expressed as:

(4)

which penalizes the appearance variations between corresponding points. The latter energy is included to penalize occlusions. It uses the consistency of the forward and backward flows:

(5)

When this consistency constraint is violated, occlusions or unreliable optical flow estimates may have occurred (see [34] for more discussion). Note that the proposed tracking model performs accurately, yet it is not limited to the above constraints. We terminate the tracking process when , and then start a new tracker at . In our implementation, we discard trajectories shorter than four frames.
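To make the termination criterion concrete, the following is a minimal Python sketch of the forward-backward consistency test (Eq. 5), assuming dense flow fields stored as H×W×2 arrays; the tolerance values alpha and beta are illustrative placeholders rather than the exact setting used in our implementation.

```python
import numpy as np

def fb_consistent(p, flow_fw, flow_bw, alpha=0.01, beta=0.5):
    """Forward-backward flow consistency check at point p = (x, y).

    The forward flow at p, warped back by the backward flow at the tracked
    position, should roughly cancel out; otherwise an occlusion or an
    unreliable flow estimate is likely and the trajectory is terminated
    (cf. [34]). alpha and beta are illustrative tolerance parameters.
    """
    x, y = p
    u = flow_fw[int(y), int(x)]               # forward flow (dx, dy) at p
    x2, y2 = x + u[0], y + u[1]               # tracked position in the next frame
    h, w = flow_bw.shape[:2]
    xi, yi = int(round(x2)), int(round(y2))
    if not (0 <= xi < w and 0 <= yi < h):     # tracked point left the frame
        return False, (x2, y2)
    v = flow_bw[yi, xi]                       # backward flow at the tracked position
    lhs = float(np.sum((u + v) ** 2))         # ~0 when forward and backward flow agree
    rhs = alpha * (float(np.sum(u ** 2)) + float(np.sum(v ** 2))) + beta
    return lhs <= rhs, (x2, y2)
```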

3.2 Super-Trajectory Generation

Previous studies indicate the value of trajectory based representations for long-term motion information. Our additional intuition is that neighbouring trajectories exhibit compact spatiotemporal relationships and have similar appearance and motion patterns. This motivates us to operate on trajectories as united groups.

We generate super-trajectories by clustering trajectories with the density peaks based clustering (DPC) algorithm [31]. Before introducing our super-trajectory generation method, we first describe DPC.

Density Peaks based Clustering (DPC)  DPC clusters data by finding density peaks. It provides a fast clustering solution based on the idea that cluster centers are characterized by a higher density than their neighbors and by a relatively large distance from points with higher densities. This offers a reasonable criterion for finding cluster centers.

Given the distances between data points, for each data point , DPC calculates two quantities: its local density  and its distance  from points of higher density. The local density of data point  is defined as follows (here we do not use the cut-off kernel or Gaussian kernel adopted in [31], due to the small amount of data):

(6)

Here, is measured by computing the minimum distance between the point and any other point with higher density:

(7)

For the point with highest density, it takes .

Cluster centers are the points with high local density () and large distance () from other points with higher local density. The data points can be ranked via , and the top-ranking points are selected as centers. After the cluster centers are determined, each remaining data point is assigned to the same cluster as its nearest neighbor of higher density.
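As a concrete reference, the standard DPC recipe can be sketched as below; this is a minimal NumPy illustration working on a precomputed distance matrix, with a heuristic cut-off density rather than the density of Eq. 6, and without the handling of disconnected trajectory groups introduced in Alg. 1.

```python
import numpy as np

def dpc(D, n_centers, d_c=None):
    """Minimal density-peaks clustering [31] on a precomputed distance matrix D."""
    n = D.shape[0]
    if d_c is None:
        # heuristic cut-off distance (~2% of pairwise distances); not the paper's choice
        d_c = np.percentile(D[np.triu_indices(n, 1)], 2)
    rho = (D < d_c).sum(axis=1).astype(float) - 1.0   # local density (self excluded)
    order = np.argsort(-rho)                          # indices by decreasing density
    delta = np.empty(n)
    nn_up = np.zeros(n, dtype=int)
    delta[order[0]] = D[order[0]].max()               # convention for the densest point
    nn_up[order[0]] = order[0]
    for i in range(1, n):
        p, higher = order[i], order[:i]
        j = higher[np.argmin(D[p, higher])]           # nearest point of higher density
        delta[p], nn_up[p] = D[p, j], j
    gamma = rho * delta                               # ranking score for center selection
    centers = np.argsort(-gamma)[:n_centers]
    labels = np.full(n, -1)
    labels[centers] = np.arange(n_centers)
    if labels[order[0]] < 0:                          # ensure the densest point is labeled
        labels[order[0]] = labels[centers[np.argmin(D[order[0], centers])]]
    for p in order:                                   # assign remaining points top-down
        if labels[p] < 0:
            labels[p] = labels[nn_up[p]]
    return labels, centers
```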

Grouping Trajectories via DPC  Given a trajectory  that spans  frames, we define three features, spatial location (), color (), and velocity (), for describing it:

(8)

where we set . We tested  and did not observe an obvious effect on the results.

Between each pair of trajectories  and  that share some frames, we define their distance by measuring descriptor similarity:

(9)

We normalize the color distance by the maximum intensity, the location distance by the sampling step (detailed below), and the motion distance by the mean motion magnitude of all trajectories, so that the above distance measures have similar scales. If there is no temporal overlap, we set , where  has a very large value.
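For illustration, a simple per-trajectory descriptor and pairwise distance could look as follows; the feature set (location, color, velocity) and the normalization follow the description above, but the exact combination rule of Eqs. 8-9 is not reproduced, so a plain sum of the normalized terms is used as a placeholder.

```python
import numpy as np

def trajectory_descriptor(points, colors):
    """Per-frame location, color, and velocity of one trajectory (cf. Eq. 8)."""
    points = np.asarray(points, float)        # (L, 2) tracked positions
    colors = np.asarray(colors, float)        # (L, 3) RGB samples along the trajectory
    velocity = np.gradient(points, axis=0)    # finite-difference motion per frame
    return points, colors, velocity

def trajectory_distance(descA, framesA, descB, framesB, R, mean_motion, Z=1e6):
    """Distance between two trajectories over their shared frames (cf. Eq. 9).

    Location is normalized by the sampling step R, color by the maximum
    intensity, and motion by the mean motion magnitude of all trajectories;
    trajectories without temporal overlap get a very large distance Z.
    """
    shared = sorted(set(framesA) & set(framesB))
    if not shared:
        return Z
    ia = [framesA.index(f) for f in shared]
    ib = [framesB.index(f) for f in shared]
    locA, colA, velA = (x[ia] for x in descA)
    locB, colB, velB = (x[ib] for x in descB)
    d_loc = np.linalg.norm(locA - locB, axis=1).mean() / R
    d_col = np.linalg.norm(colA - colB, axis=1).mean() / 255.0
    d_mot = np.linalg.norm(velA - velB, axis=1).mean() / mean_motion
    return d_loc + d_col + d_mot              # placeholder combination of the three terms
```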



Figure 2: Illustration of initial super-trajectory generation. (a) The arrows indicate trajectories and the dots indicate the initial location of each trajectory. (b) We roughly divide all the trajectories into groups with a given number of spatial grids .

We first roughly partition trajectories into several non-overlapping clusters, and then iteratively update each partition to obtain optimized trajectory clusters.

The only parameter of our super-trajectory algorithm is the number of spatial grids , which controls the degree of spatial subdivision. The spatial sampling step becomes , where  refers to the product of the height and width of an image frame. The clustering procedure begins with an initialization step in which we divide the input video into several non-overlapping spatiotemporal volumes of size . As shown in Fig. 2, all trajectories are divided into  volumes; a trajectory falls into the volume where it starts. We then need to find a proper cluster number for each trajectory group, thereby offering a reasonable temporal split of the video.

For each trajectory group, we initially estimate the cluster number as , where  indicates the average length of all the trajectories. Then we apply a modified DPC algorithm for generating trajectory clusters, as described in Alg. 1. In Alg. 1-3, if we have , then trajectory  does not have any temporal overlap with the trajectories that have higher local densities. That means trajectory  is the center of an isolated group. If , in Alg. 1-4,

Input: A sub-group of trajectories (), distance matrix via Eq. 9 and cluster number ;
Output: Organized trajectory clusters;
1: Compute local densities via Eq. 6;
2: Compute distance via Eq. 7;
3: Find with , where ;
4: if  then
5:     Select as cluster centers;
6: else
7:     Compute via ;
8:     Select the trajectories with highest values as cluster centers;
9: end if
10: Assign remaining trajectories to cluster centers.
Algorithm 1 DPC for Generating Super-Trajectory Centers

that means there exist more than  unconnected trajectory groups. We then select the trajectories with the highest densities in those unconnected trajectory groups as centers (Alg. 1-5). Otherwise, in Alg. 1-7,8, the trajectories with the highest  values are selected as the cluster centers. The whole initialization process is described in Alg. 2-1,2,3.

Figure 3: Illustration of our super-trajectory generation via iterative trajectory clustering. (a) Frame . (b)-(f) Visualization of super-trajectories in time slice  over different iterations. Each pixel is assigned the average color of all the points along the trajectory to which it belongs. The blank areas correspond to discarded trajectories shorter than four frames.

Based on the above initialization, we group trajectories into super-trajectories according to their spatiotemporal relationships and similarities (see Fig. 3(b)). Next, we iteratively refine the super-trajectory assignments. In this process, each trajectory is assigned to its nearest cluster center. To reduce the search space, we only consider the trajectories that fall into a space-time volume around the cluster center (Alg. 2-7). This yields a significant speed advantage by limiting the search space and thus the number of distance calculations. Once each trajectory has been associated with its nearest cluster center, an update step adjusts the center of each trajectory cluster via Alg. 1 with  (Alg. 2-14,15). We drop very small trajectory clusters and merge their trajectories into the nearest remaining clusters. In practice, we find that 5 iterations of the above refinement are enough to obtain satisfactory performance. Visualizations of super-trajectory generation over different iterations are presented in Fig. 3.

Using Alg. 1, we group all trajectories into non-overlapping clusters, represented as super-trajectories , where . It is worth noting that  (the number of super-trajectories) varies in each iteration of Alg. 2, since we merge small clusters into other clusters. Additionally,  differs across videos even with the same input parameter . This is important, since different videos have different temporal characteristics; thus we only constrain their spatial extent via .

Input: All the trajectories , spatial sampling step ;
Output: Super-trajectory assignments;
/* Initialization */
1: Obtain trajectory groups via spatial sampling step ;
2: Set initial cluster number for each group;
3: Obtain initial cluster centers from each trajectory group via Alg. 1, where ;
4: loop       /* Iterative Assignment */
5:     Set label and distance for each trajectory ;
6:     for each trajectory cluster center  do
7:          for each trajectory falls in a space-time volume around  do
8:               Compute distance between and via Eq. 9;
9:               if  then
10:                    Set , ;
11:               end if
12:          end for
13:     end for       /* Update Assignment */
14:     Set cluster number for each group;
15:     Update for each cluster via Alg. 1.
16: end loop
Algorithm 2 Super-Trajectory Generation
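A simplified view of the assignment/update loop in Alg. 2 is sketched below, assuming the pairwise trajectory distances of Eq. 9 are precomputed in a matrix D and the initial centers come from Alg. 1; for brevity the update step re-picks a medoid inside each cluster instead of re-running DPC, and the local space-time search window used for efficiency is omitted.

```python
import numpy as np

def refine_super_trajectories(D, init_centers, n_iters=5):
    """Iteratively reassign trajectories to their nearest cluster center.

    D is the pairwise trajectory distance matrix (Eq. 9) and init_centers the
    indices of the initial centers. Each iteration assigns every trajectory to
    its nearest center and then replaces each center by its cluster's medoid
    (a simplification of re-running Alg. 1 per cluster). Empty clusters are
    dropped, so the number of super-trajectories may shrink across iterations.
    """
    centers = list(init_centers)
    for _ in range(n_iters):
        labels = np.argmin(D[:, centers], axis=1)        # assignment step
        new_centers = []
        for k in range(len(centers)):                    # update step
            members = np.flatnonzero(labels == k)
            if members.size == 0:
                continue
            sub = D[np.ix_(members, members)]
            new_centers.append(int(members[np.argmin(sub.sum(axis=1))]))
        centers = new_centers
    labels = np.argmin(D[:, centers], axis=1)            # final assignment
    return labels, centers
```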

4 Super-Trajectory based Video Segmentation

In Sec. 3, we clustered sets of compact trajectories into super-trajectories. In this section, we describe our video segmentation approach that leverages super-trajectories.

Given the mask of the first frame, we seek a binary partitioning of pixels into foreground and background classes. Clearly, the annotation can be propagated to the rest of the video using the trajectories that start at the first frame. However, only a few points can be successfully tracked across the whole video, due to occlusion, drift, or unreliable motion estimation. Benefiting from our efficient trajectory clustering approach, super-trajectories are able to spread more annotation information over longer periods. This inspires us to base our label propagation process on super-trajectories.



Figure 4: (a) Input frames. (b) Foreground estimates via Eq. 10. (c) Foreground estimates via our reverse tracking strategy (Eq. 12). (d) Foreground estimates via backward re-occurrence based optimization (Eq. 14). (e) Final segmentation results.

To infer the foreground probability of super-trajectories , we first divide all the trajectories into three categories: foreground trajectories , background trajectories , and unlabeled trajectories , where . The  and  are the trajectories that start at the first frame and are labeled by the annotation mask , while the  are the trajectories that start at any frame other than the first and thus cannot be labeled via . Accordingly, super-trajectories are classified into two categories: labeled ones  and unlabeled ones . A labeled super-trajectory contains at least one labeled trajectory from  or , and its foreground probability can be computed as the ratio between the foreground trajectories and the labeled trajectories it contains:

(10)

For the points belonging to the labeled super-trajectory , their foreground probabilities are set as .

Then we build an appearance model for estimating the foreground probabilities of unlabeled pixels. The appearance model is built upon the labeled super-trajectories  and consists of two weighted Gaussian Mixture Models over RGB color values, one for the foreground and one for the background. The foreground GMM is estimated from all labeled super-trajectories , weighted by their foreground probabilities . The estimation of the background GMM is analogous, with the weights replaced by the background probabilities . The appearance models leverage the foreground and background super-trajectories over many frames, instead of using only the first frame or the labeled trajectories, and can therefore estimate appearance information robustly.
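A minimal sketch of such a weighted appearance model is given below: a diagonal-covariance GMM fitted by EM with per-sample weights, over the RGB colors collected from labeled super-trajectories. The number of components, initialization, and iteration count are placeholder choices, not the configuration used in our implementation.

```python
import numpy as np

def fit_weighted_gmm(X, w, k=5, n_iters=50, eps=1e-6, seed=0):
    """Weighted diagonal-covariance GMM fitted by EM (a minimal sketch)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.asarray(w, float)
    w = w / (w.sum() + eps)
    mu = X[rng.choice(n, k, replace=False)].astype(float)   # init means from data
    var = np.tile(X.var(axis=0) + eps, (k, 1))               # init variances
    pi = np.full(k, 1.0 / k)                                 # mixing weights
    for _ in range(n_iters):
        # E-step: log responsibilities under diagonal Gaussians
        logp = -0.5 * (((X[:, None, :] - mu) ** 2 / var).sum(-1)
                       + np.log(var).sum(-1) + d * np.log(2 * np.pi))
        logr = np.log(pi + eps) + logp
        logr -= logr.max(axis=1, keepdims=True)
        r = np.exp(logr)
        r /= r.sum(axis=1, keepdims=True)
        r *= w[:, None]                                      # apply per-sample weights
        # M-step
        nk = r.sum(axis=0) + eps
        pi = nk / nk.sum()
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ (X ** 2)) / nk[:, None] - mu ** 2 + eps
    return pi, mu, var

def gmm_loglik(X, pi, mu, var):
    """Per-sample log-likelihood under a fitted diagonal GMM."""
    d = X.shape[1]
    logp = -0.5 * (((X[:, None, :] - mu) ** 2 / var).sum(-1)
                   + np.log(var).sum(-1) + d * np.log(2 * np.pi))
    return np.log((np.exp(logp) * pi).sum(axis=1) + 1e-12)

# Usage idea: fit the foreground GMM on colors of labeled super-trajectories
# weighted by their foreground probabilities, the background GMM with weights
# (1 - p); an unlabeled pixel is scored by comparing the two log-likelihoods.
```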

Although the above model successfully propagates more annotation information across the whole video sequence, it still suffers from one difficulty: the model becomes confused when a new object comes into view (see Fig. 4 (b)). To address this, we propose to reverse-track points to exclude newly appearing objects. We compute the ‘source’ of an unlabeled trajectory :

(11)

where  indicates the starting position and  refers to the velocity via Eq. 8. Clearly, if the virtual position is outside the image domain, trajectory  is a latecomer. Trajectories that start outside the view are treated as background. A labeled super-trajectory is redefined as one that contains at least one trajectory from , or , and Eq. 10 is updated as

(12)

Those outside trajectories are also used for training the appearance model in the prior step. According to our experiments, this assumption offers about  performance improvement. Foreground estimation results via our reverse tracking strategy are presented in Fig. 4 (c).
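The reverse-tracking test of Eq. 11 amounts to a simple extrapolation, sketched here under the assumption that a trajectory is summarized by its starting position, starting frame, and mean velocity.

```python
import numpy as np

def starts_outside_view(start_xy, start_frame, mean_velocity, frame_shape):
    """Reverse-track an unlabeled trajectory to its virtual source (cf. Eq. 11).

    The starting position is extrapolated back to the first frame using the
    trajectory's mean velocity; if the virtual source falls outside the image
    domain, the trajectory entered the view later and is treated as background.
    """
    h, w = frame_shape
    src = np.asarray(start_xy, float) - start_frame * np.asarray(mean_velocity, float)
    inside = (0.0 <= src[0] < w) and (0.0 <= src[1] < h)
    return (not inside), src
```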

To re-identify objects after long-term occlusions and to constrain segmentation consistency, we exploit the re-occurrence of objects. As suggested by [10], objects, or regions, often re-occur both in space and in time. Here, we build correspondences among re-occurring regions across distant frames and transport foreground estimates globally. This process operates at the superpixel level, since super-trajectories cannot cover all pixels.

Let  be the superpixel set of the input video. For each region, we search for its  Nearest Neighbors (NNs) as its re-occurring regions using KD-tree search. For a region of frame , we only search for its NNs in previous frames . This backward search strategy biases toward the segmentation results of prior frames, since propagation accuracy degrades over time. Following [10], each region is represented as a concatenation of several descriptors: RGB and LAB color histograms (6 channels × 20 bins), a HOG descriptor (9 cells × 6 orientation bins) computed over a patch around the superpixel center, and the spatial coordinate of the superpixel center. The spatial coordinate is taken with respect to the image center and normalized into , which implicitly incorporates spatial consistency into the NN-search.
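The backward NN-search can be sketched with a KD-tree as follows; region descriptors are assumed to be precomputed as rows of a feature matrix, and a separate tree is built over the regions of all previous frames for each frame (a straightforward, not particularly optimized, variant of the search described above).

```python
import numpy as np
from scipy.spatial import cKDTree

def backward_knn(features, frame_ids, K=15):
    """For each region, find up to K re-occurring regions in earlier frames.

    features: (N, d) array of region descriptors (color histograms, HOG,
    normalized center coordinates); frame_ids: (N,) frame index of each region.
    Returns, for every region, the indices of its backward nearest neighbors.
    """
    features = np.asarray(features, float)
    frame_ids = np.asarray(frame_ids)
    neighbors = [[] for _ in range(len(features))]
    for t in np.unique(frame_ids):
        prev = np.flatnonzero(frame_ids < t)          # candidate regions in earlier frames
        cur = np.flatnonzero(frame_ids == t)
        if prev.size == 0:
            continue                                   # the first frame has no backward NNs
        tree = cKDTree(features[prev])
        k = int(min(K, prev.size))
        _, idx = tree.query(features[cur], k=k)
        if k == 1:
            idx = idx[:, None]
        for r, row in zip(cur, idx):
            neighbors[r] = prev[row].tolist()
    return neighbors
```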

Figure 5: Qualitative segmentation results on three video sequences from DAVIS [28] (from top to bottom: breakdance-flare, dog-agility and libby). It can be observed that the proposed algorithm is applicable to a quite general set of sequences and robust to many challenges.

After NN-search in the feature space, we construct a weight matrix for all the regions :

(13)

Then a probability transition matrix  is built via row-wise normalization of . We define a column vector  that gathers all the foreground probabilities of . The foreground probability of a superpixel is assigned as the average foreground probability of its pixels.

We iteratively update via the probability transition matrix . In each iteration , we update via:

(14)

which is equivalent to updating the foreground probability of a region with the weighted average of its NNs. In each iteration, we keep the foreground probabilities of the points belonging to labeled trajectories unchanged; then we recompute  and update it in the next iteration. In this way, the relatively accurate annotation information of the labeled trajectories is preserved. Additionally, the annotation information is progressively propagated in a forward way, and the super-trajectory based foreground estimates are consistent even across distant frames (see Fig. 4 (d)).

After 10 iterations, the pixels (regions) with foreground probabilities larger than 0.5 are classified as foreground, thus yielding the final binary segments. In Sec. 5.2, we test  and only observe  performance variation; we set  to obtain the best performance.
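The propagation step can be summarized by the sketch below, assuming the NN-restricted weight matrix of Eq. 13 has already been assembled; it row-normalizes the weights into a transition matrix and repeatedly averages the foreground probabilities over each region's neighbors while clamping regions covered by labeled trajectories.

```python
import numpy as np

def propagate_foreground(W, fg_prob, labeled_mask, n_iters=10, thresh=0.5):
    """Iterative foreground propagation among re-occurring regions (cf. Eq. 14).

    W: (N, N) weight matrix, nonzero only for each region's backward NNs;
    fg_prob: initial per-region foreground probabilities; labeled_mask: boolean
    mask of regions covered by labeled trajectories, whose values stay fixed.
    """
    P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)   # transition matrix
    v = np.asarray(fg_prob, float).copy()
    fixed = v[labeled_mask]
    for _ in range(n_iters):
        v = P @ v                       # weighted average over nearest neighbors
        v[labeled_mask] = fixed         # keep labeled estimates unchanged
    return v >= thresh, v
```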

5 Experimental results

Parameter Settings  In Sec. 3.2, we set the number of spatial grids . In Sec. 4, we over-segment each frame into about  superpixels via SLIC [1] for good boundary adherence. For each superpixel, we set the number of NNs . In our experiments, all the parameters of our algorithm are kept fixed.
Datasets  We evaluate our method on two public video segmentation benchmarks, namely DAVIS [28] and SegTrack-V2 [21]. The newly released DAVIS [28] contains 50 video sequences (3,455 frames in total) with pixel-level manual ground truth for the foreground object in every frame. These videos span a wide range of object segmentation challenges such as occlusions, fast motion, and appearance changes. Since DAVIS contains diverse scenarios that break classical assumptions, most state-of-the-art methods fail to produce reasonable segments, as demonstrated in [28]. SegTrack-V2 [21] consists of 14 videos with 24 instance objects and 947 frames; pixel-level masks are provided for every frame.

5.1 Performance Comparison

Video  BVS  FCP  JMP  SEA  TSP  HVS  STV
breakdance-flare 0.727 0.723 0.430 0.131 0.040 0.499 0.835
camel 0.669 0.734 0.640 0.649 0.654 0.876 0.798
car-roundabout 0.851 0.717 0.726 0.708 0.614 0.777 0.904
dance-twirl 0.492 0.471 0.444 0.117 0.099 0.318 0.640
drift-chicane 0.033 0.457 0.243 0.119 0.018 0.331 0.466
horsejump-low 0.601 0.607 0.663 0.498 0.291 0.551 0.768
libby 0.776 0.316 0.295 0.226 0.070 0.553 0.723
mallard-fly 0.606 0.541 0.536 0.557 0.200 0.436 0.650
motorbike 0.563 0.713 0.506 0.451 0.340 0.687 0.749
rhino 0.782 0.794 0.716 0.736 0.694 0.812 0.893
soapbox 0.789 0.449 0.759 0.783 0.247 0.684 0.751
stroller 0.767 0.597 0.656 0.464 0.369 0.662 0.826
surf 0.492 0.843 0.941 0.821 0.814 0.759 0.917
swing 0.784 0.648 0.115 0.511 0.098 0.104 0.765
tennis 0.737 0.623 0.765 0.482 0.074 0.576 0.826
Avg. (entire) 0.665 0.631 0.607 0.556 0.358 0.596 0.736

Table 1: IoU score on a representative subset of the DAVIS dataset [28], and the average computed over all 50 video sequences. The best results are boldfaced.

Quantitative Results  The standard Intersection-over-Union (IoU) metric is employed for quantitative evaluation. Given a segmentation mask  and ground truth , the IoU is computed via . We compare the proposed STV against various state-of-the-art alternatives: BVS [23], FCP [29], JMP [11], SEA [30], TSP [8], HVS [16], JOT [47], and OFL [36].
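For reference, this metric reduces to a few lines over binary masks (a minimal NumPy sketch):

```python
import numpy as np

def iou(seg, gt):
    """Intersection-over-Union between a binary segmentation mask and the ground truth."""
    seg, gt = np.asarray(seg, bool), np.asarray(gt, bool)
    union = np.logical_or(seg, gt).sum()
    return float(np.logical_and(seg, gt).sum()) / union if union else 1.0
```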

In Table 1, we report the IoU score on a representative subset of the DAVIS dataset. As shown, the proposed STV performs best on most video sequences. Moreover, STV achieves the highest average IoU score (0.736) over all 50 video sequences of the DAVIS dataset, which demonstrates a significant improvement over previous methods.

We further report quantitative results on the SegTrack-V2 dataset [21] in Table 2. The results consistently demonstrate the favorable performance of the proposed method.

Method BVS OFL SEA FCP HVS JOT STV
IoU 0.584 0.675 0.453 0.574 0.518 0.718 0.781

Table 2: Average IoU score on the SegTrack-V2 dataset. The best results are boldfaced.

Qualitative Results  Qualitative video segmentation results for video sequences from the DAVIS dataset [28] and SegTrack-V2 [21] are presented in Fig. 5 and Fig. 6. With the first frame as initialization, the proposed algorithm is able to segment objects with fast motion (breakdance-flare and cheetah1) or large shape deformation (dog-agility). It also produces accurate segmentation maps even when the foreground suffers from occlusions (libby).

Figure 6: Qualitative segmentation results on representative video sequences from SegTrack-V2 [21] (from top to bottom: cheetah1, drift1, and girl). The initial masks are presented in the first row.

5.2 Validation of the Proposed Algorithm

In this section, we offer a more detailed exploration of the proposed approach in several aspects on the DAVIS dataset [28]. We test the values of important parameters, verify basic assumptions of the proposed algorithm, evaluate the contributions of each part of our approach, and perform a runtime comparison.

Figure 7: Parameter selection for the number of spatial grids (a) and the number of NNs (b). The IoU score is plotted as a function of each parameter.

Parameter Verification  We study the influence of the only required input parameter of our super-trajectory algorithm in Sec. 3.2, the number of spatial grids . We report the performance by plotting the IoU value of the segmentation results as a function of , where we vary . As shown in Fig. 7 (a), the performance increases with finer super-trajectory clustering in the spatial domain (). However, when we further increase , the final performance does not change noticeably. We set , where the maximum performance is obtained. We then investigate the influence of the parameter , the number of NNs of a region in Sec. 4. We plot the IoU score for varying  in Fig. 7 (b), and set  to achieve the best performance.

Ablation Study  To quantify the improvement obtained with the trajectories proposed in Sec. 3.1, we compare against two baseline trajectory methods: LTM [14] and DAD [39]. LTM is widely used for motion segmentation and DAD shows promising performance for action detection. For a fair comparison, we only replace our trajectory generation with the above two methods, estimate optical flow via LDOF [6], and keep all other parameters fixed. From the comparison results in Table 3, we can see that the proposed trajectory generation approach is preferable to the classical trajectory methods [14, 39].

Method LTM DAD STV
IoU 0.718 0.654 0.736

Table 3: Average IoU score for DAVIS dataset with comparison to two trajectory methods: LTM [14] and DAD [39]. The best results are boldfaced.

6 Conclusions

This paper introduced a video segmentation approach that represents video with super-trajectories. Based on the DPC algorithm, compact trajectories are efficiently grouped into super-trajectories. Occlusions and drifts are naturally handled by our trajectory generation method, which is based on a probabilistic model. We proposed to perform video segmentation at the super-trajectory level. By reverse-tracking points and leveraging region re-occurrence, the algorithm is robust to many segmentation challenges. Experimental results on public video segmentation datasets [28, 21] demonstrate that our approach outperforms current state-of-the-art methods.

References

  • [1] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Susstrunk. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE PAMI, 2012.
  • [2] V. Badrinarayanan, F. Galasso, and R. Cipolla. Label propagation in video sequences. In CVPR, 2010.
  • [3] X. Bai, J. Wang, D. Simons, and G. Sapiro. Video SnapCut: robust video object cutout using localized classifiers. ACM Trans. on Graphics, 2009.
  • [4] W. Brendel and S. Todorovic. Video object segmentation by tracking regions. In ICCV, 2009.
  • [5] T. Brox and J. Malik. Object segmentation by long term analysis of point trajectories. In ECCV, 2010.
  • [6] T. Brox and J. Malik. Large displacement optical flow: Descriptor matching in variational motion estimation. IEEE PAMI, 2011.
  • [7] I. Budvytis, V. Badrinarayanan, and R. Cipolla. Semi-supervised video segmentation using tree structured graphical models. In CVPR, 2011.
  • [8] J. Chang, D. Wei, and J. W. Fisher. A video representation using temporal superpixels. In CVPR, 2013.
  • [9] L. Chen, J. Shen, W. Wang, and B. Ni. Video object segmentation via dense trajectories. IEEE TMM, 2015.
  • [10] A. Faktor and M. Irani. Video segmentation by non-local consensus voting. In BMVC, 2014.
  • [11] Q. Fan, F. Zhong, D. Lischinski, D. Cohen-Or, and B. Chen. JumpCut: Non-successive mask transfer and interpolation for video cutout. ACM Trans. on Graphics, 2015.
  • [12] K. Fragkiadaki, P. Arbelaez, P. Felsen, and J. Malik. Learning to segment moving objects in videos. In CVPR, 2015.
  • [13] K. Fragkiadaki and J. Shi. Detection free tracking: Exploiting motion and topology for segmenting and tracking under entanglement. In CVPR, 2011.
  • [14] K. Fragkiadaki, G. Zhang, and J. Shi. Video segmentation by tracing discontinuities in a trajectory embedding. In CVPR, 2012.
  • [15] K. Fragkiadaki, W. Zhang, G. Zhang, and J. Shi. Two-granularity tracking: Mediating trajectory and detection graphs for tracking under occlusions. In ECCV, 2012.
  • [16] M. Grundmann, V. Kwatra, M. Han, and I. Essa. Efficient hierarchical graph-based video segmentation. In CVPR, 2010.
  • [17] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In CVPR, 2015.
  • [18] M. Keuper, B. Andres, and T. Brox. Motion trajectory segmentation via minimum cost multicuts. In ICCV, 2015.
  • [19] Y. J. Lee, J. Kim, and K. Grauman. Key-segments for video object segmentation. In ICCV, 2011.
  • [20] J. Lezama, K. Alahari, J. Sivic, and I. Laptev. Track to the future: Spatio-temporal video segmentation with long-range motion cues. In CVPR, 2011.
  • [21] F. Li, T. Kim, A. Humayun, D. Tsai, and J. M. Rehg. Video segmentation by tracking many figure-ground segments. In ICCV, 2013.
  • [22] T. Ma and L. J. Latecki. Maximum weight cliques with mutex constraints for video object segmentation. In CVPR, 2012.
  • [23] N. Maerki, F. Perazzi, O. Wang, and A. Sorkine-Hornung. Bilateral space video segmentation. In CVPR, 2016.
  • [24] P. Ochs and T. Brox. Higher order motion models and spectral clustering. In CVPR, 2012.
  • [25] P. Ochs, J. Malik, and T. Brox. Segmentation of moving objects by long term video analysis. IEEE PAMI, 2014.
  • [26] D. Oneata, J. Revaud, J. Verbeek, and C. Schmid. Spatio-temporal object detection proposals. In ECCV, 2014.
  • [27] A. Papazoglou and V. Ferrari. Fast object segmentation in unconstrained video. In ICCV, 2013.
  • [28] F. Perazzi, J. Pont-Tuset, B. McWilliams, L. V. Gool, M. Gross, and A. Sorkine-Hornung. A benchmark dataset and evaluation methodology for video object segmentation. In CVPR, 2016.
  • [29] F. Perazzi, O. Wang, M. Gross, and A. Sorkinehornung. Fully connected object proposals for video segmentation. In ICCV, 2015.
  • [30] S. A. Ramakanth and R. V. Babu. SeamSeg: Video object segmentation using patch seams. In CVPR, 2014.
  • [31] A. Rodriguez and A. Laio. Clustering by fast search and find of density peaks. Science, 2014.
  • [32] N. Shankar Nagaraja, F. R. Schmidt, and T. Brox. Video segmentation with just a few strokes. In ICCV, 2015.
  • [33] J. Shi and C. Tomasi. Good features to track. In CVPR, 1994.
  • [34] N. Sundaram, T. Brox, and K. Keutzer. Dense point trajectories by GPU-accelerated large displacement optical flow. In ECCV, 2010.
  • [35] D. Tsai, M. Flagg, and J. M. Rehg. Motion coherent tracking using multi-label MRF optimization. BMVC, 2010.
  • [36] Y.-H. Tsai, M.-H. Yang, and M. J. Black. Video segmentation via object flow. In CVPR, 2016.
  • [37] A. Vazquez-Reina, S. Avidan, H. Pfister, and E. Miller. Multiple hypothesis video segmentation from superpixel flows. In ECCV, 2010.
  • [38] S. Vijayanarasimhan and K. Grauman. Active frame selection for label propagation in videos. In ECCV, 2012.
  • [39] H. Wang, A. Kläser, C. Schmid, and C.-L. Liu. Action recognition by dense trajectories. In CVPR, 2011.
  • [40] H. Wang and C. Schmid. Action recognition with improved trajectories. In ICCV, 2013.
  • [41] L. Wang, Y. Qiao, and X. Tang. Action recognition with trajectory-pooled deep-convolutional descriptors. In CVPR, 2015.
  • [42] W. Wang, J. Shen, X. Li, and F. Porikli. Robust video object cosegmentation. IEEE TIP, 2015.
  • [43] W. Wang, J. Shen, and F. Porikli. Saliency-aware geodesic video object segmentation. In CVPR, 2015.
  • [44] W. Wang, J. Shen, and L. Shao. Consistent video saliency using local gradient flow optimization and global refinement. IEEE TIP, 2015.
  • [45] W. Wang, J. Shen, L. Shao, and F. Porikli. Correspondence driven saliency transfer. IEEE TIP.
  • [46] W. Wang, J. Shen, R. Yang, and F. Porikli. Saliency-aware video object segmentation. IEEE PAMI, 2017.
  • [47] L. Wen, D. Du, Z. Lei, S. Z. Li, and M.-H. Yang. JOTS: Joint online tracking and segmentation. In CVPR, 2015.
  • [48] F. Xiao and Y. Jae Lee. Track and segment: An iterative unsupervised approach for video object proposals. In CVPR, 2016.
  • [49] C. Xu, C. Xiong, and J. J. Corso. Streaming hierarchical video segmentation. In ECCV, 2012.
  • [50] D. Zhang, O. Javed, and M. Shah. Video object segmentation through spatially accurate and temporally dense extraction of primary object regions. In CVPR, 2013.
  • [51] F. Zhong, X. Qin, Q. Peng, and X. Meng. Discontinuity-aware video object cutout. ACM Trans. on Graphics, 2012.