Self-Supervised Learning of Video-Induced Visual Invariances


Abstract

We propose a general framework for self-supervised learning of transferable visual representations based on \glsvivi. We consider the implicit hierarchy present in the videos and make use of (i) frame-level invariances (e.g. stability to color and contrast perturbations), (ii) shot/clip-level invariances (e.g. robustness to changes in object orientation and lighting conditions), and (iii) video-level invariances (semantic relationships of scenes across shots/clips), to define a holistic self-supervised loss. Training models using different variants of the proposed framework on videos from the \glsyt8m data set, we obtain state-of-the-art self-supervised transfer learning results on the 19 diverse downstream tasks of the \glsvtab, using only 1000 labels per task. We then show how to co-train our models jointly with labeled images, outperforming an ImageNet-pretrained ResNet-50 by 0.8 points with 10× fewer labeled images, as well as the previous best supervised model by 3.7 points using the full ImageNet data set.


1 Introduction

Supervised deep learning necessitates the collection and manual annotation of large amounts of data, which is often expensive, hard to scale, and may require domain expertise (e.g., in the context of medical data). Expensive data annotation hence presents a bottleneck which impedes the application of deep learning methods to diverse, previously under-explored, problems. Learning transferable visual representations, namely representations obtained by training a model on one task (or collection of tasks) which can then be used as a starting point for multiple unseen downstream tasks using few samples, is therefore a key research challenge [65].

An emerging body of work based on self-supervision has demonstrated that it is possible to learn such transferable visual representations. The idea is to carefully construct a pretext task which does not rely on manual annotation, yet encourages the model to compute useful features of the input. Videos are a rich source of such pretext tasks, as they capture variations of instances over time that are not present in images. In addition, there is an abundance of videos available on the Internet covering almost any imaginable domain. As a result, and with the recent emergence of research video data sets [1], videos have been investigated in the context of self-supervision (for example, [37, 60, 59, 27, 61, 69, 38, 48, 39, 3, 2]). We believe that a holistic approach which captures these diverse efforts can be coupled with image-based pretext tasks to further improve the performance of self-supervised models.

Method Mean Nat. Spec. Str.
Ex-ImageNet 59.5 50.5 81.4 56.4
VIVI-Ex(4) 62.5 (+3.0) 55.9 80.9 59.1
VIVI-Ex(4)-Big 63.3 (+3.8) 57.5 81.0 59.5
Semi-Ex-10% [65] 65.3 70.2 81.9 52.7
VIVI-Ex(4)-Co(10%) 67.2 (+1.9) 63.3 82.6 62.9
Sup-100% [65] 66.4 73.5 82.5 52.1
Sup-Rot-100% [65] 68.0 (+1.6) 73.6 83.1 55.5
VIVI-Ex(4)-Co(100%) 69.4 (+3.0) 69.9 83.3 62.1
VIVI-Ex(4)-Co(100%)-Big 71.7 (+5.3) 72.5 84.3 64.7
Table 1: Mean testing accuracy and per-category mean accuracy for models fine-tuned on the 19 diverse downstream tasks (based on Natural, Specialized, Structured data sets) from the \glsvtab benchmark [65], using only 1000 labels per task. The proposed unsupervised models (VIVI-Ex(4) / VIVI-Ex(4)-Big) trained on raw \glsyt8m videos and variants co-trained with 10%/100% labeled ImageNet data (VIVI-Ex(4)-Co(10%) / VIVI-Ex(4)-Co(100%)), outperform the corresponding unsupervised (Ex-ImageNet), semi-supervised (Semi-Ex-10%) and fully supervised (Sup-100%, Sup-Rot-100%) baselines by a large margin.

In this work we propose a versatile video-based self-supervision framework for learning image representations. We divide a video data set into its natural hierarchy of frames, shots, and videos. The intuition is that the model can leverage (1) the frames to learn to be robust to color perturbations or contrast changes, (2) the shot information to be robust to rigid and non-rigid transformations of objects in a scene, and that (3) explicitly accounting for the video-level context should encourage the model to capture semantic relationships of scenes across shots/clips. In contrast to individual frame, shot, or video-level self-supervision objectives, our holistic approach can yield a representation that transfers better to a large set of downstream tasks. As an additional benefit, our approach does not need to pre-compute optical flow or motion segmentation masks, nor does it rely on object tracking.

In contrast to most previous work, our goal is to learn feature representations for downstream image classification as opposed to action recognition. We train the proposed model on the \acrfullyt8m data set (without using video-level labels) and show that this approach leads to state-of-the-art self-supervised results on the 19 diverse downstream tasks of the \acrfullvtab [65]. We then show how to co-train the model jointly with labeled images, outperforming an ImageNet-pretrained ResNet-50 with 10× fewer labeled images. We also investigate the robustness of our co-trained models to natural perturbations as induced by the variations across nearby frames in videos [51].

In summary, our contributions are:

  • We propose a versatile framework to learn image representations from non-curated videos by learning frame, shot, and video-level invariances.

  • We train a variety of models on millions of videos from the \glsyt8m data set and achieve an absolute improvement of up to 3.8% over image/frame-based baselines across the 19 diverse tasks of the \glsvtab benchmark [65], setting a new state of the art among unsupervised methods.

  • We augment the \glsssl training framework with a supervised classification loss using data from ImageNet. The resulting models outperform an ImageNet-pretrained network using only 10% of the labeled ImageNet images (and no additional unlabeled ones), and achieve a new state of the art when co-trained with the full ImageNet data set, outperforming the best previous supervised result by 3.7 points.

2 Related work

Self-supervised learning of image representations

\glsssl is an active topic of research in the computer vision community. Recent methods [63, 24, 4, 42, 23, 56] have advanced the state of the art in learning representations that linearly separate the 1000 ImageNet categories [47]. Prior work has explored diverse self-supervision cues such as spatial context [11], colorization [67], and equivariance to transformations [17, 41], alongside unsupervised techniques such as clustering [6, 68], generative modelling [13, 31], and exemplar learning [14].

Learning image representations from videos

More relevant to our contribution is the body of literature on \glsssl of image representations from videos. The temporal context of frames in video data has been widely exploited. For example, [37, 34, 15, 5, 60] make use of the order in which frames appear in a video to learn representations. Other forms of temporal context include its combination with spatial context [59] and the use of spatio-temporal co-occurrence statistics [27]. Orthogonal to these efforts, which attempt to be selective of the differences between frames, prior work along the lines of slow feature analysis [61, 69] also exploited videos as a means of learning invariant representations. Temporal coherence was exploited in a co-training setting by early work [38] on learning \glsplcnn for visual object recognition and face recognition. Slow and steady feature analysis [29] attempts to learn representations that exhibit higher-order temporal coherence. The object deformation signal underlying such coherence can be separated from global camera motion by tracking objects using unsupervised methods; these tracked patches have been used to learn image representations [58]. Tracking in this context may be replaced by spatio-temporally matched region proposals [16].

Some of the earliest work making use of temporal consistency used future frame prediction [53] as a pretext task. A more challenging version of this task is single frame future synthesis. The ambiguity in single-frame prediction has been side-stepped via time-agnostic prediction [28], motion segmentation [44], cross-pixel matching [35], and by giving the model a motion cue as input [66]. The latter two require distilling the temporal information from video pixels into optical-flow fields.

Optical-flow has been treated as a separate modality from the RGB pixels in a multi-modal setting [48, 56]. Even beyond optical-flow, videos, as found on the Internet, are inherently multi-modal, as they contain audio as well as subtitles. Thus relevant here are multi-modal learning methods that combine vision and audio [39, 8, 43, 3], and vision and text [54], to achieve better performance than their uni-modal baselines. In a robotics setting, RGB pixels may be considered together with ego-motion [2, 30]. Time-contrastive networks [50] consider two views of the same action to learn view-invariant representations, also in a robotics setting.

Doersch et al. [12] show that motion-based \glsssl may be combined with other self-supervision cues, namely exemplar, colorization, and spatial context, to pre-train models that perform better than each of these cues individually. Taking inspiration from their success, our framework presents a synergistic combination of \glsssl methods.

Figure 1: (left) Illustration of the frame, shot, and video-level encoding pipeline used in this work. Each frame is encoded using the frame encoder $f$. The frame embeddings are then aggregated for each shot using a pooling function $P$ to obtain shot embeddings $e_{i,j}$. Predictions on the video level are then computed using the prediction functions $g$. (right) Intuitively, we want to choose frame/shot- and video-level losses that embed frames from the same shot close to each other and frames from different shots or videos far apart, while encouraging shot embeddings from the same video to be predictive of each other using (simple) prediction functions.1

Transferable representations

Fine-tuning models trained on ImageNet labels is a popular strategy for transferring representations to new tasks [25]. Kornblith et al. [33] show that better supervised models tend to transfer better when fine-tuned. Other supervised learning benchmarks focus on performance on multiple data sets, either via transfer learning, meta-learning, or multitask learning [46, 57]. In the representation learning literature, models are usually evaluated in-domain, typically on ImageNet [66, and references therein]. However, self-supervised models are now performing well on tasks such as surface normal estimation, detection, and navigation [18]. The \glsvtab evaluates the transferability of representations beyond object classification in the natural image domain to many domains and task semantics such as counting and localization [65]. Similarly, recent developments in \glsnlp have led to representations that transfer effectively to many diverse tasks [10].

3 Learning video-induced visual invariances

We start by giving an overview of the proposed framework in Sec. 3.1, and discuss frame/shot-level and video-level losses in detail in Sec. 3.2 and Sec. 3.3, respectively.

3.1 Overview

We consider a data set containing $N$ videos, each composed of multiple shots. For simplicity of exposition we assume that each video consists of $S$ shots, and each shot has $K$ frames. If we denote the $k$-th frame in the $j$-th shot of video $i$ by $x_{i,j,k}$, we can write the data set as $\{x_{i,j,k}\}_{i=1,j=1,k=1}^{N,S,K}$. Our framework consists of a frame encoder $f$, a frame embedding pooling function $P$, and one or multiple shot-level prediction functions $g$. The embedding $e_{i,j}$ of the $j$-th shot in video $i$ is computed by feeding each frame through the frame encoder and applying the pooling function,

$e_{i,j} = P\big(f(x_{i,j,1}), \ldots, f(x_{i,j,K})\big).$   (1)

The pooling function $P$ can have different forms, ranging from simple average pooling to attention pooling that takes the values of the individual frame embeddings into account. The shot-level prediction functions $g$ are trained to predict pretext (label-free) targets from the shot embeddings.
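For concreteness, the following is a minimal NumPy sketch of Eq. (1) with parameter-free average pooling; the toy encoder is only a stand-in for the learned ResNet-50 frame encoder $f$ and is not part of the actual implementation.

```python
import numpy as np

def shot_embedding(frames, frame_encoder):
    """Compute a shot embedding e_{i,j} from the K frames of one shot (Eq. 1),
    using parameter-free average pooling as the pooling function P."""
    frame_embeddings = np.stack([frame_encoder(x) for x in frames])  # (K, D)
    return frame_embeddings.mean(axis=0)                             # (D,)

# Toy usage: a fixed random projection stands in for the learned frame encoder.
rng = np.random.default_rng(0)
proj = rng.standard_normal((32 * 32 * 3, 128))
toy_encoder = lambda x: x.ravel() @ proj
shot = rng.uniform(size=(8, 32, 32, 3))        # 8 consecutive frames of one shot
print(shot_embedding(shot, toy_encoder).shape)  # (128,)
```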

More formally, to learn invariances at different levels of abstraction, we define a frame/shot-level loss $\mathcal{L}_{FS}$ and a video-level loss $\mathcal{L}_{V}$. The frame/shot-level loss takes the form

$\mathcal{L}_{FS} = \frac{1}{NS} \sum_{i=1}^{N} \sum_{j=1}^{S} \ell_{FS}\big(\{f(x_{i,j,k})\}_{k=1}^{K},\, y_{i,j}\big),$   (2)

where the $y_{i,j}$ are shot-level pretext labels and $\ell_{FS}$ is a shot-level loss that can be instantiated as only acting on the frame level, in the sense of decomposing into a sum over the frames (see Sec. 3.2 for concrete instantiations of losses). The video-level loss is given by

$\mathcal{L}_{V} = \frac{1}{N} \sum_{i=1}^{N} \ell_{V}\big(g(e_{i,1}, \ldots, e_{i,S}),\, z_i\big),$   (3)

where the $z_i$ are video-level pretext labels and $\ell_{V}$ is a video-level loss (see Sec. 3.3 for concrete losses). The total loss is then given by $\mathcal{L} = \mathcal{L}_{FS} + \lambda \mathcal{L}_{V}$, where $\lambda > 0$ balances the shot-level and video-level losses. $\mathcal{L}$ is minimized jointly w.r.t. the parameters of $f$, $P$, and the prediction functions $g$.
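The sketch below illustrates how the two loss levels are combined; frame_shot_loss and video_loss are placeholders for $\ell_{FS}$ and $\ell_{V}$ (e.g., the exemplar and InfoNCE losses of Secs. 3.2 and 3.3), not the exact implementation.

```python
import numpy as np

def total_loss(per_shot_frame_embeddings, per_video_shot_embeddings,
               frame_shot_loss, video_loss, lam=1.0):
    """L = L_FS + lambda * L_V for one mini-batch of videos.

    per_shot_frame_embeddings: list over videos of arrays (S, K, D), one row of
        K frame embeddings per shot.
    per_video_shot_embeddings: list over videos of arrays (S, D) with the pooled
        shot embeddings e_{i,1}, ..., e_{i,S}.
    frame_shot_loss: callable mapping the (K, D) frame embeddings of one shot
        to a scalar (the per-shot loss of Eq. 2).
    video_loss: callable mapping the (S, D) shot embeddings of one video
        to a scalar (the per-video loss of Eq. 3).
    """
    l_fs = np.mean([frame_shot_loss(shot) for video in per_shot_frame_embeddings
                    for shot in video])
    l_v = np.mean([video_loss(shots) for shots in per_video_shot_embeddings])
    return l_fs + lam * l_v
```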

Co-training with labeled images

We also consider the case where one has access to a limited number of labeled images in addition to the video data. Combining image-based \glsssl losses with a supervised loss applied to a subset of the images was studied previously by [64]. They found that this approach leads to state-of-the-art semi-supervised models, and improves the performance of supervised models when all images are labeled. Here, we consider the related setup where the \glsssl loss is computed on video data, and the supervised loss is based on image data from a different data set. Specifically, we additionally apply $f$ followed by a linear classifier to mini-batches of labeled images and compute the cross-entropy loss $\mathcal{L}_{\text{sup}}$ between the predictions and the image labels. The total loss is then computed as $\mathcal{L} + \lambda_{\text{sup}} \mathcal{L}_{\text{sup}}$, where $\lambda_{\text{sup}} > 0$ balances the contributions of the self-supervised and supervised loss terms.
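A correspondingly small sketch of the co-training objective follows; the cross-entropy helper and argument names are illustrative, not the actual training code.

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean softmax cross-entropy; logits: (B, 1000), labels: (B,) class ids."""
    logits = logits - logits.max(axis=1, keepdims=True)              # stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(log_probs[np.arange(len(labels)), labels])

def co_training_loss(ssl_loss, image_logits, image_labels, lam_sup=1.0):
    """Self-supervised video loss plus weighted supervised image loss."""
    return ssl_loss + lam_sup * cross_entropy(image_logits, image_labels)
```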

3.2 Learning shot-level invariances

To define the frame/shot-level loss $\mathcal{L}_{FS}$, we propose to build on any \glsssl loss designed for images, such as classifying exemplars [14], solving jigsaw puzzles of image patches [40], or predicting image rotations [17]. For learning shot-induced invariances, one can take two approaches:

  1. apply the image-based \glsssl loss independently to each frame, so that the shot-induced invariances are learned implicitly through the combination of the pooling function and the video-level prediction task, or

  2. explicitly ensure that the embeddings of the frames from the same shot are similar by adding a triplet or a contrastive loss to the image-based \glsssl loss.

In this work, in the spirit of approach (1) we consider \glsssl by rotation prediction [17] without an additional explicit shot-level loss. To explore approach (2) we rely on a variant of exemplar \glsssl [14], where each image is associated with a different class, and a feature extractor is trained to classify each image into its own class after heavily augmenting it (random cropping, rotation, contrast, and color shifts). Following [11, 32], to scale this approach to hundreds of millions of images (frames), we employ a triplet loss [49] encouraging augmentations of the same image to be close and augmentations of different images to be far apart. To learn invariances from different frames of the same shot, rather than picking a random frame from the shot and applying random augmentations to it, we pick consecutive frames from the same shot and augment each frame once. As a result, our feature extractor learns both the invariances induced by temporal variation in video and those induced by the data augmentation.
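To make approach (2) concrete, the sketch below builds exemplar triplets from frames of the same shot and scores them with a plain triplet loss; the actual training additionally applies heavy data augmentation and uses semi-hard negative mining [49], both omitted here.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Plain triplet loss on embeddings (no semi-hard mining)."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + margin, 0.0)

def shot_exemplar_loss(shot_embeddings, margin=0.5):
    """Treat the frames of one shot as augmentations of the same exemplar:
    pull two frames of the same shot together, push a frame from another
    shot away.

    shot_embeddings: array (num_shots >= 2, num_frames >= 2, D).
    """
    n_shots = shot_embeddings.shape[0]
    losses = []
    for s in range(n_shots):
        anchor, positive = shot_embeddings[s, 0], shot_embeddings[s, 1]
        negative = shot_embeddings[(s + 1) % n_shots, 0]  # frame of another shot
        losses.append(triplet_loss(anchor, positive, negative, margin))
    return float(np.mean(losses))
```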

3.3 Learning video-level invariances

In contrast to action recognition networks, which learn video representations that have to be discriminative w.r.t. changes between frames, our framework targets learning representations that are invariant to such changes. Nevertheless, discriminative tasks designed for learning action recognition representations, such as predicting whether a sequence of frames is played forward or backward [60], verifying whether the frames are ordered or shuffled [37], or predicting features corresponding to future frames [20], can still help to learn abstract, transferable representations when applied to sensibly chosen groups of aggregated frames. Following this intuition, our framework allows applying any of these tasks to shot embeddings, rather than to individual frame embeddings. For example, determining whether a sequence of shot embeddings is played forward or backward requires understanding the high-level semantics of the scene and objects in each shot. Similarly, predicting future shot embeddings from past ones encourages learning an abstract summary of each shot. In this work we explore exactly these two approaches.

Figure 2: \glsvtab 1000 example mean score and per-category mean score of exemplar \glsssl from \glsyt8m frames (Ex-YT-F), with additional shot-level self-supervision (Ex-YT-S), the proposed method with InfoNCE video-level prediction across 4 shots (VIVI-Ex(4)), and additionally a 3× wider architecture (VIVI-Ex(4)-Big). Both shot- and video-level losses improve the overall score, with the gains coming mostly from higher mean accuracy on the natural and structured subsets.

For shot order prediction, we randomly reverse the order of the shot embeddings and train a prediction function $g$ to predict the shot order from the concatenated shot embeddings, i.e., $\ell_V$ in (3) is the cross-entropy loss and $z_i$ is $1$ if the sequence of shot embeddings is reversed and $0$ otherwise. To train $g$ to predict future shot embeddings, we rely on noise-contrastive estimation [19]. Specifically, we use the embeddings of the first $j$ shots of video $i$ to obtain a prediction $\hat{e}_{i,j+m}$ of the embedding of the shot $m$ steps in the future. Then, $\ell_V$ should quantify the quality of the prediction, which we accomplish using the InfoNCE loss [42]

$\mathcal{L}_{\text{NCE}} = -\frac{1}{N} \sum_{i=1}^{N} \log \frac{\exp\!\big(\phi(\hat{e}_{i,j+m}, e_{i,j+m})\big)}{\sum_{i'=1}^{N} \exp\!\big(\phi(\hat{e}_{i,j+m}, e_{i',j+m})\big)},$   (4)

where the scoring function $\phi$ is trained to assign high scores to pairs of shot embeddings from the same video, and low values to embeddings computed from different videos.2 Note that the terms in (4) can, up to an additive constant, be seen as the cross-entropy loss of an $N$-class classification problem in which the correct label is $i$, so that we could reformulate the loss in the form (3) using class labels $z_i = i$.
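For illustration, the following is a minimal NumPy version of the loss in (4) for a single prediction step, with a plain dot product standing in for the learned scoring function $\phi$ (an assumption made only for this sketch).

```python
import numpy as np

def info_nce(predicted, targets):
    """InfoNCE loss of Eq. (4): for each video, the predicted future shot
    embedding should score highest against its own target, with the targets
    of the other videos in the batch acting as negatives.

    predicted: (N, D) predicted future shot embeddings, one per video.
    targets:   (N, D) true shot embeddings at the predicted time step.
    """
    scores = predicted @ targets.T                       # (N, N), stand-in for phi
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                  # correct "class" is i

# Toy usage: 4 videos with 128-dimensional shot embeddings.
rng = np.random.default_rng(0)
e_true = rng.standard_normal((4, 128))
e_hat = e_true + 0.1 * rng.standard_normal((4, 128))     # good predictions
print(info_nce(e_hat, e_true))
```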

4 Experimental setup

Our experiments encompass two training phases, which we refer to as upstream and downstream. First, in the upstream phase, we train our models on video (and image) data using the methods proposed in the previous section. Then, we fine-tune those trained models on a set of downstream problems in the second phase. We focus on the challenging scenario in which the downstream data is limited, and use only 1000 examples for each downstream data set [65]. To understand the limits of the proposed approaches we also experimented with using the full downstream data sets. We provide these results in the supplementary material, as our main focus is the low-data regime.

Upstream training

We train on the videos in the \glsyt8m data set [1], which consists of millions of YouTube video IDs with over 3800 visual entities. We downloaded several million of these videos, sampled frames at a fixed rate, and split the videos into training and testing sets. We further split the videos into shots using a simple strategy based on color histograms, similarly to [36]. We also present results of several baseline approaches applied to a data set obtained by selecting a single random frame from each video, which we refer to as \glsyt8m frames.
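As an illustration of this kind of shot splitting, the sketch below detects shot boundaries from color-histogram differences between consecutive frames; the histogram binning and threshold are illustrative and not the values used for \glsyt8m.

```python
import numpy as np

def color_histogram(frame, bins=8):
    """Joint RGB histogram of a uint8 frame (H, W, 3), normalized to sum to 1."""
    hist, _ = np.histogramdd(frame.reshape(-1, 3).astype(float),
                             bins=(bins, bins, bins), range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def shot_boundaries(frames, threshold=0.5):
    """Return the frame indices at which a new shot starts."""
    boundaries = [0]
    prev = color_histogram(frames[0])
    for t in range(1, len(frames)):
        cur = color_histogram(frames[t])
        if 0.5 * np.abs(cur - prev).sum() > threshold:  # total-variation distance
            boundaries.append(t)
        prev = cur
    return boundaries
```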

Furthermore, in the co-training experiments we also use (a class-balanced fraction of) the ImageNet (ILSVRC-2012) training set [9], which contains 1.2M images classified into 1000 categories.

Downstream evaluation

To evaluate the learned representations, we use the data sets and follow the protocol of the \glsvtab [65]. This protocol consists of 19 data sets categorized into three groups as follows (details and references in the appendix).

  • Natural — Seven classical image classification problems on natural images (Caltech101, CIFAR-100, DTD, Flowers102, Pets, Sun397, and SVHN).

  • Specialized — Image classification on data captured using specialist equipment, from the remote-sensing (Resisc45, EuroSAT) and medical (Patch Camelyon, Diabetic Retinopathy) domains.

  • Structured — Eight tasks to predict properties of the objects appearing in an image (how many there are, their relative position and distance), on both rendered (Clevr, dSprites, SmallNORB, DMLAB) and real (KITTI) data.

For each of these 19 data sets and each model that we propose, we launch a sweep over 4 hyper-parameter configurations (learning rates and schedules, as in the lightweight mode of [65]). Then, we choose the configuration with the best validation accuracy averaged over these 19 tasks. The best-performing models were then re-trained for each data set on 1000 random points from the union of the train and validation sets and evaluated on the testing set. To account for the randomness coming from the initialization of the fresh classification head and the order in which the data appears, we repeat this evaluation scheme three times and report the median test set accuracy (following [65]).
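A compact sketch of this protocol is given below; finetune_and_eval is a hypothetical helper that fine-tunes the pretrained model on one task under a given hyper-parameter configuration and returns the accuracy on the requested split.

```python
import statistics

def vtab_1000_evaluation(finetune_and_eval, tasks, sweep, num_repeats=3):
    """Pick the sweep configuration with the best mean validation accuracy
    across tasks, then report the median test accuracy of `num_repeats`
    re-trainings on 1000 examples per task."""
    best = max(sweep, key=lambda cfg: statistics.mean(
        finetune_and_eval(task, cfg, split="val") for task in tasks))
    return {task: statistics.median(
        finetune_and_eval(task, best, split="test") for _ in range(num_repeats))
        for task in tasks}
```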

Figure 3: Comparison of the \glsvtab 1000 example mean score of the proposed method with exemplar frame/shot-level \glsssl and InfoNCE video-level prediction across 4 shots (VIVI-Ex(4), and with a 3× wider architecture (VIVI-Ex(4)-Big)), with ImageNet-based exemplar (Ex-ImageNet) and rotation (Rot-ImageNet) baselines, as well as the multi-task \glsssl model from [12]. Our models outperform all baselines on average, and in particular on the structured data sets.

Architectures and training details

The frame encoder $f$ is modeled using the ResNet-50 v2 [22] architecture with BatchNorm [26]. We also investigated the effect of model capacity by widening the network by a factor of three. To avoid mismatch in batch statistics between the two data sources, in the co-training experiments we replace BatchNorm with GroupNorm [62] and also standardize [45] the weights of the convolutions. We construct mini-batches by sampling either 2 or 4 consecutive shots from each video (dropping those videos with fewer shots), and randomly select 8 consecutive frames for exemplar-based shot-level \glsssl and 4 consecutive frames for rotation-based frame-level \glsssl. For the $\mathcal{L}_{\text{NCE}}$ loss, when we sample 2 shots, we predict the embedding of one from the embedding of the other using a \glsmlp; the scoring function $\phi$ in (4) compares the two shot embeddings after mapping each through a \glsmlp with a single hidden layer with 256 units. In the experiments with 4 shots, we use a \glslstm prediction function with 256 hidden units to predict every shot embedding from the previous ones. We use temporal order prediction only together with exemplar-based \glsssl and for data with 2 shots per video, relying on a single-hidden-layer \glsmlp with 512 hidden units as the prediction function. Throughout, we rely on (parameter-free) average pooling for $P$. For both frame and shot-level \glsssl approaches we use the augmentation mechanism from [55]. For models co-trained with a supervised loss based on a fraction of ImageNet we additionally use the same HSV-space color randomization as [64].

We also perform experiments where we replace the augmentation mechanism from [55] with \glsaa, an augmentation policy learned from the full labeled ImageNet data set using a reinforcement learning algorithm. While it can cause label leakage when applied to unsupervised methods, we investigate it to understand how these automatically learned invariances compare to the label-free invariances induced by shot-based augmentation.

In all cases we choose the batch size such that the product of the number of videos and the number of shots per mini-batch is 2048 (i.e., 1024 videos when sampling 2 shots per video, and 512 videos when sampling 4). We train all unsupervised models for 120k iterations, using \glssgd with a learning rate of 0.8 and momentum 0.9, multiplying the learning rate by 0.1 after 90k and 110k iterations. The co-trained models are trained for 100k iterations, and the schedule as well as the batch size is chosen depending on the amount of labeled data used. For the weight $\lambda$ (and $\lambda_{\text{sup}}$ for co-trained models) we sweep over at most four different values. A complete description of all hyper-parameters and architectures can be found in the appendix.
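For concreteness, the step schedule of the unsupervised models (0.8 base learning rate, 120k iterations, ×0.1 decays at 90k and 110k iterations, with the linear warm-up lengths listed in Table 5) can be written as follows.

```python
def learning_rate(step, base_lr=0.8, warmup_steps=5_000,
                  decay_steps=(90_000, 110_000), decay_factor=0.1):
    """Piecewise-constant learning-rate schedule with linear warm-up."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    lr = base_lr
    for boundary in decay_steps:
        if step >= boundary:
            lr *= decay_factor
    return lr

# Learning rate at a few points of the 120k-iteration schedule.
for s in (1_000, 50_000, 95_000, 115_000):
    print(s, learning_rate(s))
```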

Baselines

We train a rotation and an exemplar baseline model on ImageNet and on a data set obtained by sampling one frame from each video in our training set (\glsyt8m frames). We use the same training protocol as [32] for the respective methods, except that we increase the batch size to 2048 and stretch the schedule to 120k iterations to be comparable to our methods. Furthermore, for the exemplar-based model we ablate the video-level prediction task, which amounts to treating the shots independently and only using the frames from the same shot as exemplars. In addition, we consider 3 baselines from [65]: a vanilla ResNet-50 v2 pretrained on ImageNet, the exemplar model trained on ImageNet with 10% class-balanced labeled data from [64] (Semi-Ex-10%), which achieves state-of-the-art semi-supervised accuracy on ImageNet, and the rotation model trained on ImageNet with all labels [64] (Sup-Rot-100%).

We further compare against three prior works that learn image representations from video data: the \glsms and \glsmt-ssl models from [12], and the \glsti model from [59]. \glsms learns representations based on a foreground-background segmentation pretext task, where the segmentation maps are derived using an off-the-shelf offline video segmentation algorithm. \glsmt-ssl combines \glsms and three other self-supervision objectives to train a multi-task network; its representation derives from a combination of colorization, spatial context, and motion segmentation cues. The \glsms and \glsmt-ssl models fine-tuned in this evaluation have a ResNet-101 [21] architecture up to block3. \glsti builds a graph combining intra-instance and inter-instance edges and exploits transitivity to learn invariant representations, where the intra-instance edges are obtained by tracking patches in videos. We fine-tune their publicly available pre-trained VGG-16 [52] checkpoint. We refer the reader to the supplementary material for implementation details regarding the evaluation of these baselines.

5 Results

In this section we focus on the low sample-size regime, i.e., when each downstream data set consists of 1000 samples, and discuss the performance on the full data sets in the supplementary material (Table 4). In brief, the ranking of the methods according to the \glsvtab mean score using all examples is similar to the ranking according to the \glsvtab 1000 example mean score. Further, here we only present the best configuration (w.r.t. the number of shots and choice of prediction function) for each of our Video-Induced Visual Invariance (VIVI) learning approaches, and defer the results for other configurations to the supplementary material (Table 4).

5.1 Self-supervised learning

Exemplar

Fig. 2 shows the results for our models and the exemplar-based baselines. The baseline trained on \glsyt8m frames only (Ex-YT-F), without leveraging any temporal information, achieves a mean \glsvtab 1000 example score of 59.4%. Exploiting the temporal variations within shots to create exemplars (Ex-YT-S) increases that score by about 2 points. Further, adding the video-level prediction loss on top adds another 1.2 points. It hence appears that leveraging both shot- and video-level invariances using our approach leads to significant gains over just using frames. In addition, increasing the model capacity (using a 3× wider model) leads to another increase of 0.8 points. Note that this model is only 2 points behind the semi-supervised model from [64] (Semi-Ex-10%), which uses 128k labeled images from ImageNet for training (cf. Table 1). The gains mostly come from improvements on the natural and structured data sets, whereas the video-level losses do not notably improve the score on the specialized data sets (see Fig. 2). We observed the largest gains when using $\mathcal{L}_{\text{NCE}}$ with 4 shots, and more modest improvements for 2 shots as well as for temporal order prediction with 2 shots (see Table 4 in the supplementary material).

Rotation

Similarly to the exemplar experiments, we observe gains of 2 points in the mean \glsvtab 1000 example score over the frame-based baseline (Rot-YT-F) when using a video-level prediction task (VIVI-Rot(4); see Table 2). The gains are smaller for 2 than for 4 shots when combined with $\mathcal{L}_{\text{NCE}}$, and temporal order prediction was not effective when combined with rotation prediction as the frame-level loss for both 2 and 4 shots. We emphasize that the frame encoder trained via rotation \glsssl on \glsyt8m frames performs considerably worse than the same model trained on ImageNet. This is not surprising, as ImageNet images are carefully cropped and the data has a balanced class distribution. By contrast, frames sampled from \glsyt8m are less balanced in terms of content and arguably provide many shortcuts for the rotation task, such as black borders, overlaid logos, or frames with text on a uniform background, or they might lack any orientation cues.

Effect of AutoAugment (AA)

Table 2 shows the effect of using \glsaa [7] instead of the augmentation mechanism from [55]. The effect is strongest on the frame-based baselines, increasing the \glsvtab 1000 example score by at least 2 points, and weakest on models involving shot- and video-level losses, where the increase is between 0.5 and 1.5 points. Hence, the invariances induced by \glsaa are, to some degree, complementary to the proposed shot- and video-level losses. However, note that \glsaa is trained on labeled ImageNet images, which might introduce label leakage. Hence, methods relying on \glsaa should not be considered fully unsupervised.

Exemplar Rotation
yt-f yt-s vivi(4) vivi(4)-Big yt-f vivi
w/o aa 59.4 61.3 62.5 63.3 56.9 58.9
aa 61.8 62.8 63.0 64.4 58.9 59.9
Table 2: Effect of replacing the data augmentation mechanism from [55] with \glsaa. Video-induced invariances learned by our method are complementary to AA in the sense that applying AA to different variants of our method consistently leads to improvements.

Comparison with related work

Fig. 3 presents a summary of the comparison with baselines. We omit MS and \glsti as they obtain a \glsvtab 1000 example mean score comparable to relative patch location prediction [11] and jigsaw [40] \glsssl trained on ImageNet. These two methods have a significantly lower \glsvtab 1000 example score than the \glsmt-ssl model, as well as the rotation and exemplar \glsssl baselines (see Table 4 in the supplementary material). Our VIVI models clearly outperform both the ImageNet baselines and the \glsmt-ssl model. The score obtained by \glsmt-ssl is comparable to that obtained by rotation-based \glsssl trained on ImageNet, which in turn scores 1.4 points higher than exemplar-based \glsssl. Both our models and \glsmt-ssl significantly outperform rotation and exemplar-based \glsssl on the structured data sets, whereas the ImageNet-based exemplar baseline obtains the highest mean score on the specialized data sets.

5.2 Co-training with ImageNet

Figure 4: Per-data set comparison of our exemplar-based unsupervised model (VIVI-Ex(4)) and its counterpart co-trained with the full ImageNet data set (VIVI-Ex(4)-Co(100%)). The accuracy on most of the natural (red) and specialized (green) data sets improves, with the largest improvements observed on the latter, while the accuracy decreases for about half of the structured data sets (blue).

In Table 1 we compare the scores obtained by our exemplar-based co-training models with the baselines from [65]. Our model with frame/shot-level and video-level losses and a wider architecture (VIVI-Ex(4)-Big) reduces the gap between exemplar \glsssl trained on ImageNet and the strong Semi-Ex-10% semi-supervised baseline by more than a factor of 2. Moreover, our model co-trained with 10% labeled ImageNet examples (class-balanced, no additional unlabeled ImageNet examples are used) outperforms both the Semi-Ex-10% baseline and the ImageNet-pretrained ResNet-50 on the \glsvtab 1000 example mean score. Using the entire labeled ImageNet training set for co-training yields a further increase of 2.2 points. Finally, scaling up the architecture and applying \glsaa to preprocess the ImageNet data adds another 2.3 points, leading to a clear new state of the art on the \glsvtab benchmark. The largest gains from using (a subset of) ImageNet are generally observed on the natural data sets, whereas the gains on the specialized and structured data sets are significantly lower. This result is not surprising given that many data sets in the natural category are semantically similar to ImageNet. Fig. 4 shows the per-data set increase/decrease in the \glsvtab 1000 example score when adding a classification loss computed on the entire ImageNet data set to VIVI-Ex(4).

Robustness to video perturbations

Our co-trained models are trained to both recognize the 1000 ImageNet categories and be invariant to deformations found in video data. We therefore expect model predictions to be stable across neighbouring frames in a video. To measure whether this is indeed the case, we evaluate our VIVI-Ex(4)-Co(100%) model on the ImageNet-Vid-Robust benchmark [51]. This benchmark measures the drop in accuracy under a stricter definition of the 0-1 loss using videos from the ImageNet-Vid data set [47]. Given a set of frames, the prediction on an “anchor” frame is considered correct only if all neighboring frames are predicted correctly. Intuitively, the drop in performance going from standard top-1 accuracy on anchor frames to this stricter loss function is indicative of a lack of model robustness: the lower the drop, the more robust the model. In Table 3 we observe that our co-trained model is slightly more robust than its purely supervised counterpart, although the results are still within error bars. This is similar to the difference in performance drop observed for fine-tuning on ImageNet-Vid, as reported in the benchmark paper itself [51, Table 1]. These initial results suggest that our co-training approach leads to a similar effect as fine-tuning, despite the domain shift between \glsyt8m and ImageNet-Vid. It seems that robustness to natural perturbations in videos remains extremely challenging and worth investigating in the future.

Model Type Accuracy Original Accuracy Perturbed Δ
ImageNet 68.0 [65.2, 70.7] 49.9 [46.9, 52.9] 18.1
VIVI-Ex(4)-Co(100%) 62.2 [59.3, 65.1] 46.3 [43.3, 49.2] 15.9
Table 3: ImageNet-Vid-Robust: We evaluate our VIVI-Ex(4)-Co(100%) model (co-trained using all labeled images available in the ImageNet training set) on the ImageNet-Vid-Robust benchmark [51]. Accuracy original is the top-1 accuracy measured on “anchor” frames. Accuracy perturbed is the PM-10 accuracy from the benchmark, i.e., the worst-case accuracy over the 20 neighbouring frames [51] around each “anchor” frame. Δ is the absolute difference between these two; on this benchmark, a lower difference is better. The values in brackets report Clopper-Pearson confidence intervals.
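To make the metric explicit, the sketch below computes this stricter 0-1 accuracy (PM-k with k=10); it assumes, as in the benchmark, that all frames within ±k of an anchor share the anchor's label, and the authoritative implementation is the one released with [51].

```python
def pm_k_accuracy(predictions, labels, anchors, k=10):
    """Stricter 0-1 accuracy: an anchor frame counts as correct only if it and
    every frame within +/- k of it are classified as the anchor's label.

    predictions: per-frame predicted class ids for one video.
    labels:      per-frame true class ids (neighbors share the anchor's label).
    anchors:     indices of the anchor frames.
    """
    correct = 0
    for a in anchors:
        lo, hi = max(0, a - k), min(len(predictions), a + k + 1)
        correct += all(predictions[t] == labels[a] for t in range(lo, hi))
    return correct / len(anchors)
```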

6 Conclusion

We propose and evaluate a versatile framework for learning transferable, data-efficient image representations by exploiting video-induced visual invariances at different levels of granularity. The framework can be instantiated with any image-based \glsssl loss at the frame/shot level and arbitrary sequence prediction proxy tasks at the video level. Our experiments reveal that purely self-supervised models benefit greatly from exploiting video-induced invariances, outperforming the \glsssl baselines trained on ImageNet by a large margin, in particular on problems that require predicting the structural properties of the data. Moreover, when augmenting the proposed framework with a supervised classification loss, the resulting models outperform a vanilla ImageNet-pretrained model using 10× fewer labeled examples, and set a new state of the art on the \glsvtab benchmark when co-trained with the full ImageNet data set.

Future research could target a better understanding of how the choice of losses and data sets used for upstream training impacts the performance on different downstream tasks. While we found our co-trained models to be somewhat more robust to natural perturbations induced by videos than models trained only on images, further research is needed on learning models that overcome robustness issues related to such perturbations.

Acknowledgments

We would like to thank Xiaohua Zhai for inspiring discussions, in particular on how to learn from video shots, and for contributions to preliminary experiments that led to this paper. Further, we would like to thank Raphael Marinier for help with preparing the \glsyt8m data set. Finally, we are grateful to Lucas Beyer for his implementation of GroupNorm with weight standardization.

Caltech101 CIFAR-100 DTD Flowers102 Pets SVHN Sun397 Camelyon EuroSAT Resisc45 Retinopathy Clevr-Count Clevr-Dist DM-Lab KITTI-Dist dSpr-Loc dSpr-Ori sNORB-Azim sNORB-Elev Mean

MS 50.4 17.2 34.9 34.7 18.7 80.7 7.0 79.7 90.4 45.5 73.6 45.0 56.9 34.8 60.6 77.8 46.6 48.6 35.3 49.4
TI 51.6 13.4 37.7 15.5 31.1 78.8 7.2 83.2 85.7 33.6 74.3 61.2 63.9 33.1 61.6 97.4 60.4 36.5 26.1 50.1
Jigsaw 66.7 18.9 51.4 66.1 37.5 55.1 12.1 76.0 91.5 66.2 72.4 42.8 55.9 30.5 68.2 69.5 35.0 44.9 36.3 52.5
Rel.Pat.Loc 68.5 19.1 52.2 69.0 41.3 60.9 11.1 77.5 92.6 65.4 70.7 43.5 59.6 33.6 68.2 70.7 29.3 47.2 35.2 53.5
Rot-YT-F 70.2 21.5 48.0 48.0 35.7 88.3 8.4 83.4 93.0 61.7 73.6 48.1 57.9 39.3 73.4 90.1 51.2 50.5 38.3 56.9
VIVI-Rot(2) 73.8 27.0 51.2 54.4 35.7 88.2 11.6 77.7 93.1 68.6 73.6 44.0 58.0 39.1 72.5 80.0 49.8 53.0 38.8 57.4
VIVI-Rot(4) 73.4 25.8 52.0 53.1 42.2 88.5 10.3 84.1 93.7 66.3 73.6 49.9 58.3 38.7 73.6 88.9 52.4 52.8 40.6 58.9
Rot-YT-F-AA 77.8 24.0 51.2 56.1 33.3 89.1 10.0 85.1 93.9 66.5 73.6 48.8 59.2 39.0 71.2 92.4 55.1 54.6 38.7 58.9
Ex-YT-F 73.0 24.3 49.8 64.6 48.1 88.4 13.4 83.0 95.5 70.4 73.6 50.4 59.0 38.6 71.0 90.8 46.9 43.5 44.7 59.4
Ex-ImageNet 72.0 20.1 52.8 54.5 51.0 87.5 15.5 83.8 95.2 72.5 74.2 49.9 60.9 36.9 75.6 92.2 45.6 48.8 41.7 59.5
MT-SSL 76.1 27.1 52.9 63.2 48.2 89.6 13.6 81.5 93.5 71.0 73.6 56.6 59.4 37.3 72.9 94.7 47.4 52.3 40.7 60.6
Rot-ImageNet 80.6 25.9 56.5 72.6 47.1 88.6 16.0 81.9 94.2 69.8 73.6 49.1 58.6 37.9 73.1 92.6 50.4 51.6 37.7 60.9
Ex-YT-S 76.2 28.4 50.4 74.9 53.1 88.0 14.3 81.7 94.8 74.2 73.7 52.8 58.9 39.3 70.9 91.1 50.8 49.7 42.1 61.3
Ex-YT-F-AA 78.7 28.1 54.6 64.7 52.7 89.0 16.5 83.5 95.5 73.1 73.6 52.2 60.3 39.0 74.4 93.4 54.6 45.9 44.5 61.8
VIVI-Ex(2)-Ord 76.0 29.0 49.0 77.7 54.7 88.5 13.6 80.5 94.2 73.2 73.6 55.9 60.2 39.1 72.0 91.5 52.1 51.5 42.8 61.9
VIVI-Ex(2) 75.3 28.8 48.7 77.5 55.5 87.9 12.4 81.6 94.1 73.6 73.6 56.3 60.6 38.9 73.1 91.9 52.2 50.6 44.6 62.0
VIVI-Ex(4) 76.3 29.0 50.1 77.9 55.6 88.0 14.1 82.4 94.4 73.1 73.6 55.3 60.9 38.6 72.9 95.3 53.0 52.4 44.1 62.5
Ex-YT-S-AA 79.3 30.1 53.9 75.4 55.3 88.4 14.7 83.4 94.8 75.7 73.6 55.3 59.5 40.7 76.6 91.3 52.9 51.2 41.4 62.8
VIVI-Ex(4)-AA 78.6 30.3 51.5 75.0 56.1 88.6 14.4 83.0 94.7 75.2 73.6 56.3 60.6 41.6 74.2 94.6 55.5 52.3 41.0 63.0
VIVI-Ex(4)-Big 77.5 32.8 51.3 79.4 56.6 88.3 16.6 79.8 95.1 75.3 73.6 54.7 57.9 40.4 74.4 92.0 56.8 52.4 47.0 63.3
VIVI-Ex(4)-Big-AA 77.5 34.8 54.2 76.9 59.5 89.7 16.2 84.3 94.8 77.2 73.6 53.3 60.7 40.5 78.0 93.4 59.2 52.9 47.0 64.4
Semi-Ex-10% 88.6 53.2 60.8 86.8 85.3 88.0 29.0 83.2 95.2 77.3 71.7 42.3 57.4 36.7 71.4 74.9 53.9 52.7 32.3 65.3
Sup-100% 91.0 57.0 66.0 88.6 89.9 87.3 34.4 80.6 95.3 80.8 73.2 41.0 56.1 36.3 70.6 85.7 46.0 45.7 35.4 66.4
VIVI-Ex(4)-Co(10%) 82.8 36.6 58.1 82.7 76.9 81.9 24.1 85.6 94.7 76.4 73.6 79.4 63.9 38.0 76.6 95.3 61.3 42.4 46.3 67.2
Sup-Rot-100% 91.7 53.7 69.5 90.8 88.1 88.5 32.8 83.4 96.0 82.0 71.1 47.3 57.2 36.6 77.1 88.3 52.1 51.6 33.7 68.0
VIVI-Ex(4)-Co(100%) 86.1 51.5 64.5 88.7 87.1 79.4 31.7 83.9 95.1 80.8 73.6 78.9 61.7 36.4 78.2 93.8 61.0 43.1 43.6 69.4
1000 VIVI-Ex(4)-Co(100%)-Big 88.0 53.3 69.0 90.4 88.4 84.4 34.1 86.2 95.9 81.7 73.6 79.9 63.5 37.3 82.9 95.3 67.4 46.2 44.9 71.7
MS 68.4 69.6 48.1 52.7 49.2 96.7 56.9 85.5 97.5 88.3 76.8 99.8 90.4 71.7 75.3 100.0 96.3 99.9 97.4 80.0
TI 76.5 68.5 56.4 66.3 52.0 96.2 59.4 89.8 97.6 90.1 81.0 94.0 91.6 72.3 61.2 100.0 96.4 97.0 86.2 80.7
Jigsaw 79.1 65.3 63.9 77.9 65.4 93.9 59.2 83.0 97.9 92.0 80.1 99.6 88.6 72.0 74.7 100.0 90.3 99.9 93.6 83.0
Rel.Pat.Loc 79.9 65.7 65.2 78.8 66.8 93.7 58.0 85.3 97.8 91.5 79.8 99.5 87.7 71.5 75.0 100.0 90.4 99.7 92.6 83.1
Rot-YT-F 81.8 72.6 60.7 66.5 65.7 96.9 59.4 86.7 98.3 92.2 76.8 99.8 92.1 76.0 81.3 100.0 96.6 99.8 98.0 84.3
VIVI-Rot(4) 87.1 74.2 62.4 73.5 68.6 97.0 61.1 86.8 98.3 92.8 76.9 99.8 92.1 76.3 79.1 100.0 96.5 100.0 97.7 85.3
VIVI-Rot(2) 86.7 74.1 61.6 75.1 67.6 97.0 61.9 86.7 98.4 92.6 77.7 99.8 92.5 76.4 81.3 100.0 96.6 99.9 97.1 85.4
Rot-YT-F-AA 86.8 72.5 63.0 74.7 68.4 96.9 60.1 86.4 98.4 92.8 78.5 99.8 92.2 76.4 81.5 100.0 96.6 99.7 98.1 85.4
Ex-YT-F 85.0 73.6 63.8 84.9 70.5 96.8 60.6 87.2 98.6 94.3 78.9 99.8 93.3 76.8 80.9 100.0 96.6 99.9 97.3 86.2
MT-SSL 88.0 76.1 64.4 80.0 72.3 97.2 63.0 85.8 98.3 93.7 78.6 99.7 93.0 75.4 80.4 100.0 96.5 100.0 98.1 86.3
Ex-ImageNet 83.5 74.2 65.4 83.4 74.9 96.8 60.4 85.5 98.7 94.5 79.8 99.8 93.5 75.5 80.4 100.0 96.5 99.9 98.0 86.4
Rot-ImageNet 88.5 76.4 67.7 83.0 73.1 97.0 63.2 85.4 98.5 93.9 79.1 99.9 92.2 76.0 82.0 100.0 96.6 100.0 98.3 86.9
VIVI-Ex(2)-Ord 86.0 75.7 62.1 87.1 76.1 96.9 63.7 87.2 98.6 94.6 79.9 99.8 93.5 76.5 80.9 100.0 96.5 99.8 97.9 87.0
Ex-YT-S 87.4 75.9 64.8 85.7 75.0 96.9 63.2 87.0 98.6 94.5 80.1 99.8 93.4 77.4 80.4 100.0 96.6 99.9 97.3 87.1
VIVI-Ex(4) 86.1 76.3 61.8 87.3 76.7 97.0 64.0 86.9 98.6 94.7 80.2 99.8 93.5 76.8 81.3 100.0 96.6 99.8 98.2 87.1
Ex-YT-F-AA 88.1 75.1 67.7 86.1 73.5 96.9 62.2 86.8 98.8 94.6 79.0 99.9 93.5 76.5 82.9 100.0 96.6 99.9 97.9 87.2
VIVI-Ex(2) 86.6 76.1 63.4 88.2 74.4 97.0 64.1 88.4 98.6 94.7 79.2 99.8 93.4 77.1 80.9 100.0 96.5 99.9 97.6 87.2
Ex-YT-S-AA 89.0 76.5 67.3 86.2 75.9 97.0 63.6 86.9 98.8 94.6 80.3 99.8 93.3 77.1 82.0 100.0 96.6 99.9 97.6 87.5
VIVI-Ex(4)-AA 88.8 76.8 64.0 87.1 75.9 97.2 63.9 88.6 98.6 94.5 79.5 99.8 93.2 76.7 84.0 100.0 96.6 99.8 97.6 87.5
VIVI-Ex(4)-Co(10%) 89.3 79.1 67.6 89.1 83.2 96.9 66.5 90.1 98.4 93.0 79.6 99.5 92.1 74.8 83.1 100.0 96.5 99.8 93.6 88.0
VIVI-Ex(4)-Big 89.1 79.4 64.7 89.6 78.7 97.1 69.2 86.9 98.6 95.6 80.2 99.8 93.6 77.2 81.8 100.0 96.6 99.9 98.6 88.3
VIVI-Ex(4)-Big-AA 90.5 80.4 68.5 87.5 78.3 97.3 68.7 88.7 98.7 95.3 80.5 99.9 92.8 77.8 81.0 100.0 96.7 100.0 98.1 88.5
Semi-Ex-10% 85.3 82.7 70.5 92.2 89.0 97.0 67.4 86.0 98.6 94.7 78.8 99.8 93.1 76.8 81.5 100.0 96.5 100.0 97.8 88.8
VIVI-Ex(4)-Co(100%) 92.5 82.0 73.2 92.7 90.9 96.8 70.7 87.4 98.5 93.7 80.2 99.4 91.2 73.4 82.1 100.0 96.5 98.9 96.5 89.3
Sup-100% 94.1 83.8 74.0 93.2 91.9 97.0 70.7 83.9 98.8 95.3 79.3 99.8 92.1 76.4 80.7 100.0 96.4 99.8 97.7 89.7
Sup-Rot-100% 94.6 84.8 75.9 94.7 91.5 97.0 70.2 85.9 98.8 94.9 79.5 99.8 92.5 76.5 82.3 100.0 96.5 100.0 98.4 90.2
full VIVI-Ex(4)-Co(100%)-Big 93.5 85.9 77.2 94.4 91.6 97.3 73.7 89.4 98.8 95.1 81.0 99.7 92.5 76.7 84.8 100.0 96.6 99.7 94.6 90.7
Table 4: Testing accuracy for every data set in the VTAB benchmark using 1000 and all samples for fine-tuning. Each number is the median of three fine-tuning runs. The proposed methods have the prefix \glsvivi. “Ex” and “Rot” stand for exemplar [14] and rotation prediction [17] frame-level self-supervision, respectively. These identifiers are followed by the number of shots per video in parentheses if a video-level prediction loss across shots is used; methods using shot order prediction instead of $\mathcal{L}_{\text{NCE}}$ additionally carry the suffix “-Ord”. Baseline methods only using frames and shots have the suffixes “YT-F” and “YT-S”, respectively. The suffix “-AA” denotes methods that use \acrlongaa [7].

Appendix A Architectures

Here we expand on the short description in Section 4. The frame encoder $f$ is modeled using the ResNet-50 v2 [22] architecture with BatchNorm [26]. In several experiments we also investigate the effect of model capacity by widening the network by a factor of three. To avoid mismatch in batch statistics between the two data sources, in the co-training experiments we replace BatchNorm with GroupNorm [62] and also standardize [45] the weights of the convolutions.

For each prediction task, we attach a different linear head to the 2048-dimensional pre-logits ResNet representation before applying the respective loss or prediction function. For exemplar, following [32], we use a linear head with 1000 outputs and L2-normalize the features before feeding them into the triplet loss. For rotation prediction we rely on a linear head with 4 outputs. For the video-level loss (prediction across shots using $\mathcal{L}_{\text{NCE}}$ and temporal order prediction) we project the pre-logits, average-pooled across the frames of the same shot, to 512 dimensions using a linear head, and feed this representation to the prediction functions $g$. Finally, in the experiments with co-training, we rely on an additional linear classification head with 1000 outputs.

For the $\mathcal{L}_{\text{NCE}}$ loss, when we sample 2 shots, we predict one shot embedding from the other using an \glsmlp; the scoring function $\phi$ in (4) compares the two shot embeddings after mapping each through a \glsmlp with a single hidden layer with 256 units and 128 outputs. In the experiments with 4 shots, we use a 2-layer \glslstm prediction function with 256 hidden units to predict every shot embedding from the previous ones. To match the dimension of the \glslstm output (256) and that of the future shot embeddings (512) we employ another linear layer. We use temporal order prediction only together with exemplar-based \glsssl and for data with 2 shots per video, relying on a single-hidden-layer \glsmlp with 512 hidden units as the prediction function.

For both frame and shot-level \glsssl approaches we use the augmentation mechanism from [55]. For models co-trained with a supervised loss based on a fraction of ImageNet we additionally use the same HSV-space color randomization as [64]. We also perform experiments where we replace the augmentation mechanism from [55] with \glsaa, which is an augmentation policy learned using a reinforcement learning algorithm from the full ImageNet data set. More specifically, we rely on the TF-Hub module publicly available at https://tfhub.dev/google/image_augmentation/nas_imagenet/1.

Appendix B Training details

Table 5 provides details about the schedules, batch sizes, loss weights, etc. used for the individual methods. When exploring the effect of \glsaa we reduce the weight of the video-level loss, $\lambda$, by a factor of 2. The schedule for VIVI-Ex(4)-Co(10%) is motivated as follows: we take the schedule and batch size used for the ImageNet exemplar co-training experiments with 10% labeled ImageNet examples from [64], stretch the schedule to 100k iterations, and reduce the batch size (as well as the learning rate) so that the number of epochs over the 10% (128k example) data set matches that of [64].

We set the margin parameter in the semi-hard triplet loss [49] to 0.5. For rotation-based \glsssl, following common practice [17, 32], we append the 3 rotated copies of the mini-batch along the batch dimension, predict the rotation for every image, and compute the rotation loss over all rotated copies.
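A small NumPy sketch of this mini-batch construction follows (a stand-in for the actual input pipeline; it assumes square images so that all rotated copies share the same shape).

```python
import numpy as np

def rotation_batch(images):
    """Append the 3 rotated copies (90/180/270 degrees) of a mini-batch of
    square images along the batch dimension, and return rotation labels 0..3."""
    rotated = [np.rot90(images, k=k, axes=(1, 2)) for k in range(4)]  # k=0: identity
    batch = np.concatenate(rotated, axis=0)                  # (4B, H, W, C)
    labels = np.repeat(np.arange(4), len(images))            # (4B,)
    return batch, labels

# Toy usage.
batch, labels = rotation_batch(np.zeros((2, 224, 224, 3)))
print(batch.shape, labels)  # (8, 224, 224, 3) [0 0 1 1 2 2 3 3]
```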

We train all models on 128 cores of a Google TPU v3 Pod. For exemplar \glsssl the triplet loss is computed per core. For all frame/shot-level loss variants, $\mathcal{L}_{\text{NCE}}$ is computed across all cores when prediction is across 4 shots, and per core when prediction is across 2 shots, as computing the loss across all cores led to instabilities in that case.

Model LR #it. w. #it. LR schedule WD batch size #exemp.
Ex-ImageNet 0.8 120k 17k 0.1@52k;86k - - 2048 8
Ex-YT-F 0.8 120k 17k 0.1@52k;86k - - 2048 8
Ex-YT-S 0.8 120k 5k 0.1@90k;110k - - 2048 8 (sh.)
VIVI-Ex(2)-Ord 0.8 120k 5k 0.1@90k;110k - sh. 8 (sh.)
VIVI-Ex(2) 0.8 120k 5k 0.1@90k;110k - sh. 8 (sh.)
VIVI-Ex(4) 0.8 120k 5k 0.1@90k;110k - sh. 8 (sh.)
VIVI-Ex(4)-Big 0.8 120k 5k 0.1@90k;110k 0.04 - sh. 8 (sh.)
VIVI-Ex(4)-Co(10%) 0.1 100k 3k 0.1@76k;88k;96k 0.04 sh., 256 im. 8 (sh.)
VIVI-Ex(4)-Co(100%) 0.8 100k 3k 0.1@70k;85k;95k 0.04 sh., 2048 im. 8 (sh.)
VIVI-Ex(4)-Co(100%)-Big 0.8 100k 3k 0.1@70k;85k;95k 0.04 sh., 2048 im. 8 (sh.)
Rot-ImageNet 0.8 120k 17k 0.1@52k;86k - - 2048 1
Rot-YT-F 0.8 120k 17k 0.1@52k;86k - - 2048 1
VIVI-Rot(2) 0.8 120k 5k 0.1@90k;110k - sh. 4 (sh.)
VIVI-Rot(4) 0.8 120k 5k 0.1@90k;110k - sh. 4 (sh.)

Table 5: Learning rate (LR), number of training iterations (#it.), number of linear warm-up iterations (w. #it.), learning rate schedule (LR schedule), weight decay (WD), video-level loss weight ($\lambda$), supervised cross-entropy loss weight ($\lambda_{\text{sup}}$), batch size, and the number of exemplars (#exemp.) for the different models considered in this paper. Lists of values indicate values explored in the parameter sweep, with the optimal value (in terms of validation VTAB 1000 example score) underlined. For the co-training methods we indicate the video (suffix “sh.”) and image (suffix “im.”) batch sizes. If the number of exemplars is followed by “(sh.)” we use consecutive frames of the same shot to create exemplars.

Appendix C Baseline fine-tuning details

As mentioned in the main manuscript, we compared against two baseline methods: MT-SSL (Multi-Task Self-Supervised Learning) [12] and TI (Transitive Invariance) [59]. For MT-SSL we considered two variants: MS, which was pre-trained on motion segmentation only, and MT-SSL, which combines MS with three other tasks in a multi-task setting. We obtained pre-trained checkpoints for all three methods (MS, MT-SSL, and TI) from the authors of the respective prior works.

C.1 Fine-tuning motion segmentation and multi-task SSL baselines

MS and MT-SSL pre-trained a ResNet-101 up to block3. The spatial resolution of the block3 representation is too large to attach a linear head to directly. In [12], the authors used max-pooling to down-sample this representation and then trained a linear predictor for ImageNet classification. We experimented with this approach for VTAB evaluation. The default evaluation protocol for VTAB is to sweep over two initial learning rates. These were too high for the MS and MT-SSL models: for several downstream evaluation tasks fine-tuning diverged. We therefore minimally modified the evaluation sweep to use lower initial learning rates. We also evaluated a simpler alternative: global average pooling the block3 representation into a single feature vector. We found that global average pooling achieved the best results on the VTAB validation set. It also did not diverge at higher learning rates, so we could use the default learning rate schedule in this case. We therefore used this setting for the final evaluation on test data.

C.2 Fine-tuning the transitive invariance baseline

We exported the pre-trained Caffe checkpoint to TensorFlow using the caffe-tensorflow tool.3 We found that the pre-trained VGG-16 backbone diverges at higher learning rates when fine-tuning downstream on VTAB tasks; we therefore manually adjusted the sweep over initial learning rates and found a lower learning rate to work well. Another challenge in transferring this baseline model to several downstream data sets is that it is a patch-based model that expects small, patch-sized inputs, whereas the VTAB benchmark scales all images to a larger, fixed resolution. We experimented with three ways of deploying it downstream: (a) resize the input image to the patch resolution expected by the model, (b) apply the model fully convolutionally and compute a global average pool at the end, and (c) crop patches of the expected size at a fixed stride from the input image and then average the representations across all of them. We found that (c) was computationally extremely expensive; (b) performed best, and we report results for that approach on the VTAB test set.

Appendix D Additional results

In Figs. 5 to 9 we provide per-data set comparisons of different model pairs to better understand the effect of increasing the model size, using \glsaa, and co-training with different amounts of labeled images. All numbers are accuracies when using 1000 labels for fine-tuning.

Figure 5: Per-data set comparison of ImageNet-based exemplar \glsssl (Ex-ImageNet) with VIVI-Ex(4). Training on \glsyt8m rather than ImageNet and exploiting temporal information mostly helps on natural (red) and structured (blue) data sets, and slightly hurts for some specialized (green) data sets.
Figure 6: Per-data set comparison of VIVI-Ex(4) and a 3× wider counterpart (VIVI-Ex(4)-Big). Increasing model capacity leads to an increase in accuracy for all natural (red) data sets and some structured (blue) and specialized (green) data sets. However, some structured and specialized data sets also incur a reduction in accuracy.
Figure 7: Per-data set comparison of VIVI-Ex(4) and a variant using \glsaa. \glsaa seems to benefit all data set categories similarly, and also leads to reductions in accuracy for a few data sets from all categories.
Figure 8: Per-data set comparison of VIVI-Ex(4) and its counterpart co-trained with 10% class-balanced ImageNet data (VIVI-Ex(4)-Co(10%)). Most data sets from each category incur an increase in accuracy, but one data set from each of the natural and structured categories suffers a significant loss in accuracy.
Figure 9: Effect of increasing the number of ImageNet images used for co-training from 10% (VIVI-Ex(4)-Co(10%)) to 100% (VIVI-Ex(4)-Co(100%)). The accuracy on the majority of natural (red) data sets is significantly increased, whereas most of the structured data sets incur a slight drop in accuracy.

Footnotes

  1. Video credit: https://vimeo.com/362621732 and
    https://en.wikipedia.org/wiki/Big_Buck_Bunny.
  2. In practice, we use all shot embeddings from the other videos, not only those at time step $j+m$, which is known to improve performance [42].
  3. https://github.com/ethereon/caffe-tensorflow

References

  1. Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, and Sudheendra Vijayanarasimhan. Youtube-8m: A large-scale video classification benchmark. arXiv:1609.08675, 2016.
  2. Pulkit Agrawal, Joao Carreira, and Jitendra Malik. Learning to see by moving. In Proc. ICCV, pages 37–45, 2015.
  3. Relja Arandjelovic and Andrew Zisserman. Look, listen and learn. In Proc. ICCV, pages 609–617, 2017.
  4. Philip Bachman, R Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. In NeurIPS, 2019.
  5. Uta Buchler, Biagio Brattoli, and Bjorn Ommer. Improving spatiotemporal self-supervision by deep reinforcement learning. In Proc. ECCV, pages 770–786, 2018.
  6. Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. Proc. ECCV, 2018.
  7. Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. Autoaugment: Learning augmentation policies from data. In Proc. CVPR, 2019.
  8. Virginia R de Sa. Learning classification with unlabeled data. In NeurIPS, pages 112–119, 1994.
  9. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Proc. CVPR, pages 248–255. IEEE, 2009.
  10. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. NAACL-HLT, 2018.
  11. Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsupervised visual representation learning by context prediction. In Proc. ICCV, pages 1422–1430, 2015.
  12. Carl Doersch and Andrew Zisserman. Multi-task self-supervised visual learning. In ICCV, 2017.
  13. Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. In Proc. ICLR, 2017.
  14. Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with convolutional neural networks. In NeurIPS, pages 766–774, 2014.
  15. Basura Fernando, Hakan Bilen, Efstratios Gavves, and Stephen Gould. Self-supervised video representation learning with odd-one-out networks. In Proc. CVPR, 2017.
  16. Ruohan Gao, Dinesh Jayaraman, and Kristen Grauman. Object-centric representation learning from unlabeled videos. In Proc. ACCV, pages 248–263. Springer, 2016.
  17. Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. Proc. ICLR, 2018.
  18. Priya Goyal, Dhruv Mahajan, Abhinav Gupta, and Ishan Misra. Scaling and benchmarking self-supervised visual representation learning. arXiv:1905.01235, 2019.
  19. Michael Gutmann and Aapo Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proc. AISTATS, 2010.
  20. Tengda Han, Weidi Xie, and Andrew Zisserman. Video representation learning by dense predictive coding. In Proc. ICCV Workshops, 2019.
  21. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proc. CVPR, 2016.
  22. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In Proc. ECCV. Springer, 2016.
  23. Olivier J Hénaff, Ali Razavi, Carl Doersch, SM Eslami, and Aaron van den Oord. Data-efficient image recognition with contrastive predictive coding. arXiv:1905.09272, 2019.
  24. R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. Proc. ICLR, 2019.
  25. Minyoung Huh, Pulkit Agrawal, and Alexei A Efros. What makes imagenet good for transfer learning? arXiv:1608.08614, 2016.
  26. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. Proc. ICML, 2015.
  27. Phillip Isola, Daniel Zoran, Dilip Krishnan, and Edward H Adelson. Learning visual groups from co-occurrences in space and time. arXiv:1511.06811, 2015.
  28. Dinesh Jayaraman, Frederik Ebert, Alexei A Efros, and Sergey Levine. Time-agnostic prediction: Predicting predictable video frames. Proc. ICLR, 2019.
  29. Dinesh Jayaraman and Kristen Grauman. Slow and steady feature analysis: higher order temporal coherence in video. In Proc. CVPR, pages 3852–3861, 2016.
  30. Dinesh Jayaraman and Kristen Grauman. Learning image representations tied to egomotion from unlabeled video. IJCV, 125(1):136–161, Dec 2017.
  31. Diederik P. Kingma, Danilo Jimenez Rezende, Shakir Mohamed, and Max Welling. Semi-supervised learning with deep generative models. arXiv:1406.5298, 2014.
  32. Alexander Kolesnikov, Xiaohua Zhai, and Lucas Beyer. Revisiting self-supervised visual representation learning. In Proc. CVPR, pages 1920–1929, 2019.
  33. Simon Kornblith, Jonathon Shlens, and Quoc V Le. Do better imagenet models transfer better? CVPR, 2019.
  34. Hsin-Ying Lee, Jia-Bin Huang, Maneesh Singh, and Ming-Hsuan Yang. Unsupervised representation learning by sorting sequences. In Proc. ICCV, pages 667–676, 2017.
  35. Aravindh Mahendran, James Thewlis, and Andrea Vedaldi. Cross pixel optical-flow similarity for self-supervised learning. In Proc. ACCV, pages 99–116. Springer, 2018.
  36. Jordi Mas and Gabriel Fernandez. Video shot boundary detection based on color histogram. Notebook Papers TRECVID2003, NIST, 15, 2003.
  37. Ishan Misra, C Lawrence Zitnick, and Martial Hebert. Shuffle and learn: unsupervised learning using temporal order verification. In Proc. ECCV. Springer, 2016.
  38. Hossein Mobahi, Ronan Collobert, and Jason Weston. Deep learning from temporal coherence in video. In Proc. ICML, pages 737–744. ACM, 2009.
  39. Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y Ng. Multimodal deep learning. In Proc. ICML, pages 689–696, 2011.
  40. Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In Proc. ECCV, pages 69–84, 2016.
  41. Mehdi Noroozi, Hamed Pirsiavash, and Paolo Favaro. Representation learning by learning to count. In Proc. ICCV, 2017.
  42. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv:1807.03748, 2018.
  43. Andrew Owens, Jiajun Wu, Josh H McDermott, William T Freeman, and Antonio Torralba. Ambient sound provides supervision for visual learning. In Proc. ECCV, pages 801–816. Springer, 2016.
  44. Deepak Pathak, Ross Girshick, Piotr Dollár, Trevor Darrell, and Bharath Hariharan. Learning features by watching objects move. In Proc. CVPR, pages 2701–2710, 2017.
  45. Siyuan Qiao, Huiyu Wang, Chenxi Liu, Wei Shen, and Alan Yuille. Weight standardization. arXiv:1903.10520, 2019.
  46. Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. Learning multiple visual domains with residual adapters. In Advances in Neural Information Processing Systems, pages 506–516, 2017.
  47. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.
  48. Nawid Sayed, Biagio Brattoli, and Björn Ommer. Cross and learn: Cross-modal self-supervision. In German Conference on Pattern Recognition, pages 228–243. Springer, 2018.
  49. Florian Schroff, Dmitry Kalenichenko, and James Philbin. FaceNet: A unified embedding for face recognition and clustering. In Proc. CVPR, 2015.
  50. Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, Sergey Levine, and Google Brain. Time-contrastive networks: Self-supervised learning from video. In ICRA, 2018.
  51. Vaishaal Shankar, Achal Dave, Rebecca Roelofs, Deva Ramanan, Benjamin Recht, and Ludwig Schmidt. A systematic framework for natural perturbations from videos. arXiv:1906.02168, 2019.
  52. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014.
  53. Nitish Srivastava, Elman Mansimov, and Ruslan Salakhudinov. Unsupervised learning of video representations using LSTMs. In Proc. ICML, pages 843–852, 2015.
  54. Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. VideoBERT: A joint model for video and language representation learning. arXiv:1904.01766, 2019.
  55. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proc. CVPR, 2015.
  56. Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. arXiv:1906.05849, 2019.
  57. Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol, and Hugo Larochelle. Meta-dataset: A dataset of datasets for learning to learn from few examples. arXiv:1903.03096, 2019.
  58. Xiaolong Wang and Abhinav Gupta. Unsupervised learning of visual representations using videos. In Proc. ICCV, 2015.
  59. Xiaolong Wang, Kaiming He, and Abhinav Gupta. Transitive invariance for self-supervised visual representation learning. In Proc. ICCV, pages 1329–1338, 2017.
  60. Donglai Wei, Joseph J Lim, Andrew Zisserman, and William T Freeman. Learning and using the arrow of time. In Proc. CVPR, pages 8052–8060, 2018.
  61. Laurenz Wiskott and Terrence J Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural Computation, 14(4):715–770, 2002.
  62. Yuxin Wu and Kaiming He. Group normalization. In Proc. ECCV, pages 3–19, 2018.
  63. Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance discrimination. In Proc. CVPR, pages 3733–3742, 2018.
  64. Xiaohua Zhai, Avital Oliver, Alexander Kolesnikov, and Lucas Beyer. S4l: Self-supervised semi-supervised learning. In Proc. ICCV, 2019.
  65. Xiaohua Zhai, Joan Puigcerver, Alexander Kolesnikov, Pierre Ruyssen, Carlos Riquelme, Mario Lucic, Josip Djolonga, Andre Susano Pinto, Maxim Neumann, Alexey Dosovitskiy, et al. The Visual Task Adaptation Benchmark. arXiv:1910.04867, 2019.
  66. Xiaohang Zhan, Xingang Pan, Ziwei Liu, Dahua Lin, and Chen Change Loy. Self-supervised learning via conditional motion propagation. arXiv:1903.11412, 2019.
  67. Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In Proc. ECCV, pages 649–666, 2016.
  68. Chengxu Zhuang, Alex Lin Zhai, and Daniel Yamins. Local aggregation for unsupervised learning of visual embeddings. In Proc. ICCV, pages 6002–6012, 2019.
  69. Will Y Zou, Andrew Y Ng, and Kai Yu. Unsupervised learning of visual invariance with temporal coherence. In NIPS 2011 Workshop on Deep Learning and Unsupervised Feature Learning, volume 3, 2011.