Action Segmentation with Joint Self-Supervised Temporal Domain Adaptation
Despite the recent progress of fully-supervised action segmentation techniques, the performance is still not fully satisfactory. One main challenge is the problem of spatio-temporal variations (e.g. different people may perform the same activity in various ways). Therefore, we exploit unlabeled videos to address this problem by reformulating the action segmentation task as a cross-domain problem with domain discrepancy caused by spatio-temporal variations. To reduce the discrepancy, we propose Self-Supervised Temporal Domain Adaptation (SSTDA), which contains two self-supervised auxiliary tasks (binary and sequential domain prediction) to jointly align cross-domain feature spaces embedded with local and global temporal dynamics, achieving better performance than other Domain Adaptation (DA) approaches. On three challenging benchmark datasets (GTEA, 50Salads, and Breakfast), SSTDA outperforms the current state-of-the-art method by large margins (e.g. for the F1 score, from 59.6% to 69.1% on Breakfast, from 73.4% to 81.5% on 50Salads, and from 83.6% to 89.1% on GTEA), and requires only 65% of the labeled training data for comparable performance, demonstrating the usefulness of adapting to unlabeled target videos across variations. The source code is available at https://github.com/cmhungsteve/SSTDA.
The goal of action segmentation is to simultaneously segment videos in time and predict an action class for each segment, enabling various applications (e.g. human activity analysis). While action classification has shown great progress given the recent success of deep neural networks [40, 28, 27], temporally locating and recognizing action segments in long videos is still challenging. One main challenge is the problem of spatio-temporal variations of human actions across videos. For example, different people may make tea in different personalized styles even if the given recipe is the same. Such intra-class variations degrade performance when a trained model is directly deployed on a different group of people.
Despite significant progress made by recent methods based on temporal convolution with fully-supervised learning [20, 6, 23, 8], the performance is still not fully satisfactory (e.g. the best accuracy on the Breakfast dataset is still lower than 70%). One way to improve performance is to exploit knowledge from larger-scale labeled data. However, manually annotating precise frame-by-frame actions is time-consuming and challenging. Another way is to design more complicated architectures, at a higher cost in model complexity. Thus, we aim to address the spatio-temporal variation problem with unlabeled data, which are comparatively easy to obtain. To achieve this goal, we propose to diminish the distributional discrepancy caused by spatio-temporal variations by exploiting auxiliary unlabeled videos of the same types of human activities performed by different people. More specifically, to extend the framework of the main video task for exploiting auxiliary data [47, 19], we reformulate our main task as an unsupervised domain adaptation (DA) problem in the transductive setting [31, 5], which aims to reduce the discrepancy between source and target domains without access to the target labels.
Recently, adversarial-based DA approaches [10, 11, 39, 46] have shown progress in reducing the discrepancy for images using a domain discriminator equipped with adversarial training. However, videos also suffer from domain discrepancy along the temporal direction, so image-based domain discriminators are not sufficient for action segmentation. Therefore, we propose Self-Supervised Temporal Domain Adaptation (SSTDA), containing two self-supervised auxiliary tasks: 1) binary domain prediction, which predicts a single domain for each frame-level feature, and 2) sequential domain prediction, which predicts the permutation of domains for an untrimmed video. Through adversarial training with both auxiliary tasks, SSTDA can jointly align cross-domain feature spaces that embed local and global temporal dynamics, addressing the spatio-temporal variation problem for action segmentation, as shown in Figure 1. To support our claims, we compare our method with other popular DA approaches and show better performance, demonstrating the effectiveness of aligning temporal dynamics with SSTDA. Finally, we evaluate our approaches on three datasets with high spatio-temporal variations: GTEA, 50Salads, and Breakfast. By exploiting unlabeled target videos with SSTDA, our approach outperforms the current state-of-the-art methods by large margins and achieves comparable performance using only 65% of the labeled training data.
In summary, our contributions are three-fold:
Self-Supervised Sequential Domain Prediction: We propose a novel self-supervised auxiliary task, which predicts the permutation of domains for long videos, to facilitate video domain adaptation. To the best of our knowledge, this is the first self-supervised method designed for cross-domain action segmentation.
Self-Supervised Temporal Domain Adaptation (SSTDA): By integrating two self-supervised auxiliary tasks, binary and sequential domain prediction, our proposed SSTDA can jointly align local and global embedded feature spaces across domains, outperforming other DA methods.
Action Segmentation with SSTDA: By integrating SSTDA for action segmentation, our approach outperforms the current state-of-the-art approach by large margins, and achieves comparable performance using only 65% of labeled training data. Moreover, different design choices are analyzed to identify the key contribution of each component.
2 Related Works
Action Segmentation methods proposed recently are built upon temporal convolutional networks (TCNs) [20, 6, 23, 8] because of their ability to capture long-range dependencies across frames and their faster training compared to RNN-based methods. With a multi-stage pipeline, MS-TCN performs hierarchical temporal convolutions to effectively extract temporal features and achieves state-of-the-art performance for action segmentation. In this work, we utilize MS-TCN as the baseline model and integrate the proposed self-supervised modules to further boost performance without extra labeled data.
Domain Adaptation (DA) has become popular recently, especially with the integration of deep learning. Most DA works adopt a two-branch (source and target) framework whose ultimate goal is to find a common feature space between the source and target domains; the key is to design a domain loss that achieves this goal.
Discrepancy-based DA [24, 25, 26] is one of the major classes of methods, aiming to reduce the distribution distance between the two domains. Adversarial-based DA [10, 11] is also popular, using domain discriminators in a spirit similar to GANs. With carefully designed adversarial objectives, the domain discriminator and the feature extractor are optimized through min-max training. Some works further improve performance by assigning pseudo-labels to target data [33, 43]. Furthermore, ensemble-based DA [35, 21] incorporates multiple target branches to build an ensemble model. Recently, attention-based DA [41, 18] assigns attention weights to different regions of images for more effective DA.
Unlike image-based DA, video-based DA is still under-explored. Most works concentrate on small-scale video DA datasets [38, 45, 14]. Recently, two larger-scale cross-domain video classification datasets, along with state-of-the-art approaches, were proposed [3, 4]. Moreover, some authors have proposed novel frameworks to exploit auxiliary data for other video tasks, including object detection and action localization. These works differ from ours in either the video task [19, 3, 4] or the access to labels of the auxiliary data.
Self-Supervised Learning has become popular in recent years for images and videos given its ability to learn informative feature representations without human supervision. The key is to design an auxiliary task (or pretext task) that is related to the main task and whose labels can be self-annotated. Most recent works for videos design auxiliary tasks based on the spatio-temporal order of videos [22, 42, 15, 1, 44]. Different from these works, our proposed auxiliary task predicts the temporal permutation of cross-domain videos, aiming to address the problem of spatio-temporal variations for action segmentation.
3 Technical Approach
In this section, we first review the baseline model, MS-TCN, the current state of the art for action segmentation (Section 3.1). We then propose the novel temporal domain adaptation scheme consisting of two self-supervised auxiliary tasks, binary domain prediction (Section 3.2.1) and sequential domain prediction (Section 3.2.2), followed by the final action segmentation model.
3.1 Baseline Model
Our work is built on the current state-of-the-art model for action segmentation, the multi-stage temporal convolutional network (MS-TCN). In each stage, a single-stage TCN (SS-TCN) applies a multi-layer temporal convolutional network to derive frame-level features and makes the corresponding frame-level predictions with a fully-connected layer. Following the original MS-TCN formulation, the prediction loss is calculated from these predictions, as shown in the left part of Figure 2. Finally, multiple SS-TCN stages are stacked to enlarge the temporal receptive field, constructing the final baseline model, MS-TCN, where each stage takes the predictions from the previous stage as inputs and makes predictions for the next stage.
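The stage structure described above can be sketched in PyTorch as follows. This is a minimal illustration, not the authors' released code: class and variable names are our own, and details such as dropout and the multi-stage stacking (with a softmax between stages) are omitted.

```python
import torch
import torch.nn as nn

class DilatedResidualLayer(nn.Module):
    """One dilated temporal convolution layer with a residual connection."""
    def __init__(self, dilation, channels):
        super().__init__()
        # padding = dilation keeps the temporal length unchanged for kernel size 3
        self.conv_dilated = nn.Conv1d(channels, channels, kernel_size=3,
                                      padding=dilation, dilation=dilation)
        self.conv_1x1 = nn.Conv1d(channels, channels, kernel_size=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.conv_1x1(self.relu(self.conv_dilated(x)))
        return x + out  # residual connection

class SingleStageTCN(nn.Module):
    """A single-stage TCN: a 1x1 input projection, stacked dilated layers
    (dilation doubles per layer, enlarging the temporal receptive field),
    and a 1x1 classification layer producing frame-level predictions."""
    def __init__(self, in_dim, channels, num_classes, num_layers=10):
        super().__init__()
        self.conv_in = nn.Conv1d(in_dim, channels, kernel_size=1)
        self.layers = nn.ModuleList(
            [DilatedResidualLayer(2 ** i, channels) for i in range(num_layers)])
        self.conv_out = nn.Conv1d(channels, num_classes, kernel_size=1)

    def forward(self, x):                 # x: (batch, in_dim, num_frames)
        f = self.conv_in(x)
        for layer in self.layers:
            f = layer(f)                  # frame-level features
        return self.conv_out(f), f        # frame-level predictions, features
```

In the full model, several such stages are stacked, with each stage refining the previous stage's predictions.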
3.2 Self-Supervised Temporal Domain Adaptation
Despite the promising performance of MS-TCN on action segmentation over previous methods, there is still large room for improvement. One main challenge is the problem of spatio-temporal variations of human actions, causing a distributional discrepancy across domains. For example, different subjects may perform the same action completely differently due to personalized spatio-temporal styles. Moreover, collecting annotated data for action segmentation is challenging and time-consuming. These challenges motivate the need to learn domain-invariant feature representations without full supervision. Inspired by the recent progress of self-supervised learning, which learns informative features transferable to the main tasks without external supervision (e.g. human annotation), we propose Self-Supervised Temporal Domain Adaptation (SSTDA) to diminish cross-domain discrepancy through self-supervised auxiliary tasks on unlabeled videos.
To effectively transfer knowledge, the self-supervised auxiliary tasks should be closely related to the main task, which in this paper is cross-domain action segmentation. Recently, adversarial-based DA approaches [10, 11] have shown progress in addressing cross-domain image problems using a domain discriminator with adversarial training; domain discrimination can be regarded as a self-supervised auxiliary task since domain labels are self-annotated. However, directly applying image-based DA to video tasks yields sub-optimal performance because temporal information is ignored. Therefore, the question becomes: How should we design self-supervised auxiliary tasks that benefit cross-domain action segmentation? The answer should address both the cross-domain and the action segmentation problems.
To address this question, we first apply an auxiliary task, binary domain prediction, which predicts the domain of each frame; since the frame-level features are embedded with local temporal dynamics, this addresses the cross-domain problem for videos at local scales. We then propose a novel auxiliary task, sequential domain prediction, which temporally segments domains for untrimmed videos; since the video-level features are embedded with global temporal dynamics, this fully addresses the question above. Finally, SSTDA operates both locally and globally by jointly applying these two auxiliary tasks, as illustrated in Figure 3.
In practice, since the key to effective video DA is to align and learn temporal dynamics simultaneously instead of separating the two processes, we integrate SSTDA modules into multiple stages rather than the last stage only; the single-stage integration is illustrated in Figure 2.
The main goal of action segmentation is to learn frame-level feature representations that encode spatio-temporal information so that the model can exploit information from multiple frames to predict the action for each frame. Therefore, we first learn domain-invariant frame-level features with the auxiliary task of binary domain prediction (Figure 3, left).
Binary Domain Prediction: For a single stage, we feed the frame-level features from the source and target domains to an additional shallow binary domain classifier that discriminates which domain each feature comes from. Since temporal convolution in the previous layers encodes information from multiple adjacent frames into each frame-level feature, those frames all contribute to the binary domain prediction for each frame. Through adversarial training with a gradient reversal layer (GRL) [10, 11], which reverses the gradient signs during back-propagation, the model is optimized to gradually align the feature distributions between the two domains. The binary domain classifier equipped with GRL is shown in Figure 4.
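A gradient reversal layer is typically implemented as a custom autograd function that acts as the identity in the forward pass and flips the gradient sign in the backward pass. A minimal PyTorch sketch (the names and the scaling factor `lam` are our own):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lam in the
    backward pass, so the feature extractor learns to fool the domain
    classifier while the classifier itself is trained normally."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing back into the feature extractor;
        # `lam` has no gradient, so return None for it.
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)
```

Placed between the frame-level features and the domain classifier, this single layer turns ordinary cross-entropy training into adversarial alignment.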
Since this work is built on MS-TCN, integrating binary domain prediction into the proper stages is critical for effective DA. From our investigation, the best performance is obtained when the domain classifiers are integrated into the middle stages. See Section 4.3 for details.
The overall loss function is a combination of the baseline prediction loss $\mathcal{L}_y$ and the local domain loss $\mathcal{L}_{ld}$ with reversed sign:

$$\mathcal{L} = \sum_{n=1}^{N_s} \mathcal{L}_y - \beta_l \sum_{n=1}^{N_a} \frac{1}{T} \sum_{t=1}^{T} \mathcal{L}_{ld}^{(t)}$$

where $N_s$ is the total number of stages in MS-TCN, $N_a$ is the number of stages integrated with binary domain prediction, and $T$ is the total number of frames in a video. $\mathcal{L}_{ld}$ is a binary cross-entropy loss, and $\beta_l$ is the trade-off weight for the local domain loss, obtained by following the common strategy of [10, 11].
Although the frame-level features are learned using context and dependencies from neighboring frames, their temporal receptive fields are still limited and cannot represent full videos. Solely integrating DA at the frame level cannot fully address spatio-temporal variations for untrimmed long videos. Therefore, in addition to binary domain prediction for frame-level features, we propose a second self-supervised auxiliary task for video-level features: sequential domain prediction, which predicts a sequence of domains for video clips, as shown in the right part of Figure 3. This task is a temporal domain segmentation problem: predicting the correct permutation of domains for long videos consisting of shuffled video clips from both the source and target domains. Since this goal is related to both the cross-domain and action segmentation problems, sequential domain prediction can effectively benefit our main task.
More specifically, we first divide the source and target videos into two sets of segments and learn the corresponding segment-level feature representations with Domain Attentive Temporal Pooling (DATP). All segment features are then shuffled together in random order and fed to a sequential domain classifier equipped with GRL to predict the permutation of domains, as shown in Figure 4.
Domain Attentive Temporal Pooling (DATP): The most straightforward way to obtain a video-level feature is to aggregate frame-level features with temporal pooling. However, not all frame-level features contribute equally to the overall domain discrepancy. Hence, we assign larger attention weights (calculated using the binary domain classifier from local SSTDA) to the features with larger domain discrepancy, so that we focus more on aligning those features. Finally, the attended frame-level features $f_j$ are aggregated with temporal pooling to generate the video-level feature $V$:

$$V = \frac{1}{T'} \sum_{j=1}^{T'} (w_j + 1) \cdot f_j$$

where $w_j$ is the attention weight for frame $j$, the $+1$ is a residual connection, and $T'$ is the number of frames in a video segment. For more details, please refer to the supplementary.
Sequential Domain Prediction: By applying DATP separately to the source and target segments, a set of segment-level feature representations is obtained. We then shuffle all these features and concatenate them into a single feature representing a long, untrimmed video that contains video segments from both domains in random order. Finally, this feature is fed into a sequential domain classifier to predict the permutation of domains of the video segments. For example, if the shuffled segments come from the source, target, target, and source domains in that order, the goal is to predict the permutation (source, target, target, source). The sequential domain classifier is a multi-class classifier whose number of classes equals the total number of possible domain permutations, so its complexity is determined by the number of segments per video (more analyses in Section 4.3). Its outputs are used to calculate the global domain loss $\mathcal{L}_{gd}$, a standard cross-entropy loss whose number of classes is determined by the segment number. Through adversarial training with GRL, sequential domain prediction also contributes to aligning the feature distributions between the two domains.
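The self-annotation of this auxiliary task can be illustrated with a small sketch: given the segment-level features of one source and one target video, we shuffle them together and derive the permutation class label from the resulting domain order. The function names are hypothetical and the paper's actual implementation may differ; only the labeling idea is taken from the text.

```python
from itertools import permutations
import random

def domain_permutation_classes(m):
    """All distinct orderings of m source (0) and m target (1) segments.
    Each distinct ordering is one class for the sequential domain
    classifier; there are (2m)!/(m!*m!) of them."""
    return sorted(set(permutations([0] * m + [1] * m)))

def make_shuffled_sample(src_segments, tgt_segments, rng=random):
    """Shuffle source/target segment features together and return the
    shuffled feature list plus the self-annotated permutation label."""
    m = len(src_segments)
    tagged = [(0, s) for s in src_segments] + [(1, t) for t in tgt_segments]
    rng.shuffle(tagged)
    domains = tuple(d for d, _ in tagged)          # e.g. (0, 1, 1, 0)
    label = domain_permutation_classes(m).index(domains)
    return [seg for _, seg in tagged], label
```

No human annotation is involved: the label is determined entirely by where the shuffling placed each domain's segments.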
Some self-supervised learning works have also proposed temporal shuffling [22, 44]. However, they predict temporal orders within a single domain, aiming to learn general temporal information for video features. Instead, our method predicts the temporal permutation of cross-domain videos, shown as a dual-branch pipeline in Figure 4, and integrates with binary domain prediction to effectively address both the cross-domain and action segmentation problems.
Local-Global Joint Training: Finally, we also adopt a strategy to minimize the class entropy for the frames that are similar across domains by adding a domain attentive entropy (DAE) loss. Please refer to the supplementary for more details.
4 Experiments

To validate the effectiveness of the proposed methods in reducing spatio-temporal discrepancy for action segmentation, we choose three challenging datasets, GTEA, 50Salads, and Breakfast, which separate the training and validation sets by different people (noted as subjects) with leave-subjects-out cross-validation for evaluation, resulting in a large domain shift due to spatio-temporal variations. We therefore regard the training set as the source domain and the validation set as the target domain, following the standard transductive unsupervised DA protocol [31, 5]. See the supplementary for more implementation details.
4.1 Datasets and Evaluation Metrics
The overall statistics of the three datasets are listed in Table 1. Three widely used evaluation metrics are chosen: frame-wise accuracy (Acc), segmental edit score, and segmental F1 score at IoU thresholds of 10%, 25%, and 50%, denoted as F1@{10, 25, 50}. While Acc is the most common metric, the edit and F1 scores both consider the temporal relation between predictions and ground truth, better reflecting action segmentation performance.
4.2 Experimental Results
We first investigate the effectiveness of our approaches in utilizing unlabeled target videos for action segmentation. We choose MS-TCN as the backbone model since it is the current state of the art for this task. "Source only" means the model is trained only with labeled source videos, i.e., the baseline model. Our approach is then compared to other methods under the same transductive protocol. Finally, we compare our method to the most recent action segmentation methods on all three datasets and investigate how our method can reduce the reliance on labeled source data.
Self-Supervised Temporal Domain Adaptation: First we investigate the performance of local SSTDA by integrating the auxiliary task binary domain prediction with the baseline model. The results on all three datasets improve significantly, as shown in Table 2. For example, on the GTEA dataset, our approach outperforms the baseline by 4.3% for F1@25, 3.2% for the edit score, and 3.6% for frame-wise accuracy. Although local SSTDA mainly works on the frame-level features, temporal information is still encoded from the context of neighboring frames, helping address the variation problem for videos across domains.
Baseline ("Source only", MS-TCN) results from Table 2:

| Dataset | F1@10 | F1@25 | F1@50 | Edit | Acc |
|---|---|---|---|---|---|
| GTEA | 86.5 | 83.6 | 71.9 | 81.3 | 76.5 |
| 50Salads | 75.4 | 73.4 | 65.2 | 68.9 | 82.1 |
| Breakfast | 65.3 | 59.6 | 47.2 | 65.7 | 64.7 |
Despite the improvement from local SSTDA, integrating DA at the frame level cannot fully address the problem of spatio-temporal variations for long videos. Therefore, we integrate our second proposed auxiliary task, sequential domain prediction, for untrimmed long videos. By jointly training with both auxiliary tasks, SSTDA aligns cross-domain feature spaces embedded with local and global temporal dynamics and further improves over local SSTDA by significant margins. For example, on the 50Salads dataset, it outperforms local SSTDA by 3.8% for F1@10, 3.7% for F1@25, 3.5% for F1@50, and 3.8% for the edit score, as shown in Table 2.
One interesting finding is that local SSTDA contributes most of the frame-wise accuracy improvement of SSTDA because it focuses on aligning frame-level feature spaces. On the other hand, sequential domain prediction aligns video-level feature spaces, contributing further improvement on the other two metrics, which consider temporal relations in their evaluation.
Learning from Unlabeled Target Videos: We also compare SSTDA with other popular approaches [11, 26, 33, 43, 35, 21, 44] to validate its effectiveness in reducing spatio-temporal discrepancy with the same amount of unlabeled target videos. For a fair comparison, we integrate all these methods with the same baseline model, MS-TCN. For more implementation details, please refer to the supplementary.
Table 3 shows that our proposed SSTDA outperforms all the other investigated DA methods in terms of the two metrics that consider temporal relation. We conjecture the main reason is that all these DA approaches are designed for cross-domain image problems. Although they are integrated with frame-level features which encode local temporal dynamics, the limited temporal receptive fields prevent them from fully addressing temporal domain discrepancy. Instead, the sequential domain prediction in SSTDA is directly applied to the whole untrimmed video, helping to globally align the cross-domain feature spaces that embed longer temporal dynamics, so that spatio-temporal variations can be reduced more effectively.
We also compare with the most recent video-based self-supervised learning method, which can also learn temporal dynamics from unlabeled target videos. However, its performance is even worse than the other DA methods, implying that temporal shuffling within a single domain does not effectively benefit cross-domain action segmentation.
Baseline result from Table 3 (GTEA):

| Method | F1@10 | F1@25 | F1@50 | Edit |
|---|---|---|---|---|
| Source only (MS-TCN) | 86.5 | 83.6 | 71.9 | 81.3 |
Comparison with Action Segmentation Methods: Here we compare the recent methods to SSTDA trained under two settings: 1) full source labels and 2) fewer source labels.
In the first setting, we have labels for all frames in the source videos, and SSTDA outperforms all previous methods on the three datasets with respect to all evaluation metrics. For example, SSTDA outperforms the current state-of-the-art fully-supervised method, MS-TCN, by large margins (e.g. 8.1% for F1@25, 8.6% for F1@50, and 6.9% for the edit score on 50Salads; 9.5% for F1@25, 8.0% for F1@50, and 8.0% for the edit score on Breakfast), as demonstrated in Table 4. Since no additional labeled data is used, these results indicate how our proposed SSTDA addresses the spatio-temporal variation problem with unlabeled videos to improve action segmentation performance.
The significant improvement from exploiting unlabeled target videos implies the potential to train with fewer labeled frames using SSTDA, which is our second setting. In this setting, we drop labeled frames from the source domain with uniform sampling during training and evaluate on the same validation data. Our experiment indicates that with SSTDA, only 65% of labeled training data are required to achieve performance comparable to MS-TCN, as shown in the "SSTDA (65%)" row of Table 4. For the full experiments on labeled data reduction, please refer to the supplementary.
4.3 Ablation Study and Analysis
Design Choice for Local SSTDA:
Since we develop our approaches upon MS-TCN, a question arises: How can binary domain prediction be effectively integrated into a multi-stage architecture? To answer this, we first integrate it into each single stage; the results show that the best performance occurs when it is integrated into the middle stages, such as the second or third stage, as shown in Table 5. The first stage is not a good choice for DA because it corresponds to low-level features with less discriminability, where DA shows limited effects, and it has a smaller temporal receptive field for videos. However, higher stages are not always better; we conjecture that the model there fits more closely to the source data, making DA difficult. In our case, integrating into the middle stages provides the best overall performance.
We also integrate binary domain prediction into multiple stages. However, multi-stage DA does not always guarantee improved performance: adding further stages can yield worse F1 scores than integrating only the best-performing ones. Since the second and third stages provide the best single-stage DA performance, we integrate binary domain prediction into both, which performs best, as the final model for all our approaches in all experiments.
Design Choice for Global SSTDA:
The most critical design decision for sequential domain prediction is the number of segments per video, m. In our implementation, we divide one source video into m segments and do the same for one target video, and then predict the permutation of domains for these 2m video segments. Therefore, the number of categories of the sequential domain classifier equals the number of distinct domain permutations, (2m)!/(m!·m!). In other words, the segment number determines the complexity of the self-supervised auxiliary task. For example, m = 3 leads to a 20-way classifier, and m = 4 results in a 70-way classifier. Since a good self-supervised task should be neither naive nor overly complicated, we choose a moderate segment number as our final decision, supported by the experiments shown in Table 6.
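The size of the label space follows directly from counting the distinct orderings of m source and m target segments, which is the multiset permutation count (2m)!/(m!·m!) = C(2m, m). A quick check:

```python
from math import comb, factorial

def num_domain_permutations(m):
    """Number of distinct orderings of m source and m target segments:
    (2m)! / (m! * m!), i.e. the binomial coefficient C(2m, m)."""
    return factorial(2 * m) // (factorial(m) ** 2)

# The count grows quickly with m, which is why the segment number
# controls the difficulty of the auxiliary task.
for m in (2, 3, 4):
    assert num_domain_permutations(m) == comb(2 * m, m)
```

This reproduces the 20-way (m = 3) and 70-way (m = 4) classifiers mentioned above, while m = 2 would give only a 6-way task.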
Qualitative Results: It is also common to evaluate qualitative performance to ensure that the prediction results align with human vision. First, we compare our approaches with the baseline model MS-TCN and the ground truth, as shown in Figure 5. MS-TCN fails to detect some pour actions in the first half of the video and falsely classifies close as take in the latter part. With local SSTDA, our approach detects close in the latter part of the video. Finally, with full SSTDA, our proposed method also detects all pour action segments in the first half of the video. We then compare SSTDA with other DA methods, and Figure 6 shows that our result is the closest to the ground truth; the others either miss some actions or classify them incorrectly. For more qualitative results, please refer to the supplementary.
5 Conclusions and Future Work
In this work, we propose a novel approach to effectively exploit unlabeled target videos to boost performance for action segmentation without target labels. To address the problem of spatio-temporal variations for videos across domains, we propose Self-Supervised Temporal Domain Adaptation (SSTDA) to jointly align cross-domain feature spaces embedded with local and global temporal dynamics via two self-supervised auxiliary tasks, binary and sequential domain prediction. Our experiments indicate that SSTDA outperforms other DA approaches by aligning temporal dynamics more effectively. We also validate the proposed SSTDA on three challenging datasets (GTEA, 50Salads, and Breakfast) and show that SSTDA outperforms the current state-of-the-art method by large margins while requiring only 65% of the labeled training data for comparable performance, demonstrating the usefulness of adapting to unlabeled videos across variations. For future work, we plan to apply SSTDA to more challenging video tasks (e.g. spatio-temporal action localization).
6 Supplementary Material

In this supplementary material, we show more details about the technical approach, implementation, and experiments.
6.1 Technical Approach Details
Domain Attentive Temporal Pooling (DATP): Temporal pooling is one of the most common methods to aggregate frame-level features into a video-level feature for each video. However, not all frame-level features contribute equally to the overall domain discrepancy. Therefore, we assign larger attention weights to the features with larger domain discrepancy so that we focus more on aligning those features, achieving more effective domain adaptation.
More specifically, we utilize the entropy criterion to generate the domain attention value $w_j$ for each frame-level feature $f_j$:

$$w_j = 1 - H(\hat{d}_j)$$

where $\hat{d}_j$ is the output of the learned binary domain classifier used in local SSTDA and $H(\cdot)$ is the entropy function measuring uncertainty. $w_j$ increases when $H(\hat{d}_j)$ decreases, i.e., when the domains can be distinguished well. We also add a residual connection for more stable optimization. Finally, we aggregate the attended frame-level features with temporal pooling to generate the video-level feature $V$, which is noted as Domain Attentive Temporal Pooling (DATP), as illustrated in the left part of Figure 7:

$$V = \frac{1}{T'} \sum_{j=1}^{T'} (w_j + 1) \cdot f_j$$

where the $+1$ refers to the residual connection, and $T'$ is the number of frames used to generate a video-level feature.
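A minimal NumPy sketch of DATP under this description (entropy-based attention weights plus a residual connection). We normalize the binary entropy to [0, 1], an implementation detail the description leaves open; function names are our own.

```python
import numpy as np

def binary_entropy(p, eps=1e-12):
    """Shannon entropy of a Bernoulli domain prediction p."""
    p = np.clip(p, eps, 1.0 - eps)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

def datp(frame_feats, domain_probs):
    """Domain Attentive Temporal Pooling.
    Frames whose domain the classifier predicts confidently (low entropy,
    i.e. large domain discrepancy) get larger weights w_j = 1 - H(d_j);
    the +1 is the residual connection.
    frame_feats: (T', D) array; domain_probs: (T',) array of probabilities.
    Returns the (D,) video-level feature."""
    w = 1.0 - binary_entropy(domain_probs) / np.log(2)  # normalize H to [0, 1]
    return ((w + 1.0)[:, None] * frame_feats).mean(axis=0)
```

With maximally uncertain domain predictions (p = 0.5) the weight reduces to the residual 1, recovering plain temporal pooling for those frames.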
Local SSTDA is necessary for calculating the attention weights in DATP. Without this mechanism, frames would be aggregated by plain temporal pooling with no cross-domain consideration, which has been shown to be sub-optimal for cross-domain video tasks.
Domain Attentive Entropy (DAE): Minimum entropy regularization is a common strategy for more refined classifier adaptation. However, we only want to minimize the class entropy for frames that are similar across domains. Therefore, we attend to the frames with low domain discrepancy, corresponding to high domain entropy $H(\hat{d}_j)$. More specifically, we adopt the Domain Attentive Entropy (DAE) module to calculate the attentive entropy loss $\mathcal{L}_{ae}$:

$$\mathcal{L}_{ae} = \frac{1}{T} \sum_{j=1}^{T} \big(1 + H(\hat{d}_j)\big) \cdot H(\hat{y}_j)$$

where $\hat{d}_j$ and $\hat{y}_j$ are the outputs of the domain classifier and the action classifier, respectively, and $T$ is the total number of frames in a video. The $+1$ is a residual connection for stability, as shown in the right part of Figure 7.
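A NumPy sketch of one plausible reading of this loss: per-frame class entropy weighted by one plus the domain entropy, so frames the domain classifier cannot distinguish (similar across domains) dominate the entropy minimization. Names are our own.

```python
import numpy as np

def entropy(probs, eps=1e-12, axis=-1):
    """Shannon entropy along the last axis of a probability array."""
    p = np.clip(probs, eps, 1.0)
    return -(p * np.log(p)).sum(axis=axis)

def attentive_entropy_loss(class_probs, domain_probs):
    """Domain Attentive Entropy loss.
    class_probs: (T, C) softmax outputs of the action classifier.
    domain_probs: (T, 2) softmax outputs of the domain classifier.
    Weight 1 + H(d_j) includes the residual connection."""
    w = 1.0 + entropy(domain_probs)               # (T,)
    return float((w * entropy(class_probs)).mean())
```

Minimizing this loss sharpens class predictions mainly on domain-ambiguous frames, rather than uniformly across the video.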
Our method is built upon the state-of-the-art action segmentation model, MS-TCN, which takes frame-level feature representations as inputs and generates the corresponding frame-level class predictions with four stages of SS-TCN. In our implementation, we convert the second and third stages into Domain Adaptive TCN (DA-TCN) by integrating each SS-TCN with the following three parts: 1) a binary domain classifier (for binary domain prediction), 2) DATP and a sequential domain classifier (for sequential domain prediction), and 3) DAE, bringing three corresponding loss functions, $\mathcal{L}_{ld}$, $\mathcal{L}_{gd}$, and $\mathcal{L}_{ae}$, respectively, as illustrated in Figure 8. The final loss function can be formulated as:

$$\mathcal{L} = \sum_{n=1}^{4} \mathcal{L}_y - \sum_{n=2}^{3} \big(\beta_l \mathcal{L}_{ld} + \beta_g \mathcal{L}_{gd} - \mu \mathcal{L}_{ae}\big)$$

where $\beta_l$, $\beta_g$, and $\mu$ are the weights for $\mathcal{L}_{ld}$, $\mathcal{L}_{gd}$, and $\mathcal{L}_{ae}$, respectively, obtained by the methods described in Section 6.2, and $n$ is the stage index in MS-TCN.
Datasets and Evaluation Metrics:
| | GTEA | 50Salads | Breakfast |
|---|---|---|---|
| avg. length (min.) | 1 | 6.4 | 2.7 |
| avg. actions / video | 20 | 20 | 6 |
Frame-wise accuracy (Acc): Acc is one of the most typical evaluation metrics for action segmentation, but it does not consider the temporal dependencies of the prediction, causing inconsistency between qualitative assessment and frame-wise accuracy. Besides, longer action classes have a higher impact on this metric than shorter ones, making it unable to reflect over-segmentation errors.
Segmental edit score (Edit): The edit score penalizes over-segmentation errors by measuring the ordering of predicted action segments independent of slight temporal shifts.
Segmental F1 score at the IoU threshold (F1): F1 also penalizes over-segmentation errors while ignoring minor temporal shifts between the predictions and ground truth. The scores are determined by the total number of actions and do not depend on the duration of each action instance, similar to mean average precision (mAP) with intersection-over-union (IoU) overlap criteria. F1 has recently become popular since it better reflects qualitative results.
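The three metrics above can be sketched in plain Python as follows; this is a simplified sketch with illustrative function names, and the official evaluation code should be used for reported numbers:

```python
from itertools import groupby

def segments(frames):
    """Collapse a frame-wise label sequence into (label, start, end) segments."""
    segs, i = [], 0
    for label, run in groupby(frames):
        n = len(list(run))
        segs.append((label, i, i + n))
        i += n
    return segs

def frame_accuracy(pred, gt):
    """Acc: fraction of correctly labeled frames."""
    return sum(p == g for p, g in zip(pred, gt)) / len(gt)

def edit_score(pred, gt):
    """Edit: 100 * (1 - normalized Levenshtein distance) between the two
    segment-label sequences; durations are ignored, so over-segmentation
    is penalized while slight temporal shifts are not."""
    a = [s[0] for s in segments(pred)]
    b = [s[0] for s in segments(gt)]
    D = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        D[i][0] = i
    for j in range(len(b) + 1):
        D[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            D[i][j] = min(D[i - 1][j] + 1, D[i][j - 1] + 1,
                          D[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return 100.0 * (1 - D[len(a)][len(b)] / max(len(a), len(b)))

def f1_at_iou(pred, gt, threshold=0.5):
    """F1@threshold: a predicted segment counts as a true positive if its
    frame-span IoU with a same-label, not-yet-matched ground-truth segment
    exceeds the threshold; segment durations do not affect the score."""
    p_segs, g_segs = segments(pred), segments(gt)
    matched = [False] * len(g_segs)
    tp = 0
    for label, ps, pe in p_segs:
        best_iou, best_j = 0.0, -1
        for j, (gl, gs, ge) in enumerate(g_segs):
            if gl != label or matched[j]:
                continue
            inter = max(0, min(pe, ge) - max(ps, gs))
            union = max(pe, ge) - min(ps, gs)
            if inter / union > best_iou:
                best_iou, best_j = inter / union, j
        if best_iou > threshold:
            tp += 1
            matched[best_j] = True
    fp, fn = len(p_segs) - tp, len(g_segs) - tp
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 100.0 * 2 * precision * recall / (precision + recall)
```

Note how a prediction that merges two ground-truth segments keeps half the frame accuracy but is penalized by both Edit and F1, which is exactly the over-segmentation/under-segmentation sensitivity the frame-wise metric lacks.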
Implementation and Optimization: Our implementation is based on the PyTorch [32, 37] framework. We extract I3D [2] features for the video frames and use these features as inputs to our model. The video frame rates follow prior work: for the GTEA and Breakfast datasets we use a temporal resolution of 15 frames per second (fps), while for 50Salads we downsample the features from 30 fps to 15 fps to be consistent with the other datasets. For fair comparison, we adopt the same architecture design choices of MS-TCN [8] as our baseline model. The whole model consists of four stages, where each stage contains ten dilated convolution layers. We set the number of filters to 64 in all layers of the model, and the filter size is 3. For optimization, we utilize the Adam optimizer with a batch size of 1, following the official implementation of MS-TCN [8]. Since the target data size is smaller than the source data size, each target sample is loaded randomly multiple times in each epoch during training. For the weighting of the loss functions, we follow the common strategy of [10, 11] to gradually increase β_l and β_g from 0 to 1. The weight for the smoothness loss follows MS-TCN [8], and μ is chosen via grid search. We will release the code upon acceptance.
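The "gradually increase from 0 to 1" schedule for β_l and β_g can be sketched with the ramp used by DANN [10, 11]; the value gamma = 10 is the one commonly used in that line of work and is an assumption here:

```python
import math

def da_weight(p, gamma=10.0):
    """DANN-style progressive weight: 0 at the start of training and
    approaching 1 as training progresses, where p = current_step / total_steps.
    Suppressing the domain losses early keeps noisy discriminator gradients
    from destabilizing the feature extractor."""
    return 2.0 / (1.0 + math.exp(-gamma * p)) - 1.0
```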
Less Labeled Training Data:
To investigate the potential to train with fewer labeled frames using SSTDA, we drop labeled frames from the source domain by uniform sampling during training and evaluate on the same validation data. Our experiment on the 50Salads dataset shows that, with SSTDA, performance does not drop significantly as the amount of labeled training data decreases, indicating reduced reliance on labeled data. Only 65% of the labeled training data are required to achieve performance comparable to MS-TCN, as shown in Table 8. We then evaluate the proposed SSTDA on GTEA and Breakfast with the same percentage of labeled training data and also obtain comparable or better performance.
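The uniform sampling of labeled frames can be sketched with the following hypothetical helper, which returns the indices of the frames whose labels are kept:

```python
def keep_labels(num_frames, ratio):
    """Indices of source frames whose labels are retained when training
    with only a fraction `ratio` of the labeled data (uniform sampling).
    The remaining frames are treated as unlabeled."""
    keep = max(1, round(num_frames * ratio))
    step = num_frames / keep
    return [int(i * step) for i in range(keep)]
```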
Table 8 also shows that unlabeled target videos contain discriminative information that can directly boost action segmentation performance. Since the additional training data are unlabeled, they cannot be trained with the standard prediction loss. We therefore propose SSTDA to exploit unlabeled data to: 1) further improve the strong baseline, MS-TCN, without additional training labels, and 2) achieve performance comparable to this strong baseline using only 65% of the labels for training.
Comparison with Other Approaches:
| | F1@10 | F1@25 | F1@50 | Edit |
| --- | --- | --- | --- | --- |
| Source only (MS-TCN), 50Salads | 75.4 | 73.4 | 65.2 | 68.9 |
| Source only (MS-TCN), Breakfast | 65.3 | 59.6 | 47.2 | 65.7 |
We compare our proposed SSTDA with other approaches by integrating the same baseline architecture with other popular DA methods [11, 26, 33, 43, 35, 21] and a state-of-the-art video-based self-supervised approach [44]. For fair comparison, all the methods are integrated with the second and third stages, as in our proposed SSTDA. The single-stage integration methods are described as follows:
DANN [11]: We add one discriminator, equipped with a gradient reversal layer (GRL), to the final frame-level features f.
JAN [26]: We integrate Joint Maximum Mean Discrepancy (JMMD) on the final frame-level features f and the class predictions ŷ.
MADA [33]: Instead of a single discriminator, we add multiple discriminators, one per action class, to calculate a domain loss for each class. All class-wise domain losses are weighted by the prediction probabilities and then summed to obtain the final domain loss.
MSTN [43]: We utilize pseudo-labels to cluster the data from the source and target domains, and calculate the class centroids for the source and target domains separately. We then compute the semantic loss as the mean squared error (MSE) between the source and target centroids. The final loss contains the prediction loss, the semantic loss, and the domain loss, as in DANN [11].
MCD [35]: We add another classifier and follow the adversarial training procedure of Maximum Classifier Discrepancy to iteratively optimize the generator (G_f in our case) and the classifiers (G_y). The L1-distance is used as the discrepancy loss.
SWD [21]: The framework is similar to MCD, but we replace the L1-distance with the sliced Wasserstein distance as the discrepancy loss.
VCOP [44]: We divide the frame-level features f into three segments and compute segment-level features with temporal pooling. After temporally shuffling the segment-level features, pairwise features are computed and concatenated into a final feature representing the clip order, which is then fed into a shallow classifier to predict the order.
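Most of the adversarial baselines above, like our own domain prediction tasks, rely on a gradient reversal layer. Its behavior can be sketched with a minimal manual-backprop stub; this is a conceptual sketch only, since in practice the GRL is implemented as a custom autograd function inside the deep learning framework:

```python
class GradReverse:
    """Gradient reversal layer (GRL): identity in the forward pass,
    gradients scaled by -lam in the backward pass. This lets a single
    backward pass train the discriminator to minimize the domain loss
    while training the feature generator to maximize it."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        # pass features through unchanged
        return x

    def backward(self, grad_output):
        # flip and scale the incoming gradient
        return -self.lam * grad_output
```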
The experimental results on 50Salads and Breakfast both indicate that our proposed SSTDA outperforms all these methods, as shown in Table 9.
The performance of the most recent video-based self-supervised learning method [44] on 50Salads and Breakfast also shows that temporal shuffling within a single domain, without considering the relation across domains, does not effectively benefit cross-domain action segmentation, and even performs worse than other DA methods. In contrast, our proposed self-supervised auxiliary tasks make predictions on cross-domain data, leading to cross-domain temporal relation reasoning instead of within-domain temporal order prediction, and significantly improve the performance of our main task, action segmentation.
6.3 Segmentation Visualization
Here we show more qualitative segmentation results from all three datasets to compare our methods with the baseline model, MS-TCN [8]. All the results (Figure 9 for GTEA, Figure 10 for 50Salads, and Figure 11 for Breakfast) demonstrate that the improvement over the baseline by local SSTDA alone is sometimes limited. For example, local SSTDA falsely detects the pour action in Figure 9(b), falsely classifies cheese-related actions as cucumber-related actions in Figure 10(b), and falsely detects the stir milk action in Figure 11(b). However, by jointly aligning local and global temporal dynamics, SSTDA effectively adapts the model to the target domain, reducing the above-mentioned incorrect predictions and achieving better segmentation.
- Unaiza Ahsan, Rishi Madhok, and Irfan Essa. Video jigsaw: Unsupervised learning of spatiotemporal context for video action recognition. In IEEE Winter Conference on Applications of Computer Vision (WACV), 2019.
- Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In IEEE conference on Computer Vision and Pattern Recognition (CVPR), 2017.
- Min-Hung Chen, Zsolt Kira, and Ghassan AlRegib. Temporal attentive alignment for video domain adaptation. CVPR Workshop on Learning from Unlabeled Videos, 2019.
- Min-Hung Chen, Zsolt Kira, Ghassan AlRegib, Jaekwon Woo, Ruxin Chen, and Jian Zheng. Temporal attentive alignment for large-scale video domain adaptation. In IEEE International Conference on Computer Vision (ICCV), 2019.
- Gabriela Csurka. A comprehensive survey on domain adaptation for visual applications. In Domain Adaptation in Computer Vision Applications, pages 1–35. Springer, 2017.
- Li Ding and Chenliang Xu. Tricornet: A hybrid temporal convolutional and recurrent network for video action segmentation. arXiv preprint arXiv:1705.07818, 2017.
- Li Ding and Chenliang Xu. Weakly-supervised action segmentation with iterative soft boundary assignment. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
- Yazan Abu Farha and Jurgen Gall. Ms-tcn: Multi-stage temporal convolutional network for action segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
- Alireza Fathi, Xiaofeng Ren, and James M Rehg. Learning to recognize objects in egocentric activities. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
- Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning (ICML), 2015.
- Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The Journal of Machine Learning Research (JMLR), 17(1):2096–2030, 2016.
- Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NeurIPS), 2014.
- Chunhui Gu, Chen Sun, David A Ross, Carl Vondrick, Caroline Pantofaru, Yeqing Li, Sudheendra Vijayanarasimhan, George Toderici, Susanna Ricco, Rahul Sukthankar, et al. Ava: A video dataset of spatio-temporally localized atomic visual actions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
- Arshad Jamal, Vinay P Namboodiri, Dipti Deodhare, and KS Venkatesh. Deep domain adaptation in action space. In British Machine Vision Conference (BMVC), 2018.
- Dahun Kim, Donghyeon Cho, and In So Kweon. Self-supervised video representation learning with space-time cubic puzzles. In AAAI Conference on Artificial Intelligence (AAAI), 2019.
- Yu Kong and Yun Fu. Human action recognition and prediction: A survey. arXiv preprint arXiv:1806.11230, 2018.
- Hilde Kuehne, Ali Arslan, and Thomas Serre. The language of actions: Recovering the syntax and semantics of goal-directed human activities. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
- Vinod Kumar Kurmi, Shanu Kumar, and Vinay P Namboodiri. Attending to discriminative certainty for domain adaptation. In IEEE conference on Computer Vision and Pattern Recognition (CVPR), 2019.
- Avisek Lahiri, Sri Charan Ragireddy, Prabir Biswas, and Pabitra Mitra. Unsupervised adversarial visual level domain adaptation for learning video object detectors from images. In IEEE Winter Conference on Applications of Computer Vision (WACV), 2019.
- Colin Lea, Michael D Flynn, Rene Vidal, Austin Reiter, and Gregory D Hager. Temporal convolutional networks for action segmentation and detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
- Chen-Yu Lee, Tanmay Batra, Mohammad Haris Baig, and Daniel Ulbricht. Sliced wasserstein discrepancy for unsupervised domain adaptation. In IEEE conference on Computer Vision and Pattern Recognition (CVPR), 2019.
- Hsin-Ying Lee, Jia-Bin Huang, Maneesh Singh, and Ming-Hsuan Yang. Unsupervised representation learning by sorting sequences. In IEEE International Conference on Computer Vision (ICCV), 2017.
- Peng Lei and Sinisa Todorovic. Temporal deformable residual networks for action segmentation in videos. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
- Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In International Conference on Machine Learning (ICML), 2015.
- Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. Unsupervised domain adaptation with residual transfer networks. In Advances in Neural Information Processing Systems (NeurIPS), 2016.
- Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. Deep transfer learning with joint adaptation networks. In International Conference on Machine Learning (ICML), 2017.
- Chih-Yao Ma, Min-Hung Chen, Zsolt Kira, and Ghassan AlRegib. Ts-lstm and temporal-inception: Exploiting spatiotemporal dynamics for activity recognition. Signal Processing: Image Communication, 71:76–87, 2019.
- Chih-Yao Ma, Asim Kadav, Iain Melvin, Zsolt Kira, Ghassan AlRegib, and Hans Peter Graf. Attend and interact: higher-order object interactions for video understanding. In IEEE conference on Computer Vision and Pattern Recognition (CVPR), 2018.
- Khoi-Nguyen C Mac, Dhiraj Joshi, Raymond A Yeh, Jinjun Xiong, Rogerio S Feris, and Minh N Do. Learning motion in feature space: Locally-consistent deformable convolution networks for fine-grained action detection. In IEEE International Conference on Computer Vision (ICCV), 2019.
- Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In European Conference on Computer Vision (ECCV), 2016.
- Sinno Jialin Pan, Qiang Yang, et al. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering (TKDE), 22(10):1345–1359, 2010.
- Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. In Advances in Neural Information Processing Systems Workshop (NeurIPSW), 2017.
- Zhongyi Pei, Zhangjie Cao, Mingsheng Long, and Jianmin Wang. Multi-adversarial domain adaptation. In AAAI Conference on Artificial Intelligence (AAAI), 2018.
- Alexander Richard, Hilde Kuehne, and Juergen Gall. Weakly supervised action learning with rnn based fine-to-coarse modeling. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
- Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tatsuya Harada. Maximum classifier discrepancy for unsupervised domain adaptation. In IEEE conference on Computer Vision and Pattern Recognition (CVPR), 2018.
- Sebastian Stein and Stephen J McKenna. Combining embedded accelerometers with computer vision for recognizing food preparation activities. In ACM international joint conference on Pervasive and ubiquitous computing (UbiComp), 2013.
- Benoit Steiner, Zachary DeVito, Soumith Chintala, Sam Gross, Adam Paszke, Francisco Massa, Adam Lerer, Gregory Chanan, Zeming Lin, Edward Yang, et al. Pytorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems (NeurIPS), 2019.
- Waqas Sultani and Imran Saleemi. Human action recognition across datasets by foreground-weighted histogram decomposition. In IEEE conference on Computer Vision and Pattern Recognition (CVPR), 2014.
- Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
- Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In IEEE conference on Computer Vision and Pattern Recognition (CVPR), 2018.
- Ximei Wang, Liang Li, Weirui Ye, Mingsheng Long, and Jianmin Wang. Transferable attention for domain adaptation. In AAAI Conference on Artificial Intelligence (AAAI), 2019.
- Donglai Wei, Joseph J Lim, Andrew Zisserman, and William T Freeman. Learning and using the arrow of time. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
- Shaoan Xie, Zibin Zheng, Liang Chen, and Chuan Chen. Learning semantic representations for unsupervised domain adaptation. In International Conference on Machine Learning (ICML), 2018.
- Dejing Xu, Jun Xiao, Zhou Zhao, Jian Shao, Di Xie, and Yueting Zhuang. Self-supervised spatiotemporal learning via video clip order prediction. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
- Tiantian Xu, Fan Zhu, Edward K Wong, and Yi Fang. Dual many-to-one-encoder-based transfer learning for cross-dataset human action recognition. Image and Vision Computing, 55:127–137, 2016.
- Weichen Zhang, Wanli Ouyang, Wen Li, and Dong Xu. Collaborative and adversarial network for unsupervised domain adaptation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
- Xiao-Yu Zhang, Haichao Shi, Changsheng Li, Kai Zheng, Xiaobin Zhu, and Lixin Duan. Learning transferable self-attentive representations for action recognition in untrimmed videos with weak supervision. In AAAI Conference on Artificial Intelligence (AAAI), 2019.