Abstract
We introduce Negative Sampling in Semi-Supervised Learning (NS3L), a simple, fast, easy-to-tune algorithm for semi-supervised learning (SSL). NS3L is motivated by the success of negative sampling/contrastive estimation. We demonstrate that adding the NS3L loss to state-of-the-art SSL algorithms, such as Virtual Adversarial Training (VAT), significantly improves upon vanilla VAT and its variant, VAT with Entropy Minimization. By adding the NS3L loss to MixMatch, the current state-of-the-art approach on semi-supervised tasks, we observe significant improvements over vanilla MixMatch. We conduct extensive experiments on the CIFAR-10, CIFAR-100, SVHN and STL-10 benchmark datasets.
Negative Sampling in Semi-Supervised Learning
John Chen, Vatsal Shah, Anastasios Kyrillidis
Rice University, University of Texas at Austin, Rice University
1 Introduction
Deep learning has been hugely successful in areas such as image classification (Krizhevsky et al., 2012; He et al., 2016; Zagoruyko and Komodakis, 2016; Huang et al., 2017) and speech recognition (Sak et al., 2014; Sercu et al., 2016), where a large amount of labeled data is available. However, in practice it is often prohibitively expensive to create a large, high-quality labeled dataset, due to lack of time, resources, or other factors. For example, the ImageNet dataset—which consists of 3.2 million labeled images in 5,247 categories—took nearly two and a half years to complete with the aid of Amazon's Mechanical Turk (Deng et al., 2009). Some medical tasks may require months of preparation, expensive hardware, and the collaboration of many experts, and are often limited by the number of participants (Miotto et al., 2016). As a result, it is desirable to exploit unlabeled data to aid the training of deep learning models.
This form of learning is semi-supervised learning (SSL) (Chapelle and Scholkopf, 2006). Unlike supervised learning, the aim of SSL is to leverage unlabeled data, in conjunction with labeled data, to improve performance. SSL is typically evaluated on labeled datasets where a certain proportion of labels have been discarded. There have been a number of instances in which SSL is reported to achieve performance close to purely supervised learning (Laine and Aila, 2017; Miyato et al., 2017; Tarvainen and Valpola, 2017; Berthelot et al., 2019), where the purely supervised model is trained on the much larger whole dataset. However, despite significant progress in this field, it is still difficult to quantify when unlabeled data may aid performance, except in a handful of cases (Balcan and Blum, 2005; Ben-David et al., 2008; Kääriäinen, 2005; Niyogi, 2013; Rigollet, 2007; Singh et al., 2009; Wasserman and Lafferty, 2008).
In this work, we restrict our attention to SSL algorithms which add a loss term to the neural network loss. These algorithms are the most flexible and practical, given the difficulties of hyperparameter tuning in the entire model training process, in addition to achieving state-of-the-art performance.
We introduce Negative Sampling in Semi-Supervised Learning (NS3L): a simple, fast, easy-to-tune SSL algorithm, motivated by negative sampling/contrastive estimation (Mikolov et al., 2013; Smith and Eisner, 2005). In negative sampling/contrastive estimation, in order to train a model on unlabeled data, we exploit implicit negative evidence originating from the unlabeled samples: using negative sampling, we seek good models that discriminate a supervised example from its neighborhood, comprised of unsupervised examples assigned a random (and potentially wrong) class. Stated differently, the learner learns not only that the supervised example is good, but also that the same example is locally optimal in the space of examples, and that alternative examples are inferior. With negative sampling/contrastive estimation, instead of explaining and exploiting all of the data (which is not available during training), the model implicitly must only explain why the observed, supervised example is better than its unsupervised neighbors.
Overall, NS3L adds a loss term to the learning objective, and is shown to improve performance simply by doing so to other state-of-the-art SSL objectives. Since modern datasets often have a large number of classes (Russakovsky et al., 2014), we are motivated by the observation that it is often much easier to label a sample with a class or classes it is not, as opposed to the one class it is, exploiting ideas from negative sampling/contrastive estimation (Mikolov et al., 2013; Smith and Eisner, 2005).
Key Contributions.
Our findings can be summarized as follows:


We propose a new SSL algorithm, which is easy to tune and improves the SSL performance of other state-of-the-art algorithms, simply by adding its loss to their objective.

Adding the NS3L loss to the state-of-the-art non-Mixup (Zhang et al., 2017) loss for unlabeled data, i.e., Virtual Adversarial Training (VAT) (Miyato et al., 2017), we observe superior performance compared to state-of-the-art alternatives, such as Pseudo-Label (Lee, 2013), plain VAT (Miyato et al., 2017), and VAT with Entropy Minimization (Miyato et al., 2017; Oliver et al., 2018), on the standard SSL benchmarks of SVHN and CIFAR-10.

Adding the NS3L loss to the state-of-the-art Mixup-based SSL method, i.e., the MixMatch procedure (Berthelot et al., 2019), produces superior performance on the standard SSL benchmarks of SVHN, CIFAR-10 and STL-10.
Namely, adding the NS3L loss to existing SSL algorithms is an easy way to improve performance, and requires limited extra computational resources for hyperparameter tuning, since it is interpretable, fast, and sufficiently easy to tune.
2 Negative Sampling in Semi-Supervised Learning
Let the set of labeled samples be denoted as $S_L = \{(x_i, y_i)\}_{i=1}^{N_L}$, $x_i$ being the input and $y_i$ being the associated label, and the set of unlabeled samples be denoted as $S_U = \{x_j\}_{j=1}^{N_U}$, each with unknown correct label $y_j$. For the rest of the text, we will consider the cross-entropy loss, which is one of the most widely used loss functions for classification. The objective function for the cross-entropy loss over the labeled examples is:
$$\mathcal{L}_{\text{sup}}(\theta) = -\sum_{i=1}^{N_L} \sum_{c=1}^{C} \mathbb{1}\{y_i = c\} \log f_\theta(x_i)_c,$$
where there are $N_L$ labeled samples, $C$ classes, $\mathbb{1}\{y_i = c\}$ is the indicator operator that equals 1 when $y_i = c$, and $f_\theta(x_i)_c$ is the output of the classifier for sample $x_i$ for class $c$.
For the sake of simplicity, we will perform the following relabeling: $x_{N_L + j} := x_j$ for all $x_j \in S_U$, and $N := N_L + N_U$. In the hypothetical scenario where the labels $y_j$ for the unlabeled data are known and for parameters $\theta$ of the model, the likelihood would be:
$$p(S \mid \theta) = \prod_{i=1}^{N} f_\theta(x_i)_{y_i}.$$
Observe that
$$f_\theta(x_i)_{y_i} = 1 - \sum_{c \neq y_i} f_\theta(x_i)_c,$$
which follows from the definition of the quantities $f_\theta(x_i)_c$, which represent a probability distribution and, consequently, sum up to one.
Taking negative logarithms allows us to split the loss function into two components: the supervised part and the unsupervised part. The log-likelihood loss function can now be written as follows:
$$-\log p(S \mid \theta) = -\sum_{i=1}^{N_L} \log f_\theta(x_i)_{y_i} - \sum_{j=N_L+1}^{N} \log\Big(1 - \sum_{c \neq y_j} f_\theta(x_j)_c\Big).$$
While the true labels need to be known for the unsupervised part to be accurate, we draw ideas from negative sampling/contrastive estimation (Mikolov et al., 2013; Smith and Eisner, 2005) in our approach. I.e., for each unlabeled example in the unsupervised part, we randomly assign $K$ labels from the set of $C$ labels. These labels indicate classes that the sample does not belong to: as the number of classes in the task increases, the probability of including the correct label in the assigned set is small. The labels could be selected uniformly at random, by using Nearest Neighbor search, or even based on the output probabilities of the network, with the hope that the correct label is not picked. Our idea is analogous to the word2vec setting (Mikolov et al., 2013), which is described in Appendix A.
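To make the simplest of these selection schemes concrete, the following NumPy sketch draws negative labels uniformly at random; the function name and defaults are illustrative and not taken from the paper's implementation.

```python
import numpy as np

def uniform_negative_labels(n_samples, n_classes, k, rng=None):
    """For each unlabeled sample, draw k distinct candidate negative
    labels uniformly over all n_classes classes.  With many classes,
    the chance that the (unknown) true label is included is small:
    k / n_classes per sample."""
    rng = rng if rng is not None else np.random.default_rng(0)
    return np.stack([rng.choice(n_classes, size=k, replace=False)
                     for _ in range(n_samples)])

# 4 unlabeled samples, 100 classes, 3 candidate negative labels each.
neg = uniform_negative_labels(4, 100, 3)
```

With 100 classes and 3 draws, each sample's true label is hit with probability only 3/100, which is the intuition behind the approach.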
The approach above assumes the use of the full dataset, both for the supervised and unsupervised parts. In practice, more often than not we train models based on stochastic gradient descent, and we implement a minibatch variant of this approach with different batch sizes $B_L$ and $B_U$ for labeled and unlabeled data, respectively. Particularly, for the supervised minibatch of size $B_L$ for labeled data, the objective term is approximated as:
$$\mathcal{L}_{\text{sup}}(\theta) \approx -\frac{1}{B_L}\sum_{i=1}^{B_L} \log f_\theta(x_i)_{y_i}.$$
The unsupervised part with minibatch size $B_U$ and the NS3L loss, where each unlabeled sample $x_j$ is connected with $K$ hopefully incorrect labels $\bar{y}_{j,1}, \dots, \bar{y}_{j,K}$, is approximated as:
$$\mathcal{L}_{\text{NS3L}}(\theta) \approx -\frac{1}{B_U}\sum_{j=1}^{B_U} \sum_{k=1}^{K} \log\big(1 - f_\theta(x_j)_{\bar{y}_{j,k}}\big).$$
Based on the above, our loss looks as follows:
$$\mathcal{L}(\theta) = \mathcal{L}_{\text{sup}}(\theta) + \mathcal{L}_{\text{NS3L}}(\theta).$$
Thus, the NS3L loss is just an additive loss term that can be easily included in many existing SSL algorithms, as we show next. For clarity, a pseudocode implementation of the algorithm, where negative labels are identified by the label probability falling below a threshold (as the output of the classifier or otherwise), is given in Algorithm 1.
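The threshold variant of the negative-sampling loss above is short enough to sketch in a few lines of NumPy; `ns3l_loss` and `threshold` are illustrative names, and the real implementation operates on network outputs inside the training loop.

```python
import numpy as np

def ns3l_loss(probs, threshold):
    """Negative-sampling loss on a batch of unlabeled samples.

    probs: (batch, classes) softmax outputs of the classifier.
    Classes whose predicted probability falls below `threshold`
    are treated as negative labels, and the loss pushes their
    probability further toward zero via -log(1 - p)."""
    neg_mask = probs < threshold            # negative-label selection
    eps = 1e-12                             # numerical safety for log
    per_class = -np.log(1.0 - probs + eps)  # -log(1 - p) for every class
    return float(np.sum(per_class * neg_mask) / probs.shape[0])

# Toy batch: two unlabeled samples, three classes.
probs = np.array([[0.8, 0.15, 0.05],
                  [0.5, 0.45, 0.05]])
loss = ns3l_loss(probs, threshold=0.1)  # only the 0.05 entries are negatives
```

Because only low-probability classes enter the sum, confident predictions contribute nothing, which keeps the term cheap and stable to tune.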
3 Related Work
In this paper, we restrict our attention to a subset of SSL algorithms which add a loss to the supervised loss function. These algorithms tend to be more practical in terms of hyperparameter tuning (Berthelot et al., 2019). There are a number of SSL algorithms not discussed in this paper, including "transductive" models (Joachims, 1999, 2003; Gammerman et al., 1998), graph-based methods (Zhu et al., 2003; Bengio and Le Roux, 2006), and generative modeling (Joachims, 2003; Belkin and Niyogi, 2002; Salakhutdinov and Hinton, 2007; Coates and Ng, 2011; Goodfellow et al., 2011; Kingma et al., 2014; Odena, 2016; Zhe et al., 2016; Salimans et al., 2016). For a comprehensive overview of SSL methods, refer to Chapelle and Scholkopf (2006) or Zhu et al. (2003). We describe below the categories of SSL relevant to this paper.
3.1 Consistency Regularization
Consistency regularization applies data augmentation to semi-supervised learning with the following intuition: small perturbations of a sample should not significantly change the output of the network. This is usually achieved by minimizing some distance measure between the outputs of the network with and without perturbations of the input. The most straightforward distance measure is the mean squared error, used by the Π model (Laine and Aila, 2017; Sajjadi et al., 2016). The Π model adds the distance term $d\big(f_\theta(x), f_\theta(\hat{x})\big)$, where $\hat{x}$ is the result of a stochastic perturbation to $x$, to the supervised classification loss as a regularizer, with some weight.
Mean teacher (Tarvainen and Valpola, 2017) observes the potentially unstable target prediction over the course of training with the Π-model approach, and proposes a prediction function parameterized by an exponential moving average of model parameter values. Mean teacher adds the distance function $d\big(f_\theta(x), f_{\theta'}(x)\big)$, where $\theta'$ is an exponential moving average of $\theta$, to the supervised classification loss with some weight. However, these methods are domain specific.
3.1.1 Virtual Adversarial Training
Virtual Adversarial Training (VAT) (Miyato et al., 2017) approximates the perturbation to be applied over the input that most significantly affects the output class distribution, inspired by adversarial examples (Goodfellow et al., 2015; Szegedy et al., 2014). VAT computes an approximation of the perturbation as
$$r_{\text{adv}} = \arg\max_{r:\ \|r\|_2 \le \epsilon} D\big(f_\theta(x), f_\theta(x + r)\big),$$
starting from a random direction $d$ scaled by $\xi$, where $x$ is an input data sample, $d$ is a random vector of the same dimension, $D$ is a non-negative function that measures the divergence between two distributions, and $\epsilon$, $\xi$ are scalar hyperparameters. Consistency regularization is then used to minimize the distance between the outputs of the network with and without the perturbation of the input. Since we follow the work of Oliver et al. (2018) almost exactly, we select the best-performing consistency-regularization SSL method in that work, VAT, for comparison and combination with NS3L for non-Mixup SSL; the Mixup procedure will be described later.
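The inner maximization can be illustrated on a toy model. The sketch below estimates the most-divergent direction with finite differences; Miyato et al. instead use one step of power iteration with backpropagated gradients, so all names, defaults, and the finite-difference scheme here are our illustrative assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    # KL divergence between two discrete distributions.
    return float(np.sum(p * np.log(p / q)))

def vat_perturbation(x, predict, xi=0.1, eps=0.5, seed=0):
    """Finite-difference sketch of VAT's inner step: start from a
    random direction d, estimate the gradient of the divergence
    D(f(x), f(x + xi*d)) with respect to d, and return the normalized
    direction scaled to radius eps."""
    rng = np.random.default_rng(seed)
    p = predict(x)
    d = rng.standard_normal(x.shape)
    d /= np.linalg.norm(d)
    grad = np.zeros_like(d)
    base = kl(p, predict(x + xi * d))
    h = 1e-5
    for i in range(d.size):
        d2 = d.copy()
        d2[i] += h
        grad[i] = (kl(p, predict(x + xi * d2)) - base) / h
    d = grad / (np.linalg.norm(grad) + 1e-12)
    return eps * d  # adversarial perturbation of norm eps

# Toy linear-softmax "classifier" on a 2-d input.
W = np.array([[2.0, -1.0], [-1.0, 2.0]])
predict = lambda v: softmax(W @ v)
x = np.array([0.5, 0.2])
r_adv = vat_perturbation(x, predict)
```

The consistency term then penalizes $D\big(f_\theta(x), f_\theta(x + r_{\text{adv}})\big)$ on unlabeled inputs.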
3.2 Entropy minimization
The goal of entropy minimization (Grandvalet and Bengio, 2005) is to discourage the decision boundary from passing near samples where the network produces low-confidence predictions. One way to achieve this is by adding a simple loss term that minimizes the entropy of the predictions on unlabeled data with $C$ total classes:
$$\mathcal{L}_{\text{ent}}(\theta) = -\sum_{j} \sum_{c=1}^{C} f_\theta(x_j)_c \log f_\theta(x_j)_c.$$
Entropy minimization on its own has not demonstrated competitive performance in SSL; however, it can be combined with VAT for stronger results (Miyato et al., 2017; Oliver et al., 2018). We include entropy minimization with VAT in our experiments.
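The entropy term is a one-liner in practice; a minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def entropy_loss(probs):
    """Mean entropy of the predicted class distributions over a batch
    of unlabeled samples; minimizing it pushes each prediction toward
    a single confident class."""
    eps = 1e-12  # numerical safety for log(0)
    return float(-np.mean(np.sum(probs * np.log(probs + eps), axis=1)))

confident = np.array([[0.98, 0.01, 0.01]])
uniform = np.full((1, 3), 1.0 / 3.0)
```

Confident predictions incur low loss, while near-uniform predictions incur loss close to $\log C$, so minimizing the term sharpens predictions on unlabeled data.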
3.3 Pseudo-Labeling
Pseudo-Labeling (Lee, 2013) is a simple and easy-to-tune method which is widely used in practice. For a particular sample, it requires only the probability value of each class (the output of the network), and labels the sample with a class if that probability value crosses a certain threshold. The sample is then treated as a labeled sample with the standard supervised loss function. Pseudo-Labeling is closely related to entropy minimization, but only enforces low-entropy predictions for predictions which are already low-entropy. We emphasize here that the popularity of Pseudo-Labeling is likely due to its simplicity and limited extra cost for hyperparameter search.
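The selection rule admits a short sketch; the threshold value and function name here are illustrative, not taken from Lee (2013).

```python
import numpy as np

def pseudo_label(probs, threshold=0.95):
    """Return indices and hard labels for the unlabeled samples whose
    most confident prediction crosses `threshold`; those samples are
    then treated as labeled under the standard supervised loss."""
    confidence = probs.max(axis=1)
    keep = np.where(confidence >= threshold)[0]
    return keep, probs[keep].argmax(axis=1)

probs = np.array([[0.97, 0.02, 0.01],   # confident: pseudo-labeled as 0
                  [0.40, 0.35, 0.25],   # uncertain: skipped
                  [0.01, 0.98, 0.01]])  # confident: pseudo-labeled as 1
idx, labels = pseudo_label(probs)
```

Note the contrast with negative sampling: pseudo-labeling commits to the single most likely class, while negative sampling only rules out unlikely ones.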
3.4 Mixup-based SSL
Mixup (Zhang et al., 2017) combines pairs of samples and their one-hot labels with the following operations,
$$\tilde{x} = \lambda x_1 + (1 - \lambda) x_2, \qquad \tilde{y} = \lambda y_1 + (1 - \lambda) y_2,$$
to produce a new sample $(\tilde{x}, \tilde{y})$, where $\lambda \sim \text{Beta}(\alpha, \alpha)$ and $\alpha$ is a hyperparameter. Mixup is a form of regularization which encourages the neural network to behave linearly between training examples, justified by Occam's Razor. In SSL, the labels are typically the labels predicted by a neural network, with some processing steps.
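A minimal sketch of the Mixup operation (the seeded generator is only for reproducibility of the sketch):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.75, rng=None):
    """Mixup: convex combination of two inputs and their one-hot
    labels with mixing weight lam ~ Beta(alpha, alpha)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2

x1, y1 = np.ones(4), np.array([1.0, 0.0])   # sample of class 0
x2, y2 = np.zeros(4), np.array([0.0, 1.0])  # sample of class 1
x_mix, y_mix = mixup(x1, y1, x2, y2)
```

MixMatch's variant, described below, additionally replaces $\lambda$ by $\max(\lambda, 1-\lambda)$ so that the mixed sample stays closer to its first argument.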
Applying Mixup to SSL led to Interpolation Consistency Training (ICT) (Verma et al., 2019) and MixMatch (Berthelot et al., 2019), which significantly improved upon previous SSL results on the standard benchmarks of CIFAR-10 and SVHN.
ICT trains the model $f_\theta$ to output predictions similar to a mean teacher $f_{\theta'}$, where $\theta'$ is an exponential moving average of $\theta$. Namely, on unlabeled data, ICT encourages $f_\theta\big(\lambda u_1 + (1-\lambda) u_2\big) \approx \lambda f_{\theta'}(u_1) + (1-\lambda) f_{\theta'}(u_2)$.
MixMatch applies a number of processing steps to labeled and unlabeled data on each iteration and mixes both labeled and unlabeled data together. The final loss is given by
$$\mathcal{L} = \mathcal{L}_{\mathcal{X}'} + \lambda_{\mathcal{U}} \, \mathcal{L}_{\mathcal{U}'},$$
where $\mathcal{X}$ is the labeled data, $\mathcal{U}$ is the unlabeled data, $\mathcal{X}'$ and $\mathcal{U}'$ are the output samples labeled by MixMatch, and $T$, $K$, $\alpha$, $\lambda_{\mathcal{U}}$ are hyperparameters. Given a batch of labeled and unlabeled samples, MixMatch applies $K$ data augmentations on each unlabeled sample $u$, averages the predictions across the augmentations,
$$\bar{q} = \frac{1}{K} \sum_{k=1}^{K} f_\theta(\hat{u}_k),$$
and applies temperature sharpening,
$$\text{Sharpen}(\bar{q}, T)_c = \bar{q}_c^{\,1/T} \Big/ \sum_{c'=1}^{C} \bar{q}_{c'}^{\,1/T},$$
to the average prediction. $K$ is typically 2 in practice, and $T$ is 0.5. The unlabeled data is labeled with this sharpened average prediction.
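The sharpening step is a one-line computation; a sketch:

```python
import numpy as np

def sharpen(p, T=0.5):
    """Temperature sharpening: raise each class probability to the
    power 1/T and renormalize.  T < 1 makes the distribution peakier;
    T = 0.5 amounts to squaring the probabilities."""
    pt = p ** (1.0 / T)
    return pt / pt.sum()

avg = np.array([0.6, 0.3, 0.1])  # prediction averaged over augmentations
sharp = sharpen(avg)
```

With $T = 0.5$ the leading class grows from 0.6 to $0.36/0.46 \approx 0.78$, which is how MixMatch drives its guessed labels toward low entropy without a separate entropy term.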
Standard data augmentation is applied to the originally labeled data; let this be denoted $\hat{\mathcal{X}}$, and let $\hat{\mathcal{U}}$ denote the augmented unlabeled data with their guessed labels. Let $\mathcal{W}$ denote the shuffled collection of $\hat{\mathcal{X}}$ and $\hat{\mathcal{U}}$. MixMatch alters Mixup by adding a max operation,
$$\lambda' = \max(\lambda, 1 - \lambda),$$
so that the mixed sample stays closer to its first argument, and produces $\mathcal{X}' = \text{Mixup}(\hat{\mathcal{X}}, \mathcal{W})$ and $\mathcal{U}' = \text{Mixup}(\hat{\mathcal{U}}, \mathcal{W})$.
Since MixMatch performs the strongest empirically, we select MixMatch as the best-performing Mixup-based SSL method for comparison and combination with NS3L.
4 Experiments
We separate experiments into non-Mixup-based SSL and Mixup-based SSL. For reproducibility, we follow the methodology of Oliver et al. (2018) almost exactly for our non-Mixup-based SSL experiments, and reproduce many of their key findings. That work compares non-Mixup-based SSL methods and, thus, we use its consistent testbed for the same purpose. MixMatch uses an almost identical setup, but with a slightly different evaluation method, and we use the official MixMatch implementation for our Mixup-based experiments.
4.1 Non-Mixup-based SSL
Following Oliver et al. (2018), the model employed is the standard Wide ResNet (WRN) (Zagoruyko and Komodakis, 2016) with depth 28 and width 2, batch normalization (Ioffe and Szegedy, 2015), and leaky ReLU activations (Ng, 2013). The optimizer is Adam (Kingma and Ba, 2014). The batch size is 100, half of which is labeled and half unlabeled. Standard procedures for regularization, data augmentation, and preprocessing are followed.
We use the standard training data/validation data split for SVHN, with 65,932 training images and 7,325 validation images. All but 1,000 labels are turned "unlabeled". Similarly, we use the standard training data/validation data split for CIFAR-10, with 45,000 training images and 5,000 validation images. All but 4,000 labels are turned "unlabeled". We also use the standard training data/validation data split for CIFAR-100, with 45,000 training images and 5,000 validation images. All but 10,000 labels are turned "unlabeled".
Hyperparameters are optimized to minimize validation error; test error is reported at the point of lowest validation error. We select hyperparameters which perform well for both SVHN and CIFAR-10. After selecting hyperparameters on CIFAR-10 and SVHN, we run almost exactly the same hyperparameters, with practically no further tuning, on CIFAR-100 to determine the ability of each method to generalize to new datasets. Since VAT and VAT + EntMin use different hyperparameters for CIFAR-10 and SVHN, we use those tuned for CIFAR-10 on the CIFAR-100 dataset. For NS3L and NS3L + VAT, we divide the threshold by 10, since CIFAR-100 has 10× as many classes. We run 5 seeds for all cases.
Since models are typically trained on CIFAR-10 (Krizhevsky, 2009) and SVHN (Netzer et al., 2011) for fewer than the 500,000 iterations (1,000 epochs) of Oliver et al. (2018), we make the only changes of reducing the total iterations to 200,000, the warmup period (Tarvainen and Valpola, 2017) to 50,000, and the iteration of learning-rate decay to 130,000. All other methodology follows that work (Oliver et al., 2018).
4.1.1 Baseline Methods
For baseline methods, we consider Pseudo-Labeling, due to its simplicity being on the level of NS3L, and VAT for its performance, in addition to VAT + Entropy Minimization. We omit the Π model and Mean Teacher, since we follow the experiments of Oliver et al. (2018), where both produce worse performance than VAT. The supervised baseline is trained on the remaining labeled data after some labels have been removed. We generally follow the tuned hyperparameters in the literature and do not observe noticeable gains from further hyperparameter tuning.
4.1.2 Implementation of NS3L
We implement NS3L using the output probabilities of the network on the unlabeled samples: we label a sample with negative labels for the classes whose probability value falls below a certain threshold, and then simply add the NS3L loss to the existing SSL loss function. The performance of NS3L with random negative-sampling assignment or Nearest Neighbor-based assignment is given in Section 5. Using NS3L on its own gives
$$\mathcal{L} = \mathcal{L}_{\text{sup}} + \lambda \, \mathcal{L}_{\text{NS3L}}$$
for some weighting $\lambda$. Adding NS3L to VAT gives
$$\mathcal{L} = \mathcal{L}_{\text{sup}} + \lambda_{\text{VAT}} \, \mathcal{L}_{\text{VAT}} + \lambda \, \mathcal{L}_{\text{NS3L}}$$
for some weighting $\lambda_{\text{VAT}}$. Such weightings are common practice in SSL, and are also used in MixMatch. This is the simplest form of NS3L, and we believe there are large gains to be made with more complex methods of choosing the negative labels.
4.1.3 Results
Dataset | Supervised | Pseudo-Label | VAT | VAT + EntMin | NS3L | VAT + NS3L
CIFAR-10 | 20.76 ± .28 | 17.56 ± .29 | 14.72 ± .23 | 14.34 ± .18 | 16.03 ± .05 | 13.94 ± .10
SVHN | 12.39 ± .53 | 7.70 ± .22 | 6.20 ± .11 | 6.10 ± .02 | 6.52 ± .22 | 5.51 ± .14
CIFAR-100 | 48.26 ± .25 | 46.91 ± .31 | 44.38 ± .56 | 43.92 ± .44 | 46.34 ± .37 | 43.70 ± .19
We follow the practice of Oliver et al. (2018) and use the same hyperparameters for plain NS3L and for NS3L within NS3L + VAT on both CIFAR-10 and SVHN. After selecting hyperparameters on CIFAR-10 and SVHN, we run almost exactly the same hyperparameters with little further tuning on CIFAR-100, where the threshold is divided by 10 since CIFAR-100 has 10× as many classes.
CIFAR-10. We evaluate the accuracy of each method with 4,000 labeled samples and 41,000 unlabeled samples, as is standard practice. The results are given in Table 1. For NS3L, we use a threshold, a learning rate of 6e-4, and a weighting selected as above. For VAT + NS3L, we use a shared learning rate of 6e-4 and reduce the NS3L weighting from 1 to 0.3; the NS3L settings are otherwise identical. All other settings remain as optimized for VAT individually.
We created 5 splits of 4,000 labeled samples, each with a different seed. Each model is trained on a different split, and test error is reported with mean and standard deviation. We find that NS3L performs reasonably well and significantly better than Pseudo-Labeling, with over a 1.5% improvement. A significant gain over all algorithms is attained by adding the NS3L loss to the VAT loss. VAT + NS3L achieves almost a 1% improvement over VAT, and is about 0.5% better than VAT + EntMin. This underscores the flexibility of NS3L in improving existing methods.
SVHN. We evaluate the accuracy of each method with 1,000 labeled samples and 64,932 unlabeled samples, as is standard practice. The results are shown in Table 1. We use the same hyperparameters for NS3L and VAT + NS3L as for CIFAR-10.
Again, 5 splits are created, each with a different seed. Each model is trained on a different split, and test error is reported with mean and standard deviation. Here, NS3L achieves test error competitive with VAT, 6.52% versus 6.20%, and is significantly better than Pseudo-Labeling, at 7.70%. By combining NS3L with VAT, test error is further reduced by a notable margin: almost 1% better than VAT alone and more than 0.5% better than VAT + EntMin.
CIFAR-100. We evaluate the accuracy of each method with 10,000 labeled samples and 35,000 unlabeled samples, as is standard practice. The results are given in Table 1. For NS3L, we use a threshold, a learning rate of 6e-4, and a weighting following the settings for CIFAR-10 and SVHN. For VAT + NS3L on CIFAR-100, we use a shared learning rate of 3e-3 with the corresponding weightings.
As before, we created 5 splits of 10,000 labeled samples, each with a different seed, and each model is trained on a different split. Test error is reported with mean and standard deviation. NS3L is observed to improve test error by 0.6% over Pseudo-Labeling, and adding NS3L to VAT reduces test error slightly, achieving the best performance. This suggests that EntMin and NS3L boost VAT even with little hyperparameter tuning, and perhaps should be used by default. We note that the performance of SSL methods can be sensitive to hyperparameter tuning, and minor hyperparameter tuning may improve performance greatly. In our experiments, NS3L alone runs more than 2× faster than VAT.
4.2 Mixup-based SSL
We follow the methodology of Berthelot et al. (2019) and continue to use the same model described in Section 4.1. In the previous section, we used the standard training data/validation data splits for SVHN and CIFAR-10, with all but 1,000 and all but 4,000 labels discarded, respectively. Since the performance of MixMatch is particularly strong using only a small number of labeled samples, we include experiments for SVHN with all but 250 labels discarded, and for CIFAR-10 with all but 250 labels discarded. We also include experiments on STL-10, a dataset designed for SSL, which has 5,000 labeled images and 100,000 unlabeled images drawn from a slightly different distribution than the labeled data. All but 1,000 labels are discarded for STL-10.
Hyperparameters are tuned individually for each dataset, and the median of the last 20 checkpoints’ test error is reported, following Berthelot et al. (2019). We run 5 seeds.
Again, we reduce training to 300 epochs for both SVHN and CIFAR-10, which is a typical training time for fully supervised models. We reduce the training epochs for STL-10 significantly in the interest of training time. All other methodology follows the MixMatch work. We note here that Berthelot et al. (2019) differs from Oliver et al. (2018) in that it evaluates an exponential moving average of the model parameters, as opposed to using a learning-rate decay schedule, and uses weight decay.
4.2.1 Baseline Methods
We run MixMatch with the official implementation, and use the parameters recommended in the original work for each dataset.
4.2.2 Implementation of NS3L
Recall that MixMatch outputs collections of samples with their generated labels. We label each sample with negative labels for the classes whose generated probability value falls below a certain threshold. We then simply add the NS3L loss to the existing SSL loss function, computing the loss using the probability outputs of the network as usual. Namely,
$$\mathcal{L} = \mathcal{L}_{\text{MixMatch}} + \lambda \, \mathcal{L}_{\text{NS3L}}.$$
4.2.3 Results
We follow the practice of Berthelot et al. (2019) and tune NS3L separately for each dataset. MixMatch + NS3L takes only marginally longer to run than MixMatch on its own. The learning rate is fixed.
CIFAR-10. We evaluate the accuracy of each method with 4,000 labeled samples and 41,000 unlabeled samples, as is standard practice, and with 250 labeled samples and 44,750 unlabeled samples, where MixMatch performs much more strongly than other SSL methods. The results are given in Table 2.
As in Berthelot et al. (2019), we use the recommended values of $T$ and $\alpha$. For NS3L, we use a fixed threshold and a coefficient chosen separately for 250 labeled samples and for 4,000 labeled samples.
We created 5 splits for each number of labeled samples, each with a different seed. Each model is trained on a different split, and test error is reported with mean and standard deviation.
Similar to the previous section, we find that adding NS3L immediately improves the performance of MixMatch, with a 2% improvement with 250 labeled samples and a small improvement with 4,000 samples. The 250-label case may be the more interesting one, since it highlights the sample efficiency of the method.
CIFAR-10 | 250 labels | 4,000 labels
MixMatch | 14.49 ± 1.60 | 7.05 ± 0.10
MixMatch + NS3L | 12.48 ± 1.21 | 6.92 ± 0.12
SVHN. We evaluate the accuracy of each method with 1,000 labeled samples and 64,932 unlabeled samples, as is standard practice, and 250 labeled samples and 65,682 unlabeled samples. The results are shown in Table 3.
Following the literature, we use the recommended values of $T$ and $\alpha$. For NS3L, we again use a fixed threshold and the same coefficient for both 250 labeled samples and 1,000 labeled samples.
We created 5 splits with 5 different seeds, where each model is trained on a different split and test error is reported with mean and standard deviation.
By adding NS3L to MixMatch, the model achieves almost the same test error with 250 labeled samples as MixMatch alone does with 1,000 labeled samples. In other words, in this case applying NS3L improves performance almost as much as having 4× the amount of labeled data. With 250 labeled samples and 1,000 labeled samples, adding NS3L to MixMatch improves performance by 0.4% and 0.15% respectively, achieving state-of-the-art results.
SVHN | 250 labels | 1,000 labels
MixMatch | 3.75 ± 0.09 | 3.28 ± 0.11
MixMatch + NS3L | 3.38 ± 0.08 | 3.14 ± 0.11
STL10. We evaluate the accuracy of each method with 1,000 labeled samples and 100,000 unlabeled samples. The results are given in Table 4.
Following the literature, we use the recommended values of $T$ and $\alpha$. For NS3L, we again use a fixed threshold and coefficient. We trained the model for significantly fewer epochs than in Berthelot et al. (2019); however, even in this case NS3L improves upon MixMatch, reducing test error slightly.
STL-10 | 1,000 labels
MixMatch | 22.20 ± 0.89
MixMatch + NS3L | 21.74 ± 0.33
5 Alternative methods
With computational efficiency in mind, we compare several methods of implementing NS3L in Table 5, on the FMNIST dataset with a small convolutional neural network. We split the FMNIST dataset into a 2,000/58,000 labeled/unlabeled split and report validation error at the end of training. Specifically, we compare:

Supervised: trained only on the 2,000 labeled samples.

Uniform: negative labels are selected uniformly over all classes.

NN: we use the Nearest Neighbor (NN) method to exclude the class of the NN, exclude four classes with the NNs, or label with the class of the furthest NN.

Threshold: the method of Section 4.1.2.

Oracle: negative labels are selected uniformly over all wrong classes.
FMNIST | 2,000 labels
Supervised | 17.25 ± .22
Uniform - 1 | 18.64 ± .38
Uniform - 3 | 19.35 ± .33
Exclude class of NN - 1 | 17.12 ± .15
Exclude 4 nearest classes with NN - 1 | 17.13 ± .21
Furthest class with NN - 1 | 16.76 ± .15
Threshold | 16.47 ± .18
Threshold | 16.59 ± .19
Oracle - 1 | 16.37 ± .12
Oracle - 3 | 15.20 ± .66
Selecting negative labels uniformly over all classes appears to hurt performance, suggesting that negative labels must be selected more carefully in the classification setting. NN methods appear to improve over purely supervised training; however, their effectiveness is limited by long preprocessing times and the high dimensionality of the data.
The method described in Section 4.1.2, listed here as Threshold, achieves superior test error in comparison to the NN and Uniform methods. In particular, it is competitive with Oracle - 1, an oracle which labels each unlabeled sample with one negative label from a class to which the sample does not belong.
It is no surprise that Oracle - 3 improves substantially over Oracle - 1. It is not inconceivable to develop methods which can accurately select a small number of negative labels, and these may lead to even better results when combined with other SSL methods.
We stress that this is not a definitive list of methods for implementing negative sampling in SSL, and our fast proposed method, when combined with other SSL algorithms, already improves over the state-of-the-art.
6 Conclusion
With simplicity, speed, and ease of tuning in mind, we proposed Negative Sampling in Semi-Supervised Learning (NS3L), a semi-supervised learning method, inspired by negative sampling, which simply adds a loss term. We demonstrate the effectiveness of NS3L when combined with existing SSL algorithms, producing the overall best results for non-Mixup-based SSL, by combining NS3L with VAT, and for Mixup-based SSL, by combining NS3L with MixMatch. We show improvements across a variety of tasks with only a minor increase in training time.
References
A PAC-style model for learning from labeled and unlabeled data. In International Conference on Computational Learning Theory, pp. 111–126.
Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in Neural Information Processing Systems.
Does unlabeled data provably help? Worst-case analysis of the sample complexity of semi-supervised learning. In COLT, pp. 33–44.
Label propagation and quadratic criterion. MIT Press.
MixMatch: a holistic approach to semi-supervised learning. arXiv preprint arXiv:1905.02249.
Semi-supervised learning. MIT Press.
The importance of encoding versus training with sparse coding and vector quantization. In International Conference on Machine Learning.
ImageNet: a large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255.
Learning by transduction. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence.
Spike-and-slab sparse coding for unsupervised feature discovery. NIPS Workshop on Challenges in Learning Hierarchical Models.
Explaining and harnessing adversarial examples. In International Conference on Learning Representations.
Semi-supervised learning by entropy minimization. In Advances in Neural Information Processing Systems.
Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708.
Batch normalization: accelerating deep network training. In International Conference on Machine Learning.
Transductive inference for text classification using support vector machines. In International Conference on Machine Learning.
Transductive learning via spectral graph partitioning. In International Conference on Machine Learning.
Generalization error bounds using unlabeled data. In International Conference on Computational Learning Theory, pp. 127–142.
Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems.
ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105.
Learning multiple layers of features from tiny images. Technical report.
Temporal ensembling for semi-supervised learning. In International Conference on Learning Representations.
Pseudo-label: the simple and efficient semi-supervised learning method for deep neural networks. ICML Workshop on Challenges in Representation Learning.
Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems.
Deep patient: an unsupervised representation to predict the future of patients from the electronic health records. Scientific Reports 6, pp. 26094.
Virtual adversarial training: a regularization method for supervised and semi-supervised learning. arXiv preprint arXiv:1704.03976.
Reading digits in natural images with unsupervised feature learning. NIPS Workshop on Deep Learning and Unsupervised Feature Learning.
Rectifier nonlinearities improve neural network acoustic models. In International Conference on Machine Learning.
Manifold regularization and semi-supervised learning: some theoretical analyses. The Journal of Machine Learning Research 14 (1), pp. 1229–1250.
Semi-supervised learning with generative adversarial networks. arXiv preprint arXiv:1606.01583.
Realistic evaluation of deep semi-supervised learning algorithms. arXiv preprint arXiv:1804.09170.
Generalization error bounds in semi-supervised classification under the cluster assumption. Journal of Machine Learning Research 8 (Jul), pp. 1369–1392.
ImageNet large scale visual recognition challenge. arXiv preprint arXiv:1409.0575.
Regularization with stochastic transformations and perturbations for deep semi-supervised learning. In Advances in Neural Information Processing Systems.
Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In Fifteenth Annual Conference of the International Speech Communication Association.
Using deep belief nets to learn covariance kernels for Gaussian processes. In Advances in Neural Information Processing Systems.
Improved techniques for training GANs. In Advances in Neural Information Processing Systems.
 Very deep multilingual convolutional neural networks for LVCSR. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4955–4959. Cited by: §1.
 Unlabeled data: now it helps, now it doesn’t. In Advances in neural information processing systems, pp. 1513–1520. Cited by: §1.
 Contrastive estimation: training loglinear models on unlabeled data. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pp. 354–362. Cited by: §1, §1, §2.
 Intriguing properties of neural networks. In International Conference on Learning Representations, Cited by: §3.1.1.
 Mean teachers are better role models: weightaveraged consistency targets improve semisupervised deep learning results. In Advances in Neural Information Processing Systems, Cited by: §1, §3.1, §4.1.
 Interpolation consistency training for semisupervised learning. arXiv preprint arXiv:1903.03825. Cited by: §3.4.
 Statistical analysis of semisupervised regression. In Advances in Neural Information Processing Systems, pp. 801–808. Cited by: §1.
 Wide residual networks. arXiv preprint arXiv:1605.07146. Cited by: §1, §4.1.
 Mixup: beyond empirical risk minimization. arXiv preprint arXiv:1710.09412. Cited by: item , §3.4.
 Variational autoencoder for deep learning of images, labels and captions. In Advances in Neural Information Processing Systems, Cited by: §3.
 Semisupervised learning using gaussian fields and harmonic functions. In International Conference on Machine Learning, Cited by: §3.
Appendix A Negative Sampling
We present the case of word2vec for negative sampling, where the number of words and contexts is large enough that a randomly picked (word, context) pair is, with high probability, unrelated. To draw the analogy, let us describe the intuition behind word2vec. Here, the task is to relate words $w$ (represented as vectors $v_w$) with contexts $c$ (represented as vectors $v_c$). Conceptually, words play the role of the inputs $x$, and contexts play the role of the labels $y$. Negative sampling, as proposed by Mikolov et al., considers the following objective: given a pair $(w, c)$ of a word and a context, if this pair comes from valid data that correctly connects the two, then we say the pair came from the true data distribution $D$; otherwise, we claim that $(w, c)$ does not come from the true distribution.
In math, we denote by $p(D = 1 \mid w, c; \theta)$ the probability that $(w, c)$ satisfies the first case, and by $p(D = 0 \mid w, c; \theta) = 1 - p(D = 1 \mid w, c; \theta)$ the probability of the second. The paper models these probabilities as:
$$p(D = 1 \mid w, c; \theta) = \sigma(v_c^\top v_w) = \frac{1}{1 + e^{-v_c^\top v_w}},$$
where $v_c$ and $v_w$ correspond to the vector representations of the context and the word, respectively.
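As a small illustration, the probability model above can be computed directly from the two embedding vectors. The following is a minimal numpy sketch; the function and variable names are ours, chosen for clarity:

```python
import numpy as np

def sigmoid(x):
    # logistic function sigma(x) = 1 / (1 + e^{-x})
    return 1.0 / (1.0 + np.exp(-x))

def p_true_pair(v_w, v_c):
    # p(D = 1 | w, c) = sigma(v_c^T v_w): probability that the
    # (word, context) pair comes from the true data distribution
    return sigmoid(np.dot(v_c, v_w))

rng = np.random.default_rng(0)
v_w = rng.normal(size=50)  # word embedding
v_c = rng.normal(size=50)  # context embedding
prob = p_true_pair(v_w, v_c)  # a scalar in (0, 1)
```

A large positive inner product pushes the probability toward 1, i.e., strongly aligned word and context vectors are judged likely to co-occur.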
Now, in order to find good vector representations (we naively group all variables into $\theta$), given the data, we perform maximum log-likelihood estimation as follows:
$$\arg\max_\theta \sum_{(w, c) \in D} \log p(D = 1 \mid w, c; \theta) + \sum_{(w, c) \notin D} \log\left(1 - p(D = 1 \mid w, c; \theta)\right).$$
Of course, we never take the whole dataset (the whole corpus $D$) and run full gradient descent; rather, we perform SGD by considering only a mini-batch $B \subseteq D$ of the data for the first term:
$$\arg\max_\theta \sum_{(w, c) \in B} \log \sigma(v_c^\top v_w) + \sum_{(w, c) \notin D} \log\left(1 - \sigma(v_c^\top v_w)\right).$$
Also, we cannot consider *every* data point not in the dataset; rather, we perform negative sampling by selecting random pairs according to some probability distribution (this choice is important), say, $k$ pairs:
$$\arg\max_\theta \sum_{(w, c) \in B} \log \sigma(v_c^\top v_w) + \sum_{i=1}^{k} \log \sigma\left(-v_{\tilde{c}_i}^\top v_{\tilde{w}_i}\right),$$
where the tildes represent the “non-valid” (negative) pairs, and we use $\log \sigma(-x) = \log(1 - \sigma(x))$.
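Putting the pieces together, the negative-sampling objective can be sketched as the following loss (its negation is maximized above). This is a minimal numpy implementation under our own naming; for simplicity it draws negative pairs uniformly at random, whereas word2vec in practice samples negatives from a smoothed unigram distribution:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def negative_sampling_loss(W, C, pos_pairs, k, rng):
    """Negative of the objective: for each valid (word, context) index pair,
    accumulate -log sigma(v_c . v_w), plus k uniformly drawn "non-valid"
    pairs contributing -log sigma(-v_c~ . v_w~) each."""
    n_words, n_ctx = W.shape[0], C.shape[0]
    loss = 0.0
    for w, c in pos_pairs:
        loss -= np.log(sigmoid(np.dot(C[c], W[w])))  # true pair term
        for _ in range(k):
            w_neg = rng.integers(n_words)            # random negative pair
            c_neg = rng.integers(n_ctx)
            loss -= np.log(sigmoid(-np.dot(C[c_neg], W[w_neg])))
    return loss

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(100, 16))  # word embeddings (rows = words)
C = rng.normal(scale=0.1, size=(100, 16))  # context embeddings (rows = contexts)
loss = negative_sampling_loss(W, C, pos_pairs=[(0, 1), (2, 3)], k=5, rng=rng)
```

Minimizing this loss with SGD pushes valid pairs toward inner products with $\sigma(\cdot)$ near 1, and the sampled negative pairs toward $\sigma(\cdot)$ near 0, without ever summing over all pairs outside the corpus.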