Combating Noisy Labels by Agreement: A Joint Training Method with Co-Regularization
Abstract
Deep learning with noisy labels is a practically challenging problem in weakly supervised learning. The state-of-the-art approaches "Decoupling" and "Co-teaching+" claim that the "disagreement" strategy is crucial for alleviating the problem of learning with noisy labels. In this paper, we start from a different perspective and propose a robust learning paradigm called JoCoR, which aims to reduce the diversity of two networks during training. Specifically, we first use two networks to make predictions on the same mini-batch of data and calculate a joint loss with Co-Regularization for each training example. Then we select small-loss examples to update the parameters of both networks simultaneously. Trained by the joint loss, these two networks become more and more similar due to the effect of Co-Regularization. Extensive experimental results on corrupted data from benchmark datasets including MNIST, CIFAR-10, CIFAR-100 and Clothing1M demonstrate that JoCoR is superior to many state-of-the-art approaches for learning with noisy labels.
1 Introduction
Deep Neural Networks (DNNs) achieve remarkable success on various tasks, and most of them are trained in a supervised manner, which heavily relies on a large number of training instances with accurate labels [14]. However, collecting large-scale datasets with fully precise annotations is expensive and time-consuming. To alleviate this problem, data annotation companies turn to alternative methods such as crowdsourcing [39, 43] and online queries [3] to improve labelling efficiency. Unfortunately, these methods usually suffer from unavoidable noisy labels, which have been proven to cause a noticeable decrease in the performance of DNNs [1, 44].
As this problem has severely limited the expansion of neural network applications, a large number of algorithms have been developed for learning with noisy labels, which belong to the family of weakly supervised learning frameworks [2, 5, 6, 7, 8, 9, 11]. Some of them focus on improving methods to estimate the latent noise transition matrix [21, 24, 32]. However, it is challenging to estimate the noise transition matrix accurately. An alternative approach is training on selected or weighted samples, e.g., MentorNet [16], gradient-based reweighting [30] and Co-teaching [12]. Furthermore, the state-of-the-art methods Co-teaching+ [41] and Decoupling [23] have shown excellent performance in learning with noisy labels by introducing the "Disagreement" strategy, where "when to update" depends on a disagreement between two different networks. However, only a portion of the training examples can be selected by the "Disagreement" strategy, and these examples cannot be guaranteed to have ground-truth labels [12]. Therefore, a question arises: Is "Disagreement" necessary for training two networks to deal with noisy labels?
Motivated by Co-training for multi-view learning and semi-supervised learning, which aims to maximize the agreement on multiple distinct views [4, 19, 34, 45], a straightforward method for handling noisy labels is to apply regularization from the peer network when training each single network. However, although this regularization may improve the generalization ability of the networks by encouraging agreement between them, it still suffers from memorization effects on noisy labels [44]. To address this problem, we propose a novel approach named JoCoR (Joint Training with Co-Regularization). Specifically, we train two networks with a joint loss, which includes a conventional supervised loss and a Co-Regularization loss. Furthermore, we use the joint loss to select small-loss examples, thereby ensuring the error flow from the biased selection is not accumulated in a single network.
To show that JoCoR significantly improves the robustness of deep learning on noisy labels, we conduct extensive experiments on both simulated and real-world noisy datasets, including MNIST, CIFAR-10, CIFAR-100 and Clothing1M. Empirical results demonstrate that the robustness of deep models trained by our proposed approach is superior to many state-of-the-art approaches. Furthermore, the ablation studies clearly demonstrate the effectiveness of Co-Regularization and Joint Training.
2 Related work
In this section, we briefly review existing works on learning with noisy labels.
Noise rate estimation. The early methods focus on estimating the label transition matrix [24, 25, 28, 37]. For example, F-correction [28] uses a two-step solution to heuristically estimate the noise transition matrix. An additional softmax layer is introduced to model the noise transition matrix [10]. In these approaches, the quality of noise rate estimation is a critical factor for improving robustness. However, noise rate estimation is challenging, especially on datasets with a large number of classes.
Small-loss selection. Recently, a promising method for handling noisy labels is to train models on small-loss instances [30]. Intuitively, the performance of DNNs will be better if the training data become less noisy. Previous work showed that during training, DNNs tend to learn simple patterns first, then gradually memorize all samples [1], which justifies the widely used small-loss criterion: treating samples with small training loss as clean ones. In particular, MentorNet [16] first trains a teacher network, then uses it to select clean instances for guiding the training of the student network. As for Co-teaching [12], in each mini-batch of data, each network chooses its small-loss instances and exchanges them with its peer network for updating the parameters. The authors argued that these two networks can filter different types of errors brought by noisy labels since they have different learning abilities. When the error from noisy data flows into the peer network, the peer network attenuates this error due to its robustness.
Disagreement. The "Disagreement" strategy is also applied to this problem. For instance, Decoupling [23] updates the model only using instances on which the predictions of two different networks differ. The idea of disagreement-update is similar to hard example mining [33], which trains the model with misclassified examples and expects these examples to help steer the classifier away from its current mistakes. For the "Disagreement" strategy, the decision of "when to update" depends on a disagreement between two networks instead of depending on the label. As a result, it helps decrease the divergence between these networks. However, since noisy labels are spread across the whole space of examples, there may be many noisy labels in the disagreement area, where the Decoupling approach cannot handle noisy labels explicitly. Combining the "Disagreement" strategy with the cross-update in Co-teaching, Co-teaching+ [41] achieves excellent performance in improving the robustness of DNNs against noisy labels. In spite of that, Co-teaching+ only selects small-loss instances with different predictions from the two models, so very few examples are utilized for training in each mini-batch when the dataset has an extremely high noise rate. This prevents the training process from making efficient use of training examples, a phenomenon that is explicitly shown in our experiments in the Symmetry-80% label noise case.
Other deep learning methods. In addition to the aforementioned approaches, there are some other deep learning solutions [13, 17] for dealing with noisy labels, including pseudo-label based [35, 40] and robust loss based approaches [28, 46]. Among pseudo-label based approaches, Joint Optimization [35] learns network parameters and infers the ground-truth labels simultaneously, while PENCIL [40] adopts label probability distributions to supervise network learning and updates these distributions through back-propagation end-to-end in each epoch. Among robust loss based approaches, F-correction [28] proposes a robust risk minimization method that learns neural networks for multi-class classification by estimating label corruption probabilities, and GCE [46] combines the advantages of the mean absolute loss and the cross-entropy loss to obtain a better loss function, presenting a theoretical analysis of the proposed loss functions in the context of noisy labels.
Semi-supervised learning. Semi-supervised learning also belongs to the family of weakly supervised learning frameworks [15, 18, 22, 26, 27, 31, 47]. There are some interesting works from semi-supervised learning that are highly relevant to our approach. In contrast to the "Disagreement" strategy, many of them are based on agreement maximization. Co-RLS [34] extends standard regularization methods such as Support Vector Machines (SVMs) and Regularized Least Squares (RLS) to multi-view semi-supervised learning by optimizing measures of agreement and smoothness over labelled and unlabelled examples. EA++ [19] is a co-regularization based approach for semi-supervised domain adaptation, which builds on the notion of augmented space and harnesses unlabeled data in the target domain to further assist the transfer of information from source to target. The intuition is that different models in each view would agree on the labels of most examples, and it is unlikely for compatible classifiers trained on independent views to agree on an incorrect label. This intuition also motivates us to deal with noisy labels based on the agreement maximization principle.
3 The Proposed Approach
As mentioned before, we propose to apply the agreement maximization principle to tackle the problem of noisy labels. In our approach, we encourage two different classifiers to make predictions closer to each other through an explicit regularization term instead of the hard sampling employed by the "Disagreement" strategy. This method can be viewed as a meta-algorithm that trains two base classifiers with one loss function, which includes a regularization term to reduce the divergence between the two classifiers.
For multi-class classification with $M$ classes, we suppose the dataset with $N$ samples is given as $D = \{(x_i, y_i)\}_{i=1}^{N}$, where $x_i$ is the $i$-th instance and $y_i \in \{1, \dots, M\}$ is its observed label. Similar to Decoupling and Co-teaching+, we formulate the proposed JoCoR approach with two deep neural networks denoted by $f(x, \Theta_1)$ and $f(x, \Theta_2)$, while $p_1(x_i)$ and $p_2(x_i)$ denote their prediction probabilities for instance $x_i$, respectively. In other words, $p_1(x_i)$ and $p_2(x_i)$ are the outputs of the "softmax" layer of the two networks.
Network. For JoCoR, each network can predict labels alone, but during the training stage the two networks are trained in a pseudo-siamese paradigm, which means their parameters are different but are updated simultaneously by a joint loss (see Figure 2). In this work, we call this paradigm "Joint Training".
Specifically, our proposed loss function on $x_i$ is constructed as follows:

$\ell(x_i) = (1 - \lambda)\, \ell_{sup}(x_i, y_i) + \lambda\, \ell_{con}(x_i)$  (1)
In the loss function, the first part is the conventional supervised learning loss of the two networks, while the second part is the contrastive loss between the predictions of the two networks, which achieves Co-Regularization.
Classification loss. For multi-class classification, we use the Cross-Entropy loss as the supervised part to minimize the distance between predictions and labels:

$\ell_{sup}(x_i, y_i) = \ell_{C_1}(x_i, y_i) + \ell_{C_2}(x_i, y_i), \quad \ell_{C_k}(x_i, y_i) = -\sum_{m=1}^{M} y_i^{m} \log p_k^{m}(x_i), \; k \in \{1, 2\}$  (2)
Intuitively, two networks can filter different types of errors brought by noisy labels since they have different learning abilities. In Co-teaching [12], when the two networks exchange the selected small-loss instances in each mini-batch, the error flows are reduced mutually by the peer networks. By virtue of the joint-training paradigm, our JoCoR considers the classification losses from both networks during the "small-loss" selection stage. In this way, JoCoR shares the advantage of the cross-update strategy in Co-teaching. This argument is clearly supported by the ablation study in a later section.
Contrastive loss. From the view of agreement maximization principles [4, 34], different models would agree on the labels of most examples, and they are unlikely to agree on incorrect labels. Based on this observation, we apply the Co-Regularization method to maximize the agreement between the two classifiers. On one hand, the Co-Regularization term helps our algorithm select examples with clean labels, since a small Co-Regularization loss on an example means the two networks reach an agreement on its predictions. On the other hand, the regularization from the peer network helps each model find a much wider minimum, which is expected to provide better generalization performance [45].
In JoCoR, we utilize a contrastive term as Co-Regularization to make the networks guide each other. To measure the match between the two networks' predictions $p_1$ and $p_2$, we adopt the Jensen-Shannon (JS) divergence. To simplify implementation, we use the symmetric Kullback-Leibler (KL) divergence as a surrogate for this term:

$\ell_{con}(x_i) = D_{KL}(p_1 \,\|\, p_2) + D_{KL}(p_2 \,\|\, p_1)$  (3)

where $D_{KL}(p_1 \,\|\, p_2) = \sum_{m=1}^{M} p_1^{m}(x_i) \log \frac{p_1^{m}(x_i)}{p_2^{m}(x_i)}$, and $D_{KL}(p_2 \,\|\, p_1)$ is defined symmetrically.
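To make the joint loss concrete, the following is a minimal NumPy sketch, not the authors' code: the function names, the default $\lambda$ and the clipping constant are our own choices. It combines the cross-entropy terms of Eq. (2) with the symmetric KL term of Eq. (3) into the per-example joint loss of Eq. (1):

```python
import numpy as np

def kl_div(p, q, eps=1e-12):
    """Row-wise KL(p || q) between probability vectors."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return np.sum(p * np.log(p / q), axis=-1)

def jocor_loss(p1, p2, y, lam=0.85):
    """Per-example joint loss: (1 - lam) * (cross-entropy of both networks)
    plus lam * symmetric KL divergence between their predictions.
    p1, p2: (N, M) softmax outputs of the two networks; y: (N,) int labels."""
    rows = np.arange(len(y))
    l_sup = (-np.log(np.clip(p1[rows, y], 1e-12, 1.0))
             - np.log(np.clip(p2[rows, y], 1e-12, 1.0)))
    l_con = kl_div(p1, p2) + kl_div(p2, p1)  # surrogate for JS divergence
    return (1.0 - lam) * l_sup + lam * l_con
```

When the two networks agree exactly, the contrastive term vanishes and only the supervised part remains; disagreement inflates the loss, which is precisely what makes the joint loss useful for the small-loss selection described below.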
Small-loss selection. Before introducing the details, we first clarify the connection between small losses and clean instances. Intuitively, small-loss examples are likely to be the ones that are correctly labelled [12, 30]. Thus, if we train our classifier only on the small-loss instances in each mini-batch, it should be resistant to noisy labels.
To handle noisy labels, we apply the "small-loss" criterion to select "clean" instances (step 8 in Algorithm 1). Following the setting of Co-teaching+, we update $R(T)$ (step 12), which controls how many small-loss examples should be selected in each training epoch. At the beginning of training, we keep more small-loss data (with a large $R(T)$) in each mini-batch since deep networks fit clean data first [1, 44]. As the epochs increase, we reduce $R(T)$ gradually until it reaches $1 - \tau$, keeping fewer examples in each mini-batch. This prevents the deep networks from overfitting noisy data [12].
In our algorithm, we use the joint loss (1) to select small-loss examples. Intuitively, a small joint loss on an instance means that both networks can easily reach a consensus and make correct predictions on it. As the two networks have different learning abilities owing to different initial conditions, the selected small-loss instances are more likely to have clean labels than those chosen by a single model. Specifically, we conduct small-loss selection as follows:
$\tilde{D} = \arg\min_{D' \subset \bar{D}:\, |D'| \ge R(T)|\bar{D}|} \sum_{x_i \in D'} \ell(x_i)$  (4)

where $\bar{D}$ denotes the current mini-batch.
After obtaining the small-loss instances, we calculate the average loss on these examples for further back-propagation:

$L = \frac{1}{|\tilde{D}|} \sum_{x_i \in \tilde{D}} \ell(x_i)$  (5)
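The selection and averaging steps above can be sketched as follows. This is a hypothetical helper, not the authors' implementation; `remember_rate` stands for $R(T)$:

```python
import numpy as np

def small_loss_update(losses, remember_rate):
    """Keep the remember_rate fraction of examples with the smallest joint
    loss (the selection of Eq. 4) and return their indices together with
    the mean loss over them, used for back-propagation (Eq. 5)."""
    n_keep = max(1, int(remember_rate * len(losses)))
    keep = np.argsort(losses)[:n_keep]  # indices of small-loss instances
    return keep, float(np.mean(losses[keep]))
```

Because both networks are updated by this single averaged loss, the biased selection of one network cannot silently accumulate errors in the other.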
Relations to other approaches. We compare JoCoR with other related approaches in Table 1. Specifically, Decoupling applies the "disagreement" strategy to select instances while Co-teaching uses the small-loss criterion. Besides, Co-teaching updates the parameters of the networks with the "cross-update" strategy to reduce the accumulated error flow. Combining the "disagreement" strategy and the "cross-update" strategy, Co-teaching+ achieves excellent performance. As for our JoCoR, we also select small-loss examples but update the networks by Joint Training. Furthermore, we use Co-Regularization to maximize the agreement between the two networks. Note that the Co-Regularization in our proposed method and the "disagreement" strategy in Decoupling both essentially reduce the divergence between the two classifiers. The difference between them lies in that the former uses an explicit regularization method with all training examples while the latter employs hard sampling, which reduces the effective number of training examples. This is especially important in the case of small-loss selection, because the selection further decreases the effective number of training examples.
Decoupling  Co-teaching  Co-teaching+  JoCoR
small loss  ✗  ✓  ✓  ✓ 
cross update  ✗  ✓  ✓  ✗ 
joint training  ✗  ✗  ✗  ✓ 
disagreement  ✓  ✗  ✓  ✗ 
agreement  ✗  ✗  ✗  ✓ 
[Table 2: test accuracy (%) on MNIST for Standard, F-correction, Decoupling, Co-teaching, Co-teaching+ and JoCoR under the Symmetry-20%, Symmetry-50%, Symmetry-80% and Asymmetry-40% flipping rates; the numeric entries were lost in extraction.]
[Table 3: test accuracy (%) on CIFAR-10 for Standard, F-correction, Decoupling, Co-teaching, Co-teaching+ and JoCoR under the Symmetry-20%, Symmetry-50%, Symmetry-80% and Asymmetry-40% flipping rates; the numeric entries were lost in extraction.]
[Table 4: test accuracy (%) on CIFAR-100 for Standard, F-correction, Decoupling, Co-teaching, Co-teaching+ and JoCoR under the Symmetry-20%, Symmetry-50%, Symmetry-80% and Asymmetry-40% flipping rates; the numeric entries were lost in extraction.]
4 Experiments
In this section, we first compare JoCoR with several state-of-the-art approaches, then analyze the impact of Joint Training and Co-Regularization by an ablation study. We also analyze the effect of $\lambda$ in (1) by a sensitivity analysis, which is given in the supplementary materials. Code is available at https://github.com/hongxin001/JoCoR.
4.1 Experiment setup
Datasets. We verify the effectiveness of our proposed algorithm on four benchmark datasets: MNIST, CIFAR-10, CIFAR-100 and Clothing1M [38]; the detailed characteristics of these datasets can be found in the supplementary materials. These datasets are popularly used for the evaluation of learning with noisy labels in the previous literature [10, 18, 29]. In particular, Clothing1M is a large-scale real-world dataset with noisy labels, which is widely used in related works [20, 28, 40, 38].
Since all datasets except Clothing1M are clean, following [28, 29], we corrupt these datasets manually with the label transition matrix $Q$, where $Q_{ij} = \Pr(\tilde{y} = j \mid y = i)$ given that the noisy label $\tilde{y}$ is flipped from the clean label $y$. The matrix $Q$ has two representative structures: (1) Symmetry flipping [36]; (2) Asymmetry flipping [28], a simulation of fine-grained classification with noisy labels, where labellers may make mistakes only within very similar classes.
Following F-correction [28], only half of the classes in the dataset have noisy labels in the asymmetric setting, so the actual noise rate over the whole dataset is half of the noise rate within the noisy classes. Specifically, when the asymmetric noise rate is 0.4, the overall noise rate is 0.2. Figure 4 shows an example of a noise transition matrix.
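Symmetry flipping can be sketched as follows. This is an illustrative NumPy snippet under our own helper names; asymmetric flipping would instead move the off-diagonal mass of each row onto a single similar class:

```python
import numpy as np

def symmetric_noise_matrix(num_classes, noise_rate):
    """Build the symmetry-flipping transition matrix Q: each label keeps
    probability 1 - noise_rate and spreads noise_rate evenly over the
    other classes."""
    q = np.full((num_classes, num_classes), noise_rate / (num_classes - 1))
    np.fill_diagonal(q, 1.0 - noise_rate)
    return q

def corrupt_labels(labels, q, rng):
    """Flip each clean label i to label j with probability Q[i, j]."""
    return np.array([rng.choice(len(q), p=q[label]) for label in labels])
```

Sampling the noisy labels row by row from $Q$ reproduces the intended overall flipping rate in expectation.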
For experiments on Clothing1M, we use the 1M images with noisy labels for training, and the 14k and 10k clean data for validation and test, respectively. Note that we do not use the 50k clean training data in any of the experiments because only noisy labels are required during training [20, 35]. For preprocessing, we resize each image, crop the center as input, and perform normalization.
Baselines. We compare JoCoR (Algorithm 1) with the following state-of-the-art algorithms. We implement all methods with default parameters in PyTorch and conduct all experiments on an NVIDIA Tesla V100 GPU.

(i) Co-teaching+ [41], which trains two deep neural networks and consists of a disagreement-update step and a cross-update step.

(ii) Co-teaching [12], which trains two networks simultaneously and cross-updates the parameters of the peer networks.

(iii) Decoupling [23], which updates the parameters only using instances on which the two classifiers make different predictions.

(iv) F-correction [28], which corrects the predictions by the label transition matrix. As suggested by the authors, we first train a standard network to estimate the transition matrix Q.

(v) Standard, a simple baseline: a deep network trained directly on the noisy dataset.
Network Structure and Optimizer. We use a 2-layer MLP for MNIST, and a 7-layer CNN architecture for CIFAR-10 and CIFAR-100. The detailed information can be found in the supplementary materials. For Clothing1M, we use ResNet-18.
For experiments on MNIST, CIFAR-10 and CIFAR-100, the Adam optimizer (momentum = 0.9) is used with an initial learning rate of 0.001, and the batch size is set to 128. We run 200 epochs in total and linearly decay the learning rate to zero from epoch 80 to epoch 200.
For experiments on Clothing1M, we also use the Adam optimizer (momentum = 0.9) and set the batch size to 64. During the training stage, we run 15 epochs in total and decay the learning rate in three stages of 5 epochs each.
As for $\lambda$ in our loss function (1), we search over [0.05, 0.10, 0.15, …, 0.95] with a clean validation set for the best performance. When the validation set also has noisy labels, we use small-loss selection to choose a clean subset for validation. As deep networks are highly non-convex, even with the same network and optimization method, different initializations can lead to different local optima. Thus, following Decoupling [23], we also take two networks with the same architecture but different initializations as the two classifiers.
Measurement. To measure performance, we use the test accuracy, i.e., test accuracy = (# of correct predictions) / (# of test examples). Besides, we also use the label precision in each mini-batch, i.e., label precision = (# of clean labels) / (# of all selected labels). Specifically, we sample the $R(T)$ fraction of small-loss instances in each mini-batch and then calculate the ratio of clean labels among them. Intuitively, higher label precision means fewer noisy instances in the mini-batch after sample selection, so an algorithm with higher label precision is also more robust to label noise. All experiments are repeated five times. The error bar for the standard deviation in each figure is shown as a shaded area.
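The label precision metric can be computed as in this sketch, a hypothetical helper assuming the indices of the selected small-loss instances and the ground-truth labels are both available (the latter is only possible on simulated noise):

```python
def label_precision(selected, observed, truth):
    """Fraction of the selected instances whose observed (possibly noisy)
    label agrees with the ground-truth label."""
    clean = sum(1 for i in selected if observed[i] == truth[i])
    return clean / len(selected)
```

A perfect selector would approach a label precision of 1.0 regardless of the noise rate, which is why the metric isolates selection quality from classification accuracy.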
Selection setting. Following Co-teaching, we assume that the noise rate $\tau$ is known. To conduct a fair comparison on the benchmark datasets, we set the ratio of small-loss samples identically across methods: $R(T) = 1 - \min\{\frac{T}{T_k}\tau, \tau\}$, where $T_k = 10$ for MNIST, CIFAR-10 and CIFAR-100, and $T_k = 5$ for Clothing1M. If $\tau$ is not known in advance, it can be inferred using validation sets [21, 42].
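The schedule above can be sketched in a few lines; this is a minimal sketch, with the default `t_k` taken from the benchmark setting stated above:

```python
def remember_rate(epoch, tau, t_k=10):
    """R(T) = 1 - min(epoch / t_k * tau, tau): keep all examples at the
    start of training, then linearly decay to 1 - tau over t_k epochs."""
    return 1.0 - min(epoch / t_k * tau, tau)
```

Keeping everything early exploits the observation that networks fit clean patterns first, while the floor at 1 − τ matches the expected fraction of clean labels once memorization becomes a risk.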
4.2 Comparison with the State-of-the-Art
Results on MNIST. The top of Figure 3 shows test accuracy vs. epochs on MNIST. In all four plots, we can observe the memorization effect of networks, i.e., the test accuracy of Standard first reaches a very high level and then gradually decreases. Thus, a good robust training method should stop or alleviate this decrease. On this point, JoCoR consistently achieves higher accuracy than all the other baselines in all four cases.
We can compare the test accuracy of the different algorithms in detail in Table 2. In the most natural Symmetry-20% case, all the new approaches clearly work better than Standard, which demonstrates their robustness. Among them, JoCoR and Co-teaching+ work significantly better than the other methods. In the Symmetry-50% and Asymmetry-40% cases, Decoupling begins to fail while the other methods still work fine, especially JoCoR and Co-teaching+. However, Co-teaching+ cannot handle the hardest Symmetry-80% case, where it only achieves 58.92%. In this case, JoCoR again achieves the best average classification accuracy (84.89%).
To explain such excellent performance, we plot label precision vs. epochs at the bottom of Figure 3. Only Decoupling, Co-teaching, Co-teaching+ and JoCoR are considered here, as they include example selection during training. First, we can see that both JoCoR and Co-teaching successfully pick out clean instances. Note that JoCoR not only reaches high label precision in all four cases but also performs better and better as the epochs increase, while Co-teaching declines gradually after reaching its peak. This shows that our approach is better at finding clean instances. In contrast, Decoupling and Co-teaching+ fail to select clean examples. As mentioned in the related work, very few examples are utilized by Co-teaching+ in the training process when the noise rate is extremely high, which explains why Co-teaching+ performs poorly in the hardest case.
Results on CIFAR-10. Table 3 shows the test accuracy on CIFAR-10. As we can see, JoCoR again performs the best in all four cases. In the Symmetry-20% case, JoCoR works much better than all the other baselines, and Co-teaching+ performs better than Co-teaching and Decoupling. In the other three cases, JoCoR is still the best, and Co-teaching+ cannot even achieve performance comparable to Co-teaching.
Figure 5 shows test accuracy and label precision vs. epochs. JoCoR outperforms all the other approaches on both test accuracy and label precision. On label precision, while Decoupling and Co-teaching+ fail to find clean instances, both JoCoR and Co-teaching succeed. An interesting phenomenon is that in the Asymmetry-40% case, although Co-teaching achieves better performance than JoCoR in the first 100 epochs, JoCoR consistently outperforms it in all later epochs. This result suggests that JoCoR has better generalization ability than Co-teaching.
Results on CIFAR-100. We then show our results on CIFAR-100. The test accuracy is shown in Table 4; test accuracy and label precision vs. epochs are shown in Figure 6. Note that MNIST and CIFAR-10 have only 10 classes while CIFAR-100 has 100, so overall the accuracy here is much lower than in Tables 2 and 3. Still, JoCoR achieves high test accuracy on this dataset. In the easier Symmetry-20% and Symmetry-50% cases, JoCoR works significantly better than Co-teaching+, Co-teaching and the other methods. In the hardest Symmetry-80% case, JoCoR and Co-teaching are close, but JoCoR still attains higher test accuracy. In the Asymmetry-40% case, JoCoR and Co-teaching+ perform much better than the other methods. On label precision, JoCoR keeps the best performance in all four cases.
Methods  best  last
Standard  67.22  64.68
F-correction  68.93  65.36
Decoupling  68.48  67.32
Co-teaching  69.21  68.51
Co-teaching+  59.32  58.79
JoCoR  70.30  69.79
Results on Clothing1M. Finally, we demonstrate the efficacy of the proposed method on real-world noisy labels using the Clothing1M dataset. In Table 5, best denotes the score of the epoch where the validation accuracy is optimal, and last denotes the score at the end of training. The proposed JoCoR method achieves a better best result than the state-of-the-art methods. After all epochs, JoCoR achieves a significant accuracy improvement of +5.11 over Standard, and an improvement of +1.28 over the best baseline method.
4.3 Ablation Study
To conduct an ablation study analyzing the effect of Co-Regularization, we set up experiments on MNIST and CIFAR-10 with Symmetry-50% noise. To implement Joint Training without Co-Regularization (Joint-only), we set $\lambda$ in (1) to 0. Besides, to verify the effect of the Joint Training paradigm, we add Co-teaching and Standard enhanced by "small-loss" selection (Standard+) to the comparison. Since the joint-training method selects examples by the joint loss while Co-teaching uses the cross-update method to reduce the error flow [12], these two methods should play a similar role during training according to the previous analysis.
The test accuracy and label precision vs. epochs on MNIST are shown in Figure 7. As we can see, JoCoR performs much better than the others on both test accuracy and label precision: JoCoR shows almost no decrease, while the other methods decline considerably after reaching their peak. This observation indicates that Co-Regularization strongly hinders neural networks from memorizing noisy labels.
The test accuracy and label precision vs. epochs on CIFAR-10 are shown in Figure 8. In this figure, JoCoR still maintains a large advantage over the other three methods on both test accuracy and label precision, while Joint-only, Co-teaching and Standard+ show the same trend as on MNIST, declining after rising to their highest point. These results show that Co-Regularization plays a vital role in handling noisy labels. Moreover, Joint-only achieves performance comparable to Co-teaching on test accuracy and performs better than Co-teaching and Standard+ on label precision. This shows that Joint Training is a more efficient paradigm for selecting clean examples than the cross-update in Co-teaching.
5 Conclusion
This paper proposes an effective approach called JoCoR to improve the robustness of deep neural networks trained with noisy labels. The key idea of JoCoR is to train two classifiers simultaneously with one joint loss, which is composed of a conventional supervised part and a Co-Regularization part. Similar to Co-teaching+, we also select small-loss instances by the joint loss to update the networks in each mini-batch. We conduct experiments on MNIST, CIFAR-10, CIFAR-100 and Clothing1M to demonstrate that JoCoR can train deep models robustly under both slightly and extremely noisy supervision. Furthermore, the ablation studies clearly demonstrate the effectiveness of Co-Regularization and Joint Training. In future work, we will explore the theoretical foundation of JoCoR from the view of traditional Co-training algorithms [19, 34].
Acknowledgments. This research is supported by Singapore National Research Foundation projects AISG-RP-2019-0013, NSOE-TSS2019-01, and NTU.
Appendix A Dataset
The detailed characteristics of these datasets are shown in Table 6.
  # of training  # of test  # of classes  image size

MNIST  60,000  10,000  10  28×28
CIFAR-10  50,000  10,000  10  32×32
CIFAR-100  50,000  10,000  100  32×32
Clothing1M  1,000,000  10,000  14  224×224
Appendix B Network Architecture
The network architectures of the MLP and CNN models are shown in Table 7.
[Table 7: the MLP on MNIST takes gray images as input and uses a dense hidden layer with ReLU before a dense output layer; the CNN on CIFAR-10 & CIFAR-100 takes RGB images as input and ends with a dense output layer. The intermediate layer specifications were lost in extraction.]
Appendix C Parameter Sensitivity Analysis
To conduct the sensitivity analysis on the parameter $\lambda$, we set up experiments on MNIST with Symmetry-50% noise. Specifically, we compare $\lambda$ in the range [0.05, 0.35, 0.65, 0.95]. The larger $\lambda$ is, the smaller the divergence between the two classifiers in JoCoR.
The test accuracy and label precision vs. number of epochs are shown in Figure 9. As $\lambda$ increases, the test accuracy of our algorithm improves; when $\lambda = 0.95$, JoCoR achieves the best performance. We can see the same trend in label precision, which means that JoCoR selects clean examples more precisely with a larger $\lambda$.
References
 (2017) A closer look at memorization in deep networks. In Proceedings of the 34th International Conference on Machine Learning, pp. 233–242. Cited by: §1, §2, §3.
 (2018) Classification from pairwise similarity and unlabeled data. In International Conference on Machine Learning, pp. 452–461. Cited by: §1.
 (2003) Noise-tolerant learning, the parity problem, and the statistical query model. Journal of the ACM 50 (4), pp. 506–519. Cited by: §1.
 (1998) Combining labeled and unlabeled data with co-training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pp. 92–100. Cited by: §1, §3.
 (2006) Semi-supervised learning. MIT Press. Cited by: §1.
 (2014) Analysis of learning from positive and unlabeled data. In Advances in Neural Information Processing Systems, pp. 703–711. Cited by: §1.
 (2018) Leveraging latent label distributions for partial label learning. In International Joint Conferences on Artificial Intelligence, pp. 2107–2113. Cited by: §1.
 (2019) Partial label learning by semantic difference maximization. In International Joint Conferences on Artificial Intelligence, pp. 2294–2300. Cited by: §1.
 (2019) Partial label learning with selfguided retraining. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 3542–3549. Cited by: §1.
 (2016) Training deep neural networks using a noise adaptation layer. In Proceedings of the 5th International Conference on Learning Representation, Cited by: §2, §4.1.
 (2017) Learning with inadequate and incorrect supervision. In 2017 IEEE International Conference on Data Mining, pp. 889–894. Cited by: §1.
 (2018) Co-teaching: robust training of deep neural networks with extremely noisy labels. In Advances in Neural Information Processing Systems, pp. 8527–8537. Cited by: §1, §2, §3, §3, §3, item ii, §4.3.
 (2019) Deep self-learning from noisy labels. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5138–5147. Cited by: §2.
 (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778. Cited by: §1.
 (2018) Binary classification for positive-confidence data. In Advances in Neural Information Processing Systems, pp. 5917–5928. Cited by: §2.
 (2017) MentorNet: learning data-driven curriculum for very deep neural networks on corrupted labels. arXiv preprint arXiv:1712.05055. Cited by: §1, Figure 1, §2.
 (2019) NLNL: negative learning for noisy labels. In Proceedings of the IEEE International Conference on Computer Vision, pp. 101–110. Cited by: §2.
 (2017) Positive-unlabeled learning with non-negative risk estimator. In Advances in Neural Information Processing Systems, pp. 1675–1685. Cited by: §2, §4.1.
 (2010) Co-regularization based semi-supervised domain adaptation. In Advances in Neural Information Processing Systems, pp. 478–486. Cited by: §1, §2, §5.
 (2019) Learning to learn from noisy labeled data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5051–5059. Cited by: §4.1, §4.1.
 (2015) Classification with noisy labels by importance reweighting. IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (3), pp. 447–461. Cited by: §1, §4.1.
 (2019) On the minimal supervision for training any binary classifier from only unlabeled data. In Proceedings of the International Conference on Learning Representations, Cited by: §2.
 (2017) Decoupling “when to update” from “how to update”. In Advances in Neural Information Processing Systems, pp. 960–970. Cited by: §1, Figure 1, §2, item iii, §4.1.
 (2015) Learning from corrupted binary labels via class-probability estimation. In International Conference on Machine Learning, pp. 125–134. Cited by: §1, §2.
 (2013) Learning with noisy labels. In Advances in Neural Information Processing Systems, pp. 1196–1204. Cited by: §2.
 (2016) Theoretical comparisons of positive-unlabeled learning against positive-negative learning. In Advances in Neural Information Processing Systems, pp. 1199–1207. Cited by: §2.
 (2013) Squared-loss mutual information regularization: a novel information-theoretic approach to semi-supervised learning. In International Conference on Machine Learning, pp. 10–18. Cited by: §2.
 (2017) Making deep neural networks robust to label noise: a loss correction approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1944–1952. Cited by: §2, §2, item iv, §4.1, §4.1, §4.1.
 (2014) Training deep neural networks on noisy labels with bootstrapping. arXiv preprint arXiv:1412.6596. Cited by: §4.1, §4.1.
 (2018) Learning to reweight examples for robust deep learning. arXiv preprint arXiv:1803.09050. Cited by: §1, §2, §3.
 (2017) Semi-supervised classification based on classification from positive and unlabeled data. In International Conference on Machine Learning, pp. 2998–3006. Cited by: §2.
 (2014) Class proportion estimation with application to multiclass anomaly rejection. In Artificial Intelligence and Statistics, pp. 850–858. Cited by: §1.
 (2016) Training region-based object detectors with online hard example mining. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 761–769. Cited by: §2.
 (2005) A co-regularization approach to semi-supervised learning with multiple views. In Proceedings of ICML Workshop on Learning With Multiple Views, pp. 74–79. Cited by: §1, §2, §3, §5.
 (2018) Joint optimization framework for learning with noisy labels. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5552–5560. Cited by: §2, §4.1.
 (2015) Learning with symmetric label noise: the importance of being unhinged. In Advances in Neural Information Processing Systems, pp. 10–18. Cited by: §4.1.
 (2019) Are anchor points really indispensable in label-noise learning? In Advances in Neural Information Processing Systems, pp. 6835–6846. Cited by: §2.
 (2015) Learning from massive noisy labeled data for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2691–2699. Cited by: §4.1.
 (2014) Learning from multiple annotators with varying expertise. Machine Learning 95 (3), pp. 291–327. Cited by: §1.
 (2019) Probabilistic end-to-end noise correction for learning with noisy labels. arXiv preprint arXiv:1903.07788. Cited by: §2, §4.1.
 (2019) How does disagreement benefit co-teaching? arXiv preprint arXiv:1901.04215. Cited by: §1, Figure 1, §2, item i.
 (2018) An efficient and provable approach for mixture proportion estimation using linear independence assumption. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4480–4489. Cited by: §4.1.
 (2018) Learning with biased complementary labels. In Proceedings of the European Conference on Computer Vision, pp. 68–83. Cited by: §1.
 (2016) Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530. Cited by: §1, §1, §3.
 (2018) Deep mutual learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4320–4328. Cited by: §1, §3.
 (2018) Generalized cross entropy loss for training deep neural networks with noisy labels. In Advances in Neural Information Processing Systems, pp. 8778–8788. Cited by: §2.
 (2018) A brief introduction to weakly supervised learning. National Science Review 5 (1), pp. 44–53. Cited by: §2.