Combating Noisy Labels by Agreement: A Joint Training Method with Co-Regularization
Deep learning with noisy labels is a practically challenging problem in weakly supervised learning. The state-of-the-art approaches “Decoupling” and “Co-teaching+” claim that the “disagreement” strategy is crucial for alleviating the problem of learning with noisy labels. In this paper, we start from a different perspective and propose a robust learning paradigm called JoCoR, which aims to reduce the diversity of two networks during training. Specifically, we first use two networks to make predictions on the same mini-batch of data and calculate a joint loss with Co-Regularization for each training example. Then, we select small-loss examples to update the parameters of both networks simultaneously. Trained by the joint loss, the two networks become more and more similar due to the effect of Co-Regularization. Extensive experimental results on corrupted data from benchmark datasets including MNIST, CIFAR-10, CIFAR-100 and Clothing1M demonstrate that JoCoR is superior to many state-of-the-art approaches for learning with noisy labels.
1 Introduction

Deep Neural Networks (DNNs) achieve remarkable success on various tasks, and most of them are trained in a supervised manner, which heavily relies on a large number of training instances with accurate labels. However, collecting large-scale datasets with fully precise annotations is expensive and time-consuming. To alleviate this problem, data annotation companies choose alternative methods such as crowdsourcing [39, 43] and online queries to improve labelling efficiency. Unfortunately, these methods usually suffer from unavoidable noisy labels, which have been proven to lead to a noticeable decrease in the performance of DNNs [1, 44].
As this problem has severely limited the expansion of neural network applications, a large number of algorithms have been developed for learning with noisy labels, which belongs to the family of weakly supervised learning frameworks [2, 5, 6, 7, 8, 9, 11]. Some of them focus on improving methods to estimate the latent noise transition matrix [21, 24, 32]. However, it is challenging to estimate the noise transition matrix accurately. An alternative approach is training on selected or weighted samples, e.g., MentorNet, gradient-based reweighting and Co-teaching. Furthermore, the state-of-the-art methods Co-teaching+ and Decoupling have shown excellent performance in learning with noisy labels by introducing the “Disagreement” strategy, where “when to update” depends on a disagreement between two different networks. However, only part of the training examples can be selected by the “Disagreement” strategy, and these examples cannot be guaranteed to have ground-truth labels. This raises a question to be answered: is “Disagreement” necessary for training two networks to deal with noisy labels?
Motivated by Co-training for multi-view learning and semi-supervised learning, which aims to maximize the agreement on multiple distinct views [4, 19, 34, 45], a straightforward method for handling noisy labels is to apply regularization from peer networks when training each single network. However, although this regularization may improve the generalization ability of the networks by encouraging agreement between them, it still suffers from memorization effects on noisy labels. To address this problem, we propose a novel approach named JoCoR (Joint Training with Co-Regularization). Specifically, we train two networks with a joint loss, comprising the conventional supervised loss and the Co-Regularization loss. Furthermore, we use the joint loss to select small-loss examples, thereby ensuring that the error flow from biased selection is not accumulated in a single network.
To show that JoCoR significantly improves the robustness of deep learning on noisy labels, we conduct extensive experiments on both simulated and real-world noisy datasets, including MNIST, CIFAR-10, CIFAR-100 and Clothing1M. Empirical results demonstrate that the robustness of deep models trained by our proposed approach is superior to that of many state-of-the-art approaches. Furthermore, the ablation studies clearly demonstrate the effectiveness of Co-Regularization and Joint Training.
2 Related work
In this section, we briefly review existing works on learning with noisy labels.
Noise rate estimation. Early methods focus on estimating the label transition matrix [24, 25, 28, 37]. For example, F-correction uses a two-step solution to heuristically estimate the noise transition matrix, and an additional softmax layer can be introduced to model the noise transition matrix. In these approaches, the quality of the noise rate estimation is a critical factor for improving robustness. However, noise rate estimation is challenging, especially on datasets with a large number of classes.
Small-loss selection. Recently, a promising method of handling noisy labels is to train models on small-loss instances. Intuitively, the performance of DNNs will be better if the training data become less noisy. Previous work showed that during training, DNNs tend to learn simple patterns first and then gradually memorize all samples, which justifies the widely used small-loss criterion: treating samples with small training loss as clean ones. In particular, MentorNet first trains a teacher network, then uses it to select clean instances for guiding the training of the student network. As for Co-teaching, in each mini-batch of data, each network chooses its small-loss instances and exchanges them with its peer network for updating the parameters. The authors argued that these two networks could filter different types of errors brought by noisy labels since they have different learning abilities. When the error from noisy data flows into the peer network, the peer network will attenuate this error due to its robustness.
Disagreement. The “Disagreement” strategy is also applied to this problem. For instance, Decoupling updates the model using only instances on which the predictions of two different networks differ. The idea of disagreement-update is similar to hard example mining, which trains the model with misclassified examples and expects these examples to help steer the classifier away from its current mistakes. For the “Disagreement” strategy, the decision of “when to update” depends on a disagreement between the two networks instead of on the label. As a result, it helps decrease the divergence between these networks. However, as noisy labels are spread across the whole space of examples, there may be many noisy labels in the disagreement area, where the Decoupling approach cannot handle noisy labels explicitly. Combining the “Disagreement” strategy with the cross-update in Co-teaching, Co-teaching+ achieves excellent performance in improving the robustness of DNNs against noisy labels. In spite of that, Co-teaching+ only selects small-loss instances on which the two models make different predictions, so very few examples are utilized for training in each mini-batch when datasets have extremely high noise rates. This prevents the training process from using training examples efficiently, a phenomenon explicitly shown in our experiments in the symmetric-80% label noise case.
Other deep learning methods. In addition to the aforementioned approaches, there are some other deep learning solutions [13, 17] to deal with noisy labels, including pseudo-label based [35, 40] and robust-loss based approaches [28, 46]. For pseudo-label based approaches, Joint Optimization learns network parameters and infers the ground-truth labels simultaneously, while PENCIL adopts label probability distributions to supervise network learning and updates these distributions through end-to-end back-propagation in each epoch. For robust-loss based approaches, F-correction proposes a robust risk minimization method that learns neural networks for multi-class classification by estimating label corruption probabilities, and GCE combines the advantages of the mean absolute error loss and the cross-entropy loss to obtain a better loss function, presenting a theoretical analysis of the proposed losses in the context of noisy labels.
Semi-supervised learning. Semi-supervised learning also belongs to the family of weakly supervised learning frameworks [15, 18, 22, 26, 27, 31, 47]. There are some interesting works from semi-supervised learning that are highly relevant to our approach. In contrast to the “Disagreement” strategy, many of them are based on an agreement maximization algorithm. Co-RLS extends standard regularization methods like Support Vector Machines (SVM) and Regularized Least Squares (RLS) to multi-view semi-supervised learning by optimizing measures of agreement and smoothness over labelled and unlabelled examples. EA++ is a co-regularization based approach for semi-supervised domain adaptation, which builds on the notion of augmented space and harnesses unlabeled data in the target domain to further assist the transfer of information from source to target. The intuition is that different models in each view would agree on the labels of most examples, and it is unlikely for compatible classifiers trained on independent views to agree on an incorrect label. This intuition also motivates us to deal with noisy labels based on the agreement maximization principle.
3 The Proposed Approach
As mentioned before, we suggest applying the agreement maximization principle to tackle the problem of noisy labels. In our approach, we encourage two different classifiers to make predictions closer to each other via an explicit regularization method instead of the hard sampling employed by the “Disagreement” strategy. This method can be considered a meta-algorithm that trains two base classifiers with one loss function, which includes a regularization term to reduce the divergence between the two classifiers.
For multi-class classification with $M$ classes, we suppose the dataset with $N$ samples is given as $D = \{(x_i, y_i)\}_{i=1}^{N}$, where $x_i$ is the $i$-th instance with its observed label $y_i \in \{1, \dots, M\}$. Similar to Decoupling and Co-teaching+, we formulate the proposed JoCoR approach with two deep neural networks denoted by $f(x, \Theta_1)$ and $f(x, \Theta_2)$, while $p_1$ and $p_2$ denote their prediction probabilities for instance $x_i$, respectively. In other words, $p_1$ and $p_2$ are the outputs of the “softmax” layer in $f(x, \Theta_1)$ and $f(x, \Theta_2)$.
Network. For JoCoR, each network can predict labels alone, but during the training stage the two networks are trained in a pseudo-siamese paradigm, which means their parameters are different but updated simultaneously by a joint loss (see Figure 2). In this work, we call this paradigm “Joint Training”.
Specifically, our proposed loss function on $x_i$ is constructed as follows:

$$\ell(x_i) = (1 - \lambda)\,\ell_{\mathrm{sup}}(x_i, y_i) + \lambda\,\ell_{\mathrm{con}}(x_i) \tag{1}$$
In the loss function, the first part is the conventional supervised loss of the two networks, and the second part is the contrastive loss between the predictions of the two networks, which achieves Co-Regularization.
Classification loss. For multi-class classification, we use Cross-Entropy Loss as the supervised part to minimize the distance between predictions and labels.
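Written out explicitly (a reconstruction consistent with the notation above, where $y_i^m$ denotes the one-hot indicator of label $y_i$ and $p_1^m$, $p_2^m$ are the class-$m$ probabilities predicted by the two networks), the supervised part sums the two cross-entropy losses:

```latex
\ell_{\mathrm{sup}}(x_i, y_i)
  = \ell_{C1}(x_i, y_i) + \ell_{C2}(x_i, y_i)
  = -\sum_{m=1}^{M} y_i^m \log p_1^m(x_i)
    \;-\; \sum_{m=1}^{M} y_i^m \log p_2^m(x_i)
```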
Intuitively, the two networks can filter different types of errors brought by noisy labels since they have different learning abilities. In Co-teaching, when the two networks exchange the selected small-loss instances in each mini-batch, the error flows can be mutually reduced by the peer networks. By virtue of the joint-training paradigm, our JoCoR considers the classification losses from both networks during the “small-loss” selection stage. In this way, JoCoR shares the advantage of the cross-update strategy in Co-teaching. This argument will be clearly supported by the ablation study in a later section.
Contrastive loss. From the view of agreement maximization principles [4, 34], different models would agree on the labels of most examples, and they are unlikely to agree on incorrect labels. Based on this observation, we apply the Co-Regularization method to maximize the agreement between the two classifiers. On one hand, the Co-Regularization term helps our algorithm select examples with clean labels, since a small Co-Regularization loss on an example means that the two networks reach an agreement on its predictions. On the other hand, the regularization from peer networks helps the model find a much wider minimum, which is expected to provide better generalization performance.
In JoCoR, we utilize the contrastive term as Co-Regularization to make the networks guide each other. To measure the agreement between the two networks’ predictions $p_1$ and $p_2$, we adopt the Jensen-Shannon (JS) divergence. To simplify implementation, we use the symmetric Kullback-Leibler (KL) divergence as a surrogate for this term.
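As a minimal illustration of the joint loss, the following sketch implements the symmetric-KL Co-Regularization and the per-example combination of supervised and contrastive parts. It is written in plain Python rather than the paper's PyTorch implementation; the epsilon smoothing and the default `lam=0.85` are our own assumptions for the sketch, since the paper searches λ on a validation set.

```python
import math

def kl_div(p, q, eps=1e-12):
    """KL divergence D_KL(p || q) between two discrete distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def co_regularization(p1, p2):
    """Symmetric KL divergence used as a surrogate for the JS-style agreement term."""
    return kl_div(p1, p2) + kl_div(p2, p1)

def cross_entropy(p, label, eps=1e-12):
    """Cross-entropy of a predicted distribution p against an integer class label."""
    return -math.log(p[label] + eps)

def joint_loss(p1, p2, label, lam=0.85):
    """Per-example joint loss: (1 - lam) * supervised part + lam * contrastive part."""
    sup = cross_entropy(p1, label) + cross_entropy(p2, label)  # both networks' CE
    con = co_regularization(p1, p2)                            # agreement term
    return (1.0 - lam) * sup + lam * con
```

When the two networks agree exactly, the contrastive term vanishes and only the supervised part remains, which is what makes a small joint loss a signal of both correctness and consensus.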
Small-loss selection. Before introducing the details, we first clarify the connection between small losses and clean instances. Intuitively, small-loss examples are likely to be the ones that are correctly labelled [12, 30]. Thus, if we train our classifier only using small-loss instances in each mini-batch, it would be resistant to noisy labels.
To handle noisy labels, we apply the “small-loss” criterion to select “clean” instances (step 8 in Algorithm 1). Following the setting of Co-teaching+, we update $R(T)$ (step 12), which controls how many small-loss data should be selected in each training epoch. At the beginning of training, we keep more small-loss data (with a large $R(T)$) in each mini-batch since deep networks would fit clean data first [1, 44]. As training proceeds, we reduce $R(T)$ gradually until it reaches $1 - \tau$, keeping fewer examples in each mini-batch. Such an operation prevents deep networks from over-fitting noisy data.
In our algorithm, we use the joint loss (1) to select small-loss examples. Intuitively, an instance with a small joint loss means that the two networks easily reach a consensus and make correct predictions on it. As the two networks have different learning abilities owing to different initial conditions, the selected small-loss instances are more likely to have clean labels than those chosen by a single model. Specifically, we conduct small-loss selection as follows:

$$\hat{D} = \arg\min_{D':\, |D'| \geq R(T)|D|} \ell(D')$$
After obtaining the small-loss instances $\hat{D}$, we calculate the average loss on these examples for further backpropagation:

$$L = \frac{1}{|\hat{D}|} \sum_{x_i \in \hat{D}} \ell(x_i)$$
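The selection and averaging steps above can be sketched as follows. This is a simplified illustration: `remember_rate` stands for $R(T)$, and the function names are ours, not from the authors' code.

```python
def select_small_loss(losses, remember_rate):
    """Return indices of the `remember_rate` fraction of mini-batch examples
    with the smallest joint loss (the presumed-clean subset)."""
    n_keep = int(remember_rate * len(losses))
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    return order[:n_keep]

def averaged_selected_loss(losses, remember_rate):
    """Average joint loss over the selected small-loss examples,
    which is the quantity backpropagated through both networks."""
    idx = select_small_loss(losses, remember_rate)
    return sum(losses[i] for i in idx) / max(len(idx), 1)
```

For example, with per-example joint losses `[0.1, 5.0, 0.2, 4.0]` and a remember rate of 0.5, the two small-loss examples (indices 0 and 2) are kept and their losses averaged.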
Relations to other approaches. We compare JoCoR with other related approaches in Table 1. Specifically, Decoupling applies the “disagreement” strategy to select instances, while Co-teaching uses the small-loss criterion. Besides, Co-teaching updates the parameters of the networks by the “cross-update” strategy to reduce the accumulated error flow. Combining the “disagreement” strategy and the “cross-update” strategy, Co-teaching+ achieves excellent performance. As for our JoCoR, we also select small-loss examples but update the networks by Joint Training. Furthermore, we use Co-Regularization to maximize the agreement between the two networks. Note that the Co-Regularization in our proposed method and the “disagreement” strategy in Decoupling both essentially aim to reduce the divergence between the two classifiers. The difference lies in that the former uses an explicit regularization method with all training examples, while the latter employs hard sampling that reduces the effective number of training examples. This is especially important in the case of small-loss selection, because the selection further decreases the effective number of training examples.
4 Experiments

In this section, we first compare JoCoR with some state-of-the-art approaches, then analyze the impact of Joint Training and Co-Regularization by an ablation study. We also analyze the effect of $\lambda$ in (1) by a sensitivity analysis, which is included in the supplementary materials. Code is available at https://github.com/hongxin001/JoCoR.
4.1 Experiment setup
Datasets. We verify the effectiveness of our proposed algorithm on four benchmark datasets: MNIST, CIFAR-10, CIFAR-100 and Clothing1M; the detailed characteristics of these datasets can be found in the supplementary materials. These datasets are popularly used for the evaluation of learning with noisy labels in the previous literature [10, 18, 29]. In particular, Clothing1M is a large-scale real-world dataset with noisy labels, which is widely used in related works [20, 28, 40, 38].
Since all datasets are clean except Clothing1M, following [28, 29], we corrupt these datasets manually with the label transition matrix $Q$, where $Q_{ij} = \Pr(\tilde{y} = j \mid y = i)$ is the probability that the noisy label $\tilde{y}$ is flipped from the clean label $y$. The matrix $Q$ has two representative structures: (1) Symmetry flipping; (2) Asymmetry flipping, a simulation of fine-grained classification with noisy labels, where labellers may make mistakes only within very similar classes.
Following F-correction, only half of the classes in the dataset have noisy labels in the setting of asymmetric noise, so the actual noise rate in the whole dataset is half of the noise rate in the noisy classes. Specifically, an asymmetric noise rate of 0.4 corresponds to an overall noise rate of 0.2. Figure 4 shows an example of a noise transition matrix.
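For concreteness, the two noise structures could be generated as follows. This is a sketch under the stated assumptions: the pairing of “similar” classes in the asymmetric case is an illustrative choice (real benchmarks pair semantically similar classes, e.g., truck → automobile), but it preserves the property that only half of the classes are noisy, so the dataset-level noise rate is half the per-class rate.

```python
def symmetric_Q(num_classes, noise_rate):
    """Symmetric flipping: a label stays correct with probability
    1 - noise_rate and otherwise flips uniformly to any other class."""
    off = noise_rate / (num_classes - 1)
    return [[1.0 - noise_rate if i == j else off for j in range(num_classes)]
            for i in range(num_classes)]

def asymmetric_Q(num_classes, noise_rate):
    """Asymmetric flipping (sketch): pair up classes and corrupt only the
    first class of each pair toward its partner, so half of the classes
    are noisy and the overall noise rate is noise_rate / 2."""
    Q = [[0.0] * num_classes for _ in range(num_classes)]
    for i in range(num_classes):
        Q[i][i] = 1.0  # clean classes keep their labels
    for i in range(0, num_classes - 1, 2):
        Q[i][i] = 1.0 - noise_rate     # noisy class of the pair
        Q[i][i + 1] = noise_rate       # flips to its 'similar' partner
    return Q
```

Each row of either matrix is a valid distribution over noisy labels, and for `asymmetric_Q(10, 0.4)` the average off-diagonal mass over all rows is 0.2, matching the overall noise rate described above.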
For experiments on Clothing1M, we use the 1M images with noisy labels for training and the 14k and 10k clean data for validation and test, respectively. Note that we do not use the 50k clean training data in any of the experiments because only noisy labels are required during the training process [20, 35]. For preprocessing, we resize each image to 256×256, crop the middle 224×224 as input, and perform normalization.
Baselines. We compare JoCoR (Algorithm 1) with the following state-of-the-art algorithms. We implement all methods with default parameters in PyTorch and conduct all experiments on an NVIDIA Tesla V100 GPU.
Co-teaching+, which trains two deep neural networks and consists of a disagreement-update step and a cross-update step.
Co-teaching, which trains two networks simultaneously and cross-updates the parameters of the peer networks.
Decoupling, which updates the parameters using only instances on which the predictions of the two classifiers differ.
F-correction, which corrects the predictions by the label transition matrix. As suggested by the authors, we first train a standard network to estimate the transition matrix Q.
As a simple baseline, we compare JoCoR with the standard deep network that directly trains on noisy datasets (abbreviated as Standard).
Network Structure and Optimizer. We use a 2-layer MLP for MNIST and a 7-layer CNN for CIFAR-10 and CIFAR-100; the detailed information can be found in the supplementary materials. For Clothing1M, we use an 18-layer ResNet.
For experiments on MNIST, CIFAR-10 and CIFAR-100, the Adam optimizer (momentum = 0.9) is used with an initial learning rate of 0.001, and the batch size is set to 128. We run 200 epochs in total and linearly decay the learning rate to zero from epoch 80 to epoch 200.
For experiments on Clothing1M, we also use the Adam optimizer (momentum = 0.9) and set the batch size to 64. During the training stage, we run 15 epochs in total and decay the learning rate in three stages of 5 epochs each.
As for $\lambda$ in our loss function (1), we search it in [0.05, 0.10, 0.15, ..., 0.95] with a clean validation set for the best performance. When the validation set also contains noisy labels, we use small-loss selection to choose a clean subset for validation. As deep networks are highly nonconvex, even with the same network and optimization method, different initializations can lead to different local optima. Thus, following Decoupling, we also take two networks with the same architecture but different initializations as the two classifiers.
Measurement. To measure the performance, we use the test accuracy, i.e., test accuracy = (# of correct predictions) / (# of test examples). Besides, we also use the label precision in each mini-batch, i.e., label precision = (# of clean labels) / (# of all selected labels). Specifically, we sample a fraction $R(T)$ of small-loss instances in each mini-batch and then calculate the ratio of clean labels among them. Intuitively, higher label precision means fewer noisy instances in the mini-batch after sample selection, so an algorithm with higher label precision is also more robust to label noise. All experiments are repeated five times. The error bar for the standard deviation in each figure is shown as a shaded area.
Selection setting. Following Co-teaching, we assume that the noise rate $\tau$ is known. To conduct a fair comparison on the benchmark datasets, we set the ratio of small-loss samples identically: $R(T) = 1 - \min\{\frac{T}{T_k}\tau, \tau\}$, where $T_k = 10$ for MNIST, CIFAR-10 and CIFAR-100, and $T_k = 5$ for Clothing1M. If $\tau$ is not known in advance, it can be inferred using validation sets [21, 42].
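The schedule above can be sketched as a one-line function (the function and argument names are ours): it keeps all examples at epoch 0 and linearly shrinks the kept fraction to $1 - \tau$ by epoch $T_k$, after which it stays constant.

```python
def remember_rate(epoch, tau, t_k=10):
    """R(T) = 1 - min(T / T_k * tau, tau): start by keeping the whole
    mini-batch, then linearly reduce the kept fraction to 1 - tau
    over the first t_k epochs."""
    return 1.0 - min(epoch / t_k * tau, tau)
```

For instance, with a noise rate of 0.5 and the default `t_k=10`, the kept fraction moves from 1.0 at epoch 0 down to 0.5 at epoch 10 and remains 0.5 thereafter.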
4.2 Comparison with the State-of-the-Arts
Results on MNIST. The top of Figure 3 shows test accuracy vs. epochs on MNIST. In all four plots, we can see the memorization effect of networks, i.e., the test accuracy of Standard first reaches a very high level and then gradually decreases. Thus, a good robust training method should stop or alleviate this decreasing process. On this point, JoCoR consistently achieves higher accuracy than all the other baselines in all four cases.
We can compare the test accuracy of different algorithms in detail in Table 2. In the most natural Symmetry-20% case, all new approaches obviously work better than Standard, which demonstrates their robustness. Among them, JoCoR and Co-teaching+ work significantly better than the other methods. In the Symmetry-50% and Asymmetry-40% cases, Decoupling begins to fail while the other methods still work fine, especially JoCoR and Co-teaching+. However, Co-teaching+ cannot combat the hardest Symmetry-80% case, where it only achieves 58.92%. In this case, JoCoR achieves the best average classification accuracy (84.89%) again.
To explain such excellent performance, we plot label precision vs. epochs at the bottom of Figure 3. Only Decoupling, Co-teaching, Co-teaching+ and JoCoR are considered here, as they include example selection during training. First, we can see that both JoCoR and Co-teaching can successfully pick clean instances out. Note that JoCoR not only reaches high label precision in all four cases but also performs better and better as epochs increase, while Co-teaching declines gradually after reaching its peak. This shows that our approach is better at finding clean instances. By contrast, Decoupling and Co-teaching+ fail to select clean examples. As mentioned in Related Work, very few examples are utilized by Co-teaching+ in the training process when the noise rate is extremely high, which explains why Co-teaching+ performs poorly on the hardest case.
Results on CIFAR-10. Table 3 shows test accuracy on CIFAR-10. As we can see, JoCoR performs the best in all four cases again. In the Symmetric-20% case, JoCoR works much better than all other baselines and Co-teaching+ performs better than Co-teaching and Decoupling. In the other three cases, JoCoR is still the best and Co-teaching+ cannot even achieve comparable performance with Co-teaching.
Figure 5 shows test accuracy and label precision vs. epochs. JoCoR outperforms all the other approaches on both test accuracy and label precision. On label precision, while Decoupling and Co-teaching+ fail to find clean instances, both JoCoR and Co-teaching can do so. An interesting phenomenon is that in the Asymmetry-40% case, although Co-teaching achieves better performance than JoCoR in the first 100 epochs, JoCoR consistently outperforms it in all the later epochs. This result shows that JoCoR has better generalization ability than Co-teaching.
Results on CIFAR-100. Then, we show our results on CIFAR-100. The test accuracy is shown in Table 4; test accuracy and label precision vs. epochs are shown in Figure 6. Note that while MNIST and CIFAR-10 have only 10 classes, CIFAR-100 has 100, so the overall accuracy is much lower than in Tables 2 and 3. Nevertheless, JoCoR still achieves high test accuracy on this dataset. In the easier Symmetry-20% and Symmetry-50% cases, JoCoR works significantly better than Co-teaching+, Co-teaching and the other methods. In the hardest Symmetry-80% case, JoCoR and Co-teaching are close, but JoCoR still attains higher test accuracy. In the Asymmetry-40% case, JoCoR and Co-teaching+ perform much better than the other methods. On label precision, JoCoR keeps the best performance in all four cases.
Results on Clothing1M. Finally, we demonstrate the efficacy of the proposed method on real-world noisy labels using the Clothing1M dataset. In Table 5, best denotes the scores of the epoch where the validation accuracy is optimal, and last denotes the scores at the end of training. The proposed JoCoR method achieves better results than the state-of-the-art methods on best. After all epochs, JoCoR achieves a significant accuracy improvement of +5.11 over Standard and an improvement of +1.28 over the best baseline method.
4.3 Ablation Study
To conduct an ablation study analyzing the effect of Co-Regularization, we set up experiments on MNIST and CIFAR-10 with Symmetry-50% noise. To implement Joint Training without Co-Regularization (Joint-only), we set $\lambda$ in (1) to 0. Besides, to verify the effect of the Joint Training paradigm, we include Co-teaching and Standard enhanced by small-loss selection (Standard+) in the comparison. Recall that the joint-training method selects examples by the joint loss while Co-teaching uses the cross-update method to reduce the error flow; according to the previous analysis, these two methods should play a similar role during training.
The test accuracy and label precision vs. epochs on MNIST are shown in Figure 7. As we can see, JoCoR performs much better than the others on both test accuracy and label precision. JoCoR's curves show almost no decrease, while those of the other methods decline considerably after reaching their peaks. This observation indicates that Co-Regularization strongly hinders neural networks from memorizing noisy labels.
The test accuracy and label precision vs. epochs on CIFAR-10 are shown in Figure 8. In this figure, JoCoR still maintains a large advantage over the other three methods on both test accuracy and label precision, while Joint-only, Co-teaching and Standard+ follow the same trend as on MNIST, turning downward after reaching their highest points. These results show that Co-Regularization plays a vital role in handling noisy labels. Moreover, Joint-only achieves performance comparable to Co-teaching on test accuracy and performs better than Co-teaching and Standard+ on label precision. This shows that Joint Training is a more efficient paradigm for selecting clean examples than the cross-update in Co-teaching.
5 Conclusion

This paper proposes an effective approach called JoCoR to improve the robustness of deep neural networks trained with noisy labels. The key idea of JoCoR is to train two classifiers simultaneously with one joint loss, which is composed of a regular supervised part and a Co-Regularized part. Similar to Co-teaching+, we also use the joint loss to select small-loss instances for updating the networks in each mini-batch. We conduct experiments on MNIST, CIFAR-10, CIFAR-100 and Clothing1M to demonstrate that JoCoR can train deep models robustly under both slightly and extremely noisy supervision. Furthermore, the ablation studies clearly demonstrate the effectiveness of Co-Regularization and Joint Training. In future work, we will explore the theoretical foundation of JoCoR from the view of traditional Co-training algorithms [19, 34].
Acknowledgments This research is supported by Singapore National Research Foundation projects AISG-RP-2019-0013, NSOE-TSS2019-01, and NTU.
Appendix A Dataset
The detailed characteristics of these datasets are shown in Table 6.
| Dataset | # of training | # of test | # of classes | size |
|---|---|---|---|---|
| MNIST | 60,000 | 10,000 | 10 | 28×28 |
| CIFAR-10 | 50,000 | 10,000 | 10 | 32×32 |
| CIFAR-100 | 50,000 | 10,000 | 100 | 32×32 |
| Clothing1M | 1,000,000 | 10,000 | 14 | 224×224 |
Appendix B Network Architecture
The network architectures of the MLP and CNN models are shown in Table 7.
The MLP on MNIST takes gray images as input and consists of dense layers with ReLU activations; the CNN on CIFAR-10 and CIFAR-100 takes RGB images as input.
Appendix C Parameter Sensitivity Analysis
To conduct the sensitivity analysis on the parameter $\lambda$, we set up experiments on MNIST with Symmetry-50% noise. Specifically, we compare $\lambda$ in the range [0.05, 0.35, 0.65, 0.95]. The larger $\lambda$ is, the less the two classifiers in JoCoR diverge.
The test accuracy and label precision vs. number of epochs are shown in Figure 9. As $\lambda$ increases, the test accuracy of our algorithm gets better and better, and JoCoR achieves the best performance with the largest $\lambda$ in the range. We can see the same trend on label precision, which means that JoCoR can select clean examples more precisely with a larger $\lambda$.
- (2017) A closer look at memorization in deep networks. In Proceedings of the 34th International Conference on Machine Learning, pp. 233–242.
- (2018) Classification from pairwise similarity and unlabeled data. In International Conference on Machine Learning, pp. 452–461.
- (2003) Noise-tolerant learning, the parity problem, and the statistical query model. Journal of the ACM 50 (4), pp. 506–519.
- (1998) Combining labeled and unlabeled data with co-training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pp. 92–100.
- (2006) Semi-supervised learning. MIT Press.
- (2014) Analysis of learning from positive and unlabeled data. In Advances in Neural Information Processing Systems, pp. 703–711.
- (2018) Leveraging latent label distributions for partial label learning. In International Joint Conferences on Artificial Intelligence, pp. 2107–2113.
- (2019) Partial label learning by semantic difference maximization. In International Joint Conferences on Artificial Intelligence, pp. 2294–2300.
- (2019) Partial label learning with self-guided retraining. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 3542–3549.
- (2016) Training deep neural networks using a noise adaptation layer. In Proceedings of the 5th International Conference on Learning Representations.
- (2017) Learning with inadequate and incorrect supervision. In 2017 IEEE International Conference on Data Mining, pp. 889–894.
- (2018) Co-teaching: robust training of deep neural networks with extremely noisy labels. In Advances in Neural Information Processing Systems, pp. 8527–8537.
- (2019) Deep self-learning from noisy labels. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5138–5147.
- (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
- (2018) Binary classification for positive-confidence data. In Advances in Neural Information Processing Systems, pp. 5917–5928.
- (2017) MentorNet: learning data-driven curriculum for very deep neural networks on corrupted labels. arXiv preprint arXiv:1712.05055.
- (2019) NLNL: negative learning for noisy labels. In Proceedings of the IEEE International Conference on Computer Vision, pp. 101–110.
- (2017) Positive-unlabeled learning with non-negative risk estimator. In Advances in Neural Information Processing Systems, pp. 1675–1685.
- (2010) Co-regularization based semi-supervised domain adaptation. In Advances in Neural Information Processing Systems, pp. 478–486.
- (2019) Learning to learn from noisy labeled data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5051–5059.
- (2015) Classification with noisy labels by importance reweighting. IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (3), pp. 447–461.
- (2019) On the minimal supervision for training any binary classifier from only unlabeled data. In Proceedings of the International Conference on Learning Representations.
- (2017) Decoupling “when to update” from “how to update”. In Advances in Neural Information Processing Systems, pp. 960–970.
- (2015) Learning from corrupted binary labels via class-probability estimation. In International Conference on Machine Learning, pp. 125–134.
- (2013) Learning with noisy labels. In Advances in Neural Information Processing Systems, pp. 1196–1204.
- (2016) Theoretical comparisons of positive-unlabeled learning against positive-negative learning. In Advances in Neural Information Processing Systems, pp. 1199–1207.
- (2013) Squared-loss mutual information regularization: a novel information-theoretic approach to semi-supervised learning. In International Conference on Machine Learning, pp. 10–18.
- (2017) Making deep neural networks robust to label noise: a loss correction approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1944–1952.
- (2014) Training deep neural networks on noisy labels with bootstrapping. arXiv preprint arXiv:1412.6596.
- (2018) Learning to reweight examples for robust deep learning. arXiv preprint arXiv:1803.09050.
- (2017) Semi-supervised classification based on classification from positive and unlabeled data. In International Conference on Machine Learning, pp. 2998–3006.
- (2014) Class proportion estimation with application to multiclass anomaly rejection. In Artificial Intelligence and Statistics, pp. 850–858.
- (2016) Training region-based object detectors with online hard example mining. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 761–769.
- (2005) A co-regularization approach to semi-supervised learning with multiple views. In Proceedings of ICML Workshop on Learning with Multiple Views, pp. 74–79.
- (2018) Joint optimization framework for learning with noisy labels. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5552–5560. Cited by: §2, §4.1.
- (2015) Learning with symmetric label noise: the importance of being unhinged. In Advances in Neural Information Processing Systems, pp. 10–18. Cited by: §4.1.
- (2019) Are anchor points really indispensable in label-noise learning? In Advances in Neural Information Processing Systems, pp. 6835–6846. Cited by: §2.
- (2015) Learning from massive noisy labeled data for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2691–2699. Cited by: §4.1.
- (2014) Learning from multiple annotators with varying expertise. Machine Learning 95 (3), pp. 291–327. Cited by: §1.
- (2019) Probabilistic end-to-end noise correction for learning with noisy labels. arXiv preprint arXiv:1903.07788. Cited by: §2, §4.1.
- (2019) How does disagreement benefit co-teaching? arXiv preprint arXiv:1901.04215. Cited by: §1, Figure 1, §2, item i.
- (2018) An efficient and provable approach for mixture proportion estimation using linear independence assumption. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4480–4489. Cited by: §4.1.
- (2018) Learning with biased complementary labels. In Proceedings of the European Conference on Computer Vision, pp. 68–83. Cited by: §1.
- (2016) Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530. Cited by: §1, §1, §3.
- (2018) Deep mutual learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4320–4328. Cited by: §1, §3.
- (2018) Generalized cross entropy loss for training deep neural networks with noisy labels. In Advances in Neural Information Processing Systems, pp. 8778–8788. Cited by: §2.
- (2018) A brief introduction to weakly supervised learning. National Science Review 5 (1), pp. 44–53. Cited by: §2.