Does label smoothing mitigate label noise?
Abstract
Label smoothing is commonly used in training deep learning models, wherein one-hot training labels are mixed with uniform label vectors. Empirically, smoothing has been shown to improve both predictive performance and model calibration. In this paper, we study whether label smoothing is also effective as a means of coping with label noise. While label smoothing apparently amplifies this problem — being equivalent to injecting symmetric noise into the labels — we show how it relates to a general family of loss-correction techniques from the label noise literature. Building on this connection, we show that label smoothing is competitive with loss-correction under label noise. Further, we show that when distilling models from noisy data, label smoothing of the teacher is beneficial; this is in contrast to recent findings for noise-free problems, and sheds further light on settings where label smoothing is beneficial.
1 Introduction
Label smoothing is commonly used to improve the performance of deep learning models (Szegedy et al., 2016; Chorowski and Jaitly, 2017; Vaswani et al., 2017; Zoph et al., 2018; Real et al., 2018; Huang et al., 2019; Li et al., 2020). Rather than standard training with one-hot labels, label smoothing prescribes using smoothed labels, obtained by mixing in a uniform label vector. This procedure is generally understood as a means of regularisation (Szegedy et al., 2016; Zhang et al., 2018) that improves generalisation and model calibration (Pereyra et al., 2017; Müller et al., 2019).
How does label smoothing affect the robustness of deep networks? Such robustness is desirable when learning from data subject to label noise (Angluin and Laird, 1988). Modern deep networks can perfectly fit such noisy labels (Zhang et al., 2017). Can label smoothing address this problem? Interestingly, there are two competing intuitions. On the one hand, smoothing might mitigate the problem, as it prevents overconfidence on any one example. On the other hand, smoothing might accentuate the problem, as it is equivalent to injecting uniform noise into all labels (Xie et al., 2016).
Which of these intuitions is borne out in practice? A systematic study of this question is, to our knowledge, lacking. Indeed, label smoothing is conspicuously absent in most treatments of the noisy label problem (Patrini et al., 2016; Han et al., 2018b; Charoenphakdee et al., 2019; Thulasidasan et al., 2019; Amid et al., 2019). Intriguingly, however, a cursory inspection of popular loss correction techniques in this literature (Natarajan et al., 2013; Patrini et al., 2017; van Rooyen and Williamson, 2018) reveals a strong similarity to label smoothing (see §3). But what is the precise relationship between these methods, and does it imply label smoothing is a viable denoising technique?
In this paper, we address these questions by first connecting label smoothing to existing label noise techniques. At first glance, this connection indicates that smoothing has an opposite effect to one such loss-correction technique. However, we empirically show that smoothing is competitive with such techniques in denoising, and that it improves the performance of distillation (Hinton et al., 2015) under label noise. We then explain its denoising ability by analysing smoothing as a regulariser. In sum, our contributions are:

(i) we relate label smoothing to loss correction techniques from the label noise literature, casting both as instances of a common label smearing framework;

(ii) we empirically demonstrate that label smoothing significantly improves performance under label noise, which we explain by relating smoothing to regularisation;

(iii) we show that when distilling from noisy labels, smoothing the teacher improves the student; this is in marked contrast to recent findings in noise-free settings.
Contributions (i) and (ii) establish that label smoothing can be beneficial under noise, and also highlight that a regularisation view can complement a loss view, the latter being more popular in the noise literature (Patrini et al., 2017). Contribution (iii) continues a line of exploration initiated in Müller et al. (2019) as to the relationship between teacher accuracy and student performance. While Müller et al. (2019) established that label smoothing can harm distillation, we show an opposite picture in noisy settings.
2 Background and notation
We present some background on (noisy) multi-class classification, label smoothing, and knowledge distillation.
2.1 Multi-class classification
In multi-class classification, we seek to classify instances $x \in \mathcal{X}$ into one of $L$ labels $y \in [L] = \{1, 2, \ldots, L\}$. More precisely, suppose instances and labels are drawn from a distribution $D$ over $\mathcal{X} \times [L]$. Let $\ell\colon [L] \times \mathbb{R}^L \to \mathbb{R}$ be a loss function, where $\ell(y, f(x))$ is the penalty for predicting scores $f(x) \in \mathbb{R}^L$ given true label $y$. We seek a predictor $f\colon \mathcal{X} \to \mathbb{R}^L$ minimising the risk of $f$, i.e., its expected loss under $D$:
$$R(f) = \mathbb{E}_{(x, y) \sim D}\left[\ell(y, f(x))\right] = \mathbb{E}_{x}\left[p(x)^\top \ell(f(x))\right],$$
where $p(x) \in \Delta_L$ is the class-probability distribution with $p_y(x) = \mathbb{P}(y \mid x)$, and $\ell(f(x)) = (\ell(1, f(x)), \ldots, \ell(L, f(x)))$ is the vector of per-label losses. Canonically, $\ell$ is the softmax cross-entropy, $\ell(y, f(x)) = -f_y(x) + \log \sum_{y' \in [L]} e^{f_{y'}(x)}$.
Given a finite training sample $S = \{(x_n, y_n)\}_{n=1}^{N}$, one can minimise the empirical risk
$$\hat{R}(f) = \frac{1}{N} \sum_{n=1}^{N} (e^{y_n})^\top \ell(f(x_n)),$$
where $e^y$ denotes the one-hot encoding of label $y$.
In label smoothing (Szegedy et al., 2016), one mixes the training labels with a uniform mixture over all possible labels: for $\alpha \in [0, 1]$, this corresponds to minimising
$$\hat{R}^{\mathrm{LS}}(f) = \frac{1}{N} \sum_{n=1}^{N} (\bar{e}^{\,y_n})^\top \ell(f(x_n)), \qquad (1)$$
where $\bar{e}^{\,y} = (1-\alpha)\cdot e^{y} + \frac{\alpha}{L}\cdot\mathbf{1}$.
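For concreteness, the smoothed objective (1) can be sketched in a few lines of NumPy (an illustrative sketch; the function names are ours, not from any library):

```python
import numpy as np

def softmax_cross_entropy(logits, label_dist):
    """Cross-entropy between a target distribution and softmax(logits)."""
    z = logits - logits.max(axis=-1, keepdims=True)   # stabilise the log-sum-exp
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return -(label_dist * log_probs).sum(axis=-1)

def smoothed_labels(y, num_classes, alpha):
    """Mix one-hot labels with the uniform distribution: (1 - a) e^y + (a / L) 1."""
    one_hot = np.eye(num_classes)[y]
    return (1.0 - alpha) * one_hot + alpha / num_classes

y = np.array([0, 2, 1])
logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 0.2, 1.5],
                   [1.0, 1.0, 1.0]])
loss = softmax_cross_entropy(logits, smoothed_labels(y, 3, alpha=0.1)).mean()
```

Since the cross-entropy is linear in the target distribution, the smoothed loss equals $(1-\alpha)$ times the one-hot loss plus $\alpha$ times the loss against the uniform distribution.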
2.2 Learning under label noise
The label noise problem is the setting where one observes samples from some distribution $\bar{D}$ with $\bar{D} \neq D$; i.e., the observed labels are not reflective of the ground truth (Angluin and Laird, 1988; Scott et al., 2013). Our goal is to nonetheless minimise the risk on the (unobserved) clean distribution $D$. This poses a challenge to deep neural networks, which can fit completely arbitrary labels (Zhang et al., 2017).
A common means of coping with noise is to posit a noise model, and design robust procedures under this model. One simple model is class-conditional noise (Blum and Mitchell, 1998; Scott et al., 2013; Natarajan et al., 2013), wherein there is a row-stochastic noise transition matrix $T \in [0, 1]^{L \times L}$ such that each label $y$ may be flipped to $y'$ with probability $T_{y y'}$. Formally, if $p(x)$ and $\bar{p}(x)$ are the clean and noisy class-probabilities respectively, we have
$$\bar{p}(x) = T^\top p(x). \qquad (2)$$
The symmetric noise model further assumes that there is a constant flip probability $\alpha \in (0, 1)$ of the label being replaced with a uniformly drawn label (Long and Servedio, 2010; van Rooyen et al., 2015), i.e.,
$$T = (1-\alpha)\cdot\mathbf{I} + \frac{\alpha}{L}\cdot\mathbf{J}, \qquad (3)$$
where $\mathbf{I}$ denotes the identity and $\mathbf{J}$ the all-ones matrix.
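As an illustration, the transition matrix (3) and the corresponding noise injection can be sketched as follows (our NumPy sketch; function names are ours):

```python
import numpy as np

def symmetric_noise_matrix(num_classes, alpha):
    """T = (1 - alpha) I + (alpha / L) J, per (3)."""
    L = num_classes
    return (1.0 - alpha) * np.eye(L) + (alpha / L) * np.ones((L, L))

def inject_noise(labels, T, rng):
    """Resample each observed label from the row of T indexed by the clean label."""
    L = T.shape[0]
    return np.array([rng.choice(L, p=T[y]) for y in labels])

rng = np.random.default_rng(0)
T = symmetric_noise_matrix(10, alpha=0.2)
noisy_labels = inject_noise(rng.integers(0, 10, size=1000), T, rng)
```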
While there are several approaches to coping with noise, our interest will be in the family of loss correction techniques: assuming one has knowledge (or estimates) of the noise-transition matrix $T$, such techniques yield consistent risk minimisers with respect to the clean distribution. Patrini et al. (2017) proposed two such techniques, termed backward and forward correction, which respectively involve the losses
$$\ell^{\leftarrow}(y, f(x)) = (e^y)^\top\, T^{-1}\, \ell(f(x)), \qquad (4)$$
$$\ell^{\rightarrow}(y, f(x)) = -\log\big(T^\top \operatorname{softmax}(f(x))\big)_y. \qquad (5)$$
Observe that for a given label $y$, $\ell^{\leftarrow}$ computes a weighted sum of losses over all labels $y'$, while $\ell^{\rightarrow}$ computes a weighted sum of predicted probabilities over all $y'$.
Backward correction was inspired by techniques in Natarajan et al. (2013); Cid-Sueiro et al. (2014); van Rooyen and Williamson (2018), and results in an unbiased estimate of the risk with respect to the clean distribution $D$. Recent works have studied robust estimation of the matrix $T$ from noisy data alone (Patrini et al., 2017; Han et al., 2018b; Xia et al., 2019). Forward correction was inspired by techniques in Reed et al. (2014); Sukhbaatar et al. (2015), and does not result in an unbiased risk estimate. However, it preserves the Bayes-optimal minimiser, and is empirically effective (Patrini et al., 2017).
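A minimal sketch of both corrections for the softmax cross-entropy (ours, assuming a known $T$) makes the contrast concrete; in particular, the backward-corrected loss can go negative on confidently correct predictions:

```python
import numpy as np

def loss_vector(logits):
    """Per-label softmax cross-entropy losses: loss_vector(f)[y] = -f_y + logsumexp(f)."""
    z = logits - logits.max()
    return -(z - np.log(np.exp(z).sum()))

def backward_corrected_loss(y, logits, T):
    """(4): weight the per-label losses by the y-th row of T^{-1}."""
    return np.linalg.inv(T)[y] @ loss_vector(logits)

def forward_corrected_loss(y, logits, T):
    """(5): cross-entropy of the noise-adjusted prediction T^T softmax(f)."""
    z = np.exp(logits - logits.max())
    probs = z / z.sum()
    return -np.log((T.T @ probs)[y])

# Under symmetric noise, a confidently correct prediction yields a
# negative backward-corrected loss:
T = 0.7 * np.eye(3) + 0.1 * np.ones((3, 3))     # alpha = 0.3, L = 3
confident = np.array([10.0, -10.0, -10.0])
bc = backward_corrected_loss(0, confident, T)
```

With $T = \mathbf{I}$ (no noise), both corrections reduce to the plain loss.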
2.3 Knowledge distillation
Knowledge distillation (Bucilǎ et al., 2006; Hinton et al., 2015) refers to the following recipe: given a training sample $S$, one trains a teacher model using a loss function suitable for estimating class-probabilities, e.g., the softmax cross-entropy. This produces a class-probability estimator $p^t\colon \mathcal{X} \to \Delta_L$, where $\Delta_L$ denotes the probability simplex. One then uses $p^t$ to train a student model, e.g., using the cross-entropy (Hinton et al., 2015) or square loss (Sanh et al., 2019) against the teacher's predictions as an objective. The key advantage of distillation is that the resulting student has improved performance compared to simply training the student on the labels in $S$.
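The recipe can be summarised in a short sketch (ours; plain NumPy, with the temperature-scaled soft targets of Hinton et al. (2015)):

```python
import numpy as np

def distillation_targets(teacher_logits, temperature=1.0):
    """Teacher class-probability estimates, optionally softened by a temperature."""
    z = teacher_logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def student_loss(student_logits, soft_targets):
    """Cross-entropy of the student's softmax against the teacher's soft targets."""
    z = student_logits - student_logits.max(axis=-1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return -(soft_targets * log_p).sum(axis=-1).mean()
```

A higher temperature flattens the teacher's distribution, exposing more of the relative information between non-argmax logits.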
3 Label smoothing meets loss correction
We now relate label smoothing to loss correction techniques for label noise via a label smearing framework.
3.1 Label smearing for loss functions
Suppose we have some base loss of interest, e.g., the softmax cross-entropy. Recall that we summarise the loss via the vector $\ell(f(x)) = (\ell(1, f(x)), \ldots, \ell(L, f(x)))$ of per-label losses. The loss on an example $(x, y)$ is $(e^y)^\top \ell(f(x))$, for one-hot vector $e^y$.
Consider now the following generalisation, which we term label smearing: given a matrix $M \in \mathbb{R}^{L \times L}$, we compute the smeared loss vector
$$\ell^{\mathrm{sm}}(f(x)) = M\,\ell(f(x)).$$
On an example $(x, y)$, the smeared loss is given by
$$\ell^{\mathrm{sm}}(y, f(x)) = (e^y)^\top M\,\ell(f(x)) = \sum_{y' \in [L]} M_{y y'}\,\ell(y', f(x)).$$
Compared to the standard loss, we now potentially involve all possible labels, scaled appropriately by the entries of $M$.
3.2 Special cases of label smearing
The label smearing framework captures many interesting approaches as special cases (see Table 1):

Standard training. Suppose that $M = \mathbf{I}$, for identity matrix $\mathbf{I}$. This trivially corresponds to standard training.

Label smoothing. Suppose that $M = (1-\alpha)\cdot\mathbf{I} + \frac{\alpha}{L}\cdot\mathbf{J}$, where $\mathbf{J}$ is the all-ones matrix, and $\alpha \in [0, 1]$ is a tuning parameter. This corresponds to mixing the true label with a uniform distribution over all the classes, which is precisely label smoothing per (1).

Backward correction. Suppose that $M = T^{-1}$, where $T$ is a class-conditional noise transition matrix. This corresponds to the backward correction procedure of Patrini et al. (2017). Here, the entries of $M$ may be negative; indeed, for symmetric noise (3), $M = \frac{1}{1-\alpha}\left(\mathbf{I} - \frac{\alpha}{L}\mathbf{J}\right)$, whose off-diagonal entries are negative. While this poses optimisation problems, recent works have studied means of correcting this (Kiryo et al., 2017; Han et al., 2018a).
The above techniques have been developed with different motivations. By casting them in a common framework, we can elucidate some of their shared properties.
Method | Smearing matrix $M$
Standard | $\mathbf{I}$
Label smoothing | $(1-\alpha)\cdot\mathbf{I} + \frac{\alpha}{L}\cdot\mathbf{J}$
Backward correction | $T^{-1}$, e.g. $\frac{1}{1-\alpha}\left(\mathbf{I} - \frac{\alpha}{L}\mathbf{J}\right)$ for symmetric noise
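The smearing matrices above can be instantiated and checked numerically (our sketch; for symmetric noise, the backward matrix is the inverse of the smoothing matrix):

```python
import numpy as np

L, alpha = 4, 0.2
I, J = np.eye(L), np.ones((L, L))

M_standard = I
M_smooth = (1 - alpha) * I + (alpha / L) * J    # label smoothing
T = (1 - alpha) * I + (alpha / L) * J           # symmetric noise, per (3)
M_backward = np.linalg.inv(T)                   # backward correction

per_label = np.array([2.0, 0.1, 0.5, 1.2])      # loss vector for one example
y = 0
smooth_loss = M_smooth[y] @ per_label
backward_loss = M_backward[y] @ per_label
```

The smoothed loss mixes the example's own loss with the average per-class loss by addition, while the backward-corrected loss subtracts it, since $M_{\text{backward}} = \frac{1}{1-\alpha}(\mathbf{I} - \frac{\alpha}{L}\mathbf{J})$.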
3.3 Statistical consistency of label smearing
Recall that our fundamental goal is to devise a procedure that can approximately minimise the population risk $R(f)$ on the clean distribution. Given this, it behooves us to understand the effect of label smearing on this risk. As we shall explicate, label smearing:

(i) is equivalent to fitting to a modified distribution;

(ii) preserves classification consistency for suitable $M$.
For (i), observe that the smeared loss has corresponding risk
$$R^{\mathrm{sm}}(f) = \mathbb{E}_{x}\left[p(x)^\top M\,\ell(f(x))\right] = \mathbb{E}_{x}\left[(M^\top p(x))^\top \ell(f(x))\right].$$
Consequently, minimising a smeared loss is equivalent to minimising the original loss on a smeared distribution with class-probabilities $p^{\mathrm{sm}}(x) = M^\top p(x)$.
For example, under label smoothing, we fit to the class-probabilities $p^{\mathrm{sm}}(x) = (1-\alpha)\cdot p(x) + \frac{\alpha}{L}\cdot\mathbf{1}$. This corresponds to a scaling and translation of the original. This trivially preserves the label with maximal probability, provided $\alpha < 1$. Smoothing is thus consistent for classification, i.e., minimising its risk also minimises the classification risk (Zhang, 2004; Bartlett et al., 2006).
Now consider backward correction with $M = T^{-1}$. Suppose this is applied to a distribution with class-conditional label noise governed by transition matrix $T$. Then, we will fit to the class-probabilities $(T^{-1})^\top \bar{p}(x)$. By (2), these will exactly equal the clean probabilities $p(x)$; i.e., backward correction will effectively denoise the labels.
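This denoising identity is easy to verify numerically (our sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
L, alpha = 5, 0.3
T = (1 - alpha) * np.eye(L) + (alpha / L) * np.ones((L, L))  # symmetric noise, per (3)

p = rng.dirichlet(np.ones(L))                 # clean class-probabilities p(x)
p_noisy = T.T @ p                             # noisy class-probabilities, per (2)
p_recovered = np.linalg.inv(T).T @ p_noisy    # what backward correction fits
```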
3.4 How does label smoothing relate to loss correction?
Following Table 1, one cannot help but notice a strong similarity between label smoothing and backward correction for symmetric noise. Both methods combine an identity matrix with an all-ones matrix; the striking difference, however, is that this combination is via addition in one, but subtraction in the other. This results in losses with very different forms:
$$\ell^{\mathrm{LS}}(y, f(x)) = (1-\alpha)\cdot\ell(y, f(x)) + \alpha\cdot\bar{\ell}(f(x)), \qquad (6)$$
$$\ell^{\mathrm{BC}}(y, f(x)) = \frac{1}{1-\alpha}\cdot\left[\ell(y, f(x)) - \alpha\cdot\bar{\ell}(f(x))\right],$$
where $\bar{\ell}(f(x)) = \frac{1}{L}\sum_{y' \in [L]} \ell(y', f(x))$ denotes the average per-class loss.
Fundamentally, the effect of the two techniques is different: smoothing aims to minimise the average per-class loss $\bar{\ell}$, while backward correction seeks to maximise this. Figure 1 visualises the effect on the losses when $L = 2$ and $\ell$ is the logistic loss. Intriguingly, the smoothed loss is seen to penalise confident predictions. On the other hand, backward correction allows one to compensate for overly confident negative predictions by allowing for a negative loss on positive samples that are correctly predicted.
Label smoothing also relates to forward correction: recall that here, we compute the loss $\ell^{\rightarrow}(y, f(x)) = -\log\big(T^\top \operatorname{softmax}(f(x))\big)_y$. Compared to label smoothing, forward correction thus performs smoothing of the predicted probabilities rather than the labels. As shown in Figure 1, the effect is that the loss becomes bounded for all predictions.
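This boundedness is easy to see numerically (our sketch, for symmetric noise): since $(T^\top \operatorname{softmax}(f(x)))_y \ge \alpha / L$, the forward-corrected loss can never exceed $-\log(\alpha/L)$, whereas the plain cross-entropy grows without bound:

```python
import numpy as np

alpha, L = 0.2, 2
T = (1 - alpha) * np.eye(L) + (alpha / L) * np.ones((L, L))

def plain_loss(y, logits):
    z = logits - logits.max()
    return -(z[y] - np.log(np.exp(z).sum()))

def forward_loss(y, logits):
    z = np.exp(logits - logits.max())
    return -np.log((T.T @ (z / z.sum()))[y])

# A grossly wrong, highly confident prediction (true label 0, predicted 1):
bad_logits = np.array([-20.0, 20.0])
plain = plain_loss(0, bad_logits)      # roughly the logit gap: unbounded
bounded = forward_loss(0, bad_logits)  # capped at -log(alpha / L)
```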
At this stage, we return to our original motivating question: can label smoothing mitigate label noise? The above would seem to indicate otherwise: backward correction guarantees an unbiased risk estimate, and yet we have seen smoothing constructs a fundamentally different loss. In the next section, we assess whether this is borne out empirically.
4 Effect of label smoothing on label noise
We now present experimental observations of the effects of label smoothing under label noise. We then provide insights into why smoothing can successfully denoise labels, by viewing smoothing as a form of shrinkage regularisation.
4.1 Denoising effects of label smoothing
We begin by empirically answering the question: can label smoothing successfully mitigate label noise? To study this, we employ smoothing in settings where the training data is artificially injected with symmetric label noise. This follows the convention in the label noise literature (Patrini et al., 2017; Han et al., 2018a; Charoenphakdee et al., 2019).
Specifically, we consider the CIFAR-10, CIFAR-100 and ImageNet datasets, and add symmetric label noise at a fixed level $\rho$ to the training (but not the test) set; i.e., with probability $\rho$ we replace the training label with a uniformly chosen label. On CIFAR-10 and CIFAR-100 we train two different models on this noisy data, ResNet-32 and ResNet-56, with similar hyperparameters as Müller et al. (2019). Each experiment is repeated five times, and we report the mean and standard deviation of the clean test accuracy. On ImageNet we train ResNet-v2-50 with LARS (You et al., 2017). We describe the detailed experimental setup in Appendix B.
As loss functions, our baseline is training with the softmax cross-entropy on the noisy labels. We then employ label smoothing (LS) per (1) for various values of $\alpha$, as well as backward (BC) and forward (FC) correction per (4) and (5), assuming symmetric noise for various values of $\alpha$. We remark here that in the label noise literature, it is customary to estimate the noise rate, the theoretically optimal choice of $\alpha$ being the true noise rate; however, we shall here simply treat $\alpha$ as a tuning parameter akin to the smoothing level, whose effect we shall study.
We now analyse the results along several dimensions.
Accuracy: In Figure 2, we plot the test accuracies of all methods on CIFAR-10 and CIFAR-100. Our first finding is that label smoothing significantly improves accuracy over the baseline. We observe similar denoising effects on ImageNet in Table 2. This confirms that empirically, label smoothing is effective in dealing with label noise.
Our second finding is that, surprisingly, choosing $\alpha$ larger than the true noise rate improves performance of all methods. This is in contrast to the theoretically optimal choice for loss correction approaches (Patrini et al., 2017), and indicates it is valuable to treat $\alpha$ as a tuning parameter.
Finally, we see that label smoothing is often competitive with loss correction. This is despite it minimising a fundamentally different loss to the unbiased backward correction, as discussed in §3.4. We note however that loss correction generally produces the best overall accuracy at high values of $\alpha$.
Table 2: ImageNet test accuracy of ResNet-v2-50 under label noise, for label smoothing (LS) and forward correction (FC) at increasing values of $\alpha$ (left to right).
LS | 70.86 | 71.12 | 71.55 | 70.95 | 70.59
FC | 70.86 | 73.04 | 73.17 | 73.35 | 72.92
Denoising: What explains the effectiveness of label smoothing for training with label noise? Does it correct the predictions on noisy examples, or does it only further improve the predictions on the clean (nonnoisy) examples?
To answer these questions, we separately inspect accuracies on the noisy and clean portions of the training data (i.e., on those samples whose labels are flipped, or not). Table 3 reports this breakdown for the ResNet-32 model on CIFAR-100, for different values of the smoothing parameter $\alpha$. We see that as $\alpha$ increases, accuracy improves on both the noisy and clean parts of the data, with a more significant boost on the noisy part. Consequently, smoothing systematically improves predictions on both clean and noisy samples.
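The breakdown described above corresponds to the following simple computation (our sketch), given a model's training-set predictions together with the clean and observed labels:

```python
import numpy as np

def accuracy_breakdown(preds, clean_labels, observed_labels):
    """Split training accuracy by whether the observed label was flipped."""
    flipped = clean_labels != observed_labels
    return {
        "full_vs_true": np.mean(preds == clean_labels),
        "clean_vs_true": np.mean(preds[~flipped] == clean_labels[~flipped]),
        "noisy_vs_true": np.mean(preds[flipped] == clean_labels[flipped]),
        "noisy_vs_noisy": np.mean(preds[flipped] == observed_labels[flipped]),
    }
```

The last entry measures how often the model memorises the flipped label, which denoising should drive down.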
Model confidence: Predictive accuracy is only concerned with a model ranking the true label ahead of the others. However, the confidence in model predictions is also of interest, particularly since a danger with label noise is being overly confident in predicting a noisy label. How do smoothing and correction methods affect this confidence under noise?
To measure this, in Figure 3 we plot distributions of the differences between the logit activation for the true/noisy label and the average logit activation over all labels. Compared to the baseline, label smoothing significantly reduces confidence in the noisy label (refer to the left side of Figure 3(b)).
To visualise this effect of smoothing, in Figure 4 we plot pre-logits (penultimate layer outputs) of examples from 3 classes projected onto their class vectors as in Müller et al. (2019), for a ResNet-32 trained on CIFAR-100. As we increase $\alpha$, the confidences for noisy labels shrink, showing the denoising effect of label smoothing.
On the other hand, both forward and backward correction systematically increase confidence in predictions. This is especially pronounced for forward correction, demonstrated by the large spike for high differences in Figure 3(b). At the same time, these techniques increase the confidence in predictions of the true label (refer to Figure 3(a)): forward correction in particular becomes much more confident in the true label than any other technique.
In sum, Figure 3 illustrates both positive and adverse effects on confidence from label smearing techniques: label smoothing becomes less confident in both the noisy and correct labels, while forward and backward correction become more confident in both the correct labels and noisy labels.
Table 3: CIFAR-100 training accuracy of ResNet-32 under label smoothing level $\alpha$, measured against the true or noisy labels, on the full, clean, and noisy portions of the training data.
$\alpha$ | Full train (true labels) | Clean part (true labels) | Noisy part (true labels) | Noisy part (noisy labels)
0.0 | 77.39 | 86.75 | 39.92 | 17.88
0.1 | 80.11 | 87.99 | 48.58 | 12.27
0.2 | 81.22 | 88.27 | 53.01 | 8.32
Table 4: Expected calibration error (ECE) on the test set for each method, as a function of $\alpha$.
$\alpha$ | LS | FC | BC
0.0 | 0.111 | 0.111 | 0.111
0.1 | 0.108 | 0.153 | 0.214
0.2 | 0.156 | 0.165 | 0.266
Model calibration: To further tease out the impact of label smearing on model confidences, we ask: how do these techniques affect the calibration of the output probabilities? This measures how meaningful the model probabilities are in a frequentist sense (Dawid, 1982).
In Table 4, we report the expected calibration error (ECE) (Guo et al., 2017) on the test set for each method. While smoothing improves calibration over the baseline for small $\alpha$ — an effect noted also in Müller et al. (2019) — for larger $\alpha$, it becomes significantly worse. Furthermore, loss correction techniques significantly degrade calibration over smoothing. This is in keeping with the above findings as to these methods sharpening prediction confidences.
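For reference, the ECE metric used here bins predictions by confidence and averages the gap between accuracy and confidence within each bin (our sketch of the standard estimator):

```python
import numpy as np

def expected_calibration_error(probs, labels, num_bins=15):
    """ECE (Guo et al., 2017): weighted mean of |accuracy - confidence| over bins."""
    confidence = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    bin_ids = np.minimum((confidence * num_bins).astype(int), num_bins - 1)
    ece = 0.0
    for b in range(num_bins):
        mask = bin_ids == b
        if mask.any():
            accuracy = np.mean(predictions[mask] == labels[mask])
            ece += mask.mean() * abs(accuracy - confidence[mask].mean())
    return ece
```

A perfectly calibrated model attains an ECE of zero; an overconfident one is penalised by the accuracy-confidence gap.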
Summary: Overall, our results demonstrate that label smoothing is competitive with loss correction techniques in coping with label noise, and that it is particularly successful in denoising examples while preserving calibration.
4.2 Label smoothing as regularisation
While empirically encouraging, the results in the previous section indicate a gap in our theoretical understanding: from §3.4, the smoothing loss apparently has the opposite effect to backward correction, which is theoretically unbiased under noise. What, then, explains the success of smoothing?
To understand the denoising effects of label smoothing, we now study its role as a regulariser. To get some intuition, consider a linear model $f(x) = Wx$, trained on features $x \in \mathbb{R}^d$ and one-hot labels $e^y$ using the square loss, i.e., $\ell(y, f(x)) = \|e^y - Wx\|_2^2$. The optimal solution is $W^* = \mathbb{E}[e^y x^\top]\,\mathbb{E}[x x^\top]^{-1}$; label smoothing at level $\alpha$ transforms this to
$$W^*_\alpha = (1-\alpha)\cdot W^* + \frac{\alpha}{L}\cdot\mathbf{1}\,\mathbb{E}[x]^\top\,\mathbb{E}[x x^\top]^{-1}. \qquad (7)$$
Observe that if our data is centered, i.e., $\mathbb{E}[x] = 0$, the second term will be zero. Consequently, for such data, the effect of label smoothing is simply to shrink the weights by a factor $(1-\alpha)$. Thus, label smoothing can have a similar effect to shrinkage regularisation.
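The closed form (7) is straightforward to verify against a direct least-squares fit on smoothed targets (our sketch, using empirical moments in place of expectations):

```python
import numpy as np

rng = np.random.default_rng(2)
N, d, L, alpha = 2000, 3, 4, 0.3
X = rng.normal(size=(N, d))
Y = np.eye(L)[rng.integers(0, L, size=N)]   # one-hot labels
Y_smooth = (1 - alpha) * Y + alpha / L      # smoothed targets

def least_squares(X, Y):
    """Minimise ||Y - X W^T||_F^2 over W, i.e. fit f(x) = W x."""
    return np.linalg.solve(X.T @ X, X.T @ Y).T

W_star = least_squares(X, Y)
W_alpha = least_squares(X, Y_smooth)

# Empirical version of the second term in (7): (a/L) 1 E[x]^T E[x x^T]^{-1}
second_term = (alpha / L) * np.outer(np.ones(L), X.mean(axis=0)) @ np.linalg.inv(X.T @ X / N)
```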
Our more general finding is the following. From (6), label smoothing is equivalent to minimising a regularised risk
$$(1-\alpha)\cdot R(f) + \alpha\cdot\Omega(f), \quad \text{where} \quad \Omega(f) = \mathbb{E}_{x}\left[\bar{\ell}(f(x))\right]$$
and $\bar{\ell}(f(x)) = \frac{1}{L}\sum_{y' \in [L]} \ell(y', f(x))$. The second term above does not depend on the underlying label distribution. Consequently, it may be seen as a data-dependent regulariser on our predictor $f$. Concretely, for the softmax cross-entropy,
$$\Omega(f) = \mathbb{E}_{x}\left[\log \sum_{y' \in [L]} e^{f_{y'}(x)} - \frac{1}{L}\sum_{y' \in [L]} f_{y'}(x)\right]. \qquad (8)$$
To understand the label smoothing regulariser (8) more closely, we study it for the special case of linear classifiers, i.e., $f(x) = Wx$. While we acknowledge that the label smoothing effects displayed in our experiments for deep networks are complex, as a first step, understanding these effects for simpler models will prove instructive.
Smoothing for linear models. For linear models $f(x) = Wx$, the label smoothing regulariser for the softmax cross-entropy (8) induces the following shrinkage effect.
Theorem 1.
Let $x$ be distributed as $P_X$ with a finite mean. Then, over linear models $f(x) = Wx$, the regulariser (8) is minimised at $W = 0$.
See Appendix A for the proof. We see that the label smoothing regulariser encourages shrinkage of the weights towards zero; this is akin to the observation for the square loss in (7), and similar in effect to $\ell_2$ regularisation, which is also motivated as increasing the classification margin.
This perspective gives one hint as to why smoothing may successfully denoise. For linear models, introducing asymmetric label noise can move the decision boundary closer to a class. Hence, a regulariser that increases margin, such as shrinkage, can help the model to be more robust to noisy labels. We illustrate this effect with the following experiment.
Effect of shrinkage on label noise. We consider a 2D problem comprising two Gaussian class-conditionals with isotropic covariance, whose means are symmetric about the origin. The optimal linear separator thus passes through the origin, shown in Figure 5 as a black line. This separator is readily found by fitting logistic regression on this data.
Table 5: Test accuracy of distilled students under label noise, for different treatments of the teacher and student.
Dataset | Architecture | Vanilla distillation | LS on teacher | LS on student | FC on teacher | FC on student
CIFAR-100 | ResNet-32 | 63.98 ± 0.26 | 64.48 ± 0.25 | 63.83 ± 0.28 | 66.65 ± 0.18 | 63.94 ± 0.34
CIFAR-100 | ResNet-56 | 64.31 ± 0.26 | 65.63 ± 0.24 | 64.50 ± 0.32 | 66.35 ± 0.20 | 64.24 ± 0.26
CIFAR-10 | ResNet-32 | 80.44 ± 0.64 | 86.95 ± 1.82 | 85.72 ± 2.61 | 86.81 ± 1.86 | 86.92 ± 2.11
CIFAR-10 | ResNet-56 | 77.98 ± 0.25 | 87.10 ± 1.66 | 86.98 ± 1.71 | 86.88 ± 1.80 | 86.82 ± 1.76
We inject 5% asymmetric label noise into the negatives, so that some of these have their labels flipped to positive. The effect of this noise is to move the logistic regression separator closer to the (true) negatives, indicating there is greater uncertainty in its predictions. However, if we apply label smoothing at increasing levels $\alpha$, the separator is seen to gradually converge back to the Bayes-optimal one; this is in keeping with the shrinkage property of the regulariser (8).
Further, as suggested by Theorem 1, an explicit shrinkage regulariser has a similar effect to smoothing (Figure 5(b)). Formally establishing the relationship between label smoothing and shrinkage is an interesting open question.
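A minimal version of this experiment can be reproduced in a few lines (our sketch: two Gaussians, 5% of negatives flipped, logistic regression fit by gradient descent; the exact means, noise rate, and optimiser behind the paper's figure may differ):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
X = np.vstack([rng.normal(+1.0, 1.0, size=(n, 2)),    # positives
               rng.normal(-1.0, 1.0, size=(n, 2))])   # negatives
y = np.concatenate([np.ones(n), np.zeros(n)])
flip = (y == 0) & (rng.random(2 * n) < 0.05)          # flip 5% of negatives
y_noisy = np.where(flip, 1.0, y)

def fit_logistic(X, y, alpha=0.0, lr=0.1, steps=2000):
    """Gradient descent on logistic loss, with targets smoothed towards 1/2."""
    targets = (1 - alpha) * y + alpha / 2
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - targets) / len(y)   # gradient of the mean loss
    return w

w_plain = fit_logistic(X, y_noisy, alpha=0.0)
w_smooth = fit_logistic(X, y_noisy, alpha=0.3)
```

Consistent with the shrinkage view, the smoothed solution has a smaller norm, pulling the separator back towards the one through the origin.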
Summary. We have seen in §3 that from a loss perspective, label smoothing results in a biased risk estimate; this is in contrast to the unbiased backward correction procedure. In this section, we provided an alternate regularisation perspective, which gives insight into why label smoothing can denoise training labels. Combining these two views theoretically, however, remains an interesting topic for future work.
5 Distillation under label noise
We now study the effect of label smoothing on distillation, when our data is corrupted with label noise. In distillation, a trained “teacher” model’s logits are used to augment (or replace) the one-hot labels used to train a “student” model (Hinton et al., 2015). While traditionally motivated as a means for a simpler model (student) to mimic the performance of a complex model (teacher), Furlanello et al. (2018) showed gains even for models of similar complexity.
Müller et al. (2019) observed that for standard (noisefree) problems, label smoothing on the teacher improves the teacher’s performance, but hurts the student’s performance. Thus, a better teacher does not result in a better student. Müller et al. (2019) attribute this to the erasure of relative information between the teacher logits under smoothing.
But is a teacher trained with label smoothing on noisy data better for distillation? On the one hand, as we saw in the previous section, label smoothing has a denoising effect on models trained on noisy data. On the other hand, label smoothing on clean data may cause some information erasure in the logits (Müller et al., 2019). Can the teacher transfer the denoising effects of label smoothing to a student?
We study this question empirically. On the CIFAR-100 and CIFAR-10 datasets, with the same architectures and noise injection procedure as the previous section, we train three teacher models on the noisy labels: one as-is on the noisy labels, one with label smoothing, and another with forward correction. We distill each teacher to a student model of the same complexity (see Appendix B for a complete description), and measure the student’s performance. As a final approach, we distill a vanilla teacher, but apply label smoothing and forward correction on the student.
Table 5 reports the performance of the distilled students using each of the above teachers. Our key finding is that on both datasets, both label smoothing and loss correction on the teacher significantly improves over vanilla distillation; this is in marked contrast to the findings of Müller et al. (2019). On the other hand, smoothing or correcting on the student has mixed results; while there are benefits on CIFAR10, the larger CIFAR100 sees essentially no gains.
Finally, we plot the effect of the teacher’s label smoothing parameter $\alpha$ on student performance in Figure 6. Even for high values of $\alpha$, smoothing improves performance over the baseline ($\alpha = 0$). Per the previous section, large values of $\alpha$ allow for successful label denoising, and the results indicate the value of transferring this to the student.
In summary, our experiments show that under label noise, it is strongly beneficial to denoise the teacher — either through label smoothing or loss correction — prior to distillation.
6 Conclusion
We studied the effectiveness of label smoothing as a means of coping with label noise. Empirically, we showed that smoothing is competitive with existing loss correction techniques, and that it exhibits strong denoising effects. Theoretically, we related smoothing to one of these correction techniques, and reinterpreted it as a form of regularisation. Further, we showed that when distilling models from noisy data, label smoothing of the teacher is beneficial. Overall, our results shed further light on the potential benefits of label smoothing, and suggest formal exploration of its denoising properties as an interesting topic for future work.
Supplementary material for “Does label smoothing mitigate label noise?”
Appendix A Proof of Theorem 1
Note that for linear models $f(x) = Wx$, the regulariser (8) is a convex function of $W$: it is the expectation of a log-sum-exp composed with a linear map, minus a linear term. Hence we can find the minimiser of (8) by solving for when the gradient vanishes. Writing $w_y$ for the $y$-th row of $W$, we have
$$\nabla_{w_y} \Omega(W) = \mathbb{E}_{x}\left[\left(\frac{e^{w_y^\top x}}{\sum_{y' \in [L]} e^{w_{y'}^\top x}} - \frac{1}{L}\right) x\right].$$
We can swap differentiation and expectation in the above equation as the integrand is differentiable in both $x$ and $W$, and $x$ has a finite mean. Now we show that the gradient evaluates to zero at $W = 0$: here every softmax probability equals $\frac{1}{L}$, so the bracketed term, and hence the gradient, vanishes for every $x$.
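The stationarity at $W = 0$ (and the global lower bound $\Omega(W) \ge \log L$, which follows since $\log\sum_{y'} e^{f_{y'}} \ge \log L + \frac{1}{L}\sum_{y'} f_{y'}$ by Jensen's inequality) can be checked numerically on an empirical sample (our sketch):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 3))
L = 4

def omega(W):
    """Empirical regulariser (8) for a linear model f(x) = W x."""
    F = X @ W.T
    m = F.max(axis=1, keepdims=True)
    lse = np.log(np.exp(F - m).sum(axis=1)) + m[:, 0]   # stable log-sum-exp
    return np.mean(lse - F.mean(axis=1))

def omega_grad(W):
    """Gradient of the empirical regulariser: mean of (softmax(Wx) - 1/L) x^T."""
    F = X @ W.T
    Z = np.exp(F - F.max(axis=1, keepdims=True))
    P = Z / Z.sum(axis=1, keepdims=True)
    return (P - 1.0 / L).T @ X / len(X)

grad_at_zero = omega_grad(np.zeros((L, 3)))
```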
Appendix B Experimental setup
B.1 Architecture
We use ResNets with batch normalisation [He et al., 2016] for our experiments, with the following configurations. For CIFAR-10 and CIFAR-100 we experiment with ResNet-32 and ResNet-56. We use ResNet-v2-50 for our experiments with ImageNet. We list the architecture configurations in terms of (blocks, filters, stride) corresponding to each ResNet block group in Table 6.
Architecture | Configuration: [(blocks, filters, stride)]
ResNet-32 | [(5, 16, 1), (5, 32, 2), (5, 64, 2)]
ResNet-56 | [(9, 16, 1), (9, 32, 2), (9, 64, 2)]
ResNet-v2-50 | [(3, 64, 1), (4, 128, 2), (6, 256, 2), (3, 512, 2)]
B.2 Training
We follow the experimental setup from Müller et al. [2019].
For both CIFAR-10 and CIFAR-100 we use stochastic gradient descent with Nesterov momentum of 0.9, with a step-wise schedule that drops the initial learning rate by a constant factor twice during training.
We set weight decay to $10^{-4}$. On ImageNet we train ResNet-v2-50 using the LARS optimizer [You et al., 2017] for large-batch training. For data augmentation we use random crops and left-right flips.
For our distillation experiments we train only with the cross-entropy objective against the teacher’s logits. We use a fixed softmax temperature unless specified otherwise when describing an experiment.
We ran training on CIFAR-100 and CIFAR-10 using 4 chips of TPUv2, and on ImageNet using 128 chips of TPUv3. Training for CIFAR-100 and CIFAR-10 took under 15 minutes; ImageNet training took on the order of hours.
Appendix C Experiments: additional results
C.1 Comparison of smoothing against label noise baselines
Dataset | Architecture | Baseline | LS | FC smoothing | BC smoothing | FC Patrini | BC Patrini
CIFAR-100 | ResNet-32 | 57.06 ± 0.38 | 60.70 ± 0.28 | 61.29 ± 0.38 | 53.91 ± 0.40 | 57.25 ± 0.24 | 55.89 ± 0.33
CIFAR-100 | ResNet-56 | 54.93 ± 0.37 | 59.04 ± 0.53 | 60.00 ± 0.31 | 52.25 ± 0.51 | 55.09 ± 0.39 | 55.00 ± 0.13
CIFAR-10 | ResNet-32 | 80.44 ± 0.63 | 83.95 ± 0.18 | 80.78 ± 0.42 | 77.23 ± 0.72 | 80.33 ± 0.29 | 80.65 ± 0.59
CIFAR-10 | ResNet-56 | 77.98 ± 0.24 | 80.98 ± 0.48 | 79.66 ± 0.26 | 77.32 ± 0.35 | 77.97 ± 0.45 | 77.66 ± 0.44
Figure 7 shows density plots of the differences between the maximum logit value (or the logit corresponding to the true/noisy label) and the average logit value, across different portions of the training data. We notice that while label smoothing reduces the confidence (by lowering the peak around 1.0), the backward and forward correction methods increase the confidence by boosting this spike. This is the case for both the noisy and true labels; however, the effect is much stronger on the correct label's logit activation.
C.2 Logit visualisation plots
In this section we present additional pre-logit visualisation plots: for CIFAR-100 trained with ResNet-56 in Figure 8(a-d), and for CIFAR-10 trained with ResNet-32 in Figure 8(e-g). Figure 9 visualises the pre-logits for backward and forward correction on CIFAR-100 trained with ResNet-32. As before, we see that both methods are able to denoise the noisy instances.
References
 Robust bitempered logistic loss based on Bregman divergences. CoRR abs/1906.03361. External Links: 1906.03361 Cited by: §1.
 Learning from noisy examples. Machine Learning 2 (4), pp. 343–370 (English). Cited by: §1, §2.2.
 Convexity, classification, and risk bounds. Journal of the American Statistical Association 101 (473), pp. 138–156. Cited by: §3.3.
 Combining labeled and unlabeled data with cotraining. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, COLT’ 98, New York, NY, USA, pp. 92–100. External Links: ISBN 1581130570 Cited by: §2.2.
 Model compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’06, New York, NY, USA, pp. 535–541. Cited by: §2.3.
 On symmetric losses for learning from corrupted labels. In Proceedings of the 36th International Conference on Machine Learning, K. Chaudhuri and R. Salakhutdinov (Eds.), Proceedings of Machine Learning Research, Vol. 97, Long Beach, California, USA, pp. 961–970. Cited by: §1, §4.1.
 Towards better decoding and language model integration in sequence to sequence models. In Interspeech 2017, 18th Annual Conference of the International Speech Communication Association, Stockholm, Sweden, August 2024, 2017, pp. 523–527. Cited by: §1.
 Consistency of losses for learning from weak labels. In Machine Learning and Knowledge Discovery in Databases, T. Calders, F. Esposito, E. Hüllermeier and R. Meo (Eds.), Berlin, Heidelberg, pp. 197–210. Cited by: §2.2.
 The well-calibrated Bayesian. Journal of the American Statistical Association 77 (379), pp. 605–610. Cited by: §4.1.
 Born-again neural networks. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10–15, 2018, pp. 1602–1611. Cited by: §5.
 On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6–11 August 2017, pp. 1321–1330. Cited by: §4.1.
 Pumpout: A meta approach for robustly training deep neural networks with noisy labels. CoRR abs/1809.11008. Cited by: 3rd item, §4.1.
 Masking: a new perspective of noisy supervision. In NeurIPS, Cited by: §1, §2.2.
 Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §B.1, Table 6.
 Distilling the knowledge in a neural network. CoRR abs/1503.02531. Cited by: §1, §2.3, §5.
 GPipe: efficient training of giant neural networks using pipeline parallelism. In Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox and R. Garnett (Eds.), pp. 103–112. Cited by: §1.
 Positive-unlabeled learning with non-negative risk estimator. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, Red Hook, NY, USA, pp. 1674–1684. Cited by: 3rd item.
 Regularization via structural label smoothing. CoRR abs/2001.01900. Cited by: §1.
 Random classification noise defeats all convex potential boosters. Machine Learning 78 (3), pp. 287–304 (English). External Links: ISSN 0885-6125 Cited by: §2.2.
 When does label smoothing help?. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8–14 December 2019, Vancouver, BC, Canada, pp. 4696–4705. Cited by: §B.2, Figure 8, Figure 9, §1, §1, Figure 4, §4.1, §4.1, §4.1, §5, §5, §5.
 Learning with noisy labels. In Advances in Neural Information Processing Systems (NIPS), pp. 1196–1204. Cited by: item 1, §1, §2.2, §2.2.
 Loss factorization, weakly supervised learning and label noise robustness. In International Conference on Machine Learning (ICML), pp. 708–717. Cited by: §1.
 Making deep neural networks robust to label noise: a loss correction approach. In Computer Vision and Pattern Recognition (CVPR), pp. 2233–2241. Cited by: item 1, §1, §1, §2.2, §2.2, 3rd item, Figure 2, §4.1, §4.1.
 Regularizing neural networks by penalizing confident output distributions. In International Conference on Learning Representations Workshop, Cited by: §1.
 Regularized evolution for image classifier architecture search. arXiv e-prints. External Links: 1802.01548 Cited by: §1.
 Training deep neural networks on noisy labels with bootstrapping. External Links: 1412.6596 Cited by: §2.2.
 DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. External Links: 1910.01108 Cited by: §2.3.
 Classification with asymmetric label noise: consistency and maximal denoising. In Conference on Learning Theory (COLT), pp. 489–511. Cited by: §2.2, §2.2.
 Training convolutional networks with noisy labels. In ICLR Workshops, Cited by: §2.2.
 Rethinking the inception architecture for computer vision. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27–30, 2016, pp. 2818–2826. Cited by: §1, §2.1.
 Combating label noise in deep learning using abstention. In Proceedings of the 36th International Conference on Machine Learning, K. Chaudhuri and R. Salakhutdinov (Eds.), Proceedings of Machine Learning Research, Vol. 97, Long Beach, California, USA, pp. 6234–6243. Cited by: §1.
 Learning with symmetric label noise: the importance of being unhinged. In Advances in Neural Information Processing Systems (NIPS), pp. 10–18. Cited by: §2.2.
 A theory of learning with corrupted labels. Journal of Machine Learning Research 18 (228), pp. 1–50. Cited by: §1, §2.2.
 Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, Red Hook, NY, USA, pp. 6000–6010. Cited by: §1.
 Are anchor points really indispensable in label-noise learning?. In Advances in Neural Information Processing Systems 32, pp. 6835–6846. Cited by: §2.2.
 DisturbLabel: regularizing CNN on the loss layer. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27–30, 2016, pp. 4753–4762. Cited by: §1.
 Large batch training of convolutional networks. External Links: 1708.03888 Cited by: §B.2, §4.1.
 Understanding deep learning requires rethinking generalization. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24–26, 2017, Conference Track Proceedings, Cited by: §1, §2.2.
 Mixup: beyond empirical risk minimization. In International Conference on Learning Representations, Cited by: §1.
 Statistical analysis of some multi-category large margin classification methods. J. Mach. Learn. Res. 5, pp. 1225–1251. External Links: ISSN 1532-4435 Cited by: §3.3.
 Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics 32 (1), pp. 56–85. Cited by: §3.3.
 Learning transferable architectures for scalable image recognition. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vol. , pp. 8697–8710. External Links: ISSN 10636919 Cited by: §1.