On Feature Normalization and Data Augmentation


Abstract

Modern neural network training relies heavily on data augmentation for improved generalization. After the initial success of label-preserving augmentations, there has been a recent surge of interest in label-perturbing approaches, which combine features and labels across training samples to smooth the learned decision surface. In this paper, we propose a new augmentation method that leverages the first and second moments extracted and re-injected by feature normalization. We replace the moments of the learned features of one training image by those of another, and also interpolate the target labels. As our approach is fast, operates entirely in feature space, and mixes different signals than prior methods, one can effectively combine it with existing augmentation methods. We demonstrate its efficacy across benchmark data sets in computer vision, speech, and natural language processing, where it consistently improves the generalization performance of highly competitive baseline networks.


1 Introduction

Deep learning has had a dramatic impact across many fields, including computer vision, automated speech recognition (ASR), and natural language processing (NLP). Fueled by these successes, significant effort has gone into the search for ever more powerful and bigger neural network architectures Krizhevsky et al. (2012); He et al. (2015); Zoph and Le (2016); Huang et al. (2019); Vaswani et al. (2017). These innovations, along with progress in computing hardware, have enabled researchers to train enormous models with billions of parameters (Radford et al., 2019; Keskar et al., 2019; Raffel et al., 2019). Such over-parameterized models can easily memorize the whole training set even with random labels (Zhang et al., 2017). To address overfitting, neural networks are trained with heavy regularization, which can be explicit, for example in the case of data augmentation (Simard et al., 1993; Frühwirth-Schnatter, 1994; Schölkopf et al., 1996; Van Dyk and Meng, 2001) and dropout (Srivastava et al., 2014), or implicit, such as early stopping and intrinsic normalization (Ioffe and Szegedy, 2015; Ba et al., 2016).

Figure 1: MoEx with PONO normalization. The features of the cat image are infused with moments from the plane image.

The most common form of data augmentation is based on label-preserving transformations. For instance, practitioners (Simard et al., 1993; Krizhevsky et al., 2012; Szegedy et al., 2016) randomly flip, crop, translate, or rotate images — assuming that none of these transformations alter their class memberships. Chapelle et al. (2001) formalize such transformations under the Vicinal Risk Minimization (VRM) principle, where augmented data sampled within the vicinity of an observed instance are assumed to have the same label. Zhang et al. (2018) take this a step further and introduce Mixup, a label-perturbing data augmentation method in which two inputs and their corresponding labels are linearly interpolated to smooth out the decision surface between them. As a variant, Yun et al. (2019) cut and paste a rectangular patch from one image into another and interpolate the labels proportionally to the area of the patch.

A key ingredient in optimizing such deep neural networks is Batch Normalization (Ioffe and Szegedy, 2015; Zhang et al., 2017). A series of recent studies (Bjorck et al., 2018; Santurkar et al., 2018) show that normalization methods change the loss surface and lead to faster convergence by enabling larger learning rates in practice. While batch normalization has arguably contributed substantially to the deep learning revolution in visual object recognition, its performance degrades on tasks with small mini-batches or variable input sizes (e.g. many NLP tasks). This has motivated the quest for normalization methods that operate on single instances, such as LayerNorm (LN) (Ba et al., 2016), InstanceNorm (IN) (Ulyanov et al., 2016), GroupNorm (GN) (Wu and He, 2018), and, recently, PositionalNorm (PONO) (Li et al., 2019). These intra-instance normalizations treat the features of each example as a distribution and normalize it with respect to its first and second moments — essentially removing the moment information from the feature representation and re-learning it through scaling and offset constants.

Up to this point, data augmentation has been considered more or less independent of the normalization method used during training. In this paper, we introduce a novel label-perturbing data augmentation approach that integrates naturally with feature normalization. It has been argued previously that the first and second moments extracted by intra-instance normalization capture the underlying structure of an image (Li et al., 2019). We propose to extract these moments, but instead of simply removing them, we re-inject moments from a different image and interpolate the labels — for example, injecting the structure of a plane into the image of a cat to obtain a mixture between cat and plane. See Figure 1 for a schematic illustration. In practice, this procedure is very effective for training with mini-batches and can be implemented in a few lines of code: during training we compute the feature mean and variance for each instance at a given layer, permute them across the mini-batch, and re-inject them into the feature representations of other instances (while interpolating the labels). In other words, we randomly exchange the feature moments across samples, and we therefore refer to our method as Moment Exchange (MoEx).

Unlike previous methods, MoEx operates purely in feature space and can therefore easily be applied jointly with existing data augmentation methods that operate in the input space, such as cropping, flipping, and rotating, and even with label-perturbing approaches like CutMix or Mixup. Importantly, because MoEx only alters the first and second moments of the pixel distributions, it has an orthogonal effect to existing data augmentation methods and its improvements can be “stacked” on top of their established gains in generalization.

We conduct extensive experiments on eleven different tasks/datasets using more than ten varieties of models. The results show that MoEx consistently leads to significant improvements across models and tasks, and it is particularly well suited to be combined with existing augmentation approaches. Further, our experiments show that MoEx is not limited to computer vision, but is also readily applicable and highly effective in applications within speech recognition and NLP. The code for MoEx is available at https://github.com/Boyiliee/MoEx.

2 Background and Related Work

Feature normalization has always been a prominent part of neural network training (LeCun et al., 1998; Li and Zhang, 1998). Initially, when networks had predominantly one or two hidden layers, the practice of z-scoring the features was limited to the input itself. As networks became deeper, Ioffe and Szegedy (2015) extended the practice to the intermediate layers with the celebrated BatchNorm algorithm. As long as the mean and variance are computed across the entire data set, or a randomly picked mini-batch (as is the case for BatchNorm), the extracted moments reveal biases in the data set with no predictive information — removing them causes no harm but can substantially improve optimization and generalization (LeCun et al., 1998; Bjorck et al., 2018; Ross et al., 2013).

In contrast, recently proposed normalization methods (Ba et al., 2016; Ulyanov et al., 2016; Wu and He, 2018; Li et al., 2019) treat the features of each training instance as a distribution and normalize them for each sample individually. We refer to the extracted mean and variance as intra-instance moments. We argue that intra-instance moments are attributes of a data instance that describe the distribution of its features and should not be discarded. Recent works (Huang and Belongie, 2017; Li et al., 2019) have shown that such attributes can be useful in several generative models. Realizing that these moments capture interesting information about data instances, we propose to use them for data augmentation.

Data augmentation has a similarly long and rich history in machine learning. Initial approaches discovered the concept of label-preserving transformations (Simard et al., 1993; Schölkopf et al., 1996) to mimic larger training data sets, suppress overfitting, and improve generalization. For instance, Simard et al. (2003) randomly translate or rotate images, assuming that the labels of the images do not change under such small perturbations. Many subsequent papers proposed alternative flavors of this augmentation approach based on similar insights (DeVries and Taylor, 2017; Kawaguchi et al., 2018; Cubuk et al., 2019a; Zhong et al., 2020; Karras et al., 2019; Cubuk et al., 2019b; Xie et al., 2019; Singh and Lee, 2017). Beyond vision tasks, back-translation (Sennrich et al., 2015; Yu et al., 2018; Edunov et al., 2018b; Caswell et al., 2019) and word dropout (Iyyer et al., 2015) are commonly used to augment text data. Besides augmenting the inputs, Maaten et al. (2013), Ghiasi et al. (2018), and Wang et al. (2019) adjust either the features or the loss function as implicit data augmentation. In addition to label-preserving transformations, there is an increasing trend towards label-perturbing data augmentation methods. Zhang et al. (2018) arguably pioneered the field with Mixup, which interpolates two training inputs in feature and label space simultaneously. CutMix (Yun et al., 2019), in contrast, is designed specifically for image inputs: it randomly crops a rectangular region of an image and pastes it into another image, mixing the labels proportionally to the number of pixels contributed by each input image to the final composition.

3 Moment Exchange

In this section we introduce Moment Exchange (MoEx), which blends feature normalization with data augmentation. Similar to Mixup and CutMix, it fuses features and labels across two training samples; however, it is unique in its asymmetry, as it mixes two very different components: the normalized features of one instance are combined with the feature moments of another. This asymmetric composition in feature space allows us to capture and smooth out directions of the decision boundary not previously covered by existing augmentation approaches. We also show that MoEx can be implemented very efficiently in a few lines of code, and should be regarded as a cheap and effective companion to existing data augmentation methods.

Setup.

Deep neural networks are composed of layers of transformations, including convolutions, pooling, transformers (Vaswani et al., 2017), fully connected layers, and non-linear activations. Consider a batch of input instances $x_1, \dots, x_m$; these transformations are applied sequentially to generate a series of hidden features before passing the final feature to a linear classifier. For each instance, the feature representation at a given layer is a three-dimensional tensor indexed by channel (C), height (H), and width (W).

Normalization.

We assume the network is using an invertible intra-instance normalization method. Let us denote this function by $F$, which takes the features $h_i^\ell$ of the $i$-th input at layer $\ell$ and produces three outputs: the normalized features $\hat{h}_i^\ell$, the first moment $\mu_i^\ell$, and the second moment $\sigma_i^\ell$:

$$(\hat{h}_i^\ell, \mu_i^\ell, \sigma_i^\ell) = F(h_i^\ell).$$

The inverse function $F^{-1}$ reverses the normalization process, i.e. $h_i^\ell = F^{-1}(\hat{h}_i^\ell, \mu_i^\ell, \sigma_i^\ell)$. As an example, PONO (Li et al., 2019) computes the first and second moments across channels from the feature representation at a given layer:

$$\mu_{i,h,w}^\ell = \frac{1}{C}\sum_{c=1}^{C} h_{i,c,h,w}^\ell, \qquad \sigma_{i,h,w}^\ell = \sqrt{\frac{1}{C}\sum_{c=1}^{C}\left(h_{i,c,h,w}^\ell - \mu_{i,h,w}^\ell\right)^2 + \epsilon}.$$

The normalized features $\hat{h}_i^\ell = \left(h_i^\ell - \mu_i^\ell\right)/\sigma_i^\ell$ have zero mean and standard deviation 1 along the channel dimension. Note that other inter-instance normalizations, such as BatchNorm, can also be used in addition to the intra-instance normalization $F$, with their well-known beneficial impact on convergence. As the two kinds of normalization compute statistics across different dimensions, their interference is insignificant.
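To make the normalization and its inverse concrete, the snippet below is a minimal sketch of $F$ and $F^{-1}$ for PONO, assuming a feature tensor of shape (B, C, H, W); it mirrors the description above rather than the authors' exact implementation (see also Algorithm 1 in Appendix B).

import torch

# Sketch of F and F^{-1} for PONO: moments are taken across the channel
# dimension of a feature tensor h of shape (B, C, H, W).
def pono(h, epsilon=1e-5):
    mean = h.mean(dim=1, keepdim=True)                    # mu: (B, 1, H, W)
    std = (h.var(dim=1, keepdim=True) + epsilon).sqrt()   # sigma: (B, 1, H, W)
    return (h - mean) / std, mean, std                    # F(h) = (h_hat, mu, sigma)

def pono_inverse(h_hat, mean, std):
    return h_hat * std + mean                             # F^{-1}(h_hat, mu, sigma)

Calling pono_inverse with the moments of a different instance is exactly the exchange described in the next subsection.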

Moment Exchange.

The procedure described in the following functions identically for each layer it is applied to, and we therefore drop the superscript $\ell$ for notational simplicity. Further, for now, we only consider two randomly chosen samples $x_A$ and $x_B$ (see Figure 1 for a schematic illustration). The intra-instance normalization decomposes the features of input $x_A$ at a given layer into three parts, $(\hat{h}_A, \mu_A, \sigma_A)$. Traditionally, batch-normalization (Ioffe and Szegedy, 2015) discards the two moments and only proceeds with the normalized features $\hat{h}_A$. If the moments are computed across instances (e.g. over the mini-batch) this makes sense, as they capture biases that are independent of the label. However, in our case we focus on intra-instance normalization, and therefore both moments are computed only from $x_A$ and are thus likely to contain label-relevant signal. This is clearly visible in the cat and plane examples in Figure 1. All four moments ($\mu_A$, $\sigma_A$, $\mu_B$, $\sigma_B$) capture the underlying structure of the samples, distinctly revealing their respective class labels.

We consider the normalized features and the moments as distinct views of the same instance. It generally helps robustness if a machine learning algorithm leverages multiple sources of signal, as it becomes more resilient in case one of them is under-expressed in a test example. For instance, the first moment conveys primarily structural information and only little color information, which, in the case of cat images, can help overcome overfitting towards fur-color biases in the training data set.

In order to encourage the network to utilize the moments, we take the two images and combine them by injecting the moments of image $x_B$ into the feature representation of image $x_A$:

(1)  $h_A^{(B)} = F^{-1}\!\left(\hat{h}_A, \mu_B, \sigma_B\right).$

In the case of PONO, the transformation becomes

(2)  $h_A^{(B)} = \sigma_B \, \frac{h_A - \mu_A}{\sigma_A} + \mu_B.$

We now proceed with these features $h_A^{(B)}$, which contain the moments of image B (plane) hidden inside the features of image A (cat). In order to encourage the neural network to pay attention to the injected features of B, we modify the loss function to predict the class label $y_A$ and also $y_B$, up to some mixing constant $\lambda \in [0, 1]$. The loss becomes a straightforward combination

$$\lambda \cdot \ell\!\left(h_A^{(B)}, y_A\right) + (1 - \lambda)\cdot \ell\!\left(h_A^{(B)}, y_B\right).$$

Implementation. In practice one needs to apply MoEx only at a single layer in the neural network, as the fused signal is propagated until the end. With PONO as the normalization method, we observe that applying MoEx after the first layer usually leads to the best result. In contrast, we find that MoEx is better suited for later layers when using IN (Ulyanov et al., 2016), GN (Wu and He, 2018), or LN (Ba et al., 2016) for moment extraction. Please see subsection 5.1 for a detailed ablation study. The inherent randomness of mini-batches allows us to implement MoEx very efficiently. For each input instance $x_i$ in the mini-batch we compute the normalized features $\hat{h}_i$ and moments $(\mu_i, \sigma_i)$. Subsequently we sample a random permutation $\pi$ and apply MoEx to each instance with a random partner within the mini-batch:

(3)  $h_i^{(\pi(i))} = F^{-1}\!\left(\hat{h}_i, \mu_{\pi(i)}, \sigma_{\pi(i)}\right).$

See Algorithm 1 in the Appendix for an example implementation in PyTorch (Paszke et al., 2017). Note that all computations are extremely fast and only introduce negligible overhead during training.

Hyper-parameters. To control the intensity of our data augmentation, we perform MoEx during training only with some probability $p$. In this way, the model still sees the original features with probability $1-p$. In practice we found that $p=0.5$ works well on most datasets, except for ImageNet, where we set $p=1$ because stronger data augmentation is needed. The interpolation weight $\lambda$ is another hyper-parameter to be tuned. Empirically, we find that $\lambda=0.9$ works well across data sets, presumably because the moments contain less information than the normalized features. Please see subsection 5.2 for a detailed ablation study.
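To make the roles of $p$ and $\lambda$ concrete, the following is a hedged sketch of one way to wire MoEx into a training step, reusing the moex and interpolate_loss routines of Algorithm 1 in Appendix B; the stem/head split of the network and the function names are our illustrative assumptions, not part of the original implementation.

import random

# `stem` maps the input to the features of the layer where MoEx is applied,
# `head` maps those features to class logits; both names are hypothetical.
# `moex` and `interpolate_loss` are the routines from Algorithm 1 (Appendix B).
def train_step(stem, head, x, y_onehot, loss_func, p=0.5, lam=0.9):
    h = stem(x)
    if random.random() < p:                       # apply MoEx with probability p
        h, y_a, y_b = moex(h, y_onehot, 'pono')   # exchange moments across the batch
        loss = interpolate_loss(head(h), y_a, y_b, loss_func, lam)
    else:                                         # otherwise: ordinary training step
        loss = loss_func(head(h), y_onehot)
    return loss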

Properties. MoEx is performed entirely at the feature level inside the neural network and can be readily combined with other augmentation methods that operate on the raw input (pixels or words). For instance, CutMix (Yun et al., 2019) typically works best when applied on the input pixels directly. We find that the improvements of MoEx are complementary to such prior work and recommend using MoEx in combination with established data augmentation methods.

Model #param. CIFAR10 CIFAR100
ResNet-110 (3-stage) 1.7M 6.82±0.23 26.28±0.10
+MoEx 1.7M 6.03±0.24 25.47±0.09
DenseNet-BC-100 (k=12) 0.8M 4.67±0.10 22.61
+MoEx 0.8M 4.58±0.03 21.38±0.18
ResNeXt-29 (8×64d) 34.4M 4.00±0.04 18.54±0.27
+MoEx 34.4M 3.64±0.07 17.08±0.12
WRN-28-10 36.5M 3.85±0.06 18.67
+MoEx 36.5M 3.31±0.03 17.69±0.10
DenseNet-BC-190 (k=40) 25.6M 3.31±0.04 17.10±0.02
+MoEx 25.6M 2.87±0.03 16.09±0.14
PyramidNet-200 (α=240) 26.8M 3.65±0.10 16.51
+MoEx 26.8M 3.44±0.03 15.50±0.27
Table 1: Classification error (%) on CIFAR-10 and CIFAR-100 for various competitive baseline models, with and without MoEx. WRN-28-10: Wide ResNet with depth 28 and widening factor k=10 (dropout (Srivastava et al., 2014): 0.3); DenseNet-BC-100 (k=12): depth L=100, growth rate k=12. Note: following the official GitHub repositories, we train ResNet-110 for 164 epochs, WRN-28-10 for 200 epochs, and all other models for 300 epochs.
PyramidNet-200 (α=240) Top-1 / Top-5
(# params: 26.8M) Error (%)
Baseline 16.45 / 3.69
Manifold Mixup (Zhang et al., 2018) 16.14 / 4.07
StochDepth (Huang et al., 2016) 15.86 / 3.33
DropBlock (Ghiasi et al., 2018) 15.73 / 3.26
Mixup (Zhang et al., 2018) 15.63 / 3.99
ShakeDrop (Yamada et al., 2018) 15.08 / 2.72
MoEx 15.02 / 2.96
Cutout DeVries and Taylor (2017) 16.53 / 3.65
Cutout + MoEx 15.11 / 3.23
CutMix Yun et al. (2019) 14.47 / 2.97
CutMix + MoEx 13.95 / 2.95
CutMix + ShakeDrop (Yamada et al., 2018) 13.81 / 2.29
CutMix + ShakeDrop + MoEx 13.47 / 2.15
Table 2: Combining MoEx with other regularization methods on CIFAR-100 using the state-of-the-art model, PyramidNet-200, following the setting of Yun et al. (2019). The best numbers in each group are bold.

4 Experiments

We evaluate the efficacy of our approach thoroughly across several tasks and data modalities. Our implementation will be released as open source upon publication.

4.1 Image Classification on CIFAR

Setup. CIFAR-10 and CIFAR-100 (Krizhevsky and Hinton, 2009) are benchmark datasets containing 50K training and 10K test color images at 32×32 resolution. We evaluate our method using various model architectures (He et al., 2015; Huang et al., 2017; Xie et al., 2017; Zagoruyko and Komodakis, 2016; Han et al., 2017) on CIFAR-10 and CIFAR-100. We follow the conventional setting [1] with random translation as the default data augmentation and apply MoEx to the features after the first layer. Furthermore, to demonstrate the compatibility of MoEx with other regularization methods, we follow the official setup [2] of Yun et al. (2019) and apply MoEx jointly with several regularization methods to PyramidNet-200 (Han et al., 2017) on CIFAR-100.

Results. Table 1 displays the classification results on CIFAR-10 and CIFAR-100 with and without MoEx. We report the mean and standard error (Gurland and Tripathi, 1971) over three random runs. MoEx consistently enhances the performance of all baseline models.

Table 2 shows the CIFAR-100 classification results with PyramidNet-200. Compared to other augmentation methods, PyramidNet trained with MoEx obtains the lowest error rate in all but one setting. Moreover, significant additional improvements are achieved when MoEx is combined with existing methods — setting a new state of the art for this particular benchmark when combined with the two best-performing alternatives, CutMix and ShakeDrop.

Test Error (%)
Model # of epochs Baseline +MoEx
ResNet-50 90 23.6 23.1
ResNeXt-50 (32×4d) 90 22.2 21.4
DenseNet-265 90 21.9 21.6
ResNet-50 300 23.1 21.9
ResNeXt-50 (32×4d) 300 22.5 22.0
DenseNet-265 300 21.5 20.9
Table 3: Classification results (Test Err (%)) on ImageNet in comparison with various models. Note: the ResNeXt-50 (32×4d) models trained for 300 epochs overfit; they have higher training accuracy but lower test accuracy than the 90-epoch ones.

4.2 Image Classification on ImageNet

Setup. We evaluate on ImageNet (Deng et al., 2009) (ILSVRC 2012 version), which consists of 1.3M training images and 50K validation images of various resolutions. For faster convergence, we use NVIDIA’s mixed-precision training code base [3] with batch size 1024, the default learning rate, a cosine annealing learning rate scheduler (Loshchilov and Hutter, 2016), and linear warmup (Goyal et al., 2017) for the first 5 epochs. As models may require more training updates to converge under data augmentation, we apply MoEx to ResNet-50, ResNeXt-50 (32×4d), and DenseNet-265 and train them for 90 and 300 epochs. For a fair comparison, we also report CutMix (Yun et al., 2019) under the same setting.

Results. Table 3 shows the test error rates on the ImageNet data set. MoEx is able to improve the classification performance throughout, regardless of model architecture. Similar to the previous CIFAR experiments, we observe in Table 4 that MoEx is highly competitive when compared to existing regularization methods and truly shines when it is combined with them. When applied jointly with CutMix (the strongest alternative), we obtain our lowest Top-1 and Top-5 error of 20.9/5.7 respectively. Due to computational limitations we only experimented with a ResNet-50, but expect similar trends for other architectures.

Beyond classification, we also finetune the pre-trained ImageNet models on Pascal VOC object detection task and find that weights pre-trained with MoEx provide a better initialization when finetuned on downstream tasks. Please see Appendix for details.

ResNet-50 (# params: 25.6M) # of epochs Top-1 / Top-5 Error (%)
ISDA (Wang et al., 2019) 90 23.3 / 6.8
Shape-ResNet (Geirhos et al., 2018) 105 23.3 / 6.7
Mixup (Zhang et al., 2018) 200 22.1 / 6.1
AutoAugment (Cubuk et al., 2019a) 270 22.4 / 6.2
Fast AutoAugment (Lim et al., 2019) 270 22.4 / 6.3
DropBlock (Ghiasi et al., 2018) 270 21.9 / 6.0
Cutout (DeVries and Taylor, 2017) 300 22.9 / 6.7
Manifold Mixup (Zhang et al., 2018) 300 22.5 / 6.2
Stochastic Depth (Huang et al., 2016) 300 22.5 / 6.3
CutMix (Yun et al., 2019) 300 21.4 / 5.9
Baseline 300 23.1 / 6.6
MoEx 300 21.9 / 6.1
CutMix 300 21.3 / 5.7
CutMix + MoEx 300 20.9 / 5.7
Table 4: Comparison of state-of-the-art regularization methods on ImageNet. The results for Stochastic Depth and Cutout are from Yun et al. (2019).
Model # Param Val Err Test Err
DenseNet-BC-100 0.8M 3.16 3.23
+MoEx 0.8M 2.97 3.31
VGG-11-BN 28.2M 3.05 3.38
+MoEx 28.2M 2.76 3.00
WRN-28-10 36.5M 2.42 2.21
+MoEx 36.5M 2.22 1.98
Table 5: Speech classification on Speech Command. Similar to the observation of Zhang et al. (2018), regularization methods work better for models with large capacity on this dataset.

4.3 Speech Recognition on Speech Commands

Setup. To demonstrate that MoEx can be applied to speech models as well, we use the Speech Commands dataset [4] (Warden, 2018), which contains 65000 one-second utterances from thousands of people. The goal is to classify them into 30 command words such as "Go", "Stop", etc. There are 56196, 7477, and 6835 examples for training, validation, and test, respectively. We use an open source implementation [5] to encode each audio clip into a mel-spectrogram of size 1×32×32, which is fed to 2D ConvNets as a one-channel input. We follow the default setup in the codebase, training models with ADAM (Kingma and Ba, 2014) at an initial learning rate of 0.01 for 70 epochs; the learning rate is reduced on plateau. We use the validation set for hyper-parameter selection and tune the MoEx hyper-parameters p and λ. We test MoEx on three baseline models: DenseNet-BC-100, VGG-11-BN, and WRN-28-10.

Results. Table 5 displays the validation and test errors. We observe that training models with MoEx improves over the baselines significantly in all but one case. The only exception is DenseNet-BC-100, which has only 2% of the parameters of the Wide ResNet, confirming the finding of Zhang et al. (2018) that on this data set data augmentation has little effect on tiny models.

4.4 3D model classification on ModelNet

Setup. We conduct experiments on the Princeton ModelNet10 and ModelNet40 datasets (Wu et al., 2015) for 3D model classification. This task aims to classify 3D models, encoded as point clouds, into 10 or 40 categories. As a proof of concept, we use PointNet++ (SSG) (Qi et al., 2017), implemented efficiently in PyTorch Geometric [6] (Fey and Lenssen, 2019), as the baseline; it does not use surface normals as additional inputs. We apply MoEx to the features after the first set abstraction layer in PointNet++. Following the default setting, all models are trained with ADAM (Kingma and Ba, 2014) at batch size 32 for 200 epochs, with the learning rate set to 0.001. We tune the hyper-parameters of MoEx on ModelNet10 and apply the same hyper-parameters to ModelNet40. For this task we use InstanceNorm [7] for moment extraction, which leads to slightly better results.

Model ModelNet10 ModelNet40
PointNet++ 6.02±0.10 9.16±0.16
+ MoEx 5.25±0.18 8.78±0.28
Table 6: Classification errors (%) on ModelNet10 and ModelNet40. The mean and standard error out of 3 runs are reported.

Results. Table 6 summarizes the results out of three runs, showing mean error rates with standard errors. MoEx reduces the classification errors from 6.0% to 5.3% and 9.2% to 8.8% on ModelNet10 and ModelNet40, respectively.

Task Method BLEU BERT-F1 (%)
De-En Transformer† 34.4 -
DynamicConv† 35.2 -
DynamicConv 35.46±0.06 67.28±0.02
+ MoEx 35.64±0.11 67.44±0.09
En-De DynamicConv 28.96±0.05 63.75±0.04
+ MoEx 29.18±0.10 63.86±0.02
It-En DynamicConv 33.27±0.04 65.51±0.02
+ MoEx 33.36±0.11 65.65±0.07
En-It DynamicConv 30.47±0.06 64.05±0.01
+ MoEx 30.64±0.06 64.21±0.11
Table 7: Machine translation with DynamicConv (Wu et al., 2019a) on the IWSLT 2014 German to English, English to German, Italian to English, and English to Italian tasks. The mean and standard error are based on 3 random runs. †: numbers reported by Wu et al. (2019a). Note: for all these scores, higher is better.

4.5 Machine Translation on IWSLT 2014.

Setup. To show the potential of MoEx on natural language processing tasks, we apply it to the state-of-the-art DynamicConv (Wu et al., 2019a) model on four IWSLT 2014 (Cettolo et al., 2014) tasks: German to English, English to German, Italian to English, and English to Italian machine translation. IWSLT 2014 is based on transcripts of TED talks and their translations; it contains 167K English-German sentence pairs and 175K English-Italian sentence pairs. We use the fairseq library (Ott et al., 2019) and follow the common setup (Edunov et al., 2018a), using 1/23 of the full training set as the validation set for hyper-parameter selection and early stopping. All models are trained with a batch size of 12000 tokens per GPU on 4 GPUs for 20K updates to ensure convergence; however, the models usually do not improve after 10K updates. We use the validation set to select the best model. We tune the MoEx hyper-parameters p and λ on the validation set of the German to English task, use MoEx with InstanceNorm after the first encoder layer, and apply the same hyper-parameters to the other three language pairs. When computing the moments, the edge paddings are ignored. We use two metrics to evaluate the models: BLEU (Papineni et al., 2002), an exact word-matching metric, and BERTScore [8] (Zhang et al., 2020); we report the rescaled BERT-F1 [9] for better interpretability. As suggested by the authors, we use multi-lingual BERT (Devlin et al., 2019) to compute BERTScore for non-English languages [10] and RoBERTa-large for English [11].
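Because encoder features come in padded batches, the per-instance moments must be computed over real tokens only. The snippet below is our own illustrative sketch of such a masked InstanceNorm-style moment computation; the feature shape (B, T, C) and the boolean pad_mask are assumptions, not the exact fairseq integration.

import torch

# h: (B, T, C) encoder features; pad_mask: (B, T) boolean, True at padding positions.
def masked_instance_moments(h, pad_mask, epsilon=1e-5):
    keep = (~pad_mask).unsqueeze(-1).type_as(h)            # (B, T, 1), 1.0 for real tokens
    count = keep.sum(dim=1, keepdim=True).clamp(min=1.0)   # real tokens per sentence
    mean = (h * keep).sum(dim=1, keepdim=True) / count     # (B, 1, C)
    var = (((h - mean) ** 2) * keep).sum(dim=1, keepdim=True) / count
    std = (var + epsilon).sqrt()
    return mean, std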

Results. Table 7 summarizes the average scores (higher is better) with standard errors over three runs. MoEx consistently improves the baseline model on all four tasks, by about 0.2 BLEU and 0.2% BERT-F1. Although these improvements are not large, they are highly consistent, and, as far as we know, MoEx is the first label-perturbing data augmentation method that improves machine translation models.

5 Ablation Study and Model Analysis

5.1 Ablation Study on Components

Name MoEx Test Error
Baseline ✗ 26.3±0.10
Label smoothing (Szegedy et al., 2016) ✗ 26.0±0.06
Label interpolation only ✗ 26.0±0.12
MoEx (λ = 1, not interpolating the labels) ✓ 26.3±0.02
MoEx with label smoothing ✓ 25.8±0.09
MoEx (λ = 0.9, label interpolation, proposed) ✓ 25.5±0.09
Table 8: Ablation study on different design choices.

In the previous section we have established that MoEx yields significant improvements across many tasks and model architectures. In this section we shed light on which design choices crucially contribute to these improvements. Table 8 shows results on CIFAR-100 with a ResNet-110 architecture, averaged over 3 runs. The column titled MoEx indicates whether we performed moment exchange or not.

Label smoothing.

First, we investigate if the positive effect of MoEx can be attributed to label smoothing (Szegedy et al., 2016). In label smoothing, one changes the loss of a sample $x$ with label $y$ to

(4)  $(1 - \epsilon)\,\ell(x, y) + \frac{\epsilon}{K}\sum_{k=1}^{K} \ell(x, k),$

where $K$ denotes the total number of classes. Essentially the neural network is not trained to predict one class with 100% certainty, but instead only up to a confidence of $1 - \epsilon$.
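As a concrete reference point, a minimal PyTorch sketch of the smoothed loss in Eq. (4) could look as follows; the interface (logits of shape (B, K) and integer targets) is our assumption.

import torch
import torch.nn.functional as F

def label_smoothing_loss(logits, target, eps=0.1):
    # (1 - eps) * loss on the true class + eps * average loss over all K classes
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(dim=-1, index=target.unsqueeze(1)).squeeze(1)
    uniform = -log_probs.mean(dim=-1)
    return ((1.0 - eps) * nll + eps * uniform).mean()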

Further, we evaluate label interpolation only, i.e., MoEx with label interpolation but without any feature augmentation, which isolates the effect of label interpolation alone. Both variants yield some improvement over the baseline, but are significantly worse than MoEx.

Interpolated targets. The last three rows of Table 8 demonstrate the necessity of utilizing the moments for prediction. We investigate two variants: MoEx with λ = 1, which corresponds to no label interpolation, and MoEx with label smoothing (essentially assigning a small loss to all labels other than $y_A$). The last row corresponds to our proposed method, MoEx with λ = 0.9.

Two general observations can be made: 1. interpolating the labels is crucial for MoEx to be beneficial — the approach leads to absolutely no improvement when we set λ = 1; 2. it is also important to perform the moment exchange itself; without it, MoEx reduces to a version of label smoothing, which yields significantly smaller benefits.

Moments to exchange Test Error
No MoEx 26.3±0.10
All features in a layer, i.e. LN 25.6±0.02
Features in each channel, i.e. IN 25.7±0.13
Features in a group of channels, i.e. GN (g=4) 25.7±0.09
Features at each position, i.e. PONO 25.5±0.09
1st moment at each position 25.9±0.06
2nd moment at each position 26.0±0.13
Unnormalized 2nd moment at each position, i.e. LRN 26.3±0.05
Table 9: MoEx with different normalization methods on CIFAR-100. For each normalization, we report the mean and standard error of 3 runs with the best configuration.

Choices of normalizations. We study how MoEx performs when using moments from LayerNorm (LN) (Ba et al., 2016), InstanceNorm (IN) (Ulyanov et al., 2016), PONO (Li et al., 2019), GroupNorm (GN) (Wu and He, 2018), and local response normalization (LRN) (Krizhevsky et al., 2012). For LRN, we use a recent variant (Karras et al., 2018) which uses the unnormalized 2nd moment at each position. We conduct experiments on CIFAR-100 with ResNet-110. For each normalization, we do a hyper-parameter sweep to find the best setup [12]. Table 9 shows classification results of MoEx with the various feature normalization methods on CIFAR-100, averaged over 3 runs (with corresponding standard errors). We observe that MoEx generally works with all normalization approaches; however, PONO has a slight but significant edge, which we attribute to the fact that it captures the structural information of the features most effectively. Different normalizations work best at different layers: with PONO we apply MoEx in the first layer, whereas the LN moments work best when exchanged after the second stage of a 3-stage ResNet-110, and GN and IN are better at the first stage. We hypothesize that this is because PONO moments capture local information, while LN and IN compute global statistics, which are better encoded at later stages of a ResNet. For image classification, using PONO seems generally best; for some other tasks we observe that using moments from IN can be more favorable (see subsections 4.4 and 4.5).
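For reference, the rows of Table 9 only differ in the dimensions over which the moments are computed. The following is a hedged sketch of a GroupNorm-style (g=4) moment extraction in the spirit of the normalization helper of Algorithm 1; the grouping reshape is our assumption, not the authors' code.

import torch

# x: features of shape (B, C, H, W); g must divide C.
def group_moments(x, g=4, epsilon=1e-5):
    b, c, h, w = x.shape
    xg = x.view(b, g, c // g, h, w)
    mean = xg.mean(dim=[2, 3, 4], keepdim=True)                 # one mean per group
    std = (xg.var(dim=[2, 3, 4], keepdim=True) + epsilon).sqrt()
    x_hat = ((xg - mean) / std).view(b, c, h, w)
    return x_hat, mean, std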

Model λ p Top-1 / Top-5 Error (%)
ResNet-50 1 0 23.1 / 6.6
0.9 0.25 22.6 / 6.6
0.9 0.5 22.4 / 6.4
0.9 0.75 22.3 / 6.3
0.3 1 22.9 / 6.9
0.5 1 22.2 / 6.4
0.7 1 21.9 / 6.2
0.9 1 21.9 / 6.1
0.95 1 22.5 / 6.3
0.99 1 22.6 / 6.5
Table 10: Ablation study on ImageNet with different λ (interpolation weight) and p (exchange probability), trained for 300 epochs.

5.2 Ablation Study on Hyper-parameters

The weights λ and 1−λ serve as the interpolation weights of the labels $y_A$ and $y_B$, respectively. To explore the relationship between λ and model performance, we train a ResNet-50 on ImageNet with various values of λ at exchange probability p = 1, using PONO. The results are summarized in Table 10. We observe that higher λ generally leads to lower error, probably because more information is captured in the normalized features than in the moments. After all, the moments only capture general statistics, whereas the features have many channels and can capture texture information in great detail. We also investigate various values of the exchange probability p (for fixed λ = 0.9); on the ImageNet data, p = 1 (i.e., applying MoEx to every image) tends to perform best.

5.3 Robustness and Uncertainty.

To estimate the robustness of models trained with MoEx, we follow the procedure proposed by Hendrycks et al. (2019) and evaluate our models on their ImageNet-A data set, which contains 7500 natural images (not originally part of ImageNet) that are misclassified by a publicly released ResNet-50 in torchvision [13]. We compare our models with various publicly released pretrained models including Cutout (DeVries and Taylor, 2017), Mixup (Zhang et al., 2018), CutMix (Yun et al., 2019), Shape-ResNet (Geirhos et al., 2018), and the recently proposed AugMix (Hendrycks et al., 2020). We report all 5 metrics implemented in the official evaluation code [14]: model accuracy (Acc), root mean square calibration error (RMS), mean absolute distance calibration error (MAD), the area under the response rate accuracy curve (AURRA), and soft F1 (Sokolova et al., 2006; Hendrycks et al., 2019). Table 11 summarizes all results. In general MoEx performs well across the board, and the combination of MoEx and CutMix leads to the best performance on most of the metrics.

Name Acc ↑ RMS ↓ MAD ↓ AURRA ↑ Soft F1 ↑
ResNet-50 (torchvision) 0 62.6 55.8 0 60.0
Shape-ResNet 2.3 57.8 50.7 1.8 62.1
AugMix 3.8 51.1 43.7 3.3 66.8
Fast AutoAugment 4.7 54.7 47.8 4.5 62.3
Cutout 4.4 55.7 48.7 3.8 61.7
Mixup 6.6 51.8 44.4 7.0 63.7
Cutmix 7.3 45.0 36.5 7.2 69.3
ResNet-50 (300 epochs) 4.2 54.0 46.8 3.9 63.7
MoEx 5.5 43.2 34.2 5.7 72.9
Cutmix + MoEx 8.4 42.2 34.0 9.4 70.4
Table 11: The performance of ResNet-50 variants on ImageNet-A. The up-arrow represents the higher the better, the down-arrow represents the lower the better.

6 Conclusion and Future Work

In this paper we propose MoEx, a novel data augmentation algorithm. Instead of disregarding the moments extracted by the (intra-instance) normalization layer, it forces the neural network to pay special attention to them. We show empirically that this approach consistently improves classification accuracy and robustness. As an augmentation method operating on features, MoEx is complementary to existing state-of-the-art approaches and can be readily combined with them. Beyond vision tasks, we also apply MoEx to speech and natural language processing tasks. As future work we plan to investigate alternatives to feature normalization for the invertible function $F$. For instance, one could factorize the hidden features or learn decompositions (Chen et al., 2011). Further, $F$ could also be learned using models like invertible ResNets (Behrmann et al., 2019) or flow-based methods (Tabak and Vanden-Eijnden, 2010; Rezende and Mohamed, 2015).

Acknowledgments

This research is supported in part by grants from Facebook, the National Science Foundation (III-1618134, III-1526012, IIS1149882, IIS-1724282, and TRIPODS-1740822), the Office of Naval Research DOD (N00014-17-1-2175), and the Bill and Melinda Gates Foundation. We are thankful for the generous support of Zillow and SAP America Inc. In particular, we appreciate the valuable discussions with Geoff Pleiss and Tianyi Zhang.


Appendix A Additional Experiments

A.1 Finetuning ImageNet-pretrained models on Pascal VOC for Object Detection

Setup. To demonstrate that MoEx encourages models to learn better image representations, we apply models pre-trained on ImageNet with MoEx to downstream tasks, specifically object detection on the Pascal VOC 2007 dataset. We use Faster R-CNN (Ren et al., 2015) with C4 or FPN (Lin et al., 2017) backbones implemented in Detectron2 (Wu et al., 2019b), following their default training configurations. We consider four ImageNet-pretrained models: the ResNet-50 provided by He et al. (2015), our ResNet-50 baseline trained for 300 epochs, our ResNet-50 trained with CutMix (Yun et al., 2019), and our ResNet-50 trained with MoEx. A Faster R-CNN is initialized with these pretrained weights, finetuned on the Pascal VOC 2007 + 2012 training data, tested on the Pascal VOC 2007 test set, and evaluated with the PASCAL VOC style metric, average precision at IoU 50%, which we call APVOC (AP50 in Detectron2). We also report the MS COCO (Lin et al., 2014) style average precision metric APCOCO, which has recently been considered a better choice. Notably, MoEx is not applied during finetuning.

Results. Table 12 shows the average precision of the different initializations. We find that MoEx provides a better initialization than the baseline ResNet-50, is competitive with CutMix (Yun et al., 2019) on this downstream task, and leads to slightly better performance than the baseline regardless of backbone architecture.

Backbone Initialization APVOC APCOCO
C4 ResNet-50 (default) 80.3 51.8
ResNet-50 (300 epochs) 81.2 53.5
ResNet-50 + CutMix 82.1 54.3
ResNet-50 + MoEx 81.6 54.6
FPN ResNet-50 (default) 81.8 53.8
ResNet-50 (300 epochs) 82.0 54.2
ResNet-50 + CutMix 82.1 54.3
ResNet-50 + MoEx 82.3 54.3
Table 12: Object detection on PASCAL VOC 2007 test set using Faster R-CNN whose backbone is initialized with different pretrained weights. We use either the original C4 or feature pyramid network (Lin et al., 2017) backbone.

Appendix B MoEx Pytorch Implementation

Algorithm 1 shows example code of MoEx in PyTorch (Paszke et al., 2017).

import torch

# x: a batch of features of shape (batch_size,
#    channels, height, width),
# y: onehot labels of shape (batch_size, n_classes)
# norm_type: type of the normalization to use
def moex(x, y, norm_type):
    x, mean, std = normalization(x, norm_type)
    ex_index = torch.randperm(x.shape[0])
    # re-inject the moments of a randomly chosen partner in the batch
    x = x * std[ex_index] + mean[ex_index]
    y_b = y[ex_index]
    return x, y, y_b

# output: model output
# y: original labels
# y_b: labels of the exchanged moments
# loss_func: loss function used originally
# lam: interpolation weight lambda
def interpolate_loss(output, y, y_b, loss_func, lam):
    return lam * loss_func(output, y) + \
        (1. - lam) * loss_func(output, y_b)

def normalization(x, norm_type, epsilon=1e-5):
    # decide how to compute the moments
    if norm_type == 'pono':
        norm_dims = [1]
    elif norm_type == 'instance_norm':
        norm_dims = [2, 3]
    else:  # layer norm
        norm_dims = [1, 2, 3]
    # compute the moments
    mean = x.mean(dim=norm_dims, keepdim=True)
    var = x.var(dim=norm_dims, keepdim=True)
    std = (var + epsilon).sqrt()
    # normalize the features, i.e., remove the moments
    x = (x - mean) / std
    return x, mean, std
Algorithm 1: Example code of MoEx in PyTorch.

Footnotes

  1. https://github.com/bearpaw/pytorch-classification
  2. https://github.com/clovaai/CutMix-PyTorch
  3. https://github.com/NVIDIA/apex/tree/master/examples/imagenet
  4. We attribute the Speech Command dataset to the Tensorflow team and AIY project: https://ai.googleblog.com/2017/08/launching-speech-commands-dataset.html
  5. https://github.com/tugstugi/pytorch-speech-commands
  6. https://github.com/rusty1s/pytorch_geometric
  7. We do a hyper-parameter search over p and λ, and over whether to use PONO or InstanceNorm.
  8. BERTScore is a newly proposed evaluation metric for text generation based on matching contextual embeddings extracted from BERT or RoBERTa (Devlin et al., 2019; Liu et al., 2019) and has been shown to be more correlated with human judgments.
  9. https://github.com/Tiiiger/bert_score/blob/master/journal/rescale_baseline.md
  10. Hash code: bert-base-multilingual-cased_L9_no-idf_version=0.3.0(hug_trans=2.3.0)-rescaled
  11. Hash code: roberta-large_L17_no-idf_version=0.3.0(hug_trans=2.3.0)-rescaled
  12. We select the best result from experiments sweeping p and λ. We choose the best layer among the 1st layer, 1st stage, 2nd stage, and 3rd stage. For each setting, we obtain the mean and standard error out of 3 runs with different random seeds.
  13. https://download.pytorch.org/models/resnet50-19c8e357.pth
  14. https://github.com/hendrycks/natural-adv-examples

References

  1. Layer normalization. arXiv preprint arXiv:1607.06450. Cited by: §1, §1, §2, §3, §5.1.
  2. Invertible residual networks. In International Conference on Machine Learning, pp. 573–582. Cited by: §6.
  3. Understanding batch normalization. In Advances in Neural Information Processing Systems, pp. 7694–7705. Cited by: §1, §2.
  4. Tagged back-translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pp. 53–63. Cited by: §2.
  5. Report on the 11th iwslt evaluation campaign, iwslt 2014. In Proceedings of the International Workshop on Spoken Language Translation, Cited by: §4.5.
  6. Vicinal risk minimization. In Advances in neural information processing systems, pp. 416–422. Cited by: §1.
  7. Automatic feature decomposition for single view co-training. In ICML, Cited by: §6.
  8. Autoaugment: learning augmentation strategies from data. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 113–123. Cited by: §2, Table 4.
  9. RandAugment: practical data augmentation with no separate search. arXiv preprint arXiv:1909.13719. Cited by: §2.
  10. Imagenet: a large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Cited by: §4.2.
  11. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Cited by: §4.5, footnote 8.
  12. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552. Cited by: §2, Table 2, Table 4.
  13. Classical structured prediction losses for sequence to sequence learning. In Proceedings of NAACL-HLT, pp. 355–364. Cited by: §4.5.
  14. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 489–500. Cited by: §2.
  15. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, Cited by: §4.4.
  16. Data augmentation and dynamic linear models. Journal of time series analysis 15 (2), pp. 183–202. Cited by: §1.
  17. ImageNet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness. arXiv preprint arXiv:1811.12231. Cited by: Table 4, §5.3.
  18. Dropblock: a regularization method for convolutional networks. In Advances in Neural Information Processing Systems, pp. 10727–10737. Cited by: §2, Table 2, Table 4.
  19. Accurate, large minibatch sgd: training imagenet in 1 hour. arXiv preprint arXiv:1706.02677. Cited by: §4.2.
  20. A simple approximation for unbiased estimation of the standard deviation. The American Statistician 25 (4), pp. 30–32. Cited by: §4.1.
  21. Deep pyramidal residual networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6307–6315. Cited by: §4.1.
  22. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385. Cited by: §A.1, §1, §4.1.
  23. AugMix: a simple data processing method to improve robustness and uncertainty. Proceedings of the International Conference on Learning Representations (ICLR). Cited by: §5.3.
  24. Natural adversarial examples. arXiv preprint arXiv:1907.07174. Cited by: §5.3.
  25. Convolutional networks with dense connectivity. IEEE transactions on pattern analysis and machine intelligence. Cited by: §1.
  26. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4700–4708. Cited by: §4.1.
  27. Deep networks with stochastic depth. In European conference on computer vision, pp. 646–661. Cited by: Table 2, Table 4.
  28. Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510. Cited by: §2.
  29. Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167. Cited by: §1, §1, §2, §3.
  30. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 1681–1691. Cited by: §2.
  31. Progressive growing of GANs for improved quality, stability, and variation. In International Conference on Learning Representations, Cited by: §5.1.
  32. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4401–4410. Cited by: §2.
  33. Towards understanding generalization via analytical learning theory. arXiv preprint arXiv:1802.07426. Cited by: §2.
  34. Ctrl: a conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858. Cited by: §1.
  35. Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §4.3, §4.4.
  36. Learning multiple layers of features from tiny images. Technical report Citeseer. Cited by: §4.1.
  37. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105. Cited by: §1, §1, §5.1.
  38. Efficient backprop. In Neural Networks: Tricks of the trade, G. Orr and M. K. (Eds.), Cited by: §2.
  39. Positional normalization. In Advances in Neural Information Processing Systems, pp. 1620–1632. Cited by: §1, §1, §2, §3, §5.1.
  40. Sphering and its properties. Sankhyā: The Indian Journal of Statistics, Series A, pp. 119–133. Cited by: §2.
  41. Fast autoaugment. In Advances in Neural Information Processing Systems, pp. 6662–6672. Cited by: Table 4.
  42. Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2117–2125. Cited by: §A.1, Table 12.
  43. Microsoft coco: common objects in context. In European conference on computer vision, pp. 740–755. Cited by: §A.1.
  44. RoBERTa: a robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Cited by: footnote 8.
  45. Sgdr: stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983. Cited by: §4.2.
  46. Learning with marginalized corrupted features. In International Conference on Machine Learning, pp. 410–418. Cited by: §2.
  47. Fairseq: a fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pp. 48–53. Cited by: §4.5.
  48. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pp. 311–318. Cited by: §4.5.
  49. Automatic differentiation in pytorch. Cited by: Appendix B, §3.
  50. Pointnet++: deep hierarchical feature learning on point sets in a metric space. In Advances in neural information processing systems, pp. 5099–5108. Cited by: §4.4.
  51. Language models are unsupervised multitask learners. OpenAI Blog. Cited by: §1.
  52. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. Cited by: §1.
  53. Faster r-cnn: towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91–99. Cited by: §A.1.
  54. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770. Cited by: §6.
  55. Normalized online learning. In Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence, pp. 537–545. Cited by: §2.
  56. How does batch normalization help optimization?. In Advances in Neural Information Processing Systems, pp. 2483–2493. Cited by: §1.
  57. Incorporating invariances in support vector learning machines. In International Conference on Artificial Neural Networks, pp. 47–52. Cited by: §1, §2.
  58. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709. Cited by: §2.
  59. Efficient pattern recognition using a new transformation distance. In Advances in neural information processing systems, pp. 50–58. Cited by: §1, §1, §2.
  60. Best practices for convolutional neural networks applied to visual document analysis. In Proceedings of the Seventh International Conference on Document Analysis and Recognition-Volume 2, pp. 958. Cited by: §2.
  61. Hide-and-seek: forcing a network to be meticulous for weakly-supervised object and action localization. In International Conference on Computer Vision (ICCV), Cited by: §2.
  62. Beyond accuracy, f-score and roc: a family of discriminant measures for performance evaluation. In Australasian joint conference on artificial intelligence, pp. 1015–1021. Cited by: §5.3.
  63. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research 15 (1), pp. 1929–1958. Cited by: §1, Table 1.
  64. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818–2826. Cited by: §1, §5.1, Table 8.
  65. Density estimation by dual ascent of the log-likelihood. Communications in Mathematical Sciences 8 (1), pp. 217–233. Cited by: §6.
  66. Instance normalization: the missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022. Cited by: §1, §2, §3, §5.1.
  67. The art of data augmentation. Journal of Computational and Graphical Statistics 10 (1), pp. 1–50. Cited by: §1.
  68. Attention is all you need. In Advances in neural information processing systems, pp. 5998–6008. Cited by: §1, §3.
  69. Implicit semantic data augmentation for deep networks. In Advances in Neural Information Processing Systems, pp. 12614–12623. Cited by: §2, Table 4.
  70. Speech commands: a dataset for limited-vocabulary speech recognition. arXiv preprint arXiv:1804.03209. Cited by: §4.3.
  71. Pay less attention with lightweight and dynamic convolutions. In International Conference on Learning Representations, Cited by: §4.5, Table 7.
  72. Group normalization. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 3–19. Cited by: §1, §2, §3, §5.1.
  73. Detectron2. https://github.com/facebookresearch/detectron2. Cited by: §A.1.
  74. 3d shapenets: a deep representation for volumetric shapes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1912–1920. Cited by: §4.4.
  75. Adversarial examples improve image recognition. arXiv preprint arXiv:1911.09665. Cited by: §2.
  76. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1492–1500. Cited by: §4.1.
  77. Shakedrop regularization for deep residual learning. arXiv preprint arXiv:1802.02375. Cited by: Table 2.
  78. Qanet: combining local convolution with global self-attention for reading comprehension. arXiv preprint arXiv:1804.09541. Cited by: §2.
  79. CutMix: regularization strategy to train strong classifiers with localizable features. In International Conference on Computer Vision (ICCV), Cited by: §A.1, §A.1, §1, §2, §3, Table 2, §4.1, §4.2, Table 4, §5.3.
  80. Wide residual networks. arXiv preprint arXiv:1605.07146. Cited by: §4.1.
  81. Understanding deep learning requires rethinking generalization. In ICLR, Cited by: §1, §1.
  82. Mixup: beyond empirical risk minimization. Proceedings of the International Conference on Learning Representations (ICLR). Cited by: §1, §2, Table 2, §4.3, Table 4, Table 5, §5.3.
  83. BERTScore: evaluating text generation with bert. In International Conference on Learning Representations, Cited by: §4.5.
  84. Random erasing data augmentation. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), Cited by: §2.
  85. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578. Cited by: §1.