Classifying the classifier: dissecting the weight space of neural networks
Abstract
This paper presents an empirical study on the weights of neural networks, where we interpret each model as a point in a high-dimensional space – the neural weight space. To explore the complex structure of this space, we sample from a diverse selection of training variations (dataset, optimization procedure, architecture, etc.) of neural network classifiers, and train a large number of models to represent the weight space. Then, we use a machine learning approach for analyzing and extracting information from this space. Most centrally, we train a number of novel deep metaclassifiers with the objective of classifying different properties of the training setup by identifying their footprints in the weight space. Thus, the metaclassifiers probe for patterns induced by hyperparameters, so that we can quantify how much, where, and when these are encoded through the optimization process. This provides a novel and complementary view for explainable AI, and we show how metaclassifiers can reveal a great deal of information about the training setup and optimization, by only considering a small subset of randomly selected consecutive weights. To promote further research on the weight space, we release the neural weight space (NWS) dataset – a collection of 320K weight snapshots from 16K individually trained deep neural networks.
1 Introduction
The complex and non-linear nature of deep neural networks (DNNs) makes it difficult to understand how they operate, what features are used to form decisions, and how different selections of hyperparameters influence the final optimized weights. This has led to the development of methods in explainable AI (XAI) for visualizing and understanding neural networks, and in particular for convolutional neural networks (CNNs). Thus, many methods are focused on the input image space, for example by deriving images that maximize class probability or individual neuron activations [44, 51]. There are also methods which directly investigate neuron statistics of different layers [30], or use layer activations for information retrieval [27, 1, 39]. However, these methods primarily focus on local properties, such as individual neurons and layers, and an in-depth analysis of the full set of model weights and the weight space statistics has largely been left unexplored.
In this paper, we present a dissection and exploration of the neural weight space (NWS) – the space spanned by the weights of a large number of trained neural networks. We represent the space by training a total of 16K CNNs, where the training setup is randomly sampled from a diverse set of hyperparameter combinations. The performance of the trained models alone can give valuable information when related to the training setups, and suggest optimal combinations of hyperparameters. However, given its complexity, it is difficult to directly reason about the sampled neural weight space, e.g. in terms of Euclidean or other distance measures – there is a large number of symmetries in this space, and many possible permutations represent models from the same equivalence class. To address this challenge, we use a machine learning approach for discovering patterns in the weight space, by utilizing a set of metaclassifiers. These are trained with the objective of predicting the hyperparameters used in optimization. Since each sample in the hyperparameter space corresponds to points (models) in the weight space, the metaclassifiers seek to learn the inverse mapping from weights to hyperparameters. This gives us a tool to directly reason about hyperparameters in the weight space. To enable comparison between heterogeneous architectures and to probe for local information, we introduce the concept of local metaclassifiers operating on only small subsets of the weights in a model. The accuracy of a local metaclassifier enables us to quantify where differences due to hyperparameter selection are encoded within a model.
We demonstrate how we can find detailed information on how optimization shapes the weights of neural networks.
For example, we can quantify how the particular dataset used for training influences the convolutional layers more strongly than the deepest layers, and how initialization is the most distinguishing characteristic of the weight space.
We also see how weights closer to the input and output of a network diverge faster from the starting point than the “more hidden” weights.
Moreover, we can measure how properties in earlier layers, e.g. the filter size of convolutional layers, influence the weights in deeper layers. It is also possible to pinpoint how individual features of the weights influence a metaclassifier, e.g. how a majority of the differences imposed on the weight space by the optimizer are located in the bias weights of the convolutional layers. All such findings could aid in understanding DNNs and help future research on neural network optimization. Also, since we show that a large amount of information about the training setup can be revealed by metaclassifiers, the techniques are important in privacy-related problem formulations, providing a tool for leaking information on a black-box model without any knowledge of its architecture.
In summary, we present the following set of contributions:

We use the neural weight space as a general setting for exploring properties of neural networks, by representing it using a large number of trained CNNs.

We release the neural weight space (NWS) dataset, comprising 320K weight snapshots from 16K individually trained nets, together with scripts for training more samples and for training metaclassifiers.
We introduce the concept of neural network metaclassification for performing a dissection of the weight space, quantifying how much, where and when the training hyperparameters are encoded in the trained weights.

We demonstrate how a large amount of information about the training setup of a network can be revealed by only considering a small subset of consecutive weights, and how hyperparameters can be practically compared across different architectures.
We see our study as a first step towards understanding and visualizing neural networks from a direct inspection of the weight space. This opens up new possibilities in XAI, for understanding and explaining learning in neural networks in a way that has not previously been explored. Also, there are many other potential applications of the sampled weight space, such as learning-based initialization and regularization, learning measures for estimating distances between networks, learning to prevent privacy leakage of the training setup, learning model compression and pruning, learning-based hyperparameter selection, and many more.
Throughout the paper we use the term weights to denote the trainable parameters of neural nets (including bias and batch normalization parameters), and the term hyperparameters to denote the non-trainable parameters. For full generality we also include dataset choice and architectural specifications under the term hyperparameter.
2 Related work
Visualization and analysis: A significant body of work has been directed towards explainable AI (XAI) in deep learning, for visualization and understanding of different aspects of neural networks [19], e.g. using methods for neural feature visualization. These aim at estimating the input images to CNNs that maximize the activation of certain channels or neurons [44, 51, 32, 49, 42], providing information on the salient features picked up by a CNN. Another interesting viewpoint is how information is structured by neural networks. It is, for example, possible to define interpretability of the learned representations, with individual units corresponding to unique and interpretable concepts [53, 4, 52]. It has also been demonstrated how neural networks learn hierarchical representations [6].
Many methods rely on comparing and embedding DNN layer activations. For example, activations generated by multiple images can be concatenated to represent a single trained model [9], and activations of a large number of images can be visualized using dimensionality reduction [40]. The activations can also be used to regress the training objective, measuring the level of abstraction and separation of different layers [1], and for measuring similarity between different trainings and layers [27, 39], e.g. to show how different layers of CNNs converge. In contrast to these methods we operate directly on the weights, and we explore a very large number of trainings from heterogeneous architectures.
There are also previous methods which consider the model weights, e.g. in order to visualize the evolution of weights during training [12, 13, 28, 31, 2, 11]. Another common objective is to monitor the statistics of the weights during training, e.g. using tools such as Tensorboard. While these consider one or few models, our goal is to compare a large number of different models and learn how optimization encodes the properties of the training setup within the model weights.
By training a very large number of fully connected networks, Novak et al. study how the sensitivity to model input correlates with generalization performance [38]. For different combinations of base-training and fine-tuning objectives, Zamir et al. quantify the transfer-learning capability between tasks [50]. Yosinski et al. explore different configurations of pre-training and fine-tuning to evaluate and predict performance [48]. However, while monitoring the accuracy of many trainings is conceptually similar to our analysis in Section 4.2, our main focus is to search for information in the weights of all the trained models (Section 5).
Privacy and security: Related to the metaclassification described in Section 5 are previous works aiming at detecting privacy leakage in machine learning models. Many such methods focus on membership inference attacks, attempting to estimate if a sample was in the training data of a trained model, or generating plausible estimates of training samples [10, 18, 36]. Ateniese et al. train a set of models using different training data statistics [3]. The learned features are used to train a metaclassifier to classify the characteristics of the original training data. The metaclassifier is then used to infer properties of the non-disclosed training data of a targeted model. The method was tested on Hidden Markov Models and Support Vector Machines. A similar concept was used by Shokri et al. for training an attack model on output activations in order to determine whether a certain record is used as part of the model's training dataset [43]. Our dissection using metaclassifiers can reveal information about the training setup of an unknown DNN, provided that the trained weights are available; potentially with better accuracy than previous methods since we learn from a very large set of trained models.
Metalearning: There are many examples of methods for learning optimal architectures and hyperparameters [21, 34, 5, 54, 41, 55, 29]. However, these methods are mostly focused on evolutionary strategies to search for the optimal network design. In contrast, we are interested in comparing different hyperparameters and architectural choices, not only to gain knowledge on advantageous setups, but primarily to explore how the training setup affects the optimized weights of neural networks. To our knowledge there have not been any previous large-scale samplings of the weight space for the purpose of learning-based understanding of deep learning.
3 Weight space representation
We consider all weights of a model as being represented by a single point in the high-dimensional neural weight space (NWS). To create such a representation from the mixture of convolutional filters, biases, and weight matrices of fully-connected (FC) layers, we use a vectorization operation. The vectorization is performed layer by layer, followed by concatenation. For the l-th convolutional layer, all the filter weights w_{l,1}, ..., w_{l,N_l} from the N_l filters are concatenated, followed by the bias weights b_l,

v ← v ⊕ vec(w_{l,1}) ⊕ ... ⊕ vec(w_{l,N_l}) ⊕ b_l,   (1)

where vec(·) denotes vectorization and x ⊕ y concatenates the vectors x and y. For the FC layers, the weight matrices W_l are added using the same scheme,

v ← v ⊕ vec(W_l) ⊕ b_l.   (2)
Parameter  Values 

Dataset  MNIST [26], CIFAR10 [24], SVHN [37], STL10 [8], FashionMNIST [47] 
Learning rate  
Batch size  
Augmentation  Off, On 
Optimizer  ADAM [23], RMSProp [17], Momentum SGD 
Activation  ReLU [35], ELU [7], Sigmoid, TanH 
Initialization  Constant, Random normal, Glorot uniform, Glorot normal [14] 
Conv. filter size  
# of conv. layers  
# of FC layers  
Conv. layer width  
FC layer width 
Starting with an empty vector v, and repeating the vectorization operations for all layers, we arrive at the final weight vector v. For simplicity, we have not included indices over the 2D convolutional filters and weight matrices in the notation. Moreover, additional learnable weights, e.g. batch normalization parameters, can simply be added after the biases of each layer. Although the vectorization may rearrange the 2D spatial information e.g. in convolutional filters, there is still spatial structure in the 1D vector. We recognize that there is ample room for an improved NWS representation, e.g. accounting for weight space permutations and the 2D nature of filters. However, we focus on starting the exploration of the weight space with a representation that is as simple as possible, and from the results in Section 5 we will see that it is possible to extract a great deal of useful information from the vectorized weights.
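As a concrete illustration of the vectorization scheme in Eqs. (1)–(2), the following Python sketch flattens a list of per-layer weights into a single 1D vector. The helper name and input layout are illustrative assumptions, not taken from the released scripts:

```python
import numpy as np

def vectorize_weights(layer_params):
    """Flatten model weights into a single 1D NWS vector, layer by
    layer, mirroring the per-layer vectorization and concatenation:
    `layer_params` is a list of (weights, biases) pairs, with conv
    filters of shape (N, k, k, c) or FC matrices of shape
    (n_in, n_out). Additional learnable weights (e.g. batch
    normalization parameters) could be appended after the biases."""
    v = np.empty(0)
    for w, b in layer_params:
        # vec() of the filters / weight matrix, then the bias weights
        v = np.concatenate([v, w.ravel(), b.ravel()])
    return v

# Toy model: one conv layer (4 filters of 3x3x1) and one FC layer (16x10)
layers = [(np.zeros((4, 3, 3, 1)), np.zeros(4)),
          (np.zeros((16, 10)), np.zeros(10))]
v = vectorize_weights(layers)   # 36 + 4 + 160 + 10 = 210 weights
```

Note that, as in the text, the flattening discards the 2D arrangement of each filter while preserving a fixed 1D ordering of the weights.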
4 The NWS dataset
In this section, we describe the sampling of the NWS dataset. Then, we show how we can correlate training setup with model performance by regressing the test accuracy from hyperparameters.
4.1 Sampling
To generate points in the NWS, we train CNNs by sampling a range of different hyperparameters, as specified in Table 1. For each training, the hyperparameters are selected randomly, similarly to previous techniques for hyperparameter search [5]. It is difficult to estimate the number of SGD steps needed for optimization, since this varies with most of the hyperparameters. Instead, we rely on early stopping to define convergence. For each training we export weights at 20 uniformly sampled points along the optimization trajectory. In order to train and manage the large number of models, we use relatively small CNNs automatically generated based on the architectural hyperparameters shown in Table 1.
Name  Description  Quantity 

Random hyperparameters (including architecture)  13K (10K/3K train/test)  
Random hyperparameters (fixed architecture)  3K (2K/1K train/test) 
The architectures are defined by 6–10 layers, and in total between 20K and 390K weights each. They follow a standard design, with a number of convolutional layers and 3 max-pooling layers, followed by a set of fully-connected (FC) layers. The loss is the same for all trainings, specified by cross-entropy. For regularization, all models use dropout [46] with 50% keep probability after each fully connected layer. Moreover, we use batch normalization [22] for all trainings and layers. Without the normalization, many of the more difficult hyperparameter settings do not converge (the supplementary material contains a discussion around this).
We conduct 13K separate trainings with random hyperparameters, and 3K trainings with fixed architecture for the purpose of global metaclassification (Section 5.2). Table 2 lists the resulting NWS datasets used throughout the paper. For detailed explanations of the training setup, and extensive training statistics (convergence, distribution of test accuracy, training time, model size, etc.), we refer to the supplementary material.
4.2 Regressing the test accuracy
To gain an understanding of the influence of the different hyperparameters in Table 1, we regress the test accuracy of the sampled set of networks. Given the hyperparameters θ, we model the test accuracy â with a linear relationship,

â(θ) = c_0 + Σ_{i ∈ O} c_i θ_i + Σ_{i ∈ C} Σ_{j=1}^{K_i} c_{i,j} [θ_i = j],   (3)

where c are the model coefficients and K_i is the number of categories for hyperparameter θ_i. O is the set of ordered hyperparameters, and C is the set of categorical hyperparameters. The categorical hyperparameters are split into one binary variable for each category, as denoted by the Iverson bracket [θ_i = j]. We fit one linear model for each dataset, which means that we have in total 10 categorical and 1 ordered (learning rate) parameters for each model (see Table 1). Although some of the categorical parameters actually are ordered (batch size, filter size, etc.), we split these to fit one descriptive correlation for each of the categories. In total we have 32 categorical, 1 ordered and 1 constant coefficient, so that the size of the coefficient set is 34.
The distribution of test accuracies on a particular dataset shows two modes: one with successful trainings and one with models that do not learn well. This makes it difficult to fit a linear model to the test accuracy. Instead, we focus only on the mode of models that have learned something useful, by rejecting all trainings with test accuracy lower than a certain threshold in-between the two modes. This filtering reduces the set from 13K to around 10.5K. Further, for each dataset, we normalize the test accuracy to have zero mean and unit variance. Thus, a positive model coefficient explains a positive effect on the test accuracy and vice versa, and the magnitudes are similar between different datasets. The results of fitting the model to one dataset, CIFAR10, are displayed in Figure 1.
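The fitting procedure described above can be sketched in a few lines of numpy. This is a minimal illustration assuming an integer coding of the categorical hyperparameters and a given rejection threshold; the function name and signature are hypothetical:

```python
import numpy as np

def fit_accuracy_model(lr, cats, acc, threshold):
    """Least-squares fit of the linear accuracy model for one dataset.
    lr   : (n,) ordered hyperparameter (learning rate)
    cats : (n, m) integer-coded categorical hyperparameters
    acc  : (n,) test accuracies
    Trainings below `threshold` (the failure mode) are rejected, and
    accuracies are normalized to zero mean / unit variance, so that a
    positive coefficient explains a positive effect on accuracy."""
    keep = acc >= threshold
    lr, cats, acc = lr[keep], cats[keep], acc[keep]
    acc = (acc - acc.mean()) / acc.std()

    cols = [np.ones(len(acc)), lr]        # constant + ordered coefficient
    for j in range(cats.shape[1]):        # one Iverson-bracket one-hot
        for val in np.unique(cats[:, j]): # per category
            cols.append((cats[:, j] == val).astype(float))
    X = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(X, acc, rcond=None)
    return coef
```

The least-squares solver returns the minimum-norm solution, which handles the rank deficiency introduced by the one-hot columns summing to the constant term.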
On average, the single most influential parameter is the initialization, followed by the activation function and optimizer, and it is clear how large an impact modern techniques have had on optimization (such as the ADAM optimizer, ReLU/ELU activation and Glorot initialization). For architecture-specific parameters, a general trend is to promote wider models. However, the number of FC layers has an overall negative correlation. This can be explained by many layers being more difficult to optimize, so that performance suffers when a less effective optimizer or initialization is used. Finally, we recognize that a linear model only explains some of the correlations, but it helps in forming an overall understanding of the hyperparameters.
5 Metaclassification
The objective of a metaclassifier is to learn the mapping w → θ̂, i.e. to estimate a specific hyperparameter θ from a weight vector w. The performance of the prediction θ̂ gives us a notion for comparing different weight vectors w in terms of hyperparameters. We first give a motivation and definition, followed by examples of global and local metaclassification using metaclassifiers of different complexities.
5.1 Motivation and definition
For a model trained using hyperparameters θ, what is specific about the learned weights w when comparing different θ? For example, given two sets of weights w_1 and w_2, trained using different hyperparameters θ_1 and θ_2, respectively, we expect the weights to converge to different locations in the weight space. However, it is difficult to relate to or reason about these locations based on direct inspection of the weights. For example, the Euclidean inter-distance between w_1 and w_2 may very well be smaller than the intra-distance between two trainings that share the same hyperparameters, due to the complicated and permutable structure of the weight space. In order to find the decision boundary between θ_1 and θ_2, we can instead learn it from a large number of samples w_1 and w_2, using a model g. The model can thus be used to determine in which region (related to this decision boundary) a new weight sample w is located. This gives us a notion of quantifying how much of θ_1 or θ_2 is encoded in w.
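The permutation symmetry underlying this argument can be made concrete with a small numpy sketch (a toy illustration, not from the paper): permuting the hidden units of a two-layer network leaves its function unchanged while moving its weight vector a nonzero Euclidean distance away.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer MLP: y = W2 @ relu(W1 @ x + b1) + b2
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def forward(W1, b1, W2, b2, x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Permuting the hidden units yields a functionally identical model...
p = np.roll(np.arange(8), 1)
W1p, b1p, W2p = W1[p], b1[p], W2[:, p]

x = rng.normal(size=4)
same_function = np.allclose(forward(W1, b1, W2, b2, x),
                            forward(W1p, b1p, W2p, b2, x))

# ...whose vectorized weights nevertheless lie at a nonzero distance.
v  = np.concatenate([W1.ravel(),  b1,  W2.ravel(),  b2])
vp = np.concatenate([W1p.ravel(), b1p, W2p.ravel(), b2])
distance = np.linalg.norm(v - vp)
```

This is why raw Euclidean distances between weight vectors are unreliable, and why a learned decision boundary is used instead.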
Given a CNN classifier f(x; w), parameterized by the trainable weights w and operating on image samples x, a global metaclassifier is described by g(w; φ), where the weight vectors w are static samples of the weight space and φ is the model parameterization. The objective of g is to perform classification of the hyperparameters as shown in Table 1, to determine e.g. which dataset was used in training, if augmentation was performed, or which optimizer was used. g takes the vectorized weights w as input (see Section 3).
Models: We consider feature-based metaclassification, as well as deep models applied on the raw weight input. The features are specified from 8 different statistical measures of a weight vector: mean, variance, skewness (third standardized moment), and five-number summary (1, 25, 50, 75 and 99 percentiles). The measures are applied both directly on the weights w and on the weight gradients ∇w, for a total of 16 features. The features are used for training support vector machines (SVMs), testing both linear and radial basis function (RBF) kernels. These models provide simple linear and non-linear baselines for comparison with a more advanced deep metaclassifier (DMC). We also tried performing logistic regression on the features, but performance was not better than random guessing.
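The 16-dimensional feature vector can be computed as follows. This is a hedged sketch: the paper does not spell out the discretization of the weight gradient, so the first-order difference along the vector is an assumption made here:

```python
import numpy as np

def weight_features(w):
    """The 16 statistical measures used by the feature-based
    metaclassifiers: mean, variance, skewness (third standardized
    moment) and the 1/25/50/75/99 percentiles, computed on the
    weights and on their gradients. The gradient is taken as the
    first-order difference along the weight vector (an assumption)."""
    def stats(x):
        mu, sd = x.mean(), x.std()
        return [mu, x.var(), ((x - mu) ** 3).mean() / sd ** 3,
                *np.percentile(x, [1, 25, 50, 75, 99])]
    return np.array(stats(w) + stats(np.diff(w)))
```

Feature vectors like these are what the linear and RBF SVMs below are trained on.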
A DMC is designed as a 1D CNN in order to handle the vectorized weights. Using convolutions on the vectorized weights can be motivated from three different perspectives: 1) there is spatial structure in the weight vector which we can explore, especially for the convolutional layers, 2) for local DMCs we are interested in spatial invariance, so that any subset of neighboring weights can be considered, and 3) for global DMCs we have a large input weight vector from which we need to extract a lowdimensional feature representation before using fully connected components.
Data filtering: Since the objective of a metaclassifier is to explore how hyperparameters are encoded through the optimization process, we are only interested in models of the weight space that have learned something useful. Therefore, we discard trainings that have not converged, where convergence is defined as specified in Section 4.2.
5.2 Global metaclassification
A global metaclassifier considers all trainable weights from each CNN. We train on the fixed-architecture set in Table 2, where each weight vector is composed of 92,868 weights. The DMC model consists of 15 1D convolutional layers followed by 6 FC layers. For the SVMs, we consider two methods for extracting the statistical measures mentioned in Section 5.1 – one is to evaluate statistics over the complete set of weights, and the other is to do this layer by layer. The layerwise method extracts separate statistics for multiplicative weights, bias weights, and for each of the batch normalization weights in a layer, yielding a total of 480 training features. We refer to the supplementary material for details on the models and training.
Figure 2 shows the performance of global metaclassifiers trained on 6 different hyperparameters. Considering the diversity of the training data, all of the hyperparameters except for batch size can be predicted with fairly high accuracy using a DMC. This shows that there are many features in the weight space which are characteristic of different hyperparameters. However, the most surprising results are achieved with a linear SVM and layerwise weight statistics, with performance not very far from the deep classifiers, and especially for the dataset, optimizer and activation hyperparameters. Apparently, using separate statistics for each layer and type of weight is enough to give a good estimation on which hyperparameter was used in training.
By inspecting the decision boundaries of the linear SVMs, we can get a sense of which features best explain a certain hyperparameter, as illustrated in Figure 3. The coefficients of a one-versus-rest SVM describe a vector that is orthogonal to the hyperplane in feature space which separates one class from the rest of the classes. The vector is oriented along the feature axes which are most useful for separating the classes. Taking the norm of the coefficients over the classes, we get an indication of which features were most important for separating the classes. The most indicative features for determining the dataset are the 1 and 99 percentiles of the gradient of filter weights in the first convolutional layer. As these filters are responsible for extracting simple features, and the gradient of the filters is related to the strength of edge extraction, there is a close link to the statistics of the training images, which explains the good performance of the linear SVM. When evaluating the statistics over all weights, it is not possible to have this direct connection to training data, and performance suffers. For the optimizer metaclassifier, on the other hand, most information comes from the statistics of bias weights in the convolutional layers, and in particular from percentiles of the biases and their gradients. For activation function metaclassification, the running mean stored by batch normalization in the FC layers contributes most to the linear SVM decisions. For initialization it is more difficult to find isolated types of weights that contribute most, and looking at the SVMs trained on 16 features over the complete weight vector we can see how initialization is better described by statistics computed globally over the weights.
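The feature-importance measure described above is straightforward to compute from a fitted one-versus-rest coefficient matrix. The example data here is hypothetical, used only to show the ranking:

```python
import numpy as np

def feature_importance(coef):
    """Per-feature importance as the norm, over classes, of the
    coefficient matrix of a linear one-versus-rest SVM (e.g. the
    `coef_` attribute of scikit-learn's LinearSVC, which has shape
    (n_classes, n_features))."""
    return np.linalg.norm(coef, axis=0)

# Hypothetical 2-class, 3-feature coefficients: feature 2 dominates
C = np.array([[ 0.1, 0.0,  2.0],
              [-0.2, 0.1, -1.5]])
ranking = np.argsort(-feature_importance(C))   # most important first
```

Large norms indicate feature axes along which the separating hyperplanes are steeply tilted, i.e. the statistics that carry the class signal.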
5.3 Local metaclassification
A local metaclassifier is the model g(w_{a:b}), where w_{a:b} is the subset of weights between indices a and b. SVMs use features extracted from w_{a:b}, while a local DMC is trained directly on w_{a:b}. A local DMC consists of 12 1D convolutional layers followed by 6 FC layers. The training data is composed of the set in Table 2. We use a fixed subset size of b − a weights, corresponding on average to a small fraction of a weight vector. DMCs are trained by randomly picking the starting index a for each minibatch, while SVMs use a fixed number of 10 randomly selected subsets of each weight vector.
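The subset sampling used to train local metaclassifiers can be sketched as follows; the function name and signature are illustrative assumptions:

```python
import numpy as np

def sample_subsets(v, size, n_subsets, rng):
    """Draw random consecutive weight subsets w_{a:a+size} from a
    vectorized model v. Local metaclassifiers see only such windows,
    so their learned features are invariant to where the window falls
    and, by extension, to the surrounding architecture."""
    starts = rng.integers(0, len(v) - size + 1, n_subsets)
    return np.stack([v[a:a + size] for a in starts]), starts

subsets, starts = sample_subsets(np.arange(1000.0), 64, 10,
                                 np.random.default_rng(0))
```

For DMC training a fresh start index would be drawn per minibatch, while for the SVMs a fixed set of 10 subsets per weight vector is extracted once.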
Figure 4 shows the performance of local metaclassifiers trained on 11 different hyperparameters from Table 1. For each hyperparameter, there are three individually trained DMCs based on: subsets of all weights, weights from only the convolutional layers, and only FC weights. In contrast to the global metaclassification it is not possible for an SVM to pinpoint statistics of one particular layer, which makes linear SVMs perform poorly. The RBF kernel improves performance, but mostly remains far from the DMC accuracies. Still, the results are consistently better than random guessing for most hyperparameters, so there is partial information contained in the statistical measures. The best SVM performance is achieved for the initialization hyperparameter. This makes intuitive sense, as the differences are mainly described by simpler statistics of the weights.
Considering that the local DMCs learn features that are invariant to the architecture, and use only a fraction of the weights, they perform very well compared to global DMCs. This is partly due to the larger training set, but also confirms how much information is stored locally in the weight vector. By comparing DMCs trained on only convolutional or FC weights, we can analyze where most of the features of a certain hyperparameter are stored, e.g. the dataset footprint is more pronounced in the convolutional layers. For the architectural hyperparameters, the filter size can be predicted to some extent from only FC weights, which points towards how settings in the convolutional layers affect the FC weights. Compared to the global DMCs, initialization is a more pronounced local property than e.g. the dataset.
Let w_{a:b}^{(t)} denote the subset of weights starting at position a, taken from optimization step t. The trained model g can then probe for information across different depths of a model, and track how information evolves during training. Sampling at different a and t, the result is a 2D performance map, see Figure 5, where (a, t) = (0, 0) is in the upper left corner of the map. For the DMC trained to detect the optimizer used, the performance is approximately uniform across the weights except for a high peak close to the first layers. This roughly agrees with the feature importance of the linear SVM in Figure 3, where information about the optimizer can be encoded in the bias weights of the early convolutional layers. Inspecting how the DMC performance for initialization decreases faster in the first layers (Figure 4(c)), we can see how learning diverges faster from the initialization point in the convolutional layers, which agrees with previous studies on how representations are learned [39]. However, we can also see the same tendency in the very last layers. That is, not only do the convolutional layers quickly diverge from the starting point to adapt to the image content; the last layers behave similarly when adapting to the output labels. However, looking at the minimum performance it is still easy to find patterns left from the initialization (see Figure 4), and this hyperparameter dominates the weight space locally. Connecting to the results in Section 4.2, initialization strategy was also the hyperparameter of the NWS dataset that showed the highest correlation with performance.
6 Discussion
Looking at the results of the metaclassifications, SVMs can perform reasonably well when considering per-layer statistics on the whole set of weights. However, for extracting information from a random subset of weights, the DMCs are clearly superior, pointing to more complex patterns than the statistics used by the SVMs. It is interesting how much information a set of DMCs can extract from a very small subset of the weights, demonstrating how an abundance of information is locally encoded in the weight space. This is interesting from the viewpoint of privacy leakage, but the information can also be used for gaining valuable insight into how optimization shapes the weights of neural networks. This provides a new perspective for XAI, where we see our approach as a first step towards understanding neural networks from a direct inspection of the weight space. For example, in understanding and refining optimization algorithms, we may ask: what are the differences in the learned weights caused by different optimizers? Or how do different activation functions affect the weights? Using a metaclassifier we can pinpoint how a majority of the differences caused by the optimizer are due to different distributions of the bias weights in the convolutional layers. The activation function used when training with batch normalization gives differences in the moving average used for batch normalization of the FC layers. Another interesting observation is the effect of the initialization in Figures 4(c) and 4(f), which points to how convolutional and final layers diverge faster from the initialization point. A potential implication is to motivate studying what is referred to as differential learning rates by the Fastai library [20]. This has been used for transfer learning, gradually increasing the learning rate for deeper layers, but there could be reason to investigate the technique in a wider context, and to look at tuning the learning rate of the last layers differently.
6.1 Limitations and future work
We have only scratched the surface of possible explorations in the neural weight space. There is a wealth of settings to be explored using metaclassification, where different combinations of hyperparameters could reveal the structures of how DNNs learn. So far, we have only considered small vanilla CNNs. Larger and more diverse architectures and training regimes could be considered, e.g. ResNets [16], GANs [15], RNNs, dilated and strided convolutions, as well as different input resolution, loss functions, and tasks.
The performance of DMCs can most likely be improved, e.g. by refining the representation of the weights in Section 3. Moreover, by aggregating information from the full weight vector using local DMCs, there is potential to learn many things about a black-box DNN with access only to the trained weights. Also, we consider only the weight space itself; a possible extension is to combine weights with layer activations for certain data samples.
While XAI is one of the apparent applications of studying neural network weights in closer detail, there are many other important applications that would benefit from a large-scale analysis of the weight space, e.g. model compression, model privacy, model pruning, and distance metric learning. Another interesting direction would be to learn weight generation, e.g. by means of GANs or VAEs, which could be used for initialization or ensemble learning. The model in Section 4.2 was used to show correlations between hyperparameters and model performance. However, the topic of learning-based hyperparameter optimization [21, 34] could be explored in closer detail using the weight space sampling. Also, metalearning could aim to include DMCs during training in order to steer optimization towards good regions in the NWS.
7 Conclusions
This paper introduced the neural weight space (NWS) as a general setting for dissection of trained neural networks. We presented a dataset composed of 16K trained CNN classifiers, which we make available for future research in explainable AI and metalearning. The dataset was studied in terms of performance for different hyperparameter selections, but most importantly we used metaclassifiers for reasoning about the weight space. We showed how a significant amount of information on the training setup can be revealed by only considering a small fraction of random consecutive weights, pointing to the abundance of information locally encoded in DNN weights. We used this information to learn how optimization shapes the weights of neural networks. The results indicate how much, where, and when the optimization encodes information about the particular hyperparameters in the weight space.
From the results, we pinpointed initialization as one of the most fundamental local features of the space, followed by activation function and optimizer. Although the actual dataset used for training a network also has a significant impact, the aforementioned properties are in general easier to distinguish, pointing to how optimization techniques can have a more profound effect on the weights than the training data. We see many possible directions for future work, e.g. metalearning for improving optimization in deep learning, such as using metaclassifiers during training to steer optimization towards good regions in the weight space.
This project was supported by the Wallenberg Autonomous Systems and Software Program (WASP) and the strategic research environment ELLIIT.
Supplementary material
A NWS sampling
a.1 Hyperparameters
The different hyperparameter choices are listed in Table 1 in the main paper. Here, we provide details on how the hyperparameters are specified. Note that, as in the paper, we use a broad definition of hyperparameters which includes dataset and architectural design.
Dataset:
MNIST [26] uses 55K/10K train/test images at 28×28 pixels resolution. These are upsampled and replicated to be 32×32×3 pixels. CIFAR10 [24] uses 45K/10K train/test images at 32×32×3 pixels. SVHN [37] uses 73,257/26,032 train/test images at 32×32×3 pixels. STL10 [8] uses 5K/8K train/test images at 96×96×3 pixels resolution. These are downsampled to 32×32×3 pixels. STL10 also provides unlabeled images, but these are not utilized in our trainings. FashionMNIST [47] uses 55K/10K train/test images at 28×28 pixels resolution. These are upsampled and replicated to be 32×32×3 pixels.
All datasets have 10 classes each. All datasets except for SVHN have an equal number of images for each class. The class imbalance of SVHN is handled as described in Section A.2.
Architecture:
There are 5 hyperparameters () for specifying the CNN architecture, which are used by Algorithm 1 to build the network. With the random selection of these hyperparameters, the sampled weight space contains models with between 6 and 10 layers, and between 20K and 390K weights each. The distribution of model sizes is shown in Figure 5(a).
Learning rate:
Randomly selected in the range , and then decayed by a factor in each epoch of training.
Augmentation:
Augmentation is performed online, i.e. by randomly transforming each of the training images for every minibatch during training. The random transformations include: horizontal and vertical translations, rotations, zooming, shearing, brightness adjustment by adding a constant, contrast adjustment by scaling around the image mean, hue adjustment by an offset to the H channel in the HSV color space, color saturation adjustment by scaling the S channel in the HSV color space, and finally corruption by normally distributed noise. Each transformation is drawn from a fixed parameter range.
Optimizer:
Initialization:
The different initialization schemes are only applied to convolutional filters and FC weight matrices. The bias terms are always initialized as . Constant initialization always uses the constant , random normal initialization uses mean and standard deviation , and the Glorot uniform and normal initialization schemes are parameter free [14].
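As a reference for the schemes above, a minimal sketch of the four initializations. The constant value and the normal mean/std are elided in the text, so the values below are placeholders, and the bias constant is assumed to be zero; the Glorot limits follow [14].

```python
import numpy as np

def init_weights(shape, scheme, rng=np.random.default_rng(0)):
    """Sketch of the four initialization schemes for a (fan_in, fan_out)
    weight matrix. The constant value and the normal mean/std are
    placeholders; the Glorot formulas follow Glorot & Bengio [14]."""
    fan_in, fan_out = shape
    if scheme == "constant":
        return np.full(shape, 0.1)              # placeholder constant
    if scheme == "random_normal":
        return rng.normal(0.0, 0.05, shape)     # placeholder mean and std
    if scheme == "glorot_uniform":
        limit = np.sqrt(6.0 / (fan_in + fan_out))
        return rng.uniform(-limit, limit, shape)
    if scheme == "glorot_normal":
        std = np.sqrt(2.0 / (fan_in + fan_out))
        return rng.normal(0.0, std, shape)
    raise ValueError(f"unknown scheme: {scheme}")

W = init_weights((256, 128), "glorot_uniform")
b = np.zeros(128)  # bias terms always use a constant; zero is assumed here
```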
a.2 Training procedure
For optimization, we use 10% randomly selected training images as a validation set. Since we train with a very diverse set of hyperparameters, it is difficult to specify for how many steps optimization should be performed. Thus, we use the validation set to provide early stopping criteria. The procedure is as follows: we evaluate the validation accuracy after each epoch of training, and compute a filtered validation accuracy. Early stopping is performed if one or more of the following criteria are fulfilled:

1. There are 5 consecutive decreasing validation accuracy evaluations. This criterion detects overfitting.

2. There are in total 30 decreasing validation accuracy evaluations when comparing to the smoothed validation accuracy. This criterion is used when there is noise in the validation accuracy across epochs, so that criterion 1 does not kick in, or when pronounced overfitting does not occur.

3. There are 30 consecutive stationary iterations. This situation can occur if optimization gets stuck.
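The stopping check implied by these criteria can be sketched as follows. The filter weight is elided in the text, so an exponential moving average with a placeholder alpha of 0.5 is assumed, and min_epochs=20 reflects that early stopping never applies before 20 epochs (Section A.2).

```python
def should_stop(val_acc, alpha=0.5, min_epochs=20):
    """Early-stopping check following the three criteria above. The
    filter weight alpha is a placeholder (its value is elided in the
    text); an exponential moving average is assumed for the smoothing."""
    if len(val_acc) <= min_epochs:
        return False
    # filtered (smoothed) validation accuracy
    smooth = [val_acc[0]]
    for a in val_acc[1:]:
        smooth.append(alpha * a + (1 - alpha) * smooth[-1])
    # criterion 1: 5 consecutive decreasing evaluations
    dec_run = 0
    for prev, cur in zip(val_acc, val_acc[1:]):
        dec_run = dec_run + 1 if cur < prev else 0
        if dec_run >= 5:
            return True
    # criterion 2: 30 evaluations in total below the smoothed accuracy
    if sum(a < s for a, s in zip(val_acc, smooth)) >= 30:
        return True
    # criterion 3: 30 consecutive stationary evaluations
    stat_run = 0
    for prev, cur in zip(val_acc, val_acc[1:]):
        stat_run = stat_run + 1 if cur == prev else 0
        if stat_run >= 30:
            return True
    return False
```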
Although these criteria capture many of the variations that can occur, there is some room for improvement. For example, when training on STL10, which is a smaller dataset than the others, there is more noise in the validation loss between epochs. This means that, mainly for difficult hyperparameter setups, early stopping can sometimes kick in earlier than optimal.
The wide diversity of hyperparameters means that early stopping results in very different training lengths. The distribution of training times is shown in Figure 5(b).
Once per epoch, the current weights are exported. Since early stopping never applies before 20 epochs, we always have exported weights at 20 points along the optimization trajectory, and in most cases many more. To be consistent and to manage the amount of data, we always keep exactly 20 weight snapshots. This means that if training runs for more than 20 epochs, we keep only the exported weights at 20 uniformly sampled epochs from initialization to convergence.
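The uniform subsampling of snapshot epochs reduces to a linspace over the training length; a minimal sketch (function name hypothetical):

```python
import numpy as np

def snapshot_epochs(n_epochs, n_keep=20):
    """Epoch indices of the n_keep kept snapshots, spaced uniformly from
    the first epoch to convergence. Flooring to int keeps the indices
    strictly increasing whenever n_epochs >= n_keep."""
    return np.linspace(1, n_epochs, n_keep).astype(int)

print(snapshot_epochs(57))  # 20 roughly evenly spaced epochs out of 57
```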
The SVHN dataset has an unbalanced number of images per class. We address this by randomly selecting the same number of images from each class, equal to the number of images in the smallest class. In order to utilize all images, this random selection is repeated each epoch.
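A sketch of this per-epoch balanced resampling (helper name hypothetical):

```python
import numpy as np

def balanced_indices(labels, rng):
    """Per-epoch balanced resampling: draw the same number of images from
    every class, matching the smallest class; a fresh draw each epoch
    eventually utilizes all images."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    n = counts.min()
    idx = np.concatenate([
        rng.choice(np.where(labels == c)[0], size=n, replace=False)
        for c in classes
    ])
    rng.shuffle(idx)
    return idx
```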
a.3 Training statistics
Figure 7 shows the training, validation and test accuracy during the progress of optimization, for all 13K trainings in the NWS set. The figure is separated between the 5 datasets, and also provides mean accuracies over all the trainings. For all datasets there is a wide spread in end accuracy and convergence behavior, which is tightly linked to the hyperparameter setup. There is also a distinct difference between datasets when it comes to overfitting. For example, inspecting the difference between training and validation accuracy, overfitting hardly occurs when training on MNIST, while CIFAR10 and STL10 are more susceptible to this phenomenon.
To get a better picture of the spread in performance of trained models, Figure 8 shows the distribution of test accuracy for each dataset. While each dataset generates performances over a wide range of values, there are always two distinct modes: one with the failed trainings and one with the more successful trainings. Clearly, some datasets are more prone to fail (CIFAR10, STL10) than others (MNIST, FashionMNIST). Also, there is always an approximately normally distributed set of successful trainings, and this is also the case when only selecting the best hyperparameters. That is, random initialization and/or SGD will result in test performance being normally distributed over repeated training runs.
For the failure modes, it is also possible to discern a tendency to have two peaks. One is at 10%, i.e. random guessing, meaning that nothing is learned. However, for all datasets there is also a more or less pronounced peak around 20%.
Using the test accuracies, we can also analyze the performance of different combinations of hyperparameters. Figure 9 shows all combinations of activation function and initialization strategy, split between different datasets. We can make a number of observations from the result. For example, Glorot initialization is clearly the superior choice of the included initialization schemes, with both uniformly and normally distributed weights. ELU consistently provides a slight advantage over ReLU, especially for the more difficult datasets (CIFAR10 and STL10). There is also an interesting pattern of sigmoid and TanH being clearly inferior to ReLU and ELU when combined with constant or random normal initialization. Constant initialization combined with sigmoid activation is by far the worst combination, but for other initialization strategies sigmoid performs better than TanH. When using Glorot initialization, the difference between ReLU/ELU and sigmoid/TanH is much smaller. This means that, on average, ReLU and ELU are much better at handling difficult initialization.
Figure 10 shows all combinations of optimizer and initialization strategy for the different datasets. When using Glorot initialization, the momentum optimizer performance is close to that of ADAM and RMSprop. However, ADAM and RMSprop are much better at dealing with the more difficult initialization schemes, although the performance is quite far from the results when using Glorot initialization. It is possible that momentum optimization could behave better in the beginning of optimization with a lower momentum setting [17], when gradients are large, whereas a higher momentum is preferable later on. That is, a different momentum could give the opposite pattern of what we see here – better handling of bad initialization, but worse at converging to the more optimal end points.
Figure 11 shows the performance of different initialization strategies with augmentation turned off and on. When studying the correlation between hyperparameters and test performance in Figure 14, it seems like augmentation has little effect. However, this does not reveal the complete picture. Looking at the performance for different initializations in Figure 11, augmentation improves performance on most datasets when combined with Glorot initialization, while decreasing performance for the less effective initialization schemes.
B Dimensionality reduction
Previous work has demonstrated the weights of a single or a few trainings from the perspective of PCA components [12, 31, 2]. However, we can make use of thousands of separate trainings to perform PCA, and a rather small set of components holds the majority of the variance of the data. Figure 13 shows a UMAP embedding [33] of the 10 first principal components of 3K separate trainings (see Table 1 in the paper). These models have in total 92,868 weights each, of which 16,880 are from the convolutional layers and 75,988 from the FC layers. While it is possible to run PCA on the full weight representations, we choose to focus only on the convolutional weights to be able to use more models for the embedding. For each training, 20 points in the weight space have been included along the training trajectory from initialization to final converged model, for a total of 60K weight vectors. For embedding in 2D, we perform UMAP [33] on the 10 first principal components. The result is 60K 2D points, which can be scattered with different colors to represent hyperparameters. Figure 13 shows the same embedding color coded according to 6 different hyperparameter settings used in the training of the different weights, and also illustrates the training progress and test error of each point. Some of the hyperparameter options show clear patterns, such as optimizer, activation function and initialization.
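The embedding pipeline can be sketched as follows, with a small random stand-in for the real 60K×16,880 matrix of convolutional weights; scikit-learn is assumed for the PCA step and the umap-learn package for the 2D step.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Small random stand-in for the real matrix of 60K weight vectors with
# 16,880 convolutional weights each (shrunk so the sketch runs quickly).
weights = rng.normal(size=(600, 1_000))

pcs = PCA(n_components=10).fit_transform(weights)  # 10 first components

# The 2D embedding is then computed with the umap-learn package, roughly:
#   import umap
#   emb = umap.UMAP(n_components=2).fit_transform(pcs)  # points to scatter
```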
The majority of initialization points cluster in the bottom right (Figure 12(g)). From the different types of initialization (Figure 12(f)) and the error (Figure 12(h)), it is clear how many trainings with constant initialization fail, especially when using sigmoid activation (Figure 12(e)). Constant initialization and sigmoid activation were also shown to be the worst combination of hyperparameters in Figure 9. There are also small clusters of points outside the main manifold; these are examples of trainings that get stuck and fail to learn useful information.
C Subsampling of the NWS
For the metalearning experiments (regression of test accuracy from hyperparameters and metaclassification), we use a subsampled set of the NWS. As shown in Figure 8, the test accuracy distributions show different modes of “failed” and “successful” trainings. For our metalearning experiments we are interested in differentiating between the successful trainings. Therefore, we only include trainings of this mode, by using a threshold on test accuracy for rejecting the failed models.
The threshold accuracies for separating the modes are 80%, 25%, 50%, 25%, and 50% for MNIST, CIFAR10, SVHN, STL10, and FashionMNIST, respectively. The exact choice of threshold is not critical since there are relatively few trainings around these values. For the 10K/3K training/test set of , this selection means that we have 8,035/2,448 training/test samples. For the 2K/1K training/test set of , it means 1,758/880 training/test samples.
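The rejection step follows directly from the thresholds above; a minimal sketch (dictionary keys and record format are hypothetical):

```python
# Per-dataset test-accuracy thresholds separating the failed and
# successful modes, as given in the text.
THRESHOLDS = {"mnist": 0.80, "cifar10": 0.25, "svhn": 0.50,
              "stl10": 0.25, "fashionmnist": 0.50}

def keep_successful(models, dataset):
    """Reject models from the failed mode of the given dataset."""
    return [m for m in models if m["test_acc"] >= THRESHOLDS[dataset]]
```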
D Regressing the test accuracy
Figure 14 shows the regression coefficients for linear models fitted to the test accuracies of models from each dataset. The subsampled NWS dataset was used for these models. The different options for each hyperparameter are listed in Table 1 in the main paper.
The results clearly demonstrate a vast improvement using Glorot initialization [14] as compared to constant or random normal initialization. There is also a clear indication of the advantages of using more well-thought-out optimization strategies (ADAM, RMSprop) as compared to conventional momentum SGD. However, if we compare different combinations of optimizer and initialization (Figure 10), it is evident how ADAM and RMSprop are better at handling less optimal initializations; for Glorot initialization, momentum SGD can perform on par with these. Clearly, optimization is very much dependent on the initialization and optimizer. Another important factor is the activation function, where ReLU and ELU are clearly superior to sigmoid and TanH. Moreover, there is an overall tendency to favor ELU over ReLU. For architecture-specific hyperparameters, there is a general trend to promote wider models. However, the number of FC layers has a consistently negative correlation. Although a large number of FC layers does not necessarily increase performance (for example, AlexNet [25] and VGG [45] have only 3), it is interesting how more than 3 layers overall results in decreased performance. One possible explanation could be that many FC layers are hard to optimize without skip-connection/ResNet designs [16], especially when using less optimal hyperparameters.
To get an idea of how well the models capture the test accuracies of the datasets, Figure 14(a) shows the average error of the models. The errors of the linear models are compared to constant models, i.e. models that simply use the mean test accuracy. Since the test accuracies have been normalized before fitting the models, the mean test accuracy is 1. For the easier datasets (MNIST, FMNIST), the error is higher than for the more difficult ones (CIFAR10, STL10).
From Figure 14 we see similar patterns of optimal hyperparameters for all datasets, and it is difficult to say which datasets require the most similar hyperparameter tuning. In order to measure how well hyperparameter tuning correlates between different datasets, we measure Pearson's correlation between the model coefficients for all combinations of datasets. The results are displayed in Figure 14(b). From the correlations we can, for example, see that MNIST requires a hyperparameter tuning that is more similar to CIFAR10 than to STL10, and that STL10 on average has the least similar coefficients. However, all datasets are strongly correlated in terms of optimal hyperparameters.
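The correlation measure can be sketched as follows, with random stand-in coefficient vectors in place of the fitted ones behind Figure 14(b):

```python
import numpy as np

def coefficient_correlations(coeffs):
    """Pairwise Pearson correlation between the per-dataset regression
    coefficient vectors (dict: name -> 1D array of coefficients)."""
    names = sorted(coeffs)
    C = np.corrcoef(np.stack([coeffs[n] for n in names]))
    return names, C

# Random stand-ins: two datasets close to a shared base vector, one
# further away (the real vectors come from the fits in Figure 14).
rng = np.random.default_rng(0)
base = rng.normal(size=12)
names, C = coefficient_correlations({
    "mnist": base + 0.1 * rng.normal(size=12),
    "cifar10": base + 0.1 * rng.normal(size=12),
    "stl10": base + 0.8 * rng.normal(size=12),
})
```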
E SVM metaclassifier trainings
e.1 Experimental setup
SVMs are trained by considering both a weight vector and its gradients. From these, standard statistical measures are calculated: mean, variance, skewness, and the 1%, 25%, 50%, 75%, and 99% percentiles. The skewness is described by the standardized third-order moment,

$$\mathrm{skew}(w) = \frac{1}{N}\sum_{i=1}^{N}\left(\frac{w_i - \mu}{\sigma}\right)^{3}, \tag{4}$$

where $\mu$ and $\sigma$ are the mean and standard deviation of $w$, respectively. This yields a total of 16 features. For the layerwise statistics, each type of weight in each layer is considered separately. The types of weights include multiplicative weights (filters in convolutional layers, and weight matrices of fully connected layers), bias weights, and the scale and shift parameters as well as the running mean and variance used for the batch normalization of a layer.
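The 16-dimensional feature vector can be computed directly from a weight vector and its gradients; a minimal sketch (function names hypothetical):

```python
import numpy as np

def stats8(x):
    """The 8 statistics used per vector: mean, variance, the skewness of
    Eq. (4), and the 1/25/50/75/99 percentiles."""
    mu, sigma = x.mean(), x.std()
    skew = np.mean(((x - mu) / sigma) ** 3)
    return np.concatenate([[mu, x.var(), skew],
                           np.percentile(x, [1, 25, 50, 75, 99])])

def svm_features(w, g):
    """16 SVM features: 8 statistics of the weights w plus 8 of the
    gradients g."""
    return np.concatenate([stats8(w), stats8(g)])
```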
SVMs are trained in a one-versus-rest strategy, i.e. one SVM is trained for each class of a dataset, drawing a decision boundary in feature space between that class and the rest of the classes.
e.2 Feature importance
Using a linear kernel, we can get an indication of which features are most important for an SVM's decision. The coefficients of a one-versus-rest SVM describe a vector that is orthogonal to the hyperplane in feature space separating one class from the rest of the classes; this vector is oriented along the feature axes that are most useful for separating the classes. Taking the norm of the coefficients over the classes, we get an indication of which features were most important for separating the classes.
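On a tiny synthetic stand-in, the coefficient-norm importance can be computed as follows; only feature 3 carries class signal here, so its norm should dominate (scikit-learn's LinearSVC trains one-vs-rest by default).

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# Synthetic stand-in: 4 classes, 16 features; only feature 3 informative.
y = np.repeat(np.arange(4), 50)
X = rng.normal(size=(200, 16))
X[:, 3] += y  # feature 3 carries the class signal

clf = LinearSVC(random_state=0, max_iter=10_000).fit(X, y)  # one-vs-rest
importance = np.linalg.norm(clf.coef_, axis=0)  # norm over the classes
```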
Figure 16 illustrates the coefficient norms of the in total 480 features of the layerwise linear SVMs, together with information on which layer and type of weight each feature is computed from. For dataset, optimizer and activation, there are distinct features which are more pronounced. For example, filter gradients of the first layer are most descriptive for dataset, biases from all convolutional layers are used for classifying optimizer, and running mean and variance are used for classifying activation function. For initialization classification, however, there are no clear individual features or layers which are most descriptive.
Figure 17 shows coefficient norms of the 16 features of SVMs trained on local subsets of weights. The information used for classification is predominantly located in the features containing percentiles of weights and weight gradients.
F DMC trainings
f.1 Experimental setup
Layer | Global DMC (10,948,997 weights in total) | Local DMC (10,407,685 weights in total)
Input | 92,868×1 | 5,000×1
1 | Conv. 5×1 (1→8), Max-pooling 2×1 | Conv. 5×1 (1→8), Max-pooling 2×1
2 | Conv. 5×1 (8→16), Max-pooling 2×1 | Conv. 5×1 (8→16), Max-pooling 2×1
3 | Conv. 5×1 (16→32), Max-pooling 2×1 | Conv. 5×1 (16→32)
4 | Conv. 5×1 (32→64), Max-pooling 2×1 | Conv. 5×1 (32→64), Max-pooling 2×1
5 | Conv. 5×1 (64→128), Max-pooling 2×1 | Conv. 5×1 (64→128)
6 | Conv. 5×1 (128→128), Max-pooling 2×1 | Conv. 5×1 (128→256), Max-pooling 2×1
7 | Conv. 5×1 (128→128), Max-pooling 2×1 | Conv. 5×1 (256→256), Max-pooling 2×1
8 | Conv. 5×1 (128→128), Max-pooling 2×1 | Conv. 5×1 (256→256), Max-pooling 2×1
9 | Conv. 5×1 (128→128), Max-pooling 2×1 | Conv. 5×1 (256→256), Max-pooling 2×1
10 | Conv. 5×1 (128→128) | Conv. 5×1 (256→256)
11 | Conv. 5×1 (128→256), Max-pooling 2×1 | Conv. 5×1 (256→256)
12 | Conv. 5×1 (256→256) | Conv. 5×1 (256→256), Max-pooling 2×1
13 | Conv. 5×1 (256→256), Max-pooling 2×1 | FC (4864→1024), Dropout 0.5
14 | Conv. 5×1 (256→256) | FC (1024→1024), Dropout 0.5
15 | Conv. 5×1 (256→256), Max-pooling 2×1 | FC (1024→1024), Dropout 0.5
16 | FC (5632→1024), Dropout 0.5 | FC (1024→1024), Dropout 0.5
17 | FC (1024→1024), Dropout 0.5 | FC (1024→64), Dropout 0.5
18 | FC (1024→1024), Dropout 0.5 | FC (64→C)
19 | FC (1024→1024), Dropout 0.5 |
20 | FC (1024→64), Dropout 0.5 |
21 | FC (64→C) |
The deep metaclassifiers (DMCs) are specified as 1D CNNs. A global DMC takes as input a significantly larger number of weights compared to a local DMC, which is accounted for by an appropriate number of pooling operations throughout the convolutional part of the network. The exact layer specifications of the global and local DMCs are listed in the table above. The CNNs use batch normalization and ReLU activation on each layer; batch normalization turns out to be a crucial component for making the DMCs converge. Initialization is performed with the Glorot normal scheme [14].
The choice of the number of layers and their width is made to get a reasonably large CNN that performs well. However, as we have not made an extensive effort to search for the optimal design, this could most probably be improved, and we leave this for future work.
For training of DMCs, we use the subsampled NWS (Section C). Since each model of the dataset has weight snapshots exported on 20 different occasions during training, we can also use more than one weight sample per model. Although consecutive weight snapshots from the same training are expected to be similar, there are also differences that can improve DMC training, similar to how data augmentation is commonly used for improving generalization. For the global DMCs, we use the 4 last weight snapshots, for a total of 7,032 weight vectors used in training. For the local DMCs, we use the 2 last weight snapshots, for a total of 16,070 weight vectors.
When training local DMCs, for each new minibatch we pick at random the location of the subset within the weight vector. This means that the effective size of the training set is much larger than the number of weight samples, and we can train for many epochs without overfitting. Thus, we train local DMCs for 500 epochs, while global DMCs are trained for 100 epochs.
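The per-minibatch subset selection reduces to drawing a random window of consecutive weights; a minimal sketch (function name hypothetical):

```python
import numpy as np

def random_subset(w, subset_size=5_000, rng=None):
    """Draw a window of consecutive weights at a random location in the
    vectorized model w; a fresh location is drawn for every sample in
    every minibatch, effectively augmenting the training set."""
    rng = rng or np.random.default_rng()
    start = rng.integers(0, len(w) - subset_size + 1)
    return w[start:start + subset_size]
```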
Since the subsampling in Section C removes failed trainings, the distribution of hyperparameters in the training set may be unbalanced (e.g. fewer trainings that use constant initialization or sigmoid activation). We enforce class balance by simply using the same number of samples from each class of the metaclassification, equal to the number of samples in the smallest class.
Optimization of DMCs is performed using ADAM with default settings in TensorFlow, a batch size of 64, and a learning rate of . The learning rate is decayed by a factor 50 times during training, so that the end learning rate is . A local DMC trained with a subset size of 5,000 takes approximately 2 hours to optimize on an Nvidia GeForce GTX 1080 Ti GPU, while a global DMC takes 70 minutes.
f.2 Subset size
The choice of a subset size of 5,000 weight elements as input to a local DMC is arbitrary, and means that on average 5% of the weights of a model are used. In order to get a sense of the impact of this choice, we train DMCs using a range of subset sizes between 100 and 20,000. This means that the size of the DMC CNN in the table above will vary greatly. To account for this, we adjust the number of max-poolings performed in layers 6-11: if the input is small, these layers are not followed by max-pooling, and if it is large, all of them are. Using this strategy, the CNNs use approximately the same number of trainable weights for different input sizes.
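One plausible reading of this adjustment (the paper does not give the exact rule): choose the number of poolings among layers 6-11 so the flattened length entering the FC part stays roughly constant, adding one pooling per doubling of the input relative to the 5,000-element reference, which uses 4 poolings in these layers according to the table.

```python
import math

def poolings_in_layers_6_to_11(subset_size, ref_size=5_000, ref_pools=4):
    """Hypothetical rule: one extra max-pooling per doubling of the input
    relative to the reference size, clamped to the six adjustable layers,
    so the flattened length entering the FC part stays roughly constant."""
    delta = round(math.log2(subset_size / ref_size))
    return min(max(ref_pools + delta, 0), 6)
```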
Figure 18 shows the relationship between the input size and DMC performance for 5 different hyperparameter classifications. Generally, there is a logarithmic relation between subset size and DMC performance. The DMC trained to detect activation function shows a steeper slope, so that compared to e.g. the dataset DMC, activation function classification benefits more from having a larger number of weights from a model. This gives an indication of how locally the information is stored, i.e. activation function is a more global property of the weights than dataset.
f.3 Batch normalization
As discussed in the main paper, and specified in Algorithm 1, we use batch normalization (BN) for all CNN trainings, since otherwise the majority of trainings fail when difficult hyperparameters are used (constant initialization, sigmoid activation, etc.). As discussed by Bau et al. [4], BN seems to have a whitening effect on the activations at each layer, which decreases interpretability. The question is whether the same is true for the weights, and whether this makes it more difficult to classify the weights using a DMC. Potentially, the reduction in internal covariate shift provided by BN could make it more difficult to find descriptive features of the training properties.
In a preliminary experiment, shown in Figure 19, we compared DMCs trained on models with and without BN. The NWS sampling for this experiment used a less diverse selection of hyperparameters, with approximately 2K models trained using BN and 2K trained without. Performance on the BN-trained models is slightly lower for some of the hyperparameters (dataset, batch size, augmentation), but also slightly higher for others (optimizer, filter size). We conclude that BN makes only a small difference in the context of DMC training; it is still possible to extract information about hyperparameters, which is also confirmed by the DMC results in the main paper.
Another aspect of BN is that it introduces additional parameters (the scale and shift parameters, as well as the running mean/variance), which are part of the vectorized model weights. Since the BN parameters carry statistics of the training, this raises the question of how much of the information found by a DMC is actually contained in the BN-specific weights. Along the same line: how much of the information is found in the bias weights as compared to other weights? Figure 16 partially answers these questions for SVMs on the full set of weights. To also answer them for local DMCs, the results in Figure 19 have been computed on stripped weight vectors as well, where both BN-specific and bias weights have been removed. That is, we use the same DMCs, trained on weights including BN/bias weights, but remove these weights during inference. As seen in the results, there are no significant differences in performance, which means that most information is contained in the convolutional filters and FC weight matrices.
f.4 DMC results
Figure 20 and 21 show performance maps for all of the trained local DMCs. The columns correspond to DMCs trained on subsets picked from all positions of a weight vector (left), subsets only taken from the convolutional layers of (middle), and subsets only from the FC weights (right).
There are many things that can be seen in the results, and we discuss only a few of them. For the DMCs trained to classify optimizer and activation function, the performance maps are very similar. Looking at the DMCs trained on weights from the whole network, we can only see that the performance increases in the beginning and end of the CNNs. However, looking at DMCs trained on only convolutional or only FC weights, there are stripes of increasing performance. Inspecting the feature importances in Figure 16, we see how optimizer is best determined from bias weights and that activation function is easier to detect in the running mean weights. Thus, the striped patterns are most likely pinpointing the locations of bias and running mean weights for the optimizer and activation metaclassifiers, respectively. Although these types of weights occur at different locations due to the varying architectures in the dataset, on average they end up in the indicated locations.
For the initialization DMC, there is a faster decrease in performance in the earlier layers. Looking at the performance of DMCs trained on convolutional and FC weights, it is clear how this decrease is contained in the convolutional layers, i.e. these weights move away from the initialization faster. Also, in the last one or two FC layers the decrease is more rapid.
For the DMCs trained to detect filter size, the DMC trained on only convolutional layer weights is superior. This is expected, since the DMC only has to learn about the frequency information induced by vectorization of the convolutional filters. However, what is more interesting is how a large fraction of the FC layers also reflect an increase in accuracy, pointing to how the filter size affects the network on a more global level.
Footnotes
 Department of Science and Technology, Linköping University, Sweden
 Department of Science and Technology, Linköping University, Sweden
 Institute of Media Informatics, Ulm University, Germany
 Department of Science and Technology, Linköping University, Sweden
 Department of Science and Technology, Linköping University, Sweden
 https://github.com/gabrieleilertsen/nws
References
 (2016) Understanding intermediate layers using linear classifier probes. arXiv preprint arXiv:1610.01644. Cited by: §1, §2.
 (2018) PCA of high dimensional random walks with comparison to neural network training. In Advances in Neural Information Processing Systems (NeurIPS 2018), Cited by: §2, §B.
 (2015) Hacking smart machines with smarter ones: how to extract meaningful data from machine learning classifiers. International Journal of Security and Networks (IJSN) 10 (3). Cited by: §2.
 (2017) Network dissection: quantifying interpretability of deep visual representations. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), Cited by: §2, §F.3.
 (2012) Random search for hyperparameter optimization. Journal of Machine Learning Research (JMLR) 13. Cited by: §2, §4.1.
 (2018) Do convolutional neural networks learn class hierarchy?. IEEE transactions on visualization and computer graphics (TVCG) 24 (1). Cited by: §2.
 (2015) Fast and accurate deep network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289. Cited by: Table 1.
 (2011) An analysis of singlelayer networks in unsupervised feature learning. In International conference on artificial intelligence and statistics (AISTATS 2011), Cited by: §A.1, Table 1.
 (2010) Why does unsupervised pretraining help deep learning?. Journal of Machine Learning Research (JMLR) 11. Cited by: §2.
 (2015) Model inversion attacks that exploit confidence information and basic countermeasures. In ACM SIGSAC Conference on Computer and Communications Security (CCS 2015), Cited by: §2.
 (2019) Topology of learning in artificial neural networks. arXiv preprint arXiv:1902.08160. Cited by: §2.
 (1997) Visualization of learning in neural networks using principal component analysis. In International Conference on Computational Intelligence and Multimedia Applications (ICCIMA 1997), Cited by: §2, §B.
 (1997) Weight space learning trajectory visualization. In Australian Conference on Neural Networks (ACNN 1997), Cited by: §2.
 (2010) Understanding the difficulty of training deep feedforward neural networks. In International conference on artificial intelligence and statistics (AISTATS 2010), Cited by: §A.1, Table 1, §D, §F.1.
 (2014) Generative adversarial nets. In International Conference on Neural Information Processing Systems (NIPS 2014), Cited by: §6.1.
 (2016) Deep residual learning for image recognition. In IEEE conference on computer vision and pattern recognition (CVPR 2016), Cited by: §D, §6.1.
 (2012) Neural networks for machine learning lecture 6a overview of minibatch gradient descent. Cited by: §A.1, §A.3, Table 1.
 (2017) Deep models under the gan: information leakage from collaborative deep learning. In ACM SIGSAC Conference on Computer and Communications Security (CCS 2017), Cited by: §2.
 (2018) Visual analytics in deep learning: an interrogative survey for the next frontiers. IEEE Transactions on Visualization and Computer Graphics (TVCG). Cited by: §2.
 (2018) fastai. GitHub. Note: https://github.com/fastai/fastai. Cited by: §6.
 (2011) Sequential model-based optimization for general algorithm configuration. In International Conference on Learning and Intelligent Optimization (LION 2011), Cited by: §2, §6.1.
 (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML 2015), Cited by: §4.1.
 (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §A.1, Table 1.
 (2009) Learning multiple layers of features from tiny images. Technical report, Citeseer. Cited by: §A.1, Table 1.
 (2012) ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS 2012), Cited by: §D.
 (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11). Cited by: §A.1, Table 1.
 (2015) Convergent learning: do different neural networks learn the same representations?. In NIPS Workshop on Feature Extraction: Modern Questions and Challenges, Cited by: §1, §2.
 (2016) Stuck in a what? Adventures in weight space. arXiv preprint arXiv:1602.07320. Cited by: §2.
 (2019) DARTS: differentiable architecture search. In International Conference on Learning Representations (ICLR 2019), Cited by: §2.
 (2017) Towards better analysis of deep convolutional neural networks. IEEE Transactions on Visualization and Computer Graphics (TVCG) 23 (1). Cited by: §1.
 (2016) Visualizing deep network training trajectories with PCA. In ICML Workshop on Visualization for Deep Learning, Cited by: §2, §B.
 (2015) Understanding deep image representations by inverting them. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), Cited by: §2.
 (2018) UMAP: uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426. Cited by: §B.
 (2012) Bayesian approach to global optimization: theory and applications. Vol. 37. Cited by: §2, §6.1.
 (2010) Rectified linear units improve restricted Boltzmann machines. In International Conference on Machine Learning (ICML 2010), Cited by: Table 1.
 (2018) Comprehensive privacy analysis of deep learning: stand-alone and federated learning under passive and active white-box inference attacks. arXiv preprint arXiv:1812.00910. Cited by: §2.
 (2011) Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, Cited by: §A.1, Table 1.
 (2018) Sensitivity and generalization in neural networks: an empirical study. In International Conference on Learning Representations (ICLR 2018), Cited by: §2.
 (2017) SVCCA: singular vector canonical correlation analysis for deep learning dynamics and interpretability. In International Conference on Neural Information Processing Systems (NIPS 2017), Cited by: §1, §2, §5.3.
 (2017) Visualizing the hidden activity of artificial neural networks. IEEE Transactions on Visualization and Computer Graphics (TVCG) 23 (1). Cited by: §2.
 (2017) Large-scale evolution of image classifiers. In International Conference on Machine Learning (ICML 2017), Cited by: §2.
 (2017) Grad-CAM: visual explanations from deep networks via gradient-based localization. In IEEE International Conference on Computer Vision (ICCV 2017), Cited by: §2.
 (2017) Membership inference attacks against machine learning models. In IEEE Symposium on Security and Privacy (SP), Cited by: §2.
 (2013) Deep inside convolutional networks: visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034. Cited by: §1, §2.
 (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §D.
 (2014) Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research (JMLR) 15 (1). Cited by: §4.1.
 (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747. Cited by: §A.1, Table 1.
 (2014) How transferable are features in deep neural networks?. In International Conference on Neural Information Processing Systems (NIPS 2014), Cited by: §2.
 (2015) Understanding neural networks through deep visualization. In ICML Workshop on Deep Learning, Cited by: §2.
 (2018) Taskonomy: disentangling task transfer learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018), Cited by: §2.
 (2014) Visualizing and understanding convolutional networks. In European Conference on Computer Vision (ECCV 2014), Cited by: §1, §2.
 (2018) Interpreting deep visual representations via network dissection. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). Cited by: §2.
 (2015) Object detectors emerge in deep scene CNNs. In International Conference on Learning Representations (ICLR 2015), Cited by: §2.
 (2017) Neural architecture search with reinforcement learning. In International Conference on Learning Representations (ICLR 2017), Cited by: §2.
 (2018) Learning transferable architectures for scalable image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018), Cited by: §2.