Quantifying the effect of representations on task complexity

Abstract

We examine the influence of input data representations on learning complexity. For learning, we posit that each model implicitly uses a candidate model distribution for unexplained variations in the data, its noise model. If the model distribution is not well aligned to the true distribution, then even relevant variations will be treated as noise. Crucially however, the alignment of model and true distribution can be changed, albeit implicitly, by changing data representations. “Better” representations can better align the model to the true distribution, making it easier to approximate the input-output relationship in the data without discarding useful data variations. To quantify this alignment effect of data representations on the difficulty of a learning task, we make use of an existing task complexity score and show its connection to the representation-dependent information coding length of the input. Empirically we extract the necessary statistics from a linear regression approximation and show that these are sufficient to predict relative learning performance outcomes of different data representations and neural network types obtained when utilizing an extensive neural network architecture search. We conclude that to ensure better learning outcomes, representations may need to be tailored to both task and model to align with the implicit distribution of model and task.

1 Introduction

Sometimes perspective is everything. While the information content of encoded data may not change when the way it is represented changes, its usefulness can vary dramatically (see Fig. 1). A “useful” representation then is one that makes it easy to extract information of interest. This in turn very much depends on who or which algorithm is extracting the information. Evidently the way data is encoded and how a model “decodes” the information needs to match.

Historically, people have invented a large variety of “data representations” to convey information. An instance of this theme is the heliocentric vs. geocentric view of the solar system. Before the heliocentric viewpoint was widely accepted, scholars had already worked out the movements of the planets [Theodossiou et al., 2002]. The main contribution of the new perspective was that now the planetary trajectories were simple ellipses instead of more complicated movements involving loops1.

In a machine learning context, many have experimented with finding good data representations for specific tasks such as speech recognition [Logan et al., 2000], face recognition [Hsu et al., 2002], increased robustness in face detection [Podilchuk and Zhang, 1998], and many others. Yet no clear understanding has emerged of why a given representation is more suited to one task but less to another. We cast the problem of choosing the data representation for learning as one of determining the ease of encoding the relationship between input and output, which depends both on how the data is represented and on which model is supposed to encode it.

Figure 1: These images contain different representations of the same information. However, one of the two is much easier for us to understand. We posit that, for our nervous system, one of the two images has a lower expected coding length for the task of recognizing the person.

Contribution: In this work, we argue that learning a task is about encoding the relationship between input and output. Each model implicitly has a way of encoding information, where some variations in the data are easier to encode than others. Armed with this insight, we empirically evaluate different data representations and record what impact they have on learning outcomes and on the types of networks found by automated network optimization. Most interestingly, we are able to show that the relative learning outcomes of neural architecture searches for different representations can be predicted by measuring the variance of the weights of linear regressions fitted to these problems, where the feature extraction for the linear regression is adapted depending on whether fully connected or convolutional networks are being approximated.

1.1 Related Work

This work aims to bring us a bit closer to understanding the effect of data representations on what makes a given learning task easier or harder.

Data representations: Data representations have been optimized for a long time. In fact, there is a rich theory of linear invertible representations for both finite and infinite dimensional spaces called frame theory [Christensen et al., 2016]. Popular examples of frames are wavelets [Mallat, 1999] and curvelets [Candes and Donoho, 2000]. Testing only on ImageNet, Uber research [Gueguen et al., 2018] showed that using a data representation closer to how JPEG encodes information may help to create faster residual network architectures with slightly better performance. In a similar spirit, in a robotics context, Grassmann and Burgner-Kahrs [2019] evaluated learning performance on approximating the forward kinematics of robots using various common data representations such as Euler angles. What is more common in deep learning is to adapt the network architecture to the task at hand. An intriguing recent example taking this idea a step further are Weight Agnostic Neural Networks [Gaier and Ha, 2019], which have been designed to already “function” on a task even when starting from randomly initialized weights.

Measuring learning difficulty: Already in the nineties, Thornton [1995] posed the question of how to measure how easy or difficult a learning task is, and related the difficulty to information-theoretic measures, namely the information gain (mutual information) and the information gain ratio introduced in the context of decision trees by Quinlan [1986, 2014]. Ho and Basu [2002] take a different road by comparing several possible scores to assess the difficulty of classification problems, such as linear separability and feature efficiency. More commonly, instead of judging task difficulty, there is a vast literature on feature selection [Guyon and Elisseeff, 2003], e.g. judging how suitable a feature is for a given learning problem. Desirable features are reliably selected for a learning task [Meinshausen and Bühlmann, 2010] and ideally are highly predictive of the output variable. More recently, based on PAC-Bayes bound considerations [McAllester, 2003], Achille and Soatto [2019] derived an expression for learning task complexity based on the expected training performance and the KL-divergence between the posterior and prior of the weights. Likewise, there is a large literature on Kolmogorov complexity that tries to encode the algorithmic complexity of an object, with applications such as measuring data complexity [Li, 2006].

2 Data Representations and Task Complexity

The objective of learning can be phrased as finding a function that minimizes the uncertainty of the output given the input while discarding as much task irrelevant information as possible. In information theoretic language, this viewpoint was introduced by Tishby et al. [2000] and extended by Achille and Soatto [2018] in the form of the objective of the Information Bottleneck (IB) Lagrangian. Given an input $x$, a model encoding $z$, an output $y$, and mutual information $I(\cdot\,;\cdot)$, the IB-Lagrangian of Tishby et al. [2000] aims to minimize the following cost function:

$$\mathcal{L} \;=\; I(z;x) \;-\; \beta\, I(z;y), \qquad \beta > 0.$$

The model is supposed to find an encoding $z$ of the data that maximizes the mutual information to the output $y$ while also minimizing the mutual information with the input $x$. We consider the influence data representations have on optimizing the above objective and begin by defining what we mean when we talk about a data representation.

Definition 1.

A data representation $r$ is the output of an invertible mapping $g$ applied to the “original” data $x$, i.e. $r = g(x)$.

Therefore, all data representations are in a sense “equivalent”: since the mappings are invertible, all representations share the same information content as the original data. Yet clearly how data is represented does influence learning. As a “worst case”, an encrypted version of a dataset is unlikely to be useful for a learning task. A representation induces a new candidate distribution. As an example, consider fitting a nonlinear relationship, for instance $y = e^{x}$, on an interval via linear regression. The raw input $x$ and the transformed input $e^{x}$ are invertible to each other on this interval, yet the latter allows for a much better fit using linear regression than the former. In the former case, the unexplained variations are assumed to be noise. Both the candidate distribution over the outputs and the assumed input distribution can change with the representation $r$.
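To make the example concrete, the following toy sketch (our own illustration, not code from the paper; the exponential target is assumed purely for demonstration) fits the same nonlinear input-output relationship with ordinary least squares once on the raw input and once on the invertibly transformed input. The transformed representation leaves almost no unexplained variation for the implicit Gaussian noise model to absorb.

```python
# Toy illustration (ours): two invertible representations of the same data, very different linear fits.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 2.0, size=(500, 1))
y = np.exp(x).ravel()                              # nonlinear ground-truth relationship

fit_raw = LinearRegression().fit(x, y)             # representation r = x
fit_tf = LinearRegression().fit(np.exp(x), y)      # representation r = exp(x), invertible on (0.1, 2)

print("residual variance, raw input:        ", np.var(y - fit_raw.predict(x)))
print("residual variance, transformed input:", np.var(y - fit_tf.predict(np.exp(x))))
```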

To understand what impact a data representation may have, we will employ the idea of expected coding length and focus on what happens when we choose the “wrong code”. From information theory [Cover and Thomas, 2012], we learn that the most efficient encoding we can possibly find is lower bounded by the entropy of the distribution we are trying to compress. In this case, we assume that we have a candidate distribution $q$ that we are trying to fit to the true distribution $p$. The expected coding length under the candidate distribution can then never be smaller than the entropy of the true distribution [Cover and Thomas, 2012]: $\mathbb{E}_p[\ell(X)] \ge H(p)$.

Theorem 1 ((Wrong code) Theorem 5.4.3 [Cover and Thomas, 2012]).

The expected length under $p(x)$ of the code assignment $\ell(x) = \lceil \log \tfrac{1}{q(x)} \rceil$ satisfies

$$H(p) + D_{KL}(p\,\|\,q) \;\le\; \mathbb{E}_p[\ell(X)] \;<\; H(p) + D_{KL}(p\,\|\,q) + 1, \tag{1}$$

where $D_{KL}(p\,\|\,q)$ is the Kullback-Leibler divergence between $p$ and $q$.

Critical to the point we are making, we will assume that any function family has an associated candidate distribution with its own “codes” and expected coding length, through which it measures how uncertain a variable is; e.g. linear regression usually assumes a normal distribution for all variations that are not explained linearly. The difficulty with assuming a candidate distribution is that the available data may not follow that same distribution. Given such a mismatch, the model will overestimate the entropy of the distribution, as shown in Theorem 1.
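As a small numerical check of Theorem 1 (our own example, using a three-symbol source and a mismatched uniform candidate distribution), the expected length of the code built for q, measured under the true p, lands between H(p) + D(p||q) and H(p) + D(p||q) + 1 bits.

```python
# Numeric check of the wrong-code bound for a discrete source (our example values).
import numpy as np

p = np.array([0.6, 0.3, 0.1])                        # true distribution
q = np.array([1/3, 1/3, 1/3])                        # mismatched candidate distribution

H_p = -np.sum(p * np.log2(p))                        # entropy of p
D_pq = np.sum(p * np.log2(p / q))                    # KL divergence D(p || q)
exp_len = np.sum(p * np.ceil(np.log2(1.0 / q)))      # code lengths l(x) = ceil(log2 1/q(x)), averaged under p

print(f"H(p) = {H_p:.3f} bits, H(p) + D(p||q) = {H_p + D_pq:.3f} bits")
print(f"expected code length under p = {exp_len:.3f} bits")   # lies in [H + D, H + D + 1)
```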

Lemma 2 (Representation-Model-Alignment).

Assuming a candidate distribution $q_r$ for a representation $r = g(x)$ with $g$ invertible, we have that

$$\mathbb{E}_{p_r}[\ell_r(R)] \;\ge\; H(p_x) + D_{KL}(p_r\,\|\,q_r).$$

Proof.

From Theorem 1 we know that $\mathbb{E}_{p_r}[\ell_r(R)] \ge H(p_r) + D_{KL}(p_r\,\|\,q_r)$. Thus, using the relation $H(p_r) = H(p_x)$ for invertible mappings, we get the claimed bound. ∎

Critically for real-world situations, the wrong code theorem invalidates the assumption that the estimated entropy does not change when an invertible transformation is applied. Entropy and mutual information themselves are indeed invariant, yet in practice one does not have access to the true distribution $p_r$ but only to the candidate distribution $q_r$. The closer a representation aligns the model's candidate distribution to the true distribution, the smaller the expected coding length of the data will be. In this sense there are thus better and worse invertible data representations for a given model and learning task.

Coming back to the IB-Lagrangian objective, an attractive option for evaluating the influence of data representations on how difficult a learning task is, is the following task complexity defined by Achille and Soatto [2019]:

$$C(\mathcal{D}) \;=\; \mathbb{E}_{w \sim q(w \mid \mathcal{D})}\big[L_{\mathcal{D}}(w)\big] \;+\; \beta\, \mathrm{KL}\big(q(w \mid \mathcal{D})\,\|\,p(w)\big), \tag{2}$$

where $L_{\mathcal{D}}$ is the loss on the training data, $q(w \mid \mathcal{D})$ the posterior and $p(w)$ the prior distribution over the weights.

This objective is closely aligned with the IB-Lagrangian defined above and trades off the performance of the learned model on the training set against the distance of the weights of this model from the prior weights, as expressed by the KL-divergence between the two distributions. The second term here has the crucial function of quantifying the confidence that can be placed in any loss values computed on the training data, similar to PAC-Bayes bounds [McAllester, 2003]. The further the posterior is from the prior belief, the more data is necessary to place confidence in the obtained predictions. As a metaphor for why the uncertainty estimate matters, imagine the task of predicting the birthrate in Mongolia from the movements of the stock market for a given month. Most certainly one will be able to correlate the two; this is a common occurrence called spurious correlation [Fan et al., 2012].

Furthermore, we learn from Achille and Soatto [2019] that the KL-term is, in expectation over the dataset distribution, equal to the mutual information between the weights and the dataset: $\mathbb{E}_{\mathcal{D}}\big[\mathrm{KL}\big(q(w \mid \mathcal{D})\,\|\,p(w)\big)\big] = I(w;\mathcal{D})$. Additionally, by assuming that the posterior weights independently follow a log-normal distribution, as found in Achille and Soatto [2018], the information in the weights can be written in closed form in terms of the variances of the individual weights.

We are interested in finding a connection between different data representations and their effect on the task complexity defined in Eq. 2 above. By definition the task complexity is centered around changes from prior to posterior weights, yet we will see that, albeit implicitly at first, there is a connection to the way data is represented. For the following calculations we consider a particular regime, defined next, which is relevant for several applications of interest, such as object detection.

Definition 2 (Distillation regime).

In the distillation regime we assume that:

  1. The input samples have very high entropy. We assume the analogous property holds for the expected coding length used in the linear regression approximation.

  2. The entropy of the output is small with respect to the entropy of the input.

Example 1.

Typical object detection tasks are in the distillation regime. The entropy of images is high (property 1), while labels are compactly represented (property 2).

In the following, we consider the case of linear regression as an “approximation” to the more complicated neural networks. This is motivated by empirical observations [Goodfellow et al., 2014] that neural networks, on a spectrum from linear to highly nonlinear, are likely to be close in behavior to linear models in many ways. In particular, when using ReLU activations [Xu et al., 2015], the neural network is locally linear in the units that are not affected by the activation function.
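The local-linearity claim can be checked numerically; the sketch below (our own, with a randomly initialized two-layer network) shows that for inputs sharing the same ReLU activation pattern the network coincides with a single affine map.

```python
# Local linearity of a ReLU network (illustration with random weights, not a trained model).
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def relu_net(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

x0 = rng.normal(size=4)
active = (W1 @ x0 + b1) > 0                          # activation pattern at x0
A = W2 @ (W1 * active[:, None])                      # affine map valid on this activation region
c = W2 @ (b1 * active) + b2

x1 = x0 + 1e-3 * rng.normal(size=4)                  # nearby point, (very likely) same pattern
print(relu_net(x1), A @ x1 + c)                      # the two outputs coincide
```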

Lemma 3.

We consider the linear regression setting in the distillation regime and assume that the variance of the residuals is equal to the noise variance. The trained weights are assumed to each independently follow the posterior distribution, with a log-normal candidate distribution and a deterministic prior. Then a lower bound on the information in the weights $I(w;\mathcal{D})$, where $\mathcal{D}$ denotes the dataset distribution, holds in terms of the representation-dependent expected coding length of the input.

Proof.

In the derivation we assume that the covariance matrix of the input is symmetric and positive semi-definite. ∎

We note that the computed entropy is based on the linear regression assumptions of a linear functional relationship and Gaussian noise for the residuals. Since the data does not in general follow this assumed distribution, this leads to the expected coding length under the candidate distribution and not to the true entropy. The closer the true distribution is to a Gaussian, the better this approximation is and the smaller the correction term becomes. From the task complexity definition (Eq. 2) we derive the following task complexity score heuristic: if a given threshold of training performance is achieved, solutions are favored which have lower mutual information between weights and dataset distribution.

Definition 3 (Tcs).

Given the weights $w$ and dataset distribution $\mathcal{D}$, their mutual information $I(w;\mathcal{D})$, a performance threshold, and the loss on the training data, the task complexity score (TCS) is defined such that solutions whose training loss reaches the threshold are scored higher the lower $I(w;\mathcal{D})$ is.
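A minimal sketch of how we read Definition 3, with the concrete functional form (negating the estimated information in the weights once the training-loss threshold is met) being our assumption for illustration rather than the authors' exact formula:

```python
# Hypothetical TCS-style scoring rule (our reading of Definition 3, not the paper's exact formula).
def task_complexity_score(train_loss, est_info_in_weights, loss_threshold):
    """Favor solutions with low estimated I(w; D), but only if training performance suffices."""
    if train_loss > loss_threshold:
        return float("-inf")             # insufficient training performance: score is disregarded
    return -est_info_in_weights          # lower information in the weights -> higher score
```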

Corollary 1 (Representation effect on task complexity).

The smaller the KL-divergence between the candidate distribution induced by a representation and the true distribution, the smaller the lower bound in Lemma 2 and the larger the task complexity score (TCS). This follows by substituting Lemma 2 into Lemma 3.

From the above calculations, we distill that the estimated coding length can potentially provide insights into the task complexity and the confidence of task PAC-bounds. The derived bounds are not tight since, among other assumptions, many constraints within the weights are not taken into account when assuming independently log-normally distributed weights. Nevertheless, we observe experimentally that the quantitative differences in the variance of the weights as well as the empirical entropy have a notable effect on training outcomes.

3 Experiments

The theoretical findings presented in the previous sections have to be substantiated in the real world. We therefore conducted a wide-ranging empirical study of their relevance over a number of different datasets and network architectures. In total we evaluated over 8000 networks of varying sizes. We provide the full code and data required to replicate our experiments at the following URL:
https://drive.google.com/open?id=1D8wICzJVPJRUWB9y5WgceslXZfurY34g

3.1 Datasets

To have a chance that our findings are applicable beyond the scope of this publication, we chose a diverse set of three datasets that capture different vision tasks: two for classification (KDEF and Groceries) and one for regression (Drone Racing). Sample images and further details for each dataset can be found in the appendix (Tab. 1).

KDEF: This dataset is based on an emotion recognition dataset by Lundqvist et al. [1998]. Each of the images shows a male or female actor expressing one of seven emotions. Images are captured from a number of different fixed viewpoints and centered on the face. To add more diversity to the data we added small color and brightness perturbations and a random crop. Moreover, since the dataset provides few samples, we downsampled each image to a sixth of its original size.
Drone Racing: This dataset is based on the Drone Racing dataset by Delmerico et al. [2019]. We use the mDAVIS data from subsets 3, 5, 6, 9, and 10. While the original dataset provides the full pose (all six DOF), we train our feedforward networks to recover only the rotational DOF (roll, pitch, and yaw) from grayscale images. We matched the IMU data, which is sampled at 1000 Hz, to the timestamp of each grayscale image captured at 50 Hz using linear interpolation (a small sketch follows after this list). Since the images do not have multiple color channels, we did not investigate the YCbCr or PREC representations for this dataset.
Groceries: We use the Freiburg Groceries Dataset [Jund et al., 2016] and its original “test0/train0” split. Our only modifications are that we reserve a random subset of the test data for the evaluation of our hyperparameter optimization and that we reduce the resolution of the images. Each of the images has to be classified into one of 25 categories.
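Returning to the Drone Racing preprocessing mentioned above, the following is a minimal sketch (ours, with synthetic placeholder signals) of matching the 1000 Hz IMU stream to 50 Hz image timestamps by linear interpolation.

```python
# Sketch of timestamp matching via linear interpolation; signals below are synthetic placeholders.
import numpy as np

imu_t = np.arange(0.0, 1.0, 1.0 / 1000.0)            # IMU timestamps at 1000 Hz
imu_roll = np.sin(2 * np.pi * imu_t)                  # one rotational IMU channel (placeholder)
img_t = np.arange(0.0, 1.0, 1.0 / 50.0)               # grayscale image timestamps at 50 Hz

roll_targets = np.interp(img_t, imu_t, imu_roll)      # rotational target matched to each image
```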

3.2 Tcs and Empirical Entropy Estimation

To estimate the TCS, we run ridge regression (RR) on batches of images (of size 256, cf. Tab. 2) with a small regularization multiplier on different representations and feature extractions of the data. To approximate fully connected networks, we run RR on the vectorized images. To approximate convolutional networks, we first convert each image to small tiles extracted with a fixed stride and then vectorize. We record the variance of the learned weights across runs. To compute the entropy of the input, we assume a multivariate Gaussian distribution on the input variables and compute $H(x) = \tfrac{1}{2}\log\det(2\pi e\,\Sigma)$, where $\Sigma$ denotes the covariance matrix of the data under the respective feature extraction defined above. To calculate this we apply an SVD decomposition and use the sum of the log singular values to estimate the entropy.
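The sketch below is our own reconstruction of this procedure; the batch handling, the tile size and stride, the clipping of tiny singular values, and all names are assumptions. It computes the two statistics used above: the variance of ridge-regression weights across batches and the Gaussian entropy estimate obtained from the singular values of the data covariance.

```python
# Reconstruction sketch (not the authors' code); tile size, stride, and batch handling are assumed.
import numpy as np
from sklearn.linear_model import Ridge

def weight_variance_over_batches(X, y, batch_size=256, alpha=1e-6):
    """Fit ridge regression per batch of vectorized inputs and return the per-weight variance."""
    weights = []
    for start in range(0, len(X) - batch_size + 1, batch_size):
        sl = slice(start, start + batch_size)
        weights.append(Ridge(alpha=alpha).fit(X[sl], y[sl]).coef_.ravel())
    return np.var(np.stack(weights), axis=0)

def gaussian_entropy_estimate(X):
    """Entropy of X under a multivariate Gaussian assumption, via singular values of its covariance."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / max(len(X) - 1, 1)
    s = np.linalg.svd(cov, compute_uv=False)
    # H = 0.5 * (d * log(2*pi*e) + sum_i log(lambda_i)); near-zero singular values are clipped,
    # which is exactly where the estimate becomes ill-conditioned (see the appendix).
    return 0.5 * (len(s) * np.log(2.0 * np.pi * np.e) + np.sum(np.log(np.maximum(s, 1e-300))))

def to_tiles(images, tile=8, stride=8):
    """One possible tiling for the convolutional approximation: each tile becomes its own sample."""
    n, h, w = images.shape
    patches = [images[:, i:i + tile, j:j + tile].reshape(n, -1)
               for i in range(0, h - tile + 1, stride)
               for j in range(0, w - tile + 1, stride)]
    return np.concatenate(patches, axis=0)
```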

3.3 Bayesian Hyperparameter Optimization

Manually tuning hyperparameters for neural networks is both time-consuming and may introduce unwanted bias into experiments. There is a wide range of automated methods available to mitigate these flaws [Bergstra and Bengio, 2012, Hutter et al., 2015]. We utilize Bayesian optimization to find a set of suitable hyperparameters for each network. Since the initial weights of our networks are sampled from a uniform random distribution, we can expect the performance to fluctuate between runs. Due to its probabilistic approach, Bayesian optimization can account for this uncertainty [Shahriari et al., 2015]. The dimensions and their constraints were chosen to be identical for each representation but were adapted to each dataset. For an in-depth introduction to Bayesian optimization we refer to Snoek et al. [2012]. More information on our particular implementation of Bayesian optimization can be found in the appendix.
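A hedged sketch of such a batch-parallel Bayesian optimization loop using scikit-optimize; the search space, the dummy objective, and the numeric settings are placeholders, while the constant-liar batch strategy and the GP-Hedge acquisition portfolio mirror the setup detailed in the appendix.

```python
# Illustrative loop only; the search space and objective are placeholders, not the paper's settings.
from skopt import Optimizer
from skopt.space import Real, Integer, Categorical

space = [Real(1e-5, 1e-2, prior="log-uniform", name="lr"),
         Integer(1, 6, name="n_layers"),
         Categorical(["relu", "elu", "tanh"], name="activation")]

def train_and_evaluate(params):
    # Placeholder objective: in the experiments this would build, train, and validate a network,
    # returning a large artificial loss if the configuration crashes.
    lr, n_layers, _ = params
    return (1e3 * lr - 1.0) ** 2 + 0.01 * n_layers

opt = Optimizer(space, base_estimator="GP", acq_func="gp_hedge", n_initial_points=10)

for _ in range(20):                                        # optimization iterations
    batch = opt.ask(n_points=4, strategy="cl_min")         # constant-liar batch proposal
    opt.tell(batch, [train_and_evaluate(p) for p in batch])
```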

3.4 Network Architectures

We investigated three basic architectures. Our optimization was constrained to the same domain for all representations of a dataset for each of the network architectures. The full list of constraints for each network and dataset can be found in the code accompanying this paper. The initial learning rate was optimized for all architectures.

Convolutional Networks: We use a variable number of convolutional layers [LeCun et al., 1998], with or without maxpooling layers between them, followed by a variable number of fully connected layers. Moreover, kernel sizes and number of filters are also parametrized.
Dense Neural Networks: These networks consist of blocks of variably sized fully connected layers. We optimize the activation function after each layer as a categorical variable.
ResNets: The Residual Neural Networks or ResNets [He et al., 2015] are made up of a variable number of residual layers. Each of the layers contains a variable number of convolutions which themselves are fully parametrized.

3.5 Representations

While one could select an arbitrary number of representations for images, we limit ourselves to five which have previously been used in image processing. Our focus is not on finding the best representation but on showing how sensitive learning processes are to the representation of the input data.

RGB: RGB is likely the representation we use most in our everyday life. Almost all modern displays and cameras capture or display data as an overlay of red, green and blue color channels. For simplicity we refer to the grayscale images of the Drone Racing dataset as being “RGB”.
YCbCr: The YCbCr representation is used in a number of different image storage formats such as JPEG and MPEG [David S. Taubman, 2013]. It represents the image as a combination of a luminance and two chrominance channels. It is useful for compression because the human eye is much less sensitive to changes in chrominance than it is to changes in luminance.
PREC: This representation partially decorrelates the color channels of the image based on previous work on preconditioning convolutional neural networks [Liu et al., 2018]. For an image with multiple channels, we first calculate the expected value of the covariance between the channels over all pixels and images in the dataset, where the channel vector at each pixel of each image is treated as one sample. We then solve the eigenvalue problem for this channel covariance, obtaining real eigenvalues and the corresponding eigenvectors; a small constant is added for numerical stability. The eigenvector matrix is stored in memory and consecutively applied to the channel vector at every pixel of each image in the dataset, which yields decorrelated channels (see the sketch after this list).
DCT: The 2D type II discrete cosine transform (DCT) is a frequency-based representation. Low frequency coefficients are located in the top left corner of the representation and horizontal/vertical frequencies increase towards the right or down, respectively. This representation applies the DCT transform to each of the channels separately. DCT has been used extensively for face detection [Hafed and Levine, 2001, Pan et al., 2000] and all its coefficients bar one are invariant to uniform changes in brightness [Er et al., 2005].
Block DCT: Unlike the DCT representation, we apply the discrete cosine transform to non-overlapping patches of each of the channels. This exact type of DCT is widely used in JPEG compression, where the coefficients of the DCT are quantized and Huffman encoded [David S. Taubman, 2013].
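Since the PREC and blockwise DCT transforms are the easiest to misread from prose alone, here is a small code sketch of both (our own; the epsilon, the 8x8 block size, and the choice to only rotate into the eigenbasis, without rescaling by the eigenvalues, are assumptions).

```python
# Illustrative sketch only; block size, epsilon, and function names are our assumptions.
import numpy as np
from scipy.fft import dctn

def prec_transform(images, eps=1e-6):
    """Rotate each pixel's channel vector into the eigenbasis of the dataset-average channel covariance."""
    pixels = images.reshape(-1, images.shape[-1])                 # (N*H*W, C) channel vectors
    cov = np.cov(pixels, rowvar=False) + eps * np.eye(images.shape[-1])
    _, U = np.linalg.eigh(cov)                                    # eigenvectors of the channel covariance
    return images @ U                                             # applied consecutively to every image

def block_dct(channel, block=8):
    """Type-II DCT on non-overlapping blocks of a single channel, as in JPEG."""
    h = (channel.shape[0] // block) * block
    w = (channel.shape[1] // block) * block
    out = np.zeros((h, w))
    for i in range(0, h, block):
        for j in range(0, w, block):
            out[i:i + block, j:j + block] = dctn(channel[i:i + block, j:j + block],
                                                 type=2, norm="ortho")
    return out
```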

3.6 Training

Each network was trained using the Adam optimizer [Kingma and Ba, 2014]. Training was terminated after 30 epochs, or earlier once more than 7 consecutive epochs passed without a decrease in the loss on the validation set.

4 Discussion and Results

Figure 2: Linear and convolutional average batch training loss scores for all datasets and representations. In particular, this highlights the difficulty of predicting the output from the DCT representation.
Figure 3: TCS values (shown in nats) correlate with the 10 best evaluation losses for each dataset and the representations RGB, YCbCr, Block DCT, PREC, and DCT (green, red, orange, violet, blue). Scores associated with the DCT representation are disregarded due to insufficient training performance: better scores relate to a higher confidence in the predictions, which is only worthwhile if the training performance is satisfactory.
Figure 4: Performance of representations on the datasets for different neural network types.

After evaluating a total of 5753 networks, 3702 of which finished training, we have verified our intuition that representations are important. We see a fairly consistent pattern over all datasets of RGB and YCbCr being the best representations, followed by PREC and blockwise DCT, while DCT falls short (see Fig. 4). Moreover, we observe the great importance of hyperparameters in the spread of results for each network architecture. Had we chosen to hand-tune our parameters and accidentally picked a very poorly performing network for the RGB representation, we could have come to the conclusion that DCT achieves better results on all datasets. As predicted, the performance of a representation also depends greatly on the network architecture.

The TCS scores we proposed show a strong correlation with the results obtained from the architecture search when the training batch loss is used as a first filter criterion. They can even predict the comparatively small differences between the remaining representations with reasonable accuracy (see Fig. 3). Overall, we observe the significant, correlated effect the representation has on TCS scores, estimated entropy (see Tab. 2), and performance.

5 Conclusion

This work started by trying to evaluate the effect different representations can have on task complexity. To achieve this we utilized a task complexity heuristic, TCS, a score that takes both the desired training performance and the confidence of such an estimate into account. Relevant to the confidence term of the task complexity, we derived a relation connecting the mutual information between weights and dataset with the representation-dependent expected coding length. As outlined, making full use of this perspective will depend to a great deal on how well we understand the candidate distributions of the network architectures and representations that are currently in use. If the score proves itself in future work, it may serve as a useful tool for automatic representation or architecture search.

Appendix

Results Table

Dataset     Split (train/val/test)   Loss
Drone       10806/1403/1339          L1
Groceries   6409/1018/1016           NLL
KDEF        3931/470/497             NLL
Table 1: Dataset overview.
                              linear                           convolutional
Dataset     Representation    Train loss   ln TCS   Entropy    Train loss   ln TCS   Entropy
Groceries   Block DCT         1.9e-10      -16.07   443.7      1.5e-12      -18.69   598.5
            PREC              1.9e-09      -16.01   376.7      1.5e-11      -18.65   531.9
            DCT               9.9e-02      -15.39   -187.9     3.6e-03      -18.13   -61.43
            RGB               1.8e-08      -15.95   346.7      1.6e-10      -18.61   501.1
            YCbCr             1.4e-07      -15.91   282.3      1.2e-09      -18.58   437.1
Drone       Block DCT         3.4e-11      -14.68   480.5      2.3e-13      -17.37   638.7
            DCT               1.0e-01      -13.95   -220.4     5.0e-03      -16.79   -79.34
            RGB               3.9e-06      -14.44   344.0      2.2e-08      -17.22   500.86
KDEF        Block DCT         3.1e-10      -14.49   422.4      4.0e-04      -17.16   576.8
            PREC              3.9e-04      -14.41   341.3      7.8e-04      -17.10   496.4
            DCT               5.5e-01      -13.95   -194.9     4.0e-01      -16.60   -189.2
            RGB               1.2e-06      -14.24   225.3      9.1e-04      -16.98   381.4
            YCbCr             2.5e-05      -14.13   152.3      2.0e-07      -16.90   307.6
Table 2: Estimated linear and convolutional average training loss on batches of size 256, natural logarithm of TCS values, and input data entropy (assuming a normal distribution) for all representations and datasets.

To narrate Tab. 2, we note that estimating small entropy values is very error-prone. When assuming a normal distribution, the entropy is calculated via the sum of the logarithms of the eigenvalues of the covariance matrix of the data. The conditioning of the logarithm, however, gets worse the closer its argument is to zero. Eigenvalues close enough to zero are thus likely to carry a significant error when used for the entropy computation, which is particularly prevalent in the DCT representation.
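A tiny numerical illustration (ours) of this conditioning issue: a fixed absolute error in an eigenvalue barely moves the log term for moderate eigenvalues, but dominates it for eigenvalues close to zero, as occurs for the DCT representation.

```python
# Effect of a fixed absolute eigenvalue error on the log term of the entropy estimate.
import numpy as np

abs_error = 1e-9                                       # fixed absolute perturbation of an eigenvalue
for lam in (1e-1, 1e-6, 1e-12):
    delta = np.log(lam + abs_error) - np.log(lam)      # induced error in the entropy's log term
    print(f"eigenvalue {lam:.0e}: log-term error {delta:.2e} nats")
```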

Bayesian Optimization Supplementary Information

While there have been some theoretical proposals that would allow Bayesian optimization to be run in parallel asynchronously [Snoek et al., 2012], we restrict ourselves to a simple form of batch parallelization, evaluating several points in parallel. We acquire the points by using the minimum constant liar strategy [Chevalier and Ginsbourger, 2012]. The base estimator is first used after 10 points have been evaluated. Our acquisition function is chosen at each iteration from a portfolio of acquisition functions using a GP-Hedge strategy, as proposed in [Brochu et al., 2010]. We optimize the acquisition function by sampling it at a fixed number of points for categorical dimensions and with 20 iterations of L-BFGS [Liu and Nocedal, 1989] for continuous dimensions.

Since the optimizer has no information about the geometric properties of the network or whether the network can fit in the system's memory, some of the generated networks cannot be trained. Two common modes of failure were too many pooling layers (resulting in a layer size smaller than the kernel of subsequent layers) and running out of memory, which was especially prevalent for dense networks. In our experiments we observed that roughly 35% of all networks did not complete training. To stop the Bayesian optimizer from evaluating these points again, we reported a large, artificially generated loss to the optimizer at the point where the network crashed. The magnitude of this loss was chosen manually for each dataset to be roughly one order of magnitude larger than the expected loss. The influence of this practice will have to be investigated in future research.

Representation Samples

Figure 5: Image from the Groceries dataset in various representations. From left to right: RGB, YCbCr, PREC, DCT, blockwise DCT (cropped to show the relevant coefficients, contrast boosted).

Dataset Samples

Figure 6: Samples from each of the datasets. From left to right: KDEF, Groceries and Drone Racing.

Footnotes

  1. For a clear illustration, see for example
    http://astronomy.nmsu.edu/geas/lectures/lecture11/slide01.html

References

  1. E Theodossiou, E Danezis, VN Manimanis, and E-M Kalyva. From pythagoreans to kepler: the dispute between the geocentric and the heliocentric systems. Journal of Astronomical History and Heritage, 5:89–98, 2002.
  2. Beth Logan et al. Mel frequency cepstral coefficients for music modeling. In ISMIR, volume 270, pages 1–11, 2000.
  3. Rein-Lien Hsu, Mohamed Abdel-Mottaleb, and Anil K Jain. Face detection in color images. IEEE transactions on pattern analysis and machine intelligence, 24(5):696–706, 2002.
  4. Christine Irene Podilchuk and Xiaoyu Zhang. Face recognition using dct-based feature vectors, September 1 1998. US Patent 5,802,208.
  5. Ole Christensen et al. An introduction to frames and Riesz bases. Springer, 2016.
  6. Stéphane Mallat. A wavelet tour of signal processing. Elsevier, 1999.
  7. Emmanuel J Candes and David L Donoho. Curvelets: A surprisingly effective nonadaptive representation for objects with edges. Technical report, Stanford Univ Ca Dept of Statistics, 2000.
  8. Lionel Gueguen, Alex Sergeev, Ben Kadlec, Rosanne Liu, and Jason Yosinski. Faster neural networks straight from jpeg. In Advances in Neural Information Processing Systems, pages 3933–3944, 2018.
  9. R. N. Martha Grassmann and Jessica Burgner-Kahrs. On the merits of joint space and orientation representations in learning the forward kinematics in se(3). In Robotics: Science and Systems, 2019.
  10. Adam Gaier and David Ha. Weight agnostic neural networks. arXiv preprint arXiv:1906.04358, 2019.
  11. Chris Thornton. Measuring the difficulty of specific learning problems. Connection Science, 7(1):81–92, 1995.
  12. J. R. Quinlan. Induction of decision trees. Machine Learning, 1(1):81–106, Mar 1986. ISSN 1573-0565. doi: 10.1007/BF00116251. URL https://doi.org/10.1007/BF00116251.
  13. J Ross Quinlan. C4. 5: programs for machine learning. Elsevier, 2014.
  14. Tin Kam Ho and Mitra Basu. Complexity measures of supervised classification problems. IEEE Transactions on Pattern Analysis & Machine Intelligence, 24(3):289–300, 2002.
  15. Isabelle Guyon and André Elisseeff. An introduction to variable and feature selection. Journal of machine learning research, 3(Mar):1157–1182, 2003.
  16. Nicolai Meinshausen and Peter Bühlmann. Stability selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(4):417–473, 2010.
  17. David A McAllester. Pac-bayesian stochastic model selection. Machine Learning, 51(1):5–21, 2003.
  18. Alessandro Achille and Stefano Soatto. Where is the information in a deep neural network? CoRR, abs/1905.12213, 2019. URL http://arxiv.org/abs/1905.12213.
  19. Ling Li. Data complexity in machine learning and novel classification algorithms. PhD thesis, California Institute of Technology, 2006.
  20. Naftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method. arXiv preprint physics/0004057, 2000.
  21. Alessandro Achille and Stefano Soatto. Emergence of invariance and disentanglement in deep representations. The Journal of Machine Learning Research, 19(1):1947–1980, 2018.
  22. Thomas M Cover and Joy A Thomas. Elements of information theory. John Wiley & Sons, 2012.
  23. Jianqing Fan, Shaojun Guo, and Ning Hao. Variance estimation using refitted cross-validation in ultrahigh dimensional regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74(1):37–65, 2012.
  24. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
  25. Bing Xu, Naiyan Wang, Tianqi Chen, and Mu Li. Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:1505.00853, 2015.
  26. E. Lundqvist, Flykt, and Öhman. The karolinska directed emotional faces (kdef). CD ROM from Department of Clinical Neuroscience, Psychology section, ISBN 91-630-7164-9, 1998.
  27. Jeffrey Delmerico, Titus Cieslewski, Henri Rebecq, Matthias Faessler, and Davide Scaramuzza. Are we ready for autonomous drone racing? the uzhfpv drone racing dataset. In IEEE Int. Conf. Robot. Autom.(ICRA), 2019.
  28. Philipp Jund, Nichola Abdo, Andreas Eitel, and Wolfram Burgard. The freiburg groceries dataset. arXiv preprint arXiv:1611.05799, 2016.
  29. James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb):281–305, 2012.
  30. Frank Hutter, Jörg Lücke, and Lars Schmidt-Thieme. Beyond manual tuning of hyperparameters. KI - Künstliche Intelligenz, 29(4):329–337, Nov 2015. ISSN 1610-1987. doi: 10.1007/s13218-015-0381-0.
  31. Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P Adams, and Nando De Freitas. Taking the human out of the loop: A review of bayesian optimization. Proceedings of the IEEE, 104(1):148–175, 2015.
  32. Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical bayesian optimization of machine learning algorithms. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 2951–2959. Curran Associates, Inc., 2012. URL http://papers.nips.cc/paper/4522-practical-bayesian-optimization-of-machine-learning-algorithms.pdf.
  33. Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  34. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. URL http://arxiv.org/abs/1512.03385.
  35. Michael W. Marcellin David S. Taubman. JPEG2000 Image Compression Fundamentals, Standards and Practice. Springer, Boston, MA, 2013. ISBN 978-1-4615-0799-4.
  36. Xialei Liu, Marc Masana, Luis Herranz, Joost Van de Weijer, Antonio M Lopez, and Andrew D Bagdanov. Rotate your networks: Better weight consolidation and less catastrophic forgetting. In 2018 24th International Conference on Pattern Recognition (ICPR), pages 2262–2268. IEEE, 2018.
  37. Ziad M. Hafed and Martin D. Levine. Face recognition using the discrete cosine transform. International Journal of Computer Vision, 43(3):167–188, Jul 2001. ISSN 1573-1405. doi: 10.1023/A:1011183429707. URL https://doi.org/10.1023/A:1011183429707.
  38. Zhengjun Pan, A. G. Rust, and H. Bolouri. Image redundancy reduction for neural network classification using discrete cosine transforms. In Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium, volume 3, pages 149–154 vol.3, July 2000. doi: 10.1109/IJCNN.2000.861296.
  39. Meng Joo Er, W. Chen, and Shiqian Wu. High-speed face recognition based on discrete cosine transform and rbf neural networks. IEEE Transactions on Neural Networks, 16(3):679–691, May 2005. ISSN 1045-9227. doi: 10.1109/TNN.2005.844909.
  40. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2014.
  41. Clément Chevalier and David Ginsbourger. Fast Computation of the Multi-points Expected Improvement with Applications in Batch Selection. working paper or preprint, October 2012. URL https://hal.archives-ouvertes.fr/hal-00732512.
  42. Eric Brochu, Matthew D. Hoffman, and Nando de Freitas. Hedging strategies for bayesian optimization. CoRR, abs/1009.5419, 2010. URL http://arxiv.org/abs/1009.5419.
  43. Dong C. Liu and Jorge Nocedal. On the limited memory bfgs method for large scale optimization. Mathematical Programming, 45:503–528, 1989.