On the Expressive Power of Deep Neural Networks

Maithra Raghu    Ben Poole    Jon Kleinberg    Surya Ganguli    Jascha Sohl-Dickstein
Abstract

We propose a new approach to the problem of neural network expressivity, which seeks to characterize how structural properties of a neural network family affect the functions it is able to compute. Our approach is based on an interrelated set of measures of expressivity, unified by the novel notion of trajectory length, which measures how the output of a network changes as the input sweeps along a one-dimensional path. Our findings can be summarized as follows:

  1. The complexity of the computed function grows exponentially with depth. We design measures of expressivity that capture the non-linearity of the computed function. Due to how the network transforms its input, these measures grow exponentially with depth.

  2. All weights are not equal (initial layers matter more). We find that trained networks are far more sensitive to their lower (initial) layer weights: they are much less robust to noise in these layer weights, and also perform better when these weights are optimized well.

  3. Trajectory Regularization works like Batch Normalization. We find that batch norm stabilizes the learnt representation, and based on this propose a new regularization scheme, trajectory regularization.


1 Introduction

Deep neural networks have proved astoundingly effective at a wide range of empirical tasks, from image classification (Krizhevsky et al., 2012) to playing Go (Silver et al., 2016), and even modeling human learning (Piech et al., 2015).

Despite these successes, our understanding of how and why neural network architectures achieve their empirical successes is still lacking. This includes even the fundamental question of neural network expressivity: how the architectural properties of a neural network (depth, width, layer type) affect the resulting functions it can compute, and its ensuing performance.

This is a foundational question, and there is a rich history of prior work addressing expressivity in neural networks. However, it has been challenging to derive conclusions that provide both theoretical generality with respect to choices of architecture and meaningful insights into practical performance.

Indeed, the very first results on this question take a highly theoretical approach, from using functional analysis to show universal approximation results (Hornik et al., 1989; Cybenko, 1989), to analysing expressivity via comparisons to boolean circuits (Maass et al., 1994) and studying network VC dimension (Bartlett et al., 1998). While these results provided theoretically general conclusions, the shallow networks they studied are very different from the deep models that have proven so successful in recent years.

In response, several recent papers have focused on understanding the benefits of depth for neural networks (Pascanu et al., 2013; Montufar et al., 2014; Eldan and Shamir, 2015; Telgarsky, 2015; Martens et al., 2013; Bianchini and Scarselli, 2014). These results are compelling and take modern architectural changes into account, but they only show that a specific choice of weights for a deeper network results in inapproximability by a shallow (typically one or two hidden layers) network.

In particular, the goal of this new line of work has been to establish lower bounds — showing separations between shallow and deep networks — and as such they are based on hand-coded constructions of specific network weights. Even if the weight values used in these constructions are robust to small perturbations (as in (Pascanu et al., 2013; Montufar et al., 2014)), the functions that arise from these constructions tend toward extremal properties by design, and there is no evidence that a network trained on data ever resembles such a function.

This has meant that a set of fundamental questions about neural network expressivity has remained largely unanswered. First, we lack a good understanding of the “typical” case rather than the worst case in these bounds for deep networks, and consequently have no way to evaluate whether the hand-coded extremal constructions provide a reflection of the complexity encountered in more standard settings. Second, we lack an understanding of upper bounds to match the lower bounds produced by this prior work; do the constructions used to date place us near the limit of the expressive power of neural networks, or are there still large gaps? Finally, if we had an understanding of these two issues, we might begin to draw connections between network expressivity and observed performance.

Our contributions: Measures of Expressivity and their Applications

In this paper, we address this set of challenges by defining and analyzing an interrelated set of measures of expressivity for neural networks; our framework applies to a wide range of standard architectures, independent of specific weight choices. We begin our analysis at the start of training, after random initialization, and later derive insights connecting network expressivity and performance.

Our first measure of expressivity is based on the notion of an activation pattern: in a network where the units compute functions based on discrete thresholds, we can ask which units are above or below their thresholds (i.e. which units are “active” and which are not). For the range of standard architectures that we consider, the network is essentially computing a linear function once we fix the activation pattern; thus, counting the number of possible activation patterns provides a concrete way of measuring the complexity beyond linearity that the network provides. We give an upper bound on the number of possible activation patterns, over any setting of the weights. This bound is tight as it matches the hand-constructed lower bounds of earlier work (Pascanu et al., 2013; Montufar et al., 2014).

Key to our analysis is the notion of a transition, in which changing an input to a nearby input changes the activation pattern. We study the behavior of transitions as we pass the input along a one-dimensional parametrized trajectory $x(t)$. Our central finding is that the trajectory length grows exponentially in the depth of the network.

Trajectory length serves as a unifying notion in our measures of expressivity, and it leads to insights into the behavior of trained networks. Specifically, we find that the exponential growth in trajectory length as a function of depth implies that small adjustments in parameters lower in the network induce larger changes than comparable adjustments higher in the network. We demonstrate this phenomenon through experiments on MNIST and CIFAR-10, where the network displays much less robustness to noise in the lower layers, and better performance when they are trained well. We also explore the effects of regularization methods on trajectory length as the network trains and propose a less computationally intensive method of regularization, trajectory regularization, that offers the same performance as batch normalization.

The contributions of this paper are thus:

  1. Measures of expressivity: We propose easily computable measures of neural network expressivity that capture the expressive power inherent in different neural network architectures, independent of specific weight settings.

  2. Exponential trajectories: We find an exponential depth dependence displayed by these measures, through a unifying analysis in which we study how the network transforms its input by measuring trajectory length.

  3. All weights are not equal (the lower layers matter more): We show how these results on trajectory length suggest that optimizing weights in lower layers of the network is particularly important.

  4. Trajectory Regularization: Based on understanding the effect of batch norm on trajectory length, we propose a new method of regularization, trajectory regularization, that offers the same advantages as batch norm and is computationally more efficient.

In prior work (Poole et al., 2016), we studied the propagation of Riemannian curvature through random networks by developing a mean field theory approach. Here, we take an approach grounded in computational geometry, presenting measures with a combinatorial flavor and exploring their consequences during and after training.

2 Measures of Expressivity

Given a neural network of a certain architecture (some depth, width, layer types), we have an associated function $F_W(x)$, where $x$ is an input and $W$ represents all the parameters of the network. Our goal is to understand how the behavior of $F_W(x)$ changes as $W$ changes, for values of $W$ that we might encounter during training, and across inputs $x$.

The first major difficulty comes from the high dimensionality of the input. Precisely quantifying the properties of $F_W$ over the entire input space is intractable. As a tractable alternative, we study simple one dimensional trajectories through input space. More formally:

Definition: Given two points, $x_0, x_1 \in \mathbb{R}^m$, we say $x(t)$ is a trajectory (between $x_0$ and $x_1$) if $x(t)$ is a curve parametrized by a scalar $t \in [0, 1]$, with $x(0) = x_0$ and $x(1) = x_1$.

Simple examples of a trajectory would be a line ($x(t) = t x_1 + (1 - t) x_0$) or a circular arc ($x(t) = \cos(\pi t / 2)\, x_0 + \sin(\pi t / 2)\, x_1$), but in general $x(t)$ may be more complicated, and potentially not expressible in closed form.
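For concreteness, the two example trajectories can be written down directly; the numpy sketch below is purely illustrative (the helper names and the specific arc parametrization are our own choices, not code from the experiments):

```python
import numpy as np

def line_trajectory(x0, x1):
    """Line: x(t) = (1 - t) * x0 + t * x1, so x(0) = x0 and x(1) = x1."""
    return lambda t: (1.0 - t) * x0 + t * x1

def arc_trajectory(x0, x1):
    """One possible circular arc: x(t) = cos(pi t / 2) * x0 + sin(pi t / 2) * x1."""
    return lambda t: np.cos(np.pi * t / 2) * x0 + np.sin(np.pi * t / 2) * x1

# Example: a trajectory between two random inputs in R^784 (MNIST-sized).
x0, x1 = np.random.randn(784), np.random.randn(784)
x = arc_trajectory(x0, x1)
assert np.allclose(x(0.0), x0) and np.allclose(x(1.0), x1)
```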

Armed with this notion of trajectories, we can begin to define measures of expressivity of a network over trajectories $x(t)$.

2.1 Neuron Transitions and Activation Patterns

In (Montufar et al., 2014) the notion of a “linear region” is introduced. Given a neural network with piecewise linear activations (such as ReLU or hard tanh), the function it computes is also piecewise linear, a consequence of the fact that composing piecewise linear functions results in a piecewise linear function. So one way to measure the “expressive power” of different architectures is to count the number of linear pieces (regions), which determines how nonlinear the function is.

In fact, a change in linear region is caused by a neuron transition in the output layer. More precisely:

Definition For fixed $W$, we say a neuron with a piecewise linear activation transitions between inputs $x$ and $x + \delta$ if its activation function switches linear region between $x$ and $x + \delta$.

So a ReLU transition would be given by a neuron switching from off to on (or vice versa), and for hard tanh by switching between saturation at $-1$, its linear middle region, and saturation at $+1$. For any generic trajectory $x(t)$, we can thus define $\mathcal{T}(F_W(x(t)))$ to be the number of transitions undergone by output neurons (i.e. the number of linear regions) as we sweep the input $x(t)$. Instead of just concentrating on the output neurons however, we can look at this pattern over the entire network. We call this an activation pattern:

Definition We can define $\mathcal{A}(F_W, x)$ to be the activation pattern – a string of the form $\{0, 1\}^{\text{num neurons}}$ (for ReLUs) or $\{-1, 0, 1\}^{\text{num neurons}}$ (for hard tanh) – encoding the linear region of the activation function of every neuron, for an input $x$ and weights $W$.

Overloading notation slightly, we can also define (similarly to transitions) $\mathcal{A}(F_W(x(t)))$ as the number of distinct activation patterns as we sweep $x$ along $x(t)$. As each distinct activation pattern corresponds to a different linear function of the input, this combinatorial measure captures how much more expressive $F_W$ is than a simple linear mapping.
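As an illustration of these definitions (not the paper's experimental code; the network sizes and weight scales below are arbitrary choices), one can count transitions and distinct activation patterns along a discretized trajectory for a small random ReLU network:

```python
import numpy as np

def activation_patterns(x_points, weights, biases):
    """Return the ReLU activation pattern (tuple of 0/1 per neuron, over all
    layers) for each input in x_points."""
    patterns = []
    for x in x_points:
        h, bits = x, []
        for W, b in zip(weights, biases):
            pre = W @ h + b
            bits.extend((pre > 0).astype(int).tolist())  # which units are "on"
            h = np.maximum(pre, 0.0)                     # ReLU
        patterns.append(tuple(bits))
    return patterns

# Small random network: 2 inputs, three hidden layers of width 8.
rng = np.random.default_rng(0)
widths = [2, 8, 8, 8]
weights = [rng.normal(0, 2.0 / np.sqrt(m), size=(k, m)) for m, k in zip(widths[:-1], widths[1:])]
biases = [rng.normal(0, 0.5, size=k) for k in widths[1:]]

# Sweep a line between two inputs and count transitions / distinct patterns.
x0, x1 = rng.normal(size=2), rng.normal(size=2)
ts = np.linspace(0, 1, 2000)
pats = activation_patterns([(1 - t) * x0 + t * x1 for t in ts], weights, biases)
transitions = sum(p != q for p, q in zip(pats, pats[1:]))
print("distinct activation patterns:", len(set(pats)), "transitions:", transitions)
```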

Returning to Montufar et al. (2014), they provide a construction, i.e. a specific set of weights $W$, that results in an exponential increase of linear regions with the depth of the architecture. They also appeal to Zaslavsky’s theorem (Stanley, 2011) from the theory of hyperplane arrangements to show that a shallow network, i.e. one hidden layer, with the same number of parameters as a deep network, has a much smaller number of linear regions than the number achieved by their choice of weights for the deep network.

More formally, letting $F_{\text{shallow}}$ be a fully connected network with one hidden layer, and $F_{\text{deep}}$ a fully connected network with the same number of parameters but $n$ hidden layers, they show that the maximal number of linear regions achievable by $F_{\text{shallow}}$ is exponentially (in $n$) smaller than the number achieved by their weight construction for $F_{\text{deep}}$.

We derive a much more general result by considering the ‘global’ activation patterns over the entire input space, and prove that for any fully connected network, with any number of hidden layers, we can upper bound the number of linear regions it can achieve, over all possible weight settings $W$. This upper bound is asymptotically tight, matched by the construction given in (Montufar et al., 2014). Our result can be written formally as:

Theorem 1.

(Tight) Upper Bound for Number of Activation Patterns Let $F_W$ denote a fully connected network with $n$ hidden layers of width $k$, and inputs in $\mathbb{R}^m$. Then the number of activation patterns $\mathcal{A}(F_W)$ is upper bounded by $O(k^{mn})$ for ReLU activations, and $O((2k)^{mn})$ for hard tanh.

From this we can derive a chain of inequalities. Firstly, from the theorem above we find an upper bound of $O(k^{mn})$ over all $W$, i.e.

$$\mathcal{A}(F_W(\mathbb{R}^m)) \le O(k^{mn}). \qquad (*)$$

Next, suppose we have $N = kn$ neurons in total. Then we want to compare (for wlog ReLUs) quantities like $k^{mn}$ for architectures with the same total $N$.

But $k^{mn} = k^{mN/k} = \left(k^{1/k}\right)^{mN}$, and so, noting that the maximum of $k^{1/k}$ (for $k \in \mathbb{Z}_{>0}$) is attained at $k = 3$, we see, in comparison to (*), that for a fixed budget of neurons the bound is maximized by deep, narrow architectures: increasing the depth $n$ raises the bound far faster than increasing the width $k$.

We prove this via an inductive proof on regions in a hyperplane arrangement. The proof can be found in the Appendix. As noted in the introduction, this result differs from earlier lower-bound constructions in that it is an upper bound that applies to all possible sets of weights. Via our analysis, we also prove

[Figure: figures/hyper_fin_027.pdf]

Figure 1: Deep networks with piecewise linear activations subdivide input space into convex polytopes. We take a three hidden layer ReLU network, with input $x \in \mathbb{R}^2$, and four units in each layer. The left pane shows activations for the first layer only. As the input is in $\mathbb{R}^2$, each neuron in the first hidden layer has an associated line in $\mathbb{R}^2$, depicting its activation boundary. The left pane thus has four such lines. For the second hidden layer each neuron again has a line in input space corresponding to on/off, but this line is different for each region described by the first layer activation pattern. So in the centre pane, which shows activation boundary lines corresponding to second hidden layer neurons in green (and first hidden layer in black), we can see the green lines ‘bend’ at the black boundaries. (The reason for this bending becomes apparent through the proof of Theorem 2.) Finally, the right pane adds the on/off boundaries for neurons in the third hidden layer, in purple. These lines can bend at both black and green boundaries, as the image shows. This final set of convex polytopes corresponds to all activation patterns for this network (with its current set of weights) over the unit square, with each polytope representing a different linear function.
Theorem 2.

Regions in Input Space Given the corresponding function $F_W$ of a neural network with ReLU or hard tanh activations, the input space is partitioned into convex polytopes, with $F_W$ corresponding to a different linear function on each region.

This result is of independent interest for optimization – a linear function over a convex polytope results in a well behaved loss function and an easy optimization problem. Understanding the density of these regions during the training process would likely shed light on properties of the loss surface, and suggest improved optimization methods. A picture of a network’s regions is shown in Figure 1.

2.1.1 Empirically Counting Transitions
[Figures: figures/MNIST_transitions.pdf, figures/MNIST_transitions_width.pdf]
Figure 2: The number of transitions seen for fully connected networks of different widths, depths and initialization scales, with a circular trajectory between MNIST datapoints. The number of transitions grows exponentially with the depth of the architecture, as seen in (left). The same rate of growth is not seen with increasing architecture width, plotted in (right). There is a surprising dependence on the scale of initialization, explained in Section 2.2.

We empirically tested the growth of the number of activations and transitions as we varied the input along a trajectory $x(t)$, to understand their behavior. We found that for bounded non-linearities, especially tanh and hard-tanh, not only do we observe exponential growth with depth (as hinted at by the upper bound), but the scale of parameter initialization also affects the observations (Figure 2). We also experimented with sweeping the weights $W$ of a layer through a one dimensional trajectory $W(t)$, and counting the different labellings output by the network. This ‘dichotomies’ measure is discussed further in the Appendix, and also exhibits the same growth properties, Figure 14.

2.2 Trajectory Length

[Figure: Tanh_trajectory_expansion_fig.jpg]

Figure 3: Picture showing a trajectory increasing with the depth of a network. We start off with a circular trajectory (left most pane), and feed it through a fully connected tanh network. The pane second from left shows the image of the circular trajectory (projected down to two dimensions) after being transformed by the first hidden layer. Subsequent panes show projections of the latent image of the circular trajectory after being transformed by more hidden layers. The final pane shows the trajectory after being transformed by all the hidden layers.

In fact, there turns out to be a reason for the exponential growth with depth, and the sensitivity to initialization scale. Returning to our definition of trajectory, we can define an immediately related quantity, trajectory length:

Definition: Given a trajectory, $x(t)$, we define its length, $l(x(t))$, to be the standard arc length:

$$l(x(t)) = \int_t \left\| \frac{dx(t)}{dt} \right\| dt$$

Intuitively, the arc length breaks up into infinitesimal intervals and sums together the Euclidean length of these intervals.
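In a discretized setting this is simply the sum of segment lengths; a minimal numpy sketch (our own helper, for illustration only):

```python
import numpy as np

def trajectory_length(points):
    """Discrete arc length: sum of Euclidean lengths of consecutive segments.
    `points` has shape (T, d): samples x(t_0), ..., x(t_{T-1}) along the trajectory."""
    return np.linalg.norm(np.diff(points, axis=0), axis=1).sum()

# Sanity check: a unit circle traversed once has length close to 2*pi.
ts = np.linspace(0, 2 * np.pi, 10000)
circle = np.stack([np.cos(ts), np.sin(ts)], axis=1)
print(trajectory_length(circle))  # ~6.283
```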

If we let $F_W$ denote, as before, a fully connected network with $n$ hidden layers each of width $k$, initialized with weights drawn iid from $N(0, \sigma_w^2/k)$ (the $1/k$ accounting for input scaling, as is typical), and biases drawn iid from $N(0, \sigma_b^2)$, we find that:

Theorem 3.

Bound on Growth of Trajectory Length Let $F_W$ be a ReLU or hard tanh random neural network and $x(t)$ a one dimensional trajectory with $x(t + \delta)$ having a non-trivial perpendicular component to $x(t)$ for all $t, \delta$ (i.e. not a line). Then, defining $z^{(d)}(x(t)) = z^{(d)}(t)$ to be the image of the trajectory in layer $d$ of the network, we have

  • $\mathbb{E}\!\left[ l\big(z^{(d)}(t)\big) \right] \ge O\!\left( \dfrac{\sigma_w \sqrt{k}}{\sqrt{k+1}} \right)^{\! d} l(x(t))$ for ReLUs

  • $\mathbb{E}\!\left[ l\big(z^{(d)}(t)\big) \right] \ge O\!\left( \dfrac{\sigma_w \sqrt{k}}{\sqrt{\sigma_w^2 + \sigma_b^2 + k\sqrt{\sigma_w^2 + \sigma_b^2}}} \right)^{\! d} l(x(t))$ for hard tanh

That is, $l(z^{(d)}(t))$ grows exponentially with the depth $d$ of the network, but the width $k$ only appears in the base of the exponent. This bound is in fact tight in the limits of large $\sigma_w$ and $k$.
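The growth is easy to observe numerically. The sketch below is illustrative only; the width, depth, and the fan-in weight scaling are our assumptions, not the paper's experimental configuration. It propagates a circular trajectory through a random hard-tanh network and reports the arc length of its image at each layer:

```python
import numpy as np

def hard_tanh(x):
    return np.clip(x, -1.0, 1.0)

def layerwise_lengths(x_points, depth, width, sigma_w, sigma_b, seed=0):
    """Propagate trajectory samples through a random hard-tanh net and return
    the arc length of the trajectory's image at every hidden layer."""
    rng = np.random.default_rng(seed)
    h = np.asarray(x_points)                      # shape (T, d_in)
    lengths = []
    for _ in range(depth):
        W = rng.normal(0, sigma_w / np.sqrt(h.shape[1]), size=(h.shape[1], width))
        b = rng.normal(0, sigma_b, size=width)
        h = hard_tanh(h @ W + b)
        lengths.append(np.linalg.norm(np.diff(h, axis=0), axis=1).sum())
    return lengths

# Circular trajectory between two random inputs, swept by t in [0, 1].
d_in, T = 32, 2000
rng = np.random.default_rng(1)
x0, x1 = rng.normal(size=d_in), rng.normal(size=d_in)
ts = np.linspace(0, 1, T)[:, None]
traj = np.cos(np.pi * ts / 2) * x0 + np.sin(np.pi * ts / 2) * x1

for d, length in enumerate(layerwise_lengths(traj, depth=8, width=100, sigma_w=4.0, sigma_b=1.0), 1):
    print(f"layer {d}: length {length:.1f}")   # grows roughly geometrically with depth
```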

[Figure: figures/CIFAR10_trajectory_growth.pdf]

Figure 4: We look at trajectory growth with different initialization scales as a trajectory is propagated through a convolutional architecture for CIFAR-10, with ReLU activations. The analysis of Theorem 3 was for fully connected networks, but we see that trajectory growth holds (albeit with slightly higher scales) for convolutional architectures also. Note that the decrease in trajectory length seen at certain layers is expected, as those layers are pooling layers.

A schematic image depicting this can be seen in Figure 3 and the proof can be found in the Appendix. A rough outline is as follows: we look at the expected growth of the difference between a point $x(t)$ on the curve and a small perturbation $x(t + dt)$, from layer $d$ to layer $d + 1$. Denoting this quantity $\|\delta z^{(d)}(t)\|$, we derive a recurrence relating $\|\delta z^{(d+1)}(t)\|$ and $\|\delta z^{(d)}(t)\|$ which can be composed to give the desired growth rate.

The analysis is complicated by the statistical dependence on the image of the input $z^{(d)}(t)$. So we instead form a recursion by looking at the component of the difference perpendicular to the image of the input in that layer, i.e. $\|\delta z^{(d)}_{\perp}(t)\|$, which results in the condition on $x(t)$ in the statement.

In Figures 4 and 12, we see the growth of an input trajectory for ReLU networks on CIFAR-10 and MNIST. The CIFAR-10 network is convolutional, but we observe that these layers also result in rates of trajectory length increase similar to those of fully connected layers. We also see, as would be expected, that pooling layers act to reduce the trajectory length. We discuss upper bounds in the Appendix.

[Figure: figures/transitions_vs_lengthstats_io_cropped.pdf]

Figure 5: The number of transitions is linear in trajectory length. Here we compare the empirical number of transitions to the length of the trajectory, for different depths of a hard-tanh network. We repeat this comparison for a variety of network architectures, with different network width $k$ and weight variance $\sigma_w^2$.

For the hard tanh case (and more generally any bounded non-linearity), we can formally prove the relation between trajectory length and transitions under an assumption: assume that while we sweep the input along $x(t)$, all neurons are saturated unless transitioning between saturation endpoints, which happens very rapidly. (This is the case for e.g. large initialization scales.) Then we have:

Theorem 4.

Transitions proportional to trajectory length Let $F_W$ be a hard tanh network with $n$ hidden layers each of width $k$. And let

$$g(k, \sigma_w, \sigma_b, n) = O\!\left( \frac{\sqrt{k}\,\sigma_w}{\sqrt{\sigma_w^2 + \sigma_b^2}} \right)^{\! n}$$

Then $\mathcal{T}(F_W(x(t))) = O\big(g(k, \sigma_w, \sigma_b, n)\big)$ for $W$ initialized with weight and bias scales $\sigma_w, \sigma_b$.

Note that the expression for $g(k, \sigma_w, \sigma_b, n)$ is exactly the expression given by Theorem 3 when $\sigma_w$ is very large and dominates $\sigma_b$. We can also verify this experimentally in settings where the simplifying assumption does not hold, as in Figure 5.
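A quick numerical check of this proportionality (again illustrative; the architecture and scales below are arbitrary choices) is to count unit-level region switches and measure the image's arc length at each layer of a random hard-tanh network with a large weight scale:

```python
import numpy as np

def transitions_and_length(traj, depth, width, sigma_w, sigma_b, seed=0):
    """For each hidden layer of a random hard-tanh net, count region switches of
    that layer's units along the sweep and the arc length of the layer's image."""
    rng = np.random.default_rng(seed)
    h = np.asarray(traj)
    stats = []
    for _ in range(depth):
        W = rng.normal(0, sigma_w / np.sqrt(h.shape[1]), size=(h.shape[1], width))
        b = rng.normal(0, sigma_b, size=width)
        pre = h @ W + b
        regions = np.digitize(pre, [-1.0, 1.0])           # 0 / 1 / 2: low-saturated / linear / high-saturated
        trans = int((regions[1:] != regions[:-1]).sum())  # unit-level region switches along the sweep
        h = np.clip(pre, -1.0, 1.0)                       # hard tanh
        length = np.linalg.norm(np.diff(h, axis=0), axis=1).sum()
        stats.append((trans, length))
    return stats

rng = np.random.default_rng(2)
x0, x1 = rng.normal(size=32), rng.normal(size=32)
ts = np.linspace(0, 1, 5000)[:, None]
traj = np.cos(np.pi * ts / 2) * x0 + np.sin(np.pi * ts / 2) * x1
for d, (trans, length) in enumerate(transitions_and_length(traj, 6, 100, sigma_w=8.0, sigma_b=1.0), 1):
    print(f"layer {d}: transitions/length = {trans / length:.2f}")  # ratios stay of the same order across layers
```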

3 Insights from Network Expressivity

Here we explore the insights gained from applying our measures of expressivity, particularly trajectory length, to understand network performance. We examine the connection between expressivity and stability, and, inspired by this, propose a new method of regularization, trajectory regularization, that offers the same advantages as the more computationally intensive batch normalization.

3.1 Expressivity and Network Stability

The analysis of network expressivity offers interesting takeaways related to the parameter and functional stability of a network. From the proof of Theorem 3, we saw that a perturbation to the input would grow exponentially in the depth of the network. It is easy to see that this analysis is not limited to the input layer, but can be applied to any layer. In this form, it would say

A perturbation at a layer grows exponentially in the remaining depth after that layer.
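This effect can be seen even in a toy setting. The sketch below is not the CIFAR-10 experiment; it uses a random, untrained hard-tanh network with arbitrary sizes, perturbs the weights of one layer at a time with noise of fixed magnitude, and measures the resulting change in the network output:

```python
import numpy as np

def forward(x, weights, biases):
    h = x
    for W, b in zip(weights, biases):
        h = np.clip(h @ W + b, -1.0, 1.0)   # hard tanh layer
    return h

rng = np.random.default_rng(0)
depth, width, sigma_w = 8, 100, 3.0
weights = [rng.normal(0, sigma_w / np.sqrt(width), size=(width, width)) for _ in range(depth)]
biases = [rng.normal(0, 0.5, size=width) for _ in range(depth)]
x = rng.normal(size=(64, width))            # a batch of random inputs
clean = forward(x, weights, biases)

# Perturb one layer at a time with weight noise of the same magnitude and
# measure the resulting change in the network output.
for layer in range(depth):
    noisy = [W.copy() for W in weights]
    noisy[layer] += rng.normal(0, 0.1, size=noisy[layer].shape)
    delta = np.linalg.norm(forward(x, noisy, biases) - clean)
    print(f"perturb layer {layer}: output change {delta:.2f}")  # earlier layers typically give a larger change
```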

[Figure: figures/CIFAR10_noise.pdf]

Figure 6: We pick a single layer of a conv net trained to high accuracy on CIFAR10, and add noise of increasing magnitude to that layer’s weights, testing the network accuracy as we do so. We find that the initial (lower) layers of the network are least robust to noise – as the figure shows, adding noise of a given magnitude to the first layer results in a large drop in accuracy, while the same amount of noise added to the fifth layer barely results in a drop in accuracy. This pattern is seen for many different initialization scales, including the typical scaling used in the experiment.

This means that perturbations to weights in lower layers should be more costly than perturbations in the upper layers, due to the exponentially increasing magnitude of the noise, and should result in a much larger drop in accuracy. Figure 6, in which we train a conv network on CIFAR-10 and add noise of varying magnitudes to exactly one layer, shows exactly this.

We also find that the converse (in some sense) holds: after initializing a network, we trained a single layer at different depths in the network and found that performance increased monotonically as the trained layer was chosen lower in the network. This is shown in Figure 7 and Figure 17 in the Appendix.

[Figure: figures/Hard_tanh_acc_after_norm_final_sigma_2.pdf]

Figure 7: Demonstration of the expressive power of remaining depth on MNIST. Here we plot train and test accuracy achieved by training exactly one layer of a fully connected neural net on MNIST. The different lines are generated by varying the hidden layer chosen to train. All other layers are kept frozen after random initialization. We see that training lower hidden layers leads to better performance. The networks used hard-tanh nonlinearities, with fixed width and weight variance across runs. Note that we only train from the second hidden layer onwards, so that the number of parameters trained remains fixed.

3.2 Trajectory Length and Regularization: The Effect of Batch Normalization

Expressivity measures, especially trajectory length, can also be used to better understand the effect of regularization. One regularization technique that has been extremely successful for training neural networks is Batch Normalization (Ioffe and Szegedy, 2015).

[Figure: figures/CIFAR10_traj_length_nobn.pdf]

Figure 8: Training increases trajectory length even for typical initialization values of $\sigma_w$. Here we propagate a circular trajectory joining two CIFAR10 datapoints through a conv net without batch norm, and look at how trajectory length changes through training. We see that training causes trajectory length to increase exponentially with depth (the only exceptions being the pooling layers and the final fc layer, which halves the number of neurons). Note that at Step 0, the network is not in the exponential growth regime. We observe (discussed in Figure 9) that even networks that aren’t initialized in the exponential growth regime can be pushed there through training.

By taking measures of trajectories during training we find that, without batch norm, trajectory length tends to increase during training, as shown in Figure 8 and Figure 18 in the Appendix. In these experiments, two networks were initialized with a typical weight scale and trained to high test accuracy on CIFAR10 and MNIST. We see that in both cases, trajectory length increases as training progresses.

A surprising observation is that the network is not in the exponential growth regime at initialization for the CIFAR10 architecture (Figure 8 at Step 0). But note that, even with a smaller weight initialization, weight norms increase during training, as shown in Figure 9, pushing typically initialized networks into the exponential growth regime.

[Figure: figures/CIFAR10_training_weight_norms.pdf]

Figure 9: This figure shows how the weight scaling of a CIFAR10 network evolves during training. The network was initialized with a typical weight scale, which increases across all layers during training.

While the initial growth of trajectory length enables greater functional expressivity, large trajectory growth in the learnt representation results in an unstable representation, witnessed in Figure 6. In Figure 10 we train another conv net on CIFAR10, but this time with batch normalization. We see that the batch norm layers reduce trajectory length, helping stability.

[Figure: figures/CIFAR_traj_bn.pdf]

Figure 10: Growth of a circular trajectory between two datapoints with batch norm layers for a conv net on CIFAR10. The network was initialized with a typical weight scale. Note that the batch norm layers at Step 0 are poorly behaved due to division by a close-to-zero variance. But after just a few hundred gradient steps and continuing onwards, we see the batch norm layers (dotted lines) reduce trajectory length, stabilising the representation without sacrificing expressivity.

3.3 Trajectory Regularization

Motivated by the fact that batch normalization decreases trajectory length and hence helps stability and generalization, we consider directly regularizing on trajectory length: we replace every batch norm layer used in the conv net in Figure 10 with a trajectory regularization layer. This layer adds the trajectory length of the minibatch activations at that layer, $l(z^{(d)})$, to the loss, and then scales the outgoing activations by $a$, where $a$ is a parameter to be learnt. In implementation, we scale the additional loss term by a small constant $\lambda$ to reduce its magnitude in comparison to the classification loss. Our results, Figure 11, show that trajectory regularization and batch norm perform comparably, and considerably better than not using batch norm. One advantage of using trajectory regularization is that we don’t require different computations to be performed for train and test, enabling a more efficient implementation.
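A possible implementation of such a layer is sketched below in PyTorch. This is our reading of the description above (the coefficient value, the flattening, and the exact placement of the learned scale are assumptions), not the authors' code:

```python
import torch
import torch.nn as nn

class TrajectoryRegularization(nn.Module):
    """Drop-in replacement for a batch-norm layer, as sketched in Section 3.3:
    accumulates a penalty proportional to the trajectory length of the minibatch
    activations and scales the outgoing activations by a learned scalar."""

    def __init__(self, coeff=0.01):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(()))   # learned output scaling
        self.coeff = coeff                          # weight on the added loss term
        self.penalty = torch.zeros(())              # most recent regularization term

    def forward(self, h):
        flat = h.flatten(start_dim=1)
        # Piecewise-linear "trajectory" through the minibatch: sum of distances
        # between adjacent datapoints in the batch.
        traj_len = (flat[1:] - flat[:-1]).norm(dim=1).sum()
        self.penalty = self.coeff * traj_len
        return self.scale * h

# Usage: add layer.penalty for every such layer to the classification loss.
layer = TrajectoryRegularization()
h = torch.randn(32, 128, requires_grad=True)
out = layer(h)
loss = out.pow(2).mean() + layer.penalty   # toy downstream loss + regularizer
loss.backward()
```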

[Figure: figures/CIFAR_traj_reg.pdf]

Figure 11: We replace each batch norm layer of the CIFAR10 conv net with a trajectory regularization layer, described in Section 3.3. During training, trajectory length is easily calculated along a piecewise linear trajectory between adjacent datapoints in the minibatch. We see that trajectory regularization achieves the same performance as batch norm, albeit with slightly more train time. However, as trajectory regularization behaves the same during train and test time, it is simpler and more efficient to implement.

4 Discussion

Characterizing the expressiveness of neural networks, and understanding how expressiveness varies with parameters of the architecture, has been a challenging problem due to the difficulty in identifying meaningful notions of expressivity and in linking their analysis to implications for these networks in practice. In this paper we have presented an interrelated set of expressivity measures; we have shown tight exponential bounds on the growth of these measures in the depth of the networks, and we have offered a unifying view of the analysis through the notion of trajectory length. Our analysis of trajectories provides insights for the performance of trained networks as well, suggesting that networks in practice may be more sensitive to small perturbations in weights at lower layers. We also used this to explore the empirical success of batch norm, and developed a new regularization method – trajectory regularization.

This work raises many interesting directions for future work. At a general level, continuing the theme of ‘principled deep understanding’, it would be interesting to link measures of expressivity to other properties of neural network performance. There is also a natural connection between adversarial examples (Goodfellow et al., 2014) and trajectory length: adversarial perturbations are only a small distance away in input space, but result in a large change in classification (the output layer). Understanding how trajectories between the original input and an adversarial perturbation grow might provide insights into this phenomenon. Another direction, partially explored in this paper, is regularizing based on trajectory length. A very simple version of this was presented, but further performance gains might be achieved through more sophisticated use of this method.

Acknowledgements

We thank Samy Bengio, Ian Goodfellow, Laurent Dinh, and Quoc Le for extremely helpful discussion.

References

  • Krizhevsky et al. [2012] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
  • Silver et al. [2016] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
  • Piech et al. [2015] Chris Piech, Jonathan Bassen, Jonathan Huang, Surya Ganguli, Mehran Sahami, Leonidas J Guibas, and Jascha Sohl-Dickstein. Deep knowledge tracing. In Advances in Neural Information Processing Systems, pages 505–513, 2015.
  • Hornik et al. [1989] Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. Neural networks, 2(5):359–366, 1989.
  • Cybenko [1989] George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems, 2(4):303–314, 1989.
  • Maass et al. [1994] Wolfgang Maass, Georg Schnitger, and Eduardo D Sontag. A comparison of the computational power of sigmoid and Boolean threshold circuits. Springer, 1994.
  • Bartlett et al. [1998] Peter L Bartlett, Vitaly Maiorov, and Ron Meir. Almost linear vc-dimension bounds for piecewise polynomial networks. Neural computation, 10(8):2159–2173, 1998.
  • Pascanu et al. [2013] Razvan Pascanu, Guido Montufar, and Yoshua Bengio. On the number of response regions of deep feed forward networks with piece-wise linear activations. arXiv preprint arXiv:1312.6098, 2013.
  • Montufar et al. [2014] Guido F Montufar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. On the number of linear regions of deep neural networks. In Advances in neural information processing systems, pages 2924–2932, 2014.
  • Eldan and Shamir [2015] Ronen Eldan and Ohad Shamir. The power of depth for feedforward neural networks. arXiv preprint arXiv:1512.03965, 2015.
  • Telgarsky [2015] Matus Telgarsky. Representation benefits of deep feedforward networks. arXiv preprint arXiv:1509.08101, 2015.
  • Martens et al. [2013] James Martens, Arkadev Chattopadhya, Toni Pitassi, and Richard Zemel. On the representational efficiency of restricted boltzmann machines. In Advances in Neural Information Processing Systems, pages 2877–2885, 2013.
  • Bianchini and Scarselli [2014] Monica Bianchini and Franco Scarselli. On the complexity of neural network classifiers: A comparison between shallow and deep architectures. Neural Networks and Learning Systems, IEEE Transactions on, 25(8):1553–1565, 2014.
  • Poole et al. [2016] Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Exponential expressivity in deep neural networks through transient chaos. In Advances in neural information processing systems, pages 3360–3368, 2016.
  • Stanley [2011] Richard Stanley. Hyperplane arrangements. Enumerative Combinatorics, 2011.
  • Ioffe and Szegedy [2015] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 448–456, 2015.
  • Goodfellow et al. [2014] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. CoRR, abs/1412.6572, 2014.
  • Kershaw [1983] D. Kershaw. Some extensions of w. gautschi’s inequalities for the gamma function. Mathematics of Computation, 41(164):607–611, 1983.
  • Laforgia and Natalini [2013] Andrea Laforgia and Pierpaolo Natalini. On some inequalities for the gamma function. Advances in Dynamical Systems and Applications, 8(2):261–267, 2013.
  • Sauer [1972] Norbert Sauer. On the density of families of sets. Journal of Combinatorial Theory, Series A, 13(1):145–147, 1972.

Appendix

Here we include the full proofs from sections in the paper.

Appendix A Proofs and additional results from Section 2.1

Proof of Theorem 2
Proof.

We show inductively that $F_W$ partitions the input space into convex polytopes via hyperplanes. Consider the image of the input space under the first hidden layer. Each neuron in this layer defines hyperplane(s) on the input space: letting $W_i$ be the $i$th row of the first layer's weight matrix and $b_i$ the bias, we have the hyperplane $W_i x + b_i = 0$ for a ReLU and the hyperplanes $W_i x + b_i = \pm 1$ for a hard-tanh. Considering all such hyperplanes over neurons in the first layer, we get a hyperplane arrangement in the input space, each polytope corresponding to a specific activation pattern in the first hidden layer.

Now, assume we have partitioned our input space into convex polytopes with hyperplanes from layers $\le d - 1$. Consider a neuron $v^{(d)}_i$ in layer $d$ and a specific polytope $P$. Then the activation pattern on layers $\le d - 1$ is constant on $P$, and so the input to $v^{(d)}_i$ on $P$ is a linear function of the inputs plus some constant term, comprising the bias and the output of saturated units. Setting this expression to zero (for ReLUs) or to $\pm 1$ (for hard-tanh) again gives a hyperplane equation, but this time, the equation is only valid in $P$ (as we get a different linear function of the inputs in a different region). So the defined hyperplane(s) either partition $P$ (if they intersect $P$) or the output pattern of $v^{(d)}_i$ is also constant on $P$. The theorem then follows. ∎

This implies that any one dimensional trajectory $x(t)$ that does not ‘double back’ on itself (i.e. reenter a polytope it has previously passed through) will not repeat activation patterns. In particular, after seeing a transition (crossing a hyperplane to a different region in input space) we will never return to the region we left. A simple example of such a trajectory is a straight line:

Corollary 1.

Transitions and Output Patterns in an Affine Trajectory For any affine one dimensional trajectory $x(t) = x_0 + t(x_1 - x_0)$ input into a neural network $F_W$, we partition $t \in [0, 1]$ into intervals every time a neuron transitions. Every such interval has a unique network activation pattern on $F_W$.

Generalizing from a one dimensional trajectory, we can ask how many regions are achieved over the entire input space – i.e. how many distinct activation patterns are seen? We first prove a bound on the number of regions formed by $l$ hyperplanes in $\mathbb{R}^m$ (in a purely elementary fashion, unlike the proof presented in [Stanley, 2011]).

Theorem 5.

Upper Bound on Regions in a Hyperplane Arrangement Suppose we have $l$ hyperplanes in $\mathbb{R}^m$ – i.e. equations of the form $a_i^T x = b_i$, for $a_i \in \mathbb{R}^m$, $b_i \in \mathbb{R}$, $1 \le i \le l$. Let the number of regions (connected open sets bounded on some sides by the hyperplanes) be $r(l, m)$. Then

$$r(l, m) \le \sum_{i=0}^{m} \binom{l}{i}$$

Proof of Theorem 5
Proof.

Let the hyperplane arrangement be denoted $\mathcal{H}$, and let $H \in \mathcal{H}$ be one specific hyperplane. Then the number of regions in $\mathcal{H}$ is precisely the number of regions in $\mathcal{H} \setminus H$ plus the number of regions in the arrangement induced on $H$ by the other hyperplanes. (This follows from the fact that $H$ subdivides into two regions exactly those regions of $\mathcal{H} \setminus H$ that it intersects – and these are in bijection with the regions of the induced arrangement on $H$ – and does not affect any of the other regions.)

In particular, we have the recursive formula

$$r(l, m) \le r(l-1, m) + r(l-1, m-1).$$

We now induct on $l + m$ to assert the claim. The base cases ($l = 0$, where there is a single region, and $m = 1$, where $l$ points divide the line into at most $l + 1$ regions) are trivial, and assuming the claim for smaller values of $l + m$ as the induction hypothesis, we have

$$r(l, m) \le r(l-1, m) + r(l-1, m-1) \le \sum_{i=0}^{m}\binom{l-1}{i} + \sum_{i=0}^{m-1}\binom{l-1}{i} = 1 + \sum_{i=1}^{m}\left[\binom{l-1}{i} + \binom{l-1}{i-1}\right] = \sum_{i=0}^{m}\binom{l}{i}$$

where the last equality follows by the well known identity $\binom{l-1}{i} + \binom{l-1}{i-1} = \binom{l}{i}$.

This concludes the proof. ∎
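The bound can also be checked numerically in low dimensions. The sketch below is illustrative only: region counting is approximated by enumerating sign patterns over a finite grid (so regions far from the origin may be missed), and the result is compared against $\sum_{i=0}^{m}\binom{l}{i}$:

```python
import numpy as np
from math import comb

def count_regions_on_grid(A, b, lim=20.0, n=800):
    """Estimate the number of regions cut out by the lines {a_i . x = b_i} in R^2
    by counting distinct sign patterns over a fine grid."""
    xs = np.linspace(-lim, lim, n)
    X, Y = np.meshgrid(xs, xs)
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)   # (n*n, 2) grid points
    signs = (pts @ A.T - b > 0)                      # (n*n, l) sign pattern per point
    return len(set(map(tuple, signs)))

rng = np.random.default_rng(0)
l, m = 6, 2
A = rng.normal(size=(l, m))
b = rng.normal(size=l)
bound = sum(comb(l, i) for i in range(m + 1))        # Theorem 5: sum_{i<=m} C(l, i)
print("regions found on grid:", count_regions_on_grid(A, b), "upper bound:", bound)
# For generic lines in the plane the grid count approaches the bound 1 + 6 + 15 = 22.
```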

With this result, we can easily prove Theorem 1 as follows:

Proof of Theorem 1
Proof.

First consider the ReLU case. Each neuron has one hyperplane associated with it, and so by Theorem 5, the first hidden layer divides up the input space into $r(k, m) \le \sum_{i=0}^{m}\binom{k}{i} = O(k^m)$ regions.

Now consider the second hidden layer. For every region created by the first hidden layer, there is a different activation pattern in the first layer, and so (as described in the proof of Theorem 2) a different hyperplane arrangement of $k$ hyperplanes in an $m$ dimensional space, contributing at most $O(k^m)$ regions.

In particular, the total number of regions in input space as a result of the first and second hidden layers is at most $O(k^m) \cdot O(k^m) = O(k^{2m})$. Continuing in this way for each of the $n$ hidden layers gives the $O(k^{mn})$ bound.

A very similar method works for hard tanh, but here each neuron produces two hyperplanes, resulting in a bound of $O((2k)^{mn})$. ∎

Appendix B Proofs and additional results from Section 2.2

Proof of Theorem 3

B.1 Notation and Preliminary Results

Difference of points on trajectory Given $x(t)$ and $x(t + dt)$ on the trajectory, let $\delta z^{(d)}(t)$ denote the difference of their images in layer $d$, i.e. $\delta z^{(d)}(t) = z^{(d)}(x(t + dt)) - z^{(d)}(x(t))$.

Parallel and Perpendicular Components: Given vectors $x, y$, we can write $x = x_{\|} + x_{\perp}$ where $x_{\perp}$ is the component of $x$ perpendicular to $y$, and $x_{\|}$ is the component parallel to $y$. (Strictly speaking, these components should also have a subscript $y$, but we suppress it as the direction with respect to which parallel and perpendicular components are being taken will be explicitly stated.)

This notation can also be used with a matrix $W$; see Lemma 1.

Before stating and proving the main theorem, we need a few preliminary results.

Lemma 1.

Matrix Decomposition Let $x, y$ be fixed non-zero vectors, and let $W$ be a (full rank) matrix. Then we can write

$$W = {}^{\|}W_{\|} + {}^{\|}W_{\perp} + {}^{\perp}W_{\|} + {}^{\perp}W_{\perp}$$

such that the row space of $W$ is decomposed into perpendicular and parallel components with respect to $x$ (subscript on right), and the column space is decomposed into perpendicular and parallel components of $y$ (superscript on left).

Proof.

Let be rotations such that and . Now let , and let , with having non-zero term exactly , having non-zero entries exactly for . Finally, we let have non-zero entries exactly , with and have the remaining entries non-zero.

If we define and , then we see that

as have only one non-zero term, which does not correspond to a non-zero term in the components of in the equations.

Then, defining , and the other components analogously, we get equations of the form

Observation 1.

Given as before, and considering , with respect to (wlog a unit vector) we can express them directly in terms of as follows: Letting be the th row of , we have

i.e. the projection of each row in the direction of . And of course

The motivation to consider such a decomposition of is for the resulting independence between different components, as shown in the following lemma.

Lemma 2.

Independence of Projections Let $x$ be a given vector (wlog of unit norm). If $W$ is a random matrix with iid entries $W_{ij} \sim N(0, \sigma^2)$, then $W_{\|}$ and $W_{\perp}$ with respect to $x$ are independent random variables.

Proof.

There are two possible proof methods:

  1. We use the rotational invariance of random Gaussian matrices, i.e. if $W$ is a Gaussian matrix with iid entries $\sim N(0, \sigma^2)$, and $U$ is a rotation, then $WU$ is also iid Gaussian, with entries $\sim N(0, \sigma^2)$. (This follows easily from affine transformation rules for multivariate Gaussians.)

    Let $U$ be a rotation as in Lemma 1. Then $\tilde{W} = WU$ is also iid Gaussian, and furthermore, $\tilde{W}_{\|}$ and $\tilde{W}_{\perp}$ partition the entries of $\tilde{W}$, so are evidently independent. But then $W_{\|}$ and $W_{\perp}$, being functions of $\tilde{W}_{\|}$ and $\tilde{W}_{\perp}$ respectively (via the fixed rotation $U$), are also independent.

  2. From the observation, note that $W_{\|}$ and $W_{\perp}$ have a centered multivariate joint Gaussian distribution (both consist of linear combinations of the entries $W_{ij}$ of $W$). So it suffices to show that $W_{\|}$ and $W_{\perp}$ have covariance $0$. Because both are centered Gaussians, this is equivalent to showing that the corresponding cross-moments vanish. We have that

    As any two rows of are independent, we see from the observation that is a diagonal matrix, with the th diagonal entry just . But similarly, is also a diagonal matrix, with the same diagonal entries - so the claim follows.

In the following two lemmas, we use the rotational invariance of Gaussians as well as the chi distribution to prove results about the expected norm of a random Gaussian vector.

Lemma 3.

Norm of a Gaussian vector Let $X \in \mathbb{R}^k$ be a random Gaussian vector, with iid entries $X_i \sim N(0, \sigma^2)$. Then

$$\mathbb{E}\big[\|X\|\big] = \sigma \sqrt{2}\; \frac{\Gamma\!\left(\frac{k+1}{2}\right)}{\Gamma\!\left(\frac{k}{2}\right)}$$

Proof.

We use the fact that if $X_1, \dots, X_k$ are iid $N(0, 1)$, then $\sqrt{X_1^2 + \cdots + X_k^2}$ follows a chi distribution with $k$ degrees of freedom, whose mean is $\sqrt{2}\,\Gamma\!\left(\frac{k+1}{2}\right)/\Gamma\!\left(\frac{k}{2}\right)$. The result follows by noting that the expectation in the lemma is $\sigma$ multiplied by the above expectation. ∎
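A quick Monte Carlo check of this closed form (illustrative only; the dimension and scale below are arbitrary):

```python
import numpy as np
from math import gamma, sqrt

def expected_norm(k, sigma):
    """Closed form E||X|| for X ~ N(0, sigma^2 I_k): sigma * sqrt(2) * Gamma((k+1)/2) / Gamma(k/2)."""
    return sigma * sqrt(2) * gamma((k + 1) / 2) / gamma(k / 2)

k, sigma = 50, 0.3
samples = np.random.default_rng(0).normal(0, sigma, size=(200000, k))
print("Monte Carlo:", np.linalg.norm(samples, axis=1).mean())
print("closed form:", expected_norm(k, sigma))   # the two estimates should agree closely
```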

We will find it useful to bound ratios of the Gamma function (as appear in Lemma 3) and so introduce the following inequality, from [Kershaw, 1983] that provides an extension of Gautschi’s Inequality.

Theorem 6.

An Extension of Gautschi’s Inequality For , we have

We now show:

Lemma 4.

Norm of Projections Let be a by random Gaussian matrix with iid entries , and two given vectors. Partition into components as in Lemma 1 and let be a nonzero vector perpendicular to . Then

  1. If is an identity matrix with non-zeros diagonal entry iff , and , then

Proof.
  1. Let be as in Lemma 1. As are rotations, is also iid Gaussian. Furthermore for any fixed , with , by taking inner products, and square-rooting, we see that . So in particular

    But from the definition of non-zero entries of , and the form of (a zero entry in the first coordinate), it follows that has exactly non zero entries, each a centered Gaussian with variance . By Lemma 3, the expected norm is as in the statement. We then apply Theorem 6 to get the lower bound.

  2. First note we can view . (Projecting down to a random (as is random) subspace of fixed size and then making perpendicular commutes with making perpendicular and then projecting everything down to the subspace.)

    So we can view as a random by matrix, and for as in Lemma 1 (with projected down onto dimensions), we can again define as by and by rotation matrices respectively, and , with analogous properties to Lemma 1. Now we can finish as in part (a), except that may have only entries, (depending on whether is annihilated by projecting down by) each of variance .

Lemma 5.

Norm and Translation Let $X$ be a centered multivariate Gaussian, with diagonal covariance matrix, and $\mu$ a constant vector. Then

$$\mathbb{E}\big[\|X + \mu\|\big] \ge \mathbb{E}\big[\|X\|\big]$$

Proof.

The inequality can be seen intuitively, geometrically: as $X$ has a diagonal covariance matrix, the contours of its pdf are centered at $0$, decreasing radially. However, the contours of the pdf of $X + \mu$ are shifted to be centered around $\mu$, and so shifting back to $0$ reduces the norm.

A more formal proof can be seen as follows: let the pdf of $X$ be $f(x)$, which satisfies $f(x) = f(-x)$. Then we wish to show

$$\int \|x + \mu\|\, f(x)\, dx \;\ge\; \int \|x\|\, f(x)\, dx$$

Now we can pair points $x$ and $-x$, using the fact that $f(x) = f(-x)$ and the triangle inequality $\|x + \mu\| + \|-x + \mu\| \ge 2\|x\|$ on the integrand, to get the result. ∎
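This inequality is also easy to confirm by simulation (illustrative; the dimension, covariance, and translation below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 20, 200000
cov_diag = rng.uniform(0.5, 2.0, size=k)               # diagonal covariance entries
X = rng.normal(0, np.sqrt(cov_diag), size=(n, k))      # centered Gaussian samples
mu = rng.normal(size=k)                                # a fixed translation

print("E||X + mu|| ~", np.linalg.norm(X + mu, axis=1).mean())
print("E||X||      ~", np.linalg.norm(X, axis=1).mean())   # the first estimate should be the larger one
```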

B.2 Proof of Theorem 3

We use $v^{(d)}_i$ to denote the $i$th neuron in hidden layer $d$. We also let $x$ be an input, $h^{(d)}$ be the hidden representation at layer $d$, and $\phi$ the non-linearity. The weights and bias are called $W^{(d)}$ and $b^{(d)}$ respectively. So we have the relations

$$h^{(d)} = W^{(d)} z^{(d-1)} + b^{(d)}, \qquad z^{(d)} = \phi(h^{(d)}), \qquad z^{(0)} = x \tag{1}$$
Proof.

We first prove the zero bias case. To do so, it is sufficient to prove that

as integrating this bound over $t$ gives us the statement of the theorem.

For ease of notation, we will suppress the $t$ in $z^{(d)}(t)$.

We first write

where the division is done with respect to . Note that this means as the other component annihilates (maps to ) .

We can also define $A$ to be the set of indices for which the hidden representation is not saturated, i.e. the active set. Letting $W_i$ denote the $i$th row of matrix $W$, we now claim that:

Indeed, by Lemma 2 we first split the expectation over into a tower of expectations over the two independent parts of to get

But conditioning on in the inner expectation gives us and , allowing us to replace the norm over with the sum in the term on the right hand side of the claim.

Till now, we have mostly focused on partitioning the matrix $W$. But we can also write $\delta z^{(d)} = \delta z^{(d)}_{\|} + \delta z^{(d)}_{\perp}$, where the perpendicular and parallel components are taken with respect to $z^{(d)}$. In fact, to get the expression in (**), we derive a recurrence as below:

To get this, we first need to define $\hat{z}^{(d)}$ – the latent vector $z^{(d)}$ with all saturated units zeroed out.

We then split the column space of , where the split is with respect to . Letting be the part perpendicular to , and the set of units that are unsaturated, we have an important relation:

Claim

(where the indicator in the right hand side zeros out coordinates not in the active set.)

To see this, first note, by definition,

where the indicates a unit vector.

Similarly

Now note that for any index , the right hand sides of (1) and (2) are identical, and so the vectors on the left hand side agree for all . In particular,

Now the claim follows easily by noting that .

Returning to (*), we split , (and analogously), and after some cancellation, we have

We would like a recurrence in terms of only perpendicular components however, so we first drop the (which can be done without decreasing the norm as they are perpendicular to the remaining terms) and using the above claim, have

But in the inner expectation, the term is just a constant, as we are conditioning on . So using Lemma 5 we have

We can then apply Lemma 4 to get

The outer expectation on the right hand side only affects the term in the expectation through the size of the active set of units. For ReLUs, and for hard tanh, we have , and noting that we get a non-zero norm only if (else we cannot project down a dimension), and for ,

we get

We use the fact that we have the probability mass function for a binomial random variable to bound the term:

But by using Jensen’s inequality with , we get