The Power of Depth for Feedforward Neural Networks

Ronen Eldan
Weizmann Institute of Science
ronen.eldan@weizmann.ac.il
   Ohad Shamir
Weizmann Institute of Science
ohad.shamir@weizmann.ac.il
Abstract

We show that there is a simple (approximately radial) function on $\mathbb{R}^d$, expressible by a small 3-layer feedforward neural network, which cannot be approximated by any 2-layer network to more than a certain constant accuracy, unless its width is exponential in the dimension. The result holds for virtually all known activation functions, including rectified linear units, sigmoids and thresholds, and formally demonstrates that depth – even if increased by 1 – can be exponentially more valuable than width for standard feedforward neural networks. Moreover, compared to related results in the context of Boolean functions, our result requires fewer assumptions, and the proof techniques and construction are very different.

1 Introduction and Main Result

Learning via multi-layered artificial neural networks, a.k.a. deep learning, has seen a dramatic resurgence of popularity over the past few years, leading to impressive performance gains on difficult learning problems, in fields such as computer vision and speech recognition. Despite their practical success, our theoretical understanding of their properties is still partial at best.

In this paper, we consider the question of the expressive power of neural networks of bounded size. The boundedness assumption is important here: It is well-known that sufficiently large depth-2 neural networks, using reasonable activation functions, can approximate any continuous function on a bounded domain (Cybenko (1989); Hornik et al. (1989); Funahashi (1989); Barron (1994)). However, the required size of such networks can be exponential in the dimension, which renders them impractical as well as highly prone to overfitting. From a learning perspective, both theoretically and in practice, our main interest is in neural networks whose size is bounded.

For a network of bounded size, a basic architectural question is how to trade off between its width and depth: Should we use networks that are narrow and deep (many layers, with a small number of neurons per layer), or shallow and wide? Is the “deep” in “deep learning” really important? Or perhaps we can always content ourselves with shallow (e.g. depth-2) neural networks?

Overwhelming empirical evidence as well as intuition indicates that having depth in the neural network is indeed important: Such networks tend to result in complex predictors which seem hard to capture using shallow architectures, and often lead to better practical performance. However, for the types of networks used in practice, there are surprisingly few formal results (see related work below for more details).

In this work, we consider fully connected feedforward neural networks, using a linear output neuron and some non-linear activation function $\sigma$ on the other neurons, such as the commonly-used rectified linear unit (ReLU, $\sigma(z)=\max\{0,z\}$), as well as the sigmoid ($\sigma(z)=(1+\exp(-z))^{-1}$) and the threshold ($\sigma(z)=\mathbb{1}\{z\ge 0\}$). Informally speaking, we consider the following question: What functions on $\mathbb{R}^d$, expressible by a network with 3 layers and $w$ neurons per layer, cannot be well-approximated by any network with 2 layers, even if the number of neurons is allowed to be much larger than $w$?

More specifically, we consider the simplest possible case, namely the difficulty of approximating functions computable by 3-layer networks using 2-layer networks, when the networks are feedforward and fully connected. Following a standard convention, we define a 2-layer network of width $w$ on inputs in $\mathbb{R}^d$ as

$$x \;\mapsto\; \sum_{i=1}^{w} u_i\,\sigma\big(\langle \mathbf{v}_i, x\rangle + b_i\big), \qquad (1)$$

where $\sigma$ is the activation function, and $u_i, b_i \in \mathbb{R}$, $\mathbf{v}_i \in \mathbb{R}^d$ (for $i=1,\ldots,w$) are the parameters of the network. This corresponds to a set of neurons computing $x \mapsto \sigma(\langle \mathbf{v}_i, x\rangle + b_i)$ in the first layer, whose outputs are fed to a linear output neuron in the second layer. (Footnote: sometimes one also adds a constant bias parameter to the output neuron, but this can be easily simulated by a “constant” neuron in the first layer, whose weight vector and bias are chosen appropriately. Also, sometimes the output neuron is defined to have a non-linearity as well, but we stick to linear output neurons, which is a very common and reasonable assumption for networks computing real-valued predictions.) Similarly, a 3-layer network of width $w$ is defined as

$$x \;\mapsto\; \sum_{i=1}^{w} u_i\,\sigma\Big(\sum_{j=1}^{w} w_{i,j}\,\sigma\big(\langle \mathbf{v}_j, x\rangle + b_j\big) + c_i\Big), \qquad (2)$$

where $u_i, c_i, w_{i,j}, b_j \in \mathbb{R}$ and $\mathbf{v}_j \in \mathbb{R}^d$ are the parameters of the network. Namely, the outputs of the $w$ neurons in the first layer are fed to $w$ neurons in the second layer, and their outputs in turn are fed to a linear output neuron in the third layer.
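For readers who prefer code, the following is a minimal NumPy sketch of the two architectures in Eq. (1) and Eq. (2), instantiated with a ReLU activation and arbitrary parameters (the variable names are ours, not notation used later in the paper):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def two_layer(x, V, b, u, sigma=relu):
    # Eq. (1): x -> sum_i u_i * sigma(<v_i, x> + b_i)
    # V has shape (w, d); b and u have shape (w,)
    return u @ sigma(V @ x + b)

def three_layer(x, V, b, W, c, u, sigma=relu):
    # Eq. (2): first-layer outputs feed a second layer of neurons,
    # whose outputs feed a linear output neuron.
    h1 = sigma(V @ x + b)      # first hidden layer, width w
    h2 = sigma(W @ h1 + c)     # second hidden layer, width w
    return u @ h2

# usage with random parameters
d, w = 5, 8
rng = np.random.default_rng(0)
x = rng.normal(size=d)
print(two_layer(x, rng.normal(size=(w, d)), rng.normal(size=w), rng.normal(size=w)))
print(three_layer(x, rng.normal(size=(w, d)), rng.normal(size=w),
                  rng.normal(size=(w, w)), rng.normal(size=w), rng.normal(size=w)))
```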

Clearly, to prove something on the separation between 2-layer and 3-layer networks, we need to make some assumption on the activation function $\sigma$ (for example, if $\sigma$ is the identity, then both 2-layer and 3-layer networks compute linear functions, hence there is no difference in their expressive power). All we will essentially require is that $\sigma$ is universal, in the sense that a sufficiently large 2-layer network can approximate any univariate Lipschitz function which is constant outside a bounded interval. More formally, we use the following assumption:

Assumption 1.

Given the activation function $\sigma$, there is a constant $c_\sigma \ge 1$ (depending only on $\sigma$) such that the following holds: For any $L$-Lipschitz function $f:\mathbb{R}\to\mathbb{R}$ which is constant outside a bounded interval $[-R, R]$, and for any $\delta > 0$, there exist scalars $a,\{\alpha_i,\beta_i,\gamma_i\}_{i=1}^{w}$, where $w \le c_\sigma\,\frac{R L}{\delta}$, such that the function

$$h(x) \;=\; a + \sum_{i=1}^{w} \alpha_i\,\sigma(\beta_i x - \gamma_i)$$

satisfies $\sup_{x\in\mathbb{R}}|f(x)-h(x)| \le \delta$.

This assumption is satisfied by the standard activation functions we are familiar with. First of all, we provide in Appendix A a constructive proof for the ReLU function. For the threshold, sigmoid, and more general sigmoidal functions (e.g. monotonic functions which satisfy $\lim_{z\to-\infty}\sigma(z)=a$ and $\lim_{z\to\infty}\sigma(z)=b$ for some $a<b$ in $\mathbb{R}$), the proof idea is similar, and implied by the proof of Theorem 1 of Debao (1993). (Footnote: essentially, a single neuron with such a sigmoidal activation can express a (possibly approximate) single-step function, a combination of $w$ such neurons can express a function with $w$ such steps, and any $L$-Lipschitz function which is constant outside $[-R,R]$ can be approximated to accuracy $\delta$ with a function involving $\mathcal{O}(RL/\delta)$ steps.) Finally, one can weaken the assumed bound on $w$ to any polynomial in $R$, $L$ and $1/\delta$, at the cost of a worse polynomial dependence on the dimension $d$ in Thm. 1 below (see Subsection 4.4 for details).
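To make Assumption 1 concrete for the ReLU case, here is a small sketch of our own (not the construction from Appendix A): a univariate Lipschitz function that is constant outside a bounded interval is approximated by a linear combination of ReLU neurons via piecewise-linear interpolation, with a number of neurons scaling as $\mathcal{O}(RL/\delta)$.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_approx(f, R, n):
    """Approximate f on [-R, R] by h(x) = a + sum_i alpha_i * relu(x - t_i),
    using piecewise-linear interpolation at n+1 equally spaced knots."""
    t = np.linspace(-R, R, n + 1)
    y = f(t)
    slopes = np.diff(y) / np.diff(t)        # slope on each sub-interval
    alphas = np.diff(slopes, prepend=0.0)   # change of slope at knots t_0, ..., t_{n-1}
    a = y[0]
    def h(x):
        x = np.atleast_1d(x)
        return a + (alphas[None, :] * relu(x[:, None] - t[None, :-1])).sum(axis=1)
    return h

# example: a Lipschitz bump which is zero (hence constant) outside [-1, 1]
f = lambda x: np.where(np.abs(x) <= 1.0, np.cos(np.pi * x / 2.0), 0.0)
h = relu_approx(f, R=2.0, n=50)
xs = np.linspace(-3.0, 3.0, 601)
print(np.max(np.abs(f(xs) - h(xs))))   # uniform error; shrinks as n (the width) grows
```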

In addition, for technical reasons, we will require the following mild growth and measurability conditions, which are satisfied by virtually all activation functions in the literature, including the examples discussed earlier:

Assumption 2.

The activation function $\sigma$ is (Lebesgue) measurable and satisfies

$$|\sigma(x)| \;\le\; C\,\big(1+|x|^{\alpha}\big)$$

for all $x\in\mathbb{R}$ and for some constants $C,\alpha>0$.

Our main result is the following theorem, which implies that there are 3-layer networks of width polynomial in the dimension $d$, which cannot be arbitrarily well approximated by 2-layer networks, unless their width is exponential in $d$:

Theorem 1.

Suppose the activation function $\sigma$ satisfies Assumption 1 with constant $c_\sigma$, as well as Assumption 2. Then there exist universal constants $c, C > 0$ such that the following holds: For every dimension $d > C$, there is a probability measure $\mu$ on $\mathbb{R}^d$ and a function $g:\mathbb{R}^d\to\mathbb{R}$ with the following properties:

  1. $g$ takes values in a bounded interval, is supported on a Euclidean ball of radius $\mathcal{O}(\sqrt d)$, and is expressible by a 3-layer network whose width is polynomial in $c_\sigma$ and $d$.

  2. Every function $f$, expressed by a 2-layer network of width at most $c\,e^{c d}$, satisfies $\mathbb{E}_{x\sim\mu}\big(f(x)-g(x)\big)^2 \;\ge\; c$.

The proof is sketched in Sec. 2, and is formally presented in Sec. 4. Roughly speaking, $g$ approximates a certain radial function $\tilde g$, depending only on the norm of the input. With 3 layers, approximating radial functions (including $\tilde g$) to arbitrary accuracy is straightforward, by first approximating the squared norm function, and then approximating the univariate function acting on the norm. However, performing this approximation with only 2 layers is much more difficult, and the proof shows that exponentially many neurons are required to approximate $\tilde g$ to more than constant accuracy. We conjecture (but do not prove) that a much wider family of radial functions also satisfies this property.

We make the following additional remarks about the theorem:

Remark 1 (Activation function).

The theorem places no constraints on the activation function beyond Assumptions 1 and 2. In fact, the inapproximability result for the function $g$ holds even if the activation functions are different across the first-layer neurons of the 2-layer network, and even if they are chosen adaptively (possibly depending on $g$), as long as they satisfy Assumption 2.

Remark 2 (Constraints on the parameters).

The theorem places no constraints whatsoever on the parameters of the 2-layer networks, and they can take any values in $\mathbb{R}$. This is in contrast to related depth separation results in the context of threshold circuits, which do require the size of the parameters to be constrained (see the discussion of related work below).

Remark 3 (Properties of ).

At least for specific activation functions such as the ReLU, sigmoid, and threshold, the proof construction implies that $g$ is $\mathrm{poly}(d)$-Lipschitz, and the 3-layer network expressing it has parameters bounded by $\mathrm{poly}(d)$.

Related Work

On a qualitative level, the question we are considering is similar to the question of Boolean circuit lower bounds in computational complexity: In both cases, we consider functions which can be represented as a combination of simple computational units (Boolean gates in computational complexity; neurons in neural networks), and ask how large or how deep this representation needs to be, in order to compute or approximate some given function. For Boolean circuits, there is a relatively rich literature and some strong lower bounds. A recent example is the paper Rossman et al. (2015), which shows, for any depth $k$, an explicit linear-sized circuit of depth $k$ on $n$ input bits which cannot be non-trivially approximated by depth-$(k-1)$ circuits of size polynomial in $n$. That being said, it is well-known that the type of computation performed by each unit in the circuit can crucially affect the hardness results, and lower bounds for Boolean circuits do not readily translate to neural networks of the type used in practice, which are real-valued and express continuous functions. For example, a classical result on Boolean circuits states that the parity function over $\{0,1\}^d$ cannot be computed by constant-depth Boolean circuits whose size is polynomial in $d$ (see for instance Håstad (1986)). Nevertheless, the parity function can in fact be easily computed by a simple 2-layer, $\mathcal{O}(d)$-width real-valued neural network with most reasonable activation functions. (Footnote: see Rumelhart et al. (1986), Figure 6, where reportedly the structure was even found automatically by back-propagation. For a threshold activation function and input $x\in\{0,1\}^d$, one such network applies $d$ threshold neurons to the coordinate sum $\sum_j x_j$ and combines their outputs with alternating-sign weights, so that the output equals the parity of $x$. In fact, we only need an activation which outputs one fixed value below some threshold and another fixed value above it, so the construction easily generalizes to other activation functions (such as a ReLU or a sigmoid), possibly by using a small linear combination of them to represent such a step.)
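To illustrate the footnote's point in code (a sketch of one possible choice of weights, not necessarily the exact network of Rumelhart et al.), the $d$-bit parity can be computed by a 2-layer network of $d$ threshold neurons, all reading the coordinate sum, combined with alternating-sign output weights:

```python
import numpy as np
from itertools import product

def threshold(z):
    return (z >= 0).astype(float)

def parity_net(x):
    # 2-layer network: d threshold neurons, each applied to the coordinate sum,
    # combined with alternating-sign output weights (+1, -1, +1, ...).
    d = len(x)
    s = np.sum(x)
    k = np.arange(1, d + 1)
    return np.sum((-1.0) ** (k + 1) * threshold(s - k))

d = 4
for x in product([0, 1], repeat=d):
    assert parity_net(np.array(x)) == sum(x) % 2
print("parity computed correctly on all", 2 ** d, "inputs")
```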

A model closer to ours is a threshold circuit, which is a neural network where all neurons (including the output neuron) have a threshold activation function, and the input is from the Boolean cube (see Parberry (1994) for a survey). For threshold circuits, the main known result in our context is that computing inner products mod 2 over $d$-dimensional Boolean vectors cannot be done with a 2-layer network with polynomially-bounded parameters and sub-exponential width, but can be done with a small 3-layer network (Hajnal et al. (1993)). Note that unlike neural networks in practice, the result in Hajnal et al. (1993) is specific to the non-continuous threshold activation function, and considers hardness of exact representation of a function by 2-layer circuits, rather than merely approximating it. Following the initial publication of our paper, we were informed (Martens (2015)) that the proof technique, together with techniques in the papers (Maass et al. (1994); Martens et al. (2013)), can possibly be used to show that inner product mod 2 is also hard to approximate, using 2-layer neural networks with continuous activation functions, as long as the network parameters are constrained to be polynomial in $d$, and the activation function satisfies certain regularity conditions. (Footnote: see the remark in Martens et al. (2013). These conditions are needed for constructions relying on distributions over a finite set (such as the Boolean hypercube). However, since we consider continuous distributions on $\mathbb{R}^d$, we do not require such conditions.) Even so, our result does not pose any constraints on the parameters, nor regularity conditions beyond Assumptions 1 and 2. Moreover, we introduce a new proof technique which is very different, and demonstrate hardness of approximating not the Boolean inner-product-mod-2 function, but rather functions on $\mathbb{R}^d$ with a simple geometric structure (namely, radial functions).

Moving to networks with real-valued outputs, one related field is arithmetic circuit complexity (see Shpilka and Yehudayoff (2010) for a survey), but the focus there is on computing polynomials, which can be thought of as neural networks where each neuron computes a linear combination or a product of its inputs. Again, this is different than most standard neural networks used in practice, and the results and techniques do not readily translate.

Recently, several works in the machine learning community attempted to address questions similar to the one we consider here. Pascanu et al. (2013); Montufar et al. (2014) consider the number of linear regions which can be expressed by ReLU networks of a given width and size, and Bianchini and Scarselli (2014) consider the topological complexity (via Betti numbers) of networks with certain activation functions, as a function of the depth. Although these can be seen as measures of a function’s complexity, such results do not translate directly to a lower bound on the approximation error, as in Thm. 1. Delalleau and Bengio (2011); Martens and Medabalimi (2014) and Cohen et al. (2015) show strong approximation hardness results for certain network architectures (such as networks computing polynomials, or networks representing a certain tensor structure), which are however fundamentally different from the standard neural networks considered here.

Quite recently, Telgarsky (2015) gave a simple and elegant construction showing that for any $k$, there are $k$-layer, $\mathcal{O}(1)$-wide ReLU networks on one-dimensional data, which can express a sawtooth function on $[0,1]$ which oscillates $\mathcal{O}(2^k)$ times, and moreover, such a rapidly oscillating function cannot be approximated by $\mathrm{poly}(k)$-wide ReLU networks with $o(k/\log k)$ depth. This also implies regimes with exponential separation, e.g. that there are networks of depth $\Theta(k^3)$ which cannot be approximated by networks of depth $\Theta(k)$, unless their width is exponential in $k$. These results demonstrate the value of depth for arbitrarily deep, standard ReLU networks, for a single dimension and using functions which have an exponentially large Lipschitz parameter. In this work, we use different techniques, to show exponential separation results for general activation functions, even if the number of layers changes by just 1 (from two to three layers), and using functions on $\mathbb{R}^d$ whose Lipschitz parameter is polynomial in $d$.

2 Proof Sketch

In a nutshell, the 3-layer network we construct approximates a radial function with bounded support (i.e. one which depends on the input $x$ only via its Euclidean norm $\|x\|$, and is $0$ for any $x$ whose norm is larger than some threshold). With 3 layers, approximating radial functions is rather straightforward: First, using Assumption 1, we can construct a linear combination of neurons expressing the univariate mapping $z\mapsto z^2$ arbitrarily well in any bounded domain. Therefore, by adding these combinations together, one for each coordinate, we can have our network first compute (approximately) the mapping $x\mapsto\|x\|^2=\sum_{i=1}^d x_i^2$ inside any bounded domain, and then use the next layer to compute some univariate function of $\|x\|^2$, resulting in an approximately radial function. With only 2 layers, it is less clear how to approximate such radial functions. Indeed, our proof essentially indicates that approximating radial functions with 2 layers can require exponentially large width.
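As a toy illustration of this 3-layer recipe (our own sketch, not the construction used in the formal proof), the code below approximates each coordinate's square with ReLU neurons, sums them to get an approximate squared norm, and then applies a univariate radial profile to the result with a second ReLU layer:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def pwl_relu(f, lo, hi, n):
    """Approximate a univariate f on [lo, hi] by a + sum_i alpha_i * relu(x - t_i)."""
    t = np.linspace(lo, hi, n + 1)
    y = f(t)
    slopes = np.diff(y) / np.diff(t)
    alphas = np.diff(slopes, prepend=0.0)
    return t[:-1], alphas, y[0]

def radial_3layer(x, profile, R=3.0, n=1000):
    """Toy 3-layer construction: layer 1 approximates z -> z^2 on each coordinate,
    layer 2 approximates the univariate radial profile of the squared norm."""
    t1, a1, c1 = pwl_relu(lambda z: z ** 2, -R, R, n)
    sq_norm = sum(c1 + np.dot(a1, relu(xi - t1)) for xi in x)   # ~ ||x||^2
    t2, a2, c2 = pwl_relu(profile, 0.0, len(x) * R ** 2, n)
    return c2 + np.dot(a2, relu(sq_norm - t2))                  # linear output neuron

# example: a radial profile which is 1 near the origin and 0 far away
profile = lambda s: np.clip(2.0 - s, 0.0, 1.0)
rng = np.random.default_rng(1)
for _ in range(3):
    x = rng.uniform(-1.5, 1.5, size=5)
    # the two printed values should approximately match
    print(np.round(radial_3layer(x, profile), 3), np.round(profile(np.sum(x ** 2)), 3))
```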

To formalize this, note that if our probability measure $\mu$ has a well-behaved density function which can be written as $\varphi^2(x)$ for some function $\varphi$, then the approximation guarantee in the theorem, $\mathbb{E}_{x\sim\mu}(f(x)-g(x))^2\ge c$, can be equivalently written as

$$\int_{\mathbb{R}^d}\big(f(x)-g(x)\big)^2\,\varphi^2(x)\,dx \;=\; \big\|f\varphi-g\varphi\big\|_{L_2}^2 \;\ge\; c. \qquad (3)$$

In particular, we will consider a density function which equals $\varphi^2(x)$, where $\varphi$ is the inverse Fourier transform of the indicator $\mathbb{1}_B$, $B$ being the origin-centered unit-volume Euclidean ball (the reason for this choice will become evident later). Before continuing, we note that a formula for $\varphi$ can be given explicitly (see Lemma 2), and an illustration of it is provided in Figure 1. Also, it is easily verified that $\varphi^2$ is indeed a density function: It is clearly non-negative, and by isometry of the Fourier transform, $\int\varphi^2(x)\,dx=\int\mathbb{1}_B^2(x)\,dx$, which equals $1$ since $B$ is a unit-volume ball.

Figure 1: The left figure represents $\varphi^2$. The right figure represents a cropped and re-scaled version, to better show the oscillations of $\varphi^2$ beyond the big origin-centered bump. The density of the probability measure $\mu$ is defined as $\varphi^2$.
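As a quick sanity check of this choice (a one-dimensional analogue of our own, purely for intuition): in $d=1$ the unit-volume "ball" is the interval $[-1/2,1/2]$, its Fourier transform under the convention of Sec. 3 is the sinc function, and Parseval's identity indeed gives total mass $1$.

```python
import numpy as np

# One-dimensional analogue: B = [-1/2, 1/2] has volume 1, and its Fourier transform
# (with the convention  f^(w) = integral of f(x) exp(-2*pi*i*<x,w>) dx)  is
# sin(pi w)/(pi w), i.e. numpy's np.sinc.  phi^2 is non-negative and integrates to 1.
w = np.linspace(-200.0, 200.0, 400_001)
phi_sq = np.sinc(w) ** 2
print(np.trapz(phi_sq, w))   # ~ 1, up to truncation of the tails
```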

Our goal now is to lower bound the right hand side of Eq. (3). To continue, we find it convenient to consider the Fourier transforms of the functions $f\varphi,\,g\varphi$, rather than the functions themselves. Since the Fourier transform is isometric, the above equals

$$\big\|\widehat{f\varphi}-\widehat{g\varphi}\big\|_{L_2}^2.$$

Luckily, the Fourier transform of functions expressible by a 2-layer network has a very particular form. Specifically, consider any function of the form

$$f(x)\;=\;\sum_{i=1}^{k} f_i\big(\langle \mathbf{v}_i, x\rangle\big),$$

where $f_i:\mathbb{R}\to\mathbb{R}$ and the $\mathbf{v}_i$ are unit vectors in $\mathbb{R}^d$ (such as 2-layer networks as defined earlier). Note that $f$ may not be square-integrable, so formally speaking it does not have a Fourier transform in the standard sense of a function on $\mathbb{R}^d$. However, assuming each $f_i$ grows at most polynomially as $x\to\infty$ or $x\to-\infty$, it does have a Fourier transform in the more general sense of a tempered distribution (we refer the reader to the proof for a more formal discussion). This distribution can be shown to be supported on $\bigcup_i\mathrm{span}\{\mathbf{v}_i\}$: namely, a finite collection of lines. (Footnote: roughly speaking, this is because each function $x\mapsto f_i(\langle\mathbf{v}_i,x\rangle)$ is constant in any direction perpendicular to $\mathbf{v}_i$, hence does not have non-zero Fourier components in those directions. In one dimension, this can be seen by the fact that the Fourier transform of the constant function is the Dirac delta function, which equals $0$ everywhere except at the origin.) The convolution-multiplication principle implies that $\widehat{f\varphi}$ equals $\hat f\star\hat\varphi$, i.e. the convolution of $\hat f$ with the indicator of a unit-volume ball $B$. Since $\hat f$ is supported on $\bigcup_i\mathrm{span}\{\mathbf{v}_i\}$, it follows that

$$\mathrm{Supp}\big(\widehat{f\varphi}\big)\;\subseteq\;T\;:=\;\bigcup_{i=1}^{k}\big(\mathrm{span}\{\mathbf{v}_i\}+B\big).$$

In words, the support of $\widehat{f\varphi}$ is contained in $T$, a union of $k$ tubes of bounded radius passing through the origin. This is the key property of 2-layer networks we will use to derive our main theorem. Note that it holds regardless of the exact shape of the functions $f_i$, and hence our proof will also hold if the activations in the network are different across the first-layer neurons, or even if they are chosen in some adaptive manner.

To establish our theorem, we will find a function $g$, expressible by a 3-layer network, such that $\widehat{g\varphi}$ has a constant distance (in $L_2$ space) from any function supported on $T$ (a union of $k$ tubes as above). Here is where high dimensionality plays a crucial role: Unless $k$ is exponentially large in the dimension $d$, the domain $T$ is very sparse when one considers large distances from the origin, in the sense that the covered fraction

$$\frac{\mu_{d-1}\big(T\cap r\,\mathbb{S}^{d-1}\big)}{\mu_{d-1}\big(r\,\mathbb{S}^{d-1}\big)}$$

is exponentially small in $d$ (where $\mathbb{S}^{d-1}$ is the $d$-dimensional unit Euclidean sphere, and $\mu_{d-1}$ is the $(d-1)$-dimensional Hausdorff measure) whenever $r$ is large enough with respect to the radius of $B$. Therefore, we need to find a function $g$ so that $\widehat{g\varphi}$ has a lot of mass far away from the origin, which will ensure that its distance from any function supported on $T$ will be large. Specifically, we wish to find a function $g$ so that $g\varphi$ is radial (hence $\widehat{g\varphi}$ is also radial, so having large mass in any direction implies large mass in all directions), and $g\varphi$ has a significant high-frequency component, which implies that its Fourier transform has a significant portion of its mass outside of a Euclidean ball of sufficiently large radius.
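The following back-of-the-envelope computation (ours; the tube radius is normalized to $1$, and the sphere radius $r=2$ and number of tubes $k=1000$ are arbitrary illustrative values, not quantities from the proof) shows how quickly the covered fraction decays with the dimension: the fraction of $r\,\mathbb{S}^{d-1}$ within distance $1$ of a single line through the origin is a regularized incomplete Beta integral, and a union bound multiplies it by $k$.

```python
import numpy as np
from scipy.special import betainc

def tube_fraction(d, r):
    """Fraction of the sphere of radius r in R^d lying within distance 1 of a
    fixed line through the origin (two antipodal spherical caps)."""
    # x with ||x|| = r is within distance 1 of span{v}  iff  <x, v>^2 >= r^2 - 1,
    # and the measure of the corresponding caps is a regularized incomplete Beta.
    return betainc((d - 1) / 2.0, 0.5, 1.0 / r ** 2)

k = 1000   # number of tubes (e.g. one per first-layer neuron); union bound multiplies by k
for d in (10, 20, 40, 80, 160):
    print(d, k * tube_fraction(d, r=2.0))
```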

The construction and analysis of this function constitutes the technical bulk of the proof. The main difficulty in this step is that even if the Fourier transform of $g$ has some of its mass on high frequencies, it is not clear that this will also be true for $g\varphi$ (note that while convolving with the indicator of a Euclidean ball increases the average distance of the mass from the origin in the $L_1$ sense, it doesn't necessarily do the same in the $L_2$ sense).

We overcome this difficulty by considering a random superposition of indicators of thin shells: Specifically, we consider the function

$$\tilde g(x)\;=\;\sum_{i=1}^{N}\epsilon_i\,g_i(x), \qquad (4)$$

where $\epsilon_i\in\{-1,+1\}$, $g_i(x)=\mathbb{1}\{\|x\|\in\Delta_i\}$, and $\Delta_1,\ldots,\Delta_N$ are disjoint intervals of equal (small) width, lying at distance $\Theta(\sqrt d)$ from the origin. Note that strictly speaking, we cannot take our hard-to-approximate function to equal $\tilde g$, since $\tilde g$ is discontinuous and therefore cannot be expressed by a 3-layer neural network with continuous activation functions. However, since our probability distribution can be shown to have bounded density in the support of Eq. (4), we can use a 3-layer network to approximate such a function arbitrarily well with respect to the distribution (for example, by computing $\tilde g$ as above, with each hard indicator function replaced by a Lipschitz function which differs from it only on a set with arbitrarily small probability mass). Letting the function $g$ be such a good approximation, we get that if no 2-layer network can approximate the function in Eq. (4), then it also cannot approximate its 3-layer approximation $g$.
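A concrete toy instance of Eq. (4) and of the Lipschitz smoothing step (with made-up interval choices; the actual intervals and signs are fixed in Sec. 4):

```python
import numpy as np

def make_tilde_g(eps, edges):
    """tilde_g(x) = sum_i eps_i * 1{ ||x|| in [edges[i], edges[i+1]) }."""
    def tilde_g(x):
        r = np.linalg.norm(x)
        i = np.searchsorted(edges, r, side="right") - 1
        return eps[i] if 0 <= i < len(eps) else 0.0
    return tilde_g

def make_lipschitz_version(eps, edges, delta):
    """Replace each hard shell indicator by a trapezoid with slopes 1/delta, so the
    result is (1/delta)-Lipschitz in ||x|| and differs from tilde_g only near edges."""
    def g(x):
        r = np.linalg.norm(x)
        out = 0.0
        for i, e in enumerate(eps):
            a, b = edges[i], edges[i + 1]
            out += e * np.clip(min(r - a, b - r) / delta, 0.0, 1.0)
        return out
    return g

d, N = 50, 10
rng = np.random.default_rng(0)
eps = rng.choice([-1.0, 1.0], size=N)
edges = np.sqrt(d) + 0.2 * np.arange(N + 1)        # N disjoint shells at radius ~ sqrt(d)
tilde_g = make_tilde_g(eps, edges)
g = make_lipschitz_version(eps, edges, delta=0.01)
x = rng.normal(size=d)
x *= (np.sqrt(d) + 0.5) / np.linalg.norm(x)        # place x in the middle of the 3rd shell
print(tilde_g(x), g(x))                            # equal away from the shell boundaries
```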

Let us now explain why the function defined in Eq. (4) gives us what we need. For large $d$, each $g_i$ is supported on a thin Euclidean shell, hence $g_i\varphi$ is approximately the same as $c_i g_i$ for some constant $c_i$. As a result, $\tilde g\varphi\approx\sum_i\epsilon_i c_i g_i$, so its Fourier transform (by linearity) is approximately $\sum_i\epsilon_i c_i\hat g_i$. Since $g_i$ is a simple indicator function, its Fourier transform is not too difficult to compute explicitly, and involves an appropriate Bessel function, which turns out to have a sufficiently large mass sufficiently far away from the origin.

Knowing that each summand has a relatively large mass on high frequencies, our only remaining objective is to find a choice for the signs $\epsilon_1,\ldots,\epsilon_N$ so that the entire sum will have the same property. This is attained by a random choice of signs: it is an easy observation that given an orthogonal projection $P$ in a Hilbert space, and any sequence of vectors $v_1,\ldots,v_N$, one has $\mathbb{E}_\epsilon\|P\sum_i\epsilon_i v_i\|^2=\sum_i\|P v_i\|^2$ when the signs $\epsilon_i$ are independent and uniform over $\{-1,+1\}$. Using this observation with $P$ being the projection onto the subspace of functions supported on high frequencies, and with the functions $g_i\varphi$ in the role of the vectors $v_i$, it follows that there is at least one choice of the $\epsilon_i$'s so that a sufficiently large portion of $\widehat{\tilde g\varphi}$'s mass is on high frequencies.
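For completeness, the observation follows from one line (here $P$ is any fixed linear operator, in particular an orthogonal projection, and the $\epsilon_i$ are independent uniform signs):

$$\mathbb{E}_\epsilon\Big\|P\sum_{i}\epsilon_i v_i\Big\|^2 \;=\; \sum_{i,j}\mathbb{E}[\epsilon_i\epsilon_j]\,\langle P v_i,\,P v_j\rangle \;=\; \sum_{i}\|P v_i\|^2,$$

since $\mathbb{E}[\epsilon_i\epsilon_j]=\mathbb{1}\{i=j\}$. In particular, some realization of the signs achieves at least the average.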

3 Preliminaries

We begin by defining some of the standard notation we shall use. We let $\mathbb{N}$ and $\mathbb{R}$ denote the natural and real numbers, respectively. Bold-faced letters denote vectors in $d$-dimensional Euclidean space $\mathbb{R}^d$, and plain-faced letters denote either scalars or functions (distinguishing between them should be clear from context). $L_2$ denotes the space of square-integrable functions ($\int f^2(x)\,dx<\infty$, where the integration is over $\mathbb{R}^d$), and $L_1$ denotes the space of absolutely integrable functions ($\int|f(x)|\,dx<\infty$). $\|\cdot\|$ denotes the Euclidean norm, $\langle\cdot,\cdot\rangle$ denotes the inner product in $L_2$ space (for functions $f,g$, we have $\langle f,g\rangle=\int f(x)g(x)\,dx$), $\|\cdot\|_{L_2}$ denotes the standard norm in $L_2$ space ($\|f\|_{L_2}^2=\int f^2(x)\,dx$), and $\|\cdot\|_{L_2(\mu)}$ denotes the $L_2$ norm weighted by a probability measure $\mu$ (namely $\|f\|_{L_2(\mu)}^2=\int f^2(x)\,d\mu(x)$). Given two functions $f,g$, we let $fg$ be shorthand for the function $x\mapsto f(x)g(x)$, and $f+g$ be shorthand for $x\mapsto f(x)+g(x)$. Given two sets $A,B$ in $\mathbb{R}^d$, we let $A+B=\{a+b:a\in A,\ b\in B\}$ denote their Minkowski sum, and $A^c$ denote the complement of $A$.

Fourier Transform. For a function $f\in L_1(\mathbb{R}^d)$, our convention for the Fourier transform is

$$\hat f(w)\;=\;\int_{\mathbb{R}^d} f(x)\exp\big(-2\pi i\langle x,w\rangle\big)\,dx,$$

whenever the integral is well defined. This is generalized for $f\in L_2(\mathbb{R}^d)$ by

$$\hat f(w)\;=\;\lim_{R\to\infty}\int_{\{x:\|x\|\le R\}} f(x)\exp\big(-2\pi i\langle x,w\rangle\big)\,dx, \qquad (5)$$

where the limit is taken in $L_2$.

Radial Functions. A radial function $f:\mathbb{R}^d\to\mathbb{R}$ is such that $f(x)=f(x')$ for any $x,x'$ such that $\|x\|=\|x'\|$. When dealing with radial functions, which are invariant to rotations, we will somewhat abuse notation and interchangeably use vector arguments $x$ to denote the value of the function at $x$, and scalar arguments $r$ to denote the value of the same function for any vector of norm $r$. Thus, for a radial function $f$, $f(r)$ equals $f(x)$ for any $x$ such that $\|x\|=r$.

Euclidean Spheres and Balls. Let $\mathbb{S}^{d-1}$ be the unit Euclidean sphere in $\mathbb{R}^d$, let $B_d$ be the $d$-dimensional unit Euclidean ball, and let $R_d$ be the radius such that $R_d B_d$ has volume one. By standard results, we have the following useful lemma:

Lemma 1.

$R_d=\frac{1}{\sqrt\pi}\big(\Gamma(\tfrac d2+1)\big)^{1/d}$, which is always between $c_1\sqrt d$ and $c_2\sqrt d$ for some universal constants $c_1,c_2>0$.

Bessel Functions. Let $J_\nu$ denote the Bessel function of the first kind, of order $\nu$. The Bessel function has a few equivalent definitions, for example $J_\nu(x)=\sum_{m=0}^{\infty}\frac{(-1)^m}{m!\,\Gamma(m+\nu+1)}\left(\frac x2\right)^{2m+\nu}$, where $\Gamma$ is the Gamma function. Although it does not have a closed form, $J_\nu(x)$ has an oscillating shape, which for asymptotically large $x$ behaves as $\sqrt{\frac{2}{\pi x}}\cos\!\big(x-\frac{\nu\pi}{2}-\frac{\pi}{4}\big)$. Figure 2 illustrates the function for a representative order $\nu$. In Appendix C, we provide additional results and approximations for the Bessel function, which are necessary for our proofs.

Figure 2: Bessel function of the first kind, $J_\nu$.
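A quick numerical check of the stated asymptotics (using SciPy's Bessel implementation; the order $\nu=5$ is just an arbitrary example):

```python
import numpy as np
from scipy.special import jv

nu = 5.0
x = np.linspace(40.0, 60.0, 5)
exact = jv(nu, x)
asymptotic = np.sqrt(2.0 / (np.pi * x)) * np.cos(x - nu * np.pi / 2 - np.pi / 4)
for xi, e, a in zip(x, exact, asymptotic):
    print(f"x = {xi:5.1f}   J_nu(x) = {e:+.5f}   asymptotic = {a:+.5f}")
```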

4 Proof of Thm. 1

In this section, we provide the proof of Thm. 1. Note that some technical proofs, as well as some important technical lemmas on the structure of Bessel functions, are deferred to the appendix.

4.1 Constructions

As discussed in Sec. 2, our theorem rests on constructing a distribution $\mu$ and an appropriate function $\tilde g$, which is easy to approximate (w.r.t. $\mu$) by small 3-layer networks, but difficult to approximate using 2-layer networks. Thus, we begin by formally defining the $\mu$ and $\tilde g$ that we will use.

First, $\mu$ will be defined as the measure whose density is $\varphi^2(x)$, where $\varphi$ is the Fourier transform of the indicator $\mathbb{1}_B$ of the origin-centered unit-volume Euclidean ball $B=R_d B_d$. Note that since the Fourier transform is an isometry, $\int\varphi^2(x)\,dx=\int\mathbb{1}_B^2(x)\,dx=1$, hence $\varphi^2$ is indeed the density of a probability measure. The form of $\varphi$ is expressed by the following lemma:

Lemma 2.

Let $\varphi$ be the Fourier transform of $\mathbb{1}_{R_d B_d}$. Then

$$\varphi(x)\;=\;\left(\frac{R_d}{\|x\|}\right)^{d/2} J_{d/2}\big(2\pi R_d\|x\|\big).$$

The proof appears in Appendix B.1.
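As a sanity check of the stated formula (our own check, for the one-dimensional case): for $d=1$ the unit-volume ball is $[-1/2,1/2]$ (so $R_1=1/2$), and the expression reduces to $\sin(\pi x)/(\pi x)$, which is indeed the Fourier transform of that interval's indicator.

```python
import numpy as np
from scipy.special import jv

d, R = 1, 0.5
x = np.linspace(0.1, 5.0, 8)
lemma2 = (R / x) ** (d / 2) * jv(d / 2, 2 * np.pi * R * x)
direct = np.sinc(x)                      # np.sinc(x) = sin(pi x)/(pi x)
print(np.max(np.abs(lemma2 - direct)))   # ~ 0
```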

To define our hard-to-approximate function, we introduce some notation. Let $\alpha\ge 1$ and $\gamma\ge 1$ be some large numerical constants to be determined later, and set the number of intervals $N$ accordingly (a quantity polynomial in $d$), assumed to be an integer (essentially, we need these quantities to be sufficiently large so that all the lemmas we construct below hold). Consider $N$ disjoint intervals $\Delta_1,\ldots,\Delta_N$ of equal width, lying at distance $\Theta(\alpha\sqrt d)$ from the origin.

We split the intervals into “good” and “bad” intervals using the following definition:

Definition 1.

$\Delta_i$ is a good interval (or equivalently, $i$ is good) if $|\varphi(x)|$ is bounded away from $0$ (by a fixed threshold) for any $x$ with $\|x\|\in\Delta_i$.

Otherwise, we say that $\Delta_i$ is a bad interval.

For any $i\in\{1,\ldots,N\}$, define

$$g_i(x)\;=\;\begin{cases}\mathbb{1}\{\|x\|\in\Delta_i\} & \text{if $\Delta_i$ is a good interval}\\ 0 & \text{if $\Delta_i$ is a bad interval.}\end{cases} \qquad (6)$$

By definition of a “good” interval and Lemma 2, we see that $g_i$ is defined to be non-zero only when the value of $\varphi$ on the corresponding interval is sufficiently bounded away from $0$, a fact which will be convenient for us later on.

Our proof will revolve around functions of the form

$$\tilde g(x)\;=\;\sum_{i=1}^{N}\epsilon_i\,g_i(x), \qquad \epsilon_i\in\{-1,+1\},$$

which, as explained in Sec. 2, will be shown to be easy to approximate arbitrarily well with a 3-layer network, but hard to approximate with a 2-layer network.

4.2 Key Lemmas

In this subsection, we collect several key technical lemmas on $\varphi$ and the $g_i$, which are crucial for the main proof. The proofs of all the lemmas can be found in Appendix B.

The following lemma ensures that $\varphi$ is sufficiently close to being a constant on any good interval:

Lemma 3.

If $d$ and $\alpha$ are large enough (each exceeding some sufficiently large universal constant), then inside any good interval $\Delta_i$, $\varphi$ has the same sign, and its magnitude varies by at most a constant multiplicative factor.

The following lemma ensures that the Fourier transform of each $g_i$ has a sufficiently large part of its mass far away from the origin:

Lemma 4.

Suppose that $d$ is larger than a sufficiently large universal constant. Then for any $i$,

where $\hat g_i$ is the Fourier transform of $g_i$.

The following lemma ensures that the Fourier transform of $g_i\varphi$ also has sufficiently large mass far away from the origin:

Lemma 5.

Suppose that $d$ is larger than a sufficiently large universal constant, and $\alpha\ge c$, where $c$ is a universal constant. Then for any $i$,

The following lemma ensures that a linear combination of the $g_i$'s has at least a constant probability mass.

Lemma 6.

Suppose that $d\ge c$ and $\alpha\ge c$ for some sufficiently large universal constant $c$; then for every choice of $\epsilon_i\in\{-1,+1\}$, $i=1,\ldots,N$, one has

Finally, the following lemma guarantees that the non-Lipschitz function $\tilde g$ can be approximated by a Lipschitz function (w.r.t. the density $\varphi^2$). This will be used to show that $\tilde g$ can indeed be approximated by a 3-layer network.

Lemma 7.

Suppose that $d$ is larger than a sufficiently large universal constant. For any choice of $\epsilon_i\in\{-1,+1\}$, $i=1,\ldots,N$, there exists a Lipschitz function (with Lipschitz constant polynomial in $d$), supported on a bounded set and with bounded range, which satisfies

4.3 Inapproximability of the Function with 2-Layer Networks

The goal of this section is to prove the following proposition.

Proposition 1.

Fix a dimension $d$, suppose that $d$, $\alpha$ and $\gamma$ are larger than suitable universal constants, and let $k$ be an integer satisfying

$$k \;\le\; C\,e^{c d}, \qquad (7)$$

with $c, C$ being universal constants. There exists a choice of $\epsilon_i\in\{-1,+1\}$, $i=1,\ldots,N$, such that the function $\tilde g=\sum_i\epsilon_i g_i$ has the following property. Let $f$ be of the form

$$f(x)\;=\;\sum_{i=1}^{k} f_i\big(\langle \mathbf{v}_i, x\rangle\big) \qquad (8)$$

for unit vectors $\mathbf{v}_1,\ldots,\mathbf{v}_k\in\mathbb{R}^d$, where the $f_i:\mathbb{R}\to\mathbb{R}$ are measurable functions satisfying the growth condition

$$|f_i(x)|\;\le\;C\big(1+|x|^{\beta}\big)$$

for some constants $C,\beta>0$. Then one has

$$\mathbb{E}_{x\sim\mu}\big(f(x)-\tilde g(x)\big)^2\;\ge\;\delta,$$

where $\delta>0$ is a universal constant.

The proof of this proposition requires a few intermediate steps. In the remainder of the section, we will assume that $d$ and $\alpha$ are chosen to be large enough to satisfy the assumptions of Lemma 6 and Lemma 5. In other words, we assume that $d\ge c$ and $\alpha\ge c$ for a suitable universal constant $c$. We begin with the following:

Lemma 8.

Suppose that $d$ and $\alpha$ are as above. There exists a choice of $\epsilon_i\in\{-1,+1\}$, $i=1,\ldots,N$, such that

for a universal constant $c'>0$.

Proof.

Suppose that each $\epsilon_i$ is chosen independently and uniformly at random from $\{-1,+1\}$. It suffices to show that

$$\mathbb{E}_\epsilon\Big\|P\Big(\sum_{i}\epsilon_i\,g_i\varphi\Big)\Big\|_{L_2}^2\;\ge\;c'$$

for some universal constant $c'$, since that would ensure there exists some choice of the $\epsilon_i$ satisfying the lemma statement. Consider the operator $P$ defined by

This is equivalent to removing low-frequency components from a function (in the Fourier domain), and therefore $P$ is an orthogonal projection. According to Lemma 5 and the isometry of the Fourier transform, we have

$$\big\|P(g_i\varphi)\big\|_{L_2}^2\;\ge\;c_1\,\big\|g_i\varphi\big\|_{L_2}^2 \qquad (9)$$

for every good $i$. Moreover, an application of Lemma 6, and the fact that $\langle g_i\varphi,\,g_j\varphi\rangle=0$ for any $i\neq j$ (as the $g_i$ have disjoint supports), tells us that

$$\sum_{i}\big\|g_i\varphi\big\|_{L_2}^2\;\ge\;c_2 \qquad (10)$$

for a universal constant $c_2$. We finally get

$$\mathbb{E}_\epsilon\Big\|P\Big(\sum_{i}\epsilon_i\,g_i\varphi\Big)\Big\|_{L_2}^2\;=\;\sum_{i}\big\|P(g_i\varphi)\big\|_{L_2}^2\;\ge\;c_1\sum_{i}\big\|g_i\varphi\big\|_{L_2}^2\;\ge\;c_1 c_2\;=:\;c',$$

for a universal constant $c'>0$. ∎

Claim 2.

Let $f$ be a function such that $f\varphi\in L_2(\mathbb{R}^d)$, and $f$ is of the form in Eq. (8). Suppose that the functions $f_i$ are measurable functions satisfying

$$|f_i(x)|\;\le\;C\big(1+|x|^{\beta}\big) \qquad (11)$$

for some constants $C,\beta>0$. Then,

$$\mathrm{Supp}\big(\widehat{f\varphi}\big)\;\subseteq\;T\;=\;\bigcup_{i=1}^{k}\big(\mathrm{span}\{\mathbf{v}_i\}+B\big). \qquad (12)$$
Proof.

Informally, the proof is based on the convolution-multiplication and linearity principles of the Fourier transform, which imply that if $f=\sum_i f_i(\langle\mathbf{v}_i,\cdot\rangle)$, where each summand as well as $\varphi$ have a Fourier transform, then $\widehat{f\varphi}=\sum_i\widehat{f_i(\langle\mathbf{v}_i,\cdot\rangle)}\star\hat\varphi$. Roughly speaking, in our case each $\widehat{f_i(\langle\mathbf{v}_i,\cdot\rangle)}$ (as a function in $\mathbb{R}^d$) is shown to be supported on $\mathrm{span}\{\mathbf{v}_i\}$, so its convolution with $\hat\varphi$ (which is an indicator for the ball $B$) must be supported on $\mathrm{span}\{\mathbf{v}_i\}+B$. Summing over $i$ gives the stated result.

Unfortunately, this simple analysis is not formally true, since we are not guaranteed that $f$ has a Fourier transform as a function on $\mathbb{R}^d$ (this corresponds to situations where the integral in the definition of the Fourier transform in Eq. (5) does not converge). However, at least for functions satisfying the claim's conditions, the Fourier transform still exists as a generalized function, or tempered distribution, over $\mathbb{R}^d$, and using this object we can attain the desired result.

We now turn to provide the formal proof and constructions, starting with a description of tempered distributions and their relevant properties (see (Hunter and Nachtergaele, 2001, Chapter 11) for a more complete survey). To start, let $\mathcal{S}$ denote the space of Schwartz functions on $\mathbb{R}^d$ (Footnote: this corresponds to infinitely-differentiable functions whose values and derivatives decay faster than any polynomial). A tempered distribution in our context is a continuous linear operator from $\mathcal{S}$ to $\mathbb{R}$ (this can also be viewed as an element of the dual space $\mathcal{S}^*$). In particular, any measurable function $h:\mathbb{R}^d\to\mathbb{R}$ which satisfies a polynomial growth condition similar to Eq. (11) can be viewed as a tempered distribution, defined as

$$\psi\;\mapsto\;\int_{\mathbb{R}^d} h(x)\,\psi(x)\,dx,$$

where $\psi\in\mathcal{S}$. Note that the growth condition ensures that the integral above is well-defined. The Fourier transform of a tempered distribution $\Lambda$ is also a tempered distribution, and is defined as

$$\hat\Lambda(\psi)\;=\;\Lambda(\hat\psi),$$

where $\hat\psi$ is the Fourier transform of $\psi$. It can be shown that this directly generalizes the standard notion of Fourier transforms of functions. Finally, we say that a tempered distribution $\Lambda$ is supported on some subset of $\mathbb{R}^d$ if $\Lambda(\psi)=0$ for any function $\psi\in\mathcal{S}$ which vanishes on that subset.

With these preliminaries out of the way, we turn to the setting considered in the claim. Let $\hat f_i$ denote the Fourier transform of $f_i$ (possibly as a tempered distribution, as described above), the existence of which is guaranteed by the fact that $f_i$ is measurable and by Eq. (11). We also define, for each Schwartz function $\psi\in\mathcal{S}$ and each unit vector $\mathbf{v}_i$, a corresponding univariate function by

and define the corresponding tempered distribution (over Schwartz functions in $\mathbb{R}^d$) as

which is indeed an element of $\mathcal{S}^*$ by the linearity of the Fourier transform, by the continuity of the Fourier transform with respect to the topology of $\mathcal{S}$, and by the dominated convergence theorem. Finally, define

Using the fact that (Footnote: this is because the Fourier transform of a function which is constant in the directions orthogonal to $\mathbf{v}_i$ involves the Dirac delta function $\delta$ in those directions, $\delta$ being the Fourier transform of the constant function. See also (Hunter and Nachtergaele, 2001, Chapter 11, Example 11.31).)

(13)

for any $i$, recalling that $\mathbf{v}_i$ has unit norm, and letting $\mathbf{v}_i^{\perp}$ denote the subspace of vectors orthogonal to $\mathbf{v}_i$, we have the following for any $\psi\in\mathcal{S}$:

(14)

where the use of Fubini's theorem is justified by the fact that the relevant integrand is absolutely integrable.

We now use the convolution-multiplication theorem (see e.g., (Hunter and Nachtergaele, 2001, Theorem 11.35)), according to which if $u,w\in L_2(\mathbb{R}^d)$, then

$$\widehat{u\,w}\;=\;\hat u\star\hat w. \qquad (15)$$

Using this, we have the following for every $\psi\in\mathcal{S}$:

Based on this equation, we now claim that each summand vanishes when tested against any $\psi\in\mathcal{S}$ supported on the complement of $\mathrm{span}\{\mathbf{v}_i\}+B$. This would imply that the corresponding tempered distribution is supported in $\mathrm{span}\{\mathbf{v}_i\}+B$, and therefore $\widehat{f\varphi}$ is supported in $T$ (by linearity of the Fourier transform and the fact that $f=\sum_i f_i(\langle\mathbf{v}_i,\cdot\rangle)$). Since the Fourier transform of $f\varphi$ as a tempered distribution coincides with the standard one (as we assume $f\varphi\in L_2$), the result follows.

It remains to prove this vanishing for any $\psi\in\mathcal{S}$ supported on the complement of $\mathrm{span}\{\mathbf{v}_i\}+B$. For such $\psi$, by definition of a convolution, $\psi\star\mathbb{1}_B$ is supported on the complement of $\mathrm{span}\{\mathbf{v}_i\}$. However, the tempered distribution associated with $f_i(\langle\mathbf{v}_i,\cdot\rangle)$ in the Fourier domain is supported on $\mathrm{span}\{\mathbf{v}_i\}$ (since if a test function vanishes on $\mathrm{span}\{\mathbf{v}_i\}$, then the associated univariate function is the zero function, hence its Fourier transform is also the zero function, and the distribution applied to it equals $0$). Therefore, testing this distribution against $\psi\star\mathbb{1}_B$ gives $0$, which by the last displayed equation implies the required vanishing. ∎

Lemma 9.

Let $F, G$ be two functions of unit norm in $L_2(\mathbb{R}^d)$. Suppose that $F$ satisfies

$$\mathrm{Supp}(F)\;\subseteq\;\bigcup_{i=1}^{k}\big(\mathrm{span}\{\mathbf{v}_i\}+B\big) \qquad (16)$$

for some unit vectors $\mathbf{v}_1,\ldots,\mathbf{v}_k\in\mathbb{R}^d$. Moreover, suppose that $G$ is radial and that