Some Approximation Bounds for Deep Networks

Brendan McCane (mccane@cs.otago.ac.nz)
Lech Szymanski (lech@cs.otago.ac.nz)
Dept of Computer Science, University of Otago, Dunedin, New Zealand
Abstract

In this paper we establish new bounds on the approximation of functions by deep networks and, in doing so, introduce some new deep network architectures for function approximation. These results give some theoretical insight into the success of autoencoders and ResNets.

Keywords: Deep Networks; Function Approximation.

1 Introduction

Deep networks have been shown to be more efficient than shallow networks for certain classes of problems: periodic functions (Szymanski and McCane, 2014); radially symmetric functions (Eldan and Shamir, 2016); and hierarchical compositional functions (Mhaskar and Poggio, 2016). Other work has shown that deep networks can efficiently represent low-dimensional manifolds (Basri and Jacobs, 2016; Shaham et al., 2016), and Telgarsky (2016) shows that there exist functions which cannot be efficiently represented with shallow networks.

All is not lost for shallow networks, however. Mhaskar (1996) gives bounds for the approximation of Sobolev functions using shallow networks and shows that these bounds are tight. There appear to be no similar bounds for deep networks on this class of functions. One might naturally ask whether deep networks can approximate this class of functions with similar bounds, or whether shallow networks are demonstrably superior in this case. If the former, then one may conclude that a deep network is never worse than a shallow counterpart and hence we should always favour deep networks. If the latter, then a shallow network may often be the better choice. This paper goes some way towards answering this question by establishing approximation upper bounds for some specific deep network architectures.

2 Definitions

We follow many of the conventions of Mhaskar and Poggio (2016).

Definition 1 ($V_N$)

The set of all networks of a given kind with complexity $N$ (the number of units in the network).

Definition 2 ($\|\cdot\|$)

Norm of a function. Let $I^n = [-1,1]^n$ be the unit cube in $n$ dimensions. Let $X = C(I^n)$ be the space of all continuous functions on $I^n$ with:

$\|f\| = \max_{x \in I^n} |f(x)|$.   (1)
Definition 3 (Degree of approximation)

If $f$ is the unknown function to be approximated, then the distance between $f$ and an approximating class of networks $V_N$ is:

$\mathrm{dist}(f, V_N) = \inf_{P \in V_N} \|f - P\|$.   (2)
Definition 4 ($W_m^n$)

A Sobolev space. Let $m \ge 1$ be an integer. Let $W_m^n$ be the set of all functions of $n$ variables with continuous partial derivatives of orders up to $m$ such that:

$\|f\| + \sum_{1 \le |k|_1 \le m} \|D^k f\| \le 1$,   (3)

where $D^k$ denotes the partial derivative indicated by the multi-integer $k$, and $|k|_1$ is the sum of the components of $k$.

Definition 5 ($\sigma$)

The transfer function. Let $\sigma : \mathbb{R} \to \mathbb{R}$ be infinitely differentiable, and not a polynomial on any subinterval of $\mathbb{R}$. Further, we restrict ourselves to transfer functions of this kind throughout. Many common smooth transfer functions satisfy these conditions, including the logistic function, tanh, and softplus.

Definition 6 ($\mathcal{S}_N$)

The class of all shallow networks with $N$ units and $n$ inputs. Let $\mathcal{S}_N$ denote the class of shallow networks with $N$ units of the form:

$x \mapsto \sum_{k=1}^{N} a_k\, \sigma(w_k \cdot x + b_k)$,   (4)

where $w_k \in \mathbb{R}^n$ and $a_k, b_k \in \mathbb{R}$. The number of trainable parameters in such a network is $(n+2)N$. Since $n$ is fixed, it should be obvious that the number of parameters grows linearly in $N$.
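Purely as an illustration (our own sketch, not part of the formal development), a shallow network of the form in Equation (4) can be written in a few lines of numpy; the logistic function is used here as one admissible choice of $\sigma$:

    import numpy as np

    def logistic(z):
        # one admissible transfer function: infinitely differentiable, not a polynomial
        return 1.0 / (1.0 + np.exp(-z))

    def shallow_net(x, W, b, a):
        # x: input of shape (n,); W: hidden weights of shape (N, n);
        # b: hidden biases of shape (N,); a: output weights of shape (N,).
        # Computes sum_k a_k * sigma(w_k . x + b_k), as in Equation (4).
        return a @ logistic(W @ x + b)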

In all the deep networks we consider, one neuron in each layer after the first hidden layer is identified as the function approximation neuron. This allows us to progressively approximate the function of interest.

Definition 7 ($\mathcal{I}_{N,l}$)

The class of input residual networks with $N$ units per layer, where $l$ is the number of layers and $i$ indexes the layers of the network. The first hidden layer of the network receives just the input coordinates. Each subsequent layer has the input coordinates and all of the previous layer as input. See Figure 2.

Definition 8 ($\mathcal{C}_{N,l}$)

The class of cascade residual networks with $N$ units per layer, where $l$ is the number of layers and $i$ indexes the layers of the network. The first hidden layer of the network receives just the input coordinates. Each subsequent layer has the input coordinates and the function approximation neuron of the previous layer as input. See Figure 1.
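As a sketch only (the exact wiring of the function approximation neuron is our reading of Definitions 7 and 8, in which that neuron carries the running approximation, matching the construction in the proof of Theorem 2), a forward pass of a cascade residual network could look as follows; an input residual network differs only in that each layer receives all units of the previous layer rather than the single approximation neuron:

    import numpy as np

    def cascade_resnet(x, layers, sigma):
        # layers: list of (W, b, a) triples, one per hidden layer.
        # Layer 1 sees only the input coordinates x (W of shape (N, n));
        # each later layer sees x together with the current value of the
        # function approximation neuron (W of shape (N, n + 1)).
        approx = 0.0
        z = x
        for (W, b, a) in layers:
            h = sigma(W @ z + b)               # N hidden units in this layer
            approx = approx + a @ h            # update the approximation neuron
            z = np.concatenate([x, [approx]])  # input coords + approximation neuron
        return approx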

Definition 9 ($\mathcal{F}_{N,l}$, $N > n$)

The class of fully connected layer networks with $N$ units per layer. Each layer is fully connected to the next layer except for the function approximation neuron, which is connected only to the function approximation neuron of the next layer and to the final output neuron. In this case the number of neurons in each layer exceeds the input dimension ($N > n$). See Figure 3.

Definition 10 ($\mathcal{F}_{N,l}$, $N \le n$)

The class of fully connected layer networks, as in Definition 9, but with $N \le n$ units per layer. See Figure 3.
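Mirroring the previous sketch, and again under our reading of Definition 9 (the treatment of the function approximation neuron is an assumption), a fully connected layer network propagates the whole hidden layer forward while the approximation neuron only accumulates towards the output:

    def fc_layer_net(x, layers, sigma):
        # x, W, b, a are numpy arrays; layers is a list of (W, b, a) triples.
        # Each hidden layer is fully connected to the next, while the function
        # approximation neuron feeds only the next approximation neuron and
        # the final output.
        h = x
        approx = 0.0
        for (W, b, a) in layers:
            h = sigma(W @ h + b)      # fully connected hidden units
            approx = approx + a @ h   # contribution to the approximation neuron
        return approx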

Figure 1: A CascadeResNet $\mathcal{C}_{N,l}$. The $x_j$ are input nodes, the $h_{i,k}$ are nodes in layer $i$, and the $f_i$ are the approximation outputs for layer $i$.
Figure 2: An InputResNet $\mathcal{I}_{N,l}$. The $x_j$ are input nodes, the $h_{i,k}$ are nodes in layer $i$, and the $f_i$ are the approximation outputs for layer $i$.
Figure 3: A FullyConnectedLayerNet $\mathcal{F}_{N,l}$. The $x_j$ are input nodes, the $h_{i,k}$ are nodes in layer $i$, and the $f_i$ are the approximation outputs for layer $i$.

3 Approximation Bounds

We take as our starting point Theorem 2.1 of Mhaskar (1996), also reported as Theorem 2.1(a) in Mhaskar and Poggio (2016), and reproduce it here:

Theorem 1 (Theorem 2.1(a) of Mhaskar and Poggio (2016))

Let $\sigma$ be infinitely differentiable, and not a polynomial on any subinterval of $\mathbb{R}$. For $f \in W_m^n$:

$\mathrm{dist}(f, \mathcal{S}_N) \le c N^{-m/n}$,   (5)

for some constant $c$.

We use the results of Theorem 1 to derive bounds for the deep network architectures defined above in the next three theorems.

Theorem 2 (Cascade Residual Network Approximation)

Let $\sigma$ be infinitely differentiable, and not a polynomial on any subinterval of $\mathbb{R}$. For $f \in W_m^n$, the class $\mathcal{C}_{N,l}$ of cascade residual networks with $l$ layers of $N$ units each, and some constant $c$:

$\mathrm{dist}(f, \mathcal{C}_{N,l}) \le \left(c N^{-m/n}\right)^{l}$.   (6)
Proof.

From Theorem 1, choose $P_1 \in \mathcal{S}_N$ for the first layer, so that:

$\|f - P_1\| \le c_1 N^{-m/n}$,   (7)

where $c_1$ is some constant. From the first layer, create a new function to approximate:

$f_1 = f - P_1$.   (8)

$f_1$ is clearly in $W_m^n$, and note that $\|f_1\| \le c_1 N^{-m/n}$, and therefore $f_1$ can be approximated with another single layer network ($P_2 \in \mathcal{S}_N$), leading to the following error of approximation:

$\|f - (P_1 + P_2)\| = \|f_1 - P_2\|$   (9)
$\le c_2\, \|f_1\|\, N^{-m/n}$   (10)
$\le c_2\, c_1 N^{-m/n}\, N^{-m/n}$   (11)
$= c_1 c_2\, N^{-2m/n}$.   (12)

Repeat the procedure by creating a new function to approximate:

$f_2 = f_1 - P_2 = f - (P_1 + P_2)$.   (13)

Again $f_2 \in W_m^n$ and $\|f_2\| \le c_1 c_2 N^{-2m/n}$. Approximate $f_2$ with a further layer ($P_3 \in \mathcal{S}_N$):

$\|f - (P_1 + P_2 + P_3)\| = \|f_2 - P_3\|$   (14)
$\le c_3\, \|f_2\|\, N^{-m/n}$   (15)
$\le c_3\, c_1 c_2 N^{-2m/n}\, N^{-m/n}$   (16)
$= c_1 c_2 c_3\, N^{-3m/n}$.   (17)

A simple inductive argument completes the proof.

Since $c N^{-m/n} < 1$ (because $\|f\| \le 1$, a constant function approximation would produce an error less than 1), it follows that the network will approximate the function exponentially fast in the number of layers.
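The constructive argument above can be mimicked numerically. The sketch below is an illustration only, not the authors' algorithm: each layer is fitted to the current residual with random hidden weights and a least-squares solve for the output weights (a random-feature stand-in for the approximation step of Theorem 1), and the first layer sees a zero approximation neuron, which is harmless for the illustration. Under favourable conditions the residual shrinks by a roughly constant factor per added layer:

    import numpy as np

    rng = np.random.default_rng(0)

    def fit_layer(z, residual, width):
        # Fit one shallow layer to the current residual: random hidden weights,
        # least-squares output weights.
        W = rng.normal(size=(width, z.shape[1]))
        b = rng.normal(size=width)
        H = np.tanh(z @ W.T + b)                      # hidden activations
        a, *_ = np.linalg.lstsq(H, residual, rcond=None)
        return (W, b, a), H @ a

    # a target function on the cube in 2 dimensions (chosen arbitrarily)
    X = rng.uniform(-1.0, 1.0, size=(500, 2))
    f = np.sin(3.0 * X[:, 0]) * np.cos(2.0 * X[:, 1])

    approx = np.zeros_like(f)
    for layer in range(5):
        z = np.hstack([X, approx[:, None]])           # input coords + approximation neuron
        _, correction = fit_layer(z, f - approx, width=20)
        approx = approx + correction
        print("layer", layer + 1, "sup-norm residual", np.max(np.abs(f - approx)))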

Theorem 3 (Fully Connected Layer Network Approximation, $N > n$)

Let $\sigma$ be infinitely differentiable, and not a polynomial on any subinterval of $\mathbb{R}$. For $f \in W_m^n$, the class $\mathcal{F}_{N,l}$ of fully connected layer networks with $l$ layers of $N > n$ units each, and some constant $c$:

$\mathrm{dist}(f, \mathcal{F}_{N,l}) \le \left(c N^{-m/n}\right)^{l}$,   (18)

if each layer is an invertible map.

Proof.

Define $g_1$ as the (invertible) mapping computed by layer 1, and $g_i$ as the (invertible) mapping computed by layer $i$ excluding the function approximation neuron (see Figure 3). The input to layer 2 is $g_1(x)$; the input to layer 3 is $g_2(g_1(x))$; etc. Consider the input to layer $i+1$. Since $g_i \circ \cdots \circ g_1$ is an invertible map, if necessary we could recover the original input coordinates $x$ from the input to layer $i+1$. Since this is identical to the situation in Theorem 2, the same result applies.
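To make the invertibility requirement concrete, here is a toy illustration of our own (a square random linear map with the logistic transfer function, neither of which is prescribed by the theorem): the layer's inputs can be recovered exactly from its outputs, which is what allows each subsequent layer to behave as if it saw the original coordinates.

    import numpy as np

    def logistic(z):
        return 1.0 / (1.0 + np.exp(-z))

    def logit(p):
        return np.log(p) - np.log(1.0 - p)

    n = 3
    rng = np.random.default_rng(1)
    W = rng.normal(size=(n, n))          # square and (almost surely) invertible
    b = rng.normal(size=n)

    x = rng.uniform(-1.0, 1.0, size=n)
    h = logistic(W @ x + b)              # the layer's (invertible) mapping
    x_recovered = np.linalg.solve(W, logit(h) - b)
    print(np.allclose(x, x_recovered))   # True: the inputs are recoverable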

The next theorem deals with the case where $N \le n$. However, in this case a continuous invertible mapping is not possible. Instead we project the coordinates into a lower dimensional space using a Hilbert curve mapping to maintain locality (nearby points in the lower dimensional space are nearby in the original space). Theoretically, we could do this with no loss of information, but unfortunately this requires an infinite recursion, and therefore any computational procedure will induce an error in the new coordinates. Nevertheless, this error can be made small with a fixed-cost projection.

A Hilbert curve can be defined by the centre coordinates of a hierarchically divided hypercube. See Figure 4 for a 2D example. The curve itself, up to level $h$, can be constructed by recursively subdividing an initial square and creating line segments between appropriate centre points. The Hilbert curve itself is the limiting curve as $h$ goes to infinity and defines a continuous, but non-differentiable, onto mapping from 1 dimension to $d$ dimensions. There are several ways to generate Hilbert curve mappings, both from $d$ dimensions to 1 and from 1 to $d$ dimensions. See Lawder (2000) for one efficient method. For level $h$, the maximum difference between a point in the hypercube and the nearest centre point on the curve is half the diagonal of a level-$h$ subcell, which decreases in proportion to $2^{-h}$; we denote this maximum coordinate error by $\epsilon_h$.
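For concreteness, a well-known bit-manipulation construction for the 2-D case maps a 1-D index to grid cell coordinates at a chosen level; Lawder (2000) describes a general $d$-dimensional method. The sketch below is an illustration of that standard 2-D construction, not of the paper's projection scheme:

    def _rotate(s, x, y, rx, ry):
        # Rotate/flip a quadrant so that consecutive sub-curves join up.
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        return x, y

    def hilbert_index_to_xy(level, d):
        # Map an index d in [0, 4**level) to cell coordinates (x, y) on the
        # level-`level` Hilbert curve over a (2**level x 2**level) grid.
        s, x, y, t = 1, 0, 0, d
        while s < 2 ** level:
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            x, y = _rotate(s, x, y, rx, ry)
            x, y = x + s * rx, y + s * ry
            t //= 4
            s *= 2
        return x, y

    def project_to_curve(u, level):
        # Map a scalar u in [0, 1) to the centre of the corresponding level-h
        # cell in [0, 1)^2, i.e. to a point on the level-h curve.
        cells = 4 ** level
        x, y = hilbert_index_to_xy(level, min(int(u * cells), cells - 1))
        side = 2 ** level
        return (x + 0.5) / side, (y + 0.5) / side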

Figure 4: The first 3 levels of a Hilbert curve. From https://commons.wikimedia.org/wiki/File:Hilbert_curve_3.svg, author Geoff Richards.
Theorem 4 (Fully Connected Layer Network Approximation, $N \le n$)

Let $\sigma$ be infinitely differentiable, and not a polynomial on any subinterval of $\mathbb{R}$. For $f \in W_m^n$ Lipschitz with Lipschitz constant $L$, a Hilbert curve transform of level $h$ with maximum coordinate error $\epsilon_h$, and some constant $c$:

$\mathrm{dist}(f, \mathcal{F}_{N,l}) \le \left(c N^{-m/N}\right)^{l} + L\,\epsilon_h$.   (19)
Proof.

The proof is straightforward. For the coordinate projection from $n$ dimensions down to $N$ dimensions, take the first $n - N + 1$ coordinates and apply a Hilbert curve transformation, mapping them to a single coordinate. This induces a coordinate error less than $\epsilon_h$ and subsequently a function approximation error less than $L\,\epsilon_h$. We then apply Theorem 3 to the remaining coordinates, and the triangle inequality proves the result.

4 Discussion

The constants, $c$, in the theorems pose some difficulty since the errors are exponential in the number of layers ($l$). It appears to be possible to estimate the size of these constants (Dupont and Scott, 1978, 1980); however, the process is not straightforward and we have not attempted to estimate them. Nevertheless, these theoretical results provide hints that for more general functions deep networks are never much worse than shallow networks. Given previous results showing that deep networks can be much better than shallow networks for specific function classes, it follows that there is little to lose in always choosing deep architectures (modulo the difficulties in learning deep networks).

These theoretical results also point toward layer-wise learning algorithms that reduce error exponentially fast in the number of layers, in a manner somewhat analogous to AdaBoost-like algorithms. We are currently investigating the practical implications of such algorithms.

For layer-wise learning of fully connected networks, Theorem 3 suggests that requiring invertible maps might be important. This might explain some of the success of autoencoders. Although Theorem 4 suggests that non-invertible maps might be able to achieve similar results via space-filling curve mappings, it remains to be seen whether such a scheme would be practical.

More recently, ResNets (He et al., 2016; Szegedy et al., 2017) of various types have been shown to outperform non-ResNet architectures, with the most common argument for their success being that it is easier for gradients to propagate back to the inputs during learning. Theorems 2 and 3 together suggest a second reason: it may also be easier to approximate residual functions when layers are skipped, since there is no requirement that the layer mapping be invertible.

References

  • Basri and Jacobs (2016) Ronen Basri and David Jacobs. Efficient representation of low-dimensional manifolds using deep networks. arXiv preprint arXiv:1602.04723, 2016.
  • Dupont and Scott (1978) Todd Dupont and Ridgway Scott. Constructive polynomial approximation in Sobolev spaces. In C. de Boor and G. H. Golub, editors, Recent Advances in Numerical Analysis, pages 31–44, 1978.
  • Dupont and Scott (1980) Todd Dupont and Ridgway Scott. Polynomial approximation of functions in Sobolev spaces. Mathematics of Computation, 34(150):441–463, 1980.
  • Eldan and Shamir (2016) Ronen Eldan and Ohad Shamir. The power of depth for feedforward neural networks. JMLR: Workshop and Conference Proceedings, 49:1–34, 2016.
  • He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • Lawder (2000) Jonathan K. Lawder. Calculation of mappings between one and n-dimensional values using the Hilbert space-filling curve. Technical Report Research Report BBKCS-00-01, School of Computer Science and Information Systems, Birkbeck College, 2000.
  • Mhaskar (1996) Hrushikesh N Mhaskar. Neural networks for optimal approximation of smooth and analytic functions. Neural Computation, 8:164–177, 1996.
  • Mhaskar and Poggio (2016) Hrushikesh N Mhaskar and Tomaso Poggio. Deep vs. shallow networks: An approximation theory perspective. Analysis and Applications, 14(06):829–848, 2016.
  • Shaham et al. (2016) Uri Shaham, Alexander Cloninger, and Ronald R Coifman. Provable approximation properties for deep neural networks. Applied and Computational Harmonic Analysis, 2016.
  • Szegedy et al. (2017) Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In AAAI, volume 4, page 12, 2017.
  • Szymanski and McCane (2014) Lech Szymanski and Brendan McCane. Deep networks are effective encoders of periodicity. IEEE Transactions on Neural Networks and Learning Systems, 25(10):1816–1827, 2014.
  • Telgarsky (2016) Matus Telgarsky. Benefits of depth in neural networks. JMLR: Workshop and Conference Proceedings, 49:1–23, 2016.