Reducing Parameter Space for Neural Network Training
Abstract
For neural networks (NNs) with rectified linear unit (ReLU) or binary activation functions, we show that their training can be accomplished in a reduced parameter space. Specifically, the weights in each neuron can be trained on the unit sphere, as opposed to the entire space, and the threshold can be trained in a bounded interval, as opposed to the real line. We show that the NNs in the reduced parameter space are mathematically equivalent to the standard NNs with parameters in the whole space. The reduced parameter space shall facilitate the optimization procedure for the network training, as the search space becomes (much) smaller. We demonstrate the improved training performance using numerical examples.
1 Introduction
Interest in neural networks (NNs) has grown significantly in recent years because of the success of deep networks in many practical applications. Complex and deep neural networks are known to be capable of learning very complex phenomena that are beyond the capabilities of many traditional machine learning techniques. The literature is too vast to survey here; we cite only a few review-type publications [14, 2, 5, 17, 4, 8, 18].
In a NN, each neuron produces an output in the following form,
$$y = \sigma\left(\mathbf{w}^T \mathbf{x}_{in} + b\right), \tag{1.1}$$
where the vector $\mathbf{x}_{in}$ represents the signal from all incoming connecting neurons, $\mathbf{w}$ denotes the corresponding weights, $\sigma$ is the activation function, and $b$ is its threshold. In a complex (and deep) network with a large number of neurons, the total number of free parameters $\{\mathbf{w}\}$ and $\{b\}$ can be exceedingly large. Their training thus poses a tremendous numerical challenge, as the objective function (loss function) to be optimized becomes highly nonconvex, with a highly complicated landscape [11]. Any numerical optimization procedure can be trapped in a local minimum and produce unsatisfactory training results.
This paper is not concerned with the numerical algorithm aspect of NN training. Instead, its purpose is to show that the training of NNs can be conducted in a reduced parameter space, thus providing any numerical optimization algorithm a smaller space in which to search for the parameters. This reduction applies to activation functions with the following scaling property: for any $\lambda \geq 0$, $\sigma(\lambda y) = \gamma(\lambda)\,\sigma(y)$, where $\gamma$ depends only on $\lambda$. The binary activation function [12], one of the first activation functions used, satisfies this property with $\gamma(\lambda) \equiv 1$. The rectified linear unit (ReLU) [15, 7], one of the most widely used activation functions nowadays, satisfies this property with $\gamma(\lambda) = \lambda$. For NNs with this type of activation function, we show that they can be equivalently trained in a reduced parameter space. More specifically, let the length of the weight vector $\mathbf{w}$ be $d$. Instead of training $\mathbf{w} \in \mathbb{R}^d$, the weights can be equivalently trained as unit vectors with $\|\mathbf{w}\|_2 = 1$, which means $\mathbf{w} \in S^{d-1}$, the unit sphere in $\mathbb{R}^d$. The threshold can also be trained in a bounded interval $[-X_B, X_B]$, where $X_B$ depends only on the domain of the target function (cf. (2.9)), as opposed to the entire real line $\mathbb{R}$. By enforcing these constraints on $\mathbf{w}$ and $b$, the search space for the parameters is reduced, much more significantly so for deep and complex NNs. This should improve the performance of the training algorithms, as the reduction in the parameter space may potentially eliminate many local minima.
This paper is organized as follows. In Section 2, we first derive the constraints on the parameters using networks with a single hidden layer, while enforcing the equivalence of the network. In Section 3, we prove that the constrained NN formulation remains a universal approximator. In Section 4, we present the constraints for NNs with multiple hidden layers. Finally, in Section 5, we present numerical experiments to demonstrate the improvement of the network training in the reduced parameter space. We emphasize that this paper is not concerned with any particular training algorithm. Therefore, in our numerical tests we used the most standard optimization algorithms from MATLAB®.
2 Constraints for NN with Single Hidden Layer
Let us first consider the standard NN with a single hidden layer, in the context of approximating an unknown response function $f: \mathbb{R}^d \to \mathbb{R}$. The NN approximation using activation function $\sigma$ takes the following form,
$$N(\mathbf{x}) = \sum_{j=1}^{N} c_j\,\sigma\left(\mathbf{w}_j^T \mathbf{x} + b_j\right), \tag{2.1}$$
where $\mathbf{w}_j \in \mathbb{R}^d$ is the weight vector, $b_j \in \mathbb{R}$ the threshold, $c_j \in \mathbb{R}$, and $N$ is the width of the network.
We restrict our discussion to the following two activation functions. One is the rectified linear unit (ReLU),
$$\sigma(y) = \max(0, y). \tag{2.2}$$
The other one is the binary activation function, also known as the threshold/step function,
$$\sigma(y) = \begin{cases} 1, & y \geq 0, \\ 0, & y < 0. \end{cases} \tag{2.3}$$
We remark that these two activation functions satisfy the following scaling property: for any $y \in \mathbb{R}$ and any $\lambda \geq 0$, there exists a constant $\gamma(\lambda) \geq 0$ such that
$$\sigma(\lambda y) = \gamma(\lambda)\,\sigma(y), \tag{2.4}$$
where $\gamma$ depends only on $\lambda$ but not on $y$. The ReLU satisfies this property with $\gamma(\lambda) = \lambda$, which is known as scale invariance. The binary activation function satisfies the scaling property with $\gamma(\lambda) \equiv 1$.
We also list the following properties, which are important for the method we present in this paper.

For the binary activation function (2.3), for any $y \neq 0$,
$$\sigma(y) + \sigma(-y) = 1. \tag{2.5}$$

For the ReLU activation function (2.2), for any $y \in \mathbb{R}$ and any $\alpha > 0$,
$$\sigma(y + \alpha) - \sigma(y - \alpha) + \sigma(-y + \alpha) - \sigma(-y - \alpha) = 2\alpha. \tag{2.6}$$
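These properties are straightforward to verify numerically. The following short Python/NumPy check (our own illustration, not part of the paper's MATLAB experiments) confirms the scaling property and the two constant-representation identities in the form used here:

```python
import numpy as np

def relu(y):
    # ReLU activation: sigma(y) = max(0, y)
    return np.maximum(0.0, y)

def binary(y):
    # binary/step activation: sigma(y) = 1 for y >= 0, else 0
    return (y >= 0.0).astype(float)

rng = np.random.default_rng(0)
y = rng.standard_normal(10_000)   # random test points (nonzero almost surely)

# scaling property: sigma(lam * y) = gamma(lam) * sigma(y) for lam >= 0
for lam in [0.5, 2.0, 7.3]:
    assert np.allclose(relu(lam * y), lam * relu(y))    # gamma(lam) = lam
    assert np.allclose(binary(lam * y), binary(y))      # gamma(lam) = 1

# binary identity: sigma(y) + sigma(-y) = 1 whenever y != 0
assert np.allclose(binary(y) + binary(-y), 1.0)

# ReLU identity: the four-term combination equals the constant 2*alpha for all y
alpha = 0.8
lhs = relu(y + alpha) - relu(y - alpha) + relu(-y + alpha) - relu(-y - alpha)
assert np.allclose(lhs, 2 * alpha)
```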
2.1 Weight Constraints
We first show that the training of (2.1) can be equivalently conducted with the constraint $\|\mathbf{w}_j\|_2 = 1$, i.e., with each weight vector being a unit vector. This effectively reduces the search space for the weights from $\mathbb{R}^d$ to $S^{d-1}$, the $(d-1)$-dimensional unit sphere.
Proposition 2.1
Any neural network construction (2.1) using the ReLU (2.2) or the binary (2.3) activation function has an equivalent form
$$N(\mathbf{x}) = \sum_{j=1}^{\widetilde{N}} \hat{c}_j\,\sigma\left(\hat{\mathbf{w}}_j^T \mathbf{x} + \hat{b}_j\right), \qquad \|\hat{\mathbf{w}}_j\|_2 = 1. \tag{2.7}$$
Proof. Let us first assume $\|\mathbf{w}_j\|_2 \neq 0$ for all $j$. We then have
$$\sigma\left(\mathbf{w}_j^T \mathbf{x} + b_j\right) = \sigma\left(\|\mathbf{w}_j\|_2 \left(\frac{\mathbf{w}_j^T}{\|\mathbf{w}_j\|_2}\,\mathbf{x} + \frac{b_j}{\|\mathbf{w}_j\|_2}\right)\right) = \gamma\left(\|\mathbf{w}_j\|_2\right)\,\sigma\left(\hat{\mathbf{w}}_j^T \mathbf{x} + \hat{b}_j\right),$$
where $\gamma$ is the factor in the scaling property (2.4) satisfied by both the ReLU and binary activation functions. Upon defining
$$\hat{c}_j = \gamma\left(\|\mathbf{w}_j\|_2\right) c_j, \qquad \hat{\mathbf{w}}_j = \frac{\mathbf{w}_j}{\|\mathbf{w}_j\|_2}, \qquad \hat{b}_j = \frac{b_j}{\|\mathbf{w}_j\|_2},$$
we have the equivalent form (2.7).
Next, let us consider the case $\|\mathbf{w}_j\|_2 = 0$ for some $j$. The contribution of this term to the construction (2.1) is the constant $c_j \sigma(b_j)$, where $\sigma(b_j) \in \{0, 1\}$ for the binary activation function and $\sigma(b_j) = \max(0, b_j)$ for the ReLU function. Therefore, if $\sigma(b_j) = 0$, this term in (2.1) vanishes. If $\sigma(b_j) \neq 0$, the contributions of these terms in (2.1) are constants, which can be represented by combinations of neuron outputs with unit weight vectors, using the relations (2.5) and (2.6) for the binary and ReLU activation functions, respectively. We thus obtain a new representation in the form of (2.7) that includes all the terms in the original expression (2.1). This completes the proof.
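The normalization step in this proof is constructive and can be checked directly. The following sketch (Python/NumPy; the helper names are ours, not from the paper) maps an unconstrained single-hidden-layer ReLU network with generic nonzero weights to the reduced form with unit weight vectors, and verifies that the two networks coincide:

```python
import numpy as np

def relu(y):
    return np.maximum(0.0, y)

def net(x, c, W, b):
    # single-hidden-layer network: sum_j c_j * sigma(w_j . x + b_j)
    return float((c * relu(W @ x + b)).sum())

rng = np.random.default_rng(1)
d, m = 3, 8                            # input dimension and network width
c = rng.standard_normal(m)
W = rng.standard_normal((m, d))        # generic weights: no zero rows
b = rng.standard_normal(m)

# reduced form: w_hat = w/||w||, b_hat = b/||w||, and, for the ReLU,
# gamma(||w||) = ||w||, so c_hat = ||w|| * c
norms = np.linalg.norm(W, axis=1)
W_hat = W / norms[:, None]
b_hat = b / norms
c_hat = c * norms

x = rng.standard_normal(d)
assert np.isclose(net(x, c, W, b), net(x, c_hat, W_hat, b_hat))
assert np.allclose(np.linalg.norm(W_hat, axis=1), 1.0)
```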
The proof immediately gives us another equivalent form, obtained by combining all the constant terms from (2.1) into a single constant first and then explicitly including it in the expression.
Corollary 2.2
Any neural network construction (2.1) using the ReLU (2.2) or the binary (2.3) activation function has an equivalent form
$$N(\mathbf{x}) = \sum_{j=1}^{\widetilde{N}} \hat{c}_j\,\sigma\left(\hat{\mathbf{w}}_j^T \mathbf{x} + \hat{b}_j\right) + c_0, \qquad \|\hat{\mathbf{w}}_j\|_2 = 1, \tag{2.8}$$
where $c_0 \in \mathbb{R}$ is a constant.
2.2 Threshold Constraints
When the target function is defined on a bounded domain, i.e., $f: D \to \mathbb{R}$ with $D \subset \mathbb{R}^d$ bounded, we can also derive constraints on the thresholds.
Proposition 2.3
With the ReLU (2.2) or the binary (2.3) activation function, let (2.1) be an approximator to a function $f: D \to \mathbb{R}$, where $D \subset \mathbb{R}^d$ is a bounded domain. Let
$$X_B = \max_{\mathbf{x} \in D} \|\mathbf{x}\|_2. \tag{2.9}$$
Then, (2.1) has an equivalent form
$$N(\mathbf{x}) = \sum_{j=1}^{\widetilde{N}} \hat{c}_j\,\sigma\left(\hat{\mathbf{w}}_j^T \mathbf{x} + \hat{b}_j\right), \qquad \|\hat{\mathbf{w}}_j\|_2 = 1, \quad |\hat{b}_j| \leq X_B. \tag{2.10}$$
Proof. Proposition 2.1 establishes that (2.1) has an equivalent form (2.7). Since $\hat{\mathbf{w}}_j$ is a unit vector, we have
$$\left|\hat{\mathbf{w}}_j^T \mathbf{x}\right| \leq \|\hat{\mathbf{w}}_j\|_2\,\|\mathbf{x}\|_2 \leq X_B, \qquad \forall\, \mathbf{x} \in D, \tag{2.11}$$
where the bound (2.9) is used.
Let us first consider the case $\hat{b}_j < -X_B$. Then $\hat{\mathbf{w}}_j^T \mathbf{x} + \hat{b}_j < 0$ and $\sigma(\hat{\mathbf{w}}_j^T \mathbf{x} + \hat{b}_j) = 0$, for all $\mathbf{x} \in D$, for both the ReLU and binary activation functions. Subsequently, this term has no contribution to the approximation and can be eliminated.
Next, let us consider the case $\hat{b}_j > X_B$. Then $\hat{\mathbf{w}}_j^T \mathbf{x} + \hat{b}_j > 0$ for all $\mathbf{x} \in D$. Let $J = \{j : \hat{b}_j > X_B\}$ be the set of indices of the terms in (2.7) that satisfy this condition. We then have $\hat{\mathbf{w}}_j^T \mathbf{x} + \hat{b}_j > 0$, for all $j \in J$ and all $\mathbf{x} \in D$. We now show that the net contribution of these terms in (2.7) is included in the equivalent form (2.10).

For the binary activation function (2.3), the contribution of these terms to the approximation (2.7) is the constant
$$\sum_{j \in J} \hat{c}_j\,\sigma\left(\hat{\mathbf{w}}_j^T \mathbf{x} + \hat{b}_j\right) = \sum_{j \in J} \hat{c}_j.$$
Again, using the relation (2.5), any constant can be expressed by a combination of binary activation terms with unit weight vectors and thresholds $|\hat{b}| \leq X_B$. Such terms are already included in (2.10).

For the ReLU activation function (2.2), the contribution of these terms to (2.7) is
$$\sum_{j \in J} \hat{c}_j\,\sigma\left(\hat{\mathbf{w}}_j^T \mathbf{x} + \hat{b}_j\right) = \sum_{j \in J} \hat{c}_j \left(\hat{\mathbf{w}}_j^T \mathbf{x} + \hat{b}_j\right) = \sum_{j \in J} \hat{c}_j \left[\sigma\left(\hat{\mathbf{w}}_j^T \mathbf{x}\right) - \sigma\left(-\hat{\mathbf{w}}_j^T \mathbf{x}\right)\right] + c_0, \tag{2.12}$$
where $c_0 = \sum_{j \in J} \hat{c}_j \hat{b}_j$ and the last equality follows from the simple property of the ReLU function $y = \sigma(y) - \sigma(-y)$. The first two terms already use unit weight vectors ($-\hat{\mathbf{w}}_j$ is also a unit vector) and zero thresholds, and are therefore included in (2.10). For the constant $c_0$, we again invoke the relation (2.6) and represent it by
$$c_0 = \frac{c_0}{2\alpha}\left[\sigma\left(\mathbf{n}^T \mathbf{x} + \alpha\right) - \sigma\left(\mathbf{n}^T \mathbf{x} - \alpha\right) + \sigma\left(-\mathbf{n}^T \mathbf{x} + \alpha\right) - \sigma\left(-\mathbf{n}^T \mathbf{x} - \alpha\right)\right],$$
where $\mathbf{n}$ is an arbitrary unit vector and $0 < \alpha \leq X_B$. Obviously, this expression is a collection of terms (4 terms as in (2.6)) that are included in (2.10). This completes the proof.
The equivalence between the standard NN expression (2.1) and the constrained expression (2.10) indicates that the NN training can be conducted in a reduced parameter space. For the weights in each neuron, the training can be conducted on $S^{d-1}$, the $(d-1)$-dimensional unit sphere, as opposed to the entire space $\mathbb{R}^d$. For the threshold, the training can be conducted in the bounded interval $[-X_B, X_B]$, as opposed to the entire real line $\mathbb{R}$. The reduction of the parameter space can eliminate many potential local minima and therefore enhance the performance of numerical optimization during the training.
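The manipulations behind this equivalence can also be verified numerically. The sketch below (our own illustration, with the ReLU and an assumed domain $D = \{\|\mathbf{x}\|_2 \leq X_B\}$) takes a single term whose threshold exceeds $X_B$ and reproduces it exactly on $D$ using only zero-threshold terms plus the four-term constant representation with $\alpha = X_B$:

```python
import numpy as np

def relu(y):
    return np.maximum(0.0, y)

rng = np.random.default_rng(2)
d, XB = 2, 1.0
w = rng.standard_normal(d)
w /= np.linalg.norm(w)                  # unit weight vector
cj, bj = 0.7, 2.5                       # threshold bj > XB

# sample points in the domain D = { ||x||_2 <= XB }
xs = rng.standard_normal((200, d))
xs *= rng.uniform(0.0, XB, (200, 1)) / np.linalg.norm(xs, axis=1, keepdims=True)

# original term: active everywhere on D, hence equal to cj * (w.x + bj)
orig = cj * relu(xs @ w + bj)

# equivalent form: zero-threshold terms for the linear part ...
lin = cj * (relu(xs @ w) - relu(-(xs @ w)))     # equals cj * (w . x)

# ... plus the constant cj*bj via the four-term representation, alpha = XB
n, alpha = w, XB                        # n can be any unit vector
const = (cj * bj / (2 * alpha)) * (
    relu(xs @ n + alpha) - relu(xs @ n - alpha)
    + relu(-(xs @ n) + alpha) - relu(-(xs @ n) - alpha)
)

assert np.allclose(orig, lin + const)   # exact agreement on D
```

All thresholds appearing on the right-hand side are $0$ or $\pm\alpha$ with $\alpha \leq X_B$, so the rewritten term stays inside the constrained family.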
3 Universal approximation property
By universal approximation property, we aim to establish that the constrained formulations (2.7) and (2.10) can approximate any continuous function. To this end, we define
$$\mathcal{N}(\sigma; \Omega, B) = \left\{ N : \mathbb{R}^d \to \mathbb{R} \;\middle|\; N(\mathbf{x}) = \sum_{j=1}^{N} c_j\,\sigma\left(\mathbf{w}_j^T \mathbf{x} + b_j\right),\; \mathbf{w}_j \in \Omega,\; b_j \in B \right\}, \tag{3.1}$$
where $\Omega \subseteq \mathbb{R}^d$ is the weight set and $B \subseteq \mathbb{R}$ the threshold set. By this definition, the standard NN expression and our two constrained expressions correspond to the following spaces:
$$\mathcal{N}\left(\sigma; \mathbb{R}^d, \mathbb{R}\right), \qquad \mathcal{N}\left(\sigma; S^{d-1}, \mathbb{R}\right), \qquad \mathcal{N}\left(\sigma; S^{d-1}, [-X_B, X_B]\right), \tag{3.2}$$
where $S^{d-1} = \{\mathbf{w} \in \mathbb{R}^d : \|\mathbf{w}\|_2 = 1\}$ is the $(d-1)$-dimensional unit sphere.
The universal approximation property of the standard unconstrained NN expression (2.1) has been well studied, cf. [3, 9, 13, 1, 10], and [16] for a survey. Here we cite the following result for $\mathcal{N}(\sigma; \mathbb{R}^d, \mathbb{R})$.
Theorem 3.1 ([10], Theorem 1)
Let $\sigma$ be a function in $L^\infty_{loc}(\mathbb{R})$, of which the set of discontinuities has Lebesgue measure zero. Then the set $\mathcal{N}(\sigma; \mathbb{R}^d, \mathbb{R})$ is dense in $C(\mathbb{R}^d)$, in the topology of uniform convergence on compact sets, if and only if $\sigma$ is not an algebraic polynomial almost everywhere.
3.1 Universal approximation property of (2.7)
We now examine the universal approximation property of the first constrained formulation (2.7).
Theorem 3.2
Let $\sigma$ be the binary activation function (2.3) or the ReLU function (2.2). Then we have
$$\mathcal{N}\left(\sigma; S^{d-1}, \mathbb{R}\right) = \mathcal{N}\left(\sigma; \mathbb{R}^d, \mathbb{R}\right), \tag{3.3}$$
and the set $\mathcal{N}(\sigma; S^{d-1}, \mathbb{R})$ is dense in $C(\mathbb{R}^d)$.
Proof. Obviously, we have $\mathcal{N}(\sigma; S^{d-1}, \mathbb{R}) \subseteq \mathcal{N}(\sigma; \mathbb{R}^d, \mathbb{R})$. By Proposition 2.1, any element of $\mathcal{N}(\sigma; \mathbb{R}^d, \mathbb{R})$ can be reformulated as an element of $\mathcal{N}(\sigma; S^{d-1}, \mathbb{R})$. Therefore, we have $\mathcal{N}(\sigma; \mathbb{R}^d, \mathbb{R}) \subseteq \mathcal{N}(\sigma; S^{d-1}, \mathbb{R})$. This establishes the first statement (3.3). Given the equivalence (3.3), the denseness result immediately follows from Theorem 3.1, as both the ReLU and the binary activation functions are not polynomials and are continuous everywhere except on a set of zero Lebesgue measure.
3.2 Universal approximation property of (2.10)
We now examine the second constrained NN expression (2.10).
Theorem 3.3
Let $\sigma$ be the binary (2.3) or the ReLU (2.2) activation function. Let $D \subset \mathbb{R}^d$ be bounded, with $X_B$ defined in (2.9). Then
$$\mathcal{N}\left(\sigma; S^{d-1}, [-X_B, X_B]\right)\Big|_D = \mathcal{N}\left(\sigma; \mathbb{R}^d, \mathbb{R}\right)\Big|_D. \tag{3.4}$$
Furthermore, $\mathcal{N}(\sigma; S^{d-1}, [-X_B, X_B])$ is dense in $C(\bar{B}_{X_B})$, where $\bar{B}_{X_B} = \{\mathbf{x} \in \mathbb{R}^d : \|\mathbf{x}\|_2 \leq X_B\}$ is the $d$-dimensional ball with radius $X_B$.
Proof. Obviously we have $\mathcal{N}(\sigma; S^{d-1}, [-X_B, X_B]) \subseteq \mathcal{N}(\sigma; \mathbb{R}^d, \mathbb{R})$. On the other hand, Proposition 2.3 establishes that for any element of $\mathcal{N}(\sigma; \mathbb{R}^d, \mathbb{R})$, there exists an equivalent formulation in $\mathcal{N}(\sigma; S^{d-1}, [-X_B, X_B])$ valid for any $\mathbf{x} \in D$. This implies $\mathcal{N}(\sigma; \mathbb{R}^d, \mathbb{R})|_D \subseteq \mathcal{N}(\sigma; S^{d-1}, [-X_B, X_B])|_D$. We then have (3.4).

For the denseness of $\mathcal{N}(\sigma; S^{d-1}, [-X_B, X_B])$ in $C(\bar{B}_{X_B})$, let us consider any function $f \in C(\bar{B}_{X_B})$. By the Tietze extension theorem (cf. [6]), there exists an extension $f_{ext} \in C(\mathbb{R}^d)$ with $f_{ext}(\mathbf{x}) = f(\mathbf{x})$ for any $\mathbf{x} \in \bar{B}_{X_B}$. Then, the denseness of the standard unconstrained NN expression (Theorem 3.1) implies that, for any given $\epsilon > 0$, there exists $N \in \mathcal{N}(\sigma; \mathbb{R}^d, \mathbb{R})$ such that
$$\max_{\mathbf{x} \in \bar{B}_{X_B}} \left| f_{ext}(\mathbf{x}) - N(\mathbf{x}) \right| < \epsilon.$$
By Proposition 2.3, there exists an equivalent constrained NN expression $\widetilde{N} \in \mathcal{N}(\sigma; S^{d-1}, [-X_B, X_B])$ such that $\widetilde{N}(\mathbf{x}) = N(\mathbf{x})$ for any $\mathbf{x} \in \bar{B}_{X_B}$. We then immediately have: for any $f \in C(\bar{B}_{X_B})$ and any given $\epsilon > 0$, there exists $\widetilde{N} \in \mathcal{N}(\sigma; S^{d-1}, [-X_B, X_B])$ such that
$$\max_{\mathbf{x} \in \bar{B}_{X_B}} \left| f(\mathbf{x}) - \widetilde{N}(\mathbf{x}) \right| < \epsilon.$$
The proof is now complete.
4 Constraints for NNs with Multiple Hidden Layers
We now generalize the previous results to feedforward NNs with multiple hidden layers. Let us again consider the approximation of a multivariate function $f: D \to \mathbb{R}$, where $D \subset \mathbb{R}^d$ is bounded with $X_B$ defined in (2.9).
Consider a feedforward NN with $M$ layers, $l = 0, 1, \ldots, M-1$, where $l = 0$ is the input layer and $l = M-1$ the output layer. Let $n_l$, $l = 0, \ldots, M-1$, be the number of neurons in each layer. Obviously, we have $n_0 = d$ and $n_{M-1} = 1$ in our case. Let $\mathbf{x}^{(l)} \in \mathbb{R}^{n_l}$ be the output of the neurons in the $l$-th layer. Then, by following the notation from [4], we can write
$$\mathbf{x}^{(0)} = \mathbf{x}, \qquad \mathbf{x}^{(l)} = \sigma\left(\mathbf{W}^{(l)} \mathbf{x}^{(l-1)} + \mathbf{b}^{(l)}\right), \quad l = 1, \ldots, M-1, \tag{4.1}$$
where $\mathbf{W}^{(l)} \in \mathbb{R}^{n_l \times n_{l-1}}$ is the weight matrix and $\mathbf{b}^{(l)} \in \mathbb{R}^{n_l}$ is the threshold vector. In component form, the output of the $i$-th neuron in the $l$-th layer is
$$x_i^{(l)} = \sigma\left(\mathbf{w}_i^{(l)} \cdot \mathbf{x}^{(l-1)} + b_i^{(l)}\right), \tag{4.2}$$
where $\mathbf{w}_i^{(l)}$ is the $i$-th row of $\mathbf{W}^{(l)}$, i.e., the $i$-th column of $(\mathbf{W}^{(l)})^T$.
4.1 Weight constraints
The derivation of the constraints on the weight vectors can be generalized directly from the single-layer case, and we have the following weight constraints,
$$\left\|\mathbf{w}_i^{(l)}\right\|_2 = 1, \qquad i = 1, \ldots, n_l, \quad l = 1, \ldots, M-1. \tag{4.3}$$
4.2 Threshold constraints
The constraints on the thresholds depend on the bounds of the output $\mathbf{x}^{(l-1)}$ from the previous layer.

For the ReLU activation function (2.2), we derive from (4.1) and (4.3) that, for $l = 1, \ldots, M-1$,
$$0 \leq x_i^{(l)} \leq \left\|\mathbf{x}^{(l-1)}\right\|_2 + \left|b_i^{(l)}\right|. \tag{4.4}$$
If the domain is bounded, i.e., $\mathbf{x} \in D$ with $X_B^{(0)} = \max_{\mathbf{x} \in D} \|\mathbf{x}\|_2$, then the constraints on the thresholds can be recursively derived. Starting from $\|\mathbf{x}^{(0)}\|_2 \leq X_B^{(0)}$ and $|b_i^{(1)}| \leq X_B^{(0)}$, we then have
$$\left|b_i^{(l)}\right| \leq X_B^{(l-1)}, \qquad X_B^{(l)} = 2\sqrt{n_l}\,X_B^{(l-1)}, \qquad l = 1, \ldots, M-1. \tag{4.5}$$
For the binary activation function (2.3), we derive from (4.1) that, for $l = 1, \ldots, M-1$,
$$0 \leq x_i^{(l)} \leq 1, \qquad \text{hence} \quad \left\|\mathbf{x}^{(l)}\right\|_2 \leq \sqrt{n_l}. \tag{4.6}$$
Then, the bounds for the thresholds are
$$\left|b_i^{(1)}\right| \leq X_B^{(0)}, \qquad \left|b_i^{(l)}\right| \leq \sqrt{n_{l-1}}, \qquad l = 2, \ldots, M-1. \tag{4.7}$$
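For the ReLU case, the recursion can be implemented in a few lines. The following sketch (our own helper function, using the elementary bound $\|\mathbf{x}\|_2 \leq \sqrt{n}\,\max_i |x_i|$ together with the per-neuron bound above) returns conservative per-layer threshold bounds for a unit-weight ReLU network:

```python
import math

def relu_threshold_bounds(hidden_widths, XB0):
    """Per-layer threshold bounds for a unit-weight ReLU network.

    hidden_widths: [n_1, ..., n_{M-1}], widths of the layers after the input.
    XB0:           bound on ||x||_2 over the input domain D.
    Returns [X_B^(0), ..., X_B^(M-2)]: the thresholds of layer l
    satisfy |b_i^(l)| <= X_B^(l-1).
    """
    bounds, XB = [], XB0
    for n in hidden_widths:
        bounds.append(XB)
        # each neuron output obeys 0 <= x_i <= ||x^(l-1)|| + |b_i| <= 2*XB,
        # hence ||x^(l)||_2 <= sqrt(n) * 2 * XB
        XB = 2.0 * math.sqrt(n) * XB
    return bounds

# e.g. a network with hidden widths (4, 1) on a domain with ||x|| <= 2
print(relu_threshold_bounds([4, 1], 2.0))   # -> [2.0, 8.0]
```

The bounds grow with depth and width, which mirrors the recursive structure of the constraints: deeper layers are allowed progressively larger thresholds.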
5 Numerical Examples
In this section we present numerical examples to demonstrate the properties of the constrained NN training. We focus exclusively on the ReLU activation function (2.2) due to its overwhelming popularity in practice.
Given a set of training samples $\{(\mathbf{x}_i, f(\mathbf{x}_i))\}_{i=1}^{n}$, the weights and thresholds are trained by minimizing the following mean squared error (MSE),
$$E = \frac{1}{n} \sum_{i=1}^{n} \left( N(\mathbf{x}_i) - f(\mathbf{x}_i) \right)^2.$$
We conduct the training using the standard unconstrained NN formulation (2.1) and our new constrained formulation (2.10), and compare the training results. In all tests, both formulations use exactly the same randomized initial conditions for the weights and thresholds. Since our new constrained formulation is independent of the numerical optimization algorithm, we use two of the most accessible optimization routines from MATLAB®: the function fminunc for unconstrained optimization and the function fmincon for constrained optimization. It is natural to exploit the specific form of the constraints in (2.10) to design more effective constrained optimization algorithms. This is, however, beyond the scope of the current paper.
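For readers who wish to experiment outside MATLAB, the reduced parameter space can also be handled by a simple projection step: after each update, renormalize every weight vector onto the unit sphere and clip every threshold to $[-X_B, X_B]$. The following Python/NumPy sketch (our own illustration using plain projected gradient descent, not the fminunc/fmincon setup used in our tests) trains the constrained single-hidden-layer ReLU network (2.10) on the MSE above:

```python
import numpy as np

def relu(y):
    return np.maximum(0.0, y)

def train_reduced(X, y, m, XB, steps=3000, lr=0.05, seed=0):
    """Projected gradient descent for N(x) = sum_j c_j relu(w_j.x + b_j)
    with ||w_j||_2 = 1 and |b_j| <= XB enforced by projection."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.standard_normal((m, d))
    W /= np.linalg.norm(W, axis=1, keepdims=True)   # start on the sphere
    b = rng.uniform(-XB, XB, m)                     # start inside [-XB, XB]
    c = 0.1 * rng.standard_normal(m)
    for _ in range(steps):
        Z = X @ W.T + b                       # (n, m) pre-activations
        A = relu(Z)
        r = A @ c - y                         # residuals
        gc = (2.0 / n) * (A.T @ r)            # gradients of the MSE
        gZ = (2.0 / n) * (r[:, None] * (Z > 0.0)) * c
        W -= lr * (gZ.T @ X)
        b -= lr * gZ.sum(axis=0)
        c -= lr * gc
        # projection back onto the reduced parameter space
        W /= np.maximum(np.linalg.norm(W, axis=1, keepdims=True), 1e-12)
        b = np.clip(b, -XB, XB)
    return W, b, c
```

For instance, fitting $f(x) = |x|$ on $[-1, 1]$ (so $X_B = 1$) with a handful of neurons keeps every iterate feasible: the returned weight vectors have unit norm and the thresholds stay in $[-X_B, X_B]$.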
After training the networks, we evaluate the network approximation errors using another set of samples — a validation sample set, which consists of randomly generated points that are independent of the training sample set. Even though our discussion applies to functions in arbitrary dimension $d$, we present only numerical results in $d = 1$ and $d = 2$, because they can be easily visualized.
5.1 Single Hidden Layer
We first examine the approximation results using NNs with a single hidden layer, with and without constraints.
5.1.1 One-dimensional tests
We first consider a one-dimensional smooth function $f(x)$, $x \in D \subset \mathbb{R}$,
(5.1)
In one dimension the unit sphere reduces to $S^0 = \{-1, +1\}$, and the constrained formulation (2.10) becomes
$$N(x) = \sum_{j=1}^{N} \hat{c}_j\,\sigma\left(\hat{w}_j x + \hat{b}_j\right), \qquad \hat{w}_j \in \{-1, +1\}, \quad |\hat{b}_j| \leq X_B. \tag{5.2}$$
Due to the simple form of the weights and of the domain $D$, the proof of Proposition 2.3 also gives us the following tighter bounds on the thresholds for this specific problem,
$$-\max_{x \in D}\left(\hat{w}_j x\right) \leq \hat{b}_j \leq -\min_{x \in D}\left(\hat{w}_j x\right). \tag{5.3}$$
We approximate the function in (5.1) with NNs with one hidden layer of $N$ neurons, trained on a data set of $n$ samples. Numerical tests were performed for different choices of random initialization, as it is known that the NN training performance depends on the initialization. In Figures 1, 2 and 3, we show the numerical results for three sets of different random initializations. In each set, the unconstrained NN (2.1), the constrained NN (5.2) and the specialized constrained NN with the tighter bounds (5.3) use the same random sequence for initialization. We observe that the standard NN formulation without constraints (2.1) produces training results that depend critically on the initialization; this is widely acknowledged in the literature. On the other hand, our new constrained NN formulations are more robust and produce better results that are less sensitive to the initialization. The tighter constraint (5.3) performs better than the general constraint (5.2), which is not surprising. However, the tighter constraint is a special case for this particular problem and is not available in the general case.
5.1.2 Two-dimensional tests
We next consider two-dimensional functions. In particular, we show results for Franke's function
$$\begin{aligned} f(x, y) ={}& \frac{3}{4} \exp\left(-\frac{(9x-2)^2 + (9y-2)^2}{4}\right) + \frac{3}{4} \exp\left(-\frac{(9x+1)^2}{49} - \frac{9y+1}{10}\right) \\ &+ \frac{1}{2} \exp\left(-\frac{(9x-7)^2 + (9y-3)^2}{4}\right) - \frac{1}{5} \exp\left(-(9x-4)^2 - (9y-7)^2\right), \end{aligned} \tag{5.4}$$
with $(x, y) \in [0, 1]^2$. Again, we compare the training results for both the standard NN without constraints (2.1) and our new constrained NN formulation (2.7), using the same random sequence for initialization. The NNs have one hidden layer with $N$ neurons; the training and validation sets are randomly generated as described above. The numerical results are shown in Figure 4. On the left column, the contour lines of the training results are shown, as well as those of the exact function; all plots use the same contour levels, equally spaced between 0 and 1. We observe that the constrained NN formulation produces a visually better result than the standard unconstrained formulation. On the right column, we plot the function values along a one-dimensional cross section of the domain. Again, the improvement due to the constrained NN is visible.
5.2 Multiple Hidden Layers
We now consider feedforward NNs with multiple hidden layers. We present results for both the standard NN without constraints (4.1) and the constrained ReLU NNs with the constraints (4.3) and (4.5). We use the standard notation $(n_1, \ldots, n_{M-2})$ to denote the network structure, where $n_l$ is the number of neurons in the $l$-th layer; the hidden layers are $l = 1, \ldots, M-2$. Again, we focus on 1D and 2D functions for ease of visualization, i.e., $d = 1$ and $d = 2$.
5.2.1 One-dimensional tests
We first consider the one-dimensional function (5.1). In Figure 5, we show the numerical results by NNs with two hidden layers, using three different sequences of random initializations, with and without constraints. We observe that the standard NN formulation without constraints (4.1) produces widely different results. This is because of the potentially large number of local minima in the cost function and is not entirely surprising. On the other hand, using exactly the same initialization, the NN formulation with the constraints (4.3) and (4.5) produces notably better results and, more importantly, is much less sensitive to the initialization. In Figure 6, we show the results for NNs with four hidden layers. We observe similar performance: the constrained NN produces better results and is less sensitive to the initialization.
5.2.2 Two-dimensional tests
We now consider the two-dimensional Franke's function (5.4). In Figure 7, the results by NNs with two hidden layers are shown; in Figure 8, the results by NNs with four hidden layers. Both the contour lines (at exactly the same levels, equally spaced between 0 and 1) and the function values along a one-dimensional cross section are plotted, for both the unconstrained NN (4.1) and the constrained NN with the constraints (4.3) and (4.5). Once again, the two cases use the same random sequence for initialization. The results again show the notable improvement of the training results by the constrained formulation.
6 Summary
In this paper we presented a set of constraints on multilayer feedforward NNs with ReLU and binary activation functions. The weights in each neuron are constrained on the unit sphere, as opposed to the entire space; this effectively reduces the number of weight parameters by one per neuron. The threshold in each neuron is constrained to a bounded interval, as opposed to the entire real line. We proved that the constrained NN formulation is mathematically equivalent to the standard unconstrained NN formulation. The constraints on the parameters reduce the search space for the network training and can notably improve the training results. Our numerical examples for both single and multiple hidden layers verify this finding.
References
 [1] Andrew R Barron. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory, 39(3):930–945, 1993.
 [2] Monica Bianchini and Franco Scarselli. On the complexity of neural network classifiers: A comparison between shallow and deep architectures. IEEE Transactions on Neural Networks and Learning Systems, 25(8):1553–1565, 2014.
 [3] George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4):303–314, 1989.
 [4] K.L. Du and M.N.S. Swamy. Neural networks and statistical learning. Springer-Verlag, 2014.
 [5] Ronen Eldan and Ohad Shamir. The power of depth for feedforward neural networks. In Conference on Learning Theory, pages 907–940, 2016.
 [6] G.B. Folland. Real Analysis: Modern Techniques and Their Applications. John Wiley & Sons Inc., 1999.
 [7] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 315–323, 2011.
 [8] I. Goodfellow, Y. Bengio, and A. Courville. Deep learning. MIT Press, 2016.
 [9] Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4(2):251–257, 1991.
 [10] Moshe Leshno, Vladimir Ya Lin, Allan Pinkus, and Shimon Schocken. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Networks, 6(6):861–867, 1993.
 [11] Hao Li, Zheng Xu, Gavin Taylor, and Tom Goldstein. Visualizing the loss landscape of neural nets. arXiv preprint arXiv:1712.09913, 2017.
 [12] Warren S McCulloch and Walter Pitts. A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics, 5(4):115–133, 1943.
 [13] Hrushikesh N Mhaskar and Charles A Micchelli. Approximation by superposition of sigmoidal and radial basis functions. Advances in Applied Mathematics, 13(3):350–373, 1992.
 [14] Guido F Montufar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. On the number of linear regions of deep neural networks. In Advances in neural information processing systems, pages 2924–2932, 2014.
 [15] Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807–814, 2010.
 [16] A. Pinkus. Approximation theory of the MLP model in neural networks. Acta Numer., pages 143–195, 1999.
 [17] Tomaso Poggio, Hrushikesh Mhaskar, Lorenzo Rosasco, Brando Miranda, and Qianli Liao. Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing, 14(5):503–519, 2017.
 [18] J. Schmidhuber. Deep learning in neural networks: an overview. Neural Networks, 61:85–117, 2015.