Physical Symmetries Embedded in Neural Networks
Abstract
Neural networks are a central technique in machine learning. Recent years have seen a wave of interest in applying neural networks to physical systems for which the governing dynamics are known and expressed through differential equations. Two fundamental challenges facing the development of neural networks in physics applications are their lack of interpretability and their physics-agnostic design. The focus of the present work is to embed physical constraints into the structure of the neural network to address the second fundamental challenge. By constraining tunable parameters (such as weights and biases) and adding special layers to the network, the desired constraints are guaranteed to be satisfied without the need for explicit regularization terms. This is demonstrated on supervised and unsupervised networks for two basic symmetries: even/odd symmetry of a function and energy conservation. In the supervised case, the network with embedded constraints is shown to perform well on regression problems while simultaneously obeying the desired constraints, whereas a traditional network fits the data but violates the underlying constraints. Finally, a new unsupervised neural network is proposed that guarantees energy conservation through an embedded symplectic structure. The symplectic neural network is used to solve a system of energy-conserving differential equations and outperforms an unsupervised, non-symplectic neural network.
Keywords:
Differential equations, energy conservation, constraints.

1 Introduction
Machine learning (ML) has become very popular due to its significant contributions to a variety of scientific fields including the natural and social sciences, medicine, and finance. Although ML consists of many algorithms, neural networks (NN) have emerged as the de facto standard for some tasks, including image recognition and natural language processing. Because of their success in these fields, there is great interest in using NNs to make predictions in science and engineering. Supervised NNs have been used to improve turbulence models [1, 2, 3], predict material properties [4, 5], solve quantum many-body problems [6], forecast the future behavior of dynamical systems [7, 8, 9], solve differential equations (DEs) [10, 11], and even discover DEs [12, 13, 14]. New network architectures for solving DEs are an active area of research. The neural ordinary differential equations network [15] is a particularly interesting interpretation of NNs. Recasting a NN as a continuous ordinary differential equation (ODE) was shown to result in large performance gains in computational time and memory footprint. Unsupervised NNs have also been used to solve both ordinary and partial DEs [16, 17, 18]. Because of the universal function approximation property of NNs [19], they may be a natural approach to solving complicated physical problems governed by differential equations.
Although ML algorithms, and specifically NNs, have experienced a flurry of development in recent years, there are still several outstanding issues when applying them to problems in the physical sciences. Two primary concerns drive skepticism within the scientific community about the applicability of NNs to physical problems. First, NNs are difficult to interpret. Whereas many classical algorithms, such as the singular value decomposition and Fourier series, are based on hierarchical, orthogonal bases, NNs lack any such obvious interpretability. They are therefore often treated as a black-box technology with no recourse for unpacking how or what they learned. Interpretable models are extremely useful in physics because they allow scientists and engineers to gain insight into physical processes at different levels of importance. Second, NNs, in their native form, are physics-agnostic. That is, a NN may fit a given data set, but it may violate the underlying physical laws that generated that data set in the first place. Any predictions with such a NN cannot be trusted in parameter regimes for which there is no data, even though the governing physical laws may be the same. In the present work, we seek to address the second concern by embedding physical constraints directly into the network architecture. Physical constraints, such as conservation laws, have typically been included as regularization terms during the training process [14]. This leads to convenient implementations, but ultimately does not exactly preserve the desired constraints. Recent approaches for imposing constraints in NNs include lattice translation [6] and embedded Galilean invariance [1, 2]. Furthermore, it has been shown that NN structures with embedded constraints inspired by symplectic numerical integrators can remedy issues like the exploding and vanishing gradients in recurrent NNs [20].
The approach taken in the current work directly embeds physical constraints within a NN. The main idea is the introduction of hub neurons (or a hub layer) that enforce the desired constraint. The weights and biases of the hub neurons are derived in such a way that predictions from the NN are guaranteed to preserve constraints such as even/odd symmetry and energy conservation. In section 2 we introduce the basic idea of the hub neurons and apply it to regression problems involving functions with even/odd symmetry where the underlying function has been obscured by noise that may destroy the true symmetry. A hub layer is derived that guarantees that the even/odd symmetry of the underlying system is respected. Similarly, in section 3.1 the goal is to fit a regression line to noisy data that was generated from an energy-conserving process. Again, the concept of the hub layer is used to enforce energy conservation in the final prediction. Section 3.2 focuses on unsupervised NNs for solving ODEs that conserve energy. We introduce a symplectic structure into the network architecture itself, which guarantees that the unsupervised network conserves energy when solving the ODEs. The paper concludes in section 4 with a summary of the key ideas introduced in this work and a discussion of future plans.
2 Even/Odd Symmetry
A prototypical example of symmetry is the even/odd symmetry of a particular function. An even function has the property that f(-x) = f(x), while an odd function has the property that f(-x) = -f(x). In this section, we consider the situation in which data is generated from a function that is known to be even (or odd). The challenge is that noise in the data may break the known symmetry. A standard feedforward multilayer perceptron (MLP) is unaware of the original symmetry and will therefore perform a regression on data that is not symmetric, thereby resulting in predictions that violate the desired symmetry. To counter this issue, we design a hidden layer which we call the hub layer. When the hub layer is used as the last hidden layer of an MLP, the regression is guaranteed to be an even or odd function, as desired.
We consider an MLP that consists of several hidden layers and a single linear output node. The network takes one input x and returns one output N(x). The activation of the n-th neuron of the last hidden layer is denoted by f_n(x), which is a composition of all the previous layers. The output has the form:
N(x) = \sum_{n} w_n f_n(x) + b,   (1)
where b is the bias of the output node, w_n are the weights of the last hidden layer, and n denotes the index of the neuron in the last hidden layer. We guarantee that N(x) is an odd or even function by demanding that Eq. (1) satisfies the relationship N(-x) = s N(x), where s = +1 and s = -1 account for even and odd symmetry, respectively. Demanding that N(-x) = -N(x) in Eq. (1) imposes a constraint on the bias of the output node of the form,

b = -\frac{1}{2} \sum_{n} w_n \left[ f_n(x) + f_n(-x) \right].

To simplify notation, we denote f_n^+(x) = f_n(x) + f_n(-x) and f_n^-(x) = f_n(x) - f_n(-x). Enforcing this relationship on the bias term of Eq. (1) leads to
N_{\rm odd}(x) = \frac{1}{2} \sum_{n} w_n f_n^-(x).   (2)
Note that any predictions from a NN with the embedded constraint on the bias expressed through Eq. (2) will automatically guarantee odd symmetry in the prediction. We further note that Eq. (2) is the odd part of Eq. (1). More generally, the even-odd decomposition of Eq. (1) is,
N(x) = \left[ \frac{1}{2} \sum_{n} w_n f_n^+(x) + b \right] + \frac{1}{2} \sum_{n} w_n f_n^-(x),   (3)

where the bracketed term is even and the last term is odd.
Hence, Eq. (3) provides a concise ansatz for embedding even/odd symmetry and also motivates the idea of hub neurons: special nodes that apply an additional symmetrizing operation after the activation. The advantage of networks with hub neurons is that when physics requires the observation to be odd or even but the symmetry is broken due to noise, we are able to retain the correct symmetry and make predictions that are physically acceptable. In addition, the hubs reduce the space of admissible NN solutions and thus the training becomes more efficient.
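The symmetrized output of Eq. (3) can be sketched in a few lines (a minimal sketch assuming a single-input network with one hidden layer; the layer sizes, weights, and function names are illustrative, not the paper's code, and the output bias is omitted since for odd networks it is constrained away and for even networks it only adds an even constant):

```python
import numpy as np

rng = np.random.default_rng(0)
W, b_h = rng.normal(size=5), rng.normal(size=5)  # last hidden layer parameters
w = rng.normal(size=5)                           # output weights

def hidden(x):
    """Activations f_n(x) of the last hidden layer (tanh stands in for sigmoid)."""
    return np.tanh(W * x + b_h)

def hub_output(x, s):
    """Symmetrized network output of Eq. (3): s = +1 yields an even
    function, s = -1 an odd one, regardless of the trained weights."""
    return 0.5 * np.dot(w, hidden(x) + s * hidden(-x))
```

By construction, `hub_output(x, +1)` equals `hub_output(-x, +1)` and `hub_output(x, -1)` equals `-hub_output(-x, -1)` for any weights, which is exactly the guarantee the hub layer provides.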
To test our method, we generate artificial data from the cosine function (an even function) with additive Gaussian noise ε ~ N(0, σ²). The data are therefore generated from,
y(x) = \cos(x) + \epsilon, \qquad \epsilon \sim \mathcal{N}(0, \sigma^2).   (4)
A NN should learn an underlying even function, whereas we expect that a standard MLP will learn a function that does not necessarily preserve the even symmetry. Figure 1 depicts the new NN architecture used on this problem with a hub layer, which is given by the first term on the right-hand side of Eq. (3), to preserve the even symmetry. In subsection 2.1 we compare predictions and performance of a standard MLP to the hub network using data generated from an even function for a range of noise levels σ. In subsection 2.2 we test the robustness of the hub network by performing the same analysis with data generated from a non-symmetric underlying function at a constant level of noise. In each case, we generate a set of data points, out of which a subset is uniformly selected as training points. The remaining points are used as a test set. Each network consists of two hidden layers with five neurons per layer and uses sigmoid activation functions. In order to quantify the robustness of the hub layer in predicting symmetric regression functions, we introduce the even metric:
s_{\rm even} = \frac{1}{M} \sum_{i=1}^{M} \left| N(x_i) - N(-x_i) \right|,   (5)
which measures how much a regression function deviates from even symmetry; s_even = 0 for a perfectly even function. The index i runs over the set of M test points.
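Evaluating the metric is straightforward (a minimal sketch; the mean-absolute form follows the reconstruction of Eq. (5) above and the paper's exact normalization may differ):

```python
import numpy as np

def even_metric(net, x_test):
    """Mean absolute deviation from even symmetry over the test points.
    Returns 0 for a perfectly even function; `net` must accept arrays."""
    return np.mean(np.abs(net(x_test) - net(-x_test)))

x_test = np.linspace(-2.0, 2.0, 101)
```

For example, `even_metric(np.cos, x_test)` vanishes because cosine is even, while `even_metric(np.sin, x_test)` is strictly positive.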
2.1 Testing with Data Generated using an Even Function
We compare the regression lines predicted from a standard MLP to predictions from the new NN with a hub layer. Data were generated from Eq. (4) at a fixed noise level σ. In the left panel of Fig. 2 the red dashed line is the regression obtained by a standard MLP and the solid blue line corresponds to an MLP network where the last hidden layer is replaced by the even hub layer. The right panel of Fig. 2 shows the loss function on the training data. Throughout this work, the loss function is taken to be the mean-squared-error (MSE). We observe that the loss function converges faster when the hub layer is used. The MSE of the simple MLP is lower at the end of the training process, which is expected since the simple MLP fits the data, whereas our network compromises the MSE fit in order to preserve the underlying symmetry.
Next, we generate data for a range of noise levels σ and present the symmetry metric in Fig. 3 (upper). For each σ, standard MLP networks were trained and predictions were made on the test points. Indeed, the even hub layer allows only even functions. The standard MLP network exhibits a range of s_even values, which increases with the noise level. Moreover, over the same range of σ we calculate the loss function on the test data, where we observe that the hub layer performs better, especially for low levels of noise. The above results are demonstrated by the left panel of Fig. 3.
2.2 Testing with Data Generated Using an Asymmetric Function
In this case, we generate noisy data at a fixed noise level and introduce an offset δ in the even function of Eq. (4) in order to test the robustness of the model. Hence the data are generated from an asymmetric function. In Fig. 3 (right), we again observe that the even hub layer permits only even regression functions (s_even = 0), whereas for the standard MLP s_even > 0. As δ increases, the loss function on the test set for the hub MLP becomes larger than that for the standard MLP because the hub MLP tries to fit an even function while the data are no longer even-symmetric; thus the loss is higher, as expected.
In this section, we demonstrated through the toy model of odd/even symmetries that by constraining some of the tunable parameters of the NN we can identically preserve the required symmetries. This is a general approach, and will be used on different problems in Section 3.
3 Conservation Constraints
Conservation laws are at the heart of many physical systems. There is a well-known theorem in mathematical physics, Noether's theorem, that establishes the connection between any conservation law and an underlying symmetry of the equations governing the behavior of the physical system [21]. For example, space translation symmetry leads to momentum conservation and rotational symmetry yields angular momentum conservation. One of the most celebrated conservation laws is the conservation of energy, which derives from the time-translational symmetry of the system. In this section, we show how the idea of the hub neurons introduced in the previous section can be leveraged to embed energy conservation in a neural network. We begin this section with an overview of using neural networks to solve differential equations and a brief review of energy conservation. We then provide two scenarios in which energy conservation is important. In subsection 3.1, a NN is used to predict a function from data that was originally generated from a physical system that conserves energy. A hub layer is shown to guarantee, up to the numerical accuracy of the solver, the energy conservation in the subsequent regression. In subsection 3.2, unsupervised NNs are used to solve DEs. Once again, a special symplectic architecture is designed based on the Hamiltonian formulation to embed the energy conservation law into a NN. The resulting approximate solutions of the desired system of DEs obtained by the symplectic NN then conserve energy.
Unsupervised feedforward NNs offer an alternative approach to the numerical solution of DEs [16, 17, 18]. One key difference between traditional numerical methods and NNs for solving DEs is that NNs seek to learn the actual function that solves the equation, rather than creating an accurate approximation to the function [17]. A potential advantage of using NNs to solve DEs is that the solutions obtained by NNs are differentiable and in a closed, analytic form [16]. Moreover, NNs for DEs may be easy to parallelize, thereby leading to significant speedup in time to solution [16].
Given a DE \mathcal{D} u(x) = 0, where \mathcal{D} is a (possibly nonlinear) differential operator, the goal is to find u(x) such that the DE and the initial and boundary conditions are satisfied. The unsupervised NN approach proposed in the original work [16] introduces a “trial” solution as a reparameterization of the solution in order to directly satisfy the initial and boundary conditions. For example, for a first-order ODE with initial condition u(0) = u_0, the trial solution takes the form u_t(x) = u_0 + x N(x), where N(x) is the output from a NN [16]. In this way, at x = 0, the trial solution identically satisfies the initial condition. The training procedure proceeds as follows [16, 17]:

Generate random points, {x_m}, in the input domain.

Perform a forward pass through the network.

Calculate the loss function:

\mathcal{L} = \frac{1}{M} \sum_{m=1}^{M} \left[ \mathcal{D} u_t(x_m) \right]^2.   (6)
Use backpropagation with stochastic gradient descent to train the network parameters.
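The steps above can be sketched for the toy ODE du/dx + u = 0 with u(0) = 1, whose exact solution is u(x) = e^{-x} (a sketch: the finite-difference derivative stands in for automatic differentiation, and hand-picked functions stand in for a trained and an untrained network; the ODE and all names are illustrative, not from the paper):

```python
import numpy as np

u0 = 1.0  # initial condition u(0) = 1

def trial(N, x):
    """Trial solution u_t(x) = u0 + x*N(x); satisfies u_t(0) = u0 by construction."""
    return u0 + x * N(x)

def loss(N, xs, h=1e-5):
    """Mean-squared residual of du/dx + u = 0 at the sampled points (Eq. (6)).
    Central differences replace automatic differentiation in this sketch."""
    du = (trial(N, xs + h) - trial(N, xs - h)) / (2 * h)
    return np.mean((du + trial(N, xs)) ** 2)

# Step 1: random points in the input domain (away from x = 0 to keep
# the closed form of N_exact below well defined).
rng = np.random.default_rng(1)
xs = rng.uniform(0.1, 2.0, size=64)

# A perfectly trained N recovers the exact solution, so the loss vanishes;
# an untrained stand-in (N = 0) leaves a large residual.
N_exact = lambda x: (np.exp(-x) - 1.0) / x
N_zero = lambda x: np.zeros_like(x)
```

Minimizing this loss over the parameters of N(x) by gradient descent is exactly step 4 of the procedure.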
Next, we focus on the origin of DEs and discuss their relationship to conservation of energy. In mechanics, we often wish to determine how the position of an object evolves in time. To accomplish this, it is sufficient to know the potential function V(x), which characterizes an object's propensity for motion at position x. The position x is a vector of coordinates (e.g. x = (x_1, x_2, x_3) in three dimensions). We write x_j to denote coordinate j. In general, a system consists of many objects, each with its own position vector. However, to simplify the presentation, we focus on a system consisting of a single object. The Lagrangian L is defined as the difference between the kinetic and potential energies,
L = K - V = \frac{m}{2} \sum_j \dot{x}_j^2 - V(x),   (7)
where the kinetic energy K quantifies the energy of motion of the object (with mass m) and a dot represents the time derivative of a quantity. Starting from the Euler-Lagrange (EL) equation,
\frac{d}{dt} \frac{\partial L}{\partial \dot{x}_j} - \frac{\partial L}{\partial x_j} = 0,   (8)
the equations of motion are expressed as the differential equations
m \ddot{x}_j = -\frac{\partial V}{\partial x_j}.   (9)
Solving the differential equations in Eq. (9) provides the position of the object as a function of time. It is worth noting that for a conservative system the force is the negative derivative of the potential, F_j = -∂V/∂x_j, and Eq. (9) reduces to Newton's second law. Because the focus in this section is on systems that conserve energy, we note that if the Lagrangian does not have an explicit time dependence, then the energy is conserved.
An alternative formulation of mechanics uses the Hamiltonian,
H = K(p) + V(x) = \sum_j \frac{p_j^2}{2m} + V(x),   (10)
where now the kinetic energy solely depends on the generalized momentum p. The Hamiltonian is related to the Lagrangian via,
H = \sum_j p_j \dot{x}_j - L.   (11)
Throughout this work, the potential is solely a function of the position x, which implies
p_j = \frac{\partial L}{\partial \dot{x}_j} = m \dot{x}_j.   (12)
Equation (12) will play an essential role in the development of the symplectic NN. The equations of motion are determined by Hamilton's equations,
\dot{x}_j = \frac{\partial H}{\partial p_j}, \qquad \dot{p}_j = -\frac{\partial H}{\partial x_j}.   (13)
The Hamiltonian itself represents the total energy of the system and in the situation in which the Hamiltonian does not have an explicit dependence on time, the energy is exactly conserved.
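The last statement follows directly from Eqs. (13): differentiating the Hamiltonian along a trajectory gives

```latex
\frac{dH}{dt}
  = \sum_j \left( \frac{\partial H}{\partial x_j}\,\dot{x}_j
                + \frac{\partial H}{\partial p_j}\,\dot{p}_j \right)
  + \frac{\partial H}{\partial t}
  = \sum_j \left( \frac{\partial H}{\partial x_j}\,\frac{\partial H}{\partial p_j}
                - \frac{\partial H}{\partial p_j}\,\frac{\partial H}{\partial x_j} \right)
  + \frac{\partial H}{\partial t}
  = \frac{\partial H}{\partial t},
```

so the energy E = H is constant precisely when H carries no explicit time dependence.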
In the next section, we assume noisy data that describe the trajectory of a conservative system. We employ a standard MLP to fit the data and approximate the trajectories, which should conserve energy. The standard MLP architecture fails to accomplish this since it does not know the underlying conservation law. We counter this problem by adding a NN ODE solver that corrects the regression lines so that they preserve the underlying physics.
3.1 Energy Conservation in Regression
In this section, we propose a NN that guarantees energy conservation (up to the solver accuracy) by perturbing the prediction from an MLP in the correct way. We work with noisy data that represent the position of an object in time. The motion of the object is restricted to one dimension, and therefore the coordinate is simply a scalar x. Moreover, it is assumed that the dynamics describing that motion are energy-conserving. We employ a feedforward MLP with a single linear output node to approximate a regression function x(t) that fits the data by minimizing the MSE. Since physical laws are not embedded in the MLP structure, x(t) does not represent a function that satisfies energy conservation of the underlying system. We cure this issue by introducing a term x_c(t) that corrects the regression so that it preserves the energy. Therefore, the prediction is given by,
z(t) = x(t) + x_c(t),   (14)
where x(t) is the prediction from the NN. From Eq. (12), p = m ż. We denote the total energy of the system by E. Using Eq. (10) yields,
E = \frac{m}{2}\, \dot{z}^2 + V(z) = \frac{m}{2} \left( \dot{x} + \dot{x}_c \right)^2 + V(x + x_c),   (15)
where ż = ẋ + ẋ_c from Eq. (14). The proposed network architecture is depicted in Fig. 4. It consists of three parts:

Use a standard MLP to find x(t) given input times t.

Compute ẋ using automatic differentiation and pass x and ẋ to another neural network. This second NN solves the first-order ODE for the correction x_c implied by Eq. (15),

\dot{x}_c = -\dot{x} \pm \sqrt{\frac{2}{m} \left[ E - V(x + x_c) \right]}.   (16)

The second network is another form of the hub layer introduced in the previous section and is used to embed physical constraints in the overall network architecture. Equation (16) is a two-valued ODE. We turn it into a single-valued ODE by assuming x_c to be small and Taylor expanding Eq. (16), which implies that the corrected velocity ż = ẋ + ẋ_c stays close to ẋ; thus we replace the ± with the sign of ẋ.

The outputs from the two NNs are used to form the final prediction z = x + x_c.
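The mechanics of the correction can be illustrated numerically for the harmonic oscillator with m = ω = 1 and V(z) = z²/2 (a sketch only: the "regression" x(t) = cos(1.1t) is an invented stand-in for an imperfect MLP fit whose implied energy is not constant, and forward Euler replaces the NN ODE solver; all numbers are illustrative, not the paper's):

```python
import numpy as np

E = 0.5                                  # prescribed total energy
t = np.linspace(0.0, 2 * np.pi, 4001)
dt = t[1] - t[0]
x = np.cos(1.1 * t)                      # stand-in "regression" from an MLP
xdot = -1.1 * np.sin(1.1 * t)            # its time derivative

# Integrate Eq. (16) with the +/- replaced by sign(xdot); the clip guards
# against tiny excursions past the turning points of the potential.
xc = np.zeros_like(t)
for i in range(len(t) - 1):
    z = x[i] + xc[i]
    speed = np.sqrt(max(0.0, 2.0 * E - z * z))    # |dz/dt| fixed by E
    xc_dot = -xdot[i] + np.sign(xdot[i]) * speed  # Eq. (16), sign rule
    xc[i + 1] = xc[i] + dt * xc_dot

z = x + xc
zdot = np.gradient(z, dt)                # corrected velocity (finite diff.)
energy_corrected = 0.5 * zdot ** 2 + 0.5 * z ** 2
energy_raw = 0.5 * xdot ** 2 + 0.5 * x ** 2
```

The corrected trajectory z = x + x_c carries an essentially constant energy close to the prescribed E, while the energy implied by the raw regression oscillates visibly, mirroring the behavior reported for the hub network versus the standard MLP.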
3.1.1 Harmonic Oscillator
As a concrete example, we consider the harmonic oscillator which has the potential,
V(x) = \frac{1}{2} m \omega^2 x^2,   (17)
where ω is the natural frequency of the oscillator. Without loss of generality we take m = ω = 1. Solving the EL equation (8) with the potential of Eq. (17) and with the initial conditions x(0) = x_0 and ẋ(0) = 0 yields the solution x(t) = x_0 cos(t) for the position of the harmonic oscillator, with energy E = x_0²/2. We generate data points from the analytical solution over a range of times. As usual, we introduce Gaussian noise ε with standard deviation σ and mean zero. Hence, the function that generates the noisy data is y(t) = x_0 cos(t) + ε. We use a subset of the data points to train an MLP with a single hidden layer of sigmoid neurons. Next, we pass x(t) and its derivative to the hub layer, which is a NN ODE solver with one hidden layer of sigmoid neurons. The hub layer outputs the correction x_c.
Figure 5 presents results from the regression with the standard MLP network and the network with a hub layer for respecting energy conservation. The left panel shows the actual regression lines predicted from the noisy data. Both the MLP and the hub network perform well compared to the analytical solution. We also show the correction x_c, which is nonzero. The lower panel depicts the total energy and shows that the hub network preserves the total energy very well, while the regression from the standard MLP does not. The right panel shows the trajectory in the (x, ẋ) plane (known as a phase-space plot). We show the exact solution neglecting the noise term, which is a closed trajectory. The hub network exhibits closed trajectories (limited again by the NN solver accuracy); however, the standard MLP fails to capture this essential feature of the system.
3.2 Symplectic Neural Network
Differential equations are used to model the physical world, from fluid flows to solid mechanics and materials science. These are often nonlinear equations that are not amenable to analytical techniques. Among the many challenges for numerical methods is the desire that a numerical integrator used for solving the equations somehow respect the intrinsic principles used to derive the equations (e.g. conservation of energy). In the present work, we design a symplectic NN architecture for solving DEs, which guarantees that the predicted solutions preserve the energy. Specifically, we embed constraints in the NN structure that are derived from Hamilton's equations (Eqs. (13)). The constraints reduce the solution space and subsequently, in addition to yielding the correct energy, the NN is more robust and reaches the solution much faster than a non-symplectic architecture.
We consider a d-dimensional system with coordinates (x_1, ..., x_d). Position and momentum coordinate j is denoted by x_j and p_j, respectively. The goal is to find x_j and p_j as functions of time, as governed by Hamilton's equations (Eqs. (13)), by using NNs. Specifically, for each dimension j we have a system of two first-order ODEs:
\dot{x}_j = \frac{p_j}{m}, \qquad \dot{p}_j = -\frac{\partial H}{\partial x_j},   (18)
where we used Eq. (12). The idea is to design a symplectic NN by imposing the first of Eqs. (18) as a constraint and using the second equation to build the loss function. For the proposed symplectic NN we suggest trial solutions which satisfy the initial conditions:
x_{t,j}(t) = x_{0,j} + \frac{p_{0,j}}{m}\, t + t^2 N_j(t),   (19)
p_{t,j}(t) = p_{0,j} + t\, N_{p,j}(t),   (20)
where x_{0,j} and p_{0,j} are, respectively, the initial values of position and momentum at t = 0. Imposing the first of Eqs. (18) we find that
N_{p,j}(t) = m \left[ 2 N_j(t) + t \dot{N}_j(t) \right],   (21)
which is a hub neuron. Using Eq. (21), the parametric solution (20) becomes
p_{t,j}(t) = p_{0,j} + m\, t \left[ 2 N_j(t) + t \dot{N}_j(t) \right] = m\, \dot{x}_{t,j}(t),   (22)
which satisfies by structure the initial condition p_{t,j}(0) = p_{0,j} and the first Hamilton's equation ẋ_{t,j} = p_{t,j}/m. Here N_j is the output of a feedforward MLP. The time derivative Ṅ_j in Eq. (22) is obtained by automatic differentiation. The second Hamilton's equation defines the loss function as
\mathcal{L} = \frac{1}{M} \sum_{m=1}^{M} \sum_j \left[ \dot{p}_{t,j}(t_m) + \frac{\partial H}{\partial x_j}\Big|_{t_m} \right]^2,   (23)
which is used for the training of N_j. The proposed symplectic NN is graphically outlined in Fig. 6.
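The structural guarantees of the trial solutions can be checked numerically in one dimension (a toy sketch consistent with the reconstructed Eqs. (19)-(22): a fixed analytic function stands in for the MLP output, and all numbers are illustrative):

```python
import numpy as np

m, x0, p0 = 1.0, 0.3, -0.2   # mass and initial conditions (illustrative)
N = np.sin                   # stand-in for the MLP output N(t)
dN = np.cos                  # its exact time derivative (autodiff stand-in)

def x_trial(t):
    """Position trial solution, Eq. (19)."""
    return x0 + (p0 / m) * t + t ** 2 * N(t)

def p_trial(t):
    """Momentum trial solution after the hub substitution, Eq. (22)."""
    return p0 + m * t * (2 * N(t) + t * dN(t))

t = np.linspace(0.0, 3.0, 301)
h = 1e-6
xdot = (x_trial(t + h) - x_trial(t - h)) / (2 * h)  # d x_t / dt
```

Regardless of the stand-in N, `x_trial(0)` returns x0, `p_trial(0)` returns p0, and `m * xdot` matches `p_trial(t)` everywhere: the initial conditions and the first Hamilton equation hold by structure, before any training.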
We demonstrate this idea on the Hénon-Heiles (HH) dynamical system [22], which was introduced in 1964 to describe the nonlinear motion of a star around a galactic center, with the motion restricted to a plane. The HH system has two degrees of freedom, with coordinate variables (x, y) and momentum variables (p_x, p_y). The nonlinear potential of this system is
V(x, y) = \frac{1}{2} \left( x^2 + y^2 \right) + \lambda \left( x^2 y - \frac{y^3}{3} \right),   (24)
where λ is a parameter. Hamilton's equations result in the nonlinear equations of motion,
\dot{x} = \frac{p_x}{m}, \qquad \dot{y} = \frac{p_y}{m},   (25)
\dot{p}_x = -x - 2\lambda x y, \qquad \dot{p}_y = -y - \lambda \left( x^2 - y^2 \right).   (26)
Eqs. (25) are embedded in our proposed NN. Eqs. (26) define the loss function according to Eq. (23) as
\mathcal{L} = \frac{1}{M} \sum_{m=1}^{M} \left\{ \left[ \dot{p}_{t,x} + x_t + 2\lambda\, x_t y_t \right]^2 + \left[ \dot{p}_{t,y} + y_t + \lambda \left( x_t^2 - y_t^2 \right) \right]^2 \right\}\Big|_{t_m},   (27)
which is used for training an MLP NN with two outputs, N_x and N_y. The quantities (x_t, y_t) and (p_{t,x}, p_{t,y}) are determined by the trial solutions (19) and (22), respectively.
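The force terms appearing in the loss of Eq. (27) can be sanity-checked against the potential of Eq. (24) (a small sketch with m = 1; the value of λ and the test point are arbitrary choices):

```python
import numpy as np

lam = 1.0  # coupling parameter of the Henon-Heiles potential (illustrative)

def V(x, y):
    """Henon-Heiles potential, Eq. (24)."""
    return 0.5 * (x ** 2 + y ** 2) + lam * (x ** 2 * y - y ** 3 / 3.0)

def forces(x, y):
    """Right-hand sides of Eqs. (26): (-dV/dx, -dV/dy)."""
    fx = -x - 2.0 * lam * x * y
    fy = -y - lam * (x ** 2 - y ** 2)
    return fx, fy

# Compare against central-difference derivatives of the potential.
x, y, h = 0.3, -0.2, 1e-6
fx, fy = forces(x, y)
fx_num = -(V(x + h, y) - V(x - h, y)) / (2 * h)
fy_num = -(V(x, y + h) - V(x, y - h)) / (2 * h)
```

The analytic and numerical forces agree to roundoff, confirming that the residuals in Eq. (27) indeed penalize violations of the second Hamilton equation for this potential.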
We apply the symplectic NN architecture to find the solution of the HH dynamical system in a chaotic regime. The initial positions and momenta for the simulation fix the conserved energy E. We reiterate that, due to the symplectic structure of the NN, the energy should remain constant for all time. The NN is trained on a random grid of time points, which is randomly selected at the beginning of each epoch [17]. We use two hidden layers. For the nonlinear activation function of each hidden node we choose the trigonometric function sin(·). We compare the solutions obtained from the symplectic NN to those obtained by a standard MLP using the same network hyperparameters and using sigmoid activation functions for the hidden nodes. For the standard MLP, instead of solving for the dynamics from Hamilton's equations, we solve the system of second-order equations for x and y following the EL Eq. (9):
\ddot{x} = -x - 2\lambda x y, \qquad \ddot{y} = -y - \lambda \left( x^2 - y^2 \right),   (28)
and with the trial solutions [16]
x_t(t) = x_0 + \dot{x}_0\, t + t^2 N_1(t),   (29)
y_t(t) = y_0 + \dot{y}_0\, t + t^2 N_2(t),   (30)
where N_1 and N_2 are the two outputs from a standard MLP NN. This corresponds to the standard, physics-agnostic approach. Finally, we also compare the NN solutions to solutions obtained using an adaptive, non-symplectic time integrator (odeint in the SciPy package of Python).
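The distinction between symplectic and non-symplectic integration can be illustrated on the HH equations of motion themselves (a sketch with m = λ = 1: leapfrog/velocity Verlet is symplectic, forward Euler is not; the initial conditions and step size are illustrative, not those used in the paper):

```python
import numpy as np

lam = 1.0

def accel(q):
    """Accelerations of Eq. (28) for q = (x, y)."""
    x, y = q
    return np.array([-x - 2 * lam * x * y, -y - lam * (x ** 2 - y ** 2)])

def energy(q, p):
    """Total energy: kinetic term plus the potential of Eq. (24)."""
    x, y = q
    return 0.5 * p @ p + 0.5 * (x ** 2 + y ** 2) + lam * (x ** 2 * y - y ** 3 / 3)

dt, n = 0.01, 5000
q0, p0 = np.array([0.1, -0.1]), np.array([0.3, 0.1])   # illustrative
E0 = energy(q0, p0)

# Leapfrog (symplectic): the energy error stays bounded.
q, p, leap_drift = q0.copy(), p0.copy(), 0.0
for _ in range(n):
    p_half = p + 0.5 * dt * accel(q)
    q = q + dt * p_half
    p = p_half + 0.5 * dt * accel(q)
    leap_drift = max(leap_drift, abs(energy(q, p) - E0))

# Forward Euler (non-symplectic): the energy drifts steadily.
q, p = q0.copy(), p0.copy()
for _ in range(n):
    q, p = q + dt * p, p + dt * accel(q)
euler_drift = abs(energy(q, p) - E0)
```

Over the same integration interval the leapfrog energy error remains orders of magnitude below the Euler drift, which is the classical analogue of the behavior the symplectic NN is designed to reproduce.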
The loss function for the symplectic and standard MLP NNs during the training process is presented in Fig. 7. The symplectic NN loss is lower, more uniform, and converges faster than the loss function for the standard MLP.
Figure 8 (left) shows the solution for the position coordinates as functions of time. The solid black lines represent the numerical solution and the dashed colored lines represent the solution from the symplectic NN. The standard MLP solution, which uses the same hyperparameters as the symplectic network, is shown in the colored dotted lines. The symplectic network outperforms the standard MLP, which completely fails to predict the correct solution behavior. We note that the standard MLP solution can be improved by adjusting the hyperparameters (e.g. number of neurons) or by training the network for orders of magnitude more epochs than were used to train the symplectic network. The dashed blue line in the lower panel of Fig. 8 shows that the energy remains constant in time, thereby demonstrating the energy conservation property of the symplectic network. The dotted line shows that the standard MLP does not conserve energy. Finally, the right panel of Fig. 8 shows the predicted chaotic orbit in the (x, y) plane.
4 Conclusion
In recent years, machine learning has made inroads in traditional science and engineering fields. Since these methods are relatively new to physics, there are many outstanding issues. In this paper, we proposed a neural network paradigm, called hub neurons (or a hub layer), that provides the capability to embed physical constraints into neural networks. This approach goes beyond the standard approach of introducing regularization terms to enforce such constraints. Moreover, it provides a mechanism to exactly satisfy known physics such as conservation laws. Such a property is crucial if neural networks are to be used for predictive science, especially if they are employed in regimes for which data is unavailable. In addition to predictive power, networks with embedded physics are faster to train and may be more robust to sparse and noisy data sets. We tested the hub layer concept on three different physical constraints. First, we designed a hub layer neural network to account for even/odd symmetry. The new network outperforms a standard neural network while simultaneously exactly preserving the desired symmetry. Next, we focused on conservation of energy using noisy data that was generated from an energy-conserving process. By introducing a hub layer into the neural network that corrects the regression based on the underlying Hamiltonian dynamics, the predictions from the new neural network show a great improvement over a classical neural network. Finally, we used an unsupervised neural network to solve a system of nonlinear differential equations that conserves energy. Again, we used the hub neuron concept to design a symplectic neural network that guarantees that the neural network predictions are consistent with energy conservation. Given these promising results, we plan to embed other important symmetries into neural networks in the future.
Acknowledgments:
MM acknowledges useful discussions with Prof. G. P. Tsironis. MM and EK acknowledge partial support from EFRI 2DARE NSF Grant No. 1542807 and from ARO MURI Award No. W911NF140247. We used computational resources on the Odyssey cluster of the FAS Research Computing Group at Harvard University.
References
 [1] Ling, J., Jones, R., Templeton, J.: Machine learning strategies for systems with invariance properties. Journal of Computational Physics, 318, 22–35 (2016).
 [2] Ling, J., Kurzawski, A., Templeton, J.: Reynolds averaged turbulence modeling using deep neural networks with embedded invariance. Journal of Fluid Mechanics, 807, 155–166 (2016).
 [3] Fang, R., Sondak, D., Protopapas, P., Succi, S.: Deep learning for turbulent channel flow. ArXiv:1812.02241 (2018).
 [4] Zhang, P., Shen, H., Zhai, H.: Machine Learning Topological Invariants with Neural Networks. Physical Review Letters, 120, 066401 (2018).
 [5] Rhone, T. D., Chen, W., Desai, S., Yacoby, A., Kaxiras, E.: Datadriven studies of magnetic twodimensional materials. ArXiv:1806.07989 (2018).
 [6] Carleo, G., Troyer, M.: Solving the quantum manybody problem with artificial neural networks. Science, 355, 602–606 (2017).
 [7] Lu, Z., Pathak, J., Hunt, B., Girvan, M., Brockett, R., Ott, E.: Reservoir observers: Modelfree inference of unmeasured variables in chaotic systems. Chaos, 27, 041102 (2017).
 [8] Vlachas, P. R., Byeon, W., Wan, Z. Y., Sapsis, T. P., Koumoutsakos, P.: Datadriven forecasting of highdimensional chaotic systems with long shortterm memory networks. Proceedings Royal Society A, 474, (2018).
 [9] Neofotistos, G., Mattheakis, M., Barmparis, G. D., Hizanidis, J., Tsironis, G. P., Kaxiras, E.: Machine learning with observers predicts complex spatiotemporal behavior. Front. Phys.  Quantum Computing. 7, 24 (2019).
 [10] Raissi, M., Perdikaris, P., Karniadakis, G. E.: Inferring solutions of differential equations using noisy multifidelity data. J. of Comp. Physics, 335, 736–746 (2017).
 [11] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M. P.: Data-driven discretization: machine learning for coarse graining of PDEs. ArXiv:1808.04930 (2018).
 [12] Raissi, M., Perdikaris, P. Karniadakis, G.: Machine learning of linear differential equations using Gaussian processes. J. of Comp. Physics 348, 683–693 (2017).
 [13] Kutz, N., Rudy, S., Alla, A., Brunton, S.: DataDriven discovery of governing physical laws and their parametric dependencies in engineering, physics and biology. IEEE 7th International Workshop on Computational Advances in MultiSensor Adaptive Processing, 1–5 (2017).
 [14] Raissi, M., Perdikaris, P., Karniadakis, G. E.: Physicsinformed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. of Comp. Physics 378, 686–707 (2019).
 [15] Chen, R. T. Q., Rubanova, Y., Bettencourt, J., Duvenaud, D.: Neural Ordinary Differential Equations. 32nd Conference on Neural Information Processing Systems (NIPS 2018), 6572–6583 (2018).
 [16] Lagaris, I. E., Likas, A., Fotiadis, D. I.: Artificial neural networks for solving ordinary and partial differential equations. IEEE Transactions on Neural Networks, 9(5), 987–1000 (1998).
 [17] Sirignano J., Spiliopoulos, K.: DGM: A deep learning algorithm for solving partial differential equations. Journal of Computational Physics, 375, 1339–1364 (2018).
 [18] Magill, M., Qureshi, F., Haan, H.: Neural Networks Trained to Solve Differential Equations Learn General Representations. 32nd Conference on Neural Information Processing Systems (NIPS 2018), 4075–4085 (2018).
 [19] Hornik, K.: Approximation capabilities of multilayer feedforward networks. Neural networks, 4(2), 251–257 (1991).
 [20] Haber E., Ruthotto, L.: Stable architectures for deep neural networks. Inverse Problems. 34, 014004 (2017).
 [21] Noether, E.: Invariante Variationsprobleme. Nachr. d. König. Gesellsch. d. Wiss. zu Göttingen, Math.-Phys. Klasse 2, 235–257 (1918).
 [22] Hénon, M., Heiles, C.: The applicability of the third integral of motion: Some numerical experiments. The Astronomical Journal. 69, 73–79 (1964).