Neural Networks Fail to Learn Periodic Functions and How to Fix It


Previous literature offers limited clues on how to learn a periodic function using modern neural networks. We start with a study of the extrapolation properties of neural networks; we prove and demonstrate experimentally that the standard activation functions, such as ReLU, tanh, and sigmoid, along with their variants, all fail to learn to extrapolate simple periodic functions. We hypothesize that this is due to their lack of a “periodic” inductive bias. As a fix for this problem, we propose a new activation, namely, Snake(x) = x + sin^2(x), which achieves the desired periodic inductive bias to learn a periodic function while maintaining the favorable optimization properties of the ReLU-based activations. Experimentally, we apply the proposed method to temperature and financial data prediction.

1 Introduction

In general, periodic functions are one of the most basic function classes of importance to human society and natural science: the world’s daily and yearly cycles are dictated by periodic motions in the Solar System [newton1687]; the human body has an intrinsic biological clock that is periodic in nature [ishida1999biological; refinetti1992circadian]; the number of passengers on the metro follows daily and weekly modulations; and the stock market experiences (semi-)periodic fluctuations [osborn02; zhang17]. The global economy also follows complicated and superimposed cycles of different periods, including but not limited to the Kitchin and Juglar cycles [de2012common; kwasnicki2008kitchin]. In many scientific scenarios, we want to model a periodic system in order to predict the future evolution, based on current and past observations. While deep neural networks are excellent tools for interpolating between existing data, their fiducial versions are not suited to extrapolation beyond the training range, especially not for periodic functions.

If we know beforehand that the problem is periodic, we can easily solve it, e.g., in Fourier space, or after an appropriate transformation. However, in many situations we do not know a priori whether the problem is periodic or contains a periodic component. In such cases it is important to have a model that is flexible enough to model both periodic and non-periodic functions, in order to overcome the bias of choosing a particular modelling approach. In fact, despite the importance of being able to model periodic functions, no satisfactory neural network-based method seems to solve this problem. Some previous methods propose to use periodic activation functions [silvescu1999fourier; zhumekenov2019fourier; parascandolo2016taming]. This line of work proposes using standard periodic functions such as sin and cos, or their linear combinations, as activation functions. However, such activation functions are very hard to optimize due to large degeneracy in local minima [parascandolo2016taming], and the experimental results suggest that using sin as the activation function does not work well except for some very simple models, and that it cannot compete against ReLU-based activation functions [ramachandran2017swish; clevert2015fast; nair2010rectified; xu2015empirical] on standard tasks.

The contribution of this work is threefold: (1) we study the extrapolation properties of neural networks beyond a bounded region; (2) we show that standard neural networks with standard activation functions are insufficient to learn periodic functions outside the bounded region where data points are present; (3) we propose a simple solution to this problem and show that it works well on toy examples and real tasks. The question remains open, however, as to whether better activation functions or methods can be designed.

2 Inductive Bias and Extrapolation Properties of Activation Functions

A key property that differentiates periodic functions from regular functions is their extrapolation behavior. A periodic function with period T satisfies f(x + T) = f(x) and repeats itself ad infinitum. Learning a periodic function, therefore, not only requires fitting a pattern on a bounded region; the learned pattern also needs to extrapolate beyond that region. In this section, we examine the inductive bias that the common activation functions offer. While it is hard to investigate the effect of using different activation functions in a general setting, one can still hypothesize that the properties of the activation function carry over to the properties of the neural network. For example, a tanh network is smooth and extrapolates to a constant function, while a ReLU network is piecewise-linear and extrapolates linearly.

2.1 Extrapolation Experiments

We set up a small experiment in the following way: we use a fully connected neural network with one hidden layer. We generate training data by sampling from four different analytical functions in the interval [-5, 5], with a gap in the range [-1, 1]. This allows us to study both the interpolation and the extrapolation behaviour of various activation functions. The results can be seen in Fig. 1. This experimental observation can, in fact, be proved theoretically in a more general form.
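As a minimal, self-contained numpy sketch of this setup (the hidden width, learning rate, and step count below are illustrative choices of ours, not the paper’s), one can train a tiny one-hidden-layer tanh network on sin(x) with the central gap removed, and then probe it far outside the training range:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: sin(x) on [-5, 5] with the interval (-1, 1) removed.
x = np.linspace(-5, 5, 400)
x = x[np.abs(x) > 1.0]
y = np.sin(x)

# One hidden layer of 32 tanh units, trained by full-batch gradient descent.
H = 32
W1 = rng.normal(0, 1, (H,)); b1 = rng.normal(0, 1, (H,))
W2 = rng.normal(0, 0.1, (H,)); b2 = 0.0
lr = 0.01

def forward(x):
    h = np.tanh(np.outer(x, W1) + b1)   # (n, H)
    return h @ W2 + b2, h

pred0, _ = forward(x)
loss0 = np.mean((pred0 - y) ** 2)

for _ in range(3000):
    pred, h = forward(x)
    err = 2 * (pred - y) / len(x)       # d(MSE)/d(pred)
    gW2 = h.T @ err
    gb2 = err.sum()
    dh = np.outer(err, W2) * (1 - h ** 2)
    gW1 = (dh * x[:, None]).sum(axis=0)
    gb1 = dh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

pred, _ = forward(x)
loss = np.mean((pred - y) ** 2)

# A tanh network saturates far outside the data: predictions at two very
# distant inputs are essentially identical, so any periodic pattern is lost.
far1, _ = forward(np.array([1e8]))
far2, _ = forward(np.array([2e8]))
```

The network fits the training interval, but far outside it every hidden tanh unit is saturated, so the output is a constant, as the theory in the next subsection predicts.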

Figure 1: Exploration of how networks with different activation functions (panels (a) and (b)) extrapolate four basic function types (one per column). The red curves represent the median model prediction and the shaded regions show the credibility interval from 21 independent runs. Note that the horizontal range is re-scaled so that the training data lies in a fixed central interval.

We see that the extrapolation behaviour is dictated by the analytical form of the activation function: the ReLU network diverges, while the tanh network levels off towards a constant value.

2.2 Theoretical Analysis

In this section, we study and prove the incapability of standard activation functions to extrapolate.

Definition 1.

(Feedforward Neural Network.) Let f: R^{d_in} → R^{d_out} be a function of the form f(x) = W_L σ(W_{L-1} ⋯ σ(W_1 x + b_1) ⋯ + b_{L-1}) + b_L, where σ is the activation function applied element-wise to its input vector, and the W_l and b_l are weight matrices and bias vectors. f is called a feedforward neural network with activation function σ; d_in is called the input dimension, and d_out is the output dimension.

Now, one can show that for arbitrary feedforward neural networks the following two extrapolation theorems hold.

Theorem 1.

Consider a feedforward network f_ReLU with the ReLU activation, of arbitrary but fixed depth and widths. Then

    lim_{z→∞} f_ReLU(z u) / z = W_u u,

where z is a real scalar, u is any unit vector of dimension d_in, and W_u is a constant matrix that depends only on u.

The above theorem says that any feedforward neural network with the ReLU activation converges to a linear transformation in the asymptotic limit, and this extrapolated linear transformation depends only on u, the direction of extrapolation. See Figure 1 for an illustration. Next, we prove a similar theorem for the tanh activation. Naturally, a tanh network extrapolates like a constant function.

Theorem 2.

Consider a feedforward network f_tanh with the tanh activation, of arbitrary but fixed depth and widths. Then

    lim_{z→∞} f_tanh(z u) = f_u,

where z is a real scalar, u is any unit vector of dimension d_in, and f_u is a constant vector that depends only on u.

We note that these two theorems can be proved by induction, and we give the proofs in the appendix. The two theorems show that any neural network with the ReLU or tanh activation function cannot extrapolate a periodic function. Moreover, while the specific statements apply to tanh and ReLU, they have quite general applicability. Since the proofs rely only on the asymptotic behavior of the activation function as its argument tends to infinity, one can prove the same theorems for any continuous activation function that asymptotically converges to tanh or ReLU; for example, this includes Swish and Leaky-ReLU (and almost all other ReLU-based variants), which converge to ReLU, and one can follow the same procedure to prove a similar theorem for each of these activation functions.
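Both asymptotic statements are easy to check numerically on a randomly initialized network; the following numpy sketch (the architecture and dimensions are arbitrary choices of ours) evaluates a fixed random network far along a ray z·u:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights, biases, act):
    """Forward pass of a feedforward net on a batch of column vectors."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = act(W @ h + b)
    return weights[-1] @ h + biases[-1]

# A random 3-hidden-layer network with input dimension 4, output dimension 2.
dims = [4, 16, 16, 16, 2]
Ws = [rng.normal(size=(dims[i + 1], dims[i])) for i in range(4)]
bs = [rng.normal(size=(dims[i + 1], 1)) for i in range(4)]

u = rng.normal(size=(4, 1)); u /= np.linalg.norm(u)  # direction of extrapolation

relu = lambda z: np.maximum(z, 0)

# Theorem 1: along the ray z*u, a ReLU net becomes affine in z for large z
# (the activation pattern freezes), so the second difference of f(z*u) vanishes.
f = lambda z: mlp(z * u, Ws, bs, relu)
second_diff = f(3e6) - 2 * f(2e6) + f(1e6)

# Theorem 2: a tanh net converges to a constant along the ray
# (every hidden unit saturates to +-1).
g = lambda z: mlp(z * u, Ws, bs, np.tanh)
const_diff = g(2e8) - g(1e8)
```

Both residuals are numerically zero, illustrating the linear (ReLU) and constant (tanh) extrapolation regimes of the two theorems.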

3 Proposed Method: Snake

Figure 2: Snake at different values of a.
Figure 3: Optimization of different loss functions on MNIST. The proposed activation is shown as a blue dashed curve. We see that Snake is easier to optimize than the other periodic baselines. Also interesting is that Snake is easier to train than the standard ReLU on this task.
As we have seen in the previous section, the choice of activation function plays a crucial role in the interpolation and extrapolation properties of neural networks, and these properties in turn affect the generalization of a network equipped with such an activation function.

We propose to use x + sin^2(x) as an activation function, which we call the “Snake” function. One can augment it with a factor a to control the frequency of the periodic part. We thus propose the Snake activation with frequency a:

    Snake_a(x) := x + (1/a) sin^2(a x).
We plot Snake for different values of a in Figure 2. We see that larger a gives higher frequency.
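Two defining properties of Snake can be checked directly; in the sketch below (numpy, with our own helper name snake), we verify that Snake is monotone — its derivative is 1 + sin(2ax) ≥ 0 — and that its non-linear part repeats with period π/a:

```python
import numpy as np

def snake(x, a=1.0):
    """Snake activation: x + sin^2(a*x)/a (equals x + sin^2(x) when a = 1)."""
    return x + np.sin(a * x) ** 2 / a

a = 2.0
x = np.linspace(-10, 10, 10001)

# Monotonicity: snake'(x) = 1 + sin(2*a*x) >= 0, so finite differences of
# snake on an increasing grid are non-negative -- unlike sin itself, the
# activation introduces no non-monotone wiggles.
diffs = np.diff(snake(x, a))

# Periodic inductive bias: sin^2(a*(x + pi/a)) = sin^2(a*x), so
# snake(x + pi/a) = snake(x) + pi/a exactly.
shift = snake(x + np.pi / a, a) - snake(x, a)
```

The shift identity is what makes the periodic part repeat ad infinitum, while monotonicity avoids the degenerate local minima discussed next.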

There are also two conceivable alternative choices for a periodicity-biased activation function. One is the sin function, proposed in [parascandolo2016taming], along with cos and their linear combinations as proposed for Fourier neural networks [zhumekenov2019fourier]. However, the problem with these functions lies not in their generalization ability but in their optimization. In fact, sin is not a monotonic function, and using sin as the activation function creates infinitely many local minima in the solutions (since shifting the preactivation value by a multiple of 2π gives the same function), making sin hard to optimize. See Figure 3 for a comparison on training a 4-layer fully connected neural network on MNIST. We identify the root cause of the problem with sin as its non-monotonicity. Since the gradient of the model parameters is only a local quantity, it cannot detect the global periodicity of the function. Therefore, the difficulty in biasing an activation function towards periodicity is that it needs to achieve monotonicity and periodicity at the same time.

We also propose two other alternatives, x + sin(x) and x + cos(x). They are easier to optimize than sin, similar to the commonly used ReLU. In the neural architecture search in [ramachandran2017swish], these two functions were found to be on the list of the best-performing activation functions discovered using reinforcement learning; while the authors commented that these two are interesting, no further discussion was given regarding their significance. While x + sin(x) and x + sin^2(x) have similar expressive power, we choose x + sin^2(x) as the default form of Snake for the following reason. It is important to note that the preactivation values are centered around 0, and standard initialization schemes such as Kaiming init normalize the preactivation values to unit variance [sutskever2013importance; he2015delving], so the preactivation roughly obeys a standard normal distribution. This makes 0 a special point for the activation function, since most preactivation values will lie close to 0. However, x + sin(x) seems to be a choice inferior to x + sin^2(x) around 0. Expanding around x = 0:

    x + sin(x) = 2x - x^3/6 + O(x^5),        x + sin^2(x) = x + x^2 - x^4/3 + O(x^6).


Of particular interest to us is the leading non-linear term of the activation, since this is the term that drives the neural network away from its linear counterpart, and the learning of a non-linear network is explained by this term to leading order. One finds that the first non-linear term in the expansion of x + sin(x) is already third order, while that of x + sin^2(x) is a non-vanishing second-order term, which can probe non-linear behavior much closer to x = 0. We hypothesize that this non-vanishing second-order term gives Snake a better approximation property than x + sin(x). In Table 1, we compare the properties of each activation function.
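The difference between the two expansions can be checked numerically; in this sketch, finite-difference quotients recover the leading non-linear coefficients (1 for the second-order term of x + sin^2(x), and -1/6 for the third-order term of x + sin(x)):

```python
import numpy as np

snake = lambda x: x + np.sin(x) ** 2   # proposed activation, a = 1
xsin  = lambda x: x + np.sin(x)        # alternative variant

h = 1e-3

# x + sin^2(x) = x + x^2 - x^4/3 + O(x^6): subtracting the linear part and
# dividing by h^2 recovers the second-order coefficient, 1.
c2_snake = (snake(h) - h) / h ** 2

# x + sin(x) = 2x - x^3/6 + O(x^5): the second-order term vanishes, and the
# first non-linear coefficient is third order, -1/6.
c3_xsin = (xsin(h) - 2 * h) / h ** 3
```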

Extension: We also suggest the extension of making the parameter a learnable for each preactivation value. The benefit of this is that a no longer needs to be chosen by hand. While we do not study this extension in detail, one experiment is carried out with a learnable a; see the atmospheric temperature prediction experiment in Section 6.2.

Table 1: Comparison of different periodic and non-periodic activation functions (e.g., ReLU, Swish, tanh, sin, and Snake), including the first non-linear term of each activation’s expansion around 0.

3.1 Regression with Fully Connected Neural Network

In this section, we regress a simple 1d periodic function, sin(x), with the proposed activation function. See Figure 4 and compare with the related experiments on tanh and ReLU in Figure 1. As expected, all three activation functions learn to regress the training points. However, neither tanh nor ReLU seems able to capture the periodic nature of the underlying function; both baselines inter- and extrapolate in a naive way, with tanh being slightly smoother than ReLU. On the other hand, Snake learns both to interpolate and to extrapolate very well; even though the learned amplitude is a little different from the ground truth, it has grasped the correct frequency of the underlying periodic function, both in the interpolation regime and in the extrapolation regime. This shows that the proposed method has the desired flexibility towards periodicity, and has the potential to model such problems.

Figure 4: Regressing a simple sin function with Snake as the activation function.

4 “Universal Extrapolation Theorem”

In contrast to the well-known universal approximation theorems [hornik1989multilayer; csaji2001approximation; funahashi1993approximation], which characterize what a neural network can do on a bounded region, we prove a theorem that we refer to as the universal extrapolation theorem, which focuses on the behavior of a neural network with Snake on an unbounded region. The theorem says that a Snake neural network with sufficient width can approximate any well-behaved periodic function.

Theorem 3.

Let f be a piecewise continuous periodic function with period L. Then a Snake neural network f_{w_N} with one hidden layer of width N can converge to f as N → ∞; i.e., there exist parameters w_N for all N such that

    lim_{N→∞} f_{w_N}(x) = f(x)

for all x, i.e., the convergence is point-wise. If f is continuous, then the convergence is uniform.

As a corollary, this theorem implies the classical approximation theorems [hornik1989multilayer; qu2019approximation; cybenko1989approximation], which state that a neural network with a suitable non-linearity can approximate any continuous function on a bounded region.

Corollary 1.

Let f_w(x) = W_2 Snake(W_1 x) be a two-layer neural network parametrized by two weight matrices W_1 and W_2, and let N be the width of the network. Then, for any bounded continuous function g on a compact set and any ε > 0, we can find a width N and parameters W_1, W_2 such that f_w is ε-close to g on that set.

This shows that the proposed activation function is more general than the ones previously studied, because it has both (1) approximation ability on a bounded region and (2) the ability to learn periodicity on an unbounded region. The practical usefulness of our method is demonstrated experimentally to be on par with standard activation functions on standard tasks, and to outperform previous methods significantly on learning periodic functions. Notice that the above theorem applies not only to Snake but also to basic periodic functions such as sin and cos, and to monotonic variants such as x + sin(x), etc. While we have argued for a preference towards x + sin^2(x), it remains for large-scale experiments to determine which of these variants works best in practice, and we encourage practitioners to experiment with a few variants to find the form best suited to their application.

5 Initialization for Snake

As shown in [he2015delving], different activation functions require different initialization schemes (in terms of the sampling variance) to make the output of each layer have unit variance, thus avoiding divergence or vanishing of the forward signal. Consider a layer h = Snake_a(W x), whose input activations x have unit variance for each element; the goal is to set the variance of each element of W such that h has unit variance. To leading order, Snake looks like an identity function, and one can use this approximation to find the required variance: Var[W_ij] = 1/d for fan-in d. If we use a uniform distribution to initialize W, then we should sample from U(-sqrt(3/d), sqrt(3/d)), which is a factor of sqrt(2) smaller in range than the Kaiming uniform initialization. We find that this initialization is often sufficient. However, when a higher-order correction is necessary, we provide the following exact solution, which is a function of a in general.
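A minimal numpy sketch of this first-order rule (the fan-in and sample sizes below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 256                                  # fan-in of the layer
x = rng.normal(size=(d, 10000))          # unit-variance input activations

# Treating Snake as the identity to leading order, unit output variance
# requires Var[W_ij] = 1/d.  For a uniform distribution U(-b, b), whose
# variance is b^2/3, this means b = sqrt(3/d) -- a factor sqrt(2) narrower
# than Kaiming uniform's b = sqrt(6/d).
bound = np.sqrt(3.0 / d)
W = rng.uniform(-bound, bound, size=(d, d))

pre = W @ x
var = pre.var()                          # should be close to 1
```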

Proposition 1.

The variance of Snake_a(x) = x + sin^2(a x)/a for x drawn from a standard normal distribution is σ_a^2 = 1 + (1 - e^{-4a^2})^2 / (8a^2), which is maximized at a ≈ 0.56.

The second term can be thought of as the “response” to the non-linear term sin^2(a x)/a. Therefore, one should also correct the additional variance induced by this term, by dividing the post-activation value by σ_a. Since the positive effect of this correction is most pronounced when the network is deep, we compare the difference between having such a correction and having no correction with ResNet-101 on CIFAR-10. The results are presented in appendix Section A.6.1. We note that using the correction leads to better training speed and better converged accuracy. We find that for standard tasks such as image classification, a small value of a works very well, and we thus set the default value of a to 0.5. However, for tasks with expected periodicity, a larger a, usually from 5 to 50, tends to work well.
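The closed-form variance (as reconstructed here from Proposition 1; snake_var is our own helper name) can be verified by Monte Carlo sampling, and a simple grid search recovers the maximizing a ≈ 0.56:

```python
import numpy as np

rng = np.random.default_rng(0)

def snake(x, a):
    return x + np.sin(a * x) ** 2 / a

def snake_var(a):
    """Closed-form variance of snake(x, a) for x ~ N(0, 1)."""
    return 1.0 + (1.0 - np.exp(-4.0 * a ** 2)) ** 2 / (8.0 * a ** 2)

a = 0.5
mc = snake(rng.normal(size=2_000_000), a).var()   # Monte-Carlo estimate
cf = snake_var(a)                                 # closed form

# The correction divides post-activations by sqrt(snake_var(a)) to restore
# unit variance; the variance itself is maximized near a ~ 0.56.
grid = np.linspace(0.1, 2.0, 191)
a_star = grid[np.argmax(snake_var(grid))]
```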

6 Applications

Figure 5: Comparison with other activation functions on CIFAR10.

In this section, we demonstrate the wide applicability of Snake. We start with a standard image classification task, where Snake performs competitively against the popular activation functions, showing that it can be used as a general-purpose activation function. We then focus on tasks where we expect Snake to be particularly useful, including temperature and financial data prediction. Training in all experiments is stopped when the training loss of all methods stops decreasing and becomes constant. The performance of the converged models is not visibly different from that at the early stopping point for the tasks we considered, and the results would have been similar had we chosen the early stopping point for comparison.

6.1 Image Classification

Experiment Description. We train ResNet-18 [he2016deep], with roughly 11M parameters, on the standard CIFAR-10 dataset. We simply replace the ReLU activation functions with the specified ones for comparison. CIFAR-10 is a 10-class image classification task over 32 × 32 pixel images; it is a standard dataset for measuring progress in modern computer vision methods. We use LaProp [ziyin2020laprop] with its default hyperparameters as the optimizer, decaying the learning rate once during training. Standard data augmentation techniques such as random crop and flip are applied. We note that our implementation reproduces the standard testing accuracy of ResNet-18 on CIFAR-10. This experiment is designed to test whether Snake is suitable for the standard and large-scale tasks one encounters in machine learning. We also compare against other standard or recently proposed activation functions, including tanh, ReLU, Leaky-ReLU [xu2015empirical], Swish [ramachandran2017swish], and sin [parascandolo2016taming].

Figure 6: Experiment on the atmospheric data. (a) Regressing the mean weekly temperature evolution of Minamitorishima with different activation functions. For Snake, a is treated as a learnable parameter and the red contour shows the 90% credibility interval. (b) Comparison of tanh, ReLU, and Snake on a regression task with learnable a.
Figure 7: Prediction of human body temperature. (a) Snake; (b) averaged temperature prediction in a circadian cycle (i.e., as a function of the natural hours in a day) with Snake; (c) tanh and ReLU; (d) sinusoidal activations.

Result and Discussion. See Figure 5. We see that sin shows performance similar to tanh, agreeing with what was found in [parascandolo2016taming], while Snake shows performance comparable to ReLU and Leaky-ReLU, both in learning speed and in final performance. This hints at the generality of the proposed method: it may be used as a drop-in replacement for ReLU. We also test against the other baselines on ResNet-101, which has about four times more parameters than ResNet-18, to check whether Snake can scale up to even larger and deeper networks, and we find that, consistently, Snake achieves accuracy similar to the most competitive baselines.

6.2 Atmospheric and Body Temperature Prediction

For illustration, we first show two real-life applications of our method: predicting the atmospheric temperature of a local island, and predicting human body temperature. These can be very important for medical applications. Many diseases and epidemics are known to correlate strongly with atmospheric temperature, such as SARS [chan2011effects] and the COVID-19 crisis currently ongoing in the world [zhu2020association; sajadi2020temperature; notari2020temperature]. Therefore, being able to model temperature accurately could be important for related policy making.

Atmospheric temperature prediction. We start by testing a feedforward neural network with two hidden layers to regress the temperature evolution in Minamitorishima, an island south of Tokyo (longitude: 153.98, latitude: 24.28). The data represent the average weekly temperature after April 2008, and the results are shown in Fig. 6(a). We see that the tanh- and ReLU-based models fail to optimize on this task and do not make meaningful extrapolations. The Snake-based model, on the other hand, succeeds in optimizing the task and makes a meaningful extrapolation with the correct period. Also see Figure 6(b): Snake achieves vanishing training and generalization loss, while the baseline methods all fail to reach vanishing training loss, and their generalization loss is also unsatisfactory. We also compare with more recent activation functions such as Leaky-ReLU and Swish, with similar results; see the appendix.

Human body temperature. Modeling the human body temperature may also be important; for example, fever is known as one of the most important symptoms signifying a contagious condition, including COVID-19 [guan2020clinical; singhal2020review]. Experiment Description. We use a feedforward neural network with two hidden layers to regress the human body temperature. The data were measured irregularly from an anonymous participant over a period of days in April 2020. While this experiment is also rudimentary in nature, it reflects many of the obstacles the community faces when applying deep learning to real problems such as medical or physiological prediction [hinton2018deep; rajkomar2019machine]: very limited data (only a small number of points for training) and insufficient measurements taken over irregular intervals. In particular, data points from a certain period of the day are missing, for example during the early morning when the participant is physically at rest (see Figure 7(b)), and for the data points we do have, the intervals between two contiguous measurements are irregular, with several hours between measurements on average; yet this is often the case for medical data, where exact control over variables is hard to realize. The goal of this task is to predict the body temperature at every hour of the day. The model is trained with SGD, decaying the learning rate twice during training.

Results and Discussion. The performances of tanh, ReLU, the sinusoidal activation, and Snake are shown in Figure 7. We do not have a testing set for this task, since it is quite unlikely that a model will predict exactly correctly due to the large fluctuations in human body temperature, so we compare the results qualitatively. In fact, we have some basic knowledge about body temperature: (1) it should fall within the narrow range of normal human body temperature [geneva2019normal], which is also the range within which all of the training points lie; and (2) at a finer scale, the body temperature follows a periodic behavior, highest in the afternoon and lowest late at night or in the early morning [geneva2019normal]. At the bare minimum, a model needs to obey (1), and a reasonably well-trained model should also discover (2). However, tanh and ReLU fail to limit the predicted temperature to the physiological range; both baselines extrapolate beyond it within days of the end of the training set. In contrast, learning with Snake as the activation function obeys the first rule; see Figure 7(a). To test whether the model has also grasped the periodic behavior specified in (2), we plot the average hourly temperature predicted by the model over a period of days; see Figure 7(b). We see that the model does capture the desired periodic oscillation, with a peak in the afternoon and a minimum in the early morning. The successful identification of the early-morning minimum is extremely important, because it falls within the range where no data point is present, yet the model correctly inferred the periodic behavior of the problem, showing that Snake really captures the correct inductive bias for this problem.

6.3 Financial Data Prediction

Figure 8: Prediction of Wilshire 5000 index, an indicator of the US and global economy.

Problem Setting. The global economy is another area where quasi-periodic behaviors might occur [kose2020global]. At the microscopic level, the economy oscillates in a complex, unpredictable manner; at the macroscopic level, the global economy follows cycles that transition between periods of growth and recession [canova1998detrending; shapiro1988sources]. In this section, we compare different models for predicting the total US market capitalization, as measured by the Wilshire 5000 Total Market Full Cap Index (we performed the same experiment on the well-known Buffett indicator, which is seen as a strong indicator of national economic trends [mislinski2020market], with similar results). For training, we take the daily data from 1995-1-1 to 2020-1-31; the ending time is deliberately chosen to precede the point when COVID-19 started to affect the global economy [atkeson2020will; fernandes2020economic]. We use the data from 2020-2-1 to 2020-5-31 as the test set. Noticeably, the test set differs from the training set in two ways: (1) a market crash, Black Thursday, happens (see Figure 8); (2) the general trend is recessive (market cap moving downward on average). It is interesting to see whether the bearish trend in this period is predictable without the effect of COVID-19. For the neural network-based methods, we use a feedforward network with the specified activation function; we note that no activation function except Snake could optimize to vanishing training loss. The error is calculated over independent runs.

Table 2: Prediction of the Wilshire 5000 Index from 2020-2-1 to 2020-5-31, reporting the MSE on the test set for each method (including ARIMA and feedforward networks with various activation functions such as ReLU, Swish, and Snake).

Results and Discussion. See Table 2: the proposed method outperforms the competitors by a large margin in predicting the market value from 2020-2-1 onward. Qualitatively, we focus on a comparison with ARIMA, a traditional and standard method in economics and stock price prediction [pai2005hybrid; ariyo2014stock; wang1996stock]. See Figure 8. We note that while ARIMA predicts a growing economy, Snake predicts a recessive economy from 2020-2-1 onward. In fact, among all the methods in Table 2, the proposed method is the only one that predicts a recession in and beyond the testing period; we hypothesize that this is because it is the only method that learns to capture the long-term economic cycles in the trend. It is also interesting that the model predicts a recession without predicting the violent market crash. This might suggest that the market crash is due to the influence of COVID-19, while a simultaneous background recession also occurs, potentially due to the global business cycle. For analysis purposes only, we also extend the forecast further into the future in Figure 8. Alarmingly, our method predicts a long-term global recession, starting from May 2020 and lasting on average 1.5 years. This also suggests that COVID-19 might not be the only or major cause of the current recessive economy.

Figure 9: Comparison between a simple RNN (with ReLU as the activation function) and a feedforward network with Snake as the activation function. (a) The task is to regress a time series generated by a simple periodic function contaminated with white Gaussian noise. (b) As the noise increases, the RNN fails to generalize, while this has relatively little effect on the proposed method. (c) One more important advantage of the proposed method is that it requires much less computation time during forward and backward propagation.

6.4 Comparison with RNN on Regressing a Simple Periodic Function

One important application of learning periodicity is time series prediction, where the periodicity is at most one-dimensional. This kind of data naturally appears in many audio and textual tasks, such as machine translation [bahdanau2014neural], audio generation [Graves2011], and multimodal learning [liang2018multimodal]. It is therefore useful to compare with RNNs on related tasks. We note, however, that RNNs implicitly parametrize the data points by time, and are hence limited to modeling periodic functions of at most one dimension; they cannot generalize to a periodic function of arbitrary dimension, which is not a problem for our proposed method. We compare an RNN with Snake deployed on a feedforward network on a 1d problem; see Figure 9(a) for the training set of this task. The simple function we try to model is a sinusoid, with white noise added to each observation, and the model sees a time series of fixed length. See Figure 9(b) for the performance of both models, validated on a noise-free hold-out section. We see that the proposed method outperforms the RNN significantly. On this task, one major advantage of our method is that it does not need to back-propagate through time (BPTT), which causes both vanishing gradients and prohibitively high computation time during training [pascanu2013difficulty]. In Figure 9(c) we plot the average computation time of a single gradient update vs. the length of the time series; we see that, even at the smallest length, the RNN requires many times more computation per update (when both models have a similar number of parameters). For Snake, the training is done with gradient descent on the full batch, and the computation time remains low and does not increase visibly as long as the GPU memory is not overloaded. This is a significant advantage of our method over RNNs.
Snake can also be used in a recurrent neural network, and is also observed to improve upon tanh and ReLU for predicting long-term periodic time evolution. Due to space constraints, we discuss this in Section A.2.

7 Conclusion

In this work, we have identified extrapolation properties as a key ingredient for understanding the optimization and generalization of neural networks. Our study of the extrapolation properties of neural networks with standard activation functions reveals their lack of capability to learn periodic functions: due to the mismatched inductive bias, optimization is hard and generalization beyond the range of observed data points fails. We think this example suggests that the extrapolation properties of a learned neural network deserve much more attention than they currently receive. We then propose a new activation function to solve this periodicity problem; its effectiveness is demonstrated through the “universal extrapolation theorem”, and then tested on standard and real-life experiments. We hope that our study will attract more attention to the problem of modeling periodic functions with deep learning.


This work was supported by KAKENHI Grant No. JP18H01145, 19K23437, 20K14464, and a Grant-in-Aid for Scientific Research on Innovative Areas “Topological Materials Science” (KAKENHI Grant No. JP15H05855) from the Japan Society for the Promotion of Science. Liu Ziyin thanks the graduate school of science of the University of Tokyo for the financial support he receives. He also thanks Zhang Jie from the depth of his heart; without Zhang Jie’s help, this work could not have been finished so promptly. We also thank Zhikang Wang and Zongping Gong for offering useful discussions.

Broader Impact Statement

In the field of deep learning, we hope that this work will attract more attention to the study of how neural networks extrapolate, since how a network extrapolates beyond the region where it observes data determines how it generalizes. In terms of applications, this work may have broad practical importance because many processes in nature and in society are periodic. Being able to model periodic functions can have an important impact on many fields, including but not limited to physics, economics, biology, and medicine.

Figure 10: Effect of using different (panels (a)–(c) show Snake with different values of ).
Figure 11: Regressing a rectangular function with Snake as activation function for different values of . For a larger value of , the extrapolation improves.
Figure 12: Regressing with Snake as activation function for different values of . For a larger value of , the extrapolation improves: Whereas the -model treats the high-frequency modulation as noise, the -model seems to learn a second signal with higher frequency (bottom centre).

Appendix A Additional Experiments

a.1 Effect of using different and More Periodic Function Regressions

It is interesting to study the behavior of the proposed method on different kinds of periodic functions (continuous, discontinuous, compound periodicity). See Figure 10. Using different seems to bias the model towards different frequencies: a larger encourages learning higher-frequency features and vice versa. For more complicated periodic functions, see Figures 11 and 12.
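The frequency bias can be made concrete: the periodic part of the activation, snake(x) − x = sin²(ax)/a, has period π/a, so a larger a corresponds to faster oscillations. A small check, again assuming the form x + sin²(ax)/a:

```python
import numpy as np

def snake(x, a=1.0):
    return x + np.sin(a * x) ** 2 / a

# The periodic deviation snake(x) - x repeats with period pi/a, so
# increasing a shrinks the period and biases the unit toward
# higher-frequency features.
x = np.linspace(0.0, 2.0 * np.pi, 400)
for a in (0.5, 1.0, 5.0):
    dev = snake(x, a) - x
    shifted = snake(x + np.pi / a, a) - (x + np.pi / a)
    assert np.allclose(dev, shifted)  # period is pi/a
```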

a.2 Learning a Periodic Time Evolution

In this section, we fit a periodic dynamical system whose evolution is given by , using a simple recurrent neural network as the model, with the standard activation replaced by the designated activation function. We use Adam as the optimizer. See Figure 13. The region within the dashed vertical lines is the range of the training set.
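As an illustration only (not the paper's exact architecture; the weight shapes and scales below are hypothetical), a vanilla RNN cell with its nonlinearity swapped for Snake can be sketched as:

```python
import numpy as np

def snake(x, a=1.0):
    return x + np.sin(a * x) ** 2 / a

def rnn_step(h, x, Wh, Wx, b, act=snake):
    """One step of a vanilla RNN cell with the standard activation
    replaced by Snake (hypothetical shapes)."""
    return act(Wh @ h + Wx @ x + b)

rng = np.random.default_rng(3)
Wh = 0.1 * rng.standard_normal((8, 8))  # small recurrent weights
Wx = rng.standard_normal((8, 1))
b = rng.standard_normal(8)

h = np.zeros(8)
for t in range(20):  # unroll over a short periodic input sequence
    h = rnn_step(h, np.array([np.sin(0.3 * t)]), Wh, Wx, b)
assert h.shape == (8,)
```

Training such a cell still requires BPTT; the point of this sketch is only that the activation swap is a one-line change.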

Figure 13: Regressing a simple sin function with tanh, ReLU, and Snake as the activation function.

a.3 Currency exchange rate modelling

We investigate how Snake performs on another financial dataset. The task is to predict the exchange rate between EUR and USD. As in the main text, we use a two-hidden-layer feedforward network with 256 neurons in the first layer and 64 in the second. We train with SGD, with a learning rate of , weight decay of , momentum of , and a mini-batch size of 165. For Snake, we make a learnable parameter. The results are shown in Fig. 14.

(a) Prediction. Learning range is indicated by the blue vertical bars.
(b) Learning loss during training.
Figure 14: Comparison between Snake, tanh, and ReLU as activation functions to regress and predict the EUR-USD exchange rate.

Only Snake can model the rate on the training range and make a realistic prediction for the exchange rate beyond the year 2015. The better optimization and generalization properties of Snake suggest that it offers the correct inductive bias for this task.

Figure 15: Full training set for Section 6.3.

a.4 How does Snake learn?

We take this chance to study the training trajectory of Snake, using the market index prediction task as an example. We set in this task. See Figure 15 for the full training set of this section (and of Section 6.3), and Figure 16 for how learning proceeds. Interestingly, the model first learns an approximately linear function (at epoch 10), then low-frequency features, and only then high-frequency features. In many problems, such as image and signal processing [1], high-frequency features are often associated with noise and are not indicative of the task at hand. This experiment partly explains the good generalization that Snake seems to offer. It also suggests that one could devise early-stopping techniques for Snake to prevent the learning of high-frequency features when they are considered undesirable.

Figure 16: Learning trajectory of Snake. One notices that Snake first learns linear features, then low-frequency features, and then high-frequency features.

a.5 CIFAR-10 Figures

In this section, we show that the proposed activation function achieves performance similar to ReLU, the standard activation function, both in terms of generalization and optimization speed. See Figures 17 and 18. Both activation functions achieve similar accuracy.

(a) training loss (b) testing accuracy
Figure 17: Comparison on CIFAR10 with ResNet18. We see that for a range of choice of , there is no discernible difference between the generalization accuracy of ReLU and Snake.
Figure 18: Grid Search for Snake at different on ResNet18.

a.6 CIFAR-10 with ResNet101

To show that Snake can scale up to larger and deeper neural networks, we repeat the CIFAR-10 experiment with ResNet101. See Figure 19. Again, Snake achieves performance similar to ReLU and Leaky-ReLU.

Figure 19: ResNet101 on CIFAR-10. We see that the proposed method achieves performance comparable to the ReLU-style activation functions and significantly better than the other baselines.

a.7 Effect of Variance Correction

In this section, we show that the variance correction is beneficial. Since the positive effect of the correction is most pronounced when the network is deep, we compare having the correction against having no correction with ResNet101 on CIFAR-10. See Figure 20; using the correction leads to faster training and better converged accuracy.

We also restate the proposition here.

Proposition 2.

The variance of expected value of under a standard normal distribution is , which is maximized at .

Proof. The proof is a straightforward calculation. The second moment of Snake is , while the squared first moment is ; subtracting the two, we obtain the desired variance


Maximizing this numerically (we used Mathematica) yields the maximum at .
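The maximization can also be checked without Mathematica. The sketch below is our own numerical check, assuming the Snake form x + sin²(ax)/a with standard normal input; it evaluates a closed-form variance derived via E[cos(kx)] = exp(−k²/2), confirms it by Monte Carlo, and locates the maximizing a by grid search:

```python
import numpy as np

def snake_variance(a):
    """Closed-form Var[x + sin^2(a x)/a] for x ~ N(0, 1), derived
    using E[cos(k x)] = exp(-k^2 / 2) for standard normal x."""
    return 1.0 + (1.0 - np.exp(-4.0 * a ** 2)) ** 2 / (8.0 * a ** 2)

# Monte Carlo sanity check of the closed form at one value of a.
rng = np.random.default_rng(0)
x = rng.standard_normal(2_000_000)
a = 0.7
mc = np.var(x + np.sin(a * x) ** 2 / a)
assert abs(mc - snake_variance(a)) < 1e-2

# Locate the maximizing a by a fine grid search.
grid = np.linspace(0.01, 3.0, 30_000)
a_star = grid[int(np.argmax(snake_variance(grid)))]
# a_star comes out close to 0.56
```

The grid search agrees with setting the derivative to zero, which reduces to solving e^u = 1 + 2u with u = 4a².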

(a) training loss vs. epoch
(b) testing accuracy vs. epoch
Figure 20: Effect of variance correction

Appendix B Proofs for Section 2.2

We reproduce the statements of the theorems for the ease of reference.

Theorem 4.

Consider a feedforward network , with arbitrary but fixed depth and widths . Then


where is a real scalar, is any unit vector of dimension , and is a constant matrix only dependent on .

We prove this by induction on . We first prove the base case when , i.e., a simple non-linear neural network with one hidden layer.

Lemma 1.

Consider a feedforward network with . Then


for all unit vector .

Proof. In this case,

where . Let denote the vector that is when and zero otherwise, and let . Then for any fixed we have


where and , and is the desired linear transformation and the desired bias; we are done.

Due to the self-similar structure of a deep feedforward network, the above argument can be iterated over every layer, which motivates a proof by induction.

Proof of Theorem. We now proceed by induction on . Assume the theorem holds for ; we show that it also holds for . Let ; we note that any can be written as


Then, by the induction hypothesis, approaches for some linear transformation :


and, by the lemma, this again converges to a linear transformation; we are done.
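The theorem's claim, that a ReLU network becomes affine along any ray once every unit's sign freezes, is easy to verify numerically. The sketch below uses a random network with hypothetical sizes (not a trained model) and checks that the second difference of the output along a ray vanishes far from the origin:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random 3-layer ReLU network f: R^5 -> R^3 (hypothetical sizes).
Ws = [rng.standard_normal((16, 5)),
      rng.standard_normal((16, 16)),
      rng.standard_normal((3, 16))]
bs = [rng.standard_normal(16), rng.standard_normal(16), rng.standard_normal(3)]

def f(x):
    h = x
    for W, b in zip(Ws[:-1], bs[:-1]):
        h = np.maximum(W @ h + b, 0.0)  # ReLU hidden layers
    return Ws[-1] @ h + bs[-1]

# Along a ray z*u, every ReLU's sign eventually freezes, so f becomes
# affine in z and its second difference along the ray vanishes.
u = rng.standard_normal(5)
u /= np.linalg.norm(u)
z, dz = 1e7, 1.0
second_diff = f((z + 2 * dz) * u) - 2.0 * f((z + dz) * u) + f(z * u)
assert np.max(np.abs(second_diff)) < 1e-3
```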

We can now prove the following theorem; its proof is much simpler and does not require induction.

Theorem 5.

Consider a feedforward network , with arbitrary but fixed depth and widths . Then


where is a real scalar, and is any unit vector of dimension , and is a constant vector only depending on .

Proof. It suffices to consider a two-layer network. Likewise, , where . As , approaches either positive or negative infinity, so approaches a constant vector whose elements are either or , and therefore also approaches some constant vector .

Now any layer composed after the first hidden layer takes an asymptotically constant vector as input, and since the activation function is continuous, is continuous, so


We are done.
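This saturation behavior can also be checked numerically. The sketch below uses a random two-layer tanh network with hypothetical sizes and verifies that far along any ray the output matches the predicted constant vector:

```python
import numpy as np

rng = np.random.default_rng(2)
W1, b1 = rng.standard_normal((32, 4)), rng.standard_normal(32)
W2, b2 = rng.standard_normal((3, 32)), rng.standard_normal(3)

def f(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2  # two-layer tanh network

# Along any ray z*u, each tanh saturates to +/-1, so f(z*u) tends to
# the constant vector W2 @ sign(W1 @ u) + b2.
u = rng.standard_normal(4)
u /= np.linalg.norm(u)
limit = W2 @ np.sign(W1 @ u) + b2
assert np.allclose(f(1e8 * u), limit, atol=1e-6)
```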

Appendix C Universal Extrapolation Theorems

Theorem 6.

Let be a piecewise periodic function with period . Then a Snake neural network, , with one hidden layer and width can converge to as , i.e., there exist parameters for all such that


for all ; i.e., the convergence is pointwise. If is continuous, then the convergence is uniform.

We first show that using as the activation function allows approximating any periodic function, and then show that Snake can represent a neuron exactly.

Lemma 2.

Let be defined as in the above theorem and let be a feedforward neural network with as the activation function, then can converge to point-wise.

Proof. It suffices to show that a neural network with as activation function can represent a Fourier series to arbitrary order, and then apply the Fourier convergence theorem. By the Fourier convergence theorem, we know that


for unique Fourier coefficients , and we recall that our network is defined as


then we can represent Eq. 15 order by order. For the -th order term in the Fourier series, we let


and let the unspecified biases be : we have achieved an exact parametrization of the Fourier series of order with a neural network with many hidden neurons, and we are done.
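The construction above can be illustrated concretely. The sketch below (our own example, not the paper's code) builds a one-hidden-layer network with sin activation whose hidden units compute sin(nx) and whose output weights are the Fourier coefficients of the sawtooth f(x) = x on (−π, π), and checks that the approximation error shrinks as the width grows:

```python
import numpy as np

def fourier_sin_net(x, N):
    """One-hidden-layer network with sin activation: the n-th hidden
    unit computes sin(n * x), and the output weights are the Fourier
    coefficients of the sawtooth f(x) = x on (-pi, pi)."""
    n = np.arange(1, N + 1)
    hidden = np.sin(np.outer(x, n))        # hidden layer of width N
    coeffs = 2.0 * (-1.0) ** (n + 1) / n   # Fourier coefficients of x
    return hidden @ coeffs

# Pointwise convergence on a closed interval away from the jump at +/-pi.
x = np.linspace(-2.0, 2.0, 401)
def mean_err(N):
    return float(np.mean(np.abs(fourier_sin_net(x, N) - x)))

assert mean_err(200) < mean_err(5)  # wider network, smaller error
```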

The above proof is done for a activation; we are still obliged to show that Snake can approximate a neuron.

Lemma 3.

A finite number of Snake neurons can represent a single activation neuron.

Proof. Since the frequency factor in Snake can be removed by rescaling the weight matrices, we set , so . We also reverse the sign in front of and remove the bias , proving this lemma for . We want to show that for finite , there exist and such that


This is achievable for : let , and let ; we have


and setting achieves the desired result. Combined with the result above, this shows that a Snake neural network with many hidden neurons can exactly represent a Fourier series of -th order.
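The spirit of the lemma can be verified with one concrete identity (our own example, not necessarily the lemma's exact construction): the periodic part of a single Snake neuron with a = 1, sin²(t) = snake(t) − t, combined with the shift t = x/2 + π/4, recovers sin exactly via sin(x) = 2 sin²(x/2 + π/4) − 1:

```python
import numpy as np

def snake1(t):
    return t + np.sin(t) ** 2  # Snake with a = 1

def sin_from_snake(x):
    """Recover sin(x) from one Snake neuron plus a linear term, using
    sin^2(t) = snake1(t) - t and sin(x) = 2 sin^2(x/2 + pi/4) - 1."""
    t = x / 2.0 + np.pi / 4.0
    return 2.0 * (snake1(t) - t) - 1.0

x = np.linspace(-10.0, 10.0, 1001)
assert np.allclose(sin_from_snake(x), np.sin(x))
```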

Corollary 2.

Let be a two-layer neural network parametrized by two weight matrices and , and let be the width of the network. Then for any bounded continuous function on , there exists such that for any , we can find such that is close to .

Proof. This follows immediately by setting to match the region in the previous theorem.


  1. The code for our implementation of a demonstrative experiment can be found at
  2. Our code is adapted from
  3. Data from
  4. Data from
  5. Hyperparameter exploration performed with Hyperactive:


  1. Andreas Antoniou. Digital signal processing. McGraw-Hill, 2016.
  2. Adebiyi A Ariyo, Adewumi O Adewumi, and Charles K Ayo. Stock price prediction using the ARIMA model. In 2014 UKSim-AMSS 16th International Conference on Computer Modelling and Simulation, pages 106–112. IEEE, 2014.
  3. Andrew Atkeson. What will be the economic impact of COVID-19 in the US? Rough estimates of disease scenarios. Technical report, National Bureau of Economic Research, 2020.
  4. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
  5. Fabio Canova. Detrending and business cycle facts. Journal of monetary economics, 41(3):475–512, 1998.
  6. KH Chan, JS Peiris, SY Lam, LLM Poon, KY Yuen, and WH Seto. The effects of temperature and relative humidity on the viability of the SARS coronavirus. Advances in virology, 2011, 2011.
  7. Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289, 2015.
  8. Balázs Csanád Csáji et al. Approximation with artificial neural networks. Faculty of Sciences, Eötvös Loránd University, Hungary, 24(48):7, 2001.
  9. George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems, 2(4):303–314, 1989.
  10. Bert De Groot and Philip Hans Franses. Common socio-economic cycle periods. Technological Forecasting and Social Change, 79(1):59–68, 2012.
  11. Nuno Fernandes. Economic effects of coronavirus outbreak (COVID-19) on the world economy. Available at SSRN 3557504, 2020.
  12. Ken-ichi Funahashi and Yuichi Nakamura. Approximation of dynamical systems by continuous time recurrent neural networks. Neural networks, 6(6):801–806, 1993.
  13. Ivayla I Geneva, Brian Cuzzo, Tasaduq Fazili, and Waleed Javaid. Normal body temperature: a systematic review. In Open Forum Infectious Diseases, volume 6, page ofz032. Oxford University Press US, 2019.
  14. Alex Graves. Practical variational inference for neural networks. In Proceedings of the 24th International Conference on Neural Information Processing Systems, NIPS’11, pages 2348–2356, USA, 2011. Curran Associates Inc.
  15. Wei-jie Guan, Zheng-yi Ni, Yu Hu, Wen-hua Liang, Chun-quan Ou, Jian-xing He, Lei Liu, Hong Shan, Chun-liang Lei, David SC Hui, et al. Clinical characteristics of coronavirus disease 2019 in China. New England Journal of Medicine, 382(18):1708–1720, 2020.
  16. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pages 1026–1034, 2015.
  17. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  18. Geoffrey Hinton. Deep learning—a technology with the potential to transform health care. Jama, 320(11):1101–1102, 2018.
  19. Kurt Hornik, Maxwell Stinchcombe, Halbert White, et al. Multilayer feedforward networks are universal approximators. Neural networks, 2(5):359–366, 1989.
  20. Norio Ishida, Maki Kaneko, and Ravi Allada. Biological clocks. Proceedings of the National Academy of Sciences, 96(16):8819–8820, 1999.
  21. M Ayhan Kose, Naotaka Sugawara, and Marco E Terrones. Global recessions, 2020.
  22. Witold Kwasnicki. Kitchin, Juglar and Kuznets business cycles revisited. Wroclaw: Institute of Economic Sciences, 2008.
  23. Paul Pu Liang, Ziyin Liu, Amir Zadeh, and Louis-Philippe Morency. Multimodal language analysis with recurrent multistage fusion. arXiv preprint arXiv:1808.03920, 2018.
  24. Jill Mislinski. Market cap to gdp: An updated look at the buffett valuation indicator. Advisor Perspectives, 2020.
  25. Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pages 807–814, 2010.
  26. Isaac Newton. Philosophiae naturalis principia mathematica. William Dawson & Sons Ltd., London, 1687.
  27. Alessio Notari. Temperature dependence of COVID-19 transmission. arXiv preprint arXiv:2003.12417, 2020.
  28. Denise R. Osborn and Marianne Sensier. The prediction of business cycle phases: Financial variables and international linkages. National Institute Economic Review, 182(1):96–105, 2002.
  29. Ping-Feng Pai and Chih-Sheng Lin. A hybrid ARIMA and support vector machines model in stock price forecasting. Omega, 33(6):497–505, 2005.
  30. Giambattista Parascandolo, Heikki Huttunen, and Tuomas Virtanen. Taming the waves: sine as activation function in deep neural networks, 2016.
  31. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In International conference on machine learning, pages 1310–1318, 2013.
  32. Yang Qu and Ming-Xi Wang. Approximation capabilities of neural networks on unbounded domains. arXiv preprint arXiv:1910.09293, 2019.
  33. Alvin Rajkomar, Jeffrey Dean, and Isaac Kohane. Machine learning in medicine. New England Journal of Medicine, 380(14):1347–1358, 2019.
  34. Prajit Ramachandran, Barret Zoph, and Quoc V Le. Swish: a self-gated activation function. arXiv preprint arXiv:1710.05941, 7, 2017.
  35. Roberto Refinetti and Michael Menaker. The circadian rhythm of body temperature. Physiology & behavior, 51(3):613–637, 1992.
  36. Mohammad M Sajadi, Parham Habibzadeh, Augustin Vintzileos, Shervin Shokouhi, Fernando Miralles-Wilhelm, and Anthony Amoroso. Temperature and latitude analysis to predict potential spread and seasonality for COVID-19. Available at SSRN 3550308, 2020.
  37. Matthew D Shapiro and Mark W Watson. Sources of business cycle fluctuations. NBER Macroeconomics annual, 3:111–148, 1988.
  38. Adrian Silvescu. Fourier neural networks. In IJCNN’99. International Joint Conference on Neural Networks. Proceedings (Cat. No. 99CH36339), volume 1, pages 488–491. IEEE, 1999.
  39. Tanu Singhal. A review of coronavirus disease-2019 (COVID-19). The Indian Journal of Pediatrics, pages 1–6, 2020.
  40. Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In International conference on machine learning, pages 1139–1147, 2013.
  41. Jung-Hua Wang and Jia-Yann Leu. Stock market trend prediction using ARIMA-based neural networks. In Proceedings of International Conference on Neural Networks (ICNN’96), volume 4, pages 2160–2165. IEEE, 1996.
  42. Bing Xu, Naiyan Wang, Tianqi Chen, and Mu Li. Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:1505.00853, 2015.
  43. Liheng Zhang, Charu Aggarwal, and Guo-Jun Qi. Stock price prediction via discovering multi-frequency trading patterns. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’17, page 2141–2149, New York, NY, USA, 2017. Association for Computing Machinery.
  44. Yongjian Zhu and Jingui Xie. Association between ambient temperature and COVID-19 infection in 122 cities from China. Science of The Total Environment, page 138201, 2020.
  45. Abylay Zhumekenov, Malika Uteuliyeva, Olzhas Kabdolov, Rustem Takhanov, Zhenisbek Assylbekov, and Alejandro J. Castro. Fourier neural networks: A comparative study, 2019.
  46. Liu Ziyin, Zhikang T Wang, and Masahito Ueda. Laprop: a better way to combine momentum with adaptive gradient. arXiv preprint arXiv:2002.04839, 2020.