Abstract

A massively recurrent neural network responds on one side to input stimuli and is, on the other side, autonomously active in the absence of sensory inputs. Stimulus and information processing depend crucially on the qualia of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting or chaotic activity patterns.

We study the influence of non-synaptic plasticity on the default dynamical state of recurrent neural networks. The non-synaptic adaption considered acts on intrinsic neural parameters, such as the threshold and the gain, and is driven by the optimization of the information entropy. In the presence of the intrinsic adaptation processes we observe three distinct and globally attracting dynamical regimes: a regular synchronized, an overall chaotic, and an intermittent bursting regime. The intermittent bursting regime is characterized by intervals of regular flow, which are quite insensitive to external stimuli, interspersed with chaotic bursts which respond sensitively to input signals. We discuss these findings in the context of self-organized information processing and critical brain dynamics.


Intrinsic adaptation in autonomous recurrent neural
networks


Dimitrije Marković and Claudius Gros
Institute for Theoretical Physics, J. W. Goethe University, Frankfurt am Main, Germany.

Keywords: information theory, non-synaptic adaptation, self-organization, neural networks


1 Introduction

In the last couple of decades self-organized processes have attracted the interest of many researchers from various scientific areas, both in the natural and in the social sciences. A system is said to be self-organizing, quite generally, when a state of high dynamical complexity arises reliably from relatively simple basic organization rules (Ashby, 1962; Camazine, Deneubourg, Franks, Sneyd, & Theraula, 2003; Gros, 2010).

It is often the case that self-organization in dynamical systems is achieved through an interplay of regulative forces involving positive and negative feedback, viz. through the interplay of internal drives which act destabilizing and regulating, respectively, on the dynamics of the system. In general one type of feedback can dominate, driving the system towards a chaotic or towards an ordered phase, respectively. A proper balance of the two opposing drives can bring the dynamical state to the point of a phase transition, a critical state. One speaks of self-organized criticality (SOC) whenever this balance is not achieved through the actions of an outside controller but through internal self-organizing processes (Bak, Tang, & Wiesenfeld, 1987, 1988; Bak, & Paczuski, 1995; Adami, 1995).

As a dynamical system approaches a critical point, its spatiotemporal complexity rises. It has been suggested that this rise in complexity improves the computational properties and the capability of dynamical systems to process information (e.g., Solé, & Miramontes, 1995; Bertschinger, & Natschläger, 2004; Legenstein, & Maass, 2006). This notion of “computation at the edge of chaos” may also be seen in the broader context of “life at the edge of chaos” (see Zimmer, 1999; Gros, 2010): the dynamical systems at the underpinning of all living organisms have the tendency to self-organize close to a critical state.

In recent years there have been many studies of the possible occurrence of SOC in neural networks with synaptic plasticity. Most of these studies have concluded that synaptic plasticity drives the dynamics generically far below a critical point (e.g., Siri, Quoy, Delord, Cessac, & Berry, 2007, 2008; Dauce, Quoy, Cessac, Doyon, & Samuelides, 1998), viz. it over-regulates the network dynamics. Hence, an organizational principle is needed which will maintain an intermediate level of excitability in neural networks, preventing the occurrence of dynamical states which are non- or hyper-reactive to external influences.

It has been assumed for many years that the dominant driving force shaping the brain’s dynamical state is synaptic plasticity. Thus, little attention was paid to other forms of neural adaption, the non-synaptic adaptation of individual neurons (Mozzachiodi, & Byrne, 2010), also known as intrinsic plasticity. Intrinsic plasticity is mostly manifested as a change in the excitability of a neuron, achieved through adaptation on the level of membrane components. Here, we investigate the role of non-synaptic plasticity in the formation of complex patterns of neural activity.

We study a previously proposed model of intrinsic plasticity (see Triesch, 2005, 2007; Stemmler, & Koch, 1999), and its influence on the dynamical properties of autonomous recurrent neural networks with rate-encoding neurons in discrete time. Within this model neurons aspire to achieve, as an average over time, a firing-rate distribution which maximizes the Shannon information entropy (Gros, 2010). The neurons hence try to homeostatically regulate an entire distribution function, a mechanism denoted polyhomeostatic optimization (Markovic, & Gros, 2010).

Intrinsic plasticity in the form of polyhomeostatic optimization gives rise, for random recurrent network topologies, to ongoing and self-sustained neural activities with non-trivial dynamical states. Depending on the target mean firing rate and the network parameters, one observes three distinct dynamical phases: a synchronized oscillatory, an intermittent-bursting and a chaotic phase, all states being globally attracting in their respective phase spaces.

In Section 2 we derive the stochastic learning rules for intrinsic adaptation. This is followed by the analysis of a single self-coupled neuron with intrinsic plasticity (Section 3) and the analysis of recurrent networks with intrinsic plasticity (Section 4). Concluding remarks and a discussion are provided in Section 5.

2 Stochastic adaptation

We use a basic discrete-time, rate-encoding artificial neuron model. The firing rate y(t) ∈ [0, 1] of the neuron is given as a nonlinear transformation of the total synaptic input current x(t), viz y(t+1) = g(x(t)). The transfer function g(x) has a sigmoidal form, a usual choice being the logistic function

g(x) = 1 / (1 + e^{a(b − x)}),   (1)

where a is the gain and b the bias. The parameters of the transfer function, the intrinsic parameters, will eventually become slow variables, with stochastic learning rules determining their time evolution.

Let us denote with p_x(x) the probability density function (PDF) of the total input. Given the relation (1) between the input current x and the output activity y we find

p_y(y) = p_x(x) (dy/dx)^{−1} = p_x(x) / [a y (1 − y)]   (2)

for p_y(y), the PDF of the firing rate. The main idea behind the derivation of adaption rules for the intrinsic parameters a and b is the assumption that the neuron’s excitability should change in a way which maximizes the entropy of the firing-rate distribution p_y(y), keeping at the same time the average activity at a desired level (Triesch, 2005).

The rationale for this procedure is the following: the maximization of the firing-rate entropy implies that a neuron will use the entire range of available activity states, optimizing the information transfer between neural input and output. Furthermore, the regulation of the average firing rate is present due to environmental constraints on the neuron, e.g. the limited energy resources needed for metabolic processes.

Having a positive-definite variable y ∈ [0, 1], with a fixed first moment, the maximum-entropy PDF corresponds to the exponential distribution

q(y) = e^{−λy} / Z_λ,   Z_λ = ∫₀¹ e^{−λy} dy = (1 − e^{−λ}) / λ,   (3)

where Z_λ is the partition function. We will refer to the first moment of the PDF (3), denoted as μ̃, as the target average firing rate. It is given by

μ̃ = ⟨y⟩_q = ∫₀¹ y q(y) dy = 1/λ − 1/(e^λ − 1).   (4)

In general the inverse function λ(μ̃) cannot be found in closed form, but for μ̃ ≪ 1 we recover the relation λ = 1/μ̃, which is valid for exponential PDFs defined on [0, ∞).
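For a given target firing rate μ̃, the value of λ entering Eq. (3) can be obtained numerically. A minimal sketch of such an inversion of Eq. (4) by root finding (Python with NumPy/SciPy; the function names, bracket and example value are illustrative, not taken from the text):

```python
import numpy as np
from scipy.optimize import brentq

def mean_rate(lam):
    """First moment of the exponential target PDF on [0, 1], Eq. (4)."""
    if abs(lam) < 1e-8:        # removable singularity: q(y) is uniform, mean 1/2
        return 0.5
    return 1.0 / lam - 1.0 / np.expm1(lam)

def lam_from_mu(mu_target):
    """Numerically invert Eq. (4); mu_target must lie in (0, 1)."""
    return brentq(lambda lam: mean_rate(lam) - mu_target, -50.0, 50.0)

print(lam_from_mu(0.3))   # below 1/0.3, since lam = 1/mu holds only for mu << 1
```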

A natural way to introduce a distance measure between two PDFs is the Kullback-Leibler (KL) divergence D_KL (Gros, 2010), defined as

D_KL(p_y ∥ q) = ∫ p_y(y) ln[ p_y(y) / q(y) ] dy = −H[p_y] + λ ⟨y⟩_{p_y} + ln Z_λ,   (5)

where ⟨·⟩_{p_y} denotes the expectation value with respect to p_y(y), and H[p_y] denotes the differential entropy functional. By minimizing D_KL with respect to the intrinsic parameters a and b one obtains the learning rules. In Eq. (5) only the first two terms are functions of a and b. The gradient descent hence gives the following relation

Δθ = −ε ∂D_KL/∂θ = ε ⟨ ∂ ln(dy/dx)/∂θ − λ ∂y/∂θ ⟩,   (6)

with θ ∈ {a, b}. As the input distribution p_x(x) is in general unknown, it is suitable to derive the adaption rules by using a stochastic gradient descent (Spall, 2005). Such adaption rules, for the update of the internal parameters a and b, are obtained by using the instantaneous value of the expression between the brackets on the right-hand side of Eq. (6). An advantageous side effect of this approach is that the adaptation rules become local in time. Using Eq. (1) for the transfer function to evaluate ∂y/∂a and ∂y/∂b, we obtain the stochastic learning rules

Δa = ε [ 1/a + (x − b) Δ(y) ],   Δb = −ε a Δ(y),   (7)

where ε is the learning rate and

Δ(y) = 1 − (2 + λ) y + λ y².

The learning rate ε is assumed to be small; viz the time evolution of the internal parameters is slow compared to the evolution of both x(t) and y(t). In this way the stochastic adaptation, which depends only on the instantaneous values of the variables, can closely match the direction of the deterministic gradient, Eq. (6). The input distribution p_x(x) can, in general, be non-stationary, in which case the minimum of the cost function D_KL varies in time. For this reason, the learning rate should also be large enough for the adaptation to follow the changing minimum. However, a finite and constant learning rate does not satisfy the conditions for exact convergence of the internal parameters to the minimum of the Kullback-Leibler divergence (Spall, 2005). Still, if the learning rate decreases with every time step, a condition needed for strict convergence, the intrinsic adaptation reacts ever more slowly to a variability in the position of the minimum of the cost function. It is thus more favorable to use here a constant learning rate, which results in oscillations of the intrinsic parameters around the minimum. The amplitude of these oscillations scales with the learning rate (Bottou, 2004), thus a small learning rate also ensures convergence to a small vicinity of the minimum of the cost function D_KL.
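The resulting update is local in time and straightforward to implement. A minimal sketch of one adaptation step, Eqs. (1) and (7) (Python with NumPy; the function name and the default learning rate are illustrative assumptions):

```python
import numpy as np

def intrinsic_update(x, a, b, lam, eps=0.01):
    """One step of the stochastic adaptation rules, Eq. (7).

    x    : instantaneous total input current
    a, b : gain and bias of the logistic transfer function, Eq. (1)
    lam  : parameter of the target exponential PDF, Eq. (3)
    """
    y = 1.0 / (1.0 + np.exp(a * (b - x)))        # firing rate, Eq. (1)
    delta = 1.0 - (2.0 + lam) * y + lam * y**2   # relative parameter change
    a_new = a + eps * (1.0 / a + (x - b) * delta)
    b_new = b - eps * a * delta
    return y, a_new, b_new
```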

Figure 1: (Left) The dependence of the relative parameter change Δ(y), Eq. (9), on the output activity y, for different target firing rates μ̃. (Right) Critical value of the gain, a_c, as a function of the average firing rate. The colored area shows the region of stability of the fixpoint y*, see Section 3.

3 Single neuron

We analyze initially a minimal network setup, a self-coupled neuron adapting homeostatically the intrinsic parameters of the transfer function. A synaptic connection between the axon and the dendrites of the same neuron is also known as an autapse.

Neurons with an autapse are not rare in the brain. They have been observed in various brain regions and in different types of neurons. The discovery of functional autapses provides clues for possible physiological roles (Bekkers, 2003). Herrmann and Klaus (2004) suggest that autapses lead to oscillatory behavior in otherwise non-oscillating neurons. We shall see below how this type of behavior spontaneously arises in self-excitatory neurons with intrinsic plasticity. We focus here on the analysis of an excitatory autapse and use it as a basis for understanding the behavior observed in a larger network setup (see Section 4). For the case of self-inhibition please refer to a separate study (Markovic, & Gros, 2010).

The autapse neuron is equivalent to the identification x(t) ≡ y(t) in Eqs. (1) and (7). The complete set of evolution rules for the dynamical variables y(t), a(t) and b(t) is then

y(t+1) = 1 / (1 + e^{a(t)[b(t) − y(t)]}),   a(t+1) = a(t) + ε [ 1/a(t) + (y(t) − b(t)) Δ(t) ],   b(t+1) = b(t) − ε a(t) Δ(t),   (8)

with

Δ(t) = 1 − (2 + λ) y(t+1) + λ y²(t+1).   (9)

The right-hand side of (9) depends directly on y(t+1) and only implicitly on y(t), as one can easily verify when going through the derivation of the rules (7) for the intrinsic plasticity. In the plot on the left side of Fig. 1 we show Δ, Eq. (9), as a function of the output activity y, for various target firing rates μ̃. Note that Δ = 1 at y = 0 and Δ = −1 at y = 1, independently of μ̃.
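The dynamics discussed below can be reproduced by iterating Eq. (8) directly. A minimal simulation sketch (Python with NumPy; the default learning rate, λ and initial values are illustrative choices, not the values used for the figures):

```python
import numpy as np

def simulate_autapse(T=20000, eps=0.01, lam=5.0, y0=0.3, a0=1.0, b0=0.5):
    """Iterate the one-site problem, Eqs. (8) and (9)."""
    y, a, b = y0, a0, b0
    traj = np.empty((T, 3))
    for t in range(T):
        y_new = 1.0 / (1.0 + np.exp(a * (b - y)))            # autapse: x(t) = y(t)
        delta = 1.0 - (2.0 + lam) * y_new + lam * y_new**2   # Eq. (9)
        da = eps * (1.0 / a + (y - b) * delta)               # Eq. (7) with x = y
        db = -eps * a * delta
        y, a, b = y_new, a + da, b + db
        traj[t] = (y, a, b)
    return traj
```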

Figure 2: (Top) The time dependence of the firing rate y(t) (solid line), the bias b(t) (squares) and the gain a(t) (circles) for the one-site problem (8), for given values of the learning rate ε and of the target average firing rate μ̃. The gain is set initially below the critical value and, as long as a < a_c, the system relaxes quickly to the fixpoint y*. Once a surpasses a certain threshold, compare Fig. 1, the fixpoint becomes unstable and the system follows a limiting cycle.
(Bottom) The maximal local Lyapunov exponent λ_max, compared to the Lyapunov exponent λ_∥ of a perturbation parallel to the flow. They were estimated along the points of the trajectory (y(t), a(t), b(t)).

3.1 Stability analysis

We first analyze a reduced model of the three evolution equations (8), obtained by setting Δa(t) ≡ 0, viz considering a constant gain a. The reduced system contains a fixpoint (y*, b*), with Δ(y*) = 0 and y* = g(a(y* − b*)). This fixpoint defines a one-dimensional manifold in the complete phase space (y, a, b), where the stability of the manifold depends directly on the given values of a and μ̃ (see Fig. 1). For a < a_c the dynamics is attracted toward the fixpoint y*, while for a > a_c the fixpoint becomes repelling and the activity of the neuron follows a limiting cycle. One can show that the critical gain is a_c = 1/[y*(1 − y*)].
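The condition Δ(y*) = 0 is a quadratic equation in y*, whose discriminant simplifies to λ² + 4, so the admissible root and the corresponding critical gain can be written down directly. A short sketch (Python with NumPy; this simply evaluates the stability criterion a_c = 1/[y*(1 − y*)] stated above):

```python
import numpy as np

def fixpoint_rate(lam):
    """Root of Delta(y) = 1 - (2 + lam)*y + lam*y**2 = 0 lying inside (0, 1)."""
    return (2.0 + lam - np.sqrt(lam**2 + 4.0)) / (2.0 * lam)

def critical_gain(lam):
    """Gain a_c at which the fixpoint y* loses stability, a_c = 1/[y*(1 - y*)]."""
    ys = fixpoint_rate(lam)
    return 1.0 / (ys * (1.0 - ys))
```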

The time evolution of the full set of equations (8) approaches a limiting cycle (see Fig. 2, top), for all starting values of (y, a, b). The evolution rules (8) have fixpoint solutions also for a vanishing adaptation, viz. for ε → 0. For ε > 0 these fixpoints are turned into attractor relics (Gros, 2007, 2009). The trajectory slows down close to the attractor relics, giving rise to the transient firing states observable in Fig. 2. This non-trivial activity pattern is a direct consequence of the polyhomeostatic adaption principle: the system cannot achieve, as an average over time, a non-trivial firing-rate distribution by settling into a steady state. Polyhomeostatic adaption hence forces the neuron to remain autonomously active, with varying firing rates.

Figure 3: Output distributions of the two neurons with the highest (diamonds) and the lowest (circles) Kullback-Leibler divergence (5), compared to the mean output distribution (dashed line) and the target exponential output distribution (full line), for two values of the target mean firing rate μ̃ (top and bottom). The value of μ̃ is identical for all the neurons in the network; the network size N, the connectivity and the fraction of excitatory links are kept fixed. Insets: Output distribution of the single self-coupled neuron having the same target average firing rate μ̃ as the neurons in the network.

For an insight into the influence of an external input signal on the dynamics, we have estimated the maximal local Lyapunov exponent λ_max and the Lyapunov exponent λ_∥ for a perturbation in the direction of the flow. They are presented in the bottom graph of Fig. 2. We see that the neuron is most sensitive to a perturbation during the transition between the two attractor relics (low and high activity levels), since λ_max is positive throughout these transition periods. Also, λ_∥ approaches λ_max during the fast transition between the attractor relics. We thus conclude that the direction of maximal sensitivity to perturbations is aligned with the direction of the flow at these points. Note that the two attractor relics can be stable during the same time period, although the activity settles in only one of them. This means that a transition could be induced by a sufficiently strong perturbation.

Figure 4: The Kullback-Leibler (KL) divergence D_KL of the neuronal firing-rate distribution relative to the target exponential distribution (3), as a function of the standard deviation σ of the Gaussian input distribution, in the presence of an autapse (green dots, w = 1 in Eq. (10)) and in the absence of the autapse (red dashed line, w = 0 in Eq. (10)), for a given target mean firing rate μ̃. Inset: Mean KL divergence D̄_KL (see Section 4) of a random recurrent neural network with N neurons and a given mean target firing rate μ̃, as a function of the fraction of excitatory links. Note that an increase in the fraction of excitatory links can be related to a decrease in the noisy component of the input that each neuron receives (see Section 4.2).

3.2 Noisy autapse

Let us consider the case of a noisy autapse, when

x(t) = w y(t) + ξ(t).   (10)

The neuron receives, beside the autaptic signal w y(t), a random input ξ(t) from an external source (e.g. from other neurons in the network). The non-autaptic component of the input is drawn from a Gaussian distribution, ξ(t) ∼ N(0, σ²), where N(0, σ²) denotes a normal distribution with zero mean and variance σ².

The external input hence perturbs the signal coming from the autapse. From Fig. 2 it is quite obvious that the output distribution of a self-coupled neuron with σ = 0 in Eq. (10), viz. a noiseless autapse, deviates substantially from the target exponential distribution. The output distribution in the case of a noiseless autapse is presented in the insets of Fig. 3. However, as we increase the magnitude of the external signal, the output distribution of the neuron approaches the optimal distribution and the KL divergence decreases toward a minimum, see Fig. 4. Obviously, when σ becomes large, the external input dominates and the two cases with and without an autapse become equivalent. Nevertheless, even small amounts of noise, viz small σ, are sufficient to disrupt the oscillatory behavior of the output activity. This happens because of the existence of a second stable fixpoint for certain values of the intrinsic parameters a and b. As the standard deviation σ increases, the probability that the firing rate will transit toward the second fixpoint also increases. Thus, at certain levels of noise, the activity stochastically escapes in short time intervals from the stable fixpoints, and the regular oscillatory behavior is destroyed. This also implies that a certain level of decorrelation between the input current and the output activity has to be reached if the firing-rate distribution is to come as close as possible to the desired target distribution.
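The dependence shown in Fig. 4 can be reproduced by simulating Eq. (10) and comparing a histogram of the resulting activity with the target PDF. A minimal sketch of such a histogram-based KL estimate (Python with NumPy; the function name and bin count are illustrative):

```python
import numpy as np

def kl_to_target(y_samples, lam, bins=50):
    """Histogram estimate of D_KL(p_y || q), Eq. (5), from activity samples."""
    p, edges = np.histogram(y_samples, bins=bins, range=(0.0, 1.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    Z = -np.expm1(-lam) / lam                # partition function of Eq. (3)
    q = np.exp(-lam * centers) / Z           # target exponential PDF
    nonzero = p > 0.0                        # empty bins contribute zero
    return np.sum(p[nonzero] * np.log(p[nonzero] / q[nonzero])) * width
```

Feeding in the activity trace of the autapse simulation, with the noisy input of Eq. (10) in place of the noiseless one, should reproduce the qualitative decrease of D_KL with σ.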

Figure 5: (Left) Mean Kullback-Leibler (KL) divergence D̄_KL (color coded) as a function of the fraction of excitatory links and of the target mean firing rate μ̃, in a recurrent network of N neurons and connectivity K. (Right) D̄_KL as a function of the connectivity K and of the target mean firing rate μ̃, for networks of separate excitatory/inhibitory neurons (top) and for neurons with both types of projections (bottom), with the same fraction of excitatory links in both cases. The density plot was evaluated as a linear interpolation of the numerically obtained values, represented by green dots.

4 Recurrent neural network

We studied numerically random recurrent neural networks (RRNN) of polyhomeostatically adapting neurons, Eq. (7), where each neuron receives input from K pre-synaptic neurons.

In a first step we consider networks of dual neurons, i.e. a single neuron can have both excitatory and inhibitory projections. In such a setup, the synaptic input that the i-th neuron receives is expressed as

x_i(t) = Σ_j w_ij y_j(t).   (11)

The synaptic weights are selected as w_ij = ±w, with w_ij ≠ 0 for exactly K presynaptic neurons, and with a given probability, the fraction of excitatory links, for a weight to be positive. The learning rate ε in (7) is set to a small constant value. We consider homogeneous networks where all neurons have an identical target average firing rate μ̃, determined through (4) for the target output distribution q(y), see Eq. (3).
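A minimal sketch of this network setup (Python with NumPy; the weight magnitude w, the learning rate and the random seed are illustrative assumptions, as the text does not fix them):

```python
import numpy as np

rng = np.random.default_rng(0)

def build_weights(N=1000, K=100, w=0.1, frac_exc=0.5):
    """Random weight matrix: K presynaptic partners per neuron, weights +-w,
    positive with probability frac_exc (the fraction of excitatory links)."""
    W = np.zeros((N, N))
    for i in range(N):
        pre = rng.choice(N, size=K, replace=False)
        sign = np.where(rng.random(K) < frac_exc, 1.0, -1.0)
        W[i, pre] = w * sign
    return W

def network_step(W, y, a, b, lam, eps=0.01):
    """One synchronous update of the network, Eqs. (1), (7) and (11)."""
    x = W @ y                                           # total input, Eq. (11)
    y_new = 1.0 / (1.0 + np.exp(a * (b - x)))           # Eq. (1), element-wise
    delta = 1.0 - (2.0 + lam) * y_new + lam * y_new**2  # Eq. (9)
    a_new = a + eps * (1.0 / a + (x - b) * delta)       # Eq. (7)
    b_new = b - eps * a * delta
    return y_new, a_new, b_new
```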

For a neuron to map an arbitrary input ideally to the exponential distribution with a specified mean, it would need a transfer function which can take any functional form during the adaption process. This is obviously not the case for the logistic function, which has only two adaptable parameters. As a result of this limited flexibility of the transfer function, the output distribution will never ideally match the target exponential distribution, even when the input of the neuron is independent from the output (Fig. 4, red dashed line) (Markovic, & Gros, 2010; Triesch, 2005).

4.1 Dynamical behaviors

To examine the mean deviation of the output activity from the target exponential distribution, we have estimated the KL divergence D_KL, see Eq. (5), for all neurons in the network, and averaged over the entire network and over random network realizations. The obtained mean D̄_KL is presented in Fig. 5 as a function of the target mean firing rate μ̃, the fraction of excitatory links and the network connectivity K. Note that D̄_KL is low for high target firing rates and for balanced excitation/inhibition, or for low connectivities K.

Figure 6: Activity of one randomly chosen neuron from a network of N = 1000 neurons with connectivity K = 100, depending on the fraction of excitatory links (top; fixed μ̃) and on the mean target firing rate μ̃ (bottom; balanced excitation/inhibition); the yellow line represents the average network activity. The right ordinate shows the corresponding mean Kullback-Leibler divergence D̄_KL (see Section 4).

We have observed three distinct dynamical regimes: a pure chaotic regime characterized by low values of D̄_KL, a synchronised oscillatory regime observed in random networks with dominating excitatory connections, and an intermittent-bursting regime observed for balanced excitation/inhibition and small μ̃.

To illustrate the difference between these dynamical behaviors, we present in Fig. 6 the average neural activity (yellow line) and the activity patterns of a randomly selected neuron in a network of N = 1000 units. As we vary the fraction of excitatory links, keeping μ̃ fixed (Fig. 6, top), the dynamics shifts from the chaotic phase into the phase of synchronised oscillations. On the other hand, reducing the target mean firing rate for balanced excitation/inhibition (Fig. 6, bottom) leads to the manifestation of bursts of chaotic activity alternating with periods of nearly constant activity. On the right side of the graphs we give the values of the corresponding mean Kullback-Leibler divergence D̄_KL. In addition, from the oscillatory regime it is possible to transit back into a chaotic or intermittent-bursting regime (depending on the value of μ̃) by reducing the network connectivity K. This can be seen from the similarity of the density plots on the right- and left-hand sides of Fig. 5.

In Fig. 3 we give an example of the output distributions of two neurons, with the highest and the lowest values of D_KL, when the dynamics of the neural network is in the chaotic regime. The two output distributions are compared with the corresponding target exponential distribution.

Alternatively to randomly selecting a single link as excitatory or inhibitory, one can consider the case when a single neuron is selected as either excitatory or inhibitory, such that all the projections from one neuron are of the same type. Results for this case are shown in the upper right graph of Fig. 5. The absence of a visible difference between the upper and the lower graph indicates that the intrinsic adaptation leads to the same dynamical behavior, independent of whether excitatory and inhibitory neurons are separated or whether neurons carry both types of projections.

4.2 Oscillatory behavior

The network dynamics makes a transition into a synchronized oscillatory regime (see Fig. 6) as the fraction of excitatory links is increased. To better understand this oscillatory behavior, let us recall the discussion from the previous section. In the case of a single self-coupled neuron we showed that a certain level of decorrelation between the output activity and the input signal has to be achieved in order for the neuron's activity to properly match the target distribution (see Fig. 4). The same argument holds in the case of a RRNN. Thus, when the input of a neuron is uncorrelated with its output activity (corresponding to the absence of an autapse in Fig. 4), the output distribution closely matches the target distribution q(y).

The total synaptic input a neuron receives can be divided into two components. The first component is correlated with the neuron's own output activity via excitatory recurrent connections. The second component corresponds to the noisy and uncorrelated part of the input, which results from the competition between inhibition and excitation. The first, correlated part of the input becomes dominant over the noisy second contribution as we increase the fraction of excitatory links. The activity therefore starts to follow an oscillatory locked-in trajectory for large fractions of excitatory links.

In the inset of Fig. 4 we present the change of the mean KL divergence D̄_KL as the number of excitatory connections grows, for a fixed target mean firing rate. We can see that D̄_KL increases rapidly once the excitatory connections start to dominate. Note that the transition between the two phases occurs in a slightly different manner compared to the case of the neuron with a noisy autapse. One reason for this difference is that input/output correlations are amplified by the additional delayed components of the excitatory feedback a neuron receives. This reasoning can be shown to hold by simulating a single neuron with delayed coupling autapses, driven by an additional noisy input analogous to Eq. (10).

4.3 Intermittent bursts of chaotic dynamics

In the second graph (Fig. 6, bottom), the dynamics enters an intermediate phase, characterized by intermittent bursts of chaotic neural activity, as the target mean firing rate is decreased. A closer look into the phase space of the intrinsic parameters (a_i, b_i) of the i-th neuron reveals that the intrinsic parameters approach a limiting cycle, similar to the case of a neuron with an autapse. During the regime of nearly constant activity the gains steadily increase. Once the gains of a sufficient number of neurons cross a certain critical value, the activity of the entire network shifts into a chaotic regime. The activity during the chaotic regime exceeds the target average activity level μ̃, thus the gains of all neurons are driven back to sub-critical values. This intermittent-bursting behavior persists even when reducing the learning rate by several orders of magnitude. Nevertheless, when considering constant and supercritical gains for all neurons, and allowing only the respective biases to adapt, we observe a pure chaotic behavior. This change, which arises when reducing the number of degrees of freedom by considering constant gains, is not yet fully understood. One possible cause could be the use of the “vanilla” gradient, which does not take into account the curvature of the manifold of probability distributions (see Amari, 1998) and therefore does not point into the direction of maximal change of D_KL.

4.4 Sensitivity to external perturbations

Figure 7: Nonlinear finite-time Lyapunov exponent λ(τ) (see Section 4.4) at the time step τ, for various target average firing rates μ̃, with the network size N and the connectivity K identical in all cases. The presented curves are averages over random perturbations. Inset: The limit of dynamical predictability, defined as the time needed for an error to reach a given fraction of the saturation level.

We have evaluated the nonlinear finite-time Lyapunov exponent (FTLE), which measures the short-term growth rate of initial perturbations without linearization of the time-evolution equations (Ding, & Li, 2007).

In practice the FTLE is estimated by considering a small perturbation δ(t₀) of the trajectory along a randomly selected direction, and following the deviation δ(t) of the perturbed trajectory from the reference orbit. The FTLE is then obtained as

λ(τ, t₀, δ₀) = (1/τ) ln( ‖δ(t₀ + τ)‖ / δ₀ ),

where δ₀ = ‖δ(t₀)‖ and τ is the elapsed time. The FTLE depends on the starting point t₀ of the initial perturbation and on the size δ₀ of the initial displacement. The mean FTLE λ̄(τ, δ₀), which is independent of the starting point, is evaluated by taking the average of the FTLEs over various points along the trajectory,

λ̄(τ, δ₀) = ⟨ λ(τ, t₀, δ₀) ⟩_{t₀}.

The mean FTLE still depends on the initial displacement δ₀. If δ₀ is chosen to be very small, one observes initially an exponential growth of the perturbation. For this time period the mean FTLE is essentially constant and reduces to the largest Lyapunov exponent. The growth of the perturbation eventually enters a nonlinear phase, which is maintained until the deviation from the reference orbit reaches a saturation value. Note that the FTLE λ(τ, t₀, δ₀) is not necessarily positive for all τ and t₀, implying that an initial deviation can converge back towards the reference trajectory.
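A minimal sketch of this estimator for a single perturbation (Python with NumPy; `step` stands for one application of the map under study, e.g. the network update above, and the default values of τ and δ₀ are illustrative):

```python
import numpy as np

def ftle(step, state0, tau=500, d0=1e-8, rng=None):
    """Nonlinear finite-time Lyapunov exponent lambda(tau, t0, d0) for one
    random perturbation of the reference state `state0`."""
    if rng is None:
        rng = np.random.default_rng()
    direction = rng.standard_normal(state0.shape)
    direction /= np.linalg.norm(direction)          # random unit vector
    ref, pert = state0.copy(), state0 + d0 * direction
    lam = np.empty(tau)
    for t in range(1, tau + 1):
        ref, pert = step(ref), step(pert)
        lam[t - 1] = np.log(np.linalg.norm(pert - ref) / d0) / t
    return lam
```

Averaging the returned curves over starting points t₀ along the trajectory and over random directions then yields the mean FTLE.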

When analysing the changes of the dynamical behavior as we reduce μ̃, while keeping the fraction of excitatory links constant at a balanced value, we find that when μ̃ is in the range which corresponds to small values of D̄_KL (see the lower right part of Fig. 5), the dynamics is in a pure chaotic state, with the constant part of the mean FTLE, corresponding to the largest Lyapunov exponent, being positive. In this phase the FTLE is positive for all τ and t₀; thus small initial displacements diverge along every point of the orbit.

As the target mean firing rate is decreased further, we observe a kink in the FTLE, with a transition to a second linear time development, see Fig. 7. The manifestation of the kink corresponds to the occurrence of periods of quasi-constant activity, as seen in the bottom plot of Fig. 6. These laminar periods are characterised by a negative local Lyapunov exponent; that is, small perturbations are suppressed during the laminar periods, with the perturbations growing, however, during the periods of chaotic bursting.

In this intermittent-bursting phase the short-term chaotic behavior (small τ) describes the repulsion of two initially close trajectories mainly during the bursting regime. The long-term behavior (large τ) is also chaotic, as a consequence of the intermittent chaotic bursts. The change in the growth of perturbations (see Fig. 7) results from the interplay of the distinct characteristic timescales of the intrinsic variables (a and b) and of the firing rates (y), with the former being slow and the latter being fast variables (Boffetta, Giuliani, Paladin, & Vulpiani, 1998). The occurrence of transiently stable periods of activity also leads to an increase in the time measuring the limit of dynamic predictability, as shown in the inset of Fig. 7. The duration of predictability is defined here as the time needed for a perturbation to reach a given fraction of the saturation level (Ding, & Li, 2007).

Figure 8: Position-dependent finite-time Lyapunov exponent (see Section 4.4) along the orbit. We considered three cases: a chaotic phase (bottom), an intermittent-bursting phase (middle) and a synchronised oscillatory phase (top), obtained for different fractions of excitatory links and target mean firing rates μ̃. The initial displacement δ₀, the number of neurons N and the connectivity K were kept the same in all three cases.

The dependence of the FTLE on the position of the perturbation along the orbit in phase space is presented in Fig. 8. We compare the FTLEs estimated from six different points along the trajectory, for all three dynamical regimes. Each FTLE is evaluated for 500 consecutive timesteps, after which a new perturbation is introduced. In the pure chaotic regime the FTLE is positive for all given initial perturbation points, while in the intermittent-bursting regime one also notices negative values of the FTLE when the perturbation is initiated within a laminar period. In contrast, perturbations starting during the periods of bursts lead to strictly positive FTLEs. In the oscillatory regime the FTLE is negative along most of the orbit and, similar to the single-neuron case, the trajectory is unstable during the fast transitions from low to high activity states (and vice versa), which results in sharp, positive-valued spikes of the FTLE.

Chaotic dynamics is also observed in the non-adapting limit ε → 0, whenever the static values of the gains are above the critical value. This is in agreement with the results of a large-N mean-field analysis of an analogous continuous-time Hopfield network (Sompolinsky, Crisanti, & Sommers, 1988). Subcritical static gains lead, on the other hand, to regular dynamics controlled by point attractors.

5 Discussion

Our results show that the introduction of intrinsic plasticity in random recurrent neural networks results in ongoing and self-sustained neural activities with non-trivial dynamical states. For large networks we have observed, depending on the specified parameters, three distinct self-organized phases. The network parameters include the fraction of excitatory connections, the average connectivity and the target average firing rate. These results show that non-synaptic adaptation plays an important role in the formation of complex patterns of neural activity.

An important part of the sensory signals an organism receives results from the reactions of the environment to the motor actions taken by the organism itself. The complexity of this portion of the sensory inputs will then depend on the complexity of the organism’s own behavior. This consideration indicates that self-generated and autonomously sustained neural activity is important for the generation of non-trivial behavioral patterns. It is hence more likely that an animal will start an explorative behavior if the brain, supporting the body, is able to maintain a change in sensory input. In other words, if the dynamics of a neural controller were to approach, in the absence of sensory inputs, a state of stable and constant activity, it would be capable of generating only trivial motor actions.

As mentioned in the introduction, synaptic plasticity alone drives the dynamics of a recurrent network generically toward a frozen state, independent of the presence or absence of sensory input (Siri et al., 2007). Synaptic plasticity is thus in general, for non-spiking neurons, not sufficient for achieving self-sustained activity, a likely essential precondition for complex behavioral patterns.

The relevance of critical brain dynamics for both non-linear sensory processing (Kinouchi, & Copelli, 2006) and self-sustained neural computation is being investigated intensively. Levina, Herrmann, & Geisel (2007, 2009) have demonstrated that critical neural activity can be achieved when the depletion of synaptic vesicles is included in the dynamics of the membrane potential. Under this setup, they observe a power-law scaling of the avalanches formed by the activity of spiking neurons, a result in agreement with experimental observations (e.g., Chialvo, 2010), which are supportive of the notion that the brain works in a critical regime. However, without any form of adaptation to varying sensory stimuli, neurons would perform only trivial computations, and criticality, or other complex activity patterns, would generically not arise. Thus, to properly understand brain dynamics and cognitive processes, one must include various forms of plasticity (Triesch, 2005, 2007).

Here we showed that intrinsic or non-synaptic plasticity will drive a system of recurrently interacting neurons, under certain quite general conditions (network connectivity, ratio of excitation versus inhibition), towards a chaotic phase. Synaptic adaption rules, on the other hand, are known to generically drive recurrent neural networks into a subcritical or frozen state. Our results hence indicate that the self-organization of neural network dynamics into a critical regime could occur whenever intrinsic and synaptic plasticity are both present and relevant. Critical neural dynamics would then result from the interplay between synaptic and non-synaptic adaption processes.

An analogous line of argument has been brought forward by Der, Hesse, & Martius (2006), who demonstrated the relevance of self-organized criticality for the emergence of exploratory behavior in autonomous agents. Optimal predictability of the sensory-motor cycle is achieved when the neural controller works in a critical regime (Der et al., 2006). We believe that self-organized criticality in biologically inspired autonomous recurrent neural networks will exhibit similar patterns of complex behavior. The complex behavior should certainly persist when including the interaction between an agent and the environment, as we plan to investigate in future work.

References

  • Adami, C. (1995). Self-organized criticality in living systems. Physics Letters A, 203, 29–32.
  • Amari, S. I. (1998). Natural gradient works efficiently in learning. Neural Computation, 10, 251–276.
  • Ashby, W. (1962). Principles of the self-organizing system. In Principles of Self-Organization: Transactions of the University of Illinois Symposium (pp. 255–278).
  • Bak, P., Tang, C., & Wiesenfeld, K. (1987). Self-organized criticality: An explanation of the 1/f noise. Physical Review Letters, 59, 381–384.
  • Bak, P., Tang, C., & Wiesenfeld, K. (1988). Self-organized criticality. Physical Review A, 38, 364–374.
  • Bak, P., & Paczuski, M. (1995). Complexity, contingency, and criticality. Proceedings of the National Academy of Sciences of the United States of America, 92, 6689–6696.
  • Bekkers, J. M. (2003). Synaptic transmission: Functional autapses in the cortex. Current Biology, 13, 433–435.
  • Bertschinger, N., & Natschläger, T. (2004). Real-time computation at the edge of chaos in recurrent neural networks. Neural Computation, 16, 1413–1436.
  • Boffetta, G., Giuliani, P., Paladin, G., & Vulpiani, A. (1998). An extension of the Lyapunov analysis for the predictability problem. Journal of the Atmospheric Sciences, 55, 3409–3416.
  • Bottou, L. (2004). Stochastic learning. In Lecture Notes in Computer Science (pp. 146–148). Springer.
  • Camazine, S., Deneubourg, J.-L., Franks, N. R., Sneyd, J., & Theraula, G. (2003). Self-organization in biological systems (pp. 8–93). Princeton: Princeton University Press.
  • Chialvo, D. R. (2010). Emergent complex neural dynamics. Nature Physics, 6, 744–750.
  • Dauce, E., Quoy, M., Cessac, B., Doyon, B., & Samuelides, M. (1998). Self-organization and dynamics reduction in recurrent networks: stimulus presentation and learning. Neural Networks, 11, 521–533.
  • Der, R., Hesse, F., & Martius, G. (2006). Rocking stamper and jumping snakes from a dynamical systems approach to artificial life. Adaptive Behavior, 14, 105.
  • Ding, R., & Li, J. (2007). Nonlinear finite-time Lyapunov exponent and predictability. Physics Letters A, 364, 396–400.
  • Gros, C. (2007). Neural networks with transient state dynamics. New Journal of Physics, 9, 109.
  • Gros, C. (2009). Cognitive computation with autonomously active neural networks: An emerging field. Cognitive Computation, 1, 77–90.
  • Gros, C. (2010). Complex and adaptive dynamical systems: A primer (2nd ed., pp. 145–175). Springer.
  • Herrmann, C. S., & Klaus, A. (2004). Autapse turns neuron into oscillator. International Journal of Bifurcation and Chaos, 14, 623–633.
  • Kinouchi, O., & Copelli, M. (2006). Optimal dynamical range of excitable networks at criticality. Nature Physics, 2, 348–351.
  • Legenstein, R., & Maass, W. (2006). Edge of chaos and prediction of computational performance for neural circuit models. Neural Networks (pp. 1–36).
  • Levina, A., Herrmann, J. M., & Geisel, T. (2007). Dynamical synapses causing self-organized criticality in neural networks. Nature Physics, 3, 857–860.
  • Levina, A., Herrmann, J. M., & Geisel, T. (2009). Phase transitions towards criticality in a neural system with adaptive interactions. Physical Review Letters, 102, 118110.
  • Markovic, D., & Gros, C. (2010). Self-organized chaos through polyhomeostatic optimization. Physical Review Letters, 105, 068702.
  • Mozzachiodi, R., & Byrne, J. H. (2010). More than synaptic plasticity: role of nonsynaptic plasticity in learning and memory. Trends in Neurosciences, 33, 17–26.
  • Siri, B., Quoy, M., Delord, B., Cessac, B., & Berry, H. (2007). Effects of Hebbian learning on the dynamics and structure of random networks with inhibitory and excitatory neurons. Journal of Physiology, Paris, 101, 136–148.
  • Siri, B., Berry, H., Cessac, B., Delord, B., & Quoy, M. (2008). A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks. Neural Computation, 20, 2937–2966.
  • Solé, R. V., & Miramontes, O. (1995). Information at the edge of chaos in fluid neural networks. Physica D: Nonlinear Phenomena, 80, 171–180.
  • Sompolinsky, H., Crisanti, A., & Sommers, H. J. (1988). Chaos in random neural networks. Physical Review Letters, 61, 259–262.
  • Spall, J. C. (2005). Introduction to stochastic search and optimization: Estimation, simulation, and control (pp. 126–147). New Jersey: Wiley.
  • Stemmler, M., & Koch, C. (1999). How voltage-dependent conductances can adapt to maximize the information encoded by neuronal firing rate. Nature Neuroscience, 2, 521–527.
  • Triesch, J. (2005). A gradient rule for the plasticity of a neuron’s intrinsic excitability. In Artificial Neural Networks: Biological Inspirations – ICANN 2005 (pp. 65–70). Springer.
  • Triesch, J. (2007). Synergies between intrinsic and synaptic plasticity mechanisms. Neural Computation, 19, 885–909.
  • Zimmer, C. (1999). Life after chaos. Science, 284, 83–86.