Semantic learning in autonomously active recurrent neural networks
Abstract
The human brain is autonomously active, being characterized by a self-sustained neural activity that would be present even in the absence of external sensory stimuli. Here we study the interrelation between the self-sustained activity of autonomously active recurrent neural nets and external sensory stimuli.
There is no a priori semantic relation between the influx of external stimuli and the patterns generated internally by the autonomous and ongoing brain dynamics. The question then arises: when and how are semantic correlations between internal and external dynamical processes learned and built up?
We study this problem within the paradigm of transient-state dynamics for the neural activity in recurrent neural nets, i.e. for an autonomous neural activity characterized by an infinite time series of transiently stable attractor states. We propose that external stimuli will be relevant during the sensitive periods, viz the transition periods between one transient state and the subsequent semi-stable attractor. A diffusive learning signal is generated, in an unsupervised fashion, whenever the stimulus influences the internal dynamics qualitatively.
For testing we have presented to the model system stimuli corresponding to the bars-and-stripes problem. We found that the system performs a nonlinear independent component analysis on its own, while being continuously and autonomously active. This emergent cognitive capability results here from a general principle of the neural dynamics, the competition between neural ensembles.
keywords: recurrent neural networks, autonomous neural dynamics, transient state dynamics, emergent cognitive capabilities

1 Introduction
It is well known that the brain has a highly developed and complex self-generated dynamical neural activity. We are therefore confronted with a dichotomy when attempting to understand the overall functioning of the brain or when designing an artificial cognitive system: A highly developed cognitive system, such as the brain [1], is influenced by sensory input but is not driven directly by the input signals. The cognitive system nevertheless vitally needs this sensory information in order to adapt to a changing environment and to survive.
In this context we then want to discuss two mutually interrelated questions:

Can we formulate a meaningful paradigm for the self-sustained internal dynamics of an autonomous cognitive system?

How is the internal activity influenced by sensory signals, viz what are the principles for the respective learning processes?
We believe that these topics represent important challenges for research in the field of recurrent neural networks and the modeling of neural processes. From an experimental point of view we note that an increasing flux of results from neurobiology supports the notion of quasi-stationary spontaneous neural activity in the cortex [1, 2, 3, 4, 5, 6]. It is therefore reasonable to investigate the two questions formulated above with the help of neural architectures centrally based on the notion of spontaneously generated transient states, as we will do in the present investigation using appropriate recurrent neural networks.
1.1 Transient-state and competitive dynamics
Standard classification schemes of dynamical systems are based on their long-time behavior, which may be characterized, e.g., by periodic or chaotic trajectories [8]. The term 'transient-state dynamics' refers, on the other hand, to the type of activity occurring on intermediate time scales, as illustrated in Fig. 1. A time series of semi-stable activity patterns, also denoted transient attractors, is characterized by two time scales: the typical duration of the activity plateaus and the typical time needed to perform the transition from one semi-stable state to the subsequent one. The transient attractors turn into stable attractors in the limit in which the duration of the activity plateaus diverges.
Transient-state dynamics is intrinsically competitive in nature. When the current transient attractor turns unstable, the subsequent transient state is selected by a competitive process. Transient-state dynamics is a form of 'multi-winners-take-all' process, with the winning coalition of dynamical variables suppressing all other competing activities.
Humans can discern about 10-12 objects per second [7] and it is therefore tempting to identify the cognitive time scale of about 80-100 ms with the duration of the transient states illustrated in Fig. 1. Interestingly, this time scale also coincides with the typical duration [4] of the transiently active neural activity patterns observed in the cortex [2, 3, 5, 6].
Several high-level functionalities have been proposed for the spontaneous neural brain dynamics. Edelman and Tononi [9, 10] argue that 'critical reentrant events' constitute transient conscious states in the human brain. These 'states-of-mind' are in their view semi-stable global activity states of a continuously changing ensemble of neurons, the 'dynamic core'. This activity takes place in what Dehaene and Naccache [11] denote the 'global workspace'. The global workspace serves, in the view of Baars and Franklin [12], as an exchange platform for conscious experience and working memory. Crick and Koch [13] and Koch [14] have suggested that the global workspace is made up of 'essential nodes', i.e. ensembles of neurons responsible for the explicit representation of particular aspects of visual scenes or other sensory information.
1.2 Autonomously active recurrent neural nets
Traditional neural network architectures are not continuously active on their own. Feedforward setups are explicitly driven by external input [15] and Hopfield-type recurrent nets settle into a given attractor after an initial period of transient activity [16]. The possibility of performing cognitive computation with autonomously active neural networks, the route chosen by nature, is however increasingly being investigated [17]. In this context the time encoding of neural information, one of the possible neural codes [18], has been studied in various settings. Two network architectures, the echo-state network, suitable for rate-encoding neurons [19], and the liquid-state machine, suitable for spiking neurons [20], have been proposed to transiently encode a given input in time for further linear analysis by a subsequent perceptron. Both architectures, the echo-state network and the liquid-state machine, are examples of reservoir architectures with fading memories, which however remain inactive in the absence of sensory input.
An example of a continuously active recurrent network architecture is the winnerless competition based on stable heteroclinic cycles [21]. In this case the trajectory moves along heteroclinic orbits from one saddle point to the next, approaching a complex limit cycle. Close to the saddle points the dynamics slows down, leading to well defined transiently active neural activity patterns.
2 Clique Encoding in Recurrent Networks
In order to study the issues raised in the introduction, the notion of autonomous neural activity and its relation to the sensory input, we will consider a specific model based on clique-encoding recurrent nets. The emphasis will be on the discussion of the general properties and of the underlying challenges. We will therefore present here an overview of the algorithmic implementation, referring in part to the literature for further details.
2.1 Cliques, attractors and transient states
Experimental evidence indicates that sparse neural coding is an important operating principle in the brain, as it minimizes energy consumption, maximizes storage capacity and contributes to making information encoding spatially explicit [22]. A powerful form of sparse coding is multi-winners-take-all encoding in the form of cliques. The term 'clique' stems from network theory and denotes a subgraph which is fully interconnected [8]; a few examples are given in Fig. 2. Cliques are fully interconnected subgraphs of maximal size, in the sense that they are not part of another fully interconnected subgraph containing a larger number of vertices.
Clique encoding is an instance of sparse coding with spatially overlapping memory states. The use of clique encoding is in fact motivated by experimental findings indicating a hierarchical organization of overlapping neural clique assemblies for the real-time memory representation in the hippocampus [23]. In the framework of a straightforward autoassociative neural network the cliques are defined by the network of excitatory connections, which are shown as lines in Fig. 2, in the presence of an inhibitory background [24, 25]. In this setting all cliques correspond to attractors of the network, viz to spatially explicit and overlapping memory representations.
One can transform the attractor network with clique encoding into a continuously active transient-state network by introducing a reservoir variable for every neuron. In this setting the reservoir of a neuron is depleted whenever the neuron is active and refilled whenever the neuron is inactive. Via a suitable local coupling between the individual neural activity and reservoir variables, a well defined and stable transient-state dynamics is obtained [24, 25]. When a given clique becomes a winning coalition, the reservoirs of its constituent sites are depleted over time. When fully depleted, the winning coalition becomes unstable and the subsequent winning coalition is activated through a competitive associative process, leading to an ever-ongoing associative thought process. The resulting network architecture is a dense and homogeneous associative network (dHan) [24]. An illustrative result of a numerical simulation is given in Fig. 3.
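The reservoir mechanism described above can be illustrated with a minimal toy simulation. The following sketch reduces the competition to two mutually inhibiting units standing in for two cliques; the function name, rate constants and coupling form are our own illustrative choices, not the dHan implementation:

```python
import numpy as np

def simulate(steps=20000, dt=0.01):
    """Toy two-unit competition with reservoirs: the active unit slowly
    depletes its reservoir, loses the competition, and the refilled
    competitor takes over, producing an alternating series of winners."""
    x   = np.array([0.9, 0.1])            # activity levels in [0, 1]
    phi = np.array([1.0, 1.0])            # reservoir levels in [0, 1]
    w   = np.array([[0.0, -1.0],
                    [-1.0, 0.0]])         # mutual inhibition
    winners = []
    for _ in range(steps):
        # growth rate: excitatory drive gated by the reservoir,
        # minus inhibition from the competitor and a constant background
        r = (1.0 + w @ x) * phi - 0.2
        x = np.clip(x + dt * r, 0.0, 1.0)
        # reservoirs are depleted while active and refilled while inactive
        phi = np.clip(phi + dt * (0.10 * (1.0 - x) - 0.05 * x), 0.0, 1.0)
        winners.append(int(np.argmax(x)))
    return winners
```

Because refilling is faster than depletion, the alternation is sustained indefinitely: each unit enjoys a long activity plateau followed by a short transition, the hallmark of transient-state dynamics.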
For the isolated system, not coupled to any sensory input, this associative thought process has no semantic content, as the transient attractors, the cliques, have none. The semantic content can be acquired only by coupling to a sensory input and by the generation of correlations between the transient attractors and patterns extracted from the input data stream.
2.2 Competitive dynamics and sensitive periods
To be definite, we utilize a continuous-time formulation for the dHan architecture, with rate-encoding neurons characterized by normalized activity levels $x_i \in [0,1]$. One can then define, via

$$\dot{x}_i(t) \;=\; r_i\, x_i\, (1-x_i), \qquad\qquad (1)$$

the respective growth rates $r_i$. Representative time series of growth rates are illustrated in Fig. 4. When $r_i > 0$, the respective neural activity increases, rapidly approaching the upper bound; when $r_i < 0$, it decays to zero. The model is specified [24, 25] by providing the functional dependence of the growth rates $r_i$ on the set of activity levels of all sites and on the synaptic weights, as usual for recurrent or autoassociative networks.
During the transition periods many, if not all, neurons will enter the competition to become a member of the new winning coalition. The competition is especially pronounced whenever most of the growth rates are small in magnitude, with no subset of growth rates dominating over all the others. Whether this does or does not happen depends on the specifics of the model setup. In Fig. 4, two cases are illustrated. In the first case (lower graph) the competition for the next winning coalition is restricted to a subset of neurons; in the second case (upper graph) the competition is network-wide. When most neurons participate in the competition for a new winning coalition, the model will have 'sensitive periods' during the transition times and it will be able to react to possible external signals.
2.3 Sensitive periods and learning
So far we have discussed in general terms the properties of isolated models exhibiting a self-sustained dynamical behavior in terms of a never-ending time series of semi-stable transient states, as illustrated in Figs. 3 and 4, using the dHan architecture with continuous-time, rate-encoding neurons.
The importance of sensitive periods comes in when the network exhibiting transient-state dynamics is coupled to a stream of sensory input signals. It is reasonable to assume that external input signals will contribute to the growth rates via

$$r_i \;=\; r_i^{\rm int} \;+\; \Delta r_i^{\rm ext}\,
\Big(1-\Theta\big(r_i^{\rm int}\big)\,\Theta\big(-\Delta r_i^{\rm ext}\big)\Big). \qquad (2)$$
Here the $\Delta r_i^{\rm ext}$ encode the influence of the input signals, and we have now denoted with $r_i^{\rm int}$ the contribution to the growth rate that a neuron in the dHan layer receives from the other dHan neurons. The factor containing the Heaviside step functions in Eq. (2) ensures that the input signal does not deactivate the current winning coalition, as we will discuss further below. Let us assume here for a moment, as an illustration, that the input signals are suitably normalized, such that

$$\big|\Delta r_i^{\rm ext}\big| \;\lesssim\; \big|r_i^{\rm int}\big| \qquad (3)$$

in order of magnitude. For the simulations presented further below a qualitatively similar optimization will occur homeostatically. For the transient states $r_i^{\rm int} < 0$ for all sites not forming part of the winning coalition, and the input signal will therefore not destroy the transient state, compare Figs. 4 and 5. With the normalization given by Eq. (3) the total growth rate will remain negative for all inactive sites, and the sensory input will not be able to destroy the current winning coalition. The input signal will however enter the competition for the next winning coalition during a sensitive period, providing an additional boost for the respective neurons.
This situation is exemplified in Fig. 5, where we present simulation results for the 7-site system shown in Fig. 2, subject to two successive sensory inputs. The self-generated time series of winning coalitions is not redirected by the first sensory input. The second stimulus overlaps with a sensitive period and its strongest components determine the new winning coalition. The simulation results presented in Fig. 5 therefore demonstrate the existence of well defined time windows suitable for learning correlations between the input signal and the intrinsic dynamical activity. The time windows, or sensitive periods, are present during and shortly after a transition from one winning coalition to the subsequent one. A possible concrete implementation of this type of learning algorithm will be given further below.
Let us now come to the factor in Eq. (2) containing the Heaviside step functions $\Theta(\cdot)$. For vertices of the current winning coalition the intra-dHan-layer growth rates are positive, $r_i^{\rm int} > 0$. The factor therefore ensures that a suppressive $\Delta r_i^{\rm ext} < 0$ has no effect on the members of the current winning coalition. The contribution from the input may therefore alter the balance in the competition for the next winning coalition during the sensitive periods, but it cannot suppress the currently active clique.
Let us note that the setup discussed here also allows the system to react to an occasional strong excitatory input signal with $\Delta r_i^{\rm ext} \gg \big|r_i^{\rm int}\big|$. Such a strong signal would suppress the current transient state altogether and impose itself. This sensitivity to rare strong input signals is evidently important for animals and would presumably also be helpful for an artificial cognitive system.
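The gating of the external contribution discussed above can be sketched in a few lines; the function name and array layout are our own, and the product of comparisons stands for the product of Heaviside step functions:

```python
import numpy as np

def total_growth_rate(r_int, dr_ext):
    """Combine internal and external growth-rate contributions.

    A negative external contribution is blocked for neurons whose internal
    growth rate is positive (members of the current winning coalition):
    normal-strength input can bias the competition for the next coalition,
    but cannot deactivate the currently active clique."""
    r_int = np.asarray(r_int, dtype=float)
    dr_ext = np.asarray(dr_ext, dtype=float)
    gate = 1.0 - (r_int > 0) * (dr_ext < 0)   # product of Heaviside steps
    return r_int + gate * dr_ext
```

Note that a strong excitatory input (large positive `dr_ext`) always passes the gate, so the rare-strong-signal override described above is preserved.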
2.4 Diffusive learning signals
Let us return to the central problem inherent to all systems that react to input signals while possessing at the same time a nontrivial intrinsic dynamical activity: When should learning occur, i.e. when should a given neuron become more sensitive to a specific input pattern and when should it suppress its sensitivity to a sensory signal?
The framework of competitive dynamics developed above allows for a straightforward solution of this central issue: Learning should occur exclusively when the input signal makes a qualitative difference, viz when the input signal redirects the transient-state process. For illustration let us assume that the series of winning coalitions is, schematically,

$$\alpha \;\xrightarrow{\rm int}\; \beta \;\xrightarrow{\rm int}\; \gamma,$$

where the label indicates that the transitions are driven by the autonomous internal dynamics, and that the series of winning coalitions takes the form

$$\alpha \;\xrightarrow{\rm int}\; \beta \;\xrightarrow{\rm ext}\; \gamma'$$

in the presence of a sensory signal, as is the case for the data presented in Fig. 5. Note that a background of weak or noisy sensory input could be present in the first case, but learning should nevertheless occur only in the second case. A reliable distinction between these two cases can be achieved via a suitable diffusive learning signal $S(t)$. (The name 'diffusive learning signal' [8] stems from the fact that many neuromodulators are released in the brain into the intercellular medium and then physically diffuse to the surrounding neurons, influencing the behavior of large neural assemblies.) It is activated whenever any of the input contributions $\Delta r_i^{\rm ext}$ changes the sign of the respective growth rate during a sensitive period,
$$\dot{S}(t) \;=\; \Gamma_S^{+}\,\Theta\big(r_i\big)\,\Theta\big(-r_i^{\rm int}\big)\,
\big(1-S(t)\big) \;-\; \Gamma_S^{-}\,S(t), \qquad (4)$$

viz when it makes a qualitative difference. Let us remember that the $r_i^{\rm int}$ are the internal contributions to the growth rates, i.e. the input a dHan neuron receives via recurrent connections from the other dHan neurons. The diffusive learning signal therefore increases in strength only when a neuron is activated externally, but not when it is activated internally, with the $\Gamma_S^{\pm}$ denoting the respective growth and decay rates. The diffusive learning signal is a global signal, and a sum over all dynamical variables is therefore implicit on the right-hand side of Eq. (4).
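The rule just described, that the learning signal grows only when a neuron is externally activated, can be sketched as a single relaxation step. The function name and the rate values are illustrative choices, not taken from the model specification:

```python
import numpy as np

def update_S(S, r_int, dr_ext, dt=0.01, gamma_plus=10.0, gamma_minus=2.0):
    """One Euler step for a global diffusive learning signal S in [0, 1].

    S grows only when some neuron is driven into activity by the external
    input -- total growth rate positive while the internal contribution
    alone is negative -- and otherwise decays back toward zero."""
    r_int = np.asarray(r_int, dtype=float)
    r_tot = r_int + np.asarray(dr_ext, dtype=float)
    n_ext = np.sum((r_tot > 0) & (r_int < 0))   # externally activated neurons
    dS = gamma_plus * n_ext * (1.0 - S) - gamma_minus * S
    return float(np.clip(S + dt * dS, 0.0, 1.0))
```

With large growth and decay rates, S reacts on a time scale shorter than both the input and the intrinsic dHan dynamics, which is the regime the homeostatic discussion below calls for.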
2.5 The role of attention
The general procedure presented here for learning correlations between external signals and the intrinsic dynamical states of a cognitive system does not rule out other mechanisms. Here we concentrate on the learning algorithm which occurs automatically, one could say subconsciously. Active attention focusing, which in the brain is well known to potentially shut off a sensory input pathway, or to enhance the sensitivity to it, may very well work in parallel to the continuously ongoing mechanism investigated here.
We note, however, that the associative thought process within the dHan layer carries with it a dynamical attention field [24]. Neurons receiving both positive and negative contributions from the winning coalition will need smaller sensory input signals in order to be activated than neurons receiving only negative contributions. To put it colloquially: When thinking of the color blue it is easier to spot a blue car in the traffic than a white one.
3 Competitive Learning
So far we have described, in general terms, the system we are investigating. It has sensitive periods during the transition periods of the continuously ongoing transient-state process, with the learning of input signals regulated by a diffusive learning signal. The two main components are therefore the dHan layer and the input layer, as illustrated in Fig. 6.
3.1 Input data-stream analysis
The input signal acts via Eq. (2) on the dHan layer, with the contribution to the growth rate of dHan neuron $i$ given by

$$\Delta r_i^{\rm ext} \;=\; \sum_j v_{ij}\, y_j, \qquad
\Delta\tilde{r}_i^{\rm ext} \;=\; \sum_j v_{ij}\,\big(1-y_j\big), \qquad (5)$$

where we have denoted with $y_j$ the activity levels of the neurons in the input layer and with $v_{ij}$ the inter-layer synaptic weights. For subsequent use we have defined in Eq. (5) an auxiliary variable $\Delta\tilde{r}_i^{\rm ext}$, which quantifies the influence of the inactive input neurons. The task is now to find a suitable learning algorithm which extracts relevant information from the input data stream by mapping distinct input patterns onto selected winning coalitions of the dHan layer. This setup is typical for an independent component analysis [26].
The multi-winners-take-all dynamics in the dHan module implies that the individual neural activities are close to 0/1 during the transient states, and we can therefore define three types of inter-layer links (see Fig. 6):

active (‘act’)
Links connecting active input neurons with the winning coalition of the dHan module. 
orthogonal (‘orth’)
Links connecting inactive input neurons with the winning coalition of the dHan module. 
inactive (‘ina’)
Links connecting active input neurons with inactive neurons of the dHan module.
The orthogonal links take their name from the circumstance that the receptive fields of the winning coalition of the target layer need to orthogonalize to all input patterns differing from the present one. Note that it is not the receptive field of individual dHan neurons which is relevant, but rather the cumulative receptive field of a given winning coalition.
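The three link classes defined above depend only on the binary states of a link's two endpoints and can be expressed as a small helper (names are ours):

```python
def classify_link(input_active, dhan_active):
    """Classify an inter-layer link by the states of its two endpoints."""
    if input_active and dhan_active:
        return "act"    # active input neuron -> member of winning coalition
    if (not input_active) and dhan_active:
        return "orth"   # inactive input neuron -> member of winning coalition
    if input_active and (not dhan_active):
        return "ina"    # active input neuron -> inactive dHan neuron
    return None         # inactive -> inactive: not subject to optimization
```

The fourth combination, inactive input neuron to inactive dHan neuron, plays no role in the optimization rules formulated below.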
We can then formulate three simple rules for the respective link plasticities. Whenever the new winning coalition in the dHan layer is activated by the input layer, viz whenever there is a substantial diffusive learning signal, i.e. when $S(t)$ exceeds a certain threshold $S_c$, the following optimization procedures should take place:

active links
The sum over active afferent links should take a large but finite value $r_{\rm act}$, 
orthogonal links
The sum over orthogonal afferent links should take a small value $r_{\rm orth}$, 
inactive links
The sum over inactive links should take a small but nonvanishing value $r_{\rm ina}$,
The $r_{\rm act}$, $r_{\rm orth}$ and $r_{\rm ina}$ are the target values for the respective optimization processes. In order to implement these three rules we define three corresponding contributions to the link plasticities:
$$\dot{v}_{ij}\big|_{\rm act} \;=\; \Gamma_{\rm act}\,\big(r_{\rm act}-\Delta r_i^{\rm ext}\big)\, y_j\,\Theta\big(x_i-x_{\rm act}\big),$$
$$\dot{v}_{ij}\big|_{\rm orth} \;=\; \Gamma_{\rm orth}\,\big(r_{\rm orth}-\Delta\tilde{r}_i^{\rm ext}\big)\,{\rm sign}(v_{ij})\,\big(1-y_j\big)\,\Theta\big(x_i-x_{\rm act}\big), \qquad (6)$$
$$\dot{v}_{ij}\big|_{\rm ina} \;=\; \Gamma_{\rm ina}\,\big(r_{\rm ina}-\Delta r_i^{\rm ext}\big)\, y_j\,\Theta\big(x_{\rm ina}-x_i\big),$$

where the inputs $\Delta r_i^{\rm ext}$ and $\Delta\tilde{r}_i^{\rm ext}$ to the dHan layer are defined by Eq. (5), ${\rm sign}(\cdot)$ denotes the sign function and $\Theta(\cdot)$ the Heaviside step function. In Eq. (6) the $\Gamma_{\rm act}$, $\Gamma_{\rm orth}$ and $\Gamma_{\rm ina}$ are suitable optimization rates, and $x_{\rm act}$ and $x_{\rm ina}$ are the activity levels defining active and inactive dHan neurons, respectively. A suitable set of parameters, which has been used for the numerical simulations, is given in Table 1.
$\Gamma_{\rm act}=0.002$  $\Gamma_{\rm orth}=0.001$  $\Gamma_{\rm ina}=0.001$  |  $r_{\rm act}=0.8$  $r_{\rm orth}=0.2$  $r_{\rm ina}=0.2$  |  $x_{\rm act}=0.4$  $x_{\rm ina}=0.2$  |  $S_c=0.25$
Using these definitions, the total link plasticity may be written as

$$\dot{v}_{ij} \;=\; \Theta\big(S(t)-S_c\big)\,\Big[\,\dot{v}_{ij}\big|_{\rm act} + \dot{v}_{ij}\big|_{\rm orth} + \dot{v}_{ij}\big|_{\rm ina}\,\Big],$$

where $S_c$ is an appropriate threshold for the diffusive learning signal. The inter-layer links cease to be modified whenever the total input is optimal, viz when no more 'mistakes' are made [27].
We note that a given inter-layer link is in general subject to competitive optimization from the three processes (act/orth/ina). Averaging would occur if the respective learning rates $\Gamma_{\rm act}$/$\Gamma_{\rm orth}$/$\Gamma_{\rm ina}$ were of the same order of magnitude. It is therefore necessary that $\Gamma_{\rm act} > \Gamma_{\rm orth}$ and $\Gamma_{\rm act} > \Gamma_{\rm ina}$.
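The three target-value rules, gated by the diffusive learning signal, can be sketched as a single update step. The functional form below is our reconstruction of the rules described above, not a verbatim transcription of the model; the default parameter values follow Table 1, with our assumed assignment of the tabulated numbers to the symbols:

```python
import numpy as np

def update_links(v, y, x, S, dt=1.0,
                 g_act=0.002, g_orth=0.001, g_ina=0.001,
                 r_act=0.8, r_orth=0.2, r_ina=0.2,
                 x_act=0.4, x_ina=0.2, S_c=0.25):
    """One plasticity step for the inter-layer weights v[i, j].

    v : weights from input neuron j to dHan neuron i
    y : input-layer activities;  x : dHan-layer activities;  S : learning signal
    The fan-in sums sum_j v[i,j]*y[j] (active links) and sum_j v[i,j]*(1-y[j])
    (orthogonal links) are driven toward their target values, but only while
    the diffusive learning signal exceeds its threshold S_c."""
    if S <= S_c:
        return v                              # no qualitative surprise: freeze
    dr  = v @ y                               # input via active input neurons
    drt = v @ (1.0 - y)                       # input via inactive input neurons
    active   = (x > x_act).astype(float)      # winning-coalition neurons
    inactive = (x < x_ina).astype(float)
    dv  = g_act  * np.outer((r_act  - dr)  * active,   y)
    dv += g_orth * np.outer((r_orth - drt) * active,   1.0 - y) * np.sign(v)
    dv += g_ina  * np.outer((r_ina  - dr)  * inactive, y)
    return v + dt * dv
```

Iterating the update for a single active dHan neuron and a single active input neuron drives the active fan-in sum toward the target `r_act`, illustrating the 'optimize, not maximize' character of the rules.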
3.2 Homeostatic normalization
It is desirable that the inter-layer connections neither grow unbounded with time (runaway effect) nor fade into irrelevance. Suitable normalization procedures are therefore normally included explicitly in the respective neural learning rules; here they are present implicitly in Eqs. (6) and (3.1).
The strength of the input signal is optimized by Eq. (3.1) both for active and for inactive dHan neurons, a property referred to as fan-in normalization. Eqs. (6) and (3.1) also regulate the overall strength of the inter-layer links emanating from a given input-layer neuron, a property called fan-out normalization.
Next we note that the time scales of the intrinsic autonomous dynamics in the dHan layer and of the input signal could in principle differ substantially. Potential interference problems can be avoided when learning is switched on very fast. In this case the activation and decay rates for the diffusive learning signal are large, and the corresponding characteristic time scales are smaller than both the typical time scales of the input and of the self-sustained dHan dynamics.
4 The Bars Problem
A cognitive system needs to autonomously extract meaningful information about its environment from its sensory input data stream via signal separation and feature extraction. The identification of recurrently appearing patterns, i.e. of objects, against a background of fluctuations and of combinations of distinct and noisy patterns constitutes a core demand in this context. This is the domain of independent component analysis [26] and blind source separation [28], which seek to find distinct representations of statistically independent input patterns.
In order to test our system, made up of an input layer coupled to a dHan layer, as illustrated in Fig. 6, we have selected the bars problem [29, 30]. The bars problem constitutes a standard nonlinear reference task for feature extraction via a nonlinear independent component analysis. The basic patterns are the vertical and horizontal bars of the input field. The individual input patterns are made up of a nonlinear superposition of the basic bars, containing any one of them with a given probability, as illustrated in Fig. 7.
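A generator for such input patterns might look as follows; the field size and the bar probability `p` are our assumptions for illustration, as the text does not fix them at this point:

```python
import numpy as np

def bars_pattern(rng, size=5, p=0.1):
    """Random bars-problem input pattern: each of the 2*size elementary
    bars (size horizontal, size vertical) is drawn independently with
    probability p; overlapping bars superpose nonlinearly (logical OR),
    i.e. a site covered by two bars is just as black as one covered by one."""
    img = np.zeros((size, size))
    for row in np.nonzero(rng.random(size) < p)[0]:
        img[row, :] = 1.0                 # horizontal bar
    for col in np.nonzero(rng.random(size) < p)[0]:
        img[:, col] = 1.0                 # vertical bar
    return img
```

The OR-superposition is what makes the task nonlinear: the intensity at a crossing of two bars is not the sum of the two sources.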
4.1 Simulations and setup
For the simulations we presented to the system a large number of randomly generated input patterns of the type shown in Fig. 7. The bar patterns are black/white, with $y_j = 1/0$ for active/inactive sites, irrespective of possible overlaps of vertical and horizontal bars. Each individual pattern was presented for a finite duration, with a pause between two successive input signals; these time scales are to be compared with the time scale of the autonomous dHan dynamics illustrated in Figs. 4 and 5. We also note that there is no active training phase for the system. The associative thought process continues in the dHan layer; at no time are the neural activities reset and the system restarted. All that happens is that the ongoing associative thought process is influenced from time to time by the input layer, and that the synaptic strengths connecting the input layer to the dHan layer are then modified via Eq. (3.1).
The results of the simulation are presented in Fig. 8. For the geometry of the dHan network we used a regular 20-site star containing 10 cliques, with every clique being composed of four neurons, see Fig. 2. In Fig. 8 we present the response
$$R_{\alpha\beta} \;=\; \frac{1}{|C(\alpha)|}\sum_{i\,\in\, C(\alpha)} \Delta r_i^{\rm ext}(\beta) \qquad (7)$$

of the 10 cliques $\alpha$ in the dHan layer to the 10 basic input patterns $\beta$, the isolated bars. Here $C(\alpha)$ denotes the set of sites of the winning coalition $\alpha$ and $|C(\alpha)|$ its size, here $|C(\alpha)| = 4$. The response is equivalent to the clique-averaged afferent synaptic signal $\Delta r_i^{\rm ext}$, compare Eq. (5), in the presence of an elementary bar in the sensory input field.
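Computing this response amounts to averaging the afferent signals over the members of a winning coalition, which can be sketched as follows (function and variable names are ours):

```python
import numpy as np

def clique_response(clique, v, pattern):
    """Average afferent input a clique receives for a given input pattern.

    clique  : indices of the dHan neurons forming the winning coalition
    v       : inter-layer weights v[i, j]
    pattern : input pattern, flattened to the activities y_j"""
    y = np.asarray(pattern, dtype=float).ravel()
    dr = v[np.asarray(clique)] @ y        # Delta r_ext of the clique members
    return float(dr.mean())
```

Evaluating this quantity for every (clique, elementary-bar) pair yields the response matrix plotted in Fig. 8.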
4.2 Semantic learning
The individual potential winning coalitions, viz the cliques, have acquired in the course of the simulation, via the learning rule Eq. (3.1), distinct susceptibilities to the 10 bars, compare Fig. 8. At the start of the simulation the winning coalitions were determined solely by the properties of the network topology, viz by the cliques, and had no explicit semantic significance. The susceptibilities to the individual bars, which the cliques have acquired via the competition of the internal dHan dynamics with the sensory input data stream, can then be interpreted as a semantic assignment. The internal associative thought process of the dHan layer therefore becomes semantically meaningful via the coupling to the environment, here corresponding to a sequence of horizontal and vertical bars. This learning paradigm is compatible with multi-electrode array studies of the visual cortex of developing ferrets [1], which indicate that the ongoing cortical dynamics is void of semantic content immediately after birth, acquiring semantic content during adolescence.
4.3 Competitive learning
The winning coalitions of the dHan layer are overlapping and every link in general targets more than one potential winning coalition in the dHan layer. This feature contrasts with the 'single-winner-takes-all' setup normally used for standard neural algorithms performing an independent component analysis [26], for which the target neurons are physically separated. For the regular 20-site network used in the simulation every dHan neuron appertains to exactly two cliques, compare Fig. 2. The unsupervised learning procedure, Eq. (3.1), therefore involves a competition between the contributions act, orth and ina given by Eq. (6). For the simulations we used a set of parameters, see Table 1, for which the active-link contribution is adapted at a much higher rate than the orthogonal and inactive contributions. The responses of the winning coalitions are therefore close to, but somewhat below, the target value $r_{\rm act}$ used for the simulations, compare Fig. 8. The target value will not be reached even for extended simulations, due to the competition with the other optimization processes, namely orth and ina, compare Eq. (6).
4.4 Receptive fields
The averaged receptive fields

$$F_{\alpha j} \;=\; \frac{1}{|C(\alpha)|}\sum_{i\,\in\, C(\alpha)} v_{ij} \qquad (8)$$

of the cliques $\alpha$ in the dHan layer with respect to the input neurons $j$ are also presented in Fig. 8. The inter-layer synaptic weights can be both positive and negative, and the orthogonalization procedure, Eq. (6), results in complex receptive fields. The time-evolution equations (3.1) for the inter-layer synaptic strengths optimize, but do not maximize, the response of the winning coalition to a given input signal. The receptive fields consequently retain a certain scatter, since the optimization via Eq. (3.1) ceases whenever a satisfactory signal separation has been obtained. This behavior is consistent with the 'learning by mistakes' paradigm [27], which states that a cognitive system in general needs to learn only when committing a mistake.
4.5 Emergent cognitive capabilities
The simulation results for the bars problem presented in Fig. 8 may be generalized to larger systems. For comparison we now discuss the results for a larger bars problem, for which there are ten horizontal and ten vertical elementary bars. For the dHan network we used a regular 40-site star with 20 cliques, a straightforward generalization of the regular star illustrated in Fig. 2. We otherwise used exactly the same set of parameters as previously, in particular also the same number of input training patterns. No optimization of parameters has been performed. The respective responses and receptive fields (compare Eqs. (7) and (8)) are presented in Figs. 9 and 10.
The probability for any of the 20 bars to occur in a given input pattern, like the ones illustrated in Fig. 7, is fixed, and an individual input pattern contains on average several bars superposed nonlinearly. The separation of the 20 statistically independent components in the input data stream is therefore a nontrivial task. The results presented in Fig. 9 indicate that the system performs the source separation surprisingly well, though not perfectly. The respective receptive fields, shown in Fig. 10, are only in part self-evident. This is, again, due to the competitive nature of the unsupervised and local learning process, whose task is to optimize the input rates to the dHan layer and not to maximize the signal-to-noise ratio. We note in this context that the system contains no prior knowledge about the nature and statistics of the input signals.
In fact, the system has not been constructed in the first place to tackle the nonlinear independent component task. The setup used here has been motivated by two simple guiding principles, the occurrence of self-sustained internal neural activity and the principle of competitive neural dynamics. These principles have been used in our study to examine the interplay of the self-sustained internal neural dynamics with the inflow of external information via a sensory data stream. One can therefore interpret, to a certain extent, the capability of the system to perform a nonlinear independent component analysis as an example of an 'emergent cognitive capability'. This information-processing capability emerges from general construction principles and does not result from the implementation of a specific neural algorithm.
5 Discussion and Challenges
5.1 Discussion
A standard approach in the field of neural networks is to optimize the design of a network such that a given cognitive or computational task can be tackled efficiently. This strategy has been very successful in the past with respect to technical applications like handwriting recognition [31] and regarding the modeling of initial feedforward sensory information processing in cortical areas like the primary visual cortex [32]. Task-driven network design standardly results in input-driven neural networks, with cognitive computation coming to a standstill in the absence of sensory inputs.
Real-world cognitive systems like the human brain are however driven by their own internal dynamics, and it constitutes a challenge for present and future research in the field of neural networks to combine models of this self-sustained brain activity with the processing of sensory data. This challenge concerns especially recurrent neural networks, since recurrency is an essential ingredient for the occurrence of spontaneous internal neural activity.
In this work we studied the interplay of self-generated neural states, the time series of winning coalitions, with the sensory input for the purpose of unsupervised feature extraction. We proposed that learning is activated autonomously during the transition from one winning coalition to the subsequent one.
This general principle may be implemented algorithmically in various fashions. Here we used a generalized neural net (dHan, a dense homogeneous associative net) for the autonomous generation of a time series of associatively connected winning coalitions, and controlled the unsupervised extraction of input features with an autonomously generated diffusive learning signal.
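The core mechanism, Hebbian learning of afferent weights gated by a global diffusive signal, can be illustrated schematically. The sketch below is not the dHan implementation; the winner-take-all readout, the learning rate eta, and the normalization of input rates are simplifying assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

N_IN, N_UNITS = 100, 20
W = rng.random((N_UNITS, N_IN))
W /= W.sum(axis=1, keepdims=True)      # normalize each unit's input rate

def step(x, W, diffusive_signal, eta=0.05):
    """One competitive update: the unit receiving the largest input
    wins, and Hebbian learning of its afferent weights takes place
    only while the global 'diffusive' learning signal is active,
    i.e. during the sensitive transition periods."""
    winner = int(np.argmax(W @ x))
    if diffusive_signal:               # learning gated by the global signal
        W[winner] += eta * x
        W[winner] /= W[winner].sum()   # keep the winner's input rate normalized
    return winner

x = (rng.random(N_IN) < 0.2).astype(float)   # a sparse input pattern
w_before = W.copy()
k = step(x, W, diffusive_signal=True)
```

With the gating signal off, the same stimulus leaves the weights untouched; features are thus extracted only when the input actually influences the transition between winning coalitions.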
We tested the algorithm on the bars problem and found good and fast learning; the initially semantically void transient states acquired, through interaction with the input data stream, a semantic significance. Further preliminary results indicate that the learning algorithm retains functionality under a wide range of conditions and for various sets of parameters. We plan to extend the simulations to various forms of temporal inputs, especially to quasi-continuous input and to natural-scene analysis, and to study the embedding of the concept proposed here within the framework of a full-fledged and autonomously active cognitive system.
5.2 The overall perspective
There is a growing research effort to develop universal operating principles for biologically inspired cognitive systems, the rationale being that the number of genes in the human genome is by far too small for the detailed encoding of the vast array of neural algorithms the brain is capable of. There is therefore a growing consensus that universal operating principles may be of key importance also for synthetic cognitive and complex systems [33, 34]. The present work is motivated by this line of approach.
Universal operating principles for a cognitive system remain functionally operative for a wide range of environmental conditions. Examples include universal time-prediction tasks for the unsupervised extraction of abstract concepts and intrinsic generalized grammars from the sensory data input stream [35, 36, 8], and the optimization of complexity and information-theoretical measures in closed-loop sensorimotor studies of simulated robots [37, 38, 39]. The present study is motivated by a similar line of thinking, investigating the consequences of a self-sustained internal neural activity in recurrent networks, based on the notion of transient-state and competitive neural dynamics. The long-term goal of an autonomous cognitive system is pursued here via a modular strategy, with each module being based on one of the general architectural and operational principles mentioned above.
References
 [1] J. Fiser, C. Chiu and M. Weliky, Small modulation of ongoing cortical dynamics by sensory input during natural vision, Nature 431 (2004) 573–578.
 [2] M. Abeles et al., Cortical activity flips among quasi-stationary states, PNAS 92 (1995) 8616–8620.
 [3] D.L. Ringach, States of mind, Nature 425 (2003) 912–913.
 [4] T. Kenet, D. Bibitchkov, M. Tsodyks, A. Grinvald and A. Arieli, Spontaneously emerging cortical representations of visual attributes, Nature 425 (2003) 954–956.
 [5] J.S. Damoiseaux, S.A.R.B. Rombouts, F. Barkhof, P. Scheltens, C.J. Stam, S.M. Smith and C.F. Beckmann, Consistent resting-state networks across healthy subjects, PNAS 103 (2006) 13848–13853.
 [6] C.J. Honey, R. Kötter, M. Breakspear and O. Sporns, Network structure of cerebral cortex shapes functional connectivity on multiple time scales, PNAS 104 (2007) 10240–10245.
 [7] R. VanRullen and C. Koch, Is perception discrete or continuous?, Trends in Cognitive Sciences 5 (2003) 207–213.
 [8] C. Gros, Complex and Adaptive Dynamical Systems: A Primer, Springer 2008.
 [9] G.M. Edelman and G.A. Tononi, A Universe of Consciousness, New York: Basic Books 2000.
 [10] G.M. Edelman, Naturalizing consciousness: A theoretical framework, PNAS 100 (2003) 5520–5524.
 [11] S. Dehaene and L. Naccache, Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework, Cognition 79 (2003) 1–37.
 [12] B.J. Baars and S. Franklin, How conscious experience and working memory interact, Trends in Cognitive Sciences 7 (2003) 166–172.
 [13] F.C. Crick and C. Koch, A framework for consciousness, Nature Neuroscience 6 (2003) 119–126.
 [14] C. Koch, The Quest for Consciousness: A Neurobiological Approach, Roberts and Company 2004.
 [15] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall 1994.
 [16] J.J. Hopfield, Neural Networks and Physical Systems with Emergent Collective Computational Abilities, PNAS 79 (1982) 2554–2558.
 [17] C. Gros, Cognitive computation with autonomously active neural networks: An emerging field, Cognitive Computation (2009, in press).
 [18] J.J. Eggermont, Is There a Neural Code?, Neuroscience and Biobehavioral Reviews 22 (1998) 355–370.
 [19] H. Jaeger and H. Haas, Harnessing Nonlinearity: Predicting Chaotic Systems and Saving Energy in Wireless Communication, Science 304 (2004) 78–80.
 [20] W. Maass, T. Natschläger and H. Markram, Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations, Neural Computation 14 (2002) 2531–2560.
 [21] M. Rabinovich, A. Volkovskii, P. Lecanda, R. Huerta, H.D.I. Abarbanel and G. Laurent, Dynamical Encoding by Networks of Competing Neuron Groups: Winnerless Competition, Physical Review Letters 87 (2001) 068102.
 [22] B.A. Olshausen and D.J. Field, Sparse coding of sensory inputs, Current Opinion in Neurobiology 14 (2004) 481–487.
 [23] L. Lin, R. Osan and J.Z. Tsien, Organizing principles of real-time memory encoding: neural clique assemblies and universal neural codes, Trends in Neurosciences 29 (2006) 48–57.
 [24] C. Gros, SelfSustained Thought Processes in a Dense Associative Network, in KI 2005, U. Furbach (Ed.), Springer Lecture Notes in Artificial Intelligence 3698 (2005) 366379; also available as http://arxiv.org/abs/qbio.NC/0508032.
 [25] C. Gros, Neural networks with transient state dynamics, New Journal of Physics 9 (2007) 109.
 [26] A. Hyvärinen and E. Oja, Independent component analysis: Algorithms and applications, Neural Networks 13 (2000) 411–430.
 [27] D.R. Chialvo and P. Bak, Learning from mistakes, Neuroscience 90 (1999) 1137–1148.
 [28] S. Choi, A. Cichocki, H.M. Park and S.Y. Lee, Blind Source Separation and Independent Component Analysis: A Review, Neural Information Processing 6 (2005) 1–57.
 [29] P. Földiák, Forming sparse representations by local anti-Hebbian learning, Biological Cybernetics 64 (1990) 165–170.
 [30] N. Butko and J. Triesch, Learning Sensory Representations with Intrinsic Plasticity, Neurocomputing 70 (2007) 1130–1138.
 [31] G. Dreyfus, Neural Networks: Methodology and Applications, Springer, 2005.
 [32] M.A. Arbib, The Handbook of Brain Theory and Neural Networks, MIT Press 2002.
 [33] C. Müller-Schloer, C. von der Malsburg and R.P. Würtz, ‘Organic Computing’, Informatik Spektrum 27 (2004) 26.
 [34] R.P. Würtz, Organic Computing, Springer Verlag, 2008.
 [35] J.L. Elman, Finding structure in time, Cognitive Science 14 (1990) 179–211.
 [36] J.L. Elman, An alternative view of the mental lexicon, Trends in Cognitive Sciences 8 (2004) 301–306.
 [37] A.K. Seth and G.M. Edelman, Environment and Behavior Influence the Complexity of Evolved Neural Networks, Adaptive Behavior 12 (2004) 5–20.
 [38] O. Sporns and M. Lungarella, Evolving coordinated behavior by maximizing information structure, in L. Rocha et al. (eds), Proceedings of Artificial Life X (2006) 37.
 [39] N. Ay, N. Bertschinger, R. Der, F. Güttler and E. Olbrich, Predictive information and explorative behavior of autonomous robots, European Physical Journal B 63 (2008) 329–339.