Learning in a Competitive Network with Haeussler's Equation adapted using the FIREFLY Algorithm
Abstract
Many competitive neural networks consist of spatially arranged neurons. The weight matrix that connects the cells represents local excitation and long-range inhibition. Such networks are known as soft winner-take-all networks and have been shown to exhibit desirable information-processing properties. The local excitatory connections are often predefined and hand-wired, based on a spatial arrangement chosen using prior knowledge of the data. Here we present learning in a recurrent network through Haeussler's equation, with a modified wiring scheme based on the biologically inspired Firefly algorithm. The results show learning in such a network from input patterns, without hand-wiring a fixed topology.
pacs:
42.30.Sy, 42.30.Tz
I Introduction
The concepts of competitive learning and self-organization in neural networks were first introduced by Malsburg (1973) and successfully demonstrated for retinotopy Mals (); Haeuss (). Many competitive neural networks developed later consist of spatially arranged neurons. The weight matrix that connects the cells represents local excitation and long-range inhibition. The "soft winner-take-all" dynamics is commonly used in such networks; it keeps a local group of winners rather than the single winner of "hard" winner-take-all (WTA) dynamics WTA (); SWTA (). The Self-Organizing Map is another example that uses a similar mechanism SOM (). In WTA networks the network topology is mostly predefined and spatially fixed. The soft-WTA mechanism can be implemented by the Mexican hat function, also called the Ricker function: the winning neuron and its immediate neighbours are connected with positive weights that decrease with distance, while far-distant neurons receive negative weights.
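For concreteness, the Mexican hat (Ricker) profile can be written as a function of the distance between two neurons. The following sketch is illustrative only; the width sigma is an assumed parameter, not a value from this paper:

```python
import numpy as np

def ricker(d, sigma=1.0):
    """Mexican-hat (Ricker) lateral weight profile: positive
    (excitatory) at small distances d, negative (inhibitory)
    at larger distances, decaying back towards zero."""
    x = (d / sigma) ** 2
    return (1.0 - x) * np.exp(-x / 2.0)

d = np.arange(5)          # distances 0..4 on a 1-D grid
w = ricker(d, sigma=1.5)  # near neighbours positive, far ones negative
```

With this profile, the winning neuron excites its immediate neighbours and inhibits distant ones, which is exactly the soft-WTA wiring described above.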
A FIREFLY algorithm was developed in order to simulate a soft-WTA network with an adaptive network topology. The dynamics of frequency synchronization in fireflies was described by Hanson (1978). Here we simulate the spatial development of neurons modelled as fireflies, which can be used to form a population of inhibitory and excitatory neurons firefly (); firefly_2 (); firefly_3 (). The disadvantage of this scheme is one additional computational step and extra memory usage; on the other hand, the efficiency of recall and signal reconstruction using this algorithm is much better. In the following sections we describe the learning mechanism using Haeussler's equation and the improvement achieved using the FIREFLY algorithm.
II Haeussler's Equation
Haeussler's equation can be written in the following form:
$\dot{w}_{ij} = f(w_{ij})\left[G_{ij}(w) - \frac{w_{ij}}{2}\left(\frac{1}{N}\sum_{k} G_{kj}(w) + \frac{1}{N}\sum_{l} G_{il}(w)\right)\right], \qquad G_{ij}(w) = \alpha + w_{ij}\,C_{ij}$   (1)
with the additional saturation condition:
$f(w_{ij}) = \Theta(w_{ij})\,\Theta(v - w_{ij})$   (2)
where $\Theta$ is the Heaviside step function.
Here, the function $f$ enforces saturation of the synaptic weights at the value $v$, such that $0 \le w_{ij} \le v$. The first term, $\alpha$, by itself is trying to pull all weights up uniformly, whereas the second term by itself would let the weights eventually converge towards a normalized sum, so that the sum in the second term can be interpreted as a weighted average of the growth terms, $\bar{G}_{ij} = \frac{1}{2}\left(\frac{1}{N}\sum_k G_{kj} + \frac{1}{N}\sum_l G_{il}\right)$. The second term, then, lets those weights that receive above-average cooperative help grow at the expense of the weights with below-average cooperation. The weights thus favor the higher cooperation terms, increasing $\bar{G}_{ij}$, letting more weights fall below the average cooperation and start decaying, until finally only the weight(s) with maximal cooperation stabilize. If $\alpha$ is positive, the system is driven towards a compromise between the first and the second term of eq. (1). The initial state of $w$ should be random, with the probability or strength of a connection between cells $i$ and $j$ falling off monotonously with the distance between the two in a two-dimensional plane. For a detailed explanation of the equation, see the Appendix.
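A minimal numerical sketch of one update step, assuming the dynamics take the standard Haeussler form described above (growth term $\alpha + wC$ minus the averaged competition), with hard clipping to $[0, v]$ standing in for the saturation function; the function name and all parameter values are illustrative:

```python
import numpy as np

def haeussler_step(W, C, alpha=0.1, v=1.0, dt=0.01):
    """One Euler step of the (assumed) Haeussler dynamics:
    growth G = alpha + W*C per connection, minus the row/column
    averaged competition, with weights clipped to [0, v]."""
    G = alpha + W * C                                    # growth term
    avg = 0.5 * (G.mean(axis=1, keepdims=True)
                 + G.mean(axis=0, keepdims=True))        # averaged competition
    dW = G - W * avg
    return np.clip(W + dt * dW, 0.0, v)                  # hard saturation

W = np.full((4, 4), 0.5)       # uniform initial weights
C = np.zeros((4, 4))
C[0, 1] = 1.0                  # one connection with high cooperation
W2 = haeussler_step(W, C)      # that weight grows fastest
```

A weight with above-average cooperation (here $w_{01}$) receives a larger update than the below-average ones, which is the competitive mechanism described in the text.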
III Response of the Equation
Consider $N$ neurons, each with two lateral connections. Fig. 1 shows the evolution of the weight matrix and of the weight vector under the modified equation. The following conditions are assumed:

$w_{ij} = 0$ for $i = j$ (no self-connections).

Thus, if each unit has two neighbouring connections, the initial weights are nonzero only for the immediate neighbours.

In the example, two neighbours per unit are assumed. To avoid any boundary effects, periodic boundary conditions are imposed. Fig. 1 shows the immediate neighbours strengthening their weights while all other connections weaken.
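The setup above can be sketched as follows. The cooperation values and all parameters are illustrative assumptions, chosen only to reproduce the qualitative behaviour of Fig. 1 (immediate neighbours strengthen, distant connections stay weak):

```python
import numpy as np

N = 8
# Initial weights: each unit connects to its two immediate
# neighbours on a ring (periodic boundary), no self-connections.
W = np.zeros((N, N))
for i in range(N):
    W[i, (i - 1) % N] = W[i, (i + 1) % N] = 0.5

# Illustrative cooperation matrix: strongest for immediate
# neighbours, weaker for next-nearest ones (assumed values).
C = np.zeros((N, N))
for i in range(N):
    C[i, (i - 1) % N] = C[i, (i + 1) % N] = 1.0
    C[i, (i - 2) % N] = C[i, (i + 2) % N] = 0.3

alpha, v, dt = 0.01, 1.0, 0.05
for _ in range(200):
    G = alpha + W * C                                  # growth term
    avg = 0.5 * (G.mean(axis=1, keepdims=True)
                 + G.mean(axis=0, keepdims=True))      # averaged competition
    W = np.clip(W + dt * (G - W * avg), 0.0, v)        # Euler step + saturation
    np.fill_diagonal(W, 0.0)                           # keep w_ii = 0
```

After convergence the neighbour weights saturate near $v$ while the distant connections decay towards a small residual value set by $\alpha$.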

IV Input Patterns
Two types of input patterns were used, one-dimensional and two-dimensional, both defined through a Gaussian function. Fig. 2 shows an input in square form.
In the one-dimensional case the input pattern is given as
$I_i = \exp\left(-\frac{(i - c)^2}{2\sigma^2}\right)$   (3)
where $c$ is the central value, varied over the number of neurons. Similarly, in the two-dimensional case, the centres $c_x$ and $c_y$ are varied over the $\sqrt{N}\times\sqrt{N}$ grid, where $N$ is the total number of neurons:
$I_{ij} = \exp\left(-\frac{(i - c_x)^2 + (j - c_y)^2}{2\sigma^2}\right)$   (4)
The input vectors are normalized, generated at random for training, and fed to the network.
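The input generation can be sketched as follows, assuming unit-norm normalization (the text states that the inputs are normalized but not how) and illustrative values for $\sigma$:

```python
import numpy as np

def input_1d(n, c, sigma=1.0):
    """1-D Gaussian input pattern centred on neuron c, unit-normalized."""
    i = np.arange(n)
    p = np.exp(-((i - c) ** 2) / (2.0 * sigma ** 2))
    return p / np.linalg.norm(p)

def input_2d(side, cx, cy, sigma=1.0):
    """2-D Gaussian input on a side x side grid, centred at (cx, cy)."""
    x, y = np.meshgrid(np.arange(side), np.arange(side), indexing="ij")
    p = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))
    return p / np.linalg.norm(p)

# Training set: the centre c is drawn at random for each pattern.
rng = np.random.default_rng(0)
patterns = [input_1d(16, rng.integers(16), sigma=2.0) for _ in range(5)]
```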
V FIREFLY Algorithm
In the cases discussed earlier, a 2D grid is predefined. To modify this scenario, a population-based firefly algorithm is developed. The female fireflies are attracted towards dominant male fireflies depending on their brightness; the frequency-driven description was developed by Hanson (1978). Here, excitatory and inhibitory neurons are attracted to an active cell depending on its activity level, or brightness, thus forming a population of neurons which is used to define the weight matrix. The movement of the neurons is governed by the equations:
$x_i \leftarrow x_i + \beta(r_{ij})\,(x_j - x_i) + \eta\,\epsilon_i$   (5)
where $\beta$ is the brightness-dependent attractiveness of a 'male firefly' to which the 'female' fireflies are attracted, $r_{ij}$ is the distance between flies $i$ and $j$, and $\epsilon_i$ is a random vector with amplitude $\eta$. The frequency dependence is ignored in this case.
$\beta(r) = \beta_0\, e^{-\gamma r^2}$   (6)
After feeding each pattern to the network, the population of fireflies is redistributed, thus redefining the distribution of inhibitory and excitatory neurons. The weight matrix is then subjected to Haeussler's equation, reducing the number of excitatory and inhibitory neurons that represent a particular pattern and remain active.
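The movement rule can be sketched with the standard firefly position update, with the frequency dependence ignored as stated above; the brightness, decay constant, and noise amplitude are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def firefly_step(pos, target, brightness=1.0, gamma=0.5, noise=0.05):
    """Move each firefly towards a brighter one at `target`:
    attraction decays with the squared distance r^2, and a small
    random-walk term keeps the population exploring."""
    r2 = np.sum((pos - target) ** 2, axis=1)
    beta = brightness * np.exp(-gamma * r2)[:, None]   # attractiveness
    step = noise * (rng.random(pos.shape) - 0.5)       # random exploration
    return pos + beta * (target - pos) + step

# Population of neurons scattered on a 2-D plane; one active cell
# (the brightest firefly) sits at the origin.
pos0 = rng.random((20, 2)) * 4.0 - 2.0
pos = pos0.copy()
for _ in range(50):
    pos = firefly_step(pos, np.zeros(2))
```

The population contracts around the active cell, which is how the redistribution of excitatory and inhibitory neurons around active units is modelled here.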
VI Results
For demonstration purposes a low number of neurons is used, and the previously described 2D input patterns are fed to the network. The learning algorithm can either run when a new input is first presented or only after the network has converged. Running the learning algorithm at stimulus onset is the more biophysiologically plausible scenario. Fig. 3 shows the learnt weight matrix for the two-dimensional case.
In the earlier examples the periodic boundary condition was not applied; hence one observes asymmetry at the boundaries.
Fig. 4 compares the pattern recalled from memory with and without the FIREFLY subroutine. One clearly observes that the recalled pattern matches the input pattern more closely when FIREFLY is applied.
This type of recurrent competitive network is also able to complete partial patterns. Fig. 5 shows the signal recall when a partially distorted pattern is presented. After training the network, the excitatory connections are strengthened between units whose activity is highly correlated, which provides a solution for the missing information.
When two fused inputs are presented, the network treats them as a single similar input, as shown in Fig. 6; if one of the two inputs is weaker than the other, it captures the difference between them.
An Example with Handwritten Digits
As a "real world" example we consider images of handwritten digits, a standard dataset available on the internet. A model image of each digit is prepared and fed to the network for learning. A noisy image is then fed to the network, which outputs the corresponding stored image. Fig. 7 shows the stored and retrieved patterns using the FIREFLY algorithm.
VII Discussion
Here we have shown that the biologically motivated FIREFLY algorithm can be used for a memory-based neural network with competitive learning. The network has the ability to act as an associative memory, retrieve information, handle noisy signals, and recall partial information. One disadvantage is the additional calculation time required to simulate the firefly algorithm, but the spontaneous population-based algorithm makes the network topology-independent. Further efforts involve optimizing the numbers of excitatory and inhibitory cells, for which we plan to implement Lotka-Volterra equations.
Appendix A
The goal is to describe an associative memory for fragments of image content. Each fragment has the form of a 2D neural field with short-range connections. Fragments may tile a larger visual domain, in which case the fragments correspond to texture patches. After some training, the collection of all fragments should represent a codebook for all texture patches occurring in the input with some statistical significance. The visual domain is composed of so-called columns. Each column stands for a small resolvable region of the image and contains a full set of texture neurons, e.g., neurons with receptive fields in the form of Gabor wavelets. A particular visual input selects a subset of all the neurons of the domain; the set of indices of those neurons that are active defines a state. A state is assumed to be active for a typical period of fixed length. The goal of the associative memory is to extract, from a long sequence of image inputs, a set of such states. We restrict consideration to image patches of a certain diameter, and designate accordingly the set and number of neurons contained in a patch. The whole image domain is composed of a tiling of such patches, and an infinite number of possible images can be represented combinatorially, with each patch selecting one of its states. Whereas the general idea of associative memory is that states are stored by plastically strengthening all connections between the neurons that are simultaneously active in a state, we want the connections between neurons active within a patch's activity state to form a network pattern (a "net fragment") that is an attractor under network self-organization. In network self-organization the activity of the neurons addressed by the input fluctuates spontaneously, and connections are changed by Hebbian plasticity. This is a positive feedback loop, as excitatory connections generate signal fluctuations, which in turn strengthen connections.
This instability is checked by a constraint limiting the weight sum of the connections converging on a neuron. Given enough time, this loop runs into an attractor state that is stabilized by the balance between Hebbian plasticity and the weight-sum constraint. We thus assume that at a given time the input selectively activates neurons with the help of source terms $s_i$ (with $s_i = 0$ for neurons not addressed by the input). These source terms fluctuate with a given time constant and have correlations $\langle s_k s_l \rangle$, where the temporal average is taken over that time scale. We assume further that the propagation of signal deviations from the average is very fast, so that we get equilibrium signal deviations $\sigma_i$ changing on the same time scale as the sources $s_i$. These equilibrium deviations, then, obey the equation
$\sigma_i = s_i + \sum_j w_{ij}\,\sigma_j$   (7)
or in vector notation
$\sigma = s + W\sigma$   (8)
which is solved by
$\sigma = (\mathbb{1} - W)^{-1}\, s$   (9)
Expanding that matrix inverse as
$(\mathbb{1} - W)^{-1} = \mathbb{1} + W + W^2 + W^3 + \dots$   (10)
and breaking this series off after the third power, we have an approximate inverse, which we call $M = \mathbb{1} + W + W^2 + W^3$, so that $\sigma \approx M s$.
With this in hand we can compute temporal correlations of signals
$\langle \sigma_i \sigma_j \rangle = \sum_{k,l} M_{ik} M_{jl}\, \langle s_k s_l \rangle = \sum_{k} M_{ik} M_{jk} = \left(M M^{T}\right)_{ij}$   (11)
where we have used
$\langle s_k s_l \rangle = \delta_{kl}$   (12)
(If $W$ is symmetric, $M M^{T} = M^{2}$.) This result is easily interpreted: fluctuations are communicated from the source cell to the two cells either directly or over one, two, or three intermediate connections.
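The truncated-series approximation of the correlations can be checked numerically; the weight scale below is an illustrative choice, small enough for the geometric series to converge quickly:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
# Random symmetric excitatory weight matrix, scaled small enough
# that the geometric series for (1 - W)^{-1} converges quickly.
W = rng.random((n, n)) * 0.05
W = 0.5 * (W + W.T)
np.fill_diagonal(W, 0.0)

I = np.eye(n)
M = I + W + W @ W + W @ W @ W   # series broken off after the third power
corr = M @ M.T                  # <sigma_i sigma_j> with <s_k s_l> = delta_kl
```

Each power of W in M corresponds to a path of that many intermediate connections, which is the interpretation given in the text.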
These correlations are to be used to drive Hebbian plasticity. Using the Haeussler equation as a model, we formulate it as:
$\dot{w}_{ij} = f(w_{ij})\left[G_{ij}(w) - w_{ij}\, B_{ij}(w)\right]$   (13)
which contains the cooperation term
$G_{ij}(w) = \alpha + w_{ij}\, C_{ij}$   (14)
and the competition term
$B_{ij}(w) = \frac{1}{2}\left(\frac{1}{N}\sum_{k} G_{kj}(w) + \frac{1}{N}\sum_{l} G_{il}(w)\right)$   (15)
Here, $\alpha$ is an unspecific synaptic growth rate. Eq. (13) can be rewritten as
$\dot{w}_{ij} = f(w_{ij})\left[\alpha + w_{ij} C_{ij} - \frac{w_{ij}}{2N}\left(\sum_{k}\left(\alpha + w_{kj} C_{kj}\right) + \sum_{l}\left(\alpha + w_{il} C_{il}\right)\right)\right]$   (16)
References
 (1) Chr. von der Malsburg, "Self-organization of orientation sensitive cells in the striate cortex", Kybernetik, 14:85-100, 1973.
 (2) A. F. Haeussler and Christoph von der Malsburg, "Development of retinotopic projections: an analytical treatment", Journal of Theoretical Neurobiology, 2:47-73, 1983.
 (3) R. Douglas and K. Martin, "Recurrent neuronal circuits in the neocortex", Current Biology, vol. 17, no. 13, pp. 496-500, 2007.
 (4) S. Amari, "Dynamics of pattern formation in lateral-inhibition type neural fields", Biological Cybernetics, Jan. 1977.
 (5) T. Kohonen
 (6) Janmenjoy Nayak et al., "A novel nature inspired firefly algorithm with higher order neural network: Performance analysis", EST, 2015.
 (7) Surafel Luleseged Tilahun and Hong Choon Ong, "Modified Firefly Algorithm", Journal of Applied Mathematics, Volume 2012, Article ID 467631, 2012.
 (8) Iztok Fister et al., "A comprehensive review of firefly algorithms", Swarm and Evolutionary Computation, 2013.