Neural wave interference and intrinsic tuning in distributed excitatory-inhibitory networks

Sergei Gepshtein, Ambarish S. Pawar, Sergey Savel’ev, Thomas D. Albright

Center for Neurobiology of Vision, Salk Institute for Biological Studies
10010 North Torrey Pines Road, La Jolla, CA 92037, USA

Department of Physics, Loughborough University
Leicestershire, LE11 3TU, United Kingdom

ABSTRACT. We developed a model of cortical computation that implements key features of cortical circuitry and is capable of describing propagation of neural signals between cortical locations in response to spatially distributed stimuli. The model is based on the canonical neural circuit that consists of excitatory and inhibitory cells interacting through reciprocal connections, with recurrent feedback. The canonical circuit is used as a node in a distributed network with nearest-neighbor coupling between the nodes. We find that this system is characterized by an intrinsic preference for spatial frequency. The value of the preferred frequency depends on the relative weights of excitatory and inhibitory connections between cells. This balance of excitation and inhibition changes as stimulus contrast increases, which is why the intrinsic spatial frequency is predicted to change with contrast in a manner determined by stimulus temporal frequency. The dynamics of network preference is consistent with properties of the cortical area MT in alert macaque monkeys.

1 Introduction

We sought to develop a framework for modeling visual cortical processes that satisfies the following requirements:

Biological realism: Include key features of cortical circuitry, which consists of excitatory and inhibitory neurons connected reciprocally and affected by recurrent feedback.

Distributed dynamics: Allow for propagation of signals between cortical locations, in order to predict responses to stimuli distributed spatially and temporally.

Mathematical tractability: Essential properties of the dynamics should be tractable by analytical methods, not by numerical simulation alone.

Accordingly, we developed a spatially distributed model based on the canonical excitatory-inhibitory circuit introduced by Wilson and Cowan (1972, 1973). The canonical circuit is illustrated in Figure 1A. It consists of two units ("cells"): excitatory and inhibitory (E and I), connected reciprocally and recurrently. Previous studies of this circuit suggested that it can serve as a universal motif for dynamic models of cortical neural networks (e.g., Tsodyks et al., 1997; Ozeki et al., 2009; Ahmadian et al., 2013; Jadi and Sejnowski, 2014; Rubin et al., 2015). Of particular note is the discovery by Tsodyks et al. (1997) that inhibitory-inhibitory connections are essential for providing stability in otherwise unstable neural networks with recurrent excitation. That work showed that the architecture combining recurrent excitation with inhibitory stabilization allows for an intriguing variety of dynamic regimes in the network.

In our spatially distributed model, the canonical circuit serves as a network motif: a repetitive “node” in a larger network (Figure 1B). This network architecture is a conservative generalization of the canonical circuit: from the spatially local system of Wilson and Cowan to a spatially distributed system suitable for modeling neural response to stimuli distributed both spatially and temporally (Savel’ev and Gepshtein, 2014). That is, the excitatory and inhibitory units of each node are each connected to both the excitatory and inhibitory units of the neighboring nodes, which is a form of connectivity least removed from the canonical model.

We used two analytical approaches. The first approach was inspired by the method of solving linear differential equations using Green's functions (Riley et al., 2006), commonly used in the analysis of mechanical and electrical systems (Challis and Sheard, 2003). A Green's function is the response of the system to impulse activation: a signal localized spatially or temporally. Having found the Green's function of the system, responses to distributed activation are obtained by convolving the Green's function with the distributed activation. This approach revealed that the spatially distributed canonical circuit has a number of interesting emergent properties. One of these properties is an intrinsic preference of the system for the spatial frequency of stimulation: the system responds to spatially periodic stimuli of a certain spatial frequency more vigorously than to stimuli of other frequencies.
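To make the Green's-function logic concrete, the following is a minimal numerical sketch. The damped-oscillation form of the impulse response and the parameters k0 and lam are illustrative assumptions, not quantities derived from the model equations; the sketch shows only how the response to a distributed stimulus follows from convolution with the impulse response.

import numpy as np

# Illustrative sketch (not the paper's exact equations): treat the network's
# impulse response as a spatially damped oscillation and obtain the response
# to a distributed stimulus by convolving it with the stimulus.

x = np.linspace(-50.0, 50.0, 2001)      # spatial axis (arbitrary units)
dx = x[1] - x[0]

k0 = 1.0      # assumed intrinsic spatial frequency of the chain
lam = 10.0    # assumed spatial decay length of the point-source wave

# Assumed Green's function: response of the chain to a point stimulus at x = 0.
G = np.exp(-np.abs(x) / lam) * np.cos(k0 * x)

def response(stimulus):
    """Linear-regime response: convolution of the Green's function with the stimulus."""
    return np.convolve(stimulus, G, mode="same") * dx

# A spatially periodic stimulus drives the largest response when its spatial
# frequency matches the assumed intrinsic frequency k0.
for k_stim in (0.5 * k0, k0, 1.5 * k0):
    stim = np.cos(k_stim * x)
    print(f"k_stim/k0 = {k_stim / k0:.1f}   peak response = {response(stim).max():.2f}")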

In the second approach, we investigated how this intrinsic stimulus preference changes at high stimulus contrasts, where the system becomes nonlinear. We found that the intrinsic spatial frequency of the system strongly depends on the relative weights of excitation and inhibition in cell connections. We also found that this balance of excitatory and inhibitory weights changes as luminance contrast of stimuli increases, which is why system preferences are predicted to change with contrast, in a manner determined by stimulus temporal frequency.

The dynamics of system preference revealed in this study departs from the traditional view of neuronal selectivity, but our predictions happen to be strikingly similar to the dynamics of neuronal preferences discovered recently in a study of the Middle Temporal (MT) cortical area of alert macaque monkeys (Pawar et al. 2017, 2018; Pawar et al., in preparation).

Figure 1: The canonical circuit.
A. The canonical Wilson-Cowan circuit contains one excitatory unit (E) and one inhibitory unit (I), each with recurrent feedback, and connected reciprocally (i.e., with the excitatory cell connected to the inhibitory cell, and vice versa). In the figure, the lines ending with small filled circles and with T-junctions represent, respectively, excitatory and inhibitory connections between the cells.
B. The canonical circuits are arranged as "nodes" in a chain with nearest-neighbor coupling. The node indices indicate node locations in the chain. The input currents into each node are generated by the stimulus.

2 Approach

Consider a system of Wilson-Cowan equations for a chain of "nodes," each containing an excitatory cell ("E") and an inhibitory cell ("I"), as shown in Figure 1:

(1)

The variables E and I represent the firing rates of the excitatory and inhibitory cells; a time constant represents the relaxation time of excitation (in units of the relaxation time of inhibition); the subscript indexes the nodes of the chain; and the input-output functions of the two cell types are sigmoids. The connection weights are positive:

where the two sets of weights represent the connections within and between the nodes, respectively. Assuming that stimulus inputs and cell responses vary slowly on the scale of the inter-node distance, we replace the discrete node index with a continuous spatial variable, to obtain:

By expanding the inverse sigmoid functions in a Taylor series up to the third order, we obtain from the above the following pair of coupled partial differential equations:

(2)

where the "interaction constants" reflect the strengths of connections between the excitatory and inhibitory parts of the network, and the "diffusion constants" reflect the strengths of connections between the nodes and are thus responsible for the spatial propagation of excitatory and inhibitory influences through the network. The second- and third-order coefficients of the Taylor expansion can differ between the excitatory and inhibitory cells; they define the degree of nonlinearity in the system. One additional parameter describes how the input current is divided between the excitatory and inhibitory cells: in a system activated by a spatiotemporal stimulus, the excitatory and inhibitory cells receive complementary fractions of the stimulus-generated current.
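For readers who prefer a computational illustration of this architecture, the following is a minimal simulation sketch of an excitatory-inhibitory chain with nearest-neighbor coupling. Because equations (1) and (2) are not reproduced in full above, the rate equations, the sigmoid, and all parameter names and values below (tau_e, tau_i, w_ee, w_ei, w_ie, w_ii, d_e, d_i, g) are illustrative assumptions in the spirit of the Wilson-Cowan model, not the exact formulation analyzed in this paper.

import numpy as np

def sigmoid(u):
    """Assumed sigmoid transfer function (illustrative choice)."""
    return 1.0 / (1.0 + np.exp(-u))

def simulate_chain(stimulus, n_steps=4000, dt=0.01,
                   tau_e=1.0, tau_i=0.5,
                   w_ee=1.2, w_ei=1.0, w_ie=1.0, w_ii=0.5,
                   d_e=0.5, d_i=0.1, g=0.5):
    """Euler integration of a generic Wilson-Cowan chain with nearest-neighbor
    (diffusive) coupling and periodic boundaries, used here for simplicity.
    All parameter values are illustrative assumptions."""
    n_nodes = stimulus.size
    E = np.zeros(n_nodes)
    I = np.zeros(n_nodes)
    for _ in range(n_steps):
        # Nearest-neighbor coupling approximated by a discrete Laplacian.
        lap_E = np.roll(E, 1) + np.roll(E, -1) - 2.0 * E
        lap_I = np.roll(I, 1) + np.roll(I, -1) - 2.0 * I
        dE = (-E + sigmoid(w_ee * E - w_ei * I + g * stimulus) + d_e * lap_E) / tau_e
        dI = (-I + sigmoid(w_ie * E - w_ii * I + (1.0 - g) * stimulus) + d_i * lap_I) / tau_i
        E += dt * dE
        I += dt * dI
    return E, I

# Example: a "point" stimulus activating a single node of the chain.
stim = np.zeros(200)
stim[100] = 2.0
E_ss, I_ss = simulate_chain(stim)
print("steady-state excitatory activity around the stimulated node:")
print(np.round(E_ss[95:106], 3))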

3 Results

3.1 Point-source waves

First, consider how the system responds to a "point" stimulus that activates a single node of the chain. This question requires that we solve system (2) with zero input away from the stimulated location, in the linear approximation (where the nonlinear terms of the Taylor expansion are neglected). For the analysis of system response to localized stimuli, we notate the responses of the excitatory and inhibitory cells as E and I. We find the solutions separately for the left and right sides of the stimulus and then match them at the stimulated location, so that:

(3)

We then substitute the solutions for E and I into (3) and obtain:

(4)

where the two cases correspond to the two sides of the stimulus, and where the following quantity, which will serve as a useful reference for Equation 13, is defined:

(5)

Equation 4 describes a spatial oscillation (illustrated in the margin figure below) with a characteristic spatial frequency. The period of the spatial oscillation is

(6)

A separate term determines the rate of spatial decay of the system response; when the decay term is very small, the response is a nearly undamped oscillation. The characteristic spatial frequency of this oscillation can be called the natural or intrinsic spatial frequency of the system.
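As an illustration of these quantities, the short sketch below takes an assumed damped-cosine point-source response and reads off its intrinsic spatial frequency, spatial period, and decay rate numerically; the profile and its parameters are assumptions for illustration only.

import numpy as np

# Assumed point-source response profile: a spatially damped oscillation.
x = np.linspace(-40.0, 40.0, 4001)
k0_true, decay_true = 1.3, 0.15          # illustrative "ground truth"
profile = np.exp(-decay_true * np.abs(x)) * np.cos(k0_true * x)

# Intrinsic spatial frequency: location of the peak of the spatial spectrum.
spectrum = np.abs(np.fft.rfft(profile))
freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0]) * 2.0 * np.pi   # radians per unit length
k0_est = freqs[np.argmax(spectrum[1:]) + 1]                    # skip the DC bin

# Decay rate: slope of the log envelope, estimated at the oscillation peaks.
is_peak = (np.abs(profile) > np.abs(np.roll(profile, 1))) & \
          (np.abs(profile) > np.abs(np.roll(profile, -1)))
xp = x[is_peak & (x > 0)]
yp = np.abs(profile[is_peak & (x > 0)])
decay_est = -np.polyfit(xp, np.log(yp), 1)[0]

print(f"intrinsic frequency: true {k0_true:.2f}, estimated {k0_est:.2f}")
print(f"spatial period (2*pi / estimated frequency): {2 * np.pi / k0_est:.2f}")
print(f"decay rate: true {decay_true:.2f}, estimated {decay_est:.2f}")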

For stimuli more complex than a delta function, the general static solution in linear approximation can be written as:

(7)
Margin figure: Response of the distributed circuit to a small spatial stimulus ("point stimulus") that activates a single node of the circuit. The point activation propagates through the chain and forms a steady-state spatial pattern distributed across the chain. This pattern is an instance of a neural wave, and it has the form of a spatially damped oscillation.

The form of solution (7) suggests how our model is related to the standard description of visual neural mechanisms in terms of linear filters (Campbell and Robson, 1968; Koenderink and van Doorn, 1987; DeValois and DeValois, 1988; Graham, 1989; Watson and Ahumada, 2016). To account for nonlinear properties of neural responses, the standard approach assumes that the stage of linear filters is followed by a separate nonlinear stage.

In contrast to the standard picture, a single mechanism is used in our approach to account for both linear and nonlinear regimes, without postulating a separate nonlinear process. Indeed, by analogy between our system and a system of coupled oscillators (e.g., §28 in Landau and Lifshitz, 1969), the intrinsic frequency of our system is expected to change with contrast, as we show just below.

3.2 Distributed periodic stimulation

Next we studied system responses at higher contrasts, where nonlinear effects become significant and the nonlinear terms of the Taylor expansion must be considered. In writing the solutions of (2) we have found, however, that our main results concerning the interaction of stimulus contrast and intrinsic spatial frequency of the circuit (through Equation 13) do not change qualitatively when the quadratic terms are ignored. We therefore present the results of this analysis in a simplified form, omitting the quadratic terms of the Taylor expansions. To obtain steady-state solutions, in this section we also ignore the time derivatives and the time course of the stimulus (which we do consider in the next section). We derive the system response to periodic stimuli of the form:

where the stimulus is characterized by its spatial frequency. We find the solutions in the form

(8)

where the constants represent the amplitudes of the excitatory and inhibitory activations. Keeping only the main harmonics in the solution, we obtain:

(9)

The system (9) can be solved iteratively:

(10)
(11)

where the remaining quantity is the effective stimulus contrast. As we show below, some of the coefficients used in the system (10–11) have an immediate interpretation in terms of the intrinsic preference of the neural chain: one coefficient determines the location of the peak of selectivity at low stimulus contrast and zero temporal frequency (12), and another determines the degree of selectivity (i.e., the "width" of tuning to spatial frequency). The system (10–11) allows us to derive the results of the (n+1)th iteration for the excitatory and inhibitory amplitudes from the results of the nth iteration. The iterative procedure converges for the values of the coefficients described below; the results are plotted in Figure 2.
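The flavor of such an iterative solution can be conveyed with a small sketch. Since Equations (9)–(11) are not reproduced here, the response function below is a generic stand-in: a resonance denominator plus a contrast-dependent (cubic) correction whose sign distinguishes excitation-dominated from inhibition-dominated networks. It illustrates the fixed-point iteration and the direction of the tuning shift, not the authors' exact formulas.

import numpy as np

def iterate_amplitude(k, contrast, nonlin_sign, k0=1.0, width=0.1,
                      beta=1.0, n_iter=100, damping=0.5):
    """Damped fixed-point iteration for the response amplitude A at one
    stimulus spatial frequency k. The response function is an illustrative
    assumption, not the paper's Equations (10)-(11):
        A  <-  contrast / (width + (k**2 - k0**2 - nonlin_sign*beta*A**2)**2)
    The A**2 correction shifts the effective resonance with contrast; its sign
    stands in for excitation-dominated (+1) vs. inhibition-dominated (-1)
    networks.
    """
    A = 0.0
    for _ in range(n_iter):
        A_new = contrast / (width + (k**2 - k0**2 - nonlin_sign * beta * A**2)**2)
        A = (1.0 - damping) * A + damping * A_new
    return A

ks = np.linspace(0.3, 1.7, 300)
contrasts = (0.005, 0.01, 0.02, 0.03, 0.04)   # effective contrasts, as in Figure 2
for sign, label in ((+1, "excitation-dominated"), (-1, "inhibition-dominated")):
    peaks = [ks[int(np.argmax([iterate_amplitude(k, c, sign) for k in ks]))]
             for c in contrasts]
    print(label, "resonant spatial frequency vs. contrast:", np.round(peaks, 3))

A damped update keeps the fixed-point iteration stable; with these assumed forms, the peak of the response moves to higher spatial frequencies with increasing contrast when the correction is excitatory-like and to lower frequencies when it is inhibitory-like, mirroring the two panels of Figure 2.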

Figure 2: Response of the distributed circuit to increasing contrast.
The curves in each panel are iterative solutions of (10–11). Each curve represents the activation of one excitatory cell (E in Figure 1B) at one stimulus contrast, plotted as a function of stimulus spatial frequency. Stimulus spatial frequency is normalized to the resonant spatial frequency of the network measured at the lowest tested contrast (the red curves).
In general, increasing stimulus contrast shifts the resonant spatial frequency in different directions: the resonant frequency falls in a system dominated by inhibition (A) and rises in a system dominated by excitation (B).

Figure 2 is a plot of the iterative solutions of system (10–11), with the coefficients controlling the dominance of excitation and inhibition set to (1, 0.2) in panel A and (0.2, 1) in panel B. Effective stimulus contrast is varied by setting it to five values (0.005, 0.01, 0.02, 0.03, 0.04), for which the responses after 100 iterations are plotted against stimulus spatial frequency, normalized to the resonant frequency at the lowest contrast (shown in red). The plots reveal that cell tuning shifts with contrast: toward lower spatial frequencies in the inhibition-dominated network (panel A) and toward higher spatial frequencies in the excitation-dominated network (panel B). The remaining parameters are held fixed at the values 1, 0.1, 1, 0.5, 0.1, and 0.01, with the paired excitatory and inhibitory coefficients set equal.

Notice that in the system (10–11), one coefficient affects the bandwidth of the response while another shifts the location of the response maximum along the dimension of spatial frequency. The spatial frequency at which the response reaches its maximum is the characteristic (or resonant) spatial frequency of the network. Notice also that the maximum of the response in this system is found near the minimum of the denominator of (10–11). The conditions of this minimum can be written explicitly:

(12)

Remarkably, the excitatory and inhibitory terms in (12) have opposite signs. To appreciate this result, suppose that the activations of the excitatory and inhibitory cells within a node have similar magnitudes. We can then rewrite (12) as

(13)

Recall that the connection coefficients are constants and that the common activation magnitude is an increasing function of contrast. It follows that, when inhibition dominates, the second term of (13) increases with contrast and the resonant frequency decreases (Figure 2A); when excitation dominates, the second term of (13) decreases with contrast and the resonant frequency increases (Figure 2B).

In summary, the network's intrinsic preference for stimulus spatial frequency will generally change with stimulus contrast: it will increase with contrast in a system dominated by excitation and decrease with contrast in a system dominated by inhibition.

3.3 Distributed spatiotemporal stimulation

Now we consider dynamic stimuli: drifting luminance gratings characterized in terms of both spatial and temporal frequencies, which we represent by

where the grating is additionally characterized by its temporal frequency. Here we concentrate on low luminance contrasts, where the nonlinear terms can be omitted. As we show below, the results of this analysis have a form that allows us to make predictions about system behavior at high luminance contrasts as well.

We therefore seek solutions of (2) in the following form:

(14)

In contrast to the in-phase response of the system to purely spatial stimuli, the response to dynamic stimuli is characterized by temporal delays. This means that responses to spatiotemporal stimuli have both sine and cosine components, even when the stimulus contains only cosine components. By substituting (14) in (2), with the nonlinear terms set to zero, we find the following solutions for the sine and cosine amplitudes of activation of the excitatory and inhibitory cells:

(15)

where

and where the denominator of every solution is the same:

where two further auxiliary quantities are defined. In spite of its complexity, equation (15) can be evaluated numerically, as we did in panels A and C of Figure 3. The plots reveal a family of familiar tuning functions.

As before, we find the intrinsic spatial frequency of the system by looking for the system's maximum response. The latter is expected where the denominator approaches its minimum. Since our goal is to understand how the location of the maximum is controlled by temporal frequency, in the following we focus on the analysis of the denominator in different regions of the parameter space.

To begin, notice that the first term of the denominator  depends exclusively on spatial frequencies, the last term depends exclusively on temporal frequencies, while the middle term depends on the interaction of spatial and temporal frequencies. This structure suggests that the system can be studied using three distinct approaches:

(1) finding the natural spatial frequency of the system (i.e., the location of its maximum response) under variation of spatial frequency, defined by the condition that the denominator is minimized with respect to spatial frequency at a fixed temporal frequency,

(2) finding the natural temporal frequency of the system under variation of temporal frequency, defined by the condition that the denominator is minimized with respect to temporal frequency at a fixed spatial frequency, and

(3) finding the natural speed tuning of the system, for combinations of spatial and temporal frequencies varied jointly, defined by minimizing the denominator along such combinations.

Here we pursue the first two of these approaches, in the next two sections.
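Both procedures can be illustrated numerically. Because the denominator of (15) is not reproduced above, the sketch below uses a generic stand-in with the same three-part structure described above (a purely spatial term, a spatiotemporal interaction term, and a purely temporal term); its functional form and coefficients (k0, a, b, c) are assumptions for illustration only.

import numpy as np

def denominator(k, w, k0=1.0, a=0.4, b=0.6, c=0.5):
    """Illustrative stand-in for the denominator of (15): a purely spatial term,
    a spatial-temporal interaction term, and a purely temporal term.
    The form and coefficients are assumptions, not the paper's expression."""
    spatial = (k**2 - k0**2) ** 2
    interaction = -a * (k**2) * (w**2)
    temporal = b * w**2 + c * w**4
    return spatial + interaction + temporal

ks = np.linspace(0.01, 3.0, 1500)
ws = np.linspace(0.0, 1.5, 751)

# Approach 1: natural spatial frequency -- minimize the denominator over k at fixed w.
for w in (0.0, 0.3, 0.6):
    k_res = ks[np.argmin(denominator(ks, w))]
    print(f"fixed temporal frequency {w:.1f}: resonant spatial frequency {k_res:.2f}")

# Approach 2: natural temporal frequency -- minimize the denominator over w at fixed k.
for k in (0.5, 1.0, 1.5):
    w_res = ws[np.argmin(denominator(k, ws))]
    print(f"fixed spatial frequency {k:.1f}: resonant temporal frequency {w_res:.2f}")

With these assumed coefficients, the resonant spatial frequency rises with temporal frequency, and a temporal resonance appears only above a certain spatial frequency, qualitatively resembling some of the regimes discussed below; the exact behavior of the model of course depends on the actual coefficients in (15).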

Figure 3: Intrinsic tuning to stimulus spatial frequency is predicted to depend on stimulus temporal frequency.
A. Response functions are plotted at different values of stimulus temporal frequency when the system is in Regime 1. The peak of each tuning function is the intrinsic spatial frequency of the system for the temporal frequency displayed in the corresponding box. The intrinsic spatial frequency increases with temporal frequency.
B. The plot at left, in red, describes Regime 1: the intrinsic spatial frequency of the system increases with temporal frequency, approaching a vertical asymptote from the left. The plot at right, in black, describes Regime 2 (as in panel C), in which the intrinsic spatial frequency decreases with temporal frequency, approaching the same vertical asymptote from the right.
C. Response functions are plotted as in panel A for the system in Regime 2. Here the intrinsic spatial frequency decreases with temporal frequency.

3.4 Variation of spatial frequency

Taking the derivative of the denominator with respect to spatial frequency results in the following equation:

(16)

Notice that for zero temporal frequency the above equation reduces to

reproducing our previous result for purely spatial stimuli in equation (12), with the nonlinear terms removed.

To understand how stimulus temporal frequency affects the resonant spatial frequency of the neural chain, it is useful to express the resonant spatial frequency as a function of temporal frequency. For nonzero temporal frequencies, we rewrite equation (16) using an approximation in its denominator, so that we can study the influence of temporal frequency on the resonant spatial frequency:

(17)

Because the remaining factors are positive, the sign of the denominator of (17) determines how temporal frequency affects the system behavior. In other words, varying temporal frequency can change the resonant spatial frequency in two ways, implying two spatial regimes:

  1. When the denominator of (17) is negative, the numerator must be positive. Here the resonant spatial frequency increases with temporal frequency, as shown in Figure 3A, approaching an asymptote where the denominator is zero. This regime is represented in Figure 3B by the red curve, which approaches the asymptote (represented by the vertical dashed line) from the left side, which is the side of low temporal frequencies.

  2. When the denominator of (17) is positive, the numerator must be negative. In this regime, the resonant spatial frequency decreases with temporal frequency, as shown in Figure 3C. This regime is represented in Figure 3B by the black curve, which approaches the same asymptote as above, yet from the right side, which is the side of high temporal frequencies.

We know from our previous analysis (12–13) that the shift of tuning to higher spatial frequencies is observed at higher stimulus contrasts. It is therefore reasonable to expect that Regime 2 should be found at higher stimulus contrasts.

This prediction has been recently corroborated in a physiological study of neuronal selectivity in the Middle Temporal (MT) cortical area of alert macaque monkeys. Pawar et al. (2017, 2018) found that spatial frequency tuning of MT neurons depends on the luminance contrast and temporal frequency of stimulation. As contrast increased, preferences of cortical neurons shifted towards lower or higher spatial frequencies, more often the latter than the former. The changes in spatial frequency preference were largest at low temporal frequency, and they decreased as temporal frequency increased. Notably, preferred spatial frequencies (1) increased with temporal frequency at low stimulus contrast, and (2) decreased with temporal frequency at high stimulus contrast, in agreement with the pattern of predictions summarized in Figure 3B (Pawar et al., in preparation).

3.5 Variation of temporal frequency

Next consider the case when temporal frequency varies and spatial frequency does not. Here the maximum of the response is reached at a certain "natural" or "intrinsic" temporal frequency of the system. We estimate the natural temporal frequency using the condition

and obtain:

(18)

Each of the three terms on the right side of (18) can be positive or negative, producing the qualitatively different patterns of behavior described in Table 1.

T1 (positive, negative, negative). The response maximum occurs at a non-zero temporal frequency even at zero spatial frequency. In this regime, the intrinsic temporal frequency increases monotonically as spatial frequency increases.
T2 (positive, negative, positive). As above, the response maximum occurs at a non-zero temporal frequency at zero spatial frequency. The initially positive intrinsic temporal frequency decreases, so the response maximum can disappear within an interval of spatial frequencies. At larger spatial frequencies, the intrinsic temporal frequency increases with spatial frequency.
T3 (positive, positive, negative). Again the response maximum occurs at a non-zero temporal frequency at zero spatial frequency, but the intrinsic temporal frequency increases and reaches a maximum, after which it drops to zero at a certain spatial frequency. No response maximum occurs at larger spatial frequencies.
T4 (positive, positive, positive). As above, but here the intrinsic temporal frequency decreases and disappears at a certain spatial frequency. No response maximum occurs at larger spatial frequencies.
T5 (negative, negative, irrelevant). No resonance occurs at zero spatial frequency, but it appears at some non-zero temporal frequency once spatial frequency is large enough, and the intrinsic temporal frequency then increases with increasing spatial frequency.
T6 (negative, positive, negative). No resonance occurs at zero spatial frequency, but it can appear within a certain interval of spatial frequencies. No response maximum occurs at larger spatial frequencies.
T7 (negative, positive, positive). No response maximum occurs at any spatial frequency.
Table 1: Regimes of intrinsic temporal frequency of the system, discovered by varying temporal frequency for a fixed spatial frequency of the stimulus (using the second approach listed in Section 3.3). The three sign entries in each row are the signs of the three terms on the right side of (18).

Figure 4: The intrinsic temporal frequency of the system depends on stimulus spatial frequency. The intrinsic temporal frequency is plotted as a function of spatial frequency in green and blue, together with the two spatial resonance curves in red and black. The latter two curves are reproduced from Figure 3B. See text for details.

The conditions listed in row T6 correspond to the conditions used to predict the interaction of spatial and temporal frequencies illustrated in Figure 3. A key property of this prediction is reproduced in Figure 4 (red and black curves), together with the prediction derived from row T6 (green and blue curves):

The red and black curves were obtained using the first approach described in Section 3.3: finding the natural spatial frequency of the system for a fixed temporal frequency.

The green and blue curves were obtained using the second approach described in Section 3.3: finding the natural temporal frequency of the system for a fixed spatial frequency.

Of these results, the red and green curves correspond to low stimulus contrasts while the black and blue curves correspond to high stimulus contrasts. Accordingly, the intersection of the red and green curves, and also the intersection of the black and blue curves, correspond to the points of preferred stimulus speed of the system.

4 Discussion

We developed a model of spatially distributed cortical computation based on the spatially localized excitatory-inhibitory circuit originally proposed by Wilson and Cowan (Figure 1A). In the distributed model, the localized circuits serve as "nodes" in a chain, with the neighboring excitatory (E) and inhibitory (I) cells connected through E-E, E-I, I-I, and I-E connections, i.e., forming a fully connected network with nearest-neighbor coupling (Figure 1B).

Intrinsic stimulus preference. We investigated response properties of this distributed system by first modeling its response to a "point" stimulus that activates a single node in the network. The point activation propagates through the chain of nodes and forms a steady-state spatial pattern distributed across the chain. A typical steady-state pattern has the form of a spatially damped oscillation (see the margin figure in Section 3.1). The spatial frequency of this oscillation is the natural or intrinsic spatial frequency of the distributed network, determined by the weights of excitatory and inhibitory connections within and between the nodes.

Neural interference. Network response to a spatially distributed stimulus can be understood in terms of spatial interference between the point processes elicited by the stimulus at multiple nodes. At low stimulus contrasts, where system behavior is linear, the interference amounts to superposition of the point-generated waves: spatially periodic stimuli elicit the maximum response when the spatial frequency of the stimulus matches the intrinsic spatial frequency of the network. At high stimulus contrasts, where system behavior is nonlinear, predicting system response requires a more complex analysis. We developed an analytical solution to this problem (Equation 9), allowing one to solve the system iteratively and determine its stimulus preferences at any contrast.

Dynamics of stimulus preference. This analysis revealed that the preferred spatial frequency of the network should change as stimulus contrast increases, towards higher or lower frequencies than at low contrasts. The direction of change depends on the balance of excitatory and inhibitory processes in the network (Equation 13). In particular, the preferred spatial frequency is predicted to increase with contrast in a network dominated by excitation and to decrease with contrast in a network dominated by inhibition (Figure 2).

The amount of change of the preferred spatial frequency is predicted to depend on stimulus temporal frequency: at low stimulus contrast the preferred spatial frequency increases with temporal frequency, and at high stimulus contrast it decreases with temporal frequency (Figure 4). The latter result can be described in terms of the preferences at low and high stimulus contrasts converging toward a certain "attractor" value of spatial frequency as temporal frequency increases. The "attractor" value is represented in Figure 4 by the vertical asymptote.

Remarkably, dynamics of neuronal preferences similar to those predicted by our model (Figure 4) were discovered recently in a study of the cortical area MT of alert macaque monkeys (Pawar et al., 2017, 2018; Pawar et al., in preparation).

References

  • Ahmadian et al. (2013) Ahmadian, Y., Rubin, D. B., and Miller, K. D. (2013). Analysis of the stabilized supralinear network. Neural Computation, 25(8):1994–2037.
  • Campbell and Robson (1968) Campbell, F. W. and Robson, J. G. (1968). Application of Fourier analysis to the visibility of gratings. The Journal of Physiology, 197(3):551.
  • Challis and Sheard (2003) Challis, L. and Sheard, F. (2003). The Green of Green functions. Physics Today, 56(12):41–46.
  • DeValois and DeValois (1988) DeValois, R. L. and DeValois, K. K. (1988). Spatial Vision. Oxford University Press, USA. Oxford Psychology Series No. 14.
  • Graham (1989) Graham, N. (1989). Visual Pattern Analyzers. New York: Oxford University Press. Oxford Psychology Series No. 16.
  • Jadi and Sejnowski (2014) Jadi, M. P. and Sejnowski, T. J. (2014). Regulating cortical oscillations in an inhibition-stabilized network. Proceedings of the IEEE, 102(5):830–842.
  • Koenderink and van Doorn (1987) Koenderink, J. J. and van Doorn, A. J. (1987). Representation of local geometry in the visual system. Biological Cybernetics, 55(6):367–375.
  • Landau and Lifshitz (1969) Landau, L. D. and Lifshitz, E. M. (1969). Mechanics. Butterworth-Heinemann, Oxford, UK, 2nd edition.
  • Ozeki et al. (2009) Ozeki, H., Finn, I. M., Schaffer, E. S., Miller, K. D., and Ferster, D. (2009). Inhibitory stabilization of the cortical network underlies visual surround suppression. Neuron, 62(4):578–592.
  • Pawar et al. (2017) Pawar, A. S., Gepshtein, S., Savel’ev, S., and Albright, T. D. (2017). Tuning of MT neurons depends on stimulus contrast in accord with canonical computation. In Annual Meeting of Society for Neuroscience (SfN), Washington, DC. Program No. 57.20. 2017.
  • Pawar et al. (2018) Pawar, A. S., Gepshtein, S., Savel’ev, S., and Albright, T. D. (2018). Theory and physiology of spatial frequency tuning in cortical area MT. In Computational and Systems Neuroscience (COSYNE) Meeting, Denver, CO.
  • Riley et al. (2006) Riley, K. F., Hobson, M. P., and Bence, S. J. (2006). Mathematical Methods for Physics and Engineering: A Comprehensive Guide. Cambridge University Press.
  • Rubin et al. (2015) Rubin, D. B., Van Hooser, S. D., and Miller, K. D. (2015). The stabilized supralinear network: A unifying circuit motif underlying multi-input integration in sensory cortex. Neuron, 85(2):402–417.
  • Savel’ev and Gepshtein (2014) Savel’ev, S. and Gepshtein, S. (2014). Neural wave interference in inhibition-stabilized networks. In Gencaga, D., editor, Proceedings of the First International Electronic Conference on Entropy and its Applications, page c002.
  • Tsodyks et al. (1997) Tsodyks, M. V., Skaggs, W. E., Sejnowski, T. J., and McNaughton, B. L. (1997). Paradoxical effects of external modulation of inhibitory interneurons. The Journal of Neuroscience, 17(11):4382–4388.
  • Watson and Ahumada (2016) Watson, A. and Ahumada, A. (2016). The pyramid of visibility. In Human Vision and Electronic Imaging, pages 1–6. DOI: 10.2352/ISSN.2470-1173.2016.16HVEI-102.
  • Wilson and Cowan (1972) Wilson, H. R. and Cowan, J. D. (1972). Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal, 12(1):1–24.
  • Wilson and Cowan (1973) Wilson, H. R. and Cowan, J. D. (1973). A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik, 13(2):55–80.