Predicting single-neuron activity in locally connected networks
Feraz Azhar
Department of Neurosurgery, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, 02115.
Department of Neurosurgery, The Johns Hopkins University School of Medicine, Baltimore, MD, 21287.
William S. Anderson
Department of Neurosurgery, The Johns Hopkins Hospital, Baltimore, MD, 21287.
Abstract
The characterization of coordinated activity in neuronal populations has received renewed interest in the light of advancing experimental techniques which allow recordings from multiple units simultaneously. Across both in vitro and in vivo preparations, nearby neurons show coordinated responses when spontaneously active, and when subject to external stimuli. Recent work (Truccolo, Hochberg, & Donoghue, 2010) has connected these coordinated responses to behavior, showing that small ensembles of neurons in arm related areas of sensorimotor cortex can reliably predict single-neuron spikes in behaving monkeys and humans. We investigate this phenomenon utilizing an analogous point process model, showing that in the case of a computational model of cortex responding to random background inputs, one is similarly able to predict the future state of a single neuron by considering its own spiking history, together with the spiking histories of randomly sampled ensembles of nearby neurons. This model exhibits realistic cortical architecture and displays bursting episodes in the two distinct connectivity schemes studied. We conjecture that the baseline predictability we find in these instances is characteristic of locally connected networks more broadly considered.
1 Introduction
Quantifying the responses of collections of neurons to the variety of stimuli they are exposed to lies at the heart of deciphering the neural code. As one example, the role of coordinated activity in networks of cortical neurons in particular represents a natural starting point for the exploration of how neurons encode information relevant to behavior and cognition. Of late, progress has been made towards understanding the responses of small collections of neurons, involved in both lower level sensory processing, and in primary and associative cortical areas, as a result of advances in experimental techniques which make simultaneous recordings from nearby populations of neurons possible (Segev, Goodhouse, Puchalla, & Berry II, 2004; Gunning et al., 2005; Suner, Fellows, Vargas–Irwin, Nakata, & Donoghue, 2005). These experimental successes have been coupled with theoretical progress towards understanding collective behavior through the analysis of minimal statistical mechanics based models (Schneidman, Berry II, Segev, & Bialek, 2006; Shlens et al., 2006; Tang et al., 2008), as well as point process models defined explicitly through maximum likelihood techniques (Brillinger, 1988; Chornoboy, Schramm, & Karr, 1988; Martignon et al., 2000; Okatan, Wilson, & Brown, 2005; Nykamp, 2007; Pillow et al., 2008; Truccolo, Hochberg, & Donoghue, 2010).
In the case of models based on statistical mechanics considerations, joint probability densities of networks of neurons can be constructed assuming maximum entropy distributions consistent with no higher than pairwise correlations (for example), as neurons respond to naturalistic external stimuli (Schneidman et al., 2006), non-naturalistic external stimuli (Schneidman et al., 2006; Shlens et al., 2006) as well as for neurons undergoing spontaneous activity (Schneidman et al., 2006; Tang et al., 2008). The resulting models suggest that pairwise interactions capture significant fractions of the information contained in the states adopted by the network. In the case of the point process models, including the intrinsic and extrinsic spiking histories for up to 100ms, for roughly 20–200 neurons, allows one to successfully predict the spiking activities of single neurons in the experimentally measured ensemble (Truccolo et al., 2010). Notably, in this case, predictability is determined when the external stimulus involves performance of behavioral tasks.
Coupled together, these results suggest the existence of strongly coordinated activity in small networks of neurons across a broad range of neuronal preparations, in a variety of stimulus paradigms. We investigate whether this coordinated activity can be accounted for by random fluctuations in locally connected model networks. We make progress towards addressing this question using point process modeling in the style of Truccolo et al. (2010). We show that in a recently developed computational model of cortex (Anderson, Kudela, Weinberg, Bergey, & Franaszczuk, 2009, 2007), there exists significant predictive power in small randomly sampled ensembles of neurons driven solely by random background inputs. These inputs drive coordinated activity, which we measure through predictability of single-neuron spiking, and we conjecture that this baseline predictability, which in effect constitutes a lower bound to predictability, is a feature of locally connected networks considered quite generally.
2 The computational model
We institute our prediction scheme on a recently developed spiking neural network simulation with realistic cortical architecture (Anderson et al., 2009, 2007) which displays network bursting dynamics in response to random background inputs. The details of this model have been reported in Anderson et al. (2009) and Anderson et al. (2007); for completeness we collect salient details relevant to our prediction scheme here, as well as in the appendix.
2.1 The network
The version of the model we investigate here has a total of 65,536 single-compartment neurons operating on a modified Hodgkin-Huxley scheme (Av-Ron, Parnas, & Segel, 1993), including spherically symmetric calcium diffusion, and synaptic currents (Kudela, Franaszczuk, & Bergey, 2003). The network represents a surface area of 1.6mm × 1.6mm of simulated cortex, with a thickness of 2.34mm. Neurons are arranged in a repeating lattice of 64 × 64 minicolumns, with center to center spacing between these minicolumns set at 25µm. Each minicolumn contains 16 neurons spanning 7 different classes organized in cortical layers. Layer II/III contains 4 excitatory pyramidal cells, 1 inhibitory double bouquet cell, and 1 inhibitory basket cell. Layer IV contains 1 excitatory stellate cell and 1 inhibitory chandelier cell. Layers V and VI each contain 4 excitatory pyramidal cells. Intracolumnar wiring is adapted from visual cortical models and generic somatosensory models (Douglas & Martin, 2004; Nieuwenhuys, 1994). Extracolumnar wiring is based on histological data for the numbers of contacted postsynaptic cells, and is generally isotropic in space within the plane of the postsynaptic cell. The spatial distributions of connections are typically flat out to 200–300µm, with an exponential tail that can extend to 1mm in the case of inhibitory basket cells.
For the purpose of our prediction scheme, we chose random subsets of 20 and 40 layer II/III pyramidal cells from the central two thirds of the model, representing single-neuron recordings from a restricted patch of the simulated cortical surface. Figure 1 schematically displays the relative locations of each cell in both of these ensembles.
2.2 Background input and network activity
The network is driven by a spatially distributed Poisson background which affects 1% of the modeled neurons. A synaptic current injection causes these neurons to spike with high probability, with the sequence of input pulses for any one neuron constituting a Poisson process. The baseline synaptic input rate for each affected cell is 2Hz. The background input applied to different neurons is uncorrelated (Anderson et al., 2009).
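To make the drive concrete, the background input to any one affected cell can be sketched as a homogeneous Poisson process; the function and variable names below are our own, and the pulse times stand in for the model's synaptic current injections.

```python
import numpy as np

def poisson_input_times(rate_hz, duration_s, rng):
    """Pulse times for one neuron: a homogeneous Poisson process, built
    from exponential inter-pulse intervals with mean 1/rate."""
    times = []
    t = rng.exponential(1.0 / rate_hz)
    while t < duration_s:
        times.append(t)
        t += rng.exponential(1.0 / rate_hz)
    return np.array(times)

rng = np.random.default_rng(0)
n_neurons = 1000                      # illustrative network size, not the model's
driven = rng.choice(n_neurons, size=n_neurons // 100, replace=False)  # ~1% of cells
trains = {i: poisson_input_times(2.0, 300.0, rng) for i in driven}    # 2Hz baseline
```

Because each driven neuron draws its own interval sequence, the inputs to different neurons are uncorrelated by construction, as in the model.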
The resulting activity alternates in a random fashion between periods of bursting and quiescence. Bursting is characterized by a flat frequency spectrum. Varying the pattern of connections or sequence of background inputs produces similar random activity. Alterations in the frequency of the applied background input, or in the number of connections between cells, change this activity in a graded fashion, with higher connectivity and higher background input frequency producing constant bursting behavior.
In this paper, we probe two separate random patterns of connectivity in the network. In the first, which we will refer to as ‘connection pattern 1’, each layer II/III pyramidal cell contacts at most 178 other layer II/III pyramidal cells out to a radius of 300µm (≈10% of all layer II/III pyramidal cells encountered within that radius). The resulting network behavior during a bursting period, for both the 20 and 40 layer II/III pyramidal cell ensembles, is displayed in Figure 2(a,b) respectively. The model run for connection pattern 1 lasted 330s and was sampled at 1kHz. The extremal Pearson correlation coefficient is defined as the coefficient with the greatest absolute value over all time lags considered (see the footnote for details of its computation).
In the second pattern of connectivity (‘connection pattern 2’), each layer II/III pyramidal cell contacts at most 112 other layer II/III pyramidal cells out to a radius of 300µm (≈6% of all layer II/III pyramidal cells encountered). Resulting network behavior during a bursting period is displayed in Figure 3. The model run for connection pattern 2 lasted 300s and was also sampled at 1kHz. The extremal correlation coefficient in either subset is less than 0.25. We note that intra- and extracolumnar wiring for both patterns of connectivity, for all other pairs of cells, was identical.
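The extremal Pearson correlation coefficient referred to above can be sketched as follows; the series here are illustrative binned spike counts, and the function name is our own.

```python
import numpy as np

def extremal_pearson(x, y, max_lag):
    """Pearson correlation of two binned spike-count series at every lag in
    [-max_lag, max_lag]; returns the coefficient of greatest absolute value."""
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[lag:len(x)], y[:len(y) - lag]
        else:
            a, b = x[:len(x) + lag], y[-lag:]
        if a.std() == 0.0 or b.std() == 0.0:
            continue  # correlation undefined for a constant segment
        r = np.corrcoef(a, b)[0, 1]
        if abs(r) > abs(best):
            best = r
    return best

# Toy check: a train and a 3-bin-delayed copy correlate perfectly at one lag.
base = np.random.default_rng(1).poisson(1.0, 1000).astype(float)
shifted = np.concatenate([np.zeros(3), base[:-3]])
r_ext = extremal_pearson(base, shifted, 5)
```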
3 Inference and prediction formalism
The statistical framework within which we propose to perform single-neuron spike prediction is that of point process modeling. In summary, we are interested in writing down a statistical model describing the spiking behavior of a single neuron assuming this model is parametrically related to covariates which we choose. These parameters are set by optimizing the likelihood of the spike train, namely, by maximizing the joint probability distribution of the spiking activity subject to regularization constraints on the parameters. This joint probability distribution can be written in terms of the conditional intensity function of the process whose functional form we need to explicitly specify. We outline the pertinent results from the discrete-time point process analysis which we lean on to perform prediction. The reader is referred to Truccolo, Eden, Fellows, Donoghue, & Brown (2005), Truccolo et al. (2010), Brown, Barbieri, Eden, & Frank (2004), Truccolo (2010), and Daley & Vere-Jones (2003) for further details.
3.1 Discrete-time point processes
Consider an experiment of length T ranging over the time interval (0, T], over which we observe spiking activity for an ensemble of C cells. We partition the interval into K equal subintervals of length Δ = T/K, choosing Δ small enough that at most one spike falls in any one bin (here Δ = 1ms, matching the 1kHz sampling of the model). We are interested in constructing the joint probability distribution of the observed spike train for a single neuron in our ensemble, neuron c say, and we assume we are free to use information from the other neurons in the ensemble to help us. Let N_c(t) represent the number of spikes which have occurred up to and including time t for neuron c, i.e., in the interval (0, t]. Then the quantity
ΔN_k^c = N_c(kΔ) − N_c((k−1)Δ)        (1)
takes the value 1 if a spike occurred in the time interval ((k−1)Δ, kΔ], or 0 if a spike did not occur in that interval for neuron c. Simply put, this counts the number of spikes in the k-th bin for neuron c. Let H_k contain the history of neural spiking for all neurons up to but not including the k-th bin, and denote the conditional intensity for neuron c in the k-th bin by λ_c(kΔ | H_k, θ), where θ contains the parameters we define the conditional intensity to depend upon. Then one can write the probability of neuron c firing a spike in the k-th bin, (k−1)Δ to kΔ, as (Truccolo et al., 2005, 2010; Brown et al., 2004; Truccolo, 2010),
Pr(ΔN_k^c = 1 | H_k, θ) = λ_c(kΔ | H_k, θ) Δ        (2)
This allows one to write down the joint probability distribution for the spike train of neuron c (i.e., the likelihood) over the observed time interval (0, T], as
Pr({ΔN_k^c}_{k=1}^{K} | θ) = ∏_{k=1}^{K} [λ_c(kΔ | H_k, θ)Δ]^{ΔN_k^c} [1 − λ_c(kΔ | H_k, θ)Δ]^{1 − ΔN_k^c}        (3)
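A compact numerical rendering of Eqs. (1)–(3): binning a spike train at resolution Δ and evaluating the log of the discrete-time likelihood given per-bin spike probabilities λΔ. The function names and the toy inputs are our own.

```python
import numpy as np

def bin_spike_train(spike_times, duration, delta):
    """Increment process of Eq. (1): entry k is 1 iff a spike fell in bin k.
    delta must be small enough that no bin holds more than one spike."""
    n_bins = int(round(duration / delta))
    dn = np.zeros(n_bins, dtype=int)
    idx = np.minimum((np.asarray(spike_times) / delta).astype(int), n_bins - 1)
    dn[idx] = 1
    return dn

def log_likelihood(dn, p):
    """Log of the Eq. (3) product over bins, with p_k = lambda_k * Delta."""
    p = np.clip(p, 1e-12, 1.0 - 1e-12)  # guard the logarithms
    return float(np.sum(dn * np.log(p) + (1 - dn) * np.log(1.0 - p)))

dn = bin_spike_train([0.5, 2.5], duration=4.0, delta=1.0)  # times in ms
ll = log_likelihood(dn, np.full(4, 0.5))
```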
3.2 Logistic conditional intensity function
What remains to be specified then is the precise functional form of the conditional intensity function λ_c(kΔ | H_k, θ), together with the way in which any regularization parameters we may want to use enter into the formalism.
We follow the spiking histories of the C neurons in 1ms bins and consider the influence of previous spikes over a time scale of 100ms (as in Truccolo et al., 2010). Denoting the spike train of cell c′ up to but not including the k-th bin by H_k^{c′}, the histories of our ensemble of neurons is H_k = {H_k^{c′}}_{c′=1}^{C}. We project the intrinsic spike train (cell c) onto n_int raised cosine functions of 100ms in length, and each of the external cells' spike trains onto n_ext such functions (Pillow, Paninski, Uzzell, Simoncelli, & Chichilnisky, 2005; Pillow et al., 2008; Truccolo et al., 2010). We introduce a parameter for each of the raised cosine functions and propose to utilize a logistic form for the conditional intensity (Zhao & Iyengar, 2010). Putting this together gives
λ_c(kΔ | H_k, θ) Δ = 1 / ( 1 + exp[ −( γ_0 + Σ_{i=1}^{n_int} γ_i (b_i ⋅ H_k^c) + Σ_{c′≠c} Σ_{j=1}^{n_ext} β_j^{c′} (v_j ⋅ H_k^{c′}) ) ] )        (4)
where b_i ⋅ H_k^c denotes the projection of the previous 100ms of cell c's spiking history onto the i-th intrinsic basis function (and similarly v_j ⋅ H_k^{c′} for the extrinsic basis functions), the free parameters in the problem number 1 + n_int + (C − 1)n_ext, and γ_0 represents a term related to the background firing rate of cell c.
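Given the basis-projected history covariates, the logistic conditional intensity reduces to a sigmoid of a weighted sum; a minimal sketch (our own naming), assuming the covariates have already been computed:

```python
import numpy as np

def logistic_intensity(gamma0, theta, x):
    """Per-bin spike probability lambda*Delta under the logistic link of
    Eq. (4): logit(lambda*Delta) = gamma0 + theta . x, where x stacks the
    raised-cosine projections of intrinsic and extrinsic histories."""
    return 1.0 / (1.0 + np.exp(-(gamma0 + x @ theta)))

p_zero = logistic_intensity(0.0, np.zeros(3), np.ones(3))   # no history influence
p_high = logistic_intensity(2.0, np.zeros(3), np.ones(3))   # elevated background term
```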
3.3 Raised cosine functions
The functions b_i and v_j introduced above partially overlap over the 100ms of history, with one function in each of the two subsets of functions tapering off to zero just prior to 100ms in the past. They were derived from code associated with Pillow et al. (2008), available online at http://pillowlab.cps.utexas.edu/code_GLM.html. We provide in this subsection one way to piece together our implementation of the code accompanying Pillow et al. (2008), as well as the equations presented therein for generating these functions.
In the intrinsic history case, we construct a total of n_int functions. Setting Δ = 1ms, and discretizing the time axis so that t ∈ {1, 2, …, 100}ms, gives a total of 100 points which are used to encode the previous 100ms of the neuron's own spiking history (in 1ms bins). A refractory period of 2ms is enforced using a dedicated history function supported on the first 2ms (Truccolo et al., 2010). The remaining functions are raised cosine bumps on a nonlinearly stretched time axis, b_i(t) = (1/2)[cos(φ(t) − φ_i) + 1] wherever |φ(t) − φ_i| ≤ π and 0 otherwise, with φ(t) = a log(t + c). Here, a is a parameter controlling the nonlinear stretching of the time axis, and the peak positions φ_i are spaced evenly between φ_first and φ_last, which control the positions of the peaks of the first and last raised cosine bumps respectively. Two further amendments, detailed in the accompanying code, adjust the boundary behavior of these functions. The resulting 100-element vectors then correspond to the functions b_i so defined on our discretized lattice, with the values of a, c, φ_first, and φ_last set by hand. Figure 4a displays the intrinsic history functions as generated through the code accompanying Pillow et al. (2008) (and as directly used in the computations presented in our manuscript).
For the extrinsic history case, we construct a total of n_ext such functions. Again setting Δ = 1ms, and discretizing the time axis in the same way, we form raised cosine functions v_j(t) of the same log-stretched form, with their own stretch parameter and peak positions, whose values are again set by hand. Figure 4b displays the corresponding extrinsic history functions as generated through the code accompanying Pillow et al. (2008).
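A sketch of log-stretched raised cosine bumps in the spirit of the construction above; the stretch constant and peak positions below are illustrative stand-ins for the values we set by hand, not the values used in our computations.

```python
import numpy as np

def raised_cosine_basis(n_basis, n_bins, c=1.0, peak_first=2.0, peak_last=60.0):
    """Raised cosine bumps spaced evenly on the log-stretched axis log(t + c),
    in the style of Pillow et al. (2008); constants here are illustrative."""
    t = np.arange(1, n_bins + 1, dtype=float)
    phi = np.log(t + c)                          # nonlinearly stretched time
    centers = np.linspace(np.log(peak_first + c), np.log(peak_last + c), n_basis)
    width = centers[1] - centers[0] if n_basis > 1 else 1.0
    basis = np.zeros((n_basis, n_bins))
    for j, cj in enumerate(centers):
        # clip to [-pi, pi] so each bump vanishes outside its support
        arg = np.clip((phi - cj) * np.pi / (2.0 * width), -np.pi, np.pi)
        basis[j] = 0.5 * (np.cos(arg) + 1.0)
    return basis

B = raised_cosine_basis(4, 100)   # 4 bumps over 100 bins of 1ms history
```

Early bumps are narrow and late bumps broad, so recent history is resolved finely while older history is summarized coarsely.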
3.4 Optimization scheme
We set the parameters θ through a scheme effectively implementing maximum a-posteriori estimation with a Gaussian prior over the parameters:
θ̂ = argmax_θ { log Pr({ΔN_k^c}_{k=1}^{K} | θ) − α_int Σ_i γ_i² − α_ext Σ_{c′≠c} Σ_j (β_j^{c′})² }        (5)
where α_int represents a parameter controlling the regularization of the parameters related to cell c's intrinsic history, and α_ext controls the regularization of the parameters related to the extrinsic histories. We note that we do not regularize over the parameter premultiplying the refractory function: this constant premultiplies the history function chosen to enforce the 2ms refractory period in the model (Truccolo et al., 2010). For the sake of computational efficiency, we set the parameters α_int and α_ext by hand, effectively probing a subspace of the support of the objective function Eq. (5). In particular, we chose the extrinsic penalty to be the stronger of the two, to probe the regime where the external cells in the model served as perturbations to our underlying predictive capability based solely on a neuron's own intrinsic spiking history. In this way our results represent lower bounds on predictive capabilities in this theoretical setting. Once we have chosen the optimal parameters in this way, one can use Eq. (2) to compute the predicted probabilities of spiking.
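The optimization can be sketched as L2-penalized (MAP) logistic regression. The toy gradient-descent fit below is far simpler than the truncated-Newton solver we actually used (see the footnotes), uses a single penalty in place of separate intrinsic and extrinsic strengths, and leaves the intercept unpenalized in the spirit of the unregularized background-rate term.

```python
import numpy as np

def fit_map_logistic(X, y, alpha, n_iter=2000, lr=0.5):
    """Gradient descent on the L2-penalized mean negative log-likelihood.
    alpha plays the role of the regularization strengths (the paper uses
    separate intrinsic and extrinsic penalties); the intercept b is free."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y) / n + alpha * w)
        b -= lr * np.mean(p - y)
    return w, b

# Toy covariates: labels determined by the sign of x1 + x2.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w, b = fit_map_logistic(X, y, alpha=0.01)
acc = np.mean(((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5) == (y > 0.5))
```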
4 Establishing a lower bound on predictability
The prediction scheme described in section 3 leads to estimates for the parameters θ of the problem. Once these parameters have been set, one can compute the intrinsic and extrinsic history filters which quantify the influence of prior spikes on the probability of spiking for any target neuron. Figure 5 displays schematic temporal filters for a single neuron out of the 20 cell subset, and a single neuron out of the 40 cell subset, for connection pattern 1. The parameters which determine these displayed filters are as computed for one fold of the m-fold cross validation schemes we used to generate results in this paper (with the number of folds chosen separately for connection patterns 1 and 2). We display the analogous plot for connection pattern 2 in Figure 6.
Applying temporal filters to our assumed form for the conditional intensity function Eq. (4), and substituting the resulting expression into that for the probability of spiking Eq. (2), gives predicted conditional probabilities for spiking given recordings over the previous 100ms of activity in the ensemble. An example of the conditional probabilities thus generated, for both connection patterns, is presented in Figures 7 and 8.
Applying thresholds to the conditional probabilities calculated as in Figures 7 and 8, we computed cross validated receiver operating characteristic (ROC) curves (Fawcett, 2006; Truccolo et al., 2010). ROC curves for two cells are shown in Figures 9(a,b) and 10(a,b), taken from both the 20 and 40 cell subsets, for both connection patterns.
As in Truccolo et al. (2010), we measure predictive performance by computing the area under the ROC curve (AUC), as well as the chance level AUC (denoted by AUC*), obtained here by randomly shuffling the spike trains of the cells under consideration, and then define predictive power as a normalized difference between AUC and AUC*. A predictive power of 0 signals chance level performance, whereas a predictive power close to 1 signals almost perfect performance. Histograms for the 20 cell case for both connection patterns are shown in Figures 9(c) and 10(c) and for the 40 cell case in Figures 9(d) and 10(d). Typical AUC* values were 0.50. We distinguish between the case where the optimization is done solely with the intrinsic spiking history of the target cell (shown in gray in Figures 9 and 10) and the case where the optimization is done with the inclusion of the remaining cells in the ensemble (shown in black in Figures 9 and 10).
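The AUC can be computed via the rank-based (Mann–Whitney) identity; the explicit normalization of predictive power below is one convention consistent with the description in the text (0 at chance, near 1 at perfect), not a formula quoted from Truccolo et al. (2010).

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank statistic
    (ties in scores are not handled in this sketch)."""
    order = np.argsort(scores)
    ranks = np.empty(scores.size)
    ranks[order] = np.arange(1, scores.size + 1)
    n_pos = int(labels.sum())
    n_neg = labels.size - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def predictive_power(auc_model, auc_chance):
    """One convention matching the text's description: 0 at chance,
    close to 1 for near-perfect prediction (an assumption, not quoted)."""
    return 2.0 * (auc_model - auc_chance)

scores = np.array([0.1, 0.2, 0.8, 0.9])   # predicted spike probabilities
labels = np.array([0, 0, 1, 1])           # observed spike / no spike
pp = predictive_power(auc(scores, labels), 0.5)
```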
For the 20 cell subset of connection pattern 1, considering only intrinsic spiking histories, we obtained predictive powers in the range of 0.32–0.45 with a mean of 0.38; in the full ensemble history case, we obtained values in the range of 0.47–0.64 with a mean of 0.57. For the 40 cell subset of connection pattern 1, predictive powers for the intrinsic spiking history case yielded a range of 0.34–0.48 with a mean of 0.41; whereas in the full ensemble history case, the predictive power ranged between 0.48–0.74 with a mean of 0.64.
For the 20 cell subset of connection pattern 2, considering only intrinsic spiking histories, we obtained predictive powers in the range of 0.26–0.56 with a mean of 0.44; in the full ensemble history case, we obtained values in the range of 0.54–0.88 with a mean of 0.77. For the 40 cell subset of connection pattern 2, predictive powers for the intrinsic spiking history case yielded a range of 0.08–0.57 with a mean of 0.42; whereas in the full ensemble history case, the predictive power ranged between 0.46–0.87 with a mean of 0.78. We find that the pattern of connectivity chosen does impact the predictive power.
A natural question which follows is whether one needs the full ensemble of cells to attain the full ensemble predictive power for any particular target cell under consideration. To address this question, we analyzed randomly chosen subsets of 5 cells from the 20 and 40 cell subsets for both connection patterns. For each cell analyzed, we monitored the change in predictive power which occurred as a result of adding a single cell from the remainder of the subset, without replacement, until all extrinsic cells from the subset had been added to the prediction scheme. The particular order in which cells were added was determined by an estimate of the potential influence they would have on the predictive power of the target cell. The results for one cell out of the chosen subset of 5, for each subset and from each connection pattern, are shown in Figure 11. We note that a subset of the full ensemble of cells is sufficient to attain a sizable fraction of the maximal predictive power achieved with all cells from the ensemble included. This was borne out for all the cells analyzed.
5 Discussion
The existence of coordinated responses in small ensembles of nearby neurons has received renewed attention, in large part due to a growing number of recent experimental recordings employing a variety of theoretical formalisms (Schneidman et al., 2006; Shlens et al., 2006; Tang et al., 2008; Yu, Huang, Singer, & Nikolić, 2008; Truccolo et al., 2010). We investigated one aspect of this coordination through analyzing the predictive power inherent in the spiking histories of small ensembles of neurons in a computational model of cortex utilizing a point process framework (Truccolo et al., 2005, 2010).
Our main finding is that surprising predictive power exists in small ensembles of modeled cortical neurons (20–40 neurons) where these cells are driven solely by random background inputs. A small fraction of the neurons in the model were subject to a synaptic current source large enough to drive an action potential in the neuron with high probability, with the times of synaptic current injection following a Poisson process. This background activity was chosen to mimic random inputs to our section of modeled cortex from external sources. This predictive power points to a strong redundancy in the neural code, and in particular highlights the possibility of error correcting properties at the neuronal level (Schneidman et al., 2006; Truccolo et al., 2010).
We observe interesting effects which manifest due to the connectivity patterns studied. Connectivity pattern 1 is based on a more densely connected network, where layer II/III pyramidal cells generally synapse onto a greater number of layer II/III pyramidal cells in the network (at most 178 for pattern 1 vs 112 for pattern 2). The full ensemble predictive powers attained on average are, however, less for connectivity pattern 1 than those attained for pattern 2. A partial explanation for this fact is that for the less densely connected network (pattern 2), the pairwise correlation structure is stronger (comparing Figures 2 and 3). In a similar vein, it seems that in the less connected, but more strongly correlated network (pattern 2), fewer extrinsic cells needed to be added to reach sizable fractions of the full ensemble predictive power (see Figure 11). Although the order in which cells were added was not necessarily the one which maximized the increase in predictive power at each addition, the fact of requiring only a subset of the full ensemble of cells was evident in all cases considered. The small oscillations in the predictive power one observes in Figure 11 once one reaches close to the full ensemble predictive power (in Figure 11(b,c,d)) lie within our estimates of the error in the computation of the predictive power (Hanley & McNeil, 1982). It should be noted that the result that one can reach the full potential predictive power of an ensemble of neurons by observing subgroups of the ensemble has also been alluded to previously (Schneidman et al., 2006; Tkačik, Prentice, Balasubramanian, & Schneidman, 2010).
Our results add to the discussion first raised by Truccolo et al. (2010). There, it was shown that small ensembles of neurons in arm-related areas of sensorimotor cortex could reliably predict single-neuron spiking in humans and monkeys performing a range of sensorimotor tasks. They hypothesized that this predictive capability reflected behavior-related encoding, which could not be entirely accounted for by common inputs driving their networks. We show, in the light of our results above, that comparable predictive capabilities lie in locally connected computational networks responding to sparsely distributed random background inputs.
Important differences between our computational model and the recordings outlined in Truccolo et al. (2010) make a direct comparison difficult. We have chosen our ensemble of cells from the same layer of modeled cortex whereas the results of Truccolo et al. (2010) most likely sample cells from multiple layers. The cross-sectional area from which we have sampled neurons is also likely to have been smaller. From the point of view of network activity, our model displays distinct periods of bursting, which together with our strategy for choosing our ensembles of neurons, might be responsible for the relatively stronger pairwise correlations we observe. Addressing some of these differences, such as randomizing over layers and sampling neurons spaced further apart (though of course our model is of limited size), should decrease the magnitude of the substantial predictive power we find, though we doubt this would eliminate its existence altogether. The question of the interpretation of our results is then intimately connected to the (open) question of how to interpret computational models themselves.
That we find predictive power in a separate system not ostensibly driven by strongly correlated inputs is clear. If one accepts that our network and its inputs are not reflective of a piece of motor cortex (for example) performing behavior related computations, then we have at the very least constructed a realistic system in which significant predictive power exists. Predictive power is then not a unique signature of behavior, though it of course may be implied by it. Furthermore, given the architecture of the computational model studied, our results hint that this predictive power may be a generic property of locally connected networks, and our model then provides an explicit, realistic structure within which one can begin to unravel the origins of this strongly correlated behavior. An alternative to this view is that our network is reflective of a piece of motor cortex performing behavior related computations, in which case our results provide further support to those of Truccolo et al. (2010), together then with an architecture through which such coordinated activity results.
In this way, we promote the interpretation that locally interconnected networks of neurons contain predictive power such that each unit in the network need not be monitored to obtain knowledge about the collective state of the ensemble. A number of avenues of pursuit present themselves in consequence. As pointed out in Truccolo et al. (2010), the distribution we have constructed does not constitute a minimal model for the system; it simply illustrates that predictive power exists. An interesting question is whether one can do better in terms of prediction through instituting a simpler model, whether it be constructed within the point process framework of Truccolo et al. (2005, 2010), in terms of the maximum entropy approaches presented in the style of Schneidman et al. (2006), Shlens et al. (2006), Tang et al. (2008), and Marre et al. (2009), or indeed through lower dimensional state-space methods as discussed in Stevenson & Kording (2011). The point process formalism utilized for prediction was not explicitly tuned to necessarily generate optimal results, and the question of how well such inferential techniques do in terms of extracting available information from spike trains remains an interesting (open) question. In addition, the computational model chosen for analysis was also not explicitly tuned to generate the levels of predictive power we observed. How the parameter space of the computational model relates to the observed predictive power is also work for future consideration. The relationship of predictive power to connectivity is another interesting line of inquiry. One imagines there is a relationship between the temporal filters one attains and the connectivity of the network. Whether one can reverse the process and infer connectivity from predictive power would be of significant interest (Okatan et al., 2005; Nykamp, 2007; Kim, Putrino, Ghosh, & Brown, 2011; Chen, Putrino, Ghosh, Barbieri, & Brown, 2011).
Indeed the setting of a reduced computational model, where connectivity and network architecture are under direct control, would be a promising place to begin.
Acknowledgments
We thank Piotr Franaszczuk and Sridevi Sarma for discussions. Work at Harvard and at Johns Hopkins was supported in part by the Charles H. Hood Foundation and by the NIH (1K08NS066099-01A1).
Appendix: The computational model
The computational model utilized here constitutes a version of that described in Anderson et al. (2007, 2009) and Anderson et al. (2012). Appendix A of Anderson et al. (2007) contains a more complete description of a reduced version of the model under consideration, though for completeness we highlight portions of their discussion relevant to the model used here. This earlier version of the model contains the same cell types (as discussed below) and cell classes (layer II/III pyramidal, etc.), arranged in the same columnar structure as for the later version of the model (as featured in Anderson et al. (2009, 2012), and studied in this manuscript), together with similar intrinsic and extrinsic connectivity schemes. The main difference is the increased number of minicolumns arranged contiguously, taking the total number of cells from 16,384 to 65,536. We note also that the two models referred to above have been posted to the Yale SenseLab ModelDB database of computational neuroscience models (accession numbers 98902 and 141507) at http://senselab.med.yale.edu/modeldb/.
Each of the 65,536 neurons is modeled in single-compartment format in accord with the excitable membrane model of Hodgkin & Huxley (1952), as modified by Av-Ron et al. (1993) and Av-Ron (1994). The membrane potential V for each cell varies as
C_m dV/dt = I_syn − I_Na − I_Ca − I_K − I_K(Ca) − I_A − I_L        (6)
where C_m is the membrane capacitance, I_syn is the input synaptic current, I_Na is the inward sodium current, I_Ca is the inward calcium current, I_K is the outward potassium current which includes a delayed rectifier current, I_K(Ca) is a calcium dependent potassium current, I_A is a transient potassium current, and I_L represents a leak current. The voltage dependence of these currents is presented in the appendix of Anderson et al. (2007), to which we refer the reader. The synaptic current for any neuron is determined through a weighted linear combination over all synapses, where
I_syn(t) = Σ_j w_j g_j(t − τ_j) (E_j − V(t))        (7)
The sum extends over all synapses onto the neuron, w_j is a weight which modulates the contribution of the j-th synapse, and g_j is the conductance function for synapse j, which can depend upon a delay τ_j. The term E_j is the reversal potential utilized in the driving function for synapse j. The conductance functions can be expressed as
g_j(t) = ḡ_j Σ_n [ exp(−(t − t_n)/τ_decay) − exp(−(t − t_n)/τ_onset) ]        (8)
where ḡ_j is a conductance constant which is adjusted to provide each postsynaptic potential with an equivalent amount of injected charge integrated over time. The sum runs over the non-negligible past action potentials received by synapse j (with arrival times t_n), and τ_decay and τ_onset are the decay and onset times for a given postsynaptic potential respectively.
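The conductance in Eq. (8) can be evaluated directly; the sketch below uses illustrative parameter values, not those of the model.

```python
import numpy as np

def synaptic_conductance(t, spike_times, g_max, tau_decay, tau_onset):
    """Difference-of-exponentials conductance in the spirit of Eq. (8),
    summed over past presynaptic spikes; g_max and tau_* are illustrative."""
    g = np.zeros_like(t)
    for ts in spike_times:
        dt = t - ts
        active = dt > 0                  # no contribution before the spike arrives
        g[active] += g_max * (np.exp(-dt[active] / tau_decay)
                              - np.exp(-dt[active] / tau_onset))
    return g

t = np.linspace(0.0, 50.0, 501)          # ms
g = synaptic_conductance(t, [5.0], g_max=1.0, tau_decay=5.0, tau_onset=1.0)
```

With tau_decay greater than tau_onset, each spike contributes a conductance transient that rises quickly and decays slowly, as expected for a postsynaptic potential.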
Each neuron was modeled as one of three different types: regular spiking, intrinsic bursting, or fast spiking, depending upon its intrinsic electrophysiological properties. Model parameters which characterize each one of these types are presented in Table 2 of Anderson et al. (2007). The layer II/III pyramidal cell ensembles utilized in this paper were of the regular spiking type. Figure 1a of Anderson et al. (2007) displays their response to an internal current injection, which notably exhibits a lack of self-excitation after the source is shut off. Specific parameter values governing their dynamics are presented in the second column of Table 2 in Anderson et al. (2007).
Connectivity between cells is described in detail in sections 2.1 and 2.2 of Anderson et al. (2007), and in Anderson et al. (2009). Most pertinent to our discussion here is the number of layer II/III pyramidal cells contacted by each layer II/III pyramidal cell in our chosen connectivity scheme. By way of example, for ‘connection pattern 1’, layer II/III pyramidal cells make isotropic connections outside their minicolumn to a radius of 300 μm, contacting a total of at most 178 layer II/III pyramidal cells along the way. The layer II/III pyramidal cells also contact (extrinsic to their own minicolumn) the layer IV stellate cells, layer V pyramidal cells, and basket and chandelier cells. Anderson et al. (2007) describe (in section 2) a basic connectivity pattern which is then varied, via multiplicative factors acting on the total numbers of connections in the model, to reproduce a variety of bursting behaviors over a range of frequencies. In this paper, we have set the relative numbers of inhibitory to excitatory connections to most resemble pattern C (Table 1, Anderson et al. (2007)). For ‘connection pattern 2’, the total number of layer II/III pyramidal cells contacted by each layer II/III pyramidal cell was reduced to at most 112. In both patterns, connections were reduced for cells near the edge of the model to help mitigate boundary reflection effects.
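The isotropic local wiring rule can be sketched as follows, assuming for illustration that cells sit on a regular 2D grid. The grid spacing is an assumption of this sketch, the radius and target cap follow the pattern-1 numbers above, and `local_targets` with its random subsampling rule is our own hypothetical construction rather than the simulator's actual connection algorithm.

```python
import numpy as np

def local_targets(source_xy, positions, radius=300.0, max_targets=178, rng=None):
    """Indices of cells within `radius` (μm) of `source_xy`, excluding the
    source itself, capped at `max_targets` by random subsampling."""
    rng = rng or np.random.default_rng(0)
    d = np.linalg.norm(positions - source_xy, axis=1)
    candidates = np.flatnonzero((d > 0) & (d <= radius))
    if len(candidates) > max_targets:
        candidates = rng.choice(candidates, size=max_targets, replace=False)
    return candidates

# Example: a 20 x 20 sheet of cells on an assumed 50 μm grid
grid = 50.0
xs, ys = np.meshgrid(np.arange(20) * grid, np.arange(20) * grid)
positions = np.column_stack([xs.ravel(), ys.ravel()])
targets = local_targets(positions[210], positions)  # an interior cell
```

Reducing `max_targets` (or `radius`) for cells near the sheet boundary would implement the edge-thinning described above.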
Footnotes
- To be clear, we compute Pearson correlation coefficients for spike counts binned in 1 ms bins at each time lag over the range considered, and then report the correlation coefficient with the greatest absolute value (Welsh, 1999).
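The lag-scanned correlation described in this footnote can be sketched in Python as follows. Here `peak_lagged_correlation` is a hypothetical helper of ours, and the `max_lag` default is an assumed placeholder, since the lag range is not restated here.

```python
import numpy as np

def peak_lagged_correlation(x, y, max_lag=50):
    """Pearson correlation between two equal-length binned spike-count series
    at each integer lag in [-max_lag, max_lag]; returns the (lag, r) pair
    whose coefficient has the greatest absolute value."""
    n = len(x)
    best = (0, 0.0)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[lag:], y[:n - lag]
        else:
            a, b = x[:n + lag], y[-lag:]
        r = np.corrcoef(a, b)[0, 1]
        if abs(r) > abs(best[1]):
            best = (lag, r)
    return best
```

For example, if `y` is a copy of `x` delayed by a few bins, the returned lag recovers that delay (with sign fixed by the slicing convention above) and the coefficient approaches 1.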
- We thank Sridevi Sarma for passing along to us a fast, truncated, L2-regularized binomial logistic regression solver, created by Demba Elimane Ba, MIT Department of EECS, Neuro. Stat. Research Lab (MIT Department of BCS), August 25, 2008.
References
- Anderson, W.S., Azhar, F., Kudela, P., Bergey, G.K., & Franaszczuk, P.J. (2012). Epileptic seizures from abnormal networks: Why some seizures defy predictability. Epilepsy Res., 99, 202–213.
- Anderson, W.S., Kudela, P., Cho, J., Bergey, G.K., & Franaszczuk, P.J. (2007). Studies of stimulus parameters for seizure disruption using neural network simulations. Biol. Cybern., 97, 173–194.
- Anderson, W.S., Kudela, P., Weinberg, S., Bergey, G.K., & Franaszczuk, P.J. (2009). Phase-dependent stimulation effects on bursting activity in a neural network cortical simulation. Epilepsy Res., 84, 42–55.
- Av-Ron, E. (1994). The role of a transient potassium current in a bursting neuron model. J. Math. Biol., 33, 71–87.
- Av-Ron, E., Parnas, H., & Segel, L.A. (1993). A basic biophysical model for bursting neurons. Biol. Cybern., 69, 87–95.
- Brillinger, D.R. (1988). Maximum likelihood analysis of spike trains of interacting nerve cells. Biol. Cybern., 59, 189–200.
- Brown, E.N., Barbieri, R., Eden, U.T., & Frank, L.M. (2004). Likelihood methods for neural spike train data analysis. In J. Feng (Ed.), Computational neuroscience: A comprehensive approach, CRC Press, London.
- Chen, Z., Putrino, D.F., Ghosh, S., Barbieri, R., & Brown, E.N. (2011). Statistical inference for assessing functional connectivity of neuronal ensembles with sparse spiking data. IEEE Trans. Neural Syst. Rehabil. Eng., 19, 121–135.
- Chornoboy, E.S., Schramm, L.P., & Karr, A.F. (1988). Maximum likelihood identification of neural point process systems. Biol. Cybern., 59, 265–275.
- Daley, D.J., & Vere-Jones, D. (2003). An introduction to the theory of point processes, Springer, New York.
- Douglas, R.J., & Martin, K.A.C. (2004). Neuronal circuits of the neocortex. Annu. Rev. Neurosci., 27, 419–451.
- Fawcett, T. (2006). An introduction to ROC analysis. Pattern Recognit. Lett., 27, 861–874.
- Gunning, D., Adams, C., Cunningham, W., Mathieson, K., O’Shea, V., Smith, K.M., Chichilnisky, E.J., Litke, A.M., & Rahman, M. (2005). 30 μm spacing 519-electrode arrays for in vitro retinal studies. Nuc. Inst. Methods A, 546, 148–153.
- Hanley, J.A., & McNeil, B.J. (1982). The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology, 143, 29–36.
- Hodgkin, A.L., & Huxley, A.F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol., 117, 500–544.
- Kim, S., Putrino, D., Ghosh, S., & Brown, E.N. (2011). A Granger causality measure for point process models of ensemble neural spiking activity. PLoS Comput. Biol., 7, e1001110.
- Kudela, P., Franaszczuk, P.J., & Bergey, G.K. (2003). Changing excitation and inhibition in simulated neural networks: effects on induced bursting behavior. Biol. Cybern., 88, 276–285.
- Marre, O., El Boustani, S., Frégnac, Y., & Destexhe, A. (2009). Prediction of spatiotemporal patterns of neural activity from pairwise correlations. Phys. Rev. Lett., 102, 138101.
- Martignon, L., Deco, G., Laskey, K., Diamond, M., Freiwald, W., & Vaadia, E. (2000). Neural coding : higher-order temporal patterns in the neurostatistics of cell assemblies. Neural. Comput., 12, 2621–2653.
- Nieuwenhuys, R. (1994). The neocortex: an overview of its evolutionary development, structural organization and synaptology. Anat. Embryol.(Berl.), 190, 307–337.
- Nykamp, D.Q. (2007). A mathematical framework for inferring connectivity in probabilistic neuronal networks. Math. Biosci., 205, 204–251.
- Okatan, M., Wilson, M.A., & Brown, E.N. (2005). Analyzing functional connectivity using a network likelihood model of ensemble neural spiking activity. Neural. Comput., 17, 1927–1961.
- Pillow, J.W., Paninski, L., Uzzell, V.J., Simoncelli, E.P., & Chichilnisky, E.J. (2005). Prediction and decoding of retinal ganglion cell responses with a probabilistic spiking model. J. Neurosci., 25, 11003–11013.
- Pillow, J.W., Shlens, J., Paninski, L., Sher, A., Litke, A.M., Chichilnisky, E.J., & Simoncelli, E.P. (2008). Spatio-temporal correlations and visual signaling in a complete neuronal population. Nature, 454, 995–999.
- Schneidman, E., Berry II, M.J., Segev, R., & Bialek, W. (2006). Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440, 1007–1012.
- Shlens, J., Field, G.D., Gauthier, J.L., Grivich, M.I., Petrusca, D., Sher, A., Litke, A.M., & Chichilnisky, E.J. (2006). The structure of multi-neuron firing patterns in primate retina. J. Neurosci., 26, 8254–8266.
- Segev, R., Goodhouse, J., Puchalla, J., & Berry II, M.J. (2004). Recording spikes from a large fraction of the ganglion cells in a retinal patch. Nat. Neurosci., 7, 1155–1162.
- Stevenson, I.H., & Kording, K.P. (2011). How advances in neural recording affect data analysis. Nat. Neurosci., 14, 139–142.
- Suner, S., Fellows, M.R., Vargas-Irwin, C., Nakata, G.K., & Donoghue, J.P. (2005). Reliability of signals from a chronically implanted, silicon-based electrode array in non-human primate primary motor cortex. IEEE Trans. Neural Syst. Rehab. Eng., 13, 524–541.
- Tang, A., Jackson, D., Hobbs, J., Chen, W., Smith, J.L., Patel, H., Prieto, A., Petrusca, D., Grivich, M.I., Sher, A., Hottowy, P., Dabrowski, W., Litke, A.M., & Beggs, J.M. (2008). A maximum entropy model applied to spatial and temporal correlations from cortical networks in vitro. J. Neurosci., 28, 505–518.
- Tkačik, G., Prentice, J.S., Balasubramanian, V., & Schneidman, E. (2010). Optimal population coding by noisy spiking neurons. PNAS, 107, 14419–14424.
- Truccolo, W. (2010). Stochastic models for multivariate neural point processes: collective dynamics and neural decoding. In S. Grün & S. Rotter (Eds.) Analysis of parallel spike trains, Springer, New York.
- Truccolo, W., Eden, U.T., Fellows, M.R., Donoghue, J.P., & Brown, E.N. (2005). A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects. J. Neurophysiol., 93, 1074–1089.
- Truccolo, W., Hochberg, L.R., & Donoghue, J.P. (2010). Collective dynamics in human and monkey sensorimotor cortex: predicting single neuron spikes. Nat. Neurosci., 13, 105–111.
- Welsh, W.F. (1999). On the reliability of cross-correlation function lag determinations in active galactic nuclei. PASP, 111, 1347–1366.
- Yu, S., Huang, D., Singer W., & Nikolić, D. (2008). A small world of neural synchrony. Cereb. Cortex, 18, 2891–2901.
- Zhao, M., & Iyengar, S. (2010). Nonconvergence in logistic and Poisson models for neural spiking. Neural Comput., 22, 1231–1244.