Spin glass models for a network of real neurons


Gašper Tkačik,1 Elad Schneidman,2 Michael J. Berry II, and William Bialek
Joseph Henry Laboratories of Physics, Department of Molecular Biology, Lewis–Sigler Institute for Integrative Genomics, and Princeton Center for Theoretical Science
Princeton University, Princeton, New Jersey 08544 USA
1 Present address: Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104–6396.
2 Present address: Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel.
September 16, 2019
Abstract

Ising models with pairwise interactions are the least structured, or maximum–entropy, probability distributions that exactly reproduce measured pairwise correlations between spins. Here we use this equivalence to construct Ising models that describe the correlated spiking activity of populations of 40 neurons in the salamander retina responding to natural movies. We show that pairwise interactions between neurons account for observed higher–order correlations, and that for groups of 10 or more neurons pairwise interactions can no longer be regarded as small perturbations in an independent system. We then construct network ensembles that generalize the network instances observed in the experiment, and study their thermodynamic behavior and coding capacity. Based on this construction, we can also create synthetic networks of 120 neurons, and find that with increasing size the networks operate closer to a critical point and start exhibiting collective behaviors reminiscent of spin glasses. We examine closely two such behaviors that could be relevant for the neural code: tuning of the network to the critical point to maximize the ability to encode diverse stimuli, and using the metastable states of the Ising Hamiltonian as neural code words.

entropy, information, multi–information, neural networks, Monte Carlo, correlation

I Introduction

Physicists have long explored analogies between the statistical mechanics of Ising models and the dynamics of neural networks [1, 2, 3]. The goal of this effort has been not to simulate the details of particular networks, but to understand how interesting functions can emerge, collectively, from large populations of neurons. In the spirit of modern statistical mechanics, one hopes that these collective behaviors will have some degree of universality, and hence that one can make progress without knowing all of the microscopic details of each system. A classic example of this work is the model of associative or content–addressable memory due to Hopfield [1], which is able to recover the correct memory from any of its subparts of sufficient size. Because the computational substrate of neural states in these models was a set of binary 'spins,' and the memories were realized as locally stable states of the network dynamics, methods of statistical physics could be brought to bear on theoretically challenging issues such as the storage capacity of the network or its reliability in the presence of noise [2, 3]. On the other hand, precisely because of these abstractions, it has not always been clear how to bring the predictions of the models into contact with experiment.

Recently it has been suggested that the analogy between Ising models and neural networks can be turned into a precise mapping, and connected to experimental data, using the maximum entropy framework [4]. We imagine a neural system exposed to a stationary stimulus ensemble, in which simultaneous recordings of $N$ neurons can be made. In small windows of time, a single neuron either does ($\sigma_i = +1$) or does not ($\sigma_i = -1$) generate an action potential or "spike" [5]; the state of the entire network in that time bin thus is described by a 'binary word' or spin configuration $\{\sigma_i\}$. As the system responds to its inputs, it visits each of these states with some probability $P(\{\sigma_i\})$. Even before we ask what the different states 'mean,' for example as code words in a representation of the sensory world, specifying this distribution requires us to determine the probability of each of the $2^N$ possible states. In practice, once $N$ increases beyond $\sim 20$, this becomes impossible. The idea of the maximum entropy construction is to measure low order moments of the distribution, such as the average probability of spiking for each cell ($\langle\sigma_i\rangle$) and the correlations between pairs of cells ($\langle\sigma_i\sigma_j\rangle$), where the averages are taken over the course of the experiment, and then search for a probability distribution that matches these experimental measurements but otherwise is as unstructured as possible. Minimizing structure means maximizing entropy [6], and given any set of moments or correlations that we want to match, the form of the maximum entropy distribution is easy to find analytically.

For the particular case we are interested in, where the states of neurons are given by binary variables $\sigma_i = \pm 1$ and we match the mean spike probabilities $\langle\sigma_i\rangle$ and pairwise correlations $\langle\sigma_i\sigma_j\rangle$, the maximum entropy distribution is

P(\{\sigma_i\}) = \frac{1}{Z}\exp\Big[ \sum_i h_i\sigma_i + \frac{1}{2}\sum_{i\neq j} J_{ij}\sigma_i\sigma_j \Big],   (1)

where the "magnetic fields" $h_i$ and the "exchange couplings" $J_{ij}$ have to be set to reproduce the measured values of $\langle\sigma_i\rangle$ and $\langle\sigma_i\sigma_j\rangle$. This is exactly the Ising model with pairwise interactions among the spins. Thus, the maximum entropy construction derives the Ising model from real data, rather than postulating it as an approximation to the underlying dynamics. The construction is not an analogy or metaphor, but an exact mapping—Eq (1) should predict, given the measured correlations between pairs of neurons, the probability of all $2^N$ states of the whole network of $N$ neurons. Further, this is a minimal model, in that the real network can have more structure than predicted by the maximum entropy model, but not less.
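The mapping in Eq (1) is concrete enough to evaluate directly when $N$ is small. The sketch below, which is an illustration rather than part of the original analysis (the field and coupling values are invented), enumerates all $2^N$ states of a toy network, computes the partition function, and recovers $\langle\sigma_i\rangle$ and $\langle\sigma_i\sigma_j\rangle$ by exact summation:

```python
# Illustrative sketch (not from the paper): exact evaluation of the pairwise
# maximum entropy model of Eq (1) for a toy network, small enough that all
# 2**N states can be enumerated.  Field and coupling values are invented.
import itertools
import numpy as np

N = 5                                 # toy size; the real analysis uses N = 40
rng = np.random.default_rng(0)
h = rng.normal(-1.0, 0.2, size=N)     # hypothetical biases toward silence
J = rng.normal(0.0, 0.1, size=(N, N))
J = np.triu(J, 1); J = J + J.T        # symmetric couplings, zero diagonal

states = np.array(list(itertools.product([-1, +1], repeat=N)))  # all 2^N words
energy = -(states @ h) - 0.5 * np.einsum('si,ij,sj->s', states, J, states)
weights = np.exp(-energy)
Z = weights.sum()
P = weights / Z                       # Eq (1): P(sigma) = exp(-E)/Z

mean_sigma = P @ states                                    # <sigma_i>
pair_sigma = np.einsum('s,si,sj->ij', P, states, states)   # <sigma_i sigma_j>
print(mean_sigma)
print(pair_sigma)
```

For $N = 40$ this brute-force enumeration is impossible, which is what forces the Monte Carlo machinery described later in the paper.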

Conceptually, $J_{ij}$ describes the direct mutual interaction between neurons $i$ and $j$ that remains after the contributions from other interactions in the network, through more circuitous paths, have been disentangled from the corresponding correlation $C_{ij}$, and $h_i$ represents the neuron's intrinsic bias towards firing or silence [7, 4]. This construction thus takes us from the experimentally accessible correlation functions $\langle\sigma_i\rangle$ and $\langle\sigma_i\sigma_j\rangle$ to the underlying Hamiltonian, which in turn determines the probability of every binary word in the neural codebook. This path is the inverse of the usual problem in statistical physics, where we take the fields and couplings to be known (or chosen from a known distribution) and we must calculate the observable correlation functions.

The surprising result of Ref [4] is that the pairwise Ising model provides a very accurate description of the combinatorial patterns of spiking and silence, $\{\sigma_i\}$, in ganglion cells of the salamander retina as they respond to natural and artificial movies, and in cortical cell cultures. In other words, the frequency of appearance of all binary patterns across $N$ neurons can be explained by the interactions between pairs of neurons; consequently, Eq (1) only requires $N(N+1)/2$ parameters instead of the original $2^N - 1$ to fully specify the distribution. After the initial success in the salamander retina, similarly encouraging results were obtained in the primate retina, under very different stimulus conditions [8, 9], in visual cortex [10, 11], and in networks grown in vitro [12]. Most of these detailed comparisons of theory and experiment were done for groups of roughly ten neurons, small enough that the full distribution $P(\{\sigma_i\})$ could be sampled experimentally and used to assess the quality of the pairwise maximum entropy model of Eq (1).

We report here on our efforts to move towards larger networks of neurons and explore the kinds of collective effects that might be expected in networks of a few hundred elements in size. Since making the first version of our results available [13], this work has stimulated research into the tractability and approximate methods for solving the maximum entropy problem [14, 15, 16, 17, 18, 19] and generalizations of the pairwise model [23, 20, 21, 22]. There is also growing interest in the use of maximum entropy methods to describe a wider range of biological systems, from protein structure to gene regulatory networks to ecosystems [25, 24, 26, 27, 28, 29]. Here we present a more detailed exposition, arguing that our findings, in addition to providing a good fit to the data, can help us understand how networks of neurons function.

In detail, we extend the maximum entropy results to $N = 40$ neurons using Monte Carlo methods. To motivate the discussion, we start by illustrating how strong collective effects can emerge in a toy mean-field Ising model used in the context of an inverse problem. We then extract realistic Ising models from the data; by studying subnetworks drawn from the full 40-neuron network, we first present an argument for why pairwise models can be successful in accounting for neural data, at least for small enough $N$. We then argue that the observed subnetworks are typical of an ensemble, for which we provide an explicit construction and out of which we can draw larger, synthetic networks. Remarkably, these larger networks seem to be poised very close to a critical point and exhibit a rich vocabulary of locally stable states, allowing us to hypothesize about possible combinatorial coding mechanisms and, as a result, predict two interesting collective behaviors of neural networks that should become visible in the next generation of experiments [30, 31, 32].

II Distributions of output words

As a concrete example, we consider a population of retinal ganglion cells—the neurons which provide the brain with all of its input about the visual world—responding to naturalistic movies. In such experiments, it is conventional to ask for a model which can predict the response of the neurons to arbitrary stimuli, $P(\{\sigma_i\}|{\rm stimulus})$. In the natural setting, stimuli are drawn from a space of very high dimensionality, and so constructing this map from stimuli to responses is very challenging [34, 33, 35, 36]. Alternatively, we can ask for a 'dictionary' that describes the stimuli consistent with particular patterns of activity, $P({\rm stimulus}|\{\sigma_i\})$ [5]. Here we take a very different approach, largely ignoring the visual stimulus and trying to understand the distribution of responses, $P(\{\sigma_i\})$. Before proceeding, we explain why this seemingly more limited problem is of interest.

Even when we measure the correlation between two neurons, $C_{ij}$, the usual approach is to dissect the correlation into contributions which are intrinsic to the network and those which can be ascribed to common, stimulus driven inputs. The idea of decomposing correlations dates back to a time when it was hoped that correlations among spikes could be used to map the synaptic connections between neurons [37]. In fact, in a highly interconnected system, the dominant source of correlations between two neurons—even if they are entirely intrinsic to the network—will always be through the multitude of indirect paths involving other neurons [38]. On the other hand, for neurons in the early stages of sensory processing, it is often suggested that the responses are "conditionally independent" given the stimulus, that is $P(\{\sigma_i\}|{\rm stimulus}) = \prod_i P(\sigma_i|{\rm stimulus})$ [39]. In the particular case considered here, we know that, for populations of ten or more neurons, this model already fails dramatically [4].

The question of whether correlations are driven by the stimulus or are intrinsic to the network is not a question that the brain can answer. We, as external observers, can repeat the stimulus exactly, and search for correlations conditional on the stimulus, but this is not accessible to the organism. The brain has access only to the output of the retina, patterns of activity which are drawn from the distribution $P(\{\sigma_i\})$. If the responses are codewords for the visual stimulus, then the entropy of this distribution sets the capacity of the code to carry information. Word by word, $-\log P(\{\sigma_i\})$ determines how surprised the brain should be by each particular pattern of response, including the possibility that the response was corrupted in transmission and thus should be corrected or ignored [40]. In a very real sense, what the brain 'sees' are sequences of states drawn from the distribution $P(\{\sigma_i\})$. In the same spirit that many groups have studied the statistical structures of natural scenes [41, 43, 42, 44, 20, 45], we would like to understand the statistical structure of the codewords that represent these scenes. We shall see that this structure has features which are suggestive of new ideas about the nature of the retinal code.

III A simple example

We would like to illustrate these ideas with a simple example. Imagine that we record from $N$ neurons, and we find that all of them have the same mean rate of spiking, $\bar r$. Further, if we look at any pair of neurons, the probability of both spiking in the same small window of time is a bit larger than expected if they were independent. To be precise, we choose time windows of duration $\Delta\tau$, so the probability of a spike is $\bar p = \bar r\,\Delta\tau$ and the probability of coincident spikes from two particular cells is $\bar p_2 > \bar p^2$. We want to describe this network as above, with Ising variables $\sigma_i = +1$ for spiking and $\sigma_i = -1$ for silence. Then we have

\langle\sigma_i\rangle = 2\bar p - 1,   (2)
\langle\sigma_i\sigma_j\rangle = 4\bar p_2 - 4\bar p + 1.   (3)

Since all neurons and pairs are equivalent, the maximum entropy model consistent with pairwise correlations has the simpler form,

P(\{\sigma_i\}) = \frac{1}{Z}\exp\Big[ h\sum_i\sigma_i + \frac{J}{2}\sum_{i\neq j}\sigma_i\sigma_j \Big],   (4)

which is just the mean field ferromagnet (assuming that $J$ is positive, and that the temperature $T$, which here does not have a direct physical interpretation, has been set to 1). The solution of the mean field model is well known [46]; here we recall a few details which will be especially important in the context of neurons.

As usual, everything we need to know is encoded in the partition function, which we evaluate by introducing an auxiliary field $\phi$:

Z = \sum_{\{\sigma_i\}} \exp\Big[ h\sum_i\sigma_i + \frac{J}{2}\sum_{i\neq j}\sigma_i\sigma_j \Big]   (5)
\;\propto\; \int d\phi\; e^{-N f(\phi)},   (6)

where the effective free energy is

f(\phi) = \frac{\phi^2}{2NJ} - \ln\big[ 2\cosh(h+\phi) \big].   (7)

If $N$ is large and we hold $NJ$ finite, then the integral in Eq (6) is dominated by the mean field or classical value for $\phi$, defined by the minimum of $f(\phi)$,

\frac{\partial f}{\partial\phi}\bigg|_{\phi^*} = 0 \quad\Longrightarrow\quad \phi^* = NJ\,\tanh(h+\phi^*).   (8)

Evaluating $Z$ to first order in the fluctuations around $\phi^*$, we have

\ln Z = -N f(\phi^*) + \cdots,   (9)

where $\cdots$ refers to terms of higher order in $1/N$ (per spin), again assuming that $NJ$ stays finite as $N$ becomes large.

To fix the values of $h$ and $J$, we need to solve for the expectation values $\langle\sigma_i\rangle$ and $\langle\sigma_i\sigma_j\rangle$:

\langle\sigma_i\rangle = \frac{1}{N}\frac{\partial\ln Z}{\partial h},   (10)
\langle\sigma_i\sigma_j\rangle = \frac{2}{N(N-1)}\frac{\partial\ln Z}{\partial J},   (11)
C \equiv \langle\sigma_i\sigma_j\rangle - \langle\sigma_i\rangle\langle\sigma_j\rangle,   (12)

where, at the saddle point, $\langle\sigma_i\rangle = m \equiv \tanh(h+\phi^*)$. Rearranging and retaining terms to order $1/N$ or larger, we have

C = \frac{J(1-m^2)^2}{1 - NJ(1-m^2)}.   (13)

We recall that we usually require $J$ to scale as $1/N$ at large $N$, because we want to have a well defined thermodynamic limit, in which energy and entropy are extensive quantities. In particular, with $J = \tilde J/N$, we can write the effective free energy in Eq (7) as $f(\phi) = \phi^2/(2\tilde J) - \ln[2\cosh(h+\phi)]$, where $f$ has no explicit $N$ dependence. Another key (and familiar) point is that, with the usual scaling of $\tilde J = NJ$ held constant as $N$ becomes large, the correlations behave as $C \propto 1/N$ [Eq (13)], and in the limit $N\to\infty$, they must decay to zero.

To connect the Ising models to the data, we face a new problem: we do not have direct access to the parameters $h$ and $J$ but rather have to infer them from the measured $\langle\sigma_i\rangle$ and $C$. As we examine recordings of increasingly large subsets of neurons taken from a dense patch of the retina, we could find that instead of decaying to zero, the average pairwise correlation $C$ stays constant. This would mean that $NJ$ cannot be constant as we consider larger groups of neurons, and that hence increasingly large subsets of neurons do not comprise an ensemble with a conventional thermodynamic limit.

It is easy to find an example of such a behavior and gain intuition if we limit ourselves to a (somewhat unrealistic) case when $h = 0$. Then, Eq (8) for the mean field value of $\phi$ has either 1 or 3 solutions, depending on the value of $NJ$ – the $\phi^* = 0$ solution corresponds to the paramagnetic phase, and the non-zero solutions correspond to spontaneous magnetization. The transition from 1 to 3 solutions is the phase transition in the Ising ferromagnet, and it occurs at the critical value $NJ = 1$ in the thermodynamic limit.

Suppose that we are in the paramagnetic phase, with the solution $\phi^* = 0$ and hence $m = 0$. Equation (10), evaluated at the saddle point, then tells us that $\langle\sigma_i\rangle = \tanh(h+\phi^*) = 0$, or $\bar p = 1/2$. If the average correlation is found experimentally to be $C$, then we can compute how $J$ must scale from Eq (13):

J = \frac{C}{1+NC}, \qquad {\rm so~that} \qquad NJ = \frac{NC}{1+NC}.   (14)

As we assumed, for every finite $N$, $NJ$ will stay below the critical value of 1, and thus choosing the solution with $\phi^* = 0$ is self-consistent. We also note, however, that as $N$ increases, the system approaches the critical value $NJ = 1$, regardless of the value of $C$, as long as $C$ does not decrease faster than $1/N$. To detect such an onset of criticality, one could perturb the coupling constants $J$ and $h$ by introducing a fictitious temperature $T$, with $h \to h/T$ and $J \to J/T$, perturbing $T$ around the value 1, and observing the emergence of a peak in the heat capacity near $T = 1$.
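A minimal numerical illustration of Eq (14), under the assumptions of the toy model ($h = 0$ and a fixed, $N$-independent average correlation $C$; the value of $C$ below is arbitrary), shows how the inferred coupling is pushed toward the critical value $NJ = 1$ as $N$ grows:

```python
# Sketch of the scaling argument of Eq (14): for a fixed average correlation C,
# the inferred coupling NJ approaches the critical value 1 as N increases.
# The value of C is an arbitrary placeholder.
import numpy as np

C = 0.01
for N in [10, 40, 120, 500, 2000]:
    NJ = N * C / (1 + N * C)          # Eq (14)
    print(f"N = {N:5d}   NJ = {NJ:.4f}   distance from critical point = {1 - NJ:.4f}")
```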

In the real data the mean firing rates and the correlations are not homogeneous across the population, and the assumption $h = 0$ clearly is not valid, because the neurons spike infrequently, making $\langle\sigma_i\rangle$ close to $-1$, so that our toy model cannot be taken too literally. Nonetheless, this exercise teaches us that one must be careful in transferring our intuition about thermodynamic limits to the case of maximum entropy models for real networks. It is not just that real networks have finite $N$; indeed, many networks probably have sufficiently large $N$ that some sort of large-$N$ approximation is valid. The problem is rather that, since the correlations among pairs of neurons are experimentally measurable, we are not free to let these vary as we imagine recording from more and more neurons. More precisely, having a non–zero correlation $C$ between all pairs in a large, homogeneous network is inconsistent with the large-$N$ limit except at the critical point. This prepares us for the possibility that, even in a more realistic, inhomogeneous system, the common observation of nontrivial correlations among most pairs of neurons points toward an interesting regime of operation.

IV Learning maximum entropy models from real data

We recall that maximum entropy models are the least structured, or most random, models consistent with known expectation values [6, 7]. The Ising model of Eq (1) thus is the minimal model forced upon us by measurements of mean spike probabilities and pairwise correlations. In maximum entropy models that match only the measured mean spike probability but not pairwise correlations (the so-called independent models), the couplings $J_{ij}$ are zero, and the probability distribution factors into independent contributions from each neuron. As has been shown elsewhere, and as we also reiterate later in this paper, independent models completely fail to account for the data, and the pairwise extension is therefore the next minimal step that we are required to take. In addition, the pairwise model also is a minimal model in which we can expect to observe any interesting collective behaviors.

To be concrete, we consider the salamander retina responding to naturalistic movie clips, as in the experiments of Refs [4, 47]. The visual stimulus consists of a movie that was projected onto the retina 145 times in succession, for a total of roughly one hour of stable recordings. Discretizing the time axis into small bins of duration $\Delta\tau$ yields 1310 samples per movie repeat, for a total of 189,950 binary word samples, where in each time bin each neuron can either fire or be silent. The effective number of independent samples is smaller because of correlations across time; we estimate it using a bootstrap error analysis [25]. Under these conditions, pairs of cells that are close to each other have correlations drawn from a homogeneous distribution; the correlations decline at larger distances [48]. This correlated patch contains more neurons than we can currently record from [30]; here we record from $N = 40$ of them [4]. The values of $C_{ij}$ and $\langle\sigma_i\rangle$ for this population of cells are shown in Fig 1a and 1c, respectively.

Figure 1: Expectation values and interactions. At left, the expectation values computed from the experimental data; at right, the corresponding terms in the effective Hamiltonian of the maximum entropy model. Neurons are numbered in order of decreasing mean spike rate. (a) The pairwise connected correlations, $C_{ij} = \langle\sigma_i\sigma_j\rangle - \langle\sigma_i\rangle\langle\sigma_j\rangle$; note that we see both positive and negative correlations, and that these correlations are small. (b) The interactions $J_{ij}$ in the Ising model; note that interactions spread more uniformly through the network than the correlations. (c) The mean "magnetization" $\langle\sigma_i\rangle$ of the individual cells. (d) The "magnetic fields" $h_i$ that express the intrinsic tendency of the neurons toward spiking or silence. (e) The histogram of correlations $C_{ij}$. (f) The distribution of interactions $J_{ij}$ inferred for sub–networks of different size.

The central problem is to find the magnetic fields $h_i$ and exchange interactions $J_{ij}$ that reproduce the observed pairwise correlations. It is convenient to think of this problem more generally: We have a set of operators $\hat O_\mu(\{\sigma_i\})$ ($\sigma_i$ and $\sigma_i\sigma_j$ for the pairwise model) on the state of the system, and we consider a class of models

P(\{\sigma_i\}|g) = \frac{1}{Z(g)}\exp\Big[ \sum_\mu g_\mu \hat O_\mu(\{\sigma_i\}) \Big],   (15)
Z(g) = \sum_{\{\sigma_i\}} \exp\Big[ \sum_\mu g_\mu \hat O_\mu(\{\sigma_i\}) \Big];   (16)

our problem is to find the couplings $g_\mu$ ($h_i$ and $J_{ij}$ for the pairwise model) that generate the correct expectation values, which is equivalent to solving the equations

\big\langle \hat O_\mu \big\rangle_{P(\{\sigma_i\}|g)} = \big\langle \hat O_\mu \big\rangle_{\rm expt}.   (17)

Up to $N = 20$ cells we can solve these equations exactly, but this approach does not scale to $N = 40$ and beyond because the partition sum contains a number of terms ($2^N$) that is exponential in $N$. For larger systems, this "inverse Ising problem," or Boltzmann machine learning as it is known in computer science [49], is a hard computational problem that has to be solved by approximate schemes [14, 15, 21].

Given a set of coupling constants $g_\mu$, we can estimate the expectation values $\langle\hat O_\mu\rangle$ by Monte Carlo simulation. Increasing the coupling $g_\mu$ will increase the expectation value $\langle\hat O_\mu\rangle$, so a plausible algorithm for learning is to increase each $g_\mu$ in proportion to the deviation of $\langle\hat O_\mu\rangle$ (as estimated by Monte Carlo) from its target value (as estimated from experiment). This is not a true gradient ascent, since changing $g_\mu$ has an impact on the other operators as well, but such an iteration scheme does have the correct fixed points; heuristic improvements include a slowing of the learning rate with time and the addition of some 'inertia', so that we update $g_\mu$ according to

\Delta g_\mu(t+1) = \eta(t)\Big[ \big\langle\hat O_\mu\big\rangle_{\rm expt} - \big\langle\hat O_\mu\big\rangle_{P(\{\sigma_i\}|g)} \Big] + \alpha\,\Delta g_\mu(t),   (18)

where $\eta(t)$ is the time-dependent learning rate and $\alpha$ measures the strength of the inertial term.
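The following sketch shows one way to implement the iteration of Eq (18), with a Metropolis Monte Carlo estimate of the model moments. It is an illustration of the general strategy, not the authors' code; the learning-rate schedule, sweep counts, and initialization are invented.

```python
# Schematic Boltzmann learning for the pairwise Ising model, Eq (18).
# Moments <s_i> and <s_i s_j> under the current model are estimated by
# Metropolis sampling; couplings are nudged toward matching the data moments.
import numpy as np

rng = np.random.default_rng(1)

def mc_estimates(h, J, n_sweeps=2000, burn_in=500):
    """Metropolis sampling of Eq (1); returns estimates of <s_i> and <s_i s_j>."""
    N = len(h)
    s = np.where(rng.random(N) < 0.05, 1, -1)      # start mostly silent
    m = np.zeros(N); c = np.zeros((N, N)); kept = 0
    for sweep in range(n_sweeps):
        for i in range(N):
            dE = 2 * s[i] * (h[i] + J[i] @ s)      # energy change of flipping s_i
            if dE < 0 or rng.random() < np.exp(-dE):
                s[i] = -s[i]
        if sweep >= burn_in:
            m += s; c += np.outer(s, s); kept += 1
    return m / kept, c / kept

def learn(m_data, c_data, n_iter=200, eta0=0.5, alpha=0.0):
    """Iterate Eq (18): move each coupling toward matching its target moment."""
    N = len(m_data)
    h = np.arctanh(np.clip(m_data, -0.99, 0.99))   # independent-model start
    J = np.zeros((N, N)); dh = np.zeros(N); dJ = np.zeros((N, N))
    for t in range(n_iter):
        eta = eta0 / (1 + t / 50)                  # slowly decreasing learning rate
        m_mc, c_mc = mc_estimates(h, J)
        dh = eta * (m_data - m_mc) + alpha * dh
        dJ = eta * (c_data - c_mc) + alpha * dJ
        np.fill_diagonal(dJ, 0.0)                  # keep the diagonal of J at zero
        h += dh; J += dJ
    return h, J
```

In practice the number of Monte Carlo sweeps per iteration and the schedule $\eta(t)$ have to be tuned, and this is where most of the computational cost lies.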

Figure 2: a) Precision of the Ising model for $N = 40$ neurons learned via Eq (18): measured covariances $C_{ij}$ are binned on the x-axis and plotted against the corresponding reconstructed covariances on the y-axis; vertical error bars are the standard deviation within the bin, horizontal error bars are bootstrap errors on covariance estimates. b) Zoom–in for small $C_{ij}$. The scale bar represents the standard deviation of $C_{ij}$ in shuffled data, where all observed correlations arise from insufficient sampling.

We used different variations on this basic strategy for networks of different size. As noted above, for $N \le 20$ we solve the problem exactly, using a custom implementation of the algorithm described in Ref [50]; this involves fully evaluating the partition sum. For $N = 40$, we first obtained a good initial approximation for $\{h_i, J_{ij}\}$ by contrastive divergence Monte Carlo [51]. Then we followed up until convergence by using Eq (18) directly, decreasing the learning rate $\eta(t)$ according to a custom schedule and keeping the inertia at zero. Later in the paper we consider synthetic networks with $N = 120$, and for these we also use a nonzero inertial term. Learning the couplings was slow for synthetic networks, but eventually converged; even in this case the reconstructed covariances converged to small relative errors for the largest quartile of elements by absolute value, and to only slightly larger errors for the largest half, without obvious systematic biases. In addition to these algorithmic issues, we found it useful to rewrite the Hamiltonian so that each pairwise term directly constrains the corresponding connected covariance $C_{ij}$, and we found that this removed biases in the reconstructed covariances.

Figure 2 shows the success of the learning algorithm by comparing the measured pairwise correlations to those computed from the inferred Ising model for 40 neurons. The reconstructions of the means, $\langle\sigma_i\rangle$, are not shown; they are also matched to high accuracy.

The pairwise maximum entropy model has many fewer parameters ($N(N+1)/2 = 820$ for $N = 40$) than there are possible states of the network ($2^{40} \approx 10^{12}$); more importantly, the effective number of independent samples in our data set, while smaller than the total number of time bins, is still much larger than the number of parameters. Further, the parameters of the model are functions of the mean firing rates and pairwise correlations. Thus, to the extent that we can measure these quantities reliably, there is no issue of having a model which is too complex to be supported by the data. Nonetheless, one would like an independent test to show that we are not 'over–fitting' to a limited data set. To do this, we divide the available data in half to create separate training and testing data. We then use the model learned from the training data to compute the average log–likelihood of the data in each half, and measure the difference

\Delta L^{(n)} = \big\langle \ln P(\{\sigma_i\}|g^{(n)}) \big\rangle_{\rm train} - \big\langle \ln P(\{\sigma_i\}|g^{(n)}) \big\rangle_{\rm test},   (19)

where $n$ indexes one random split of the data into training and test halves, and $g^{(n)}$ is the set of parameters that we learn from the training data on that split. It should be noted that in the computation of $\Delta L$, the partition function cancels, so really we are computing the difference in the mean "energy" between the training and test data. With twenty random choices for the training/test split, we find that the distribution of $\Delta L$ is consistent with zero within its error bars, implying that over–fitting is not a problem.

V Does the model ‘work’?

For $N = 10$, or even somewhat larger groups, we can compare the predictions of the maximum entropy model with the observed frequencies of the network states $\{\sigma_i\}$. For $N = 40$, this is no longer possible, since the number of states is much larger than the number of samples available in the experiment. What we can do is to compute, in the maximum entropy model, various statistics beyond the pairwise correlations, and check these quantities against estimates from the experimental data. In Fig 3 we show two examples of this approach.

First we calculate the probability $P(K)$ that $K$ of the $N$ neurons spike in the same time bin. The data show that $P(K)$ is an approximately exponential distribution, enormously far from the roughly binomial distribution expected if the neurons were spiking independently. The exponential structure of $P(K)$ is well reproduced by the predictions from the maximum entropy model, although the maximum entropy model has a tail which is slightly too heavy. In more detail, the Ising model underestimates the probability of the no–spike pattern by a few percent. The same deviation is already observed at $N = 20$, and since at this network size we reconstruct the Ising model using an exact algorithm, the deviations at $N = 40$ are not due to limitations of our Monte Carlo algorithm. Because the no–spike pattern is sampled well in our dataset, this deviation is significant and indicates that higher order interactions are starting to have an effect, albeit a small one.

As a second test, we consider the correlations among triplets of neurons. We look for correlated deviations of triplets of neurons from their mean firing rates and define the connected third-order correlation coefficients, $C_{ijk} = \langle(\sigma_i - \langle\sigma_i\rangle)(\sigma_j - \langle\sigma_j\rangle)(\sigma_k - \langle\sigma_k\rangle)\rangle$. Of course the independent model predicts that these will be zero, whereas about a third of the distinct triplets of neurons have correlations which are significantly different from zero. The maximum entropy model certainly captures the trend of these correlations, although it does tend to overestimate the large ones slightly. It is worth recalling that these triplet correlations are being predicted from a model that is determined only by the measured means and pairwise correlations, with no additional fitting allowed. In this context, the errors are small, and are detectable only because we have tens of thousands of samples.
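Both checks reduce to simple statistics of the binary words. The sketch below assumes the data are stored as an array of $\pm 1$ entries (rows are time bins, columns are neurons); the function names and comparison workflow are illustrative rather than taken from the paper.

```python
# Sketch of the two model checks described above, for an array `words` of
# +1/-1 entries: the spike-count distribution P(K) and the connected triplet
# correlations C_ijk.
import itertools
import numpy as np

def spike_count_distribution(words):
    """P(K): probability that exactly K cells spike (+1) in the same time bin."""
    K = ((words + 1) // 2).sum(axis=1)
    N = words.shape[1]
    return np.bincount(K, minlength=N + 1) / len(K)

def triplet_correlations(words):
    """Connected C_ijk = <(s_i - <s_i>)(s_j - <s_j>)(s_k - <s_k>)> for all i<j<k."""
    d = words - words.mean(axis=0)
    N = words.shape[1]
    return {(i, j, k): np.mean(d[:, i] * d[:, j] * d[:, k])
            for i, j, k in itertools.combinations(range(N), 3)}

# Typical usage: compare the statistics of the recorded words with words
# sampled from the fitted Ising model (e.g., with a Metropolis sampler):
# pk_data  = spike_count_distribution(data_words)
# pk_model = spike_count_distribution(model_words)
```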

The maximum entropy formalism is a series of ever better approximations to the true distribution $P(\{\sigma_i\})$, in which the approximations match the data to increasing correlation orders. There is no a priori reason why pairwise order alone should suffice, but our results show that even at $N = 40$ the Ising model performs exceedingly well, closing most of the gap between the independent model and the real data. The correct view of this result is not that the pairwise model is exactly correct, since clearly it isn't, but rather that this model captures a large fraction of the correlation structure in the data, even the higher order structure.

Given that the pairwise model does work very well, there are at least three questions we could ask. One is to address the small departures from the model, either by adding explicit higher order interactions, or by trying to infer the existence of ‘hidden’ elements that generate effective multi–neuron interactions even if the dominant elementary interactions are among pairs. This is a question that we leave for future work. The next question is to try and understand why the model works as well as it does. Finally, given that the model works, we can ask what it teaches us about the network as a whole, and about the neural code in the retina. We take up these two questions in the next sections.

Figure 3: a) Probability $P(K)$ of observing $K$ simultaneous spikes (blue – data, red – Ising model), compared to the failure of the independent model, for which $P(K)$ is approximately binomial (black). Dashed lines show error estimates. b) Measured vs predicted connected three–point correlations for 40 neurons (red) and for an exact Ising model for a 20 neuron subset (black). Horizontal error bars are bootstrap estimates from data, vertical error bars are computed across 20 Monte Carlo runs.

VI Exploring subnetworks

In the current experimental setup, simultaneous recordings can only be made on a subset of neurons from a patch of the retina with highly correlated firing activity. It is thus relevant to ask about the effects of unobserved or 'hidden' nodes on the validity of the pairwise model. To explore this issue with the data we do have, we can 'hide' some of the cells in the network to which we have access, and ask how this changes the quality of the maximum entropy description. Intuitively, it is surprising that pairwise models work well both on the full set of $N = 40$ neurons and on networks consisting of subsets of these neurons, as in the original Ref [4]. We would expect that not observing a neuron $k$ will induce a triplet interaction among any triplet of observed neurons whose members were all pairwise coupled to $k$. Additionally, by comparing the couplings $J_{ij}$ in the full network with their corresponding averages from different subnets of smaller size, we find that the couplings are left almost unchanged, while the magnetic fields change substantially.

To explain both phenomena, we examine the flow of the couplings under decimation. Let us start by including a three–body interaction term $\sum_{i<j<k} G_{ijk}\sigma_i\sigma_j\sigma_k$ into the $N$-neuron Hamiltonian of Eq (1) for $\{\sigma_1,\dots,\sigma_N\}$. Then, we isolate the terms involving spin $\sigma_N$, and marginalize over $\sigma_N = \pm 1$ to obtain $P(\{\sigma_1,\dots,\sigma_{N-1}\})$. This summation results in

-\tilde H(\{\sigma_i\}_{i<N}) = -H_0(\{\sigma_i\}_{i<N}) + \ln\Big[ 2\cosh\Big( h_N + \sum_{i<N} J_{iN}\sigma_i + \sum_{i<j<N} G_{ijN}\sigma_i\sigma_j \Big) \Big],   (20)

where $H_0$ collects all the terms of the original Hamiltonian that do not involve $\sigma_N$. Because of the last term, the resulting expression for $P(\{\sigma_i\}_{i<N})$ clearly does not have the same form as that for $P(\{\sigma_i\}_{i\le N})$. We can, however, expand the logarithm around the mean value of its argument, keeping terms up to third order in the couplings to site $N$, as long as the $J_{iN}$, and the analogous triplet couplings $G_{ijN}$, are small; we then look for a decimated Hamiltonian, $H^{(N-1)}$, with renormalized couplings and a matching power series. This results in the following flow of couplings when a single neuron has been marginalized over:

h_i \rightarrow h_i' = h_i + t\,J_{iN} + \cdots,   (21)
J_{ij} \rightarrow J_{ij}' = J_{ij} + t\,G_{ijN} + (1-t^2)\,J_{iN}J_{jN} + \cdots,   (22)
G_{ijk} \rightarrow G_{ijk}' = G_{ijk} - 2t(1-t^2)\,J_{iN}J_{jN}J_{kN} + \cdots,   (23)

where $t \equiv \tanh\big(h_N + \sum_{i<N} J_{iN}\langle\sigma_i\rangle\big)$ is the hyperbolic tangent of the mean field at site $N$, and $\cdots$ denotes terms of higher order in the couplings to site $N$. The induced three– and four–body terms originate from terms with three and four factors of the fluctuating part of this field, respectively. The key point is that neurons spike very infrequently, so that $\langle\sigma_N\rangle \approx -1$ and $t$ is close to $-1$. If the pairwise Ising model is a good model at size $N$, and the couplings are small enough to permit the expansion, then the corrections to the pairwise terms after decimation, as well as the induced higher order interactions, carry factors of $(1-t^2)$, which is small. Sparsity of spikes therefore keeps the complexity in check: when a neuron cannot be observed, the correction to the pairwise couplings among the 'visible' neurons, as well as the appearance of higher order interactions between them, are suppressed.
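For concreteness, the decimation step of Eqs (21)–(23) can be written as a small routine. The sketch below follows the expansion written above (with $t$ the hyperbolic tangent of the mean field at the decimated site) and should be read as an illustration of the bookkeeping, not as a verified reimplementation of the authors' procedure.

```python
# Sketch of one decimation step, Eqs (21)-(23): marginalize over the last
# neuron and fold its couplings into renormalized fields, pairwise couplings,
# and induced triplet terms.  Triplet couplings G are stored as a dict keyed
# by sorted index triples (i, j, k).
import itertools
import numpy as np

def decimate_last_neuron(h, J, G, mean_sigma):
    """h: (N,), J: (N,N) symmetric with zero diagonal, mean_sigma: (N,) of <s_i>."""
    N = len(h)
    n = N - 1                                          # neuron being removed
    t = np.tanh(h[n] + J[n, :n] @ mean_sigma[:n])      # tanh of the mean field at site n
    h_new = h[:n] + t * J[:n, n]                                      # Eq (21)
    J_new = J[:n, :n] + (1 - t**2) * np.outer(J[:n, n], J[:n, n])     # Eq (22)
    np.fill_diagonal(J_new, 0.0)
    G_new = {}
    for i, j, k in itertools.combinations(range(n), 3):
        G_new[(i, j, k)] = (G.get((i, j, k), 0.0)
                            - 2 * t * (1 - t**2) * J[i, n] * J[j, n] * J[k, n])  # Eq (23)
    # triplet terms that involved neuron n feed back into the pairwise couplings
    for (i, j, k), g in G.items():
        if k == n and j < n:
            J_new[i, j] += t * g
            J_new[j, i] += t * g
    return h_new, J_new, G_new
```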

We test these ideas by selecting 100 random subgroups of 10 neurons out of 20; for each, we compute the exact pairwise Ising model from the data, as well as applying Eqs (21–23), with the full (unexpanded) expressions for the renormalized couplings, 10 times in succession to decimate the network from 20 cells down to the chosen 10. The resulting three–body interactions $G_{ijk}$ have a mean and standard deviation ten times smaller than the pairwise $J_{ij}$. If we ignore these terms, the average Jensen–Shannon divergence [52] between this probability distribution and the best pairwise model for the subgroups is small, smaller than the average divergence between either model and the experimental data, and implies that many more samples than we have would be required to distinguish reliably between the two models. At least for the range of $N$ examined here, the decimation calculation provides a useful approximation.

The sparsity of spikes can explain the dominance of pairwise interactions: the higher order terms are not intrinsically small, but the fact that spiking is rare means that they do not have much chance to contribute. Thus, the nature of the pairwise model is more like a Mayer cluster or virial expansion than like simple perturbation theory. Of course, with finite $N$, all quantities must be analytic functions of the coupling constants, and so we expect that, if carried to sufficiently high order, any perturbative scheme will converge—although this convergence may become much slower at larger $N$, signaling genuinely collective behavior in large networks. There are a number of reasons to think that (contrary to the suggestion in Ref [18], but consistent with the conclusions of Ref [15]) the real system we are studying is already outside the regime in which low order perturbative approaches are sufficient. First, the simple relation between $C_{ij}$ and $J_{ij}$ predicted by the lowest order of perturbation theory [18, 53] is violated, as shown in Fig 4a, and the resulting errors in the predicted $J_{ij}$ are significant, as shown in Fig 4b. Second, if we make a perturbative expansion of the entropy itself in powers of the measured correlations, even carrying this out to fourth order fails to work for our 40-neuron network [54]. Finally, the standard deviation of the 'effective field' $\sum_j J_{ij}\sigma_j$ that represents the influence on neuron $i$ from its neighbors is comparable to the intrinsic bias $h_i$ at $N = 40$.

Figure 4: a) The comparison of couplings $J_{ij}$, computed for groups of neurons of different sizes using the exact maximum entropy reconstruction algorithm, with the lowest order perturbation theory result, $J_{ij} \approx C_{ij}/[(1-\langle\sigma_i\rangle^2)(1-\langle\sigma_j\rangle^2)]$, where $C_{ij} = \langle\sigma_i\sigma_j\rangle - \langle\sigma_i\rangle\langle\sigma_j\rangle$ [18, 53]. In the case of larger networks, the perturbative couplings deviate more and more from equality (black line). b) The exact Ising approximation, $P_2(\{\sigma_i\})$, can be compared to the distribution $P(\{\sigma_i\})$ sampled from data; the solid line shows the Jensen–Shannon divergence between the two distributions, for example networks of increasing size $N$. The dashed line shows the same comparison in which the Ising model parameters were calculated perturbatively. Already at modest $N$, the perturbative model has an error two orders of magnitude larger than the exact Ising model. The point at the largest $N$ is shown in gray because $P(\{\sigma_i\})$ cannot be reliably sampled from data.

The question of whether pairwise models remain good for $N$ beyond 40, even in the regime where the above perturbative arguments break down, cannot be answered without further experimental data. We can conclude, however, that pairwise models are valid beyond the point where pairwise interactions merely represent trivial perturbative corrections to the dominant intrinsic biases $h_i$, which happens already for groups of 10 or more neurons.

VII Network ensembles

In trying to develop a theoretical approach to biological systems, there is a tension between the search for universals and the need to engage with the details of specific systems. In the present context, even if the maximum entropy models provide a perfect description of the probability distribution of network activity, one may worry that what we have learned is relevant only to the particular 40 neurons we happened to record from in this one experiment. How can we generalize from these results? One idea is that what we observe in this one experiment is typical of what we would find by drawing networks at random out of some ensemble of networks. Our goal in this section is to identify this ensemble.

We start our search for meaningful ensembles of networks by characterizing the "thermodynamics" of the networks that we have observed. Having constructed a maximum entropy model with some parameters $\{h_i, J_{ij}\}$, we can take this model seriously as a statistical mechanics problem and ask what happens as we change the "temperature" $T$, which is equivalent to scaling all of the couplings, $h_i \to h_i/T$ and $J_{ij} \to J_{ij}/T$. Notice that this is just one slice through the large space of parameters, but it is one which has a physical interpretation even though the temperature is not itself physical. In particular we can define the entropy, heat capacity, and other thermodynamic variables.

Figure 5: The heat capacity, $C(T)$, for systems of different sizes $N$. Ising models were constructed for 400 subnetworks of size 5, 180 of size 10, 90 of size 15 and 20, 1 full network of size 40 (all from data, black), and 3 synthetic networks of size 120 (green); vertical error bars are standard deviations across these examples. For networks of $N \le 20$, the partition function (and therefore the heat capacity) can be computed exactly; for larger $N$, we estimate the heat capacity from the variance of the energy during Monte Carlo sampling, i.e. $C(T) = \langle\delta E^2\rangle / T^2$. The mean and 1-sigma envelope for $C(T)$ of Ising models of randomized networks at $N = 20$ are shown as blue dashed lines. We note that the peak of the heat capacity moves towards the operating point at $T = 1$ with increasing size.

To proceed, we first plot the dependence of the heat capacity on temperature at various system sizes in Fig 5. The behavior of the heat capacity as a function of temperature, $C(T)$, is diagnostic of the underlying density of states, and describes how the states of the system are populated as the temperature increases. Two systems of the same size with similar $C(T)$ curves can thus be said to be similar to each other. We recall that the heat capacity can be estimated from Monte Carlo simulations at a single temperature by computing the variance of the energy over many binary patterns generated by the simulation, $C(T) = \langle\delta E^2\rangle / T^2$.
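A sketch of this estimator is given below: the couplings are scaled by $1/T$, states are sampled by Metropolis dynamics, and $C(T)$ is taken as the variance of the (unscaled) energy divided by $T^2$. The sweep counts and temperature grid are arbitrary choices of the sketch, not of the paper.

```python
# Sketch of the heat-capacity scan used for plots like Fig 5:
# sample at temperature T (i.e., with couplings h/T, J/T) and estimate
# C(T) = var(E)/T^2, where E is the energy of Eq (1).
import numpy as np

rng = np.random.default_rng(2)

def energy(s, h, J):
    return -(h @ s) - 0.5 * s @ J @ s

def heat_capacity(h, J, T, n_sweeps=5000, burn_in=1000):
    N = len(h)
    hT, JT = h / T, J / T                      # temperature = global coupling scale
    s = np.where(rng.random(N) < 0.05, 1, -1)
    energies = []
    for sweep in range(n_sweeps):
        for i in range(N):
            dE = 2 * s[i] * (hT[i] + JT[i] @ s)
            if dE < 0 or rng.random() < np.exp(-dE):
                s[i] = -s[i]
        if sweep >= burn_in:
            energies.append(energy(s, h, J))   # un-scaled energy of Eq (1)
    return np.var(energies) / T**2

# temperatures = np.linspace(0.5, 3.0, 26)
# C_of_T = [heat_capacity(h, J, T) for T in temperatures]
```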

The first interesting result is that when we choose a subnetwork of neurons at random out of the 40 cells from which we record, at sizes from $N = 5$ up to $N = 20$, we find that the fluctuations in $C(T)$ are very small. This suggests that the individual networks, even when quite small, are typical of the ensemble of subnetworks that we can choose out of this small patch in the retina.

More ambitiously, we can try to construct an artificial ensemble of networks that reproduces the distribution of mean spike probabilities and the distribution of pairwise correlations that we see in the experiment. Concretely, we assign to each neuron a mean spike probability chosen at random from the observed distribution of mean spike probabilities (Fig 1c shows, implicitly, the cumulative distribution of $\langle\sigma_i\rangle$), and we assign to each pair of cells a correlation chosen at random from the observed distribution of correlations, shown in Fig 1e. Note that not all combinations of means and correlations are possible for binary variables; after each draw from the distributions of $\langle\sigma_i\rangle$ and $C_{ij}$, we check that all marginal distributions are consistent, and repeat the draw if needed. Once the whole synthetic covariance matrix is generated, we check (e.g. using the Kolmogorov–Smirnov test) that the distribution of its elements is consistent with the measured distribution. Figure 5 shows $C(T)$ for networks of 20 neurons constructed in this way, and we see that, within error bars, the behavior of these randomly chosen systems resembles that of real 20 neuron groups in the retina.
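A sketch of this construction is given below. It draws each mean and each pairwise correlation independently from the empirical distributions and enforces only the necessary pairwise consistency condition for binary variables (all four joint probabilities of each pair must be non-negative); the full procedure, including the final Kolmogorov–Smirnov check on the assembled matrix, involves more bookkeeping than shown here.

```python
# Sketch of the synthetic-ensemble construction: resample means and pairwise
# correlations from their observed distributions, rejecting draws that no
# pair of binary variables could realize.
import numpy as np

rng = np.random.default_rng(3)

def synthetic_moments(observed_means, observed_corrs, N):
    m = rng.choice(observed_means, size=N)            # <sigma_i> for each synthetic cell
    C = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            while True:
                c = rng.choice(observed_corrs)        # candidate connected correlation
                ss = c + m[i] * m[j]                  # implied <s_i s_j>
                # joint probabilities of (+,+), (+,-), (-,+), (-,-) must be >= 0
                p_pp = (1 + m[i] + m[j] + ss) / 4
                p_pm = (1 + m[i] - m[j] - ss) / 4
                p_mp = (1 - m[i] + m[j] - ss) / 4
                p_mm = (1 - m[i] - m[j] + ss) / 4
                if min(p_pp, p_pm, p_mp, p_mm) >= 0:
                    break
            C[i, j] = C[j, i] = c
    return m, C
```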

We emphasize that the ensemble we construct here is very different from the usual one in statistical mechanics. In the theory of spin glasses [55], it is the interactions which are chosen at random, and importantly, each pairwise interaction is chosen independently. If we try to do this in our problem, we find a disaster—when we randomize the fields $h_i$ and interactions $J_{ij}$, the distribution of mean spike probabilities changes dramatically, and as a result everything else about the network also changes (heat capacity, entropy, …). In contrast, here we keep the distribution of observables, i.e. the pairwise correlations and firing rates, fixed at their measured values, and moreover independent of $N$, as motivated by experiments that find no decay of the pairwise correlations within dense local patches of the retina.

A striking feature of the spin glass problem is that when the $J_{ij}$ are drawn independently and at random, the correlations among the spins have a rich, hierarchical structure [55]. In our construction method we draw the correlation matrix elements independently (subject to the consistency condition above), and therefore expect that this will induce a complicated correlation structure in the space of couplings $J_{ij}$.

We are building a spin glass model in which all pairs potentially interact, and all the pairwise correlations are drawn from the same distribution. In this sense, we expect that we have some sort of mean–field model, but the correlations have a scale set by experiment, and hence cannot be reduced as $N$ becomes larger. This family of models clearly cannot have a normal thermodynamic limit as $N$ becomes large. This is not a failing of the model, however: the real neural system has correlations that do not appreciably decay with distance in mesoscopic patches (presumably because of the extended connectivity of the neurons and correlations in the stimulus), and maximum entropy models can explore the consequences of these measured constraints. Recalling our results on the toy model in Section III, we might expect our systems to be driven towards criticality as $N$ is increased; we explore this issue in the next section.

VIII Larger networks and criticality

Figure 5 reveals an interesting behavior of the heat capacity $C(T)$: as the size of the system increases, the peak of the heat capacity moves closer and closer to the operating point at $T = 1$ in the networks of size up to $N = 40$ constructed from data. Armed with the results at $N \le 40$ and an operational definition of a network ensemble, we decided to check whether this behavior continues to larger $N$. We thus generated several synthetic networks of 120 neurons by randomly choosing once more out of the distributions of $\langle\sigma_i\rangle$ and $C_{ij}$ observed experimentally. The heat capacity now has a dramatic peak very close to the operating point at $T = 1$.

To be clear, we note that the shift in the peak of heat capacity with the system size is a direct consequence of pairwise couplings in the model, and hence this structure is driven by the measured correlation among neurons. In independent models, a simple calculation shows that

C_{\rm ind}(T) = \sum_{i=1}^{N} \frac{(h_i/T)^2}{\cosh^2(h_i/T)},   (24)

where $h_i = \frac{1}{2}\ln[\bar p_i/(1-\bar p_i)]$ is the field that reproduces the mean spike probability $\bar p_i$ of neuron $i$ in the independent model. For our dataset, the ensemble average of $C_{\rm ind}(T)$ for independent models peaks at a fixed temperature regardless of $N$, and indeed, when normalized by $N$, the curves of $C_{\rm ind}(T)$ collapse onto a single curve. In contrast, in pairwise models of the same data, the peak moves toward the operating point at $T = 1$ as $N$ grows from 40 to 120, while the heat capacity per neuron increases by about 40%. This is yet another indication that, although the correlation between any two neurons is weak, neglecting these correlations gives a qualitatively wrong picture of the states accessible to the network as a whole.
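Eq (24) requires only the measured spike probabilities, so it can be evaluated directly; in the sketch below the probabilities are placeholders.

```python
# Direct evaluation of Eq (24) for the independent model.
import numpy as np

def independent_heat_capacity(p, T):
    h = 0.5 * np.log(p / (1 - p))          # fields reproducing <sigma_i> = 2*p_i - 1
    x = h / T
    return np.sum(x**2 / np.cosh(x)**2)    # Eq (24)

# example with placeholder spike probabilities:
# p = np.full(40, 0.03); print(independent_heat_capacity(p, 1.0))
```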

In physical systems, a sharp peak in the heat capacity, becoming singular as $N$ becomes large, is associated with a critical point in the phase diagram. The critical point is distinguished by the fact that at this point, the system is maximally sensitive to small changes in parameters. It has been suggested that this sensitivity makes operation at a critical point attractive as a strategy for biological sensory and signaling systems [56, 57, 58]. Behavior in the neighborhood of the critical point also is universal, so that systems with many different microscopic structures can generate, quantitatively, the same critical behavior. It should be noted that there are different notions of criticality which have been applied to biological systems. As far as we know this is the first evidence for criticality of a biological network in the thermodynamic sense.

From the thermodynamic point of view, the critical point marks a boundary between phases. But from a statistical point of view, this point is at an extremum. Specifically, in a large system we expect that almost all states which are accessible will have nearly equal probability; in information theory this is the idea of “typicality” (usually applied to long sequences), and in statistical mechanics this is the equivalence of the canonical and microcanonical ensembles. The critical point is the place where the corrections to this expectation are largest. More precisely, if we write the probability distribution in the Boltzmann form, as in Eq (1), the (negative) log of the probability of visiting a state is the energy, and the heat capacity is proportional to the variance of the energy. Thus, at the critical point, where the heat capacity has its maximal value, the variance of log probability is largest.

If the brain is interested directly in how surprised it should be by the current state of its inputs, then it might be important to maximize the dynamic range for instantaneously representing this (negative log) likelihood, that is, to have at its disposal a codebook with word frequencies whose range is wide enough to encompass the range of probabilities of sensory events. In contrast, simple notions of efficient coding would require all symbols to be used with equal probability. But since states of the visual world occur with wildly varying probabilities, achieving this notion of efficiency would require block codes that are extended over time and are therefore slow.

IX Entropy and multi-information

Scaling of the temperature introduced in Section VII is useful for another reason: it enables us to estimate the entropy of $P(\{\sigma_i\})$. The entropy in the context of coding measures the capacity of the neurons to convey information about the visual world: the single-neuron biases and interactions effectively reduce the total number of likely binary patterns (or the codebook size) from $2^N$ to $\sim 2^{S}$, with $S$ the entropy in bits, and we would like to quantify this decrease. We recall that

S(T) = \int_0^{T} \frac{C(T')}{T'}\,dT',   (25)

where we take the entropy in the zero-temperature limit to be negligible, and where the heat capacity $C(T)$ can be estimated by Monte Carlo from the variance of the energy, $\langle\delta E^2\rangle/T^2$, by drawing a large number of spin configurations at a fixed temperature $T$, computing their energies using the Hamiltonian of Eq (1), and taking the variance. Then, the integral in Eq (25) is performed up to $T = 1$, corresponding to the pairwise model of the real data. Estimating the entropy using the heat capacity integration in Eq (25) is crucial for the larger networks, because direct estimation from raw Monte Carlo samples becomes infeasible.
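A sketch of this thermodynamic integration is shown below; it reuses the Monte Carlo heat-capacity estimator sketched in Section VII, and the temperature grid and low-temperature cutoff (below which the remaining entropy is assumed to be negligible) are choices of this sketch, not of the paper.

```python
# Sketch of the entropy estimate of Eq (25): integrate C(T)/T up to T = 1
# with the trapezoid rule.  `heat_capacity` is the Monte Carlo estimator
# sketched earlier in Section VII.
import numpy as np

def entropy_at_unit_temperature(h, J, T_grid=None):
    if T_grid is None:
        T_grid = np.linspace(0.05, 1.0, 40)           # low-T cutoff is a choice of the sketch
    C = np.array([heat_capacity(h, J, T) for T in T_grid])
    return np.trapz(C / T_grid, T_grid)               # S(T=1) in nats; divide by ln 2 for bits
```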

Figure 6: The scaling of the multi-information and the independent entropy with the system size. Points at network sizes $N \le 40$ are computed from real data, with error bars denoting the spread across many subnetworks selected from the 40 recorded neurons (there is no error estimate for the single network at $N = 40$). Dashed lines are scaling fits; the independent entropy scales approximately linearly, and the multi-information approximately quadratically, with system size $N$, and this scaling continues up to the three synthetic networks of size $N = 120$.

If we integrate Eq (25) to find the entropy of the large synthetic networks, we find that the independent entropy of the individual spins, $S_{\rm ind} = \sum_i S_i$, has been appreciably reduced by the pairwise interactions. As Fig 6 shows, even at $N = 120$ the entropy deficit, or multi–information, $I = S_{\rm ind} - S$ for the Ising models continues to grow in proportion to the number of pairs ($\propto N^2$), continuing the pattern found in smaller networks [4]. We also note that Ising models provide a lower bound on the total amount of multi-information in the real distribution, because they only capture the pairwise structure. Therefore, if higher-order couplings become important at larger $N$, the real multi-information might be bigger than the one estimated in Fig 6. The next generation of experiments [32], probing networks on the scale of the full correlated patch, will provide a decisive test of the maximum entropy model predictions.

X Locally stable states

In the Hopfield model, dynamics of the neural network corresponds to motion on an energy surface. Simple learning rules can sculpt the energy surface to generate multiple local minima, or attractors, into which the system can settle. These local minima can represent stored memories, or the solutions to various computational problems [59, 60]. By analogy with spin glasses, we can think of these multiple attractors, or locally stable states, as resulting from the competition between positive and negative interactions. In our maximum entropy models, we similarly find a range of $J_{ij}$ values encompassing both signs, as shown in Figs 1b and 1f. We would like to understand whether this structure leads to multiple attractors, and what this means for the nature of the neural code.

Locally stable states are patterns of activity $\{\sigma_i^{\rm s}\}$ such that a flip of any single spin increases the energy (or decreases the likelihood) of the new state. At $N = 40$ we find 4 such local energy minima in the observed sample that are stable against single spin flips, in addition to the silent state ($\sigma_i = -1$ for all $i$). Using zero–temperature Monte Carlo, each configuration observed in the experimental data is assigned to its corresponding stable state: one starts with a binary pattern observed in the data, and flips spins as long as each spin flip decreases the energy under the reconstructed Ising Hamiltonian of Eq (1). The local energy minimum thus found is matched to one of the locally stable states. Although this assignment makes no reference to the visual stimulus, the collective states are reproducible across multiple presentations of the same movie (Fig 7a), even when the microscopic state varies substantially (Fig 7b).
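The zero-temperature Monte Carlo is simply a greedy descent on the energy of Eq (1); a minimal sketch:

```python
# Sketch of the zero-temperature Monte Carlo used to assign an observed word
# to its basin of attraction: flip the spin whose flip lowers the energy the
# most, and stop when no single flip helps.  Assumes J has zero diagonal.
import numpy as np

def descend_to_local_minimum(word, h, J):
    s = word.copy()
    while True:
        local_field = h + J @ s
        dE = 2 * s * local_field            # energy change for flipping each spin
        i = np.argmin(dE)
        if dE[i] >= 0:                      # no single flip lowers the energy
            return s
        s[i] = -s[i]

# basin labels for a set of observed words (rows of +/-1 entries):
# minima = [tuple(descend_to_local_minimum(w, h, J)) for w in words]
```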

Figure 7: (a) Probability that the 40 neuron system is found in a configuration within the basin of attraction of each nontrivial locally stable state, as a function of time during the stimulus movie; the probability is estimated across the 145 repetitions of the movie. (b) All unique microscopic patterns assigned to one such basin of attraction at a single moment in the movie. (c) Zipf plot of the rank ordered frequencies with which the lowest lying locally stable states are found in the simulated 120 neuron system.

In a simulated network of $N = 120$ neurons, we find a much richer energy landscape. Looking in detail at the distribution of the couplings $J_{ij}$, we find that it is approximately Gaussian, with a substantial fraction of triangles being frustrated ($J_{ij}J_{jk}J_{ki} < 0$), indicating the possibility of many single-spin-flip stable states, as in spin glasses [55]. For each network, we thus generated one long run collecting a large number of effectively independent samples. For each sample, zero-temperature Monte Carlo is again used to determine the appropriate basin of attraction; we tracked the low-lying metastable states and kept detailed statistics for the lowest among them. We conclude that the Gibbs state now is a superposition of thousands of locally stable states, with a nearly Zipf–like distribution of their frequencies (Fig 7c). We checked that the probability for the Monte Carlo to generate a spin configuration that belongs to the basin of attraction of a state ${\rm s}$ is, to a good approximation, proportional to $\exp(-F_{\rm s})$, where the free energy $F_{\rm s}$ is the difference between the average energy in the basin of attraction and its entropy, both estimated from the simulation data (i.e., from the binary patterns that belong to basin ${\rm s}$). We also estimated the average escape barrier size for the studied metastable states, in the natural units used in Eq (1).

The entropy of the distribution of metastable states, $S[\{P_{\rm s}\}] = -\sum_{\rm s} P_{\rm s}\ln P_{\rm s}$, is about a third of the total entropy. Thus, a substantial fraction of the network's capacity to convey visual information would be carried by the collective state, that is, by the identity of the basin of attraction, rather than by the detailed microscopic states.

Based on these observations, we can formulate the following hypothesis: trial-to-trial variability demonstrates that neurons must be to some extent noisy, and therefore each 'microscopic' state or binary pattern cannot be a codeword with a separate meaning; instead, the space of binary patterns must be partitioned into domains or regions containing similar patterns with synonymous meanings. We propose that such domains are exactly the basins of attraction of the locally stable states of the Ising Hamiltonian. One desirable property of this choice is that the similarity metric for microscopic states is the energy function (or the likelihood) itself, which captures our intuition that the most frequently observed binary word among words that differ in a single letter is probably the 'correctly spelled' variant, and the other variations are akin to spelling mistakes. The second desirable characteristic is that an associative and error correcting mechanism for parsing such words already exists, and is simply the Hopfield network. We also find that, in the synthetic network of $N = 120$ neurons, observing a group ${\cal G}$ of neurons is on average approximately two times more informative about the collective state ${\rm s}$ than about an unobserved neuron $i \notin {\cal G}$, i.e. $I(\{\sigma_{\cal G}\};{\rm s})/S({\rm s}) \approx 2\,I(\{\sigma_{\cal G}\};\sigma_i)/S(\sigma_i)$, where the information is normalized by the corresponding total entropy. This indicates that decoding the locally stable states from a neural sub-population should be easier than decoding the microscopic pattern of activity.

To test this hypothesis, one would need to compare the information that the basins of attraction provide about the stimulus, $I({\rm s};{\rm stimulus})$, with the information that the microscopic state provides about the stimulus, $I(\{\sigma_i\};{\rm stimulus})$. If these two quantities are similar in size, then the basin-of-attraction identity is a good summary (or compression) of the microscopic state, and can be used to transmit information. Experiments could also be used to refine the hypothesis by checking whether, for example, the basin of attraction provides information about some identifiable invariant property of the stimulus (e.g., spatial shape), while additional information about other aspects of the stimulus (e.g., contrast) is encoded in the microscopic pattern within the basin of attraction. Unfortunately, in the real networks of $N = 40$ neurons we can just begin to detect the emergence of the metastable states, and we are thus unable to perform this test. While at $N = 120$ neurons the structure is much richer, these synthetic networks have no associated set of 'experimentally observed' patterns locked to the stimulus, so such a test, even a simulated one, also cannot be performed. To conclude this section, we note that locally stable states can be defined without reference to a particular model (here, the Ising model), by simply finding patterns that are sampled well enough in the data and are more frequent than all of their single-spin-flip neighbors [45]. Even if pairwise models were to break down at higher $N$, our suggestion might still provide a viable coding hypothesis.

XI Discussion

Ising models, with a spin glass structure of competing interactions, are the least structured models that can describe the observed mean spike probabilities and pairwise correlations in networks of real neurons. Remarkably, these minimal models continue to provide a good description of the higher order correlations among retinal neurons up to $N = 40$. Although correlations among pairs of cells are weak, the behavior of these large groups of cells is strongly collective [61].

In the Ising model, the bias for a particular neuron to fire (spin up) or remain silent (spin down) has two components, one intrinsic to the neuron and one from the interactions with other neurons. At $N = 40$, these different contributions are comparable, and so the pairwise models cannot simply be viewed as a small perturbative correction to the independent model. Nevertheless, the sparsity of spikes is of crucial importance for models of neural behavior. In small networks at least, rare spiking means that higher order interactions don't have much chance to contribute even if they are present, and we hypothesize that this property of the neural code could be important also as the brain tries (literally) to make sense of the incoming data.

Having found that the Ising model provides a good description of the real network, we are encouraged to take the model seriously as a statistical mechanics problem. In particular, since the system has competing interactions that differ from neuron to neuron, we would like to understand, in the spirit of spin glass theory [55], whether the particular system that we observe is typical of some ensemble of networks. We have been able to construct such an ensemble, and use this as a tool to predict the behavior of larger systems. In the salamander retina, the "correlated patch" of neurons, within which the pairwise correlations are largely independent of the distance between cells, contains substantially more cells than the 40 from which we record, so we would like to understand what our framework predicts on this larger scale. Remarkably, the larger networks that we construct seem to be operating close to a critical point in their phase diagram.

Criticality is perhaps the most dramatic signature of collective behavior. The next generation of experiments should have access to the full population of cells [30, 31, 32], and will test this prediction in detail. As emphasized in recent work on natural images, one can find evidence of criticality even without constructing an explicit statistical model [45]. Further, it should be remembered that the pattern of correlations in the retina depends not just on the underlying connectivity or the structure of the visual inputs, but also on the adaptation state of the system. While much has been learned about the way in which individual neurons adapt to changes in the input stimulus statistics, especially in the retina [62, 63], much less is known about how these adaptation processes influence the behavior of whole networks. If operation at a critical point is an essential feature of network function, adaptation after a sudden change in input statistics should bring the system back to criticality, even if the exact pattern of spiking probability and correlations in the network is changed, and this is testable. For a model of how adaptive dynamics can enforce critical behavior, see Ref [64].

Our second prediction concerns the emergence and role of locally stable states in the probability distribution over binary words: we have shown that at $N=40$ several locally stable states appear reproducibly from trial to trial despite substantial variability in the detailed binary patterns of neural activity. Importantly, the analysis which identifies these states makes no reference to the visual stimulus, yet these states are tied to the stimulus in a way which suggests a role in the neural code. Furthermore, by studying the synthetic networks we have demonstrated that the vocabulary of such states could vastly expand by the time $N$ reaches 120, providing enough capacity and dynamic range even to dominate the encoding of incoming stimuli. Such a collective code is not inconsistent with the single-neuron results obtained to date; rather, one possible manifestation of collective coding is that, as more and more neurons are observed simultaneously, what was thought to be noise or uncorrelated fluctuation in a single neuron is seen to be part of the collective state.
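Within the model, each observed binary word can be assigned to a locally stable state by zero-temperature single-spin-flip dynamics, that is, by greedily flipping whichever spin most lowers the Ising energy until no flip helps. The sketch below is an assumed implementation, not the authors' code; it again takes ±1 spins (0/1 words would first be mapped to ±1) and a symmetric, zero-diagonal J.

import numpy as np

def descend_to_metastable(word, h, J, max_steps=10_000):
    # Zero-temperature single-spin-flip descent: starting from an observed word
    # (entries ±1), repeatedly flip the spin whose flip lowers the energy most,
    # stopping at a local minimum of H. Assumes J is symmetric, zero diagonal.
    sigma = np.asarray(word, dtype=float).copy()
    for _ in range(max_steps):
        dE = 2.0 * sigma * (h + J @ sigma)   # energy cost of flipping each spin
        i = int(np.argmin(dE))
        if dE[i] >= 0:                       # no single flip lowers the energy
            break
        sigma[i] = -sigma[i]
    return sigma                             # metastable state labeling the basin

Words that descend to the same metastable state would carry the same codeword label, and the reproducibility of these labels across repeated presentations of the stimulus is the property invoked above.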

Both ideas, tuning to criticality and the use of locally stable states as robust codewords, have been motivated by experiments on a 40-neuron network in which the beginnings of nontrivial collective behavior could be identified. We have outlined how these predictions can plausibly be tested in a next generation of experiments that will access networks of a hundred or more neurons. If validated, these and similar results would provide a substantive link from real data to the large body of theoretical work on neural networks.

Acknowledgements.
This work was supported in part by NIH Grants R01 EY14196 and P50 GM071508, by the E. Matilda Ziegler Foundation, by NSF Grants IIS–0613435 and PHY–0650617, and by the Burroughs Wellcome Fund Program in Biological Dynamics. WB also thanks his colleagues at the Rockefeller University and at the Università di Roma, La Sapienza, for their hospitality during portions of this work.

References

  • [1] JJ Hopfield, Neural networks and physical systems with emergent collective computational abilities. Proc Nat’l Acad Sci (USA) 79, 2554–8 (1982).
  • [2] DJ Amit, Modeling Brain Function: The World of Attractor Neural Networks (Cambridge University Press, Cambridge, 1989).
  • [3] J Hertz, A Krogh & RG Palmer, Introduction to the Theory of Neural Computation (Addison Wesley, Redwood City, 1991).
  • [4] E Schneidman, MJ Berry II, R Segev & W Bialek, Weak pairwise correlations imply strongly correlated network states in a neural population. Nature 440, 1007–1012 (2006), arXiv:q-bio/0512013v1.
  • [5] F Rieke, D Warland, RR de Ruyter van Steveninck & W Bialek, Spikes: Exploring the Neural Code (MIT Press, Cambridge, 1997).
  • [6] ET Jaynes, Information theory and statistical mechanics. Phys Rev 106, 620–630 (1957).
  • [7] E Schneidman, S Still, MJ Berry II & W Bialek, Network information and connected correlations, Phys Rev Lett 91, 238701 (2003), arXiv:physics/0307072v1.
  • [8] J Shlens, GD Field, JL Gauthier, MI Grivich, D Petrusca, A Sher, AM Litke & EJ Chichilnisky, The structure of multi-neuron firing patterns in primate retina. J Neurosci 26, 8254–66 (2006).
  • [9] J Shlens, GD Field, JL Gauthier, M Greschner, A Sher, AM Litke & EJ Chichilnisky, The structure of large-scale synchronized firing in primate retina. J Neurosci 29, 5022–31 (2009).
  • [10] IE Ohiorhenuan & JD Victor, Maximum entropy modeling of multi-neuron firing patterns in V1. Proceedings of 2007 Cosyne conference; http://cosyne.org.
  • [11] S Yu, D Huang, W Singer & D Nikolic, A small world of neuronal synchrony. Cereb Cortex 18, 2891–2901 (2008).
  • [12] A Tang, D Jackson, J Hobbs, W Chen, JL Smith, H Patel, A Prieto, D Petrusca, MI Grivich, A Sher, P Hottowy, W Dabrowski, AM Litke & JM Beggs, A maximum entropy model applied to spatial and temporal correlations from cortical networks in vitro. J Neurosci 28, 505–518 (2008).
  • [13] G Tkačik, E Schneidman, MJ Berry II & W Bialek, Ising models for networks of real neurons. arXiv:q-bio/0611072 (2006).
  • [14] T Broderick, M Dudik, G Tkačik, RE Schapire & W Bialek, Faster solutions of the inverse pairwise Ising problem. arXiv:0712.2437 (2007).
  • [15] S Cocco, S Leibler & R Monasson, Neuronal couplings between retinal ganglion cells inferred by efficient inverse statistical physics methods. Proc Nat’l Acad Sci USA 106, 14058–62 (2009).
  • [16] S Nirenberg & J Victor, Analyzing the activity of large populations of neurons: How tractable is the problem? Curr Opin Neurobiol 17, 397–400 (2007).
  • [17] Y Roudi, E Aurell & JA Hertz, Statistical physics of pairwise probability models. Front Comput Neurosci 3, 22 (2009).
  • [18] Y Roudi, S Nirenberg & PE Latham, Pairwise maximum entropy models for studying large biological systems: when they can work and when they can’t. PLoS Comput Biol 5, e1000380 (2009).
  • [19] Y Roudi, J Tyrcha & J Hertz, The Ising model for neural data: Model quality and approximate methods for extracting functional connectivity. Phys Rev E 79, 051915 (2009).
  • [20] M Bethge & P Berens, Near-maximum entropy models for binary neural representations of natural images. In Advances in Neural Information Processing Systems 20, J Platt et al eds, pp 97–104 (Cambridge, MA, MIT Press, 2008).
  • [21] M Mezard & T Mora, Constraint satisfaction problems and neural networks: a statistical physics perspective. J Physiol Paris 103, 107–113 (2009).
  • [22] B Cessac, H Rostro, JC Vasques & T Vieville, How Gibbs distributions may naturally arise from synaptic adaptation mechanisms. J Stat Phys 136, 565–602 (2009).
  • [23] O Marre, SE Boustani, Y Fregnac & A Destexhe, Prediction of spatio–temporal patterns of neural activity from pairwise correlations. Phys Rev Lett 102, 138101 (2009).
  • [24] W Bialek & R Ranganathan, Rediscovering the power of pairwise interactions. arXiv.org:0712.4397 (2007).
  • [25] G Tkačik, Information Flow in Biological Networks (Dissertation, Princeton University, 2007).
  • [26] F Seno, A Trovato, JR Banavar & A Maritan, Maximum entropy approach for deducing amino acid interactions in proteins. Phys Rev Lett 100, 078102 (2008).
  • [27] I Volkov, JR Banavar, SP Hubbell & A Maritan, Inferring species interactions in tropical forests. Proc Nat’l Acad Sci USA 106, 13854–13859 (2009).
  • [28] PS Dhadialla, IE Ohiorhenuan, A Cohen & S Strickland, Maximum–entropy network analysis reveals a role for tumor necrosis factor in peripheral nerve development and function. Proc Nat’l Acad Sci USA 106, 12494–12499 (2009).
  • [29] M Weigt, RA White, H Szurmant, JA Hoch & T Hwa, Identification of direct residue contacts in protein-protein interaction by message passing. Proc Nat’l Acad Sci USA 106, 67–72 (2009).
  • [30] R Segev, J Goodhouse, JL Puchalla & MJ Berry II, Recording spikes from a large fraction of the ganglion cells in a retinal patch. Nat Neurosci 7, 1155–62 (2004).
  • [31] D Gunning, C Adams, W Cunningham, K Mathieson, V O’Shea, KM Smith & EJ Chichilnisky, 30 μm spacing 519-electrode arrays for in vitro retinal studies. Nuc Inst Methods A 546, 148–153 (2005).
  • [32] D Amodei, G Schwartz & MJ Berry II, Correlations and the structure of the population code in a dense patch of the retina. Proceedings MEA Meeting 2008, pp. 197, Stett A ed., BIOPRO Baden-Württemberg, 2008.
  • [33] J Keat, P Reinagel, RC Reid & M Meister, Predicting every spike: a model for the responses of visual neurons. Neuron 30, 803–817 (2001).
  • [34] AL Fairhall, CA Burlingame, R Narasimhan, RA Harris, JL Puchalla & MJ Berry II, Selectivity for multiple stimulus features in retinal ganglion cells. J Neurophysiol 96, 2724–2738 (2006).
  • [35] JW Pillow, J Shlens, L Paninski, A Sher, AM Litke, EJ Chichilnisky & EP Simoncelli, Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature 454, 995–9 (2008).
  • [36] KS Sadeghi, Progress on Deciphering the Retinal Code. (Dissertation, Princeton University, 2009).
  • [37] D Perkel & T Bullock, Neural coding. Neurosci Res Program Bull 6, 221–343 (1968).
  • [38] I Ginzburg & H Sompolinsky, Theory of correlations in stochastic neural networks. Phys Rev E 50, 3171–3191 (1994).
  • [39] S Nirenberg, SM Carcieri, AL Jacobs & PE Latham, Retinal ganglion cells act largely as independent encoders. Nature 411, 698–701 (2001).
  • [40] JJ Hopfield, Searching for memories, sudoku, implicit check bits, and the iterative use of not–always–correct rapid neural computation. Neural Comp 20, 1119–1164 (2008); arXiv:q–bio/0609006.
  • [41] D Field, Relations between the statistics of natural images and the response properties of cortical cells. J Opt Soc A 4, 2379–94 (1987).
  • [42] DL Ruderman & W Bialek, Statistics of natural images: scaling in the woods. Phys Rev Lett 73, 814–817 (1994).
  • [43] DW Dong & JJ Atick, Statistics of natural time-varying images. Network 6, 345–358 (1995).
  • [44] O Schwartz & EP Simoncelli, Natural signal statistics and sensory gain control. Nat Neurosci 4, 819–25 (2001).
  • [45] GJ Stephens, T Mora, G Tkačik & W Bialek, Thermodynamics of natural images. arXiv.org:0806.2694 (2008).
  • [46] LP Kadanoff, Statistical Physics: Statics, Dynamics and Renormalization (World Scientific, Singapore, 2000).
  • [47] JL Puchalla, E Schneidman, RA Harris & MJ Berry II, Redundancy in the population code of the retina. Neuron 46, 493–504 (2005).
  • [48] R Segev, J Puchalla & MJ Berry II, Functional organization of retinal ganglion cells in the salamander retina. J Neurophysiol 95, 2277–92 (2006).
  • [49] GE Hinton & TJ Sejnowski, Learning and relearning in Boltzmann machines. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, ed. DE Rumelhart, JL McClelland & the PDP Research Group, pp. 282–317 (MIT Press, Cambridge, 1986).
  • [50] M Dudik, SJ Phillips & RE Schapire, Performance guarantees for regularized maximum entropy density estimation. Proceedings of the 17th Annual Conference on Learning Theory (2004).
  • [51] GE Hinton, Training products of experts by minimizing contrastive divergence. Neural Comput 14, 1771–1800 (2002).
  • [52] J Lin, Divergence measures based on the Shannon entropy. IEEE Trans Inf Theory 37, 145–151 (1991).
  • [53] V Sessak & R Monasson, Small-correlation expansions for the inverse Ising problem. J Phys A 42, 055001 (2009).
  • [54] F Azhar, An Information Theoretic Study of Neural Populations (Dissertation, University of California at Santa Barbara, 2008).
  • [55] M Mezard, G Parisi & MA Virasoro, Spin Glass Theory and Beyond (World Scientific, Singapore, 1987).
  • [56] TAJ Duke & D Bray, Heightened sensitivity of a lattice of membrane receptors. Proc Nat’l Acad Sci (USA) 96, 10104–8 (1999).
  • [57] VM Eguiluz, M Ospeck, Y Choe, AJ Hudspeth & MO Magnasco, Essential nonlinearities in hearing. Phys Rev Lett 84, 5232–5 (2000).
  • [58] S Camalet, T Duke, F Jülicher & J Prost, Auditory sensitivity provided by self-tuned critical oscillators of hair cells. Proc Nat’l Acad Sci (USA) 97, 3183–3188 (2000).
  • [59] JJ Hopfield & DW Tank, “Neural” computation of decisions in optimization problems. Biol Cybern 52, 141–152 (1985).
  • [60] JJ Hopfield & DW Tank, Computing with neural circuits: a model. Science 233, 625–633 (1986).
  • [61] For results on model systems cf. SM Bohte, H Spekreijse & PR Roelfsema, The effects of pairwise and higher-order correlations on the firing rate of a postsynaptic neuron. Neural Comput 12, 153–179 (2000).
  • [62] S Smirnakis, MJ Berry II, D Warland, W Bialek & M Meister, Adaptation of retinal processing to image contrast and spatial scale. Nature 386, 69–73 (1997).
  • [63] T Hosoya, SA Baccus & M Meister, Dynamic predictive coding by the retina. Nature 436, 71–77 (2005).
  • [64] MO Magnasco, O Piro & GA Cecchi, Self–tuned critical anti–Hebbian networks. Phys Rev Lett 102, 258102 (2009); arXiv:0808.3996v1.