# Bayesian spike inference from calcium imaging data

###### Abstract

We present efficient Bayesian methods for extracting neuronal spiking information from calcium imaging data. The goal of our methods is to sample from the posterior distribution of spike trains and model parameters (baseline concentration, spike amplitude, etc.) given noisy calcium imaging data. We present discrete time algorithms where we sample the existence of a spike at each time bin using Gibbs methods, as well as continuous time algorithms where we sample over the number of spikes and their locations at an arbitrary resolution using Metropolis-Hastings methods for point processes. We provide Rao-Blackwellized extensions that (i) marginalize over several model parameters and (ii) provide smooth estimates of the marginal spike posterior distribution in continuous time. Our methods serve as complements to standard point estimates and allow for quantification of uncertainty in estimating the underlying spike train and model parameters.

## I Introduction

Calcium imaging is an increasingly popular technique for large scale data acquisition in neuroscience [1]. The method detects underlying single neuron activity indirectly through observations of fluorescent indicators of calcium concentration. A key problem in the analysis of calcium imaging data is the inference of exact spike times from the noisy calcium signal, which has slower dynamics than neural spiking and is sampled at a relatively low acquisition rate. A variety of methods have been proposed to deal with this problem, including particle filtering [2], fast nonnegative deconvolution for approximate maximum-a-posteriori (MAP) inference [3], greedy template matching [4], and methods for estimating signals with finite rate of innovation [5]. In these methods, parameter estimation is typically performed offline or in an iterative manner using, for example, the expectation-maximization algorithm.

In this paper we propose Bayesian methods for sampling from the joint posterior distribution of the spike train and the model parameters given the fluorescence observations. We present two efficient approaches for sampling the spikes. The first is a discrete time binary sampler that samples whether a spike occurred at each timebin using Gibbs sampling. By exploiting the weak interaction between spikes at distant timebins, we show that a full sample can be obtained with just $O(T)$ complexity, where $T$ is the number of timebins, and that parallelization is also possible. Our second sampler operates in continuous time and samples the number of spikes and the spike times at arbitrary resolution using Metropolis-Hastings (MH) techniques. We use a proposal distribution that moves the spike times around a local neighborhood based on the resulting signal residual. This proposal distribution enables fast mixing and tractable inference; each full sample is obtained with just $O(N)$ complexity, where $N$ is the total number of spikes, rendering this algorithm particularly efficient for recordings that are sparse and/or obtained at a fine resolution. Moreover, in high-SNR conditions, this method enables super-resolution spike inference (i.e., determining where each spike occurred within each timebin) and smooth estimates of the marginal spike posterior using a Rao-Blackwellized scheme. We also show that it is possible to marginalize over several of the model parameters and derive collapsed Gibbs samplers that exhibit faster mixing.

## II Model description and block Gibbs sampling

We assume that we observe a single neuron calcium trace for a duration of $T$ timesteps. Let $s \in \{0,1\}^T$ be the binary spiking vector of the neuron that indicates the existence of a spike at each timebin. The calcium activity $c$ that is generated by $s$ can be described by a simple first order autoregressive process as

$$c_t = \gamma c_{t-1} + A s_t,$$

where $\gamma \in (0,1)$ is a discrete time constant, $A$ indicates the amplitude of each spike, and $s_t \in \{0,1\}$, with an initial condition $c_0$ for the calcium concentration. Our fluorescence observation vector $y \in \mathbb{R}^T$ can be written as

$$y = c + b\mathbf{1}_T + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \sigma^2 I_T),$$

where $b$ is the baseline concentration and $\varepsilon$ is some random Gaussian noise. Our goal is to estimate the spiking vector $s$ given the observation vector $y$, which we assume is normalized in the interval $[0,1]$. Approximate MAP methods [3] have been shown to perform well under high SNR assumptions. However, in the low SNR regime their performance degrades, and the parameter estimation becomes more challenging. To overcome these limitations we introduce a block-Gibbs sampler that produces samples from the joint posterior distribution of the spikes and model parameters given $y$. Let $G$ and $\vec{\gamma}$ be defined respectively as

$$G = \begin{bmatrix} 1 & & & \\ -\gamma & 1 & & \\ & \ddots & \ddots & \\ & & -\gamma & 1 \end{bmatrix}, \qquad \vec{\gamma} = (\gamma, \gamma^2, \ldots, \gamma^T)^\top.$$

By denoting $\theta = (A, b, c_0, \sigma^2)$, the likelihood can be written as

$$p(y \mid s, \theta) = \mathcal{N}\!\left(y;\; A G^{-1} s + c_0\vec{\gamma} + b\mathbf{1}_T,\; \sigma^2 I_T\right),$$

with $\mathbf{1}_T$ denoting a vector of ones of length $T$. We here place an i.i.d. Bernoulli process prior on the spike trains, so the probability of a spike in any timebin is some constant $q$. Under this uniform spiking assumption the discrete time constant $\gamma$ can be estimated robustly from the autocovariance function of $y$. For the prior probability $q$ we set a conjugate hyper-prior. At each iteration we update its hyper-parameters using empirical Bayes [6]: for a spiking vector $s$ the evidence function [7] can be computed in closed form.

The evidence function is constant for a fixed ratio of the two hyper-parameters and is maximized at a specific value of this ratio. To find distinct values for the two hyper-parameters, we place a flat hyperprior on one of them, which yields an exponential posterior for the other.
For the rest of the parameters we assume a joint half-normal (nonnegative) prior, $(A, b, c_0)^\top \sim \mathcal{N}_{\ge 0}(\mu, \Sigma)$. The hyper-parameters $\mu$ and $\Sigma$ can also be learned from the data, but we selected general values that assume little prior knowledge and model a wide prior. Other prior choices, e.g. exponential, yield practically the same results. Finally, for the noise variance $\sigma^2$ we set an inverse Gamma prior (the variance can also be estimated from the autocovariance function via the Yule-Walker equations, but we favor a Bayesian approach here), which is weak and relatively flat in both high and low-SNR regimes for traces normalized to the interval $[0,1]$. Under these assumptions, the block Gibbs sampler proceeds as follows to draw samples from the joint posterior:

1. Sample the spike vector $s$ from $p(s \mid \theta, y)$ (Sections III and IV).
2. Sample $(A, b, c_0)$ jointly from their full conditional, a multivariate normal truncated to the nonnegative orthant.
3. Sample the noise variance $\sigma^2$ from its inverse-Gamma full conditional.

We now turn to the main problem of sampling from the posterior of the spike vector $s$.
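As a concrete illustration of the generative model above, the following minimal sketch simulates a trace and evaluates the likelihood in its matrix form; all parameter values are toy choices of ours, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters (assumed for illustration only).
T, gamma, A, b, c0, sigma = 500, 0.95, 0.4, 0.1, 0.05, 0.03
s = (rng.random(T) < 0.02).astype(float)      # i.i.d. Bernoulli spike vector

# Forward simulation of c_t = gamma*c_{t-1} + A*s_t and y = c + b + noise.
c = np.empty(T)
prev = c0
for t in range(T):
    prev = gamma * prev + A * s[t]
    c[t] = prev
y = c + b + sigma * rng.standard_normal(T)

# Equivalent matrix form c = A*G^{-1}s + c0*gvec, with G bidiagonal.
G = np.eye(T) - gamma * np.eye(T, k=-1)
gvec = gamma ** np.arange(1, T + 1)
c_mat = A * np.linalg.solve(G, s) + c0 * gvec

def loglik(s_vec):
    """Gaussian log-likelihood of y for a candidate spike vector s_vec."""
    resid = y - A * np.linalg.solve(G, s_vec) - c0 * gvec - b
    return -0.5 * resid @ resid / sigma**2
```

The recursion and the matrix form agree exactly, which is the identity the samplers below exploit.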

## III Discrete binary sampler

To sample $s$ we take a Gibbs-sampling approach where at each iteration we sample $s_i$ given the current state of the rest of the entries. We use a Metropolized Gibbs (MG) sampler, with the flip of each binary entry being the only possible move [8]. If $s_i^{(j)}$ is the state of $s_i$ at the $j$-th sample, let $s_{\setminus i} = (s_1^{(j+1)}, \ldots, s_{i-1}^{(j+1)}, s_{i+1}^{(j)}, \ldots, s_T^{(j)})$ denote the current state of the rest of $s$. The sampling algorithm flips $s_i$ with probability

$$P\big(s_i \to 1 - s_i\big) = \min\left\{1,\ \frac{p\big(1 - s_i \mid s_{\setminus i}, \theta, y\big)}{p\big(s_i \mid s_{\setminus i}, \theta, y\big)}\right\}. \tag{1}$$

The ratio inside (1) can be computed more efficiently in the log-domain. By denoting the current residual $r = y - A G^{-1} s - c_0\vec{\gamma} - b\mathbf{1}_T$, the log-ratio becomes

$$\mathcal{L}_i = (1 - 2s_i)\left(\log\frac{q}{1-q} + \frac{A}{\sigma^2}\,(G^{-1}e_i)^\top r\right) - \frac{A^2}{2\sigma^2}\,\big\|G^{-1}e_i\big\|^2, \tag{2}$$

with $e_i$ the $i$-th standard basis vector. It is important to note that the log-ratio of (2) can be computed efficiently in just $O(1)$ time. The matrix $G^{-1}$ is lower triangular and Toeplitz with $(G^{-1})_{ij} = \gamma^{i-j}$ for $i \ge j$. Therefore it can be approximated by a banded Toeplitz matrix with a bandwidth $k$ that depends on $\gamma$. Consequently $G^{-\top}G^{-1}$ can again be approximated by a banded matrix and is also asymptotically Toeplitz as $T \to \infty$. It follows that the products $(G^{-1}e_i)^\top r$ can be computed in just $O(1)$ time (technically in $O(b)$ time for $b$ bits of accuracy). This gives a total $O(T)$ complexity per full sweep to obtain a new sample of $s$. For large $T$, the algorithm can also be parallelized: the entries of $s$ which are at least $k$ timesteps apart are approximately conditionally independent, so $s$ can be partitioned into chunks of length $k$ that can be sampled in parallel. Instead of the MG sampler presented here, one can also use a plain Gibbs sampler with similar complexity; in practice we observed that the MG sampler led to faster mixing.
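The sweep described above can be sketched as follows. The truncation width `W`, the seed, and all constants are our own illustrative choices, and the baseline and initial condition are dropped for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

T, gamma, A, sigma, q = 400, 0.9, 0.5, 0.05, 0.02
W = int(np.ceil(np.log(1e-8) / np.log(gamma)))   # bandwidth with gamma^W < 1e-8
kern = A * gamma ** np.arange(W)                 # truncated column A*G^{-1}e_i

s_true = (rng.random(T) < q).astype(int)
y = np.convolve(s_true, kern)[:T] + sigma * rng.standard_normal(T)

def mg_sweep(s, r):
    """One Metropolized-Gibbs sweep; r is the residual y - A*G^{-1}s."""
    logit_q = np.log(q / (1 - q))
    for i in range(T):
        w = min(W, T - i)
        delta = 1 - 2 * s[i]                     # +1 inserts, -1 deletes a spike
        # log posterior ratio of flipped vs. current state, O(1) per bin (eq. 2)
        lr = (delta * logit_q
              + delta / sigma**2 * (kern[:w] @ r[i:i + w])
              - 0.5 / sigma**2 * (kern[:w] @ kern[:w]))
        if np.log(rng.random()) < lr:            # MG acceptance: min{1, ratio}
            s[i] = 1 - s[i]
            r[i:i + w] -= delta * kern[:w]       # local O(1) residual update
    return s, r

s, r = np.zeros(T, dtype=int), y.copy()
for _ in range(50):
    s, r = mg_sweep(s, r)
```

Maintaining the residual `r` and touching only a window of width `W` per flip is what keeps the full sweep at $O(T)$.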

## IV Continuous time sampler

The spike samplers presented in Section III operate on a discrete domain, with the length of each timebin set by the frame rate of the calcium imaging. While simple and effective, these methods can have several disadvantages. First, when the calcium signal is acquired by raster scanning, the frame rate is typically in the range of 10-30 Hz. The length of each timebin is then too large to assume that the underlying neuron fires at most one spike per bin. In certain applications, higher resolution of the spike times is needed; a typical example is “connectomics”, where the order in which the various neurons fire is crucial for determining their connectivity [9, 10]. Moreover, the complexity of the discrete samplers scales with the number of observed timebins (i.e., with the temporal resolution). In experiments where this resolution is very fine, discrete binary samplers can become computationally expensive. Compounding this, neural activity may be very sparse, and thus sampling at intervals where no spikes occur is uninformative. Instead, a method that scales with the number of spikes is more desirable.

To address these issues we propose a sampler that samples the spike times directly in continuous time, given the calcium observations. The state space corresponds to the set of spike times, and transdimensional moves change the dimensionality of this state space. At every iteration we sample the new time of the $i$-th spike given the current locations of the rest of the spikes and the hyperparameters. Since the number of spikes is in general unknown, we also allow for spike insertions and deletions at each iteration. Similar approaches have been proposed before, see e.g. [11] in the context of signals with finite rate of innovation, where however the number of “spikes” was considered known. In the continuous time setup, the calcium evolution is described by the differential equation

$$\dot{c}(t) = -c(t)/\tau + A \sum_i \delta(t - t_i),$$

where $\tau$ is the (continuous) time constant of the calcium indicator. For a timebin of size $\Delta$, the discrete and continuous time constants are related through $\gamma = \exp(-\Delta/\tau)$. If $t_1, \ldots, t_N$ are the spike times, denoted collectively by $\mathcal{S} = \{t_1, \ldots, t_N\}$, the calcium signal is given by

$$c_{\mathcal{S}}(t) = c_0 e^{-t/\tau} + A \sum_{i=1}^{N} e^{-(t - t_i)/\tau}\, \mathbb{1}(t \ge t_i), \tag{3}$$

where the subscript indicates the dependence of the calcium signal on the set of spike times $\mathcal{S}$, and the observations are interpreted as $y_k = c_{\mathcal{S}}(k\Delta) + b + \varepsilon_k$, giving a likelihood of the form (we ignore $b$ for simplicity)

$$p(y \mid \mathcal{S}, \theta) = \mathcal{N}\big(y;\; c_{\mathcal{S}},\; \sigma^2 I_T\big),$$

where $c_{\mathcal{S}} = \big(c_{\mathcal{S}}(\Delta), \ldots, c_{\mathcal{S}}(T\Delta)\big)^\top$. Finally, since the algorithm now operates in the continuous domain, we replace the i.i.d. Bernoulli prior for the spikes with a homogeneous continuous Poisson process prior with rate $\lambda$. If we set a prior $\lambda \sim \mathrm{Gamma}(\alpha, \beta)$ (with $\beta$ denoting the rate parameter), then

$$p(\mathcal{S} \mid \alpha, \beta) = \frac{\beta^{\alpha}}{\Gamma(\alpha)} \cdot \frac{\Gamma(\alpha + N)}{(\beta + T\Delta)^{\alpha + N}}.$$

For the hyper-parameters $\alpha, \beta$ we can again compute the evidence function: it is maximized for $\alpha, \beta \to \infty$ with $\alpha/\beta = N/(T\Delta)$. By choosing these values the posterior of $\lambda$ puts all its mass on the maximum likelihood estimate $\hat{\lambda} = N/(T\Delta)$. We can either use this deterministic value or assume a fixed finite value for $\beta$ and set $\alpha = \beta\hat{\lambda}$ at each iteration. We next describe the sampling of the spike times in more detail.
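The continuous-time trace of (3) and the empirical-Bayes rate estimate can be sketched as follows; the time constant, frame rate, and spike times are assumed toy values of ours:

```python
import numpy as np

tau, Delta, A, c0 = 1.0, 1 / 15, 0.5, 0.1       # e.g. a 15 Hz imaging rate
T = 150                                          # number of timebins
spikes = [0.73, 2.10, 2.14, 6.40]                # continuous spike times (sec)

def calcium(t, spike_times):
    """Evaluate c_S(t) of (3): decaying initial condition plus spike kernels."""
    t = np.asarray(t, dtype=float)
    out = c0 * np.exp(-t / tau)
    for ti in spike_times:
        out = out + A * np.exp(-(t - ti) / tau) * (t >= ti)
    return out

tgrid = Delta * np.arange(1, T + 1)              # observation times k*Delta
c_obs = calcium(tgrid, spikes)

# Empirical-Bayes rate: the Gamma(alpha, beta) evidence peaks as
# alpha, beta -> infinity with alpha/beta = N/(T*Delta), the ML rate.
lam_hat = len(spikes) / (T * Delta)
```

Note that the spike times live on the real line, so nothing constrains them to the observation grid; this is what enables super-resolution inference.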

### IV-A Move existing spikes

We can update individual spike times either using random walk Metropolis-Hastings (MH) moves or a Gibbs algorithm. For the MH algorithm, we propose a move $t_i \to t_i'$ using a Gaussian proposal density centered at $t_i$ with a small standard deviation, e.g., on the order of the timebin size $\Delta$. Since this proposal is symmetric, the proposed move is accepted with probability

$$P\big(t_i \to t_i'\big) = \min\left\{1,\ \frac{p\big(y \mid \mathcal{S}'\big)\, p\big(\mathcal{S}'\big)}{p\big(y \mid \mathcal{S}\big)\, p\big(\mathcal{S}\big)}\right\}, \qquad \mathcal{S}' = \big(\mathcal{S} \setminus \{t_i\}\big) \cup \{t_i'\}. \tag{4}$$

Similarly to the discrete case, each spike contributes an exponentially decaying calcium trace (3) that has a temporally localized effect on the whole vector $c_{\mathcal{S}}$. By exploiting this, we can implement the spike shifting operation in just $O(1)$ time, provided that we keep in memory the residual vector $r$ with $r_k = y_k - c_{\mathcal{S}}(k\Delta)$. After any move $t_i \to t_i'$, the vectors $c_{\mathcal{S}}$ and $r$ can be locally updated, and the ratio in (4) can be computed in just $O(1)$ time.

An alternative random walk MH method uses a local proposal density more tuned to the calcium data than a Gaussian random walk. For each spike time, we construct a distribution from the residual between the data and the current calcium signal, restricted to a small time interval centered on the current spike time, i.e., $I_i = [t_i - w, t_i + w]$ for a small half-width $w$. The proposal distribution can then be expressed as

$$q\big(t_i' \mid \mathcal{S}_{\setminus i}\big) \propto \big[y(t_i') - c_{\mathcal{S}_{\setminus i}}(t_i')\big]_+\, \mathbb{1}\big(t_i' \in I_i\big),$$

where $\mathcal{S}_{\setminus i}$ is the set of spike times excluding $t_i$ and $[\cdot]_+$ denotes the positive part. Note that in this case the proposal distribution is no longer symmetric (since the local intervals are different), and therefore the Hastings ratio is also included in the acceptance probability. However, this can also be computed in $O(1)$ time (technically, this computation scales with the resolution at which we discretize the proposal distribution; in practice one can use a coarse resolution to choose a bin from the proposal distribution and then sample inside this bin uniformly), and thus this scheme remains efficient. This method works quite well empirically. Finally, we note that this proposal distribution can be used to derive a Rao-Blackwellized scheme for updating the continuous time posterior spike distribution, instead of using the actual spike samples.
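A discretized version of this residual-based proposal can be sketched as follows; the window half-width, grid size, seed, and residual function are illustrative assumptions of ours:

```python
import numpy as np

rng = np.random.default_rng(2)

def local_proposal(t_i, residual_fn, w=0.2, n_grid=64):
    """Grid and normalized weights on [t_i - w, t_i + w] from the residual."""
    grid = np.linspace(t_i - w, t_i + w, n_grid)
    # positive part of the residual, floored so the proposal keeps full
    # support (reverse moves must have nonzero density)
    weights = np.maximum(residual_fn(grid), 1e-12)
    return grid, weights / weights.sum()

def sample_move(t_i, residual_fn, w=0.2, n_grid=64):
    """Pick a coarse bin from the proposal, then a uniform time inside it."""
    grid, weights = local_proposal(t_i, residual_fn, w, n_grid)
    j = rng.choice(n_grid, p=weights)
    half = (grid[1] - grid[0]) / 2
    return grid[j] + rng.uniform(-half, half)
```

The coarse-grid-then-uniform step mirrors the footnoted trick above: the grid only needs to be fine enough to shape the proposal, while the final time remains continuous.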

Lastly, instead of the local random walk methods for moving spikes, we may use Gibbs sampling and sample each new location from its full conditional using, e.g., rejection sampling. While this approach mixes in fewer samples than MH (intuitively, a spike can move anywhere, instead of only locally), the computational cost for moving each spike is $O(T)$, which is undesirable.

### Iv-B Adding and removing spikes

The sampler over spike trains also requires transdimensional moves in order to sample over the number of spikes. To add or remove spikes, we follow a standard birth-death MH algorithm [12]. We choose a fixed probability $p_b$ for proposing the addition of a new spike (and $1 - p_b$ for a removal) and uniform proposal densities for adding and removing spikes. Suppose we want to add a new spike at time $t_\ast$ to the existing set of spikes $\mathcal{S}$. This is accepted with probability

$$a_{\mathrm{birth}} = \min\left\{1,\ \frac{\pi'\,(1 - p_b)\,q_d}{\pi\, p_b\, q_b(t_\ast)}\right\},$$

where $q_d$ is the proposal probability for removing $t_\ast$ from $\mathcal{S} \cup \{t_\ast\}$, $q_b$ is the proposal density for adding $t_\ast$ to $\mathcal{S}$, and $\pi'$ and $\pi$ are the posterior probabilities of the spike train (given the data and the spiking prior) after and before the proposed move. Under uniform proposals we have $q_b(t_\ast) = 1/(T\Delta)$ and $q_d = 1/(N+1)$, giving (with $\lambda$ the Poisson rate)

$$a_{\mathrm{birth}} = \min\left\{1,\ \frac{p\big(y \mid \mathcal{S} \cup \{t_\ast\}\big)}{p\big(y \mid \mathcal{S}\big)} \cdot \frac{\lambda\, T\Delta}{N + 1} \cdot \frac{1 - p_b}{p_b}\right\}.$$

Similarly, the acceptance ratio for removing spike $t_i$ is

$$a_{\mathrm{death}} = \min\left\{1,\ \frac{p\big(y \mid \mathcal{S} \setminus \{t_i\}\big)}{p\big(y \mid \mathcal{S}\big)} \cdot \frac{N}{\lambda\, T\Delta} \cdot \frac{p_b}{1 - p_b}\right\}.$$

Following [13], a typical choice for the birth proposal probability is $p_b = 1/2$, and we repeat the birth-death sampling process 10 times per iteration. Each iteration of the algorithm is schematically presented in Alg. 1.

Similarly to the case of shifting spikes, the addition and removal of a spike can also be implemented in just $O(1)$ time. Using a similar argument, it is also easy to show that the model parameters can be sampled in $O(N)$ time using the method described in Section II. As a result, the complexity of each iteration scales linearly with the number of spikes and not with the number of timebins, rendering this algorithm particularly attractive for recordings that are very sparse and/or have a fine temporal resolution.
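To make the birth-death mechanics concrete, here is a toy sketch with $p_b = 1/2$ (so the $p_b$ factors cancel) and uniform proposals. For clarity the likelihood is recomputed in full at every step; the $O(1)$ local residual updates described above are an optimization over this. All constants are assumed values of ours:

```python
import numpy as np

rng = np.random.default_rng(3)

tau, Delta, A, sigma, lam = 0.5, 0.05, 1.0, 0.2, 1.0
T = 200
L = T * Delta                                   # total recording length (sec)
tgrid = Delta * np.arange(1, T + 1)

def trace(spikes):
    out = np.zeros(T)
    for ti in spikes:
        out += A * np.exp(-(tgrid - ti) / tau) * (tgrid >= ti)
    return out

y = trace([2.0, 5.5]) + sigma * rng.standard_normal(T)

def loglik(spikes):
    resid = y - trace(spikes)
    return -0.5 * resid @ resid / sigma**2

def birth_death_step(spikes):
    if rng.random() < 0.5:                      # birth, proposed uniformly on [0, L]
        t_new = rng.uniform(0, L)
        lr = loglik(spikes + [t_new]) - loglik(spikes)
        # accept with min{1, likelihood ratio * lam*L/(N+1)}
        if np.log(rng.random()) < lr + np.log(lam * L / (len(spikes) + 1)):
            spikes = spikes + [t_new]
    elif spikes:                                # death of a uniformly chosen spike
        j = int(rng.integers(len(spikes)))
        rest = spikes[:j] + spikes[j + 1:]
        lr = loglik(rest) - loglik(spikes)
        # accept with min{1, likelihood ratio * N/(lam*L)}
        if np.log(rng.random()) < lr + np.log(len(spikes) / (lam * L)):
            spikes = rest
    return spikes

spikes = []
for _ in range(2000):
    spikes = birth_death_step(spikes)
```

The birth and death thresholds are exactly the log forms of the acceptance ratios above, with the likelihood ratio computed under the toy Gaussian observation model.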

## V Collapsed Gibbs sampler

It is possible to marginalize over the baseline $b$ and initial value $c_0$ and enhance the mixing rate of our algorithm [14]. We present this approach here for the discrete case and note that the continuous case can be treated similarly. For this part we assume $\sigma^2$ to be fixed. By denoting $\mathcal{N}(\mu_{bc}, \Sigma_{bc})$ the marginal prior for $(b, c_0)^\top$, the marginal likelihood is computed as (technically, this approach is approximate, since $b$ and $c_0$ have truncated normal priors but are here integrated over the whole $\mathbb{R}^2$; in practice, however, the posterior of $(b, c_0)$ puts very little, if not negligible, mass on negative values, and the approximation is very tight)

$$p\big(y \mid s, A, \sigma^2\big) = \mathcal{N}\!\left(y;\; A G^{-1} s + M\mu_{bc},\; \sigma^2 I_T + M \Sigma_{bc} M^\top\right), \qquad M = \big[\mathbf{1}_T,\ \vec{\gamma}\big],$$

and we used the fact that for a vector $a$ and a symmetric matrix $B$ such that $\Sigma^{-1} + B$ is positive definite, we have

$$\int e^{-\frac{1}{2}x^\top B x + a^\top x}\, \mathcal{N}(x; \mu, \Sigma)\, dx = \frac{\exp\left(\frac{1}{2} m^\top P^{-1} m - \frac{1}{2}\mu^\top \Sigma^{-1}\mu\right)}{\sqrt{\det(I + \Sigma B)}}, \qquad P = \Sigma^{-1} + B,\quad m = a + \Sigma^{-1}\mu.$$

The marginalized likelihood has the same functional form, and therefore $s$ and $A$ can again be sampled with the same methods. Moreover, by exploiting the low-rank structure of the covariance $\sigma^2 I_T + M\Sigma_{bc}M^\top$, it is easy to see that each multiplication of the form $(\sigma^2 I_T + M\Sigma_{bc}M^\top)^{-1} v$ can be performed in $O(T)$ time, and therefore the complexity of each full Gibbs sweep still scales as $O(T)$ and the algorithm remains efficient. After the initial burn-in period, the (Rao-Blackwellized) posterior of the marginalized parameters can be approximated as

$$p\big(b, c_0 \mid y\big) \approx \frac{1}{J} \sum_{j=1}^{J} p\big(b, c_0 \mid s^{(j)}, y\big),$$

where $s^{(j)}$ are the sampled spike vectors and each term can be computed by conditioning the joint Gaussian distribution of $(b, c_0, y)$ on $y$. Note that $\sigma^2$ is treated as a known parameter here. If we instead assume an inverse Gamma prior, then the posterior of $\sigma^2$ is no longer inverse Gamma; however, it can still be sampled with standard rejection sampling methods. Finally, marginalization over $A$ is also possible; however, the marginalized posterior of $s$ no longer has the nice quadratic form, and the computational complexity of the algorithm increases.
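The $O(T)$ solves with the rank-2 marginal covariance can be sketched with the Woodbury identity; the prior covariance values below are illustrative assumptions of ours:

```python
import numpy as np

rng = np.random.default_rng(4)

T, gamma, sigma = 300, 0.95, 0.1
M = np.column_stack([np.ones(T), gamma ** np.arange(1, T + 1)])  # [1_T, gvec]
Sigma_bc = np.diag([0.05, 0.05])          # assumed prior covariance of (b, c0)

def solve_marginal(v):
    """Compute (sigma^2 I + M Sigma_bc M^T)^{-1} v in O(T) via Woodbury."""
    core = np.linalg.inv(Sigma_bc) + (M.T @ M) / sigma**2        # 2x2 system
    return v / sigma**2 - (M @ np.linalg.solve(core, M.T @ v)) / sigma**4

v = rng.standard_normal(T)
dense = np.linalg.solve(sigma**2 * np.eye(T) + M @ Sigma_bc @ M.T, v)
```

Every operation involving $M$ is a tall-skinny product, so the cost per solve is linear in $T$ while agreeing with the dense $T \times T$ solve.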

## VI Results

We apply our methods to calcium imaging data from spinal cord neurons in vitro, collected using the calcium indicator GCaMP6s at a temporal resolution of 15 Hz [15]. The neurons were stimulated antidromically (so the true spike times are known) and fired small bursts of spikes; in several timebins multiple spikes were fired. Fig. 1 shows an application of the discrete algorithm with $b$ and $c_0$ marginalized out. 1000 samples were collected, with the first 200 discarded as a burn-in period. This particular trace has low SNR; nevertheless, the algorithm predicts most of the spikes and provides low variance estimates of the model parameters.

A limitation of the discrete algorithm is that it can produce at most one spike per timebin. To resolve this issue we ran the continuous time sampler on the same dataset. All the traces from imaged pixels that correspond to the same neuron were averaged to produce a high-SNR trace. The algorithm produced 500 samples after a burn-in period of 200 samples. The results are shown in Fig. 2. Again, the algorithm fits the observed calcium trace well and provides estimates of the spike posterior in continuous time. By binning the produced samples at the original bin size (bottom row), we see that the algorithm can assign multiple spikes per timebin and better approximate the true spikes.

## VII Conclusions - Future Work

We presented two classes of Bayesian methods for spike train inference from calcium imaging data. Our methods provide a principled approach for estimating the posterior distribution of the spike trains and provide robust estimates of the model parameters, especially in low-SNR conditions. We also derived collapsed Gibbs samplers that exhibit faster mixing with no significant computational cost per sample.

In future work, we plan to explore the use of Hamiltonian Monte Carlo [16] and particle Markov chain Monte Carlo [17] methods for more efficient spike sampling, and to extend our methods to allow for a slowly time-varying baseline, a phenomenon that is often observed under in vivo experimental conditions. We also plan to scale up to a spatial setup where each measurement corresponds to a pixel that is part of a neuron (or is shared across a small number of neurons) [18], and to the case of compressive calcium imaging [19], where each measurement is formed by projecting the calcium activity of the whole imaged area onto a random vector.

## Acknowledgment

We thank S. Chandramouli and D. Carlson for useful discussions, and T. Machado for sharing his spinal cord data. LP is supported by an NSF CAREER award. This work is also supported by ARO MURI W911NF-12-1-0594.

## References

- [1] M. B. Ahrens, M. B. Orger, D. N. Robson, J. M. Li, and P. J. Keller, “Whole-brain functional imaging at cellular resolution using light-sheet microscopy,” Nature methods, vol. 10, no. 5, pp. 413–420, 2013.
- [2] J. Vogelstein, B. Watson, A. Packer, R. Yuste, B. Jedynak, and L. Paninski, “Spike inference from calcium imaging using sequential monte carlo methods,” Biophysical journal, vol. 97, no. 2, pp. 636–655, 2009.
- [3] J. Vogelstein, A. Packer, T. Machado, T. Sippy, B. Babadi, R. Yuste, and L. Paninski, “Fast non-negative deconvolution for spike train inference from population calcium imaging,” Journal of Neurophysiology, vol. 104, no. 6, pp. 3691–3704, 2010.
- [4] B. F. Grewe, D. Langer, H. Kasper, B. M. Kampa, and F. Helmchen, “High-speed in vivo calcium imaging reveals neuronal network activity with near-millisecond precision,” Nature methods, vol. 7, no. 5, pp. 399–405, 2010.
- [5] J. Oñativia, S. R. Schultz, and P. L. Dragotti, “A finite rate of innovation algorithm for fast and accurate spike detection from two-photon calcium imaging,” Journal of neural engineering, vol. 10, no. 4, p. 046017, 2013.
- [6] G. Casella, “Empirical Bayes Gibbs sampling,” Biostatistics, vol. 2, no. 4, pp. 485–500, 2001.
- [7] C. Bishop, Pattern Recognition and Machine Learning. Springer, 2006.
- [8] J. S. Liu, “Metropolized Gibbs sampler: an improvement,” Technical report, Dept. of Statistics, Stanford Univ., 1996.
- [9] Y. Mishchenko, J. Vogelstein, and L. Paninski, “A Bayesian approach for inferring neuronal connectivity from calcium fluorescent imaging data,” The Annals of Applied Statistics, vol. 5, no. 2B, pp. 1229–1261, 2011.
- [10] S. Keshri, E. Pnevmatikakis, A. Pakman, B. Shababo, and L. Paninski, “A shotgun sampling approach for solving the common input problem in neural connectivity inference.” 2013, arXiv preprint arXiv:1309.372.
- [11] V. Y. F. Tan and V. K. Goyal, “Estimating signals with finite rate of innovation from noisy samples: A stochastic algorithm,” IEEE Transactions on Signal Processing, vol. 56, no. 10, pp. 5135–5146, 2008.
- [12] J. Møller and R. P. Waagepetersen, Statistical Inference and Simulation for Spatial Point Processes. CRC Press, 2003.
- [13] R. P. Adams, I. Murray, and D. J. MacKay, “Tractable nonparametric Bayesian inference in Poisson processes with Gaussian process intensities,” in Proceedings of the 26th Annual International Conference on Machine Learning. ACM, 2009, pp. 9–16.
- [14] J. S. Liu, “The collapsed Gibbs sampler in Bayesian computations with applications to a gene regulation problem,” Journal of the American Statistical Association, vol. 89, no. 427, pp. 958–966, 1994.
- [15] T. Machado, L. Paninski, and T. Jessell, “Functional organization of motor neurons during fictive locomotor behavior revealed by large-scale optical imaging,” in SFN Neuroscience, 2013.
- [16] A. Pakman and L. Paninski, “Auxiliary-variable exact Hamiltonian Monte Carlo samplers for binary distributions,” in Advances in Neural Information Processing Systems 26, 2013.
- [17] C. Andrieu, A. Doucet, and R. Holenstein, “Particle Markov chain Monte Carlo methods,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 72, no. 3, pp. 269–342, 2010.
- [18] E. Pnevmatikakis, T. Machado, L. Grosenick, B. Poole, J. Vogelstein, and L. Paninski, “Rank-penalized nonnegative spatiotemporal deconvolution and demixing of calcium imaging data,” in Computational and Systems Neuroscience Meeting COSYNE, 2013.
- [19] E. Pnevmatikakis and L. Paninski, “Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions,” in Advances in Neural Information Processing Systems 26, 2013.