A Bayesian Particle Filtering Method For Brain Source Localisation
Abstract
In this paper, we explore the multiple source localisation problem in the cerebral cortex using magnetoencephalography (MEG) data. We model neural currents as pointwise dipolar sources which evolve dynamically over time, and model the dipole dynamics using a probabilistic state space model in which dipole locations are strictly constrained to lie within the cortex. Based on the proposed models, we develop a Bayesian particle filtering algorithm for localisation of both known and unknown numbers of dipoles. The algorithm consists of a region of interest (ROI) estimation step for initial dipole number estimation, a Gibbs multiple particle filter (GMPF) step for individual dipole state estimation, and a selection criterion step for selecting the final estimates. The estimates from the ROI estimation are used to adaptively adjust the particle filter's sample size and thereby reduce the overall computational cost. The proposed models and algorithm are tested in numerical experiments, and the results are compared with existing particle filtering methods. The numerical results show that the proposed methods achieve improved performance in terms of dipole number estimation and dipole localisation.
keywords:
Bayesian, MEG, Multiple source localisation, Particle filter
1 Introduction
In recent years, noninvasive brain signal measuring techniques such as MEG and electroencephalography (EEG) have seen rapid progress. These techniques are helpful in the diagnosis of brain disorders such as epilepsy, Alzheimer's disease and Parkinson's disease Baillet2001 (); Kaipio2005 (). In noninvasive brain signal processing, we are particularly interested in the signal generated from the cerebral cortex, the outer layer of the cerebrum hamalainen1993 (); hansen2010meg (). Cortical activity in different cortical regions (such as the somatosensory, visual, motor or auditory cortex) can be elicited by suitable stimuli (such as an image or a piece of music). A single active neuron is too weak to be measured directly, so tens of thousands of synchronously active neurons are needed to produce a measurable brain signal. For modelling purposes, many spatially neighbouring active neurons can be summarised and modelled as a dipolar current source, referred to simply as a "dipole". The electromagnetic field generated by such a dipolar source is measurable using MEG/EEG devices.
Brain source localisation is fundamentally an ill-posed inverse problem Kaipio2005 (); hansen2010meg (). The main barrier is that many possible solutions may exist for the same set of data, so no unique solution can be obtained in the general case. In this paper, we aim to accurately localise spatio-temporal brain sources using the electromagnetic signals collected outside the surface of the head, employing physiological constraints and soft prior information to regularise the underdetermined problem.
1.1 Related work
Brain source localisation is an active research field where a significant amount of work has been done in the past two decades (see, e.g., hamalainen1993 (); hansen2010meg (); Pascual2002 (); Long2011 (); Jun2005 (); Galka2004 (); Somersalo2003 (); auranen2005bayesian (); nummenmaa2007hierarchical (); Campi2008 (); Sorrentino2009 (); Miao2013 (); Sorrentino2013 () and the references therein).
There are two main types of methods: distributed source approaches, and pointwise dipole approaches hamalainen1993 (). Distributed source methods identify the potential active brain sources distributed on a dense grid of fixed locations throughout the whole cerebral cortex (or the whole brain volume under a looser constraint). Since the number of unknown sources is larger than the number of M/EEG sensors, mathematical assumptions or constraints are required for a unique solution. Some existing methods include the least squares minimum norm estimation (MNE) hamalainen1993 (), dynamic statistical parametric mapping (dSPM) Dale2000 (), standardized low-resolution electromagnetic tomography (sLORETA) Pascual2002 (), and Kalman filter related approaches Galka2004 (); Long2011 ().
On the other hand, pointwise dipole approaches treat the brain currents as point dipole sources, and estimate the states (this may include dipole location, moment, and orientation) of the point source dipoles. In this type of modelling, the state of each dipole source is treated as a random unknown target. A number of works have been published under this type of modelling; these include multiple signal classification (MUSIC) related approaches Mosher1999 (), Markov chain Monte Carlo related approaches Schmidt1999 (); Jun2005 (), and sequential Monte Carlo (or particle filtering) related approaches Somersalo2003 (); Campi2008 (); Sorrentino2009 (); Sorrentino2013 (); Miao2013 ().
Among the various methods proposed, Bayesian particle filtering seems one of the most promising methods for tackling the source localisation problem. In this paper, we develop a pointwise dipolar source localisation approach using Bayesian particle filtering.
1.2 Bayesian particle filtering methods for the dipole localisation problem
Particle filtering methods have been developed for this application over the last decade. Somersalo et al. Somersalo2003 () applied a sequential importance resampling (SIR) particle filter to the dipole localisation problem using artificial planar/3D geometry. Results of a two-dipole localisation example were shown using an ideal spherical head model. Campi et al. Campi2008 () proposed a Rao-Blackwellised particle filter (RBPF) for dipole tracking, with single-dipole and two-dipole examples. That work showed that the RBPF provided better localisation results at lower computational cost than a standard particle filter. Sorrentino et al. Sorrentino2009 () integrated a random finite set scheme into the particle filter. The method was able to track a time-varying number of dipoles, with the maximum dipole number specified in advance.
Recently, Sorrentino et al. Sorrentino2013 () suggested modelling the problem using a static dipole setup. That work employed a resample-move particle filter to recursively estimate the dipole moment. Chen et al. Chen2013A (); Chen2013B () integrated an MNE step into a multiple particle filter method to localise an unknown number of dipoles. The estimation of the dipole number relied on both the MNE step and the previous localisation history. Miao et al. Miao2013 () also adopted a multiple particle filter method to localise multiple dipoles, using a probability hypothesis density (PHD) filter to estimate the unknown/time-varying dipole number. The algorithm was implemented and assessed on a real-time field-programmable gate array (FPGA) board. However, it modelled the brain under the ideal spherical head model, which cannot provide a realistic description of the true human brain.
1.3 Our work
In this paper, we propose a Gibbs multiple particle filtering (GMPF) algorithm for the multiple dipole source localisation problem. The work builds on our previous work Chen2013A (); Chen2013B (). The contributions of this work are as follows.
Firstly, a continuous head model which forces the state dynamics to remain strictly on the cerebral cortex is developed. To fit real-world applications, we adopt a one-layer realistic head model, the Nolte model Nolte2003 (). Although this head model is quite realistic, the off-the-shelf software implementations of it can only evaluate the model at a discrete set of points (the mesh nodes). For distributed source implementations this is all that is needed; in our case, however, we need a smooth manifold defining the cortex surface, and hence the discrete set of points is not enough. For this purpose, we adopt a nearest-neighbour (NN) interpolation method to form an approximate continuous cortical manifold. This allows us to formulate the particle filter state directly in terms of the location on the continuous cortex surface.
Secondly, we develop a particle filtering algorithm by integrating a Gibbs sampling iteration step into a multiple particle filtering (MPF) Chen2013A () algorithm. Instead of running each component of the MPF only once at each time step, the GMPF iteratively runs the individual components, conditional on the state of the remaining sources, until the state samples converge. This enables the MPF to iterate to obtain a stable state estimate prior to entering the next time step.
Thirdly, we develop a dipole number dynamic model along with the GMPF method Bugallo2007 (); Mihaylova2012 () for localisation with an unknown dynamic number of dipoles. The model generates three potential dipole number predictions based on the estimate from the previous time step. All three predictions are examined and their corresponding state estimates are calculated. A selection criterion is then applied to obtain the optimal prediction results in each time step. Although approximate in a Bayesian sense, this approach improves the accuracy in estimating the number of dipoles, and thereby improves the overall localisation performance of GMPF.
Finally, we apply a computationally adaptive scheme to adjust the number of particles and the state transition range at each step of the algorithm. In order to generate candidate numbers of sources at each time step, we integrate a standard noise-normalised MNE method hamalainen1993 () and a spatial clustering method Gowda1978 () to gain some knowledge of the potential dipolar sources. This prior information is used to evaluate the localisation accuracy, allowing us to adjust the particle size and the state dynamic space in the next algorithm run.
The remainder of the paper is organized as follows. Section 2 introduces the data modelling procedure. A discrete / continuous head model, a dipole state transition model, and a dipole number dynamic model are described in this section. The localisation algorithm is proposed in Section 3. Both the models and the algorithms are evaluated in Section 4. Section 5 concludes the article.
2 Data model
We consider a clinical application using an MEG system with magnetometers – the proposed method can be applied to other M/EEG settings with slight modifications. Here we use the sensor MEG application as an example. All the sensors are placed outside the brain surface to obtain noninvasive measurements. We are interested in inferring the neural activity within the brain cortical region. The state space is constrained to lie within the cerebral cortex.
For MEG data, a one-layer realistic head model is introduced to generate the leadfield matrix (the forward matrix), based on a total of G fixed vertices on the cortex. An NN (nearest-neighbour) interpolation method is used to interpolate the locations between these vertices.
As described above, the head model comprises vertices and triangular faces on the surface of the cortex, created assuming a one-shell Nolte model for MEG. The width of the head model is 136 mm, and the distance between two adjacent vertices varies between 2.3 mm and 8.4 mm. The leadfield matrix was generated using the statistical parametric mapping (SPM) software Friston1994 (). Although the head model provides a relatively accurate approximation for the source distribution in the cortical space, it is artificially discretised to a limited number of fixed-location points. We first introduce the traditional discrete realistic head model. The neural current density is, by contrast, in reality a continuous spatial flow. For this reason, we then propose an interpolated realistic head model for continuous pointwise dipole localisation.
Figure 1 shows the triangulation of the cortex. The blue dots are the predefined vertices on the cerebral cortex, and the five small coloured areas are example subplanes that represent individual triangular faces on the cortex. In order to better fit real-world applications, we strictly enforce that the trajectory of a pointwise dipolar source lies within the modelled cerebral cortex. Each individual dipolar source may only move within a single triangular cortical region defined by the fixed vertices and the triangular faces. Thus we model each dipole as semi-static within a small spatial volume for the whole observation interval.
For a pointwise dipolar source, we define its state as a vector , where is the time instant and is the number of dipoles at that time. Since the time resolution of MEG is in the millisecond scale, the unit of the time step in this paper is set as one millisecond. We define the matrix containing the joint state of all dipoles:
(1) 
Here, each is a state vector for a single dipole, defined as , where is the 3D location and is the dipole moment (the dipole amplitude with orientation). The dipole orientation is set as normal to the cortical subplane surface, so we can usually take to be just the scalar amplitude of the dipole once the cortical geometry is specified by the head model.
A general measurement model that describes the relationship between the MEG measurement and the dipole states is defined as:
(2) 
where is the measurement vector at time , . represents a linear measurement matrix. is the general measurement model function, and denote the vectors of source locations and amplitudes respectively. is the measurement noise vector, which is assumed to contain independent Gaussian random variables with zero mean and variance .
2.1 Discrete head model
Rather than expensively recomputing the general model expression above for each new source location, a discretised head model is generated and interpolated in order to approximate the model on the fine scale of the actual source locations. To achieve this, we first compute the leadfield matrix corresponding to every possible discretised grid location . Responses due to arbitrary off-grid source locations are then computed using a special NN interpolation scheme detailed below.
For the discrete head model, the measurement function is simply the discretised leadfield matrix hamalainen1993 (), an matrix representing the linear relation between dipole sources at all possible discrete grid locations and the measurements. The model can be rewritten, assuming once again a linear structure, as:
(3) 
where is the vector of amplitudes at the fixed location vertices in the cortex, . Here, for the discrete model, the set of dipole locations are prespecified from the fixed spatial grid .
2.2 Continuous head model
We apply a simple NN interpolation method Coxeter1961 () between the fixed location anchor vertices in order to obtain the approximated continuous head model. As shown in Figure 2, for the three closest neighbouring anchor vertices to source location (denoted as , and in the example), we obtain , and . represents the unit noiseless response measured by MEG when we place a unit dipole at the th anchor vertex , computed using Friston1994 (). The area of the triangular region varies depending on the distance between the three anchor vertices, with the average area at around mm. The triangular cortical region is sufficiently small that it is reasonable to treat it as a flat subplane in which all the interpolated points have the same orientation. In Figure 2, the orientation that is normal to the subplane is denoted as . , and are the orientations of the three anchor vertices respectively when there is a unit dipole placed at each vertex. We define as the angle between and the orientation of the anchor vertex. We have then , and for vertices , and respectively.
For an th dipole with location at , we compute its unit response as:
(4) 
where is the unit response after orientation mapping, and are interpolation coefficients describing the relation between the location of a dipole and the three anchor vertices, computed as follows.
To obtain the relations between , and the 3D locations , , , and , we define , and . We have Coxeter1961 ():
(5)  
(6) 
We then obtain the final measurement by summing up all of the scaled unit responses from the different individual dipoles, each multiplied by its amplitude: , as in Equation 2, where the approximation arises as a result of the NN interpolation procedure. The interpolation is executed in the following steps:

Find the triangular subplanes in which the dipoles are located.

For each individual dipole , identify the anchor vertices and compute their corresponding orientations.

Calculate the angles to obtain .

Sum up all to obtain the predicted (noiseless) measurement .
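The interpolation steps above can be sketched numerically with barycentric coordinates. This is a minimal illustration rather than the paper's implementation: the function name is our own, the coefficients are obtained here by a least-squares solve, and the orientation-mapping step is omitted for brevity.

```python
import numpy as np

def interpolate_leadfield(p, v1, v2, v3, g1, g2, g3):
    """Barycentric (NN-style) interpolation of the unit lead-field response
    at an off-grid point p inside the triangle (v1, v2, v3).

    v1..v3 : 3-D locations of the three anchor vertices.
    g1..g3 : unit lead-field responses (M-vectors) at those vertices.
    Returns the interpolated M-vector unit response.
    """
    # Solve p = w1*v1 + w2*v2 + w3*v3 subject to w1 + w2 + w3 = 1.
    A = np.column_stack([v1 - v3, v2 - v3])          # 3x2 in-plane system
    w12, *_ = np.linalg.lstsq(A, p - v3, rcond=None)
    w1, w2 = w12
    w3 = 1.0 - w1 - w2
    return w1 * g1 + w2 * g2 + w3 * g3
```

Placing the query point at an anchor vertex recovers that vertex's response exactly, and the centroid of the triangle returns the average of the three anchor responses.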
2.3 Individual dipole dynamic model
For an individual dipole which exists from time instant to , we define the following individual dipole transition model:
(7) 
where can be a linear or nonlinear function. is the subplane, is the face index and is the dipole index. Each of the subplanes in the 3D cortical space can be treated as a two-dimensional plane. Thus we adopt a two-dimensional random walk model as our transition function . As stated above, the location of a dipole is modelled as semi-static on the cerebral cortex. The state space is constrained exactly to the brain cortical surface ; we therefore divide the dipole dynamics into two phases: the transition between different triangular faces and the transition within an individual face.
For the transition between different faces, we define as the neighbouring subplane set, which stores all of the triangular faces neighbouring the face where the dipole was located at the previous time step. In practice, this is computed and stored in a look-up table using the grid information provided by the leadfield matrix.
For the transition within a face, we draw values for the coefficients and , and randomly select a position within the triangular subplane, subject to the constraint described in Equation (4).
The procedure to perform the dipole transition is as follows:

For a dipole with state , find the neighbour subplane set .

We have subplanes, including the neighbouring subplanes and the original subplane where the dipole is located. We randomly select one subplane with equal probability .

Randomly choose a position using and in the selected subplane.

Calculate to obtain the new dipole state prediction.
is the total number of the identified neighbour faces. In the initialisation step, dipole states are randomly selected from the fixed-location grid points. Since a grid point connects several subplanes, it is difficult to define which subplane it belongs to. In practice, we compute the distance between the grid point and all of its neighbouring subplanes, and select the one with the shortest distance as its subplane.
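The transition procedure above can be sketched as follows. This is an illustrative implementation under our own data layout (a face array, a vertex array, and a precomputed neighbour table); the reflection trick in the barycentric draw gives a uniform sample inside the selected triangle.

```python
import numpy as np

def propagate_dipole(face_idx, neighbours, faces, vertices, rng):
    """One random-walk step of a dipole constrained to the cortical mesh.

    face_idx   : index of the triangular face the dipole currently occupies.
    neighbours : dict mapping a face index to its adjacent face indices
                 (the precomputed look-up table).
    faces      : (F, 3) integer array of vertex indices per triangular face.
    vertices   : (G, 3) array of vertex locations.
    Returns (new_face_idx, new_location).
    """
    # Candidate faces: the current face plus its neighbours, chosen uniformly.
    candidates = [face_idx] + list(neighbours[face_idx])
    new_face = candidates[rng.integers(len(candidates))]

    # Uniform draw inside the chosen triangle via barycentric coordinates.
    a, b = rng.random(), rng.random()
    if a + b > 1.0:                 # reflect to stay inside the triangle
        a, b = 1.0 - a, 1.0 - b
    v1, v2, v3 = vertices[faces[new_face]]
    loc = v1 + a * (v2 - v1) + b * (v3 - v1)
    return new_face, loc
```

Because the candidate set always contains the current face, the dipole can remain semi-static within one subplane, matching the model described above.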
2.4 Dipole number dynamic model
In our model, we assume that the number of the active dipolar sources does not change dramatically between adjacent time steps. For scenarios with more than one dipole currently active, we can model the dipole number transition process by allowing an individual dipole to appear or disappear at a single time instant.
Given the dipole number from the previous time instant, the current dipole number can be obtained from the following dipole number dynamic model:
(8) 
with , where . The dynamic probability is predefined with consideration of dipole birth-death moves Ng2007 (). In general, we set , , . and are the birth-death probabilities, discussed as follows.
2.4.1 Dipole birth-death move
We adopt a simple birth-death move for cases in which a new dipole appears or an existing dipole disappears from time to . We set , where . For the initialisation step, or in special cases when , we set and .
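A minimal sketch of the dipole number transition might look as follows; the probability values here are illustrative placeholders, not the paper's settings, and the death move is disabled when only one dipole remains, with its mass reassigned to staying put.

```python
import numpy as np

def sample_dipole_number(n_prev, rng, p_stay=0.8, p_birth=0.1, p_death=0.1):
    """Draw the current dipole number from the birth/death dynamic model.

    The number changes by at most one per time step. With n_prev <= 1 the
    death move is disabled and its probability mass reassigned to staying.
    Probability values are illustrative placeholders.
    """
    if n_prev <= 1:
        probs = [0.0, p_stay + p_death, p_birth]   # no death below one dipole
    else:
        probs = [p_death, p_stay, p_birth]
    step = rng.choice([-1, 0, 1], p=probs)
    return n_prev + step
```

The three outcomes of this draw correspond to the three candidate dipole numbers examined by the GMPF in parallel at each time step.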
For the birth process, a randomly selected initial state within the cortical surface is assigned to the newly born dipole. The initialisation is assisted by the results from the probabilistic ROI estimation (detailed in Section 3). The probabilistic ROI estimation provides knowledge of the signal strength of the grid points in . A probabilistic sampling method is used for sample selection with respect to the signal strength (explained in Section 3.2). The selected sample point set is defined as . We randomly select one of the grid points in and assign its location to the newly born dipole. is also used to generate the ROIs, by applying a spatial clustering algorithm Gowda1978 () to the selected point sets.
For the dipole death process, the algorithm randomly selects and deletes one of the existing dipoles. The random selection does not introduce extra bias in the death process due to the large sample number and resampling step in the particle filter. Instead of computing a single result in each time step, the particle filter will draw a number of sample points and compute their weights; samples with lower weights are filtered out to improve the estimation accuracy. Therefore the deleted existing dipoles vary from sample to sample, only samples with a higher posterior density will be used for the final estimates. The particle filtering algorithm will be detailed in Section 3.
3 Bayesian sequential Monte Carlo algorithm
In this section, we develop a Bayesian particle filtering algorithm to handle localisation with both known and unknown dipole numbers. The dipole localisation problem is cast as a general multi-target tracking problem in our Bayesian framework.
3.1 Algorithm execution
We first describe the algorithm structure, followed by a detailed description of each component of the proposed algorithm. The algorithm contains three interacting components: the probabilistic ROI estimation step estimates the number of ROIs at time and provides an initial guess of the dipole number for the main algorithm; the GMPF step performs the particle filtering; and the selection criterion step selects the optimal number and its corresponding state estimates at time . Figure 3 illustrates the relationship between the components and their corresponding variables. The target number propagation model and the selection criterion scheme are relatively simple compared with some other existing estimation methods Ng2007 (); in this paper we focus on the tracking algorithm's performance.
The algorithm executes in the following four steps:

A probabilistic ROI estimation step uses the noise-normalised MNE method and a probabilistic sampling method to obtain the point set . From this we obtain an estimate of the number of ROIs , which is used for the dipole number initialisation at .

Given the estimated dipole number from , three potential dipole numbers (with their corresponding dynamic probability) can be obtained from Equation (8).

For each , the GMPF algorithm is applied to estimate the dipole state. The algorithm assigns an individual particle filter (iPF) for each of the identified targets. In a single GMPF run, each iPF is executed to generate the estimate for its corresponding dipole. is immediately updated in the dipole state to assist other iPFs. This updating procedure is performed in a Gibbs sampling manner (see Section 3.3.1 for details). We finally obtain the estimated dipole state for each of the three potential cases.

The selection criterion scheme is then used to compute the posterior probability for each of the three cases. One of the three cases is selected as the final estimate at . An estimation for both the dipole number and the dipole state can then be obtained. We set and .
In addition, the discrete point set from the MNE is used to adaptively control the number of particles and the state transition range in the particle filtering. In the remainder of this section, we give a more detailed description of each part.
3.2 Probabilistic ROI estimation
A noise-normalised MNE approach is employed to obtain the signal strength of the fixed grid points, using the discrete head model from Equation (3). A typical noise-normalised MNE solution hamalainen1993 () can be obtained as follows:
(9) 
where is an amplitude estimation vector for the grid points, . is a noise normalised regularisation parameter.
We perform a simple probabilistic sampling with respect to the estimated amplitudes of the points, as follows:
(10) 
We now have the point set . We employ a hierarchical spatial clustering method Gowda1978 () to cluster the selected points in with respect to their corresponding geographical positions. We define these clusters as the ROIs. We can obtain from this the number and extent of the ROIs , .
is the subset of the grid points for active region , where is the total number of points in that subset and is the grid point index. We compute the location of the th ROI by taking the mean of the points in so that the centre of the ROI is defined as:
(11) 
The centre location of each ROI will be used in Section 3.5.
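Putting the ROI estimation together, a simplified end-to-end sketch might look like this. The regularisation parameter, sampling size, and linkage threshold are illustrative assumptions, and a greedy single-linkage clustering stands in for the hierarchical method of Gowda1978 ().

```python
import numpy as np

def estimate_rois(y, L, grid, lam=1e-2, n_keep=50, link_dist=0.02, rng=None):
    """Sketch of the probabilistic ROI estimation: a Tikhonov-regularised
    minimum-norm estimate, probabilistic point selection proportional to
    the estimated amplitudes, then simple spatial clustering.

    y    : (M,) measurement vector.
    L    : (M, G) discrete leadfield matrix.
    grid : (G, 3) fixed grid point locations.
    Returns (number of ROIs, list of ROI centre locations).
    """
    if rng is None:
        rng = np.random.default_rng()
    M = L.shape[0]
    # Minimum-norm estimate of the grid amplitudes.
    s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(M), y)
    w = np.abs(s_hat)
    w = w / w.sum()
    # Probabilistic sampling proportional to estimated amplitude.
    picked = rng.choice(len(w), size=n_keep, replace=True, p=w)
    pts = grid[np.unique(picked)]
    # Greedy single-linkage spatial clustering of the selected points.
    clusters = []
    for p in pts:
        for c in clusters:
            if min(np.linalg.norm(p - q) for q in c) < link_dist:
                c.append(p)
                break
        else:
            clusters.append([p])
    centres = [np.mean(c, axis=0) for c in clusters]
    return len(clusters), centres
```

Each ROI centre is the mean of its cluster's points, mirroring Equation (11).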
3.3 Multiple particle filtering
For each value of , the problem reduces to tracking a fixed number of dipoles. We target the joint filtering distribution for all sources, , where is the previously estimated dipole number up to time , and is the current value of the dipole number for the th case. The state vector of each GMPF is denoted by . We assign a GMPF for each ; the three GMPFs operate in parallel and are independent of each other.
For each , there are dipoles and therefore iPFs are assigned, one for each active source, as shown in Figure 3. The state vector can be expanded as , where denotes the state of the th individual dipole out of . To simplify the notation, rewrite the state as .
Now, defining as the trajectory of the previous and the current active source numbers, the joint target distribution can be expressed as:
(12) 
As is known in the current time step and the dipoles are assumed to be independent of each other a priori, we have:
(13) 
where is the predictive distribution for each independent source; the states are uniformly drawn from the whole state space .
In order to draw samples from the joint target posterior distribution, we consider a Gibbs sampling structure Gilks1995 () to iteratively update the desired dipole states at each time step, in which a conditional particle filter for each active source approximates the full conditional draw required for a Gibbs sampler. The basic idea is simple: at time we assume that particles are available from the joint target . In order to run this forward one time step, we initialise the state to some arbitrary initial values. We then run conditional particle filters for each target in turn. Each particle filter is designed to target the conditional filtering distribution , where denotes the state with source number removed. A single sample is randomly selected from the particle approximation to and this is substituted into the state vector at the th source position. One iteration of the Gibbs sampler comprises a complete sweep through all of the conditional particle filters, and convergence will occur after some large number of iterations (in practice, though, we implemented just a few iterations). In this way we aim to overcome the approximation induced by the standard MPF, which accounts only approximately for the statistical coupling between sources, at the expense of an iterative procedure at each time step. An interesting modification of our approach would be to incorporate pseudo-marginal sampling ideas into the Gibbs sampler, which would additionally correct the particle filter's approximation to the conditional filtering distribution. This addition is left as a topic for future exploration.
We now specify in detail the steps required for the GMPF.
3.3.1 Gibbs sampling updating step for conditional posterior distribution
In each time step, the iPFs are executed from the first iPF to the th iPF sequentially. The updating procedure acts in a similar way to that in a standard Gibbs sampler: once a new state is generated, it is immediately used to assist other iPFs by updating the corresponding conditional posterior distribution. In order to obtain a good set of samples from the joint filtering distribution this Gibbs sampler should run for a number of iterations at each time step. We denote the iteration number by the iteration indices .
The conditional posterior distribution of the individual dipole state which is approximated by each iPF can be written as . The iPF state is initialised to from the previous time step after the selection criterion step. describes a state vector excluding the state at the th (when ) Gibbs iteration:
(14) 
is used in the estimation of the th iPF; its first elements are the estimates updated in the current Gibbs iteration, while the other elements are the estimates from the previous Gibbs iteration. Once a new state is generated, we update the corresponding and use it in the th iPF. The basic scheme of the Gibbs iteration is described as follows:

Generate from .

Generate from .
⋮ 
Generate from .
Therefore, the final iteration gives , where denotes the sampled value of a dipole state at time step for the th case, obtained after iterations of the Gibbs update. After the selection criterion step in each algorithm run, one of the three cases is selected, and we have , where the notation is eliminated. This is important because we use some of the variables at in our Bayesian inference, for example the terms and .
The Gibbs sampling iteration enables the algorithm to obtain a more accurate estimate in each particle filtering step, particularly when the sources are spatially close and hence not independent in their joint posterior. This raises several issues, such as the choice of the number of Gibbs iterations, convergence analysis, and computational load. In theory, many iterations would be required to guarantee convergence, but here we run only a few iterations because of the high computational burden. The intuition is that the dipoles are usually well separated and close to independent, so the Gibbs sampler should approximately converge within a few iterations.
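The sweep structure described in this subsection can be sketched as a small skeleton. Here `ipf_step` is a hypothetical callback standing in for one conditional particle filter run: given the index of a dipole and the current states of all the others, it returns one sampled state for that dipole.

```python
def gibbs_sweep(states, ipf_step, y, n_iter=3):
    """One time step of the GMPF: sweep the conditional particle filters
    (iPFs) in a Gibbs fashion.

    states   : list of current state estimates, one per dipole.
    ipf_step : callback ipf_step(i, states, y) that runs the i-th
               conditional particle filter given the other dipoles'
               states and returns one sampled state for dipole i.
    y        : the measurement at the current time step.
    n_iter   : number of Gibbs iterations (a few, as in the paper).
    """
    n = len(states)
    for _ in range(n_iter):          # a few Gibbs iterations per time step
        for i in range(n):           # sweep through the conditional iPFs
            # The new draw is substituted immediately, so later iPFs in
            # the sweep condition on the freshest states.
            states[i] = ipf_step(i, states, y)
    return states
```

The key point is the immediate substitution: each iPF in the sweep conditions on the most recently updated states of the other dipoles, exactly as a Gibbs sampler updates one coordinate at a time.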
3.3.2 Individual particle filtering
Since and are available to each iPF, we only need to sample the unknown state in each iPF. The conditional posterior distribution can be rewritten as . We define as the number of particles for each iPF at time . A particular sample in an iPF is denoted as , and is updated in each Gibbs iteration.
The conditional posterior distribution required for Gibbs sampling can be expanded in two steps: an updating step and a prediction step. The (conditional) updating step can be expressed as:
(15) 
where
is the corresponding prediction step. The conditional posterior for at is , containing the parameters and , which are known prior to the iPF run. The conditional posterior distribution thus forms a complete recursion in the Bayesian model. This is because the dipole sources are a priori independent of each other, so conditioning on and does not affect the proposed conditional posterior distribution. Here we simply take the weighted samples from the th iPF, building in the independence approximation
(16) 
In these equations we have also built in the assumptions that the sources are a priori independent at each time point, and that the observations at time k are conditionally independent of the states prior to time k. We choose an importance density and then obtain the weight as
(17) 
where denotes the th particle sample after the selection criterion step at . We choose the prior as the importance distribution, so that we have the following simplification, the standard bootstrap filter Gordon1993 ():
(18) 
We normalise to obtain . We adopt a residual resampling step Douc2005 () to avoid the so-called degeneracy problem Doucet2000 (). See Algorithm 1 for a detailed description of the iPF.
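One step of such a conditional bootstrap filter might be sketched as follows. For brevity this sketch uses plain multinomial resampling rather than the residual resampling used in the paper; `propagate` and `loglik` are hypothetical callbacks standing in for the dipole dynamic model and the Gaussian measurement log-likelihood.

```python
import numpy as np

def bootstrap_ipf_step(particles, weights, propagate, loglik, rng):
    """One bootstrap-filter step for a single iPF: propagate the particles
    through the prior dynamics (the importance distribution), reweight by
    the likelihood, normalise, and resample.

    particles : (N,) or (N, d) array of particle states.
    weights   : (N,) normalised weights from the previous step.
    propagate : callback propagate(p, rng) drawing from the dynamic model.
    loglik    : callback loglik(p) giving the measurement log-likelihood.
    Returns (resampled particles, uniform weights).
    """
    particles = np.array([propagate(p, rng) for p in particles])
    # With the prior as importance density, w_k is proportional to
    # w_{k-1} times the likelihood (log-domain for numerical stability).
    logw = np.log(weights + 1e-300) + np.array([loglik(p) for p in particles])
    logw -= logw.max()
    w = np.exp(logw)
    w /= w.sum()
    # Multinomial resampling (the paper uses residual resampling).
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

After resampling, the particles carry equal weights, which is why the Gibbs substitution step can select one of them uniformly at random as the dipole's state estimate.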
For the particle weight of the iPF in the birth move (a new target appears), the samples are drawn uniformly from the whole state space , and assigned equal weight . For the death move (an existing target disappears), the corresponding particles of the selected target are deleted.
As stated in Section 3.3.1, is then updated using the result from the current iPF run. Since it is impossible to obtain the ground-truth state for each th dipole source directly, we randomly pick the state estimate from the resampled state . When the particle weights at are not available, we randomly select a sample from according to the weights from the previous time step. Once the current particle weights are obtained, we obtain the estimate by randomly selecting a sample from the resampled particles with equal probability , where represents the particle index after resampling.
The main body of the proposed algorithm is described in Algorithm 2.
To sum up, we modify the original MPF algorithm in the following three ways:
(1) We integrate an iterated Gibbs sampling procedure to generate the individual state estimate in each iPF. This enables us to obtain more reliable estimates in each assisted iPF run. The number of iterations is controlled by the parameter .
(2) Instead of dividing the state space into several subspaces, the samples of each iPF are drawn from the same state space. In the dipole initialisation step, the samples are drawn from the vertices; they are then propagated using the individual dipole dynamic model in .
(3) Rather than using a weighted mean method, in the th iPF is selected randomly from all the samples , where is the sample index. In practice, we randomly pick one of the particle filter samples with equal probability .
3.4 Selection criterion scheme
We then obtain three candidates and the corresponding states from the GMPF. We apply the selection criterion scheme to find the optimal pair amongst all the available estimates. We can obtain the posterior probability as
(19) 
The estimate is:
(20) 
where . According to Doucet2000 (), and assuming that the source posterior factorizes over (i.e., the sources are independent), we can then obtain by selecting the with the highest probability. We then obtain from the corresponding posterior mean .
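The two-stage selection just described can be sketched as follows: pick the candidate with the highest posterior probability, then report the weighted posterior mean of its particles. The candidate tuple layout is our own illustrative convention.

```python
import numpy as np

def select_candidate(candidates):
    """Each candidate is a (posterior_prob, particles, weights) triple for
    one hypothesised dipole configuration. Keep the candidate with the
    highest posterior probability and return its index together with the
    weighted posterior mean of its particles as the state estimate."""
    probs = np.array([c[0] for c in candidates], dtype=float)
    best = int(np.argmax(probs))                    # highest-probability pair
    _, particles, weights = candidates[best]
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    state = (w[:, None] * np.asarray(particles, dtype=float)).sum(axis=0)
    return best, state
```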
3.5 Adaptive filtering
As shown in Figure 3, the discrete amplitude matrix from MNE is used in GMPF to assist the sampling procedure in every iPF run. The identified ROIs and their point subset are used to control the particle number and particle transition range. In practice, we calculate the localisation root mean squared error (RMSE) between the centre of each ROI and the dipole state estimation at each time step . The localisation RMSE can be obtained from:
(21) 
where function computes the localisation RMSE between all elements in and . The centre points of the ROIs are compared with the localisation estimates. We compute the spatial distance between each individual dipole state and each of the ROIs. We then obtain an matrix that contains the localisation RMSE for all pairs. We perform the target association according to the RMSE matrix : the targets in the state pairs with the smallest RMSE are associated together. is then used as a reference criterion to adaptively adjust the particle number and the particle transition range in particle filtering at time .
assists adaptive filtering mainly in two aspects: (1) the number of samples is modified with respect to the RMSE level ; a smaller is assigned when we obtain a lower , and vice versa. (2) In the individual dipole propagation, the dipole dynamic range for the sample depends on the value of ; a lower results in a smaller dipole dynamic range, and vice versa.
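The association and adaptation steps above can be sketched as follows. The greedy smallest-RMSE pairing matches the description in the text; the linear mapping from RMSE to particle budget is an assumed illustrative rule, not the paper's exact scheme, and all names and default values are ours.

```python
import numpy as np

def rmse_matrix(roi_centres, dipole_states):
    """All-pairs Euclidean distance between ROI centres (m x 3)
    and dipole state estimates (n x 3)."""
    diff = roi_centres[:, None, :] - dipole_states[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def associate_and_adapt(roi_centres, dipole_states,
                        n_max=10000, n_min=1000, scale=50.0):
    """Greedy association: repeatedly pair the ROI/dipole with the smallest
    remaining RMSE, then map each pair's RMSE to a particle budget
    (lower RMSE -> fewer particles), clipped to [n_min, n_max]."""
    d = rmse_matrix(roi_centres, dipole_states)
    pairs, budgets = [], []
    for _ in range(min(d.shape)):
        i, j = np.unravel_index(np.argmin(d), d.shape)
        rmse = d[i, j]
        pairs.append((i, j))
        budgets.append(int(np.clip(n_min + rmse / scale * (n_max - n_min),
                                   n_min, n_max)))
        d[i, :] = np.inf      # remove this ROI from further pairing
        d[:, j] = np.inf      # remove this dipole from further pairing
    return pairs, budgets
```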
4 Numerical results
In this section, we present numerical results using synthetic data. Since the groundtruth dipole locations in real data remain unknown, the performance evaluation relies on results from synthetic data.
4.0.1 Simulation setting
We adopted typical examples with both known and unknown numbers of dipoles. The number of dipoles varied between one and five. The groundtruth dipoles had unit amplitude in our simulation. The orientation of each dipole was set as normal to its corresponding subplane. Visualisations were carried out with tools further developed from those published in the Helsinki BEM Library Stenroos2007 (). We generated the MEG data using a 204-magnetometer sensor setup. All magnetometers were distributed around the surface of the head. The state space was strictly constrained within the predefined 1-layer real head cortex. The width of the brain was 136 mm in our simulation. According to empirical observations, a brain current source often appears and disappears in the same region, and the centre of the current source evolves within a small volume in the cortex. It was therefore reasonable for us to assume that all the dipoles were identical and independent of each other, and that each individual dipole might move within a predefined triangular subplane. We set the measurement SNR (signal-to-noise ratio) to 10. The measurement noise in the head model had a Gaussian distribution with zero mean and variance , where was two times larger than that of the groundtruth noise. We tested each of the algorithms and the model with more than 30 repeated identical experiments. The number of dipoles for each iPF at was set as .
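Generating measurement noise at a target SNR, as in the setting above, can be sketched as follows. We use a power-ratio SNR convention here; the paper does not spell out its exact SNR definition, so this is an assumption, and the function name is ours.

```python
import numpy as np

def add_noise_at_snr(clean, snr=10.0, rng=None):
    """Add zero-mean Gaussian noise to a clean measurement matrix so that
    the power-ratio SNR (signal power / noise variance) equals `snr`."""
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(clean ** 2)
    noise_var = signal_power / snr        # SNR = P_signal / sigma^2
    noise = rng.normal(0.0, np.sqrt(noise_var), size=clean.shape)
    return clean + noise
```

For a 204-channel setup, `clean` would be a 204 x T array of noise-free sensor readings produced by the forward model.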
In the simulation, we test both the GMPF algorithm with one Gibbs iteration (the original GMPF) and the GMPF algorithm with five Gibbs iterations. The GMPF algorithm with five Gibbs iterations outperforms the original GMPF algorithm in the unknown dipole number examples; however, it provides only slightly better results than the original GMPF algorithm in the known dipole number examples. We therefore only show the results from the GMPF algorithm with five Gibbs iterations for the unknown dipole number case in Section 4.2.2. The term ‘GMPF’ in the rest of this section refers to the original GMPF unless specified otherwise.
In the remainder of this section, we compare the performance of different head models in examples with known and unknown numbers of dipoles. An example with a known number of five dipoles is used to evaluate the localisation algorithms using the proposed continuous head model. For localisation with an unknown number of dipoles, we test and compare the performance of different particle filtering algorithms in examples with three dipoles and with a dynamically varying number of dipoles. We finally present an evaluation result specifically for the estimation of the dipole number.
4.1 Localisation with known number of dipoles
Localisation with a known number of dipoles is easier and more accurate than localisation with an unknown number of dipoles. In this section, we present two examples, one with three dipoles (two on the left hemisphere and one on the right) and the other with five dipoles (two on the left and three on the right). The three-dipole example is used to compare three different head models.
4.1.1 Head model comparison
We compared the model performance between the spherical head model, the discrete real head model, and the proposed continuous real head model. The spherical head model is a relatively old model that assumes the human head is a perfect sphere; it was used in previous work Miao2013 (); Sorrentino2009 (). The discrete head model is the 1-layer real head model generated using the BEM method; this model was used in Sorrentino2013 (); Chen2013A (); Chen2013B (). In this paper we adopt the discrete model which contains discretised potential source points, all of which have fixed locations. A leadfield matrix with columns is then generated. We employ the NN interpolation method to convert the discrete model into the continuous model, as presented in Section 2.
GTM / MM  S/S  S/D  S/C  D/S  D/D  D/C  C/S  C/D  C/C 
SIR/2000  17.32  29.37  22.22  35.61  13.01  15.55  21.28  22.78  16.24 
MPF/2000  13.33  19.29  19.11  23.65  11.31  13.77  16.34  13.46  9.22 
GMPF/2000  13.20  19.17  18.16  22.99  11.68  13.94  16.28  12.67  9.38 
SIR/5000  15.28  28.22  21.35  35.06  11.37  13.10  19.49  17.44  15.36 
MPF/5000  11.57  16.32  18.06  21.39  10.73  12.89  16.45  12.08  7.10 
GMPF/5000  11.77  16.28  18.24  20.12  10.25  12.45  16.99  11.54  7.53 
SIR/10000  13.05  25.81  18.20  29.77  9.88  11.29  17.53  14.02  12.74 
MPF/10000  9.36  15.78  15.13  20.22  9.36  10.97  15.68  9.52  5.70 
GMPF/10000  9.28  15.90  14.84  20.35  9.33  10.92  15.71  9.48  5.64 
In Table 1, we present the numerical results for three dipoles with a known dipole number. The three models are tested using a simple standard SIR particle filter, a standard MPF, and the proposed algorithm. The entries with ‘S/S’ (column entry) and ‘SIR’ (row entry) correspond to the method from Sorrentino2009 (); the entries with ‘S/S’ and ‘MPF’ are similar to one of the methods adopted in Miao2013 (); the entries with ‘D/D’ and ‘MPF’ are from our previous work Chen2013B ().
We vary the particle number from 2000 to 10000. The simulated data were generated and tested using all three models, for example, S/D in the table means that we generate the data using a spherical head model, and the measurement model we use in particle filtering is a discrete head model.
Regarding the RMSE performance of the different particle filtering algorithms, we find that both MPF and GMPF perform better than the standard SIR. MPF and GMPF demonstrate similar performance. This is not surprising, since the proposed GMPF algorithm is executed in a similar way to the MPF in the fixed and known dipole number scenario. We also find that the RMSE performance improves with an increase in particle number.
For the few entries with similar performance between MPF and GMPF (e.g., column C/S of MPF and GMPF), a similar RMSE may occur due to model mismatch. In terms of head model comparison, it is as expected that S/S, D/D, and C/C achieve better performance. Amongst all model pairs for the same algorithm, the continuous head model performs better than the others, as shown in bold. We also find that the spherical model only performs well when the groundtruth model is the same, while the discrete and continuous head models are more robust to different data.
In Figure 4, we show the detailed RMSE performance using the continuous head model and the proposed GMPF algorithm. For the initial particle numbers larger than 12000, the RMSE stays at the same level. This phenomenon is expected according to the individual dipole dynamic model, as a dipole only moves within the triangular subplane.
4.1.2 Known number of five dipoles
Figure 5 shows the tracking results of five known dipoles using the continuous head model and the proposed algorithm.
We can see from Figure 5(a) that the average RMSE over all five dipoles remains at the level of 8 mm. The boxplot ranges between 4 and 12 mm. In Figure 5(b) we show the changes in particle size for some of the identical trials. The initial particle size is . We can see that in most experiments decreases dramatically to a very low level from around . Comparing the two figures, one can observe that the proposed algorithm achieves good RMSE performance while adaptively reducing the number of particles. This greatly saves computational cost for multiple-particle-filter types of algorithms. The total particle number is multiplied by the number of dipoles.
Alg / Dipole  1  2  3  4  5  Processing Time 

MPF  8.72  9.85  9.14  10.68  9.57  56 s 
GMPF  7.02  9.33  7.58  10.24  9.17  28 s 
From Figure 5 (a) and (c), we can see that the average RMSE over the five dipoles is almost at the same level for both MPF and GMPF. This can also be seen in Table 2, whose middle five columns are indexed by dipole in the five-dipole localisation example; the table shows the average RMSE for each dipole over 30 identical experiments. We implemented the algorithms in Matlab on a computer with an Intel Core i7 CPU @ 3.7 GHz. The processing time for an iteration (with 50 time steps) of the MPF algorithm is 56 seconds and that of GMPF is 28 seconds; the total computational time for 30 iterations is around 1820 seconds for the MPF and 1340 seconds for the proposed method. From the table we observe that the computational time for GMPF is less than that for MPF; this is due to the adaptive filtering step in the proposed algorithm. Although the extra Gibbs updating step in GMPF increases the computational expense, the decrease in particle number from the adaptive filtering step still makes GMPF faster than MPF.
4.2 Localisation with unknown number of dipoles
In real-world applications, we cannot obtain prior information on the dipole number, so an example with an unknown dipole number is needed to assess the proposed model and algorithm. The algorithm is tested using two examples: a localisation problem with an unknown number of three dipoles, and a localisation problem with an unknown, dynamic number of dipoles. The dipole number in the latter example varies between three and five. The three-dipole example with an unknown dipole number is used to evaluate the performance of the proposed algorithm.
As we stated in Equation (8), the dipole number dynamics are modelled and estimated through . In other words, the new estimated dipole number at time depends on the previous time step estimates . For the initial dipole number at time where there is no historical data available, we assign with the clustered MNE estimation result .
4.2.1 Evaluation of dipole number estimation
The new dipole number estimate for highly depends on the estimate at the previous time step. In our selection criterion step, there are three candidate pairs , as stated previously. In this paper, we multiply the dipole number dynamic probability from Equation (8) with the probability computed in the selection criterion step to obtain the candidate posterior probability. We assign , and . The dipole number dynamic probability can also be assigned using other designed schemes. Here we simply give the algorithm the prior knowledge (which comes from observations of real examples) that the dipole number evolves slowly and is more likely to stay the same between two time steps. is assigned using the candidate pair with the highest normalised candidate posterior probability.
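The combination of the dipole-number transition prior with the selection criterion probabilities can be sketched as follows. The specific prior values (0.8 for staying the same, 0.1 for a birth or death) are illustrative placeholders, since the paper's elided assignments are not recoverable here.

```python
import numpy as np

def candidate_posteriors(selection_probs, p_same=0.8, p_change=0.1):
    """Combine the dipole-number transition prior with the selection
    criterion probabilities for the three candidates
    (one fewer dipole, same number, one more dipole) and normalise."""
    prior = np.array([p_change, p_same, p_change])
    post = prior * np.asarray(selection_probs, dtype=float)
    return post / post.sum()
```

The candidate with the highest normalised posterior then supplies both the dipole number and the state estimate at the current time step.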
Here we use a three-dipole example to test the proposed approach. The example setting is the same as that in the known dipole number example, except that no prior knowledge of the dipole number is given to the algorithm. We apply both the MPF algorithm from Chen2013B () and the proposed GMPF algorithm to the example using a range of particle numbers. Figure 6 shows the histograms of the dipole number estimates. Each plot is the result of 30 identical experiments (each of 50 time steps). Figure 6(a) is the estimation result from the MPF algorithm in Chen2013B (); since its estimation does not depend on the state estimates from the particle filter, we only show one result with 10000 particles. We apply our selection criterion assisted GMPF algorithm with a range of particle numbers.
GMPF Particle Number  7000  10000  13000  16000  19000  MPF 10000 

Avg. Dipole Number  3.568  3.442  3.260  3.282  3.292  3.562 
Table 3 shows the average estimated dipole number over 30 iterations. Since the groundtruth dipole number is 3, we can observe that as the particle number increases from 7000 to 19000, the average estimated dipole number improves slightly. The numbers for GMPF 7000 and GMPF 10000 are similar; however, if we check Figure 6(a) and Figure 6(b) for the corresponding histogram distributions, we can see there are more successful estimates (dipole estimate equal to 3) with GMPF 7000 than with GMPF 10000. From the histograms we can also observe that, as the particle number in GMPF increases, the result does not improve much for particle sizes larger than 13000.
4.2.2 Dynamic unknown number of dipole example
We use a dynamic dipole number setting in this example. The groundtruth dipole number varies between 3 and 5 during the 50 time steps. The number of dipoles is shown in the ‘True’ row of Table 4. We use the MPF, the original GMPF, and the GMPF with five Gibbs iterations to perform the localisation task; the particle numbers are all equal to 10000.
In order to compare algorithm performance in a multiple target tracking scenario with an unknown number of targets, we adopt a multi-target performance evaluation method based on the Optimal Subpattern Assignment (OSPA) metric Ristic2011 (). The method introduces a penalty term to punish missed or false tracks when the estimated dipole number is unequal to the groundtruth. We denote the groundtruth dipole number as , and the groundtruth dipole state as . For the case , the OSPA-based distance can be computed by: