Some theoretical results for a class of neural mass equations


Grégory Faye, Pascal Chossat, Olivier Faugeras
NeuroMathComp Laboratory, INRIA, Sophia Antipolis, CNRS, ENS Paris, France
Abstract

We study the neural field equations introduced by Chossat and Faugeras in [11] to model the representation and the processing of image edges and textures in the hypercolumns of the cortical area V1. The key entity, the structure tensor, intrinsically lives in a non-Euclidean, in effect hyperbolic, space. Its spatio-temporal behavior is governed by nonlinear integro-differential equations defined on the Poincaré disc model of the two-dimensional hyperbolic space. Using methods from the theory of functional analysis we show the existence and uniqueness of a solution of these equations. In the case of stationary, i.e. time-independent, solutions we perform a stability analysis which yields important results on their behavior. We also present an original study, based on non-Euclidean, hyperbolic, analysis, of a spatially localized bump solution in a limiting case. We illustrate our theoretical results with numerical simulations.

Keywords: Neural fields; nonlinear integro-differential equations; functional analysis; non-Euclidean analysis; stability analysis; hyperbolic geometry; hypergeometric functions; bumps.

AMS subject classifications: 30F45, 33C05, 34A12, 34D20, 34D23, 34G20, 37M05, 43A85, 44A35, 45G10, 51M10, 92B20, 92C20.

1 Introduction

Chossat and Faugeras in [11] have introduced a new and elegant approach to model the processing of image edges and textures in the hypercolumns of area V1 that is based on a nonlinear representation of the image first-order derivatives called the structure tensor. They assumed that this structure tensor was represented by neuronal populations in the hypercolumns of V1 that can be described by equations similar to those proposed by Wilson and Cowan [26].
Our investigations are motivated by the work of Bressloff, Cowan, Golubitsky, Thomas and Wiener [8, 9] on the spontaneous occurrence of hallucinatory patterns under the influence of psychotropic drugs, and by the further studies of Bressloff and Cowan [7, 6, 5]. We believe that the natural spatial extension of our model could lead to an interesting analysis of hyperbolic hallucinatory patterns, but this first requires a better understanding of the a-spatial model, which is the subject of the present work. The a-spatial model can also be linked to the work of Ben-Yishai [3] and of Hansel and Sompolinsky [16] on the ring model of orientation.
The aim of this paper is to present a general and rigorous mathematical framework, based on tools from functional analysis, for the modeling of neuronal populations in one hypercolumn of V1 by the structure tensor. We illustrate our results with numerical experiments. In section 2 we briefly introduce the equations, and in section 3 we analyse the problem of the existence and uniqueness of their solutions. In section 4 we deal with stationary solutions. In section 5 we present an analysis of what we call a hyperbolic radially symmetric stationary pulse in a limiting case. In section 6 we present some numerical simulations of the solutions. We conclude in section 7.

2 The model

We recall that the structure tensor is a way of representing the edges and textures of a 2D image [4, 21]. A structure tensor can be seen as a symmetric positive-definite matrix.
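For readers unfamiliar with this object, the following minimal sketch (not taken from [4, 21]) shows how a structure tensor field can be computed from a grayscale image by Gaussian smoothing of the outer product of the image gradient with itself; the smoothing scale sigma and all names are illustrative.

import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_field(image, sigma=1.0):
    # Image gradient by Sobel filtering.
    ix = sobel(image, axis=1, mode="reflect")
    iy = sobel(image, axis=0, mode="reflect")
    # Gaussian smoothing of the outer product of the gradient with itself.
    jxx = gaussian_filter(ix * ix, sigma)
    jxy = gaussian_filter(ix * iy, sigma)
    jyy = gaussian_filter(iy * iy, sigma)
    # One symmetric positive (semi-)definite 2x2 matrix per pixel.
    return np.stack([np.stack([jxx, jxy], axis=-1),
                     np.stack([jxy, jyy], axis=-1)], axis=-2)

The eigenvalues and eigenvectors of each such matrix encode the local contrast and orientation of the image, which is why the structure tensor is a natural population-level variable for a hypercolumn.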
We assume that a hypercolumn of V1 can represent the structure tensor in the receptive field of its neurons as the average membrane potential values of some of its neuronal populations. Let be a structure tensor. The time evolution of the average potential of the column is governed by the following neural mass equation, adapted from [11], where we allow the connectivity function to depend upon the time variable and where we integrate over the set of symmetric positive-definite matrices:

(1)

The nonlinearity is a sigmoidal function which may be expressed as:

where describes the stiffness of the sigmoid. is an external input.
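For concreteness, a standard choice of such a sigmoid, given here only as an illustration (the precise form used in the paper may differ), is

S(x) = \frac{1}{1 + e^{-\mu x}},

where \mu is the stiffness parameter; S increases monotonically from 0 to 1 and tends to the Heaviside step function as \mu \to \infty.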
The set SPD(2) is the set of symmetric positive-definite matrices. It can be seen as a foliated manifold by way of the set of special symmetric positive-definite matrices . Indeed, we have: . Furthermore, , where is the Poincaré disc, see e.g. [11]. As a consequence we use the following foliation of SPD(2): , which allows us to write for all , with . , and are related by the relation and the fact that is the representation in of with .
It is well-known [20] that (and hence SSPD(2)) is a two-dimensional Riemannian space of constant sectional curvature equal to -1 for the distance noted defined by

It follows, see e.g. [24, 11], that SPD(2) is a three-dimensional Riemannian space of constant sectional curvature equal to -1 for the distance noted defined by

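As a concrete illustration of these two distances, the short sketch below implements the classical Poincaré-disc distance (with the curvature -1 normalization) and the affine-invariant distance on symmetric positive-definite matrices; the paper's own conventions may differ by constant factors, so the functions below are only indicative.

import numpy as np
from scipy.linalg import logm, sqrtm

def poincare_distance(z1, z2):
    # Hyperbolic distance on the unit disc (curvature -1 normalization).
    return 2.0 * np.arctanh(abs(z1 - z2) / abs(1.0 - np.conj(z1) * z2))

def spd_distance(t1, t2):
    # Affine-invariant distance between symmetric positive-definite matrices.
    s = np.linalg.inv(sqrtm(t1))
    return np.linalg.norm(logm(s @ t2 @ s), "fro")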
As shown in proposition A.0.1 of appendix A, it is possible to express the volume element in coordinates, with :

We rewrite (1) in coordinates:

We get rid of the constant by redefining as .

(2)

The aim of the following sections is to establish that (2) is well-defined and to give necessary and sufficient conditions on the different parameters that allow us to prove existence and uniqueness results for the solutions of (2).

3 The existence and uniqueness of a solution

The aim of this section is to give theoretical and general results on the existence and uniqueness of a solution of (1). In the first subsection 3.1 we study the simpler case of homogeneous solutions of (1), i.e. of solutions that are constant with respect to the tensor variable . We then study in subsection 3.2 the useful case of the semi-homogeneous solutions of (1), i.e. of solutions that depend on the tensor variable but only through its coordinate in , and we end in subsection 3.3 with the general case.

3.1 Homogeneous solutions

A homogeneous solution to (1) is a solution that does not depend upon the tensor variable for a given homogeneous input and a constant initial condition . In coordinates, a homogeneous solution of (2) is defined by:

where:

(3)

Hence necessary conditions for the existence of a homogeneous solution are that:

  • the double integral (3) is convergent,

  • does not depend upon the variable . In that case, we note instead of .

In the special case where is a function of only the distance between and :

the second condition is satisfied. We postpone the verification of this fact until the following section. To summarize, the homogeneous solutions satisfy the differential equation:

(4)
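For illustration, equation (4) can be integrated numerically with an explicit Euler scheme. The sketch below assumes the standard neural mass form dV/dt = -alpha*V(t) + Wbar(t)*S(V(t)) + I(t); the names alpha, wbar, s and i_ext are ours and stand for the corresponding quantities in (4).

import numpy as np

def integrate_homogeneous(v0, t_max, dt, alpha, wbar, s, i_ext):
    # Explicit Euler scheme for dV/dt = -alpha*V + Wbar(t)*S(V) + I(t).
    n_steps = int(t_max / dt)
    v = np.empty(n_steps + 1)
    v[0] = v0
    for k in range(n_steps):
        t = k * dt
        v[k + 1] = v[k] + dt * (-alpha * v[k] + wbar(t) * s(v[k]) + i_ext(t))
    return v

# Example usage with a sigmoid of stiffness mu = 5, constant connectivity
# and a small constant external input.
mu = 5.0
trajectory = integrate_homogeneous(
    v0=0.1, t_max=20.0, dt=0.01, alpha=1.0,
    wbar=lambda t: 0.5,
    s=lambda x: 1.0 / (1.0 + np.exp(-mu * x)),
    i_ext=lambda t: 0.05)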

3.1.1 A first existence and uniqueness result

Equation (4) defines a Cauchy problem and we have the following theorem.

Theorem 3.1.1.

If the external current and the connectivity function are continuous on some closed interval containing , then for all in , there exists a unique solution of (4) defined on a subinterval of containing such that .

Proof.

It is a direct application of Cauchy's theorem on differential equations. We consider the mapping defined by:

It is clear that is continuous from to . We have for all and :

where .
Since is continuous on the compact interval , it is bounded there by and:

We can extend this result to the whole real line if and are continuous on .

Proposition 3.1.1.

If and are continuous on , then for all in , there exists a unique solution of (4) defined on such that .

Proof.

We have already shown the following inequality:

Then is locally Lipschitz with respect to its second argument. Let be a maximal solution on and let us denote by the upper bound of . We suppose that . Then we have for all :

where .
But theorem B.0.2 ensures that this is impossible, hence . The same argument applied to the lower bound of gives the conclusion. ∎

3.1.2 Simplification of (3) in a special case

Invariance

In the previous section we stated that, in the special case where is a function of the distance between two points in , does not depend upon the variables . We now prove this claim.

Lemma 3.1.1.

When is only a function of , then does not depend upon the variable .

Proof.

We work in coordinates and begin by rewriting the double integral (3) for all :

The change of variable yields:

This establishes that does not depend upon the variable . To finish the proof, we show that the following integral does not depend upon the variable :

(5)

where is a real-valued function such that is well defined.
We express in horocyclic coordinates: (see appendix D) and (5) becomes:

With the change of variable , this becomes:

The relation (proved e.g. in [17]) yields:

with and , which shows that does not depend upon the variable , as announced. ∎

Mexican hat connectivity

In this paragraph, we push further the computation of in the special case where does not depend upon the time variable and takes the special form suggested by Amari in [1], commonly referred to as the “Mexican hat” connectivity. It features center excitation and surround inhibition, which is an effective model for a mixed population of interacting inhibitory and excitatory neurons with typical cortical connections. It is also only a function of .
In detail, we have:

where:

with and .
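As an illustration, a difference-of-Gaussians profile of this kind can be written as follows; the amplitudes and widths are ours and do not reproduce the exact normalization of [1] or of the lemma below.

import numpy as np

def mexican_hat(x, a1=1.0, sigma1=0.5, a2=0.6, sigma2=1.5):
    # Excitatory center (a1, sigma1) minus inhibitory surround (a2, sigma2),
    # with sigma1 < sigma2 and a1 > a2; x is a (hyperbolic) distance.
    return (a1 * np.exp(-x**2 / (2.0 * sigma1**2))
            - a2 * np.exp(-x**2 / (2.0 * sigma2**2)))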
In this case we can obtain a very simple closed-form formula for as shown in the following lemma.

Lemma 3.1.2.

When is a Mexican hat function of and independent of , then:

(6)

where erf is the error function defined as:

Proof.

The proof is given in appendix C. ∎

3.2 Semi-homogeneous solutions

A semi-homogeneous solution of (2) is defined as a solution which does not depend upon the variable . In other words, the populations of neurons are not sensitive to the determinant of the structure tensor. The neural mass equation is then equivalent to the neural mass equation for tensors of unit determinant. We point out that semi-homogeneous solutions were previously introduced by Chossat and Faugeras in [11], who also performed a bifurcation analysis of what they called H-planforms. In this section, we define the framework in which their equations make sense. The corresponding equation reads:

(7)

where

We have implicitly made the assumption that does not depend on the coordinate . Some conditions under which this assumption is satisfied are described below.
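To make the equation concrete, the following crude sketch discretizes a neural field of this type on a regular grid of the unit disc. It assumes that equation (7) has the form dV(z,t)/dt = -alpha*V(z,t) + int_D W(d(z,z')) S(V(z',t)) dm(z') + I(z,t), with a hyperbolic area element dm(z) proportional to dx dy / (1 - |z|^2)^2; all names and parameter values are illustrative and this is not the scheme used for the simulations of section 6.

import numpy as np

def simulate_semi_homogeneous(n=41, t_max=5.0, dt=0.01, alpha=1.0, mu=5.0,
                              w=lambda d: np.exp(-d**2), i_ext=0.05):
    # Grid points of the unit disc, kept away from the boundary |z| = 1.
    xs = np.linspace(-0.95, 0.95, n)
    xg, yg = np.meshgrid(xs, xs)
    z = (xg + 1j * yg).ravel()
    z = z[np.abs(z) < 0.95]
    # Hyperbolic area weights dm(z) ~ dx*dy / (1 - |z|^2)^2.
    dm = (xs[1] - xs[0])**2 / (1.0 - np.abs(z)**2)**2
    # Pairwise hyperbolic distances (curvature -1 normalization).
    ratio = (np.abs(z[:, None] - z[None, :])
             / np.abs(1.0 - np.conj(z)[:, None] * z[None, :]))
    dist = 2.0 * np.arctanh(np.clip(ratio, 0.0, 1.0 - 1e-12))
    kernel = w(dist) * dm[None, :]          # quadrature weights absorbed
    s = lambda x: 1.0 / (1.0 + np.exp(-mu * x))
    v = np.zeros(z.size)
    for _ in range(int(t_max / dt)):        # explicit Euler in time
        v = v + dt * (-alpha * v + kernel @ s(v) + i_ext)
    return z, v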

We now deal with the problem of the existence and uniqueness of a solution to (7) for a given initial condition. We first introduce the framework in which this equation makes sense.

3.2.1 The well-posedness of equation (7)

Let be an open interval of . We assume that:

  • (C1): , ,

  • (C2): where is defined as for all ,

  • (C3): where .

Note that conditions (C1)-(C2) and lemma 3.1.1 imply that for all , . Then, for all , the mapping is integrable on . We deduce that for all :

The last inequality is a consequence of lemma 3.1.1 which shows that is not a function of .

Finally, for all and , the right-hand side of equation (7) is well-defined.
We introduce the following mapping, defined on , where is a functional space yet to be defined:

(8)

Our aim is to find a functional space where (8) is well-defined and the function maps to for all .
A natural choice would be to choose as a -integrable function of the space variable with . Unfortunately, the homogeneous solutions (constant with respect to ) do not belong to that space. Moreover, a valid model of neural networks should only produce bounded membrane potentials. That is why we restrict our choice to functional spaces of bounded, or essentially bounded, functions.

The well-posedness of equation (7) in the case

Our first choice is . The Fischer-Riesz theorem ensures that is a Banach space for the norm . We have the following proposition.

Proposition 3.2.1.

If satisfies conditions (C1)-(C3) then is well defined and is from to .

Proof.

For all and , is obviously measurable and we have from (8):

and hence . ∎

The well-posedness of equation (7) in the case

We now choose for the space of bounded continuous functions on . As shown e.g. in [18], this is a Banach space with respect to the uniform norm . The previous proposition 3.2.1 still holds:

Proposition 3.2.2.

If satisfies conditions (C1)-(C3) then is well defined and is from to .

Proof.

For all and :

so is bounded. It remains to show that for all the mapping is continuous on . We fix and write in horocyclic coordinates with and define on as:

A method similar to the one used in section 3.1.2 leads to:

We now use a classical theorem on the continuity of integrals depending on a parameter. It is easy to verify that:

  1. for all the function is measurable on ,

  2. for almost every the function is continuous on ,

  3. for all , and is integrable on .

It follows that the function is continuous on and is continuous on .
Finally, belongs to . ∎

3.2.2 The existence and uniqueness of a solution of (7)

From now on, is a functional Banach space for the norm . We suppose that all the hypotheses are satisfied so that is well-defined from to , with an open interval containing . In the previous section we have already presented two different examples for : and .
We rewrite (7) as a Cauchy problem defined on :

(9)
Theorem 3.2.1.

If the external current belongs to with an open interval containing and satisfies conditions (C1)-(C3), then for all , there exists a unique solution of (9) defined on a subinterval of containing .

Proof.

We prove that is continuous on . We have

and therefore

Because of condition (C2) we can choose small enough so that is arbitrarily small. This proves the continuity of . Moreover it follows from the previous inequality that:

with . This ensures the Lipschitz continuity of with respect to its second argument, uniformly with respect to the first. The Cauchy-Lipschitz theorem on a Banach space yields the conclusion. ∎

This solution, defined on the subinterval of , can in fact be extended to the whole real line, and we have the following proposition.

Proposition 3.2.3.

If the external current belongs to and satisfies conditions (C1)-(C3) with , then for all , there exists a unique solution of (9) defined on .

Proof.

The proof is a direct application of theorem B.0.3. If is Lipschitz continuous with respect to its second argument, the right-hand side of (9) is also Lipschitz continuous. Now is Lipschitz continuous because

3.2.3 The boundedness of a solution of (7)

We assume that is a Banach space chosen so that the mapping is well defined from to . Then the following proposition holds.

Proposition 3.2.4.

If the external current belongs to and is bounded in time, i.e. , and satisfies conditions (C1)-(C3) with , then the solution of (9) is bounded for each initial condition .

Proof.

For all we integrate (7) over :

The following upper bound holds:

(10)

and hence

which shows that the solution is bounded for each initial condition .

The upper bound (10) yields a simple attracting set for the dynamics of (7), as shown in the following proposition.

Proposition 3.2.5.

Let . The open ball of of center and radius is stable under the dynamics of equation (7). Moreover it is an attracting set for this dynamics and if and then:

Proof.

We can rewrite (10) as:

(11)

If this implies for all and hence for all , proving that is stable. Now assume that for all . The inequality (11) shows that for large enough this yields a contradiction. Therefore there exists such that . At this time instant we have

and hence

3.3 General solution

We now deal with the general solutions of equation (1). We first give some hypotheses that the connectivity function must satisfy. We present them in two ways: first on the set of structure tensors considered as the set SPD(2), and second on the set of tensors seen as . Let be a subinterval of . We assume that:

  • (H1): , ,

  • (H2): where is defined as for all where is the identity matrix of ,

  • (H3): , where .

We now express these hypotheses for the representation in of structure tensors:

  • (H1bis): , ,

  • (H2bis): where is defined as for all ,

  • (H3bis): , where .

3.3.1 Functional space setting

We need to settle on the choice of a Banach functional space for the membrane potential as in section 3.2. Our study of the semi-homogeneous case suggests the following choice: . As is an open set of , is a Banach space for the norm: .
We introduce the following mapping such that: