# A Broad Dynamical Model for Pattern Formation by Lateral Inhibition

Murat Arcak, Department of Electrical Engineering and Computer Sciences, University of California, Berkeley. Email: arcak@eecs.berkeley.edu.
July 19, 2019

## 1 Introduction

Spatial patterns of gene expression are central to the development of multi-cellular organisms. Most mathematical studies of pattern formation investigate diffusion-driven instability, a mechanism that amplifies spatial inhomogeneities in a class of reaction-diffusion systems (see, e.g., [1]). However, many patterning events in multi-cellular organisms rely on cell-to-cell contact signaling, such as the Notch pathway [2], and do not involve diffusible proteins for intercellular communication. A particularly interesting phenomenon in this form of communication is lateral inhibition, whereby a cell that adopts a particular fate inhibits its immediate neighbors from doing likewise [3], thus leading to ‘fine-grained’ patterns. There is increasing interest in understanding the Notch signaling circuitry in mammalian cells that leads to such lateral inhibition [4, 5]. Recent studies showed that a lateral inhibition pathway also functions in *E. coli*, and enables the bacteria to inhibit the growth of other *E. coli* strains in direct contact [6].

Dynamical models are of great interest for understanding the circuit topologies involved in lateral inhibition and for predicting the associated patterns. Several simplified models have been employed for Notch signaling pathways in [3] and [5]. The objective of this paper is to present an abstract dynamical model that captures the essential features of lateral inhibition and to demonstrate with dynamical systems techniques that these features indeed lead to patterning. Although this model is not meant specifically for Notch signaling, it encompasses as special cases the lateral inhibition model in [3] as well as a slightly modified version of the one in [5].

Our model treats the evolution of concentrations in each cell as an input-output system, where the inputs represent the influence of adjacent cells and the outputs correspond to the concentrations of the species that interact with adjacent cells. The input-output models for the cells are then interconnected according to an undirected graph where the nodes represent the cells, and the presence of a link between two nodes means that the corresponding cells are in contact. The main assumption on the input-output model is that each constant input yields a unique and globally asymptotically stable steady-state, and that the value of the output at this steady-state is a decreasing function of the input. This decreasing property captures the inhibition of the cell function by its neighbors. The model allows multiple inputs and outputs, and is restricted by a monotonicity assumption, following the definition of monotonicity for dynamical systems with inputs and outputs [7].

Using this model, we first give an instability condition for the homogeneous steady-state, applicable to arbitrary contact graphs. We then focus our attention on bipartite graphs, and demonstrate the emergence of a “checkerboard” pattern, exhibiting alternating high and low values of concentrations in adjacent cells. Next, we establish a strong monotonicity property of the interconnected model for bipartite graphs, which implies that almost every bounded solution (except for a measure-zero set of initial conditions) converges to a steady-state [8, 9]. A graph is bipartite if and only if it contains no odd-length cycles, and Cartesian products of bipartite graphs are also bipartite [10]. Thus, the results of this section are applicable, among others, to grid graphs (one dimensional path graphs and their Cartesian products in higher dimensions) which are appropriate for representing arrays of cells.

## 2 Lateral Inhibition Model and Preliminaries

We let $\mathcal{G}$ be an undirected, connected graph where the nodes represent the cells, and the presence of a link between two nodes means that the corresponding cells are in contact. In preparation for the dynamical model studied below, we let $N$ denote the number of cells and define the matrix $P\in\mathbb{R}^{N\times N}$:

$$p_{ij}=\begin{cases}d_i^{-1}&\text{if nodes $i$ and $j$ are adjacent,}\\0&\text{otherwise,}\end{cases}\tag{1}$$

where $d_i$ denotes the degree of node $i$. It follows that $P$ is a nonnegative, row-stochastic matrix; that is:

$$P\mathbf{1}=\mathbf{1}\tag{2}$$

where $\mathbf{1}$ denotes the $N$-vector of ones. The matrix $P$ is identical to the probability transition matrix for a random walk on the graph $\mathcal{G}$. The properties summarized below therefore follow from standard results for random walks (see, e.g., [11]):

###### Lemma 1.

$P$ possesses real eigenvalues $\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_N$, all of which lie in the interval $[-1,1]$, and corresponding real, linearly independent eigenvectors $v_1,\dots,v_N$. In particular, $\lambda_1=1$, and $v_1=\mathbf{1}$ is a corresponding eigenvector. If $\mathcal{G}$ is bipartite, then $\lambda_N=-1$, and a corresponding eigenvector $v_N$ is such that its entries are either $+1$ or $-1$, and two entries corresponding to adjacent nodes have opposite signs.
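The spectral properties in Lemma 1 are easy to check numerically on a small example. The sketch below (Python with NumPy; the 4-node path graph and all values are our own illustrative choices, not from the paper) builds the random-walk matrix $P$ for a path graph, which is bipartite, and confirms that its eigenvalues lie in $[-1,1]$ with $\lambda_1=1$ and $\lambda_N=-1$:

```python
import numpy as np

# Path graph on 4 nodes (bipartite): 1-2-3-4
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
deg = adj.sum(axis=1)
P = adj / deg[:, None]     # p_ij = 1/d_i for adjacent i, j; row-stochastic

eigvals = np.sort(np.linalg.eigvals(P).real)[::-1]   # lambda_1 >= ... >= lambda_N
print(np.round(eigvals, 4))   # largest is 1; smallest is -1 since the graph is bipartite
```

For this graph the spectrum is $\{1, 1/2, -1/2, -1\}$, and an eigenvector for $\lambda_N=-1$ is $(1,-1,1,-1)$, alternating sign between adjacent nodes exactly as Lemma 1 states.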

Let $i=1,\dots,N$ index the cells, and consider the dynamical model:

$$\dot{x}_i=f(x_i,u_i),\qquad y_i=h(x_i)\tag{3}$$

where $x_i\in\mathbb{R}^n$ is a vector describing the state of reagent concentrations in cell $i$, $u_i\in\mathbb{R}^m$ describes the ‘input’ from adjacent cells, and $y_i\in\mathbb{R}^m$ describes the ‘output’ that serves as an input to adjacent cells. In particular,

$$U=(P\otimes I_m)Y\tag{4}$$

where $P$ is as defined in (1), $U:=[u_1^T\ \cdots\ u_N^T]^T$, and $Y:=[y_1^T\ \cdots\ y_N^T]^T$. It follows from (1) that the input $u_i$ is the average of the outputs $y_j$ over all neighbors $j$ of cell $i$. Thus, we henceforth take the input and output spaces to be identical: $\mathcal{U}=\mathcal{Y}\subset\mathbb{R}^m$.
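Equation (4) simply applies the neighbor-averaging matrix $P$ species-by-species through the Kronecker product. A minimal sketch (NumPy; the graph and the stacked output values are illustrative assumptions of ours):

```python
import numpy as np

# Random-walk matrix of a 4-node path graph
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
P = adj / adj.sum(axis=1, keepdims=True)

m = 2                                # two output species per cell
Y = np.arange(8, dtype=float)        # stacked outputs [y_1; y_2; y_3; y_4]
U = np.kron(P, np.eye(m)) @ Y        # interconnection (4): U = (P ⊗ I_m) Y

# The input of cell 2 is the average of the outputs of its neighbors 1 and 3
print(U[2:4])                        # equals (Y[0:2] + Y[4:6]) / 2
```

Each cell thus receives, for every species, the arithmetic mean of that species' output over its contact neighbors, which is the precise sense in which (3)-(4) couples the cells.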

We assume that $f(\cdot,\cdot)$ and $h(\cdot)$ are continuously differentiable and further satisfy the following property:

###### Assumption 1.

For each constant input $u(t)\equiv u^*\in\mathcal{U}$, system (3) has a globally asymptotically stable steady-state $x^*=:S(u^*)$, with the additional property that:

$$\det\left(\left.\frac{\partial f(x,u)}{\partial x}\right|_{(x,u)=(x^*,u^*)}\right)\neq0.\tag{5}$$

The map $S:\mathcal{U}\to\mathbb{R}^n$ and, therefore, the map $T:\mathcal{U}\to\mathcal{Y}$ defined by:

$$T(\cdot):=h(S(\cdot)),\tag{6}$$

are continuously differentiable.

Following the terminology in [7], we will refer to $S(\cdot)$ as the input-state characteristic, and to $T(\cdot)$ as the input-output characteristic. Our next assumption is that (3) is a monotone system in the sense of [7], as defined below. According to the classical definition for systems without inputs and outputs [9], a monotone system is one that preserves a partial ordering of the initial conditions as the solutions evolve. The partial ordering is defined with respect to a positivity cone $K$ in the Euclidean space that is closed, convex, pointed ($K\cap(-K)=\{0\}$), and has nonempty interior. Given such a cone, $x\preceq\hat{x}$ means $\hat{x}-x\in K$, $x\prec\hat{x}$ means $x\preceq\hat{x}$ and $x\neq\hat{x}$, and $x\prec\prec\hat{x}$ means that $\hat{x}-x$ is in the interior of $K$. The system is then defined to be monotone if two solutions $x(t)$ and $\hat{x}(t)$ starting with the order $x(0)\preceq\hat{x}(0)$ maintain $x(t)\preceq\hat{x}(t)$ for all $t\ge0$ (understood as all times $t$ that belong to the common domain of existence of the two solutions). The more restrictive notion of strong monotonicity stipulates that $x(0)\prec\hat{x}(0)$ implies $x(t)\prec\prec\hat{x}(t)$ for all $t>0$. The monotonicity concept was extended to systems with inputs and outputs in [7]:
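For intuition, the order-preservation property can be observed on the simplest monotone system: a linear system $\dot{x}=Ax$ with a Metzler matrix $A$ (nonnegative off-diagonal entries) is monotone with respect to the positive orthant. The sketch below (our own illustrative matrix and initial conditions, not from the paper) integrates two ordered initial conditions and verifies that the order persists:

```python
import numpy as np

# Metzler matrix: nonnegative off-diagonal entries, so the flow of dx/dt = A x
# preserves the componentwise order induced by the positive orthant.
A = np.array([[-2.0,  1.0],
              [ 0.5, -1.0]])

def flow(x0, t_final, steps=2000):
    """Forward-Euler integration of dx/dt = A x (sufficient for illustration)."""
    x = np.array(x0, dtype=float)
    dt = t_final / steps
    for _ in range(steps):
        x = x + dt * (A @ x)
    return x

x_lo = flow([0.0, 0.0], 3.0)   # smaller initial condition
x_hi = flow([1.0, 0.5], 3.0)   # larger initial condition (componentwise)
assert np.all(x_hi >= x_lo)    # the order x(0) <= x_hat(0) is preserved
```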

###### Definition 1.

Given positivity cones $K^u$, $K^y$, $K^x$ for the input, output, and state spaces, the system $\dot{x}=f(x,u)$, $y=h(x)$, is said to be monotone if $x(0)\preceq\hat{x}(0)$ and $u(t)\preceq\hat{u}(t)$ for all $t\ge0$ imply that the resulting solutions satisfy $x(t)\preceq\hat{x}(t)$ for all $t\ge0$, and the output map is such that $x\preceq\hat{x}$ implies $h(x)\preceq h(\hat{x})$.

###### Assumption 2.

The system (3) is monotone with respect to $K^u=\mathbb{R}^m_{\ge0}$, $K^y=\mathbb{R}^m_{\le0}$, and $K^x=K$, where $K$ is some positivity cone in $\mathbb{R}^n$.

As observed in [7, Remark V.2], monotonicity implies that the input-state and input-output characteristics are nondecreasing with respect to the corresponding orderings; that is, $u\preceq\hat{u}$ with respect to $K^u$ implies $S(u)\preceq S(\hat{u})$ with respect to $K^x$ and $T(u)\preceq T(\hat{u})$ with respect to $K^y$. Since $K^u=\mathbb{R}^m_{\ge0}$ and $K^y=\mathbb{R}^m_{\le0}$ in Assumption 2, we conclude that $T(\cdot)$ is nonincreasing with respect to the standard order induced by $\mathbb{R}^m_{\ge0}$. This nonincreasing property means that, if two cells are in contact, an increase in the output value of one has the opposite effect on the other, which is why (3)-(4) is referred to as a "lateral inhibition" model. We note from the nonincreasing property of $T(\cdot)$ that:

$$T'(u):=\frac{\partial T(u)}{\partial u}\tag{7}$$

is a nonpositive matrix in $\mathbb{R}^{m\times m}$, and denote its spectral radius as:

$$\rho(T'(u)).\tag{8}$$
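As a concrete illustration of a nonincreasing characteristic, one can take a repressing Hill function $T(u)=\beta/(1+u^h)$, a common lateral-inhibition nonlinearity; this specific form and its parameters are an assumption of ours, not taken from the paper. In the scalar case ($m=1$) the spectral radius (8) is just $|T'(u)|$:

```python
import numpy as np

beta, hill = 3.0, 2.0   # illustrative parameters (not from the paper)

def T(u):
    """Repressing Hill function: a concrete nonincreasing characteristic."""
    return beta / (1.0 + u**hill)

def T_prime(u, eps=1e-6):
    return (T(u + eps) - T(u - eps)) / (2.0 * eps)   # central difference

# T' is nonpositive over the whole input range, consistent with (7)
assert all(T_prime(u) <= 0.0 for u in np.linspace(0.0, 5.0, 50))

rho = abs(T_prime(1.0))   # scalar case: the spectral radius (8) is |T'(u)|
```

Here $T'(1)=-\beta h/(1+1)^2=-1.5$, so $\rho(T'(1))=1.5$; whether such a value exceeds one is exactly what the patterning conditions below hinge on.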

We conclude this section by quoting lemmas that will be used in the sequel. Lemmas 2 and 3 are from [12]:

###### Lemma 2.

Given the monotone system $\dot{x}=f(x,u)$, $y=h(x)$, with continuously differentiable $f$ and $h$, the linearization $\dot{\tilde{x}}=A\tilde{x}+B\tilde{u}$, $\tilde{y}=C\tilde{x}$, about a point $(x^*,u^*)$ satisfying $f(x^*,u^*)=0$ is also monotone with respect to the same positivity cones.

###### Lemma 3.

The linear system $\dot{x}=Ax+Bu$, $y=Cx$, is monotone with respect to the cones $K^x$, $K^u$, $K^y$ if and only if:

1) $x\in K^x$ implies $e^{At}x\in K^x$ for all $t\ge0$,

2) $u\in K^u$ implies $Bu\in K^x$,

3) $x\in K^x$ implies $Cx\in K^y$.

The following lemma, proven in [12] for single-input, single-output systems and extended in [13] to the multivariable case, determines stability of a positive feedback system based on the ‘dc gain’ of the open-loop system:

###### Lemma 4.

Suppose the linear system $\dot{x}=Ax+Bu$, $y=Cx$, is monotone with respect to cones such that $K^u=K^y$, and $A$ is Hurwitz. If $-(I+CA^{-1}B)$ is Hurwitz, then so is $A+BC$. If $-(I+CA^{-1}B)$ has an eigenvalue with a positive real part, then so does $A+BC$.

In the special case of single-input, single-output systems, the stability condition above amounts to checking whether the dc gain is greater or smaller than one. In the multi-input, multi-output case, this condition is equivalent to inspecting whether the spectral radius of the dc gain matrix is greater or smaller than one.

The following test from [7, 14] is useful for certifying monotonicity with respect to orthant cones:

###### Lemma 5.

Consider the system $\dot{x}=f(x,u)$, $y=h(x)$, with $x\in\mathcal{X}\subset\mathbb{R}^n$, $u\in\mathcal{U}\subset\mathbb{R}^m$, $y\in\mathbb{R}^p$, where the interiors of $\mathcal{X}$ and $\mathcal{U}$ are convex, and $f$ and $h$ are continuously differentiable. If there exist $\epsilon_1,\dots,\epsilon_n$, $\delta_1,\dots,\delta_m$, $\mu_1,\dots,\mu_p\in\{0,1\}$ such that:

$$(-1)^{\epsilon_j+\epsilon_k}\frac{\partial f_j}{\partial x_k}(x,u)\ge0\quad\forall x\in\mathcal{X},\ \forall u\in\mathcal{U},\ \forall j\neq k\tag{9}$$
$$(-1)^{\epsilon_j+\delta_k}\frac{\partial f_j}{\partial u_k}(x,u)\ge0\quad\forall x\in\mathcal{X},\ \forall u\in\mathcal{U},\ \forall j,k\tag{10}$$
$$(-1)^{\epsilon_j+\mu_k}\frac{\partial h_k}{\partial x_j}(x)\ge0\quad\forall x\in\mathcal{X},\ \forall j,k,\tag{11}$$

then the system is monotone with respect to the positivity cones $K^x=\{x\in\mathbb{R}^n:(-1)^{\epsilon_j}x_j\ge0,\ j=1,\dots,n\}$, $K^u=\{u\in\mathbb{R}^m:(-1)^{\delta_k}u_k\ge0,\ k=1,\dots,m\}$, $K^y=\{y\in\mathbb{R}^p:(-1)^{\mu_k}y_k\ge0,\ k=1,\dots,p\}$.

## 3 Instability of the Homogeneous Steady-State

Note that system (3)-(4) admits spatially homogeneous solutions $x_1(t)=\cdots=x_N(t)=:x(t)$, where $x(t)$ satisfies:

$$\dot{x}=f(x,h(x)).\tag{12}$$

In particular, if the map $T(\cdot)$ has a fixed point:

$$u^*=T(u^*),\tag{13}$$

then (12) possesses the steady-state:

$$x^*=S(u^*).\tag{14}$$

For single-input, single-output systems ($m=1$), the nonincreasing property of the map $T(\cdot)$ indeed guarantees a unique fixed point in (13).

The "lumped model" (12) describes the dynamics of the $nN$-dimensional system (3)-(4) reduced to the $n$-dimensional invariant subspace where the solutions are spatially homogeneous. Thus, the steady-state $x^*$ of the lumped model defines the homogeneous steady-state $x_i=x^*$, $i=1,\dots,N$, for the full system (3)-(4). As a starting point for the analysis of pattern formation, we now give an instability criterion for the homogeneous steady-state:
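In the scalar case the fixed point (13) can be located by bisection on $g(u)=T(u)-u$, which is strictly decreasing when $T$ is nonincreasing, so the root is unique. A sketch with an illustrative characteristic (our choice, not the paper's):

```python
def T(u):
    # Illustrative nonincreasing characteristic (our choice, not the paper's)
    return 3.0 / (1.0 + u**2)

def fixed_point(lo=0.0, hi=10.0, tol=1e-12):
    """Bisection on g(u) = T(u) - u; g is strictly decreasing, so the
    fixed point u* = T(u*) in (13) is unique."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if T(mid) - mid > 0.0:
            lo = mid           # g(mid) > 0: the root lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

u_star = fixed_point()
assert abs(T(u_star) - u_star) < 1e-9   # u* = T(u*)
```

For this $T$, $u^*$ solves $u^3+u-3=0$, i.e., $u^*\approx1.2134$, and $x^*=S(u^*)$ then gives the homogeneous steady-state.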

###### Theorem 1.

Consider the system (3)-(4) and suppose Assumptions 1 and 2 hold. Let $\lambda_N$ denote the smallest eigenvalue of $P$ as in Lemma 1, and let $u^*$, $x^*$ be as in (13), (14). If:

$$\lambda_N\,\rho(T'(u^*))<-1,\tag{15}$$

then the homogeneous steady-state $x_i=x^*$, $i=1,\dots,N$, is unstable.

Proof: The linearization of (3)-(4) about the homogeneous steady-state gives the Jacobian matrix:

$$I_N\otimes A+P\otimes(BC)\tag{16}$$

where:

$$A:=\left.\frac{\partial f(x,u)}{\partial x}\right|_{(x,u)=(x^*,u^*)},\quad B:=\left.\frac{\partial f(x,u)}{\partial u}\right|_{(x,u)=(x^*,u^*)},\quad C:=\left.\frac{\partial h(x)}{\partial x}\right|_{x=x^*}.\tag{17}$$

We recall from Lemma 1 that

$$V^{-1}PV=\Lambda:=\begin{bmatrix}\lambda_1&&\\&\ddots&\\&&\lambda_N\end{bmatrix},\tag{18}$$

where $V:=[v_1\ \cdots\ v_N]$, and apply the following similarity transformation to (16):

$$(V^{-1}\otimes I_n)\left[I_N\otimes A+P\otimes(BC)\right](V\otimes I_n)=I_N\otimes A+\Lambda\otimes(BC).\tag{19}$$

This matrix is block-diagonal, with the $k$th diagonal block given by:

$$A+\lambda_kBC.\tag{20}$$

Claim: If

$$\lambda_k\,\rho(T'(u^*))<-1,\tag{21}$$

then (20) has a positive eigenvalue.

The theorem follows from this claim because, if (15) holds, then (20) has a positive eigenvalue for $k=N$, which implies instability. To prove the claim, we note from Assumption 2 and Lemma 2 that the linear system $\dot{x}=Ax+Bu$, $y=Cx$, is monotone with respect to $K^u=\mathbb{R}^m_{\ge0}$, $K^y=\mathbb{R}^m_{\le0}$, and $K^x=K$. We write $\lambda_kBC=B_kC$ where $B_k:=\lambda_kB$, and note that (21) implies $\lambda_k<0$ because the spectral radius (8) is nonnegative. Thus, the system $\dot{x}=Ax+B_ku$, $y=Cx$, is monotone with $K^u=\mathbb{R}^m_{\le0}=K^y$. In addition, Assumptions 1 and 2 imply that $A$ is Hurwitz, as can be deduced from [12, Lemma 6.5]. Thus, it follows from the second statement of Lemma 4 that if

$$-(I+CA^{-1}B_k)=-I-\lambda_kCA^{-1}B\tag{22}$$

has a positive eigenvalue, then so does (20). The remaining task is thus to prove that (22) indeed has a positive eigenvalue. To this end, we first show that

$$T'(u^*)=-CA^{-1}B.\tag{23}$$

Since

$$f(S(u),u)\equiv0,\tag{24}$$

differentiation gives:

$$\left.\frac{\partial f(x,u)}{\partial x}\right|_{x=S(u)}\frac{\partial S(u)}{\partial u}+\left.\frac{\partial f(x,u)}{\partial u}\right|_{x=S(u)}=0.\tag{25}$$

Next, it follows from the definition (6) that

$$T'(u)=\left.\frac{\partial h(x)}{\partial x}\right|_{x=S(u)}\frac{\partial S(u)}{\partial u}.\tag{26}$$

Combining (25) and (26), and substituting (17), we verify (23). Substituting (23), we then rewrite (22) as

$$-I+\lambda_kT'(u^*),\tag{27}$$

and conclude that it indeed has a positive eigenvalue, because $\lambda_k<0$ and the nonpositivity of $T'(u^*)$ imply that $\lambda_kT'(u^*)$ is a nonnegative matrix, and (21) implies that its spectral radius $|\lambda_k|\,\rho(T'(u^*))$ exceeds one. Since the spectral radius is an eigenvalue for nonnegative matrices (see, e.g., [15]), the conclusion follows.

The eigenvectors $v_k$ of $P$ used in the similarity transformation (19) may be interpreted as the spatial modes of the system. Thus, the stability properties of the matrix (20) for each $k$ determine whether the corresponding mode decays or grows in time. Since the spectral radius (8) is nonnegative and $\lambda_1,\dots,\lambda_N$ are in decreasing order, whenever the instability criterion (21) holds for a particular mode $k$, it also holds for higher values of $k$. Because larger $k$ implies higher spatial frequency content in $v_k$, we conclude that the instability condition above sets the stage for the formation of high-frequency spatial patterns.
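The mode decomposition used in the proof is easy to verify numerically: the spectrum of the full Jacobian (16) is the union of the spectra of the blocks (20) over the eigenvalues $\lambda_k$ of $P$. The data below ($A$, $B$, $C$ and the graph) are illustrative choices of ours satisfying the hypotheses, with $\rho(T'(u^*))=1.25$ and $\lambda_N=-1$, so (15) holds:

```python
import numpy as np

# Random-walk matrix of a 4-node path graph (bipartite, lambda_N = -1)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
P = adj / adj.sum(axis=1, keepdims=True)

# Illustrative linearization data (n = 2 states, m = 1 input/output per cell)
A = np.array([[-1.0,  0.5],
              [ 0.0, -2.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[-5.0, 0.0]])

Tp = -C @ np.linalg.inv(A) @ B       # T'(u*) = -C A^{-1} B, eq. (23): -1.25
rho = abs(Tp[0, 0])                  # spectral radius (scalar case): 1.25

J = np.kron(np.eye(4), A) + np.kron(P, B @ C)   # full Jacobian, eq. (16)
lams = np.linalg.eigvals(P).real

# Spectrum of J is the union of the spectra of the blocks A + lambda_k B C
block_eigs = np.concatenate([np.linalg.eigvals(A + lk * (B @ C)) for lk in lams])
assert np.allclose(np.sort(np.linalg.eigvals(J).real), np.sort(block_eigs.real))

# lambda_N * rho = -1.25 < -1: the homogeneous steady-state is unstable
assert np.linalg.eigvals(J).real.max() > 0
```

The unstable eigenvalue comes from the block with $\lambda_N=-1$, i.e., the highest spatial-frequency mode, as the discussion above predicts.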

## 4 Patterning in Bipartite Graphs

### 4.1 Emergence of Checkerboard Patterns

For bipartite graphs, where $\lambda_N=-1$ as stated in Lemma 1, the instability condition (15) in Theorem 1 becomes:

$$\rho(T'(u^*))>1.\tag{28}$$

This condition indicates the growth of the highest spatial-frequency mode $v_N$, which exhibits opposite signs for adjacent nodes. Thus, concentrations in adjacent nodes move in opposite directions in the vicinity of the homogeneous steady-state. We now show that, if the map

$$T^2(\cdot):=T(T(\cdot))\tag{29}$$

has two fixed points $u_1\neq u_2$ other than $u^*$, satisfying:

$$u_1=T(u_2),\qquad u_2=T(u_1),\tag{30}$$

then the system (3)-(4) has an inhomogeneous steady-state with two sets of concentrations, each assigned to one of two adjacent cells. We will refer to this steady-state as a "checkerboard" pattern, since adjacent cells adopt distinct states. Although this term may be associated with cells arranged as a grid graph in two-dimensional space, we will use it broadly for any spatial arrangement that forms a bipartite graph.

###### Proposition 1.

Let $\mathcal{G}$ be a bipartite graph and let the sets $I$ and $I'$ partition its nodes such that no two nodes in the same set are adjacent. If there exist $u_1\in\mathcal{U}$ and $u_2\in\mathcal{U}$, $u_1\neq u_2$, satisfying (30), then

$$x_i=S(u_1),\ i\in I,\qquad x_i=S(u_2),\ i\in I',\tag{31}$$

and

$$x_i=S(u_2),\ i\in I,\qquad x_i=S(u_1),\ i\in I',\tag{32}$$

are steady-states of the system (3)-(4).

Proof: To show that (31) is a steady-state, we note that, if $x_i=S(u_1)$, then $y_i=T(u_1)=u_2$ and, if $x_i=S(u_2)$, then $y_i=T(u_2)=u_1$. From (4), the input to a node in $I$ is $u_1$ because all neighbors of this node belong to $I'$. Likewise, the input to a node in $I'$ is $u_2$ because all neighbors of this node belong to $I$. Since $f(S(u_1),u_1)=0$ and $f(S(u_2),u_2)=0$, we conclude that (31) is indeed a steady-state, and identical arguments apply to (32).

###### Theorem 2.

Consider the system (3)-(4) and suppose Assumptions 1 and 2 and the hypotheses of Proposition 1 hold. If, in addition,

$$\rho\left(T'(u_1)T'(u_2)\right)<1,\tag{33}$$

then the steady-states (31) and (32) are asymptotically stable.

Before giving the proof, we note that (30) corresponds to a period-two orbit of the discrete-time system:

$$u(t+1)=T(u(t)),\tag{34}$$

and (33) implies the asymptotic stability of this orbit. Likewise, (28) indicates instability of the fixed point $u^*$ for this discrete-time system. Thus, an interesting duality exists between (34) and the spatially-distributed system (3)-(4) defined on a bipartite graph: a bifurcation from a stable fixed point to a stable period-two orbit in (34) corresponds to the emergence of stable checkerboard patterns from a homogeneous steady-state in (3)-(4).

In the single-input, single-output case ($m=1$), where $T(\cdot)$ is a nonincreasing function by Assumption 2, condition (28) indeed implies the existence of a period-two orbit (30). To see this, assume to the contrary that $u^*$ is the unique fixed point of $T^2(\cdot)$. Since $T(\cdot)$ is continuous and nonincreasing, this uniqueness property would imply that $u^*$ is a global attractor for all solutions of the difference equation (34) starting in $\mathcal{U}$ [16, Lemma 1.6.5]. This, however, contradicts (28), which implies instability of $u^*$ for this scalar difference equation.
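The bifurcation described here can be seen by simply iterating (34) for a scalar nonincreasing map with $|T'(u^*)|>1$; the iterates settle onto a period-two orbit $\{u_1,u_2\}$ satisfying (30). The particular map below is an illustrative assumption of ours, not from the paper:

```python
def T(u):
    # Illustrative repressing characteristic with |T'(u*)| > 1 at its fixed
    # point (our choice, not the paper's model)
    return 3.0 / (1.0 + u**4)

u = 0.5
for _ in range(200):        # iterate the discrete-time system (34)
    u = T(u)

u1, u2 = T(u), u            # two consecutive iterates: the period-two orbit
assert abs(T(u1) - u2) < 1e-9 and abs(T(u2) - u1) < 1e-9
```

For this map the orbit is approximately $\{3.000, 0.037\}$, and the numerically evaluated product $T'(u_1)T'(u_2)$ is far below one, so the orbit is strongly attracting, in line with (35).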

The argument above does not suggest the uniqueness of the pair $(u_1,u_2)$, and multiple pairs satisfying (30) may exist. However, we claim that at least one pair satisfies:

$$\left.\frac{dT^2(u)}{du}\right|_{u=u_1}=\left.\frac{dT^2(u)}{du}\right|_{u=u_2}=T'(u_1)T'(u_2)<1,\tag{35}$$

which is the scalar equivalent of (33), since $T'(u_1)T'(u_2)$ is nonnegative. To see this, note from (28) that:

$$\left.\frac{dT^2(u)}{du}\right|_{u=u^*}=T'(u^*)T'(u^*)>1\tag{36}$$

and suppose, in contrast to (35), that the derivative of $T^2(\cdot)$ is greater than or equal to one at each of its fixed points other than $u^*$. This implies that $T^2(u)\ge u$ for all $u\ge u^*$, because $T^2(u)-u$ has nonnegative slope at its zero-crossings and, thus, remains nonnegative for $u\ge u^*$. The inequality $T^2(u)\ge u$ would then imply unbounded growth of the iterates of (34), which is a contradiction because $T(\cdot)$ is continuous and nonincreasing and, thus, bounded.

Proof of Theorem 2: Let $N_I$ and $N_{I'}$ denote the cardinalities of the sets $I$ and $I'$, and index the cells such that $i=1,\dots,N_I$ belong to $I$, and $i=N_I+1,\dots,N$ belong to $I'$. Then the matrix $P$ has the form:

$$P=\begin{bmatrix}0&P_{12}\\P_{21}&0\end{bmatrix}\tag{37}$$

where $P_{12}\in\mathbb{R}^{N_I\times N_{I'}}$, $P_{21}\in\mathbb{R}^{N_{I'}\times N_I}$. Note that the linearization of (3)-(4) about (31) gives the Jacobian matrix:

$$\begin{bmatrix}I_{N_I}\otimes A_1&P_{12}\otimes(B_1C_2)\\P_{21}\otimes(B_2C_1)&I_{N_{I'}}\otimes A_2\end{bmatrix}\tag{38}$$

where

$$A_j:=\left.\frac{\partial f(x,u)}{\partial x}\right|_{(x,u)=(S(u_j),u_j)},\quad B_j:=\left.\frac{\partial f(x,u)}{\partial u}\right|_{(x,u)=(S(u_j),u_j)},\quad C_j:=\left.\frac{\partial h(x)}{\partial x}\right|_{x=S(u_j)},\qquad j=1,2.\tag{39}$$

From the definition (1), the matrix $D^{1/2}PD^{-1/2}$, where $D$ is a diagonal matrix of the node degrees, is symmetric. Since it also inherits the block structure in (37), we write:

$$D^{1/2}PD^{-1/2}=\begin{bmatrix}0&R\\R^T&0\end{bmatrix}\tag{40}$$

where $R\in\mathbb{R}^{N_I\times N_{I'}}$ is appropriately defined. Then, we apply the following similarity transformation to (38):

$$(D^{1/2}\otimes I_n)\begin{bmatrix}I_{N_I}\otimes A_1&P_{12}\otimes(B_1C_2)\\P_{21}\otimes(B_2C_1)&I_{N_{I'}}\otimes A_2\end{bmatrix}(D^{-1/2}\otimes I_n)=\begin{bmatrix}I_{N_I}\otimes A_1&R\otimes(B_1C_2)\\R^T\otimes(B_2C_1)&I_{N_{I'}}\otimes A_2\end{bmatrix}.\tag{41}$$

The structure of (40) is such that it can be diagonalized with an orthonormal matrix of the form:

$$Q=\begin{bmatrix}Q_1&Q_1&Q_3&0\\Q_2&-Q_2&0&Q_4\end{bmatrix}\tag{42}$$

which results in:

$$\begin{bmatrix}0&R\\R^T&0\end{bmatrix}Q=Q\begin{bmatrix}\Lambda_+&&&\\&-\Lambda_+&&\\&&0&\\&&&0\end{bmatrix}\tag{43}$$

where $\Lambda_+$ is a diagonal matrix of the strictly positive eigenvalues of (40), the columns of $Q_3$ and $Q_4$ span the null spaces of $R^T$ and $R$, respectively, and the dimensions of the zero diagonal blocks in (43) are consistent with the dimensions of these null spaces (which we denote as $n_3$ and $n_4$, respectively). From the orthonormality of $Q$, we get the identities:

$$Q_1^TQ_1=Q_2^TQ_2=\tfrac{1}{2}I_r\tag{44}$$
$$Q_3^TQ_3=I_{n_3},\qquad Q_4^TQ_4=I_{n_4}\tag{45}$$
$$Q_1^TQ_3=0,\qquad Q_2^TQ_4=0,\tag{46}$$

where $r$ is the dimension of $\Lambda_+$. Likewise, equation (43) implies:

$$RQ_2=Q_1\Lambda_+,\qquad R^TQ_1=Q_2\Lambda_+\tag{47}$$
$$RQ_4=0,\qquad R^TQ_3=0.\tag{48}$$

We now return to the Jacobian matrix (41) and further apply the following similarity transformation:

$$\begin{bmatrix}2Q_1^T\otimes I_n&0\\0&2Q_2^T\otimes I_n\\Q_3^T\otimes I_n&0\\0&Q_4^T\otimes I_n\end{bmatrix}\begin{bmatrix}I_{N_I}\otimes A_1&R\otimes(B_1C_2)\\R^T\otimes(B_2C_1)&I_{N_{I'}}\otimes A_2\end{bmatrix}\begin{bmatrix}Q_1\otimes I_n&0&Q_3\otimes I_n&0\\0&Q_2\otimes I_n&0&Q_4\otimes I_n\end{bmatrix}\tag{49}$$

where the leftmost matrix is the inverse of the rightmost matrix from (44)-(46). Likewise, using (44)-(48), it is not difficult to show that the product (49) equals:

$$\begin{bmatrix}I_r\otimes A_1&\Lambda_+\otimes(B_1C_2)&&\\\Lambda_+\otimes(B_2C_1)&I_r\otimes A_2&&\\&&I_{n_3}\otimes A_1&\\&&&I_{n_4}\otimes A_2\end{bmatrix}.\tag{50}$$

Since Assumptions 1 and 2 imply that $A_1$ and $A_2$ are Hurwitz [12, Lemma 6.5], stability of (50) is determined by the upper left blocks which, upon a similarity transformation with an appropriate permutation matrix, are block-diagonalized into blocks of the form:

$$\begin{bmatrix}A_1&\lambda_iB_1C_2\\\lambda_iB_2C_1&A_2\end{bmatrix},\qquad i=1,\dots,r,\tag{51}$$

where $\lambda_i$ denotes the $i$th diagonal entry of $\Lambda_+$.

We will now show that (51) is Hurwitz for any $\lambda_i\in[0,1]$. Since all diagonal entries of $\Lambda_+$ lie in this interval by Lemma 1, this will conclude the proof. We do not provide a separate proof for the asymptotic stability of (32), as identical arguments apply when the indices $1$ and $2$ are swapped in (51). If $\lambda_i=0$, (51) is Hurwitz because $A_1$ and $A_2$ are Hurwitz. If $\lambda_i\neq0$, then we apply the similarity transformation:

$$\begin{bmatrix}I&0\\0&\lambda_i^{-1}I\end{bmatrix}\begin{bmatrix}A_1&\lambda_iB_1C_2\\\lambda_iB_2C_1&A_2\end{bmatrix}\begin{bmatrix}I&0\\0&\lambda_iI\end{bmatrix}=\begin{bmatrix}A_1&\lambda_i^2B_1C_2\\B_2C_1&A_2\end{bmatrix}\tag{52}$$

and rewrite the result as:

$$A+BC\tag{53}$$

where

$$A:=\begin{bmatrix}A_1&\lambda_i^2B_1C_2\\0&A_2\end{bmatrix},\qquad B:=\begin{bmatrix}0\\B_2\end{bmatrix},\qquad C:=\begin{bmatrix}C_1&0\end{bmatrix}.\tag{54}$$

We claim that the linear system defined by the triplet $(A,B,C)$ is monotone with respect to $K^x=(-K)\times K$, $K^u=\mathbb{R}^m_{\ge0}$, and $K^y=\mathbb{R}^m_{\ge0}$, where $K$ is as in Assumption 2. To see this, first note from Lemma 2 that $(A_1,B_1,C_1)$ and $(A_2,B_2,C_2)$ are monotone with respect to the cones specified in Assumption 2. By Lemma 3, this means that:

$$x\in K\ \Rightarrow\ e^{A_jt}x\in K\ \ \forall t\ge0,\qquad u\in\mathbb{R}^m_{\ge0}\ \Rightarrow\ B_ju\in K,\qquad x\in K\ \Rightarrow\ C_jx\in\mathbb{R}^m_{\le0},\qquad j=1,2.\tag{55}$$

We now show that the conditions of Lemma 3 hold for $(A,B,C)$ with $K^x=(-K)\times K$, $K^u=K^y=\mathbb{R}^m_{\ge0}$:

1) Suppose $x=(x_1,x_2)\in K^x$, that is, $-x_1\in K$ and $x_2\in K$. Then, since $A$ in (54) is block upper-triangular,

$$e^{At}x=\begin{bmatrix}e^{A_1t}x_1+\int_0^te^{A_1(t-s)}\lambda_i^2B_1C_2e^{A_2s}x_2\,ds\\e^{A_2t}x_2\end{bmatrix}\in(-K)\times K\tag{56}$$

because, from (55), $e^{A_2s}x_2\in K$, $C_2e^{A_2s}x_2\in\mathbb{R}^m_{\le0}$, $B_1C_2e^{A_2s}x_2\in-K$ and, hence, both terms in the first entry of (56) lie in $-K$.

2) We want to show that $u\in K^u$ implies $Bu\in K^x$. From the definition of $B$ in (54), $Bu\in K^x$ means $B_2u\in K$. It follows from the second implication in (55) that $u\in\mathbb{R}^m_{\ge0}$ indeed implies $Bu\in K^x$.

3) To prove monotonicity with $K^y=\mathbb{R}^m_{\ge0}$, we need to show that $x=(x_1,x_2)\in K^x$ implies $Cx\in K^y$. This is indeed true, since $Cx=C_1x_1$ and, from the third implication in (55), $-x_1\in K$ implies $C_1x_1\in\mathbb{R}^m_{\ge0}$.

Having verified the conditions of Lemma 3, we conclude that $(A,B,C)$ is monotone with respect to $K^x=(-K)\times K$, $K^u=K^y=\mathbb{R}^m_{\ge0}$. In addition, the matrix $A$ in (54) is Hurwitz, as $A_1$ and $A_2$ are Hurwitz. Thus, it follows from the first statement in Lemma 4 that, if $-(I+CA^{-1}B)$ is Hurwitz, then so is (53). Note that

$$-(I+CA^{-1}B)=-I+\lambda_i^2C_1A_1^{-1}B_1C_2A_2^{-1}B_2\tag{57}$$

and, from a derivation similar to the one for (23), $T'(u_1)=-C_1A_1^{-1}B_1$ and $T'(u_2)=-C_2A_2^{-1}B_2$. Thus, (57) gives:

$$-(I+CA^{-1}B)=-I+\lambda_i^2T'(u_1)T'(u_2),\tag{58}$$

and (33) and $\lambda_i^2\le1$ imply that $-(I+CA^{-1}B)$ is indeed Hurwitz. From Lemma 4, this means that (53) and, thus, (51) are Hurwitz, concluding the proof.
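A direct simulation illustrates Theorem 2 and the emergence of a checkerboard. The toy model below is our own construction, not the paper's: one state per cell, $\dot{x}_i=-x_i+3/(1+u_i^4)$ with $y_i=x_i$ on a 6-cell path graph, so that $T(u)=3/(1+u^4)$ and $\rho(T'(u^*))>1$ at the homogeneous steady-state. A small alternating perturbation grows into alternating high/low concentrations:

```python
import numpy as np

# 6-cell path graph (bipartite). One state per cell:
#   dx_i/dt = -x_i + 3/(1 + u_i^4),   y_i = x_i,   u = P y.
# Toy model chosen by us so that T(u) = 3/(1+u^4) and rho(T'(u*)) > 1.
N = 6
adj = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
P = adj / adj.sum(axis=1, keepdims=True)

def rhs(x):
    u = P @ x                         # inputs are neighbor averages, eq. (4)
    return -x + 3.0 / (1.0 + u**4)

# Homogeneous steady-state is near x* = 1.133; perturb it with a small
# alternating (checkerboard) component and integrate with forward Euler.
x = 1.133 + 0.01 * (-1.0) ** np.arange(N)
dt = 0.01
for _ in range(5000):                 # integrate to t = 50
    x = x + dt * rhs(x)

print(np.round(x, 3))                 # alternating high/low concentrations
```

The trajectory leaves the unstable homogeneous state and converges to the stable checkerboard steady-state (31), with $x_i=S(u_1)\approx3.0$ on one class of cells and $x_i=S(u_2)\approx0.04$ on the other.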

### 4.2 Generic Convergence to Steady-States

Thus far we have studied local asymptotic stability properties of the steady-states. Strongly monotone systems (as defined in the paragraph above Definition 1) have been shown to possess a "generic convergence" property [8, 9], which means that almost every bounded solution (except for a measure-zero set of initial conditions) converges to the set of steady-states. Below we first prove monotonicity of (3)-(4) in Theorem 3 and next establish strong monotonicity in Theorem 4, thereby concluding generic convergence for this system.

###### Theorem 3.

If $\mathcal{G}$ is bipartite and Assumption 2 holds, then the system (3)-(4) is monotone.

Proof: Let $I$ and $I'$ be defined as in Proposition 1, and suppose that, in (3), the cells are indexed such that $i=1,\dots,N_I$ belong to $I$, and $i=N_I+1,\dots,N$ belong to $I'$ as in the proof of Theorem 2, where $N_I$ is the cardinality of the set $I$. Let $U_I:=[u_1^T\cdots u_{N_I}^T]^T$, $Y_I:=[y_1^T\cdots y_{N_I}^T]^T$, and define $U_{I'}$, $Y_{I'}$, $X_I$, $X_{I'}$ similarly. Then, the interconnection condition (4) becomes:

$$U_I=(P_{12}\otimes I_m)Y_{I'}\tag{59}$$
$$U_{I'}=(P_{21}\otimes I_m)Y_I\tag{60}$$

where $P_{12}$ and $P_{21}$ are as in (37). A block diagram illustrating this interconnection is depicted in Figure 1.

To prove the monotonicity of this feedback system, we establish the monotonicity of the feedforward system with input $U_I$ and output $(P_{12}\otimes I_m)Y_{I'}$:

Claim: The feedforward system in Figure 1 with input $U_I$ and output $(P_{12}\otimes I_m)Y_{I'}$ is monotone with respect to the positivity cones $K^u=K^y=\mathbb{R}^{mN_I}_{\ge0}$ and $K^x=\underbrace{K\times\cdots\times K}_{N_I\ \text{times}}\times\underbrace{(-K)\times\cdots\times(-K)}_{N_{I'}\ \text{times}}$.

The theorem follows from this claim because a monotone input-output system, where the inputs and outputs are ordered with respect to the same positivity cone, is monotone when the output is connected to the input with unitary positive feedback (see the first part of the proof of [12, Theorem 2]).

To prove the claim above, we take two input signals satisfying $U_I(t)\preceq\hat{U}_I(t)$ for all $t\ge0$, which means that $u_i(t)\preceq\hat{u}_i(t)$, $i\in I$, with respect to $\mathbb{R}^m_{\ge0}$. Likewise, we let $X(0)\preceq\hat{X}(0)$ with respect to the cone $K^x$ above, which means that $x_i(0)\preceq\hat{x}_i(0)$ for $i\in I$ and $x_i(0)\succeq\hat{x}_i(0)$ for $i\in I'$ with respect to the cone $K$. It follows from Assumption 2 that:

$$x_i(t)\preceq\hat{x}_i(t)\quad\forall t\ge0,\quad i\in I.\tag{61}$$

Moreover, since $x_i\preceq\hat{x}_i$ implies $h(x_i)\preceq h(\hat{x}_i)$ with respect to $\mathbb{R}^m_{\le0}$ by Assumption 2, we conclude $Y_I\succeq\hat{Y}_I$ with respect to $\mathbb{R}^{mN_I}_{\ge0}$. Because $P_{21}$ is a nonnegative matrix, (60) implies $U_{I'}\succeq\hat{U}_{I'}$, which means that $u_i(t)\succeq\hat{u}_i(t)$ for all $t\ge0$, $i\in I'$. As noted above, $x_i(0)\succeq\hat{x}_i(0)$ for $i\in I'$ and, hence, another application of Assumption 2 yields:

$$x_i(t)\succeq\hat{x}_i(t)\quad\forall t\ge0,\quad i\in I'.\tag{62}$$

Since (61) and (62) hold with respect to $K$, we conclude that $X(t)\preceq\hat{X}(t)$ for all $t\ge0$ with respect to $K^x$. To conclude the proof of the claim, we need to show that $X\preceq\hat{X}$ implies $(P_{12}\otimes I_m)Y_{I'}\preceq(P_{12}\otimes I_m)\hat{Y}_{I'}$. Indeed, the former implies $x_i\succeq\hat{x}_i$ for $i\in I'$ and, it follows from Assumption 2 that $h(\hat{x}_i)\preceq h(x_i)$ with respect to $\mathbb{R}^m_{\le0}$. Thus, $y_i\preceq\hat{y}_i$ with respect to $\mathbb{R}^m_{\ge0}$ and, since $P_{12}$ is a nonnegative matrix, we conclude $(P_{12}\otimes I_m)Y_{I'}\preceq(P_{12}\otimes I_m)\hat{Y}_{I'}$ with respect to $\mathbb{R}^{mN_I}_{\ge0}$.

To establish strong monotonicity, we need additional excitability and transparency conditions, as defined in [12, 13]:

###### Definition 2.

The monotone system $\dot{x}=f(x,u)$, $y=h(x)$, is said to be excitable if $x(0)\preceq\hat{x}(0)$ and $u(t)\prec\hat{u}(t)$ for almost all $t\ge0$ imply $x(t)\prec\prec\hat{x}(t)$ for all $t>0$. It is said to be transparent if $x\preceq\hat{x}$ and $x\neq\hat{x}$ imply $h(x)\prec h(\hat{x})$.

Since inputs and outputs are ordered with respect to orthants ($\mathbb{R}^m_{\ge0}$ and $\mathbb{R}^m_{\le0}$) in Assumption 2, here we give a less restrictive definition of excitability (transparency) which requires that this property hold with respect to a particular component of the input (output) vector:

###### Definition 3.

The monotone system $\dot{x}=f(x,u)$, $y=h(x)$, is said to be excitable by the $j$th input if