Centralized and Decentralized Global Outer-synchronization of Asymmetric Recurrent Time-varying Neural Network by Data-sampling

Wenlian Lu, Ren Zheng, Tianping Chen

Abstract

In this paper, we discuss the outer-synchronization of asymmetrically connected recurrent time-varying neural networks. Using both centralized and decentralized data-sampling discretization principles, we derive several sufficient conditions, based on diverse vector norms, which guarantee that any two trajectories of the identical neural network system starting from different initial values converge together. The lower bounds of the inter-sample time intervals under both principles are proved to be positive, which excludes Zeno behavior. A numerical example is provided to illustrate the efficiency of the theoretical results.

Keywords: Outer-synchronization, Data sampling, Centralized principle, Decentralized principle, Recurrent neural networks

1 Introduction

Recurrently connected neural networks, also known as Hopfield neural networks, have been extensively studied in past decades and have found many applications in diverse areas. Such applications depend heavily on the dynamical behaviors of the system; therefore, analysis of these dynamics is a necessary step in the practical design of neural networks.

The dynamical behaviors of continuous-time recurrently asymmetrically connected neural networks (CTRACNN) were studied at the very early stage of neural network research. For example, multistable and oscillatory behaviors were studied by Amari (1971, 1972) and Wilson & Cowan (1972). Chaotic behaviors were studied by Sompolinsky, & Crisanti (1988). Hopfield, & Tank (1984, 1986) studied the stability of symmetrically connected networks and showed their practical applicability to optimization problems. It should be noted that Cohen, & Grossberg (1983) gave more rigorous results on the global stability of such networks.

The global stability of symmetrically connected networks described by differential equations has now been well established; see Chen (1999); Chen, & Amari (2001); Chen, & Lu (2002); Fang, & Kincaid (1996); Forti, & Marini (1994); Hirsch (1989); Kaszkurewicz, & Bhaya (1994); Kelly (1990); Li, Michel, & Porod (1988); Matsuoka (1992); Yang, & Dillon (1994) and the references therein. More closely related to the present paper, Liu, Lu, & Chen (2011) addressed the global self-synchronization of general continuous-time asymmetrically connected recurrent networks and discussed in detail independent identically distributed switching processes for selecting the time-varying parameters.

However, in applications, neural networks are usually realized by discrete iterations rather than by continuous-time equations, and synchronization analysis for differential equations is generally not applicable to the discrete-time situation. Several papers (Jin, Nikiforuk, & Gupta, 1994; Jin, & Gupta, 1999; Wang, 1997) discussed different types of discrete-time neural networks in which the step sizes were constant. However, Liu, Chen, & Yuan (2012); Manuel, & Tabuada (2011); Seyboth, Dimarogonas, & Johansson (2013); Wang, & Lemmon (2008) pointed out that a constant time-step size is costly. This motivates us to design adaptive step sizes for the synchronization of asymmetric recurrent time-varying neural networks.

Moreover, the discretization is related to the concept of sampled-data control, and a number of papers have discussed the dynamics of neural networks using sampled-data control. The papers (Lam, & Leung, 2006; Wu, Shi, & Su, 1972; Zhu, & Wang, 2011) applied the sampled-data control technique to the stabilization of three-layer fully connected feedforward neural networks. In Chandrasekar, Rakkiyappan, Rihan, & Lakshmanan (2014); Jung, Park, & Lee (2014); Lee, Park, Kwon, & Lee (2013); Liu, Yu, Cao, & Chen (2015); Rakkiyappan, Chandrasekar, Park, & Kwon (2014), the authors used a sampled-data control strategy for the exponential synchronization of neural networks with Markovian jumping parameters and time-varying delays. Rakkiyappan, Sakthivel, Park, & Kwon (2013) discussed sampled-data state estimation for Markovian jumping fuzzy cellular neural networks with probabilistic time-varying delays.

The purpose of this paper is to give a comprehensive analysis of the outer-synchronization of discrete-time recurrently asymmetrically connected time-varying neural networks. We propose two discretization schemes, named centralized and decentralized discretization respectively, and present sufficient conditions for global outer-synchronization. In the centralized discretization, a common step size is used for every neuron, whereas in the decentralized discretization each neuron uses its own distributed step size; both schemes guarantee that any two trajectories starting from different initial values converge together.

2 Preliminaries and problem formulation

In this section, we provide the models of asymmetric recurrent neural networks with data-sampling, together with some notations. The continuous-time version of the recurrently connected neural network is described by the following differential equations

(1)

where , and are piecewise continuous and bounded, , and satisfies

(2)

for all , where is a constant and .
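The displayed equations (1) and (2) did not survive extraction. For orientation only, a standard Hopfield-type form consistent with the surrounding description is sketched below; the symbols d_i(t) (self-decay rates), a_{ij}(t) (connection weights), I_i(t) (external inputs), f_j (activations) and G_j (Lipschitz constants) are assumptions rather than notation recovered from the source:

\[
\dot{x}_i(t) = -d_i(t)\,x_i(t) + \sum_{j=1}^{n} a_{ij}(t)\,f_j\bigl(x_j(t)\bigr) + I_i(t), \qquad i = 1,\dots,n,
\]

with the activations satisfying a Lipschitz-type condition of the form

\[
\bigl|f_j(u) - f_j(v)\bigr| \le G_j\,|u - v| \quad \text{for all } u, v \in \mathbb{R}, \qquad G_j > 0.
\]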

In the centralized data-sampling strategy, the continuous-time system (1) is rewritten as

(3)

for . The increasing time sequence ordered as is uniform for all neurons. Each neuron broadcasts its state to its out-neighbours and receives its in-neighbours' state information at the common sampling times.
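Since the precise form of (3) is lost, the mechanism just described can only be illustrated under an assumed model: between common sampling times, every neuron integrates its own dynamics while the coupling term is frozen at the last broadcast states (a zero-order hold). The following minimal Python sketch makes this concrete; the names A, d, I, the tanh activation, and the fixed period h are all hypothetical choices for illustration.

```python
import numpy as np

# Minimal sketch of centralized data-sampling: all neurons sample and
# broadcast at common times t_k = k*h; the coupling uses only sampled states.
# Model form (tanh activations, constant weights) is an assumption.
rng = np.random.default_rng(0)
n, h, dt, T = 3, 0.1, 0.001, 5.0       # neurons, common period, Euler step, horizon
A = 0.5 * rng.standard_normal((n, n))  # asymmetric connection weights
d = np.ones(n)                         # self-decay rates
I = np.zeros(n)                        # external inputs

def simulate(x0):
    x = x0.copy()
    sampled = x0.copy()                # states broadcast at the last common time t_k
    t, next_sample = 0.0, h
    while t < T:
        if t >= next_sample:           # every neuron samples/broadcasts simultaneously
            sampled, next_sample = x.copy(), next_sample + h
        x = x + dt * (-d * x + A @ np.tanh(sampled) + I)  # Euler step, held coupling
        t += dt
    return x

x1 = simulate(rng.standard_normal(n))
x2 = simulate(rng.standard_normal(n))
print("distance between the two trajectories:", np.linalg.norm(x1 - x2))
```

Running two copies from different initial values, as above, is exactly the outer-synchronization experiment the paper studies: for a sufficiently small common period the printed distance shrinks toward zero.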

By comparison, in the decentralized data-sampling strategy, Eq. (1) is rewritten as the following push-based decentralized system

(4)

for . The increasing time sequence ordered as is distributed, i.e., specific to each neuron . Every neuron pushes its state information to its out-neighbours when it updates its own state, and it receives its in-neighbours' state information whenever a neighbouring neuron renews its state.
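The push-based mechanism can be sketched in the same illustrative setting. Here each neuron refreshes the value it has pushed to its out-neighbours on its own clock, and always computes its coupling from the last values pushed by its in-neighbours; the fixed per-neuron periods h_i below are a placeholder for the (elided) triggering rule of (4).

```python
import numpy as np

# Sketch of push-based decentralized sampling: neuron i pushes its state at
# its own times; neighbours always use the last pushed value. Per-neuron
# periods h_i stand in for the paper's triggering rule (an assumption).
rng = np.random.default_rng(1)
n, dt, T = 3, 0.001, 5.0
A = 0.5 * rng.standard_normal((n, n))
d, I = np.ones(n), np.zeros(n)
h = 0.05 + 0.1 * rng.random(n)         # distinct sampling periods per neuron

def simulate(x0):
    x = x0.copy()
    pushed = x0.copy()                 # last state each neuron pushed out
    next_push = h.copy()
    t = 0.0
    while t < T:
        due = t >= next_push           # neurons whose own clock has fired
        pushed[due] = x[due]           # push the current state to out-neighbours
        next_push[due] += h[due]
        x = x + dt * (-d * x + A @ np.tanh(pushed) + I)
        t += dt
    return x

print("final state:", simulate(rng.standard_normal(n)))
```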

To begin the discussion, we introduce three generalized norms and recall the definition of outer-synchronization proposed in Wu, Zheng, & Zhou (2009).

Definition 1

Let be a positive constant; then we can define three generalized norms as follows (an illustrative sketch of the presumed formulas is given after this definition)

  1. norm:

  2. norm:

  3. norm:

where is a vector.
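The displayed formulas of Definition 1 did not survive extraction. The weighted norms below follow a convention common in this literature for a positive weight vector ξ, and are an assumption about the intended definitions rather than the paper's own formulas.

```python
import numpy as np

# Assumed definitions of the three generalized norms for a positive weight
# vector xi (a common convention; the paper's exact formulas were lost):
#   {xi,1}-norm:   sum_i xi_i * |x_i|
#   {xi,2}-norm:   sqrt(sum_i xi_i * x_i^2)
#   {xi,inf}-norm: max_i |x_i| / xi_i
def norm_xi_1(x, xi):
    return float(np.sum(xi * np.abs(x)))

def norm_xi_2(x, xi):
    return float(np.sqrt(np.sum(xi * x**2)))

def norm_xi_inf(x, xi):
    return float(np.max(np.abs(x) / xi))

x = np.array([1.0, -2.0, 0.5])
xi = np.array([0.5, 1.0, 2.0])
print(norm_xi_1(x, xi), norm_xi_2(x, xi), norm_xi_inf(x, xi))
```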

Definition 2

Consider any two trajectories and starting from different initial values and of the following system

(5)

The system (5) is said to achieve outer-synchronization if there exists a controller for the two trajectories and such that their difference converges to zero as time tends to infinity.

Other major notations which will be used throughout this paper are summarized in the following definition.

Definition 3

Let be a positive constant; then we define

and

where and .

Because of the boundedness of the functions, it can be seen that and are bounded for all . That is, there exist positive constants and such that

with .

3 Structure-dependent data-sampling principle

In this section, we provide several structure-based data-sampling rules for the next triggering time point, at which the neurons renew their states and the control signals.

3.1 Structure-dependent centralized data-sampling

For any neuron , consider two trajectories and of the system (3) starting from different initial values. Denote with . Then it holds that

(6)

where for all , and .

The following theorem gives conditions that guarantee that the system (3) reaches outer-synchronization via the - norm.

Theorem 1

Let and be constants with and . Suppose that there exist , such that for all and . Set an increasing time-point sequence as

(7)

. Then the system (3) reaches outer-synchronization.

Proof.   From the condition , one can see that

which implies that exists for all and . Thus, one can further see

(8)

and

(9)

for all and . Furthermore, we have

Consider for each , and we have

with

which implies for all and , according to (2). Note

From (8), one can see

which leads to

(10)

Then, it follows

(11)

The last equality holds due to (10). Thus, the rule (7) together with (9) implies

Since the equality in (7) occurs at , we have

which implies

In addition, for each , from the rule (7) and the condition , inequality (11) implies that for each , . Hence, it holds

The outer-synchronization of system (3) is proved.

The following results are analogous to Theorem 1 but are established via the other two generalized norms. Their proofs are similar to that of Theorem 1 and can be found in Zheng, Chen, & Lu (2015), so they are omitted in the present paper.

Proposition 1

Let and be constants with and . Suppose that there exist , such that for all and . Set an increasing time-point sequence as

(12)

. Then the system (3) reaches outer-synchronization.

Proposition 2

Let and be constants with and . Suppose that there exist , such that for all and . Set an increasing time-point sequence as

(13)

. Then the system (3) reaches outer-synchronization.

Remark 1

From the proof, one can see that the inter-sample intervals are bounded from below by a positive constant, which excludes Zeno behaviour under the rules (7), (12), and (13).
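The concrete triggering rules (7), (12) and (13) did not survive extraction. As a purely hypothetical illustration of how such a self-triggered rule produces a positive lower bound on the inter-sample intervals, the template below extends the current interval while a conservative error-growth bound stays below a threshold β < 1; the bound e^{Lτ} − 1 is a placeholder standing in for the paper's structure-dependent estimate, not the rule itself.

```python
import numpy as np

# Hypothetical self-triggered template: pick t_{k+1} as the largest time for
# which a conservative growth bound g(tau) = exp(L*tau) - 1 stays below
# beta < 1. Any such monotone bound gives the same Zeno-exclusion argument:
# tau is roughly log(1 + beta) / L, a positive constant.
def next_sample_time(t_k, L, beta, grid=1e-4, tau_max=10.0):
    tau = grid
    while tau + grid <= tau_max and np.expm1(L * (tau + grid)) <= beta:
        tau += grid
    return t_k + tau                   # tau > 0 uniformly in k: no Zeno behaviour

t = 0.0
for k in range(3):
    t = next_sample_time(t, L=2.0, beta=0.5)
    print(f"t_{k+1} = {t:.4f}")        # equally spaced here, since L is constant
```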

To illustrate that the results via the three norms are independent of each other, we give the following example. Denote

Let with

when and

when .

In the first time interval , we have found that when

it holds

where . In the second time interval , we can find that when

it follows

However, to maintain , we have to solve the following inequalities

that is

One can see that there is no such solution of and .

Hence, conditions (7) and (12) are independent. Using a similar method, one can show that the three conditions are pairwise independent. Therefore, the results via the three norms are mutually independent.

3.2 Structure-dependent push-based decentralized data-sampling

For each neuron , consider two trajectories and of system (4) starting from different initial values. Denote with . It follows

(14)

where for all , and .

The following theorem and propositions give conditions that guarantee the convergence of system (14) with respect to the three generalized norms.

Theorem 2

Let and be constants with and . Suppose that there exist such that for all and . Set as the triggering time points as

(15)

for and . Then the system (4) reaches outer-synchronization.

Proof.   For each , let . Similar to the arguments up to (11) in the proof of Theorem 1, one can derive the following inequality immediately:

(16)

From the arguments of (9), one can conclude

(17)

in an analogous way.

Let be an increasing sequence such that and , which implies that for each neuron , equality in the triggering rule (15) occurs at least once. Thus, we have

Consider for any neuron at triggering time where , and we have

By the inequality (2), it holds

Based on the triggering rule (15), we can obtain

which means

For any time , the state becomes

where . Thus,

for any and , which implies

The proof of the outer-synchronization of system (4) is completed.
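For the decentralized rule (15), whose exact form is likewise lost, a per-neuron analogue of the hypothetical template sketched after Remark 1 conveys the idea: each neuron runs the threshold test on its own clock with its own local constant, so the triggering sequences are distributed rather than common. The constants L_i and the growth bound below are placeholders, not the paper's quantities.

```python
import numpy as np

# Per-neuron analogue of the self-triggered template: neuron i extends its
# own interval while its local growth bound stays below beta, keeping its own
# triggering sequence {t_k^i}. L_i are assumed placeholder constants.
def next_push_time(t_prev, L_i, beta, grid=1e-4, tau_max=10.0):
    tau = grid
    while tau + grid <= tau_max and np.expm1(L_i * (tau + grid)) <= beta:
        tau += grid
    return t_prev + tau

L = np.array([1.0, 2.0, 4.0])          # one local constant per neuron (assumed)
clocks = np.zeros(len(L))
for k in range(2):
    clocks = np.array([next_push_time(t, Li, 0.5) for t, Li in zip(clocks, L)])
    print(f"round {k+1}:", np.round(clocks, 4))   # distinct, neuron-specific times
```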

Proposition 3

Let be a constant and . Set as the time points such that

(18)

for and . Then the system (4) reaches outer-synchronization.

Proposition 4

Let be a constant and . Set