Asymptotic stability of stochastic LTV systems with applications to distributed dynamic fusion
Abstract
In this paper, we investigate asymptotic stability of linear timevarying systems with (sub) stochastic system matrices. Motivated by distributed dynamic fusion over networks of mobile agents, we impose some mild regularity conditions on the elements of timevarying system matrices. We provide sufficient conditions under which the asymptotic stability of the LTV system can be guaranteed. By introducing the notion of slices, as nonoverlapping partitions of the sequence of systems matrices, we obtain stability conditions in terms of the slice lengths and some network parameters. In addition, we apply the LTV stability results to the distributed leaderfollower algorithm, and show the corresponding convergence and steadystate. An illustrative example is also included to validate the effectiveness of our approach.
I introduction
Stability of linear timevarying (LTV) systems has been a topic of significant interest in a wide range of disciplines including but not restricting to mathematical modeling and control of dynamical systems, [1, 2, 3, 4, 5, 6, 7]. Discretetime, LTV dynamics can be represented by the following model:
(1) 
where is the state vector, ’s are the system matrices, ’s are the input matrices, and is the input vector. This model is particularly relevant to design and analysis of distributed fusion algorithms when the system matrices, ’s, are (sub) stochastic, i.e. they are nonnegative and each row sums to at most . Examples include leaderfollower algorithms, [8, 9], consensusbased control algorithms, [10, 11, 12], and sensor localization, [13, 14].
In contrast to the case when the system matrices, ’s, are timeinvariant, i.e. , as in many studies related to the above examples, we are motivated by the scenarios when these system matrices are timevarying. The dynamic system matrices do not only model timevarying neighboring interactions, but, in addition, capture agent mobility in multiagent networks. Consider, for example, the leaderfollower algorithm, [8, 9], where sensors update their states, ’s in Eq. (1), as a linearconvex combination of the neighboring states, and anchor keeps its (scalar) state, , fixed at all times. It is wellknown that under mild conditions on network connectivity the sensor states converge to the anchor state. However, the neighboring interactions change over time if the sensors are mobile. In the case of possibly random motion over the sensors, at each time , it is not guaranteed that a sensor can find any neighbor at all. If a sensor finds a set of neighbors to exchange information, none of these neighbors may be an anchor. We refer to the general class of such timevarying fusion algorithms over mobile agents as Distributed Dynamic Fusion (DDF). In this context, we study the conditions required on the DDF system matrices such that the dynamic fusion converges to (a linear combination of) the anchor state(s).
For linear timeinvariant (LTI) systems, a necessary and sufficient condition for stability is that the spectral radius, i.e. the absolute value of the largest eigenvalue, of the system matrix is subunit. A wellknown result from the matrix theory is that if the (timeinvariant) system matrix, , is irreducible and substochastic, sometimes referred to as uniformly substochastic, [15, 16], the spectral radius of is strictly less than one and converges to zero. In contrast, the DDF algorithms over mobile agents result into a timevarying system, Eq. (1), where a system matrix, , at any time is nonnegative, and can be: (i) identity if no sensor is able to update its state; (ii) stochastic if the updating sensor divides the total weight of among the sensors in its neighborhood; or, (iii) substochastic if the total weight of is divided among both sensors and anchors. In addition, it can be verified that in DDF algorithms, the resulting LTV system may be such that the spectral radius, , of the system matrices follow . This is, for example, when only a few sensors update and the remaining stick to their past states.
Asymptotic stability for LTV systems may be characterized by the joint spectral radius of the associated family of system matrices. Given a finite set of matrices, , the joint spectral radius of the set , was introduced by Rota and Strang, [17], as a generalization of the classical notion of spectral radius, with the following definition:
in which is the set of all possible products of the length , i.e.
Joint spectral radius (JSR) is independent of the choice of norm, and represents the maximum growth rate that can be achieved by forming arbitrary long products of the matrices taken from the set . It turns out that the asymptotic stability of the LTV systems, with system matrices taken from the set , is guaranteed, [18], if and only if
Although the JSR characterizes the stability of LTV systems, its computation is NPhard, [19], and the determination of a strict bound is undecidable, [20]. Naturally, much of the existing literature has focused on JSR approximations, [18, 21, 20, 22, 23, 19, 24, 25]. For example, Ref. [23] studies lifting techniques to approximate the JSR of a set of matrices. The main idea is to build a lifted set with a larger number of matrices, or a set of matrices with higher dimensions, such that the relation between the JSR of the new set and the original set is known. Lifting techniques provide better bounds at the price of a higher computational cost. In [18], a sum of squares programming technique is used to approximate the JSR of a set of matrices; a bound on the quality of the approximation is also provided, which is independent of the number of matrices. Stability of LTV systems is also closely related to the convergence of infinite products of matrices. Of particular interest is the special case of the (infinite) product of nonnegative and/or (sub) stochastic matrices, see [26, 27, 28, 29, 30, 31, 32, 33]. In addition to nonnegativity and substochasticity, the majority of these works set other restrictions, such as irreducibility or bounds on the row sum on each matrix in the set.
The main contributions of this paper are as follows. Design: we provide a set of conditions on the elements of the system matrices under which the asymptotic stability of the corresponding LTV system can be guaranteed. Analysis: we propose a general framework to determine the stability of an (infinite) product of (sub) stochastic matrices. Our approach does not require either the computation or an approximation of the JSR. Instead, we partition the infinite set of system matrices (stochastic, substochastic, or identity) into nonoverlapping slices–a slice is defined as the smallest product of (consecutive) system matrices such that: (i) every row in a slice is strictly less than one; and, (ii) the slices cover the entire sequence of system matrices. Under the conditions established in the design, we subsequently show that the infinity norm of each slice is subunit (recall that in the DDF setup, infinity norm of each system matrix is one). Finally, in order to establish the relevance to the fusion applications of interest, we use the theoretical results to derive the convergence and steadystate of a dynamic leaderfollower algorithm.
An important aspect of our analysis lies in the study of slice lengths. First, we show that longer slices may have an infinity norm that is closer to one as compared to shorter slices. Clearly, if one can show that each slice norm is subunit (with a uniform upper bound of ) then one further has to guarantee an infinite number of such slices to ensure stability. The aforementioned argument naturally requires slices of finite length, as finite slices covering infinite (system) matrices lead to an infinite number of slices. An avid reader may note that guaranteeing a sharp upper bound on the length of every slice may not be possible for certain network configurations. To address such configurations, we characterize the rate at which the slices (not necessarily in an order) grow large such that the LTV stability is not disturbed. In other words, a longer slice may capture a slow information propagation in the network; characterizing the aforementioned growth is equivalent to deriving the rate at which the information propagation may deteriorate in a network such that the fusion is still achievable.
The rest of this paper is organized as follows. We formulate the problem in Section II, while Section III studies the convergence of an infinite product of (sub) stochastic matrices. Stability of discretetime LTV systems with (sub) stochastic system matrices is studied in Section IV. We provide applications to distributed dynamic fusion in Section V and illustrations of the results in Section VI. Finally, Section VII concludes the paper.
Ii Problem formulation
In this paper, we study the asymptotic stability of the following Linear TimeVarying (LTV) dynamics:
(2) 
where is the state vector, is the timevarying system matrix, is the timevarying input matrix, is the input vector, and is the discretetime index. We consider the system matrix, at each , to be nonnegative and either substochastic, stochastic, or identity, along with some conditions on its elements. The input matrix, at each , may be arbitrary as long as some regularity conditions are satisfied. These regularity conditions on the system matrices, ’s and ’s, are collected in the Assumptions A0–A2 in the following.
In this paper, we are interested in deriving the conditions on the corresponding system matrices under which the LTV dynamics in Eq. (2) forget the initial condition, , and converge to some function of the input vector, . The motivation behind this investigation can be cast in the context of distributed fusion over dynamic graphs that we introduce in the following.
Iia Distributed Dynamic Fusion
Consider a network of mobile nodes moving arbitrarily in a (finite) region of interest, where mobile sensors implement a distributed algorithm to obtain some relevant function of (mobile) anchors; examples include the leaderfollower setup, [8, 9], and sensor localization, [13, 14]. The sensors may be thought of as mobile agents that collect information from the anchors and disseminate within the sensor network. Each node may have restricted mobility in its respective region and thus many sensors may not be able to directly connect to the anchors. Since the motion of each node is arbitrary, the network configuration at any time is completely unpredictable. It is further likely that at many time instants, no node has any neighbor in its communication radius.
Formally, sensors, in the set , are the nodes in the graph that update their states, , as a linearconvex function of the neighboring nodes; while anchors, in the set , are the nodes that inject information, , in the network. Let denote the set of neighbors (not including sensor ) of sensor according to the underlying graph at time , with . We assume that at each time , only one sensor, say , updates its state^{1}^{1}1Although multiple sensors may update their states at each iteration, without loss of generality, we assume that at most one sensor may update., . Since the underlying graph is dynamic, the updating sensor implements one of the following updates:

No neighbors:
(3) 
No neighboring anchor, :
(4) 
At least one anchor as a neighbor:
(5) with .
At every other (nonupdating) sensor, , we have
(6) 
IiB Assumptions
Let , and , we now enlist the assumptions:
A0: When the updating sensor, , has no anchor as a neighbor, the update in Eq. (4) is linearconvex, i.e.
(7) 
resulting in a (row) stochastic system matrix, .
A1: When the updating sensor, , has no anchor but at least one sensor as a neighbor, the weight it assigns to each neighbor (including the selfweight) is such that
(8) 
A2: When the updating sensor updates with an anchor, the update, Eq. (5), over the sensors, , satisfies
(9) 
resulting in a substochastic system matrix, . Also note that the update over the anchors, , in Eq. (5), follows
(10) 
If, in addition, we enforce , as it is assumed in leaderfollower, [8, 9], or sensor localization, [13, 14], Eq. (10) naturally leads to the bound in Eq. (9).
Clearly, which of the four updates in Eqs. (3)–(6) is applied by the updating sensor, , depends on being able to satisfy the corresponding assumptions (A0–A2), in addition to the neighborhood configuration. Indeed, letting
result into the LTV system in Eq. (2). Clearly, the timevarying system matrices, , are either substochastic, stochastic, or identity, depending on the nature of the update.
Remarks: It is meaningful to comment on the assumptions made above. Nonnegativity and stochasticity are standard in the literature concerning relevant iterative algorithms and multiagent fusion, see e.g. [10, 11, 12, 13]. When there is a neighboring anchor, Eq. (9) provides an upper bound on unreliability thus restricting the amount of unreliable information added in the network by a sensor. Eq. (10), on the other hand, can be viewed as a lower bound on reliability; it ensures that whenever an anchor is included in the update, a certain amount of information is always contributed by the anchor. An avid reader may note that Eq. (10) guarantees that the following does not occur: , where is a subsequence within denoting the instants when Eq. (5) is implemented. Similarly, Eqs. (8) and (9) ensure that no sensor is assigned a weight arbitrarily close to and thus no sensor may be entrusted with the role of an anchor. Note also that Eq. (8) naturally leads to an upper bound on the neighboring sensor weight, i.e. , because always includes . Also when there is no neighboring anchor, Eq. (8) guarantees that sensors do not completely forget their past information by putting a nonzero selfweight on their own previous states. Finally, we point out that the bounds in Eqs. (8)–(10), are naturally satisfied by LTI dynamics: , with nonnegative matrices; a topic wellstudied in the context of iterative algorithms, [34, 35], and multiagent fusion.
Iii Infinite product of (sub) stochastic matrices
In this section, we study the convergence of
(11) 
where is the system matrix at time , as defined in Section II. Since multiplication with the identity matrix has no effect on the convergence of the sequence, in the rest of the paper we only consider the updates, in which at least one sensor is able to find and exchange information with some neighbors, i.e. . We are interested in establishing the stability properties of this infinite product. Studying the joint spectral radius is prone to many challenges as described in Section I, and we choose the infinity norm to study the convergence conditions. The infinity norm, , of a square matrix, , is defined as the maximum of the absolute row sums. Clearly, the infinity norm of is one for all since each system matrix has at most one substochastic row.
To establish a subunit infinity norm, we divide the system matrices into nonoverlapping slices and show that each slice has an infinity norm strictly less than one; the entire chain of system matrices is covered by these nonoverlapping slices. Let one of the slices be denoted by with length and, without loss of generality, index the matrices within as
(12) 
Using slice notation, we can introduce a new discretetime index, , which allows us to study the following
(13) 
instead of Eq. (11), note that .
We define a system matrix, , as a success if it decreases the row sum of some row in , which was stochastic before this successful update. Each success, thus, adds a new substochastic row to a slice, and such successful updates are required to complete a slice. In this argument, we assume that a row that becomes substochastic remains substochastic, which is not in true in general, after successive multiplication with stochastic or substochastic matrices, . Thus, we will derive the explicit conditions under which the substochasticity of a row is preserved. Before we proceed with our main result we provide the following lemmas:
Lemma 1.
For the infinity norm to be less than one, each slice has to contain at least one substochastic update.
Proof.
Since any set of stochastic matrices form a group under multiplication [36], a slice without a substochastic update will be a stochastic matrix whose infinity norm is . ∎
We now motivate the slice construction as follows. Partition the rows in an arbitrary into two distinct sets: set contains all substochastic rows, and the remaining (stochastic) rows form the other set, . We initiate each slice with the first success, , and terminate it after the th success, , when each row becomes substochastic. Between the th success in the current slice, say , and the first success in the next slice, , all we can have are stochastic or substochastic matrices that must preserve the substochasticity of each row. See Fig. 1 for the slice representation, where the rightmost system matrices (encircled in Fig. 1) of each slice, i.e. , are substochastic. The th slice length may be defined as
and slice lengths are not necessarily equal.
In the next lemma, we show how a stochastic row can become substochastic, in a slice, . We index ’s in a slice, , by to simplify notation, and define the product of all system matrices up to time in a slice as
(14) 
Lemma 2.
Suppose the th row of is stochastic at index of a given slice, , and is the next system matrix. Row in can become substochastic by either:
(i) a substochastic update at the th row of ; or,
(ii) a stochastic update at the th row of , such that
where is the set of substochastic rows in .
Proof.
For the sake of simplicity, let , in the following. Updating the th row at index leads to
(15) 
where is identity matrix; th row after this update is
where is the th element of , and the th row sum becomes
Thus, we have
(17) 
Let us first consider case (i) where the th row of is substochastic. From Eq. (17) and Assumption A2, we have
(18) 
Therefore, the th row becomes substochastic after a substochastic update at row .
We now consider case (ii) where the th row of is stochastic, i.e. . In this case, is a linearconvex combination of the row sums of , which is strictly less than one, if and only if has at least one substochastic row, say , such that , i.e.
(19) 
So and the lemma follows. ∎
In the next lemma, we show that substochasticity is preserved for each substochastic row within a slice.
Lemma 3.
With Assumptions A0A2, a substochastic row, say , remains substochastic throughout a slice.
Proof.
We use the notation of Lemma 2 on , , and Eq. (15), and rewrite Eq. (III) as
(20)  
Let us consider the general case after the first success, where there exist substochastic rows in , i.e. , and . Without loss of generality, suppose the substochastic rows of lie in the first rows. We need to show that if the th row in is substochastic, i.e. , it remains substochastic after a multiplication by either a stochastic or a substochastic system matrix, . Rewrite the th row sum as
Thus,
(21)  
where we used the fact that in , any row is stochastic.
Let us first consider the th row of to be stochastic:
Thus,
Therefore, from Eq. (III) we can write
(22)  
Finally,
(23)  
(24) 
because the first rows in are substochastic leading to , for any , and since the th row in is stochastic, by Assumption A1 we have
Note that in Eq. (23) the only way to lose substochasticity is to have for all . However, substochasticity can be preserved by putting a nonzero weight on any row in . Since this knowledge in not available in general, a sufficient condition to ensure this is . Thus, the th row sum remains strictly less than one (and greater than zero) after any stochastic update at the th row as long as Assumption A1 is satisfied. Note that the lower bound on the th row sum stems from the nonnegativity of system matrices.
Now consider the th row of to be substochastic. From A2, we have
Therefore,
Thus, from Eq. (III) we can write
(25)  
Finally,
(26) 
where again we used the fact that . Eq. (III) shows that in case of a substochastic th row in , this row remains substochastic in , as long as Assumption A2 is satisfied and the conditions on individual weights are not required. Note the strict inequality, i.e. if for any , then
This lemma establishes that under the Assumptions A0A2, substochasticity is always preserved. ∎
The results so far describe the behavior of the substochastic rows in the slices explicitly derived under the regularity conditions in Assumptions A0A2. The next results characterize the infinity norm bound on the slices. To this end, let us define , as the maximum row sum over the substochastic rows of the product of all system matrices before in the th slice. Mathematically,
(27) 
where is the th element of the following column vector
and is the column vector of ones.
It can be inferred from our discussion so far that a substochastic update at row is sufficient but not necessary for the th row to be substochastic. In the following lemma, we consider the case where no substochastic update occurs at row throughout a slice, and provide an upper bound for the th row sum at the end of a slice.
Lemma 4.
Assume there is no substochastic update at the th row within a given slice, . The th row sum of this slice is upper bounded by
(28) 
where the first success at row occurs in the th update of this slice.
Proof.
Eq. (23) expresses the th row sum after a stochastic update at row . Clearly, before the first success, at index ,
In order to find the maximum possible row sum for the th row at the end of a slice, we should find a scenario, which maximizes the row sum after the first success at index and keeps maximizing it at each subsequent update. Let us consider Eq. (23) after the first success at index . Since no substochastic update is allowed at row from the lemma’s statement, the first success occurs via a stochastic update at the th row, and Assumption A1 is applicable. Since any nonzero decreases the row sum, the minimum number of such weights maximizes the right hand side (RHS) of Eq. (23). Suppose is the only nonzero among all ’s. In this case, Eq. (23) reduces to the following
(29) 
in which . Also note that , since is stochastic before the time instant, . In order to maximize the RHS of Eq. (29), should be minimized, and should be maximized. From Eq. (27), the maximum value of before the first success is . Thus, after the th update, where row becomes substochastic for the first time, we can write
(30) 
After this update, , and
(31) 
where is the th row sum in , and is the th row sum in . Under this scenario, after the first success, the th row has the maximum row sum over all rows of , and in order to increase this row sum at the next update, the th row has to update only with itself. Note that after the success at index , row becomes substochastic, and , for any subsequent update until the end of a slice. After the next update, , using the same argument we can write
(32)  
If row keeps updating with itself, at the end of slice, we have after number of such updates
(33) 
and the lemma follows. ∎
In the following lemma, we consider the general case where substochastic updates are also allowed at row and provide an upper bound for the th row sum at the end of a slice.
Lemma 5.
Assume there is at least one substochastic update at the th row within a given slice, . The th row sum of this slice is upper bounded by
(34) 
where the last substochastic update at row occurs in the th update of a slice.
Proof.
As shown in Eq. (III) and by Assumption A2, any substochastic update at row imposes the upper bound of on the th row sum. Thus, after the last substochastic update at row we have
After , there is no substochastic update, and by Assumption A1, the th selfweight will be nonzero until the end of the slice. Following the same argument as in Lemma 4, the upper bound on the th row sum is maximized after each update if the th row does not update with any substochastic row other than itself. For any update after the last success, Eq. (8) holds and we have
(35) 
After the th update we have
(36) 
and at the end of a slice, we have
(37) 
and the lemma follows. ∎
In the previous two lemmas, we provide an upper bound for each row sum for two cases: when all updates are stochastic and when substochastic updates are also allowed. The following lemma combines these bounds and relate them to the infinity norm bound of a slice.
Lemma 6.
For a given slice, ,
(38) 
where
The next lemma studies the worst case scenario for the infinity norm of a slice, which provides an upper bound for Eq. (38).
Lemma 7.
With assumptions A0A2, for the th slice we have
(39) 
where
(40) 
Proof.
In order to find the maximum upper bound on the infinity norm of a slice, we consider a worst case scenario, in which a row sum incurs the largest increase throughout the slice. To do so, we examine the maximum possible upper bound on the th row sum for the two cases discussed in Lemmas 4 and 5 separately.
Consider no substochastic update at the th row. We should find a scenario that maximizes the RHS of Eq. (33). In addition, we need to make sure that such scenario is practical, i.e. all other rows become substochastic before a slice is terminated. Since there are no substochastic updates at row , a slice can not be initiated by an update in row , i.e. . At the initiation of a slice, one row other than , becomes substochastic, and the upper bound imposed on this row is by Assumption A2, hence . Therefore, following the discussion in Lemma 4,
(41) 
provides the largest upper bound on the th row sum of . Note that this bound is feasible if we consider the following scenario. After row becomes substochastic at we let next updates for the other stochastic rows to become substochastic, each updating only with the substochastic row with the largest row sum. Thus the largest row sum keeps increasing in the same manner as discussed in Lemma 4 within the next updates. At th update, row again updates with a row, which has the maximum row sum in , and keeps updating by itself until the slice is terminated. The aforementioned scenario is equivalent to the one where the first success at row occurs at , and all other rows become substochastic within the first updates, and
(42) 
Now consider substochastic updates at row . The RHS of Eq. (37) is maximized if is minimized. In this case, the minimum value for is one, which corresponds to a scenario where a substochastic update at row initiates a slice and no other substochastic update occurs at this row. Using the same argument as before, all other rows become substochastic within the next updates and the largest upper bound on the th row in this case is the same as the one given in Eq. (41). ∎
Finally, note that for a given slice, ,
(43) 
is the largest upper bound on the infinity norm of a slice.
Iv Stability of discretetime systems
In this section, we study the stability of discretetime, LTV dynamics with (sub) stochastic system matrices. We start with the following definitions:
Definition 1.
The system represented in Eq. (2) is asymptotically stable (or convergent) if for any ,
is bounded and convergent.
Definition 2.
The system represented in Eq. (2) is absolutely asymptotically stable (or zeroconvergent) if for any ,
Recall that we are interested in the asymptotic stability of Eq. (2), such that the steadystate forgets the initial conditions and is a function of inputs. A sufficient condition towards this aim is the absolutely asymptotic stability of the following:
for any , which is equivalent to having
(45) 
where the subscript below denote its dimensions. As depicted in Fig. 1, we can take advantage of the slice representation and study the following dynamics:
(46) 
instead of Eq. (IV), where
Thus, for absolutely asymptotic stability of Eq. (46), for any , we require
We provide our main result in the following theorem.
Theorem 1.
With assumption A0A2, the LTV system in Eq. (46) is absolutely asymptotically stable if either one of the following is true:

Each slices has a bounded length, i.e.
(48) 
There exist a set, , consisting of an infinite number of slices such that
(49) (50) 
There exists a set, , of slices such that
for every , and .
Proof.
Using the submultiplicative norm property, Eq. (46) leads to
(51) 
Case (ii): We first note that the infinity norm of each slice has a trivial upper bound of . From Eq. (51), we have
(53) 
Similar to case (i), this case follows by defining
Case (iii): With in Eq. (40), Eq. (51) leads to
(54) 
Consider the asymptotic convergence of the infinite product of a sequence to . We have
(55) 
Now note that
because sums to infinity for all values of in , and multiplying by a positive number, , does not change the infinite sum. It can be verified that Eq. (55) holds when
subsequently resulting into
for some , and . Therefore if for any , there exist a slice, , in the set, , such that
(56) 
we get
(57) 
and absolutely asymptotic stability follows. By substituting from Eq. (40) in the left hand side of Eq. (56), we get
which leads to
(58) 
Since , is negative and dividing both sides of Eq. (58) by a negative number changes the inequality, i.e.