Small gain theorems for large scale systems and construction of ISS Lyapunov functions

†This work was done while B. S. Rüffer was with the School of Electrical Engineering and Computer Science, University of Newcastle, Callaghan, NSW 2308, Australia. Sergey Dashkovskiy has been supported by the German Research Foundation (DFG) as part of the Collaborative Research Center 637 “Autonomous Cooperating Logistic Processes: A Paradigm Shift and its Limitations” (SFB 637). B. S. Rüffer has been supported by the Australian Research Council under grant DP0771131. Fabian Wirth is supported by the DFG priority programme 1305 “Control Theory of Digitally Networked Dynamical Systems”.
Abstract
We consider interconnections of nonlinear subsystems in the input-to-state stability (ISS) framework. For each subsystem an ISS Lyapunov function is given that treats the other subsystems as independent inputs. A gain matrix is used to encode the mutual dependencies of the systems in the network. Under a small gain assumption on the monotone operator induced by the gain matrix, a locally Lipschitz continuous ISS Lyapunov function is obtained constructively for the entire network by appropriately scaling the individual Lyapunov functions for the subsystems. The results are obtained in a general formulation of ISS; the cases of summation, maximization, and separation with respect to external gains are obtained as corollaries.
Key words. Nonlinear systems, input-to-state stability, interconnected systems, large scale systems, Lipschitz ISS Lyapunov function, small gain condition
AMS subject classifications. 93A15, 34D20, 47H07
1 Introduction
In many applications large scale systems are obtained through the interconnection of a number of smaller components. The stability analysis of such interconnected systems may be a difficult task especially in the case of a large number of subsystems, arbitrary interconnection topologies, and nonlinear subsystems.
One of the earliest tools in the stability analysis of feedback interconnections of nonlinear systems is the small gain theorem. Such results have been obtained by many authors, starting with [36]. These results are classically built on the notion of gains; see [3] for a recent, very readable account of the developments in this area. While most small gain results for interconnected systems yield only sufficient conditions, in [3] it has been shown in a behavioral framework how the notion of gains can be modified so that the small gain condition is also necessary for robust stability.
Small gain theorems for large scale systems have been developed, e.g., in [26, 34, 23]. In [26] the notions of connective stability and stabilization are introduced for interconnections of linear systems using the concept of vector Lyapunov functions. In [23] stability conditions in terms of Lyapunov functions of subsystems have been derived. For the linear case, characterizations of quadratic stability of large scale interconnections have been obtained in [16]. A common feature of these references is that the gains describing the interconnection are essentially linear. With the introduction of the concept of input-to-state stability in [28], it has become a common approach to consider gains as nonlinear functions of the norm of the input. In this nonlinear case small gain results have been derived first for the interconnection of two systems in [18, 32]. A Lyapunov version of the same result is given in [17]. A general small gain condition for large scale ISS systems has been presented in [6]. Recently, such arguments have been used in the stability analysis of observers [1], in the stability analysis of decentralized model predictive control [22], and in the stability analysis of groups of autonomous vehicles.
During the revision of this paper it came to our attention that, following the first general small gain theorems for networks [21, 33, 5, 8, 7, 6], other generalizations of small gain results based on similar ideas have been obtained very recently using the maximization formulation of ISS: A generalized small gain theorem for output-Lagrange-input-to-output stable systems in network interconnections has been obtained in [19]. In this reference the authors study ISS in the maximization framework and conclude ISS from a small gain condition in the cycle formulation. It has been noted in [8] that in the maximum case the cycle condition is equivalent to the operator condition examined here. An extension of generalized small gain results to retarded functional differential equations based on the more general cycle condition and vector Lyapunov functions has recently been obtained in [20]. That reference presents a different approach to the construction of an overall Lyapunov function, a construction that depends vitally on the use of the maximum formulation of ISS.
In this paper we present sufficient conditions for the existence of an ISS Lyapunov function for a system obtained as the interconnection of many subsystems. The results are of interest in two ways. First, it is shown that a small gain condition is sufficient for input-to-state stability of the large scale system in the Lyapunov formulation. Secondly, an explicit formula for an overall Lyapunov function is given. As the dimensions of the subsystems are typically much lower than the dimension of their interconnection, finding Lyapunov functions for them may be an easier task than for the whole system.
Our approach is based on the notion of input-to-state stability (ISS) introduced in [28] for nonlinear systems with inputs. A system is ISS if, roughly speaking, it is globally asymptotically stable in the absence of inputs (so-called 0-GAS) and if any trajectory eventually enters a ball centered at the equilibrium point and with radius given by a monotone continuous function, the gain, of the size of the input (the so-called asymptotic gain property), cf. [31].
The concept of ISS turned out to be particularly well suited to the investigation of interconnections. For example, it is known that cascades of ISS systems are again ISS [28] and small gain results have been obtained. We briefly review the results of [18, 17] in order to explain the motivation for the approach of this paper. Both papers study a feedback interconnection of two ISS systems as represented in Figure LABEL:feedbackfig.
The small gain condition in [18] is that the composition of the gain functions is less than identity in a robust sense. We denote the composition of two functions $\gamma_1,\gamma_2$ by $\gamma_1\circ\gamma_2$, that is, $(\gamma_1\circ\gamma_2)(r)=\gamma_1(\gamma_2(r))$. The small gain condition then is that if on $(0,\infty)$ we have

(1.1)   $(\operatorname{id}+\alpha_1)\circ\gamma_{12}\circ(\operatorname{id}+\alpha_2)\circ\gamma_{21}\leq\operatorname{id}$

for suitable functions $\alpha_1,\alpha_2\in\mathcal{K}_\infty$, then the feedback system is ISS with respect to the external inputs.
In this paper we concentrate on the equivalent definition of ISS in terms of ISS Lyapunov functions [31]. The small gain theorem for ISS Lyapunov functions from [17] states that if on $(0,\infty)$ the small gain condition

(1.2)   $\gamma_{12}\circ\gamma_{21}<\operatorname{id}$

is satisfied, then an ISS Lyapunov function may be explicitly constructed as follows. Condition (LABEL:eq:2) is equivalent to $\gamma_{12}<\gamma_{21}^{-1}$ on $(0,\infty)$. This permits the construction of a function $\sigma\in\mathcal{K}_\infty$ such that $\gamma_{12}<\sigma<\gamma_{21}^{-1}$ on $(0,\infty)$, see Figure LABEL:figtwosigmas. An ISS Lyapunov function is then defined by scaling one of the individual Lyapunov functions by means of $\sigma$ and taking the maximum of the two. This ISS Lyapunov function describes stability properties of the whole interconnection. In particular, given an input $u$, it can be seen how fast the corresponding trajectories converge to a neighborhood of the equilibrium and how large this neighborhood is.
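For intuition, the two-system small gain condition and the intermediate scaling function can be checked numerically. The following sketch uses hypothetical linear gains; all names and numerical values are invented for illustration and are not from the paper.

```python
import numpy as np

# Hypothetical class-K_infty gains of a two-system feedback loop.
gamma_12 = lambda r: 0.5 * r          # influence of system 2 on system 1
gamma_21 = lambda r: 0.8 * r          # influence of system 1 on system 2
gamma_21_inv = lambda r: r / 0.8      # inverse of gamma_21

# A candidate scaling function strictly between gamma_12 and gamma_21^{-1}.
sigma = lambda r: 0.8 * r

r = np.linspace(1e-6, 100.0, 1001)

# Small gain condition: gamma_12 o gamma_21 < id on (0, infty).
assert np.all(gamma_12(gamma_21(r)) < r)

# sigma fits strictly between gamma_12 and gamma_21^{-1} on the grid.
assert np.all((gamma_12(r) < sigma(r)) & (sigma(r) < gamma_21_inv(r)))
print("small gain condition and scaling function verified on sample grid")
```

For linear gains the verification is of course trivial (0.5 · 0.8 < 1), but the same grid check applies verbatim to nonlinear gain functions.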
At first sight the difference between the small gain conditions (LABEL:eq:1) from [18] and (LABEL:eq:2) from [17] appears surprising. This might lead to the impression that the difference comes from studying the problem in a trajectory based or a Lyapunov based framework. This, however, is not the case; the difference stems from the formulation of the ISS condition. In [18] a summation formulation was used for the trajectory based case. In the maximization formulation of the trajectory case the small gain condition is again (LABEL:eq:2), [6]. In [17] the Lyapunov formulation is investigated using maximization; the corresponding result for summation is Corollary LABEL:sumcor below, which requires condition (LABEL:eq:1).
In order to generalize the existing results it is useful to reinterpret the approach of [17]: note that the gains may be used to define a matrix

$\Gamma=\begin{pmatrix}0 & \gamma_{12}\\ \gamma_{21} & 0\end{pmatrix},$

which defines in a natural way a monotone operator on $\mathbb{R}_+^2$. In this way an alternative characterization of the area between $\gamma_{12}$ and $\gamma_{21}^{-1}$ in Figure LABEL:figtwosigmas is that it is the area where $\Gamma(s)<s$ (with respect to the natural partial order on $\mathbb{R}_+^2$). Thus the problem of finding $\sigma$ may be interpreted as the problem of finding a path $r\mapsto s(r)$ in this area, i.e., such that $\Gamma(s(r))<s(r)$ for all $r>0$.
We generalize this constructive procedure for a Lyapunov function in several directions. First, the number of subsystems entering the interconnection will be arbitrary. Secondly, the way in which the gains of subsystem $j$ affect subsystem $i$ will be formulated in a general manner using the concept of monotone aggregation functions. This class of functions allows for a unified treatment of summation, maximization, and other ways of formulating ISS conditions. Following the matrix interpretation, this leads to a monotone operator $\Gamma_\mu$ on $\mathbb{R}_+^n$. The crucial step is to find a sufficiently regular path $\sigma$ such that $\Gamma_\mu(\sigma(r))<\sigma(r)$ for all $r>0$. This allows for a scaling of the Lyapunov functions of the individual subsystems to obtain one for the large scale system.
Small gain conditions on $\Gamma_\mu$ as in [5, 6] yield sufficient conditions that guarantee that the construction of $\sigma$ can be performed. However, in [5, 6] the trajectory formulation of ISS has been studied, and the main technical ingredient was, essentially, to prove suitable bounds. The sufficient condition for the existence of the path turns out to be the same, but the path itself had not been used in [5, 6]. In fact, the line of argument used there is completely different. It is shown in [24] that the results of [6] also hold for the more general ISS formulation using monotone aggregation functions. The condition requires essentially that the operator $\Gamma_\mu$ is not greater than or equal to the identity in a robust sense. The construction of $\sigma$ then relies on a rather delicate topological argument. What is obvious for the interconnection of two systems is far less clear in higher dimensions. It can be seen that the small gain condition imposed on the interconnection is actually a sufficient condition that allows for the application of the Knaster–Kuratowski–Mazurkiewicz theorem; see [6, 24] for further details. We show in Section LABEL:sec:remarkscasethree how the construction works for three subsystems, but it is fairly clear that this methodology is not something one would like to carry out in higher dimensions. In the maximization formulation a viable alternative is the approach pursued by [20].
The construction of the Lyapunov function is explicit once the scaling function $\sigma$ is known. Thus, to have a truly constructive procedure, a way of constructing $\sigma$ is required. We do not study this problem here, but note that based on an algorithm by Eaves [11] it is actually possible to turn this mere existence result into a (numerically) constructive method [24, 9]. Using the algorithm by Eaves and the technique of Proposition LABEL:prop:psipathwiseconnectedANDfinitelengthOmegapath, it is then possible to construct such a vector function (but of finite length) numerically, see [24, Chapter 4]. This will be treated in more detail in future work.
The paper is organized as follows. The next section introduces the necessary notation and basic definitions, in particular the notion of monotone aggregation functions (MAFs) and different formulations of ISS. Section LABEL:examples gives some motivating examples that also illustrate the definitions of Section LABEL:sec:preliminaries and explain how different MAFs occur naturally for different problems. In Section LABEL:sec:monotopergener we introduce small gain conditions given in terms of the monotone operators that naturally appear in the definition of ISS. Section LABEL:sec:lyapunovfunctions contains the main results, namely the existence of the vector scaling function and the construction of an ISS Lyapunov function. In this section we concentrate on strongly connected networks, which are easier to deal with from a technical point of view. Once this case has been resolved, it is shown in Section LABEL:sec:reducible how simply connected networks may be treated by studying the strongly connected components.
The actual construction of $\sigma$ is given in Section LABEL:sec:pathconstruction to postpone the topological considerations until after applications to interconnected ISS systems have been considered in Section LABEL:sec:applgenersmall. Since the topological difficulties can be avoided in the case of three subsystems, we treat this case briefly in Section LABEL:sec:remarkscasethree to show a simple construction for $\sigma$. Section LABEL:sec:conclusions concludes the paper.
2 Preliminaries
2.1 Notation and conventions
Let $\mathbb{R}$ be the field of real numbers and $\mathbb{R}^n$ the vector space of real column vectors of length $n$. We denote the set of nonnegative real numbers by $\mathbb{R}_+$, and $\mathbb{R}_+^n$ denotes the positive orthant in $\mathbb{R}^n$. On $\mathbb{R}^n$ the standard partial order is defined as follows. For vectors $v,w\in\mathbb{R}^n$ we denote $v\geq w$ if $v_i\geq w_i$ for all $i$; $v>w$ if $v_i>w_i$ for all $i$; and $v\not\geq w$ if there exists an index $i$ such that $v_i<w_i$.
The maximum of two vectors or matrices is to be understood componentwise. By $\|\cdot\|_1$ we denote the 1-norm on $\mathbb{R}^n$; the induced sphere of radius $r$ intersected with $\mathbb{R}_+^n$ is an $(n-1)$-simplex. For an index set $I\subset\{1,\dots,n\}$ we denote by $\pi_I$ the projection of $\mathbb{R}^n$ onto the coordinates corresponding to the indices in $I$.
The standard scalar product in $\mathbb{R}^n$ is denoted by $\langle\cdot,\cdot\rangle$. By $B_r(x)$ we denote the open ball of radius $r$ around $x$ with respect to the Euclidean norm $\|\cdot\|$. The induced operator norm, i.e., the spectral norm, of matrices is also denoted by $\|\cdot\|$.
The space of measurable and essentially bounded functions $u:\mathbb{R}_+\to\mathbb{R}^m$ is denoted by $L_\infty(\mathbb{R}_+,\mathbb{R}^m)$, with norm $\|u\|_\infty$. To state the stability definitions that we are interested in, three sets of comparison functions are used: $\mathcal{K}:=\{\gamma:\mathbb{R}_+\to\mathbb{R}_+\mid\gamma$ is continuous, strictly increasing, and $\gamma(0)=0\}$ and $\mathcal{K}_\infty:=\{\gamma\in\mathcal{K}\mid\gamma$ is unbounded$\}$. A function $\beta:\mathbb{R}_+\times\mathbb{R}_+\to\mathbb{R}_+$ is of class $\mathcal{KL}$ if it is of class $\mathcal{K}$ in the first argument and strictly decreasing to zero in the second argument. We will call a function $V:\mathbb{R}^N\to\mathbb{R}_+$ proper and positive definite if there are $\psi_1,\psi_2\in\mathcal{K}_\infty$ such that $\psi_1(\|x\|)\leq V(x)\leq\psi_2(\|x\|)$ for all $x\in\mathbb{R}^N$.
A function $\alpha:\mathbb{R}_+\to\mathbb{R}_+$ is called positive definite if it is continuous and satisfies $\alpha(r)=0$ if and only if $r=0$.
2.2 Problem Statement
We consider a finite set of $n$ interconnected systems with state $x=(x_1^T,\dots,x_n^T)^T$, where $x_i\in\mathbb{R}^{N_i}$, $i=1,\dots,n$, and $N:=\sum_{i=1}^n N_i$. For $i=1,\dots,n$ the dynamics of the $i$th subsystem is given by

(2.1)   $\Sigma_i:\quad\dot{x}_i=f_i(x_1,\dots,x_n,u),\qquad x_i\in\mathbb{R}^{N_i},\ u\in\mathbb{R}^m.$

For each $i$ we assume unique existence of solutions and forward completeness of $\Sigma_i$ in the following sense. If we interpret the variables $x_j$, $j\neq i$, and $u$ as unrestricted inputs, then this system is assumed to have a unique solution defined on $[0,\infty)$ for any given initial condition and any measurable, essentially bounded inputs. This can be guaranteed for instance by suitable Lipschitz and growth conditions on the $f_i$. It will be no restriction to assume that all systems have the same (augmented) external input $u$.
We write the interconnection of the subsystems (LABEL:eq:3) as

(2.2)   $\Sigma:\quad\dot{x}=f(x,u).$
Associated to such a network is a directed graph, with vertices representing the subsystems and with directed edges corresponding to the inputs going from one subsystem to another, see Figure LABEL:fig:networkfigure. We will call the network strongly connected if its interconnection graph has the same property.
For networks of the type just described we wish to construct Lyapunov functions of the kind introduced next.
2.3 Stability
An appropriate stability notion to study nonlinear systems with inputs is input-to-state stability, introduced in [28]. The standard definition is as follows.
A forward complete system $\dot{x}=f(x,u)$ with state $x\in\mathbb{R}^N$ and input $u\in L_\infty(\mathbb{R}_+,\mathbb{R}^m)$ is called input-to-state stable if there are $\beta\in\mathcal{KL}$ and $\gamma\in\mathcal{K}_\infty$ such that for all initial conditions $x(0)$ and all $t\geq 0$ we have

(2.3)   $\|x(t)\|\leq\beta(\|x(0)\|,t)+\gamma(\|u\|_\infty).$
It is known that this requirement is equivalent to the existence of an ISS Lyapunov function [30]. These functions can even be chosen to be smooth. For our purposes, however, it will be more convenient to have a broader class of functions available for the construction of a Lyapunov function. Thus we will call a function $V:\mathbb{R}^N\to\mathbb{R}_+$ a Lyapunov function candidate if the following assumption is met.
Assumption 2.1
The function $V:\mathbb{R}^N\to\mathbb{R}_+$ is continuous, proper and positive definite, and locally Lipschitz continuous on $\mathbb{R}^N\setminus\{0\}$.
Note that by Rademacher’s Theorem (e.g., [12, Theorem 5.8.6, p. 281]) locally Lipschitz continuous functions on $\mathbb{R}^N\setminus\{0\}$ are differentiable almost everywhere in $\mathbb{R}^N$.
Definition 2.2
We will call a function $V$ satisfying Assumption LABEL:A1 an ISS Lyapunov function for $\Sigma$ if there exist $\gamma\in\mathcal{K}_\infty$ and a positive definite function $\alpha$ such that in all points of differentiability of $V$ we have

(2.4)   $V(x)\geq\gamma(\|u\|)\ \Longrightarrow\ \nabla V(x)\,f(x,u)\leq-\alpha(V(x)).$
ISS and ISS Lyapunov functions are related in the expected manner:
Theorem 2.3
A system is ISS if and only if it admits an ISS Lyapunov function in the sense of Definition LABEL:def:LipschitzISSLf.
This has been proved for smooth ISS Lyapunov functions in the literature [30]. So the hard converse statement is clear, as it is even possible to find smooth ISS Lyapunov functions that satisfy Definition LABEL:def:LipschitzISSLf. The sufficiency proof for the Lipschitz continuous case goes along the lines presented in [30, 31], using the necessary tools from nonsmooth analysis, cf. [4, Theorem 6.3].
Merely continuous ISS Lyapunov functions have been studied in [14, Ch. 3], arising as viscosity supersolutions of certain partial differential inequalities. Here we work with the Clarke generalized gradient $\partial V(x)$ of $V$ at $x$. For functions satisfying Assumption LABEL:A1, Clarke’s generalized gradient satisfies for $x\neq 0$ that

(2.5)   $\partial V(x)=\operatorname{conv}\Bigl\{\zeta\in\mathbb{R}^N:\ \zeta=\lim_{k\to\infty}\nabla V(x_k)\ \text{for some sequence}\ x_k\to x\ \text{of points of differentiability of}\ V\Bigr\}.$
An equivalent formulation of (LABEL:basicissLF) is given by

(2.6)   $V(x)\geq\gamma(\|u\|)\ \Longrightarrow\ \langle\zeta,f(x,u)\rangle\leq-\alpha(V(x))\quad\text{for all }\zeta\in\partial V(x).$
Note that (LABEL:basicissLFclarke) is also applicable at points where $V$ is not differentiable.
The gain $\gamma$ in (LABEL:basiciss) is in general different from the ISS Lyapunov gain in (LABEL:basicissLF). In the sequel we will always assume that gains are of class $\mathcal{K}_\infty$.
2.4 Monotone aggregation
In this paper we concentrate on the construction of ISS Lyapunov functions for the interconnected system $\Sigma$. For a single subsystem (LABEL:eq:3), in a similar manner to (LABEL:basicissLF), we wish to quantify the combined effect of the inputs $x_j$, $j\neq i$, and $u$ on the evolution of the state $x_i$. As we will see in the examples given in Section LABEL:examples, it depends on the system under consideration how this combined effect can be expressed: through the sum of the individual effects, through the maximum of the individual effects, or by other means. In order to give a general treatment of this we introduce the notion of monotone aggregation functions (MAFs).
Definition 2.4
A continuous function $\mu:\mathbb{R}_+^n\to\mathbb{R}_+$ is called a monotone aggregation function if the following three properties hold:

(M1) positivity: $\mu(s)\geq 0$ for all $s\in\mathbb{R}_+^n$, and $\mu(s)>0$ if $s\neq 0$;

(M2) strict increase (cf. Assumption (LABEL:standardassumptiononcompatibilityMAFandGamma), where for the purposes of this paper (M2) is further restricted): if $s<t$, then $\mu(s)<\mu(t)$;

(M3) unboundedness: if $\|s\|\to\infty$, then $\mu(s)\to\infty$.
The space of monotone aggregation functions $\mathbb{R}_+^n\to\mathbb{R}_+$ is denoted by $\mathrm{MAF}_n$, and $\mu\in\mathrm{MAF}_n^m$ denotes a vector of monotone aggregation functions, i.e., $\mu_i\in\mathrm{MAF}_n$ for $i=1,\dots,m$.
A direct consequence of (M2) and continuity is the weaker monotonicity property

(M2’) monotonicity: $s\leq t\ \Longrightarrow\ \mu(s)\leq\mu(t)$.
In [24, 25] MAFs have additionally been required to satisfy another property,

(M4) subadditivity: $\mu(s+t)\leq\mu(s)+\mu(t)$,

which we do not need for the constructions provided in this paper, since we take a different approach, see Section LABEL:sec:reducible.
Standard examples of monotone aggregation functions satisfying (M1)–(M4) are

$\mu(s)=\sum_{i=1}^n s_i\qquad\text{and}\qquad\mu(s)=\max_{i=1,\dots,n}s_i.$

On the other hand, the minimum $\mu(s)=\min_{i=1,\dots,n}s_i$ is not a MAF, since (M1) and (M3) are not satisfied.
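These aggregation rules are easy to probe numerically. The sketch below checks (M1) and the increase property on sample points for sum and max, and uses the minimum as a hypothetical non-example: along a coordinate axis it violates both (M1) and (M3).

```python
import numpy as np

mu_sum = lambda s: float(np.sum(s))   # MAF: summation
mu_max = lambda s: float(np.max(s))   # MAF: maximization
mu_min = lambda s: float(np.min(s))   # not a MAF (illustrative non-example)

s = np.array([0.0, 2.0, 3.0])         # nonzero vector with a zero entry
t = s + 1.0                           # s < t componentwise

# (M1) positivity: mu(s) > 0 whenever s != 0.
assert mu_sum(s) > 0 and mu_max(s) > 0
assert mu_min(s) == 0.0               # min violates (M1)

# (M2) strict increase on this sample pair.
assert mu_sum(s) < mu_sum(t) and mu_max(s) < mu_max(t)

# (M3) unboundedness fails for min: ||(k, 0, 0)|| grows without bound
# while the minimum stays at 0.
assert mu_min(np.array([1e9, 0.0, 0.0])) == 0.0
print("sum and max pass the sampled checks; min fails (M1) and (M3)")
```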
Using this definition we can define a notion of ISS Lyapunov function for systems with multiple inputs. In this case the ISS condition for subsystem (LABEL:eq:3) will involve several gains, corresponding to the inputs $x_j$, $j\neq i$, and $u$. For notational simplicity we will include the external gain throughout this paper. The following definition requires only Lipschitz continuity of the Lyapunov function.
Definition 2.5
Consider the interconnected system (LABEL:eq:4) and assume that for each subsystem $\Sigma_i$ there is a given function $V_i$ satisfying Assumption LABEL:A1.
For $i=1,\dots,n$ the function $V_i$ is called an ISS Lyapunov function for $\Sigma_i$ if there exist $\mu_i\in\mathrm{MAF}_{n+1}$, gains $\gamma_{ij},\gamma_{iu}\in\mathcal{K}_\infty\cup\{0\}$, and a positive definite function $\alpha_i$ such that at all points of differentiability of $V_i$

(2.7)   $V_i(x_i)\geq\mu_i\bigl(\gamma_{i1}(V_1(x_1)),\dots,\gamma_{in}(V_n(x_n)),\gamma_{iu}(\|u\|)\bigr)\ \Longrightarrow\ \nabla V_i(x_i)\,f_i(x,u)\leq-\alpha_i(V_i(x_i)).$
The functions $\gamma_{ij}$ and $\gamma_{iu}$ are called ISS Lyapunov gains.
Several examples of ISS Lyapunov functions are given in the next section.
Let us call $x_j$, $j\neq i$, the internal inputs to $\Sigma_i$ and $u$ the external input. Note that the role of the gain functions is essentially to indicate whether there is any influence of the different inputs on the corresponding state. In case $f_i$ does not depend on $x_j$, there is no influence of $x_j$ on the state of $\Sigma_i$. In this case we define $\gamma_{ij}:=0$; in particular, $\gamma_{ii}:=0$ always. This allows us to collect the internal gains into a matrix

(2.8)   $\Gamma:=(\gamma_{ij})_{i,j=1,\dots,n}.$
If we add the external gains as the last column of this matrix, then we denote it by $\bar{\Gamma}$. The functions $\mu_i$ describe how the internal and external gains interactively enter in a common influence on $x_i$. The above definition motivates the introduction of the following nonlinear map

(2.9)   $\Gamma_\mu(s):=\Bigl(\mu_1\bigl(\gamma_{11}(s_1),\dots,\gamma_{1n}(s_n)\bigr),\dots,\mu_n\bigl(\gamma_{n1}(s_1),\dots,\gamma_{nn}(s_n)\bigr)\Bigr)^T,\qquad s\in\mathbb{R}_+^n.$
Similarly we define $\bar{\Gamma}_{\bar{\mu}}$. The matrices $\Gamma$ and $\bar{\Gamma}$ are from now on referred to as gain matrices, and $\Gamma_\mu$ and $\bar{\Gamma}_{\bar{\mu}}$ as gain operators.
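To make the gain operator concrete, here is a small numerical sketch for three subsystems with hypothetical linear gains $\gamma_{ij}(r)=g_{ij}\,r$; the coefficients are invented for illustration only.

```python
import numpy as np

# Coefficients of hypothetical linear gains gamma_ij(r) = G[i, j] * r,
# with G[i, i] = 0 since gamma_ii = 0 by convention.
G = np.array([[0.0, 0.3, 0.2],
              [0.4, 0.0, 0.1],
              [0.2, 0.5, 0.0]])

def Gamma_mu(s, mu):
    """Gain operator: (Gamma_mu(s))_i = mu(gamma_i1(s_1), ..., gamma_in(s_n))."""
    return np.array([mu(G[i] * s) for i in range(len(s))])

s = np.array([1.0, 2.0, 3.0])
print(Gamma_mu(s, np.sum))   # summation formulation, approx [1.2, 0.7, 1.2]
print(Gamma_mu(s, np.max))   # maximization formulation, approx [0.6, 0.4, 1.0]
```

The same operator skeleton accepts any aggregation rule `mu`, which is exactly the flexibility the MAF formulation is meant to capture.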
Remark 2.6 (general assumption)
Given $\Gamma$ and $\mu$, we will from now on assume that $\Gamma$ and $\mu$ are compatible in the following sense: For each $i$, let $I_i$ denote the set of indices corresponding to the nonzero entries in the $i$th row of $\Gamma$. Then it is understood that also the restriction of $\mu_i$ to the indices in $I_i$ satisfies (M2), i.e.,

(2.10)
In particular, we assume that this restriction satisfies (M2) for each $i$. Note that (M1) and (M3) are then automatically satisfied.
The examples in the next section show explicitly how the introduced functions, matrices and operators may look like for some particular cases. Clearly, the gain operators will have to satisfy certain conditions if we want to be able to deduce that (LABEL:eq:4) is ISS with respect to external inputs, see Section LABEL:sec:lyapunovfunctions.
3 Examples for monotone aggregation
In this section we show how different MAFs may appear in different applications, for further examples see [10]. We begin with a purely academic example and discuss linear systems and neural networks later in this section. Consider the system
(3.1)
where . Take as a Lyapunov function candidate. It is easy to see that if and then
if . The conditions and translate into and in terms of this becomes
This is a Lyapunov ISS estimate where the gains are aggregated using a maximum, i.e., in this case we can take and and .
Note that there is a certain arbitrariness in the choice of the gains and the aggregation function. In the example one could just as well take different gains and a different monotone aggregation function, giving exactly the same condition. Ultimately the small gain condition comes down to mapping properties of $\Gamma_\mu$. Different choices of $\mu$ and $\Gamma$ may lead to the same operator $\Gamma_\mu$. However, as we will see at a later stage, certain choices of $\mu$ can be computationally more convenient than others. In particular, if we can choose $\mu=\max$, the task of checking the small gain condition reduces to checking a cycle condition, cf. Section LABEL:sec:specialcase:mu=max.
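For $\mu=\max$ with linear gains the cycle condition can be checked by brute force on small networks: the product of the gain coefficients along every cycle of the interconnection graph must be less than one. A sketch with invented gain values:

```python
from itertools import permutations
import numpy as np

# Coefficients of hypothetical linear gains gamma_ij(r) = G[i, j] * r.
G = np.array([[0.0, 0.3, 0.2],
              [0.4, 0.0, 0.1],
              [0.2, 0.5, 0.0]])

def cycle_condition(G):
    """For mu = max and linear gains, the small gain condition reduces to
    a cycle condition: the product of the gain coefficients along every
    cycle must be less than one.  (Brute force: fine for small n.)"""
    n = G.shape[0]
    for length in range(1, n + 1):
        for cyc in permutations(range(n), length):
            prod = 1.0
            for k in range(length):
                prod *= G[cyc[k], cyc[(k + 1) % length]]
            if prod >= 1.0:
                return False
    return True

print(cycle_condition(G))       # True: all cycle gains are below one

G_bad = G.copy()
G_bad[0, 1] = 4.0               # the 2-cycle now has gain 4.0 * 0.4 = 1.6
print(cycle_condition(G_bad))   # False
```

The enumeration visits each cycle several times (once per starting vertex), which is wasteful but harmless for illustration.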
3.1 Linear systems
Consider linear interconnected systems

(3.2)   $\dot{x}_i=A_ix_i+\sum_{j\neq i}\Delta_{ij}x_j+B_iu,\qquad i=1,\dots,n,$

with $x_i\in\mathbb{R}^{N_i}$, $u\in\mathbb{R}^m$, and matrices of appropriate dimensions. Each subsystem is ISS from its inputs to its state if and only if $A_i$ is Hurwitz. It is known that $A_i$ is Hurwitz if and only if for any given symmetric positive definite $Q_i$ there is a unique symmetric positive definite solution $P_i$ of the Lyapunov equation $A_i^TP_i+P_iA_i=-Q_i$, see, e.g., [15, Cor. 3.3.47 and Rem. 3.3.48, p. 284f]. Thus we choose the Lyapunov function $V_i$ defined through the solution $P_i$ corresponding to a symmetric positive definite $Q_i$. In this case, along trajectories of the autonomous system
we have
for , the smallest eigenvalue of . For system (LABEL:linearsystems) we obtain
(3.3)
where the last inequality (LABEL:eq:7) is satisfied for a given if
(3.4)
with . To write this implication in the form (LABEL:ISScond) we note that . Let us denote , , then the inequality (LABEL:eq:8) is satisfied if
This way we see that the function is an ISS Lyapunov function for with gains given by
for , , and
for , and . Further we have
for and . This satisfies (M1), (M2), and (M3), but not (M4). By defining for we can write
and have
(3.5)
Interestingly, the choice of quadratic Lyapunov functions for the subsystems naturally leads to a nonlinear gain operator with a useful homogeneity property, see Proposition LABEL:prop:homogen.
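The quadratic construction above rests on solving the Lyapunov equation $A^TP+PA=-Q$. A dependency-free numerical sketch, using the Kronecker-product rewriting of the equation as a linear system (the matrix $A$ is a made-up Hurwitz example):

```python
import numpy as np

# A made-up Hurwitz matrix A and Q = I: solve A^T P + P A = -Q by
# rewriting the Lyapunov equation as a linear system via Kronecker
# products (no external solver required).
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
n = A.shape[0]
Q = np.eye(n)

K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(K, -Q.flatten()).reshape(n, n)

# P is symmetric positive definite and satisfies the Lyapunov equation,
# so x^T P x decreases along the unforced dynamics x' = A x.
assert np.allclose(A.T @ P + P @ A, -Q)
assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)
print("Lyapunov equation solved; P is positive definite")
```

For larger problems one would use a dedicated solver (e.g. a Bartels–Stewart implementation) instead of the dense Kronecker system, which scales poorly.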
3.2 Neural networks
As the next example consider a Cohen–Grossberg neural network as in [35]. The dynamics of each neuron is given by
(3.6)
, where $x_i$ denotes the state of the $i$th neuron and is a strictly positive amplification function. As in [35] we assume that the fixed point is shifted to the origin. Then the function typically satisfies the sign condition and satisfies furthermore for some . The activation function is typically assumed to be sigmoid. The matrix describes the interconnection of neurons in the network and is a given constant input from outside. However, for our consideration we allow it to be an arbitrary measurable function in .
In applications the matrix is usually the result of training using some learning algorithm and appropriate training data. The specifics depend on the type of network architecture and learning algorithm chosen and on the particular application. Such considerations are beyond the scope of the current paper. We simply assume that is given and concern ourselves solely with stability considerations.
Note that for any sigmoid function there exists a such that . Following [35] we assume , .
Recall the weak triangle inequality for comparison functions: for any $\sigma,\rho\in\mathcal{K}_\infty$ and any $a,b\geq 0$ it holds that

$\sigma(a+b)\leq\sigma\circ(\operatorname{id}+\rho)(a)+\sigma\circ(\operatorname{id}+\rho^{-1})(b).$
We claim that is an ISS Lyapunov function for in (LABEL:eq:neural_network). Fix an arbitrary function and some satisfying . Then by the triangle inequality we have
In this case we have
additive with respect to the external input and
The MAF satisfies (M1), (M2), and (M3). It satisfies (M4) if and only if is subadditive.
4 Monotone operators and generalized small gain conditions
In Section LABEL:sec:monotoneaggregation we saw that in the ISS context the mutual influence between the subsystems (LABEL:eq:3) and the influence of external inputs on the subsystems can be quantified by the gain matrices $\Gamma$ and $\bar{\Gamma}$ and the gain operators $\Gamma_\mu$ and $\bar{\Gamma}_{\bar{\mu}}$. The interconnection structure of the subsystems naturally leads to a weighted, directed graph, where the weights are the nonlinear gain functions and the vertices are the subsystems. There is an edge from vertex $j$ to vertex $i$ if and only if there is an influence of the state $x_j$ on the state $x_i$, i.e., there is a nonzero gain $\gamma_{ij}$.
Connectedness properties of the interconnection graph together with mapping properties of the gain operators will yield a generalized small gain condition. In essence we need a nonlinear version of a Perron vector for the construction of a Lyapunov function for the interconnected system. This will be made rigorous in the sequel. But first we introduce some further notation.
The adjacency matrix $A_\Gamma=(a_{ij})$ of a matrix $\Gamma=(\gamma_{ij})$ is defined by $a_{ij}=1$ if $\gamma_{ij}\neq 0$ and $a_{ij}=0$ otherwise. Then $A_\Gamma$ is also the adjacency matrix of the graph representing the interconnection.
We say that a matrix $\Gamma$ is primitive, irreducible, or reducible if and only if $A_\Gamma$ is primitive, irreducible, or reducible, respectively. Recall (and see [2] for more on this subject) that a nonnegative matrix $A$ is

primitive if there exists a $k\in\mathbb{N}$ such that $A^k$ is positive;

irreducible if for every pair $(i,j)$ there exists a $k\in\mathbb{N}$ such that the $(i,j)$th entry of $A^k$ is positive; obviously, primitivity implies irreducibility;

reducible if it is not irreducible.
A network or a graph is strongly connected if and only if the associated adjacency matrix is irreducible, see also [2].
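Irreducibility is mechanically checkable: a nonnegative $n\times n$ matrix $A$ is irreducible if and only if $(I+A)^{n-1}$ has strictly positive entries. A sketch with two invented three-node networks:

```python
import numpy as np

def adjacency(G):
    """A_ij = 1 iff the gain gamma_ij is nonzero."""
    return (np.asarray(G) != 0).astype(int)

def is_irreducible(A):
    """A nonnegative n x n matrix A is irreducible iff (I + A)^(n-1)
    has strictly positive entries (a standard Perron-Frobenius test)."""
    n = A.shape[0]
    M = np.linalg.matrix_power(np.eye(n, dtype=int) + A, n - 1)
    return bool(np.all(M > 0))

# A 3-cycle network: strongly connected, hence irreducible.
G_cycle = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
# A cascade 1 -> 2 -> 3: not strongly connected, hence reducible.
G_cascade = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]

print(is_irreducible(adjacency(G_cycle)))    # True
print(is_irreducible(adjacency(G_cascade)))  # False
```

Equivalently, one could run a strong-connectivity search on the graph; the matrix-power test is just the algebraic counterpart used here.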
For functions $\alpha_i\in\mathcal{K}_\infty$, $i=1,\dots,n$, we define a diagonal operator $D:\mathbb{R}_+^n\to\mathbb{R}_+^n$ by

(4.1)   $D(s):=\bigl((\operatorname{id}+\alpha_1)(s_1),\dots,(\operatorname{id}+\alpha_n)(s_n)\bigr)^T.$
For an operator $T:\mathbb{R}_+^n\to\mathbb{R}_+^n$, the condition $T(s)\not\geq s$ for all $s\in\mathbb{R}_+^n\setminus\{0\}$ means that for each such $s$ there is at least one index $i$ with $T(s)_i<s_i$. In words, at least one component of $T(s)$ has to be strictly less than the corresponding component of $s$.
Definition 4.1 (Small gain conditions)
Let a gain matrix $\Gamma$ and a monotone aggregation $\mu\in\mathrm{MAF}_n^n$ be given. The operator $\Gamma_\mu$ is said to satisfy the small gain condition (LABEL:eq:smallgaincondition) if

(SGC)   $\Gamma_\mu(s)\not\geq s\qquad\forall s\in\mathbb{R}_+^n\setminus\{0\}.$

Furthermore, $\Gamma_\mu$ satisfies the strong small gain condition (LABEL:eq:strongsmallgaincondition) if there exists a $D$ as in (LABEL:eq:D) such that

(sSGC)   $\Gamma_\mu(D(s))\not\geq s\qquad\forall s\in\mathbb{R}_+^n\setminus\{0\}.$
It is not difficult to see that (LABEL:eq:strongsmallgaincondition) can equivalently be stated as

(sSGC’)   $D(\Gamma_\mu(s))\not\geq s\qquad\forall s\in\mathbb{R}_+^n\setminus\{0\}.$
Also, for (LABEL:eq:strongsmallgaincondition) or (LABEL:eq:strongsmallgainconditionpostfixD) to hold it is sufficient to assume that the functions $\alpha_i$ are all identical. This can be seen by defining $\alpha:=\min_{i=1,\dots,n}\alpha_i$. We abbreviate this by writing $D=\operatorname{diag}(\operatorname{id}+\alpha)$ for some $\alpha\in\mathcal{K}_\infty$.
For maps $T:\mathbb{R}_+^n\to\mathbb{R}_+^n$ we define the following sets:

$\Omega(T):=\{s\in\mathbb{R}_+^n:\ T(s)<s\}\qquad\text{and}\qquad\Psi(T):=\{s\in\mathbb{R}_+^n:\ T(s)\not\geq s\}.$
If no confusion arises we will omit the reference to $T$. Topological properties of these sets are related to the small gain conditions (LABEL:eq:smallgaincondition), cf. also [5, 6, 25]. They will be used in the next section for the construction of an ISS Lyapunov function for the interconnection.
5 Lyapunov functions
In this section we present the two main results of the paper. The first is a topological result on the existence of a jointly unbounded path in the set $\Omega$, provided that the gain operator satisfies the small gain condition. This path will be crucial in the construction of a Lyapunov function, which is the second main result of this section.
Definition 5.1
A continuous path $\sigma:\mathbb{R}_+\to\mathbb{R}_+^n$ will be called an $\Omega$-path with respect to $\Gamma_\mu$ if

(i) for each $i$, the function $\sigma_i^{-1}$ is locally Lipschitz continuous on $(0,\infty)$;

(ii) for every compact set $K\subset(0,\infty)$ there are constants $0<c<C$ such that for all $i=1,\dots,n$ and all points of differentiability of $\sigma_i^{-1}$ we have

(5.1)   $0<c\leq(\sigma_i^{-1})'(r)\leq C,\qquad r\in K;$

(iii) for all $r>0$,

(5.2)   $\Gamma_\mu(\sigma(r))<\sigma(r).$
Now we can state the first of our two main results, which concerns the existence of $\Omega$-paths.
Theorem 5.2
Let $\Gamma\in(\mathcal{K}_\infty\cup\{0\})^{n\times n}$ be a gain matrix and $\mu\in\mathrm{MAF}_n^n$. Assume that one of the following assumptions is satisfied:

(1) $\Gamma_\mu$ is linear and the spectral radius of $\Gamma_\mu$ is less than one;

(2) $\Gamma_\mu$ is irreducible and satisfies (LABEL:eq:smallgaincondition);

(3) $\mu=\max$ and $\Gamma_\mu$ satisfies (LABEL:eq:smallgaincondition);

(4) alternatively, assume that $\Gamma_\mu$ is bounded and satisfies (LABEL:eq:strongsmallgaincondition).

Then there exists an $\Omega$-path $\sigma$ with respect to $\Gamma_\mu$.
We will postpone the proof of this rather topological result to Section LABEL:sec:pathconstruction and reap the fruits of Theorem LABEL:path first. Note, however, that for LABEL:item:3 there exists an equivalent formulation in terms of a cycle condition, cf. Theorem LABEL:thm:mu=max and see [21, 33, 6, 20].
In addition to the above result, the existence of $\Omega$-paths can also be asserted for reducible $\Gamma$ and for $\Gamma$ with mixed, bounded and unbounded, class $\mathcal{K}_\infty$ entries, see Theorem LABEL:thm:monotonepathreduciblecase and Proposition LABEL:prop:partlyboundedGamma, respectively.
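In the linear case of Theorem LABEL:path an $\Omega$-path can be written down explicitly: any vector $v>0$ with $\Gamma_\mu v<v$ yields the linear path $\sigma(r)=rv$. One such vector is $v=(I-\Gamma_\mu)^{-1}\mathbb{1}$, since then $\Gamma_\mu v=v-\mathbb{1}<v$. A numerical sketch with an invented gain matrix:

```python
import numpy as np

# Invented linear gain operator Gamma(s) = G s with spectral radius < 1.
G = np.array([[0.0, 0.3, 0.2],
              [0.4, 0.0, 0.1],
              [0.2, 0.5, 0.0]])
assert max(abs(np.linalg.eigvals(G))) < 1

# v = (I - G)^{-1} * 1 is positive and satisfies G v = v - 1 < v,
# so sigma(r) = r * v is a linear Omega-path: G sigma(r) < sigma(r).
n = G.shape[0]
v = np.linalg.solve(np.eye(n) - G, np.ones(n))
assert np.all(v > 0) and np.all(G @ v < v)

for r in [0.1, 1.0, 10.0]:
    assert np.all(G @ (r * v) < r * v)
print("linear Omega-path direction v =", v)
```

The positivity of $v$ follows from the Neumann series $(I-G)^{-1}=\sum_{k\geq 0}G^k$, which is entrywise nonnegative with diagonal at least one when the spectral radius is below one.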
Theorem 5.3
Consider the interconnected system $\Sigma$ given by (LABEL:eq:3), (LABEL:eq:4), where each of the subsystems $\Sigma_i$ has an ISS Lyapunov function $V_i$, the corresponding gain matrix is given by (LABEL:eq:5), and $\mu$ is given by (LABEL:ISScond). Assume that there are an $\Omega$-path $\sigma$ with respect to $\Gamma_\mu$ and a function $\varphi\in\mathcal{K}_\infty$ such that

(5.3)   $\bar{\Gamma}_{\bar{\mu}}\bigl(\sigma(r),\varphi(r)\bigr)<\sigma(r)\qquad\forall r>0$

is satisfied. Then an ISS Lyapunov function for the overall system is given by

(5.4)   $V(x):=\max_{i=1,\dots,n}\sigma_i^{-1}\bigl(V_i(x_i)\bigr).$

In particular, for all points of differentiability of $V$ we have the implication

(5.5)

where $\alpha$ is a suitable positive definite function.
Note that by construction the Lyapunov function $V$ is not smooth, even if the functions $V_i$ for the subsystems are. This is why it is appropriate in this framework to consider Lipschitz continuous Lyapunov functions, which are differentiable almost everywhere.
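To illustrate the scaled-maximum formula (5.4), here is a toy sketch with three subsystems, hypothetical quadratic $V_i$, and a linear $\Omega$-path $\sigma(r)=rv$ (so $\sigma_i^{-1}(s)=s/v_i$); all numbers are invented.

```python
import numpy as np

v = np.array([2.0, 1.0, 4.0])   # assumed Omega-path direction, v > 0

def V(x_parts):
    """V(x) = max_i sigma_i^{-1}(V_i(x_i)) with V_i(x_i) = |x_i|^2
    and sigma_i(r) = v_i * r, hence sigma_i^{-1}(s) = s / v_i."""
    return max(float(np.dot(xi, xi)) / v[i] for i, xi in enumerate(x_parts))

x = [np.array([1.0, 1.0]),          # subsystem 1 state, V_1 = 2
     np.array([0.5]),               # subsystem 2 state, V_2 = 0.25
     np.array([2.0, 0.0, 0.0])]     # subsystem 3 state, V_3 = 4

print(V(x))   # max(2/2, 0.25/1, 4/4) = 1.0
```

The maximization makes $V$ nondifferentiable exactly where two scaled subsystem Lyapunov functions tie, which is why the proof below works with Clarke gradients.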
Proof. We will show the assertion in the Clarke gradient sense. For $x=0$ there is nothing to show, so let $x\neq 0$. Denote by $I$ the set of indices $i$ for which

(5.6)   $\sigma_i^{-1}\bigl(V_i(x_i)\bigr)=V(x).$
Then $\sigma_j^{-1}(V_j(x_j))<V(x)$ for $j\notin I$. Also, as $V$ is obtained through maximization, we have because of [4, p. 83] that

(5.7)   $\partial V(x)\subset\operatorname{conv}\bigl\{\partial\bigl(\sigma_i^{-1}\circ V_i\bigr)(x):\ i\in I\bigr\}.$
Fix and assume without loss of generality . Then if we assume it follows in particular that . Using the abbreviation , denoting the first component of by and using assumption (LABEL:generalcond) we have
where we have used (LABEL:eq:firststep) and (M2’) in the last inequality. Thus the ISS condition (LABEL:ISScond) is applicable and we have for all that
(5.8)
By the chain rule for Lipschitz continuous functions [4, Theorem 2.5] we have
Note that in the previous equation the number is bounded away from zero because of (LABEL:sigmabounds). We set for
where is the constant corresponding to the set given by (LABEL:sigmabounds) in the definition of an path. With the convention we now define for
Here we have used that for a given and the norm of such that is bounded away from .
It now follows from (LABEL:eq:10) that if , then we have for all that