Observability through a matrix-weighted graph
Abstract
Observability of an array of identical LTI systems with incommensurable output matrices is studied, where an array is called observable when identically zero relative outputs imply synchronized solutions for the individual systems. It is shown that the observability of an array is equivalent to the connectivity of its interconnection graph, whose edges are assigned matrix weights. The interconnection graph is studied by means of a collection of simpler graphs, each of which is associated to an eigenvalue of the system matrix of the individual dynamics. It is reported that the interconnection graph is connected if and only if no member of this collection is disconnected. Moreover, to better understand the relative behavior of distant units, pairwise observability, which concerns the synchronization of a certain pair of individual systems in the array, is studied. This milder version of observability is shown to be closely related to certain connectivity properties of the interconnection graph as well. Pairwise observability is also analyzed using the circuit-theoretic tool of effective conductance. The observability of a certain pair of units is proved to be equivalent to the nonsingularity of the (matrix-valued) effective conductance between the associated pair of nodes of a resistive network (with matrix-valued parameters) whose node admittance matrix is the Laplacian of the array's interconnection graph.
1 Introduction
Observability is one of the central concepts in systems theory which, for LTI systems, can be expressed in many seemingly different yet mathematically equivalent forms [7]. One alternative is the following. A pair $(C, A)$ is observable if $Cx_a(t) = Cx_b(t)$ for all $t \geq 0$ implies $x_a(t) = x_b(t)$ for all $t \geq 0$, where $x_a(\cdot), x_b(\cdot)$ denote the solutions of two identical systems $\dot{x}_a = Ax_a$, $\dot{x}_b = Ax_b$. Admittedly, this appears to be an uneconomical definition, for the implication therein employs two systems where one would have sufficed. The overuse, however, has a relative advantage: it points in an interesting direction of generalization. Namely, for a collection of pairs $(C_{ij}, A)$ the below condition suggests itself as a natural extension.
$C_{ij}(x_i(t) - x_j(t)) = 0$ for all $t \geq 0$ and all pairs $(i, j)$ $\implies$ $x_i(t) = x_j(t)$ for all $t \geq 0$ and all pairs $(i, j)$ (1)
where $x_1(\cdot), \ldots, x_q(\cdot)$ are the solutions of $q$ identical systems $\dot{x}_i = Ax_i$, $i = 1, \ldots, q$. Aside from its theoretical allure, this particular choice of generalization is not without practical motivation; the condition (1) happens to be both necessary and sufficient for synchronization of certain arrays of oscillators. We give two examples in the sequel.
Coupled electrical oscillators. Consider the individual oscillator in Fig. 1, where linear inductors with inductances are connected by linear capacitors with capacitances .
The node voltages are collected in a vector $v$. Letting the state $x$ collect the node voltages and the inductor currents, the model of this system reads $\dot{x} = Ax$, where $A$ is determined by the inductances and the capacitances.
Let now an array be constructed by coupling identical oscillators in the arrangement shown in Fig. 2. If we let $v_i$ denote the node voltage vector for the $i$th oscillator and $g_{ij}$ be the conductance of the resistor connecting the corresponding nodes of the oscillators $i$ and $j$, the resistive couplings contribute currents proportional to $v_j - v_i$. Denoting by $x_i$ the state of the $i$th system we can then rewrite the coupled dynamics as
(2) 
To understand the collective behavior of these coupled oscillators one can employ the Lyapunov function
(3) 
In particular, combining (2) and (3) we reach
(4) 
Note that the right-hand side is negative semidefinite because all the coupling matrices are positive semidefinite. Hence the solutions remain bounded. Now, with the relative output matrices $C_{ij}$ induced by the resistive couplings,
suppose that the condition (1) holds. Then, and only then, (4) yields by the Krasovskii-LaSalle invariance principle [9] that the oscillators synchronize, i.e., $x_i(t) - x_j(t) \to 0$ as $t \to \infty$ for all pairs $(i, j)$ and all initial conditions.
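As a minimal numerical sketch of this synchronization mechanism (the matrices $A$, $C$ and the conductance $g$ below are illustrative assumptions, not the circuit of Fig. 1), two identical units coupled through a relative output can be simulated with forward Euler:

```python
import numpy as np

# Illustrative sketch: two identical harmonic oscillators, x_i in R^2,
# coupled through the relative output y = C (x1 - x2) with C = [1 0].
# A, C, g, and the initial conditions are assumptions chosen for display.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # individual dynamics x' = A x
C = np.array([[1.0, 0.0]])                 # relative output matrix
g = 1.0                                    # coupling conductance

dt, T = 1e-3, 40.0
x1 = np.array([1.0, 0.0])
x2 = np.array([-0.5, 0.7])
for _ in range(int(T / dt)):
    y = C @ (x1 - x2)                      # relative output
    x1 = x1 + dt * (A @ x1 - g * C.T @ y)  # diffusive coupling terms
    x2 = x2 + dt * (A @ x2 + g * C.T @ y)
print(np.linalg.norm(x1 - x2))             # the discrepancy decays toward zero
```

Here the error $e = x_1 - x_2$ obeys $\dot{e} = (A - 2gC^{\rm T}C)e$, which is Hurwitz for this choice of matrices, so the pair synchronizes.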
Coupled mechanical oscillators. Our second example employs the mechanical system shown in Fig. 3, where masses are connected by linear springs. Such chains are used to model the interaction of atoms in a crystal [5].
Let $p_k$ be the displacement of the $k$th mass from the equilibrium. Denote the spring constants accordingly. Letting the state collect the displacements and the velocities, the model of this oscillator reads $\dot{x} = Ax$, where the matrices are borrowed from the previous example.
Let now an array be formed by coupling replicas of this oscillator in the arrangement shown in Fig. 4. If we let $p_i$ denote the displacement vector for the $i$th oscillator and $b_{ij}$ represent the viscous friction (damping) coefficient between the corresponding masses of the oscillators $i$ and $j$, we can again write the coupled dynamics in the form (2).
The Lyapunov approach previously adopted for the synchronization analysis of the coupled electrical oscillators is valid here, too. The outcome is the same. Namely, under the condition (1), this time with the relative output matrices determined by the damping couplings,
the mechanical oscillators synchronize. Having motivated the condition (1) in the context of synchronization, we will next try to explain its relation to certain existing assumptions.
Synchronization of linear systems is a broad area of research, where one of the main goals of the researcher is to unearth conditions under which the solutions of coupled units converge to a common trajectory. Different sets of assumptions have led to a rich collection of results, bringing our understanding of the subject closer to complete; see, for instance, [12, 18, 19, 16, 6, 11]. Despite their differences in degree and direction of generality, all these works share two assumptions in common: (i) the graph describing the interconnection contains a spanning tree and (ii) the individual system is observable (detectable). We intend to emphasize in this paper that these two separate assumptions, the former on connectivity and the latter on observability, dissolve inseparably in the condition (1). In particular, for an array represented by the collection of pairs $(C_{ij}, A)$, it is in general not meaningful to search for a spanning tree because the interconnection graph will be matrix-weighted, whereas a tree is well defined for a scalar-weighted graph only. As for the second assumption, requiring the individual systems to be observable also falls prey to ambiguity since there is not a single output matrix for each system; instead every system is coupled to each of its neighbors through a different matrix $C_{ij}$. It is true that separation is possible in the special case $C_{ij} = w_{ij}C$ with scalars $w_{ij} \geq 0$ and a common output matrix $C$. In this much-studied scenario, where the output matrices are commensurable, the scalar weights $w_{ij}$ are used to construct the interconnection graph, which can be checked to contain a spanning tree; and the pair $(C, A)$ can separately be checked for observability. However, in general, the condition (1) in its entirety is what we have to deal with, which requires that we work with matrix-weighted graphs. We will explain how these matrix-valued weights emerge soon. But first, let us review the scarce literature on observability over networks.
Observability over networks, motivated in general by synchronization (consensus) of coupled systems, is largely an unexplored area of research. Among the few works is [8], where the observability of sensor networks is studied by means of equitable partitions of graphs. This tool is employed also in [14]. The observability of path and cycle graphs is studied in [15] and of grid graphs in [13]. Recently, networks whose individual systems' dynamics are allowed to be nonidentical have been covered in [20]. Each of these investigations covers a different case, yet they all consider interconnections that can be described by graphs with scalar-weighted edges. Our work is thus located relatively far from the reported results. In particular, to the best of our knowledge, observability over matrix-weighted graphs has not yet been studied in detail.
In the first half of this paper we report conditions on the array that imply observability in the sense of (1). To this end, we construct a graph (with one vertex per individual system) where to each pair of vertices we assign a weight that is a Hermitian positive semidefinite matrix, whose null space is the unobservable subspace corresponding to the individual pair $(C_{ij}, A)$. We reveal that the array is observable if and only if the interconnection graph is connected. Also, we notice that for each distinct eigenvalue of $A$ there exists a graph (we call it an eigengraph) and the observability of the array is ensured if no eigengraph is disconnected. For our analysis we define the connectivity of a graph through a certain spectral property of its Laplacian. We note that the connectivity of a matrix-weighted graph cannot in general be characterized by the standard tools of graph theory such as path and tree. The reason is that the meaning or function of an edge (out of which paths and trees are constructed) becomes equivocal when one has to allow semidefinite weights.
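The construction just described can be sketched numerically. In the snippet below (all matrices are illustrative assumptions) the edge weight is taken as $O_{ij}^{\rm T} O_{ij}$, whose null space is the unobservable subspace of $(C_{ij}, A)$, and connectivity is read off the Laplacian spectrum:

```python
import numpy as np

# Sketch: interconnection graph of a 3-unit array with assumed matrices.
# Edge weight = O_ij^T O_ij, a PSD matrix whose null space equals the
# unobservable subspace of the individual pair (C_ij, A).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
C = {(0, 1): np.array([[1.0, 0.0]]),     # unit 0 <-> unit 1
     (1, 2): np.array([[0.0, 1.0]])}     # unit 1 <-> unit 2; no (0,2) link

def obsv(Cij, A):
    rows, blk = [], Cij
    for _ in range(A.shape[0]):          # stack C, CA, ..., CA^{n-1}
        rows.append(blk)
        blk = blk @ A
    return np.vstack(rows)

n, d = 3, 2                              # 3 vertices, 2x2 weights
L = np.zeros((n * d, n * d))
for (i, j), Cij in C.items():
    W = obsv(Cij, A).T @ obsv(Cij, A)    # null(W) = unobservable subspace
    for (a, b), s in [((i, i), 1), ((j, j), 1), ((i, j), -1), ((j, i), -1)]:
        L[a*d:(a+1)*d, b*d:(b+1)*d] += s * W

# Connectivity test: (d+1)-th smallest eigenvalue of the Laplacian positive.
print(np.linalg.eigvalsh(L)[d] > 1e-10)  # True for this example
```

Both individual pairs here happen to be observable, so both weights are nonsingular and the path-shaped graph is connected.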
In the second half of the paper we focus on the so-called pairwise observability of the array. Namely, for a given pair of indices $(a, b)$, we search for conditions under which $x_a(t) = x_b(t)$ provided that all the relative outputs vanish identically. To this end we again define connectivity of a matrix-weighted graph through its Laplacian. We show the expected equivalence between the pairwise observability of the array and the connectivity of the interconnection graph as well as the unexpected lack of equivalence between the pairwise observability of the array and the connectivity of its eigengraphs. Moreover, we present the interesting interchangeability between the pairwise observability of an array and the nonsingularity of the (matrix-valued) effective conductance between the nodes $a$ and $b$ of a resistive network (with matrix-valued parameters) whose node admittance matrix is the Laplacian of the array's interconnection graph. From a graph-theoretic point of view the nonsingularity of the effective conductance may be interpreted to indicate that the pair of vertices $a, b$ of the matrix-weighted graph are connected. This therefore allows one to study connectivity of vertices without employing paths, which is potentially useful since defining a path, as mentioned above, is problematic for matrix-weighted graphs. One may ask why our formulation is in terms of effective conductance instead of the more common effective resistance, e.g., [17]. The reason is that the conductances we work with are matrix-valued and not necessarily invertible. That is, since resistance is the inverse of conductance, we would have run into certain difficulties had we chosen to employ effective resistance instead. Potential applications of generalized electrical circuits with matrix-valued parameters seem to have so far gone unnoticed by control theorists. Notable exceptions are the works [2, 1, 3] on the problem of estimation over networks.
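The effective-conductance computation alluded to here can be sketched via Kron reduction (a Schur complement of the Laplacian): eliminating the interior nodes leaves a two-node Laplacian $[[G, -G], [-G, G]]$, whose block $G$ is the effective conductance between the retained pair. The path network with identity-matrix conductances below is an illustrative assumption:

```python
import numpy as np

# Sketch of matrix-valued effective conductance by Kron reduction.
# Network: path 0 - 1 - 2 with identity (unit) matrix conductances.
d = 2
I = np.eye(d)
Z = np.zeros((d, d))
L = np.block([[ I, -I,  Z],
              [-I, 2*I, -I],
              [ Z, -I,  I]])

keep, elim = [0, 2], [1]                  # terminals vs. interior node
idx = lambda nodes: np.concatenate([np.arange(v*d, (v+1)*d) for v in nodes])
Laa = L[np.ix_(idx(keep), idx(keep))]
Lab = L[np.ix_(idx(keep), idx(elim))]
Lbb = L[np.ix_(idx(elim), idx(elim))]
red = Laa - Lab @ np.linalg.inv(Lbb) @ Lab.T   # Schur complement
G = red[:d, :d]                                # effective conductance block
print(G)                                       # 0.5 * I: two unit
# conductances in series, exactly as in the scalar case
print(np.linalg.det(G) != 0)                   # nonsingular conductance
```

Nonsingularity of $G$ is what the paper links to the observability of the corresponding pair of units.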
2 Preliminaries and notation
In this section we provide the formal definitions for the observability of an array and the connectivity of a $d$-graph through its Laplacian matrix. (The reader should be warned that the term graph has appeared in the literature in different meanings. In this paper it means a weighted graph, where each pair of vertices is assigned a $d$ by $d$ matrix weight.)
A pair $(C, A)$ is meant to represent the array of $q$ identical systems
$\dot{x}_i = Ax_i, \qquad y_{ij} = C_{ij}(x_i - x_j),$
where $x_i$ is the state of the $i$th system and $y_{ij}$ is the $(i, j)$th relative output. We let $x = [x_1^{\rm T}\ \cdots\ x_q^{\rm T}]^{\rm T}$. In our paper we will solely be studying the case $C_{ij} = C_{ji}$ for all pairs $(i, j)$. Hence we suppose so without loss of generality. The generality is not lost because if $C_{ij} \neq C_{ji}$ then we can always redefine both as the stacked matrix $[C_{ij}^{\rm T}\ \ C_{ji}^{\rm T}]^{\rm T}$; and then the redefined relative outputs vanish identically if and only if the original ones do. The ordered collection $(C_{ij})$ will sometimes be compactly written as $C$ when there is no risk of ambiguity.
For each pair $(i, j)$ we denote by $O_{ij}$ the observability matrix of the individual pair $(C_{ij}, A)$. Namely,
$O_{ij} = [C_{ij}^{\rm T}\ \ (C_{ij}A)^{\rm T}\ \ \cdots\ \ (C_{ij}A^{n-1})^{\rm T}]^{\rm T}.$
The associated unobservable subspace is denoted by $\mathcal{N}_{ij} = \mathrm{null}\, O_{ij}$. Recall that $\mathcal{N}_{ij} = \bigcap_{m=0}^{n-1} \mathrm{null}(C_{ij}A^m)$ and that $\mathcal{N}_{ij}$ is invariant under $A$. In particular, $v \in \mathcal{N}_{ij}$ implies $C_{ij}e^{At}v = 0$ for all $t \geq 0$ since $e^{At}v \in \mathcal{N}_{ij}$. By $\lambda_1, \ldots, \lambda_p$ ($p \leq n$) we denote the distinct eigenvalues of $A$. By $V_k$, $k = 1, \ldots, p$, we denote a full column rank matrix satisfying $\mathrm{range}\, V_k = \mathrm{null}(\lambda_k I_n - A)$, where $I_n$ is the $n$ by $n$ identity matrix. Note that the columns of $V_k$ are the linearly independent eigenvectors of $A$ corresponding to the eigenvalue $\lambda_k$. In particular, we have $AV_k = \lambda_k V_k$. The below definition is what this paper is all about.
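The observability matrix and its null space can be computed numerically; the following sketch uses illustrative matrices (not taken from the paper) and estimates the dimension of the unobservable subspace via the SVD:

```python
import numpy as np

# Sketch: observability matrix O of an individual pair (C, A) and the
# dimension of the unobservable subspace null(O).  A and C are assumptions.
def obsv(C, A):
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)    # C, CA, ..., CA^{n-1}
    return np.vstack(blocks)

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])          # chain of integrators
C = np.array([[0.0, 1.0, 0.0]])          # sees only the middle state
O = obsv(C, A)

# Null-space dimension from the singular values of O.
s = np.linalg.svd(O, compute_uv=False)
null_dim = O.shape[1] - np.sum(s > 1e-10)
print(null_dim)                          # 1: the first state is unobservable
```

Here the unobservable subspace is spanned by the first coordinate vector, which is indeed invariant under $A$.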
Definition 1
An array $(C, A)$ is said to be observable if
$C_{ij}(x_i(t) - x_j(t)) = 0$ for all $t \geq 0$ and all $(i, j)$ $\implies$ $x_i(t) = x_j(t)$ for all $t \geq 0$ and all $(i, j)$
for all initial conditions $x_1(0), \ldots, x_q(0)$.
A $d$-graph has a finite set of vertices $\{1, 2, \ldots, q\}$ and a weight function $w$, assigning to each pair of vertices $(i, j)$ a matrix $w_{ij} \in \mathbb{C}^{d \times d}$, with the properties

$w_{ij} = w_{ij}^* \geq 0$,

$w_{ij} = w_{ji}$,

$w_{ii} = 0$,

where $w_{ij}^*$ indicates the conjugate transpose of $w_{ij}$. The $qd$ by $qd$ matrix
$L = [\ell_{ij}]$ with blocks $\ell_{ii} = \sum_{k \neq i} w_{ik}$ and $\ell_{ij} = -w_{ij}$ for $i \neq j$
is called the Laplacian of the graph. By construction the Laplacian is Hermitian, i.e., $L = L^*$, and enjoys some other desirable properties. Let $\mathbf{1}_q \in \mathbb{R}^q$ denote the vector with all entries equal to one and define the synchronization subspace as $\mathcal{S} = \{\mathbf{1}_q \otimes v : v \in \mathbb{C}^d\}$. We see that $\mathcal{S} \subseteq \mathrm{null}\, L$. Also, since we can write $z^*Lz = \sum_{i<j} (z_i - z_j)^* w_{ij} (z_i - z_j) \geq 0$, the Laplacian is positive semidefinite. Therefore all its eigenvalues are real and nonnegative, thanks to which the ordering $\lambda_1(L) \leq \lambda_2(L) \leq \cdots \leq \lambda_{qd}(L)$ is not meaningless. In the sequel, $\lambda_k(L)$ denotes the $k$th smallest eigenvalue of $L$.
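These properties of the Laplacian are easy to verify numerically. A sketch with assumed weights ($q = 3$ vertices, $d = 2$, one rank-deficient weight) follows:

```python
import numpy as np

# Sketch: Laplacian of a matrix-weighted graph on 3 vertices with assumed
# 2x2 Hermitian PSD weights; a zero weight plays the role of "no edge".
q, d = 3, 2
W = {(0, 1): np.diag([1.0, 0.0]),        # rank-deficient weight
     (1, 2): np.eye(d),                  # full-rank weight
     (0, 2): np.zeros((d, d))}           # no edge

L = np.zeros((q * d, q * d))
for (i, j), w in W.items():
    for (a, b), s in [((i, i), 1), ((j, j), 1), ((i, j), -1), ((j, i), -1)]:
        L[a*d:(a+1)*d, b*d:(b+1)*d] += s * w

# The synchronization subspace { 1_q (kron) v } lies in null(L) ...
v = np.array([0.3, -1.2])
sync = np.kron(np.ones(q), v)
print(np.linalg.norm(L @ sync))          # (numerically) zero

# ... and L is Hermitian positive semidefinite.
print(bool(np.all(np.linalg.eigvalsh(L) > -1e-9)))
```

The same loop works for any vertex count and weight size; only the dictionary of weights changes.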
We denote by $\Gamma(M)$ (or by $\Gamma$ when there is no risk of confusion) the graph of a collection $M = (M_{ij})$ with $M_{ij} = M_{ji}$ and $M_{ij} = M_{ij}^* \geq 0$. The graph $\Gamma(M)$ has the vertex set $\{1, \ldots, q\}$ and its weight function is such that $w_{ij} = M_{ij}$. Regarding the array (2) two graph constructions are particularly important. One of them is the graph with the weights $O_{ij}^* O_{ij}$, which we call the interconnection graph. The other is the graph with the weights $(C_{ij}V_k)^*(C_{ij}V_k)$, called the eigengraph corresponding to the eigenvalue $\lambda_k$.
In graph theory [4], connectivity (in the classical sense) is characterized by means of adjacency. A connected graph is said to have a path between each pair of its vertices, where a path is a sequence of adjacent vertices. For 1-graphs the definition of adjacency is unequivocal: a pair of vertices are adjacent if $w_{ij} \neq 0$ and nonadjacent if $w_{ij} = 0$. (Adjacent vertices are said to have an edge between them.) For $d$-graphs ($d \geq 2$) however, since we have the in-between case of a nonzero yet singular positive semidefinite weight $w_{ij}$, how to define adjacency and, in turn, connectivity becomes a matter of choice. For our purposes in this paper we (inevitably) abandon the concept of adjacency altogether and define connectivity of a graph through its Laplacian. Recall that a 1-graph is connected if and only if $\lambda_2(L) > 0$. Since this is an equivalence result it can replace the definition of connectivity for graphs. This substitute turns out to be much easier to generalize than the standard definition that uses paths.
Definition 2
A $d$-graph is said to be connected if $\lambda_{d+1}(L) > 0$.
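A sketch of this test, on an assumed two-vertex example, shows why a rank-deficient weight may fail to connect a pair of vertices even though a nonzero "edge" is present:

```python
import numpy as np

# Laplacian-based connectivity test (sketch): declare a d-graph connected
# when the (d+1)-th smallest eigenvalue of its Laplacian is positive.
def connected(L, d, tol=1e-10):
    return np.linalg.eigvalsh(L)[d] > tol    # eigvalsh sorts ascending

def two_vertex_laplacian(W):
    return np.block([[W, -W], [-W, W]])

d = 2
W_full = np.eye(d)                           # positive definite weight
W_semi = np.diag([1.0, 0.0])                 # semidefinite weight

print(connected(two_vertex_laplacian(W_full), d))   # True
print(connected(two_vertex_laplacian(W_semi), d))   # False: the "edge" is
# present but its rank-deficient weight leaves a direction unconstrained
```

This is precisely the in-between case that makes path-based notions of connectivity ambiguous for matrix weights.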
The next three facts will find frequent use later in the paper.
Lemma 1
A $d$-graph is connected if and only if $\mathrm{null}\, L = \mathcal{S}$.
Proof. Let $L$ denote the Laplacian. Suppose $\mathrm{null}\, L = \mathcal{S}$. By definition $\dim \mathcal{S} = d$. Therefore $L$ has $d$ linearly independent eigenvectors whose eigenvalues are zero. Since $L = L^*$ this means that $L$ has exactly $d$ eigenvalues at the origin. That all the eigenvalues of $L$ are nonnegative then yields $\lambda_{d+1}(L) > 0$. To show the other direction this time we begin by letting $\lambda_{d+1}(L) > 0$. That is, $L$ has at most $d$ eigenvalues at the origin. The property $L = L^*$ then implies that $L$ has at most $d$ linearly independent eigenvectors whose eigenvalues are zero. In other words, $\dim \mathrm{null}\, L \leq d$. This implies, in the light of the facts $\mathcal{S} \subseteq \mathrm{null}\, L$ and $\dim \mathcal{S} = d$, that $\mathrm{null}\, L = \mathcal{S}$.
Lemma 2
Consider the solutions of the array (2). Let $x = [x_1^{\rm T}\ \cdots\ x_q^{\rm T}]^{\rm T}$ and let $L$ be the Laplacian of the interconnection graph. We have
$C_{ij}(x_i(t) - x_j(t)) = 0$ for all $t \geq 0$ and all $(i, j)$ $\iff$ $x(0) \in \mathrm{null}\, L$.
Proof. Let $z = x(0)$ with blocks $z_1, \ldots, z_q$. Observe $z^*Lz = \sum_{i<j} (z_i - z_j)^* O_{ij}^* O_{ij} (z_i - z_j)$. Since $L \geq 0$ we also have $Lz = 0$ if and only if $z^*Lz = 0$. Recalling $\mathrm{null}\, O_{ij} = \mathcal{N}_{ij}$ we can now write
$z \in \mathrm{null}\, L \iff z_i - z_j \in \mathcal{N}_{ij}$ for all $(i, j)$.
Therefore
$z \in \mathrm{null}\, L \iff C_{ij}A^m(z_i - z_j) = 0$ for all $(i, j)$ and all $m \geq 0$.
Recall that $v \in \mathcal{N}_{ij}$ implies $e^{At}v \in \mathcal{N}_{ij}$ for all $t \geq 0$. Hence
$x(0) \in \mathrm{null}\, L \iff C_{ij}(x_i(t) - x_j(t)) = C_{ij}e^{At}(x_i(0) - x_j(0)) = 0$ for all $(i, j)$ and all $t \geq 0$,
which completes the proof.
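Lemma 2 can also be probed numerically. In the sketch below (a two-unit array with assumed matrices making the individual pair unobservable), an initial condition in the null space of the Laplacian produces identically zero relative output although the states differ:

```python
import numpy as np

# Sketch illustrating Lemma 2 on a two-unit array.  A and C12 are
# illustrative assumptions; the individual pair (C12, A) is unobservable.
A = np.diag([1.0, 2.0])
C12 = np.array([[1.0, 0.0]])
O = np.vstack([C12, C12 @ A])             # observability matrix of (C12, A)
W = O.T @ O                               # interconnection weight
L = np.block([[W, -W], [-W, W]])          # Laplacian (two vertices)

x0 = np.array([0.0, 0.0, 0.0, 1.0])       # x1(0) = 0, x2(0) = e2
print(np.linalg.norm(L @ x0))             # (numerically) zero: x0 in null(L)

# A is diagonal, so e^{At} is available in closed form; the relative output
# stays zero for all sampled t although the two states are different.
for t in np.linspace(0.0, 3.0, 7):
    x1 = np.exp(np.diag(A) * t) * x0[:2]
    x2 = np.exp(np.diag(A) * t) * x0[2:]
    assert abs((C12 @ (x1 - x2))[0]) < 1e-12   # y_12(t) = 0
print("states differ:", not np.allclose(x0[:2], x0[2:]))
```

Since $x(0) \in \mathrm{null}\,L \setminus \mathcal{S}$ exists here, this tiny array is not observable, in agreement with the lemma.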
Lemma 3
Let $k \in \{1, \ldots, p\}$ and let $L$ denote the Laplacian of the interconnection graph. The eigengraph corresponding to $\lambda_k$ is not connected if and only if there exists a vector $z \in \mathrm{null}\, L \setminus \mathcal{S}$ satisfying $z = (I_q \otimes V_k)\eta$ for some $\eta$.
Proof. Given $k$, let us suppose that the eigengraph $\Gamma_k$ is not connected. Then by Lemma 1 there exists $\eta$ that satisfies $L_k \eta = 0$ and $\eta \notin \mathcal{S}_k$, where $L_k$ denotes the Laplacian of $\Gamma_k$ and $\mathcal{S}_k$ the associated synchronization subspace. Let us employ the partition $\eta = [\eta_1^{\rm T}\ \cdots\ \eta_q^{\rm T}]^{\rm T}$ and define $z$ as $z = (I_q \otimes V_k)\eta$. That is, $z_i = V_k \eta_i$. Clearly, $z$ is of the required form. Since $V_k$ is full column rank, $\eta \notin \mathcal{S}_k$ yields $z \notin \mathcal{S}$. Lastly, we have to establish $z \in \mathrm{null}\, L$. Recall $C_{ij}A^mV_k = \lambda_k^m C_{ij}V_k$. Therefore
$O_{ij}(z_i - z_j) = 0 \iff C_{ij}V_k(\eta_i - \eta_j) = 0.$
Now we can proceed as
$0 = \eta^* L_k \eta = \sum_{i<j} \|C_{ij}V_k(\eta_i - \eta_j)\|^2 \implies O_{ij}(z_i - z_j) = 0$ for all $(i, j)$ $\implies z^*Lz = 0.$
Since $L$ is Hermitian positive semidefinite, $z^*Lz = 0$ implies $Lz = 0$, i.e., $z \in \mathrm{null}\, L$.
To show the other direction, suppose that there exists $z \in \mathrm{null}\, L \setminus \mathcal{S}$ satisfying $z = (I_q \otimes V_k)\eta$. Since $V_k$ is full column rank, $z \notin \mathcal{S}$ implies $\eta \notin \mathcal{S}_k$. Now we turn the same wheels as in the first part, but in the opposite direction: $Lz = 0$ yields $z^*Lz = 0$, hence $O_{ij}(z_i - z_j) = 0$ for all $(i, j)$, hence $C_{ij}V_k(\eta_i - \eta_j) = 0$ for all $(i, j)$, hence $\eta^* L_k \eta = 0$.
Since $L_k$ is Hermitian positive semidefinite, $\eta^* L_k \eta = 0$ implies $L_k \eta = 0$, i.e., $\eta \in \mathrm{null}\, L_k$. This allows us to assert $\mathrm{null}\, L_k \neq \mathcal{S}_k$ because $\eta \notin \mathcal{S}_k$. Then by Lemma 1 the graph $\Gamma_k$ is not connected.
3 Observability and connectivity
In this section we establish the equivalence between observability and connectivity. Then we present a corollary on an interesting special case followed by a relevant numerical example. We end the section with a theorem on detectability. Below is our main result.
Theorem 1
The following are equivalent.

1. The array is observable.

2. The interconnection graph is connected.

3. All the eigengraphs are connected.
Proof. 1 ⇒ 2. Suppose that the interconnection graph is not connected. Hence $\mathrm{null}\, L \neq \mathcal{S}$ by Lemma 1, where $L$ denotes the Laplacian of the interconnection graph. Since $\mathcal{S} \subseteq \mathrm{null}\, L$ there must exist a vector $z$ that satisfies both $z \in \mathrm{null}\, L$ and $z \notin \mathcal{S}$. Choose the initial conditions of the systems (2) so as to satisfy $x(0) = z$. Then by Lemma 2 we have $C_{ij}(x_i(t) - x_j(t)) = 0$ for all $t \geq 0$ and all $(i, j)$. However there exists at least one pair $(i, j)$ for which $x_i(0) \neq x_j(0)$ because $z \notin \mathcal{S}$. Hence the array cannot be observable.
2 ⇒ 3. Suppose that the eigengraph corresponding to $\lambda_k$ is not connected for some $k$. Then by Lemma 3 there exists a vector $z$ that satisfies $z \in \mathrm{null}\, L$ and $z \notin \mathcal{S}$. That is, $\mathrm{null}\, L \neq \mathcal{S}$. This implies by Lemma 1 that the interconnection graph is not connected.
3 ⇒ 1. Suppose that the array is not observable. Then we can find some initial conditions for which the solutions of the systems (2) yield
$C_{ij}(x_i(t) - x_j(t)) = 0$ for all $t \geq 0$ and all $(i, j)$, yet $x_a(\cdot) \neq x_b(\cdot)$ for some pair $(a, b)$.
Let $z = x(0)$. By Lemma 2 we have $z \in \mathrm{null}\, L$ because the relative outputs vanish identically. We also have $z \notin \mathcal{S}$ because $x_a(\cdot) \neq x_b(\cdot)$. Hence $\mathrm{null}\, L \neq \mathcal{S}$. Combining $z \in \mathrm{null}\, L$ and $z \notin \mathcal{S}$ (in the light of $\mathcal{S} \subseteq \mathrm{null}\, L$) implies that $\mathrm{null}\, L$ is a strict superset of $\mathcal{S}$. Let $r = \dim(\mathrm{null}\, L)$. (Note that $r > n$.) Let $U$ and $W$ be two full column rank matrices satisfying $\mathrm{range}\, U = \mathrm{null}\, L$ and $\mathrm{range}\, W = \mathcal{S}$. Recall that the unobservable subspaces $\mathcal{N}_{ij}$ are invariant with respect to the matrix $A$. As a consequence $\mathrm{null}\, L$ is invariant with respect to the matrix $I_q \otimes A$. To see that let $u \in \mathrm{null}\, L$ with blocks $u_1, \ldots, u_q$. We can write
$u^*Lu = 0 \implies u_i - u_j \in \mathcal{N}_{ij}$ for all $(i, j)$ $\implies Au_i - Au_j \in \mathcal{N}_{ij}$ for all $(i, j)$ $\implies ((I_q \otimes A)u)^* L ((I_q \otimes A)u) = 0 \implies L(I_q \otimes A)u = 0,$
where for the last implication we use the fact that $L$ is Hermitian positive semidefinite. Now, due to invariance, there have to exist matrices $P$ and $Q$ that satisfy
$(I_q \otimes A)U = UP, \qquad (I_q \otimes A)W = WQ.$
Let $\eta$ be an eigenvector of $P$ with eigenvalue $\lambda$, i.e., $P\eta = \lambda\eta$. Also, let $z = U\eta$ and note $(I_q \otimes A)z = \lambda z$. Note that $z \in \mathrm{null}\, L$ and that $\lambda$ is an eigenvalue of $A$. Now we can write
Let us employ the partitions $z = [z_1^{\rm T}\ \cdots\ z_q^{\rm T}]^{\rm T}$ and $\eta = [\eta_1^{\rm T}\ \cdots\ \eta_q^{\rm T}]^{\rm T}$. Then $(I_q \otimes A)z = \lambda z$ yields $Az_i = \lambda z_i$ for all $i$. Choose an arbitrary index and define and . Note that and . Moreover, since both and belong to , we have . Now observe