Identifiability of linear dynamic networks

Abstract

Dynamic networks are structured interconnections of dynamical systems (modules) driven by external excitation and disturbance signals. In order to identify their dynamical properties and/or their topology consistently from measured data, we need to make sure that the network model set is identifiable. We introduce the notion of network identifiability, as a property of a parameterized model set, that ensures that different network models can be distinguished from each other when performing identification on the basis of measured data. Different from the classical notion of (parameter) identifiability, we focus on the distinction between network models in terms of their transfer functions. For a given structured model set with a pre-chosen topology, identifiability typically requires conditions on the presence and location of excitation signals, and on presence, location and correlation of disturbance signals. Because in a dynamic network, disturbances cannot always be considered to be of full-rank, the reduced-rank situation is also covered, meaning that the number of driving white noise processes can be strictly less than the number of disturbance variables. This includes the situation of having noise-free nodes.

1 Introduction

Dynamic networks are structured interconnections of dynamic systems, and they appear in many different areas of science and engineering. Because of the spatial connections of systems, as well as a trend to enlarge the scope of control and optimization, interesting problems of distributed control and optimization have appeared in several domains of application, among which robotic networks, smart grids, transportation systems and multi-agent systems. An example of a (linear) dynamic network is sketched in Figure 1, where excitation signals r_i and disturbance signals v_i, together with the linear dynamic modules G_ji, induce the behaviour of the node signals w_i.

Figure 1: Dynamic network where node variables w_i are the outputs of the summation points indicated by circles.

When structured systems like the one in Figure 1 become of interest for analysing performance and stability, it is appropriate to also consider the development of (data-driven) models. In system identification literature, where the majority of the work is focused on open-loop or feedback controlled (multivariable) systems, there is an increasing interest in data-driven modeling problems related to dynamic networks. Particular questions that can be addressed are, e.g.:

  (a) Identification of a single selected module G_ji, on the basis of measured signals w and r;

  (b) Identification of the full network dynamics;

  (c) Identification of the topology of the network, i.e. the Boolean interconnection structure between the node signals w_i.

The problem (a) of identifying a single module in a dynamic network has been addressed in [22], where a framework has been introduced for prediction error identification in dynamic networks, and classical closed-loop identification techniques have been generalized to the situation of structured networks. Using this framework, predictor input selection ([7]) has been addressed to decide on which node signals need to be measured for identification of a particular network module. Errors-in-variables problems have been addressed in ([6]) to deal with the situation when node signals are measured subject to additional sensor noise.

The problem (b) of identifying the full network can be recast into a multivariable identification problem, that can then be addressed with classical identification methods [20]. Either structured model sets can then be used, based on an a priori known interconnection structure of the network, or a fully parametrized model set, accounting for each and every possible link between node signals.

The problem (c) of topology detection has been addressed in e.g. [16] where Wiener filters have been used to reconstruct the network topology. In [4] a Bayesian viewpoint has been taken and regularization techniques have been applied to obtain sparse estimates. Topology detection in a large scale network has been addressed in [18] using compressive sensing methods, and in a biological network in [28] using also sparse estimation techniques. Causal inference has been addressed in [17].

Not only in problem (b) but also in problem (c), the starting point is most often to model all possible links between node signals, in other words to parametrize all possible modules in the network. However, when identifying such a full network model, care has to be taken that different network models can indeed be distinguished on the basis of the data that is available for identification. In [11] specific local conditions have been formulated for injectivity of the mapping from the network transfer function (the transfer from external signals r to node signals w) to network models. This is done outside an identification context and without considering (non-measured) disturbance inputs. Uniqueness properties of a model set for purely stochastic networks (without external excitations r) have been studied in [16], where the assumption has been made, as in many of the works in this domain, that each node is driven by an independent white noise source.

In this paper we are going to address the question: under which conditions on the experimental setup and choice of model set, different network models in the set can be distinguished from each other on the basis of measured data? The typical conditions will then include presence and location of external excitations, presence of and modelled correlations between disturbance signals, and modelled network topology.
This question will be addressed by introducing the concept of network identifiability as a property of a parametrized set of network models. We will study this question for the situations that

  • Disturbance terms are allowed to be correlated over time but also over node signals, i.e. v_k and v_l, for k ≠ l, can be correlated.

  • The vector disturbance process v can be of reduced rank, i.e. it has a driving white noise process whose dimension is strictly less than the dimension of v. This includes the situation that some disturbance terms v_k are absent (zero).

  • Direct feedthrough terms are allowed in the network modules.

The presence of possible correlations between disturbances limits the opportunities to break down the modelling of the network into several multi-input single-output (MISO) problems, as e.g. done in [22]. For capturing these correlations among disturbances, all relevant signals will need to be modelled jointly in a multi-input multi-output (MIMO) approach.

If the size of a dynamic network increases, the assumption of having a full-rank noise process becomes more and more unrealistic. Different node signals in the network are likely to experience noise disturbances that are highly correlated with, and possibly dependent on, other node signals in their direct neighbourhood. One could think e.g. of a network of temperature measurements in a spatial area, where unmeasured external effects (e.g. wind) affect all measured nodes in a strongly related way. In the identification literature little attention is paid to this situation. In a slightly different setting, the classical closed-loop system (Figure ?) also has this property: by considering the input to the process to be disturbance-free, the two-dimensional vector noise process becomes reduced-rank. Closed-loop identification methods typically work around this issue by either replacing the external excitation signal by a stochastic noise process, as e.g. in the joint-IO method ([3]), or by only focussing on predicting the output signal and thus identifying the plant model (and not the controller), as e.g. in the direct method ([15]). In econometrics, dynamic factor models have been developed to deal with the situation of high-dimensional data and rank-reduced noise ([8]).

The notion of identifiability is a classical notion in system identification, but the concept has been used in different settings. The classical definition as present in [14] is a consistency-oriented concept concerned with estimates converging to the true underlying system (system identifiability) or to the true underlying parameters (parameter identifiability). In the current literature, identifiability has become a property of a parametrized model set, referring to a unique one-to-one relationship between parameters and predictor model, see e.g. [15]. As a result, a clear distinction has been made between aspects of data informativity and identifiability. For an interesting account of these concepts see also the more recent work [2]. In the current literature the structure/topology of the considered systems has been fixed and restricted to the common open-loop or closed-loop cases. In our network situation we have to deal with additional structural properties in our models. These properties concern e.g. the choices where external excitation and disturbance signals are present, and how they are modeled, whether or not disturbances can be correlated, and whether modules in the network are known and fixed, or parametrized in the model set. In this paper we will particularly address the structural properties of networks, and we will introduce the concept of network identifiability as the ability to distinguish network models in identification. Rather than focussing on the uniqueness of parameters, we will focus on uniqueness of network models.

We are going to employ the dynamic network framework as described in [22], and we will introduce and analyse the concept of network identifiability of a parametrized model set. We will build upon the earlier introduction of the problem and preliminary results presented in [23] and [24], but we will reformulate the starting points and definitions, as well as extend the results to more general situations in terms of correlated noise, reduced-rank noise, and absence of delays in network modules.

This paper will proceed by defining the network setup (Section 2), and subsequently formulating the models, model sets and identifiability concept (Section 3). In Section 4 conditions to ensure network identifiability are presented for various situations, after which some examples are provided in Section 5. In Section 6, results are provided that exploit the particular interconnection structure that is present in the model set, after which a discussion section follows and conclusions are formulated.

2 Dynamic network setting

Following the basic setup of [22], a dynamic network is built up out of L scalar internal variables or nodes w_j, j = 1, …, L, and K external variables r_k, k = 1, …, K. Each internal variable is described as:

w_j(t) = \sum_{l=1, l \neq j}^{L} G_{jl}(q) w_l(t) + \sum_{k=1}^{K} R_{jk}(q) r_k(t) + v_j(t),

where q^{-1} is the delay operator, i.e. q^{-1} w_j(t) = w_j(t-1);

  • G_jl, l = 1, …, L, l ≠ j, are proper rational transfer functions, and the single transfers G_jl are referred to as modules in the network.

  • r_k are external variables that can directly be manipulated by the user;

  • v_j is process noise, where the vector process v = [v_1 ⋯ v_L]^T is modelled as a stationary stochastic process with rational spectral density Φ_v(ω), such that there exists a p-dimensional white noise process e, p ≤ L, with covariance matrix Λ > 0, such that

v(t) = H(q) e(t).
The noise model requires some further specification. For p = L, referred to as the full-rank noise case, H(q) is square, stable, monic and minimum-phase. The situation p < L will be referred to as the singular or rank-reduced noise case. In this latter situation it will be assumed that the node signals w_j are ordered in such a way that the first p nodes are affected by a full-rank noise process, thus allowing a representation for H(q) that satisfies

H(q) = \begin{bmatrix} H_a(q) \\ H_b(q) \end{bmatrix},

with H_a square and monic, while H is stable and has a stable left inverse H^†, satisfying H^† H = I_p, the p × p identity matrix.
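As a numeric illustration of this structure, consider a static sketch with hypothetical numbers (L = 3, p = 2): the upper p × p block H_a is monic (unit diagonal), and a left inverse H^† with H^† H = I_p exists because H has full column rank.

```python
import numpy as np

L, p = 3, 2
Ha = np.array([[1.0, 0.3],
               [0.0, 1.0]])        # square, monic (unit diagonal)
Hb = np.array([[0.4, 0.7]])        # remaining L - p rows
H = np.vstack([Ha, Hb])            # L x p noise filter H = [Ha; Hb]

H_dagger = np.linalg.pinv(H)       # a left inverse in this static case
assert np.allclose(H_dagger @ H, np.eye(p))

# the driving white noise has dimension p < L: the noise process is rank-reduced
Lam = np.array([[1.0, 0.2], [0.2, 0.5]])   # covariance of e, full rank p
Sigma_v = H @ Lam @ H.T                    # static covariance of v
assert np.linalg.matrix_rank(Sigma_v) == p < L
```

The rank deficiency of Sigma_v is the static counterpart of the singular noise spectrum discussed below.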

When combining the node signals we arrive at the full network expression

Using obvious notation this results in the matrix equation:

w(t) = G(q) w(t) + R(q) r(t) + H(q) e(t).

The network transfer function that maps the external signals r and e into the node signals w is denoted by:

T(q) = (I - G(q))^{-1} [ R(q) \;\; H(q) ],

with submatrices T_{wr}(q) = (I - G(q))^{-1} R(q) and T_{we}(q) = (I - G(q))^{-1} H(q).
The identification problem to be considered is the problem of identifying the network dynamics (G, R, H, Λ) on the basis of measured variables w and r.

The dynamic network formulation above is related to what has been called the Dynamic Structure Function (DSF) as considered for disturbance-free systems in [1]. In particular, state space structures can be included by considering every module to be restricted to having first order dynamics only.
In terms of notation, for any rational transfer function matrix A(q) we will denote its direct feedthrough term A^∞ := lim_{z→∞} A(z).
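The matrix equation and the network transfer function above can be illustrated with a static-gain sketch, where hypothetical numeric gains stand in for the transfer functions G(q), R(q) and H(q):

```python
import numpy as np

rng = np.random.default_rng(0)

# static-gain stand-ins for G(q), R(q), H(q); G is hollow (no self-loops)
G = np.array([[0.0, 0.5, 0.0],
              [0.2, 0.0, 0.3],
              [0.0, 0.4, 0.0]])
R = np.eye(3)                      # assumption: every node externally excited
H = np.array([[1.0, 0.0],
              [0.5, 1.0],
              [0.2, 0.7]])         # rank-reduced noise: p = 2 < L = 3

r = rng.normal(size=3)
e = rng.normal(size=2)

# w = G w + R r + H e  =>  w = (I - G)^{-1} (R r + H e)
w = np.linalg.solve(np.eye(3) - G, R @ r + H @ e)
assert np.allclose(G @ w + R @ r + H @ e, w)

# network transfer matrix T = (I - G)^{-1} [R  H] maps (r, e) -> w
T = np.linalg.solve(np.eye(3) - G, np.hstack([R, H]))
assert np.allclose(T @ np.concatenate([r, e]), w)
```

The same relations hold frequency-by-frequency for dynamic modules, with the matrices evaluated at each frequency.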

3 Network model set and identifiability

In order to arrive at a definition of network identifiability we need to specify a network model and a network model set.

We include the noise covariance matrix in the definition of a model, as is common for multivariable models [20]. The noise model is defined to be non-square in the case of rank-reduced noise (p < L).

In this paper we will consider model sets for which all models in the set share the same noise rank p. We will use parameters θ only as a vehicle for creating a set of models. We will not consider any particular properties of the mapping from parameters to network models.

The question whether in a chosen model set, the models can be distinguished from each other through identification, has two important aspects:

  • a structural —or identifiability— aspect: is it possible at all to distinguish between models, given the presence and location of external excitation signals and noise disturbances, and

  • a data informativity aspect: given the presence and location of external excitation signals and noise disturbances, are the actual signals informative enough to distinguish between models during a particular identification experiment.

We will refer to the first (structural) aspect as the notion of network identifiability. For consistency of model estimates in an actual identification experiment, it is then required that the model set is network identifiable and that the external excitation signals are sufficiently informative. This separation of concepts allows us to study the structural aspects of networks, separate from the particular choice of test signals in identification.

Based on the network equations (Equation 3)-( ?) we can rewrite the system as

w(t) = T_{wr}(q) r(t) + \bar{v}(t), \qquad \bar{v}(t) := T_{we}(q) e(t).

Many identification methods, among which prediction error and subspace identification methods, base their model estimates on second-order statistical properties of the measured data. These properties are represented by auto-/cross-correlation functions or spectral densities of the signals w and r. On the basis of the expressions (Equation 6)-( ?), and noticing that r is measured while e is not, the model objects that generate the second-order properties of (w, r) are typically given by the transfer function T_{wr}(q) and the spectral density Φ_{v̄}(ω), with the spectral density of a stationary process a being defined as Φ_a(ω) := \sum_{τ} E[a(t) a^T(t-τ)] e^{-iωτ}, i.e. the discrete-time Fourier transform of the autocorrelation function, with E the expected value operator. By utilizing (Equation 5)-( ?), we can now write for a parametrized model M(θ):

Φ_{v̄}(ω, θ) = T_{we}(e^{iω}, θ) Λ(θ) T_{we}^*(e^{iω}, θ),

where (·)^* denotes complex conjugate transpose. As a result we arrive at a definition of network identifiability that addresses the property that network models are uniquely determined from T_{wr}(q, θ) and Φ_{v̄}(ω, θ).

We have chosen to use the spectral density Φ_{v̄}(ω, θ) in the definition, rather than its spectral factor as e.g. done in [23]. This is motivated by the objective to include the situation of rank-reduced noise, where Φ_{v̄} will be singular, and the handling of possible direct feedthrough terms and algebraic loops in the network. This will be further addressed and clarified in Section 4.

Before moving to the formulation of verifiable conditions for network identifiability, we present an example of a disturbance free network to illustrate that a model set can be globally identifiable at one model, but not at another model. The example is taken from [23].
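To give a flavour of this phenomenon, the following numeric sketch (a hypothetical two-node network with a single excitation, not necessarily the example referred to) shows a model set that is identifiable at models where the feedback path is active, but not at models where it vanishes:

```python
import numpy as np

def T_of(a, b):
    """Map r1 -> (w1, w2) for the two-node network w1 = a*w2 + r1, w2 = b*w1."""
    G = np.array([[0.0, a], [b, 0.0]])
    R = np.array([[1.0], [0.0]])
    return np.linalg.solve(np.eye(2) - G, R)

# at models with b != 0, T determines (a, b): b = T2/T1 and 1 - a*b = 1/T1
assert not np.allclose(T_of(0.5, 0.4), T_of(0.6, 0.4))

# at models with b = 0, node 2 carries no signal and a is invisible in T:
assert np.allclose(T_of(0.5, 0.0), T_of(0.9, 0.0))   # same T, different models
```

At b = 0 the two distinct models produce identical external behaviour, so they cannot be distinguished from data.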

4 Conditions for verifying network identifiability

In this Section we will derive conditions for verifying global network identifiability. To this end the implication ( ?) of Definition ? will be reformulated into a condition on the network transfer function T(q, θ) that is easier to verify. This reformulation is done for three different situations, each specifying particular assumptions on the presence or absence of delays in the modules of the network.

First we make the following assumption:

This Assumption may look rather restrictive, but it can actually be shown that, for the analysis of network identifiability at a particular model, the assumption is not restrictive. Additionally, the required value of p as well as the required ordering of node signals can be determined from data. This topic will be further addressed in Section 7.

Before being able to formulate verifiable conditions for identifiability, we need to collect some properties of reduced-rank spectra, in order to properly handle the situation that p < L.

4.1 Factorizations of reduced-rank spectra

Proof. Part (a) is the standard spectral factorization theorem, see [26]. Part (b) can be verified by direct computation.

This spectral factorization result shows that for the modelling of the noise process v we actually have two options. The first is a noise model with a p-dimensional (full-rank) white noise process e, and an L × p noise filter H(q) of which the upper square part is monic. The second option is a noise model H̆(q), with an L-dimensional (possibly reduced-rank) white noise process ĕ, and a monic square noise filter. In this paper we will dominantly use the first (non-square) representation, while the second (square) representation will be effectively utilized in many of the proofs.
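In a static (single-frequency, full-rank) analog, the uniqueness of the monic spectral factor reduces to the uniqueness of an LDL^T-type decomposition, which can be sketched as follows (hypothetical numbers):

```python
import numpy as np

# static analog of the full-rank factorization Phi = H * Lambda * H^T:
# for a positive definite covariance, the monic (unit-diagonal) lower-triangular
# factor and the diagonal Lambda are unique -- an LDL^T decomposition
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
Phi = A @ A.T + 3 * np.eye(3)      # a positive definite "spectrum" at one frequency

C = np.linalg.cholesky(Phi)        # lower triangular, positive diagonal
d = np.diag(C)
H = C / d                          # rescale columns -> monic (unit-diagonal) factor
Lam = np.diag(d**2)                # diagonal covariance of the driving noise

assert np.allclose(H @ Lam @ H.T, Phi)
assert np.allclose(np.diag(H), 1.0)
```

In the dynamic case the Cholesky factor is replaced by a stable, minimum-phase spectral factor, but the monic normalization plays the same role in pinning down a unique decomposition.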

Now we are ready to formulate conditions for network identifiability. To this end we will distinguish between different situations, depending on the presence of delays in the network.

4.2 The situation of strictly proper modules

First we consider the situation that all modules in the network are strictly proper, i.e. G^∞(θ) = 0.

The proof is provided in the Appendix.

Note that the above result is valid for both full-rank (p = L) and reduced-rank (p < L) noise processes. Additionally there are no restrictions on Λ, e.g. it is not restricted to being diagonal. In [11] the transfer function T(q) has been used as a basis for dynamic structure reconstruction, in a continuous-time domain setting. The fact that the network transfer function is the object that can be uniquely identified from data has been analyzed in [23] for the situation p = L with diagonal Λ, and no algebraic loops in the networks. This has been the motivation in [23] to use the condition ( ?) as a definition of network identifiability. In the situation of rank-reduced noise, including noise-free nodes, this result is still true under the formulated condition that all modules in the network are strictly proper. The only adaptation is that the transfer function T(q, θ) is no longer square but rectangular in its dimension, i.e. L × (K + p). An equivalent formulation of ( ?) is obtained by adding the equality of the covariance matrices Λ to both sides of the implication. In this representation it is clear that, when starting from expression ( ?) in the definition of network identifiability, T(q, θ) and Λ(θ) are uniquely determined from T_{wr}(q, θ) and Φ_{v̄}(ω, θ).

4.3 The situation of modules with direct feedthrough

In order to handle the situation of having direct feedthrough terms in G(q, θ), we need to deal with the phenomenon of algebraic loops.

It can be shown (see [5]) that there are no algebraic loops in a network if and only if there exists a permutation matrix P such that P G^∞ P^T is upper triangular. We can now formulate a Proposition that is an alternative to Proposition ?.

The proof is provided in the Appendix.

For the particular situation of noise-free nodes, the result of this proposition has been applied in [24]. Proposition ? in relation to Proposition ? shows that the ability to estimate more flexible correlations between the white noise processes (Λ is not constrained in Proposition ?, while being diagonal in Proposition ?) is traded against the ability to handle direct feedthrough terms in the modules (Proposition ?). It also should be noted that the above results hold true for any particular experimental setup, i.e. for any selection of excitation signals r that are present.
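The permutation test behind the no-algebraic-loop condition can be checked algorithmically; the following elimination-based sketch (hypothetical feedthrough patterns) searches for a node ordering that triangularizes the feedthrough matrix:

```python
import numpy as np

def loop_free_order(Ginf):
    """Search for a node ordering that makes the permuted feedthrough matrix
    upper triangular; returns None when an algebraic loop prevents this.
    Ginf[j, l] != 0 means module G_jl has a direct feedthrough term."""
    remaining = list(range(Ginf.shape[0]))
    order = []
    while remaining:
        for j in remaining:
            # j may be placed if it receives no feedthrough from remaining nodes
            if all(Ginf[j, l] == 0 for l in remaining if l != j):
                order.append(j)
                remaining.remove(j)
                break
        else:
            return None   # every remaining node is part of an algebraic loop
    order.reverse()       # feedthrough "sources" go last -> upper triangular
    return order

# chain 0 -> 1 -> 2 of feedthrough terms: no algebraic loop
G_chain = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
order = loop_free_order(G_chain)
P = np.eye(3)[order]
assert np.allclose(np.tril(P @ G_chain @ P.T), 0)   # strictly upper triangular

# mutual feedthrough between nodes 0 and 1: algebraic loop, no ordering exists
G_loop = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
assert loop_free_order(G_loop) is None
```

This is simply a topological sort of the feedthrough interconnections; any cycle among the feedthrough terms corresponds to an algebraic loop.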

4.4 The situation of algebraic loops

The results of Propositions ? and ? have been derived based on conditions that guarantee that the transfer function uniquely determines the model terms. So actually this has been a reasoning that is fully based on the noise spectrum Φ_{v̄}. By incorporating more specific conditions on T(q, θ), more general situations can be handled, even including the situation of having algebraic loops in the network. We will follow a reasoning where the transfer function T(q, θ) will be required to uniquely determine the feedthrough term G^∞(θ), and, as a result, also the noise covariance matrix Λ(θ).

To this end we consider the direct feedthrough terms G^∞(θ) and T^∞(θ). Suppose that row j of G^∞(θ) has α_j parameterized elements, and row j of T^∞(θ) has β_j parameterized elements. We define the permutation matrix P_j and the permutation matrix Q_j such that all parametrized entries in the considered row of G^∞(θ) P_j are gathered on the left hand side, and all parametrized entries in the considered row of T^∞(θ) Q_j are gathered on the right hand side, with (·)_{j⋆} indicating the j-th row of a matrix.

Next we define the matrix Ť_j of dimension α_j × (K + p − β_j) as the submatrix of T^∞(θ) that is constructed by taking the row numbers that correspond to the columns of G^∞(θ) P_j that are parametrized, and by taking the column numbers that correspond to the columns of T^∞(θ) Q_j that are not parametrized. This is formalized by

We can now formulate the following identifiability result for the situation that even algebraic loops are allowed in the network.

The proof is provided in the Appendix.

In the Proposition, conditions are formulated under which the transfer function T(q, θ) will uniquely determine the direct feedthrough term G^∞(θ) and, as a result thereof, also the noise covariance matrix Λ(θ). In the context of consistent identification methods, handling the situation of algebraic loops is further discussed in [25].

4.5 Network identifiability results for full excitation

We have shown under which conditions the essential condition for global network identifiability can be equivalently formulated as the expression ( ?) on the basis of T(q, θ) and Λ(θ). We continue by showing when the implication ( ?) is satisfied in the situation that we have at least as many external excitation plus white noise inputs as we have node signals, i.e. K + p ≥ L. This leads to sufficient conditions for global network identifiability that do not depend on the particular structure of the network as present in G(q, θ).

The proof of the Theorem is added in the appendix.
Expression ( ?) is basically equivalent to a related result in [11], where a deterministic reconstruction problem is considered on the basis of a network transfer function, however without considering (non-measured) stochastic disturbance signals. Note that the condition can be interpreted as the possibility to give T^∞(θ) a leading diagonal matrix by column operations. There is an implicit requirement in the theorem that T^∞(θ) has full row rank, and therefore it does not apply to the case of Example ?. The situation of a rank-reduced matrix T^∞(θ) will be considered in Section 6.

One of the important consequences of Theorem ? is formulated in the next corollary.

The situation described in the Corollary corresponds to having a single parametrized entry in every row and every column, and thus implies that this matrix can be permuted to a diagonal matrix. Uncorrelated excitation can come from noise or external variables. Note that the result of Theorem ? can be rather conservative, as it does not take account of any possible structural conditions in the matrix G(q, θ). Additionally, the result does not apply to the situation where T^∞(θ) is not of full row rank, as in that case it can never be transformed to having a leading diagonal by column operations. This is e.g. the case in Example ?. Both structural constraints and possible reduced row rank of T^∞(θ) will be further considered in Section 6. First we will present some illustrative examples that originate from [24].
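The role of full, independent excitation can be sketched numerically: in the special case where R = I is known and not parametrized (static gains, hypothetical numbers), the network matrix is recovered uniquely from the transfer function:

```python
import numpy as np

# hypothetical 3-node static network, every node driven by its own excitation
G = np.array([[0.0, 0.5, 0.1],
              [0.2, 0.0, 0.3],
              [0.0, 0.4, 0.0]])
R = np.eye(3)                                  # R = I: full independent excitation

Twr = np.linalg.solve(np.eye(3) - G, R)        # the identifiable object T_wr

# with R = I known and not parametrized, G is recovered uniquely from T_wr
G_rec = np.eye(3) - np.linalg.inv(Twr)
assert np.allclose(G_rec, G)
assert np.allclose(np.diag(G_rec), 0.0)        # hollow diagonal is preserved
```

The inversion step shows why full-rank excitation makes the map from T to the network model injective: T_wr = (I − G)^{-1} can be inverted directly.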

5 Illustrative examples

6 Network identifiability in case of structure restrictions

When there are structure restrictions in G(q, θ), or matrix T^∞(θ) is not of full row rank, as in Example ?, the result of Theorem ? is conservative or does not apply at all. Structure restrictions in G(q, θ) are typically represented by fixing some modules, possibly to 0, on the basis of assumed prior knowledge. For these cases of structure restrictions, in [11] necessary and sufficient conditions have been formulated for satisfying ( ?) at a particular model. The conditions are formulated in terms of nullspaces and cannot be checked without knowledge of the underlying network. Since we are most interested in global identifiability of a full model set, rather than in identifiability at a particular model, we will further elaborate and generalize these conditions and present them in a form in which they can effectively be checked.

First we need to introduce some notation.
In line with the reasoning in Section 4.4, we suppose that each row j of G(q, θ) has α_j parameterized transfer functions, and row j of T(q, θ) has β_j parametrized transfer functions, and we define the permutation matrix P_j and the permutation matrix Q_j such that all parametrized entries in the considered row of G(q, θ) P_j are gathered on the left hand side, and all parametrized entries in the considered row of T(q, θ) Q_j are gathered on the right hand side. Next we define the transfer matrix Ť_j(q) of dimension α_j × (K + p − β_j), as the submatrix of T(q, θ) that is constructed by taking the row numbers that correspond to the columns of G(q, θ) P_j that are parametrized, and by taking the column numbers that correspond to the columns of T(q, θ) Q_j that are not parametrized. The following Theorem now specifies necessary and sufficient conditions for the central identifiability condition ( ?).

The proof is provided in the Appendix. The condition on the maximum number of parametrized entries in the transfer function matrix reflects the requirement that the number of parametrized transfers that map into a particular node should not exceed the total number of excitation signals plus white noise signals that drive the network. The check on the row rank of the matrices Ť_j(q) is an explicit way to check the related nullspace condition in [11]. Assumption (a.) in the Theorem refers to the situation that we do not restrict the model class to any finite-dimensional structure, but consider the situation that could be represented by a non-parametric identification of all module elements.
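The row-rank check on these submatrices can be sketched numerically for a single row; the parametrization pattern and dimensions below are hypothetical:

```python
import numpy as np

# hypothetical static check for one row j of the network matrix:
# which entries of row j of G are parametrized, and which columns of T
# (inputs r and e) carry parametrized terms in row j
G_par = np.array([False, True, True])                # alpha_j = 2 of L = 3
T_par = np.array([False, False, True, True, False])  # beta_j = 2 of K + p = 5

# a numeric value of T at some frequency (hypothetical)
rng = np.random.default_rng(2)
T = rng.normal(size=(3, 5))

# submatrix: rows of T indexed by the parametrized columns of G (row j),
# columns of T that are not parametrized in row j
T_check = T[np.ix_(np.where(G_par)[0], np.where(~T_par)[0])]

# the identifiability test asks this alpha_j x (K + p - beta_j) block
# to have full row rank
assert T_check.shape == (2, 3)
assert np.linalg.matrix_rank(T_check) == 2
```

In practice the check is performed symbolically or at a set of frequencies, since the rank condition must hold as a transfer matrix property.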

The results of this Section can be applied to Example ?.

When information about the 'true' network is used, one can obtain results that allow us to distinguish between certain networks. However, we are mainly interested in results that allow us to distinguish between all networks in a model set, since in an identification setting the true network structure and dynamics will not be known.

7 Discussion on signal ordering assumption

In Assumption ? we have formulated a condition on an ordering property of the model set. In this Section we will further discuss this Assumption and how it can be dealt with.

Our definitions of models and model sets in Section 3 only consider models that have the ordering property. So, for discussing the situation of models that do not have this property, we need to slightly adapt our definition.

First of all, if we are considering network identifiability at a particular (unordered) model, then the covariance matrix of the (square) noise model carries the information on the rank p as well as the information for re-ordering the node signals in such a way that, after reordering, the model satisfies the ordering property of Assumption 1. This can be understood by realizing that the rank of this covariance matrix equals p, and that there exists a permutation matrix Π such that, after permutation, it becomes the rank-p covariance matrix of the ordered model. That same permutation matrix can then be applied to the node signals w, to reorder them in the model so as to arrive at its ordered equivalent. So when addressing the problem of global identifiability at a particular model, the model information intrinsically contains the information on how to order the signals to satisfy the ordering property.

In more general situations, the required information for determining p and for reordering the node signals can be retrieved from the data, i.e. from T_{wr}(q) and Φ_{v̄}(ω). In particular we can observe that on the basis of

Φ_{v̄}(ω) = (I − G(e^{iω}))^{-1} Φ_v(ω) (I − G(e^{iω}))^{-*}

and invertibility of (I − G), it is clear that rank Φ_{v̄}(ω) = rank Φ_v(ω), and more specifically, by using the monicity property of H_a(q), that rank Φ_{v̄}(ω) = p. So for a particular model, p can be obtained directly from Φ_{v̄}(ω).
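This rank-detection step can be illustrated in a static sketch (hypothetical numbers): the rank of the spectrum of the noise contribution on the node signals directly reveals p.

```python
import numpy as np

L, p = 4, 2
rng = np.random.default_rng(3)

# a hollow network matrix and an L x p noise filter with monic top block
G = np.zeros((L, L)); G[1, 0] = 0.4; G[2, 1] = 0.3; G[3, 2] = 0.5
H = rng.normal(size=(L, p))
H[:p, :p] = np.eye(p) + np.tril(H[:p, :p], -1)   # make the top p x p block monic
Lam = np.array([[1.0, 0.3], [0.3, 2.0]])         # full-rank covariance of e

# static analog of Phi_vbar = (I-G)^{-1} H Lam H^* (I-G)^{-*}
W = np.linalg.solve(np.eye(L) - G, H)
Phi = W @ Lam @ W.T

# the noise rank p is visible directly in the data object Phi
assert np.linalg.matrix_rank(Phi) == p
```

In practice the rank would be estimated from a sample spectral density, with a threshold on the singular values.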
A similar situation occurs for the ordering of signals as assumed in Assumption ?, as is formulated next.

A proof is collected in the Appendix. The reasoning that underlies this result is that, under the formulated conditions, the covariance matrix can be uniquely retrieved from the data. Based on this covariance matrix, a permutation matrix can then be found that reorders the node signals into a (reordered) model that satisfies the ordering property.
The conditions of this Proposition are basically the same as the ones applied in Propositions ?, ? and ? for analyzing identifiability.

The results in this section show that the ordering property of Assumption 1 is not a restriction if we consider the identifiability of a model set at a particular model (local analysis). This is due to the fact that for that particular model, either the model information or the measurement data in the form of T_{wr}(q) and Φ_{v̄}(ω) carry enough information to find a permutation matrix that leads to a permuted model that does satisfy Assumption 1.

8 Conclusions

The objective of this paper has been to obtain conditions on the presence and location of excitation and disturbance signals and conditions on the parameterized model set such that a unique representation of the full network can be obtained. A property called global network identifiability has been defined to ensure this unique representation, and results have been derived to analyze this property for the case of dynamic networks allowing correlated noises on node signals, as well as rank-reduced noise. Three key ingredients for a network identifiable model set are: presence and location of external excitation signals, modeled correlations between disturbances, and prior (structural) knowledge on the network that is incorporated in the model set.

9 Acknowledgement

The authors gratefully acknowledge discussions with Michel Gevers and Manfred Deistler on the topic and presentation of this paper, and Manfred Deistler in particular for the suggestion of using signal spectra as a basis of model equivalence in identifiability.

10 Proof of Proposition

Since in the considered situation

(I − G(q, θ))^{-1} H(q, θ)

has an upper part which is monic, while

Φ_{v̄}(ω, θ) = (I − G)^{-1} H Λ H^* (I − G)^{-*},

it follows that (Equation 8) satisfies the conditions of the unique spectral factorization in Lemma ?a, if p < L. If p = L it satisfies the conditions of the standard spectral factorization. Therefore (I − G(q, θ))^{-1} H(q, θ) and Λ(θ) are uniquely determined by Φ_{v̄}(ω, θ), or in other words

Since the first equality is in the premise of ( ?) and the second is implied by the premise through equality of the spectra, as indicated above, the result follows directly.

11 Proof of Proposition

First we treat the full-rank situation p = L.

In this situation

and using the property that H(q, θ) is monic leads to

The algebraic loop condition now implies that the permuted matrix P(I − G^∞)P^T is upper unitriangular, and (leaving out arguments for brevity):

With Λ being diagonal and the triangular factor upper unitriangular, this represents a unique decomposition of the permuted spectrum. As a result the feedthrough term is uniquely determined from the spectrum.

Spectral factorization of Φ_{v̄} leads to a unique decomposition

with a monic, stable and minimum-phase factor that is not necessarily diagonal in its covariance. Since the covariance is full rank, there is a nonsingular matrix transforming the factorization, leading to the unique spectral decomposition:

As a result, H(q, θ) is uniquely determined from Φ_{v̄}, and the proof follows along the same steps as in the proof of Proposition ?.

Now we turn to the situation p < L.

When applying the spectral decomposition of Lemma ?b to Φ_{v̄}(ω, θ) it follows that

with square and monic, and structured according to

Since by assumption is diagonal, it follows that and

As a result

with diagonal. Then exactly the same reasoning as above, with a permutation of the signals to turn into a unitriangular matrix, shows that and therefore also is uniquely determined from .

With known, the decomposition uniquely determines from . The proof then follows the same steps as in the proof of Proposition ?.

12 Proof of Proposition

This proof consists of two steps. The first step is to use to uniquely determine the feedthrough of , i.e.

The left hand side of the above implication can be written as

Consider row of the matrix equation (Equation 9), and apply the following reasoning for each row separately. By inserting the permutation matrices and , defined in ( ?)-( ?), we obtain for row :

leading to

with . Note that, as defined by (Equation 7), .

When considering the left block of the vector equation (Equation 10), while using the expression for above, we can write

with the left block of . Now and are independent of parameter , which implies that, if has full row rank, then all the parametrized elements in are uniquely determined.
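The full row rank condition plays the same role as in a static linear equation. In the hypothetical numerical sketch below (a constant matrix K standing in for the rank condition in the proof), full row rank guarantees that x K = b has exactly one solution, recoverable through a right inverse:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2x4 matrix K with full row rank: the equation x K = b
# then determines the row vector x uniquely (K has a right inverse).
K = rng.standard_normal((2, 4))
assert np.linalg.matrix_rank(K) == 2

x_true = np.array([1.5, -0.7])
b = x_true @ K

# Recover x via the right inverse K^T (K K^T)^{-1}.
x_rec = b @ K.T @ np.linalg.inv(K @ K.T)
assert np.allclose(x_rec, x_true)
```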

Then the second step is to determine and . By writing the spectrum of as

we obtain through pre- and post-multiplication:

where . For given (from step 1), and given , this equation provides a unique , such that can be uniquely obtained from

The proof then follows the same steps as the proof of Proposition 1.

13 Proof of Theorem

a) It will be shown that under the condition of the theorem, the equality implies for all . With the definition of , the equality of the -matrices implies that we can restrict to . That same equality induces

and postmultiplication with leads to

with diagonal and full rank for all .
The left square blocks on both sides of the equation can now be inverted to deliver

Due to the zeros on the diagonal of and and the diagonal structure of and , it follows that and consequently . Then by (Equation 11) it follows that and .
b) For part (b) it needs to be shown that the implication under (a) holds true for any in . This follows directly, by a reasoning similar to the one above, if we extend the parameter set to be considered from to .

14 Proof of Theorem

We will first provide the proof for situation (1).
The left hand side of the implication ( ?) can be written as

where we use shorthand notation , and . Consider row of the matrix equation (Equation 12), and apply the following reasoning for each row separately. By inserting the permutation matrices and , defined in ( ?),( ?) we obtain for row :

leading to

with . Note that .

Sufficiency:


When considering the left block of the vector equation (Equation 13), while using the expression for above, we can write

with the left block of .
Now and are independent of , which implies that, if has full row rank, then all the parametrized elements in are uniquely determined. Then through (Equation 13) the parametrized elements in are also uniquely determined.

By assumption we know that one solution to (Equation 12) is given by and . Since the solution is unique, and and constitute a possible solution, we know that and must be the only solution. This proves the validity of the implication ( ?).

Necessity of condition 2:


If the matrix is not full row rank, then it has a non-trivial left nullspace. Let the rational transfer matrix of dimension be in the left nullspace of . Then there also exists a proper, rational and stable in the left nullspace of . Then (Equation 14) can also be written as

By the formulated assumptions (a) and (b) it holds that each parameterized transfer function can be any proper rational transfer function, and that these parameterized transfer functions do not share any parameters. This implies that and refer to two different model rows of in the model set that generate the same network transfer function . Hence implication ( ?) cannot hold.
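This non-uniqueness argument can be sketched numerically. In the hypothetical example below, a constant matrix with linearly dependent rows admits a left-nullspace vector, so two different parameter vectors generate exactly the same products, just like the two indistinguishable model rows above (the matrix values are illustrative assumptions):

```python
import numpy as np

# Hypothetical 3x4 matrix K whose rows are linearly dependent, so it
# has a non-trivial left nullspace and x K = b has multiple solutions.
K = np.array([[1.0, 0.0, 2.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 1.0, 3.0, 1.0]])  # row 3 = row 1 + row 2
assert np.linalg.matrix_rank(K) < 3

x1 = np.array([1.0, 2.0, 0.0])
x2 = x1 + np.array([1.0, 1.0, -1.0])  # add a left-nullspace vector

# Two different "models" produce identical data.
assert np.allclose(x1 @ K, x2 @ K)
```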

Necessity of condition 1:


If , then will be a tall matrix, which can never have full row rank. The necessity of condition 1 then follows immediately from the necessity of the row rank condition on .

Proof of situation (2): For all :


For every we can construct with related of full row rank, and the reasoning presented before fully applies. If for some we cannot construct this full row rank matrix, then there exists a model in the model set that is not identifiable, and hence the model set is not globally network identifiable in .

15 Proof of Proposition

The expression for is given by (discarding arguments ):

while . Because is monic, the expression for reduces to:

We are now going to show that under the different conditions, can be uniquely derived from and .

Situation of strictly proper modules (Proposition ?).
Since we know that it follows immediately from (Equation 16) that , showing that can be directly obtained from .

Situation of diagonal and no algebraic loops (Proposition ?).
If is diagonal then also is diagonal. We consider (Equation 16). Based on the algebraic loop condition, we can construct a permutation matrix such that is upper unitriangular. Then:

With being diagonal and upper unitriangular, this represents a unique decomposition of the permuted spectrum. As a result is uniquely determined from .
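The construction of such a permutation can be sketched on a small hypothetical example: without algebraic loops, the graph of direct (feedthrough) interconnections is acyclic, and listing the nodes in a suitable order makes the permuted matrix upper unitriangular. All values below are illustrative assumptions:

```python
import numpy as np

# Hypothetical 4-node network without algebraic loops: the matrix of
# direct feedthrough terms describes an acyclic graph.
G_inf = np.zeros((4, 4))
G_inf[2, 0] = 0.5   # direct path from node 1 to node 3
G_inf[1, 2] = -1.0  # direct path from node 3 to node 2
G_inf[3, 1] = 2.0   # direct path from node 2 to node 4

# List targets before sources (reverse topological order of the DAG),
# and build the corresponding permutation matrix.
order = [3, 1, 2, 0]
P = np.eye(4)[order]
M = P @ (np.eye(4) - G_inf) @ P.T

# The permuted matrix is upper unitriangular: ones on the diagonal,
# zeros strictly below it.
assert np.allclose(np.diag(M), 1.0)
assert np.allclose(np.tril(M, k=-1), 0.0)
```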

Situation of algebraic loops (Proposition ?).
The proof of Proposition ? shows that under the given conditions, is uniquely determined from . Then (Equation 16) leads to the expression

showing that can be uniquely determined.

In all three situations considered, the matrix is uniquely determined from and possibly . Then there exists a permutation matrix that reorders the signals in such a way that is a matrix of which the left upper part is full rank. If we apply this reordering of signals, determined by , to the node signals , then we arrive at a permuted model that has the ordering property, according to Assumption ?.

Footnotes

  1. Upper unitriangular means upper triangular with 1's on the diagonal.
