Invariant Subspaces of Riesz Spectral Systems with Application to Fault Detection and Isolation*
Abstract
A large class of hyperbolic and parabolic partial differential equation (PDE) systems, such as reaction-diffusion processes, can be represented as Riesz spectral (RS) systems when expressed in the infinite-dimensional (InfD) framework. Compared to finite-dimensional (FinD) systems, the geometric theory of InfD systems for addressing certain fundamental control problems, such as disturbance decoupling and fault detection and isolation (FDI), is rather limited due to the complexity of InfD systems and the existence of various notions of invariant subspaces. Interestingly, these invariance notions, which are all equivalent for FinD systems, differ in the InfD representation. In this work, the equivalence of various types of invariant subspaces defined for RS systems is first investigated. This enables one to define and characterize the unobservability subspace for RS systems. Specifically, necessary and sufficient conditions are derived for the equivalence of various types of conditioned invariant subspaces. Moreover, by using duality properties, corresponding results for controlled invariant subspaces are developed. It is then shown that the finite rank of the output operator makes it possible to derive algorithms for computing invariant subspaces that, under certain conditions and unlike methods in the literature, converge in a finite number of steps. A geometric FDI methodology for RS systems is then developed by invoking the introduced invariant subspaces. Finally, necessary and sufficient conditions for solvability of the FDI problem are provided and analyzed.
Index Terms: Riesz spectral (RS) systems, infinite-dimensional systems, fault detection and isolation, geometric control approach.
1 Introduction
The fault detection and isolation (FDI) problem of dynamical systems has increasingly attracted the interest of researchers during the past two decades [1, 2, 3]. Advances in control theory have led to the development of various capabilities for the control of quite complex dynamical systems. Due to the complexity of these controlled systems, one has to investigate and develop more sophisticated FDI strategies and methodologies [1].
A broad class of dynamical systems, ranging from chemical processes in the petroleum industry to heat transfer and compression processes in gas turbine engines, is represented by a set of partial differential equations (PDEs). A large class of hyperbolic and parabolic PDE systems can be represented and formulated as Riesz spectral (RS) systems in an infinite-dimensional (InfD) Hilbert space [4]. The mathematical control theory of systems governed by PDEs has seen considerable progress in the past four decades [5, 6, 7]. The control theory of PDEs has been extended from that of ordinary differential equations (ODEs) by generally invoking two methodologies. The first is developed through approximation methods and the second through exact methods. In the former approach, one first approximates the original PDE by an ODE system (using, for example, finite element or finite difference methods), and then applies the established control theory of ODEs to the approximated PDE model [8, 9, 10]. In contrast, the latter, or exact, approach tackles the PDE system holistically and without invoking any approximation [11, 12].
Through application of approximate methodologies, the FDI problem of PDEs and InfD systems has been investigated in the literature in, e.g., [8, 10, 13] and [14]. In [8], by using a geometric control approach, the FDI problem of a quasi-linear parabolic PDE system is addressed. A Lyapunov-based method is proposed in [10] for FDI of a class of parabolic PDEs. However, given that in the above works the error dynamics analysis is based on singular perturbation theory, only sufficient conditions for solvability of the FDI problem are provided in [8, 10, 13].
By using an array of sensors, the FDI problem of a beam structure has been investigated in [15]. In [9], by applying a finite difference method, a hyperbolic PDE is first approximated by a 2D Roesser model, and a geometric FDI approach is then developed. Finally, the FDI problem of InfD systems is investigated in [16, 17, 18] by using exact methods, where an adaptive parameter estimation scheme is used to detect and estimate the fault severity.
The geometric theory of finite-dimensional (FinD) linear systems was introduced in [19, 20, 21, 22], where fundamental problems such as the disturbance decoupling and FDI problems have been addressed. The geometric FDI approach has been extended to affine nonlinear systems in [23, 24]. The FDI problem of Markovian jump linear systems is investigated in [25, 26]. By applying a discrete-event-based FDI logic, geometric FDI approaches for linear and nonlinear systems have been extended in [27] and [28]. Also, in [29] the geometric FDI approach is equipped with a method to enhance the robustness of the detection filters with respect to disturbance and noise signals. However, the geometric FDI approach has not yet been investigated for InfD linear systems in general, and RS systems in particular. In this work, we develop for the first time in the literature a geometric FDI methodology for RS systems.
In this work, we consider certain invariant subspaces, such as invariant and conditioned invariant subspaces, for RS systems. For InfD systems, there are various definitions of invariant and conditioned invariant subspaces that are all equivalent for FinD systems. Therefore, in this work, necessary and sufficient conditions for the equivalence of various conditioned invariant subspaces are first formally established for regular RS systems (this notion is specified formally in the next section). This result subsequently plays a crucial role in the solvability of the FDI problem. Next, by introducing an unobservability subspace, we formulate the FDI problem in a geometric framework and derive necessary and sufficient conditions for solvability of the problem. By utilizing duality notions, necessary and sufficient conditions for the equivalence of controlled invariant subspaces are also derived.
It should be pointed out that in [30] we considered real diagonalizable RS systems. In this paper, we investigate invariant subspaces in more detail and derive the results for a more general class of RS systems as compared to those considered in [30]. More specifically, the RS operator that is considered in this paper can have complex eigenvalues as well as finitely many multiple eigenvalues. Moreover, the FDI problem for only a diagonal RS system was introduced in [30], whereas in this paper we derive necessary and sufficient conditions for solvability of the FDI problem for a more general class of RS systems.
As shown in [31, 32, 33], for a general InfD system, the algorithms that are used to compute invariant subspaces do not converge in a finite number of steps. However, as we shall see subsequently, by using the results that are obtained in Section 3 and under certain conditions one can compute invariant subspaces of regular RS systems in a finite number of steps. Specifically, we develop two schemes that converge in a finite number of steps for computing the conditioned invariant and unobservability subspaces.
To summarize, and in view of the above discussion, the main contributions of this paper, all developed for the first time in the literature, are as follows:

Necessary and sufficient conditions for the equivalence of various conditioned invariant subspaces for RS systems are obtained and analyzed. In the literature, only sufficient conditions for the equivalence of conditioned invariant subspaces of multi-input multi-output InfD systems are given. In this work, however, we provide a single necessary and sufficient condition.

By using duality properties, necessary and sufficient conditions for equivalence of various controlled invariant subspaces are provided.

The unobservability subspace for RS systems is introduced, and algorithms for computing this subspace that converge in a finite number of steps are proposed and derived.

By taking advantage of the introduced subspaces, the FDI problem of RS systems is formulated and necessary and sufficient conditions for solvability of the FDI problem are developed and provided.
The remainder of this paper is organized as follows. In Section 2, RS systems are reviewed. Invariant subspaces are introduced, developed, and analyzed in Section 3. In Section 4, the FDI problem is formulated and necessary and sufficient conditions for its solvability are provided. A numerical example is provided in Section 5 to demonstrate the capability of our proposed strategy.
Finally, Section 6 provides the conclusions.
Notation: The subspaces (finite- and infinite-dimensional) are denoted by , , . The notations and denote the closure and orthogonal complement of the subspace , respectively. We use the notation when every vector of is orthogonal to all the vectors of . Without any confusion we use the notation to denote the conjugate of a complex number . The sets of positive integers, complex numbers, and real numbers are denoted by , , and , respectively. The notation denotes the set . Consider a real subspace (). The corresponding complex subspace is defined as all vectors that can be expressed as , where . The maps between two FinD vector spaces are designated by , , . The notations , , denote the maps between two vector spaces such that at least one of them is an InfD vector space. Specifically, we use the notations and to denote the identity operator on the FinD and InfD vector spaces, respectively. denotes the set of all bounded operators defined on . The domain of an unbounded operator is denoted by . The strongly continuous (C_0) semigroup generated by is denoted by .
The term denotes the resolvent set of the operator (that is, all such that exists and is a bounded operator). The set of all eigenvalues of is designated by . The largest real interval is denoted by . The other notations are defined within the text of the paper.
2 Background
In this section, we review some of the basic concepts associated with the class of RS systems that is studied in detail in this paper.
2.1 The Riesz Spectral (RS) Systems
Consider the following infinite-dimensional (InfD) system
(1) ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t)
where x(t), u(t), and y(t) denote the state, input, and output vectors, respectively, and the state space is a real InfD separable Hilbert space equipped with the inner product ⟨·,·⟩. Moreover, we consider the following finite rank output operator
(2) 
and the finite rank operator is defined as , where and .
Moreover, we assume that the model (1) represents a well-posed system. This implies that the solution of system (1) is continuous with respect to the initial condition [11]. This assumption is equivalent to stating that A is closed and is the infinitesimal generator of a strongly continuous (C_0) semigroup T_A(t). A C_0 semigroup is a family of operators T(t), t ≥ 0, for which the following conditions hold ([11], Definition 2.1.2):

T(t + s) = T(t)T(s) for all t, s ≥ 0.

T(0) = I.

If t → 0+, then T(t)x_0 → x_0 for all x_0 in the state space.
Note that the solution of system (1) is given by x(t) = T_A(t)x_0 + ∫_0^t T_A(t − s)Bu(s) ds [11], where x_0 denotes the initial condition. The following definitions are crucial for specifying the target system that is considered in this paper.
Definition 1.
([11], Definition 2.3.1) The set of vectors {φ_n}, n ≥ 1, is called a Riesz basis for the Hilbert space if

span{φ_n : n ≥ 1} is dense in the space.

There exist two positive numbers m and M (independent of N) such that for any N and any scalars α_1, …, α_N, we have m Σ_{n=1}^{N} |α_n|² ≤ ‖Σ_{n=1}^{N} α_n φ_n‖² ≤ M Σ_{n=1}^{N} |α_n|², where ‖·‖ denotes the norm induced from the inner product.
It can be shown ([11], Section 2.3) that if {φ_n} is a Riesz basis for the Hilbert space, then there exists a set of vectors {ψ_n} such that ⟨φ_n, ψ_m⟩ = δ_nm (δ_nm denotes the Kronecker delta) for all n, m ≥ 1. In other words, the φ_n's and ψ_n's are biorthonormal vectors [11]. The following lemma provides an important property of the Riesz basis.
Lemma 1.
([11], Lemma 2.3.2b) Consider a Riesz basis {φ_n} of the Hilbert space with biorthonormal vectors {ψ_n}. Then every x in the space can be uniquely represented as x = Σ_{n=1}^{∞} ⟨x, ψ_n⟩ φ_n.
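As a concrete illustration of Definition 1 and Lemma 1, consider the standard heat-equation example (adapted from [11]; the symbols z, ξ below are illustrative and not taken from the present development), whose eigenvectors form an orthonormal, hence Riesz, basis:

```latex
% 1-D heat equation on (0,1) with Dirichlet boundary conditions:
A z = \frac{d^{2}z}{d\xi^{2}}, \qquad
D(A) = \{\, z \in H^{2}(0,1) : z(0) = z(1) = 0 \,\},
\qquad \text{on } L^{2}(0,1).
% Eigenvalues and eigenvectors:
\lambda_{n} = -n^{2}\pi^{2}, \qquad
\phi_{n}(\xi) = \sqrt{2}\,\sin(n\pi\xi), \qquad n \geq 1.
% The \phi_n form an orthonormal basis, hence a Riesz basis with
% m = M = 1, and are self-biorthonormal (\psi_n = \phi_n); by Lemma 1,
z = \sum_{n=1}^{\infty} \langle z, \phi_{n} \rangle\, \phi_{n}
\quad \text{for every } z \in L^{2}(0,1).
```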
To define a regular RS operator, we need the following projection operator for each eigenvalue λ_n of A [34], namely

(3) P_{λ_n} = (1/2πi) ∮_{Γ_n} (λI − A)^{-1} dλ

where n ∈ I (I is an index set for the eigenvalues of A), and Γ_n is a simple closed curve surrounding only the eigenvalue λ_n. The operator P_{λ_n} represents the projection onto the subspace of generalized eigenvectors of A corresponding to λ_n, that is, the subspace spanned by all φ satisfying (λ_n I − A)^k φ = 0 for some positive integer k.
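For a simple eigenvalue the contour integral above can be evaluated explicitly; the following sketch (a standard residue computation, with φ_n and ψ_n denoting the eigenvector and its biorthonormal partner) shows that the projection is then rank one:

```latex
% Near a simple eigenvalue \lambda_n, the resolvent of a Riesz
% spectral operator has a simple pole:
(\lambda I - A)^{-1}x
  = \frac{\langle x, \psi_{n}\rangle\,\phi_{n}}{\lambda - \lambda_{n}}
    + (\text{terms analytic at } \lambda_{n}),
% so the residue theorem applied to (3) yields the rank-one projection
P_{\lambda_{n}} x
  = \frac{1}{2\pi i}\oint_{\Gamma_{n}} (\lambda I - A)^{-1}x\, d\lambda
  = \langle x, \psi_{n}\rangle\,\phi_{n}.
```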
Definition 2.
[34] The operator is called a regular RS operator, if

The eigenvalues have finite multiplicity, and all but finitely many of them are simple.

The (generalized) eigenvectors of the operator A form a Riesz basis for the state space (but defined on the field ℂ), and consequently Σ_n P_{λ_n} = I (the identity operator).
Remark 1.
As we shall see subsequently, to derive a necessary condition for solvability of the FDI problem, it is necessary that a bounded perturbation of (that is, where is a bounded operator) is also a regular RS operator. This property holds if , where [34] (Theorem 1). Therefore, in this paper it is assumed that the operator satisfies the above condition. It should be pointed out that a large class of RS systems, including discrete RS systems satisfy this condition [35]. \IEEEQEDclosed
If the operator A in the system (1) is a regular RS operator and the operators B and C are bounded and of finite rank, we designate the system (1) as a regular RS system. Moreover, the system (1) is well-posed if and only if (this is a feasible assumption from the applications point of view) [2]. Also, according to Definitions 1 and 2, one can show that [35]
(4) 
where denotes the number of (generalized) eigenvectors corresponding to the eigenvalues (if is a distinct eigenvalue then , and if is repeated we have ). Also, ’s and ’s are the (generalized) eigenvectors and the corresponding biorthonormal vectors of , respectively.
Given that we are interested in RS systems that are defined on the field ℝ, we need to work with eigenspaces instead of eigenvectors (the eigenvalues and eigenvectors in (4) can be complex). If an eigenvalue is real, the corresponding eigenspace is equal to the range of the corresponding projection defined in (3). Let λ and λ̄ be a pair of complex conjugate eigenvalues of A. Since A is a real operator, it is easy to show that if φ is a (generalized) eigenvector corresponding to λ, then φ̄ is a (generalized) eigenvector corresponding to λ̄ (the conjugate of λ). The real eigenspace corresponding to λ and λ̄ is constructed from the real and imaginary parts of the (generalized) eigenvectors corresponding to λ, whose number equals the algebraic multiplicity of λ. We denote the real eigenspace of A corresponding to λ by . It should be pointed out that the dimension of the real eigenspace is m and 2m for a real and a complex eigenvalue λ, respectively (where m is the algebraic multiplicity of λ). Note that Condition 2 in Definition 2 implies that the closed span of all real eigenspaces is the entire state space (defined on ℝ). Also, we have and . Moreover, we designate a subspace contained in an eigenspace as a subeigenspace.
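The real-eigenspace construction for a complex conjugate pair can be sketched as follows (a standard computation; φ_r and φ_i denote the real and imaginary parts of an eigenvector, symbols ours):

```latex
% Let A\phi = \lambda\phi with \lambda = \sigma + i\omega, \omega \neq 0,
% and write \phi = \phi_{r} + i\phi_{i}. Matching real and imaginary
% parts of A(\phi_{r} + i\phi_{i}) = (\sigma + i\omega)(\phi_{r} + i\phi_{i}):
A\phi_{r} = \sigma\phi_{r} - \omega\phi_{i}, \qquad
A\phi_{i} = \omega\phi_{r} + \sigma\phi_{i}.
% Hence the real span of \{\phi_{r}, \phi_{i}\} is A-invariant, and on
% this basis A acts as the real rotation-scaling block
\begin{pmatrix} \sigma & \omega \\ -\omega & \sigma \end{pmatrix}.
```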
Remark 2.
It is worth noting that the only proper subeigenspace of an eigenspace corresponding to a simple eigenvalue is the zero subspace. In other words, let be an eigenspace corresponding to a simple eigenvalue . If (and ), then implies .
3 Invariant Subspaces
Invariant subspaces play a prominent role in the geometric control theory of dynamical systems [19, 36, 22, 33]. For the FDI problem (which is formally defined in Section 4), one needs to work with three invariant subspaces, namely the invariant, conditioned invariant, and unobservability subspaces. To investigate the disturbance decoupling problem (refer to [19] for more details), one deals with controlled invariant and controllability subspaces, which are dual to the conditioned invariant and unobservability subspaces, respectively [21].
In the literature, invariant and conditioned invariant subspaces have been introduced for InfD systems [36, 31, 32, 4]. Due to the complexity of InfD systems, various kinds of invariant subspaces are available (although these are all equivalent for FinD systems). Necessary and sufficient conditions for the equivalence of invariant subspaces have been obtained in the literature [11]. However, for the equivalence of conditioned invariant subspaces, the available results are limited to sufficient conditions. In the following subsections, we first review invariant subspaces and provide necessary and sufficient conditions for the equivalence of conditioned invariant subspaces for regular RS systems. Then, by invoking duality properties, necessary and sufficient conditions for the equivalence of controlled invariant subspaces are formally shown. Moreover, an unobservability subspace for RS systems is also introduced.
Generally, for InfD systems, the algorithms that are developed to compute invariant subspaces require an infinite number of steps to converge. In this section, it is shown that the finite rank of the output operator enables us, for the first time in the literature, to develop algorithms for computing the conditioned invariant and unobservability subspaces that converge in a finite number of steps.
3.1 Invariant Subspace
There are two different definitions related to the invariance property. Unlike for FinD systems, these definitions are not equivalent for InfD systems. In this subsection, we review these definitions and investigate various types of unobservable subspaces of the RS system (1).
Definition 3.
[36]

The closed subspace V is called A-invariant if A(V ∩ D(A)) ⊆ V.

The closed subspace V is T_A(t)-invariant if T_A(t)V ⊆ V for all t ≥ 0, where T_A(t) denotes the semigroup generated by A.
For FinD systems, items 1) and 2) in the above definition are equivalent; however, for InfD systems, item 2) is stronger than item 1). In other words, every T_A(t)-invariant subspace is A-invariant; however, the converse is not valid in general [36]. In the geometric control theory of dynamical systems, one needs subspaces that are T_A(t)-invariant. Since dealing with T_A(t)-invariant subspaces is more challenging than with A-invariant subspaces, we are interested in cases where the two notions are equivalent. For a general InfD system, a sufficient condition for this equivalence is given in [36], which is quite a restrictive and limited condition. However, the following lemma provides necessary and sufficient conditions for the T_A(t)-invariance property.
Lemma 2.
([11], Lemma 2.5.6) Consider an infinitesimal generator A (more general than RS operators), its corresponding C_0 semigroup T_A(t), and a closed subspace V. Then V is T_A(t)-invariant if and only if V is (λI − A)^{-1}-invariant for all λ in the largest real interval contained in the resolvent set of A.
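In finite dimensions the invariance notions of Definition 3 and Lemma 2 all coincide, which can be checked numerically. The sketch below (illustrative matrices, not the operators of the paper) verifies that an A-invariant subspace is also invariant under the matrix exponential e^{At} and under the resolvent (λI − A)^{-1}:

```python
# Finite-dimensional sanity check of the invariance equivalences: for
# a matrix A and an A-invariant subspace V, the subspace is also
# invariant under the semigroup e^{At} and under the resolvent
# (lam*I - A)^{-1}.  The matrices and lam below are illustrative.
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])           # A V lies in span(V): A-invariant

def contained_in(M, basis, tol=1e-9):
    """range(M) contained in range(basis), tested by least squares."""
    coeffs, *_ = np.linalg.lstsq(basis, M, rcond=None)
    return np.linalg.norm(basis @ coeffs - M) < tol

assert contained_in(A @ V, V)                    # A-invariance
assert contained_in(expm(0.7 * A) @ V, V)        # T(t)-invariance, t = 0.7
lam = 10.0                                       # real lam in the resolvent set
R = np.linalg.inv(lam * np.eye(3) - A)
assert contained_in(R @ V, V)                    # resolvent invariance
print("all three invariance checks passed")
```

In the InfD setting only the resolvent criterion of Lemma 2 remains equivalent to semigroup invariance; the check above merely illustrates the finite-dimensional collapse of the three notions.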
Another important result on invariant subspaces for a regular RS system that is provided in [33] (Theorem IV.6) is given next.
Lemma 3.
As stated in the preceding section, the eigenvalues (and the corresponding eigenvectors) of A may be complex, and Lemma 3 is provided for complex subspaces. However, for the geometric control approach one needs to work with real subspaces. The following corollary provides necessary and sufficient conditions for the equivalence of items 1) and 2) of Definition 3 for regular RS systems and real subspaces.
Corollary 1.
Consider the regular RS system (1) and the invariant subspace . The real subspace is invariant if and only if , where ’s denote the subeigenspaces of and .
Proof.
Let denote the corresponding (generalized) eigenvectors for the eigenvalue of , where denotes the algebraic multiplicity of , and and (for ) are real numbers and vectors, respectively. Since is a regular RS operator, it follows that the eigenspace corresponding to (and its conjugate) is equal to .
(If part): Let . The corresponding complex subspace of (refer to the Notation description in Section 1) is then expressed by , where (and its conjugate) is the corresponding complex subspace to . Consequently, is invariant. By Lemma 3, is invariant. Hence, , for all and . Since and are real, by referring to the definition of we have and for all . Therefore, implying that is invariant.
(Only if part): Let be invariant. The corresponding complex subspace is also invariant. Again, by using Lemma 3, . Therefore, .
This completes the proof of the corollary.
\IEEEQEDclosed
In this work, we are mainly concerned with two important invariant subspaces of RS systems, as discussed below. We denote the largest and invariant subspaces that are contained in by and , respectively. The unobservable subspace of the system (1) is defined by . Also, the unobservable subspace of the system (1) is defined by [31]. Note that for all and is not necessarily invariant. However, as shown subsequently, by using this subspace one is able to develop an algorithm to compute the conditioned invariant subspaces in a finite number of steps. Moreover, these subspaces will be used in Section 3.3 to introduce the unobservability subspace of RS systems, where the following corollary plays a crucial role.
Corollary 2.
Consider the RS system (1), where is a regular RS operator with a bounded output operator . The unobservable subspace is the largest subspace contained in that can be expressed as , where ’s are subeigenspaces of and .
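A finite-dimensional (modal truncation) analogue of Corollary 2 can be checked numerically: for a diagonal A with distinct eigenvalues, the kernel of the Kalman observability matrix coincides with the span of those eigenvectors annihilated by the output map. The eigenvalues and output row below are illustrative:

```python
# Finite-dimensional (modal truncation) illustration of Corollary 2:
# for a diagonal A with distinct eigenvalues, the unobservable
# subspace is spanned exactly by the eigenvectors e_i with C e_i = 0.
import numpy as np

lam = np.array([-1.0, -4.0, -9.0, -16.0, -25.0])   # distinct modes
A = np.diag(lam)
C = np.array([[1.0, 0.0, 2.0, 0.0, 1.0]])          # modes 2 and 4 unseen

# Kernel of the Kalman observability matrix = unobservable subspace.
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(5)])
_, s, Vt = np.linalg.svd(O)
kernel = Vt[np.sum(s > 1e-9):].T                   # basis of ker(O)

# Prediction in the spirit of Corollary 2: span{e_2, e_4}
# (0-based columns 1 and 3 of the identity).
pred = np.eye(5)[:, [1, 3]]
assert kernel.shape[1] == 2
for M, B in ((kernel, pred), (pred, kernel)):      # mutual containment
    c, *_ = np.linalg.lstsq(B, M, rcond=None)
    assert np.linalg.norm(B @ c - M) < 1e-8
print("unobservable subspace = span{e_2, e_4}")
```

The same span-of-subeigenspaces structure is what Corollary 2 asserts for the InfD regular RS case.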
3.2 Conditioned Invariant Subspaces
In this subsection, the conditioned invariant subspaces of the system (1) are defined and characterized. Not surprisingly, various definitions that are all equivalent for FinD systems, but not equivalent to one another in the InfD setting, are available for conditioned invariant subspaces [31]. This subsection mainly concentrates on deriving necessary and sufficient conditions under which these definitions are equivalent. Let us first define the notion of a conditioned invariant subspace.
Definition 4.
[31]

The closed subspace W is designated as (C, A)-invariant if A(W ∩ ker C ∩ D(A)) ⊆ W.

The closed subspace W is feedback (C, A)-invariant if there exists a bounded operator D such that W is invariant with respect to A + DC, as per Definition 3, item 1).

The closed subspace W is conditioned invariant if there exists a bounded operator D such that (i) the operator A + DC is the infinitesimal generator of a C_0 semigroup; and (ii) W is invariant with respect to this semigroup, as per Definition 3, item 2).
It should be pointed out that in the literature the conditioned invariant subspace is also called invariant [31]. It can be shown that Definition 4, item 3) ⇒ item 2) ⇒ item 1) [31]. A sufficient condition for the equivalence of the above definitions is developed in [31].
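In finite dimensions, the smallest conditioned invariant subspace containing a given subspace L is computed by the classical recursion W_0 = L, W_{k+1} = L + A(W_k ∩ ker C), which terminates in at most n steps; this is the finite-step behaviour that this paper recovers for regular RS systems. A sketch with illustrative matrices (exact arithmetic via sympy keeps the subspace tests honest):

```python
# Finite-dimensional sketch of the classical recursion for the
# smallest conditioned invariant ((C,A)-invariant) subspace
# containing L:  W_0 = L,  W_{k+1} = L + A (W_k  intersect  ker C).
import sympy as sp

def sum_space(B1, B2):
    """Basis of span(B1) + span(B2)."""
    cols = B1.row_join(B2).columnspace()
    return sp.Matrix.hstack(*cols) if cols else sp.zeros(B1.rows, 0)

def intersect(B1, B2):
    """Basis of span(B1) ∩ span(B2) via the kernel of [B1  -B2]."""
    ns = B1.row_join(-B2).nullspace()
    vecs = [B1 * v[:B1.cols, :] for v in ns]
    nz = [v for v in vecs if not v.is_zero_matrix]
    return sp.Matrix.hstack(*nz) if nz else sp.zeros(B1.rows, 0)

def smallest_conditioned_invariant(A, kerC, L):
    W = L
    while True:
        Wn = sum_space(L, A * intersect(W, kerC))
        if Wn.cols == W.cols:      # nondecreasing sequence stalls
            return W               # -> fixed point reached
        W = Wn

A = sp.Matrix([[0, 0, 0], [1, 0, 0], [0, 1, 0]])    # A e1 = e2, A e2 = e3
C = sp.Matrix([[0, 0, 1]])
kerC = sp.Matrix.hstack(*C.nullspace())             # span{e1, e2}
L = sp.Matrix([[1], [0], [0]])                      # e.g. a fault direction
W = smallest_conditioned_invariant(A, kerC, L)
print("dim of smallest conditioned invariant subspace:", W.cols)
```

Here the dimension grows 1 → 2 → 3 before the recursion stalls, illustrating termination in at most n = 3 steps.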
Lemma 4.
[31] A given (,)invariant subspace is conditioned invariant, if is closed and .
In this subsection, we show that Definition 4, items 1) and 2) are equivalent for the system (1) when the finite rank output operator is represented by (2) (even if ). Moreover, we derive necessary and sufficient conditions for conditioned invariance. These results subsequently enable one to derive the necessary and sufficient conditions for solvability of the FDI problem. Towards this end, we first need the following lemma.
Lemma 5.
Consider the closed subspace , where (and not necessarily orthogonal) and . Then
(5) 
where , and is a finite subset of .
Proof.
It follows readily that is dense in . Hence, the subspace is also dense in . Furthermore, since is a FinD subspace, it is a closed subspace. Therefore, by using Proposition 1.7.17 in [37] (which states that the sum of two closed subspaces is also closed if at least one of them is FinD), it follows that is closed. Since is closed and dense in , we have . This completes the proof of the lemma. \IEEEQEDclosed
The following lemma shows the equivalence of (,) and feedback (,)invariance properties for a general InfD system provided that the output operator is a finite rank operator (as considered to be satisfied by the model (2) in this paper).
Lemma 6.
Proof.
As pointed out earlier, every feedback (,)invariant subspace is (,)invariant. Therefore, we only show the converse. By definition, we have . Since , and is separable ( is a closed subspace of the separable Hilbert space ), there exists a basis for such that . Let us rearrange the basis such that the first vectors construct the FinD subspace , where and . It should be pointed out that from (2) (i.e., the finite-rankness of ) and the fact that , it follows that . Note that if , it implies that , and therefore it is invariant, and by setting it is also feedback (,)invariant. Now, without loss of generality, we assume that for all (if , one can remove the projection of on and denote it by . Since , it follows that ). Given that , by using Lemma 5 one obtains , where .
We now show how one can construct a bounded operator such that . Let , . We construct such that . Note that , , and is a bounded operator. It follows that is an invertible operator from onto . In other words, is a bijective map. Therefore, is a monic matrix (i.e., ), and consequently always there is a solution for , such that , where . A solution to is an extension of as , where , , , and is the embedding operator from to . Since is FinD, it follows that is bounded. Now, set . Since , one can write , where and . Given that is (,)invariant, it follows that , and by definition of , we obtain . Therefore, , and consequently is a feedback (,)invariant subspace. This completes the proof of the lemma. \IEEEQEDclosed
As shown in [33], conditioned invariance and (,)invariance are not generally equivalent. Moreover, if the output operator is not of finite rank, feedback (,)invariance and (,)invariance are not equivalent [33, 31]. However, Lemma 6 shows the equivalence between feedback (,)invariance and (,)invariance in the sense of Definition 4, if the output operator is of finite rank and .
The following lemma shows that conditioned invariance is independent of the particular bounded operator . This result allows one to derive necessary and sufficient conditions for conditioned invariance.
Lemma 7.
Consider a conditioned invariant subspace such that , and consider a bounded operator such that . Then .
Proof.
By invoking Lemma 2, we have , for all . Let us set (by using the Hille-Yosida theorem ([11], Theorem 2.1.12), which shows that for every infinitesimal generator there exists a real number such that , and hence the above set is nonempty). Based on the results of Lemma 2, we need to show that . First, let , where is defined as in the proof of Lemma 6 and . Since , and is bounded and bijective, it then follows that . Let and . Given that , it follows that
(6) 
Since is invariant, one obtains , and consequently we have .
Next, by following along the steps provided below we show that if then .

Let be a basis of and set for (as ). Since one can write , where and .

We show that ’s are linearly independent. Towards this end, assume are linearly dependent and therefore we obtain , where for . Hence, one can write , where (since ’s are basis vectors), and . Consequently, given and by the definition of we have and
. This is in contradiction with the fact that (recall that ). Therefore, the ’s are linearly independent. Since the resolvent operators are bijective and is FinD, we obtain , and consequently is a basis of .
We show that , where , and ’s are defined as above. Set . As shown above in (6), we have . Since it follows that . Given that is a basis of , we obtain .
Finally, for every one can write , where and . As we have shown above and . Therefore, , and consequently . This completes the proof of the lemma. \IEEEQEDclosed
A bounded operator is called a friend of the conditioned invariant subspace if . The set of all friend operators of is denoted by . Let and consider a bounded operator . As in FinD systems [21] (page 31), it follows (by using the above lemma) that a sufficient condition for to be a friend of is .
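The friend construction can be made concrete in finite dimensions: given a (C, A)-invariant subspace W (i.e., A(W ∩ ker C) ⊆ W), an output injection D with (A + DC)W ⊆ W is obtained by solving the linear equation D(CW) = −PAW, where P is the orthogonal projection onto the complement of W; this echoes the construction in the proof of Lemma 6. All matrices below are illustrative:

```python
# Finite-dimensional sketch of constructing a "friend" D of a
# conditioned invariant subspace W, i.e. an output injection with
# (A + D C) W contained in W.  Solving  D (C W) = -P A W  suffices,
# since then  P (A + D C) W = P A W + P D (C W)
#                           = P A W - P (P A W) = 0   (P idempotent).
import numpy as np

A = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])
C = np.array([[0.0, 1.0, 0.0]])
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])      # A(W ∩ ker C) = span{A e1} ⊆ W

# Orthogonal projection onto the complement of span(W).
Q, _ = np.linalg.qr(W)
P = np.eye(3) - Q @ Q.T

# Solve D (C W) = -P A W in the least-squares sense.
CW = C @ W                      # shape (1, 2)
D = np.linalg.lstsq(CW.T, -(P @ A @ W).T, rcond=None)[0].T   # (3, 1)

# Verify D is a friend: (A + D C) W stays inside span(W).
ADC_W = (A + D @ C) @ W
coeffs, *_ = np.linalg.lstsq(W, ADC_W, rcond=None)
assert np.linalg.norm(W @ coeffs - ADC_W) < 1e-9
print("friend found, D^T =", D.ravel())
```

In the InfD setting, Lemma 7 guarantees that the resulting invariance does not depend on which friend is chosen.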
We are now in a position to state the main results of this subsection leading us to the necessary and sufficient conditions for the conditioned invariance of regular RS systems.
Theorem 1.
Proof.
(If part): Let . We show that can be spanned by the eigenspaces of , for a bounded (and therefore, according to Corollary 1, is invariant). By invoking Lemma 7, we need to show this property for only one . Without loss of generality, assume that (if , redefine as ).
First, we show that one can assume without loss of generality. Since is invariant, it follows that [33]. Also, one can assume that . If is FinD, then is FinD, and hence . Let be InfD. By following along the same steps as in Lemma 6, we define the basis of such that for all , and is a basis for , where (the existence of such a basis is guaranteed). Let us set , where it follows that . Therefore, without loss of generality, we assume .
Second, to show the result we first construct the bounded operator such that (i) , and (ii) . Define and . In other words, is the largest subspace in such that and . Moreover, by the definition of , we obtain . Since , we have . Now, consider the operator such that and define (since , there always exists a solution for ). First, we show that is also an (,)invariant subspace in two steps as follows.

Let . We show that (if , we have and we skip this step). Since , and , it follows that . By the definition of