Relaxing Integrity Requirements for Attack-Resilient Cyber-Physical Systems


Ilija Jovanov and Miroslav Pajic. This work was supported in part by the NSF CNS-1652544 and CNS-1505701 grants, and the Intel-NSF Partnership for Cyber-Physical Systems Security and Privacy. This material is also based on research sponsored by the ONR under agreements number N00014-17-1-2012 and N00014-17-1-2504. Some of the preliminary results have appeared in [7]. I. Jovanov and M. Pajic are with the Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708 USA. E-mail: {ilija.jovanov, miroslav.pajic}@duke.edu.
Abstract

The increase in network connectivity has also resulted in several high-profile attacks on cyber-physical systems. An attacker that manages to access a local network could remotely affect control performance by tampering with sensor measurements delivered to the controller. Recent results have shown that with network-based attacks, such as Man-in-the-Middle attacks, the attacker can introduce an unbounded state estimation error if measurements from a suitable subset of sensors contain false data when delivered to the controller. While these attacks can be addressed with the standard cryptographic tools that ensure data integrity, their continuous use would introduce significant communication and computation overhead. Consequently, we study the effects of intermittent data integrity guarantees on system performance under stealthy attacks. We consider linear estimators equipped with a general type of residual-based intrusion detectors (including $\chi^2$ and SPRT detectors), and show that even when integrity of sensor measurements is enforced only intermittently, the attack impact is significantly limited; specifically, the state estimation error is bounded or the attacker cannot remain stealthy. Furthermore, we present methods to: (1) evaluate the effects of any given integrity enforcement policy in terms of reachable state-estimation errors for any type of stealthy attacks, and (2) design an enforcement policy that provides the desired estimation error guarantees under attack. Finally, on three automotive case studies we show that even with less than 10% of authenticated messages we can ensure satisfactory control performance in the presence of attacks.

Attack-resilient state estimation, attack detection, Kalman filtering, cyber-physical systems security, linear systems.

I Introduction

Several high-profile incidents have recently exposed vulnerabilities of cyber-physical systems (CPS) and drawn attention to the challenges of providing security guarantees as part of their design. These incidents cover a wide range of application domains and system complexity, from attacks on large-scale infrastructure, such as the 2016 breach of the Ukrainian power grid [38], to the StuxNet virus attack on an industrial SCADA system [13], as well as attacks on controllers in modern cars (e.g., [3]) and unmanned aerial vehicles [29].

There are several reasons for such a number of security-related incidents affecting control of CPS. The tight interaction between information technology and the physical world has greatly increased the attack vector space. For instance, an adversarial signal can be injected into measurements obtained from a sensor using non-invasive attacks that modify the sensor's physical environment, as shown in attacks on GPS-based navigation systems [37, 9]. An even more important reason is the network connectivity that is prevalent in CPS. An attacker that manages to access a local control network could remotely affect control performance by tampering with sensor measurements and actuator commands in order to force the plant into any desired state, as illustrated in [32]. From the controls perspective, attacks over an internal system network, such as the Man-in-the-Middle (MitM) attacks where the attacker inserts messages anywhere in the sensor-controller-actuator pathway, can be modeled as additional malicious signals injected into the control loop via the system's sensors and actuators [35].

While the interaction with the physical world introduces new attack surfaces, it also provides opportunities to improve system resilience against attacks. The use of control techniques that employ a physical model of the system's dynamics for attack detection and attack-resilient state estimation has drawn significant attention in recent years (e.g., [35, 36, 27, 4, 34, 24, 1, 26, 25, 30], and a recent survey [17]). One line of work is based on the use of unknown input observers (e.g., [34, 27]) and non-convex optimization for resilient estimation (e.g., [4, 25]), while another focuses on attack-detection and estimation guarantees in systems with standard Kalman filter-based state estimators (e.g., [22, 21, 11, 12, 24, 23, 10]). In the latter works, estimation residue-based failure detectors, such as $\chi^2$ [22, 23] and sequential probability ratio test (SPRT) detectors [12], are employed for intrusion detection. Still, irrespective of the utilized attack detection mechanism, after compromising a suitable subset of sensors, an intelligent attacker can significantly degrade control performance while remaining undetected (i.e., stealthy). For instance, for resilient state estimation techniques as in [4, 25], measurements from at least half of the sensors should not be tampered with [4, 31], while [22, 11] capture attack requirements for Kalman filter-based estimators. The reason for such conservative results lies in the common initial assumption that once a sensor or its communication to the estimator is compromised, all values received from the sensor can be potentially corrupted – i.e., integrity of the data received from these sensors cannot be guaranteed.

On the other hand, most of network-based attacks, including MitM attacks, can be avoided with the use of standard cryptographic tools. For example, to authenticate data and ensure integrity of received communication packets, a common approach is to add a message authentication code (MAC) to the transmitted sensor measurements. Therefore, data integrity requirements can be imposed by the continuous use of MACs in all transmissions from a sufficient subset of sensors. However, the overhead caused by the continuous computation and communication of authentication codes can limit their use. For instance, adding MAC bits to networked control systems that employ Controller Area Networks (CAN) may not be feasible due to the message length limitation (e.g., only 64 payload bits per packet in the basic CAN protocol), while splitting them into several communication packets significantly increases the message transmission time [16]. To illustrate this, consider two sensors periodically transmitting measurements over a shared network. As presented in Figure 1(a), without authentication (i.e., if transmitted data contain no MAC bits) the communication packets will be schedulable but the system would be vulnerable to false-data injection attacks. Yet, if all measurements from both sensors are authenticated, with the increase in the packet size due to authentication overhead, it is not possible to schedule transmissions from both sensors in every communication frame (Figure 1(b)). Finally, a feasible schedule exists if MAC bits are attached to every other measurement packet transmitted by each sensor (Figure 1(c)).

Fig. 1: Communication schedule for periodic messages (with a common period) from two sensors over a shared network: (a) a feasible schedule for non-authenticated messages (i.e., when MAC bits are not attached to transmitted packets); (b) there is no feasible schedule when all messages are authenticated; and (c) if data integrity is only intermittently enforced (e.g., by adding MAC bits only to every other packet), scheduling of the messages becomes feasible.
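To make the scheduling tension behind Figure 1 concrete, the following back-of-the-envelope check compares bus utilization under the three authentication policies. It is only a sketch: the bus rate, sampling period, frame size, and the assumption that a truncated MAC costs one extra frame per tagged packet are hypothetical placeholders, and a real CAN-bus analysis would also account for arbitration, bit stuffing, and response-time interference.

# Rough bus-utilization comparison for the three policies in Fig. 1.
# All numbers are hypothetical placeholders.
BUS_RATE = 125_000   # bits/s (a common low-speed CAN rate)
PERIOD   = 0.0032    # s, common sampling period of both sensors
SENSORS  = 2
FRAME    = 111       # bits per packet: 64 payload bits + frame overhead

def utilization(mac_rate):
    # mac_rate: fraction of packets carrying a MAC; each tagged packet
    # is assumed to need one extra frame for the (truncated) MAC bits.
    frames_per_period = SENSORS * (1 + mac_rate)
    return frames_per_period * FRAME / (BUS_RATE * PERIOD)

for rate, label in [(0.0, "no MACs"), (1.0, "every packet"), (0.5, "every other")]:
    print(f"{label:>12}: utilization = {utilization(rate):.2f}")
# no MACs -> 0.56 (schedulable), every packet -> 1.11 (infeasible),
# every other -> 0.83 (schedulable), mirroring Fig. 1(a)-(c).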

Consequently, in this paper we focus on state estimation in systems with intermittent data integrity guarantees for sensor measurements delivered to the estimator. Specifically, we study the performance of linear filters equipped with residual-based intrusion detectors in the presence of attacks on sensor measurements. We build on the system model from [22, 11, 23] by capturing that the use of authentication mechanisms at intermittent time points ensures that sensor measurements received at these points are valid. To keep our discussion and results general, we consider a wide class of detection functions that encompasses commonly used detectors, including $\chi^2$ and SPRT detectors. We show that even when integrity of communicated sensor data is enforced only intermittently and the attacker is fully aware of the times of the enforcement, the attack impact is significantly limited; concretely, either the state estimation error remains bounded or the attacker cannot remain stealthy. This holds even when communication from all sensors to the estimator can be compromised, as well as in any other case where otherwise (i.e., without integrity enforcements) an unbounded estimation error can be introduced.

Furthermore, to facilitate the use of intermittent data integrity enforcement for control of CPS in the presence of network-based attacks, we introduce an analysis and design framework that addresses two challenges. First, we introduce techniques to evaluate the effects of any given integrity enforcement policy in terms of reachable state-estimation errors for any type of stealthy attacks. Note that methods to evaluate potential state estimation errors due to attacks are considered in [23, 12, 22]. However, given that the previous work considers system architectures without intermittent use of authentication, these techniques result in overly conservative estimates of reachable regions or they cannot capture the effects of intermittent integrity guarantees on the estimation error. Second, we present a method to design an enforcement policy that provides the desired estimation error guarantees for any attack signal inserted via compromised sensors. The developed framework also facilitates tradeoff analysis between the allowed estimation error and the rate at which data integrity should be enforced – i.e., the required system resources such as communication bandwidth as we have presented in [14].

The rest of the paper is organized as follows. In Section II, we introduce the problem, including the system and attack models. In Section III, we analyze the impact of stealthy attacks in systems without integrity enforcements and formally define intermittent integrity enforcement policies. Section IV focuses on state estimation guarantees when data integrity is at least intermittently enforced. We then introduce a methodology to analyze effects of integrity enforcement policies as well as design suitable policies that ensure the desired estimation error even in the presence of attacks (Section V). Finally, in Section VI, we present case studies that illustrate effectiveness of our approach, before providing final remarks in Section VII.

I-A Notation and Terminology

The transpose of a matrix $A$ is specified as $A^T$, while the $i$-th element of a vector $x$ is denoted by $x^{(i)}$. The Moore–Penrose pseudoinverse of a matrix $A$ is denoted by $A^{\dagger}$. In addition, $\|A\|_p$ denotes the $p$-norm of a matrix and, for a positive definite matrix $\Sigma$, $\|x\|_{\Sigma} = \sqrt{x^T \Sigma x}$. $\mathcal{N}(A)$ denotes the null space of the matrix $A$. Also, $\mathrm{diag}(\cdot)$ indicates a square matrix with the quantities inside the brackets on the diagonal and zeros elsewhere, while $\mathrm{blkdiag}(\cdot)$ denotes a block-diagonal operator. We denote a positive definite and positive semidefinite matrix as $A \succ 0$ and $A \succeq 0$, respectively, while $\det(A)$ stands for the determinant of the matrix. Also, $I_n$ denotes the $n$-dimensional identity matrix, and $\mathbf{0}_{m \times n}$ denotes an $m \times n$ matrix of zeros. We use $\mathbb{R}$, $\mathbb{N}$ and $\mathbb{N}_0$ to denote the sets of reals, natural numbers and nonnegative integers, respectively. As most of our analysis considers bounded-input systems, we refer to an eigenvalue $\lambda$ as an unstable eigenvalue if $|\lambda| \geq 1$.

For a set $\mathcal{S}$, we use $|\mathcal{S}|$ to denote the cardinality (i.e., size) of the set. In addition, for a set $\mathcal{K} \subseteq \mathcal{S}$, we denote by $\mathcal{K}^{\complement}$ the complement set of $\mathcal{K}$ with respect to $\mathcal{S}$ – i.e., $\mathcal{K}^{\complement} = \mathcal{S} \setminus \mathcal{K}$. The projection vector $\mathbf{i}_j$ denotes the row vector (of the appropriate size) with a 1 in its $j$-th position being the only nonzero element of the vector. For a vector $x \in \mathbb{R}^p$, we use $x_{\mathcal{K}}$ to denote the projection from the set $\mathcal{S}$ to the set $\mathcal{K}$ ($\mathcal{K} \subseteq \mathcal{S}$) obtained by keeping only the elements of $x$ with indices from $\mathcal{K}$. (Formally, $x_{\mathcal{K}} = \mathbf{P}_{\mathcal{K}} x$, where $\mathbf{P}_{\mathcal{K}} = [\,\mathbf{i}_{k_1}^T\ \dots\ \mathbf{i}_{k_{|\mathcal{K}|}}^T\,]^T$ and $\mathcal{K} = \{k_1, \dots, k_{|\mathcal{K}|}\}$.) Finally, the support of a vector $x$ is the set $\mathrm{supp}(x) = \{\, i \mid x^{(i)} \neq 0 \,\}$.

II Problem Description

Before introducing the problem formulation, we describe the considered system and its architecture (shown in Figure 2), as well as the attacker model.

II-A System Model without Attacks

We consider an observable linear time-invariant (LTI) system whose evolution without attacks can be represented as

$x_{t+1} = A x_t + B u_t + w_t, \qquad y_t = C x_t + v_t, \qquad\qquad (1)$

where $x_t \in \mathbb{R}^n$ and $u_t \in \mathbb{R}^m$ denote the plant's state and input vectors at time $t$, while the plant's output vector $y_t \in \mathbb{R}^p$ contains measurements provided by $p$ sensors from the set $\mathcal{S}$. Accordingly, the matrices $A$, $B$, and $C$ have suitable dimensions. Also, $w_t$ and $v_t$ denote the process and measurement noise; we assume that $x_0$, $w_t \sim \mathcal{N}(0, Q)$, and $v_t \sim \mathcal{N}(0, R)$ are independent Gaussian random variables.

Fig. 2: System architecture – by launching Man-in-the-Middle (MitM) attacks, the attacker can inject adversarial signals into plant measurements obtained from system sensors.

Furthermore, the system is equipped with an estimator in the form of a Kalman filter. Given that the Kalman gain usually converges in only a few steps, to simplify the notation we assume that the system is in steady state before the attack. Hence, the Kalman filter estimate $\hat{x}_t$ is updated as

$\hat{x}_{t+1} = A \hat{x}_t + B u_t + K z_{t+1}, \qquad\qquad (2)$
$K = P C^T (C P C^T + R)^{-1}, \qquad\qquad (3)$

where $P$ is the (steady-state) estimation error covariance matrix and $R$ is the sensor noise covariance matrix. Also, the residue $z_t$ at time $t$ and its covariance matrix $\Sigma_z$ are defined as

$z_t = y_t - C(A\hat{x}_{t-1} + B u_{t-1}), \qquad \Sigma_z = \mathbb{E}[z_t z_t^T] = C P C^T + R. \qquad (4)$

Finally, the state estimation error $\Delta x_t$ is defined as the difference between the plant's state and the Kalman filter estimate –

$\Delta x_t = x_t - \hat{x}_t. \qquad\qquad (5)$
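To illustrate the estimator setup in (1)-(5), the sketch below constructs a steady-state Kalman filter for a toy two-state system; the matrices are illustrative stand-ins (not the paper's case-study models), and the steady-state covariance $P$ is obtained from the discrete algebraic Riccati equation.

# Minimal steady-state Kalman filter sketch for (1)-(5); illustrative system.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.01, 0.10],
              [0.00, 0.98]])
C = np.eye(2)               # two sensors, p = 2
Q = 0.01 * np.eye(2)        # process-noise covariance
R = 0.05 * np.eye(2)        # measurement-noise covariance

P = solve_discrete_are(A.T, C.T, Q, R)          # steady-state covariance
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)    # Kalman gain, cf. (3)
Sigma_z = C @ P @ C.T + R                       # residue covariance, cf. (4)

def kf_step(x_hat, y):
    # One filter update (zero input assumed): prediction, residue (4),
    # and the corrected estimate, cf. (2).
    x_pred = A @ x_hat
    z = y - C @ x_pred
    return x_pred + K @ z, z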

In addition to the estimator, we assume that the system is equipped with an intrusion detector. We consider a general case where the detection function of the intrusion detector is defined as

$g_t = \sum_{k=t-T+1}^{t} c_{t-k}\, z_k^T \Sigma_z^{-1} z_k. \qquad\qquad (6)$

Here, $T$ is the length of the detector's time window, and $c_i$, for $i = 0, \dots, T-1$, are predefined non-negative coefficients, with $c_0$ being strictly positive. The above formulation captures both fixed window size detectors, where $T$ is a constant, as well as detectors where the time window size satisfies $T = t + 1$. Also, the definition of the detection function covers a wide variety of commonly used intrusion detectors, such as $\chi^2$ (obtained for $T = 1$ and $c_0 = 1$) and sequential probability ratio test (SPRT) detectors previously considered in these scenarios [20, 18, 23, 19, 11, 12]. The alarm is triggered when the value of the detection function satisfies

$g_t > \eta, \qquad\qquad (7)$

for a predefined threshold $\eta$, and the probability of the alarm at time $t$ can be captured as

$\beta_t = \Pr(g_t > \eta). \qquad\qquad (8)$
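As a minimal sketch, the windowed detection function (6) and the alarm rule (7) can be implemented as below; the window length, weights, and threshold are illustrative choices, and the threshold shown is calibrated only for the $T = 1$ (i.e., $\chi^2$) case, so a longer window would require recalibrating $\eta$ to the desired false-alarm rate.

# Sketch of the windowed detector (6)-(7); parameters are illustrative.
import numpy as np
from collections import deque
from scipy.stats import chi2

p = 2                        # number of sensors
T = 5                        # detector window length
c = np.ones(T)               # non-negative weights, c[0] > 0
eta = chi2.ppf(0.99, df=p)   # threshold tuned for the T = 1 (chi^2) case
Sigma_z_inv = np.eye(p)      # stand-in for the inverse residue covariance

window = deque(maxlen=T)     # most recent residues, newest first

def detect(z):
    # Returns g_t = sum_i c_i * z_{t-i}^T Sigma_z^{-1} z_{t-i} and the alarm (7).
    window.appendleft(z)
    g = sum(ci * (zi @ Sigma_z_inv @ zi) for ci, zi in zip(c, window))
    return g, g > eta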

II-B Attack Model

We assume that the attacker is capable of launching MitM attacks on communication channels between a subset $\mathcal{K} \subseteq \mathcal{S}$ of the plant's sensors and the estimator; for instance, by secretly relaying corresponding altered communication packets. However, we do not assume that the set $\mathcal{K}$ is known to the system or system designers. Thus, to capture the attacker's impact on the system, the system model from (1) becomes

$x^a_{t+1} = A x^a_t + B u_t + w_t, \qquad y^a_t = C x^a_t + v_t + a_t. \qquad\qquad (9)$

Here, $x^a_t$ and $y^a_t$ denote the state and plant outputs in the presence of attacks, from the perspective of the estimator, since in the general case they differ from the plant's state and outputs of the non-compromised system. In addition, $a_t \in \mathbb{R}^p$ denotes the signals injected by the attacker at time $t$, starting from $t = 0$ – i.e., $a_t = \mathbf{0}$ for $t < 0$ (more details about why the attacker does not insert an attack before step $t = 0$ can be found in Remark 1). To model MitM attacks on communication between the sensors from the set $\mathcal{K}$ and the estimator, we assume that $a_t$ is a sparse vector from $\mathbb{R}^p$ with support in the set $\mathcal{K}$ – i.e., $a^{(i)}_t = 0$ for all $i \notin \mathcal{K}$ and all $t \geq 0$. (Although a sensor itself may not be directly compromised with MitM attacks, but rather the communication between the sensor and the estimator, we will also refer to these sensors as compromised sensors. In addition, in this work we sometimes abuse the notation by using $\mathcal{K}$ to denote both the set of compromised sensors and the set of indices of the compromised sensors.)

We consider the following threat model.

(1) The attacker has full knowledge of the system – in addition to knowing the dynamical model of the plant, the employed Kalman filter, and the detector, the attacker is aware of all potential security mechanisms used in communication. Specifically, we consider systems that use standard methods for message authentication to ensure data integrity, and assume that the attacker knows at which time points data integrity will be enforced. Thus, to avoid being detected, the attacker will not launch attacks in these steps, and will also take these integrity enforcements into account when planning its attacks (as described in Section III). (In Section IV, we will also consider the case where the attacker has limited knowledge of the system's use of security mechanisms.) Since we model our system such that attacks start at $t = 0$, this further implies that at $t = 0$ data integrity is not enforced, as otherwise the attacker would not be able to insert false data.

(2) The attacker has the required computation power to calculate suitable attack signals, while planning ahead as needed. (S)he also has the ability to inject any signal using communication packets mimicking sensors from the set $\mathcal{K}$, except at times when data integrity is enforced. For instance, when MACs are used to ensure data integrity and authenticity of communication packets, our assumption is that the attacker does not know the shared secret key used to generate the MACs.

The goal of the attacker is to design the attack signal $a_t$ such that it maximizes the error of state estimation while ensuring that the attack remains stealthy. To formally capture this objective and the stealthiness constraint, we denote the state estimation, residue, and estimation error of the compromised system by $\hat{x}^a_t$, $z^a_t$, and $\Delta x^a_t$, respectively. Thus, the attacker's aim is to maximize $\|\Delta x^a_t\|_2$, while ensuring that the increase in the probability of alarm is not significant. We also define

$e_t = \Delta x^a_t - \Delta x_t, \qquad \Delta z_t = z^a_t - z_t,$

the change in the estimation error and residue, respectively, caused by the attacks. From (1) and (9), the evolution of these signals can be captured as a dynamical system of the form

$e_t = (A - KCA)\, e_{t-1} - K a_t, \qquad\qquad (10)$
$\Delta z_t = CA\, e_{t-1} + a_t, \qquad\qquad (11)$

with $e_{-1} = \mathbf{0}$.

Remark 1.

From the above equations, the first attack vector to affect the change in the estimation error is $a_0$. Thus, without loss of generality, we assume that the attack starts at $t = 0$ (i.e., $a_t = \mathbf{0}$ for $t < 0$). This also implies that $e_0 = -K a_0$ and $\Delta z_0 = a_0$.

Note that the above dynamical system is noiseless (and deterministic), with input $a_t$ controlled by the attacker. Therefore, since $\mathbb{E}[\Delta x_t] = \mathbf{0}$ for the non-compromised system in steady state, it follows that

$\mathbb{E}[\Delta x^a_t] = e_t. \qquad\qquad (12)$

Given that $e_t$ provides the expectation of the state estimation error under the attack, this signal can be used to evaluate the impact that the attacker has on the system. (For this reason, and to simplify our presentation, in the rest of the paper we will sometimes refer to $e_t$ as the (expected) state estimation error, instead of the change of the state estimation error caused by attacks.) Thus, we specify the objective of the attacker as maximizing the expected state estimation error (e.g., $\|e_t\|_2$). This is additionally justified by the fact that $e_t$ is deterministic, since the input $a_t$ is controlled by the attacker, which implies

$\Delta x^a_t \sim \mathcal{N}(e_t,\, \hat{P}), \qquad\qquad (13)$

where $\hat{P}$ denotes the steady-state covariance matrix of the estimation error $\Delta x_t$.
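Since the system (10)-(11) is deterministic, the attack-induced error $e_t$ and residue change $\Delta z_t$ can be rolled out directly for any candidate attack sequence, as in the hypothetical helper below ($A$, $C$, and $K$ would come from the plant model and the filter design).

# Roll-out of the deterministic attack dynamics (10)-(11).
import numpy as np

def attack_response(A, C, K, attacks):
    # attacks: list of attack vectors a_0, a_1, ...; returns the induced
    # sequences e_t and Delta z_t, starting from e_{-1} = 0.
    e = np.zeros(A.shape[0])
    es, dzs = [], []
    for a in attacks:
        dz = C @ A @ e + a                 # (11), uses e_{t-1}
        e = (A - K @ C @ A) @ e - K @ a    # (10)
        es.append(e)
        dzs.append(dz)
    return es, dzs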

To capture the attacker's stealthiness requirements, we use the probability of alarm in the presence of an attack,

$\beta^a_t = \Pr(g^a_t > \eta), \qquad\qquad (14)$
$g^a_t = \sum_{k=t-T+1}^{t} c_{t-k}\, (z^a_k)^T \Sigma_z^{-1} z^a_k, \qquad z^a_k = z_k + \Delta z_k. \qquad\qquad (15)$

Therefore, to ensure that attacks remain stealthy, the attacker's stealthiness constraint in each step is to maintain

$\beta^a_t \leq \beta_t + \varepsilon, \qquad\qquad (16)$

for a small predefined value of $\varepsilon > 0$.

II-C Problem Formulation

As we will present in the next section, for a large class of systems, a stealthy attacker can easily introduce an unbounded state estimation error by compromising communication between some of the sensors and the estimator. On the other hand, existing communication protocols commonly incorporate security mechanisms (e.g., MACs) that can ensure integrity of delivered sensor measurements. Specifically, this means that the system could enforce $a^{(i)}_t = 0$ for some sensor $s_i \in \mathcal{S}$, or $a_t = \mathbf{0}$ if integrity of all transmitted sensor measurements is enforced at some time-step $t$. However, as we previously described, integrity enforcement comes at an additional communication and computation cost, effectively preventing its continuous use in resource-constrained CPS.

Consequently, we focus on the problem of evaluating the impact of stealthy attacks in systems with intermittent (i.e., occasional) use of data integrity enforcement mechanisms (formal definitions of such policies are presented in the next section). Specifically, we will address the following problems:

  • Can the attacker introduce unbounded state estimation errors in systems with intermittent integrity guarantees?

  • How to efficiently evaluate the impact of intermittent integrity enforcement policies on the induced state estimation errors in the presence of a stealthy attacker?

  • How to design a non-overly conservative development framework that incorporates guarantees for estimation degradation under attacks into design of suitable integrity enforcement policies?

III Impact of Stealthy Attacks on State Estimation Error

To capture the impact of stealthy attacks on the system, we start with the following definition.

Definition 1.

The set of all stealthy attacks up to time $t$ is

$\mathcal{A}_t = \{\, (a_0, \dots, a_t) \mid \beta^a_k \leq \beta_k + \varepsilon,\ \text{for all } k,\ 0 \leq k \leq t \,\}, \qquad\qquad (17)$

where $\beta_k$ and $\beta^a_k$ are the alarm probabilities from (8) and (14), and each $a_k$ satisfies $\mathrm{supp}(a_k) \subseteq \mathcal{K}$.

When reasoning about the set of reachable state estimation errors due to stealthy attacks from $\mathcal{A}_t$, we also have to take into account the variability of the estimation error. From (13), we can define a specific region that will contain the error $\Delta x^a_t$ with a desired probability. Therefore, we introduce the following definition.

Definition 2.

The $\alpha$-reachable region of the state estimation error under the attack (i.e., of the error $e_t$ from (12)) is the set

$\mathcal{R}^\alpha_t = \{\, e \in \mathbb{R}^n \mid \exists (a_0, \dots, a_t) \in \mathcal{A}_t \text{ such that } (e - e_t)^T \hat{P}^{-1} (e - e_t) \leq \alpha \,\}. \qquad (18)$

Furthermore, the global reachable region of the state estimation error is the set

$\mathcal{R}^\alpha = \bigcup_{t \in \mathbb{N}_0} \mathcal{R}^\alpha_t. \qquad\qquad (19)$

Here, $\alpha$ is a design parameter directly related to the desired confidence that $\Delta x^a_t$ belongs to the reachable region. Effectively, the set $\mathcal{R}^\alpha_t$ captures the set of state estimation errors that can be reached in step $t$ due to the injected malicious signal, while $\mathcal{R}^\alpha$ captures the set of all reachable state estimation errors. To assess vulnerability of the system, a critical characteristic of $\mathcal{R}^\alpha$ is boundedness – i.e., whether a stealthy attacker can introduce unbounded estimation errors. To simplify the boundedness analysis of $\mathcal{R}^\alpha$, we start with the following theorem.

Theorem 1.

Let $g_t = z_t^T \Sigma_z^{-1} z_t$ be the detector function (i.e., the $\chi^2$ detector). Then, for any $\varepsilon > 0$ such that $\beta_t + \varepsilon < 1$, there exists a unique $\lambda^* > 0$ such that $\beta^a_t \leq \beta_t + \varepsilon$ if and only if $\Delta z_t^T \Sigma_z^{-1} \Delta z_t \leq \lambda^*$.

Proof.

In the case without attacks, $g_t$ in steady state has a $\chi^2$ distribution with $p$ degrees of freedom, since the residue is zero-mean ($\mathbb{E}[z_t] = \mathbf{0}$) with covariance matrix $\Sigma_z$ [8, 6]. Furthermore, from (10) and (11), $\Delta z_t$ is the output of a deterministic system controlled by $a_t$, and thus $z^a_t = z_t + \Delta z_t$ is non-zero mean with covariance matrix $\Sigma_z$ – i.e., the attacker is only influencing the mean of the residue. Therefore, $g^a_t$ will have a non-central $\chi^2$ distribution with $p$ degrees of freedom; the non-centrality parameter of this distribution will be $\lambda_t = \Delta z_t^T \Sigma_z^{-1} \Delta z_t$ [6].

Let $\eta$ be the threshold for the detector in (7). The alarm probabilities $\beta_t$ and $\beta^a_t$ can be computed from the distributions for $g_t$ and $g^a_t$ as

$\beta_t = 1 - F(\eta;\, p), \qquad \beta^a_t = 1 - F_{nc}(\eta;\, p, \lambda_t),$

where $F$ and $F_{nc}$ are the cumulative distribution functions of the $\chi^2$ and noncentral $\chi^2$ distributions, respectively, at $\eta$, with $p$ degrees of freedom and noncentrality parameter $\lambda_t$. Since $\eta$ and $p$ are fixed by the system design, it follows that $\beta_t$ will be a constant, and $\beta^a_t$ will be a function of $\lambda_t$.

Consider $\beta^a_t = \beta_t + \varepsilon$. This means that

$F_{nc}(\eta;\, p, \lambda_t) = F(\eta;\, p) - \varepsilon. \qquad\qquad (20)$

The probability distribution function of the non-central $\chi^2$ distribution is smooth (thus making $F_{nc}$ smooth), and $F_{nc}(\eta; p, \lambda)$ is a decreasing function of $\lambda$ [6]. Hence, it follows that for any $\varepsilon$ there will exist exactly one $\lambda^*$ such that (20) is satisfied. Furthermore, for any $\beta^a_t$ that is lower than $\beta_t + \varepsilon$, the corresponding $\lambda_t$ from (20) has to be lower than $\lambda^*$, and vice versa, which concludes the proof.∎

Since the bound $\lambda^*$ from Theorem 1 depends on $\varepsilon$ and on the fact that a $\chi^2$ detector with $p$ degrees of freedom is used, we will denote this value as $\lambda_{\varepsilon,p}$.
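Numerically, $\lambda_{\varepsilon,p}$ can be computed by a one-dimensional root search on (20), since $F_{nc}(\eta; p, \lambda)$ is monotone in $\lambda$; the sketch below uses illustrative values of the threshold $\eta$, the number of sensors $p$, and the slack $\varepsilon$.

# Sketch: solving (20) for the unique noncentrality bound lambda_{eps,p}.
from scipy.stats import chi2, ncx2
from scipy.optimize import brentq

p = 2                       # degrees of freedom (number of sensors)
eta = chi2.ppf(0.99, p)     # threshold giving a 1% false-alarm rate
eps = 0.05                  # allowed increase of the alarm probability

# F_nc(eta; p, lam) decreases in lam from F(eta; p) toward 0, so the
# equation F_nc(eta; p, lam) = F(eta; p) - eps has a unique root.
target = chi2.cdf(eta, p) - eps
lam_eps = brentq(lambda lam: ncx2.cdf(eta, p, lam) - target, 1e-9, 1e3)
print(f"lambda_(eps,p) = {lam_eps:.3f}")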

Remark 2.

Related results from [20, 23] focus only on the $\chi^2$ detection function and show only sufficient conditions for stealthy attacks – i.e., that in this case the stealthiness condition follows from a robustness condition of the form $\Delta z_t^T \Sigma_z^{-1} \Delta z_t \leq \lambda$. However, the equivalence between the conditions $\beta^a_t \leq \beta_t + \varepsilon$ and $\Delta z_t^T \Sigma_z^{-1} \Delta z_t \leq \lambda_{\varepsilon,p}$ will enable us to reduce the conservativeness of our analysis, as well as to analyze boundedness of the reachability region for the general type of detection functions from (15), by allowing us to employ both conditions interchangeably.

From Definition 1 and Theorem 1 the following result holds.

Corollary 1.

For the $\chi^2$ detection function, there exists $\lambda_{\varepsilon,p} > 0$ such that the set of all stealthy attacks satisfies

$\mathcal{A}_t = \{\, (a_0, \dots, a_t) \mid \Delta z_k^T \Sigma_z^{-1} \Delta z_k \leq \lambda_{\varepsilon,p},\ \text{for all } k,\ 0 \leq k \leq t \,\}. \qquad (21)$

The previous results introduce an equivalent 'robustness-based' representation for the set of stealthy attacks in systems where $\chi^2$ detectors are used. They also provide a foundation to consider the more general formulation (15) for the detector function. We start with the following results characterizing over- and under-approximations of the set $\mathcal{A}_t$ in such a case, also using suitable 'robustness-based' representations of the stealthiness condition. By showing that reachable estimation error regions are bounded for these sets of attacks, we will be able to reason about whether the reachable region of state estimation errors is bounded for attacks from the set $\mathcal{A}_t$.

Lemma 1.

For a system with the detector function of the form (15), the set of all stealthy attacks can be underapproximated by the set

$\mathcal{A}^s_t = \{\, (a_0, \dots, a_t) \mid \Delta z_k^T \Sigma_z^{-1} \Delta z_k \leq \lambda_s,\ \text{for all } k,\ 0 \leq k \leq t \,\} \qquad\qquad (22)$

(i.e., $\mathcal{A}^s_t \subseteq \mathcal{A}_t$), where $\lambda_s = \lambda_{\varepsilon, Tp}/T$.

In essence, the lemma states that if $\Delta z_k^T \Sigma_z^{-1} \Delta z_k \leq \lambda_s$ holds for all $k$, then for the general detection function from (15) the stealthiness condition (16) is satisfied – i.e., the alarm probability is lower than or equal to $\beta_t + \varepsilon$.

Proof.

Consider an attack sequence $(a_0, \dots, a_t) \in \mathcal{A}^s_t$ and the resulting evolution of the system from (10) and (11), with $\lambda_k = \Delta z_k^T \Sigma_z^{-1} \Delta z_k \leq \lambda_s$, for all $0 \leq k \leq t$. Then,

$\bar{\lambda}_t = \sum_{k=t-T+1}^{t} \lambda_k \leq T \lambda_s = \lambda_{\varepsilon, Tp}. \qquad\qquad (23)$

In addition, we define $c_{\max} = \max_{0 \leq i \leq T-1} c_i$ and

$\hat{g}_t = c_{\max} \sum_{k=t-T+1}^{t} (z^a_k)^T \Sigma_z^{-1} z^a_k, \qquad\qquad (24)$

as well as $\hat{g}^0_t$, the value of the same function in the attack-free case. From (24), $\hat{g}_t$ is a scaled sum of noncentral $\chi^2$ distributions with $p$ degrees of freedom (the steady-state residues at different time steps are independent), so $\hat{g}_t / c_{\max}$ will have the noncentral $\chi^2$ distribution with $Tp$ degrees of freedom and the noncentrality parameter equal to

$\bar{\lambda}_t = \sum_{k=t-T+1}^{t} \lambda_k. \qquad\qquad (25)$

Since $\hat{g}_t / c_{\max}$ is a $\chi^2$-type detection function, following the proof of Theorem 1 for the detection function $\hat{g}_t / c_{\max}$ we have that its stealthiness condition is satisfied if and only if $\bar{\lambda}_t \leq \lambda_{\varepsilon, Tp}$. That is, using (25),

$\Pr(\hat{g}_t > \eta) \leq \Pr(\hat{g}^0_t > \eta) + \varepsilon \iff \bar{\lambda}_t \leq \lambda_{\varepsilon, Tp}. \qquad\qquad (26)$

Since (23) follows from the condition of the lemma, from (26) we have that $\Pr(\hat{g}_t > \eta) \leq \Pr(\hat{g}^0_t > \eta) + \varepsilon$ is satisfied. From (24) we have that $\hat{g}_t \geq g^a_t$, meaning that $\beta^a_t \leq \Pr(\hat{g}_t > \eta)$. Thus, $\beta^a_k \leq \beta_k + \varepsilon$ holds for all $0 \leq k \leq t$, and $(a_0, \dots, a_t) \in \mathcal{A}_t$ (i.e., $\mathcal{A}^s_t \subseteq \mathcal{A}_t$). ∎

Lemma 2.

For a system with the detector function of the form (15), the set of all stealthy attacks at time $t$, $\mathcal{A}_t$, can be overapproximated by the set

$\mathcal{A}^o_t = \{\, (a_0, \dots, a_t) \mid \Delta z_k^T \Sigma_z^{-1} \Delta z_k \leq \lambda_o,\ \text{for all } k,\ 0 \leq k \leq t \,\} \qquad\qquad (27)$

(i.e., $\mathcal{A}_t \subseteq \mathcal{A}^o_t$), where $\lambda_o$ is the unique value satisfying $1 - F_{nc}(\eta/c_0;\, p, \lambda_o) = \beta_t + \varepsilon$.

Proof.

Consider a stealthy attack sequence $(a_0, \dots, a_t) \in \mathcal{A}_t$ with the detector function from (15). Let $q_t = c_0\, (z^a_t)^T \Sigma_z^{-1} z^a_t$. Since all terms in (15) are non-negative, it follows that $g^a_t \geq q_t$ and thus $\Pr(q_t > \eta) \leq \beta^a_t$, where $\beta^a_t$ is defined as in (14). Since $a_0, \dots, a_t$ are stealthy, it follows that $\beta^a_t \leq \beta_t + \varepsilon$, and thus $\Pr(q_t > \eta) \leq \beta_t + \varepsilon$ holds.

On the other hand, the function $q_t / c_0$ has the noncentral $\chi^2$ distribution with $p$ degrees of freedom and noncentrality parameter $\lambda_t = \Delta z_t^T \Sigma_z^{-1} \Delta z_t$; by following the proof of Theorem 1 for $q_t$ we have that $\Pr(q_t > \eta) \leq \beta_t + \varepsilon$ is satisfied if and only if $\lambda_t \leq \lambda_o$. Therefore, we have that stealthiness of the attack sequence implies $\Delta z_k^T \Sigma_z^{-1} \Delta z_k \leq \lambda_o$ for all $0 \leq k \leq t$, meaning that $(a_0, \dots, a_t) \in \mathcal{A}^o_t$ (i.e., $\mathcal{A}_t \subseteq \mathcal{A}^o_t$). ∎

Remark 3.

The previous lemmas also hold for the detection function with a time-varying window $T = t + 1$; this can be shown by replacing $T$ with $t + 1$ in the previous analysis, since it would not affect their proofs. In essence, this means that these results hold both for windowed detectors and for SPRT detectors – SPRT detectors are explored in detail in Section V.

Lemmas 1 and 2 introduce attack sets $\mathcal{A}^s_t$ and $\mathcal{A}^o_t$ for which the attack constraints are captured as robustness bounds on $\Delta z_k$ instead of probabilities of attack detection, and for which $\mathcal{A}^s_t \subseteq \mathcal{A}_t \subseteq \mathcal{A}^o_t$. Hence, to analyze the impact of stealthy attacks, we can consider the effects of attacks that have to maintain $\Delta z_k^T \Sigma_z^{-1} \Delta z_k$ below a certain threshold.

Theorem 2.

$\mathcal{R}^\alpha_t$ from (18) is bounded if and only if the set

$\tilde{\mathcal{R}}_t = \{\, e \in \mathbb{R}^n \mid e = e_t \text{ for some } a_0, \dots, a_t \text{ with } \|\Delta z_k\|_2 \leq 1,\ \text{for all } k,\ 0 \leq k \leq t \,\} \qquad (28)$

is bounded, where $e_t$ and $\Delta z_k$ evolve according to (10) and (11).

Proof.

From (13), the covariance of $\Delta x^a_t$ is fixed (i.e., bounded), and we can simplify our presentation by focusing on the case where $\alpha = 0$. Furthermore, for any vector $v$, the set obtained by translating a bounded set by $v$ is bounded if and only if the vector $v$ is bounded. Therefore, the set $\mathcal{R}^\alpha_t$ will be bounded if and only if $e_t$ (from (12)) is bounded.

Consider attack vectors from the sets $\mathcal{A}^s_t$ and $\mathcal{A}^o_t$. From Lemmas 1 and 2 we have that

$\mathcal{R}^s_t \subseteq \mathcal{R}_t \subseteq \mathcal{R}^o_t, \qquad\qquad (29)$

where we somewhat abuse the notation by having $\mathcal{R}^s_t$ capture all reachable vectors $e_t$ when the system (10) is 'driven' by attack vectors from the set $\mathcal{A}^s_t$ (and analogously for $\mathcal{R}_t$ and $\mathcal{R}^o_t$, with attacks from $\mathcal{A}_t$ and $\mathcal{A}^o_t$). On the other hand, from linearity of the system described by (10) and (11), the sets $\mathcal{R}^s_t$ and $\mathcal{R}^o_t$ are either both bounded or both unbounded. Thus, from (29), these sets are bounded if and only if $\mathcal{R}_t$ is bounded.

Finally, as $\lambda_{\min} \|\Delta z_k\|_2^2 \leq \Delta z_k^T \Sigma_z^{-1} \Delta z_k \leq \lambda_{\max} \|\Delta z_k\|_2^2$, where $\lambda_{\max}$ and $\lambda_{\min}$ are the largest and smallest, respectively, eigenvalues of $\Sigma_z^{-1}$, the region will be bounded under a constraint of the form $\Delta z_k^T \Sigma_z^{-1} \Delta z_k \leq \lambda$ if and only if it is bounded with the 2-norm stealthiness constraint from (28). ∎

III-A Perfectly Attackable Systems

Theorem 2 can be used to formally capture dynamical systems for which there exists a stealthy attack sequence that results in an unbounded state estimation error – i.e., for such systems, given enough time, the attacker can make arbitrary changes in the system states without risking detection.

Definition 3.

A system is perfectly attackable (PA) if the system's global reachable region $\mathcal{R}^\alpha$ from (19) is an unbounded set.

As shown in [22, 11], for LTI systems without any additional data integrity guarantees, this set can be bounded or unbounded, depending on the system dynamics and the set of compromised sensors $\mathcal{K}$. From Theorem 2, this property is preserved for the set $\tilde{\mathcal{R}}_t$ as well. For this reason, we will be using the definition of $\tilde{\mathcal{R}}_t$ to analyze boundedness of $\mathcal{R}^\alpha$, and to simplify the notation, due to linearity of the constraint, we will assume that the bound is equal to one – i.e., for this analysis we consider the stealthiness attack constraint

$\|\Delta z_t\|_2 \leq 1, \quad \text{for all } t \geq 0, \qquad\qquad (30)$

imposed on the system from (10) and (11).

Now, the theorem below follows from [22, 11].

Theorem 3.

A system from (9) is perfectly attackable if and only if the matrix $A$ is unstable, and at least one eigenvector $v$ corresponding to an unstable eigenvalue of $A$ satisfies $\mathrm{supp}(Cv) \subseteq \mathcal{K}$ and $v$ is a reachable state of the dynamical system from (10).

Note that [22] also uses the term unstable eigenvalue to denote eigenvalues $\lambda$ with $|\lambda| \geq 1$. In the next section, we show that intermittent integrity guarantees significantly limit stealthy attacks even for perfectly attackable systems.
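As an illustration, the spectral part of the Theorem 3 condition can be screened numerically as in the sketch below; note that it omits the reachability requirement on the eigenvector, so it only flags candidates for perfect attackability rather than certifying it.

# Screen for the eigenvalue/eigenvector part of the Theorem 3 condition.
import numpy as np

def pa_candidates(A, C, compromised, tol=1e-9):
    # compromised: indices of the sensors in the set K.
    candidates = []
    eigvals, eigvecs = np.linalg.eig(A)
    for lam, v in zip(eigvals, eigvecs.T):
        if abs(lam) < 1:                      # only unstable modes, |lambda| >= 1
            continue
        support = {i for i, yi in enumerate(C @ v) if abs(yi) > tol}
        if support <= set(compromised):       # supp(C v) contained in K
            candidates.append((lam, v))
    return candidates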

IV Stealthy Attacks in Systems with Intermittent Integrity Enforcement

In this section, we analyze the effects that intermittent data integrity guarantees have on the estimation error under attack. To formalize this notion, we start with the following definition.

Definition 4.

A global intermittent data integrity enforcement policy $\mu(l, \{t_k\})$, where $\{t_k\}_{k \in \mathbb{N}_0} \subseteq \mathbb{N}_0$ is such that $t_{k+1} > t_k$, for all $k \in \mathbb{N}_0$, and $l \in \mathbb{N}$, ensures that

$a_{t_k} = a_{t_k + 1} = \dots = a_{t_k + l - 1} = \mathbf{0}, \quad \text{for all } k \in \mathbb{N}_0.$

Furthermore, for a sensor $s_i \in \mathcal{S}$, the sensor's intermittent data integrity enforcement policy $\mu_i(l, \{t_k\})$, where $\{t_k\}_{k \in \mathbb{N}_0} \subseteq \mathbb{N}_0$ with $t_{k+1} > t_k$, for all $k \in \mathbb{N}_0$, ensures that

$a^{(i)}_{t_k} = a^{(i)}_{t_k + 1} = \dots = a^{(i)}_{t_k + l - 1} = 0, \quad \text{for all } k \in \mathbb{N}_0.$

Intuitively, an intermittent data integrity enforcement policy for a sensor ensures that the attack injected via that sensor will be equal to zero in at least $l$ consecutive points, where the starts of these 'blocks' are at most $f = \sup_{k \in \mathbb{N}_0}(t_{k+1} - t_k)$ time-steps apart. Similarly, for a global intermittent data integrity enforcement policy, the whole attack vector has to be zero for at least $l$ consecutive steps, and the duration between the starts of these blocks is bounded from above by $f$ time-steps.

Global intermittent integrity enforcement is easier to model (and analyze, as we will show in the next section). However, compared to the use of separate sensor’s intermittent integrity enforcements, global enforcement policies impose significantly larger communication and computation overhead in every time-step when data integrity is enforced. For example, with global enforcement every sensor has to be able to compute and add a MAC to its message transmitted over a shared bus during one sampling period (which usually corresponds to a single communication frame). In addition, since in these systems estimation/control updates are commonly computed once all messages are received, when the integrity is enforced the estimator has to be able to evaluate/recompute all received MACs before its execution for that time-period. On the other hand, with integrity enforcement for each sensor, their MACs can be sent and reevaluated in separate (e.g., consecutive) sampling periods (i.e., communication frames).

Remark 4.

It is worth noting that our definition of intermittent integrity enforcement policies imposes a maximum time $f$ between integrity enforcements which, as we will show, is related to the worst-case estimation error caused by the attacks. The definition also captures periodic integrity enforcements, when $t_{k+1} - t_k = f$ for all $k \in \mathbb{N}_0$. Finally, the definition also allows for capturing policies with continuous integrity enforcements, by specifying $t_{k+1} - t_k = 1$ for all $k \in \mathbb{N}_0$.

The following theorem specifies that when a global intermittent integrity enforcement policy is used, a stealthy attacker cannot introduce an unbounded expected state estimation error.

Theorem 4.

Consider an LTI system from (1) with a global data integrity policy $\mu(l, \{t_k\})$, where $l = \min(\nu, n_u)$,

$f = \sup_{k \in \mathbb{N}_0} (t_{k+1} - t_k) \qquad\qquad (31)$

is finite, $\nu$ is the observability index of the $(A, C)$ pair, and $n_u$ denotes the number of unstable eigenvalues of $A$. Then the system is not perfectly attackable.

From the above theorem, it follows that even intermittent integrity guarantees significantly limit the damage that the attacker can inflict on the system. Furthermore, the theorem makes no assumptions about the set of compromised sensors $\mathcal{K}$; in the general case, system designers may not be able to provide this type of guarantee during system design, and thus no restrictions are imposed on the set – neither regarding the number of its elements nor regarding whether a specific sensor belongs to it.
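A small sketch for computing the enforcement-block length $l = \min(\nu, n_u)$ used in Theorem 4 is given below, under the paper's standing assumption that the pair $(A, C)$ is observable.

# Sketch: block length l = min(nu, n_u) from Theorem 4.
import numpy as np

def observability_index(A, C, tol=None):
    # Smallest nu such that rank [C; CA; ...; C A^(nu-1)] = n.
    n = A.shape[0]
    blocks, power = [], np.eye(n)
    for nu in range(1, n + 1):
        blocks.append(C @ power)
        if np.linalg.matrix_rank(np.vstack(blocks), tol=tol) == n:
            return nu
        power = A @ power
    raise ValueError("(A, C) is not observable")

def block_length(A, C):
    # n_u = number of unstable eigenvalues (|lambda| >= 1); if n_u = 0
    # the system is stable and no integrity enforcement is needed.
    n_u = int(np.sum(np.abs(np.linalg.eigvals(A)) >= 1))
    return min(observability_index(A, C), n_u)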

Remark 5.

In our preliminary results reported in [7], a similar formulation of Theorem 4 is used with $l = \nu$. Since $\nu \leq n - \mathrm{rank}(C) + 1$ from [33], obtained using the rank–nullity theorem, and $l = \min(\nu, n_u) \leq \nu$, the condition from Theorem 4 is stronger than our earlier result and may further reduce the number of integrity-enforcement points.

In the rest of the paper, we use the notation from Theorem 4 for $\nu$ and $n_u$. To prove the theorem, we exploit the following Lemma 3 and Theorem 5; the lemma states that if stealthy attacks introduce an unbounded estimation error $e_t$, the unbounded components must belong to the vector subspaces corresponding to the unstable modes of the system (i.e., of the matrix $A$).

Lemma 3.

Consider the system from (10) and (11) under the stealthiness constraint (30), and let us denote by $v_1, \dots, v_{n_u}$ the eigenvectors and generalized eigenvectors that correspond to the unstable eigenvalues of the matrix $A$. Then, unbounded estimation errors $e_t$ can be represented as

$e_t = \sum_{j=1}^{n_u} \theta_{j,t}\, v_j + r_t, \qquad\qquad (32)$

where $r_t$ is a bounded vector, and for some $j$ it holds that $|\theta_{j,t}| \to \infty$ as $t \to \infty$.

Proof.

The proof is provided in the appendix. ∎

Theorem 5.

Consider any $t_k$, $k \in \mathbb{N}_0$, such that $a_{t_k} = \dots = a_{t_k + l - 1} = \mathbf{0}$ (i.e., at time $t_k$ an integrity enforcement block of the policy starts). If $e_{t_k - 1}$ is a reachable state of the system from (10), and if the vectors $e_{t_k}, \dots, e_{t_k + l - 1}$ are bounded, then the vector $e_{t_k - 1}$ has to be bounded for any stealthy attack. (Formally, the theorem states that the subsequence $\{e_{t_k - 1}\}_{k \in \mathbb{N}_0}$ of the sequence $\{e_t\}$ is bounded if the subsequence $\{e_{t_k + j}\}_{k \in \mathbb{N}_0,\ 0 \leq j \leq l-1}$ of the sequence $\{e_t\}$ is bounded. However, to simplify our presentation and notation, we simply refer to the vectors, instead of the subsequences, as bounded.)

Proof.

From (10) and (11) it follows that

$e_{t_k + j} = (A - KCA)^{j+1}\, e_{t_k - 1}, \qquad\qquad (33)$
$\Delta z_{t_k + j} = CA\,(A - KCA)^{j}\, e_{t_k - 1}, \qquad j = 0, \dots, l-1. \qquad\qquad (34)$

Since $a_{t_k + j} = \mathbf{0}$ during the enforcement block, and due to the stealthy attack constraint (30), every $\Delta z_{t_k + j}$ from (34) is bounded. Thus, to show that $e_{t_k - 1}$ is bounded, it is sufficient to prove that boundedness of the vectors from (33) and (34) implies boundedness of $e_{t_k - 1}$.

Let us assume the opposite – i.e., that $e_{t_k - 1}$ is unbounded while $e_{t_k}, \dots, e_{t_k + l - 1}$ are all bounded. From (33) it follows that $e_{t_k} = (A - KCA)\, e_{t_k - 1}$. Given that $e_{t_k}, \dots, e_{t_k + l - 1}$ are bounded, $(A - KCA)^{j+1} e_{t_k - 1}$ has to be bounded for every $j = 0, \dots, l-1$.

Moreover, as $\Delta z_{t_k}$ has to be bounded due to the stealthiness condition, it follows that $CA\, e_{t_k - 1}$ has to remain bounded. Similarly, we can show that this holds up to $\Delta z_{t_k + l - 1}$ – i.e., $CA (A - KCA)^{l-1} e_{t_k - 1}$ is bounded – and thus the vector defined as

$O_l\, e_{t_k-1} = \begin{bmatrix} CA \\ CA(A - KCA) \\ \vdots \\ CA(A - KCA)^{l-1} \end{bmatrix} e_{t_k-1} \qquad\qquad (35)$

is bounded. Now, we consider two cases.

Case I: If $l = \nu$, the observability index of the pair $(A, C)$ (i.e., $l = \min(\nu, n_u) = \nu$), then the matrix $O_l$ from (35) has full rank, from which it follows that $O_l\, e_{t_k-1}$ being bounded implies that $e_{t_k-1}$ (and thus its assumed unbounded component) has to be also bounded, which is a contradiction.

Case II: Consider $l = n_u < \nu$, and let us use the similarity transformation $e_t = V \tilde{e}_t$ on the initial system, where $V$ is defined as in the Lemma 3 proof – i.e., $V = [\, v_1\ \dots\ v_n \,]$, and we index the (generalized) eigenvectors such that for each eigenvector with generalized eigenvectors, the corresponding generalized eigenvector chain immediately follows it; in addition, $v_1, \dots, v_{n_u}$ are the eigenvectors (including generalized eigenvectors) for all unstable modes of $A$.

Thus, the transformed system can be captured as

$\tilde{e}_t = (J - V^{-1} K C A V)\, \tilde{e}_{t-1} - V^{-1} K a_t, \qquad J = V^{-1} A V = \mathrm{blkdiag}(J_u, J_s),$

where $J$ is the Jordan form of $A$, $J_u$ captures the unstable modes of $A$, and the pair obtained from $(A, C)$ after the transformation is also observable.

Since $e_{t_k-1}$ is unbounded, we have that $\tilde{e}_{t_k-1}$ is unbounded (from $e_{t_k-1} = V \tilde{e}_{t_k-1}$). Thus, from Lemma 3,

$\tilde{e}_{t_k-1} = \begin{bmatrix} \tilde{e}^u_{t_k-1} \\ \tilde{e}^s_{t_k-1} \end{bmatrix}, \qquad\qquad (36)$

where $\tilde{e}^u_{t_k-1}$ is unbounded while $\tilde{e}^s_{t_k-1}$ is a bounded vector. Due to the fact that $l = n_u$, from (35) it follows that