Attack-Resilient Interconnected Second-Order Systems:
A Game-Theoretic Approach
Abstract
This paper studies the resilience of second-order networked dynamical systems to strategic attacks. We discuss two widely used control laws, which have applications in power networks and formation control of autonomous agents. In the first control law, each agent uses absolute velocity feedback together with relative position feedback from its neighbors. In the second control law, each agent uses its velocity relative to its neighbors. The attacker selects a subset of nodes in which to inject a signal, and its objective is to maximize the $\mathcal{H}_2$ norm of the system from the attack signal to the output. The defender improves the resilience of the system by adding self-feedback loops to certain nodes of the network with the objective of minimizing this system norm. Their decisions comprise a strategic game. Graph-theoretic necessary and sufficient conditions for the existence of Nash equilibria are presented. When no Nash equilibrium exists, a Stackelberg game is discussed, and the optimal solution when the defender acts as the leader is characterized. For the case of a single attacked node and a single defense node, it is shown that the optimal location of the defense node for each of the control laws is determined by a specific network centrality measure. The extension of the game to the case of multiple attacked and defense nodes is also addressed.
I Introduction
I-A Motivation
The resilience of cyber-physical systems to strategic attacks is a primary concern in both the design and the real-time operation of interconnected systems. Examples of such systems include power networks, water and gas networks, and transportation systems. A subtle difference between faults and attacks is that in the latter, the attacker uses knowledge of system vulnerabilities to maximize its impact and/or minimize its visibility or attack effort. The defender thus has to adopt an intelligent strategy to counter the attacker. One approach to modeling the interactions between intelligent attackers and defenders is via game theory.
I-B Related Work
Security and resilience of cyber-physical systems from the game-theoretic standpoint has attracted attention in the past decade; see [1, 2, 3, 4, 5, 6] and references therein. The notion of games-in-games in cyber-physical systems reflects two interconnected games, one in the cyber layer and the other in the physical layer, where the payoff of each game affects the outcome of the other [7]. Some approaches choose the appropriate game strategy, e.g., Nash or Stackelberg, based on the type of adversarial behavior (active or passive) [3, 8]. The evolution of networked systems has been modeled via cooperative games [9], and the resilience of these games to adversarial actions and/or communication failures has been investigated [10, 11]. There is a large literature on the security of first- and second-order systems [12, 13, 14]. To date, however, no approach uses game theory to model the actions of intelligent attackers and defenders in second-order systems.
I-C Contributions
The contributions of this paper are as follows:

We discuss an attacker-defender game on the resilience of two canonical forms of second-order systems. The attacker targets a set of nodes in the network to maximize the $\mathcal{H}_2$ norm of the system from the attack signals to the output, while the defender chooses a set of nodes at which to install feedback control in order to minimize this system norm, i.e., to mitigate the effect of the attack.

Necessary and sufficient conditions for the existence of a Nash equilibrium (NE) of the game for each of the two second-order dynamics are presented (Propositions 2 and 3). For the cases where there is no NE, a Stackelberg game is discussed in which the defender acts as the game leader (Theorems 1 and 3 and Corollary 1).

For the case of a single attacked node and a single defense node, it is shown that the optimal location of the defense node in the network for each of the second-order systems is determined by a specific network centrality measure (Remark 3).
It is worth noting that the resilient distributed control algorithms proposed in the literature require a high level of network connectivity to bypass the effects of malicious actions [14, 15]. However, in many real-world applications, e.g., power systems, the underlying topology is fixed at design time and cannot be changed. From this viewpoint, the defense mechanism proposed in this paper has an advantage over previous methods in that it does not rely on the connectivity level of the underlying network.
II Graph Theory
We use $\mathcal{G}=(\mathcal{V},\mathcal{E})$ to denote an undirected graph, where $\mathcal{V}$ is the set of vertices (or nodes) and $\mathcal{E}$ is the set of undirected edges, where $(i,j)\in\mathcal{E}$ if and only if there exists an undirected edge between $v_i$ and $v_j$. Let $|\mathcal{V}|=n$. The adjacency matrix of $\mathcal{G}$ is denoted $A_{\mathcal{G}}=[a_{ij}]$, where $a_{ij}=1$ if there is an edge between $v_i$ and $v_j$ in $\mathcal{G}$ and zero otherwise. The neighbors of vertex $v_i$ in the graph are denoted by the set $\mathcal{N}_i$. We define the degree of node $v_i$ as $d_i=|\mathcal{N}_i|$. The Laplacian matrix of an undirected graph is denoted by $L=D-A_{\mathcal{G}}$, where $D=\mathrm{diag}(d_1,\dots,d_n)$. We use $\mathbf{e}_i$ to indicate the $i$-th vector of the canonical basis. The eccentricity of a vertex $v$ in a connected graph is the maximum graph distance between $v$ and any other vertex of the graph. The center of a graph is the set of vertices with minimum eccentricity. The effective resistance between a pair of nodes $u$ and $v$, denoted $r_{uv}$, is the electrical resistance measured across nodes $u$ and $v$ when the network represents an electrical circuit in which each edge has unit electrical conductance [16]. The effective eccentricity of a vertex $v$ in a connected graph is the maximum effective resistance between $v$ and any other vertex of the graph. The effective center of a graph is the set of vertices with minimum effective eccentricity. A degree central node in the network is a node with the largest degree.
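The effective-resistance definition above can be computed directly from the Laplacian via $r_{uv} = (\mathbf{e}_u - \mathbf{e}_v)^\top L^{\dagger} (\mathbf{e}_u - \mathbf{e}_v)$, where $L^{\dagger}$ is the Moore-Penrose pseudoinverse. A minimal sketch (the path-graph example is illustrative, not from the paper):

```python
import numpy as np

def effective_resistance(L, u, v):
    """r_uv = (e_u - e_v)^T L^+ (e_u - e_v), L the graph Laplacian."""
    n = L.shape[0]
    Lp = np.linalg.pinv(L)          # pseudoinverse handles the zero eigenvalue
    e = np.zeros(n)
    e[u], e[v] = 1.0, -1.0
    return float(e @ Lp @ e)

# Path graph 0 - 1 - 2: two unit resistors in series between the end nodes.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
L = np.diag(A.sum(1)) - A
print(effective_resistance(L, 0, 2))  # -> 2.0
```

For trees, as used later in the paper, effective resistance coincides with graph distance, which the path example confirms.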
III System Model and Preliminaries
Consider a network of $n$ agents in which each agent $i$ follows the second-order dynamics
(1) $\dot{x}_i(t) = v_i(t), \quad \dot{v}_i(t) = u_i(t) + w_i(t),$
where $x_i$ and $v_i$ represent the position (or phase) and velocity (or frequency) of agent $i$, respectively, and $u_i$ and $w_i$ are the control input and an additive disturbance to the dynamics. The control policy $u_i$ can be either of the following two:
(2a) $u_i(t) = -\sum_{j \in \mathcal{N}_i} k_{ij}\big(x_i(t) - x_j(t)\big) - g_i v_i(t)$
(2b) $u_i(t) = -\sum_{j \in \mathcal{N}_i} k_{ij}\big(x_i(t) - x_j(t)\big) - \sum_{j \in \mathcal{N}_i} g_{ij}\big(v_i(t) - v_j(t)\big)$
Here the $k_{ij}$ and the velocity gains are nonnegative control gains. Control law (2a) uses the relative position and absolute velocity as feedback, whereas (2b) uses both relative position and relative velocity as control feedback. To simplify our analysis, we assume that $k_{ij}=1$ and that all nonzero velocity-feedback gains equal $g$, where $g$ is called the defender's control gain (this parameter is private and known only to the system designer). Note that all of the analysis in this paper can be readily extended to the weighted case. Control laws (2a) and (2b) are canonical forms of well-known second-order systems. In particular, (2a) is the linearized swing equation for a network of power generators [17, 18], and (2b) describes the formation control of autonomous agents, e.g., a platoon of connected vehicles [19].
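As a sanity check on the two control laws, the following forward-Euler simulation assumes the generic forms described in the text (unit position gains, uniform velocity gain at every node); the specific graph, gain values, and integration scheme are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def step(x, v, A, g, law, dt=0.01):
    """One Euler step of the second-order dynamics (1) under (2a) or (2b)."""
    L = np.diag(A.sum(1)) - A
    if law == 'a':                 # relative position + absolute velocity
        u = -L @ x - g * v
    else:                          # relative position + relative velocity
        u = -L @ x - g * (L @ v)
    return x + dt * v, v + dt * u

# Path graph 0 - 1 - 2, initial disagreement in positions, zero velocities.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
x = np.array([1.0, 0.0, -1.0])
v = np.zeros(3)
for _ in range(5000):              # simulate up to t = 50
    x, v = step(x, v, A, 1.0, 'a')
print(np.round(v, 3))              # velocities damp out under law (2a)
```

Under (2a) with velocity damping at every node, all disagreement modes are damped and the velocities decay to zero, consistent with the swing-equation interpretation.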
III-A Attack Model
Let $\mathcal{F}$ denote the set of nodes under attack. The state of a node under attack evolves as
(3) 
where the additional terms are the effects of the attack signals on the first and second states, respectively. In vector form, (3) is given by
(4) 
where the input matrix encodes the locations of the attacked nodes. Depending on whether control law (2a) or (2b) is in place, the state matrix respectively takes on the form
(5) 
where $w$ is a binary vector, i.e., $w_i \in \{0,1\}$, whose $i$-th element is one if node $i$ has a self-feedback loop and zero otherwise. We assume that $w$ has at least one nonzero element so that the resulting grounded Laplacian is nonsingular [20]. Since we assume that if a node is under attack then both of its states are affected by the attack signal, the attack enters both state equations. The matrix encoding the decisions of the attacker has a single 1 in each row corresponding to a node affected by the attack, and all zeros otherwise. The sets of nodes under attack and of nodes with self-feedback (defense nodes) are denoted by $\mathcal{F}$ and $\mathcal{D}$, respectively. An example of attacker and defender actions on a networked system is shown schematically in Fig. 1 (a).
III-B Attacker-Defender Game
Because we do not have a priori knowledge of the frequency content of the attack signal, we must choose a system norm that captures the average impact of attack inputs over all frequencies. We therefore choose the $\mathcal{H}_2$ system norm, which is widely used to measure the level of coherence in the synchronization of coupled oscillators [21, 22]. We first calculate the $\mathcal{H}_2$ norm of (4).
Proposition 1
Now, we formally define the attackerdefender game.
Definition 1 (Attacker-Defender Game)
The attacker chooses a set of nodes to attack, $\mathcal{F}$, in order to maximize the $\mathcal{H}_2$ norm from the attack signal to the output. The defender places local feedback control at a set of nodes, $\mathcal{D}$, to minimize this system norm (due to its lack of knowledge of the number of attack signals, the defender considers an upper bound on the number of attacked nodes and acts based on this worst-case scenario). The result is a zero-sum game in which the payoff, based on (6), is given by
(7) 
The set of attacked nodes determines the attack input matrix, and the set of defense nodes determines the feedback pattern and consequently the state matrices in (5).
The actions of the attacker and the defender, when multiple nodes are under attack and multiple nodes are defended, define a matrix game. Each row of the game matrix corresponds to a set chosen by the defender and each column to a set chosen by the attacker; in other words, the attacker, the maximizer, chooses a column of the game matrix and the defender, the minimizer, chooses a row.
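In this matrix-game form, a pure-strategy NE is exactly a saddle point of the payoff matrix: an entry that is simultaneously its row's maximum (the attacker's best reply) and its column's minimum (the defender's best reply). A small sketch with an illustrative payoff matrix:

```python
import numpy as np

def pure_saddle(J):
    """Return a (row, col) pure-strategy saddle point of the zero-sum
    matrix game J (defender = row minimizer, attacker = column maximizer),
    or None if no pure NE exists."""
    for i in range(J.shape[0]):
        for j in range(J.shape[1]):
            if J[i, j] == J[i, :].max() and J[i, j] == J[:, j].min():
                return i, j
    return None

# An illustrative 2x2 game with a saddle point at (0, 1), value 2.
J = np.array([[1.0, 2.0],
              [0.0, 3.0]])
print(pure_saddle(J))  # -> (0, 1)
```

The same routine returns `None` for matrices without a saddle point, which is the situation motivating the Stackelberg analysis in the sections that follow.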
IV Attacker-Defender Game on (2a)
In this section, we discuss equilibrium strategies for the attackerdefender game when the control law is (2a). First, consider a single attacked node and single defense node.
IV-A Single Attacked and Single Defense Node
In this case, the attacker's payoff is
(8) 
A Nash equilibrium may not exist, as discussed in the following example.
Example 1
For the path graph of length 3 shown in Fig. 1 (b), the payoff matrix becomes
(9) 
where the attacker (maximizer) chooses columns and the defender (minimizer) chooses rows. For $g$ below a certain threshold, both the attacker and the defender choose node 2 at the NE. For $g$ bigger than this threshold, there is no NE for the game.
The following is a necessary and sufficient condition for the existence of an NE for the attackerdefender game.
Proposition 2
Suppose that in the game on (2a), there is one attacked node and one defense node. Then there exists an NE if and only if $g$ does not exceed the gap between the largest and the second largest node degrees in the graph. In this case, the NE strategy is that both the attacker and the defender choose the node(s) with the largest degree.
Remark 1
According to Proposition 2, the value of $g$ that ensures the existence of an NE is limited by the gap between the largest and the second largest degrees in the network. When this condition does not hold, e.g., when the node with the largest degree is not unique as in Fig. 1 (c), there is no NE. Moreover, the largest possible threshold over graphs on $n$ vertices is attained by the star graph, for which the threshold becomes $n-2$, as in Fig. 1 (d).
When there is no NE, we instead analyze a Stackelberg game in which the defender acts as the leader.
As leader, the defender solves the following optimization problem
(10) 
where the minimization is over all defense nodes in the network, and the inner term is the best response of the attacker to the defender's strategy, i.e., the solution of the following optimization problem
(11) 
where the maximization is over all attacked nodes in the network. Unlike the NE, a Stackelberg equilibrium strategy always exists.
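On the matrix game, the leader's problem (10)-(11) reduces to a min-max over rows: the defender commits to a row and the attacker best-responds within it. A sketch with a hypothetical 2x2 payoff matrix that has no saddle point:

```python
import numpy as np

def stackelberg_defender_leader(J):
    """Defender (row player) leads; attacker (column player) best-responds
    by maximizing within the chosen row.  The defender therefore picks the
    row whose maximum element is smallest."""
    best_responses = J.argmax(axis=1)   # attacker's reply to each row
    row_worst = J.max(axis=1)           # worst-case value of each row
    d = int(row_worst.argmin())         # leader's optimal commitment
    return d, int(best_responses[d]), float(row_worst[d])

# Illustrative game with no pure saddle point: the Stackelberg
# solution still exists and gives the minimax value.
J = np.array([[4.0, 1.0],
              [2.0, 3.0]])
print(stackelberg_defender_leader(J))   # -> (1, 1, 3.0)
```

This also illustrates why a Stackelberg equilibrium always exists here: the min over rows of the row maxima is always well defined, even when no saddle point exists.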
Remark 2
We note that in order for the attacker to play the Stackelberg game, i.e., to find the optimal strategy in (11), it is not necessary to know the exact value of the feedback gain $g$. According to Proposition 2 and Theorem 2, which comes later, it is sufficient for the attacker to know only whether $g$ is above or below the threshold in order to find its best-response strategy.
The following theorem characterizes the equilibrium of the Stackelberg game.
Theorem 1
Consider a Stackelberg attacker-defender game on (2a) with a single attacked node and a single defense node, the defender as the game leader, and $g$ above the threshold of Proposition 2. Then the equilibrium strategy is for the defender to choose the node with the largest degree. In this case, the attacker's best response is the node with the second largest degree.
The following example discusses the role of the threshold in the attacker’s strategy.
Example 2
For the graph shown in Fig. 2, the attacker's decisions are plotted with respect to the defender's best action, i.e., the node with the largest degree. For $g$ below the threshold, the attacker's best action is the node with the largest degree (as follows from Proposition 2), and for $g$ above the threshold, the attacker's best response is the node with the second largest degree (as follows from Theorem 1). At the threshold itself, the payoff is the same whether the attacker chooses node 3 or node 4.
IV-B Multiple Attacked and Multiple Defense Nodes
Now consider the case in which there are multiple attacked nodes and multiple defense nodes. Here we consider only a Stackelberg setup, as it is more applicable to security problems [2]. Having the defender as the leader reflects the defender's need to account for the worst case; thus, it is natural to take the defender as the game leader.
The Stackelberg game is a combinatorial problem, so in general its computational cost is high unless it is reduced under specific assumptions. In our problem, finding the optimal defense nodes when the defender is the game leader is burdensome unless the control gain $g$ is sufficiently large and the number of attacked nodes is sufficiently small.
Theorem 2
Consider a Stackelberg attacker-defender game on (2a) with multiple attacked nodes and multiple defense nodes, and with the defender as the game leader. If $g$ is sufficiently large, then at the equilibrium the defender chooses the nodes with the largest degrees in the network, and the best response of the attacker is to attack the nodes with the largest degrees among the remaining nodes.
V Attacker-Defender Game on (2b)
V-A Single Attacked and Single Defense Node
As in the case of (2a), we start with the case of a single attacked node and a single defense node. We first have the following proposition.
Proposition 3
The attacker-defender game on (2b) with a single attacked node and a single defense node does not admit an NE.
As for the attacker-defender game on (2a), in the absence of an NE an optimal defense strategy can be determined by solving the Stackelberg game. Recalling the notion of the effective center of a graph from Section II, we have the following theorem, which is proven in Appendix H.
Theorem 3
Consider the Stackelberg attacker-defender game on (2b) over graph $\mathcal{G}$ with the defender as the game leader. Then a solution of the game is for the defender to choose the effective center of $\mathcal{G}$. In this case, the best response of the attacker is a node with the maximum effective resistance from the effective center.
For the case of acyclic networks, Theorem 3 reduces to the following corollary.
Corollary 1 (Acyclic Networks)
Consider the Stackelberg attacker-defender game on (2b), with the defender as the game leader, over a connected undirected tree. At the equilibrium, the defender chooses the center of the graph and the attacker chooses a node with the greatest distance from the center.
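For trees, the defender's equilibrium location from Corollary 1 is the ordinary graph center, computable from BFS eccentricities. A small sketch on an illustrative path graph:

```python
from collections import deque

def bfs_dist(adj, s):
    """Hop distances from s to every node of the (connected) graph."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def center(adj):
    """Vertices of minimum eccentricity (the graph center)."""
    ecc = {v: max(bfs_dist(adj, v).values()) for v in adj}
    m = min(ecc.values())
    return {v for v in adj if ecc[v] == m}

# Path 0-1-2-3-4: the defender sits at node 2; the attacker's best
# response is a farthest node (0 or 4).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(center(adj))  # -> {2}
```

On general graphs the same routine would be run with effective resistances in place of hop distances, per Theorem 3.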
Remark 3 (Game Equilibria and Network Centrality)
As mentioned before, the optimal location of the defense node under control law (2a) is the degree central node (Theorem 1), and under (2b) it is the graph's center for acyclic networks (Corollary 1) or the effective center for general graphs (Theorem 3). These network centralities (and consequently the optimal defense node placements) can differ substantially from each other. One such example is the graph shown in Fig. 3 (a), in which, by increasing the length of the path, the two centralities become arbitrarily far apart.
V-B Multiple Attacked and Multiple Defense Nodes
To tackle this problem, we interpret the self-feedback loops as connections to a virtual agent (or grounded node), as shown in Fig. 3 (b). In this case, the relevant matrix is a submatrix of the Laplacian of the extended graph (including the virtual node) in which the row and column corresponding to the virtual node are removed. Such submatrices are called grounded Laplacians in the literature [20]. With this in mind, it is known that the $i$-th diagonal element of the inverse grounded Laplacian is the effective resistance between node $i$ and the virtual node [16] (when the graph is a tree, the effective resistance and the graph distance coincide). As an example, consider nodes 1 and 2 in Fig. 3 (b), which are chosen as defense nodes, and nodes 1 and 3, which are under attack. Based on this fact, the proof of the following theorem is straightforward.
Theorem 4
Consider the Stackelberg attacker-defender game on (2b) with multiple defense nodes and multiple attacked nodes, the defender as the game leader, over a connected undirected graph $\mathcal{G}$. Denote the virtual agent corresponding to a set of defense nodes by $\bar{v}$. Then a solution of the game is for the defender to choose the set of defense nodes for which the maximum sum of effective resistances between $\bar{v}$ and all combinations of attacked nodes in the network is minimized. In this case, the attacker chooses the set of attacked nodes attaining this maximum.
As seen from Theorem 4, finding the optimal set of defense nodes requires a high level of computation.
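The grounded-Laplacian fact underlying Theorem 4 — that the $i$-th diagonal entry of the inverse grounded Laplacian equals the effective resistance from node $i$ to the virtual grounded node — can be checked numerically. A sketch on an illustrative path graph with one defense node and $g=1$ (graph and gain are assumptions, not the paper's example):

```python
import numpy as np

def grounded_laplacian(A, defense, g):
    """L_g = L + g*W, with W selecting the defense nodes; equivalently the
    Laplacian of the graph extended with a grounded node, with that node's
    row and column removed."""
    L = np.diag(A.sum(1)) - A
    for i in defense:
        L[i, i] += g
    return L

# Path 0 - 1 - 2 with a self-loop of weight g = 1 at node 0.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
Lg = grounded_laplacian(A, defense=[0], g=1.0)

# diag(Lg^{-1}) = effective resistance from each node to ground:
# node 0: the 1-ohm self-loop only -> 1; node 1: 1 + 1 = 2; node 2: 3.
print(np.round(np.diag(np.linalg.inv(Lg)), 3))  # -> [1. 2. 3.]
```

Since this path is a tree, the resistances equal graph distances to ground (through the self-loop), matching the footnoted remark above.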
Remark 4
(The Effect of Increasing Connectivity): Since the effective resistance between two nodes in a graph is a nonincreasing function of the edge weights [20], adding extra edges to the network (or increasing the weight of edges) decreases the diagonal elements of the inverse grounded Laplacian and consequently decreases the system norm. Hence, unlike for control law (2a), increasing connectivity is beneficial from the defender's perspective under (2b).
C Proof of Proposition 1
Proof: We prove the result for the first case; the second case, (2b), follows a similar procedure. We compute the $\mathcal{H}_2$ norm using the trace formula, where the observability Gramian is uniquely obtained from the Lyapunov equation together with an additional constraint along the mode corresponding to the marginally stable eigenvalue of the state matrix. This is because the marginally stable mode is not detectable and, since the remaining eigenvalues are stable, the defining integral exists. The proof of the uniqueness of the Gramian is the same as in [23, Lemma 1] and is omitted here. To calculate the observability Gramian, we have
(12) 
Solving (12) gives the blocks of the observability Gramian; substituting them into the trace formula yields the result.
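The trace-formula route used in this proof can be illustrated on any strictly stable system: solve the observability Lyapunov equation and take the trace. The sketch below uses SciPy and a scalar example, both illustrative (the paper's system has a marginally stable mode handled by the extra constraint discussed above):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm_sq(A, B, C):
    """||G||_H2^2 = trace(B^T X B), where A^T X + X A + C^T C = 0
    and A is Hurwitz (strictly stable)."""
    X = solve_continuous_lyapunov(A.T, -C.T @ C)
    return float(np.trace(B.T @ X @ B))

# Scalar example dx = -a*x + w, y = x: the H2 norm squared is 1/(2a).
a = 2.0
A = np.array([[-a]])
B = np.array([[1.0]])
C = np.array([[1.0]])
print(h2_norm_sq(A, B, C))  # -> 0.25
```

Note the SciPy convention: `solve_continuous_lyapunov(a, q)` solves `a x + x a^H = q`, so passing `A.T` and `-C.T @ C` yields the observability Gramian equation.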
D Proof of Proposition 2
Proof: It is straightforward to verify that each element of the matrix game is
(13) 
We first prove sufficiency; assume that $g$ satisfies the threshold condition. If the attacker unilaterally changes its strategy from the node with the maximum degree to some other node, then, according to (13) and the upper bound on $g$, the game value does not increase. Moreover, if the defender changes its strategy to another node, then, based on (13), since the smallest element of each column is its diagonal element, the defender's payoff does not decrease. Hence, neither the attacker nor the defender is willing to change its strategy unilaterally.
For necessity, suppose that both the attacker and the defender choosing the node with the largest degree is an NE. Then the equilibrium inequalities must hold for all other nodes, which yields the bound on $g$ and proves the claim.
E Proof of Theorem 1
Proof: When $g$ is above the threshold, for each row (defender's action) of the game matrix, the largest element (the attacker's best response) corresponds to the node with the largest degree, except in the row corresponding to that node itself, where the largest element corresponds to the node with the second largest degree. Consequently, the optimal action of the defender is to choose the node with the largest degree, and the best response of the attacker is the node with the second largest degree. This solution may not be unique; however, the optimal value of the game is unique.
F Proof of Theorem 2
Proof: For the case of multiple attacked and multiple defense nodes, each element of the matrix game (corresponding to a defender decision set and an attacker decision set) is
(14) 
Since the defender is the game leader, it has to choose a row of the game matrix whose maximum element is smallest over all rows. When $g$ is sufficiently large, considering a fixed set of defense nodes, for each set of attacked nodes we have
(15) 
Inequality (15) together with (14) shows that, for the row corresponding to a given defense set, the largest element corresponds to attack sets that avoid the defense nodes. In that case, the maximum element of the row is given by the second term in (14). Thus, the best action of the defender, to minimize that maximum row element, is to choose the nodes with the largest degrees in the graph.
G Proof of Proposition 3
Proof: As mentioned in Section V, the $i$-th diagonal element of the inverse grounded Laplacian is the effective resistance between node $i$ and the virtual node, which is connected to the single defense node by an edge of weight (conductance) $g$ [20]. Hence, each diagonal element of the game matrix is strictly less than the other elements of its row and column. Now, assume that an NE exists and consider the corresponding equilibrium strategies of the attacker and the defender; the saddle-point inequalities must then hold for all alternative strategies. If the equilibrium element is a diagonal element, the attacker's inequality is violated, and if it is an off-diagonal element, the defender's inequality is violated.
H Proof of Theorem 3
Proof: Each element of the game matrix is the effective resistance between the virtual agent (connected to the defense node by an edge of weight $g$) and the attacked node. As the leader of the Stackelberg game, the defender minimizes, over all rows, the maximum element of each row. Thus, the optimal place for the defender is the effective center of the graph, as defined in Section II. Note that this solution (the strategies of the defender and the attacker) may not be unique, since the effective center of the network may not be a single node; however, the value of the game is unique.
VI Conclusion
A game-theoretic approach to the resilience of two canonical forms of second-order network control systems was discussed. For the case of a single attacked node and a single defense node, it was shown that the optimal location of the defense node for each of the two second-order systems is determined by a specific network centrality measure. The extension of the game to the case of multiple attacked and defense nodes was also discussed, and graph-theoretic interpretations of the equilibrium of the Stackelberg game for this case were investigated. An avenue for future work is to extend these results to directed networks.
References
 [1] Q. Zhu and T. Basar, "Game-theoretic methods for robustness, security, and resilience of cyber-physical control systems: Games-in-games principle for optimal cross-layer resilient control systems," IEEE Control Systems Magazine, vol. 35, no. 1, pp. 45–65, 2015.
 [2] M. Manshaei, Q. Zhu, T. Alpcan, T. Basar, and J. P. Hubaux, “Game theory meets network security and privacy,” ACM Computing Surveys, vol. 45, pp. 53–73, 2013.
 [3] A. Gupta, C. Langbort, and T. Basar, “Optimal control in the presence of an intelligent jammer with limited actions,” 49th IEEE Conference on Decision and Control, pp. 1096–1101, 2010.
 [4] S. Amin, G. A. Schwartz, and S. S. Sastry, “Security of interdependent and identical networked control systems,” Automatica, vol. 49, no. 1, pp. 186–192, 2013.
 [5] M. Wu and S. Amin, "Securing infrastructure facilities: When does proactive defense help?" Dynamic Games and Applications, doi: 10.1007/s13235-018-0280-8, 2018.
 [6] M. Pirani, E. Nekouei, H. Sandberg, and K. H. Johansson, "A game-theoretic framework for security-aware sensor placement problem in networked control systems," American Control Conference (to appear), 2019.
 [7] Q. Zhu and T. Basar, "Robust and resilient control design for cyber-physical systems with an application to power systems," in Proc. 50th IEEE Conf. Decision and Control and European Control Conf., 2011, pp. 4066–4071.
 [8] M. Felegyhazi and J.-P. Hubaux, Game Theory in Wireless Networks: A Tutorial. EPFL Technical Report, 2006.
 [9] J. Marden, G. Arslan, and J. S. Shamma, "Cooperative control and potential games," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 6, pp. 1393–1407, 2009.
 [10] P. N. Brown, H. P. Borowski, and J. R. Marden, Security Against Impersonation Attacks in Distributed Systems. arXiv preprint arXiv:1711.00609, 2017.
 [11] S. Amin, G. A. Schwartz, and S. S. Sastry, “Security of interdependent and identical networked control systems,” Automatica, pp. 186–192, 2013.
 [12] S. M. Dibaji and H. Ishii, “Resilient consensus of secondorder agent networks: Asynchronous update rules with delays,” Automatica, vol. 81, pp. 123–132, 2017.
 [13] I. Shames, A. Teixeira, H. Sandberg, and K. H. Johansson, “Distributed fault detection for interconnected secondorder systems,” Automatica, vol. 47, pp. 2757–2764, 2011.
 [14] H. J. LeBlanc, H. Zhang, X. Koutsoukos, and S. Sundaram, “Resilient asymptotic consensus in robust networks,” IEEE Journal on Selected Areas in Communications, vol. 31, pp. 766–781, 2013.
 [15] F. Pasqualetti, A. Bicchi, and F. Bullo, “Consensus computation in unreliable networks: A system theoretic approach,” IEEE Transactions on Automatic Control, vol. 57, pp. 90–104, 2012.
 [16] A. Ghosh, S. Boyd, and A. Saberi, “Minimizing effective resistance of a graph,” SIAM Review, vol. 50, no. 1, pp. 37–66, 2008.
 [17] A. J. Wood and B. F. Wollenberg, Power Generation, Operation, and Control. New York, NY, USA: Wiley, 2012.
 [18] M. Pirani, J. W. Simpson-Porco, and B. Fidan, "System-theoretic performance metrics for low-inertia stability of power networks," in IEEE Conference on Decision and Control, 2017.
 [19] W. Ren, R. Beard, and E. Atkins, "Information consensus in multivehicle cooperative control," IEEE Control Systems Magazine, pp. 71–82, 2007.
 [20] M. Pirani, E. M. Shahrivar, B. Fidan, and S. Sundaram, "Robustness of leader-follower networked dynamical systems," IEEE Transactions on Control of Network Systems, 2017.
 [21] B. Bamieh, M. R. Jovanovic, P. Mitra, and S. Patterson, "Coherence in large-scale networks: Dimension-dependent limitations of local feedback," IEEE Transactions on Automatic Control, vol. 57, pp. 2235–2249, 2012.
 [22] E. Tegling, B. Bamieh, and D. F. Gayme, “The price of synchrony: Evaluating the resistive losses in synchronizing power networks,” IEEE Transactions on Control of Network Systems, vol. 2, pp. 254–266, 2015.
 [23] B. K. Poolla, S. Bolognani, and F. Dorfler, “Optimal placement of virtual inertia in power grids,” American Control Conference, 2016.