Distributed AttackRobust Submodular Maximization
for MultiRobot Planning
Abstract
We aim to guard swarmrobotics applications against denialofservice (DoS) failures/attacks that result in withdrawals of robots. We focus on applications requiring the selection of actions for each robot, among a set of available ones, e.g., which trajectory to follow. Such applications are central in largescale robotic/control applications, e.g., multirobot motion planning for target tracking. But the current attackrobust algorithms are centralized, and scale quadratically with the problem size (e.g., number of robots). Thus, in this paper, we propose a generalpurpose distributed algorithm towards robust optimization at scale, with local communications only. We name it distributed robust maximization (DRM). DRM proposes a divideandconquer approach that distributively partitions the problem among cliques of robots. The cliques optimize in parallel, independently of each other. That way, DRM also offers significant computational speedups up to the running time of its centralized counterparts. depends on the robots’ communication range, which is given as input to DRM. DRM also achieves a closetooptimal performance, equal to the guaranteed performance of its centralized counterparts. We demonstrate DRM’s performance in both Gazebo and MATLAB simulations, in scenarios of active target tracking with swarms of robots. We observe DRM achieves significant computational speedups (it is 3 to 4 orders faster) and, yet, nearly matches the tracking performance of its centralized counterparts.
I Introduction
Safecritical scenarios of surveillance and exploration often require both mobile agility, and fast capability to detect, localize, and monitor. For example, consider the scenarios:
• Adversarial target tracking
Track adversarial targets that move across an urban environment, aiming to escape; [20]
• Search and rescue
Explore a burning building to localize any people trapped inside. [14]
Such scenarios can greatly benefit from teams of mobile robots that are agile, act as sensors, and plan their actions rapidly. For this reason, researchers are pushing the frontier on robotic miniaturization and perception [20, 14, 15, 12, 4, 5, 24], to enable mobile agility and autonomous sensing; and develop distributed coordination algorithms [1, 25, 13, 7, 2], to enable multirobot planning, i.e., the joint optimization of robots’ actions.
Particularly, distributed planning algorithms (instead of centralized) are especially important when one wishes to deploy largescale teams of robots; e.g., at the swarm level with tens or hundreds of robots. One reason is the distributed algorithms scale better for larger numbers of robots than their centralized counterparts [1]. And another one, equally important, is that in largescale teams, not all robots can communicate with each other, but only with the robots within a certain communication range.
However, the safety of the above critical scenarios can still be at peril. For example, robots operating in adversarial scenarios may get cyberattacked or simply incur failures, both events resulting in a withdrawal of robots from the task. Hence, in such adversarial environments, distributed attackrobust planning algorithms become necessary.^{1}^{1}1We henceforth consider the terms attack and failure, equivalent, both resulting in robot withdrawals from the task at hand.
In this paper, we formalize a general framework for distributed attackrobust multirobot planning for tasks that require the maximization of submodular functions, such as in active target tracking with multiplerobots [8].^{2}^{2}2Submodularity is a diminishing returns property [19], that captures the intuition that the more robots participate in a task, the less the gain/return one gets by adding an extra robot towards the task. Particularly, we focus on worstcase attacks that can result in up to robot withdrawals from the task.
Attackrobust multirobot planning is computationally hard and requires accounting for all possible withdrawals, a problem of combinatorial complexity. Importantly, even in the presence of no withdrawals, the problem of multirobot planning is NPhard [27]. All in all, the necessity for distributed attackrobust algorithms, and the inherent computational hardness motivates our goal in this paper: to provide a distributed, provably closetooptimal approximation algorithm. To this end, we capitalize on recent algorithmic results on centralized attackrobust multirobot planning [31] and present a distributed attackrobust algorithm.
Related work. Researchers have developed several distributed, but attackfree, planning algorithms, such as [1, 25, 13, 7, 2]. For example, [1] developed a decentralized algorithm, building on the local greedy algorithm proposed in [9, Section 4], which guarantees a suboptimality bound for submodular objective functions. Particularly, in [1] the robots form a string communication network, and sequentially choose an action, given all the actions of the robots that have chosen so far. Authors of [7] proposed a speedup of [1]’s approach, by enabling the greedy sequential optimization to be executed over directed acyclic graphs, instead of string ones. In scenarios where the robots cannot observe all the chosen actions so far, distributed, but still attackfree, algorithms for submodular maximization are developed in [10, 11]. Other distributed, attackfree algorithms are also developed in the machine learning literature on submodular maximization, but towards sparse selection (e.g., for data selection, or sensor placement) [16], instead of planning.
Recently, researchers have also developed attackrobust planning algorithms [17, 26, 31, 29]. With the exception of [17], the algorithms in [26, 31, 29] are centralized. Particularly, [17] provide a distributed attackresilient algorithm against Byzantine attacks (instead of attacks that result in robot withdrawals). While [26, 31] provide centralized attackrobust algorithms for active information gathering [26] and target tracking [31] with multiple robots. Other attackrobust algorithms, that however apply towards sparse selection instead of planning, are the [21, 3, 28].
All in all, towards enabling attackrobust planning in multirobot scenarios, where local interrobot communication can be necessary, and realtime performance with centralized planning is hard to maintain as the number of robots increases, we make the following contributions on attackrobust distributed multirobot planning.
Contributions. We introduce the problem of distributed attackrobust submodular maximization for multirobot planning, and provide an algorithm, named distributed robust maximization (DRM). DRM distributively partitions the problem among cliques of robots, where all robots are within communication range. Then, naturally, the cliques optimize in parallel, using [31, Algorithm 1]. We prove for DRM:
Systemwide attackrobustness
DRM is valid for any number of worstcase attacks;
Superior running time
DRM offers significant computational speedups, up to the running time of its centralized counterparts. depends on the interrobot communication range, which is given as input to DRM.
Neartocentralized approximation performance
Even though DRM is a distributed, faster algorithm than its stateoftheart centralized counterpart [31, Algorithm 1], DRM achieves a neartocentralized performance, having a suboptimality bound equal to [31, Algorithm 1]’s.
Numerical evaluations. We present Gazebo and MATLAB evaluations of DRM, in scenarios of active target tracking with swarms of robots. All simulation results demonstrate DRM’s speedup benefits: DRM runs 3 to 4 orders faster than its centralized counterpart in [31], achieving running times 0.5 to 1.5msec for 100 robots. And, yet, DRM exhibits negligible deterioration in performance (target coverage).
All proofs are given in the appendix.
Ii Problem Formulation
We formalize the problem of distributed attackrobust submodular maximization for multirobot planning. At each timestep, the problem asks for assigning actions to the robots, to maximize an objective function despite attacks. For example, in active target tracking with aerial robots (see Fig. 1). The robots’ possible actions are their motion primitives; the objective function is the number of covered targets; and the attacks are fieldofview blocking attacks.
We next introduce our framework in more detail:^{3}^{3}3Notations. Calligraphic fonts denote sets (e.g., ). denotes ’s power set, and its cardinality. are the elements in not in .
Robots
We consider a multirobot team . At a given timestep, is robot ’s position in the environment (). We define .
Communication graph
Each robot communicates only with those robots within a prescribed communication range. Without loss of generality, we assume all robots to have the same communication range . That way, an (undirected) communication graph is induced, with nodes the robots , and edges such that if and only if . The neighbors of robot are all robots within the range , and are denoted by .
Action set
Each robot has an available set of actions to choose from; we denote it by . The robot can choose at most action at each time, due to operational constraints; e.g., in motion planning, denotes robot ’s motion primitives, and the robot can choose only motion primitive at a time to be its trajectory. For example, in Figure 1(b) we have 2 robots, where (and robot 1 chooses as its trajectory) and (and robot 2 chooses as its trajectory). We let . Also, denotes a valid assignment of actions to all robots. For instance, in Figure 1(b), .
Objective function
Attacks
At each time, we assume the robots encounter worstcase attacks. We assume the maximum number of anticipated attacks to be known and denote it by .
Problem 1 (Distributed attackrobust submodular maximization for multirobot planning).
The robots, by exchanging information only over the communication graph , assign an action to each robot to maximize against worstcase attacks/failures:
(1) 
where corresponds to the actions of the attacked robots. The first constraint ensures only action is chosen per robot.
Iii A Distributed Algorithm: Drm
We present Distributed Robust Maximization (DRM), a distributed algorithm for Problem 1 (Algorithm 1). DRM executes sequentially two main steps: distributed clique partition (DRM’s line 1), and per clique attackrobust optimization (DRM’s lines 28). During the first step, the robots communicate with their neighbors to partition into cliques of maximal size (using Algorithm 2, named DCP in DRM’s line 1).^{4}^{4}4A clique is a set of robots that can all communicate with each other. During the second step, each clique computes an attackrobust action assignment (in parallel with the rest), using the centralized algorithm in [31] —henceforth, we refer to the algorithm in [31] as centralrobust. centralrobust takes similar inputs to DRM: a set of actions, a function, and a number of attacks.
We describe DRM’s two steps in more detail below; and quantify its running time and performance in Section IV.
Iiia Distributed clique partition
We present the first step of DRM, namely, distributed clique partition (DRM’s line 1, that calls DCP, whose pseudocode is presented in Algorithm 2). Notably, the problem is inapproximable in polynomial time, since even finding a single clique of maximum size is inapproximable (unless NPP) [32] (even in a centralized way).
DCP builds on [23, Algorithm 2], which finds for each vertex in a graph a clique containing the vertex (DCP’s line 2). We refer to [23, Algorithm 2] as PerVrtxMaxClique in DCP. The cliques returned by PerVrtxMaxClique can overlap with each other, since PerVrtxMaxClique returns as many cliques as vertices/robots. In order to separate those cliques, in DCP’s lines 39 each robot communicates with its neighbors once, during which: a) each robot shares its clique with its neighbors (DRM’s line 2); b) robot and its neighbor follow a partition rule that, from their two cliques, the smaller one will lose the overlapped robots (DCP’s lines 69). That way, DCP aims to partition to fewer and larger cliques. The generated nonoverlapping cliques are returned by DCP’s line 10.
IiiB Per clique attackrobust optimization
We now present DRM’s second step: per clique attackrobust optimization (DRM’s lines 28). The step calls centralrobust as subroutine, and therefore we recall its steps here from [31]: centralrobust takes as input the available actions of a set of robots (i.e., the ), a monotone submodular , and a number of attacks , and constructs an action assignment by following a twostep process. First, it tries to approximate the anticipated worstcase attack to , and, to this end, builds a “bait” set as part of . Particularly, the bait set is aimed to attract all attacks at , and for this reason, it has cardinality (the same as the number of anticipated attacks). In more detail, centralrobust includes an action in the bait set (at most 1 action per robot, per Problem 1) only if for any other . That is, the bait set is composed of the “best” single actions. In the second step, centralrobust a) assumes the robots in the bait set are removed from , and then b) greedily assigns actions to the rest of the robots using the centralized greedy in [9, Section 2] which ensures a nearoptimal assignment (at least 1/2 close to the optimal).
In this context, DRM’s second step is as follows: assuming the clique partition step returns cliques (DRM’s line 1), now each clique in parallel with the others computes an attackrobust assignment for its robots using centralrobust (DRM’s lines 38). To this end, the cliques need to assess how many of the attacks each will incur. If there is no prior on the attack generation mechanism, then we consider each clique assumes a worstcase scenario where it incurs all the attacks. Otherwise, we consider there is a prior on the attack mechanism such that each clique infers it will incur attacks. Without loss of generality, in DRM’s pseudocode in Algorithm 1 we present the former scenario, where across all cliques; notwithstanding, our theoretical results on DRM’s performance (Section IV) hold for any such that . Overall, DRM’s lines 38 are as follows (see Fig. 2 for an example):
Drm’s lines 45 ()
Drm’s lines 67 ()
Drm’s line 8
All in all, now all robots have assigned actions, and is the union of all assigned actions across all cliques (notably, the robots of each clique know only , where is per the notation in DRM).
To close the section, we note that DRM is valid for any number of attacks since centralrobust in [31].
Iv Performance Analysis
We now quantify DRM’s performance, by bounding its computational and approximation performance. To this end, we use the following notion of curvature for set functions.
Iva Curvature
Definition 1 (Curvature [6]).
Consider nondecreasing submodular such that , for any (without loss of generality). Also, denote by the collection of admissible sets where can be evaluated at. Then, ’s curvature is defined as
(2) 
The curvature, , measures how far is from being additive. Particularly, Definition 1 implies , and if , then for all ( is additive). On the other hand, if , then there exist and such that ( has no contribution in the presence of ).
For example, in active target tracking, is the expected number of covered targets (as a function of the robot trajectories). Then, has curvature 0 if each robot covers different targets from the rest of the robots. In contrast, it has curvature 1 if, e.g., two robots cover the exact same targets.
IvB Running time and approximation performance
We present DRM’s running time and suboptimality bounds. To this end, we use the notation:

is the set of robots composing ’s largest clique;

is the set of possible actions of all robots in ; that is, ;

is the optimal value of Problem 1;

is a worstcase removal from (a removal from corresponds to a set of robot/sensor attacks); that is, .
Theorem 1 (Computational performance).
DRM runs in time.
The part corresponds to DRM’s clique partition step (DRM’s line 1), while to DRM’s attackrobust optimization step (DRM’s lines 28). Typically, is smaller than , since the latter grows quadratically fast, and, as a result, we henceforth ignore the former’s contribution in the running time.
In contrast, the centralized [31, Algorithm 1] runs in time. Thus, when (which happens when is partitioned into at least cliques), then DRM offers a significant computational speedup. The reasons are twoforth: parallelization of action assignment, and smaller clique size. Particularly, DRM splits the action assignment among multiple cliques, instead of performing the assignment in a centralized way, where all robots form one large clique (the ). That way, DRM enables each clique to work in parallel, reducing the overall running time to that of the largest clique (Theorem 1). Besides parallelization, the smaller clique size also contributes to the computational reduction. To illustrate this, assume is partitioned to cliques of equal size, and all robots have the same number of actions ( for all ). Then, , that is, DRM’s running time is smaller by the factor (than the running time of its centralized counterpart).
Theorem 2 (Approximation performance).
DRM returns a feasible such that if , then
(3) 
If, instead, , then DRM is the same as its centralized counterpart in [31], in which case the following suboptimality bound holds [31, Theorem 1]:
(4) 
By comparing eq. (3) and eq. (4), and focusing on the depended bounds, we conclude that even though DRM is a distributed, faster algorithm than its centralized counterpart, it still achieves a neartocentralized performance. At the same time, DRM’s dependent bounds are inversely proportional to the number of cliques, as well as, ’s size.
Generally, Theorem 2 implies DRM guarantees a closetooptimal value for any submodular . Specifically, DRM’s approximation factor is bounded by the depended bounds (rightmost two bounds in eq. (3)), which are nonzero for any finite number of robots. Similarly, the curvaturedependent bound is also nonzero for any with curvature .
V Numerical Evaluation
We present DRM’s Gazebo and MATLAB evaluations in scenarios of active target tracking with swarms of robots. The implementations’ code is available online.^{5}^{5}5https://github.com/raaslab/distributed_resilient_target_tracking.git
Compared algorithms. We compare DRM with two algorithms. First, the centralized counterpart of DRM in [31], named centralrobust (its nearoptimal performance has been extensively demonstrated in [31]). The second algorithm is the centralized greedy algorithm in [9], named centralgreedy. The difference between the two algorithms is that the former is attackrobust, whereas the latter is attackagnostic. For this reason, in [31] we demonstrated, unsurprisingly, that centralgreedy has inferior performance to centralrobust in the presence of attacks. However, we still include centralgreedy in the comparison, to highlight the differences among the algorithms both in running time and performance.
Va Gazebo evaluation over multiple steps with mobile targets
We use Gazebo simulations to evaluate DRM’s performance across multiple rounds (timesteps). That way, we take into account the kinematics and dynamics of the robots, as well as, the fact that the actual trajectories of the targets, along with the sensing noise, may force the robots to track fewer targets than expected. Due to the running efficacy of Gazebo (which is independent of DRM), we focus on smallscale scenarios of 10 robots. In the MATLAB simulation, we focus instead on largerscale scenarios of 100 robots.
Simulation setup. We consider 10 aerial robots that are tasked to track 50 ground mobile targets (Fig. 3(a)). We set the number of attacks equal to , and the robots’ communication range to be meters. We also visualize the robots, their fieldofview, their cliques, and the targets using the Rviz environment (Fig. 3(b)). Each robot has 4 candidates trajectories, , and flies on a different fixed plane (to avoid collision with other robots). Each robot has a square filedofview . Once a robot picks a trajectory, it flies a distance along that trajectory. Thus, each trajectory has a rectangular tracking region with length and width . We set the tracking length , and tracking width for all robots. We assume robots obtain noisy position measurements of the targets, and then use a Kalman filter to estimate the target’s position. We consider to be the expected number of targets covered, given all robots chosen trajectories (per round).
For each of the compared algorithms, at each round, each robot picks one of its 4 trajectories. Then, the robot flies a meters along the selected trajectory.
When an attack happens, we assume the attacked robot’s tracking sensor (e.g., camera) to be turnedoff; nevertheless, we assume it can be active again at the next round. The attack is a worstcase attack, per Problem 1’s framework. Particularly, we compute the attack via a bruteforce algorithm, which is viable for smallscale scenarios (as this one).
We repeat for 50 rounds. A video is available online.^{6}^{6}6https://youtu.be/T0Hb0UURCLM
Results. The results are reported in Fig. 4. We observe:
a) Superior running time: DRM runs considerably faster than both centralrobust and centralgreedy: 3 orders faster than the former, and 4 than the latter, with average running time 0.1msec (Fig. 4(a)).
b) Neartocentralized tracking performance: Despite that DRM runs considerably faster, it maintains neartocentralized performance: DRM covers on average 20 targets per round, while centralrobust covers 20.2 (Fig. 4(b)). As expected, the attackagnostic centralgreedy performs worse than all algorithms, even being centralized.
VB MATLAB evaluation over one step with static targets
We use MATLAB simulations to evaluate DRM’s performance in largescale scenarios. Specifically, we evaluate DRM’s running time and performance for various numbers of robots (from 10 to 100) and communication ranges (resulting from as few as 5 cliques to as many as 30 cliques). We compare all algorithms over a single execution round.
Simulation setup. We consider mobile robots, and 100 targets. We vary from 10 to 100. For each , we set the number of attacks equal to , and . Similarly to the Gazebo simulations, each robot moves on a fixed plane, and has four possible trajectories: forward, backward, left and right. We set and for all robots. We randomly generate the positions of the robots and targets in a 2D space of size . Particularly, we generate 30 Monte Carlo runs (for each ). We assume that the robots have available estimates of targets’ positions. For each Monte Carlo run, all compared algorithms are executed with the same initialization (same positions of robots and targets). DRM is tested across four communication ranges: . For a visualization of ’s effect on the formed cliques, see Fig. 5, where we present two of the generated scenarios. All algorithms are executed for one round in each Monte Carlo run.
Notably, since we consider largescale scenarios (up to robots, and up to 75 attacks, when , and ), computing the worstcase attack via a bruteforce algorithm is now infeasible (we recall that computing a worstcase attack is NPhard, and, as a result, to compute one in practice, in smallscale scenarios we need to use a bruteforce algorithm, otherwise, in largescale scenarios we need to use an approximation algorithm). Herein, given a trajectory assignment to all robots, the problem of computing a worstcase attack is a monotone submodular optimization problem, which can be solved nearoptimally using the greedy algorithm in [19]. Therefore, we henceforth consider greedy attacks, instead of worstcase attacks.
Results. The results are reported in Fig. 6, where we make the same qualitative conclusions as in the Gazebo evaluation:
a) Superior running time: DRM runs several orders faster than both centralrobust and centralgreedy: 3 to 4 orders, achieving running time from 0.5msec to 1.5msec (Figs. 6(ad)). Notably, we also observe centralrobust runs faster as increases, which is due to how centralrobust works, that causes centralrobust to become faster as tends to [31]).
b) Neartocentralized tracking performance: Although DRM runs considerably faster, it retains a tracking performance close to the centralized one (Figs. 6(eh)). On the other hand, unsurprisingly, the attackagnostic greedy performs worse than all algorithms.
To summarize, in all simulations above, DRM offered significant computational speedups, and, yet, still achieved a tracking performance that matched the performance of the centralized, nearoptimal algorithm in [31].
Vi Conclusion
We worked towards securing swarmrobotics applications against worstcase attacks resulting in robot withdrawals. Particularly, we proposed DRM, a distributed robust submodular optimization algorithm. DRM is generalpurpose: it applies to any Problem 1’s instance. We proved DRM runs considerably faster than its centralized counterpart, without compromising approximation performance. We demonstrated both its running time and nearoptimality in Gazebo and MATLAB simulations of active target tracking.
A future avenue is to investigate distributed algorithms where each robot communicates with neighboring robots even across different cliques than its own. That way, the robots can utilize more information towards an attackrobust action assignment. Another future avenue is to investigate distributed algorithms against an unknown number of attacks (e.g., captured by stochastic processes [22]).
Appendix
Via Proof of Theorem 1
DRM’s running time is equal to DCP’s running time, plus the running time for all cliques to execute centralrobust in parallel. Particularly, in DCP, each robot first finds its maximal clique using PerVrtxMaxClique, which runs in time. Then, it shares its maximal clique with its neighbors for graph partition, which also takes time. Thus, DCP runs in time. Next, since all cliques perform in parallel, the running time depends on the largest clique, which gives a time (the proof follows the proof of [31, Part 2 of Theorem 1]). Totally, Algorithm 1 runs in time.
ViB Proof of Theorem 2
We prove Theorem 2 by proving first the dependent bound and then the dependent bound. The proof is based on [29, Proof of Theorem 1].
We introduce the notation: denotes an optimal solution to Problem 1. Given an action assignment to all robots in , and a subset of robots , we denote by the actions of the robots in (i.e., the restriction of to ). And vise versa: given an action assignment to a subset of robots, we let denote this subset (i.e., ). Additionally, we let ; that is, is the restriction of to the clique selected by DRM’s line 1 (); evidently, . Moreover, we let correspond to bait actions chosen by centralrobust in , and denote the actions for the remaining robots in ; that is, . If , then . Henceforth, we let be the action assignment given by DRM to all robots in . Also, we let be remaining robots after the attack ; i.e., . Further, we let , , and . Finally, we let denote the remaining robots in after removing from it any subset of robots with cardinality .
Now the proof follows from the steps:
(5)  
(6)  
(7)  
(8)  
(9) 
(10)  
(11)  
(12)  
(13)  
(14) 
Ineq. (5) follows from the definition of (see [29, Proof of Theorem 1]). Eqs. (6) and (7) follow from the notation we introduced above. Ineq. (8) is implied by the fact that any action in is a bait. Eq. (9) holds from the notation. Ineq. (10) holds by the submodularity of , which implies for any sets [19]. Ineq. (11) holds since a) with respect to the left term in the sum, the robots in the sum correspond to robots whose actions are baits; and b) with respect to the right term in the sum, the greedy algorithm that has assigned the actions guarantees at least the optimal [9]. Ineq. (12) holds again due to the submodularity of , as above. The same for ineq. (13). Eq. (14) follows from the notation, which implies .
We now prove the dependent bounds in Theorem 2.
(15)  
(16)  
(17)  
(18)  
(19) 
where . Particularly, ineq. (15) holds from [29, Proof of Theorem 1]. Ineq. (16) holds from the monotonicity of : for all . For ineq. (17), on the one hand, if , we denote the most profitable action in it as . Clearly, . Due to the monotonicity of , we have since . Thus, . On the other hand, if , then only contains actions selected by the greedy algorithm. Note that, by greedy algorithm, the first section is also the most profitable action. We denote this action as . Similarly, we have . Thus, ineq. (17) holds. Ineq. (18) holds obviously from ineq. (17). Ineq. (19) holds by the definition of and from [21, Lemma 2].
References
 [1] (2015) Decentralized active information acquisition: theory and application to multirobot slam. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 4775–4782. Cited by: §I, §I, §I.
 [2] (2019) DecMCTS: Decentralized planning for multirobot active perception. The International Journal of Robotics Research 38 (23), pp. 316–337. Cited by: §I, §I.
 [3] (2017) A distributed algorithm for partitioned robust submodular maximization. In IEEE 7th International Workshop on Computational Advances in MultiSensor Adaptive Processing, pp. 1–5. Cited by: §I.
 [4] (2016) Past, present, and future of simultaneous localization and mapping: toward the robustperception age. IEEE Transactions on Robotics 32 (6), pp. 1309–1332. Cited by: §I.
 [5] (2017) Rapid exploration with multirotors: A frontier selection method for high speed flight. In IEEE/RSJ Int. Conf. on Intel. Robots and Systems, Vol. , pp. 2135–2142. Cited by: §I.
 [6] (1984) Submodular set functions, matroids and the greedy algorithm: tight worstcase bounds and some generalizations of the radoedmonds theorem. Discrete applied mathematics 7 (3), pp. 251–274. Cited by: Definition 1.
 [7] (2019) Distributed matroidconstrained submodular maximization for multirobot exploration: theory and practice. Autonomous Robots 43 (2), pp. 485–501. Cited by: §I, §I.
 [8] (2017) Detecting, localizing, and tracking an unknown number of moving targets using a team of mobile robots. The International Journal of Robotics Research 36 (1314), pp. 1540–1553. Cited by: §I.
 [9] (1978) An analysis of approximations for maximizing submodular set functions–II. In Polyhedral combinatorics, pp. 73–87. Cited by: §I, §IIIB, §V, §VIB.
 [10] (2018) Distributed submodular maximization with limited information. IEEE Transactions on Control of Network Systems 5 (4), pp. 1635–1645. Cited by: §I.
 [11] (2018) The impact of information in greedy submodular maximization. IEEE Transactions on Control of Network Systems. Cited by: §I.
 [12] (2012) Highspeed flight in an ergodic forest. In IEEE Intern. Confer. on Robotics and Automation, pp. 2899–2906. Cited by: §I.
 [13] (2019) Distributed state estimation using intermittently connected robot networks. IEEE Transactions on Robotics. Cited by: §I, §I.
 [14] (2017) Opportunities and challenges with autonomous micro aerial vehicles. In Robotics Research, pp. 41–58. Cited by: §I, §I.
 [15] (2014) Robotic tracking of coherent structures in flows. IEEE Transactions on Robotics 30 (3), pp. 593–603. Cited by: §I.
 [16] (2013) Distributed submodular maximization: Identifying representative elements in massive data. In Advances in Neural Information Processing Systems, pp. 2049–2057. Cited by: §I.
 [17] (2019) Resilient distributed state estimation with mobile agents: Overcoming Byzantine adversaries, communication losses, and intermittent measurements. Autonomous Robots 43 (3), pp. 743–768. Cited by: §I.
 [18] (2013) Game theory. Harvard university press. Cited by: §II.
 [19] (1978) An analysis of approximations for maximizing submodular set functions–I. Mathematical programming 14 (1), pp. 265–294. Cited by: §VB, §VIB, footnote 2.
 [20] (2013) Multirobot exploration strategies for tactical tasks in urban environments. In Unmanned Systems Technology XV, Vol. 8741, pp. 87410B. Cited by: §I, §I.
 [21] (2018) Robust monotone submodular function maximization. Mathematical Programming 172 (12), pp. 505–537. Cited by: §I, §VIB.
 [22] (2018) Robust rendezvous for multirobot system with random node failures: an optimization approach. Autonomous Robots, pp. 1–12. Cited by: §VI.
 [23] (2013) Fast algorithms for the maximum clique problem on massive sparse graphs. In International Workshop on Algorithms and Models for the WebGraph, pp. 156–169. Cited by: §IIIA.
 [24] (2018) Coverage control for multirobot teams with heterogeneous sensing capabilities. IEEE Robotics and Automation Letters 3 (2), pp. 919–925. Cited by: §I.
 [25] (2018) Anytime planning for decentralized multirobot active information gathering. IEEE Robotics and Automation Letters 3 (2), pp. 1025–1032. Cited by: §I, §I.
 [26] (2018) Resilient active information gathering with mobile robots. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4309–4316. Cited by: §I.
 [27] (2014) Multitarget visual tracking with aerial robots. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3067–3072. Cited by: §I, §II.
 [28] (2017) Resilient monotone submodular function maximization. In IEEE Conference on Decision and Control, pp. 1362–1367. Cited by: §I.
 [29] (2018) Resilient nonsubmodular maximization over matroid constraints. arXiv preprint arXiv:1804.01013. Cited by: §I, §VIB, §VIB, §VIB.
 [30] (2019) An approximation algorithm for distributed resilient submodular maximization. In International Symposium on MultiRobot and MultiAgent Systems, in print, Cited by: Distributed AttackRobust Submodular Maximization for MultiRobot Planning.
 [31] (2019) Resilient active target tracking with multiple robots. IEEE Robotics and Automation Letters 4 (1), pp. 129–136. Cited by: §I, §I, §I, §I, §I, §IIIB, §IIIB, §III, §IVB, §IVB, §VB, §VB, §V, §VIA.
 [32] (2006) Linear degree extractors and the inapproximability of max clique and chromatic number. In ACM Symposium on Theory of Computing, pp. 681–690. Cited by: §IIIA.