# Adversarial Task Allocation

###### Abstract

The problem of allocating tasks to workers is of long-standing, fundamental importance. Examples include the classical problem of assigning computing tasks to nodes in a distributed computing environment, as well as the more recent problem of crowdsourcing, where a broad array of tasks are completed by human workers. Extensive research into this problem generally addresses important issues such as uncertainty and, in crowdsourcing, incentives. However, the problem of adversarial tampering with the task allocation process has not received as much attention.

We are concerned with a particular adversarial setting in task allocation where an attacker may target a specific worker in order to prevent the tasks assigned to this worker from being completed. We consider two attack models: one in which the adversary observes only the allocation policy (which may be randomized), and the second in which the attacker observes the actual allocation decision. For the case when all tasks are homogeneous, we provide polynomial-time algorithms for both settings. When tasks are heterogeneous, however, we show the adversarial allocation problem to be NP-Hard, and present algorithms for solving it when the defender is restricted to assign only a single worker per task. Our experiments show, surprisingly, that the difference between the two attack models is minimal: deterministic allocation can achieve nearly as much utility as randomized.

## Introduction

The problem of allocating a set of tasks among a collection of workers has been a fundamental research question in a broad array of domains, including distributed computing, robotics, and, recently, crowdsourcing [1, 2, 3]. Despite the extensive interest in the problem, however, there is little prior work on task allocation in settings where workers may be attacked, and their ability to successfully complete the assigned task compromised as a consequence. Such adversarial task allocation problems can arise, for example, when tasks are of high economic or political consequence, such as when we use crowdsourcing to determine which executables are malicious or benign, or which news stories constitute fake news.

We investigate the adversarial task allocation problem in which a rational attacker targets a single worker after tasks have already been assigned. We consider two models of information available to the attacker at the time of the attack: partial information, where the attacker only knows the defender’s policy (common in Stackelberg security games, for example), and full information, where the attacker observes the actual task assignment decision. We formalize the interaction between the attacker and requester (defender) as a Stackelberg game in which the defender first chooses an allocation policy, and the attacker subsequently attacks a single worker so as to maximize the defender’s losses from the attack. We seek a strong Stackelberg equilibrium (SSE) of this game. In the partial information setting, we study how to compute an optimal randomized assignment in an SSE for the requester, while in the full information model we focus on computing an optimal deterministic assignment.

We first consider a homogeneous task setting, where all tasks have the same utility. In this case, we show that an optimal randomized task assignment policy can be computed in linear time. Deterministic assignment is harder, and the algorithm we devise for that case only runs in pseudo-polynomial time (linear in the number of tasks, and quadratic in the number of workers). While randomized policies in Stackelberg games are always advantageous for the defender (if they can be used), our experiments show that the difference between optimal deterministic and randomized policies is small, and shrinks as we increase the number of tasks, suggesting that a deterministic policy may be a good option, especially when we are uncertain about what the attacker can observe.

Next, we turn to heterogeneous tasks settings. This case, it turns out, is considerably more challenging. Nevertheless, if we impose a restriction that only a single worker can be assigned to a task (optimal when tasks are homogeneous, but not in general), we can still compute an optimal randomized assignment in linear time. Optimal deterministic assignment is much harder even with this restriction in place, and we propose an integer programming approach for solving it.

#### Related Work

The problem of task allocation in adversarial settings has been considered from several perspectives. One major stream of literature is about robots acting in adversarial environments [4, 5]. Alighanbari and How [4] consider assigning weapons to targets, somewhat analogous to our problem, but do not model the decision of the adversary; their model also has rather different semantics than ours. Robotic soccer is another common adversarial planning problem, although the focus there is typically on coordination and planning within a robot team while two opposing teams compete [5].

Another major literature stream which considers adversarial issues is crowdsourcing. One class of problems concerns individual worker incentives for truthfully responding to questions [6], or for the amount of effort they devote to the task [7, 3], rather than adversarial reasoning per se. Another, more directly adversarial, line considers situations where some workers simply answer questions in an adversarial way [8, 9]. However, the primary interest of that work is robust estimation when tasks are assigned randomly or exogenously, rather than task assignment itself. Similarly, prior research on machine learning when a portion of data is adversarially poisoned [10, 11, 12, 13] focuses primarily on the robust estimation problem, and not task allocation; in addition, it does not take advantage of structure in the data acquisition process, where workers, rather than individual data points, are attacked.

Our work has a strong connection to the literature on Stackelberg security games [14, 15, 16]. However, the mathematical structure of our problem is different: for example, we have no protection resources to allocate, and instead the defender’s decision is about allocating tasks to potentially untrusted workers.

## Model

Consider an environment populated with a single requester (hereafter denoted “defender”), a set of workers $W$ with $|W| = n$, a set of binary labeling tasks $T$ with $|T| = m$, and an adversary.
Each worker $i \in W$ is characterized by an individual proficiency, or the probability of successfully completing a binary labeling task, denoted $p_i$, and we assume that $p_i > 0.5$ for all workers (otherwise, we can always flip the received labels).
In our setting, these proficiencies are known to the defender. (The issue of learning such proficiencies from experience has itself been extensively studied [17, 18, 19].)
Further, we assume that $m$ is sufficiently small that any worker can complete all tasks.
For exposition purposes, we index the workers by integers in decreasing order of their proficiency, so that $p_1 \ge p_2 \ge \dots \ge p_n$, where the set of the $k$ most proficient workers is defined as $W_k = \{1, \dots, k\}$.
Each task $t \in T$ is associated with a utility $u_t$ that the defender obtains if this task is completed correctly.
We assume that if the task is not completed, or is completed incorrectly, the defender obtains zero utility from it.
Let $y_t$ be the (unknown) correct label corresponding to a task $t$.

The defender’s fundamental decision is the assignment of tasks to workers.
Formally, an assignment specifies a subset of tasks $\hat{T} \subseteq T$ and the set of workers $W_t \subseteq W$ assigned to each task $t \in \hat{T}$.
Let $L_t$ denote the labels returned by the workers in $W_t$ for $t$.
Suppose that the defender faces a budget constraint $B$ on the number of worker-task assignments, $\sum_{t \in \hat{T}} |W_t| \le B$; thus, each task can be assigned to a single worker, or a subset of tasks can be assigned to multiple workers each. (If there are more tasks than budget, we can simply take the $B$ tasks with the highest utility.)
Then the defender determines the final label $\hat{y}_t$ to assign to $t$ according to some deterministic mapping $g$ of the returned labels (e.g., majority label), so that $\hat{y}_t = g(L_t) \in \{0, 1\}$.
We assume that whenever a single worker is assigned to a task $t$ and returns a label $l_t$, then $\hat{y}_t = l_t$.
The defender’s expected utility when assigning a set of tasks $\hat{T}$ to workers and obtaining the labels $\{L_t\}$ is then

$$U = \mathbb{E}\Big[\sum_{t \in \hat{T}} u_t\, \mathbb{1}\{\hat{y}_t = y_t\}\Big], \qquad (1)$$

where $\mathbb{1}\{\cdot\}$ is an indicator function and the expectation is with respect to labeler proficiencies (and resulting stochastic realizations of labels).

It is immediate that in our setting, if there is no adversary, all tasks should be assigned to the worker with the highest proficiency $p_1$. Our focus, however, is how to optimally assign workers to tasks when there is an intelligent adversary who could subsequently (to the assignment) attack one of the workers. In particular, we assume that there is an adversary (attacker) with the goal of minimizing the defender’s utility $U$; thus, the game is zero-sum. To this end, the attacker chooses a single worker to attack, for example, by deploying a cyber attack against the corresponding compute node, or against the device on which the human worker performs the tasks assigned to them. We encode the attacker’s strategy by a vector $a = (a_1, \ldots, a_n)$, where $a_i = 1$ iff worker $i$ is attacked (and $\sum_i a_i = 1$, since exactly one worker is attacked). The attack takes place after the tasks have already been assigned to workers.

We distinguish between two forms of adversary’s knowledge about worker-to-task assignment before deploying the attack: 1) partial knowledge, when the adversary only knows the defender’s policy (which may be deterministic or randomized), and 2) complete knowledge, when the attacker knows the actual assignments of tasks to workers. The specific consequences of the attack—denial of service, where the targeted node is taken offline and cannot communicate the labels to the defender, or integrity attack, where incorrect labels are reported—are immaterial in our model, since the defender receives zero utility from the tasks assigned to the attacked worker in either case.

Clearly, when an attacker is present, the policy of allocating all tasks to the most competent worker (or any other single worker) will yield zero utility for the defender. The challenge of how to split the tasks up among workers, trading off quality with robustness to attacks, becomes decidedly non-trivial. Our goal is to address this challenge for both models of adversarial knowledge, computing an optimal randomized assignment (i.e., a probability distribution over assignments) in the partial knowledge environment, and an optimal deterministic assignment in the complete knowledge setting. Formally, we aim to compute a strong Stackelberg equilibrium of the game between the requester (leader), who chooses a task-to-worker assignment policy, and the attacker (follower), who attacks a single worker [20].

## Homogeneous tasks

We start by considering tasks which are homogeneous, that is, $u_t = u_{t'}$ for any two tasks $t, t'$. Without loss of generality, suppose that all $u_t = 1$. Note that it is immediate that we never wish to waste budget, since assigning a worker always results in non-negative marginal utility. Next we consider the problem of optimal randomized assignment (when the attacker only knows the randomized policy) and optimal deterministic assignment (when the attacker observes the actual assignment), showing that both can be solved efficiently.

### Randomized strategy

In general, a randomized allocation involves a probability distribution over all possible matchings of cardinality $B$ between tasks and workers. We first observe that this space can be narrowed to consider only matchings in which at most one worker is assigned to any task.

###### Proposition 1.

Suppose that tasks are homogeneous. There exists a Stackelberg equilibrium in which the defender commits to a randomized strategy with all assignments in the support assigning at most one worker per task.

###### Proof.

Consider an optimal randomized strategy commitment restricted to assign at most one worker per task, and the associated Nash equilibrium (which exists, by equivalence of Stackelberg and Nash in zero-sum games [21]). We now show that this remains an equilibrium even in the unrestricted space of assignments for the defender.

We prove by contradiction. Suppose that there is a randomized strategy $\sigma'$ which assigns multiple workers to some tasks and is strictly better for the defender. Consider an arbitrary attack $a$ in the support of the attacker's best response. Given $a$, suppose that there is some task $t$ assigned to $k > 1$ workers. Since only $B$ assignments can be made, there must be $k - 1$ tasks which are not assigned. If any of these $k$ workers is attacked, then moving this worker to another task will not change the defender’s utility. Thus, w.l.o.g., suppose none of the $k$ workers is attacked, and consider moving $k - 1$ of them to unassigned tasks; let the resulting assignment be $A'$. Under the original assignment, the marginal utility of the $k - 1$ extra workers towards completing their shared task is at most $1 - p_j < 1/2$, where $j$ is the worker who remains assigned to $t$. Under $A'$, the marginal utility of these workers is $\sum_i p_i > (k-1)/2$, since each $p_i > 1/2$. Thus, $A'$ is weakly improving. Since this argument holds for an arbitrary $a$ in the support of the attacker's strategy, the resulting defender strategy must also be weakly improving. Since $\sigma'$ is a strict improvement on the original Nash equilibrium strategy of the defender, the modified strategy must be as well, which means that this could not have been a Nash equilibrium, leading to a contradiction. The result then follows from the known equivalence between Nash and Stackelberg equilibria in zero-sum games. ∎

As a consequence of this proposition, it suffices to consider assignment policies (randomized or deterministic) in which each task is assigned to a single worker, and all tasks are assigned. Since there are $m$ tasks, an assignment is then a split of these among the $n$ workers. It is thus sufficient to consider the space of assignments $C = \{c \in \mathbb{Z}_{\ge 0}^n : \sum_i c_i = m\}$, where $c_i$ means that worker $i$ receives $c_i$ tasks; dividing through by $m$, such a split is a point on the unit simplex in $\mathbb{R}^n$, with the constraint that all $c_i$ are integers.

A randomized allocation, in general, is a probability distribution over the set of assignments $C$. In principle, the problem of computing an optimal randomized task allocation is daunting: even for small numbers of workers and tasks there can be over 20 million possible assignments in $C$. We now observe that in fact we can restrict attention to a far more restricted space of unit assignments, $C_u = \{m e_1, \ldots, m e_n\}$, where $e_i$ is the unit vector which is $1$ in the $i$th position and $0$ elsewhere; i.e., a unit assignment gives all $m$ tasks to a single worker. Let $q = (q_1, \ldots, q_n)$ denote a distribution over $C_u$, with $q_i$ the probability that worker $i$ is assigned all tasks.

###### Proposition 2.

For any distribution $D$ over assignments $C$ and attack strategy $a$, there exists a distribution $q$ over $C_u$ which results in the same utility.

###### Proof.

Fix an attacker strategy $a$. For any probability distribution $D$ over $C$, the expected utility of the defender is $U_D = \sum_{c} D(c) \sum_i (1 - a_i)\, c_i\, p_i$, where $c_i$ is the number of tasks assigned to worker $i$. The expected utility of the defender for a distribution $q$ over $C_u$ is $U_q = m \sum_i (1 - a_i)\, q_i\, p_i$. Define $q_i = \frac{1}{m}\sum_c D(c)\, c_i$. This is a valid probability distribution, since $\sum_i q_i = \frac{1}{m}\sum_c D(c) \sum_i c_i = 1$, because $\sum_i c_i = m$ and $D$ is a probability distribution. It suffices to show that $U_D = U_q$, which follows by exchanging the order of summation: $U_D = \sum_i (1-a_i)\, p_i \sum_c D(c)\, c_i = m \sum_i (1-a_i)\, q_i\, p_i = U_q$. ∎
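As a sanity check, this collapse from full assignments to unit assignments can be verified numerically. The sketch below (with made-up proficiencies, an arbitrary distribution over splits of $m = 10$ homogeneous unit-utility tasks, and a fixed attacked worker) confirms that the marginal distribution $q_i = \frac{1}{m}\sum_c D(c)\, c_i$ yields the same expected utility:

```python
m = 10                      # homogeneous tasks, unit utility
p = [0.9, 0.7, 0.6]         # hypothetical proficiencies
attacked = 0                # fixed attacker choice: worker 0

# An arbitrary distribution D over full assignments (splits of m tasks).
D = {(5, 3, 2): 0.5, (10, 0, 0): 0.2, (2, 4, 4): 0.3}

# Expected utility under D: the attacked worker's tasks are lost.
u_full = sum(pr * sum(c[i] * p[i] for i in range(3) if i != attacked)
             for c, pr in D.items())

# Collapse to a distribution over unit assignments: q_i = E[c_i] / m.
q = [sum(pr * c[i] for c, pr in D.items()) / m for i in range(3)]
u_unit = m * sum(q[i] * p[i] for i in range(3) if i != attacked)

assert abs(u_full - u_unit) < 1e-9
```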

This result allows us to restrict attention to probability distributions over .

Next, we make another important observation, which implies that in an optimal randomized assignment the support of $q$ must include the $k$ best workers for some $k$. Below, we use a worker's index $i$ as its rank in a decreasing order of proficiency.

###### Proposition 3.

In an optimal randomized assignment $q$, suppose that $q_j > 0$ for some worker $j$. Then there must be an optimal assignment in which $q_i > 0$ for all $i < j$.

###### Proof.

It is useful to write the utility of the defender as $U(q, l) = m\sum_{i \ne l} q_i p_i$, where $l$ is the worker being attacked. Suppose that $q$ is an optimal randomized assignment, and there exists some worker $j$, s.t. $q_j > 0$ and $q_i = 0$ for some $i < j$. Since $i < j$, $p_i \ge p_j$. Consider moving the probability mass $q_j$ from worker $j$ to worker $i$. First, suppose that some node $l \ne j$ is being attacked. Thus, $q_l p_l \ge q_k p_k$ for all $k$ (by optimality of the attacker). Consequently, after $q_j$ was removed from the probability of assigning to $j$ and shifted to $i$, if node $l$ is still attacked, the defender receives a net gain of $m q_j (p_i - p_j) \ge 0$; if the attacker instead switches to $i$, the net gain is $m(q_l p_l - q_j p_j) \ge 0$. Thus, if $q$ was optimal, so is the new assignment. Now, suppose that $l = j$. Again, if $i$ is being attacked after the mass is moved to $i$, the defender obtains a non-negative net gain as above. If instead this change results in some other $k$ now being attacked, the defender obtains a net gain of $m(q_j p_i - q_k p_k) \ge 0$, by the optimality condition of the attacker and the fact that $p_i \ge p_j$. ∎

The final piece of structure we observe is that in an optimal randomized assignment, the workers in the support must have the same value $q_i p_i$ to the adversary. Define $\mathrm{supp}(q) = \{i : q_i > 0\}$, i.e., the workers in the support of a strategy $q$.

###### Proposition 4.

There exists an optimal randomized assignment $q$ with $q_i p_i = q_j p_j$ for all $i, j \in \mathrm{supp}(q)$.

###### Proof.

Suppose that an optimal $q$ has two workers $i, j \in \mathrm{supp}(q)$ with $q_i p_i \ne q_j p_j$. Define $\bar{v} = \max_{i \in \mathrm{supp}(q)} q_i p_i$ and $\underline{v} = \min_{i \in \mathrm{supp}(q)} q_i p_i$. Let $M$ be the set of maximizing workers (with identical marginal value to the attacker, $q_i p_i = \bar{v}$), and let $w$ be some minimizing worker. By optimality of the attacker, some $i^* \in M$ is attacked, and by our assumption $\bar{v} > \underline{v}$.

First, suppose that there is some $i \in M$ with $p_i \le p_w$ or, equivalently, $q_i > q_w$. Then there exists $\epsilon > 0$ small enough so that if we change $q_i$ to $q_i - \epsilon$ and $q_w$ to $q_w + \epsilon$, the attacker does not attack $w$, and we gain $\epsilon(p_w - p_i) \ge 0$ and either lose the same as before to the attack (if $|M| > 1$) or lose less (if $M = \{i\}$). Consequently, $q$ cannot have been optimal, and this is a contradiction.

Thus, it must be that $p_i > p_w$ for all $i \in M$. Suppose now that we move all of the mass $q_w$ from $w$ onto the workers in $M$, maintaining their relative utility to the attacker as constant (and thus the attacker does not change which worker is attacked). For any worker $i \in M$, the resulting $q_i' = q_i + \delta_i$, where $\sum_{i \in M} \delta_i = q_w$ and $\delta_i p_i$ is identical across $i \in M$. Since $p_i > p_w$ for all $i \in M$, we find that $\sum_{i \in M} \delta_i p_i > q_w p_w$. Consequently, the defender's net gain from the resulting change is

$$m\Big(\sum_{i \in M} \delta_i p_i - q_w p_w - \delta_{i^*} p_{i^*}\Big),$$

which is non-negative since $p_i > p_w$ for all $i \in M$. We can then repeat the process iteratively, removing any other workers in the support but not in $M$, to obtain a solution with uniform $q_i p_i$ for all $i \in \mathrm{supp}(q)$ which is at least as good as the original solution $q$. ∎

Algorithm 1 uses these insights for computing an optimal randomized assignment in linear time.

At a high level, it attempts to compute the randomized assignment for each possible set $W_k$ of most proficient workers who can be in the support of the optimal assignment, and then returns the assignment which yields the highest expected utility to the defender. For a given $k$, Proposition 4 gives $q_i$ directly for all $i \in W_k$: $q_i = \frac{1/p_i}{\sum_{j \le k} 1/p_j}$, so that $q_i p_i$ is constant on the support. Consequently, the utility to the defender of an optimal randomized assignment over $W_k$ for $m$ tasks is $m(k-1)/\sum_{j \le k} 1/p_j$ (since one worker in the support is attacked, and it doesn't matter which one).
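Algorithm 1 itself is not reproduced here, but the computation just described can be sketched as follows. This is a best-effort reconstruction in Python (the function name and return convention are our own): for each candidate support size $k$, equalize the attacker's value across the $k$ most proficient workers and keep the best resulting utility.

```python
def optimal_randomized(p, m):
    """Optimal randomized assignment for m homogeneous unit-utility tasks.

    p: worker proficiencies sorted in decreasing order (all > 0.5).
    Returns (expected utility, support size k, distribution q over workers).
    """
    n = len(p)
    best_u, best_k, best_q = 0.0, 1, [1.0] + [0.0] * (n - 1)
    inv_sum = 0.0  # running sum of 1/p_j over the top-k workers
    for k in range(1, n + 1):
        inv_sum += 1.0 / p[k - 1]
        # Equalize attacker value: q_i * p_i is constant on the support,
        # so q_i = (1/p_i) / inv_sum.  One support worker is attacked,
        # losing m / inv_sum in expectation, regardless of which one.
        u = m * (k - 1) / inv_sum
        if u > best_u:
            best_q = [(1.0 / p[i]) / inv_sum for i in range(k)] + [0.0] * (n - k)
            best_u, best_k = u, k
    return best_u, best_k, best_q
```

Note that the optimal distribution places *more* probability on less proficient support workers ($q_i \propto 1/p_i$); this is exactly what keeps every support worker equally attractive to the attacker.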

### Deterministic strategy

Next we consider the setting in which the attacker observes the actual task assignment, in which case the defender's focus is on computing an optimal deterministic assignment. Recall that we use $c_i$ to denote the number of tasks allocated to each worker $i$. Although the space of deterministic allocations is large, we now observe several properties of optimal deterministic assignments which allow us to devise a polynomial time algorithm for this problem.

Our first several results are similar to the observations we made for randomized assignments, but require different arguments.

###### Proposition 5.

Suppose that tasks are homogeneous. For any optimal deterministic strategy there is a weakly utility-improving deterministic assignment for the requester which assigns each task to a single worker.

###### Proof.

Consider an optimal assignment $A$ and the corresponding best response by the attacker, $a$, in which a worker $i^*$ is attacked. Let a task $t$ be assigned to a set of workers $W_t$ with $|W_t| > 1$. Then there must be another task $t'$ which is unassigned. Now consider a worker $j \in W_t$. Since utility is additive, we can consider just the marginal utility of any worker to the defender and attacker. Let $T_i$ be the set of tasks assigned to a worker $i$ under $A$, and let $U_i = \sum_{t \in T_i} u_{it}$, where $u_{it}$ is the marginal utility of worker $i$ towards a task $t$. Clearly, $U_{i^*} \ge U_j$ for every worker $j$, since the attacker is playing a best response.

Suppose that we reassign $j$ from $t$ to $t'$. If $j = i^*$, the attacker will still attack $i^*$ (since the utility of $i^*$ to the attacker can only increase), and the defender is indifferent, since $i^*$'s contribution is lost in either case. If $j \ne i^*$, there are two cases: (a) the attacker still attacks $i^*$ after the change, and (b) the attacker now switches to attack $j$. Suppose the attacker still attacks $i^*$. The defender's net gain is $p_j - u_{jt} \ge 0$, since the marginal utility of $j$ on the shared task $t$ is at most $1 - \max_{l \in W_t \setminus \{j\}} p_l < 1/2 < p_j$. If, instead, the attacker now attacks $j$, the defender's net gain is $U_{i^*} - U_j \ge 0$. ∎

Consequently, the strategy space defined above (where a single worker is assigned for any task) still suffices.

Given a deterministic assignment $c$ and the attack strategy $a$, the defender's expected utility is:

$$U(c, a) = \sum_i (1 - a_i)\, c_i\, p_i. \qquad (2)$$

We now derive a similar property of optimal deterministic assignments that held for randomized assignments: there is always an optimal deterministic assignment in which we assign the most proficient workers for some .

###### Proposition 6.

In an optimal deterministic assignment $c$, suppose that $c_j > 0$ for some worker $j$. Then there must be an optimal assignment in which $c_i > 0$ for all $i < j$.

###### Proof.

Consider an optimal deterministic assignment $c$ with $c_j > 0$ and $c_i = 0$ for some $i < j$ (so that $p_i \ge p_j$), and the attacker's best response in which a worker $l$ is attacked. Now, consider moving 1 task from $j$ to $i$. Suppose that $l = j$, that is, the worker losing a task is attacked. If the change results in $i$ being attacked, the net gain to the defender is $(c_j - 1)\,p_j \ge 0$. Otherwise, the net gain is at least $p_i - p_j \ge 0$. Suppose that another worker $l \ne j$ is attacked. If $i$ is now attacked, the net gain is $c_l p_l - p_j \ge 0$ (since $c_l p_l \ge c_j p_j \ge p_j$). Otherwise, the net gain is $p_i - p_j \ge 0$. ∎

Next, we present an allocation algorithm for optimal deterministic assignment (Algorithm 2) which has complexity $O(n^2 m)$: quadratic in the number of workers and linear in the number of tasks. The intuition behind the algorithm is to consider each worker $i$ as a potential target of an attack, and then compute the best deterministic allocation subject to the constraint that $i$ is attacked (i.e., that $c_j p_j \le c_i p_i$ for all other workers $j$). Subject to this constraint, we consider all possible numbers of tasks that can be assigned to $i$, and then assign as many tasks as possible to non-attacked workers in order of their proficiency. Optimality follows from the fact that we exhaustively search possible targets and allocation policies for these, and assign as many tasks as possible to the most effective workers.
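The search just described can be sketched as follows. This is a hedged reconstruction, not the paper's exact Algorithm 2: the names and the greedy fill are our own, and floating-point floors stand in where exact arithmetic would be preferable.

```python
def optimal_deterministic(p, m):
    """Best deterministic split of m homogeneous tasks; O(n^2 m) search.

    p: worker proficiencies sorted in decreasing order (all > 0.5).
    For each candidate target i and each count ni assigned to i, every
    other worker j may take at most floor(ni * p[i] / p[j]) tasks without
    becoming a more attractive target; remaining tasks are filled
    greedily, most proficient workers first.
    """
    n = len(p)
    best_u, best_alloc = 0.0, [0] * n
    for i in range(n):              # worker the attacker ends up hitting
        for ni in range(m + 1):     # tasks sacrificed to the target
            alloc, left = [0] * n, m - ni
            alloc[i] = ni
            for j in range(n):      # greedy fill in proficiency order
                if j == i or left == 0:
                    continue
                cap = int(ni * p[i] / p[j])  # keep j no more attractive
                alloc[j] = min(cap, left)
                left -= alloc[j]
            if left > 0:
                continue            # cannot place all tasks under this cap
            u = sum(alloc[j] * p[j] for j in range(n) if j != i)
            if u > best_u:
                best_u, best_alloc = u, alloc
    return best_u, best_alloc
```

For example, with proficiencies $(0.9, 0.8)$ and three tasks, the best split gives two tasks to the weaker worker: the attacker then hits that worker, and the defender keeps the stronger worker's single task.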

### Experiments

We now experimentally consider two questions associated with our problem: 1) what is the impact of the distribution of worker proficiencies on the requester’s utility and the number of workers assigned, and 2) what is the difference between optimal randomized and deterministic assignment.
We sample worker proficiencies from two distributions: a uniform distribution over the $[0.5, 1]$ interval, and a power law distribution with proficiencies truncated to lie in the same interval.
We use 100 tasks, unless stated otherwise, and vary the number of workers between 2 and 20. (When we vary the number of workers, we generate proficiencies incrementally, adding a single worker with a randomly generated proficiency each time.)
For each experiment, we take an average of 20,000 sample runs.

In Figure 1 we compare the uniform and power law distributions in terms of the expected defender utility and the number of workers assigned any tasks in optimal randomized (Figure 1a-b) and deterministic (Figure 1c-d) assignments. Consistently, under the power law distribution of proficiencies, the defender's utility is lower, and fewer workers are assigned tasks in an optimal assignment.

Next, we experimentally compare the optimal randomized and deterministic assignments in terms of (a) defender’s utility, and (b) the number of workers assigned tasks. In this case, we only show the results for the uniform distribution over worker proficiencies.

It is, of course, well known that the optimal randomized assignment must be at least as good as deterministic (which is a special case), but the key question is by how much. As Figure 1(a) shows, the difference is quite small: always below 3%, and decreasing as we increase the number of tasks from 20 to 200. Similarly, Figure 1(b) suggests that the actual policies are not so different in nature: roughly the same number of most proficient workers are assigned to tasks in both cases.

The implication of this observation is that from the defender’s perspective it is not crucial to know precisely what the adversary observes about the assignment policy: one can safely use the optimal deterministic policy, which is near-optimal even when the attacker only observes the policy and not the actual assignment. Moreover, the deterministic assignment is much more robust: under a randomized assignment, if the attacker actually observes which worker the tasks are assigned to, the defender will receive zero utility, whereas an optimal deterministic assignment achieves near-optimal utility in either case.

## Heterogeneous tasks

It turns out that the more general problem, in which utilities are heterogeneous, is considerably more challenging than the homogeneous case. First, we show that even if the tasks’ utilities are only slightly different, it may be beneficial to assign the same task to multiple workers. Consider an environment populated with two workers and two tasks. W.l.o.g., we order the tasks by their utility, i.e., $u_1 > u_2$. Assigning one worker per task will result in an expected utility of $\min(p_1 u_1, p_2 u_2)$, since the attacker targets the worker making the larger expected contribution. Assigning both workers to $t_1$, however, results in an expected utility of at least $\min(p_1, p_2)\, u_1$, which is guaranteed to be at least as high regardless of the workers’ proficiencies, and typically strictly higher. Aside from the considerably greater complexity associated with solving problems with heterogeneous utilities alluded to in this example, there is an additional challenge of resolving disagreement among workers, particularly when an even number of them is assigned to a task. We leave this issue for future work, and for the moment tackle a restricted problem in which the defender assigns only a single worker per task.
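The two-worker, two-task example above can be checked with a few lines of arithmetic (hypothetical numbers: equal proficiency $p = 0.8$ and task utilities $u_1 = 10 > u_2 = 9$):

```python
# Hypothetical instance: equal proficiencies p, task utilities u1 > u2.
p, u1, u2 = 0.8, 10.0, 9.0

# One worker per task: the attacker removes the worker holding the more
# valuable task, leaving the defender with the less valuable one.
split = min(p * u1, p * u2)

# Both workers on the top task t1: whichever worker is attacked, the
# other still attempts t1 (duplicate labels agree, so one suffices).
duplicate = p * u1

assert duplicate > split  # duplicating the top task beats splitting
```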

### Randomized strategy

If we assume that a single worker is assigned to each task, it turns out that we can apply Algorithm 1 directly in the case of randomized assignment as well. To show this, we need to extend Proposition 2 to the heterogeneous assignment case; the remaining propositions, with the provision that one worker is assigned per task, do not rely on the fact that tasks are homogeneous and can be extended with minor modifications. To this end, let $x_{it}$ be a binary variable which is 1 iff a worker $i$ is assigned to task $t$. From our assumption, $\sum_i x_{it} = 1$ for each $t$ (since the budget constraint is $B = m$, we would assign a worker to each task). Further, define $\bar{u} = \sum_t u_t$.

###### Proposition 7.

Suppose tasks are heterogeneous and one worker is assigned to each task. Then for any distribution $D$ over assignments and attack strategy $a$, there exists a distribution $q$ over unit assignments (i.e., over workers) which results in the same utility.

###### Proof.

Fix an attacker strategy $a$, and let $X$ be the set of assignments in which a single worker is assigned to each task. For any probability distribution $D$ over $X$, the expected utility of the defender is $U_D = \sum_{x} D(x) \sum_i (1 - a_i) \sum_t x_{it}\, p_i\, u_t$. The expected utility of the defender for a distribution $q$ over unit assignments (i.e., over workers) is $U_q = \bar{u} \sum_i (1 - a_i)\, q_i\, p_i$. Define $q_i = \frac{1}{\bar{u}} \sum_x D(x) \sum_t x_{it}\, u_t$. This is a valid probability distribution, since $\sum_i q_i = \frac{1}{\bar{u}} \sum_x D(x) \sum_t u_t \sum_i x_{it} = 1$, because $\sum_i x_{it} = 1$ and $D$ is a probability distribution. It then suffices to note that $U_D = \sum_i (1 - a_i)\, p_i \sum_x D(x) \sum_t x_{it}\, u_t = \bar{u} \sum_i (1 - a_i)\, q_i\, p_i = U_q$. ∎

Thus, if we constrain the defender to use a single worker per task, we can randomize over workers, rather than full assignments, allowing us to compute a (restricted) optimal randomized assignment in linear time.

### Deterministic strategy

We now show that the defender's deterministic allocation problem, denoted Heterogeneous Tasks Deterministic Assignment (HTDA), is NP-hard even if we restrict the strategies to assign only a single worker per task.

###### Proposition 8.

HTDA is strongly NP-hard even when we assign only one worker per task.

###### Proof.

We reduce from the bin packing problem (BP), which is strongly NP-Hard. In the bin packing problem, $k$ objects of volumes $s_1, \ldots, s_k$ must be packed into a finite number of bins, each of volume $V$, in a way that minimizes the number of bins used; the decision problem asks whether the objects fit into a specified number $m$ of bins. Our transformation maps the items to tasks with utilities $u_t = s_t$, adds one additional task $t_0$ with utility $V$, and maps the containers to workers $1, \ldots, m$, adding one additional worker $m + 1$, while considering the special case where all the workers have the same proficiency (i.e., $p_i = p$ for all $i$). If we started with a YES instance of the BP problem, there is an assignment $\phi$ of items to containers respecting the volume $V$. Then if $\phi(t) = i$, we assign task $t$ to worker $i$ in HTDA; also, we assign task $t_0$ (with utility $V$) to worker $m + 1$. Each worker's contribution to the defender's utility is then at most $pV$, which is attained by worker $m + 1$, so the utility of this task assignment is $p(\sum_t s_t + V) - pV = p\sum_t s_t$. For the case where we started with a NO instance of the BP problem, assume for contradiction that the corresponding HTDA instance admits an assignment with utility at least $p\sum_t s_t$. Since the total expected contribution of all workers is $p(\sum_t s_t + V)$, and the attacker removes the largest per-worker contribution, the maximum contribution of any worker must be at most $pV$. The worker holding $t_0$ already contributes $pV$, so it holds no other tasks, and every other worker's assigned tasks have total utility (volume) at most $V$. The items are thus packed into at most $m$ bins of volume $V$, a YES instance of BP, which is a contradiction. The reduction can clearly be performed in polynomial time. ∎

We propose the following integer program for computing the optimal deterministic strategy for the defender (assuming only one worker is assigned per task):

$$\max_{x,\, v} \quad \sum_i \sum_t p_i\, u_t\, x_{it} - v \qquad (3a)$$

$$\text{s.t.} \quad \sum_i x_{it} \ge 1 \quad \forall\, t \qquad (3b)$$

$$v \ge \sum_t p_i\, u_t\, x_{it} \quad \forall\, i \qquad (3c)$$

$$\sum_i x_{it} \le 1 \quad \forall\, t \qquad (3d)$$

$$x_{it} \in \{0, 1\} \quad \forall\, i, t \qquad (3e)$$

The objective (3a) maximizes the defender's expected utility less the loss from the adversary's attack (the second term). Constraint (3b) ensures that each allocation assigns all the tasks among the different workers, and Constraint (3c) ensures that $v$ captures the contribution of the adversary's target, the worker who contributes the most to the defender's expected utility. Finally, Constraint (3d) ensures that only one worker is assigned to each task.
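For intuition (and for checking an IP implementation on small instances), the same optimization can be solved by brute force: enumerate all one-worker-per-task assignments and charge the defender for the highest-contributing worker, whom the attacker removes. The sketch below is our own illustration, not the CPLEX formulation used in the experiments.

```python
from itertools import product

def htda_brute_force(p, u):
    """Exhaustive search matching the integer program: assign exactly one
    worker per task; the attacker then removes the worker whose assigned
    tasks contribute most to the defender's expected utility.

    p: worker proficiencies; u: task utilities.  Exponential in len(u),
    so only suitable for tiny instances.
    """
    n, m = len(p), len(u)
    best_u, best_x = -1.0, None
    for x in product(range(n), repeat=m):    # x[t] = worker for task t
        contrib = [0.0] * n
        for t, i in enumerate(x):
            contrib[i] += p[i] * u[t]
        value = sum(contrib) - max(contrib)  # attacker removes the argmax
        if value > best_u:
            best_u, best_x = value, x
    return best_u, best_x
```

On a toy instance with proficiencies $(0.9, 0.8)$ and utilities $(4, 3, 2)$, the optimum gives the most valuable task to the strongest worker and sacrifices the rest to the weaker, soon-to-be-attacked one.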

### Experiments

This analysis compares the defender's expected utility under optimal randomized and deterministic strategies when we restrict each task to be assigned to only one worker. We used CPLEX version 12.51 to solve the linear and integer programs above. The simulations were run on a 3.4GHz hyperthreaded 8-core Windows machine with 16 GB RAM. We generated the utilities of different tasks using 6 different uniform distributions: {[0,0.5], [0,1], [0,5], [0,10], [0,50], [0,100]}, varied the number of workers between 2 and 15, and considered 15 tasks. Worker proficiencies were again sampled from the uniform distribution over the [0.5,1] interval. Results were averaged over 1,000 simulation runs.

Figure 3 shows the proportional difference between randomized and deterministic allocations for different numbers of workers and distributions from which task utilities are generated. As we can observe, the difference is remarkably small: in all cases, the gain from using a randomized allocation is below 0.6%, which is even smaller (by a large margin) than what we observed in the context of homogeneous tasks. However, there is an interesting contrast with the homogeneous task setting: now increasing the number of workers considerably increases the advantage of the randomized allocation, whereas when tasks are homogeneous we saw the opposite trend.

## Discussion and Conclusions

We consider the problem of assigning tasks to workers in an adversarial setting when a worker can be attacked, and their ability to successfully complete assigned tasks compromised. In our model, since the defender obtains utility only from correctly annotated tasks, the nature of the attack is less important; thus, the attacker can compromise the integrity of the labels reported by the worker, or simply prevent the worker from completing the tasks assigned to them. A key feature of our model is that the attack takes place after the tasks have been assigned to workers, but has considerable structure in that exactly one worker is attacked. Additional structure is imposed by considering two settings: one in which the attacker only observes the defender’s (possibly randomized) task allocation policy, and the other in which the actual task assignment decision is known. We show that the optimal randomized allocation problem in the former setting (in the sense of Stackelberg equilibrium commitment) can be found in linear time. However, our algorithm for optimal deterministic commitment is pseudo-polynomial. Furthermore, when tasks are heterogeneous, we show that the problem is more challenging, as it could be optimal to assign multiple workers to the same task. If we nevertheless constrain that only one worker is assigned per task, we can still compute an optimal randomized commitment in linear time, while deterministic commitment becomes strongly NP-Hard (we exhibit an integer linear program for the latter problem).

## References

- [1] D. Alistarh, M. A. Bender, S. Gilbert, and R. Guerraoui, “How to allocate tasks asynchronously,” in IEEE Annual Symposium on Foundations of Computer Science (FOCS), pp. 331–340, IEEE, 2012.
- [2] P. Stone and M. Veloso, “Task decomposition, dynamic role assignment, and low-bandwidth communication for real-time strategic teamwork,” Artificial Intelligence, vol. 110, no. 2, pp. 241–273, 1999.
- [3] Y. Liu and Y. Chen, “Sequential peer prediction: Learning to elicit effort using posted prices,” in Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pp. 607–613, 2017.
- [4] M. Alighanbari and J. P. How, “Cooperative task assignment of unmanned aerial vehicles in adversarial environments,” in American Control Conference, 2005., pp. 4661–4666, IEEE, 2005.
- [5] E. G. Jones, B. Browning, M. B. Dias, B. Argall, M. Veloso, and A. Stentz, “Dynamically formed heterogeneous robot teams performing tightly-coordinated tasks,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 570–575, IEEE, 2006.
- [6] A. Singla and A. Krause, “Truthful incentives in crowdsourcing tasks using regret minimization mechanisms,” in Proceedings of the international conference on World Wide Web, pp. 1167–1178, ACM, 2013.
- [7] L. Tran-Thanh, T. D. Huynh, A. Rosenfeld, S. D. Ramchurn, and N. R. Jennings, “Budgetfix: budget limited crowdsourcing for interdependent task allocation with quality guarantees,” in International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), pp. 477–484, 2014.
- [8] A. Ghosh, S. Kale, and P. McAfee, “Who moderates the moderators? crowdsourcing abuse detection in user-generated content,” in Proceedings of the ACM Conference on Electronic Commerce (EC), pp. 167–176, 2011.
- [9] J. Steinhardt, G. Valiant, and M. Charikar, “Avoiding imposters and delinquents: Adversarial crowdsourcing and peer prediction,” in Annual Conference on Advances in Neural Information Processing Systems (NIPS), pp. 4439–4447, 2016.
- [10] Y. Chen, H. Xu, C. Caramanis, and S. Sanghavi, “Robust matrix completion and corrupted columns,” in International Conference on Machine Learning (ICML), pp. 873–880, 2011.
- [11] H. Xu, C. Caramanis, and S. Sanghavi, “Robust pca via outlier pursuit,” in Annual Conference on Advances in Neural Information Processing Systems (NIPS), pp. 2496–2504, 2010.
- [12] J. Feng, H. Xu, S. Mannor, and S. Yan, “Robust logistic regression and classification,” in Annual Conference on Advances in Neural Information Processing Systems (NIPS), pp. 253–261, 2014.
- [13] Y. Chen, C. Caramanis, and S. Mannor, “Robust sparse regression under adversarial corruption,” in International Conference on Machine Learning (ICML), pp. 774–782, 2013.
- [14] V. Conitzer and T. Sandholm, “Computing the optimal strategy to commit to,” in Proceedings of the ACM Conference on Electronic Commerce (EC), pp. 82–90, ACM, 2006.
- [15] D. Korzhyk, V. Conitzer, and R. Parr, “Complexity of computing optimal stackelberg strategies in security resource allocation games,” in Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pp. 805–810, 2010.
- [16] M. Tambe, Security and game theory: algorithms, deployed systems, lessons learned. Cambridge University Press, 2011.
- [17] V. S. Sheng, F. Provost, and P. G. Ipeirotis, “Get another label? improving data quality and data mining using multiple, noisy labelers,” in Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 614–622, ACM, 2008.
- [18] P. Dai, D. S. Weld, et al., “Artificial intelligence for artificial artificial intelligence,” in Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pp. 1153–1159, 2011.
- [19] E. Manino, L. Tran-Thanh, and N. R. Jennings, “Efficiency of active learning for the allocation of workers on crowdsourced classification tasks,” arXiv preprint arXiv:1610.06106, 2016.
- [20] H. v. Stackelberg, “Theory of the market economy,” 1952.
- [21] D. Korzhyk, Z. Yin, C. Kiekintveld, V. Conitzer, and M. Tambe, “Stackelberg vs. nash in security games: An extended investigation of interchangeability, equivalence, and uniqueness,” Journal of Artificial Intelligence Research, vol. 41, pp. 297–327, 2011.