Game Theoretic Analysis of Tree Based Referrals for Crowd Sensing Social Systems with Passive Rewards


Participatory crowd sensing social systems rely on the participation of a large number of individuals. Since humans are strategic by nature, effective incentive mechanisms are needed to encourage participation. A popular way to recruit individuals is through referrals and passive incentives, such as the geometric incentive mechanism used by the winning team in the 2009 DARPA Network Challenge and in multi level marketing schemes. The effect of such recruitment schemes on the effort put in by recruited strategic individuals is not clear. This paper attempts to fill this gap. Given a referral tree and the direct and passive reward mechanism, we formulate a network game where agents compete to finish crowd sensing tasks. We characterize the Nash equilibrium efforts put in by the agents and derive closed form expressions for them. We discover free riding behavior among nodes who obtain large passive rewards. This work has implications for the design of effective recruitment mechanisms for crowd sourced tasks. For example, using geometric incentive mechanisms to recruit a large number of individuals may not result in proportionate effort because of free riding.



1 Introduction

The widespread presence of smart phones, wearables, GPS devices and other hand held sensors has enabled individuals to collect valuable data from their surrounding environments. Analyzing such crowd sensed data can help monitor urban and industrial pollution levels [[1]], improve our understanding of urban traffic patterns [[2]], and can even prove useful for surveillance and emergency response [[3]]. Some crowd sensing tasks, such as traffic monitoring, can be performed using opportunistic sensing that does not actively involve the user, while others, such as pollution monitoring or surveillance, may require active participation of the user [[4]]. Such participatory sensing systems, due to the human in the loop, raise unique challenges such as recruiting and incentivizing a large number of individuals to participate in the sensing activity.

Offering referral rewards for recruiting friends to sign up for completing a task is widely used in multi level marketing [[5]]. Such mechanisms generate referral trees, where a parent node represents an individual who has recruited or referred another individual, represented as a child of the referring parent node. In such schemes, apart from rewarding individuals for finishing the tasks, additional rewards, in the form of incentives, are provided to encourage individuals to recruit people from their social connections. A popular strategy is to recursively provide a proportion of the reward earned by recruited individuals to the recruiting individual. Such mechanisms, where the parent node obtains a proportion of the reward earned by the child nodes, are termed geometric incentive mechanisms [[5]], and such indirect rewards are called passive rewards.

Geometric incentive mechanisms have also been used with success in participatory crowd sourcing tasks. The team that won the 2009 DARPA Network Challenge used a geometric incentive mechanism to recruit a large number of volunteers to complete the task [[6]]. The challenge involved locating the positions of 10 large red weather balloons scattered across the continental United States in the shortest possible time [[6]]. The prize winning team developed a mechanism where an individual who finds the location of a balloon passes half of her reward to her parent (recruiter), who in turn passes half of it to her parent, and so on. Thus, an ancestor who is ℓ hops away from the node who finished the task gets a 1/2^ℓ fraction of the reward (the mechanism may restrict reward sharing to only a few levels). A mathematical analysis of this strategy revealed that, for a rational individual, recruiting the maximum number of people she knows is the best response strategy [[7]].
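The halving recursion above is easy to write down directly. The sketch below is illustrative only (the function name and the per-task reward value are ours, not the challenge's actual payout table):

```python
def geometric_shares(reward, num_levels):
    """Reward received by the finder (level 0) and each ancestor ell hops
    above her, when every level passes half of its reward up the tree."""
    return [reward / 2 ** ell for ell in range(num_levels + 1)]

# The finder keeps the full task reward; ancestors receive geometrically
# decaying shares, so the total paid out stays below twice the reward.
shares = geometric_shares(2000, 3)
```

Note that the organizer's total payout is bounded: the shares form a geometric series summing to less than twice the task reward, no matter how deep the referral chain grows.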

Clearly, for such a large search task, a large number of recruits is key. However, that is not the only important factor; the effort put in by those individuals to finish the task is equally critical. In particular, the incentive mechanism used to recruit individuals should not disincentivize them from working hard. The effect of geometric incentive mechanisms, used to recruit individuals, on the efforts they put in to finish the task is not clear. In this paper we aim to fill that gap.

By formulating a network game, we show the effect that geometric incentive mechanisms have on the efforts put in by rational agents to finish the task. We analyze the behavior of agents connected over a referral network (tree) and compute an equilibrium effort profile. Our analysis technique is game theoretic, i.e., we model each individual as a strategic agent who wants to maximize her utility in a given situation. The equilibrium we consider is a pure strategy Nash equilibrium (PSNE), and we show that it always exists and is unique for interesting model parameters. More importantly, our results show that while geometric incentive schemes are an excellent tool for recruiting individuals, they may indeed disincentivize some individuals from putting in any effort.

2 Model

2.1 The Task Arrival and Processing Model

Let the set of individuals be denoted by , connected over a hierarchical directed tree . The direction of the edges is from the root towards the leaves. We assume that if an individual performs a crowd sourced task, she gets a direct incentive, and the nodes along the path from the individual to the root receive an indirect incentive. We assume that tasks arrive at a Poisson rate , which is common knowledge, and are queued up until served. We assume that each agent competes for the task (as is the case in multi level marketing and the 2009 DARPA Network Challenge). Each node attempts to capture a task from the work queue according to a Poisson process with rate . If the total reward of a task is , we assume that a node gets a direct incentive of if she grabs the job (), with the rest shared as indirect incentives. The cost of maintaining an attempt rate is , where is a known constant. A task is assigned exclusively to the agent who attempts it first after the task arrives. We assume the agents have uniform skill in performing a task. The time to complete a task is small compared to the inter-attempt times of any agent, and hence the task completion time is ignored.

We can model the task arrival and departure in a single server-queue model where the arrival rate is and the consolidated service rate of the entire network is , because the superposition of Poisson processes is Poisson with the rate being the sum of the rates [[8]]. When agent tries to capture a job from the task queue, the probability that she grabs the job is given by . This follows from the fact that the inter-attempt times of a Poisson process are exponentially distributed, together with the memoryless property of exponential random variables [[9]]. There are two regimes in which the system could operate.
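This race between memoryless inter-attempt times is easy to verify by simulation. The sketch below is ours (helper name and the sample rates are assumptions); it estimates the probability that a given agent is the first to attempt after a task arrives:

```python
import random

def grab_probability(rates, i, trials=20000):
    """Monte Carlo estimate of the probability that agent i attempts first.
    By the memoryless property, each agent's time to her next attempt is an
    independent exponential with her own rate, so the winner of the race
    grabs the task with probability rates[i] / sum(rates)."""
    wins = 0
    for _ in range(trials):
        # one exponential sample per agent; the smallest wins the race
        samples = [random.expovariate(r) for r in rates]
        wins += samples.index(min(samples)) == i
    return wins / trials
```

With rates (1, 2, 3), for instance, the third agent should win the race roughly half the time, matching her 3/6 share of the total rate.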

Case 1: When , the queueing process is a positive recurrent Markov chain and all tasks will be served; the output process is also Poisson with rate (Burke’s Theorem [[8]]). The rate at which agent grabs tasks is therefore given by . Hence, the direct reward to is .

Case 2: When , the queueing process is a null recurrent or transient Markov chain, and with high probability (probability approaching unity) the queue will be non-empty in steady state [[9]]. In such a setting, any agent who attempts to grab a task actually gets one, and hence the direct reward of agent would be .
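The two cases can be folded into a single expression for an agent's expected direct reward per unit time: the served rate is the minimum of the arrival rate and the total attempt rate, and the agent earns her proportional share of it. A sketch under that reading (function and argument names are ours):

```python
def direct_reward_rate(arrival_rate, efforts, i, reward_per_task):
    """Expected direct reward per unit time for agent i.
    Case 1 (sum of efforts > arrival rate): tasks are the bottleneck, and
    agent i serves her proportional share of the arrivals.
    Case 2 (sum of efforts <= arrival rate): every attempt finds a waiting
    task, so agent i earns at her own attempt rate.
    Taking min(...) of the two rates covers both cases in one formula."""
    total_effort = sum(efforts)
    served_rate = min(arrival_rate, total_effort)
    return reward_per_task * served_rate * efforts[i] / total_effort
```

In the overloaded regime this reduces to the agent's own attempt rate times the per-task reward, and in the underloaded regime to her proportional share of the arrival stream, consistent with Cases 1 and 2 above.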

If a node grabs a task, she receives a ‘direct’ reward, and each node on the directed path from the root to receives an ‘indirect’ reward. Since efforts are costly (), each agent has to decide on her effort to maximize her net payoff, i.e., (direct + indirect) reward minus the cost. If there are indirect rewards, a node will reduce her effort to reduce costs. This induces a game between the nodes. The strategy of player is to choose the attempt rate . In the following section, we derive an expression for the expected utility of an agent .

2.2 Indirect Reward and the Utility Model

The task arrival process and its incentive sharing scheme induce competition among the nodes. Direct incentives are earned when an agent grabs a task.

The indirect incentives are shared with other players as specified by the reward sharing matrix , where is the fraction of the total reward received by when grabs the task. Note that . Since the network is a directed tree, if does not appear in the subtree of , then . We call the matrix monotone non-increasing if , whenever the hop-distance , and anonymous if depends only on the hop-distance between and , and not on the identities of the nodes. These are natural assumptions for referral-based reward sharing.
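For the anonymous, monotone non-increasing case with geometric decay, such a matrix can be built directly from the parent pointers of the referral tree. The sketch below is ours; the decay factor delta and the convention that the grabbing node's own entry is 1 are assumptions for illustration:

```python
def reward_sharing_matrix(parent, delta):
    """Sharing weights W[i][j]: the fraction of the reward that node i
    receives when node j grabs a task.  The ancestor d hops above j gets
    delta**d; nodes outside j's path to the root get nothing."""
    nodes = list(parent)
    W = {i: {j: 0.0 for j in nodes} for i in nodes}
    for j in nodes:
        W[j][j] = 1.0              # direct share of the grabbing node
        d, anc = 1, parent[j]
        while anc is not None:     # walk up the referral chain to the root
            W[anc][j] = delta ** d
            d, anc = d + 1, parent[anc]
    return W
```

With delta = 1/2 this reproduces the halving scheme: on a chain root → b → c, the root receives a quarter of any reward that c generates, and the weights are anonymous and monotone non-increasing in hop-distance by construction.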

While some of our results extend to general networks and general matrices, in this paper we focus on monotone non-increasing and anonymous reward sharing matrices. To ensure that the total reward is bounded above by , we assume . Let us denote by the subtree rooted at . Combining the direct and indirect rewards and costs, the expected utility of agent is given by,


where , and,


In both (2) and (3), the first term on the RHS denotes the expected utility of agent due to her own effort , the second term denotes the indirect utility coming from the efforts of all the nodes , and the third term denotes the cost to maintain the effort. The utility model is parametrized by the reward , reward sharing matrix , cost , and tree .

2.3 Effort Sharing and Effort Zones

The effort sharing function is a recursive function, computed for a given tree and reward sharing matrix in a bottom up fashion, i.e., from the leaves towards the root. Let us denote the set of directed trees by . For any tree, is initialized to for all the leaves, and then computed as follows.

Effort Sharing Function: A mapping given by the recursive formula,

. (4)
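The bottom-up evaluation of such a recursion can be sketched as a post-order traversal. The combining step below is an illustrative placeholder, not the exact formula of Eqn (4): leaves are initialized to 1, and an internal node's value shrinks with the aggregate value of its subtree, mimicking the qualitative behavior described next.

```python
def effort_sharing(children, root, combine=None):
    """Bottom-up (post-order) evaluation of a recursive node function.
    `children` maps each node to its list of children.  `combine` is a
    hypothetical rule mapping the children's values to the parent's value."""
    if combine is None:
        # placeholder rule: larger subtrees below a node yield smaller values
        combine = lambda child_vals: 1.0 / (1.0 + sum(child_vals))
    values = {}
    def post_order(v):
        child_vals = [post_order(c) for c in children.get(v, [])]
        values[v] = 1.0 if not child_vals else combine(child_vals)
        return values[v]
    post_order(root)
    return values
```

Under the placeholder rule, every leaf evaluates to 1 and nodes with larger subtrees evaluate to strictly smaller values, matching the monotone behavior of the effort sharing function discussed below.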

This function is maximized when is a leaf, and decreases for nodes with large subtrees below them (i.e., those with large passive rewards). In Sec. 3 we will see that, for interesting model parameters, the equilibrium effort levels are proportional to this function, leading to smaller effort levels for nodes with large subtrees below them. Based on this, we partition the space of parameters into four regions:

Region I: (5)
Region II: (6)
Region III: (7)
Region IV: (8)

We will analyze PSNE effort profiles for the game in the whole parameter space. We will see in Sec. 3 that the game exhibits different behavior in each of these regions (and hence the regions are divided in this way).

Effort Zones: We partition the space of agents’ effort profile into four regions:


Notice that . We define it separately to show a result that characterizes the set of Nash equilibria for region . If we consider a single server abstraction of the entire network, then the arrival rate sees a consolidated service rate of . If the service rate is smaller than, equal to, or larger than the arrival rate, then according to conditions (9)–(12), the task queue would be overloaded (Zone 1), critically loaded (Zone 2, Zone 3), or underloaded (Zone 4), respectively.

The crowd sourcing manager would like to operate in , so that over a long period of time, all incoming tasks are served and there is no accumulation of tasks.

We see from (1) that the zones correspond to different utility structures,
(i) , if ;         (ii) , if .

Figure 1: Graphical illustration of the parameter space and the Nash equilibria characterization.

In the following section, we show that in each of the regions to , the PSNE are mapped to the zones to , sometimes uniquely. The PSNE always exists, which is a non-trivial result. Fig. 1 graphically illustrates the parameter space and the main results on the characterization of the equilibria (detailed analysis is carried out in Sec. 3). It gives the intuition that since the reward to cost ratio is small in regions to , the sum of the equilibrium efforts could fall below the incoming rate . However, in region , the ratio is large enough to ensure a unique PSNE, in which the sum equilibrium effort can efficiently serve the incoming task stream.

3 Analytical Results

We compute the PSNE for the four regions to in Eqn (5)–(8) (hence a PSNE always exists). The results serve to predict the efforts put in by strategic agents connected via a network for the given reward sharing matrix .

The first result shows that is concave in . This helps in the calculation of the PSNE, as it is a utility maximization process for all agents.

Lemma 3.1.

is a concave function in .

Proof:   We show this in two steps.

Step 1: is a linear function in , which is a special case of a concave function. Now we show that is concave in . Differentiating Eq. (2),


and by taking the second derivative we get,


This is non-positive, due to the fact that and . Hence, is concave in .

Step 2: We observe that, . This is because when , and when , from Equations (1), (2) and (3). The function is the minimum of two concave functions and hence is concave.

The next result characterizes the PSNE in region .

Theorem 3.2 (PSNE in ).

If , then the PSNE effort profile is unique, and is given by .

Proof:   When , we argue that all players will put zero effort in a Nash equilibrium. Suppose ; this implies . Taking the derivative of the utility at , we get,

This contradicts being an equilibrium, since each player would be better off by decreasing her effort from . Hence . So, , which implies . But , since . Thus for all .

The intuition is that in , the reward to cost ratio is small enough that no individual gets any positive payoff by putting in any positive effort. In , the ratio reaches a critical value, where multiple PSNE exist. The agents collectively are indifferent between just meeting the incoming rate and keeping the sum effort smaller than this. Hence we get the following theorem.

Theorem 3.3 (PSNE in ).

If , then any is a PSNE.

Proof:   Case 1: Suppose . Then and we repeat the analysis of Theorem 3.2 to show that , even for . Hence it is not an equilibrium, as players will keep decreasing their efforts. Case 2: When , we have and . Thus, is constant when , for all . Hence each element of the set is a PSNE.

For the sake of convenience, we will discuss region , before . In the final region , the ratio crosses a minimum threshold, which guarantees that the equilibrium effort profile is unique and sufficient to serve the incoming task rate, i.e., . We show this using a few intermediate lemmas.

Lemma 3.4.

If , and if there exists a PSNE effort profile , it cannot lie in .

Proof:   We prove this via contradiction. Suppose a PSNE effort profile , i.e., . Hence . But, (differentiating Eq. (3)). This is because and, from Eqn (8), we know that . Hence, the utility of is increasing in at . So, is better off by increasing her effort from , which contradicts the fact that is a PSNE.

Corollary 3.5.

For , if there exists a PSNE effort profile then .

Lemma 3.6.

If the utility structure is given by , then there exists a unique PSNE effort profile given by,


where the function is defined in (4).

Proof:   If a PSNE effort profile exists in the given game, then it must satisfy, . This implies, .

Thus in order to find the Nash equilibrium we have to solve the following optimization problem for each .


Due to the concavity of , this is a convex optimization problem with linear constraints, which can be solved using the KKT theorem. At the minimizer of problem (16), there exist multipliers such that: (i) , (ii) , (iii) , (iv) .
Case 1: and in this case .
Case 2: and in this case . This leads us to (from Eq. (13)),


For a given tree and its equilibrium profile , let (variable substitution), then manipulation of (17) leads to,


We do another variable substitution and denote the RHS of Eqn (18) by ( since the LHS is ). That is,

Lemma 3.7.


Proof:   We prove this claim via induction on the levels of . Let the depth of be .

From Eqn (18), . From Cases 1 and 2 above,


Step 1: For an arbitrary node at level , from (20), . Hence, the proposition is true as for a leaf.

Now, select an arbitrary node (which is not a leaf) at level . From (20) we get, .

Step 2: Let be true for all nodes up to level . Consider an arbitrary node at level . From (20) and (4),


which concludes the induction.
To find an expression for the PSNE we now evaluate . The sum of the efforts of all the players is defined as . Hence, . Substituting for from (19) in this expression and solving for yields,


Using (19) we get,


Combining (23) and the claim above, the PSNE is given by,

The KKT equations lead to a unique solution of the optimization problem; hence the PSNE is unique.

Theorem 3.8 (PSNE in ).

If , then there exists a unique PSNE effort profile given by Eqn (15), which lies in .

Proof:   From Corollary 3.5 when , . From Lemma 3.6 with this utility function, the unique PSNE effort profile is given by Eqn (15). We use this expression to compute the sum , and substitute the value of from (8) to get .

In , the sum of the equilibrium efforts is more than . This shows that, in order to meet the goal of efficiently serving the incoming tasks, the reward to cost ratio has to be above the threshold given by (8). We also see that when the parameters are in , the equilibrium effort of agent is proportional to .

Now we discuss region . In region , the reward to cost ratio is a little larger than . However, it is still not enough to guarantee that the sum of the equilibrium effort levels crosses (which was the case in ). In particular, we show that it is necessary and sufficient for the equilibrium effort profile to lie in . To show that, we need to be nonempty.

Lemma 3.9.

If , is nonempty.

Proof:   The proof is constructive. Let us consider the following summation: . Substituting (using (7)), we get . Now, let us construct a , such that , where, and . Hence, by construction, . We see that,

So, , hence is nonempty.

Theorem 3.10 (Necessary and Sufficient Condition for PSNE in ).

If , an effort profile is a PSNE, if and only if .

Proof:   (): We show that if , then it is a PSNE. From Lemma 3.9, , so we can pick a . Since implies , the utility undergoes a transition at this point: , and . Since is continuous but not differentiable at , we look at the left and right derivatives at this point.

The first inequality holds because . Since , the equality in the third line follows by replacing in Eq. (13), and the inequality in the third line is obtained by reorganizing the expression in Eqn (11). Hence, for each agent , the utility is maximized at when the other players play the equilibrium strategy. So, is a PSNE.

(): We are given that , and is a PSNE. We first show that should necessarily be in . We show this via contradiction. Let us consider the two following cases.

Case 1: Let be the PSNE, if it exists, such that , that is, then, . But, (from (7)). Hence, the utility of is increasing in at . So, is better off by increasing her effort from , which contradicts the fact that is a PSNE.

Case 2: Let be the PSNE, if it exists, such that , that is, then, . Thus Eqn (22) is valid. Using (7) in (22) we get , which is a contradiction. Hence, if the PSNE effort profile exists, it must lie in .

Now, since , the utility will have a transition at this point. We know that is a PSNE, hence the utility must be maximized at this point for all agents , given that the other agents stick to the equilibrium strategies. Hence, it must hold that,


Condition (24) yields , which is satisfied since . Condition (25) yields , since we have shown that . Reorganizing this inequality, we get . Hence, .

(a) Referral Tree.
(b) Effort and rewards, .
(c) Effort and rewards, .
Figure 2: Referral tree and the corresponding PSNE effort and reward profiles. , , .

To summarize this section, in the model considered, pure strategy Nash equilibria always exist. The regions correspond to different reward to cost ratio, and the results predict the efforts expected from rational agents when the ratio lies in one of these regions. It gives a measure of how large needs to be in order to efficiently serve the incoming task process (to operate in ). The results suggest that nodes having network advantages due to recruiting other nodes free ride on others. Thus, a geometric incentive mechanism may disincentivize nodes that have large subtrees from putting efforts.

4 Numerical Results

We numerically compute the PSNE (using Theorem 3.8) to illustrate the effect of the referral tree on individual efforts in region , the most interesting region, where all incoming tasks are served. We consider the referral tree shown in Fig. 2a. To illustrate the analytical results, we enforce the structure in Eqn (26) on the reward sharing matrix . This is motivated by the geometric incentive mechanism used by the winning team in the DARPA Network Challenge [[6]].


Here, and . The PSNE effort profiles and expected utilities are shown in Fig. 2b and Fig. 2c. As seen in both figures, the node that has three children does not put in any effort due to passive rewards. In the case, node does not put in any effort, but when the passive reward is reduced by increasing to , node puts in some effort. Although the sum effort is higher when , the variability in effort among the nodes is also higher compared to .

Fig. 3 illustrates the effect of on the sum effort. Trees were generated randomly. To generate a tree of size , we carry out the following process recursively: given a tree of size , the new node chooses a parent uniformly at random from the existing nodes and attaches itself. We find that the sum effort saturates as the number of nodes in the tree increases. This suggests that recruiting a large number of individuals using geometric incentives may not yield a proportional increase in productivity.
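The random recursive tree construction described above is straightforward to reproduce; a sketch (function name is ours):

```python
import random

def random_recursive_tree(n):
    """Random recursive tree on nodes 1..n: each new node k attaches to a
    parent drawn uniformly at random from the nodes added before it."""
    parent = {1: None}                   # node 1 is the root
    for k in range(2, n + 1):
        parent[k] = random.randint(1, k - 1)
    return parent
```

Averaging an equilibrium quantity over many draws of such trees, as done for Fig. 3, then only requires repeating this construction and applying the closed form PSNE expression to each sampled tree.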

Figure 3: Mean sum effort vs. number of nodes for varying . Trees are generated randomly. Results obtained by averaging over 500 samples. , , .

5 Conclusions

In this paper, we performed a game theoretic analysis of the efforts put in by individuals recruited using a referral tree, a popular crowd sensing mechanism for recruiting individuals. We proposed a queuing model with Poisson task arrivals in which agents compete to finish tasks. Agents receive not only direct rewards for finishing a task, but also indirect rewards when an agent in their subtree finishes a task. We provided a complete analysis and a closed form solution for all possible system parameters. We computed the PSNE effort profile for the complete parameter space and showed that a PSNE always exists. In some regions of the parameter space it is unique, while in others it is not. Our results uncover free riding behavior among nodes who obtain large passive rewards from their subtrees. This has implications for crowd sourced tasks such as the DARPA Network Challenge. In particular, using geometric incentive mechanisms to recruit a large number of individuals may not result in proportionate effort due to free riding.


This work was a part of the course project for the “Game Theory” course which the authors took during the January–April 2012 session at IISc. The authors thank the instructor Prof. Y. Narahari and the TA Swaprava Nath for introducing them to the DARPA Network Challenge and for their comments on improving the presentation of this paper.


  1. R. K. Ganti, F. Ye, and H. Lei, “Mobile crowdsensing: current state and future challenges,” IEEE Communications Magazine, vol. 49, no. 11, pp. 32–39, 2011.
  2. B. Pan, Y. Zheng, D. Wilkie, and C. Shahabi, “Crowd sensing of traffic anomalies based on human mobility and social media,” in International Conference on Advances in Geographic Information Systems, 2013, pp. 344–353.
  3. E. T.-H. Chu, Y.-L. Chen, J.-Y. Lin, and J. W. Liu, “Crowdsourcing support system for disaster surveillance and response,” in International Symposium on Wireless Personal Multimedia Communications, 2012, pp. 21–25.
  4. P. Dutta, P. M. Aoki, N. Kumar, A. Mainwaring, C. Myers, W. Willett, and A. Woodruff, “Common sense: participatory urban sensing using a network of handheld air quality monitors,” in ACM Conference on Embedded Networked Sensor Systems, 2009, pp. 349–350.
  5. Y. Emek, R. Karidi, M. Tennenholtz, and A. Zohar, “Mechanisms for multi-level marketing,” in ACM Conference on Electronic Commerce, 2011, pp. 209–218.
  6. G. Pickard, W. Pan, I. Rahwan, M. Cebrian, R. Crane, A. Madan, and A. Pentland, “Time-critical social mobilization,” Science, vol. 334, no. 6055, pp. 509–512, 2011.
  7. G. Pickard, I. Rahwan, W. Pan, M. Cebrián, R. Crane, A. Madan, and A. Pentland, “Time-critical social mobilization: The DARPA Network Challenge winning strategy,” arXiv preprint arXiv:1008.3172, 2010.
  8. D. P. Bertsekas, R. G. Gallager, and P. Humblet, Data networks.    Prentice-Hall International New Jersey, 1992, vol. 2.
  9. R. W. Wolff, Stochastic Modeling and the Theory of Queues.    Prentice-Hall, Englewood Cliffs, NJ, 1989, vol. 96.