REAP: An Efficient Incentive Mechanism for Reconciling Aggregation Accuracy and Individual Privacy in Crowdsensing

Zhikun Zhang, Shibo He, Jiming Chen, and Junshan Zhang
Z. Zhang, S. He, and J. Chen (corresponding author) are with the State Key Laboratory of Industrial Control Technology, Zhejiang University, and the Cyber Innovation Joint Research Center, Hangzhou, China. E-mail: zhangzhk@zju.edu.cn, s18he@iipc.zju.edu.cn, cjm@zju.edu.cn. Junshan Zhang is with the School of Electrical, Computer and Energy Engineering, Arizona State University, USA. E-mail: junshan.zhang@asu.edu
Abstract

Incentive mechanisms play a critical role in privacy-aware crowdsensing. Most previous studies on the co-design of incentive mechanisms and privacy preservation assume a trustworthy fusion center (FC). Very recent work has taken steps to relax the assumption of a trustworthy FC and allows participatory users (PUs) to add well-calibrated noise to their raw sensing data before reporting them, but the focus is on the equilibrium behavior of data subjects with binary data. Making a paradigm shift, this paper aims to quantify the privacy compensation for continuous data sensing while allowing the FC to directly control PUs. There are two conflicting objectives in such a scenario: the FC desires higher-quality data in order to achieve higher aggregation accuracy, whereas PUs prefer adding larger noise for higher privacy-preserving levels (PPLs). To achieve a good balance, we design an efficient incentive mechanism to REconcile the FC's Aggregation accuracy and individual PUs' data Privacy (REAP). Specifically, we adopt the celebrated notion of differential privacy to measure PUs' PPLs and quantify their impacts on the FC's aggregation accuracy. Then, appealing to Contract Theory, we design an incentive mechanism to maximize the FC's aggregation accuracy under a given budget. The proposed incentive mechanism offers different contracts to PUs with different privacy preferences, by which the FC can directly control PUs. It can further overcome the information asymmetry, i.e., the fact that the FC typically does not know each PU's precise privacy preference. We derive closed-form solutions for the optimal contracts in both the complete information and incomplete information scenarios. Further, the results are generalized to the continuous case, where PUs' privacy preferences take values in a continuous domain. Extensive simulations are provided to validate the feasibility and advantages of our proposed incentive mechanism.

Crowdsensing, data aggregation, privacy preservation, incentive mechanism

I Introduction

The recent proliferation of portable mobile devices (e.g., smartphones, smartwatches, and tablet computers), integrated with a set of sensors (e.g., GPS, camera, and accelerometer), has spurred much interest in mobile crowdsensing [1, 2]. Due to its advantage in reducing the deployment cost of large-scale sensing applications, crowdsensing has been applied to a large variety of areas such as smart transportation, environmental monitoring, and healthcare [3, 4, 5, 6].

Typically, sensing data collected from participatory users (PUs) are aggregated by the fusion center (FC) for data analytics. To identify the public health condition, for example, the FC can collect daily exercise data from PUs and carry out data aggregation such as averages and histograms. Clearly, contributing sensing data to the FC is costly for PUs, since resources such as energy and bandwidth are consumed and data privacy may be sacrificed. Therefore, PUs would be reluctant to participate in crowdsensing without a proper incentive mechanism that compensates their cost. Most previous studies focused on resource consumption for data sensing and reporting in incentive mechanism design [7, 8, 9]. Only a few consider PUs' privacy losses [10, 11], and a common assumption made by these works is that the FC is trustworthy, so that privacy is breached only when the FC releases the aggregation results to the public.

In reality, the trustworthy FC assumption may not hold, e.g., when the FC is compromised by malicious attackers, or the communication channels between PUs and the FC are eavesdropped. Very recent work [12] takes the first step toward removing the trustworthy authority assumption and studies how to trade private data in a game-theoretic model. In [12], PUs can fully control their privacy by adding well-calibrated noise to their raw data before reporting them. However, the private data is assumed to be binary, which is often not applicable to real-world systems. Further, the focus of [12] is on examining the equilibrium behavior of data subjects, so that the data collector has no direct control over them. Different from [12], this paper aims to quantify the privacy compensation for continuous data sensing while allowing the FC to directly control PUs.

One challenge in doing this is to reconcile the following conflict: PUs prefer adding larger noise for higher privacy-preserving levels (PPLs), whereas the FC desires better-quality data for higher aggregation accuracy. Another challenge is to overcome the information asymmetry between the FC and PUs, since it is difficult (perhaps impossible) to know PUs' privacy preferences. Further, PUs' privacy preferences are typically heterogeneous, e.g., women have higher privacy preferences about their age than men, and patients are more concerned about their location privacy, incurring diverse privacy losses for different PUs under the same PPL. An efficient incentive mechanism needs to differentiate the diverse privacy losses of PUs and provide appropriate rewards that capture their contributions to the FC, without knowing individual PUs' precise privacy preferences.

To tackle these challenges, we propose REAP (the name comes from REconciling Aggregation accuracy and individual Privacy), an efficient incentive mechanism based on Contract Theory. Using Contract Theory, the FC can add a form of enforcement to incentivize PUs by signing specific contracts with them, so that the FC has direct control over PUs. Different contracts should be designed for different types of PUs, each of which specifies one PPL and the corresponding payment that a PU will receive if he/she sacrifices the given PPL. A key concern here is to design a proper menu of contracts satisfying incentive compatibility, such that all PUs can maximize their utilities only when they truthfully reveal their privacy preferences.

Specifically, we adopt differential privacy to quantify individual privacy and θ-accuracy to measure the FC's aggregation accuracy. Then, the quantitative relationship between individual PUs' PPLs and the FC's aggregation accuracy is derived. Since the contribution of each PU to the aggregation accuracy can be quantified, we design a menu of optimal contracts that maximizes the FC's aggregation accuracy under a given budget. We first consider the complete information scenario as a benchmark, where the FC knows the precise type of each PU. This benchmark yields the best aggregation accuracy that the FC can achieve. We then consider the optimal contract design in the incomplete information scenario, where the FC only knows the probability distribution of PUs' types. Closed-form solutions for both scenarios are derived. Further, we generalize our results to the continuous case, where PUs' privacy preferences can take values in a continuous domain. In this case, the optimization problem turns out to be a functional extreme value problem that can be solved by an optimal control based approach.

The contributions of this paper are threefold:

  1. We propose REAP, a Contract Theory based incentive mechanism, to compensate PUs’ data privacy losses and hence resolve the information asymmetry issues between PUs and FC.

  2. We adopt proper measures to quantify both individual PUs’ PPLs and FC’s aggregation accuracy, by which the quantitative relationship between individual privacy and aggregation accuracy is derived.

  3. Closed-form solutions are derived for both complete information and incomplete information scenarios. We also generalize our results to the case of continuous privacy preferences.

The rest of this paper is organized as follows. Related work is discussed in Section II. Section III presents an overview of the crowdsensing system and quantifies PUs' PPLs as well as their impacts on the FC's aggregation accuracy. In Section IV, we leverage Contract Theory to address the information asymmetry problem, and we generalize our results to the continuous case in Section V. Simulation results are presented in Section VI to validate our theoretical results. Section VII concludes this paper.

II Related Work

Recently, various incentive mechanisms have been proposed to incentivize users' participation in mobile crowdsensing systems. Most of these mechanisms are based on either auctions [13, 10, 11, 14, 15, 16] or other game-theoretic models [17, 18, 19, 20, 21], which aim to achieve different objectives. Specifically, in [20, 14], the authors aim to maximize the social welfare. The objective of [17, 18] is to maximize the profit of the platform, and [16, 21] design mechanisms to minimize the FC's payment. The basic requirement of these mechanisms is to guarantee that all users' costs are compensated, at least in expectation. However, most previous studies only compensate users' resource consumption for sensing and reporting data; their privacy loss is not remunerated explicitly.

Interestingly, Ghosh et al. took the first step in viewing privacy as a good, aiming to compensate users' privacy loss in their seminal work [22] in the data mining field. In [22], data owners bid their privacy loss based on their privacy preferences, and the system chooses a set of users and the corresponding PPLs to achieve the best statistical accuracy under a given budget. Based on this work, a few improved mechanisms [23, 24, 25] have been proposed, especially considering the correlation between privacy preference and private data. Most of these mechanisms require a trustworthy authority, which is not available in most cases. Recently, Wang et al. [12] removed the trustworthy authority assumption in the data mining field and proposed a game-theoretic approach to compensate users' privacy loss. However, the private data considered in [12] is binary, which is not widely applicable to mobile crowdsensing systems. Further, [12] does not consider the information asymmetry problem between the FC and PUs. Thus motivated, in this paper we consider a more realistic crowdsensing scenario where the FC is untrusted, and allow PUs to take full control of their private data, which takes continuous values. Moreover, a novel incentive mechanism based on Contract Theory is proposed to handle the information asymmetry problem.

Another line of related work is privacy-preserving mechanism design in mobile crowdsensing systems. These works do not take users' data privacy into consideration; instead, they consider the privacy issues of the mechanism itself. For example, [26, 27] aimed to preserve users' anonymity within the incentive mechanism, and [28] aimed to preserve users' bid privacy.

III System Model

In this section, we first present the system overview. Then, we quantify PUs’ PPLs and their impacts on FC’s aggregation accuracy.

III-A System Overview

The mobile crowdsensing system considered in this paper consists of an untrusted FC, a task agent, and a set of PUs, as shown in Fig. 1. Different from most previous works on privacy-preserving data aggregation in crowdsensing, we remove the trustworthy FC assumption, since the FC may be compromised by malicious attackers, or the communication channels between PUs and the FC may be eavesdropped.

The FC aims to collect a set of sensing data from PUs, denoted as d = {d_1, d_2, …, d_N}, where each d_i is a real number. It then carries out aggregation operations, such as average, max/min, or histogram, to extract valuable patterns. For ease of exposition, we investigate average aggregation (we leave the discussion of other kinds of data aggregation for future work), i.e., s = (1/N) Σ_{i=1}^{N} d_i, which constitutes a large portion of currently deployed crowdsensing systems. For example, map applications such as Baidu Map collect GPS data (e.g., location and speed) from mobile vehicles and conduct average aggregation to monitor the real-time traffic condition. In a healthcare application, the FC may collect PUs' daily exercise data and conduct average aggregation to monitor the public health condition.

Clearly, the sensing data may contain sensitive information about PUs. Abuse of this sensitive information may breach PUs' privacy. In the healthcare application, for instance, the exercise data allow adversaries to infer an individual PU's health condition or living habits. Therefore, PUs may not be willing to contribute their raw sensing data due to privacy concerns. To dispel PUs' worries about privacy, we propose to allow PUs to add well-calibrated noise to their raw sensing data before reporting them to the FC; their PPLs can then be strictly quantified by differential privacy, as described in Section III-B.

Fig. 1: Framework of REAP.

However, there are two conflicting objectives in this setting: the FC desires higher-quality data in order to achieve higher aggregation accuracy, whereas PUs prefer adding larger noise for higher PPLs (these conflicts are quantified in Section III-C). In this paper, we aim to design an efficient mechanism to reconcile these conflicts. The framework of the proposed crowdsensing system is shown in Fig. 1, and the workflow is as follows:

  • Firstly, the task agent announces a sensing task to the FC.

  • Incentive Mechanism. Then, the FC designs a menu of contract items (each specifying a privacy-payment pair) that maximizes the aggregation accuracy under a given budget, and broadcasts them to all PUs. Each PU can choose to sign any one of the contract items that maximizes his/her own utility. Once a contract is signed, the PU must report a privacy-preserving version of his/her sensing data with the PPL specified in the contract. In return, he/she receives the corresponding payment.

  • Data Aggregation. Next, after receiving the privacy-preserving sensing data from PUs, the FC conducts average aggregation on these data.

  • Finally, the FC returns the aggregated data to the task agent.

III-B Differentially Private Data Reporting

In this subsection, we adopt the celebrated notion of differential privacy [29] to quantify individual PU’s PPL and privacy loss, and then define PUs’ utility function.

Informally, differential privacy guarantees that, after receiving the observation, an attacker cannot distinguish between neighboring inputs with high confidence. Here, the neighboring relationship is an important concept in differential privacy. In this paper, we adopt the following neighboring relationship for continuous values:

Definition 1 (ΔD-adjacency).

Two continuous data d_i and d_i′ are ΔD_i-adjacent if |d_i − d_i′| ≤ ΔD_i, where ΔD_i is the range of PU i's sensing data d_i.

Then, we can give the formal definition of differential privacy.

Definition 2 (ε-differential privacy [30]).

A randomized algorithm M achieves ε_i-differential privacy if, for all pairs of ΔD_i-adjacent data d_i and d_i′, and any observation O,

Pr[M(d_i) = O] ≤ e^{ε_i} · Pr[M(d_i′) = O]. (1)

Intuitively, PU i's accurate sensing data can be either d_i or d_i′ from an attacker's view. After adding noise, both d_i and d_i′ can result in the observation O with certain probability. Thus, an attacker cannot infer PU i's accurate sensing data with high confidence when he observes O. Clearly, a smaller ε_i means a higher PPL, since it is harder to distinguish d_i and d_i′ when observing O.

The Laplacian mechanism [31] is the first and probably the most widely used mechanism for achieving differential privacy. It satisfies ε-differential privacy by calibrating the Laplacian noise parameter based on the following lemma:

Lemma 1.

If the Laplacian mechanism is used, i.e., the reported data is d̂_i = d_i + n_i with n_i ∼ Lap(λ_i), we can achieve ε_i-differential privacy if λ_i = ΔD_i/ε_i.
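As an illustration, the reporting step of Lemma 1 can be sketched in a few lines. This is a minimal sketch, assuming scalar sensing data and inverse-CDF sampling for the Laplace distribution; the function names are illustrative, not from the paper:

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling of a zero-mean Laplace variate with the given scale.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def private_report(d, data_range, eps):
    """Report sensing data d with eps-differential privacy under
    data_range-adjacency: per Lemma 1, the Laplace scale is data_range / eps."""
    return d + laplace_noise(data_range / eps)
```

A smaller eps yields a larger noise scale, i.e., a higher PPL at the cost of a noisier report.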

Based on differential privacy, we can also define PUs' privacy loss. According to the utility-theoretic characterization of differential privacy [32], the ratio between the expected utilities under two adjacent data is bounded by e^{ε_i} based on (1). Following [22], the privacy loss can be modeled as the difference between the utility with the true data and the utility with the perturbed data, which is approximately a linear function of ε_i when it is small, since e^{ε_i} ≈ 1 + ε_i for small values of ε_i. We can then define PUs' utility in Definition 3.
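The linearization invoked here can be written out explicitly; a short derivation sketch of why the loss is modeled as linear in ε_i:

```latex
% First-order expansion of the differential-privacy bound for small eps_i:
e^{\varepsilon_i} = 1 + \varepsilon_i + \tfrac{1}{2}\varepsilon_i^2 + \dots \approx 1 + \varepsilon_i .
```

Hence the multiplicative bound e^{ε_i} on the utility ratio translates, to first order, into an additive loss proportional to ε_i; scaling by the privacy preference c_i gives the linear loss term used in Definition 3.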

Definition 3 (PUs’ utility).

Any PU i's utility is defined as

u_i = π_i − c_i ε_i, (2)

where π_i is PU i's reward when he/she contributes sensing data to the FC, and c_i is the privacy preference of PU i, which indicates how much the PU cares about his/her privacy. Clearly, different PUs may have different privacy preferences [33]; for instance, patients in a hospital have a higher privacy preference regarding their location than others. Naturally, individual PUs' privacy preferences are private information unknown to the FC; in other words, there exists information asymmetry between the FC and PUs.

Notice that, to ease the presentation, we only consider the cost incurred by PUs' privacy loss in this paper; nevertheless, the results can be extended to incorporate the sensing cost. For instance, similar to [11], denoting PU i's sensing cost by s_i, we can rewrite PU i's utility as u_i = π_i − c_i ε_i − s_i and define π̃_i = π_i − s_i to incorporate the sensing cost in the payment.

III-C Privacy versus Accuracy

In this subsection, we illustrate the conflict between the FC's aggregation accuracy and PUs' PPLs by deriving their quantitative relationship.

To quantify the aggregation accuracy of the privacy-preserving sensing data, we adopt the following accuracy definition.

Definition 4 (θ-accuracy).

The aggregation of the privacy-preserving sensing data achieves θ-accuracy under confidence level 1 − β if

Pr[|ŝ − s| ≥ θ] ≤ β,

where s is the aggregation result of the accurate sensing data and ŝ is that of the privacy-preserving sensing data.

Intuitively, this definition indicates that the aggregation error is larger than θ with probability at most β. From an estimation perspective, θ stands for the confidence interval and 1 − β for the confidence level. Clearly, for a given confidence level, a smaller confidence interval means better aggregation accuracy. Thus, we can leverage the confidence interval under a certain confidence level to measure the aggregation accuracy, where a smaller θ means better aggregation accuracy.

Then, we derive the quantitative relationship between individual PUs' privacy and the FC's aggregation accuracy in the following lemma:

Lemma 2.

For a given confidence level 1 − β, the aggregation accuracy of the privacy-preserving sensing data can be found as

θ = (ΔD/N) √( (2/β) Σ_{i=1}^{N} 1/ε_i² ), (3)

where ε_i is PU i's PPL, N is the number of PUs, and ΔD is the range of the PUs' sensing data (notice that the range of the sensing data should be the same for all PUs in a specific crowdsensing application; for example, the heart rate of a normal adult always lies within a fixed range of bpm values, so all the ΔD_i take the same value ΔD). The proof can be found in Appendix A.

Recall that a smaller ε_i means a higher PPL and a smaller θ means higher aggregation accuracy. By examining Formula (3), we can see that the FC and PUs have conflicting objectives: the FC wants PUs to adopt lower PPLs (larger ε_i), which increases the FC's aggregation accuracy, while PUs want to adopt higher PPLs (smaller ε_i) to better preserve their privacy, which decreases the FC's aggregation accuracy. In the next section, we resolve this conflict through Contract Theory.
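The privacy-accuracy trade-off of Lemma 2 can also be checked empirically without the closed form. A Monte Carlo sketch, assuming Laplacian reporting as in Lemma 1 (function names are illustrative):

```python
import math
import random

def _lap(scale):
    # Zero-mean Laplace(scale) sample via inverse-CDF transform.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def accuracy_theta(eps_list, data_range=1.0, beta=0.05, trials=20000):
    """Monte Carlo estimate of theta such that the error of the average
    aggregation exceeds theta with probability at most beta, when PU i
    reports with Laplace noise of scale data_range / eps_list[i]."""
    n = len(eps_list)
    errs = sorted(
        abs(sum(_lap(data_range / e) for e in eps_list)) / n
        for _ in range(trials)
    )
    return errs[int((1.0 - beta) * trials)]
```

Doubling every ε_i halves the empirical θ, matching the inverse dependence of the aggregation error on the PPL parameters.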

IV Incentive Mechanism Design: A Contract Theoretic Approach

So far, we have quantified the conflict between PUs' privacy and the FC's aggregation accuracy. In this section, we introduce the contract mechanism to resolve the conflicting objectives of PUs and the FC.

IV-A Contract Formulation

Contract Theory generally studies how economic decision-makers construct contractual arrangements in the presence of information asymmetry: the FC typically does not know each PU's privacy preference c_i, and it aims to design a menu of contracts to incentivize PUs to participate in crowdsensing so as to maximize the aggregation accuracy. To facilitate later discussion, we classify PUs into different types based on their privacy preferences, i.e., the privacy preference of a type-i PU is c_i.

In this section, we consider the case where PUs have a finite number of privacy preference types, say K types {c_1, c_2, …, c_K}, and provide some insights into the contract design. We leave the discussion of the case where the privacy preference takes continuous values to the next section. To facilitate the analysis, we sort PUs' types in ascending order, i.e., c_1 < c_2 < … < c_K, so that a higher type of PU has a higher privacy preference. Using Contract Theory, the FC designs a contract that specifies the relationship between a PU's PPL and the corresponding payment that the PU receives for sacrificing that PPL. Specifically, a contract is a set of privacy-payment pairs (ε_i, π_i), called contract items. Each PU chooses to sign one contract item and reports ε_i-differentially private sensing data in exchange for the payment π_i. Once the contract is signed, a PU must report a privacy-preserving version of his/her sensing data, and the FC must reward him/her according to the item.

Each type of PU chooses the contract item that maximizes his/her utility in (2). The FC aims to optimize the contract to maximize the aggregation accuracy, i.e., to minimize θ in (3). Since (ΔD/N)√(2/β) is a positive constant, minimizing θ is equivalent to minimizing Σ_i 1/ε_i².

In the following subsections, we consider the optimal contract design under two information scenarios.

  • Complete information: The complete information scenario serves as a benchmark, where the FC knows each PU's precise type and can offer a specific contract to each PU directly. Clearly, the FC achieves the best aggregation accuracy in this scenario, which serves as an upper bound on the FC's achievable aggregation accuracy in any information scenario.

  • Incomplete information: In the incomplete information scenario, the FC does not know each PU's precise type but knows the distribution of the types, e.g., that type-i comprises N_i PUs. In this scenario, the FC should design and broadcast a menu of optimal contracts to all PUs, and each PU chooses the contract item that maximizes his/her utility.

IV-B Optimal Contract Design under Complete Information

In the complete information scenario, the FC knows each PU's precise type. We leverage the optimal aggregation accuracy achieved in this case as a benchmark to evaluate the performance of the proposed contract in the incomplete information scenario. As the FC knows each PU's type, it can offer a specific contract to each PU directly. In this scenario, the FC only needs to guarantee that each PU's utility is nonnegative, so that PUs are willing to contribute their sensing data. In Contract Theory, this is called the individual rationality constraint.

Definition 5 (Individual Rationality).

A menu of contracts satisfies the Individual Rationality (IR) constraint if it provides nonnegative utility to all PUs, i.e.,

π_i − c_i ε_i ≥ 0, ∀i ∈ {1, 2, …, K}. (4)

Thus, we can design the optimal contract under complete information by solving the following optimization problem:

Problem 1.

min_{(ε_i, π_i)} Σ_{i=1}^{K} N_i/ε_i²

s.t. π_i − c_i ε_i ≥ 0, ∀i ∈ {1, 2, …, K}, (5)

Σ_{i=1}^{K} N_i π_i ≤ B, (6)

where B is the total budget that the FC possesses.

Then, we provide the solution to this optimization problem.

Lemma 3.

The inequalities in (5) and (6) hold with equality simultaneously at the optimum, i.e., π_i = c_i ε_i and Σ_{i=1}^{K} N_i π_i = B.

It is easy to show that both (5) and (6) hold with equality by contradiction. If there exists an optimal contract that satisfies π_i > c_i ε_i for some i, then we can always find a larger ε_i that achieves better aggregation accuracy while keeping (5) satisfied, until equality holds. Similarly, if there exists an optimal contract that satisfies Σ_i N_i π_i < B, we can always find larger payments π_i, which allow larger ε_i, to achieve better aggregation accuracy until equality holds, which leads to the correctness of this lemma.

Lemma 3 shows that both the IR constraints and the budget constraint are tight at the optimal solution to Problem 1, which indicates that the FC can provide zero utility to each type-i PU with π_i = c_i ε_i and spend the entire budget. Therefore, Problem 1 can be reduced to the following problem:

Problem 2.

min_{(ε_i)} Σ_{i=1}^{K} N_i/ε_i² (7)

s.t. Σ_{i=1}^{K} N_i c_i ε_i = B. (8)

By solving Problem 2, we have the following theorem.

Theorem 4.

In the complete information scenario, the optimal contract is given by

ε_i = (B / Σ_{j=1}^{K} N_j c_j^{2/3}) · c_i^{−1/3}, (9)

π_i = c_i ε_i = (B / Σ_{j=1}^{K} N_j c_j^{2/3}) · c_i^{2/3}. (10)
The proof can be found in Appendix B. By looking into the parameters in the optimal contract provided in Theorem 4, we have the following observation.

Observation 1.

Recall that a smaller ε_i means a higher PPL. Theorem 4 shows that the ε_i assigned to a type-i PU increases in the budget B and decreases in the privacy preference c_i, which conforms to our intuition: more budget can incentivize PUs to choose lower PPLs, achieving higher aggregation accuracy, while the FC tends to buy less privacy from PUs with higher privacy preferences to reduce the payment.
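Under the reduction of Lemma 3 (all IR constraints and the budget tight, so π_i = c_i ε_i and Σ_i N_i c_i ε_i = B), the optimum follows from Lagrangian stationarity, which gives ε_i ∝ c_i^{−1/3}. A numerical sketch of this computation — a reconstruction under the assumed objective Σ_i N_i/ε_i², with illustrative function names:

```python
def optimal_contract_complete(types, counts, budget):
    """types: privacy preferences c_1 < ... < c_K; counts: N_i PUs per type.
    Minimizes sum_i N_i / eps_i**2 subject to sum_i N_i * c_i * eps_i = budget.
    Setting the Lagrangian gradient to zero yields eps_i proportional to
    c_i**(-1/3); the budget pins down the constant. Payments: pi_i = c_i * eps_i."""
    norm = sum(n * c ** (2.0 / 3.0) for c, n in zip(types, counts))
    eps = [budget * c ** (-1.0 / 3.0) / norm for c in types]
    pay = [c * e for c, e in zip(types, eps)]
    return eps, pay
```

Consistent with Observation 1, each ε_i grows with the budget and shrinks with c_i: more budget buys lower PPLs, and high-preference PUs retain higher PPLs.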

IV-C Optimal Contract Design under Incomplete Information

In the incomplete information scenario, the FC does not know each PU's precise type, while the distribution of PUs' types is assumed to be known, i.e., type-i comprises N_i PUs. In practice, the distribution of PUs' types can be obtained through questionnaire surveys or analysis of PUs' historical behavior [34, 35]. Clearly, the FC should design an optimal contract item for each type of PU to achieve the best accuracy, but due to the lack of knowledge about each PU's precise type, the FC can only broadcast all contract items to all PUs. However, if choosing a contract item designed for another type brings a higher utility, some selfish PUs may pretend to be of that type. To encourage all PUs to truthfully reveal their types, the optimal contract should guarantee that choosing the item corresponding to one's own type always achieves the highest utility. Formally, we define this requirement as the incentive compatibility constraint.

Definition 6 (Incentive Compatibility).

A menu of contracts satisfies the Incentive Compatibility (IC) constraint if the contract item designed for type-i PUs brings them the highest utility, i.e.,

π_i − c_i ε_i ≥ π_j − c_i ε_j, ∀i, j ∈ {1, 2, …, K}. (11)

Apart from the incentive compatibility constraint, the contract under incomplete information should also satisfy the individual rationality constraint in Definition 5. Thus, we can design the optimal contract under incomplete information by solving the following optimization problem:

Problem 3.

min_{(ε_i, π_i)} Σ_{i=1}^{K} N_i/ε_i²

s.t. Σ_{i=1}^{K} N_i π_i ≤ B, (12)

π_i − c_i ε_i ≥ 0, ∀i ∈ {1, 2, …, K}, (13)

π_i − c_i ε_i ≥ π_j − c_i ε_j, ∀i, j ∈ {1, 2, …, K}. (14)

In Problem 3, there are K IR constraints and K(K − 1) IC constraints, which makes the optimization problem difficult to solve. Next, we show that these constraints can be reduced to a set of fewer equivalent constraints through the following lemmas.

Lemma 5.

The K IR constraints can be reduced to the following single constraint:

π_K − c_K ε_K = 0. (15)
Proof.

Notice that we have sorted PUs' types in ascending order, i.e., c_1 < c_2 < … < c_K. Based on the IC constraints, we have

π_i − c_i ε_i ≥ π_K − c_i ε_K ≥ π_K − c_K ε_K.

Thus, if the IR constraint of type-K is satisfied, i.e., π_K − c_K ε_K ≥ 0, it is automatically satisfied for all other types. Therefore, we can keep the last IR constraint and remove the others. Moreover, if there exists an optimal contract that satisfies π_K − c_K ε_K > 0, we can always find a larger ε_K that achieves better aggregation accuracy, until π_K − c_K ε_K = 0, which ends the proof. ∎

Lemma 5 shows that only the highest type of PU receives zero utility, while lower types receive positive utilities that decrease in their types. The reason is that, since the FC does not know each PU's type, it needs to provide incentives in the form of positive utilities to attract PUs to reveal their types truthfully. This is called the information loss compared to the complete information scenario.

Lemma 6 (Monotonic Property).

If c_i > c_j, then ε_i ≤ ε_j holds.

Proof.

Based on the IC constraints, we have

π_i − c_i ε_i ≥ π_j − c_i ε_j and π_j − c_j ε_j ≥ π_i − c_j ε_i.

Adding these two inequalities results in (c_i − c_j)(ε_j − ε_i) ≥ 0. Thus, if c_i > c_j, then ε_i ≤ ε_j for all i and j, which leads to the correctness of this lemma. ∎

Intuitively, Lemma 6 shows that a PU of a higher type should be assigned a smaller ε_i, since his/her unit privacy cost is higher and the FC would need to compensate this PU more when the contributions to the aggregation accuracy are the same. Further, this lemma can be leveraged to prove the correctness of Lemma 7.

Lemma 7.

The IC constraints can be reduced to the following K − 1 constraints:

π_i − c_i ε_i = π_{i+1} − c_i ε_{i+1}, i = 1, 2, …, K − 1. (16)

The proof can be found in Appendix C. Lemma 7 ensures that if the contract item designed for type-i PUs brings them the same utility as the contract item designed for type-(i+1) PUs, then all the IC constraints for type-i PUs are satisfied, which means that type-i PUs will truthfully select the contract item designed for their own type.
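The IR and IC conditions of Definitions 5 and 6 can be verified mechanically for any candidate menu. A sketch assuming the linear utility u_i = π_i − c_i ε_i from (2); function names are illustrative:

```python
def utility(payment, c, eps):
    # Type-c PU's utility from a contract item (eps, payment), per (2).
    return payment - c * eps

def is_feasible(contract, types):
    """contract: list of (eps_i, payment_i) items, item i designed for type
    types[i]. Returns True iff every type gets nonnegative utility from its
    own item (IR) and no type strictly prefers another item (IC)."""
    for i, c in enumerate(types):
        own = utility(contract[i][1], c, contract[i][0])
        if own < 0:  # IR violated
            return False
        for eps_j, pay_j in contract:  # IC: own item must be (weakly) best
            if utility(pay_j, c, eps_j) > own + 1e-12:
                return False
    return True
```

In any menu passing this check with types sorted in ascending order, the monotonicity of Lemma 6 (ε_i nonincreasing in the type) holds automatically.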

Based on Lemma 5 and Lemma 7, we can reduce Problem 3 to the following problem:

Problem 4.

min_{(ε_i, π_i)} Σ_{i=1}^{K} N_i/ε_i²

s.t. Σ_{i=1}^{K} N_i π_i = B, (17)

π_K − c_K ε_K = 0, (18)

π_i − c_i ε_i = π_{i+1} − c_i ε_{i+1}, i = 1, 2, …, K − 1. (19)

By solving Problem 4, we can calculate the optimal contract as the following theorem.

Theorem 8.

In the incomplete information scenario, the optimal contract is given by

where

(20)
(21)
(22)

The proof of Theorem 8 is given in Appendix D. Next, we compare the FC's aggregation accuracy under the incomplete and complete information scenarios. In Fig. 2, we show the ratio of the FC's aggregation accuracy under incomplete information to that under complete information when there are three types; different parameter settings correspond to the lines from bottom to top. The ratio is a function of the PUs' realization over the three types and is always larger than or equal to 1, since the FC achieves the best aggregation accuracy under complete information. By analyzing Fig. 2, we have the following observation.

Fig. 2: The ratio of the FC's aggregation accuracy under incomplete information to that under complete information, as a function of the PUs' realization over the three types.
Observation 2.

Compared with the complete information scenario, the FC achieves worse aggregation accuracy, i.e., a larger θ, under incomplete information. The gap between the two scenarios is minimized when all PUs belong to the highest type, i.e., type-K. For a fixed budget, the gap increases as the number of type-K PUs decreases, until it reaches a small value.

The ratio reaches 1 when all PUs belong to the highest type, since in this situation all PUs obtain zero utility, as in the complete information scenario. When the number of type-K PUs decreases, the information loss increases, which leads to an increase of the gap. However, when the number of type-K PUs reaches a small value, the effect of the information loss diminishes relative to the complete information scenario, so that the ratio increases.

IV-D Discussions on Practical Implementation

By solving the above optimization problems, we can provide a menu of optimal contracts to incentivize the participation of all types of PUs. However, if PUs' actions cannot be monitored by the FC, they may deviate from the contract in practice; e.g., a selfish PU may add noise with a higher PPL than that signed in the contract to achieve a higher utility. To ensure that all PUs generate noise strictly with the PPLs signed in the contract, we need a trusted app installed on the mobile device [36]. Once the contract is signed, the noise level is controlled by the trusted app, whose PPL can be monitored by the FC.

V Generalization to the Continuous Case

In this section, we will analyze the optimal contract design when PUs’ types are continuous.

We assume that PUs' types lie in a continuous interval with a known probability density function. Similar to the analysis in the discrete case, the FC can design the optimal contracts by solving the following optimization problem:

Problem 5.
(23)
(24)
(25)

where (23) is the budget constraint, (24) represents the IR constraints, and (25) represents the IC constraints.

Notice that the numbers of IR and IC constraints in (24) and (25) are infinite, since the type is a continuous value. These infinite constraints make the optimization problem difficult to solve. As before, we first reduce the IR and IC constraints via the following two lemmas.

Lemma 9.

The infinite IR constraints can be reduced to the following single constraint:

(26)
Proof.

We can derive the following inequalities based on the IC constraints,

Thus, if the IR constraint is satisfied for the highest type, it is satisfied for all types, and we can reduce the IR constraints to (26). Moreover, if there exists an optimal contract whose highest-type utility is strictly positive, we can always find a larger ε to achieve better aggregation accuracy until that utility reaches zero, which leads to the correctness of this lemma. ∎

Lemma 10.

The infinite IC constraints can be reduced to the following two constraints,

(27)
(28)
Proof.

Based on (25), we can derive the following two local conditions for type- PUs,

(29)
(30)

Since (29) and (30) hold for all , we have

(31)
(32)

By differentiating (31), we can simplify (32) as

(33)

Then, we prove that (31) and (33) hold globally. By integrating (31) from to , we have

(34)

Rearranging (34), we have

Since is non-increasing, we have . Thus, we can conclude that for all , which indicates that (31) and (33) hold globally. ∎

Similar to the analysis in the discrete case, the budget constraint (23) can be taken with equality, i.e.,

(35)

Then, we can transform Problem 5 to the following problem:

Problem 6.

Notice that Problem 6 is a functional extreme value problem; thus we can apply the optimal control method to solve it.

Let be the control variable, and let be the state variable. Then, we have

where the second equality is due to (28).

To deal with the budget constraint (23), we can define a new state variable

(36)

Based on (23), we can derive the following transversality condition,

(37)

Thus, the Hamiltonian of the optimal control problem is

where and are the co-state variables.

According to the Euler-Lagrange equation for the optimal control problem, we have the following conditions,

Thus, we can calculate the co-state variables as,

where and are constants that can be calculated from the transversality conditions (37) and (26).

Then, the optimal contract is given by

Vi Simulation Studies

In this section, we first validate the feasibility of the proposed contracts, and then analyze the impact of different system parameters on the aggregation accuracy.

Parameter                     Value
Number of PUs ()              
Privacy preference ()         
Number of PUs’ types ()       Feasibility: 20;  Performance: 
Budget constraint ()          Feasibility: ;  Performance: 
TABLE I: Simulation settings

The simulation settings are shown in Table I. We assume there are PUs, and their privacy preferences range from to in both simulations. For simplicity, we consider a uniform distribution of PUs’ privacy preferences. To illustrate the feasibility of the proposed contract, we set the number of PUs’ types and the budget constraint to and , respectively. To evaluate the impact of parameters and on the aggregation accuracy, we set their value ranges to and , respectively.
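The population setup above can be sketched as follows: privacy preferences are drawn uniformly over an interval and quantized into discrete types. This is an illustrative reconstruction only; the function name `draw_pu_types` and the concrete numbers in the usage line are placeholders, since the original parameter values are elided in this copy.

```python
import numpy as np

def draw_pu_types(num_pus, k, theta_min, theta_max, seed=0):
    """Draw each PU's privacy preference uniformly from
    [theta_min, theta_max] and quantize it into k discrete types
    (equal-width bins), mirroring the simulation setup of uniform
    preferences and K user types."""
    rng = np.random.default_rng(seed)
    prefs = rng.uniform(theta_min, theta_max, size=num_pus)
    # k - 1 interior bin edges over the preference range.
    edges = np.linspace(theta_min, theta_max, k + 1)
    types = np.digitize(prefs, edges[1:-1])  # values in 0 .. k-1
    return prefs, types

# Placeholder values for illustration only.
prefs, types = draw_pu_types(num_pus=1000, k=20, theta_min=0.1, theta_max=1.0)
```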

Vi-a Contract Feasibility

In this subsection, we illustrate that the proposed optimal contracts satisfy both the monotonic property and incentive compatibility property.

Fig. 3: Contract monotonicity.
Fig. 4: Contract incentive compatibility.
Fig. 5: Aggregation accuracy Vs. budget constraint.
Fig. 6: Aggregation accuracy Vs. number of user types.

Fig. 3 shows that decreases as PUs’ types increase. Since a smaller means a higher PPL, Fig. 3 indicates that PUs of higher types tend to choose higher PPLs, which validates the monotonic property discussed in Lemma 6. Besides, the result accords with our intuition that FC chooses to buy less privacy from PUs with higher privacy preferences to reduce the payment. On the other hand, we find that under the same budget constraint, PUs’ PPLs in the complete information scenario are lower than those in the incomplete information scenario. The reason is that in the complete information scenario, FC knows each PU’s precise type, so the contracts designed for all types of PUs can leave them zero utility, as Lemma 3 shows. However, in the incomplete information scenario, PUs’ precise types are unknown to the system. Thus, only the highest-type contract can leave zero utility, whereas the utilities of all other types of PUs must remain strictly positive, since otherwise lower-type PUs would pretend to be of a higher type to achieve a higher utility.

In Fig. 4, we show the utility functions of type-, type- and type- PUs when they choose each type of contract. Notice that the utility function is concave for all types of PUs, and each type of PUs achieves its optimal utility when choosing the corresponding contract, e.g., type- PUs achieve their optimal utility when they choose the type- contract, which validates the incentive compatibility property. Additionally, we observe that lower-type PUs achieve higher utility when choosing the same contract. The reason is that lower-type PUs have a lower privacy preference ; according to PUs’ utility definition , a smaller results in a higher utility.
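The incentive compatibility check illustrated by Fig. 4 can be sketched numerically. The code below assumes a utility of the common screening form u(k; j) = payment_k − θ_j · cost_k; this functional form, the helper name `ic_holds`, and the toy menu are our own illustrative assumptions, not the paper's exact expressions.

```python
import numpy as np

def ic_holds(payments, ppl_costs, thetas):
    """Check incentive compatibility for a discrete contract menu.
    payments[k]  : payment offered with the type-k contract
    ppl_costs[k] : privacy cost of the type-k noise level
    thetas[j]    : privacy preference of a type-j PU
    A type-j PU's utility from contract k is assumed to be
    payments[k] - thetas[j] * ppl_costs[k]; IC requires that the
    own contract k = j maximizes it."""
    util = payments[None, :] - np.outer(thetas, ppl_costs)
    return all(np.argmax(util[j]) == j for j in range(len(thetas)))

# Toy menu consistent with Fig. 3's monotonicity: higher types sell
# less privacy (lower cost) and receive a lower payment, and the
# highest type is left with zero utility.
thetas = np.array([1.0, 2.0, 3.0])
ppl_costs = np.array([3.0, 2.0, 1.0])
payments = np.array([6.5, 5.2, 3.0])
print(ic_holds(payments, ppl_costs, thetas))  # -> True
```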

Vi-B System Performance

In this subsection, we show the impact of different system parameters on the aggregation accuracy.

Fig. 5 shows the impact of the amount of budget on the aggregation accuracy when the other parameters are fixed. We observe that decreases as the amount of budget increases. Since a smaller means better aggregation accuracy, Fig. 5 indicates that a larger budget leads to better aggregation accuracy. The reason is intuitive: when FC possesses more budget, it can provide stronger incentives to drive PUs to choose lower PPLs, which leads to better aggregation accuracy.

In Fig. 6, we evaluate the impact of the number of PUs’ types on the aggregation accuracy when the other parameters are fixed. Fig. 6 shows that the aggregation accuracy decreases with the number of PUs’ types. Recalling the reduced IR constraint and IC constraints , we can set the utilities of higher-type PUs closer to , which means smaller additional payments. Thus, an increase in the number of PUs’ types leads to more additional payment, which decreases the aggregation accuracy under the budget constraint.

Vii Conclusion

In this paper, we designed an incentive mechanism, REAP, to compensate PUs for their privacy loss. Unlike previous mechanisms, we did not require FC to be trustworthy, and we allowed PUs to add well-calibrated noise to their sensing data before reporting it to FC. Then, in order to achieve better aggregation accuracy under a budget constraint, we devised a contract-based incentive mechanism to induce PUs to lower their PPLs. Optimal contracts in closed form were derived for both complete and incomplete information scenarios, and the results were generalized to the continuous case. Extensive simulations were conducted to validate the feasibility of our proposed incentive mechanism.

References

  • [1] X. Duan, C. Zhao, S. He, P. Cheng, and J. Zhang, “Distributed algorithms to compute walrasian equilibrium in mobile crowdsensing,” IEEE Transactions on Industrial Electronics, 2016.
  • [2] S. He, D.-H. Shin, J. Zhang, and J. Chen, “Near-optimal allocation algorithms for location-dependent tasks in crowdsensing,” IEEE Transactions on Vehicular Technology, 2016.
  • [3] R. K. Ganti, N. Pham, H. Ahmadi, S. Nangia, and T. F. Abdelzaher, “Greengps: a participatory sensing fuel-efficient maps application,” in Proceedings of ACM MobiSys’10, pp. 151–164.
  • [4] R. Gao, M. Zhao, T. Ye, F. Ye, Y. Wang, K. Bian, T. Wang, and X. Li, “Jigsaw: Indoor floor plan reconstruction via mobile crowdsensing,” in Proceedings of ACM MobiCom’14, pp. 249–260.
  • [5] Y. Cheng, X. Li, Z. Li, S. Jiang, Y. Li, J. Jia, and X. Jiang, “Aircloud: a cloud-based air-quality monitoring system for everyone,” in Proceedings of ACM SenSys’14, pp. 251–265.
  • [6] S. Hu, L. Su, H. Liu, H. Wang, and T. F. Abdelzaher, “Smartroad: Smartphone-based crowd sensing for traffic regulator detection and identification,” ACM Transactions on Sensor Networks, vol. 11, no. 4, p. 55, 2015.
  • [7] T. Luo, H.-P. Tan, and L. Xia, “Profit-maximizing incentive for participatory sensing,” in Proceedings of IEEE INFOCOM’14, pp. 127–135.
  • [8] Q. Zhang, Y. Wen, X. Tian, X. Gan, and X. Wang, “Incentivize crowd labeling under budget constraint,” in Proceedings of IEEE INFOCOM’15, pp. 2812–2820.
  • [9] X. Zhang, G. Xue, R. Yu, D. Yang, and J. Tang, “Truthful incentive mechanisms for crowdsourcing,” in Proceedings of IEEE INFOCOM’15, pp. 2830–2838.
  • [10] H. Jin, L. Su, H. Xiao, and K. Nahrstedt, “Inception: incentivizing privacy-preserving data aggregation for mobile crowd sensing systems,” in Proceedings of ACM MobiHoc’17, pp. 341–350.
  • [11] M. Zhang, L. Yang, X. Gong, and J. Zhang, “Privacy-preserving crowdsensing: Privacy valuation, network effect, and profit maximization,” in Proceedings of IEEE GLOBECOM’16, pp. 1–6.
  • [12] W. Wang, L. Ying, and J. Zhang, “The value of privacy: Strategic data subjects, incentive mechanisms and fundamental limits,” pp. 249–260.
  • [13] D. Yang, G. Xue, X. Fang, and J. Tang, “Incentive mechanisms for crowdsensing: Crowdsourcing with smartphones,” IEEE/ACM Transactions on Networking, 2015.
  • [14] H. Jin, L. Su, D. Chen, K. Nahrstedt, and J. Xu, “Quality of information aware incentive mechanisms for mobile crowd sensing systems,” in Proceedings of ACM MobiHoc’15, pp. 167–176.
  • [15] X. Zhang, Z. Yang, Z. Zhou, H. Cai, L. Chen, and X. Li, “Free market of crowdsourcing: Incentive mechanism design for mobile sensing,” IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 12, pp. 3190–3200, 2014.
  • [16] I. Koutsopoulos, “Optimal incentive-driven design of participatory sensing systems,” in Proceedings of IEEE INFOCOM’13, pp. 1402–1410.
  • [17] L. Duan, T. Kubo, K. Sugiyama, J. Huang, T. Hasegawa, and J. Walrand, “Incentive mechanisms for smartphone collaboration in data acquisition and distributed computing,” in Proceedings of IEEE INFOCOM’12, pp. 1701–1709.
  • [18] T. Luo, S. S. Kanhere, H.-P. Tan, F. Wu, and H. Wu, “Crowdsourcing with tullock contests: A new perspective,” in Proceedings of IEEE INFOCOM’15, pp. 2515–2523.
  • [19] D. Peng, F. Wu, and G. Chen, “Pay as how well you do: A quality based incentive mechanism for crowdsensing,” in Proceedings of ACM MobiHoc’15, pp. 177–186.
  • [20] M. H. Cheung, R. Southwell, F. Hou, and J. Huang, “Distributed time-sensitive task selection in mobile crowdsensing,” in Proceedings of ACM MobiHoc’15, pp. 157–166.
  • [21] H. Xie, J. Lui, W. Jiang, and W. Chen, “Incentive mechanism and protocol design for crowdsensing systems,” in Allerton, 2014.
  • [22] A. Ghosh and A. Roth, “Selling privacy at auction,” in Proceedings of ACM EC’11, pp. 199–208.
  • [23] L. K. Fleischer and Y.-H. Lyu, “Approximately optimal auctions for selling privacy when costs are correlated with data,” in Proceedings of ACM EC’12, pp. 568–585.
  • [24] K. Ligett and A. Roth, “Take it or leave it: Running a survey when privacy comes at a cost,” in Proceedings of WINE’12.   Springer, pp. 378–391.
  • [25] K. Nissim, S. Vadhan, and D. Xiao, “Redrawing the boundaries on purchasing data from privacy-sensitive individuals,” in Proceedings of ACM ITCS’14, pp. 411–422.
  • [26] Q. Li and G. Cao, “Providing efficient privacy-aware incentives for mobile sensing,” in Proceedings of IEEE ICDCS’14, pp. 208–217.
  • [27] ——, “Providing privacy-aware incentives for mobile sensing,” in Proceedings of IEEE PerCom’13, pp. 76–84.
  • [28] H. Jin, L. Su, B. Ding, K. Nahrstedt, and N. Borisov, “Enabling privacy-preserving incentives for mobile crowd sensing systems,” in Proceedings of IEEE ICDCS’16, pp. 344–353.
  • [29] C. Dwork, “Differential privacy: A survey of results,” in Proceedings of TAMC’08.   Springer, pp. 1–19.
  • [30] J. Le Ny and G. J. Pappas, “Differentially private filtering,” IEEE Transactions on Automatic Control, vol. 59, no. 2, pp. 341–354, 2014.
  • [31] C. Dwork, F. McSherry, K. Nissim, and A. Smith, “Calibrating noise to sensitivity in private data analysis,” in Proceedings of Theory of Cryptography Conference.   Springer, 2006, pp. 265–284.
  • [32] M. M. Pai and A. Roth, “Privacy and mechanism design,” ACM SIGecom Exchanges, vol. 12, no. 1, pp. 8–29, 2013.
  • [33] L. Xu, C. Jiang, Y. Chen, Y. Ren, and K. R. Liu, “Privacy or utility in data collection? a contract theoretic approach,” IEEE Journal of Selected Topics in Signal Processing, vol. 9, no. 7, pp. 1256–1269, 2015.
  • [34] Y. Zhang, L. Song, W. Saad, Z. Dawy, and Z. Han, “Contract-based incentive mechanisms for device-to-device communications in cellular networks,” IEEE Journal on Selected Areas in Communications, vol. 33, no. 10, pp. 2144–2155, 2015.
  • [35] L. Duan, L. Gao, and J. Huang, “Cooperative spectrum sharing: a contract-based approach,” IEEE Transactions on Mobile Computing, vol. 13, no. 1, pp. 174–187, 2014.
  • [36] H. Zhuo, S. Zhong, and N. Yu, “A privacy-preserving remote data integrity checking protocol with data dynamics and public verifiability,” IEEE Transactions on Knowledge and Data Engineering, vol. 23, no. 9, pp. 1432–1437, 2011.

Appendix A Proof of Lemma 3.2

Proof.

The aggregation error of the randomized sensing data can be written as

Recall that the variance of a Laplacian random variable is , i.e., ; thus we can derive that

Therefore, from the Chebyshev’s inequality, we have

which indicates that the aggregated randomized sensing data satisfies -accuracy.

Thus, for a given confidence level , we have

Substituting into the above formula and setting for all , we derive
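The Chebyshev step in this proof can be checked empirically: the aggregation error is a sum of independent Laplace noises, its variance is the sum of the individual variances 2b_i² (the variance of Lap(b) is 2b²), and Chebyshev's inequality bounds the tail probability by Var/t². The helper name `aggregate_error_tail` and the concrete scales in the usage line are illustrative assumptions.

```python
import numpy as np

def aggregate_error_tail(b_scales, t, trials=100000, seed=0):
    """Empirically estimate P(|sum of Laplace noises| >= t) and
    compare it with the Chebyshev bound Var / t^2, where
    Var = sum_i 2 * b_i^2 since Var(Lap(b)) = 2 b^2."""
    rng = np.random.default_rng(seed)
    b = np.asarray(b_scales, dtype=float)
    noise = rng.laplace(0.0, b, size=(trials, len(b)))
    err = np.abs(noise.sum(axis=1))
    empirical = np.mean(err >= t)
    chebyshev = (2.0 * np.sum(b ** 2)) / t ** 2
    return empirical, chebyshev

# 50 PUs with unit noise scale: Var = 100, so the bound at t = 40
# is 100 / 1600 = 0.0625; the empirical tail stays well below it.
emp, bound = aggregate_error_tail(b_scales=[1.0] * 50, t=40.0)
```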

Appendix B Proof of Theorem 4.2

Proof.

Substituting (8) into (7), we have

(38)

The Lagrangian of Problem 2 can be written as

where is the Lagrange multiplier.

Based on the KKT condition, we have

Solving the above equation obtain . Substituting this formula to (38), we have

Therefore, is given by

(39)

Substituting (39) into , can be calculated as

(40)

Appendix C Proof of Lemma 4.5

Proof.

We prove this lemma in three steps.

First, we prove that if is satisfied, then holds for all .

Based on the IC constraint, we have

(41)
(42)

Formula (42) can be rewritten as

Recalling the monotonic property in Lemma 6, we know that and . Thus, we have or . Following the same steps, we have

These inequalities establish this step.

Second, we prove that if is satisfied, then holds for all .

Similar to the proof of the first step, we have

which establishes this step. Notice that for an optimal contract, holds, since otherwise we can always find a larger to achieve better aggregation accuracy until the equality holds.

Third, we prove that implies .

It is obvious that ; rearranging this inequality, we have

Since , holds, i.e., . Thus, we have .

In summary, implies , which completes the proof of this lemma. ∎

Appendix D Proof of Theorem 4.6

Proof.

Based on (18) and (19), we have

(43)

Letting , we can rewrite (43) as .

Following the same procedure, we can conclude that

(44)

where is defined by (20).

Then, we have

Rearranging the above equation by , we get