Game-Theoretic Design of Optimal Two-Sided Rating Protocols for Service Exchange Dilemma in Crowdsourcing
Abstract—Despite the increasing popularity and successful examples of crowdsourcing, its openness has been overshadowed by important episodes in which elaborate sabotage derailed or severely hindered collective efforts. A service exchange dilemma arises from non-cooperation among self-interested users, and zero social welfare is obtained at the myopic equilibrium. Traditional rating protocols are ineffective in overcoming this socially undesirable equilibrium because of specific features of crowdsourcing: a large number of anonymous users with asymmetric service requirements and different service capabilities who dynamically join and leave a crowdsourcing platform under imperfect monitoring. In this paper, we develop the first game-theoretic design of a two-sided rating protocol to stimulate cooperation among self-interested users, which consists of a recommended strategy and a rating update rule. The recommended strategy prescribes a desirable behavior from three predefined plans according to intrinsic parameters, while the rating update rule updates the ratings of both matched users and uses differential punishments that punish users with different ratings differently. By quantifying necessary and sufficient conditions for a sustainable social norm, we formulate the problem of designing an optimal two-sided rating protocol that maximizes the social welfare among all sustainable protocols, provide design guidelines for optimal two-sided rating protocols, and devise a low-complexity algorithm that selects the optimal design parameters in an alternating manner. Finally, illustrative results show the validity and effectiveness of the proposed protocol for the service exchange dilemma in crowdsourcing.
Crowdsourcing has emerged in recent years as a paradigm for leveraging human intelligence and activity at a large scale. It offers a distributed and cost-effective approach to obtaining needed content, information, or services by soliciting contributions from an undefined set of people instead of assigning a job to designated employees [1, 2]. Over the past decade, numerous successful crowdsourcing platforms, such as Amazon Mechanical Turk (AMT), Yahoo! Answers, and Upwork, have emerged. With the help of crowdsourcing platforms and the power of a crowd, crowdsourcing is becoming increasingly popular, as it provides an efficient and cheap method of obtaining solutions to complex tasks that are currently beyond computational capabilities but possible for humans [6, 7, 8].
Over the past decade, techniques for securing crowdsourcing operations have been expanding steadily, and so has the number of applications of crowdsourcing. However, due to the openness of crowdsourcing, users on a crowdsourcing platform have the opportunity to exhibit antisocial behaviors. Indeed, crowdsourcing has been marred by important episodes in which elaborate sabotage derailed or severely hindered collective efforts [10, 11]. As part of crowdsourcing, service exchange applications have proliferated as a medium that allows users to exchange valuable services. In a typical service exchange application, a user plays a dual role: as a client (sometimes also called a requester) who submits service requests to a crowdsourcing platform, and as a server (or worker) who chooses to devote a high or low level of effort to a job and provides solutions to the client in exchange for rewards. Since providing services incurs costs to servers in terms of power, time, bandwidth, privacy leakage, etc., rational and self-interested users are inclined to devote low effort when acting as a server and to seek services from others as a client rather than providing services as a server. Under such circumstances, non-cooperative behaviors among self-interested users decrease social welfare, which constitutes a social dilemma. Therefore, an increased level of cooperation is socially desirable for service exchange on crowdsourcing platforms.
The main reason why users in the above service exchange game have an incentive not to cooperate with each other is the absence of punishments for such malicious behaviors. Self-interested users adjust their strategies over time to maximize their own utilities; however, they receive no direct and immediate benefit from choosing to be a server and devoting high effort to provide high-quality services to other users (as clients). This conflict leads to an inevitable outcome: many users prefer to be clients who request services, or to be servers who devote only low effort and provide low-quality services. Thus, an important function of the crowdsourcing platform is to provide a good incentive mechanism for service exchange, and there is an urgent need to stimulate cooperation among self-interested users in crowdsourcing, under which self-interested users are compelled to follow the social norm so that the inefficiency of the socially undesirable equilibrium is overcome: if a user chooses to be a server in the first stage and provides high-quality services in the second stage, then he should be rewarded immediately; otherwise, he should be punished.
Incentives are key to the success of crowdsourcing, as it heavily depends on the level of cooperation among self-interested users. There are two types of incentives: monetary and non-monetary. Monetary incentive mechanisms motivate individuals to provide high-quality services through monetary or matching rewards in the form of micropayments, which in principle can achieve the social optimum by internalizing the external effects of self-interested individuals. Although monetary incentives are, in some sense, the best and easiest way to motivate people, several challenges prevent them from succeeding in service exchange applications. First, it is difficult to price the small services (e.g., answers, knowledge, resources) exchanged between users, as these are not real goods. Deploying auctions may set the price to a certain degree, but it can cause implementation complexity, high delay, and currency inflation. Second, as prior work points out, "free-riding" may happen when rewards are paid before services are provided: a server always has the incentive to take the reward without devoting enough effort. Conversely, if rewards are paid after the service exchange is completed, "false-reporting" may arise, since the client has an incentive to lower or refuse rewards to servers by lying about the outcome of the task. Third, although a monetary scheme is simple to design, it often requires a complex accounting infrastructure, which introduces substantial computation and communication overheads and is thus difficult to implement in reality [20, 21].
In addition to monetary incentives, some applications are endowed with various non-monetary incentive types, such as natural incentives, personal development, solidary incentives, and material incentives. Among these, rating protocols (a form of solidary incentive) originally proposed by Kandori have been shown to work effectively as incentive mechanisms that enforce cooperation on crowdsourcing platforms [13, 15, 24, 12, 23]. Generally speaking, a rating protocol labels each user with a rating based on his past behaviors, indicating his social status in the system. Users with different ratings are treated differently by the users they interact with, and the rating of a user who complies with (resp. deviates from) the social norm goes up (resp. down). Hence, a user with a high/low rating can be rewarded/punished even by users on a crowdsourcing platform who have had no past interactions with him. Furthermore, using ratings as a summary record of a user requires significantly less information to be maintained. The rating protocol therefore has the potential to form the basis of successful incentive mechanisms for service exchange on crowdsourcing platforms. Motivated by these considerations, this paper is devoted to the study of incentive mechanisms based on rating protocols.
However, several major obstacles prevent existing rating protocols from being directly implemented for incentive provision in crowdsourcing service exchange: (i) Users have asymmetric service requirements, and on most crowdsourcing platforms they can freely and frequently change the partners they interact with, which results in asymmetric interactions that are more difficult to model and analyze [26, 27]; (ii) Taking into account the service capabilities of users and the spatial/temporal requirements of tasks, the framework of anonymous random matching games, in which each user is repeatedly matched with different partners over time for service exchange, is inappropriate [12, 15]; (iii) The user population is large, and users are anonymous and not sufficiently patient, especially when users with bad ratings may attempt to leave and rejoin the system as new members to avoid punishments (i.e., whitewashing) [23, 28]; (iv) In the presence of imperfect monitoring, a user's rating may be wrongly updated, which affects the rating protocol design and causes social welfare loss [13, 24].
In this paper, we take the above features of service exchange in crowdsourcing into consideration and propose a game-theoretic framework for designing and analyzing a class of incentive mechanisms based on rating protocols, in order to stimulate cooperation among self-interested users and maximize the social welfare. To the best of our knowledge, updating the ratings of both matched users in the service exchange game (which we name a two-sided rating) has rarely been tackled in other works. Using game theory to analyze how cooperation can be enforced and how the social welfare can be maximized under the designed two-sided rating protocol, we rigorously analyze how users' behaviors are influenced by intrinsic parameters and design parameters, as well as by users' evaluation of their individual long-term utilities, in order to characterize the optimal design that maximizes users' utilities and enforces cooperation among them. The main contributions of this paper are summarized as follows:
We model the service exchange problem as an asymmetric two-stage game and show that an inefficient outcome arises in which no user cooperates and zero social welfare is obtained at the myopic equilibrium, which is a social dilemma.
We develop the first game-theoretic design of two-sided rating protocols to stimulate cooperation among self-interested users, consisting of a recommended strategy and a rating update rule. The recommended strategy prescribes a desirable behavior chosen from three predefined recommended plans according to intrinsic parameters, while the rating update rule updates the ratings of both matched users and uses differential punishments that punish users with different ratings differently.
We formulate the problem of designing an optimal two-sided rating protocol that maximizes the social welfare among all sustainable rating protocols, provide design guidelines for determining whether a sustainable two-sided rating protocol exists under a given recommended strategy, and design a low-complexity algorithm that computes the optimal design parameters via a two-stage procedure, each stage consisting of two steps (a two-stage two-step procedure), in an alternating manner.
We use simulation results to demonstrate how intrinsic parameters (i.e., costs, imperfect monitoring, and users' patience) affect the optimal recommended strategy and the design parameters that characterize the optimal design of various protocols, as well as the performance gain of the proposed optimal two-sided rating protocol.
The remainder of this article is organized as follows. In Section II, we describe the service exchange dilemma game with two-sided rating protocols. In Section III, we formulate the problem of designing an optimal two-sided rating protocol. Then we provide the optimal design of two-sided rating protocols in Section IV. Section V presents simulation results to illustrate key features of the designed protocol. Finally, conclusions are drawn in Section VI.
II System Models
II-A Service Exchange Dilemma Game
We consider a crowdsourcing platform where each user can offer a valuable service to other users. Examples of services are sensing tasks, expert knowledge, information resources, computing power, storage space, etc. In each period, a client generates a service request, which is sent to a server that can provide the requested service. We model this process by uniform random matching: each user in the community is involved in two matches in every period, one as a client and the other as a server; each user is equally likely to receive exactly one request in every period; and the matching is independent across periods. Note that the user with whom a user interacts as a client can be different from the one with whom he interacts as a server, reflecting asymmetric interests between a pair of users at a given instant. This model well approximates the matching process in large-scale crowdsourcing systems where users interact in an ad-hoc fashion and interactions between users are constructed randomly over time.
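The uniform random matching described above can be sketched as a random permutation of users, so that every user appears exactly once as a client and once as a server in each period (a minimal illustration; the function and variable names are ours, not the paper's):

```python
import random

def match_users(n, rng):
    """One period of uniform random matching: a uniformly random
    permutation maps each client to the server who handles his
    request, so each user is in exactly two matches per period
    (one as a client, one as a server)."""
    servers = list(range(n))
    rng.shuffle(servers)
    return servers  # servers[i] is the server matched with client i

period = match_users(6, random.Random(42))
# Every user serves exactly one request in the period.
assert sorted(period) == list(range(6))
```

Note that a plain permutation may occasionally match a user with himself; a derangement could be substituted to rule that case out.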
In this model, a user first decides whether or not to request service (i.e., chooses to be a client or a server); if the user chooses to be a server, he then strategically determines his service quality (devoting a high or low level of effort). Note that the decisions are sequential: the role-selection decision is made first, and the service-quality decision is made next. We model this interaction as a sequential, two-stage game. In the first stage, a user's action is chosen from a binary set: "choosing to be a client" (request service) or "choosing to be a server" (offer service). In the second stage, the server has a binary choice between being whole-hearted and being half-hearted in providing the service, while the client has no choice. The server's action set is {H, L}, where H stands for "high level of effort" and L for "low level of effort".
We assume that requesting service is costly: choosing to be a client incurs a cost. If the server devotes a high level of effort to fulfill the client's request, the client receives a service benefit while the server suffers a service cost. If the server devotes a low level of effort, both users receive zero payoffs. Obviously, the server's action determines the payoffs of both users. After a server takes an action, the client sends a report about the server's action to the third-party device or infrastructure that manages the rating scores of users. However, the report is inaccurate with a small probability, either because of the client's incapability of accurate assessment or because of some system error; that is, with this small probability the opposite service quality is reported. Assuming a binary set of reports, it is without loss of generality to restrict the error probability to be less than 1/2, because at exactly 1/2 reports are completely random and contain no meaningful information about the actions of users.
We find the subgame perfect equilibrium of the two-stage game. Each pair of first-stage decisions results in a different second-stage game. We first compute expected utilities in the second-stage game, and then step back to compute expected utilities when both users choose their first-stage actions before knowing their productivities. When a user requests services as a client, a matching rule determines the corresponding server. We model the interaction between a pair of matched users in the second stage as a gift-giving game, whose payoff matrix between a client and a server is presented in Table I. We assume that the service benefit exceeds the service cost, so that the service of a user creates a positive net social benefit; social welfare is maximized when all servers choose high effort in the gift-giving games they play, which yields a positive payoff to every user. On the contrary, low effort is the dominant strategy for the server, which constitutes a Nash equilibrium of the gift-giving game.
We now step back and compute expected utilities in the first stage. When one user chooses to be a client, he incurs the request cost in the first stage and receives the payoff of the second-stage game, while the matched server chooses low effort and suffers a cost approximated by 0 here, also obtaining zero payoff; we summarize this in the corresponding cells of the payoff matrix in Table II. When both users choose to request services as clients, each user incurs the request cost but receives zero service benefit, as no user offers service; we note this in the corresponding cell of Table II. The case when both users choose to be servers is similar, except that each user suffers no cost; the expected utility of each user is zero, as no user requests service, which the remaining cell of Table II describes.
In summary, for any choice of parameters, only the non-cooperative outcome can be a Nash equilibrium of the service exchange game. When every user myopically chooses his action to maximize his current payoff, an inefficient outcome arises in which every user receives zero payoff, which is a social dilemma. Under this framework, nobody takes the initiative to help others, and nobody can expect help from others.
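To make the dilemma concrete, the following sketch encodes the gift-giving stage game of Table I with illustrative numbers b > c > 0 (the numeric values and names are our assumptions, not the paper's):

```python
def gift_giving_payoffs(server_action, b=1.0, c=0.4):
    """Second-stage gift-giving game: high effort 'H' gives the
    client benefit b at cost c to the server; low effort 'L'
    gives both players zero payoff."""
    if server_action == 'H':
        return (b, -c)  # (client payoff, server payoff)
    return (0.0, 0.0)

# Low effort strictly dominates for a myopic server (-c < 0), so the
# one-shot Nash equilibrium yields zero social welfare, even though
# mutual cooperation would earn each user b - c > 0 per period.
assert gift_giving_payoffs('L')[1] > gift_giving_payoffs('H')[1]
assert sum(gift_giving_payoffs('L')) == 0.0
```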
II-B Two-Sided Rating Protocols
We consider a two-sided rating protocol that consists of a recommended strategy and a rating update rule. The recommended strategy prescribes the contingent plan, selected according to intrinsic parameters, that the server should follow based on the ratings of both himself and his client. Here we focus on one plan; two other plans will be introduced in the latter half of this article. The rating update rule updates the ratings of both users depending on their past actions as a server or a client, and uses differential punishments that punish users with different ratings differently. To the best of our knowledge, two-sided rating protocols in crowdsourcing have rarely been tackled in other works. In the following, we give a formal definition of a two-sided rating protocol.
A two-sided rating protocol is represented as a 5-tuple , i.e., a set of binary rating labels , a social strategy , a client/server ratio , a recommended strategy , and a rating update rule .
denotes the set of binary rating labels, where 0 is the bad rating, and 1 is the good rating.
represents the adopted social strategy for a user with rating , where .
keeps a record of the client/server ratio for a user with rating , which contains his history and current choices of .
defines the strategy which the server with rating should select when faced with the client with rating .
specifies how a user’s rating should be updated based on its adopted strategies and current rating as follows:
We characterize the erroneous report by a mapping , where 0 and 1 represent “L” and “H”, respectively. is the probability distribution over , and is the probability that the client reports “r” given the server’s actual service quality “q”.
A schematic representation of a rating update rule is provided in Figure 1. Given a rating protocol, each user is tagged with a binary rating label representing his social status; obviously, the higher the rating, the better the social status the user has. Users' ratings are stored and updated by the system administrator based on the strategies they adopt in the transactions they engage in. The rating scheme can update a user's rating at the end of each transaction or at the beginning of the next one. Under the rating update rule (2), a server will be assigned rating 1 with the reward probability, and rating 0 with the complementary probability, if the reported service quality is no lower than the recommended service quality; otherwise, he will be assigned rating 0 with the punishment probability and rating 1 with the complementary probability. Similarly, a client will be assigned rating 1 with the corresponding reward probability if his client/server ratio satisfies the fairness requirement; otherwise, he will be assigned rating 0 with the corresponding punishment probability. The reward parameters can thus be interpreted as the strength of the reward imposed on servers and clients, respectively, when they cooperate, while the punishment parameters can be interpreted as the strength of the punishment imposed on servers when they do not devote high effort and, likewise, on clients when they expect to obtain excessive service from others rather than serving others.
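The two-sided update can be simulated as below; the reward/punishment probabilities alpha_s, beta_s, alpha_c, beta_c are placeholder names for the reward and punishment strengths described above, since the paper's own symbols did not survive extraction:

```python
import random

def next_rating(compliant, reward_p, punish_p, rng):
    """Compliance is rewarded with rating 1 w.p. reward_p (rating 0
    otherwise); deviation is punished with rating 0 w.p. punish_p
    (rating 1 otherwise)."""
    if compliant:
        return 1 if rng.random() < reward_p else 0
    return 0 if rng.random() < punish_p else 1

def update_both(server_ok, client_fair, alpha_s, beta_s,
                alpha_c, beta_c, rng):
    """Two-sided rating update: one transaction moves BOTH the
    server's rating (did the reported quality meet the recommended
    quality?) and the client's rating (did his client/server ratio
    meet the fairness target?)."""
    return (next_rating(server_ok, alpha_s, beta_s, rng),
            next_rating(client_fair, alpha_c, beta_c, rng))

rng = random.Random(0)
# With maximal reward and punishment strengths the update is deterministic.
assert update_both(True, True, 1, 1, 1, 1, rng) == (1, 1)
assert update_both(False, False, 1, 1, 1, 1, rng) == (0, 0)
```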
III Problem Formulation
III-A Stationary Rating Distribution
Given a two-sided rating protocol, suppose that each user always follows the given recommended strategy and keeps the prescribed client/server ratio in every period. As time passes, users' ratings are updated, and the transition probabilities of a user's rating can be expressed as follows. The transition between ratings is determined by the rating update rule, taking into account the rate at which a user chooses to be a client and the report error probability, as shown in the following expressions:
From these transition probabilities, the stationary distribution can be derived as follows.
Since the coefficients in the equations that define the stationary distribution are independent of the recommended strategy that users should follow, the stationary distribution itself is also independent of the recommended strategy, as can be seen from Eq.(5). Thus, we write the stationary distribution in a form that emphasizes its dependence on the design parameters.
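For the binary rating set, the stationary distribution follows from balancing the upward and downward transition probabilities; p_up and p_down below are our shorthand for the transition probabilities induced by the expressions above:

```python
def stationary_distribution(p_up, p_down):
    """Stationary distribution (eta_0, eta_1) of a two-state rating
    chain with p_up = P(0 -> 1) and p_down = P(1 -> 0), solving the
    balance equation eta_0 * p_up = eta_1 * p_down together with
    eta_0 + eta_1 = 1."""
    eta1 = p_up / (p_up + p_down)
    return (1.0 - eta1, eta1)

eta0, eta1 = stationary_distribution(0.3, 0.1)
assert abs(eta1 - 0.75) < 1e-12 and abs(eta0 + eta1 - 1.0) < 1e-12
```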
III-B Sustainable Conditions
The purpose of designing a social norm is to induce each user to follow the recommended strategy and keep the prescribed client/server ratio in every period. We call a user who complies with the social norm a "compliant user," and a user who deviates from it a "non-compliant user." A compliant user is rewarded, whereas a non-compliant user is punished in order to regulate his behavior. Since we consider a non-cooperative scenario, it is important to check whether a user can improve his long-term payoff by a unilateral deviation. Note that a unilateral deviation by an individual user does not affect the evolution of rating scores, and thus the stationary distribution, because we consider a continuum of users.
Let the cost term be the cost paid by a server who is matched with a client and follows the recommended strategy: it equals the high-effort cost when the recommended plan prescribes cooperation for that rating pair, and zero otherwise. Similarly, let the benefit term be the benefit received by a client who is matched with a server following the recommended strategy: it equals the full service benefit when cooperation is prescribed, and zero otherwise. Since we consider uniform random matching, the expected period payoff of a user under a rating protocol and a chosen client rate, before he is matched, is given by
To evaluate the long-term payoff of a compliant user, we use the discounted sum criterion in which the long-term payoff of a user is given by the expected value of the sum of discounted period payoffs starting from the current period. Let be the transition probability that a -user becomes a -user in the next period under a rating protocol when he follows the recommended strategy and selects the chosen rate , which can be expressed as
The expected long-term utility of a user in the repeated game starting from the current period is the infinite-horizon discounted sum of his expected one-period utility with a discount factor
where the discount factor is the rate at which a user discounts his future payoffs and reflects his patience; obviously, a larger discount factor corresponds to a more patient user. With a simple manipulation based on Eq.(6), Eq.(7), and Eq.(8), we have
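The discounted long-term utilities at the two ratings solve the linear system v = u + delta * P v; for a binary rating set this is a direct 2x2 solve (the names u, P, delta are our stand-ins for the per-period payoffs, the rating transition matrix, and the discount factor):

```python
def long_term_utility(u, P, delta):
    """Solves v = u + delta * (P @ v) for two ratings by inverting
    (I - delta * P) by hand; u[i] is the expected one-period payoff
    at rating i and P[i][j] the rating transition probability."""
    a = 1.0 - delta * P[0][0]
    b = -delta * P[0][1]
    c = -delta * P[1][0]
    d = 1.0 - delta * P[1][1]
    det = a * d - b * c
    v0 = (d * u[0] - b * u[1]) / det
    v1 = (a * u[1] - c * u[0]) / det
    return v0, v1

# Sanity check: with a constant payoff stream the value is the
# geometric sum u / (1 - delta) at both ratings.
v0, v1 = long_term_utility((1.0, 1.0), ((0.5, 0.5), (0.2, 0.8)), 0.9)
assert abs(v0 - 10.0) < 1e-9 and abs(v1 - 10.0) < 1e-9
```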
Since users always aim to strategically maximize their own benefits, they will find it in their own self-interest to comply with the social norm under a given two-sided rating protocol if and only if they cannot improve their long-term utilities by deviating. We call such a protocol a sustainable two-sided rating protocol and give its formal definition as follows:
(Sustainable Two-sided Rating Protocols) A two-sided rating protocol is sustainable if and only if , for all , and .
In other words, a sustainable two-sided rating protocol maximizes a user's expected long-term utility at every period, such that no user can gain from a unilateral deviation, regardless of the rating of his matched partner, when every other user follows the recommended strategy and the prescribed client rate. The social welfare is maximized when compliant users keep the fair client/server ratio. Checking whether a rating protocol is sustainable in the second stage using the preceding definition requires computing deviation gains from all possible recommended strategies. By employing the criterion of unimprovability in Markov decision theory, a user's strategic decision problem can be formulated as a Markov decision process under a two-sided rating protocol, where the state is the user's rating and the action is his chosen strategy. We thus establish the one-shot deviation principle for sustainable two-sided rating protocols, which provides simpler conditions.
(One-Shot Deviation Principle) A two-sided rating protocol satisfies the one-shot deviation principle if and only if
For the "if" part: A user's expected long-term utility when he adopts the recommended strategy at every rating can be expressed as in Eq.(8) (here, we fix the client rate). If the user unilaterally deviates from the recommended strategy at some rating, his expected long-term utility becomes
where the transition probability that a non-compliant server becomes a server with each possible rating in the next period under the protocol is expressed as
By comparing these two payoffs and , and solving the following inequality:
If =0, then for each , , and , we have
While if =1 and , then , and . Else if =0, then for each , self-interested users have no incentive to deviate from . Hence, we have
For the "only if" part: Suppose the rating protocol satisfies the one-shot deviation principle; then clearly there are no profitable one-shot deviations. We prove the converse by showing that if the protocol does not satisfy the one-shot deviation principle, there is at least one profitable one-shot deviation. Since the payoffs and transition probabilities are bounded, this holds by the unimprovability property in Markov decision theory. ∎
Lemma 1 shows that if a user cannot gain by unilaterally deviating from the recommended strategy only in the current period and following it afterwards, then neither can he gain by switching to any other strategy, and vice versa. The left-hand side of Eq.(13) can be interpreted as the current gain from deviating in the second stage, while the right-hand side represents the discounted expected future loss due to the different transition probabilities incurred by deviating.
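The comparison in Eq.(13) can be checked numerically: a one-shot deviation is unprofitable iff its current gain does not exceed the discounted expected future loss from the shifted rating transition (all argument names below are our placeholders):

```python
def one_shot_deviation_profitable(gain, v, p_rec, p_dev, delta):
    """Eq.(13)-style test at a single rating state: `gain` is the
    current-period gain from deviating, v = (v0, v1) the long-term
    utilities, p_rec / p_dev the next-rating distributions under
    compliance / deviation, and delta the discount factor."""
    future_loss = delta * sum((pr - pd) * vi
                              for pr, pd, vi in zip(p_rec, p_dev, v))
    return gain > future_loss

v = (0.0, 10.0)
# Saving a small cost today is not worth the likely rating drop...
assert not one_shot_deviation_profitable(0.4, v, (0.1, 0.9), (0.5, 0.5), 0.9)
# ...but a large enough current gain makes the deviation profitable.
assert one_shot_deviation_profitable(5.0, v, (0.1, 0.9), (0.5, 0.5), 0.9)
```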
After analyzing sustainable conditions in the second stage, we step back to analyze sustainable conditions in the first stage, where both users choose their strategies before knowing their productivities. In the first stage, users decide the optimal client rate and follow the recommended strategy in their self-interest. Under the service exchange dilemma game, a user finds it optimal to choose to be a client in the first stage, as his revenue is maximized when his matched server follows the recommended strategy in the second stage, which yields a positive payoff for him; on the contrary, choosing to be a server incurs a cost. However, social welfare is maximized if and only if every user chooses to be a server or a client with the same probability, which we name the principle of fairness, and we derive incentive constraints that characterize sustainable conditions in the first stage, as shown in Lemma 2.
(The Principle of Fairness) A two-sided rating protocol satisfies the principle of fairness if and only if
For the "if" part: Assume that each user selects the fair client rate in the first stage and adopts the recommended strategy in the second stage; then his expected long-term utility can be expressed as
where the transition probability that a compliant user becomes a user with each possible rating in the next period, when he selects the fair client rate in the first stage under the rating protocol, can be found in Eq.(7).
A user can receive the service benefit if and only if he chooses to be a client in the current period under the recommended strategy; otherwise, he suffers the service cost. Without loss of generality, suppose that a user deviates by always choosing to be a client in the current period and follows the prescribed rate afterwards; then his expected long-term utility can be expressed as
where the corresponding transition probability can be computed based on Eq.(2).
If =1, Eq.(20) can be rewritten as
While if =0, Eq.(20) can be rewritten as
For the "only if" part: Suppose the protocol satisfies the principle of fairness; then clearly there are no profitable deviations in the first stage. We prove the converse by showing that if the protocol does not satisfy the principle of fairness, there is at least one profitable deviation. Since the RHS of Eq.(16) is bounded, this holds by the unimprovability property in Markov decision theory. ∎
Using the one-shot deviation principle and the principle of fairness, we can derive incentive constraints that characterize necessary and sufficient conditions for a two-sided rating protocol to be sustainable, as formalized in the next theorem.
A two-sided rating protocol is sustainable if and only if it satisfies both the one-shot deviation principle and the principle of fairness.
The proof follows directly from Lemmas 1 and 2 and is omitted here. ∎
III-C Optimization Problem with Constraints
Given a sustainable two-sided rating protocol, each user's rating evolves as a Markov chain driven by the rating updates, with the transition probabilities given above. The expected one-period utility a user obtains in one transaction is defined as the social welfare in this paper. Obviously, a sustainable two-sided rating protocol always achieves higher social welfare than a non-sustainable one, and hence it suffices to consider sustainable protocols in order to maximize the social welfare. As a result, the design of the two-sided rating protocol that maximizes the social welfare can be formulated as follows:
IV Optimal Design of Two-Sided Rating Protocols
In this section, we investigate the design of an optimal two-sided rating protocol that solves the design problem under a given recommended strategy, i.e., selecting the optimal rating update rule, which is determined by the design parameters. To characterize the optimal design, we investigate the impact of the design parameters on the social welfare and on the incentive constraints in Eq.(25).
IV-A Existence of a Sustainable Two-Sided Rating Protocol
We first investigate whether a sustainable two-sided rating protocol exists under the given recommended strategy, i.e., whether the design problem of Eq.(25) has a feasible solution.
A sustainable two-sided rating protocol under the recommended strategy exists if and only if
For the "if" part: Among the eight design parameters, several act as reward factors imposed on compliant users, while the rest act as punishment factors imposed on non-compliant users. The incentive for self-interested users to comply is maximized when all reward factors and punishment factors are set to their upper bounds. Then, Eq.(25) can be transformed into
It is obvious that this holds; hence, Eq.(27) can be revised as follows
By solving Eq.(28), we obtain Eq.(26), which is a sufficient condition for the design problem to have a feasible solution. It shows that Eq.(25) always has a feasible solution if users are sufficiently patient (i.e., when the discount factor is large). We assume the discount factor is strictly less than 1, as no one can be 100% patient. Therefore, a sustainable two-sided rating protocol exists if Eq.(26) holds.
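As an illustration only (the exact form of Eq.(26) was lost in extraction, so the inequality below is a hedged stand-in with assumed symbols b, c, delta, eps), a feasibility check of this kind compares the one-period deviation gain against the strongest achievable discounted future loss:

```python
def sustainable_exists(b, c, delta, eps):
    """Illustrative existence test with maximal reward/punishment
    factors: the deviation gain is at most the service cost c, while
    the discounted future loss scales with the discount factor delta,
    the monitoring quality (1 - 2*eps), and the cooperation surplus
    b - c.  NOT the paper's exact Eq.(26)."""
    assert 0 <= eps < 0.5 and 0 <= delta < 1 and b > c > 0
    return c <= delta * (1.0 - 2.0 * eps) * (b - c)

# Patient users with accurate reports can sustain cooperation...
assert sustainable_exists(b=1.0, c=0.2, delta=0.9, eps=0.1)
# ...whereas impatient users cannot.
assert not sustainable_exists(b=1.0, c=0.2, delta=0.1, eps=0.1)
```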
IV-B Optimal Values of the Rating Update Rule
In this section, we assume that Eq.(26) holds, i.e., the two-sided rating protocol design problem of Eq.(25) has a feasible solution. Our goal is to select the design parameters to maximize the social welfare, that is, to maximize the reward factors and minimize the punishment factors. With this idea, Theorem 3 gives the optimal values of the reward/punishment factors except for the two determined in the remainder of this section.
Given a sustainable two-sided rating protocol , is always the optimal solution to Eq.(25).
Social welfare monotonically increases with the reward factors, whose upper bound is 1, at which the incentive constraints in Eq.(25) remain satisfied. As the social welfare monotonically decreases with the punishment factors, and given the reward factors, the design problem of Eq.(25) is transformed into the selection of the smallest punishment factors with which the incentive constraints of Eq.(16) are satisfied. It is obvious that the smallest punishment is obtained when we select the largest complementary factor, whose upper bound is also 1. Since the incentive constraint is determined only by some of these parameters, in order to provide sufficient incentives while keeping the punishment as small as possible, we set those parameters to their bounds. Hence the statement follows. ∎
By substituting into Eq.(5), we have
It is obvious that this quantity is positive. The social welfare monotonically decreases with the two remaining punishment factors, which should therefore be sufficiently small in order to increase the social welfare. However, they also cannot be too small, since then they would fail to provide sufficient punishment for self-interested users to comply with the social norm.
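The trade-off just described — punishment factors small enough to preserve welfare yet large enough to keep the incentive constraints satisfied — can be resolved by a simple search over a grid, sketched generically below (`welfare` and `feasible` stand for the problem-specific objective and the constraints of Eq.(25); all names are ours):

```python
import itertools

def best_punishments(welfare, feasible, grid):
    """Among candidate punishment pairs on `grid`, keep those
    satisfying the incentive constraints (`feasible`) and return the
    pair maximizing social welfare (`welfare`), mirroring the
    smallest-sufficient-punishment principle."""
    candidates = [(welfare(x, y), x, y)
                  for x, y in itertools.product(grid, grid)
                  if feasible(x, y)]
    if not candidates:
        return None  # no sustainable protocol on this grid
    best = max(candidates)
    return best[1], best[2]

# Toy instance: welfare falls with total punishment, and incentives
# require the total punishment to reach at least 0.5.
pair = best_punishments(lambda x, y: -(x + y),
                        lambda x, y: x + y >= 0.5,
                        [0.1, 0.25, 0.5])
assert pair == (0.25, 0.25)
```

In practice the two parameters can also be updated alternately — fix one, optimize the other, and repeat — which is the low-complexity alternating procedure the paper advocates.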
Given the other parameters fixed at their optimal values, the design problem in Eq.(25) with respect to the two remaining punishment factors can be rewritten as