Customized Local Differential Privacy for Multi-Agent Distributed Optimization

Abstract

Real-time data-driven optimization and control problems over networks may require sensitive information of participating users to calculate solutions and decision variables, such as in traffic or energy systems. Adversaries with access to coordination signals may potentially decode information on individual users and put user privacy at risk. We develop local differential privacy, which is a strong notion that guarantees user privacy regardless of any auxiliary information an adversary may have, for a larger family of convex distributed optimization problems. The mechanism allows agents to customize their own privacy level based on local needs and parameter sensitivities. We propose a general sampling-based approach for determining sensitivity and derive analytical bounds for specific quadratic problems. We analyze inherent trade-offs between privacy and suboptimality and propose allocation schemes to divide the maximum allowable noise, a privacy budget, among all participating agents. Our algorithm is implemented to enable privacy in distributed optimal power flow for electric grids.

I Introduction

Advances in sensing and computing enable various infrastructures, such as traffic or energy networks, to solve optimization and control problems in real-time throughout a network. The scale of such problems often calls for a distributed implementation that can be solved quickly enough to allow for high-frequency control actions. To enable this, a network may be split into sub-networks governed by different agents, who exchange their local optimization variables with neighbors and/or a central operator to iteratively solve the optimization problem. Exchanging optimization variables between agents, and the changes therein, may reveal private information, such as whether someone is home and what kind of appliances someone is using [11]. In addition, there is growing understanding that secondary information may be inferred from the communicated variables, including the parameters used in the local objective and constraints, which may reveal sensitive information such as prices and capacity [10].

To make matters more challenging, different agents may be competing with one another to serve an operator with their service. Knowing the control capacity of, and prices negotiated by, other players can help in negotiating with the operator and leads to strategic behavior and untruthful communication, which harms the quality of the solution to the distributed optimization problem. As such, both personal and commercial privacy needs require the development of agent-to-agent distributed optimization algorithms that can mask sensitive information in objectives and constraints.

In recent years, various privacy-preserving algorithms have been proposed for distributed optimization and control problems, using various privacy metrics. The differential privacy framework [8] has gained the most attention, and is particularly lauded for its robustness to auxiliary side information that an adversary might possess to complement information gained from a particular algorithm, providing stronger privacy guarantees than other existing metrics. The framework assumes a setting in which sensitive information is stored in a database by a trustworthy curator, which can provide answers to external queries. A system is made differentially private by randomizing its answers in such a way that the distribution over published outputs is not too sensitive to changes in the stored data. Perturbation can be designed to make it provably difficult for an adversary to make inferences about individual records from the published outputs.

In the setting of distributed optimization, each agent is its own curator, managing its own locally private information and the communication of its optimization variables to neighboring agents or a central operator. In order to preserve differential privacy, each curator has to ensure that the outputs of queries, that is, the communicated variables, remain approximately unchanged if local parameters relating to its objective or constraints are modified.

Related Work

This work complements an existing and rapidly growing body of literature on incorporating differential privacy into resource allocation and, most relevant here, into distributed optimization, control, and networked systems. Earlier work by Hsu et al. [12] developed differential privacy-preserving algorithms for convex optimization problems that are solved in a centralized fashion, considering general linear programs (LPs) with either private objectives or constraints. Dong et al. [6] consider privacy in a game-theoretic environment, motivated by traffic routing in which the origins and destinations of drivers are considered private. Jia et al. [14] consider occupancy-based HVAC control and treat the control objective and the location traces of individual occupants as private variables, using an information-theoretic privacy metric. A recent elaborate tutorial paper by Cortés et al. [5] covers differential privacy for distributed optimization, and distinguishes between objective-perturbing and message-perturbing strategies. In the first category, each agent's objective function is perturbed with noise in a differentially private manner, which guarantees differential privacy at the functional level and is preferred for systems with asymptotically stable dynamics [18]. In the second category, coordination messages are perturbed with noise before being sent, either to neighbors or to a central node, depending on the specific algorithm. Huang et al. [13] proposed a technique for disguising private information in the local objective function for distributed optimization problems with strongly convex separable objective functions and convex constraints. Han et al. [10] considered problems where the private information is encoded in the individual constraints, the objective functions need to be convex and Lipschitz continuously differentiable, and the constraints have to be convex and separable. Other related works are Mo and Murray [17], who aim to preserve privacy of agents' initial states in average consensus, and Katewa et al. [15], who explore an alternative trade-off between privacy and the value of cooperation (rather than performance) in distributed optimization. In [23], a privacy-aware optimization algorithm is analyzed using the cryptographic notion of zero-knowledge proofs. More recently, [26] considers differentially private algorithms over time-varying directed networks.

The above works are selective in that they consider privacy-preserving mechanisms for constraints, objectives, or initial states. An exception is the work of Hsu et al. [12] on LPs, which can handle both private objectives and constraints. We complement the literature by proposing a mechanism that preserves private objectives and constraints for optimization problems with strongly convex objectives and convex constraints.

Contributions

Motivated by personal and commercial privacy concerns in distributed OPF, we investigate the problem of preserving differential privacy of local objectives and constraints in distributed constrained optimization with agent-to-agent communication. Compared to previous works on privacy-aware distributed optimization [10, 13], we consider the notion of local differential privacy, which is a refined (and more stringent) version of differential privacy [7]. It allows each agent in a network to customize its own privacy level, based on individual preferences and characteristics. Furthermore, while most previous works consider privacy protection for either the individual objective functions [13] or the individual constraints [10], our more general formulation enables us to provide privacy guarantees on both local objective function parameters and local constraint parameters. Specifically, the proposed algorithm solves a general class of convex optimization problems where each agent has a local objective function and a local constraint, and agents communicate with neighboring/adjacent agents with no need for a central authority.

We show that the private optimization algorithm can be formulated as an instance of the Inexact Alternating Minimization Algorithm (IAMA) for Distributed Optimization [20]. This algorithm allows provable convergence under computation and communication errors. This property is exploited to provide privacy by injecting noise large enough to hide sensitive information, while small enough to retain the convergence properties of the IAMA. We derive the trade-off between the privacy level and the sub-optimality of the algorithm. This trade-off allows us to determine a privacy budget that captures the allowable cumulative variance of noise injected throughout the network that achieves a desired level of (sub-)optimality. We propose two pricing schemes to ensure fair and efficient allocation of the privacy budget over all participating/bidding DER owners.

II Preliminaries and Problem Statement

In this section, we consider a distributed optimization problem on a network of sub-systems (nodes). The sub-systems communicate according to a fixed undirected graph . The vertex set represents the sub-systems and the edge set specifies pairs of sub-systems that can communicate. If , we say that sub-systems and are neighbors, and we denote by the set of the neighbors of sub-system . Note that includes . The cardinality of is denoted by . We use a vector to denote the local variable of subsystem , which can be of different dimensions for different subsystems. The collection of these local variables is denoted as . Furthermore, the concatenation of the local variable of sub-system and the variables of its neighbors is denoted by . With appropriate selection matrices and , the variables have the following relationship: and , , which implies the relation between the local variable and the global variable , i.e. , . We consider the following distributed optimization problem:

Problem II.1 (Distributed Optimization).
(1)
(2)

where is the local cost function for node which is assumed to be strongly convex with a convexity modulus , and to have a Lipschitz continuous gradient with a Lipschitz constant . The constraint is assumed to be a convex set which represents a convex local constraint on , i.e. the concatenation of the variables of sub-system and the variables of its neighbors.

The above problem formulation is fairly general and can represent a large class of problems in practice. In particular it includes the following quadratic programming problem, which we study as a particular instance in our applications.

Problem II.2 (Distributed Quadratic Problem).
(3)

where . In particular, we will assume that the smallest eigenvalue of satisfies .

II-A Local Differential Privacy

We present definitions and properties for differential privacy. Let be two databases with private parameters in some space containing information relevant in executing an algorithm. Let denote a metric defined on that encodes the adjacency or distance of two elements. A mechanism or algorithm is a mapping from to some set denoting its output space.

Definition II.3 (Differential Privacy).

A randomized algorithm is -differentially private if for all and for all databases satisfying , it holds that

(4)

where the probability space is over the mechanism .

This definition of differential privacy is suitable for cases where one uniform level of privacy needs to hold across all elements in the databases. We now consider a distributed algorithm in a network with agents for solving an optimization problem in a collaborative way, where denotes the private parameters of agent . The outputs of the mechanism are the messages exchanged between nodes in the network over the time horizon of iterations. This mechanism induces local mechanisms , each executed by one agent. The output of one local mechanism is the message sent out by node , i.e. . It is important to realize that although one local mechanism, say , does not necessarily have direct access to the input/database of other nodes, the output of could still be affected by because of the interactions among different nodes. For this reason, we explicitly write as input to all local mechanisms.

We now let each agent  specify its own level of privacy . To formalize this specification, we require a definition:

Definition II.4 (Local Differential Privacy).

Consider a (global) mechanism for a network with nodes, and local mechanisms induced by . We say that the mechanism is -differentially locally private for node , if for any it satisfies that

(5)

where . Moreover, we say that the mechanism is -differentially private, if is -differentially locally private for all nodes, where .

Figure 1 presents the concept of local differential privacy pictorially, showing the various considerations that can be taken when designing for local/customized privacy. Firstly, one may desire to include a central node 0 that communicates with all subsystems or implement a fully distributed problem between the subsystems that does not rely on any central node. The former leads to better convergence properties as information spreads more easily throughout the network, while the latter benefits privacy by making it harder to collect information from across the network. Regardless, the method allows for the privacy to be purely local, strengthening the notion developed in [10], which implements differential privacy in distributed optimization through a trusted central node, assuming a star-shaped communication structure with no agent-to-agent communication. Secondly, subsystems may have varying levels of privacy. In Figure 1, subsystems 1 and 3 have a local privacy specification, while subsystems 0 and 2 do not. The systems with local differential privacy have outgoing messages perturbed by noise, as indicated by the dashed arrows. As such, the method is flexible to various forms of distributed optimization or control problems with heterogeneous privacy and control properties across their nodes/agents, extending the work in [13], which considers a similar fully distributed algorithm but specifies a uniform privacy level for all agents.

Figure 1: A system with 4 subsystems. Arrows denote message directions in distributed optimization. Here, node 0 denotes a central node that communicates with all other nodes, which can be left out (shaded part). The other 3 nodes represent subsystems. Nodes that have a privacy specification are patterned with dots and send out messages perturbed by noise, as indicated by dashed arrows.

III Main Results

III-A Differentially Private Distributed Optimization

In this section, we describe a distributed optimization algorithm (Algorithm 1) for solving Problem II.1 with privacy guarantees, based on the results in [20]. To solve the optimization problem in a distributed way, we split the overall problem into small local problems according to the physical couplings of the sub-systems. This approach gives the resulting algorithm the desired feature that each node only needs to communicate with its neighbors, and the computations can be performed in parallel for every subsystem. To guarantee local differential privacy, a noise term is added to the message at each iteration, before it is sent out to other nodes.

0:   Initialize , and
   for  do
       1:
       2: Send to all the neighbors of agent .
       3: .
       4: Send to all the neighbors of agent .
       5:
   end for
Algorithm 1 Differentially private distributed algorithm
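To make the message-perturbation structure of Algorithm 1 concrete, the following Python sketch shows one iteration for a single agent. The local solve, the coordination term, and all numerical values are hypothetical placeholders; the exact IAMA update rules of [20] are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_solve(z_coord, Q, c):
    """Placeholder for the local minimization step of Algorithm 1: an unconstrained
    quadratic solve with a linear coordination term; the exact IAMA update of [20] differs."""
    return np.linalg.solve(Q, -(c + z_coord))

def perturbed_message(x_local, lam):
    """Perturb the outgoing message with i.i.d. Laplace noise of scale lam before sending."""
    return x_local + rng.laplace(scale=lam, size=x_local.shape)

# One iteration for a single agent i (all values illustrative).
Q_i = np.array([[2.0, 0.3], [0.3, 1.5]])        # local quadratic cost (cf. Problem II.2)
c_i = np.array([1.0, -0.5])
z_i = np.zeros(2)                               # running coordination term from neighbors
x_i = local_solve(z_i, Q_i, c_i)                # local computation
msg_i = perturbed_message(x_i, lam=0.05)        # noisy message sent to the neighbors of i
```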

We start by defining the private parameters of agent  to be the collection of parameters for its local objective function and constraints in Problem II.1, . Algorithm 1 can be seen as a global mechanism which takes input data and produces output for all and . In particular, the output of up to iteration is given by

(6)

where we use the bold letter to denote the collection of at time and to denote the collection of at time . Since we are mainly concerned with local privacy at different nodes, we use to denote the local mechanism induced by at node , which takes input data and produces output for , namely

(7)

which injects noise  to the solution of its local problem, according to some distribution.

The definition of the distance between two problems and depends on the specific application. For the quadratic problem in (3), the local objective and local constraint are parametrized by the matrix and vectors . One possible definition of the distance for this special case is given by a weighted sum of matrix/vector norms

(8)

for some weights . The choice of the weights puts more emphasis on a specific matrix or vector and its corresponding parameters. These can be used to define the “region of adjacency”, which represents the parameter differences that are considered meaningful to protect with a differentially private mechanism. The choice of the norm can also change the behavior, for instance by treating all parameters equally (such as for the -norm or the Frobenius norm) or by only considering the maximum distance (such as for the -norm). For some applications, only certain entries of the matrices and vectors represent private information. In this case, the distance should be defined only with respect to these “private entries”.
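As an illustration, the sketch below evaluates one possible instance of the weighted distance (8) for the quadratic problem (3); the weights and norms are arbitrary choices, not prescriptions.

```python
import numpy as np

def problem_distance(params_a, params_b, w_Q=1.0, w_c=1.0):
    """One possible instance of the weighted distance (8) for the quadratic problem (3),
    with params = (Q, c) the cost matrix and linear cost vector; weights are arbitrary."""
    Q_a, c_a = params_a
    Q_b, c_b = params_b
    return (w_Q * np.linalg.norm(Q_a - Q_b, ord="fro")
            + w_c * np.linalg.norm(c_a - c_b, ord=2))

# Two problems that differ only in the linear cost term.
Q = np.eye(2)
print(problem_distance((Q, np.array([1.0, 0.0])), (Q, np.array([1.2, 0.0]))))  # 0.2
```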

In the classical setup of differential privacy where the database representation is considered, there is a natural definition of adjacency: namely two databases are adjacent if they differ by only one element. When extending the adjacency definition to the space of functions, there does not exist a natural candidate for adjacency. Even with a choice such as (8), it is not completely clear how to normalize the norms. We should point out that this is a common situation encountered in similar problem setups such as [10, 13]. However, resolving this issue would require an explicit connection between the privacy level and a concrete specification supplied by the application (such as in [5, Section B]), which is out of scope for this paper.

III-B Privacy Analysis

In this section, we derive the differential privacy level provided by Algorithm 1. To state the main result, we first define

(9)

where encapsulates the local objective function and constraints . The main result is given in the following theorem.

Theorem III.1 (Local Differential Privacy).

Consider Algorithm 1 for solving Problem II.1. Assume that each element of the noise vector in Algorithm 1 is independently chosen according to the Laplace distribution with the density function for and . Then for all , this algorithm is locally -differentially private for node where

(10)

and

(11)

is called the sensitivity of the optimization problem.

Proof:

The proof is given in the Supplementary Material. ∎
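For intuition on how Theorem III.1 is used in practice, the sketch below calibrates a Laplace noise scale from a sensitivity, a desired local privacy level, and a number of iterations, assuming a standard composition-style relation; the exact constant is given by (10) and may differ.

```python
import numpy as np

def laplace_scale(sensitivity, eps_i, num_iterations):
    """Noise scale for agent i, assuming a standard Laplace-mechanism calibration
    composed over num_iterations message exchanges; the exact constant in (10) may differ."""
    return num_iterations * sensitivity / eps_i

rng = np.random.default_rng(1)
delta_i, eps_i, k = 0.5, 1.0, 50            # hypothetical sensitivity, privacy level, iterations
lam_i = laplace_scale(delta_i, eps_i, k)
noise = rng.laplace(scale=lam_i, size=3)    # noise added to one outgoing message of dimension 3
```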

III-C Sensitivity Calculation

In order to evaluate the privacy level provided in Theorem III.1, we need to calculate the sensitivity . This calculation is itself an optimization problem which can be written as follows.

(12)
s.t.

This problem belongs to the class of bi-level optimization problems, for which there is in general no efficient algorithm to find the globally optimal solution. In the rest of this section, we specialize our problem to the class of quadratic problems in (3) and provide some refined analysis. In this case, we represent the optimality condition of in terms of the KKT conditions of the optimization problem . Hence, the optimization problem (12) can be rewritten explicitly as

(13)
s.t. (14)
(15)
(16)

where the set is defined as

where represents Lagrangian multipliers for the optimization problem (12). Notice that we used the distance as defined for the quadratic problem in (8).

We state our first observation regarding the sensitivity of quadratic problems as formulated in Problem II.2.

Lemma III.2 (Sensitivity for Problem II.2).

If we specialize Problem II.1 to the problem in (3) with the distance defined in (8), can be simplified as

(17)

that is, we lose the explicit dependence on the Lagrange variable .

In the rest of this section, we discuss how to estimate the sensitivity for generic quadratic problems using a sample-based method. We point out that a more general discussion of this approach can be found in [2]. For the sake of notation, we write in (13) as where denotes the collection of all variables . Furthermore we use to denote the polynomial constraints given by (14), (15) and (16). With these notations, the optimization problem in (13) can be rewritten as

(18)
s.t.

The idea of the sample-based approach is to randomly draw many instances of from the set , and to find the maximum using these samples. Namely, we solve the following problem

(19)
s.t.

where are randomly drawn samples from . More specifically, for our problem, we randomly sample the parameters from their sets. For each sampled set of parameters, we solve the original optimization problem (3) to obtain and , and hence obtain one estimate . After samples, the maximum over these estimates is taken as , which gives a lower bound on the sensitivity. To quantify the quality of this approximation, the following definition is introduced.
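The sketch below illustrates this sample-based estimate on an unconstrained quadratic surrogate: pairs of problems within a prescribed adjacency radius are drawn at random, both are solved, and the largest observed change in the minimizer serves as a lower bound on the sensitivity. The parameter distributions, the adjacency radius, and the unconstrained solve are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def solve_qp(Q, c):
    """Unconstrained surrogate for the local problem in (3): argmin 0.5 x'Qx + c'x.
    A constrained QP solver would be substituted in practice."""
    return np.linalg.solve(Q, -c)

def sampled_sensitivity(num_samples, dim=2, adjacency_radius=0.1):
    """Lower-bound the sensitivity (11) by sampling pairs of problems whose linear
    terms differ by at most adjacency_radius (an illustrative adjacency choice)."""
    best = 0.0
    for _ in range(num_samples):
        A = rng.standard_normal((dim, dim))
        Q = A @ A.T + np.eye(dim)                     # strong convexity: eigenvalues >= 1
        c = rng.uniform(-1.0, 1.0, size=dim)
        dc = rng.uniform(-1.0, 1.0, size=dim)
        dc *= adjacency_radius / max(np.linalg.norm(dc), 1e-12)
        best = max(best, np.linalg.norm(solve_qp(Q, c) - solve_qp(Q, c + dc), 1))
    return best

print(sampled_sensitivity(1000))   # a lower bound that improves with more samples
```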

Definition III.3 (Random Sampling, Definition 1, 2 in [3]).

Let denote the optimal solution to problem (18) and be a candidate solution retrieved through solving (19). We say is an -level robustly feasible solution, if , where is defined as

(20)

where the measure is induced from the uniform sampling of the constraint set.

In other words, is the portion of in that was not explored after samples, which, if explored, would yield a higher function value than and thereby a tighter lower bound on sensitivity . The following result relates the number of samples to the quality of the approximation.

Lemma III.4 (Sampling Rule, Corollary 1 in [3]).

For given and , let . Then with probability no smaller than , the solution given by (19) is a solution with -level robust feasibility for Problem (18).

This result gives the minimum number of samples to find an approximate optimal solution to (19) with high probability (larger than ).

III-D Analytical Upper Bound for Sensitivity in Special Cases

By making additional assumptions on the Distributed Quadratic Problem formulated in Problem II.2, we derive an analytical upper bound on the sensitivity .

Lemma III.5 (Special Case - Protecting in Problem II.2).

Assume and that the distance between two problems is defined as . That is, the privacy requirement only concerns the matrix . Also assume that the local variable is bounded as ; then

(21)

where is the lower bound on the eigenvalues defined in Problem II.2.

Lemma III.6 (Special Case - Protecting in Problem II.2).

Assume and that the distance between two problems is defined as . That is, the privacy requirement only concerns the vector . Then

where is the lower bound on the eigenvalues defined in Problem II.2.

These lemmas give closed-form expressions for the upper bounds on for two special cases, and they will be useful for our applications in Section IV. Notice, however, that the upper bounds do not serve as a straightforward design principle for scaling eigenvalues to lower the sensitivity. As a result of scaling the eigenvalues, the distance metric  will also change; higher eigenvalues generally result in smaller sensitivity but also in a narrower and less meaningful “region of adjacency”, as discussed in Section III-A, Equation (8).

III-E Convergence Properties of the Distributed Optimization Algorithm

Privacy comes with a price. The noise term in Algorithm 1 makes the convergence rate slower than the case without noise. However, it is possible to prove that even with random noise, the algorithm converges in expectation. To show this, we first write out the dual problem of Problem II.1 as follows.

Problem III.7 (Dual Problem of Problem II.1).

with the matrix and dual variable . We use to denote the conjugate function of , defined as and denotes the indicator function on a set

The dual problem is of the form:

Problem III.8 (Dual Problem Form).

The stochastic proximal-gradient method (stochastic PGM), as given in Algorithm 2, is a method to solve the dual problem above.

0:   Require and step size
   for  do
       1:
   end for
Algorithm 2 Stochastic Proximal-Gradient Method

The proximity operator in Algorithm 2 is defined as

This algorithm has been studied extensively [21, 22] and requires the following assumptions.

Assumption III.9 (Assumptions for Problem III.8).

  • is a strongly convex function with a convexity modulus , and has a Lipschitz continuous gradient with a Lipschitz constant .

  • is a lower semi-continuous convex function, not necessarily smooth.

  • The norm of the gradient of the function is bounded, i.e., for all

  • The variance of the noise is equal to , i.e., for all .
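Under these assumptions, one iteration of Algorithm 2 can be sketched as follows, for an illustrative quadratic smooth part and a box-indicator nonsmooth part; the problem data, step size, and noise scale are placeholders rather than the paper's exact dual quantities.

```python
import numpy as np

rng = np.random.default_rng(3)

def grad_smooth(z, Q, c):
    """Gradient of the smooth dual term for an illustrative primal f(x) = 0.5 x'Qx + c'x,
    whose conjugate has gradient Q^{-1}(z - c)."""
    return np.linalg.solve(Q, z - c)

def prox_box(v, lower, upper):
    """Proximity operator of the indicator of a box: projection onto the box."""
    return np.clip(v, lower, upper)

def stochastic_pgm_step(z, tau, Q, c, noise_scale):
    """One iteration of Algorithm 2 with a Laplace-perturbed gradient."""
    noisy_grad = grad_smooth(z, Q, c) + rng.laplace(scale=noise_scale, size=z.shape)
    return prox_box(z - tau * noisy_grad, -1.0, 1.0)

Q = np.array([[2.0, 0.0], [0.0, 1.0]])
c = np.array([0.5, -0.2])
z = np.zeros(2)
for _ in range(200):
    z = stochastic_pgm_step(z, tau=0.3, Q=Q, c=c, noise_scale=0.01)
```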

The key observation in our proof of convergence is the following lemma, showing that executing Algorithm 1 on the original problem II.1 is equivalent to executing Algorithm 2 on the dual problem III.7.

Lemma III.10 (Equivalence Dual Problem).

Consider using Algorithm 1 on Problem II.1 and using Algorithm 2 on Problem III.7. Further assume that Algorithm 1 is initialized with the sequence , for , and Algorithm 2 is initialized with the sequence where for . Then for all and all , and the error terms in Algorithm 1 and Algorithm 2 satisfy the relationship .

A proof sketch is provided in the supplementary material, largely building on results in [19]. Based on the equivalence shown in Lemma III.10, we are ready to provide the following theorem showing the convergence properties of Algorithm 1.

Theorem III.11 (Suboptimality).

Consider Algorithm 1. Assume that the local variables are bounded as for all and for all . We have that for any , the expected suboptimality is bounded as

(22)
Proof:

Consider applying Algorithm 2 to Problem III.8 under Assumption III.9. We can first show that

(23)

The proof of this claim follows the same flow as the proof of Theorem 1 in [21] by noticing the following two facts: if the function is convex, closed and proper, then

(24)

and the variance of the gradient of is bounded by for all .

We then apply this result to our Problem III.7. The gradient of the first objective is equal to:

with . With our assumption in the theorem, the dual gradient is bounded as:

Finally the claim follows because of the equivalence result in Lemma III.10. ∎

Theorem III.11 shows that the suboptimality gap for Algorithm 1 is bounded above by a function that is linear in the number of agents and linear in the noise variances . The convergence rate is of order . This is an improvement compared to the motivating work by Han et al. [10], which achieved a convergence rate of , and is similar to the algorithm proposed by Huang et al. [13].

IV Application: Distributed Optimal Power Flow

This section presents a simplified optimal power flow (OPF) problem that inspires the proposed control approach. We consider the setting of a radial distribution feeder, and consider the flow of real power on its branches. We formulate the power flow model and the OPF objectives and develop the distributed OPF problem according to the quadratic problem, as defined in (3). We then discuss the parameters that are subject to privacy requirements and interpret the trade-offs developed in Section III.

IV-A Simplified Optimal Power Flow

Solving the simplified OPF problem requires a model of the electric grid describing both topology and impedances. This information is represented as a graph , with denoting the set of all buses (nodes) in the network, and the set of all branches (edges). For ease of presentation and without loss of generality, here we introduce part of the linearized power flow equations over radial networks, also known as the LinDistFlow equations [1]. In such a network topology, each bus has one upstream parent bus and potentially multiple downstream child buses . By  we denote the set of all buses downstream of branch . We assume losses in the network to be negligible and model the power flowing on a branch as the sum of the downstream net load:

(25)

In this model, capital represents real power flow on a branch from node to node for all branches , lower case is the real power consumption at node , and is its real power generation. This nodal consumption and generation is assumed to be uncontrollable. In addition, we consider controllable nodal injection , available at a subset of nodes that have a Distributed Energy Resource (DER). In this case study, we aim to prevent overload of real power flow over certain critical branches in an electric network. This aim is formulated through constraints

(26)

denotes a subset of branches for which power flow limitations are defined, with denoting the upper and lower power flow bounds on branch . In addition, each controlled node is ultimately limited by a local constraint on total apparent power capacity,

(27)

We consider a scenario in which the operator negotiates different prices for different capacities, potentially at different points in time, with different third-party DER owners. Let refer to the real power used for the optimization scheme from agent , and let denote the price for procuring a kilowatt from agent . The optimal power flow determines the control setpoints that minimize an economic objective subject to operational constraints.

(28)
s.t.

The OPF problem (28) can be recast as an instance of the quadratic distributed optimization problem (3). First, note that the objective is quadratic in the optimization variables , and separable per node. Second, for all nodes , the capacity box constraints (27) are linear and fully local. The safety constraints (26) require communication to and computation by a central trusted node. To ensure strong convexity of the local problems, the economic cost objectives are shared between each agent and the central trusted node. Hence, , the objective reads

(29)

with capacity constraint (27). The central node 0 has objective function

(30)

with safety constraints (26). As such, this distributed problem assumes a star-shaped communication structure, in which a centrally trusted node receives all from the agents. The agents retrieve iterates of from the central node and solve a simple problem with only the economic cost and a local capacity constraint.

IV-B Private Information in Distributed OPF

We consider assigning privacy requirements to two sets of parameters: the prices that the DSO charges to different agents in the network, and the capacities available to all agents . Together, these parameters provide important strategic insight into the commercial position of each agent. An operator may charge different prices for different levels of commitment or for the varying value that the operator gets from the actions of a specific agent at specific time periods or places in the network. In a natural commercial context, the operator may have an interest in hiding the prices from other agents. In addition, in a negotiation setting, a strategic agent may want to find out the capacity available to other agents in the network in order to adjust its bid to the operator, so as to be the first or only agent to be considered, which could lead to asymmetric and potentially unfair bidding situations. As such, in order to give all agents with capacity a fair chance to participate, there is value in hiding the capacity (and price) parameters.

To formulate this as an instance of local differential privacy, we need to define the adjacency metric for all considered parameters. In the case of both prices and capacity, this is achieved by considering the maximum range in which these parameters are expected to lie. The distance metric proposed is the -norm. Given this metric, we need to define a proper adjacency relation, which determines the maximum change in a single parameter that we aim to hide with the differentially private algorithm.

Definition IV.1.

(Adjacency Relation for Distributed OPF): For any parameter set and , we have if and only if there exists such that

(31)

and , , for all .

By setting , and respectively as the maximum price offered per unit of energy (i.e. if ) and the maximum capacity in the network (i.e. ), we ensure that all parameters in the network are properly covered by the definition.
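A minimal check of this adjacency relation might look as follows, assuming the parameter sets are represented as per-agent price and capacity vectors and that at most one agent's pair may change, by at most the stated maxima.

```python
import numpy as np

def is_adjacent(prices_a, caps_a, prices_b, caps_b, delta_c, delta_u):
    """Check the adjacency relation of Definition IV.1: at most one agent's
    (price, capacity) pair differs, by at most delta_c and delta_u respectively."""
    dc = np.abs(np.asarray(prices_a) - np.asarray(prices_b))
    du = np.abs(np.asarray(caps_a) - np.asarray(caps_b))
    changed = np.flatnonzero((dc > 0) | (du > 0))
    return changed.size <= 1 and bool(np.all(dc <= delta_c) and np.all(du <= delta_u))

# delta_c: maximum price per unit of energy; delta_u: maximum capacity in the network.
print(is_adjacent([10.0, 12.0], [0.2, 0.3],
                  [10.0, 13.5], [0.2, 0.3],
                  delta_c=15.0, delta_u=0.5))   # True: only the second agent's price changed
```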

IV-C Interpreting Trade-offs

We analyze and interpret the theoretical results that illuminate an inherent trade-off between privacy level and suboptimality. Assuming a fixed noise variance across all iterations and combining Equations (10) and (22), we have the following trade-off:

(32)

Remember that better privacy corresponds to a lower privacy level  and a more optimal solution corresponds to lower suboptimality. Unsurprisingly, an increasing number of iterations leads to worse privacy and lower suboptimality. Conversely, a higher noise variance leads to better privacy and higher suboptimality. Figure 2 shows the region of attainable values for the simplified OPF problem. The left figure shows that, for a fixed reasonable level of privacy (), the sub-optimality decreases for a larger number of iterations . The right figure shows that for a fixed level of privacy, the sub-optimality bound tightens for higher variance levels. For a fixed level of sub-optimality, a higher noise variance  achieves a lower (and hence better) privacy level. Ideally, parameters are chosen along the Pareto front of this graph.

Figure 2: Achievable tradeoffs between privacy level and suboptimality , with Pareto front. (left) indicates increasing number of iterations and (right) increasing noise variance. We assume .
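To trace trade-off curves like those in Figure 2 numerically, one can scan iteration counts and noise scales under assumed functional forms. The sketch below uses a composition-style privacy level and an O(1/k) suboptimality bound that is linear in the number of agents and in the noise variance; all constants are hypothetical, and the exact expressions are given by (10) and (22).

```python
import numpy as np

def privacy_level(k, lam, sensitivity):
    """Assumed composition-style form eps ~ k * Delta / lam; the exact constant is given by (10)."""
    return k * sensitivity / lam

def subopt_bound(k, lam, n_agents, noise_free_const=1.0):
    """Assumed O(1/k) bound, linear in the number of agents and in the per-message
    Laplace variance 2*lam**2 (cf. Theorem III.11); the constant is hypothetical."""
    return (noise_free_const + n_agents * 2.0 * lam ** 2) / k

# Scan iteration counts and noise scales to map out attainable (privacy, suboptimality) pairs.
points = [(privacy_level(k, lam, sensitivity=0.5), subopt_bound(k, lam, n_agents=3))
          for k in range(10, 500, 10)
          for lam in np.linspace(0.05, 2.0, 40)]
# The Pareto front of `points` (low privacy level, low suboptimality) corresponds to Figure 2.
```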

As a result, a system designer may want to define specifications,

(33)

Based on the specifications, we then want to determine feasible values for the number of iterations  and noise variance . For the sake of analysis, we let all upper bounds be the same in the second equation, . In addition, we consider the normalized noise-to-signal ratio , which is more intuitive as a tunable parameter. With these steps, we can write

(34)
Figure 3: Feasible parameter sets for varying levels of (left) and (right), assuming .

These equations tell us that the suboptimality is mostly governed by , as typically (in other words, needs to be on the order of or larger to affect the suboptimality). These equations provide a specific test to determine feasible parameters  that satisfy a set of specifications . Note that this set may be empty if the specifications are too stringent. Figure 3 shows the feasible set for varying levels of specifications .

We now further specify the tradeoff relations for the simplified OPF problem. Note that , and and . This yields

(35)

where we have maintained the assumption that  for the second inequality. The first equation shows that the ratio of the number of iterations to the normalized noise needs to be sufficiently small, capped by the specified privacy level  and the agent’s maximum capacity. It also shows the effect of the sensitivity on this trade-off. The latter equation shows that with an increasing number of agents  injecting noise, we need more iterations to achieve the same level of suboptimality. Similarly, if the maximum capacity  of the agents increases or the maximum price  decreases, we require more iterations or a lower noise variance to maintain the same level of suboptimality.

V Sharing or Pricing the Privacy Budget?

Theorem III.11 provides a relationship between sub-optimality and the cumulative variance of the Laplace noise inserted by all subsystems. In real scenarios, a system designer or operator may specify a desired level of (sub-)optimality to be achieved after iterations, that is . Rewriting Equation (22), we can compute a bound on the amount of cumulative Laplace noise allowed at run-time

(36)

Hence, once is specified, a cumulative privacy budget is set. Remember that each individual agent may define a different distance metric, or different weights, which may lead to different sensitivities and hence different noise levels required by the various agents. Since the privacy budget is limited, a fair and transparent allocation procedure is required to divide the allowable noise over all agents. Here, we propose two approaches for such an allocation.

The first approach entails a proportional division of the budget. This could be done in two ways. The first way involves splitting the noise variance budget by the number of agents: . In this case, the allowed noise variance determines the maximum level of local differential privacy  that can be maintained, given a set sensitivity  (or vice versa), as outlined in (10), which will differ from agent to agent. The second way involves setting all local differential privacy levels  equal, and, given set sensitivities , splitting the noise among all agents. This is equivalent to equating Equation (10) for all agents , leading to equations for :

(37)

where we assume that the noise variances are constant for all time steps .
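The two proportional-division schemes can be sketched as follows, assuming Laplace noise with variance 2λ² and a privacy level proportional to the ratio of sensitivity to noise scale, so that scaling each agent's noise scale with its sensitivity equalizes the privacy levels as in (37); the budget and sensitivities are placeholders.

```python
import numpy as np

def equal_variance_split(variance_budget, n_agents):
    """First way: split the cumulative noise-variance budget evenly over the agents."""
    return np.full(n_agents, variance_budget / n_agents)

def equal_privacy_split(variance_budget, sensitivities):
    """Second way: make every agent's privacy level equal by choosing Laplace scales
    proportional to the sensitivities (cf. (37)), then rescale so that the total
    variance (2*lam_i**2 per agent) exactly meets the budget."""
    lam = np.asarray(sensitivities, dtype=float)          # lam_i proportional to Delta_i
    lam *= np.sqrt(variance_budget / np.sum(2.0 * lam ** 2))
    return 2.0 * lam ** 2                                 # per-agent variance allocation

print(equal_variance_split(1.2, 3))                       # e.g. [0.4, 0.4, 0.4]
print(equal_privacy_split(1.2, [0.5, 0.2, 0.8]))          # more noise for more sensitive agents
```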

The second approach we anticipate is a pricing scheme, in which the value of privacy is left to a market or negotiation. In the context of d-OPF, it is natural to assume that different DER owners with varying privacy levels will have varying degrees of willingness to pay or incur a deduction on their revenue for preserving privacy of their local parameters. Here, we propose two scenarios to perform allocation via the so-called Kelly mechanism [16]. We assume a one-directional bid done by all agents after seeing a price