Differentially Private Oblivious RAM
Abstract
In this work, we investigate whether statistical privacy can
enhance the performance of ORAM mechanisms while providing
rigorous privacy guarantees. We propose a formal and rigorous framework for developing ORAM protocols with statistical security, viz., a differentially private ORAM (DPORAM).
We present Root ORAM, a family of DPORAMs that provide a tunable, multidimensional
tradeoff between the desired bandwidth overhead, local storage and system security.
We theoretically analyze Root ORAM to quantify both its security and performance.
We experimentally demonstrate the benefits of Root ORAM and find that (1) Root ORAM can substantially reduce local storage overhead for reasonable values of the privacy budget, significantly enhancing performance on memory-limited platforms such as trusted execution environments, and (2) Root ORAM allows tunable tradeoffs between bandwidth, storage, and privacy, reducing bandwidth overheads (at the cost of increased storage/statistical privacy) and enabling significant reductions in ORAM access latencies for cloud environments. We also analyze the privacy guarantees of DPORAMs through the lens of the information-theoretic metrics of Shannon entropy and min-entropy [16]. Finally, Root ORAM is ideally suited for applications whose access sequences differ in only a small number of accesses, and we showcase its utility via the application of Private Information Retrieval.
Sameer Wagh
Keywords: Oblivious RAM, Differential Privacy
Proceedings on Privacy Enhancing Technologies, 2018, issue 4, starting page 64. DOI 10.1515/popets-2018-0032. Received 2018-02-28; revised 2018-06-15; accepted 2018-06-16.
1 Introduction
Oblivious RAM (ORAM), first introduced by Goldreich and Ostrovsky [27, 26], is a cryptographic primitive which allows a client to protect its data access pattern from an untrusted server storing the data. Since its introduction, substantial progress has been made by the research community in developing novel and efficient ORAM schemes [55, 40, 25, 37, 53, 10, 22, 52]. Recent work has also shown the promise of using ORAMs as a critical component in developing protocols for Secure Multi-Party Computation [25].
ORAMs can mitigate side-channel attacks [34, 17] in two typical deployment contexts:
(1) Trusted Execution Environments such as SGX-based enclaves [33], involving communications between the last-level cache (LLC) and DRAM, and
(2) Client-server environments, such as communications between smartphones and cloud servers. However, a key bottleneck in
the practical deployment of ORAM protocols in these contexts is the performance overhead.
For instance, even the most efficient ORAM protocols [40, 55, 37, 54] incur a bandwidth overhead logarithmic in the number of data blocks, as well as a logarithmic local storage/stash overhead.
In this paper, we propose a novel approach for developing practical ORAM protocols. Our key idea is to trade off quantified statistical privacy for performance. We first formalize the notion of a differentially private ORAM that provides statistical privacy guarantees. As the name suggests, we use the differential privacy framework developed by Dwork et al. [20] with its (ε, δ)-differential privacy modification [21]. In the current formulation of an ORAM, the output is computationally indistinguishable for any two input sequences. In a differentially private ORAM, we characterize the effect of a small change in the ORAM input on the change in the probability distribution at the output.
This formalization of a differentially private ORAM subsumes the current notion of ORAM security, viz., it reduces to the currently accepted ORAM security definition in Section 2. Yet such a formalization opens up a large underlying design space currently not considered by the community. We also present Root ORAM, a tunable family of ORAM schemes allowing variable bandwidth overheads, system security and outsourcing ratios while providing the quantified privacy guarantees of differentially private ORAMs. Root ORAM is not a silver bullet for all applications (see Section 7 for enabling requirements). But we hope that this first step in the direction of statistically private ORAMs opens the door for the research community to build more efficient ORAM protocols. In Section 7, we demonstrate an application where DPORAM provides a promising solution to the problem of private information retrieval.
1.1 Our Contributions
Root ORAM introduces a number of paradigm shifts in the design of ORAM protocols while building on the prevailing ideas of contemporary ORAM constructions. Our main contributions are:
Formalizing differentially private ORAMs: We formalize the notion of a differentially private ORAM, which to the extent of our knowledge is the first of its kind. A differentially private ORAM bounds the information leakage from memory access patterns of an ORAM protocol. For details, refer to Section 2.
Tunable protocol family: We propose a tunable family of ORAM protocols called Root ORAM. These schemes can be tailored as per the needs and constraints of the underlying application to achieve a desirable tradeoff between security, bandwidth and local storage. This serves as a key enabler for practical deployment and is discussed in more detail in Section 6.
Security and Utility: We analyze and provide theoretical guarantees for the security offered by Root ORAM schemes in the proposed differentially private ORAM framework. The proofs are general and will be useful for analyzing the security of alternative statistically private ORAM schemes in the future. We also theoretically analyze the utility benefits of using statistical privacy. These results are supported by extensive experiments using a complete implementation of Root ORAM. The central results of this paper are summarized below (for details, refer to Section 5).

We prove that the family of Root ORAM protocols described in Section 4 satisfies the differential privacy guarantees, and we give the relation between (ε, δ) and the model parameters.

We concretely show the benefits of using differential privacy, i.e., we demonstrate how a larger value of ε helps reduce the protocol overheads, thereby showing an explicit security-performance tradeoff.
Practical Impact and Applications: We experimentally investigate the impact of DPORAM in the following contexts:

To reduce the local storage requirements of running ORAM protocols in trusted hardware. Trusted execution environments such as the first-generation Intel Skylake SGX processors have stringent memory constraints, with available memory for implementing programs (including the ORAM overhead) as low as 90MB [48]. For reasonable values of the privacy budget ε, Root ORAM substantially reduces local storage, thereby enhancing compatibility with Intel SGX (Section 6.3).

To reduce the bandwidth in embedded computing and IoT applications, where devices have limited available bandwidth. Depending on the system parameters chosen, DPORAM can reduce the bandwidth overhead at the cost of statistical security and higher local storage.
Root ORAM enables novel design points in developing ORAM protocols by leveraging the benefits of statistical privacy. It also supports design points with order-of-magnitude performance improvements over state-of-the-art protocols (at the cost of a quantified loss in security). Finally, Root ORAM does not assume any server-side computation and requires practical amounts of client-side storage (depending on the parameters chosen). It is also extremely simple to implement on both the client and the server side.
2 Differentially Private ORAM
The notion of statistical privacy has been around in security/privacy applications [9, 39, 45], yet it has never been previously explored in the context of ORAMs. We believe formulating such a framework would greatly expand the ability of the research community to develop novel ORAM protocols with low bandwidth and low client overhead, serving as an enabler for real-world deployment of this technology.
Formally, an ORAM is defined as a protocol (possibly randomized) which takes an input access sequence as given below,
(1)  a = ((op_M, a_M, d_M), ..., (op_1, a_1, d_1))
and outputs a resulting output sequence denoted by ORAM(a). Here, M is the length of the access sequence, op_i denotes whether the i-th operation is a read or a write, a_i denotes the address for that access, and d_i denotes the data (if op_i is a write). Denoting by |a| the length of the access sequence a, the currently accepted security definition for ORAM security can be summarized as follows [55]:
Definition 1.
(Currently accepted ORAM security): Let a, as given in Eq. 1, denote an input access sequence. Let ORAM(a) be the resulting randomized data request sequence of an ORAM algorithm. The ORAM guarantees that for any two sequences a and a', the resulting access patterns ORAM(a) and ORAM(a') are computationally indistinguishable if |a| = |a'|, and also that for any sequence a the data returned to the client by the ORAM is consistent with a (i.e., the ORAM behaves like a valid RAM) with high probability.
This framework for ORAMs is constructed with complete security at its core [55, 40, 25, 37], and there is no natural way to extend it to incorporate a statistical privacy notion. Hence, we introduce and formalize a statistically private ORAM, viz., a differentially private ORAM (DPORAM).
2.1 Formalizing DPORAM
The intuition behind a DPORAM is that given any two input sequences that differ in a single access, the distributions of their output sequences should be “close”. In other words, similar access sequences lead to similar distributions. We formally define it as follows:
Definition 2.
Differentially Private ORAM: Let a, as defined in Eq. 1, denote the input to an ORAM. Let ORAM(a) be the resulting randomized data request sequence of an ORAM algorithm. We say that an ORAM algorithm is (ε, δ)-differentially private if for all input access sequences a and a', which differ in at most one access, the following condition is satisfied by the ORAM,
(2)  Pr[ORAM(a) ∈ S] ≤ e^ε · Pr[ORAM(a') ∈ S] + δ
where e is the base of the natural logarithm and S is any set of output sequences of the ORAM.
First we note that we lose no generality by using this definition: it can capture the existing computational ORAM security paradigm using ε = 0 and negligible δ. The formalism also does not make any assumption about the size of the output sequences in S. If the input to the ORAM is changed by a single access tuple (op_i, a_i, d_i), the output distribution does not change significantly: given two neighboring sequences a and a', the two output distributions they generate are close to each other in the differential privacy sense.
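As an illustration (ours, not part of the protocol), the (ε, δ) condition of Definition 2 can be checked numerically for two finite output distributions. For a fixed ε, the worst-case output set S is exactly the set of outcomes where one distribution exceeds e^ε times the other, so it suffices to sum the excess mass over that set:

```python
import math

def satisfies_dp(p, q, eps, delta):
    """Check whether finite output distributions p and q (dicts mapping
    outcome -> probability) satisfy the (eps, delta) inequality of
    Definition 2 in both directions. For a fixed eps, the worst-case
    output set S is the set of outcomes where one distribution exceeds
    e^eps times the other, so summing the excess mass over that set
    suffices."""
    outcomes = set(p) | set(q)
    for a, b in ((p, q), (q, p)):
        excess = sum(max(a.get(o, 0.0) - math.exp(eps) * b.get(o, 0.0), 0.0)
                     for o in outcomes)
        if excess > delta + 1e-12:  # tolerance for floating-point error
            return False
    return True
```

For example, the distributions {x: 0.6, y: 0.4} and {x: 0.4, y: 0.6} satisfy (ln 1.5, 0)-closeness but not (ln 1.4, 0)-closeness; adding a small δ absorbs the residual gap.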
Differential privacy provides two important composability properties [20], viz., “composition” and “group privacy”. The former (Theorem 1) refers to the degradation of privacy guarantees over multiple invocations of a differentially private mechanism and the latter (Theorem 2) refers to the privacy guarantees when neighboring databases differ in multiple entries. Together, they give privacy bounds for arbitrary sequences and provide rigorous privacy guarantees over multiple invocations or when access sequences differ in multiple accesses.
Theorem 1 (Composition for DPORAM).
Invoking an (ε, δ)-differentially private ORAM mechanism c times guarantees (cε, cδ)-differential privacy.
Theorem 2 (Group privacy for DPORAM).
An (ε, δ)-differentially private ORAM is (ε', δ')-differentially private for access sequences differing in c accesses, where ε' = cε and δ' = c·e^((c−1)ε)·δ. In other words, given two access sequences a and a' that differ in c accesses,
(3)  Pr[ORAM(a) ∈ S] ≤ e^(cε) · Pr[ORAM(a') ∈ S] + c·e^((c−1)ε)·δ
Theorem 1 holds even for adaptive queries as long as the randomness used in each mechanism is independent of the others. Together, Theorems 1 and 2 allow us to extend differential privacy guarantees to arbitrary access sequences from the guarantees for a single invocation on access sequences that differ by a single access. It is important to note that since privacy guarantees degrade with both the number of invocations and the worst-case Hamming distance between access sequences, DPORAMs are best suited for applications where the input sequences differ in a small number of accesses. We present a case study of such an application - Private Information Retrieval (PIR) - in Section 7.
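As a hedged sketch (the function names are ours), the parameter arithmetic of Theorems 1 and 2 can be written out directly, assuming the standard basic-composition and group-privacy forms from the differential privacy literature:

```python
import math

def compose(eps, delta, c):
    """Basic composition (Theorem 1): c independent invocations of an
    (eps, delta)-DP mechanism give (c*eps, c*delta)-DP."""
    return c * eps, c * delta

def group_privacy(eps, delta, c):
    """Group privacy (Theorem 2): for access sequences differing in c
    accesses, the guarantee degrades to (c*eps, c*e^((c-1)*eps)*delta)."""
    return c * eps, c * math.exp((c - 1) * eps) * delta
```

Note that for c = 1 both functions recover the original (ε, δ), and that the δ term in group privacy grows exponentially in c, which is why DPORAMs favor applications with small worst-case Hamming distance between inputs.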
PIR is a cryptographic primitive for privately accessing data from a public database. ORAM schemes can be used in conjunction with trusted hardware to perform PIR queries [59, 7]. We demonstrate the utility of statistical ORAMs by showing how DPORAM can be used in conjunction with trusted hardware to perform efficient DPPIR queries [56]. This application is well suited to showcase the benefits of using statistical ORAMs as each PIR query corresponds to an access sequence of exactly one element.
3 Root ORAM overview
Symbol  Description
N       Number of real data blocks outsourced
k       Model parameter (to tune bandwidth)
p       Model parameter (to tune security)
Z       Number of blocks in each bucket
In this section, we describe our key design goals and give an overview of the Root ORAM protocol.
3.1 Design Goals
Statistically private ORAMs: We target protocols that offer performance benefits at the cost of statistical privacy which is quantified using the metric of differential privacy.
Tunable ORAM schemes: Conventional ORAM schemes operate at specific overheads with full privacy but cannot operate at lower overheads. We aim to provide an ORAM architecture that can be tuned to application requirements and can achieve privacy proportional to system resources such as the bandwidth and local storage.
Rigorous Analysis and Efficiency: We target systems amenable to rigorous security analysis. At the same time, we aim for efficient systems that can be easily implemented on both client and server side.
Finally, the design should use low storage at both the client and the server side. Server-side computation is not always practical and hence we do not assume any such capability. Next, we describe the key ideas of the Root ORAM protocol. Over the years, several different definitions have been used to quantify ORAM bandwidth overhead.
We will use the original and straightforward definition of bandwidth as the average number of blocks transferred for one access [37].
Definition 3.
The bandwidth cost of a storage scheme is given by the average number of blocks transferred in order to read or write a single block.
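As an illustrative sketch (ours, not from the paper's implementation), Definition 3 can be computed from a trace of per-access transfer counts, and for a tree-based scheme that reads one path per access and writes it back, the cost is simply twice the path size:

```python
def bandwidth_cost(blocks_per_access):
    """Definition 3: average number of blocks transferred per access,
    given a trace listing total blocks moved for each logical access."""
    return sum(blocks_per_access) / len(blocks_per_access)

def tree_oram_bandwidth(levels, bucket_size):
    """Blocks moved by a tree-based scheme that reads one path of
    `levels` buckets and writes it back: 2 * levels * bucket_size.
    (An illustrative model; real protocols may differ in constants.)"""
    return 2 * levels * bucket_size
```

For instance, a 21-level path with buckets of 4 blocks moves 168 blocks per access under this model.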
3.2 Approach Overview
The Root ORAM protocol can broadly be split into four components: storage, access, new mapping, and eviction. These are briefly described below. The notation for Root ORAM is given in Table 1. Tree-based ORAMs make a relatively easy proof-of-concept to demonstrate the benefits of DPORAMs, and hence we construct Root ORAM as a tree-based ORAM.
Storage: The data to be outsourced is assumed to be split into units called blocks. Blocks are stored at the server side in 2^k binary trees, each of depth log N − k. For simplicity of the proofs, we call each of these trees a “subtree”, as they can be thought of as subtrees of a larger virtual tree (cf. Fig. 1). Each node is a bucket that can hold up to Z data blocks (Z is typically a small constant such as 4 or 5). This is represented in Fig. 1. A stash at the client is used to store a small amount of data. Each data block is mapped to a leaf, and this mapping is stored recursively in smaller ORAMs.
Access: The main invariant is that any data block is along the path from the associated leaf to the corresponding subtree root or is in the stash (as shown in Fig. 1). Hence, to access a data block, the client looks up the mapping to find the subtree and the associated leaf that the data block is mapped to and then traverses the path from that leaf to the subtree root.
New Mapping: The data block is then read or written with the new data and then mapped to a new leaf. It is important to note that this new mapping is not uniform among the leaves. The flexibility and the choice of this nonuniform distribution is given in Section 4.
The intuition behind using a nonuniform distribution is that it provides performance benefits such as improving stash storage (refer to Theorem 4). At the same time, we theoretically quantify the security impact of nonuniform distributions using the framework of differential privacy (refer to Theorem 3).
Eviction: Finally, new randomized encryptions are generated and all the data (including some blocks from the stash) are written to the accessed path, with blocks being pushed as far down the path as possible (towards the leaf). Root ORAM also uses the recursion technique developed previously [22, 54, 55] to store the mapping in smaller ORAMs.
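To make the path structure concrete, here is a small sketch (heap-style array indexing is our illustrative choice, not mandated by the protocol) that enumerates the buckets a block may occupy under the main invariant, from its leaf up to the root of its tree:

```python
def path_indices(leaf, depth):
    """Return heap-style bucket indices from a leaf up to the root of a
    binary tree of the given depth. The root has index 0, node i has
    children 2i+1 and 2i+2, and leaves are numbered 0 .. 2^depth - 1.
    Under the main invariant, a block mapped to `leaf` must reside in
    one of these depth+1 buckets or in the client stash."""
    node = (1 << depth) - 1 + leaf   # array index of the leaf bucket
    path = []
    while True:
        path.append(node)
        if node == 0:
            break
        node = (node - 1) // 2       # move to the parent bucket
    return path
```

For a tree of depth 2, leaf 0 maps to buckets [3, 1, 0]; the path always contains depth + 1 buckets.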
3.3 Comparison with Path ORAM [55]
Root ORAM is a generalization of the Path ORAM protocol [55], yet there are critical differences between the two protocols. In this subsection, we highlight some of these differences.
Differentially Private ORAM: Root ORAM introduces a new metric to quantify ORAM security, which extends the current formalism to include the notion of a statistically private ORAM. We bound the privacy offered by the Root ORAM (as well as Path ORAM) using this metric.
Tunable Statistical ORAM: Path ORAM incurs a fixed bandwidth cost that cannot be tuned. Thus, applications that cannot accommodate high bandwidth costs are unable to achieve access pattern security. Root ORAM on the other hand is tunable and applications with limited bandwidth can achieve security proportional to their resources.
Multidimensional design space: We demonstrate the feasibility of new design points by showing a multidimensional tradeoff between bandwidth, security and client storage. We support a range of operating conditions by tuning the protocol parameters and demonstrate the tradeoff between resource overheads and statistical privacy both theoretically and experimentally.
Note that Path ORAM is an instantiation of Root ORAM for the parameter choice in which there is a single tree and remapping is uniform over all leaves. For more details, see the remark at the end of Section 6.
4 Root ORAM details
In this section, we provide the details of Root ORAM. Basic notation is given in Table 1. B denotes the size of each block in bits, P(x) denotes the path from leaf x to the subtree root, P(x, ℓ) the node at level ℓ in P(x), and position[a] = x indicates that data block a is currently mapped to leaf x.
4.1 Server Storage
Server Storage: The server stores data in the form of 2^k binary trees as shown in Fig. 1. Each node of a tree is a bucket containing multiple data blocks, real or dummy (a dummy block is a randomized encryption of 0). For simplicity of analysis, we consider the roots of these subtrees to be at level 0, with subsequent levels 1, 2, and so on, where the last level corresponds to the leaves of each subtree.
Bucket structure: Each node is a bucket consisting of Z blocks; each block can be either real or dummy (an encryption of 0).
Path structure: The leaves are numbered in the set {0, 1, …, N − 1}. P(x) denotes the path (the set of buckets along the way) from leaf x to its subtree root, and P(x, ℓ) denotes the bucket in P(x) at level ℓ. It is important to emphasize here that a path in Root ORAM is (log N − k + 1)·Z blocks long, compared to (log N + 1)·Z blocks in Path ORAM.
Dummy blocks and randomized encryption: We use the standard padding technique (fill buckets with dummy blocks when needed) along with randomized encryption to ensure indistinguishability of real and dummy blocks.
4.2 Invariants of the scheme
Main Invariant: The main invariant in Root ORAM is that each real data block is mapped to a leaf and, at any point in the execution of the ORAM, the real block will be somewhere in a bucket along the path from the root of its subtree to that leaf, or in the local stash. This path consists of log N − k + 1 buckets. It is also important to note that the invariant does not say that the mapping of each data block is uniform over the set of leaves, as shall be clarified by the second invariant.
Secondary Invariant: We maintain the secondary invariant that after each access to a data block, its mapping changes according to a leaf-dependent non-uniform distribution π (i.e., its new mapping is randomly sampled from this distribution π). There is tremendous flexibility in choosing this distribution; for our purposes, we consider a distribution in which a data block is more likely to be remapped to another leaf in the same subtree than to a leaf of another subtree. This distribution is formally given by Eq. 4 and shown graphically in Fig. 2.
(4)  π(x | y) = q1 · δ_{R(x), R(y)} + q2 · (1 − δ_{R(x), R(y)})
where π(x | y) is the probability that the new mapping is leaf x given that the previous mapping was leaf y, R(x) denotes the root of the subtree of leaf x, and δ_{i,j} is the Kronecker delta defined as δ_{i,j} = 1 if i = j and 0 otherwise,
and q1 and q2 are functions of the model parameters p and k and are given by:
(5)  q1 = p / (N / 2^k),   q2 = (1 − p) / (N − N / 2^k)
so that a block is remapped within its current subtree with total probability p (uniform over that subtree's leaves) and to the remaining leaves with total probability 1 − p.
The reason behind using a non-uniform distribution is that it gives performance benefits such as lower stash usage, captured theoretically in Theorem 4. This particular choice of π happens to be ideal for Root ORAM, as can be seen from the analysis in Section 5. Theorem 3 gives the relation between the model parameter p and the desired level of privacy (given by ε). In practice, the acceptable privacy budget would decide the parameter p used in the model. We refer the reader to Section 6.4 for details on choosing the parameters.
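A minimal sketch of such a non-uniform remapping is given below (the parameter names are ours; `q_same` stands in for the total same-subtree probability mass of the distribution in Eq. 4, and leaves are numbered globally so that integer division recovers the subtree index):

```python
import random

def update_mapping(leaf, num_subtrees, leaves_per_subtree, q_same):
    """Sample a new leaf for an accessed block: with total probability
    q_same the block stays inside its current subtree (uniform over
    that subtree's leaves); otherwise it moves to a uniformly random
    leaf of one of the other subtrees. Illustrative stand-in for the
    leaf-dependent distribution of the secondary invariant."""
    subtree = leaf // leaves_per_subtree
    if random.random() < q_same:
        # Remap within the same subtree.
        offset = random.randrange(leaves_per_subtree)
        return subtree * leaves_per_subtree + offset
    # Remap to a uniformly random leaf of a different subtree.
    other = random.randrange(num_subtrees - 1)
    if other >= subtree:
        other += 1
    return other * leaves_per_subtree + random.randrange(leaves_per_subtree)
```

Setting q_same equal to the fraction of leaves in one subtree recovers the uniform (Path ORAM-like) remapping; larger values bias blocks toward their current subtree.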
4.3 Client Storage
Position Map: The client side stores a position map which maps real data blocks to leaves of the server tree. This position map is stored recursively using smaller ORAMs. The recursion technique [22, 54, 55] aims to recursively store the ORAM position maps into subsequently smaller ORAMs. The final ORAM position map is stored locally.
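The depth of this recursion can be estimated with a short sketch (the cutoff of a single locally stored block is our simplifying assumption):

```python
def recursion_levels(num_blocks, entries_per_block):
    """Number of recursively stored position maps needed before the
    map fits in a single locally stored block. Each level shrinks the
    map by a factor of entries_per_block (position-map entries packed
    per block)."""
    levels = 0
    size = num_blocks
    while size > 1:
        size = -(-size // entries_per_block)  # ceiling division
        levels += 1
    return levels
```

For example, with 2^20 blocks and 32 position-map entries per block, four recursion levels suffice under this model.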
Stash: The client maintains a local stash: a small amount of client-side storage used to hold overflowing data blocks.
4.4 Protocol Details
The main functions of the protocol are Access and updateMapping. In the former, we read the blocks along a path of a subtree, try to write blocks back to the same path (with new encryptions) and, if there is insufficient storage, store the excess data blocks locally in the stash. The latter function samples from a distribution under which a data block is more likely to be remapped to another leaf in the same subtree than to a leaf of another subtree. We use subtree(x) to denote the subtree that leaf x belongs to, ∪ to denote union and − to denote removal from the stash.
Access(op, a, data*):
1:  x ← position[a]
2:  ReadPath(P(x))
3:  position[a] ← updateMapping(x)
4:  Stash ← Stash ∪ {blocks read from P(x)}
5:  data ← Read(a) from Stash
6:  if op = write then
7:    Stash ← (Stash − {(a, data)}) ∪ {(a, data*)}
8:  end if
9:  flush(x)
10: return data
The updateMapping function implements the new mapping distribution described in Section 4.2. The values of the probabilities q1 and q2 are as shown in Fig. 2 (and Eq. 5).
updateMapping(x):
1: s ← subtree(x)
2: r ← UniformReal(0, 1)
3: if r ≤ p then
4:   return Uniform(leaves of s)
5: else
6:   return Uniform(leaves not in s)
7: end if
The flush() function is implemented by writing blocks from the stash into the subtree, along the path from the associated leaf to the subtree root while writing them as low in the subtree as possible. The pseudocode for flush() is given below.
flush(x):
1: L ← log N − k
2: for ℓ = L down to 0 do
3:   S' ← {(a', d') ∈ Stash : P(x, ℓ) = P(position[a'], ℓ)}
4:   S' ← select min(|S'|, Z) blocks from S'
5:   Stash ← Stash − S'
6:   writeToBucket(P(x, ℓ), S')
7: end for
5 Theoretical evaluation
5.1 Notation
We begin by developing some notation to present the central results of this paper. We fix N to be the total number of outsourced blocks. We denote by RootORAM(N, Z, p, k) the Root ORAM protocol with bucket size Z and model parameters p and k. We define the sequence of load/store operations by s, where a = (a_M, …, a_1) are the logical block addresses loaded/stored, x is the sequence of leaf labels seen by the server and y is the sequence of new leaf labels.
Let st(s) denote the random variable which equals the number of real data blocks in the stash after a sequence s of load/store operations.
5.2 Security Results
Theorem 3 (Differentially Private Protocols).
The Root ORAM protocol with parameters (N, Z, p, k) is (ε, δ)-differentially private for the following choice of ε and δ:
(6)  
where δ_{i,j} is the Kronecker delta, M is the size of the access sequence and R is the total stash size.
Proof of Theorem 3: Using a conservative security analysis, we prove the bounds on Root ORAM protocols given in Theorem 3. The proof is split into two components, viz., the ε bound and the δ bound. For the ε bound, we first set up the differential privacy framework, then a model to evaluate the probability of a given input sequence leading to a specific output sequence, and finally compute the maximum change that could result from a change in the input. For the δ bound, we first demonstrate the significance and need for δ in the security bound and then proceed to conservatively prove the bound.
The ε bound:
We follow the notation described in Section 5.1. We consider two input sequences a and a' that differ in only one access, say the j-th, for some j ∈ {1, …, M}. We know that the server sees a sequence x given by
x = (x_M, …, x_1)
where x_i is the position (leaf) of address a_i for the i-th load/store operation, along with the associated path to the root of its subtree. Now, we need to compute the ratio of the probabilities that a and a' lead to the same observed sequence at the server. In other words, we compute
Pr[ORAM(a) = x] / Pr[ORAM(a') = x]
for any pair of input sequences a, a' and observed leaf sequence x, of a given fixed size M.
We evaluate the above probability by invoking the secondary invariant, viz., after each access the mapping of that data block changes randomly according to the fixed distribution π given in Eq. 4. Under this invariant, the probability that a sequence of load/store operations a leads to a particular observed sequence x can be computed according to the rules below. Since the position map of each location changes independently and randomly according to π, we can compute the probability that the input sequence a leads to the output sequence x by simply multiplying the probabilities of the individual accesses. This is shown graphically in Table 2.

If the block is accessed for the first time, its location is uniformly random over the leaves and hence the probability is 1/N.

If the block was accessed previously at leaf y, then the probability is q1 or q2 depending on whether y and the newly observed leaf belong to the same subtree.

Finally, we multiply all the above probabilities for accesses 1 to M.
If at any point during the above enumeration the stash size exceeds R, we set the probability to 0. Refer to Section 5.2.2 for details. The probability that the stash size exceeds R is bounded by Theorem 5.
Table 2. Example computation of the probability that a real access sequence leads to an observed leaf sequence: each access contributes a factor of 1/N (first access of a block), q1 (remapped within the same subtree) or q2 (remapped to a different subtree), and the factors are multiplied together.
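The multiplication rule described above can be sketched as follows (illustrative code with names of our choosing; the stash-overflow truncation to probability 0 is omitted):

```python
def sequence_probability(addresses, leaves, n_leaves, q1, q2, same_subtree):
    """Pr[input sequence -> observed leaf sequence] under the secondary
    invariant: multiply a factor of 1/n_leaves for a block's first
    access, q1 if the newly observed leaf is in the same subtree as the
    block's previous access, and q2 otherwise. `same_subtree` is a
    predicate on two leaves; stash overflow is ignored in this sketch."""
    prob = 1.0
    last = {}  # block address -> leaf observed at its previous access
    for addr, leaf in zip(addresses, leaves):
        if addr not in last:
            prob *= 1.0 / n_leaves
        elif same_subtree(last[addr], leaf):
            prob *= q1
        else:
            prob *= q2
        last[addr] = leaf
    return prob
```

Two neighboring input sequences then differ in only a handful of these factors, which is exactly what the ratio computation below exploits.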
Next, we compute the maximum change in probabilities over two neighboring access sequences a and a' that differ in the j-th access. Let the logical addresses accessed in the two sequences be α and β respectively, i.e., the j-th access in a is to α and in a' is to β. Since a and a' agree in all other locations, let the previous location of access of block α be leaf l1 and the next location be l2. Similarly, let l3 denote the previous location of access of β and l4 the next location. If any of these four do not exist, i.e., the symbol was never accessed before or never accessed afterwards, we define that leaf to be ⊥ for clarity of the equations (if data element α is never accessed after the j-th access, then l2 = ⊥). Let l0 be the leaf observed at the j-th access. Note that all of these are specific leaves from x and hence are the same for a and a'.
It is easy to see that the probabilities can differ in at most 3 places, viz., the j-th access and the subsequent accesses of the two differing blocks (the probabilities for the preceding accesses depend only on earlier accesses and hence do not change). To make the equations crisp, we define the following extension of the Kronecker delta,
where R(x) is the root of the subtree associated with leaf x. This modification of the Kronecker delta is for the simplicity of the equations. It ensures that if a symbol is accessed for the first time, then its probability evaluates to 1/N, as it should.
Now if Pr[ORAM(a) = x] ≠ 0 and Pr[ORAM(a') = x] ≠ 0, i.e., the ratio is well-defined, we can calculate the ratio of the probabilities as:
After observing that q1 > q2, we can see that the maximum value of the ratio of probabilities occurs when one of these pairs of leaves belongs to the same subtree and the other pair belongs to different subtrees. In this case, the ratio is given by,
Evaluating this in terms of our parameters q1 and q2 given by Eq. 5 and plugging this into the differential privacy equation:
It is important to note that the above equation holds for all observed access sequences x. Hence, we can see that Root ORAM guarantees the stated ε. This completes the ε bound.
The δ bound
In this subsection, we show the need for δ in quantifying the security. We demonstrate this necessity using generic tree-based ORAM constructions. We assume that the total stash size is R. For demonstration purposes, we construct a minimal working example. Let:
(7)  a = ((read, e1, ⊥), (read, e1, ⊥), …, (read, e1, ⊥))
     a' = ((read, e1, ⊥), (read, e2, ⊥), …, (read, eM, ⊥))
where read denotes the read operation and ⊥ denotes data which is not important for the demonstration. In words, one access sequence consists of M accesses to the same element e1 and the second access sequence consists of M accesses to the distinct elements e1, e2, …, eM.
It can be seen that the sequence x = (x1, x1, …, x1), i.e., the same leaf observed M times, is a possible output sequence of ORAM(a). It is not hard to see that the same sequence can never occur as ORAM(a'). The reason is simply that if more data blocks are mapped to the same leaf than can fit along its path and in the stash, the tree ORAM invariant is broken. Hence the M accesses to the same location cannot all be to different elements.
To demonstrate this further, we consider a situation where a program is using a tree-based ORAM protocol to hide its access pattern. We also assume that the program has the following traits,
If a is the input access pattern and we observe sufficiently many accesses made to the same location in x, we can immediately infer that Secret = 1. It is important to note that the probability of an observed sequence can suddenly jump from a non-zero value to 0 with the change of a single accessed block. We quantify this by the δ in the differential privacy framework for ORAMs.
We compute the maximum probability of a sequence x such that some neighboring sequence (i.e., differing in one access) has zero probability. In particular, we consider the two sequences of Eq. 7.
If Pr[ORAM(a) = x] ≠ 0 and Pr[ORAM(a') = x] ≠ 0, then we have already shown the ε bound and hence δ = 0 suffices. So it remains to find the maximum δ when one of these probabilities is 0. Let us assume that the sequence x has zero probability when the input sequence is a' (i.e., Pr[ORAM(a') = x] = 0). In this case, δ is simply the maximum value of Pr[ORAM(a) = x]. Then, a conservative upper bound on this probability can be found by noting the following: at each location, the associated factor is either 1/N, q1 or q2. Since q1 is the largest of these, we get an upper bound as
(8)  Pr[ORAM(a) = x] ≤ q1^M
Finally, to complete the proof, we note that the above bound should hold for each possible output access sequence x. To extend this to all possible subsets S, suppose the support of S contains two such sequences x1 and x2. Hence,
This shows that Root ORAM is differentially private for the stated δ. This completes the δ bound.
5.3 Performance Results
Theorem 4 (SecurityPerformance Tradeoff).
Let RootORAM(N, Z, p1, k) and RootORAM(N, Z, p2, k) be two Root ORAM protocols that differ only in their security parameter. Then
(9) 
where the expectation is taken over the randomness of the protocol.
Theorem 5 (Stash Bounds).
The probability that the stash size of the DPORAM protocol with parameters (N, Z, p, k) exceeds R is bounded by
Theorem 6 (Bandwidth).
The bandwidth of the Root ORAM protocol with parameters (N, Z, p, k) is 2·Z·(log N − k + 1) blocks per real access (one path read and one path write).
6 Systems evaluation
In the previous sections, we have established the design space made possible by formalizing DPORAM, a tunable protocol construction, and theoretical security and performance analysis. In this section, we demonstrate the multidimensional tradeoff between bandwidth, stash storage, and security using a complete systems implementation of Root ORAM.
6.1 Details of the implementation
Root ORAM is built entirely in C++. All experiments were performed on a 1.4 GHz Intel processor with 4GB of RAM. For the Amazon EC2 experiments, remote servers were set up and latency measurements were performed over a TCP connection for reliable data downloads. For experiments measuring access latency for applications with bandwidth constraints, we cap both the upload and download bandwidth to a fixed value, using a range of caps (in KB/s). We used the trickle application to constrain the bandwidth at the client machines to the desired values. Finally, we use the worst-case linear access pattern for the simulations.
We study the effect of system parameters on the performance of Root ORAM. In particular, we study the interdependence between the local stash required, bandwidth, and security (given by ε). We also study the access latency of Root ORAM protocols in two different settings: (1) we measure the access latency over remote Amazon EC2 servers while varying the protocol bandwidth parameter k (and consequently the bandwidth itself); (2) we limit the bandwidth at the client end to a specific value (to emulate constrained-bandwidth environments) and measure the access latency of Root ORAM protocols. In light of the recent paper by Bindschaedler et al. [11], our experimental evaluation gives due importance to the constants involved in the overheads of the system.
6.2 Evaluation results
Bandwidth, Security and Stash Tradeoffs: Fig. 2(a) shows how statistical privacy reduces stash sizes. Note that increasing values of lead to lower stash values, the improvement of which is captured by the axis of Fig. 2(a). While Theorem 4 shows that relaxing the security improves performance, Fig. 2(a) empirically shows these performance improvements for concrete values of the security parameter . For instance, Root ORAM provides a % improvement in stash usage for , a % improvement for , and about % improvement for (where is the access sequence length). As shown in Appendix B, the loss in Shannon entropy of the output sequence is small for moderate values of . For instance, for , an results in a loss in entropy of roughly bits, and for the loss is less than bits (compared to bits without any security). Furthermore, as seen in Section 7.3, in the context of the Private Information Retrieval application the use of anonymous communication channels can further reduce the effective privacy values by multiple orders of magnitude. Similar parameter values for differentially private systems are being increasingly adopted by the research community [56] as well as in deployed systems such as RAPPOR [23] (), Apple Diagnostics [2, 1] ( for Health information types, for Lookup Hints and Safari crash domain detection, and for Autoplay intent detection), and US census data releases [3, 4, 5] ( for OnTheMap LEHD Origin-Destination Employment Statistics (LODES)). Research works that extensively explore the problem of setting privacy budgets report adopted privacy budget values ranging from to (refer to Table 1 of Hsu et al. [32] or Fig. of Lee et al. [38]).
Fig. 2(b) depicts the tradeoff between the stash improvement (relative to ), security, and bandwidth (parameter ) for the Root ORAM protocol. We can see significant performance gains in the high bandwidth regime. Note that the stash size of the Root ORAM protocol (as in Theorem 5) can be split into two components, viz., an exponential component (bounded by ) and a randomness component (bounded by ). The former dominates the latter for small values of bandwidth, i.e., large values of , and hence the stash-security tradeoff is less significant in those regimes, which agrees with the results in Fig. 2(b). Fig. 2(a) and Fig. 2(b) thus capture the effect of varying the security () on the performance through the reduction in stash size (compared to a baseline of ) and show that statistical privacy can be used to improve the performance of ORAM schemes.
Absolute Stash Values: Fig. 4 shows the absolute values of the stash size (in Bytes) as a function of the bandwidth. The stash grows roughly exponentially as the bandwidth is reduced, which serves as an experimental validation of Theorem 5. We can see that the required stash values are low enough to be practical in most systems today. For instance, we can achieve an outsourcing ratio at a bandwidth of about 20KB (for 1GB of outsourced data and local storage of 100MB). Similarly, we can achieve an outsourcing ratio with a bandwidth of 60KB and an outsourcing ratio of with a bandwidth of 90KB.
Real-world implementation: We compute the latency overhead of memory accesses as a function of the bandwidth parameter as well as the constrained application bandwidth . Fig. 4(a) depicts the access latency as a function of the bandwidth (varying ). We can see how Root ORAM provides a spectrum of acceptable bandwidth-latency choices compared to a single design point for Path ORAM. In Fig. 4(b), we compare the access latency when the application bandwidth is limited (for a fixed value of ). We find the latency as a function of the constrained application bandwidth (constrained at the client side) for a few different values of . The bandwidth for a given value of can be computed using Theorem 6 as blocks. We find that for limited application bandwidth, the system parameters significantly affect the access latency. Hence applications with constrained bandwidths can greatly benefit from using Root ORAM. For instance, in a scenario where the application bandwidth is limited to 10KB/s (KB/s), we can improve the access latency by roughly using Root ORAM.
6.3 Practical Impact
Next, we consider the significance of the local storage and bandwidth improvements offered by Root ORAM in the typical deployment contexts of (1) trusted execution environments and (2) client-server/cloud settings.
Local storage: Trusted Execution Environments, such as enclaves created using Intel SGX processors, have severe memory constraints, with total local memory of only 94MB [48], which is a significant bottleneck for ORAM deployment.
Even in the context of smartphone applications, our results indicate that for 1TB of outsourced data, Root ORAM can bring down the local storage overhead (extending Fig. 4 results for 1MB block sizes and ) from 500MB to less than 250MB (as low as 100MB for higher ).
Bandwidth: Root ORAM allows tunable tradeoffs between bandwidth, storage, and privacy. In many embedded computing and IoT applications, bandwidth is a significant bottleneck for ORAM deployment. Root ORAM can reduce bandwidth overhead by up to  (at the cost of increased local storage and relaxed statistical privacy), providing dramatic gains in network access latency as shown in Fig. 4(b).
6.4 Choosing parameters
To use Root ORAM as a system, we require a lower bound on the number of accesses (to bound the worst-case leakage). If this is unknown, is set to (one more than the total stash size). As is typical of differentially private systems, a privacy budget is set, i.e., an upper bound is set for the use of the system. For the particular application, we take into account the worst-case Hamming distance between access patterns. If this distance is too large, we recommend using .
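The worst-case Hamming distance between two logical access sequences is simply the number of positions at which they differ; a minimal sketch for concreteness (the helper name is illustrative):

```python
def hamming_distance(seq_a, seq_b):
    """Number of positions at which two equal-length access sequences differ."""
    assert len(seq_a) == len(seq_b)
    return sum(1 for a, b in zip(seq_a, seq_b) if a != b)

# Two logical access sequences over (illustrative) block addresses.
print(hamming_distance([3, 7, 7, 1, 4], [3, 2, 7, 1, 9]))  # -> 2
```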
Once the privacy budget is set, using the results of Section 5 and Section 6.2, Root ORAM parameters can be chosen using acceptable values of bandwidth, stash, and security. Two of the three parameters, viz., the security parameter , the bandwidth parameter (), and the stash size (), can be set independently; the third is determined by the choice of the other two, and the optimal tradeoff is determined by the specific application requirements. Finally, depending on the application under consideration and the effect of different block sizes on the bandwidth and storage overhead, an optimal block size can be chosen. For instance, in the application of PIR-Tor [46], Tor clients query about 4MB of data from Tor directory servers to retrieve information about Tor relays (refer to Section 7 for the connection between ORAMs and PIR). This can be accomplished by using an ORAM with a 4MB block size or a smaller-block-size ORAM with multiple invocations. The performance overheads of such choices in system design are quantified in Theorems 3 and 5, and the resulting security is quantified via the composition theorems from Section 2.
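As a back-of-the-envelope illustration of the block-size choice, the sketch below assumes a hypothetical Path-ORAM-like cost of 2 log2(N) blocks per real access (the exact Root ORAM figure is given by Theorem 6) and compares one large-block invocation against many small-block invocations:

```python
import math

def total_transfer_bytes(db_bytes, block_bytes, query_bytes):
    """Bytes moved to privately fetch `query_bytes` of data, one block per
    ORAM access, under an assumed cost of 2*log2(N) blocks per real access."""
    n_blocks = db_bytes // block_bytes           # N: number of blocks stored
    blocks_per_access = 2 * math.log2(n_blocks)  # read and write one path
    invocations = math.ceil(query_bytes / block_bytes)
    return int(invocations * blocks_per_access * block_bytes)

db, query = 2**30, 4 * 2**20  # 1 GB database, 4 MB query (as in PIR-Tor)
print(total_transfer_bytes(db, 4 * 2**20, query))   # one 4 MB-block access -> 67108864
print(total_transfer_bytes(db, 64 * 2**10, query))  # 64 accesses of 64 KB -> 117440512
```

Larger blocks mean a shallower tree and fewer invocations, at the cost of coarser-grained transfers; the crossover depends on the application's query sizes.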
Remark: It is important to note that when , the storage structure in Root ORAM reduces to a single subtree. Hence, the non-uniform distribution in Root ORAM reduces to a uniform distribution over all the leaves. Another sanity check is that both and equal when . At the same time, since , no levels in the tree are cached. Hence, when , Root ORAM instantiates exactly to the Path ORAM protocol.
7 Applications: Efficient Private Information Retrieval
In this section, we demonstrate how DPORAM in conjunction with trusted hardware can be used to perform differentially private Private Information Retrieval (DPPIR) queries. The idea of using ORAM in conjunction with trusted hardware has been previously explored by the research community [47, 8, 59, 7]. An important line of research is in developing faster PIR protocols using a combination of trusted hardware and ORAM [59, 7].
7.1 Private Information Retrieval (PIR)
Private Information Retrieval is a cryptographic primitive that provides privacy to a database user. Specifically, the protocol allows a user to hide their queries from the database holder when accessing a public database. The critical difference between the PIR and ORAM problem settings is that one assumes a public database (PIR) and the other assumes a private database (ORAM). In a Differentially Private PIR scheme (DPPIR), the PIR privacy guarantees are relaxed and quantified using differential privacy.
7.2 Differentially Private PIR schemes (DPPIR)
Differentially Private PIR was proposed by Toledo et al. [56]. The definition relies on an indistinguishability game between the adversary and a number of honest users as follows:
7.2.1 DPPIR indistinguishability game
Among the set of honest users , one is identified by the adversary as the target user . The adversary provides the target user two queries and provides all other users a single query . The target user selects one of the two queries and then all users use a PIR system to retrieve records. The adversary observes all the transmitted information including all the information from corrupt servers. The privacy of a DPPIR protocol is formulated as follows (from Toledo et al. [56]):
Definition 4.
Differentially Private PIR: A protocol provides $(\epsilon, \delta)$-private PIR if there are non-negative constants $\epsilon$ and $\delta$ such that, for any possible adversary-provided queries $q_0$ and $q_1$, and for all possible adversarial observations $O$ in the observation space, we have that
(10) $\Pr[O \mid q_0] \leq e^{\epsilon} \cdot \Pr[O \mid q_1] + \delta$
The security of DPPIR schemes translates to the privacy of the underlying queries. Hence, the privacy guarantees of DPPIR are easier to interpret as they directly relate to the “program secret” i.e., the PIR query.
7.2.2 DPPIR construction from DPORAM
To construct a DPPIR protocol using Root ORAM, we assume the PIR database is on a server with a trusted processor such as Intel SGX [33] or the IBM 4765 cryptographic coprocessor [6]. DPORAM-based DPPIR operates on a public database (as required by any PIR application), but the database is encrypted by the trusted hardware to hide memory accesses. Different users of the DPPIR application use the same underlying DPORAM. The DPORAM protocol is run within the trusted hardware, which also stores the ORAM stash; the ORAM state is hence common across different users and multiple ORAM invocations. The DPORAM block size is set equal to the PIR database block size. To perform a DPPIR query, a client does the following:

Step 1 (Initialization): In the initialization step, the client and the trusted hardware set up an authenticated encrypted channel (AEC) for communication (with or without an anonymous communication channel). The trusted hardware also initializes the ORAM storage structure with the entries of the PIR database. The ORAM is initialized with block size equal to the PIR block size. Other parameters are chosen according to application constraints (refer to Section 6.4).

Step 2 (Send Query): The client sends his PIR query (some database index ) to the trusted hardware through the AEC set up in Step 1 (over an anonymous channel or directly over the network).

Step 3 (DPORAM): The trusted hardware decrypts the PIR query to get the decrypted index and initiates a DPORAM query using this index.

Step 4 (Receive Response): The trusted hardware retrieves the PIR block with index using the DPORAM protocol from the untrusted memory. It sends this block over the AEC to the client.
We show that the above constructed PIR protocol satisfies the guarantees of DPPIR protocols from Definition 4. More formally,
Theorem 7 (DpOram DpPir).
The PIR protocol described above, when completed using a DPORAM, is DPPIR.
7.3 Application Requirements and Multiple Queries
Application Requirements: Next, we compare the application requirements for various DPPIR protocols. The 4 DPPIR protocols from Toledo et al. [56] all rely on the use of multiple servers, and 2 of the 4 schemes rely on the use of anonymous communication channels. The DPORAM-based DPPIR described in Section 7.2.2 is a DP-Computational PIR scheme, in contrast with the DP-Information-Theoretic PIR schemes in Toledo et al. [56]. Our DPPIR protocol requires a single server, and the use of anonymous channels is optional, though access to the latter improves the performance of our proposed protocol as discussed later in this section. Our protocol requires the use of trusted hardware, but this results in significant performance improvements as discussed in Section 7.4.
Single Queries: DPPIR protocols, as formalized in Section 7.2.1, quantify the privacy for a single PIR query. In Theorem 7, we quantify the privacy of our proposed DPPIR scheme for a single query.
Performance benefits of DPORAM directly enhance the performance of the PIR protocol (cf. Section 7.4) and showcase the benefits of DPORAMs.

Without anonymous channels (ACs): Without access to ACs, Theorem 7 gives the privacy guarantees of our DPPIR protocol.

With anonymous channels: If ACs are available, they can be used to boost the performance of our DPPIR protocol by leveraging the additional privacy offered by the communication channel. This leads to significant performance benefits which we summarize in the following theorem:
Theorem 8 (DPPIR with Anonymous Channels).
The composition of a differentially private PIR mechanism with a perfect anonymity system used by users satisfies, for a sufficiently large number of users,
(11)  
where is a negligible function.
We defer the proof of Theorem 8 to Appendix A.
These bounds significantly enhance the privacy values of when using a DPPIR protocol in composition with an anonymous communication channel. For instance, assuming users use a DPPIR protocol, each user is effectively using a DPPIR protocol.
Multiple Queries: An important consideration in the use of DPPIR schemes is the effect of multiple queries on the security of the scheme. Multiple invocations of the DPPIR scheme result in a privacy loss. We extend Theorem 1 to prove Theorem 9, which bounds the privacy of DPPIR schemes under multiple invocations. Consequently, the privacy of multiple DPPIR invocations can be found by composing Theorem 9 with the bounds from Theorem 7 or Theorem 8, depending on the availability of anonymous communication channels.
Theorem 9 (DPPIR Composition Theorem).
invocations of a DPORAM-based DPPIR protocol guarantee an overall DPPIR protocol.
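The flavor of such composition bounds can be illustrated with basic sequential composition, under which k invocations of an (eps, delta)-DP mechanism yield a (k·eps, k·delta)-DP mechanism; Theorem 9 gives the DPPIR analogue (whose exact parameters appear in the theorem statement above):

```python
def compose(eps, delta, k):
    """Basic sequential composition: k invocations of an (eps, delta)-DP
    mechanism yield a (k*eps, k*delta)-DP mechanism overall."""
    return k * eps, k * delta

print(compose(0.5, 0.0, 4))  # -> (2.0, 0.0)
```

In practice, a total privacy budget is fixed first and then divided across the expected number of invocations.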
7.4 Comparison with Prior Work
Next, we compare the performance of our DPPIR scheme with (1) the DPPIR schemes from [56] and (2) the Path-PIR construction [43]. We begin by briefly describing the 4 DPPIR schemes from Toledo et al. [56]:

Direct Requests: For each real query, the client sends other dummy queries spread across identical databases. of the databases are assumed to be adversarial.

Anonymous Direct Requests: This protocol assumes the use of anonymous communication channels (ACs) and performs the above-mentioned Direct Requests protocol in conjunction with the AC. The increased privacy stems from the fact that each user sends requests yet derives privacy among requests (where is the number of users).

SparsePIR: This protocol is based on Chor's PIR protocol [15]. Instead of generating random vectors for the servers, the client generates biased (hence sparse) random vectors using i.i.d. Bernoulli trials with parameter .

Anonymous SparsePIR: Similar to anonymous direct requests, this protocol is the composition of the SparsePIR protocol with an anonymity system.
We compare our protocols using the exact same setup as in [56]. The different parameters are set to the following values: (1) database with blocks, (2) number of databases , (3) number of adversarial databases (this showcases the most optimistic version of the results of [56]). Anonymous Direct Requests has the same parameters as the Direct Requests protocol with the additional assumption of users. The SparsePIR and Anonymous SparsePIR protocols ignore the communication cost from the client side. This communication overhead is information-theoretically lower bounded by , where is the size of the vector to be sent and is the binary entropy function. Since the overhead for encoding the random vectors is linear (and hence very large) in the SparsePIR and Anonymous SparsePIR protocols, we assume they are based on the 2D variant of Chor's protocol [15].
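The binary entropy lower bound mentioned above can be computed directly; a small sketch, where the vector length n and bias p are illustrative values rather than parameters taken from [56]:

```python
import math

def binary_entropy(p):
    """H2(p) = -p*log2(p) - (1-p)*log2(1-p), the binary entropy function."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Lower bound (in bits) on encoding an n-entry vector of i.i.d. Bernoulli(p)
# trials, as used for SparsePIR's biased random vectors.
n = 2**20  # illustrative vector length
for p in (0.5, 0.1, 0.01):
    print(p, round(n * binary_entropy(p)))
```

Sparser vectors (smaller p) compress better, but the encoding cost still scales linearly in n, which motivates the 2D-variant assumption above.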
Bandwidth comparison: As seen in Fig. 6, our DPPIR protocol provides orders-of-magnitude performance improvements over the state-of-the-art DPPIR protocols from [56]. The performance gains come from the logarithmic overhead of ORAM schemes compared to the linear overhead of PIR schemes. Path-PIR does not provide statistical security and hence appears as a single data point in Fig. 6. Path-PIR also achieves logarithmic overhead, yet suffers from (1) heavy computation requirements at the client and the server due to the underlying homomorphic encryption, (2) large storage overhead due to logarithmic bucket sizes, and (3) limited scalability, i.e., it is better suited for small databases (or large block sizes).
Other comparisons: As discussed before, our DPPIR protocol requires the use of trusted hardware but results in significant performance improvements. At the same time, our DPPIR protocol requires a single server, in contrast with the multiple servers required by Toledo et al. [56]. It is interesting to note that our DPORAM-based DPPIR (described in Section 7.2.2) is a DP-Computational PIR scheme, in contrast with the DP-Information-Theoretic PIR schemes from Toledo et al. [56]. Our protocol as well as the protocols from Toledo et al. [56] benefit from the use of anonymous channels. Computational costs for our DPPIR protocol are , whereas they are for the various schemes in Toledo et al. [56]. Conversely, the additional storage costs for our protocol are given by Theorem 5 but are 0 for the schemes in Toledo et al. [56]. Finally, we remark that the setup cost for our protocol includes a one-time ORAM database initialization.
7.5 Other Applications
Our discussion above focused on PIR, which itself is a fundamental privacy technology that can enable numerous applications, including PIR-Tor [46], PIR for e-commerce [31], and PIR for mix nets [35]. The benefits of DPORAM extend to other applications as well. For instance, Gentry et al. [25] demonstrate the use of ORAMs as building blocks for secure computation. The benefits of DPORAM can be extended to such applications of ORAM protocols to improve performance. In fact, the use of differential privacy to boost the performance of secure computation is already gaining attention in the research community with work by He et al. [30]. Finally, DPORAM can be used in systems such as Dropbox and Google Drive to privately retrieve data with low network overhead and local storage.
8 Related work
Oblivious RAMs were first formalized in a seminal paper by Goldreich and Ostrovsky [27]. Since then, the research community has made substantial progress in making ORAMs practical [60, 50, 28, 29, 55, 40, 53]. Hierarchical constructions such as [50, 29, 36] were proposed building on [27], and tree-based ORAM schemes such as [55, 40, 25, 37, 54, 51, 53] were proposed building on Shi et al. [22]. A recent benchmark for ORAMs has been the Path ORAM protocol [55], which gives theoretical bounds on the local memory usage. Tessaro et al. [14] build on [55] and extend it to multiple clients via level caching in tree-based ORAM schemes. Root ORAM generalizes the construction of [55] to provide a tunable framework offering DPORAM guarantees. Root ORAM gets around the Goldreich-Ostrovsky lower bound by using (1) statistical security, which voids the proof of the lower bound [27], and (2) stash storage that is not a constant (which is what gives the logarithmic GO lower bound). Our work opens up new opportunities for rethinking lower bounds for statistical ORAMs.
Gentry et al. [25] have shown the promise of using ORAMs as a building block in developing protocols for Secure Multi-Party Computation. This work is among the first in a line of research using ORAMs as a critical component in building other cryptographic primitives. Recently, there have been a number of works using ORAMs for private information retrieval [47, 8, 59, 43], for private ad recommendation [7], and for secure computation and machine learning [58, 41, 49, 25].
Several optimizations have been proposed to reduce the overhead of tree-based ORAMs. Recently, Ring ORAM [40] reduced bandwidth using the XOR technique, leveraging server-side computation. The XOR technique is orthogonal to the ideas explored in this work and can be extended to Root ORAM, further enriching the protocol design space. Two optimizations for Shi et al. [22] were proposed by Gentry et al. [25]: first, they reduce the storage overhead by a multiplicative factor, and second, they reduce the time complexity of the protocol. They explore the benefits of using a multiple-fanout tree structure instead of a conventional binary tree. ORAM has also been implemented at the chip level in prototypes such as the Ascend architecture [24] and the Phantom architecture [42].
Recently, Circuit ORAM [57] proposed a novel protocol to reduce the complexity of the eviction procedure of Path ORAM when implemented with a small private memory. This is ideally suited for secure computation environments and is the state-of-the-art protocol for implementing ORAMs in trusted hardware. Though Circuit ORAM works with constant memory, it increases the protocol complexity, which leads to higher bandwidth usage. Burst ORAM [37] builds on ObliviStore [53] via level caching and by optimizing the online bandwidth (formalized in [12]) for bursty (realistic) access patterns. Onion ORAM [18] "breaks" the ORAM lower bound by leveraging server-side computation and additively homomorphic encryption, achieving constant bandwidth overhead.
Floram [19] is a state-of-the-art construction in the Distributed ORAM (DORAM) model, in which the ORAM memory is split across multiple servers. Whereas in the conventional ORAM setting two logical access sequences of the same length produce indistinguishable physical access sequences, in a DORAM only the physical access sequences observed by a single server are indistinguishable. It is possible to augment our work with Floram to further boost its performance.
In summary, Root ORAM is the first protocol that demonstrates a tradeoff between performance and statistical privacy (quantified with differential privacy). The tunable security-bandwidth-outsourcing-ratio construction and the formalization of differentially private ORAMs differentiate our work from prior approaches.
9 Limitations
In this work, we enable the design of practical ORAM schemes for applications with stringent bandwidth constraints and small local storage. For some applications, it might be acceptable to trade off statistical privacy for better performance, and Root ORAM demonstrates a first step in this direction by introducing a tunable framework that provides differential privacy guarantees. Though Theorems 1 and 2 help us bound the privacy leakage for arbitrary access sequences, we acknowledge that Root ORAM is currently better suited to similar access sequences. For example, our approach is ideally suited for applications such as PIR (Section 7). The formalization of DPORAM opens up a number of research directions, such as optimal security-performance tradeoffs, rethinking lower bounds for statistically private ORAMs, and better performance improvement results. Our work has already inspired other researchers to rethink research ideas at the intersection of differential privacy and conventional cryptography [44, 13]. Finally, we note that the ideas developed in this work are orthogonal yet applicable to more recent works such as Ring ORAM, Onion ORAM, and Burst ORAM [40, 18, 37]. Similarly, DPORAM constructions for non-tree-based ORAMs would be interesting future work.
10 Conclusions
To summarize, we introduce and formalize the notion of a differentially private ORAM, which to our knowledge is the first of its kind. We present Root ORAM, a tunable family of ORAM protocols which provide a multidimensional tradeoff between security, bandwidth, and local storage requirements. We evaluate the protocol using theoretical analysis, simulations, and a real-world implementation on Amazon EC2. We analyze the benefits of statistical ORAMs in (1) trusted execution environments and (2) server-client settings, and demonstrate how statistical ORAMs can improve the performance of existing ORAMs. Finally, we showcase the utility of Root ORAM via the application of Private Information Retrieval.
11 Acknowledgments
We would like to thank the anonymous PETS reviewers for insightful feedback on the paper and the following funding agencies: Army Research Office YIP Award, National Science Foundation (CNS1409415, CIF1617286, CNS1553437), and Faculty research awards from Google, Intel, and Cisco.
Appendix A Theorem Proofs
Proof of Theorem 4: We use two key concepts, viz., the ORAM and the greedy post-processing algorithm from prior works [55, 57], in proving the above result. We begin by briefly describing these concepts and then prove an equivalence between a greedily post-processed ORAM and Root ORAM (Lemma 1 and Lemma 2). Finally, we complete the argument by proving the effectiveness of using a non-uniform distribution in reducing the stash usage in the ORAM, thereby showing its effectiveness in Root ORAM. We refer the reader to [55] for more details and follow the notation from Section 5.1.
ORAM: This is an imaginary ORAM, used as a mathematical abstraction to facilitate proofs about Root ORAM. The ORAM has all parameters identical to Root ORAM, except that it has an infinitely large bucket size (). This allows the ORAM to store as many blocks in a bucket as necessary.
Greedy post-processing: This is an algorithm that post-processes the stash and the buckets in an ORAM such that, after a sequence of s load/store operations, the distribution of the real blocks over the buckets and stash is exactly the same as that of Root ORAM after being accessed using s. It is easy to see that the ORAM starts with an empty stash. The greedy post-processing algorithm described below processes the ORAM until the tree has no buckets with more than blocks.

Select any block in a bucket that stores more than blocks. Suppose that the bucket is at level and is the path from the bucket to the root.

Find the highest level (closest to the root) such that the bucket at that level on path stores fewer than blocks. If such a bucket exists, move the block to that level; else, move it to the stash.
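The two steps above can be illustrated on a single root-to-leaf path; this is a simplified sketch (the actual algorithm operates over the whole tree, and the data values are illustrative):

```python
# buckets[0] is the root, Z is the real bucket capacity. Overfull buckets
# (allowed in the infinite-bucket ORAM) are drained toward the root, or into
# the stash when no bucket above has spare capacity.

def greedy_postprocess(buckets, Z):
    stash = []
    for level in range(len(buckets) - 1, -1, -1):  # process deepest levels first
        while len(buckets[level]) > Z:
            block = buckets[level].pop()
            # Highest level (closest to the root) with spare capacity, if any.
            target = next((l for l in range(level) if len(buckets[l]) < Z), None)
            if target is None:
                stash.append(block)
            else:
                buckets[target].append(block)
    return stash

path = [["a"], ["b", "c", "d"], ["e", "f", "g"]]  # levels 1 and 2 overfull for Z=2
print(greedy_postprocess(path, Z=2), path)  # -> ['d'] [['a', 'g'], ['b', 'c'], ['e', 'f']]
```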
Next, we state Lemma 1 and Lemma 2 and omit their proofs due to their similarity with [55] as well as space constraints.
Lemma 1.
The stash usage in the post-processed Root ORAM is the same as that of the Root ORAM protocol with the same parameters.
(12) 
For the sake of analysis, we combine the binary subtrees by appending a binary tree of depth above the subtrees. This creates an extended binary tree of height which contains the original subtrees at its bottom. We look at the bucket usage over rooted subtrees of this extended binary tree (a rooted subtree is one which contains the root of the extended tree). We denote by a generic rooted subtree. We use to denote the total number of buckets in and for the number of real blocks in for a Root ORAM after a sequence of operations.
Lemma 2.
The stash usage in post-processed Root ORAM is if and only if there exists a subtree in Root ORAM such that
Let and be two Root ORAM protocols with security parameters and respectively.
Suppose denotes the set of leaves of the extended binary tree and the set of leaves of the currently mapped subtree. The probability distribution functions for the updateMapping function in and differ only in the following way: some probability mass (for some ) moves from leaves to .
Thus, with probability mass (), the randomized mapping for both protocol and protocol behaves identically. However, with probability mass , the data block will be mapped to a leaf in in protocol but to a leaf in in protocol . Hence, in the ORAM, the data block will be placed at a level less than (higher up in the tree) in , whereas in it will be placed in the same subtree, i.e., at a level greater than (lower down in the tree). Hence, for any subtree , if the data block in was placed in a bucket in , then so will be the data block in . Hence,
Hence, for any given subtree , we have:
Using the above condition over all rooted subtrees , we have
Hence,
Finally, to complete the argument, we use the following result from basic information theory:
Lemma 3.
Let $X$ be a discrete random variable that takes on only nonnegative integer values. Then
(13) $\mathbb{E}[X] = \sum_{i=1}^{\infty} \Pr[X \geq i]$
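Lemma 3 is the standard tail-sum identity for expectations; a quick numeric sanity check on a small, illustrative distribution:

```python
from fractions import Fraction

# Check E[X] = sum_{i>=1} Pr[X >= i] on a small pmf over nonnegative integers.
pmf = {0: Fraction(1, 2), 1: Fraction(1, 4), 2: Fraction(1, 8), 3: Fraction(1, 8)}

expectation = sum(x * p for x, p in pmf.items())
tail_sum = sum(sum(p for x, p in pmf.items() if x >= i)
               for i in range(1, max(pmf) + 1))
print(expectation, tail_sum)  # -> 7/8 7/8
```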
Proof of Theorem 5: Using Theorem 4, we know that the stash usage is "lower" for nonzero values of . Hence, it suffices to give stash bounds for . As in the proof of Theorem 4, we conceptually extend the server storage to a complete binary tree with height , where the subtrees form the lower levels of the extended binary tree. We can see that for , the Root ORAM protocol with the additional storage reduces to the Path ORAM protocol, and hence the stash size of Root ORAM can be bounded as:
(14) 
This completes the proof of the stash bounds.
Proof of Theorem 6: The proof follows by noting that the depth of each subtree is equal to , and hence the number of blocks transferred per access is times the depth.
Proof of Theorem 7: The proof follows directly from the setup and the definition of DPPIR. Given any two adversarial queries for database records, we consider them as ORAM input access sequences $a_0$ and $a_1$, each with only a single access. Since these access sequences differ in a single access, for any output observation $O$:
(15) $\Pr[O \mid a_0] \leq e^{\epsilon} \cdot \Pr[O \mid a_1] + \delta$
which is the privacy guarantee for DPPIR.
Proof of Theorem 8: captures the failure probability of our system, and hence we can union bound this failure probability across the users. Across users, the failure probability can be bounded as (). With probability , the composite system is differentially private, and we can apply the Composition Lemma.
Appendix B Entropy Calculation for DP
Next, we provide an interpretation of the privacy guarantees of the protocol in terms of entropy [16]. Specifically, we find the worst-case entropy of the observed access sequence for any given input access sequence. This entropy reflects the adversary's uncertainty about the observed access sequence given any input sequence. We compute this entropy as follows:
Let denote the random variables indicating the accessed location for the given access sequence a of logical block addresses (i.e., is the random variable for where as in Section 5). Let denote a function which maps an index in to the location of the previous access of the same data block.
In other words, if some data block was accessed at the and then at the location, then .
Let denote the Shannon Entropy. If we have perfect security, and hence the entropy rate is (entropy per access). Using DPORAM reduces this entropy and we compute this loss in entropy below. We know that if and , where is the distribution specified in Eq. 4. Hence, we can compute the entropy of the complete sequence as follows:
where is the number of accesses such that , i.e., is accessed for the first time. We know that and the entropy rate is .
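The per-access entropy term can be evaluated numerically for any leaf distribution; in the sketch below, the uniform case recovers the full perfect-security entropy, while the skewed distribution is an illustrative stand-in for the non-uniform distribution of Eq. 4:

```python
import math

def shannon_entropy(pmf):
    """H(p) = -sum_i p_i * log2(p_i) over the support of p."""
    return -sum(p * math.log2(p) for p in pmf if p > 0)

uniform = [1 / 8] * 8  # perfect security over 8 leaves
skewed = [0.5, 0.2, 0.1, 0.1, 0.05, 0.05]  # illustrative non-uniform choice
print(shannon_entropy(uniform))           # -> 3.0 (full entropy per access)
print(round(shannon_entropy(skewed), 3))  # strictly less than log2(6)
```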
For the chosen distribution , we can compute as: