Revisiting Definitional Foundations of Oblivious RAM for Secure Processor Implementations

Syed Kamran Haider, University of Connecticut, syed.haider@uconn.edu; Omer Khan, University of Connecticut, omer.khan@uconn.edu; and Marten van Dijk, University of Connecticut, marten.van_dijk@uconn.edu
Abstract.

Oblivious RAM (ORAM) is a renowned technique to hide an application’s access patterns to an untrusted memory. According to the standard ORAM definition presented by Goldreich and Ostrovsky, two ORAM access sequences must be computationally indistinguishable if the lengths of these sequences are identically distributed. An artifact of this definition is that it does not apply to the ORAM implementations adopted in current secure processor technology, whose memory access sequences have arbitrary lengths that depend on program behavior (i.e., on termination times). As a result, the theoretical foundations of ORAM do not clearly argue about the timing and termination channels.

This paper conducts a first rigorous study of the standard Goldreich-Ostrovsky ORAM definition in view of modern practical ORAMs (e.g., Path ORAM) and demonstrates the gap between theoretical foundations and real implementations. A new ORAM formulation which clearly separates out termination channel leakage is proposed. It is shown how this definition implies the standard ORAM definition (for finite length input access sequences) and better fits the modern practical ORAM implementations. The proposed definition relaxes the constraints around the stash size and overflow probability for Path ORAM, and essentially transforms its security argument into a performance consideration problem. To mitigate internal side channel leakages, a generic framework for dynamic resource partitioning is proposed which achieves a balance between performance and leakage via contention on shared resources.

Finally, a ‘strong’ ORAM formulation which clearly includes obfuscation of termination leakage is shown to imply our new ORAM formulation; it applies to ORAM for outsourced disk storage. In this strong formulation the constraints are not relaxed, and the security argument for Path ORAM remains complex, as one needs to prove that the stash overflows with negligible probability.

Keywords: Privacy leakage; Oblivious RAM; Secure Processors

1. Introduction

Security of private data storage and computation in an untrusted cloud server is a critical problem that has received considerable research attention. A popular solution to this problem is to use tamper-resistant hardware based secure processors including TPM (Arbaugh et al., 1997; Sarmenta et al., 2006; Trusted Computing Group, 2004), TPM+TXT (Grawrock, 2006), Bastion (Champagne and Lee, 2010), eXecute Only Memory (XOM) (Lie et al., 2003a, b; Lie et al., 2000), Aegis (Suh et al., 2003; Suh et al., 2005), Ascend (Fletcher et al., 2012), Phantom (Maas et al., 2013), Intel SGX (McKeen et al., 2013), and Sanctum (Costan et al., 2016). In this setting, a user’s encrypted data is sent to the secure processor in the cloud, inside which the data is decrypted and computed upon. The final results are encrypted and sent back to the user. The secure processor chip is assumed to be tamper-resistant, i.e., an adversary is not able to look inside the chip to learn any information.

While an adversary cannot access the internal state of the secure processor, sensitive information can still be leaked through the processor’s interactions with the (untrusted) main memory. Although all the data stored in the external memory can be encrypted to hide the data values, the memory access pattern (i.e., address sequence) may leak information. For example, existing work (Islam et al., 2012) demonstrates that by observing accesses to an encrypted email repository, an adversary can infer as much as 80% of the search queries. Similarly, (Zhuang et al., 2004) shows that the control flow of a program can be learned by observing the main memory access patterns, which may leak sensitive private data.

Oblivious RAM (ORAM), first proposed by Goldreich and Ostrovsky (Goldreich and Ostrovsky, 1996), is a cryptographic primitive that completely obfuscates the memory access pattern thereby preventing leakage via memory access patterns. Significant research effort over the past decade has resulted in more and more efficient ORAM schemes (Boneh et al., 2011; Damgård et al., 2011; Goodrich et al., 2011, 2012a, 2012b; Ostrovsky, 1990; Ostrovsky and Shoup, 1997; Shi et al., 2011; Stefanov et al., 2012; Stefanov et al., 2013; Williams and Sion, 2012).

Generally speaking, an ORAM interface translates each logical read/write into accesses to multiple randomized locations. As a result, the locations touched by successive logical reads/writes have exactly the same distribution and are indistinguishable to an adversary. More precisely, according to the original definition of ORAM introduced by Goldreich and Ostrovsky (Goldreich and Ostrovsky, 1996), the ORAM access sequences $A(x)$ and $A(x')$ generated by the ORAM for any two logical access sequences $x$ and $x'$ respectively are computationally indistinguishable if the lengths $|A(x)|$ and $|A(x')|$ have the same distribution (where the distribution is over the coin flips used in the ORAM interface). Almost all follow-up ORAM proposals claim to follow the same definition of ORAM security.

A crucial subtlety regarding the above mentioned ORAM security definition is that it applies only to ORAM access sequences whose lengths are identically distributed. Specifically, two ORAM access sequences $A(x)$ and $A(x')$ may in fact be distinguishable if they have different length distributions. In modern secure processors (Fletcher et al., 2012; Maas et al., 2013), a conventional DRAM controller is replaced with a functionally-equivalent ORAM controller that makes ORAM requests on last-level cache (LLC) misses. Since a program can have a different number of LLC misses for different inputs, the lengths of the corresponding ORAM access sequences are not identically distributed and can leak sensitive information (e.g., locality) via the program’s termination channel by revealing when the program terminates. Furthermore, specific ORAM implementations introduce further variance in the length of ORAM access sequences due to the additional caching/buffering used for performance reasons, e.g., a Path ORAM (Stefanov et al., 2013) caching the position map blocks for future reuse (Fletcher et al., 2015). Hence, the original ORAM definition (Goldreich and Ostrovsky, 1996) does not apply to the practical ORAM implementations embraced by modern secure processors, due to the arbitrary length distributions of their ORAM access sequences. In other words, this definition neither clearly separates out nor includes leakage over the program’s termination channel.

Another source of leakage under Goldreich and Ostrovsky’s ORAM is the ORAM access timing, i.e., when an ORAM access is made. Since ORAM requests are issued upon LLC misses, the ORAM access timing strongly correlates with the program’s locality and can potentially leak sensitive information via the ORAM timing channel. Periodic ORAM access schemes have been proposed to protect the ORAM timing channel (Fletcher et al., 2012; Fletcher et al., 2014). Notice, however, that these schemes essentially transform the timing channel leakage into termination channel leakage. Completely preventing termination channel leakage without sacrificing performance is a hard problem. Instead, the leakage can be bounded to a small number of bits (Fletcher et al., 2014).

In this work, we show that Goldreich and Ostrovsky’s ORAM definition, appropriately interpreted for infinite length input access sequences, not only implies the standard ORAM definition (Goldreich and Ostrovsky, 1996) for finite length input access patterns, but also separates out termination channel leakage via ORAM access sequences. The proposed definition bridges the gap between theory and practice in the ORAM paradigm for secure processor technology and also simplifies proving the security of practical ORAM constructions. Specifically, for Path ORAM (Stefanov et al., 2013), by leveraging the background eviction technique (Ren et al., 2013), our definition relaxes the bounds on stash size and stash overflow probability while greatly simplifying the security proof presented in (Stefanov et al., 2013), yet offering similar security properties.

We also analyze a ‘strong’ ORAM definition stating that two sequences $A(x)$ and $A(x')$ must be computationally indistinguishable if the lengths of the input sequences $x$ and $x'$ are equal. This definition implicitly includes a form of termination channel obfuscation and is applicable to ORAMs used for remote disk storage. Path ORAM satisfies this stronger definition – its security proof must now show that the stash overflow probability is negligible (a complex analysis).

The paper makes the following contributions:

  1. A first rigorous study of the original ORAM definition presented by Goldreich and Ostrovsky, in view of modern practical ORAMs (e.g., Path ORAM), demonstrating the gap between theoretical foundations and real implementations in secure processor architectures.

  2. We show that the Goldreich and Ostrovsky ORAM definition, interpreted for infinite length input sequences, separates out leakage over the ORAM termination channel. We show how this definition implies the Goldreich and Ostrovsky ORAM definition for finite length input sequences, fits the modern practical ORAM implementations in secure processor architectures, and greatly simplifies the Path ORAM security analysis by relaxing the constraints around the stash size and overflow probability, essentially transforming the security argument into a performance consideration problem.

  3. A generic framework for dynamic resource partitioning in secure processor architectures is proposed to control leakage via contention on shared resources, allowing leakage vs. performance trade-offs. In particular, this framework can be used to reason about and analyze termination channel leakage.

  4. We analyze a ‘strong’ ORAM definition which implies the Goldreich and Ostrovsky ORAM definition interpreted for infinite length input sequences. The ‘strong’ ORAM definition implicitly includes obfuscation of the ORAM termination channel, which is useful in ORAM for remote disk storage (in order to prove that Path ORAM satisfies this definition, one now needs to show a negligible probability of stash overflow).

2. Background

2.1. Leakage Types via Address Bus Snooping

Privacy of users’ sensitive data stored in the cloud has become a serious concern in computation outsourcing. Even though all the data stored in the untrusted storage can be encrypted, an adversary snooping the memory address bus in order to monitor the user’s interactions with the encrypted storage can potentially learn sensitive information about the user’s computation/data (Zhuang et al., 2004; Islam et al., 2012). In particular, such an adversary can learn secret information about the user’s program/data by observing the following three behaviors:

  1. The addresses sent to the main memory to read/write data (i.e., the address channel).

  2. The time when each memory access is made (i.e., the timing channel).

  3. The total runtime of the program (i.e., the termination channel).

The countermeasures to prevent leakage via the above mentioned channels are orthogonal to each other and can be implemented as needed.

2.2. Oblivious RAM

Oblivious RAM is a renowned technique that obfuscates a user’s access pattern to an untrusted storage so that an adversary monitoring the access sequence cannot learn any information about the user’s application or data. Informally speaking, the ORAM interface translates the user’s sequence of program addresses $x$ into a sequence of ORAM accesses $A(x)$ such that for any two access sequences $x$ and $x'$, the resulting ORAM access sequences $A(x)$ and $A(x')$ are computationally indistinguishable, given that $x$ and $x'$ are of the same length. In other words, the ORAM physical access pattern $A(x)$ is independent of the logical access pattern $x$, except that the lengths of the two access patterns are correlated. Precisely, an ORAM protects against leakage via the memory address channel only (cf. Section 2.1). The data stored in ORAMs should be encrypted using probabilistic encryption to conceal the data content and also hide which memory location, if any, is updated. With ORAM, an adversary is not able to tell (a) whether a given ORAM access is a read or write, (b) which logical address in ORAM is accessed, or (c) what data is read from/written to that location. We revisit the formal definition of ORAM presented by Goldreich and Ostrovsky (Goldreich and Ostrovsky, 1996) and discuss it in more detail in Section 3.

2.3. Path ORAM

Path ORAM (Stefanov et al., 2013) is currently the most efficient and simplest ORAM scheme for limited client (processor) storage. Over the past few years, several crucial optimizations to basic Path ORAM have been proposed, resulting in practical ORAM implementations for the secure processor setting.

Path ORAM (Stefanov et al., 2013) has two main hardware components: the binary tree storage and the ORAM controller (cf. Figure 1).

Binary tree stores the data content of the ORAM and is implemented on DRAM. Each node in the tree is defined as a bucket which holds up to $Z$ data blocks. Buckets with fewer than $Z$ blocks are filled with dummy blocks. To be secure, all blocks (real or dummy) are encrypted and cannot be distinguished. The root of the tree is referred to as level 0, and the leaves as level $L$. Each leaf node has a unique leaf label $l$. The path from the root to leaf $l$ is defined as path $l$. The binary tree can be observed by any adversary and is in this sense not trusted.

ORAM controller is a piece of trusted hardware that controls the tree structure. Besides the necessary logic circuits, the ORAM controller contains two main structures, a position map and a stash. The position map is a lookup table that associates the program address $a$ of a data block with a path in the ORAM tree (path $l$). The stash is a piece of memory that stores up to a small number of data blocks at a time.

Figure 1. A Path ORAM for $L$ levels. Path $l$ is accessed.

At any time, each data block in Path ORAM is mapped (randomly) to some path via the position map. Path ORAM maintains the following invariant: if data block $a$ is currently mapped to path $l$, then $a$ must be stored either on path $l$ or in the stash (see Figure 1). Path ORAM performs the following steps when a request on block $a$ is issued by the processor.

  1. Look up the position map with the block’s program address $a$, yielding the corresponding leaf label $l$.

  2. Read all the buckets on path $l$. Decrypt all blocks within the ORAM controller and add them to the stash if they are real (i.e., not dummy) blocks.

  3. Return block $a$ to the secure processor.

  4. Assign a new random leaf $l'$ to block $a$ (update the position map).

  5. Encrypt and evict as many blocks as possible from the stash to path $l$. Fill any remaining space on path $l$ with encrypted dummy blocks.

Step 4 is the key to Path ORAM’s security. It guarantees that a random path will be accessed when block $a$ is accessed later, and that this path is independent of any previously accessed random paths (unlinkability). As a result, each ORAM access is random and unlinkable regardless of the request pattern.

Although the unlinkability property follows trivially from the construction of Path ORAM, another crucial property to be proven is a negligible stash overflow probability for a small stash, i.e., an $O(\lambda)$-sized stash for $\lambda$ being the security parameter.
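
To make these steps concrete, below is a minimal Python sketch of a (non-recursive) Path ORAM access. It is only illustrative and is not the implementation from (Stefanov et al., 2013): encryption and the stash size bound are omitted, empty bucket slots stand in for dummy blocks, and all names (tree, position, stash, access) are ours.

import random

L = 3                       # tree levels: root is level 0, leaves are level L
Z = 4                       # bucket capacity (blocks per node)
N_LEAVES = 2 ** L

tree = {}                   # (level, index) -> list of (addr, data) blocks
position = {}               # position map: addr -> leaf label
stash = {}                  # addr -> data

def path_nodes(leaf):
    # (level, index) of every bucket on the path from the root to 'leaf'
    return [(lvl, leaf >> (L - lvl)) for lvl in range(L + 1)]

def access(addr, new_data=None):
    # One Path ORAM access, following Steps 1-5 of Section 2.3.
    leaf = position.setdefault(addr, random.randrange(N_LEAVES))   # Step 1
    for node in path_nodes(leaf):                                  # Step 2
        for a, d in tree.pop(node, []):
            stash[a] = d
    if new_data is not None:                                       # Step 3
        stash[addr] = new_data
    data = stash.get(addr)
    position[addr] = random.randrange(N_LEAVES)                    # Step 4
    for lvl, idx in reversed(path_nodes(leaf)):                    # Step 5
        bucket = []
        for a in list(stash):
            if len(bucket) < Z and position[a] >> (L - lvl) == idx:
                bucket.append((a, stash.pop(a)))
        tree[(lvl, idx)] = bucket   # unfilled slots stand in for dummy blocks
    return data

For example, access(7, new_data='v') followed by access(7) returns 'v' while touching two independently random paths.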

2.4. Recursive Path ORAM

In practice, the position map is usually too large to be stored in the trusted processor. Recursive ORAM has been proposed to solve this problem (Shi et al., 2011). In a 2-level recursive Path ORAM, for instance, the original position map is stored in a second ORAM, and the second ORAM’s position map is stored in the trusted processor. This trick can be repeated, i.e., adding more levels of ORAMs to further reduce the final position map size at the expense of increased latency. The recursive ORAM is organized similarly to OS page tables; a sketch of the lookup follows.
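
The lookup can be sketched in Python as follows (hypothetical names; we assume each position map block packs the leaf labels of K consecutive data blocks):

K = 8    # leaf labels packed per position map block (assumed)

def lookup_leaf(addr, onchip_posmap, fetch_posmap_block):
    # 2-level recursion: resolve the leaf label of data block 'addr'.
    # 'onchip_posmap' is the small final position map kept in the processor;
    # 'fetch_posmap_block(pm_addr, pm_leaf)' performs one access to the
    # position map ORAM and returns the list of K leaf labels it stores.
    pm_addr = addr // K                 # posmap ORAM block covering 'addr'
    pm_leaf = onchip_posmap[pm_addr]    # on-chip lookup, no memory access
    labels = fetch_posmap_block(pm_addr, pm_leaf)   # ORAM access #1
    return labels[addr % K]             # used for the data ORAM access (#2)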

2.5. Background Eviction

In Steps 4 and 5 of the basic Path ORAM operation, the accessed data block is remapped from the old leaf $l$ to a new random leaf $l'$, making it likely to stay in the stash for a while. In practice, this may cause blocks to accumulate in the stash and finally overflow it. It has been proven in (Stefanov et al., 2013) that the stash overflow probability is negligible for $Z \geq 5$. For smaller $Z$, background eviction (Ren et al., 2013) has been proposed to prevent stash overflow.

The ORAM controller stops serving real requests and issues background evictions (dummy accesses) when the stash is full. A background eviction reads and writes a random path $l$ in the binary tree, but does not remap any block. During the write-back phase (Step 5 in Section 2.3) of a Path ORAM access, all blocks that were just read in can at least go back to their original places on path $l$, so the stash occupancy cannot increase. In addition, the blocks that were originally in the stash are also likely to be written back to the tree, as they may share a common bucket with path $l$ that is not full of real blocks. Background eviction is proven secure in terms of the unlinkability property in (Ren et al., 2013); a sketch follows.
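
The following sketch (reusing the hypothetical structures from the Path ORAM sketch in Section 2.3) shows why the stash occupancy cannot grow during a background eviction: the path is read and immediately written back, and since no block is remapped, every block just read still fits on its old path.

def background_evict():
    # Dummy access: read a random path and write it back without remapping.
    leaf = random.randrange(N_LEAVES)
    for node in path_nodes(leaf):                   # read phase
        for a, d in tree.pop(node, []):
            stash[a] = d
    for lvl, idx in reversed(path_nodes(leaf)):     # write-back phase
        bucket = []
        for a in list(stash):
            if len(bucket) < Z and position[a] >> (L - lvl) == idx:
                bucket.append((a, stash.pop(a)))
        tree[(lvl, idx)] = bucket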

3. Goldreich’s Oblivious RAM

Oblivious RAM was first proposed by Goldreich and Ostrovsky (Goldreich and Ostrovsky, 1996). In this section, we first revisit their definition of ORAM and then discuss its implications for modern real ORAM implementations in secure processor architectures, specifically Path ORAM.

3.1. Formal Definition

Let $x$ be a sequence of program addresses (more precisely, triples representing write/read/halt, the address, and the data sent to memory) requested by the CPU during a program execution, and let $A(x)$ be a probabilistic access sequence to the actual storage such that it yields the correct data corresponding to $x$. Then the RAM is called an oblivious RAM if it is a probabilistic RAM and satisfies the following definition.

Definition 3.1 (Oblivious RAM).

(Goldreich and Ostrovsky, 1996) For every two logical access sequences $x$ and $x'$ and their corresponding probabilistic access sequences $A(x)$ and $A(x')$, if the lengths $|A(x)|$ and $|A(x')|$ are identically distributed, then so are $A(x)$ and $A(x')$.

Intuitively, according to Definition 3.1, the sequence of memory accesses generated by an oblivious RAM does not reveal any information about the original program access sequence other than its length distribution. Specifically, this definition only protects against leakage over the memory address channel (cf. Section 2.1).

In the above definition, we usually interpret $x$ and $x'$ as finite length sequences, implying that $A(x)$ and $A(x')$ will also be of finite length. If infinite length input sequences are allowed, then the original ORAM definition (i.e., Definition 3.1) turns out to be equivalent to Definition 4.1 in Section 4. We will argue below why it is important to admit infinite length input sequences.

3.2. A Bogus ORAM

As explained below, Definition 3.1 for finite length input sequences invites the construction of a strange ‘bogus’ ORAM in which the access sequence of any probabilistic RAM – even if it is not oblivious – can be padded with additional accesses so that it becomes oblivious. Since the access sequence of a non-oblivious probabilistic RAM is only padded, it reveals information about the input access sequence to the probabilistic RAM. This, of course, breaks our intuitive understanding of what oblivious means. The reason why our construction is oblivious is that the additional padding creates a 1-1 correspondence between the access sequence of the probabilistic RAM and the final length of the access sequence after padding; this allows us to abuse Definition 3.1, as we essentially code all the information about the access sequence of the probabilistic RAM into the termination channel (the length of the ORAM sequence). This means that each access pattern will produce a unique length – so, there are no two different sequences in Definition 3.1 for our ‘bogus’ construction that will be compared. The bogus construction does not introduce any cleverness; it effectively pushes all the work of making the access pattern oblivious into making the termination channel oblivious. This observation will lead to a slightly stronger ORAM definition in Section 4 which is independent of the concept of a termination channel, i.e., the length of an ORAM access sequence does not play a role in the new definition (which turns out to be equivalent to Definition 3.1 for unrestricted and possibly infinite length input sequences).

Algorithm 1 shows how a (non-oblivious) probabilistic RAM can be padded in order to create an ORAM. Here an input access sequence $x$ is finite, so that a finite length output sequence $A(x)$ is created which can be uniquely interpreted as an integer $m$ in line 3. (A memory access in $A(x)$ is a triple whose first component represents a read/fetch/load, a write/store, or a halt, and can be coded using a non-zero bit sequence of length 2.) The resulting padded ORAM sequence has length $m$, see line 4. This means that if the padded lengths are identically distributed, then so are $A(x)$ and $A(x')$ (this already shows that only very specific $x$ and $x'$ will result in identically distributed lengths). Since line 4 only pads $A(x)$ and $A(x')$ with an access sequence taken from some a-priori fixed distribution, the padded access sequences are identically distributed. We conclude that the bogus ORAM satisfies Definition 3.1 for finite length input sequences.

The above shows the importance of allowing infinite length input sequences in the ORAM definition.

1:procedure ORAMWrapper($x$)
2:     Access memory according to $A(x)$
3:     Represent $A(x)$ as a binary bit string and interpret it as an integer $m$, which will be the total number of accesses in the padded sequence
4:     Access memory according to another sequence $P$ (taken from some a-priori fixed distribution) such that the number of accesses in $P$ combined with $A(x)$ is equal to $m$
5:end procedure
Algorithm 1 Bogus ORAM
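
For concreteness, here is a Python sketch of Algorithm 1 under assumed encodings (2-bit non-zero opcodes as above and, hypothetically, 16-bit addresses); the leading 1 bit is our addition to keep the integer coding injective. Note that the padded length $m$ is astronomically large, which is exactly why the construction is bogus rather than practical:

def bogus_oram(accesses):
    # 'accesses': list of (op, addr) with op in {1, 2, 3} (read/write/halt).
    for op, addr in accesses:            # line 2: the real, unpadded accesses
        touch_memory(op, addr)
    bits = "".join(format(op, "02b") + format(addr, "016b")
                   for op, addr in accesses)
    m = int("1" + bits, 2)               # line 3: sequence coded as integer m
    for _ in range(m - len(accesses)):   # line 4: pad to exactly m accesses
        touch_memory(1, 0)               # dummy reads from a fixed address

def touch_memory(op, addr):              # stand-in for an actual memory access
    pass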

3.3. Applicability for Secure Processors

Modern secure processors (Fletcher et al., 2012; Maas et al., 2013) have embraced the Path ORAM interface as a part of their trusted computing base (TCB). In these implementations, the ORAM controller serves last level cache (LLC) misses by making ORAM requests to the main memory. Consider the LLC miss sequence of an execution to be the input $x$ to the ORAM interface of Definition 3.1. In order to conclude indistinguishability (as per the above definition) of two ORAM access sequences generated from two different LLC miss sequences (i.e., by running different programs, or running the same program with different inputs), the ORAM access sequences must have the same length distribution. However, since the LLC miss pattern changes dynamically across various programs and across different inputs to the same program (Jaleel, 2010), it is very unlikely that the corresponding ORAM access sequences of two different executions will have the same length distribution. In particular, this leaks information about the program behavior through the total runtime of the application (i.e., the termination channel).

Another perspective on this fact is that Definition 3.1 is satisfied only by the small class of ORAM access sequences whose lengths are identically distributed. In practice, under the secure processor setting, the lengths of ORAM access sequences can have arbitrary different distributions, as discussed earlier. Furthermore, several optimizations and extensions proposed in the literature for Path ORAM, which result in better performance/security, introduce further probabilistic variance in the total runtime of the program, i.e., the termination channel. This prevents the ORAM definition under consideration from being directly applicable to secure processors.

3.4. ORAM Optimizations vs. Program Runtime

In the following discussion, we briefly describe various optimizations and tricks proposed in the literature that have resulted in more and more efficient and secure Path ORAM implementations. Each of these techniques typically introduces some amount of variance in the length of the ORAM access sequence as a function of the program input; hence the total runtime of the program correlates with the given input and leaks some information about it.

3.4.1. Unified Path ORAM & PLB

Unified ORAM (Fletcher et al., 2015) is an improved, state-of-the-art technique to recursively store a large position map. It leverages the fact that each block in a position map ORAM stores the leaf labels for multiple data blocks that are consecutive in the address space. In other words, a single access to the position map ORAM finds the position maps of several blocks, although only one of them is of interest. Therefore, Unified ORAM caches position map ORAM blocks in a small cache called the position map lookaside buffer (PLB) to exploit locality (similar to the TLB exploiting locality in page tables). To hide whether a position map access hits or misses in the cache, Unified ORAM stores both data and position map blocks in the same binary tree. Good locality in position map blocks results in more PLB hits and fewer position map accesses to the Unified ORAM tree overall, and vice versa.

3.4.2. ORAM Prefetching

In order to exploit data locality in programs under Path ORAM, ORAM prefetchers have been proposed (Ren et al., 2013; Yu et al., 2015). At first glance, exploiting data locality and obfuscation seem contradictory: on one hand, obfuscation requires that all data blocks are mapped to random locations in the memory; on the other hand, locality requires that certain groups of data blocks can be efficiently accessed together. Path ORAM prefetchers address this problem by (statically or dynamically) creating “super blocks” of data blocks exhibiting locality, and mapping the whole super block to the same path. As a result, a single path read for one particular block yields the corresponding super block, which is loaded into the LLC, effectively resulting in a prefetch. Consequently, good data locality in the program results in more prefetch hits and fewer ORAM accesses overall, and vice versa. A static variant is sketched below.
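
The sketch below (hypothetical function name) maps groups of k consecutive addresses to one shared random leaf; note that, on a real access, the whole super block must be remapped together to a fresh common leaf to preserve unlinkability:

import random

def assign_super_block_leaves(num_blocks, k, n_leaves):
    # Map every group of k consecutive addresses to one shared random leaf.
    position = {}
    for start in range(0, num_blocks, k):
        leaf = random.randrange(n_leaves)          # one leaf per super block
        for addr in range(start, min(start + k, num_blocks)):
            position[addr] = leaf
    return position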

3.4.3. Timing Channel Protection

As noted earlier, the ORAM definition does not protect against leakage over the timing channel (cf. Section 2.1), i.e., when an ORAM access is made. Periodic ORAM schemes have been proposed to protect the timing channel (Fletcher et al., 2012; Fletcher et al., 2014). A periodic ORAM always makes an access at strict periodic intervals, where the time interval between two consecutive accesses is public. If there is no pending memory request when an ORAM access needs to happen due to periodicity, a dummy access is issued (the same operation as a background eviction). If a real request arrives before the next ORAM access time, it waits until that time, enforcing deterministic behavior. Hence, periodic ORAMs essentially transform the timing channel leakage into termination channel leakage by potentially introducing extra ORAM accesses due to periodicity.

3.5. Implications on Path ORAM Stash Size

Proving that the stash overflow probability is negligible implies Path ORAM’s correctness and security. The stash overflow probability drops exponentially in the stash size. A significantly complex proof presented in (Stefanov et al., 2013) shows that, for $Z \geq 5$, a negligible stash overflow probability can be achieved by configuring the stash size appropriately, where $Z$ represents the number of blocks per node in Path ORAM’s binary tree. These parameter settings might be well suited for asymptotic analysis; however, real implementations might choose a different set of parameters to optimize various design points. For example, a smaller stash size is desired to save hardware area overhead. Similarly, studies (Ren et al., 2013) have shown that $Z = 3$ yields the best performance for Path ORAM.

For smaller stash sizes and/or smaller $Z$, stash overflow can be prevented through background eviction (cf. Section 2.5), which essentially adds ‘extra’ dummy accesses to the original ORAM access sequence. Notice, however, that satisfying Definition 3.1 requires restricting the ORAM access sequences to identical length distributions; the definition hence does not accommodate background eviction, which probabilistically modifies the lengths of ORAM sequences depending upon the stash occupancy – a quantity correlated with the program input.

As an example, consider a 2-level recursive Path ORAM where the original position map is stored in a second ORAM, and the second ORAM’s position map is stored in the trusted processor (cf. Section 2.4). Let $x$ and $x'$ be two program address sequences and let $A(x)$ and $A(x')$ be their corresponding ORAM access sequences. Notice that each entry of the sequence $A(x)$ consists of two accesses, corresponding to the position map ORAM and the data ORAM respectively, and is therefore likely to increase the stash occupancy by 2 blocks. Further notice that, by definition of the recursive ORAM structure, each position map ORAM block contains the path/leaf labels of several data ORAM blocks consecutively located in the program’s address space.

Assume that $x$ accesses consecutive data blocks in the program’s address space, whereas $x'$ accesses random data blocks. Then, subsequent accesses from sequence $x$ will exhibit higher temporal locality for position map blocks. This is because several position map accesses – corresponding to data blocks consecutive in the program’s address space – will access the same position map block, which is likely to be present already in the stash. Therefore, the stash occupancy will grow at a rate of roughly 1 block per recursive access. In contrast, subsequent accesses from $x'$ exhibit extremely poor temporal locality among position map blocks due to the randomized sequence $x'$; therefore the stash occupancy will grow at a rate of 2 blocks per recursive access. Consequently, the two ORAM access sequences exhibit two different stash occupancies due to the underlying program’s behavior.

4. Proposed Definition

In order to argue about indistinguishability of ORAM access sequences, we interpret Goldreich and Ostrovsky’s ORAM definition to also incorporate infinite length input access sequences. This implicitly obfuscates termination channel leakage so that the termination channel cannot be used for leakage in the definition (it separates out the termination channel and invalidates our bogus ORAM as an ORAM).

Definition 4.1 (Oblivious RAM for infinite access sequences).

For every two logical access sequences $x$ and $x'$ of infinite length, their corresponding (infinite length) probabilistic access sequences $A(x)$ and $A(x')$ are identically distributed in the following sense: For all positive integers $k$, if we truncate $A(x)$ and $A(x')$ to their first $k$ accesses, then the truncations $A_k(x)$ and $A_k(x')$ are identically distributed.

Concrete ORAM constructions to date have the property that future memory accesses in $x$ do not influence how the oblivious RAM interface accesses memory now:

Definition 4.2 (Causality).

For all $n$, $A(x_{n+1})$ extends the access sequence $A(x_n)$, where $x_n$ is the truncation of $x$ to the first $n$ accesses.

Assuming causality, Definition 4.1 implies Definition 3.1 for finite length input sequences $x$ and $x'$: Suppose that the lengths $|A(x)|$ and $|A(x')|$ are identically distributed. Since $x$ and $x'$ are finite length, $A(x)$ and $A(x')$ are also finite length. Because their lengths are identically distributed, there exists a maximum possible length $k$, i.e., $|A(x)|$ and $|A(x')|$ will be $\leq k$. We may pad $x$ and $x'$ to infinite length sequences $y$ and $y'$. Definition 4.1 teaches that $A(y)$ and $A(y')$ are identically distributed; in particular, the truncations $A_k(y)$ and $A_k(y')$ are identically distributed. Due to causality, $A(y)$ and $A(y')$ extend $A(x)$ and $A(x')$. This implies that $A(x)$ and $A(x')$ must be identically distributed; hence, Definition 3.1 holds for finite length input sequences $x$ and $x'$.

If in Definition 3.1 we use the interpretation of ‘identically distributed’ for infinite length sequences $A(x)$ and $A(x')$ given in Definition 4.1, then we may conclude that Definition 4.1 is equivalent to Definition 3.1 for unrestricted and possibly infinite length input sequences $x$ and $x'$.

4.1. A Stronger Definition

We can strengthen Definition 3.1 by requiring that $A(x)$ and $A(x')$ are identically distributed whenever the input lengths $|x|$ and $|x'|$ are equal, instead of whenever $|A(x)|$ and $|A(x')|$ are identically distributed:

Definition 4.3 (‘Strong’ Oblivious RAM).

For every two logical access sequences $x$ and $x'$ and their corresponding probabilistic access sequences $A(x)$ and $A(x')$, if the lengths $|x|$ and $|x'|$ are equal, then $A(x)$ and $A(x')$ are identically distributed.

Clearly, this strong definition for unrestricted and possibly infinite length input sequences $x$ and $x'$ implies Definition 4.1, since it covers the case where $|x| = |x'| = \infty$.

Assuming causality, it turns out that the strong Definition 4.3 restricted to finite length input sequences also implies Definition 4.1: Suppose that Definition 4.1 does not hold, i.e., there exist infinite length access sequences $x$ and $x'$ and an integer $k$ such that $A_k(x)$ and $A_k(x')$ are not identically distributed. Causality implies that $A_k(x) = A_k(x_m)$ and $A_k(x') = A_k(x'_{m'})$ for some (finite) integers $m$ and $m'$. Let $n = \max\{m, m'\}$. Then (by using causality) $A_k(x) = A_k(x_n)$ and $A_k(x') = A_k(x'_n)$. We conclude that $A(x_n)$ and $A(x'_n)$ are not identically distributed. This contradicts Definition 4.3 for $x_n$ and $x'_n$, which both have length $n$.

The next theorem enumerates our findings:

Theorem 4.4.

Assume causality. Then, the ‘strong’ ORAM Definition 4.3 for finite length input sequences implies Goldreich and Ostrovsky’s ORAM Definition 3.1 phrased for infinite length input sequences (as in Definition 4.1), and this in turn implies Goldreich and Ostrovsky’s ORAM Definition 3.1 restricted to finite length input sequences. Our bogus ORAM satisfies Goldreich and Ostrovsky’s ORAM Definition 3.1 restricted to finite length input sequences.

Finally, we notice that the above definitions can also be adopted in a Universal Composability framework as in (Fletcher, 2016).

4.2. Application

In the secure processor setting, an input sequence $x$ represents the LLC miss sequence. In practice, we may think of the processor as continuously accessing memory (DRAM) and therefore producing an infinite length input access sequence $x$. This sequence is produced by several programs being context switched in and out, some programs terminating and new ones starting. This shows that for an ORAM definition to be useful in the secure processor architecture setting, we require Goldreich and Ostrovsky’s ORAM Definition 3.1 phrased for infinite length input sequences. The termination channel is separated out, as the ORAM interface does not terminate and keeps on executing. If a program (module) terminates, it will communicate its computed result over a different I/O channel (or request input for continuing execution of a next program module). The moment at which this happens leaks information to an observing adversary – in fact, the adversary can be another program running on the secure processor whose own termination channel leaks to what extent it has been slowed down by the victim program’s use of shared resources. In Section 5 we propose a framework for analysing leakage over covert channels induced by shared resources.

In the secure processor architecture setting we do not need the ‘strong’ ORAM definition. It turns out, see Sections 4.3-4.4, that Path ORAM + background eviction (and other optimizations) satisfies Goldreich and Ostrovsky’s ORAM Definition 3.1 phrased for infinite length input sequences, and its security proof is straightforward. However, Path ORAM + background eviction does not satisfy the ‘strong’ ORAM definition. We notice that Path ORAM without optimizations such as background eviction does satisfy the ‘strong’ ORAM definition, and proving this requires a much more complex analysis (as one needs to show that the stash only overflows with negligible probability).

The ‘strong’ ORAM definition makes sense and is useful in the remote disk storage setting because we will access the remote storage in bursts of requests and we wish the ORAM interface to only reveal the length of the burst and nothing more – in this way the ‘strong’ ORAM implicitly provides a useful characterization of leakage through the timing channel (i.e., when accesses happen).

The above definitions translate to write-only ORAM: HIVE (Blass et al., 2014) and Li and Datta (Li and Datta, 2013) essentially use the ‘strong’ write-only ORAM definition, since these papers discuss the remote disk storage setting. Flat ORAM (Haider and van Dijk, 2016), on the other hand, is designed and optimized for the secure processor setting and is secure under the Goldreich and Ostrovsky equivalent of a write-only ORAM definition for infinite length input sequences. Flat ORAM does not (and does not need to) satisfy the ‘strong’ write-only ORAM definition.

4.3. Adapting ORAM Optimizations

An ORAM interface satisfying Definition 4.1 “automatically” caters for the arbitrary and dynamically changing rate of memory accesses per input access in $x$, and is therefore naturally a better fit for practical ORAM implementations (e.g., Path ORAM) in the secure processor setting when compared to Goldreich and Ostrovsky’s ORAM Definition 3.1 for finite length input sequences: the cumulative effect on the termination channel of the various performance optimizations outlined in Section 3.4 is incorporated in the proposed ORAM by definition. E.g., the additional accesses added by periodic ORAM schemes in order to hide the ORAM access timing, or the reduced number of accesses resulting from ORAM prefetching, only yield an altered access sequence $A(x)$, which still remains of infinite length.

4.4. Simplified Stash Analysis

Another crucial advantage of the proposed definition is that it greatly simplifies the stash size analysis for Path ORAM. As mentioned earlier, the stash must never overflow for the correctness and security of Path ORAM under the ‘strong’ ORAM Definition 4.3, which imposes certain restrictions on the minimum stash size and ORAM parameters, e.g., $Z \geq 5$. In contrast, according to Definition 4.1, it is totally acceptable to have a substantial percentage of background eviction accesses among the overall ORAM accesses, if needed, in order to prevent stash overflow for arbitrary parameter settings. The impact of this relaxed ORAM definition with any chosen parameter settings is instead reflected in the overall performance of the system. The system performance can then be benchmarked to tune the optimum settings for desired design points depending upon the application.

5. Privacy Leakage Analysis

Recall from Section 3.1 that a standard oblivious RAM protects only against leakage over the memory address channel. In this section, we first discuss common mitigation techniques for other leakage sources, e.g., the ORAM timing channel and termination channel. Later, we present a generic framework, called PRAXEN, that offers security vs. performance trade-offs against a wide range of hardware side channel attacks in a secure processing environment.

5.1. Timing Channel

5.1.1. Static Periodic Behavior

A straightforward approach to hide the ORAM timing behavior is to use a periodic ORAM scheme (Fletcher et al., 2012), as introduced in Section 3.4.3. An ORAM access is made strictly after predefined periods, where the access period is statically defined offline, i.e., before the program runs.

The security of this approach follows trivially, as it completely trades the timing channel leakage for the total runtime of the program, i.e., it alters only the termination channel behavior. Notice that even if the periodic ORAM controller dynamically changes some internal performance parameters, such as the prefetching rate or the threshold controlling the background eviction rate, the resultant ORAM access sequence being strictly periodic only alters the termination time of a program.

5.1.2. Dynamic Periodic Behavior

While the static periodic approach discussed above is secure, studies have shown that this approach can potentially result in significant performance overheads across a range of programs (Fletcher et al., 2014). On one hand, a constant rate of ORAM accesses throughout the program execution is desirable for security, whereas on the other hand, a dynamically varying access rate is desirable for performance. In order to achieve a balance between the two extremes, (Fletcher et al., 2014) proposes a framework that splits the program execution into coarse grained (logical) time epochs, and enforces, within each epoch, a strict ORAM access rate that is selected dynamically at the start of each epoch.

Let $N$ be the maximum program runtime in terms of the number of ORAM accesses, i.e., all programs can complete within $N$ ORAM accesses. Let $E$ denote the list of epochs of a program execution, or the epoch schedule, where each epoch is characterized by its number of ORAM accesses, and let $R$ denote the list of allowed ORAM access rates. While running a program during a given epoch, the secure processor is restricted to use a single ORAM access rate, and picks a new rate configuration at the start of the next epoch. Given $|E|$ epochs and $|R|$ rates, there are $|R|^{|E|}$ possible epoch schedules – which can potentially reveal the dynamic behavior of the program. Thus, the timing channel leakage alone can be upper bounded by $|E| \log_2 |R|$ bits. To control the amount of leakage, $|E|$ can be set to a small value, resulting in only a few bits of leakage while achieving good performance.
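
As a sanity check of this bound, a one-line Python computation (the function name is ours, not from (Fletcher et al., 2014)):

import math

def timing_leakage_bits(num_epochs, num_rates):
    # Upper bound on timing-channel leakage: |E| * log2(|R|) bits,
    # since there are |R| ** |E| possible epoch schedules.
    return num_epochs * math.log2(num_rates)

# e.g. 32 epochs with 8 allowed rates leak at most 96.0 bits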

5.2. Termination Channel

If the results of a program are sent back as soon as the application actually terminates, i.e., the actual termination time is visible to the adversary, sensitive information about the application’s input can be leaked by this behavior. Given the maximum number $N$ of ORAM accesses within which all programs terminate, the maximum number of termination traces/lengths that any program can possibly have is upper bounded by $N$, i.e., one trace per termination point. Therefore, applying the information theoretic argument from (Smith, 2009; Fletcher et al., 2014), at most $\log_2 N$ bits about the inputs can leak through the termination time alone per execution. In practice, due to the logarithmic dependence on $N$, termination time leakage is small. For example, a generous bound such as $N = 2^{64}$ works for all programs and gives at most 64 bits of leakage, which is very small if the user’s input is at least a few kilobytes. Further, we can reduce this leakage through discretization of the runtime: if we “round up” the termination time to the next multiple of $D$ accesses, the leakage is reduced to $\log_2 (N/D)$ bits. The overall leakage by both timing and termination channels is then at most $|E| \log_2 |R| + \log_2 (N/D)$ bits.
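
The corresponding computation (again, a sketch with our own function name):

import math

def termination_leakage_bits(n_max, round_to=1):
    # Upper bound on termination-channel leakage: log2(N / D) bits when the
    # termination time is rounded up to the next multiple of D accesses.
    return math.log2(n_max / round_to)

# e.g. N = 2**64 leaks at most 64.0 bits; rounding to multiples of
# D = 2**20 reduces this to 44.0 bits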

5.3. Other Hardware Side Channels

While an outside adversary can only monitor an ORAM’s external side channels, such as the timing and termination channels, in a modern multi-core secure processor there also exist several internal hardware-based side channels due to the inevitable sharing of various structures. SVF (Demme et al., 2012) experimentally measured information leakage in a processor and showed that any “shared structure” can leak information. In particular, privacy leakage over a shared cache has been explicitly demonstrated in (Apecechea et al., 2014; Zhang et al., 2012) for two VMs sharing a cache (without TEE support), showing that secret key bits can leak from one VM to the other even if the VMs are placed on different cores in the same machine.

Researchers have explored how to counter timing channel attacks due to cache interference (Wang and Lee, 2007; Domnitser et al., 2012), where solutions rely on either static or dynamic cache partitioning. The static approach lowers processor efficiency but has a strong security guarantee: no information leakage. Current solutions based on dynamic cache partitioning improve processor efficiency but do not guarantee bounds on information leakage. We note that efficient cache partitioning is important as it improves processor efficiency (Suh et al., 2004; Qureshi and Patt, 2006; Xie and Loh, 2009; Lee et al., 2011; Sanchez and Kozyrakis, 2012; Beckmann and Sanchez, 2013; Kasture and Sanchez, 2014).

Researchers have also explored how to counter timing channel attacks due to network-on-chip interference in multi-cores (Wang and Suh, 2012; Wassel et al., 2013). Both these schemes use static network partitioning to enable information-leak protection through the processor communication patterns.

Finally, the most important shared resource channel in the ORAM context that leaks information from the hardware layer is the shared ORAM controller that connects (via a traditional memory controller) the processor to the off-chip memory. A recent work (Bao and Srivastava, 2017) shows that, under Path ORAM, an adversary running a malicious thread at one of the cores of the multi-core system can learn sensitive information about the behavior of user thread(s) running on other core(s) by introducing contention at the shared ORAM controller and observing the service times of its own requests. Again, a static partitioning scheme for this information leakage channel can be used at the cost of efficiency.

We want to design a generic dynamic resource partitioning scheme, applicable to any shared resource(s), based on the insight that leakage can be quantified using information theory (Smith, 2009; Askarov et al., 2010; Zhang et al., 2011), in order to achieve a balance between security and performance.

5.4. PRAXEN: A PRivacy Aware eXecution ENvironment

1:procedure PrivacyAwareScheduler($\mathcal{H}$)
2:     while True do
3:         if CurrentTime $\geq$ NextDecisionPoint then ▷ using $\geq$ makes the approach reliable
4:             Obtain $(\mathcal{H}_{i^*}, PI_{i^*})$ for the thread $i^*$ corresponding to NextDecisionPoint
5:             $(c_{i^*}, t^{next}_{i^*}, C^{next}_{i^*}) \leftarrow Alg(\mathcal{H}_{i^*}, PI_{i^*}, \text{NextDecisionPoint})$
6:             Append $(i^*, \text{NextDecisionPoint}, c_{i^*}, t^{next}_{i^*}, C^{next}_{i^*})$ to $\mathcal{H}$
7:             Change configuration to the one indicated in $c_{i^*}$ at time NextDecisionPoint
8:             NextDecisionPoint $\leftarrow \min_i t^{next}_i$
9:         end if
10:     end while
11:end procedure
Algorithm 2 Resource Scheduling

In order to control privacy leakage while still dynamically sharing resources for efficiency, we propose a generic resource scheduling strategy which takes only a small, yet sufficient, amount of information about the current and past execution of application threads into account.

Each application thread $i$ is associated with a configuration $c_i$ which serves as input to the resource scheduler for allocating resources to each thread, i.e., the scheduler assigns resources to each thread according to some (probabilistic) algorithm $Alg$.

For example, based on the collection of configurations $\{c_i\}$, the resource scheduler may first, by using interpolation and extrapolation, reconstruct a complete approximate picture of all performance indicators which measure how all resources are being used by each of the threads. This rough picture is used to allocate the current resources to each thread – this allocation will not change (it is static) until one of the application threads’ configurations changes. The reason not to use currently measured performance indicators (as an input to $Alg$) for scheduling is that these change dynamically with execution decisions based on each application thread’s state, and this gives an uncontrolled amount of leakage. As we will see, the above static allocation allows precise control of privacy leakage from one application to another.

We call a change of thread $i$’s configuration from a current configuration $c_i$ to a new configuration $c_i'$ a decision point for $i$. Each decision point is associated with an actual time $t$. At a decision point, the scheduler takes the real, i.e., actually measured, performance indicators $PI_i$ of thread $i$ in combination with its history of resource allocations to select a new configuration together with

  • A future time $t^{next}_i$ at which the next decision point for $i$ occurs, as well as

  • A set of future configurations $C^{next}_i$ from which the next configuration for $i$ will be taken.

We record the tuples $(i, t, c_i, t^{next}_i, C^{next}_i)$ in a history $\mathcal{H}$ ordered by time $t$. Notice that according to this ordering, the above requirements state

(1)    $t < t^{next}_i$, and the configuration selected at time $t^{next}_i$ must be an element of $C^{next}_i$.

So, at time $t$ a decision has been made about what configuration for $i$ can be selected at the next decision point, and when this decision is applied.

For a current time $t$ we can extract from the past history $\mathcal{H}$ of decision points the most recent tuples $(i, t_i, c_i, t^{next}_i, C^{next}_i)$ with $t_i \leq t$ for each thread $i$. We compute the time of the next upcoming decision point as $t^{next} = \min_i t^{next}_i$.

Let $i^*$ be the application thread which corresponds to the upcoming decision point, i.e., $t^{next} = t^{next}_{i^*}$. At this decision point the scheduler is allowed to change only $i^*$’s configuration: the scheduler computes

(2)    $(c_{i^*}, t^{next}_{i^*}, C^{next}_{i^*}) \leftarrow Alg(\mathcal{H}_{i^*}, PI_{i^*}, t^{next})$

where $\mathcal{H}_{i^*}$ represents the past history of decision points of thread $i^*$ and $PI_{i^*}$ represents the (history of) measured performance indicators of only thread $i^*$. Here the new tuple satisfies (1), and $c_{i^*}$ is an element of the set $C^{next}_{i^*}$ committed at $i^*$’s previous decision point. If the scheduler decides not to change $i^*$’s configuration, then $c_{i^*}$ equals the current configuration in (2). Our approach is formalized in Algorithm 2.
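
The two-stage structure of $Alg$ that the leakage analysis below relies on can be sketched in Python as follows (a hypothetical structure with our own names; only stage 1 may read the performance indicators $PI$):

def alg(history, pi, t_next, candidates, epoch_len=10**6):
    # Sketch of Eq. (2). Stage 1 picks the new configuration from the
    # previously committed candidate set, using the measured performance
    # indicators 'pi'. Stage 2 commits the next decision time and candidate
    # set WITHOUT looking at 'pi' (see the bullets later in Section 5.4).
    c_new = max(candidates, key=lambda c: pi.get(c, 0))   # stage 1
    t_future = t_next + epoch_len                         # stage 2
    c_future = candidates            # e.g., reuse the same candidate set
    return c_new, t_future, c_future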

Leakage Analysis: In the worst case all cores/threads, except for one, can collaborate (i.e., act as malicious threads) to observe one specific (victim) thread $i^*$ (and, in particular, observe its configuration changes). Note that the collaborating threads can only observe the victim thread through changes in resource allocation. We argue that this information is fully captured by $i^*$’s configuration changes and the times when these changes happened: The reason is that each epoch has (a) a static resource allocation among threads – e.g., DRAM bandwidth, ORAM access rate etc. – preventing internal side channel leakages within an epoch, and (b) indistinguishability of real vs. dummy ORAM accesses – preventing external side channel leakages within an epoch. Therefore, across time the collaborating threads can only observe and use the output of $Alg$ in order to extract information about thread $i^*$. Hence, the privacy leakage of thread $i^*$ is at most the information about thread $i^*$ contained in $\mathcal{H}$, which includes the history of configurations (that form the inputs to $Alg$). We notice that each decision point at time $t$ in $\mathcal{H}$ is the result of an algorithm $Alg$ which takes as inputs a history of past decision points before time $t$ together with the corresponding $PI$ (which is also a function of past decision points before time $t$). Therefore, by using induction on $t$, we can prove that only the decision points corresponding to thread $i^*$ in $\mathcal{H}$ contribute to leakage (through $Alg$) of thread $i^*$. We conclude that the privacy leakage of a specific thread $i^*$ is at most the information about thread $i^*$ given by the history $\mathcal{H}_{i^*}$ of $i^*$’s decision points (i.e., configuration changes and the times at which these happen): The number of leaked bits is at most the Shannon entropy $H(\mathcal{H}_{i^*})$

and can be bounded as follows:

In order to compute (2), assume that $Alg$

  • first computes the new configuration $c_{i^*}$ based on inputs $\mathcal{H}_{i^*}$, $PI_{i^*}$, and $t^{next}$, and

  • next computes the new set of possible future configurations $C^{next}_{i^*}$ and the possible future decision point $t^{next}_{i^*}$ based on inputs $\mathcal{H}_{i^*}$, $c_{i^*}$, and $t^{next}$ (but not $PI_{i^*}$, otherwise an upper bound cannot be proven).

We may order the random variables in $\mathcal{H}_{i^*} = (t_j, c_j, t^{next}_j, C^{next}_j)_{j=1}^{n}$ as follows:

$H(\mathcal{H}_{i^*}) = \sum_{j=1}^{n} H(t^{next}_j, C^{next}_j \mid c_j, \mathcal{H}_{j-1}) + \sum_{j=1}^{n} H(c_j \mid \mathcal{H}_{j-1}),$

where $\mathcal{H}_{j-1}$ denotes the history up to and including the $(j-1)$-st decision point (note that $t_j = t^{next}_{j-1}$ is determined by $\mathcal{H}_{j-1}$). The first sum equals 0 because $Alg$ computes $(t^{next}_j, C^{next}_j)$ as a (deterministic) function of $\mathcal{H}_{j-1}$ and $c_j$ (up to moment $t_j$); and the second sum is upper bounded by $\sum_{j=1}^{n} \log_2 |C^{next}_{j-1}|$ because $c_j \in C^{next}_{j-1}$. Let $C_{\max} = \max_j |C^{next}_j|$; then the thread leaks at most $n \log_2 C_{\max}$ bits over $n$ decision points.

Given that the algorithms computing $c_{i^*}$ and $(t^{next}_{i^*}, C^{next}_{i^*})$ have enough freedom to reallocate resources, our framework offers a controlled leakage model while maintaining optimum performance. This methodology can be applied to almost all resource sharing paradigms. It is particularly useful in settings with a finite, bounded leakage budget.

6. Conclusion

We present a first rigorous study of the original oblivious RAM definition presented by Goldreich and Ostrovsky, in view of modern practical ORAMs (e.g., Path ORAM), and demonstrate the gap between theoretical foundations and real ORAM implementations. Goldreich and Ostrovsky’s ORAM definition, appropriately interpreted for infinite length input access sequences, separates out the ORAM termination channel and fits modern practical ORAM implementations in the secure processor setting. The proposed definition greatly simplifies the Path ORAM security analysis by relaxing the constraints around the stash size and overflow probability, essentially transforming the security argument into a performance consideration problem. A generic framework for dynamic resource partitioning has also been proposed, which mitigates sensitive information leakage via internal hardware-based side channels – such as contention on shared resources – with minimal performance loss.

Acknowledgements.
The work is partially supported by NSF grant CNS-1413996 for MACS: A Modular Approach to Cloud Security.

References

  • Apecechea et al. (2014) Gorka Irazoqui Apecechea, Mehmet Sinan Inci, Thomas Eisenbarth, and Berk Sunar. 2014. Fine grain Cross-VM Attacks on Xen and VMware are possible! Cryptology ePrint Archive, Report 2014/248. (2014). http://eprint.iacr.org/.
  • Arbaugh et al. (1997) W. Arbaugh, D. Farber, and J. Smith. 1997. A Secure and Reliable Bootstrap Architecture. In Proceedings of the 1997 IEEE Symposium on Security and Privacy. 65–71. citeseer.nj.nec.com/arbaugh97secure.html
  • Askarov et al. (2010) Aslan Askarov, Danfeng Zhang, and Andrew C. Myers. 2010. Predictive Black-box Mitigation of Timing Channels. In Proceedings of the 17th ACM Conference on Computer and Communications Security (CCS ’10). ACM, New York, NY, USA, 297–307. https://doi.org/10.1145/1866307.1866341
  • Bao and Srivastava (2017) C. Bao and A. Srivastava. 2017. Exploring Timing Side-channel Attacks on Path-ORAMs. International Symposium on Hardware Oriented Security and Trust (HOST). (2017).
  • Beckmann and Sanchez (2013) Nathan Beckmann and Daniel Sanchez. 2013. Jigsaw: Scalable Software-defined Caches. In Proceedings of the 22Nd International Conference on Parallel Architectures and Compilation Techniques (PACT ’13). IEEE Press, Piscataway, NJ, USA, 213–224. http://dl.acm.org/citation.cfm?id=2523721.2523752
  • Blass et al. (2014) Erik-Oliver Blass, Travis Mayberry, Guevara Noubir, and Kaan Onarlioglu. 2014. Toward robust hidden volumes using write-only oblivious RAM. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security. ACM, 203–214.
  • Boneh et al. (2011) Dan Boneh, David Mazieres, and Raluca Ada Popa. 2011. Remote Oblivious Storage: Making Oblivious RAM practical. Manuscript, http://dspace.mit.edu/bitstream/handle/1721.1/62006/MIT-CSAIL-TR-2011-018.pdf. (2011).
  • Champagne and Lee (2010) David Champagne and Ruby B Lee. 2010. Scalable architectural support for trusted software. In High Performance Computer Architecture (HPCA), 2010 IEEE 16th International Symposium on. IEEE, 1–12.
  • Costan et al. (2016) Victor Costan, Ilia Lebedev, and Srinivas Devadas. 2016. Sanctum: Minimal Hardware Extensions for Strong Software Isolation. In 25th USENIX Security Symposium (USENIX Security 16). USENIX Association, Austin, TX, 857–874. https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/costan
  • Damgård et al. (2011) Ivan Damgård, Sigurd Meldgaard, and Jesper Buus Nielsen. 2011. Perfectly Secure Oblivious RAM without Random Oracles. In TCC.
  • Demme et al. (2012) J. Demme, R. Martin, A. Waksman, and S. Sethumadhavan. 2012. Side-channel vulnerability factor: A metric for measuring information leakage. In Computer Architecture (ISCA), 2012 39th Annual International Symposium on. 106–117. https://doi.org/10.1109/ISCA.2012.6237010
  • Domnitser et al. (2012) Leonid Domnitser, Aamer Jaleel, Jason Loew, Nael Abu-Ghazaleh, and Dmitry Ponomarev. 2012. Non-monopolizable Caches: Low-complexity Mitigation of Cache Side Channel Attacks. ACM Trans. Archit. Code Optim. 8, 4, Article 35 (Jan. 2012), 21 pages. https://doi.org/10.1145/2086696.2086714
  • Fletcher et al. (2015) Christopher Fletcher, Ling Ren, Albert Kwon, Marten van Dijk, and Srinivas Devadas. 2015. Freecursive ORAM: [Nearly] Free Recursion and Integrity Verification for Position-based Oblivious RAM. In Proceedings of the Int’l Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS).
  • Fletcher et al. (2014) Christopher Fletcher, Ling Ren, Xiangyao Yu, Marten Van Dijk, Omer Khan, and Srinivas Devadas. 2014. Suppressing the Oblivious RAM Timing Channel While Making Information Leakage and Program Efficiency Trade-offs. In Proceedings of the Int’l Symposium On High Performance Computer Architecture.
  • Fletcher et al. (2012) Christopher Fletcher, Marten van Dijk, and Srinivas Devadas. 2012. Secure Processor Architecture for Encrypted Computation on Untrusted Programs. In Proceedings of the 7th ACM CCS Workshop on Scalable Trusted Computing; an extended version is located at http://csg.csail.mit.edu/pubs/memos/Memo508/memo508.pdf (Master’s thesis). 3–8.
  • Fletcher (2016) Christopher Wardlaw Fletcher. 2016. Oblivious RAM: from theory to practice. Ph.D. Dissertation. Massachusetts Institute of Technology.
  • Goldreich and Ostrovsky (1996) O. Goldreich and R. Ostrovsky. 1996. Software protection and simulation on oblivious RAMs. J. ACM 43, 3 (1996), 431–473.
  • Goodrich et al. (2011) Michael T. Goodrich, Michael Mitzenmacher, Olga Ohrimenko, and Roberto Tamassia. 2011. Oblivious RAM simulation with efficient worst-case access overhead. In Proceedings of the 3rd ACM workshop on Cloud computing security workshop (CCSW ’11). ACM, New York, NY, USA, 95–100. https://doi.org/10.1145/2046660.2046680
  • Goodrich et al. (2012a) Michael T. Goodrich, Michael Mitzenmacher, Olga Ohrimenko, and Roberto Tamassia. 2012a. Practical oblivious storage. In Proceedings of the second ACM conference on Data and Application Security and Privacy (CODASPY ’12). ACM, New York, NY, USA, 13–24. https://doi.org/10.1145/2133601.2133604
  • Goodrich et al. (2012b) Michael T. Goodrich, Michael Mitzenmacher, Olga Ohrimenko, and Roberto Tamassia. 2012b. Privacy-preserving group data access via stateless oblivious RAM simulation. In SODA.
  • Grawrock (2006) David Grawrock. 2006. The Intel Safer Computing Initiative: Building Blocks for Trusted Computing. Intel Press.
  • Haider and van Dijk (2016) Syed Kamran Haider and Marten van Dijk. 2016. Flat ORAM: A Simplified Write-Only Oblivious RAM Construction for Secure Processor Architectures. arXiv preprint arXiv:1611.01571 (2016).
  • Islam et al. (2012) Mohammad Islam, Mehmet Kuzu, and Murat Kantarcioglu. 2012. Access Pattern disclosure on Searchable Encryption: Ramification, Attack and Mitigation. In Network and Distributed System Security Symposium (NDSS).
  • Jaleel (2010) Aamer Jaleel. 2010. Memory characterization of workloads using instrumentation-driven simulation. Web Copy: http://www.glue.umd.edu/ajaleel/workload. (2010).
  • Kasture and Sanchez (2014) Harshad Kasture and Daniel Sanchez. 2014. Ubik: Efficient Cache Sharing with Strict QoS for Latency-Critical Workloads. In Proceedings of the International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS ’14).
  • Lee et al. (2011) Hyunjin Lee, Sangyeun Cho, and B.R. Childers. 2011. CloudCache: Expanding and shrinking private caches. In High Performance Computer Architecture (HPCA), 2011 IEEE 17th International Symposium on. 219–230. https://doi.org/10.1109/HPCA.2011.5749731
  • Li and Datta (2013) Lichun Li and Anwitaman Datta. 2013. Write-only oblivious RAM-based privacy-preserved access of outsourced data. International Journal of Information Security (2013), 1–20.
  • Lie et al. (2003a) D. Lie, J. Mitchell, C. Thekkath, and M. Horwitz. 2003a. Specifying and Verifying Hardware for Tamper-Resistant Software. In Proceedings of the IEEE Symposium on Security and Privacy.
  • Lie et al. (2003b) D. Lie, C. Thekkath, and M. Horowitz. 2003b. Implementing an Untrusted Operating System on Trusted Hardware. In Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles. 178–192.
  • Lie et al. (2000) David Lie, Chandramohan Thekkath, Mark Mitchell, Patrick Lincoln, Dan Boneh, John Mitchell, and Mark Horowitz. 2000. Architectural Support for Copy and Tamper Resistant Software. In Proceedings of the Int’l Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS-IX). 168–177.
  • Maas et al. (2013) Martin Maas, Eric Love, Emil Stefanov, Mohit Tiwari, Elaine Shi, Krste Asanovic, John Kubiatowicz, and Dawn Song. 2013. Phantom: Practical oblivious computation in a secure processor. In Proceedings of the 2013 ACM SIGSAC conference on Computer & communications security. ACM, 311–324.
  • McKeen et al. (2013) Frank McKeen, Ilya Alexandrovich, Alex Berenzon, Carlos V Rozas, Hisham Shafi, Vedvyas Shanbhogue, and Uday R Savagaonkar. 2013. Innovative instructions and software model for isolated execution. In HASP@ISCA. Article 10.
  • Ostrovsky (1990) R. Ostrovsky. 1990. Efficient computation on oblivious RAMs. In STOC.
  • Ostrovsky and Shoup (1997) Rafail Ostrovsky and Victor Shoup. 1997. Private Information Storage (Extended Abstract). In STOC. 294–303.
  • Qureshi and Patt (2006) Moinuddin K. Qureshi and Yale N. Patt. 2006. Utility-Based Cache Partitioning: A Low-Overhead, High-Performance, Runtime Mechanism to Partition Shared Caches. In Proceedings of the 39th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO 39). IEEE Computer Society, Washington, DC, USA, 423–432. https://doi.org/10.1109/MICRO.2006.49
  • Ren et al. (2013) Ling Ren, Xiangyao Yu, Christopher Fletcher, Marten van Dijk, and Srinivas Devadas. 2013. Design Space Exploration and Optimization of Path Oblivious RAM in Secure Processors. In Proceedings of the Int’l Symposium on Computer Architecture. Available at Cryptology ePrint Archive, Report 2013/76.
  • Sanchez and Kozyrakis (2012) Daniel Sanchez and Christos Kozyrakis. 2012. Scalable and Efficient Fine-Grained Cache Partitioning with Vantage. IEEE Micro 32, 3 (2012), 26–37. https://doi.org/10.1109/MM.2012.19
  • Sarmenta et al. (2006) Luis F. G. Sarmenta, Marten van Dijk, Charles W. O’Donnell, Jonathan Rhodes, and Srinivas Devadas. 2006. Virtual Monotonic Counters and Count-Limited Objects using a TPM without a Trusted OS. In Proceedings of the 1st STC’06.
  • Shi et al. (2011) E. Shi, T.-H. H. Chan, E. Stefanov, and M. Li. 2011. Oblivious RAM with Worst-Case Cost. In Asiacrypt. 197–214.
  • Smith (2009) Geoffrey Smith. 2009. On the Foundations of Quantitative Information Flow. In Proceedings of the 12th International Conference on Foundations of Software Science and Computational Structures: Held As Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2009 (FOSSACS ’09). Springer-Verlag, Berlin, Heidelberg, 288–302. https://doi.org/10.1007/978-3-642-00596-1_21
  • Stefanov et al. (2012) E. Stefanov, E. Shi, and D. Song. 2012. Towards practical oblivious RAM. In NDSS.
  • Stefanov et al. (2013) Emil Stefanov, Marten van Dijk, Elaine Shi, Christopher Fletcher, Ling Ren, Xiangyao Yu, and Srinivas Devadas. 2013. Path ORAM: An Extremely Simple Oblivious RAM Protocol. In Proceedings of the ACM Computer and Communication Security Conference.
  • Suh et al. (2003) G. Edward Suh, Dwaine Clarke, Blaise Gassend, Marten van Dijk, and Srinivas Devadas. 2003. Aegis: Architecture for Tamper-Evident and Tamper-Resistant Processing. In Proceedings of the ICS (MIT-CSAIL-CSG-Memo-474 is an updated version). ACM, New York. http://csg.csail.mit.edu/pubs/memos/Memo-474/Memo-474.pdf (revised one)
  • Suh et al. (2005) G. Edward Suh, Charles W. O’Donnell, Ishan Sachdev, and Srinivas Devadas. 2005. Design and Implementation of the Aegis Single-Chip Secure Processor Using Physical Random Functions. In Proceedings of the ISCA ’05. ACM, New York. http://csg.csail.mit.edu/pubs/memos/Memo-483/Memo-483.pdf
  • Suh et al. (2004) G. E. Suh, L. Rudolph, and S. Devadas. 2004. Dynamic Partitioning of Shared Cache Memory. J. Supercomput. 28, 1 (April 2004), 7–26. https://doi.org/10.1023/B:SUPE.0000014800.27383.8f
  • Trusted Computing Group (2004) Trusted Computing Group. 2004. TCG Specification Architecture Overview Revision 1.2. http://www.trustedcomputinggroup.com/home. (2004).
  • Wang and Suh (2012) Yao Wang and G. Edward Suh. 2012. Efficient Timing Channel Protection for On-Chip Networks. In Proceedings of the 2012 IEEE/ACM Sixth International Symposium on Networks-on-Chip (NOCS ’12). IEEE Computer Society, Washington, DC, USA, 142–151. https://doi.org/10.1109/NOCS.2012.24
  • Wang and Lee (2007) Zhenghong Wang and Ruby B. Lee. 2007. New Cache Designs for Thwarting Software Cache-based Side Channel Attacks. In Proceedings of the 34th Annual International Symposium on Computer Architecture (ISCA ’07). ACM, New York, NY, USA, 494–505. https://doi.org/10.1145/1250662.1250723
  • Wassel et al. (2013) Hassan M. G. Wassel, Ying Gao, Jason K. Oberg, Ted Huffmire, Ryan Kastner, Frederic T. Chong, and Timothy Sherwood. 2013. SurfNoC: A Low Latency and Provably Non-interfering Approach to Secure Networks-on-chip. SIGARCH Comput. Archit. News 41, 3 (June 2013), 583–594. https://doi.org/10.1145/2508148.2485972
  • Williams and Sion (2012) Peter Williams and Radu Sion. 2012. Single round access privacy on outsourced storage. In Proceedings of the 2012 ACM conference on Computer and communications security (CCS ’12). ACM, New York, NY, USA, 293–304. https://doi.org/10.1145/2382196.2382229
  • Xie and Loh (2009) Yuejian Xie and Gabriel H. Loh. 2009. PIPP: Promotion/Insertion Pseudo-partitioning of Multi-core Shared Caches. In Proceedings of the 36th Annual International Symposium on Computer Architecture (ISCA ’09). ACM, New York, NY, USA, 174–183. https://doi.org/10.1145/1555754.1555778
  • Yu et al. (2015) Xiangyao Yu, Syed Kamran Haider, Ling Ren, Christopher Fletcher, Albert Kwon, Marten van Dijk, and Srinivas Devadas. 2015. PrORAM: Dynamic Prefetcher for Oblivious RAM. In Proceedings of the 42nd Annual International Symposium on Computer Architecture. ACM, 616–628.
  • Zhang et al. (2011) Danfeng Zhang, Aslan Askarov, and Andrew C. Myers. 2011. Predictive Mitigation of Timing Channels in Interactive Systems. In Proceedings of the 18th ACM Conference on Computer and Communications Security (CCS ’11). ACM, New York, NY, USA, 563–574. https://doi.org/10.1145/2046707.2046772
  • Zhang et al. (2012) Yinqian Zhang, Ari Juels, Michael K Reiter, and Thomas Ristenpart. 2012. Cross-VM side channels and their use to extract private keys. In Proceedings of the 2012 ACM conference on Computer and communications security. ACM, 305–316.
  • Zhuang et al. (2004) Xiaotong Zhuang, Tao Zhang, and Santosh Pande. 2004. HIDE: an infrastructure for efficiently protecting information leakage on the address bus. In Proceedings of the 11th ASPLOS. https://doi.org/10.1145/1024393.1024403