ZombieLoad: Cross-Privilege-Boundary Data Sampling

Michael Schwarz (Graz University of Technology, michael.schwarz@iaik.tugraz.at), Moritz Lipp (Graz University of Technology, moritz.lipp@iaik.tugraz.at), Daniel Moghimi (Worcester Polytechnic Institute, amoghimi@wpi.edu), Jo Van Bulck (imec-DistriNet, KU Leuven, jo.vanbulck@cs.kuleuven.be), Julian Stecklina (Cyberus Technology, julian.stecklina@cyberus-technology.de), Thomas Prescher (Cyberus Technology, thomas.prescher@cyberus-technology.de), and Daniel Gruss (Graz University of Technology, daniel.gruss@iaik.tugraz.at)

In early 2018, Meltdown first showed how to read arbitrary kernel memory from user space by exploiting side-effects from transient instructions. While this attack has been mitigated through stronger isolation boundaries between user and kernel space, Meltdown inspired an entirely new class of fault-driven transient execution attacks. Particularly, over the past year, Meltdown-type attacks have been extended to not only leak data from the L1 cache but also from various other microarchitectural structures, including the FPU register file and store buffer.

In this paper, we present the ZombieLoad attack which uncovers a novel Meltdown-type effect in the processor’s previously unexplored fill-buffer logic. Our analysis shows that faulting load instructions (i.e., loads that have to be re-issued for either architectural or microarchitectural reasons) may transiently dereference unauthorized destinations previously brought into the fill buffer by the current or a sibling logical CPU. Hence, we report data leakage of recently loaded stale values across logical cores. We demonstrate ZombieLoad’s effectiveness in a multitude of practical attack scenarios across CPU privilege rings, OS processes, virtual machines, and SGX enclaves. We discuss both short and long-term mitigation approaches and arrive at the conclusion that disabling hyperthreading is the only possible workaround to prevent this extremely powerful attack on current processors.

side-channel attack, transient execution, fill buffer, Meltdown

1. Introduction

In 2018, Meltdown (Lipp2018meltdown) was the first microarchitectural attack to completely breach the security boundary between user and kernel space and, thus, allowed attackers to leak arbitrary data. While Meltdown was fixed using stronger isolation between user and kernel space, the underlying principle turned out to represent an entire class of transient-execution attacks (Canella2019). Over the past year, researchers have demonstrated that Meltdown-type attacks can not only leak kernel data to user space, but also leak data across user processes, virtual machines, and SGX enclaves (Vanbulck2018; Weisse2018foreshadowNG). Furthermore, data can not only be leaked from the L1 cache but also from other microarchitectural structures, including the register file (Stecklina2018LazyFP), the line-fill buffer (Lipp2018meltdown; VanSchaik2019RIDL), and, as shown in concurrent work, the store buffer (Genkin2019storebuffer).

Instead of executing the instruction stream in order, most modern processors can re-order instructions while maintaining architectural equivalence, creating the illusion of an in-order machine. Instructions may thus already have been executed by the time the CPU detects that a previous instruction raises an exception. Hence, such instructions following the faulting instruction (i.e., transient instructions) are rolled back. While the rollback ensures that there are no architectural effects, side effects might remain in the microarchitectural state. Most Meltdown-type data leaks exploit overly aggressive performance optimizations around out-of-order execution.

For many years, the microarchitectural state was considered invisible to applications, and hence security considerations were often limited to the architectural state. Specifically, microarchitectural elements often do not distinguish between different applications or privilege levels (Jang2016; Pessl2016; Schwarz2017SGX; Lipp2018meltdown; Schwarz2018KeyDrown; Evtyushkin2018BranchScope; Canella2019).

In this paper, we show that, first, there still are unexplored microarchitectural buffers, and second, both architectural and microarchitectural faults can be exploited. With our notion of “microarchitectural faults”, i.e., faults that cause a memory request to be re-issued internally without ever becoming architecturally visible, we demonstrate that Meltdown-type attacks can also be triggered without raising an architectural exception such as a page fault. Based on this, we demonstrate ZombieLoad, a novel, extremely powerful Meltdown-type attack targeting the fill-buffer logic.

ZombieLoad exploits the fact that load instructions which have to be re-issued internally may first transiently compute on stale values belonging to previous memory operations from either the current or a sibling hyperthread. Using established transient execution attack techniques, adversaries can recover the values of such “zombie load” operations. Importantly, in contrast to all previously known transient execution attacks (Canella2019), ZombieLoad reveals recent data values without adhering to any explicit address-based selectors. Hence, we consider ZombieLoad an instance of a novel type of microarchitectural data sampling attacks. We present microarchitectural data sampling as the missing link between traditional memory-based side channels, which correlate data addresses within a victim execution, and existing Meltdown-type transient execution attacks, which can directly recover data values belonging to an explicit address. In this paper, we combine primitives from traditional side-channel attacks with incidental data sampling in the time domain to construct extremely powerful attacks with targeted leakage in the address domain. This not only opens up new attack avenues, but also re-enables attacks that were previously assumed to be mitigated.

We demonstrate ZombieLoad’s real-world implications in a multitude of practical attack scenarios that leak across processes, privilege boundaries, and even across logical CPU cores. Furthermore, we show that we can leak Intel SGX enclave secrets loaded from a sibling logical core. We demonstrate that ZombieLoad attackers may extract sealing keys from Intel’s architectural quoting enclave, ultimately breaking SGX’s confidentiality and remote attestation guarantees. ZombieLoad is furthermore not limited to native code execution, but also works across virtualization boundaries. Hence, virtual machines can attack not only the hypervisor but also different virtual machines running on a sibling logical core. We conclude that disabling hyperthreading, in addition to flushing several microarchitectural states during context switches, is the only possible workaround to prevent this extremely powerful attack.


The main contributions of this work are:

  1. We present ZombieLoad, a powerful data sampling attack leaking data accessed on the same or sibling hyperthread.

  2. We combine incidental data sampling in the time domain with traditional side-channel primitives to construct a targeted information flow similar to regular Meltdown attacks.

  3. We demonstrate ZombieLoad in several real-world scenarios: cross-process, cross-VM, user-to-kernel, and SGX.

  4. We show that ZombieLoad breaks the security guarantees provided by Intel SGX.

  5. We are the first to perform post-processing of the leaked data within the transient domain to eliminate noise.


Section 2 provides background. Section 3 provides an overview of ZombieLoad and introduces a novel classification scheme for memory-based side-channel attacks. Section 4 describes attack scenarios and the respective attacker models. Section 5 introduces and evaluates the basic primitives required for mounting ZombieLoad. Section 6 demonstrates ZombieLoad in real-world attack scenarios. We then discuss possible countermeasures and conclude.

Responsible Disclosure.

We provided Intel with a PoC leaking uncacheable-typed memory locations from a concurrent hyperthread on March 28, 2018. We clarified to Intel on May 30, 2018, that we attribute the source of this leakage to the LFB. In our experiments, this works identically for Foreshadow (Meltdown-P), undermining the completeness of L1-flush-based mitigations. This issue was acknowledged by Intel and tracked under CVE-2019-11091. We responsibly disclosed the main attack presented in this paper to Intel on April 12, 2019. Intel verified and acknowledged our findings and assigned CVE-2018-12130 to this issue. Both issues were part of an embargo ending on May 14, 2019.

2. Background

In this section, we describe the background required for this paper.

2.1. Transient Execution Attacks

Today’s high-performance processors typically implement an out-of-order execution design, allowing the CPU to utilize different execution units in parallel. The instruction stream is decoded in-order into simpler micro-operations (µops) (Fog2016) which can be executed as soon as the required operands are available. A dedicated reorder buffer stores intermediate results and ensures that instruction results are committed to the architectural state in the order specified by the program’s instruction stream. Any fault that occurred during the execution of an instruction is handled at instruction retirement, leading to a pipeline flush which squashes any outstanding µop results from the reorder buffer.

In addition, modern CPUs employ speculative execution optimizations to avoid stalling the instruction pipeline until a conditional branch is resolved. The processor predicts the most likely outcome of the branch and continues execution along that direction. If the branch is resolved and the prediction was correct, the speculative results retire in-order yielding a measurable performance improvement. On the other hand, if the prediction was wrong, the pipeline is flushed, and any speculative results are squashed in the reorder buffer. We refer to instructions that are executed speculatively or out-of-order but whose results are never architecturally committed as transient instructions (Canella2019; Lipp2018meltdown; Vanbulck2018).

While the results and the architectural effects of transient instructions are discarded, measurable microarchitectural side effects may remain and are not reverted. Attacks that exploit these side effects to observe sensitive information are called transient execution attacks (Lipp2018meltdown; Kocher2019spectre; Canella2019). Typically, these attacks utilize a cache-based covert channel to transmit the secret data observed transiently from the microarchitectural domain to an architectural state. However, other covert channels can be utilized as well (Schwarz2018netspectre; Bhattacharyya2019smotherspectre). In line with a recent exhaustive survey (Canella2019), we refer to attacks exploiting misprediction (Kocher2019spectre; Kiriansky2018speculative; Koruyeh2018spectre5; Maisuradze2018spectre5; Horn2018spectre4) as Spectre-type, whereas attacks exploiting transient execution after a CPU exception (Lipp2018meltdown; Vanbulck2018; Stecklina2018LazyFP; Weisse2018foreshadowNG; Kiriansky2018speculative; Canella2019) are classified as Meltdown-type.
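To make the cache-based covert channel concrete, the following Python sketch models how a single transiently observed byte is transmitted: the transient code touches exactly one of 256 probe pages, and the architectural receiver later detects which page became cached. The set-based cache model and all names are our illustrative assumptions; a real attack would use `clflush` and access timing instead of a Python set.

```python
# Illustrative model (not an exploit): a cache-based covert channel that
# transmits one byte observed in the transient domain. One probe page per
# possible byte value avoids interference from adjacent-line prefetching.

PAGE = 4096  # assumed probe-array stride

def transmit(secret_byte, cache):
    """Transient code touches exactly one probe page, caching it."""
    cache.add(secret_byte * PAGE)

def receive(cache):
    """Architectural code probes all 256 pages; the cached one is 'fast'."""
    hits = [v for v in range(256) if v * PAGE in cache]
    return hits[0] if len(hits) == 1 else None

cache = set()          # models which probe-array offsets are cached
transmit(0x42, cache)  # the transient domain encodes the secret
assert receive(cache) == 0x42
```

In a real implementation, `receive` measures the access latency of each probe page and reports the one below a cache-hit threshold.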

2.2. Memory Subsystem

The CPU architecture defines different instructions to load data from memory. In this section, we give a high-level overview of how out-of-order CPUs handle memory loads. However, as the actual implementation of the microarchitecture is usually not publicly documented, we rely on patents held by Intel to back up possible implementation details.


To improve the performance of memory accesses, CPUs contain small and fast internal caches that store frequently used data. Caches are typically organized in multiple levels that are either private per core or shared amongst them. Modern CPUs typically use n-way set-associative caches containing n cache lines per set, each line typically 64 B wide. Usually, modern Intel CPUs have a private first-level instruction (L1I) and data cache (L1D) and a unified L2 cache. The last-level cache (LLC) is shared across all cores.
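The address decomposition implied by this organization can be sketched as follows. We assume a 32 KiB, 8-way L1D with 64 B lines (and therefore 64 sets), a common configuration on recent Intel cores; the concrete sizes are our assumption, not a statement from the text.

```python
# Sketch: how a set-associative cache splits an address into tag, set
# index, and line offset, assuming a 32 KiB, 8-way L1D with 64 B lines.

LINE_BITS = 6    # 64 B lines -> 6 offset bits
SET_BITS  = 6    # 32 KiB / (8 ways * 64 B) = 64 sets -> 6 index bits

def decompose(addr):
    offset    = addr & ((1 << LINE_BITS) - 1)
    set_index = (addr >> LINE_BITS) & ((1 << SET_BITS) - 1)
    tag       = addr >> (LINE_BITS + SET_BITS)
    return tag, set_index, offset

tag, s, off = decompose(0x7ffd1234)
# Offset and set index together occupy only the low 12 bits of the address.
assert (s << LINE_BITS) | off == 0x7ffd1234 & 0xfff
```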

Virtual Memory

CPUs use virtual memory to provide memory isolation between processes. Virtual addresses are translated to physical memory locations using multi-level translation tables. The translation table entries define the properties, e.g., access control or memory type, of the referenced memory region. The CPU caches address-translation information in the translation-lookaside buffer (TLB).

Memory Order Buffer


µops that deal with memory operations are handled by dedicated execution units. Typically, Intel CPUs contain two units responsible for loading data and one for storing data. While the reorder buffer resolves register dependencies, out-of-order executed µops can still have memory dependencies. In an out-of-order CPU, the memory order buffer (MOB), incorporating a load buffer and a store buffer, controls the dispatch of memory operations and tracks their progress to resolve memory dependencies.

Data Loads

For every dispatched load operation, an entry is allocated in the load buffer and in the reorder buffer. The allocated load-buffer entry holds information about the operation, e.g., ordering constraints, the reorder-buffer ID, or the age of the most recent store. To determine the physical address, the upper bits of the linear address are translated by the memory management unit. Concurrently, the untranslated lower 12 bits, i.e., the page offset, are already used to index the cache set in the L1D (US5613083). If the address translation is cached in the TLB, the physical address is available immediately. Otherwise, the page miss handler (PMH) is activated to perform a page-table walk to retrieve the address translation as well as the corresponding permission bits. With the physical address, the tag and, thus, the way of the cache is determined. If the requested data is in the L1D (cache hit), the load operation can be completed.
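The reason the L1D lookup can start before translation finishes can be shown in a few lines: with 64 sets of 64 B lines, the set index lies entirely within the 12-bit page offset, which is identical in the virtual and physical address. The cache geometry and the example translation below are our assumptions for illustration.

```python
# Why the L1D set lookup can run concurrently with address translation:
# the set index (bits 6..11) is derived only from the page offset, which
# translation does not change. Sizes assumed: 64 sets of 64 B lines.

def l1_set(addr):
    return (addr >> 6) & 0x3f   # bits 6..11 select the set

virt = 0x00007f00dead8340
phys = 0x000000018cc1a340       # hypothetical translation: same low 12 bits
assert virt & 0xfff == phys & 0xfff
assert l1_set(virt) == l1_set(phys)  # set known before the PMH answers
```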

If data is not in the L1D, it needs to be served from higher levels of the cache or the main memory via the line-fill buffer (LFB). The LFB serves as an interface to other caches and the main memory and keeps track of outstanding loads. Memory accesses to uncacheable memory regions and non-temporal moves all go through the LFB. If a load corresponds to an entry of a previous load operation in the load buffer, the loads can be merged (US7346735; Abramson1996).

On a fault, e.g., when a physical address is not available, the page-table walk does not immediately abort (US5613083). Still, an instruction in a pipelined implementation must undergo each stage regardless of whether a fault occurred or not (US5717882), and is re-issued in case of a fault. Only at the retirement of the faulting µop is the fault handled and the pipeline flushed (US5613083; US5564111). If a fault occurs within a load operation, it is still marked as “valid and completed” in the MOB (US5717882).

2.3. Processor Extensions


Initially, all instructions were hardwired in the CPU core. However, to support more complex instructions, microcode allows implementing higher-level instructions using multiple hardware-level instructions. Importantly, this allows processor vendors to support complex behavior and even extend or modify CPU behavior through microcode updates (Intel_vol3). Preferably, new architectural features are implemented as microcode extensions, e.g., Intel SGX (US20120159184A1).

While the execution units perform the fast-paths directly in hardware, more complex slow-path operations are typically performed by issuing a microcode assist which points the sequencer to a predefined microcode routine (Costan2016). To do so, the execution unit associates an event code with the result of the faulting micro-op. When the micro-op of the execution unit is committed, the event code causes the out-of-order scheduler to squash all in-flight micro-ops in the reorder buffer (Costan2016). The microcode sequencer uses the event code to read the micro-ops associated with the event in the microcode (US5625788A).

Intel TSX

Intel TSX is an x86 instruction-set extension to support hardware transactional memory (IntelTSX_CPP) which has been introduced with Intel Haswell CPUs. With TSX, particular code regions are executed transactionally. If the entire code region completes successfully, memory operations within the transaction appear as an atomic commit to other logical processors. If an issue occurs during the transaction, a transactional abort rolls back the execution to an architectural state before the transaction, thereby discarding all performed operations. Transactional aborts can be caused by different issues: typically, a conflicting memory operation occurs where another logical processor either reads from an address which has been modified within the transaction or writes to an address which is used within the transaction. Further, the amount of read and written data within the transaction may not exceed the size of the LLC and L1 cache, respectively, for the transaction to succeed (Intel_vol3). In addition, certain instructions or system events may cause the transaction to abort as well (IntelTSX_CPP).
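The transactional semantics described above can be captured in a toy model: writes inside a transaction are buffered and either become visible atomically at commit or are discarded entirely on abort. This is a conceptual sketch of the behavior, not the hardware protocol; the class and its methods are our invention.

```python
# Toy model of TSX abort/commit semantics: buffered writes commit
# atomically, or vanish without architectural effect on abort.

class Transaction:
    def __init__(self, memory):
        self.memory = memory
        self.write_set = {}
        self.aborted = False

    def write(self, addr, value):
        self.write_set[addr] = value        # buffered, not yet visible

    def abort(self):                        # e.g., conflict or eviction
        self.aborted = True
        self.write_set.clear()              # roll back all operations

    def commit(self):
        if not self.aborted:
            self.memory.update(self.write_set)  # atomic commit

mem = {0x1000: 0}
tx = Transaction(mem)
tx.write(0x1000, 42)
tx.abort()
tx.commit()
assert mem[0x1000] == 0   # aborted writes never reach memory
```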

Intel SGX

With the Skylake microarchitecture, Intel introduced Software Guard Extensions (SGX), an instruction-set extension for isolating trusted code (Intel_vol3). SGX executes trusted code inside so-called enclaves, which are mapped in the virtual address space of a conventional host application process but are isolated from the rest of the system by the hardware itself. The threat model of SGX assumes that the operating system and all other running applications could be compromised and, therefore, cannot be trusted. Any attempt to access SGX enclave memory in non-enclave mode results in abort-page semantics, i.e., regardless of the current privilege level, reads return the dummy value 0xff and writes are ignored (sgxdeveloperref). Furthermore, to protect against powerful physical attackers probing the memory bus, the SGX hardware transparently encrypts the memory region used by enclaves (Costan2016).

A dedicated eenter instruction redirects control flow to an enclave entry point, whereas eexit transfers back to the untrusted host application. Furthermore, in case of an interrupt or fault, SGX securely saves CPU registers inside the enclave’s save state area (SSA) before vectoring to the untrusted operating system. Next, the eresume instruction can be used to restore processor state from the SSA frame and continue a previously interrupted enclave.

SGX-capable processors feature cryptographic key derivation facilities through the egetkey instruction, based on a CPU-level master secret and a secure measurement of the calling enclave’s initial code and data. Using this key, enclaves can securely seal secrets for untrusted persistent storage, and establish secure communication channels with other enclaves residing on the same processor. Furthermore, to enable remote attestation, Intel provides a trusted quoting enclave which unseals an Intel-private key and generates an asymmetric signature over the local enclave identity report.

Over the past years, researchers have demonstrated various attacks to leak sensitive data from SGX enclaves, e.g., through memory safety violations (Lee2017SGXROP), race conditions (Weichbrodt2016), or side channels (Moghimi2017cachezoom; Schwarz2017SGX; Vanbulck2017pagetable; Vanbulck2018nemesis). More recently, SGX was also compromised by transient execution attacks (Vanbulck2018; Chen2018SGXpectre), which necessitated microcode updates and increased the processor’s security version number (SVN). All SGX key derivations and attestations include the SVN to reflect the current microcode version, and hence the security level.

3. Attack Overview

In this section, we provide an overview of ZombieLoad. We describe what can be observed using ZombieLoad and how that fits into the landscape of existing side-channel attacks. We thereby show that ZombieLoad forms a novel category of side-channel attacks, which we refer to as data-sampling attacks, opening a new research field.

3.1. Overview

ZombieLoad is a transient-execution attack (Canella2019) which observes the values of memory loads on the current physical CPU. ZombieLoad exploits that the fill buffer is accessible by all logical CPUs of a physical CPU core and that it does not distinguish between processes or privilege levels.

The load buffer acts as a queue for all memory loads from the memory subsystem. Whenever the CPU encounters a memory load during execution, it reserves an entry in the load buffer. If the load was not an L1 hit, it additionally requires a fill-buffer entry. When the requested data has been loaded, the memory subsystem frees the corresponding load- and fill-buffer entries, at which point the corresponding load instruction may retire.

However, we observed that under certain complex microarchitectural conditions (e.g., a fault) where the load requires a microcode assist, it may first read stale values before eventually being re-issued. As with any Meltdown-type attack, this opens a transient-execution window in which this value can be used for subsequent calculations before the execution is aborted and rolled back. Thus, an attacker can encode the leaked value into a microarchitectural element, such as the cache.

In contrast to previous Meltdown-type attacks, however, it is not possible to select the value to leak based on an attacker-specified address. ZombieLoad simply leaks any value which is currently loaded by the physical CPU core. While this at first sounds like a massive limitation, we show that this opens a new field of side-channel attacks. We show that ZombieLoad is an even more powerful attack when combined with existing techniques known from traditional side-channel attacks.

3.2. Microarchitectural Root Cause

For Meltdown, Foreshadow, and Fallout, the source of the leakage is apparent. Moreover, for these attacks, there are plausible explanations of what goes wrong in the microarchitecture, i.e., what the root cause of the leakage is (Lipp2018meltdown; Vanbulck2018; Weisse2018foreshadowNG; Genkin2019storebuffer). For ZombieLoad, however, this is not entirely clear.

While we identified some necessary building blocks to observe the leakage (cf. Section 5), we can only provide a hypothesis on why the interaction of the building blocks leads to the observed leakage. As we could only observe data leakage on Intel CPUs, we assume that this is indeed an implementation issue (such as Meltdown) and not an issue with the underlying design (as with Spectre). For our hypothesis, we combined our observations with the nearly non-existent official documentation of the fill buffer (Intel_opt; Intel_vol3). Ultimately, we could neither prove nor disprove our hypothesis, leaving its verification or falsification to future work.

Stale-Entry Hypothesis.

Every load is associated with an entry in the load buffer and potentially an entry in the fill buffer (Intel_opt).

When a load encounters a complex situation, such as a fault, it requires a microcode assist (Intel_vol3). This microcode assist triggers a machine clear, which flushes the pipeline. On a pipeline flush, instructions which are already in flight still finish execution (Hennessy2017).

As this has to be as fast as possible to not incur additional delays, we expect that fill-buffer entries are optimistically matched as long as parts of the physical address match. Thus, the load continues with a wrong fill-buffer entry, which was valid for a previous load. This leads to a use-after-free vulnerability (Gruss2018useafterfree) in the hardware. Intel documents the fill buffer as being competitively shared among hyperthreads (Intel_vol3), giving both logical cores access to the entire fill buffer (cf. the appendix on the fill-buffer size). Consequently, the stale fill-buffer entry can also be from a previous load of the sibling logical core. As a result, the load instruction loads valid data from a previous load.
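The stale-entry hypothesis can be illustrated with a toy model: a faulting load optimistically matches a fill-buffer entry on only the untranslated low address bits and therefore picks up data left behind by an unrelated earlier load, possibly from the sibling hyperthread. All entries, addresses, and the 12-bit match width below are hypothetical illustrations of the hypothesis, not documented hardware behavior.

```python
# Toy model of the stale-entry hypothesis: optimistic fill-buffer matching
# on partial (low) physical-address bits returns stale data, analogous to
# a use-after-free of the fill-buffer entry.

FB = [  # (physical address, data) left behind by earlier loads
    (0x18cc1a340, b'victim-secret!'),
    (0x23ff00280, b'other-data....'),
]

def optimistic_match(paddr_low12):
    """Match only on the untranslated low 12 bits, as hypothesized."""
    for addr, data in FB:
        if addr & 0xfff == paddr_low12:
            return data
    return None

# The attacker's faulting load has a different full address but matching
# low bits, so it transiently receives the victim's stale data.
assert optimistic_match(0x340) == b'victim-secret!'
```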

Leakage Source.

We devised two experiments to reduce the number of possible sources of the leaked data.

In our first experiment, we marked a page as “uncacheable” via the page-table entry and flushed the page from the cache. As a result, every memory load from the page circumvents all cache levels and directly travels from main memory to the fill buffer (Intel_vol3). We then write the secret onto the uncacheable memory page to ensure that there is no copy of the data in the cache. When loading data from the uncacheable memory page, we can see leakage, but the leakage rate is only in the order of bytes per second on an i7-8650U. We can attribute this leakage to the fill buffer. This was also exploited in concurrent work (VanSchaik2019RIDL). Our hypothesis is further backed by the MEM_LOAD_RETIRED.FB_HIT performance counter, which shows multiple thousand line-fill-buffer hits.

Intel claims that the leakage is entirely from the fill buffer. However, our second experiment shows that the line-fill buffer might not be the only source of the leakage. We rely on Intel TSX to ensure that memory accesses do not reach the line-fill buffer as follows. Inside a transaction, we first write the secret value to a memory location which was previously initialized with a different value. The write inside the transaction ensures that the address is in the write set of the transaction and thus in the L1 (Intel_opt; Schwarz2018DF). Evicting data of the write set from the cache leads to a transactional abort (Intel_opt). Hence, any subsequent memory access to the data from the write set ensures that it is served from the L1, and therefore, no request to the line-fill buffer is sent (Intel_vol3). In this experiment, we see a much higher leakage rate, in the order of kilobytes per second. More importantly, we only see the value written inside the TSX transaction and not the value that was at the memory location before starting the transaction. Our hypothesis that the line-fill buffer is not the only source of the leakage is further backed by observing performance counters. The MEM_LOAD_RETIRED.FB_HIT and MEM_LOAD_RETIRED.L1_MISS performance counters do not increase significantly. In contrast, the MEM_LOAD_RETIRED.L1_HIT performance counter shows multiple thousand L1 hits.

While accessing the data to leak on the victim core, we monitored the MEM_LOAD_RETIRED.FB_HIT performance counter on the attacker core. If the address was cached, we measured a Pearson correlation close to zero between the correct recoveries and line-fill-buffer hits, indicating no association. However, while continuously flushing the data on the victim core, ensuring that a subsequent access must go through the LFB, we measure a strong correlation. This result indicates that the line-fill buffer is not the only source of leakage. However, a different explanation might be that the performance counters are not reliable in such corner cases. Future work has to investigate whether other microarchitectural elements, e.g., the load buffer, are also involved in the observed data leakage.
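The correlation analysis above amounts to computing a plain Pearson coefficient over per-trial counts of correct recoveries and fill-buffer hits. The following sketch shows the computation; the per-trial counts are synthetic values chosen for illustration, not our measurements.

```python
# Pearson correlation between correct recoveries and FB_HIT counts per
# trial; a value near 1 indicates the association observed when flushing.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

recoveries = [10, 20, 30, 40, 50]   # correct recoveries per trial (synthetic)
fb_hits    = [11, 19, 33, 41, 48]   # MEM_LOAD_RETIRED.FB_HIT per trial
assert pearson(recoveries, fb_hits) > 0.9   # strongly associated
```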

3.3. Classification

Figure 1. The three properties of a memory operation: the instruction pointer of the program, the target address, and the data value. So far, there are techniques to infer the instruction pointer from the target address (memory-based side-channel attacks) and the data value from the address. With ZombieLoad, we show the first instance of an attack which infers the data value from the instruction pointer (data sampling, this paper).

In this section, we introduce a way to classify memory-based side-channel and transient-execution attacks. For all these attacks, we assume a target program which executes a memory operation at a certain address with a specific data value at the program’s current instruction pointer. Figure 1 illustrates these three properties as the corners of a triangle, and techniques which let an attacker infer one of the properties based on one or both of the others.

Traditional memory-based side-channel attacks allow an attacker to observe the location of memory accesses. The granularity of the location observation depends on the spatial accuracy of the used side channel. Most common memory-based side-channel attacks (Percival2005; Yarom2014; Gruss2016Flush; Gruss2016Prefetch; Gruss2015Template; Pessl2016; Xu2015controlled; Vanbulck2017pagetable; Jang2016; Gras2018TLB) have a granularity between one cache line (Yarom2014; Gruss2016Flush; Gruss2016Prefetch; Gruss2015Template), i.e., usually 64 B, and one page (Jang2016; Gras2018TLB; Vanbulck2017pagetable; Xu2015controlled), i.e., usually 4 KiB. These side channels establish a connection between the time domain and the space domain. The time domain can either be the wall time or, also commonly, the execution time of the program, which correlates with the instruction pointer. These classic side channels provide means of connecting the address of a memory access to a set of possible instruction pointers, which then allows reconstructing the program flow. Thus, side-channel-resistant applications have to avoid secret-dependent memory accesses to not leak secrets to a side-channel attacker.
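The difference in spatial granularity can be made concrete: the same pair of accesses may be distinguishable to a cache-line-granular channel (e.g., Flush+Reload) yet indistinguishable to a page-granular one (e.g., a controlled-channel attack). The sizes below are the usual 64 B line and 4 KiB page; the addresses are illustrative.

```python
# Spatial granularity of memory-based side channels: the observer sees
# only the cache-line number (64 B) or the page number (4 KiB).

def cache_line(addr):
    return addr >> 6        # 64 B granularity

def page(addr):
    return addr >> 12       # 4 KiB granularity

a, b = 0x403010, 0x403038   # two accesses 0x28 bytes apart
assert cache_line(a) == cache_line(b)   # same line: indistinguishable
c = 0x403050                            # 0x40 bytes after a
assert cache_line(a) != cache_line(c)   # distinguishable at line granularity
assert page(a) == page(c)               # but not at page granularity
```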

Figure 2. Meltdown-type attacks provide a varying degree of target control over the page number and page offset of the leaked address, from full virtual addresses in the case of Meltdown to nearly no control for ZombieLoad.

Since early 2018, with transient-execution attacks (Canella2019) such as Meltdown (Lipp2018meltdown) and Spectre (Kocher2019spectre), there is a second type of attack which allows an attacker to observe the value stored at a memory address. Meltdown provides the most control over the target address: the full virtual address of the target data is provided, and the corresponding data value stored at this address is leaked. The success rate depends on the location of the data, i.e., whether it is in the cache or main memory. However, the only constraint for Meltdown is that the data is addressable using a virtual address (Lipp2018meltdown). Other Meltdown-type attacks (Vanbulck2018; Genkin2019storebuffer) also connect addresses to data values. However, they often impose additional constraints, such as that the data has to be cached in the L1 (Vanbulck2018; Weisse2018foreshadowNG), that the physical address has to be known (Weisse2018foreshadowNG), or that an attacker can choose only parts of the target address (Genkin2019storebuffer).

Figure 2 illustrates which parts of the virtual and physical address an attacker can choose to target data values to leak. For Meltdown, the virtual address is sufficient to target data in the same address space (Lipp2018meltdown). Foreshadow already requires knowledge of the physical address and the least-significant 12 bits of the virtual address to target any data in the L1, not limited to the attacker’s own address space (Vanbulck2018; Weisse2018foreshadowNG). When leaking the last writes from the store buffer, an attacker is already limited in choosing which value to leak: it is only possible to filter stores based on the least-significant 12 bits of the virtual address; more targeted leakage is not possible (Genkin2019storebuffer).

Zombie loads provide an attacker no control over the leaked address. The only possible target selection is the byte index inside the loaded data, which can be seen as an address of up to 6 bits in case an entire cache line is loaded. Hence, we do not count ZombieLoad as an attack which leaks data values based on the address. Instead, from the viewpoint of target control, ZombieLoad is more similar to traditional memory-based side-channel attacks. With ZombieLoad, an attacker observes the data value of a memory access. Thus, this side channel establishes a connection between the time domain and the data value. Again, the time domain correlates with the instruction pointer of the victim program. ZombieLoad is the first instance of a class of attacks which connects the instruction pointer with the data value of a memory access. We refer to such attacks as data sampling attacks. Essentially, this new class of data sampling attacks is capable of breaking side-channel-resistant applications, such as constant-time cryptographic algorithms (Gueron2012aesni).
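The 6-bit target selection mentioned above can be shown directly: when a whole 64 B line is sampled, only the low 6 bits of an address choose which byte the attacker encodes, while all upper bits are irrelevant. The sampled line and addresses below are illustrative stand-ins.

```python
# The only target selection ZombieLoad offers: a byte index inside the
# sampled data, i.e., at most 6 address bits for a full 64 B cache line.

def sample_byte(leaked_line, target_addr):
    """Pick which byte of a sampled line to encode into the covert channel."""
    return leaked_line[target_addr & 0x3f]   # low 6 bits select the byte

line = bytes(range(64))                      # stand-in for a sampled line
assert sample_byte(line, 0xdeadbe07) == 0x07
assert sample_byte(line, 0xdeadbe47) == 0x07 # upper address bits have no effect
```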

Following the classification scheme from Canella et al. (Canella2019), ZombieLoad is a Meltdown-type transient execution attack, and we propose Meltdown-MCA as the generic name. This reflects that the (microarchitectural) fault type exploited by ZombieLoad is a microcode assist (MCA, explained below).

4. Attack Scenarios & Attacker Model

In line with most side-channel attacks, we assume that the attacker can execute unprivileged native code on the target machine. Thus, we assume a trusted operating system if not stated otherwise. This relatively weak attacker model is sufficient to mount ZombieLoad. However, we also show that the increased attacker capabilities offered in certain scenarios, e.g., SGX and hypervisor attacks, may further amplify the leakage while remaining within the threat model of the respective scenario.

At the hardware level, we assume a ubiquitous Intel CPU with simultaneous multithreading (SMT, also known as hyperthreading) enabled. Crucially, we do not rely on existing vulnerabilities, such as Meltdown (Lipp2018meltdown), Foreshadow (Vanbulck2018; Weisse2018foreshadowNG), or Fallout (Genkin2019storebuffer).

User-Space Leakage

In the cross-process user-space scenario, an unprivileged attacker leaks values loaded by another concurrently running user-space application. We consider such a cross-process scenario most dangerous for end users, who commonly use neither Intel SGX nor virtual machines. Moreover, many secrets are likely to be found in user-space applications such as browsers or password managers.

The attacker can execute unprivileged code and is co-located with the victim on the same physical but a different logical CPU core. This is the typical hyperthreading case, where attacker and victim run on sibling hyperthreads of the same physical core.

Kernel Leakage

In addition to leakage across user-space applications, ZombieLoad can also leak across the privilege boundary between user and kernel space. We demonstrate that the value of loads executed in kernel space is leaked to an unprivileged attacker, executing either on the same or a sibling logical core.

In this scenario, the unprivileged attacker performs a system call to the kernel, running on the same logical core. Importantly, we found that kernel load leakage may even survive the switch back from the kernel to user space. Hyperthreading is hence not a strict requirement for this scenario.

Intel SGX Leakage

In addition to leaking values loaded by the kernel, ZombieLoad can observe loads executed inside an Intel SGX enclave. In this scenario, the attacker is executing on a sibling logical core, co-located with the victim enclave on the same physical core. We demonstrate that ZombieLoad can leak secrets loaded during the enclave’s execution from a concurrent logical core, but we did not observe leakage on the same logical core after exiting the enclave synchronously (eexit) or asynchronously (on interrupt).

While in the aftermath of the Foreshadow (Vanbulck2018) attack, current SGX attestations indicate whether hyperthreading has been enabled at boot time, Intel’s official security advisory (IntelL1TF) merely suggests that a remote verifier might reject attestations from a hyperthreading-enabled system “if it deems the risk of potential attacks from the sibling logical processor as not acceptable”. Hence, machines with up-to-date patched microcode may still run with hyperthreading enabled.

Within the SGX threat model, we can leverage the attacker's full control over the untrusted operating system. An attacker can, for instance, modify page-table entries (Vanbulck2017pagetable), or precisely single-step the victim enclave one instruction at a time (VanBulck2017sgx).

Virtual Machine Leakage

With ZombieLoad, it is possible to leak loaded values across virtual-machine boundaries. In this scenario, an attacker running inside a virtual machine can leak values from a different virtual machine co-located on the same physical but different logical core. Thus, an attacker can leak values loaded from a virtual machine running on the sibling logical core.

As the attacker is running inside an untrusted virtual machine, the attacker is not restricted to unprivileged code execution. Thus, the attacker can, for instance, modify guest-page-table entries.

Hypervisor Leakage

In the hypervisor scenario, an attacker running inside a virtual machine utilizes ZombieLoad to leak the value of loads executed by the hypervisor.

As the attacker is running inside an untrusted virtual machine, the attacker is not restricted to unprivileged code execution.

5. Building Blocks

In this section, we describe the building blocks for the attack.

5.1. Zombie Loads


Scenario                     Variant 1                    Variant 2
Unprivileged Attacker        \circlet \circletfillhl      \circletfill \circlet
Privileged Attacker (root)   \circletfill \circletfill    \circletfill \circletfill

Symbols indicate whether a variant can be used in the corresponding attack scenario (\circletfill), can be used depending on the hardware configuration as discussed in Section 5.1 (\circletfillhl), or cannot be used (\circlet).

Table 1. Overview of different variants to induce zombie loads in different scenarios.

The main primitive for mounting ZombieLoad is a load which triggers a microcode assist, resulting in a transient load containing wrong data. We refer to such a load as a zombie load. Zombie loads are loads which either architecturally or microarchitecturally fault and thus cannot complete, requiring a re-issue of the load at a later point. We identified multiple different scenarios to create such zombie loads required for a successful attack. All variants have in common that they abuse the clflush instruction to reliably create the conditions required for leaking from a wrong destination (cf. Section 3.2). In this section, we describe two different variants that can be used to leak data (cf. Section 5.2) depending on the adversary's capabilities. Table 1 gives an overview of which variant is applicable in which scenario, depending on the operating system and underlying hardware configuration.

Variant 1: Kernel Mapping.


Figure 3. Variant 1: Using huge kernel pages for ZombieLoad. Page p is mapped using a user-accessible address (v) and a kernel-space huge page (k). Flushing v and then reading from k using Meltdown leaks values from the fill buffer.

The first variant is a ZombieLoad setup which does not rely on any specific CPU feature. We require a kernel virtual address k, i.e., an address where the user-accessible bit is not set in the page-table entry. In practice, the kernel is usually mapped with huge pages (i.e., 2 MB pages). Thus, k refers to a physical page p. Note that although we use such huge pages for our experiments, they are not strictly required, as the setup also works with 4 KB pages. We also require the user to have read access to the content of the physical page through a different virtual address v.

Figure 3 illustrates such a setup. Accessing the page via the user-accessible virtual address v provides an architecturally valid way to access the contents of the page. Accessing the same page via the kernel address k results in a zombie load, similar to Meltdown (Lipp2018meltdown), requiring a microcode assist. Note that while there are other ways to construct an inaccessible address k, e.g., by clearing the present bit (Vanbulck2018), we were only able to exploit zombie loads originating from kernel mappings.

To create precisely the scenario depicted in Figure 3, we allocate a page p in user space with the virtual address v. Note that p is a regular page which is accessible through the virtual address v. We retrieve its physical address through /proc/pagemap, or alternatively using a side channel (Gruss2016Prefetch; Islam2019spoiler). Using the physical address and the base address of the direct-physical map, we obtain an inaccessible kernel address k which maps to the allocated page p. If the operating system does not use stronger kernel isolation (Gruss2017Kaslr), e.g., KPTI (LWN_kpti), the direct-physical map in the kernel is mapped in user space and uses huge pages which are marked as not user accessible. In the case of a privileged attacker, e.g., when attacking a hypervisor or an SGX enclave, the attacker can easily create such pages if they do not exist.

Variant 2: Microcode-Assisted Page-Table Walk.

A variant similar to Variant 1 is to trigger a microcode-assisted page-table walk. If a page-table walk requires an update to the access or dirty bit in the page-table entry, it falls back to a microcode assist (Costan2016).

In this setup, we require one physical page which has two user-accessible virtual addresses, v1 and v2. This can easily be achieved by using a shared-memory segment or a memory-mapped file, which is mapped twice in the application. The virtual address v1 can be used to access the contents of the page architecturally. For v2, we have to clear the accessed bit in the page-table entry. On Linux, this is not possible for an unprivileged attacker, and this variant can thus only be used in attacks where we assume a privileged attacker (cf. Section 4). However, we experimentally verified that Windows 10 (1803 build 17134.706) periodically clears the accessed bits; we assume that the page-replacement algorithm is responsible for this. Thus, this variant enables the attack on Windows for unprivileged attackers.

When accessing the page through the virtual address v2, the accessed bit of the page-table entry has to be set. This, however, cannot be done by the page-miss handler (Costan2016). Instead, microarchitecturally, the load faults, and a microcode assist is triggered which repeats the page-table walk and sets the accessed bit (Costan2016).

If the access to v2 is done transiently, i.e., behind a mispredicted branch or after an exception, the accessed bit cannot be set architecturally. Thus, the leakage is not only exploitable once but for every access.

5.2. Data Leakage

To leak data with the setup described in Section 5.1, we constantly flush the first cache line of p through the virtual address v. We achieve this by executing the unprivileged clflush instruction (or clflushopt, if available) on the user-accessible virtual address v. For Variant 1, we leverage Meltdown to read from the kernel address k which maps to the previously flushed cache line. As with Meltdown-US (Lipp2018meltdown), various methods of preventing an architectural exception can be used. We verified that ZombieLoad with Variant 1 works with exception prevention (i.e., speculative execution), handling (i.e., a custom signal handler), and suppression (i.e., Intel TSX).

For Variant 2, we transiently, i.e., behind a mispredicted branch, read from the address v2.

Counterintuitively, the values leaked for all variants do not come from page p. Instead, we get access to data which is currently loaded on the current or sibling logical CPU core. Thus, it appears that we reuse fill-buffer entries and leak the data which these entries reference. For Variant 1 and Variant 2, this allowed us to access all bytes of the cache line that the fill-buffer entry references.

5.3. Data Sampling

Independent of the setup for ZombieLoad, we cannot directly control the address of the data to leak. Both the virtual addresses k and v, as well as the physical address of p, are arbitrary and do not correlate with the leaked data. In any case, we simply get the value referenced by some fill-buffer entry which we cannot specify.

However, there is at least some control within the fill-buffer entry, i.e., we can target specific bytes within it. The least-significant 6 bits of the virtual address v select the byte within the fill-buffer entry. Hence, we can target a single byte at a specific position within the entry. While this does not sound powerful at first, it allows leaking sensitive information, such as AES keys, byte by byte, as shown in Section 6.1.
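This byte-level targeting can be sketched in one line: since a fill-buffer entry tracks a 64 B cache line, only the low 6 bits of the attacked virtual address select the leaked byte (the helper name is ours).

```c
#include <stdint.h>

/* A fill-buffer entry covers one 64 B cache line, so the
 * least-significant 6 bits of the virtual address v choose which
 * byte of the entry is leaked. To leak byte i of a 16-byte secret,
 * the attacker therefore targets v with (v & 0x3f) == i. */
unsigned target_byte_index(uint64_t v)
{
    return (unsigned)(v & 0x3f);
}
```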

As described in Section 4, the leakage is not limited to the attacker's own process. With ZombieLoad, we observe values from all processes running on the same and on the sibling logical CPU core. Furthermore, we also observe leakage across privilege boundaries, i.e., from the kernel, the hypervisor, and Intel SGX enclaves. Thus, ZombieLoad allows sampling all data which is loaded by any application on the current physical CPU core.

5.4. Performance Evaluation

In this section, we evaluate ZombieLoad and the performance of our proof-of-concept implementations (https://github.com/IAIK/ZombieLoad).



Setup  | CPU                  | µ-arch           | Variant 1 | Variant 2
Lab    | Core i7-3630QM       | Ivy Bridge       | \cmark    | \cmark
Lab    | Core i7-6700K        | Skylake-S        | \cmark    | \cmark
Lab    | Core i5-7300U        | Kaby Lake        | \cmark    | \cmark
Lab    | Core i7-7700         | Kaby Lake        | \cmark    | \cmark
Lab    | Core i7-8650U        | Kaby Lake-R      | \cmark    | \cmark
Lab    | Core i7-8565U        | Whiskey Lake     | \xmark    | \xmark
Lab    | Core i7-8700K        | Coffee Lake-S    | \cmark    | \cmark
Lab    | Core i9-9900K        | Coffee Lake-R    | \xmark    | \xmark
Lab    | Xeon E5-1630 v4      | Broadwell-EP     | \cmark    | \cmark
Cloud  | Xeon E5-2670         | Sandy Bridge-EP  | \cmark    | \cmark
Cloud  | Xeon Gold 5120       | Skylake-SP       | \cmark    | \cmark
Cloud  | Xeon Platinum 8175M  | Skylake-SP       | \cmark    | \cmark
Cloud  | Xeon Gold 5218       | Cascade Lake-SP  | \xmark    | \xmark

Table 2. Tested environments.

We evaluated the different variants of ZombieLoad, described in Section 5.1, on different environments listed in Table 2. The tested CPUs range from Sandy Bridge (released 2012) to Cascade Lake (released 2019). We were able to mount Variant 1 and Variant 2 on different microarchitectures except for Whiskey Lake, Coffee Lake-R, and Cascade Lake-SP.


To evaluate the performance of each variant, we performed the following experiment on an i7-8650U. While reading a specific value on one logical core, we performed each variant of ZombieLoad on the sibling logical core for , recording the number of successful and unsuccessful recoveries. For Variant 1 using TSX to suppress the exception, we achieve an average transmission rate of (, ) and a true positive rate of (, ). With Variant 2 in combination with signal handling, we achieved an average transmission rate of (, ) and a true positive rate of (, ). Variant 2 in combination with TSX achieves an average transmission rate of (, ) and a true positive rate of (, ).

6. Case Study Attacks

In this section, we present five attacks using ZombieLoad in real-world scenarios.

6.1. AES-NI Key Leakage

To demonstrate that data sampling is a powerful side channel, we extract an AES-128 key. The victim application uses AES-NI, which is resistant against timing and cache-based side-channel attacks (Gueron2012aesni).

However, even with hardware-assisted AES-NI, the key has to be loaded from memory into a 128-bit XMM register. This is usually the case before invoking AESKEYGENASSIST, which is used to derive the AES round keys. The round-key derivation is done entirely in hardware using the XMM registers; hence, no memory loads are required for deriving the 11 round keys used in AES-128. Thus, the point where the key is loaded from memory, before the round-key derivation starts, is where we mount ZombieLoad to leak the value of the key. For OpenSSL (v3.0.0), this happens in the function aesni_set_encrypt_key, which is called by EVP_EncryptInit_ex. Note that instead of leaking the key, we could also leak the round keys loaded during the encryption process. However, to attack the round keys, an attacker needs to leak (and distinguish) more distinct values, making the attack more complex.

When leaking the key using ZombieLoad, we first have to detect which load corresponds to the key. Moreover, as we can only leak one byte at a time, we also have to correctly combine the leaked bytes into the full AES-128 key.

Side-Channel Synchronization.

For the attack, we assume a shared library implementing the AES encryption which can be used by both the attacker and the victim, e.g., OpenSSL. Even though OpenSSL (v3.0.0) has a side-channel-resistant AES-NI implementation, we can still rely on classical memory-based side-channel attacks to monitor the control flow. For example, using Flush+Reload, we can detect when a specific part of the code is executed (Gruss2015Template; Garcia2017constant). While this does not leak any secrets, it acts as a synchronization primitive for ZombieLoad.

We constantly monitor a cache line of the code which is executed right before the key is loaded from memory. In OpenSSL (v3.0.0), this is the second cache line of aesni_set_encrypt_key, i.e., 64 B after the start of the function. Similarly to Schwarz et al. (Schwarz2018DF), we leverage the cache state of this cache line as a trigger for the actual attack. Only if we detect a cache hit on the monitored cache line do we start leaking values using ZombieLoad. Hence, we already filter out most bytes not related to the AES key.
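Such a cache-line trigger can be sketched with a Flush+Reload probe, assuming an x86 machine; the cycle threshold below is illustrative only and must be calibrated per machine, and the helper names are ours.

```c
#include <stdint.h>
#include <x86intrin.h>

#define HIT_THRESHOLD 150 /* illustrative; calibrate per machine */

/* Flush+Reload probe: time one access to the monitored cache line,
 * then flush it again to re-arm the trigger for the next round. */
uint64_t probe_cycles(volatile const uint8_t *line)
{
    unsigned aux;
    _mm_mfence();
    uint64_t start = __rdtscp(&aux);
    (void)*line;                     /* the timed reload */
    uint64_t end = __rdtscp(&aux);
    _mm_mfence();
    _mm_clflush((const void *)line); /* flush for the next probe */
    return end - start;
}

/* A fast reload (cache hit) means the victim executed the monitored
 * code since the last flush, so the ZombieLoad leakage phase starts. */
int victim_executed(volatile const uint8_t *line)
{
    return probe_cycles(line) < HIT_THRESHOLD;
}
```

In the actual attack, `line` would point into the shared library's code page (e.g., the second cache line of aesni_set_encrypt_key), which is shared between attacker and victim via the page cache.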

Note that if there is no cache line before the load which can be used as a trigger, we can still use a nearby cache line (i.e., a cache line after the load) as a filter. In a parallel thread, we collect the timestamps of cache hits in the nearby cache line. If we also save the timestamps of the values leaked using ZombieLoad, we can, in an offline post-processing step, filter out values which were leaked at a different instruction-pointer location.

To further reduce unrelated loads, it is also possible to slow down the victim using performance-degradation techniques such as flushing the code (Allan2016degrade; Garcia2017constant). For OpenSSL, we used performance degradation on the code directly following the load of the key.

Domino Attack.

Figure 4. Additionally leaking domino bytes comprised of bits of different AES-key bytes to filter out unrelated loads.

Inevitably, even when synchronizing ZombieLoad by using a cache-based trigger, we also leak values not related to the key. Moreover, for practical reasons, the size of the Flush+Reload covert channel is limited, and we can only transmit a single key byte from the transient domain at a time. Hence, we obtain a probability distribution for every byte of the AES key. As the bytes of the AES key are independent of each other, we can only assume that the byte with the highest probability is the correct key byte. Thus, if a key byte suffers from noise from unrelated loads, we may wrongly classify the noise as the correct key byte, which leads to a wrong key.

Therefore, we propose the Domino attack, an innovative transient error detection technique for reducing noise when leaking multi-byte loads. In addition to leaking every single key byte, we also transmit a specially crafted domino byte composed by combining bits from two adjacent key bytes. Note that creating such a domino byte is possible, as the transient domain has access to the full AES key and can use it for arbitrary computations (cf. Section 6.3). Figure 4 illustrates the idea of the Domino attack. In this case, we leak (4,4) domino bytes, each consisting of 4 bits of two adjacent key bytes. By combining the lower nibble of one key byte with the higher nibble of the next key byte, we transmit a domino byte which encodes partial information of two key bytes. Hence, in a post-processing step, we combine the probability distribution of two adjacent key bytes with the probability distribution of the domino byte to select the two adjacent key bytes with the highest combined probability. Note that the selection of bits can be adapted to the noise which can be measured before leaking the key, e.g., multiple (7,1) domino bytes can be leaked that are shifted by only a single bit.
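The (4,4) domino encoding can be sketched as follows; the exact bit packing (lower nibble first) is our assumption for illustration.

```c
#include <stdint.h>

/* (4,4) domino byte: lower nibble of key byte i, followed by the
 * upper nibble of key byte i+1. Computed in the transient domain,
 * where the full key is available. */
uint8_t domino44(uint8_t a, uint8_t b)
{
    return (uint8_t)(((a & 0x0f) << 4) | (b >> 4));
}

/* Post-processing: keep only adjacent key-byte candidates whose
 * nibbles are consistent with an observed domino byte. */
int domino44_consistent(uint8_t a, uint8_t b, uint8_t observed)
{
    return domino44(a, b) == observed;
}
```

In post-processing, the probability distributions of two adjacent key bytes are combined with that of the domino byte, and candidate pairs failing the consistency check are discarded.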


We evaluated the attack in a cross-user-space attack (cf. Section 4). We always ran the attack until the correct key was recovered, i.e., until the key with the highest probability is the correct key. In a practical attack, the number of attack repetitions can even be reduced, as it is typically easy to verify whether a key candidate is correct. Thus, an attacker can simply test all key candidates with a probability over a certain threshold and does not have to wait until the highest probability corresponds to the correct key.

On average, we recovered the entire AES-128 key of the victim in under using the cache-based trigger and the Domino attack. During this time, the key was loaded approximately times by the victim.

6.2. SGX Sealing Key Extraction

In this section, we show that privileged SGX attackers can drastically improve ZombieLoad's temporal resolution and bridge from incidental data sampling in the time domain to the targeted reconstruction of arbitrary enclave secrets (cf. Figure 1). We first explain how state-of-the-art enclave execution control and transient post-processing techniques can be leveraged to reliably leak register values at any point during an enclave invocation. Then we demonstrate the impact of this attack by recovering a full 128-bit SGX sealing key, as used by Intel's trusted provisioning and quoting enclaves to decrypt the long-term EPID private attestation key.

Leaking Enclave Registers.

We consider Intel SGX root attackers that co-locate with a victim enclave on the same physical CPU. As a system attacker, we can increase ZombieLoad's temporal resolution by leveraging previous research results exploiting page faults (Xu2015controlled; Vanbulck2017pagetable) or interrupts (Vanbulck2018nemesis; Moghimi2017cachezoom) to regulate the victim enclave's execution. We use the SGX-Step (VanBulck2017sgx) framework to precisely single-step the victim enclave one instruction at a time, allowing the attacker to reach a code part where sensitive information is stored in CPU registers. At such a point, we switch to unlimited zero-stepping (Vanbulck2018) by either setting the system timer interrupt to a very short interval or revoking code-page execute permissions before resuming the victim enclave. This technique provides ZombieLoad attackers with a primitive to repeatedly force-reload CPU registers from the interrupted enclave's SSA frame (cf. Section 2.3). Our experiments show that even though execution of the enclave instruction never completes, any direct operands plus SSA register file contents are loaded from memory each time. Importantly, since the enclave does not make progress, we can perform unlimited ZombieLoad attack attempts to reconstruct CPU register values from these implicit SSA memory accesses.

We further reduce noise from unrelated non-enclave loads on the victim CPU by opting for timer-based zero-stepping with a user space interrupt handler (Vanbulck2018nemesis) to avoid repeatedly invoking the operating system. Furthermore, we found that executing the ZombieLoad attack code in a separate address space avoids unnecessarily slowing down the spy through implicit TLB invalidations on enclave entry/exit (sgxdeveloperref).

Note that the SSA frame spans multiple cache lines. With ZombieLoad, we do not have explicit address-based control over which cache line is being leaked. Hence, leaked data might come from different saved registers that are at the same offset within a cache line. To filter out such noisy observations, we use the Domino transient error detection technique introduced in Section 6.1. Specifically, we implemented a “sliding window” that transmits 7 different domino bytes for each candidate key byte, stuffed with increasing bits from the next adjacent key byte candidate. Any noisy observations that do not match the overlap can now efficiently be filtered out.
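The sliding-window filtering can be sketched as follows, assuming each window byte takes the low (8 - s) bits of one key byte followed by the high s bits of the next; as with the (4,4) variant, the exact packing is our assumption for illustration.

```c
#include <stdint.h>

/* Window s (1..7): low (8-s) bits of a, then high s bits of b,
 * computed in the transient domain from the candidate key bytes. */
uint8_t domino_window(uint8_t a, uint8_t b, int s)
{
    return (uint8_t)((a << s) | (b >> (8 - s)));
}

/* A candidate byte pair survives only if it matches all 7 observed
 * window bytes, which efficiently filters out noisy observations
 * stemming from unrelated SSA-frame cache lines. */
int window_consistent(uint8_t a, uint8_t b, const uint8_t observed[7])
{
    for (int s = 1; s <= 7; s++)
        if (observed[s - 1] != domino_window(a, b, s))
            return 0;
    return 1;
}
```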

Attack on sgx_get_key.

The Intel SGX design includes a secure key derivation facility through the egetkey instruction (cf. Section 2.3). Enclaves execute this instruction to query a 128-bit cryptographic key from the hardware, based on the calling enclave's code layout or developer identity. This is the underlying primitive used by Intel's trusted prebuilt quoting enclave to securely unseal a long-term private attestation key from persistent storage (Costan2016; Vanbulck2018).

The official Intel SGX SDK (sgxdeveloperref) offers a convenient sgx_get_key wrapper procedure that first executes egetkey with the necessary parameters, and eventually copies the retrieved key into a provided buffer. We reverse engineered the proprietary intel_fast_memcpy function and found that in this case, the key is copied using two 128-bit moves to/from the xmm0 SSE register. We revert to zero-stepping on the last instruction of the memcpy invocation. At this point, the attacker-induced zero-step enclave resumptions will repeatedly reload, among others, the xmm0 register containing the 128-bit key from the memory hierarchy.


We evaluated the attack on a Kaby Lake i7-7700 CPU with an up-to-date Foreshadow-patched microcode revision 0x8e.

In the first experiment, we implemented a benchmark enclave that uses sgx_get_key to generate a new report key with different random key IDs. We performed 100 key-recovery experiments on sgx_get_key with different random keys. Our results show that of the times the full 128-bit key is among the key candidates, with an average remaining key-space entropy of 8.8 bits. Among these cases, of the times the exact full key has been recovered. In the other of the cases, where the full key is not among the key candidates, of the times we have partial key bytes among the recovered key candidates. On average, 10 out of 16 key bytes are correct, with a remaining global entropy of 13.59 bits. In the remaining of the times, where the correct key is not among the key candidates, our attack using the Domino technique with a sliding window did not reveal any candidates, which means an attacker can simply repeat the attack in such cases. Also, in cases where some of the key bytes are among the candidates, most of the failed key bytes reside in the first few bytes of the key. The reason for this behavior is that the Domino attack has a stronger effect on key bytes in the middle, which are surrounded by more key bytes.

In the second experiment, we perform an attack on Intel's trusted quoting enclave. The quoting enclave performs a call to sgx_get_key to derive the sealing key which is used to decrypt the EPID provisioning blob. We executed the attack on a quoting enclave that is signed with debug keys, so we can use it as a ground truth to easily verify that we have recovered the correct sealing key. We executed the attack multiple times on our setup, and we managed to recover the correct 128-bit sealing key after multiple executions of the attack and checking the candidates against each other. The recovered sealing key matches the correct key, and can indeed successfully decrypt the EPID blob for our debug-signed quoting enclave. While we have not yet reproduced this attack to recover the sealing key from the official quoting enclave image signed by Intel, we believe that this experimental evaluation showcases all the required primitives to break Intel SGX's remote attestation guarantees, as demonstrated before by Foreshadow (Vanbulck2018).

6.3. Cross-VM Covert Channel

To evaluate the performance of ZombieLoad, we implement a covert channel which can be used for all attack scenarios described in Section 4. However, in this section, we focus on the cross-VM covert channel. While covert channels are possible for Intel SGX, the kernel, and the hypervisor, these are somewhat artificial scenarios. Moreover, there are various covert channels available to user-space applications for stealthy inter-process communication (Ge2016; Maurice2017Hello).

For VMs, however, there are not many known covert channels which can be used between two VMs. So far, all cross-VM covert channels have relied on Prime+Probe (Ristenpart2009; Xu2011; Liu2015; Maurice2015C5; Maurice2017Hello), DRAMA (Pessl2016), or bus locking (Wu2012). We show that ZombieLoad can be used as a fast and reliable covert channel between virtual machines scheduled on the same physical core.


For the fastest result, the sender repeatedly loads the value to be transmitted from the L1 cache into a register. By not only loading the value from one memory address but instead from multiple memory addresses, the sender ensures that potentially multiple fill-buffer entries are used. In addition, this also thwarts an optimization of Intel CPUs which combines multiple loads from the same cache line into a single load (Abramson1996).

On a CPU supporting AVX2, the sender can encode up to 256 bits per load (e.g., using the VMOVAPS load).


The receiver mounts ZombieLoad to leak the values loaded by the sender. However, as the receiver leaks the loads only in the transient domain, the leaked values have to be transferred into the architectural domain. We encode the leaked values into the cache and recover them using Flush+Reload. When encoding values in the cache, we require at least two cache lines, i.e., 128 B, per bit to prevent the adjacent-cache-line prefetcher from interfering with the encoding. In practice, we require one physical page, i.e., 4 KB, per possible value to prevent interference of the prefetcher. To reduce the recovery bottleneck, we transfer single bytes from the transient to the architectural domain, which already requires 256 runs of Flush+Reload.

As a result, our proof-of-concept limits the transmission of actual data to a single byte per leaked load. However, we can use the remaining bits in the load to ensure that the channel is free of errors.

Transient Error Detection.

Figure 5. The packet format used in the covert channel. Every 32-bit packet consists of 8 data bits, 8-bit checksum (two’s complement), 8-bit sequence number, and a constant prefix.

The transmission of the data between sender and receiver is free of any noise. However, the receiver does not only recover values from the sender, but also other loads from the current and sibling logical core. Hence, to get rid of this noise, we encode the data as shown in Figure 5, which allows the receiver to filter out data not originating from the sender.

Although we cannot transfer the entire packet into the architectural domain, we can compute on the packet in the transient domain. Thus, we run the error detection in the transient domain, and only transmit valid packets to the architectural domain.

The challenge to run the error detection in the transient domain is that the number of instructions is limited, and not all instructions can be used. For reliable results, we cannot use instructions which speculate on either control or data flow. Hence, the error-detection code has to be as short as possible and branch free.

Our packet structure allows for extremely efficient error detection. We encode the data in the first byte and the two's complement of the data in the second byte as a checksum. To detect errors, we add the value of the first byte (i.e., the data) onto the second byte (i.e., the two's complement of the data). If both values are received correctly, this addition ensures that bits 8 to 15 of the packet are zero. Thus, for a correct packet, the least-significant 16 bits of the packet represent a value between 0 and 255, and for a corrupted packet, these bits represent a value larger than 255. We use this resulting 16-bit value as an index into our oracle array, i.e., an array consisting of 256 pages. Any value which is not a correct byte is out of bounds and thus has no effect on the cache state of the array. A correct byte is a valid index into the oracle array and ensures that the first cache line of the corresponding page is cached. Finally, by applying a cache-based side-channel attack, such as Flush+Reload, we can recover the byte from the cache state of the oracle array (Lipp2018meltdown; Kocher2019spectre).
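The packet encoding and the transient-domain check can be sketched as follows; the constant prefix value is our placeholder, not the one used in the proof of concept.

```c
#include <stdint.h>

#define PREFIX 0x5a /* placeholder for the constant prefix byte */

/* Sender: bits 0-7 data, 8-15 two's-complement checksum,
 * 16-23 sequence number, 24-31 constant prefix. */
uint32_t encode_packet(uint8_t data, uint8_t seq)
{
    uint8_t checksum = (uint8_t)(0u - data); /* two's complement */
    return ((uint32_t)PREFIX << 24) | ((uint32_t)seq << 16) |
           ((uint32_t)checksum << 8) | data;
}

/* Receiver, in the transient domain: add the data byte onto the
 * checksum byte. For an intact packet the checksum byte becomes
 * zero, so the low 16 bits equal the data (0..255) and fall inside
 * the 256-page oracle array; a corrupted packet indexes out of
 * bounds and leaves the oracle's cache state untouched. */
uint32_t oracle_index(uint32_t packet)
{
    uint32_t data = packet & 0xff;
    uint32_t checksum = (packet >> 8) & 0xff;
    return (((checksum + data) & 0xff) << 8) | data;
}
```

In the real attack, `oracle_index` would scale by the page size and touch the corresponding oracle page, so that Flush+Reload later recovers the byte architecturally.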

The error detection in the transient domain has the advantage that we do not require computation time in the architectural domain. Instead of waiting for the exception to become architecturally visible by doing nothing, we already use this time to perform the required computation. An additional advantage is that while we are still in the transient domain, we can work on noise-free data. Thus, we do not require complex error correction after receiving the data (Maurice2017Hello).

In addition to the error detection, we also encode a sequence number into the packet. The sequence number allows ordering the received packets. It can be recovered using the same method as the data value, \egusing an oracle array and a cache-based side-channel attack.


We evaluate the covert channel both in a lab environment and in a public cloud. In the lab environment, we used 2 virtual machines running inside QEMU KVM on an i7-8650U. For the cloud scenario (the cloud provider asked us not to disclose its name at this point), we used 2 co-located virtual machines running CentOS 7.6.1810 with Linux kernel 3.10.0-957 on a Xeon E5-2670 CPU.

Both in the cloud and on our lab machine, we achieved an error-free transmission. On our lab machine, we observed transmission rates of up to . As TSX was not available in the cloud scenario, we achieved a transmission rate of (, ) with Variant 1 and signal handling.

6.4. Browsing-Behavior Monitoring

ZombieLoad is also well suited for detecting specific byte sequences within loaded data. We demonstrate an attack for which we leverage ZombieLoad to fingerprint a web browser session. For this attack, we assume an unprivileged attacker running on one logical core and a web browser running on the sibling logical core. In this scenario, it is irrelevant whether the attacker and victim run on a native machine or whether they are in (different) virtual machines.

We present two different attacks: a keyword-detection attack which can fingerprint website content, and a URL-recovery attack to monitor a victim’s browsing behavior.

Keyword Detection.

The keyword detection allows an attacker to gain information on the type of content the victim is consuming. For this attack, we constantly sample data using ZombieLoad and match leaked values against a list of pre-defined keywords.

We leverage the fact that we have access to a full cache line and can do arbitrary computations in the transient domain (\cfSection 6.3). As a result of the computation, we only have to externalize a small integer indicating which keyword has matched via a cache side channel.

One limitation is the length of the keyword list, as in the transient domain, only a limited number of memory accesses are possible before the transient execution aborts. The most reliable solution is to store the keyword list entirely in CPU registers. Hence, the length of the keyword list is limited by the available registers. Moreover, the length is also limited by the amount of code that is transiently executed to compare leaked values to the keyword list.
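As an architectural sketch of the transient keyword check (the helper name is ours, and plain C stands in for the transient code), the match reduces to a handful of comparisons against keyword values, which the real attack keeps in registers, producing a small index that is externalized via the cache side channel:

```c
#include <stdint.h>

/* Illustrative sketch: compare one leaked 8-byte value against up to
 * four keywords and produce a small match index. In the transient
 * domain, only this small integer has to be externalized via the
 * oracle array, not the full leaked cache line. */
static int match_keyword(uint64_t leaked, const uint64_t keywords[4]) {
    for (int k = 0; k < 4; k++)
        if (leaked == keywords[k])
            return k;       /* small integer to transmit */
    return -1;              /* no keyword matched */
}
```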

URL Recovery.

In the second attack, we recover accessed websites from browser sessions without prior selection of interesting keywords. We take a more indirect approach that relies on modern websites performing many individual HTTP requests to the same domain, \egto load additional resources such as scripts and images.

In the transient domain, we again sample data using ZombieLoad. While still in the transient domain, we detect the substring “www.” inside the leaked data. When we discover a match, we leak the character following “www.” to the architectural domain using a cache side channel. This already results in a set of first characters of domain names which we refer to as the candidate set.

In the next iteration, for every domain in the candidate set, we take the last four leaked characters (\eg“ww.X”). We use this string in the transient domain to filter leaked values, similar to the “www.” substring in the first iteration. If a match is found, we leak the next character. We repeat these steps until we see a string ending with a top-level domain.
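Ignoring the transient-execution constraints, one iteration of this matching step can be sketched architecturally as follows (the helper name is ours):

```c
#include <string.h>

/* Sketch of one URL-recovery iteration: search the sampled data for
 * the currently known suffix (e.g., "www." or "ww.X") and, on a hit,
 * return the character that follows it; 0 signals no match. In the
 * real attack this check runs in the transient domain and the
 * character is leaked via the oracle array. */
static char next_char(const char *sample, const char *suffix) {
    const char *hit = strstr(sample, suffix);
    return hit ? hit[strlen(suffix)] : 0;
}
```

Repeatedly feeding the last four recovered characters back as the new suffix extends the candidate domain one character per iteration.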

Note that this attack is not limited to URLs. Potentially all data which follows a predictable pattern, such as session cookies or credit-card numbers, can be leaked with this variant.


We evaluated both attacks running an unmodified Firefox browser version 66.0.2 on the same physical core as the attacker. Our proof-of-concept implementation of the keyword-checking attack can check four keywords of up to 8 bytes each. Due to extensive precomputations browsers perform when entering a URL, a keyword is sometimes already matched during the autocompletion of the URL. For highly dynamic websites, such as nytimes.com, keywords reliably match on the first access of the website. Mostly static websites, such as gnupg.org, have a probability of matching a keyword in this setup. We observed false positives after the first website access when continuing to use the browser. We hypothesize that memory locations containing the keywords are re-used and may thus leak again at a later time.

For the URL recovery attack, we simulated user behavior by accessing popular websites and refreshing them in a defined time interval. We counted the number of refreshes necessary until we recovered the entire URL including top level domain. For each website, the experiment was repeated 100 times.

Website Minimum Average Maximum
nytimes.com 1 1 3
facebook.com 1 2 4
kernel.org 2 6 13
gnupg.org 2 10 34
Table 3. Number of accesses required to recover a website name. The experiment was repeated 100 times per website.

The actual number of refreshes needed depends on the nature of the website visited. For a highly dynamic page, such as facebook.com or nytimes.com, a small number of reloads is sufficient to recover the entire name. For static pages, such as gnupg.org or kernel.org, the number of necessary reloads increases by approximately a factor of 10. See Table 3 for a detailed overview of required reloads.

6.5. Targeted Data Leakage

Inherently, ZombieLoad is a 1-dimensional side channel, \iethe leakage is only controlled by time. Hence, leakage cannot be steered using specific addresses as is the case, \egfor Meltdown (Lipp2018meltdown). While this data sampling is sufficient for several real-world attacks, it is a limiting factor for more general attacks.

In this section, we show how ZombieLoad can be combined with prefetch gadgets (Canella2019) for targeted data leakage.



if (x < array_len) {
    y = array[x];
}
\end{lstlisting}
\caption{A simple prefetch gadget relying on Spectre-PHT~\cite{Kocher2019spectre}.
By mistraining the branch, this gadget loads an arbitrary out-of-bounds value for targeted leakage.}
\paragrabf{Speculative Data Leakage.}
\Cref{lst:prefetch-gadget} illustrates such a gadget.
It is a common pattern in software for accessing an element of an array~\cite{Canella2019}.
First, the code checks whether the index lies within the bounds of the array.
Only if this is the case, the element is accessed, \ie loaded.
While it is evident that for a user-controlled index the corresponding array element can be loaded, such a gadget is even more powerful.
On a CPU vulnerable to Spectre, an attacker can mistrain the branch predictor, \eg by providing several valid values for the array index.
Then, by providing an out-of-bounds index, the branch is misspeculated and speculatively accesses an out-of-bounds value.
Alternatively, the attacker can randomly alternate between valid and out-of-bounds indices to achieve a high percentage of mispredictions without any prior branch-predictor mistraining.
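A minimal sketch of this pattern follows (the function names and array sizes are ours; the gadget body matches \Cref{lst:prefetch-gadget}):

```c
#include <stddef.h>

static size_t array_len = 16;
static unsigned char array_data[16];

/* The Spectre-PHT gadget, wrapped as a function. Architecturally it
 * only ever returns in-bounds elements; transiently, a mispredicted
 * branch dereferences an attacker-chosen x, and the loaded value
 * becomes observable via the fill buffer. */
static unsigned char gadget(size_t x) {
    unsigned char y = 0;
    if (x < array_len)
        y = array_data[x];
    return y;
}

/* Mistraining pattern from the text: mostly valid indices interleaved
 * with an occasional out-of-bounds index keep the misprediction rate
 * high without a separate training phase. */
static void train_and_probe(size_t oob_index) {
    for (int i = 0; i < 1000; i++)
        gadget((i % 10 == 9) ? oob_index : (size_t)(i % 16));
}
```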
\AttackName can not only leak architecturally accessed data but also speculatively accessed data.
Hence, \AttackName can even see the value of loads which are never architecturally visible.
Such loads include, among others, speculative memory loads and prefetches.
Thus, any Spectre gadget which is not hardened, \eg using a memory fence~\cite{IntelSpecAnalysis,AMDSpecAnalysis,ARMSpecAnalysis,Canella2019} or a mask~\cite{Carruth2018Hardening,Canella2019}, can be used to specify data to leak.
Moreover, \AttackName does not require classic Spectre gadgets containing an indirect array access~\cite{Kocher2019spectre}.
A simple out-of-bounds access (\cf \Cref{lst:prefetch-gadget}) is sufficient.
While such gadgets have been demonstrated for breaking KASLR~\cite{Schwarz2018netspectre}, they were considered relatively harmless as they do not leak data~\cite{Canella2019}.
Hence, most approaches for finding gadgets do not consider such gadgets~\cite{Wang2017oo7,Guarnieri2018spectector}.
In the Linux kernel, however, such gadgets are also patched if they are discovered, mainly as they can be used together with the Foreshadow vulnerability to leak arbitrary kernel memory~\cite{Corbet2018smatch,Stecklina2019L1TF}.
So far, 172 such gadgets have been fixed in kernel 5.0~\cite{Canella2019}.
With \AttackName, we show that such gadgets are indeed powerful and have to be patched as well.
\paragrabf{Potential Incompleteness of Countermeasures.}
Mainly, there are 2 methods to prevent exploitation of Spectre-PHT: memory fences after branches~\cite{IntelSpecAnalysis,AMDSpecAnalysis,ARMSpecAnalysis,Canella2019}, or constraining the index to a valid range using a bitmask~\cite{Carruth2018Hardening,Canella2019}.
The variant using fences is implemented in the Microsoft compiler~\cite{Kocher2018mitigations,Kocher2019spectre}, whereas the variant using bitmasks is implemented in GCC~\cite{LWN_GCC_SLH} and LLVM~\cite{Carruth2018Hardening}, and also used in the Linux kernel~\cite{LWN_GCC_SLH}.
Both methods prevent exploitation of Spectre-PHT~\cite{Canella2019}, as the misspeculation cannot load any data.
Hence, this is also effective against \AttackName, as fixed gadgets cannot be exploited to load arbitrary values.
However, even with these countermeasures in place, there is a remaining leakage which can be exploited using \AttackName.
When architecturally loading an in-bounds value, \AttackName can leak up to 64 bytes of the load.
Hence, with \AttackName, there is a potential leakage of up to 63 bytes which are out of bounds if the last in-bounds value is at the beginning of a cache line or the base of the array is at the end of a cache line.
\paragrabf{Data Leakage.}
To demonstrate the feasibility of prefetch gadgets for targeted data leakage, we leverage an artificial prefetch gadget as given in \Cref{lst:prefetch-gadget}.
For our evaluation, we used such a gadget in the system-call path of the Linux kernel 5.0.7.
We execute \AttackName on one logical core, while on the sibling logical core we execute system calls that switch between out-of-bounds and in-bounds array indices to achieve a high frequency of mispredictions in the gadget.
This approach yields leaked values with a large noise component from unrelated loads.
We repeat this setup without trying to generate mispredictions to generate a baseline of noise values.
We generate frequency distributions for both runs and subtract the noise frequency from the misprediction run.
We then choose the byte value that was seen most frequently.
With this crude statistical method, we can recover kernel memory at one byte per \SI{10}{\second} with \SI{38}{\percent} accuracy.
Probing bytes for \SI{20}{\second} improves the accuracy to \SI{46}{\percent}.
As with Meltdown~\cite{Lipp2018meltdown}, common byte values such as \texttt{0x00} and \texttt{0xFF} occur too often and have to be removed from the leaked data for the recovery to work.
Our approach is thus blind to these values.
The speed and accuracy can be improved if there is a priori knowledge of the target data.
For example, a 7-bit ASCII string can be leaked with a probing time of \SI{10}{\second} per byte with \SI{72}{\percent} accuracy.
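The byte-recovery step of our statistical method can be sketched as follows (a simplification; the function name is ours):

```c
/* Histogram the bytes sampled while provoking mispredictions, subtract
 * the baseline histogram sampled without mispredictions, and pick the
 * byte value with the largest surplus. 0x00 and 0xFF are skipped
 * because they occur too often to be attributed to the target. */
static int recover_byte(const long signal[256], const long baseline[256]) {
    int best = -1;
    long best_diff = 0;
    for (int v = 0; v < 256; v++) {
        if (v == 0x00 || v == 0xFF)     /* overly common values */
            continue;
        long diff = signal[v] - baseline[v];
        if (diff > best_diff) {
            best_diff = diff;
            best = v;
        }
    }
    return best;    /* -1 if no value stands out */
}
```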
\paragrabf{Disabling Hyperthreading.}
As \AttackName leaks loaded values across logical cores, a straightforward mitigation is disabling the use of hyperthreading.
Hyperthreading improves performance for certain workloads by \SI{30}{\percent} to \SI{40}{\percent}~\cite{bulpin2004multiprogramming,Phoronix2018HT}, and as such disabling it may incur an unacceptable performance impact.
Depending on the workload, a more efficient mitigation is the use of co-scheduling~\cite{Ousterhout1982}.
Co-scheduling can be configured to prevent the execution of code from different protection domains on a hyperthread pair.
Current topology-aware co-scheduling algorithms~\cite{Schoenherr2013} are not concerned with preventing kernel code from running concurrently with user-space code.
With such a scheduling strategy, leaks between user processes can be prevented but leaks between kernel and user space cannot.
To prevent leakage between kernel and user space, the kernel must additionally ensure that kernel entries on one logical core force the sibling logical core into the kernel as well.
This discussion applies in an analogous way to hypervisors and virtual machines.
\paragrabf{Flushing Buffers.}
We have demonstrated that \AttackName also works across protection boundaries on a single logical core.
Hence, neither disabling hyperthreading nor co-scheduling is a fully effective mitigation.
We have not found an instruction sequence that reliably prevents leakage across protection boundaries.
Even flushing the entire L1 data cache (using \texttt{MSR\_IA32\_FLUSH\_CMD}) and issuing as many dummy loads as there are fill-buffer entries (“load stuffing”) is not sufficient.
There is still remaining leakage, which we assume is caused by the replacement policy of the line-fill buffer.
Hence, to fully mitigate the leakage, we require a microcode update which provides a method to flush the line-fill buffer.
\paragrabf{Selective Feature Deactivation.}
Weaker countermeasures target individual building blocks (\cf \Cref{sec:building-blocks}).
The operating system kernel can ensure that the accessed and dirty bits in page tables are always set, impairing \VariantTwo.
Unfortunately, \VariantOne is always possible if the attacker can identify an alias mapping of any accessible user page in the kernel.
This is especially true if the attacker is running in or can create a virtual machine.
Hence, we also recommend disabling VT-x on systems that do not need to run virtual machines.
\paragrabf{Removing Prefetch Gadgets.}
To prevent targeted data leakage, prefetch gadgets need to be neutralized, \eg using \textit{array\_index\_nospec} in the Linux kernel.
This function clamps array indices to valid values and prevents arbitrary virtual memory from being prefetched.
Placing these functions is currently a manual task, and due to the incomplete documentation of how Intel CPUs prefetch data, these mitigations cannot be guaranteed to be complete.
Note that Spectre mitigations using \texttt{lfence} instructions might also be incomplete against \AttackName.
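The clamping behind \textit{array\_index\_nospec} can be sketched as a branch-free mask computation. This is a simplification of the Linux kernel's generic C fallback; the helper name is ours, we assume array sizes do not exceed \texttt{LONG\_MAX}, and we rely on arithmetic right shifts of signed values, which all mainstream compilers provide:

```c
#include <stddef.h>

/* Returns all-ones if index < size and all-zeroes otherwise, computed
 * without a branch so the clamp also holds under misspeculation.
 * If index >= size, (size - 1 - index) wraps around and sets the top
 * bit; the arithmetic shift then smears that bit across the word. */
static unsigned long index_mask_nospec(unsigned long index,
                                       unsigned long size) {
    return ~(unsigned long)((long)(index | (size - 1UL - index))
                            >> (sizeof(long) * 8 - 1));
}
```

An access then becomes `y = array[x & index_mask_nospec(x, array_len)];`, so even a misspeculated out-of-bounds index reads element 0 instead of an attacker-chosen address.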
Another way to prevent prefetch gadgets from reaching sensitive data is to prevent this data from being mapped in the address space of the prefetch gadget.
Exclusive Page-Frame Ownership~\cite{Kemerlis2014} (XPFO) partially achieves this for the Linux kernel's mapping of physical memory.
Prefetch gadgets can also be neutralized using Speculative Load Hardening~\cite{Carruth2018Hardening} (SLH).
SLH prevents speculative execution by introducing artificial data dependencies via a compiler pass.
SLH incurs a performance overhead of \SI{10}{\percent} to \SI{50}{\percent} for typical applications.
To the best of our knowledge, its overhead for kernel or hypervisor code has not been studied yet.
\paragrabf{Instruction Filtering.}
The above discussion mostly focuses on attacks across process or virtual-machine boundaries.
For attacks inside of a single process (\eg JavaScript sandbox), the sandbox implementation must make sure that the requirements for mounting \AttackName are not met.
One example is to prevent the generation and execution of the \texttt{clflush} instruction, which so far is a crucial part of the attack.
\paragrabf{Secret Sharing.}
On the software side, we can also rely on secret sharing techniques used to protect against physical side-channel attacks~\cite{Shamir1979secretsharing}.
We can ensure that a secret is never directly loaded from memory but instead only combined in registers before being used.
As a consequence, observing the data of a load does not reveal the secret.
For a successful attack, an attacker has to leak all shares of the secret. This mitigation is, of course, incomplete if register values are written to and subsequently loaded from memory as part of context switching.
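A minimal sketch of the idea with two XOR shares follows (the split/combine helpers are illustrative):

```c
#include <stdint.h>

/* Illustrative XOR-based secret sharing: the secret is stored in
 * memory only as two shares. A single leaked load reveals one share,
 * which alone is indistinguishable from random; the secret is
 * reconstructed in a register immediately before use. */
struct shares { uint64_t a, b; };

static struct shares split_secret(uint64_t secret, uint64_t random_mask) {
    struct shares s = { random_mask, secret ^ random_mask };
    return s;
}

static uint64_t combine_shares(struct shares s) {
    return s.a ^ s.b;   /* reconstruct only in a register */
}
```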
With \AttackName, we showed a novel Meltdown-type attack targeting the processor's fill-buffer logic.
\AttackName enables an attacker to leak recently loaded values used by the current or sibling logical CPU.
We show that \AttackName allows leaking across user-space processes, CPU protection rings, virtual machines, and SGX enclaves.
We demonstrated the immense attack potential by monitoring browser behavior, extracting AES keys, establishing cross-VM covert channels, and recovering SGX sealing keys.
Finally, we conclude that disabling hyperthreading is the only possible workaround to mitigate \AttackName on current processors.
We thank Werner Haas (Cyberus Technology), Claudio Canella (Graz University of Technology), Jon Masters (Red Hat), Alex Ionescu (CrowdStrike), and Martin Schwarzl (Graz University of Technology).
The research presented in this paper was partially supported by the Research Fund KU Leuven.
Jo Van Bulck is supported by a grant of the Research Foundation -- Flanders (FWO).
The project was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 681402).
It was also supported by the Austrian Research Promotion Agency (FFG) via the K-project DeSSnet, which is funded in the context of COMET -- Competence Centers for Excellent Technologies by BMVIT, BMWFW, Styria and Carinthia.
Additional funding was provided by a generous gift from Intel.
Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the funding parties.
\section{Fill-buffer Size}\label{appendix:fill-buffer-size}
In this section, we analyze the size of the fill buffer in terms of fill-buffer entries usable per logical core.
Intel describes the fill buffer as a “competitively-shared resource during HT operation”~\cite{Intel_vol3}.
Hence, with 10 fill-buffer entries (Sandy Bridge and newer microarchitectures)~\cite{Intel_vol3}, we expect that when hyperthreading is enabled, every logical core can use up to 10 entries.
Our experimental setup measures the time it takes to execute $n$ stores to DRAM, for $n = 1, \dots, 20$.
We expect that the time increases linearly with the number of stores $n$ as long as there are unused fill-buffer entries.
To ensure that the stores occupy fill-buffer entries, we leverage non-temporal stores, which bypass the cache and go directly to DRAM.
We repeated our experiments \num{1000000} times and always measured the best case, \ie the minimum latency, to eliminate noise.
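The measurement loop can be sketched with x86-64 intrinsics as follows (the helper name and buffer layout are our illustration; absolute cycle counts depend on the microarchitecture):

```c
#include <stdint.h>
#include <x86intrin.h>

/* Time n back-to-back non-temporal stores, which occupy fill-buffer
 * entries and bypass the cache, and keep the minimum over many trials
 * to suppress noise. Stores are 64 bytes apart so each one hits a
 * distinct cache line. */
static uint64_t min_store_latency(int *buf, int n, int trials) {
    uint64_t best = UINT64_MAX;
    unsigned aux;
    for (int t = 0; t < trials; t++) {
        uint64_t start = __rdtscp(&aux);
        for (int i = 0; i < n; i++)
            _mm_stream_si32(&buf[i * 16], 1);   /* one store per line */
        _mm_sfence();                           /* drain the stores */
        uint64_t end = __rdtscp(&aux);
        if (end - start < best)
            best = end - start;
    }
    return best;
}
```

Plotting this minimum latency over $n$ reveals a knee where the stores exceed the number of available fill-buffer entries.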
 \caption{One logical core can leverage the entire fill buffer (12 entries).
 If both logical cores execute stores, the fill buffer is competitively shared, leading to an increased latency for both logical cores.}
\Cref{fig:fbsize-split} shows that both logical cores can indeed leverage the entire fill buffer.
When running the experiment on one (isolated) logical core, while the other (isolated) logical core does nothing, we get a latency increase when executing more than 12 stores.
When we run the experiment on both logical cores in parallel, the latency increase is still after 12 stores.
 \caption{On pre-Skylake CPUs, we measure 10 fill-buffer entries, matching Intel's documentation. On Skylake and newer, we measure 12 fill-buffer entries.}
Interestingly, the documented number of fill buffers does not match our experiments for Skylake and newer microarchitectures.
While we measure 10 entries on pre-Skylake CPUs as it is documented, we measure 12 entries on Skylake and newer (\cf \Cref{fig:fbsize-skylake}).
From our experiments, we conclude that both logical cores can leverage the entire fill buffer. Therefore, every logical core can potentially use any entry in the fill buffer.