Attacker Control and Impact for Confidentiality and Integrity

Aslan Askarov and Andrew C. Myers
Department of Computer Science, Cornell University
aslan@cs.cornell.edu, andru@cs.cornell.edu
Abstract.

Language-based information flow methods offer a principled way to enforce strong security properties, but enforcing noninterference is too inflexible for realistic applications. Security-typed languages have therefore introduced declassification mechanisms for relaxing confidentiality policies, and endorsement mechanisms for relaxing integrity policies. However, a continuing challenge has been to define what security is guaranteed when such mechanisms are used. This paper presents a new semantic framework for expressing security policies for declassification and endorsement in a language-based setting. The key insight is that security can be characterized in terms of the influence that declassification and endorsement allow to the attacker. The new framework introduces two notions of security to describe the influence of the attacker. Attacker control characterizes what the attacker is able to learn from the observable effects of the program; attacker impact captures the attacker's influence on trusted locations. This approach yields novel security conditions for checked endorsements and robust integrity. The framework is flexible enough to recover and to improve on the previously introduced notions of robustness and qualified robustness. Further, the new security conditions can be soundly enforced by a security type system. The applicability and enforcement of the new policies are illustrated through various examples, including data sanitization and authentication.

Key words and phrases:
Security type system, information flow, noninterference, confidentiality, integrity, robustness, downgrading, declassification, endorsement, security policies

1. Introduction

Many common security vulnerabilities can be seen as violations of either confidentiality or integrity. As a general way to prevent these information security vulnerabilities, information flow control has become a popular subject of study, both at the language level [23] and at the operating-system level (e.g., [14, 12, 30]). The language-based approach holds the appeal that the security property of noninterference [13] can be provably enforced using a type system [27]. In practice, however, noninterference is too rigid: many programs considered secure need to violate noninterference in limited ways.

Using language-based downgrading mechanisms such as declassification [17, 21] and endorsement [20, 29], programs can be written in which information is intentionally released, and in which untrusted information is intentionally used to affect trusted information or decisions. Declassification relaxes confidentiality policies, and endorsement relaxes integrity policies. Both endorsement and declassification have been essential for building realistic applications, such as various applications built with Jif [15, 18]: games [5], a voting system [11], and web applications [9].

A continuing challenge is to understand what security is obtained when code uses downgrading. This paper contributes a more precise and satisfactory answer to this question, particularly clarifying how the use of endorsement weakens confidentiality. While much work has been done on declassification (usefully summarized by Sands and Sabelfeld [24]), there is comparatively little work on the interaction between confidentiality and endorsement.

To see such an interaction, consider the following notional code example, in which a service holds both old data (old_data) and new data (new_data), but the new data is not to be released until time embargo_time. The variable new_data is considered confidential, and must be declassified to be released:

if request_time >= embargo_time
  then return declassify(new_data)
  else return old_data

Because the requester is not trusted, the requester must be treated as a possible attacker. Suppose the requester has control over the variable request_time, which we can model by considering that variable to be low-integrity. Because the intended security policy depends on request_time, the attacker controls the policy that is being enforced, and can obtain the confidential new data earlier than intended. This example shows that the integrity of request_time affects the confidentiality of new_data. Therefore, the program should be considered secure only when the guard expression, request_time >= embargo_time, is high-integrity.

A different but reasonable security policy is that the requester may specify the request time as long as the request time is in the past. This policy could be enforced in a language with endorsement by first checking the low-integrity request time to ensure it is in the past; then, if the check succeeds, endorsing it to be high-integrity and proceeding with the information release. The explicit endorsement is justifiable because the attacker’s actions are permitted to affect the release of confidential information as long as adversarial inputs have been properly sanitized. This is a common pattern in servers that process possibly adversarial inputs.
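To make this pattern concrete, the following is a minimal Python sketch of the check-then-endorse idiom, under stated assumptions: the Labeled wrapper, the string-encoded labels, and the endorse/declassify helper functions are illustrative stand-ins for the language mechanisms studied in this paper, not their formal semantics.

from dataclasses import dataclass
import time

@dataclass
class Labeled:
    value: object
    confidentiality: str  # "public" or "secret"
    integrity: str        # "trusted" or "untrusted"

def endorse(x: Labeled) -> Labeled:
    # Upgrade integrity; this is the downgrading step that must be
    # justified by the sanitization check preceding it.
    return Labeled(x.value, x.confidentiality, "trusted")

def declassify(x: Labeled) -> Labeled:
    # Downgrade confidentiality; robust only if the decision to do so
    # is high-integrity.
    return Labeled(x.value, "public", x.integrity)

def serve(request_time: Labeled, new_data: Labeled, old_data: Labeled,
          embargo_time: float) -> Labeled:
    if request_time.integrity == "untrusted":
        if request_time.value > time.time():   # sanitize: must be in the past
            return old_data                    # reject unsanitized input
        request_time = endorse(request_time)   # checked endorsement
    # The guard below is now high-integrity, so the release is robust.
    if request_time.value >= embargo_time:
        return declassify(new_data)
    return old_data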

Robust declassification has been introduced in prior work [28, 16, 10] as a semantic condition for secure interactions between integrity and confidentiality. The prior work also develops type systems for enforcing robust declassification, which are implemented as part of Jif [18]. However, prior security conditions for robustness are not satisfactory, for two reasons. First, these prior conditions characterize information security only for terminating programs. A program that does not terminate is automatically considered to satisfy robust declassification, even if it releases information improperly during execution. Therefore the security of programs that do not terminate, such as servers, cannot be described. A second and perhaps even more serious limitation is that prior security conditions largely ignore the possibility of endorsement, with the exception of qualified robustness [16]. Qualified robustness gives the endorse operation a somewhat ad hoc, nondeterministic semantics, to reflect the attacker's ability to choose the endorsed value. This approach operationally models what the attacker can do, but does not directly describe the attacker's control over confidentiality. The introduction of nondeterminism also makes the security property possibilistic. However, possibilistic security properties have been criticized because they can weaken under refinement [22, 25].

The main contribution of this paper is a general, language-based semantic framework for expressing information flow security and semantically capturing the ability of the attacker to influence both the confidentiality and integrity of information. The key building blocks for this semantics are attacker knowledge [1] and its (novel) dual, attacker impact, which respectively describe what attackers can know and what they can affect. Building upon attacker knowledge, the interaction of confidentiality and integrity, which we term attacker control, can be characterized formally. The robust interaction of confidentiality and integrity can then be captured cleanly as a constraint on attacker control. Further, endorsement is naturally represented in this framework as a form of attacker control, and a more satisfactory version of qualified robustness can be defined. All these security conditions can be formalized in both progress-sensitive and progress-insensitive variants, allowing us to describe the security of both terminating and nonterminating systems.

We show that the progress-insensitive variants of these improved security conditions are enforced soundly by a simple security type system. Recent versions of Jif have added a checked endorsement construct that is useful for expressing complex security policies [9], but whose semantics were not precisely defined; this paper gives semantics, typing rules and a semantic security condition for checked endorsement, and shows that checked endorsement can be translated faithfully into simple endorsement at both the language and the semantic level. Our type system can easily be adjusted to enforce the progress-sensitive variants of the security conditions, as has been shown in the literature [26, 19].

The rest of this paper is structured as follows. Section 2 shows how to define information security in terms of attacker knowledge. Section 3 introduces attacker control. Section 4 defines progress-sensitive and progress-insensitive robustness using the new framework. Section 5 extends this to improved definitions of robustness that allow endorsements, generalizing qualified robustness. A type system for enforcing these robustness conditions is presented in Section 6. The checked endorsement construct appears in Section 7, which introduces a new notion of robustness that allows checked endorsements, and shows that it can be understood in terms of robustness extended with simple endorsements. Section 8 introduces attacker impact. Additional examples are presented in Section 9, related work is discussed in Section 10, and Section 11 concludes.

This paper is an extended version of a previous paper by the same authors [4]. The significant changes include proofs of all the main theorems, a semantic rather than syntactic definition of fair attacks, and a renaming of “attacker power” to “attacker impact”.

2. Semantics


Information flow levels. We assume two security levels for confidentiality — public and secret — and two security levels for integrity — trusted and untrusted. These levels are denoted P, S and T, U, respectively. We define the information flow ordering ⊑ between these levels: P ⊑ S, and T ⊑ U. The four levels define a security lattice, as shown in Figure 1. Every point on this lattice has two security components: one for confidentiality, and one for integrity. We extend the information flow ordering to elements of this lattice: ℓ1 ⊑ ℓ2 if the ordering holds between the corresponding components. As is standard, we define the join ℓ1 ⊔ ℓ2 as the least upper bound of ℓ1 and ℓ2, and the meet ℓ1 ⊓ ℓ2 as the greatest lower bound of ℓ1 and ℓ2. All four lattice elements are meaningful; for example, it is possible for information to be both secret and untrusted when it depends on both secret and untrusted (i.e., attacker-controlled) values. This lattice is the simplest possible choice for exploring the topics of this paper; however, the results of this paper straightforwardly generalize to the richer security lattices used in other work on robustness [10].

Figure 1. Information flow lattice
Figure 2. Syntax of the language
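The following Python sketch spells out the four-point lattice of Figure 1 and its componentwise operations; the string encoding of levels as (confidentiality, integrity) pairs is an assumption made for illustration, not the paper's notation.

from itertools import product

CONF = ["P", "S"]    # public ⊑ secret
INTEG = ["T", "U"]   # trusted ⊑ untrusted

def flows_to(l1, l2):
    # Componentwise ordering on (confidentiality, integrity) pairs.
    (c1, i1), (c2, i2) = l1, l2
    return CONF.index(c1) <= CONF.index(c2) and INTEG.index(i1) <= INTEG.index(i2)

def join(l1, l2):
    # Least upper bound, taken per component.
    return (max(l1[0], l2[0], key=CONF.index),
            max(l1[1], l2[1], key=INTEG.index))

def meet(l1, l2):
    # Greatest lower bound, taken per component.
    return (min(l1[0], l2[0], key=CONF.index),
            min(l1[1], l2[1], key=INTEG.index))

levels = list(product(CONF, INTEG))               # all four lattice points
assert flows_to(("P", "T"), ("S", "U"))           # bottom flows to top
assert join(("P", "U"), ("S", "T")) == ("S", "U") # secret and untrusted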

Language and semantics.

Figure 3. Semantics of expressions
Figure 4. Semantics of commands

We consider a simple imperative language with syntax presented in Figure 2. The semantics of the language is fairly standard and is given in Figures 3 and 4. For expressions, we define big-step evaluation of the form ⟨e, m⟩ ⇓ v, where v is the result of evaluating expression e in memory m. For commands, we define a small-step operational semantics, in which a single transition is written as ⟨c, m⟩ → ⟨c′, m′⟩, where c and m are the initial command and memory, and c′ and m′ are the resulting command and memory. The only unusual feature is the annotation on each transition, which we call an event. Events record assignments: an assignment of value v to variable x is recorded by an event (x, v). This corresponds to our attacker model, in which the attacker may only observe assignments to public variables. We write ⟨c, m⟩ →* t⃗ to mean that trace t⃗ is produced starting from ⟨c, m⟩ using zero or more transitions. Each trace is composed of individual events, and a prefix of t⃗ up to the i-th event is denoted by t⃗_{1..i}; we use the operator · to denote the concatenation of two traces or events. If a transition does not affect memory, its event is empty, which is either written explicitly as ε or is omitted, e.g.: ⟨c, m⟩ → ⟨c′, m′⟩.
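As an illustration of the event-annotated semantics, the following Python sketch interprets a fragment of the while-language and records an event (x, v) at each assignment. The nested-tuple AST and the use of Python functions for expressions are encoding assumptions made for illustration, not the paper's formal semantics.

def run(c, m, trace, fuel=1000):
    """Evaluate command c in memory m, appending assignment events to trace."""
    if fuel <= 0:
        raise TimeoutError("possible divergence")
    tag = c[0]
    if tag == "skip":
        pass
    elif tag == "assign":            # ("assign", x, f) with f: memory -> value
        _, x, f = c
        m[x] = f(m)
        trace.append((x, m[x]))      # the event recorded on this transition
    elif tag == "seq":               # ("seq", c1, c2)
        run(c[1], m, trace, fuel)
        run(c[2], m, trace, fuel)
    elif tag == "if":                # ("if", guard, c1, c2)
        _, guard, c1, c2 = c
        run(c1 if guard(m) else c2, m, trace, fuel)
    elif tag == "while":             # ("while", guard, body)
        _, guard, body = c
        if guard(m):
            run(body, m, trace, fuel - 1)
            run(c, m, trace, fuel - 1)
    return m

# l := h; l2 := l + 1 produces the events (l, 7) and (l2, 8).
m, t = {"h": 7}, []
run(("seq", ("assign", "l", lambda m: m["h"]),
            ("assign", "l2", lambda m: m["l"] + 1)), m, t)
assert t == [("l", 7), ("l2", 8)]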

Finally, we assume that the security environment Γ maps program variables to their security levels. Given a memory m, we write m_P for the public part of the memory; similarly, m_T is the trusted part of m. We write m1 =_T m2 when memories m1 and m2 agree on their trusted parts, and m1 =_P m2 when m1 and m2 agree on their public parts.


2.1. Attacker knowledge

This section provides background on the attacker-centric model for information flow security [1]. We recall definitions of attacker knowledge, progress knowledge, and divergence knowledge, and introduce progress-(in)sensitive release events.


Low events. Among the events that are generated during a trace, we distinguish a sequence of low (or public) events. Low events correspond to observations that an attacker can make during a run of the program. We assume that the attacker may observe individual assignments to public variables. Furthermore, if the program terminates, we assume that a termination event ⇓ may also be observed by the attacker. If the attacker can detect divergence of programs (cf. Definition 0.3), then divergence ⇑ is also a low event.

Given a trace t⃗, the low events in that trace are denoted by t⃗_P. A single low event is often denoted by ℓ, and a sequence of low events by ℓ⃗. We overload the notation for semantic transitions, writing ⟨c, m⟩ →* ℓ⃗ if only the low events produced from configuration ⟨c, m⟩ are relevant; that is, there is a trace t⃗ such that ⟨c, m⟩ →* t⃗ and t⃗_P = ℓ⃗. Low events are the key element in the definition of attacker knowledge [1].

The knowledge of the attacker is described by the set of initial memories compatible with low observations. Any reduction in this set means the attacker has learned something about secret parts of the initial memory.

Definition 0.1 (Attacker knowledge).

Given a sequence of low events ℓ⃗, initial low memory m_P, and program c, attacker knowledge is

    k(c, m_P, ℓ⃗) = { m′ | m′_P = m_P ∧ ⟨c, m′⟩ →* ℓ⃗ }

Attacker knowledge gives a handle on what information the attacker learns with every low event. The smaller the knowledge set, the more precise is the attacker’s information about secrets. Knowledge is monotonic in the number of low events: as the program produces low events, the attacker may learn more about secrets.
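The following brute-force Python sketch computes attacker knowledge by enumeration, directly following Definition 0.1. The finite value range for secrets and the modeling of a program by the low events it produces from a memory are simplifying assumptions made so the set can be enumerated.

from itertools import product

def knowledge(program, m_low, secret_vars, values, observed):
    # All initial memories that agree with m_low on public variables and
    # can produce the observed low events (here: have them as a prefix).
    ks = []
    for vals in product(values, repeat=len(secret_vars)):
        m = dict(m_low)
        m.update(zip(secret_vars, vals))
        if program(dict(m))[:len(observed)] == list(observed):
            ks.append(m)
    return ks

# l := 0; l := h, modeled as the low events it produces from a memory:
prog = lambda m: [("l", 0), ("l", m["h"])]
k1 = knowledge(prog, {"l": 0}, ["h"], range(4), [("l", 0)])
k2 = knowledge(prog, {"l": 0}, ["h"], range(4), [("l", 0), ("l", 3)])
assert len(k1) == 4 and len(k2) == 1   # knowledge only ever shrinks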

Two extensions of attacker knowledge are useful: progress knowledge [3, 2] and divergence knowledge [3].

Definition 0.2 (Progress knowledge).

Given a sequence of low events ℓ⃗, initial low memory m_P, and a program c, define progress knowledge as

    k→(c, m_P, ℓ⃗) = { m′ | m′_P = m_P ∧ ∃ ℓ′ . ⟨c, m′⟩ →* ℓ⃗ · ℓ′ }

Progress knowledge represents the information the attacker obtains by seeing public events ℓ⃗ followed by some other public event. Progress knowledge and attacker knowledge are related as follows: given a program c, memory m, and a sequence of low events ℓ⃗ = ℓ⃗_{1..n} obtained from ⟨c, m⟩, we have that for all i < n,

    k(c, m_P, ℓ⃗_{1..i}) ⊇ k→(c, m_P, ℓ⃗_{1..i}) ⊇ k(c, m_P, ℓ⃗_{1..i+1})

To illustrate this with an example, consider program l := 0; (while h = 0 do skip); l := 1 with an initial memory where m(h) ≠ 0. This program produces a sequence of two low events (l, 0) · (l, 1). The knowledge after the first event (l, 0) is the set of all possible memories that agree with m on the public parts and can produce the low event (l, 0). Note that no low events are possible after the first assignment unless h is non-zero. Progress knowledge reflects this: k→(c, m_P, (l, 0)) is the set of memories such that m′(h) ≠ 0. Finally, the knowledge after the two events (l, 0) · (l, 1) is the set of memories where m′(h) ≠ 0.

Using attacker knowledge, one can express many confidentiality policies [7, 2, 8]. For example, a strong notion of progress-sensitive noninterference [13] can be expressed by demanding that knowledge between low events does not change:

    ∀i . k(c, m_P, ℓ⃗_{1..i}) = k(c, m_P, ℓ⃗_{1..i+1})

Progress knowledge enables expressing more permissive policies, such as progress-insensitive noninterference, which allows leakage of information, but only via termination channels (in [3] it is called termination-insensitive). This is expressed by requiring equivalence of the progress knowledge after seeing i events with the knowledge obtained after the (i+1)-th event:

    ∀i . k→(c, m_P, ℓ⃗_{1..i}) = k(c, m_P, ℓ⃗_{1..i+1})

In the example above, the knowledge inclusion between the two events is strict: k(c, m_P, (l, 0) · (l, 1)) ⊊ k(c, m_P, (l, 0)). Therefore, the example does not satisfy progress-sensitive noninterference. On the other hand, the low event that follows the loop does not reveal more information than the knowledge about the existence of that event. Formally, k→(c, m_P, (l, 0)) = k(c, m_P, (l, 0) · (l, 1)); hence, the program satisfies progress-insensitive noninterference.
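Here is a minimal Python sketch contrasting the two conditions on this example, with h restricted to {0, 1} so the knowledge sets can be enumerated; the encoding of the program as its low-event trace is an illustrative assumption.

def low_events(h):
    # l := 0; (while h = 0 do skip); l := 1
    if h == 0:
        return [("l", 0)]            # the loop diverges; no second low event
    return [("l", 0), ("l", 1)]

k_after_1 = {h for h in (0, 1) if low_events(h)[:1] == [("l", 0)]}
k_progress = {h for h in (0, 1) if len(low_events(h)) > 1}
k_after_2 = {h for h in (0, 1) if low_events(h)[:2] == [("l", 0), ("l", 1)]}

assert k_after_1 == {0, 1}               # nothing learned yet
assert k_progress == k_after_2 == {1}
# The strict shrink k_after_2 ⊊ k_after_1 violates progress-sensitive
# noninterference, while k_progress == k_after_2 satisfies the
# progress-insensitive condition.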

These definitions also allow us to reason about knowledge changes along parts of traces. We say that knowledge is preserved in a progress-(in)sensitive way along a part of a trace if the respective knowledge equality holds for the low events that correspond to that part.

Next, we extend the possible observations with a divergence event ⇑ (we write ⟨c, m⟩⇑ to mean that configuration ⟨c, m⟩ diverges). For attackers that can observe program divergence, we define knowledge on a sequence of low events that includes divergence:

Definition 0.3 (Divergence knowledge).

    k(c, m_P, ℓ⃗ · ⇑) = { m′ | m′_P = m_P ∧ ⟨c, m′⟩ →* ℓ⃗ ∧ ⟨c, m′⟩⇑ }

Note that the above definition does not require divergence immediately after ℓ⃗ — it allows more low events to be produced after ℓ⃗. Divergence knowledge is used in Section 4.

Let us consider events at which knowledge preservation is broken. We call these events release events.

Definition 0.4 (Release events).

Given a program c and a memory m, such that ⟨c, m⟩ →* ℓ⃗ · r, where r is a low event:

  1. r is a progress-sensitive release event, if k(c, m_P, ℓ⃗ · r) ⊊ k(c, m_P, ℓ⃗)

  2. r is a progress-insensitive release event, if k(c, m_P, ℓ⃗ · r) ⊊ k→(c, m_P, ℓ⃗)

It is easy to validate that a progress-insensitive release event is also a progress-sensitive release event. For example, in the program l := 0; l := h, the second assignment is both a progress-sensitive and a progress-insensitive release event. The reverse is not true — in the program (while h = 0 do skip); l := 1, the assignment to l is a progress-sensitive release event, but it is not a progress-insensitive release event.
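The following sketch checks both clauses of Definition 0.4 on the first of these programs, again under the assumption of a finite secret range; the trace encoding is illustrative.

def events(m):
    # l := 0; l := h, modeled as its low events.
    return [("l", 0), ("l", m["h"])]

H = range(4)
def k(observed):                       # attacker knowledge (Definition 0.1)
    return {h for h in H if events({"h": h})[:len(observed)] == observed}

def k_progress(observed):              # progress knowledge (Definition 0.2)
    return {h for h in H
            if events({"h": h})[:len(observed)] == observed
            and len(events({"h": h})) > len(observed)}

before, after = [("l", 0)], [("l", 0), ("l", 2)]
assert k(after) < k(before)            # progress-sensitive release event
assert k(after) < k_progress(before)   # progress-insensitive release event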


3. Attacks

To reason about program security in the presence of active attacks, we introduce a formal model of the attacker. Our formalization follows that in [16], where attacker-provided code can be injected into the program. This section provides examples of how attacker-injected code may affect attacker knowledge, followed by a semantic characterization of the attacker's influence on knowledge.

First, we extend the syntax to allow execution of attacker-controlled code, represented by a hole [•]:

    c ::= … | [•]

Next, we introduce notation to highlight that a trace is produced by attacker-injected code. The semantics of the language is extended accordingly.

We limit attacks that can be substituted into holes to so-called fair attacks, which represent reasonable limitations on the impact of the attacker. Unlike earlier approaches, where fair attacks are defined syntactically [16, 10], we define them semantically. This allows us to include a larger set of attacks. To ensure that we include all syntactic attacks we make use of a reachability translation, explained below.

Roughly, we require that a fair attack not give new knowledge and not modify trusted variables. A refinement of this idea is that an attack is still considered fair if the only new knowledge it gives arises because the reachability of the attack depends on a secret. To capture this refinement, we define an auxiliary translation that makes reachability of attacks explicit. We assume a trusted, public variable reach that does not appear in the source of c. Let the operator ⟨·⟩ be a source-to-source transformation of c that makes reachability of attacks explicit.

Definition 0.5 (Explicit reachability translation).

Given a program c, define ⟨c⟩ as follows:

  1. ⟨[•]⟩ ≜ (reach := reach + 1; [•])

  2. ⟨c⟩ is defined homomorphically for all other commands

The formal definition uses the fact that any trace can be represented as a sequence of subtraces t⃗ = t⃗_1 · t⃗_2 · … · t⃗_n, where the even-numbered subtraces correspond to the events produced by attacker-controlled code.

Given a trace t⃗, we denote the trusted events in the trace by t⃗_T. We use the notation τ for a single trusted event, and τ⃗ for a sequence of trusted events.

Definition 0.6 (Fair attack).

Given a program c and an attack a⃗, say that a⃗ is a fair attack on c if for all memories m, such that ⟨⟨c⟩[a⃗], m⟩ →* t⃗_1 · t⃗_2 · … · t⃗_n, i.e., there are intermediate configurations ⟨c_1, m_1⟩, …, ⟨c_n, m_n⟩, for which

    ⟨⟨c⟩[a⃗], m⟩ →* ⟨c_1, m_1⟩ →* ⟨c_2, m_2⟩ →* … →* ⟨c_n, m_n⟩

where the i-th segment produces the subtrace t⃗_i, then for all even i, 1 ≤ i ≤ n (the attacker-produced subtraces), it holds that knowledge is preserved along t⃗_i and (t⃗_i)_T = ε.

For example, in the program [•]; l := declassify(h), attacks that write only untrusted variables, such as [u := 1] or [u := u + 1], are fair, but an attack that writes a trusted variable, such as [l := 1], is not.


3.1. Examples of attacker influence

This section presents a few examples of attacker influence on knowledge. We also introduce pure availability attacks and progress attacks, to which we refer later in this section.

In the examples below, we use the notation ℓ• when a low event ℓ is generated by attacker-injected code.

Consider program [•]; l := declassify(u = h), where h is a secret variable, and u is an untrusted public variable. The attacker's code executes before the low assignment and may change the value of u. Consider a memory m, where m(h) = 1, and the two attacks a1 = [u := 0] and a2 = [u := 1]. These attacks result in different values being assigned to variable l. The first trace results in low events (u, 0)• · (l, 0), while the second trace results in low events (u, 1)• · (l, 1). Therefore, the knowledge about the secret is different in each trace. We have

    k(c[a1], m_P, (u, 0)• · (l, 0)) = { m′ | m′(h) ≠ 0 }   and   k(c[a2], m_P, (u, 1)• · (l, 1)) = { m′ | m′(h) = 1 }

Clearly, this program gives the attacker some control over what information about secrets he learns. Observe that it is not necessary for the last assignment to differ in order for the knowledge to be different. For example, consider attack a3 = [u := 2]. This attack results in low events (u, 2)• · (l, 0), which make the same assignment to l as a1 does. Attacker knowledge, however, is different from that obtained by a1:

    k(c[a3], m_P, (u, 2)• · (l, 0)) = { m′ | m′(h) ≠ 2 }

Next, consider program [•]; l := declassify(h). This program gives away knowledge about the value of h independently of untrusted variables. The only way for the attacker to influence what information he learns is to prevent that assignment from happening at all, which, as a result, will prevent him from learning that information. This can be done by an attack such as [while 1 do skip], which makes the program diverge before the assignment is reached. We call attacks like this pure availability attacks. Another example of a pure availability attack is in the program [•]; (while u = 0 do skip); l := declassify(h). In this program, any attack that sets u to 0 prevents the assignment from happening.

Consider another example: [•]; (while u = h do skip); l := 1. As in the previous examples, the value of u may change the reachability of the low assignment. Assuming the attacker can observe divergence, this is not a pure availability attack, because diverging before the last assignment gives the attacker additional secret information, namely that h = u. New information is also obtained if the attacker sees the low assignment. We name attacks like this progress attacks. In general, a progress attack is an attack that leads to program divergence in a way that observing that divergence (i.e., detecting there is no progress) gives new knowledge to the attacker.
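The following sketch distinguishes the two kinds of attack on these examples. Modeling each program as a function from the attack and the secret to its low events plus a divergence flag is an illustrative assumption, not the paper's notation.

H = range(4)   # finite secret range, assumed for enumeration

def knowledge(events_of, attack, prefix, diverges=None):
    # Memories compatible with the observed prefix and, if requested,
    # with the observation of divergence.
    ks = set()
    for s in H:
        evs, div = events_of(attack, s)
        if evs[:len(prefix)] == prefix and diverges in (None, div):
            ks.add(s)
    return ks

# [•]; (while u = 0 do skip); l := declassify(h): divergence depends only on u.
avail = lambda u, h: ([("l", h)], False) if u != 0 else ([], True)
# [•]; (while u = h do skip); l := 1: divergence reveals that h = u.
progress = lambda u, h: ([("l", 1)], False) if h != u else ([], True)

# Diverging under the attack u := 0 teaches the attacker nothing about h,
assert knowledge(avail, 0, [], diverges=True) == set(H)
# but diverging under u := 2 in the second program reveals h = 2.
assert knowledge(progress, 2, [], diverges=True) == {2}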


3.2. Attacker control

We represent attacker control as a set of attacks that are similar in their influence on knowledge. Intuitively, if a program leaks no information to the attacker, the control corresponds to all possible attacks. In general, the more attacks are similar, the less influence the attacker has. Moreover, control is a temporal property and depends on the trace that has been produced so far. The longer a trace is, the more influence an attack may have, and the smaller the control set is.


Similar attacks. The key element in the definition of control is specifying when two attacks are similar. Given a program c and memory m, consider two attacks a1 and a2 that produce traces t⃗_1 and t⃗_2 respectively:

    ⟨c[a1], m⟩ →* t⃗_1   and   ⟨c[a2], m⟩ →* t⃗_2

We compare a1 and a2 based on how they change attacker knowledge along their respective traces. First, if knowledge is preserved along a subtrace of one of the traces, say t⃗_1, it must be preserved along a corresponding subtrace of t⃗_2 as well. Second, if at some point in t⃗_1 there is a release event, there must be a matching low event in t⃗_2, and the attacks must be similar along the rest of the traces.

Visually, this requirement is described by the two diagrams in Figure 5. Each diagram shows the change of knowledge as more low events are produced. Here the x-axis corresponds to low events, and the y-axis reflects the attacker's uncertainty about the initial secrets. Whenever one of the traces reaches a release event, depicted by the vertical drops, there must be a corresponding low event in the other trace, such that the two events agree. This is depicted by the dashed lines between the two diagrams.

Figure 5. Similar attacks and traces

Formally, these requirements are stated using the following definitions.

Definition 0.7 (Knowledge segmentation).

Given a program c, memory m, and a trace t⃗ such that ⟨c, m⟩ →* t⃗ with low events ℓ_1 … ℓ_n, a sequence of indices i_1 < i_2 < … < i_N with i_N = n is called

  1. a progress-sensitive knowledge segmentation of size N, if knowledge is preserved in the progress-sensitive way along each segment ℓ_{i_{j−1}+1} … ℓ_{i_j} (taking i_0 = 0); this is denoted by Seg(c, m, t⃗, i_1 … i_N).

  2. a progress-insensitive knowledge segmentation of size N, if knowledge is preserved in the progress-insensitive way along each segment; this is denoted by Seg→(c, m, t⃗, i_1 … i_N).

Low events ℓ_{i_j}, for 1 ≤ j ≤ N, are called segmentation events.

Note that, given a trace, there can be more than one way to segment it; for every trace consisting of n low events, a segmentation can always be obtained trivially by taking a segmentation of size n. We use knowledge segmentation to define attack similarity:

Definition 0.8 (Similar attacks and traces ).

Given a program c, memory m, and two attacks a1 and a2 that produce traces t⃗_1 and t⃗_2, define a1 and a2 as similar along t⃗_1 and t⃗_2 for the progress-sensitive attacker, if there are two segmentations i_1 … i_N and i′_1 … i′_N (for some N) such that

  1. Seg(c[a1], m, t⃗_1, i_1 … i_N),

  2. Seg(c[a2], m, t⃗_2, i′_1 … i′_N), and

  3. the corresponding segmentation events agree pairwise.

For the progress-insensitive attacker, the definition is similar except that it uses the progress-insensitive segmentation Seg→. If two attack–trace pairs are similar, we write (a1, t⃗_1) ≈ (a2, t⃗_2) (for progress-insensitive similarity, (a1, t⃗_1) ≈→ (a2, t⃗_2)).

The construction of Definitions 0.7 and 0.8 can be illustrated by program

Consider memory with , and two attacks , and . Both attacks reach the assignments to low variables. However, for the assignment to is a progress-insensitive release event, while for the knowledge changes at an earlier assignment.


Attacker control. We define attacker control with respect to an attack a and a trace t⃗ as the set of attacks that are similar to the given attack in their influence on knowledge.

Definition 0.9 (Attacker control (progress-sensitive)).

    A(c, a, m, t⃗) = { a′ | ∃ t⃗′ . ⟨c[a′], m⟩ →* t⃗′ ∧ (a, t⃗) ≈ (a′, t⃗′) }

To illustrate how attacker control changes, consider program [•]; l := declassify(h + u); l′ := declassify(h), where u is an untrusted variable and h is a secret trusted variable. To understand attacker control of this program, we consider an initial memory with m(h) = 7 and attack a = [u := 0]. The low event (l, 7) in this trace is a release event. The attacker control is the set of all attacks that are similar to a and trace t⃗ in their influence on knowledge. This corresponds to attacks that set u to values such that m(h) + u = 7. The assignment to l′ changes attacker knowledge as well, but the information that the attacker gets does not depend on the attack: any trace starting in m and reaching the second assignment produces the low event (l′, 7); hence, the attacker control does not change at that event.

Consider the same example but with the two assignments swapped: [•]; l′ := declassify(h); l := declassify(h + u). The assignment to l′ is a release event that the attacker cannot affect. Hence the control includes all attacks that reach this assignment. The result of the assignment to l depends on u. However, this result does not change attacker knowledge: once the attacker has learned h, the value of h + u reveals nothing new. Indeed, in this program, the second assignment is not a release event at all. Therefore, the attacker control is simply all attacks that reach the first assignment.
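The sketch below computes attacker control by brute force for terminating straight-line programs, where similarity (Definition 0.8) degenerates to agreement on release events; the finite ranges and the modeling of a program by its low-event trace are assumptions made for illustration.

H = range(4)   # finite range of the secret h, assumed for enumeration

def releases(events_of, attack, secret):
    # Low events of this run that strictly shrink the knowledge set.
    evs = events_of(attack, secret)
    out, prev = [], set(H)
    for i in range(len(evs)):
        cur = {s for s in H if events_of(attack, s)[:i + 1] == evs[:i + 1]}
        if cur != prev:                   # knowledge changed: a release event
            out.append(evs[i])
        prev = cur
    return out

def control(events_of, attack, secret, attacks):
    # Attacks are similar (Definition 0.8, specialized) iff their release
    # events agree, so control is the set of attacks matching the reference.
    ref = releases(events_of, attack, secret)
    return {a for a in attacks if releases(events_of, a, secret) == ref}

# [•]; l := declassify(h + u), modeled as its low-event trace for u and h:
prog = lambda u, h: [("u", u), ("l", h + u)]
assert control(prog, 0, 3, range(4)) == {0}   # only u := 0 yields event (l, 3)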


Progress-insensitive control. For progress-insensitive security, attacker control is defined similarly, using the progress-insensitive comparison of attacks.

Definition 0.10 (Attacker control (progress-insensitive)).

    A→(c, a, m, t⃗) = { a′ | ∃ t⃗′ . ⟨c[a′], m⟩ →* t⃗′ ∧ (a, t⃗) ≈→ (a′, t⃗′) }

Consider program [•]; (while u = h do skip); l := 1. Here, any attack produces a trace that preserves progress-insensitive noninterference. If the loop is taken, the program produces no low events; hence, it gives no new knowledge to the attacker. If the loop is not taken, and the low assignment is reached, this assignment preserves attacker knowledge in a progress-insensitive way. Therefore, the attacker control is all attacks.


4. Robustness

Release control. This section defines release control R(c, a, m, t⃗), which captures the attacker's influence on release events. Intuitively, release control expresses the extent to which an attacker can affect the decision to produce some release event.

Definition 0.11 (Progress-sensitive release control).

    R(c, a, m, t⃗) = { a′ | ∃ t⃗′ . ⟨c[a′], m⟩ →* t⃗′ ∧ (a, t⃗) ≈ (a′, t⃗′) ∧
                      ( ∃ r′ . ⟨c[a′], m⟩ →* t⃗′ · r′ where r′ is a release event
                      ∨ k(c[a′], m_P, t⃗′_P · ⇑) ⊊ k(c[a′], m_P, t⃗′_P)
                      ∨ ⟨c[a′], m⟩ →* t⃗′ · ⇓ ) }

The definition of release control is based on the one for attacker control, with three additional clauses, explained below. These clauses restrict the set of attacks to those that either terminate or produce a release event. Because the progress-sensitive attacker can also learn new information by observing divergence, the definition contains an additional clause (on the third line) that uses divergence knowledge to reflect that.

(a) Release control
(b) Robustness
Figure 6. Release control and robustness

Figure 6(a) depicts the relationship between release control and attacker control, where the x-axis corresponds to low events, and the y-axis corresponds to attacks. The solid line depicts attacker control A(c, a, m, t⃗), where the vertical drops correspond to release events. The gray area denotes release control R(c, a, m, t⃗). In general, for a given attack a and a corresponding trace t⃗ · r, where r is a release event, we have the following relation between release control and attacker control:

(0.1)    A(c, a, m, t⃗ · r) ⊆ R(c, a, m, t⃗)

Note the white gaps and the gray release control above the dotted lines in Figure 6(a). The white gaps correspond to the difference A(c, a, m, t⃗) \ R(c, a, m, t⃗). This is the set of attacks that do not produce further release events and that diverge without giving any new information to the attacker—pure availability attacks. The gray zones above the dotted lines are more interesting. Every such zone corresponds to the difference R(c, a, m, t⃗) \ A(c, a, m, t⃗ · r). In particular, when this set is non-empty, the attacker can launch attacks corresponding to each of the last three lines of Definition 0.11:

  1. either trigger a different release event, or

  2. cause the program to diverge in a way that also releases information, or

  3. prevent a release event from happening in a way that leads to program termination.

Absence of such attacks constitutes the basis for our security conditions in Definitions 0.13 and 0.14. Before moving on to these definitions, we introduce the progress-insensitive variant of release control.

Definition 0.12 (Release control (progress-insensitive)).

    R→(c, a, m, t⃗) = { a′ | ∃ t⃗′ . ⟨c[a′], m⟩ →* t⃗′ ∧ (a, t⃗) ≈→ (a′, t⃗′) ∧
                       ( ∃ r′ . ⟨c[a′], m⟩ →* t⃗′ · r′ where r′ is a progress-insensitive release event
                       ∨ ⟨c[a′], m⟩ →* t⃗′ · ⇓ ) }

This definition uses the progress-insensitive variants of similar attacks and release events. It also does not account for knowledge obtained from divergence.

With the definition of release control at hand we can now define semantic conditions for robustness. The intuition is that all attacks leading to release events should lead to the same release event. Formally, this is defined as inclusion of release control into attacker control, where release control is computed on the prefix of the trace without a release event.

Definition 0.13 (Progress-sensitive robustness).

Program c satisfies progress-sensitive robustness if for all memories m, attacks a, and traces t⃗, such that ⟨c[a], m⟩ →* t⃗ and t⃗ contains a release event, i.e., t⃗ = t⃗′ · r · t⃗″ where r is a release event, we have

    R(c, a, m, t⃗′) ⊆ A(c, a, m, t⃗′ · r)

Note that because of Equation 0.1, the set inclusion in the above definition could be replaced with strict equality, but we use ⊆ for compatibility with future definitions. Figure 6(b) illustrates the relation between release control and attacker control for robust programs. Note how release control is bounded by the attacker control at the next release event.
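For the same restricted setting as the earlier sketches — terminating straight-line programs and a finite secret range — progress-sensitive robustness collapses to the requirement that every attack induce the same release events, which the sketch below checks by brute force. The two modeled programs are illustrative assumptions, not verbatim from this section.

H = range(4)   # finite secret range, an assumption for enumeration

def release_events(events_of, attack, secret):
    evs = events_of(attack, secret)
    out, prev = [], set(H)
    for i in range(len(evs)):
        cur = {s for s in H if events_of(attack, s)[:i + 1] == evs[:i + 1]}
        if cur != prev:
            out.append(evs[i])
        prev = cur
    return out

def robust(events_of, attacks, secret):
    # Robustness, specialized: all attacks agree on the sequence of release
    # events they trigger, i.e., release control collapses into control.
    seqs = {tuple(release_events(events_of, a, secret)) for a in attacks}
    return len(seqs) == 1

# l := u; l2 := declassify(h + u): rejected, the attack decides what leaks.
leaky = lambda u, h: [("l", u), ("l2", h + u)]
# l2 := declassify(h); l := u: accepted, the release is attack-independent.
safe = lambda u, h: [("l2", h), ("l", u)]
assert not robust(leaky, range(4), 3)
assert robust(safe, range(4), 3)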


Examples. We illustrate the definition of robustness with a few examples.

Consider program [•]; l := declassify(h + u), and memory m such that m(h) = 7. This program is rejected by Definition 0.13. To see this, pick an attack a = [u := 0], and consider the part of the trace preceding the low assignment. Release control is the set of all attacks that reach the assignment to l. On the other hand, the attacker control is the set of all attacks where m(h) + u = 7, which is smaller than the release control. Therefore this program does not satisfy the condition.

Program [•]; l′ := declassify(h); l := h + u satisfies robustness. The only release event here corresponds to the first assignment. However, because the knowledge given by that assignment does not depend on untrusted variables, the release control includes all attacks that reach the assignment.

Program [•]; if u = 0 then l := declassify(h) else skip is rejected. Consider memory m with m(h) = 7, and attack a = [u := 0] that leads to low trace (u, 0)• · (l, 7). The attacker control for this attack and trace is the set of all attacks such that u = 0. On the other hand, release control is the set of all attacks that lead to termination, which includes attacks such that u ≠ 0. Therefore, the release control corresponds to a bigger set than the attacker control.

Program [•]; (while u = 0 do skip); l := declassify(h) is accepted. Depending on the attacker-controlled variable u, the release event may or may not be reached. However, this is an example of an availability attack, which is ignored by Definition 0.13.

Program [•]; (while u = h do skip); l := 1 is rejected. Any attack leading to the low assignment restricts the control to attacks such that u ≠ m(h). However, release control also includes attacks that set u to m(h), because the attacker learns information from divergence.

The definition of progress-insensitive robustness is similar to Definition 0.13, but uses the progress-insensitive variants of release events, control, and release control. As a result, program [•]; (while u = h do skip); l := 1 is accepted: the attacker control is all attacks.

Definition 0.14 (Progress-insensitive robustness).

Program c satisfies progress-insensitive robustness if for all memories m, attacks a, and traces t⃗, such that ⟨c[a], m⟩ →* t⃗ and t⃗ contains a progress-insensitive release event, i.e., t⃗ = t⃗′ · r · t⃗″, we have

    R→(c, a, m, t⃗′) ⊆ A→(c, a, m, t⃗′ · r)


5. Endorsement

This section extends the semantic policies for robustness in a way that allows endorsing attacker-provided values.


Syntax and semantics. We add endorsement to the language:

    c ::= … | x := endorse_η(e)

We assume that every endorsement in the program source has a unique endorsement label η. Semantically, endorsements produce endorsement events, denoted by endorse(η, v), which record the label of the endorsement statement together with the value that is endorsed.

Whenever the endorsement label is unimportant, we omit it from the examples. Note that events need not mention the variable name, since that information is implied by the unique label η.

Consider example program [•]; l := declassify(h + endorse(u)). This program does not satisfy Definition 0.13. The reasoning for this is exactly the same as for program [•]; l := declassify(h + u) from Section 4.


Irrelevant attacks. Endorsement of certain values gives the attacker some control over the knowledge. The key technical element of this section is the notion of irrelevant attacks, which defines the set of attacks that are endorsed, and that are therefore excluded when comparing attacker control with release control. We define irrelevant attacks formally below, based on the trace that is produced by a program.

Given a program c, starting memory m, and a trace t⃗, irrelevant attacks, denoted here by Irr(c, m, t⃗), are the attacks that lead to the same sequence of endorsement events as in t⃗, until they necessarily disagree on one of the endorsements. Because the influence of these attacks is reflected at endorsement events, we exclude them from consideration when comparing with attacker control.

We start by defining irrelevant traces. Given a trace , irrelevant traces for are all traces that agree with on some prefix of endorsement events until they necessarily disagree on some endorsement. We define this set as follows.

Definition 0.15 (Irrelevant traces).

Given a trace t⃗, where the endorsement events are endorse(η_1, v_1), …, endorse(η_n, v_n), define the sets of irrelevant traces irr_i(t⃗), based on the number of endorsements in t⃗, as follows: t⃗′ ∈ irr_i(t⃗) if

t⃗′ has a prefix whose first i − 1 endorsement events all agree with the corresponding events in t⃗, and whose i-th endorsement event disagrees with endorse(η_i, v_i).

Define irr(t⃗) = irr_1(t⃗) ∪ … ∪ irr_n(t⃗) as the set of irrelevant traces w.r.t. t⃗.

With the definition of irrelevant traces at hand, we can define irrelevant attacks: irrelevant attacks are attacks that lead to irrelevant traces.

Definition 0.16 (Irrelevant attacks).

Given a program c, initial memory m, and a trace t⃗, such that ⟨c[a], m⟩ →* t⃗, define the irrelevant attacks as

    Irr(c, m, t⃗) = { a′ | ∃ t⃗′ . ⟨c[a′], m⟩ →* t⃗′ ∧ t⃗′ ∈ irr(t⃗) }
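The sketch below computes irrelevant attacks by brute force, following Definitions 0.15 and 0.16: an attack is irrelevant if its trace agrees with the reference trace on a prefix of endorsement events and then disagrees on the next one. The tuple encoding of endorsement events is an assumption made for illustration.

def endorsements(trace):
    return [e for e in trace if e[0] == "endorse"]

def irrelevant(trace_of, attacks, reference):
    ref = endorsements(trace_of(reference))
    out = set()
    for a in attacks:
        for e_ref, e_a in zip(ref, endorsements(trace_of(a))):
            if e_ref != e_a:        # first disagreeing endorsement
                out.add(a)
                break
    return out

# [•]; l := declassify(h + endorse(u)) with m(h) = 7: attacks that endorse
# a different value of u are irrelevant w.r.t. the reference attack u := 0.
prog = lambda u: [("endorse", "E1", u), ("l", 7 + u)]
assert irrelevant(prog, range(4), reference=0) == {1, 2, 3}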


Security. The security conditions for robustness can now be adjusted to accommodate endorsements that happen along traces. The idea is to exclude irrelevant attacks from the left-hand side of Definitions 0.13 and 0.14. This security condition, which has both progress-sensitive and progress-insensitive versions, expresses roughly the same idea as qualified robustness [16], but in a more natural and direct way.

Definition 0.17 (Progress-sensitive robustness with endorsements).

Program c satisfies progress-sensitive robustness with endorsement if for all memories m, attacks a, and traces t⃗, such that ⟨c[a], m⟩ →* t⃗ and t⃗ contains a release event, i.e., t⃗ = t⃗′ · r · t⃗″, we have

    R(c, a, m, t⃗′) \ Irr(c, m, t⃗′) ⊆ A(c, a, m, t⃗′ · r)

(a) Irrelevant attacks
(b) Robustness w/o endorsements (unsatisfied)
(c) Robustness with endorsements (satisfied)
Figure 7. Irrelevant attacks and robustness with endorsements

We refer to the set R(c, a, m, t⃗′) \ Irr(c, m, t⃗′) as the set of relevant attacks. Figures 7(a) to 7(c) visualize irrelevant attacks and the semantic condition of Definition 0.17. Figure 7(a) shows the set of irrelevant attacks, depicted by the shaded gray area. This set increases at endorsement events, marked by stars. Figure 7(b) shows an example trace where robustness is not satisfied — the gray area corresponding to release control exceeds the attacker control (depicted by the solid line). Finally, in Figure 7(c), we superimpose Figures 7(a) and 7(b). This illustrates that when the set of irrelevant attacks is excluded from the release control (the area under the white dashed lines), the program is accepted by robustness with endorsements.

Examples.

Program [•]; l := declassify(h + endorse(u)) is accepted by Definition 0.17. Consider an initial memory with m(h) = 7, and an attack a = [u := 0]; this produces a trace (u, 0)• · endorse(η, 0) · (l, 7). The endorsed assignment also produces a release event. We have that

  1. Release control is the set of all attacks that reach the low assignment.

  2. The set of irrelevant traces consists of the traces that end in an endorsement event endorse(η, v) such that v ≠ 0. Thus, the irrelevant attacks consist of the attacks that reach the low assignment and set u to a value other than 0.

  3. The left-hand side of Definition 0.17 is therefore the set of attacks that reach the endorsement and set u to 0.

  4. As for the attacker control on the right-hand side, it consists of the attacks that set u to 0. Hence, the set inclusion of Definition 0.17 holds and the program is accepted.

Program [•]; x := endorse(u); l := declassify(h + x) is accepted. The endorsement in the first assignment implies that all relevant attacks must agree on the value of u, and, consequently, they agree on the value of h + x, which gets assigned to l. This also means that the relevant attacks belong to the attacker control (which contains all attacks that agree on that low event).

Program [•]; x := endorse(u); l := declassify(h + u′), where u′ is another untrusted variable, is rejected. Take an initial memory such that m(h) = 7. The set of relevant attacks after the second assignment contains attacks that agree on u (due to the endorsement), but not necessarily on u′. The latter, however, is the requirement for the attacks that belong to the attacker control.

Program [•]; if u = 1 then u := endorse(u); l := declassify(h + u) is rejected. Assume an initial memory where m(h) = 7. Consider attack a1 that sets u := 1, and consider the trace t⃗_1 that it gives. This trace endorses u in the branch, overwrites the value of u with the endorsed value 1, and produces a release event (l, 8). Consider another attack a2 that sets u := 0, and consider the corresponding trace t⃗_2. This trace contains release event (l, 7) without any endorsements. Now, the attacker control for a1 excludes a2, because of the disagreement at the release event. At the same time, a2 is a relevant attack for t⃗_1, because no endorsements happen along t⃗_2.

Consider program , which contains no endorsements. In this case, for all possible traces