Attacker Control and Impact for Confidentiality and Integrity

Abstract

Language-based information flow methods offer a principled way to enforce strong security properties, but enforcing noninterference is too inflexible for realistic applications. Security-typed languages have therefore introduced declassification mechanisms for relaxing confidentiality policies, and endorsement mechanisms for relaxing integrity policies. However, a continuing challenge has been to define what security is guaranteed when such mechanisms are used. This paper presents a new semantic framework for expressing security policies for declassification and endorsement in a language-based setting. The key insight is that security can be characterized in terms of the influence that declassification and endorsement allow to the attacker. The new framework introduces two notions of security to describe the influence of the attacker. Attacker control defines what the attacker is able to learn from observable effects of this code; attacker impact captures the attacker’s influence on trusted locations. This approach yields novel security conditions for checked endorsements and robust integrity. The framework is flexible enough to recover and to improve on the previously introduced notions of robustness and qualified robustness. Further, the new security conditions can be soundly enforced by a security type system. The applicability and enforcement of the new policies is illustrated through various examples, including data sanitization and authentication.

Keywords: security type system, information flow, noninterference, confidentiality, integrity, robustness, downgrading, declassification, endorsement, security policies
Aslan Askarov and Andrew C. Myers

Logical Methods in Computer Science, Vol. 7 (3:17), 2011, pp. 1–33. Submitted Jun. 14, 2010; published Sep. 26, 2011.

ACM Subject Classification: D.3.3, D.4.6

1 Introduction

Many common security vulnerabilities can be seen as violations of either confidentiality or integrity. As a general way to prevent these information security vulnerabilities, information flow control has become a popular subject of study, both at the language level [Sabelfeld:Myers:JSAC] and at the operating-system level (e.g., [MR92, asbestos, dstar]). The language-based approach holds the appeal that the security property of noninterference [Goguen:Meseguer:Noninterference] can be provably enforced using a type system [Volpano:Smith:Irvine:Sound]. In practice, however, noninterference is too rigid: many programs considered secure need to violate noninterference in limited ways.

Using language-based downgrading mechanisms such as _declassification_ [ml-ifc-97, pottier00] and _endorsement_ [Oerbaek:Palsberg:Trust, zznm02], programs can be written in which information is intentionally released, and in which untrusted information is intentionally used to affect trusted information or decisions. Declassification relaxes confidentiality policies, and endorsement relaxes integrity policies. Both endorsement and declassification have been essential for building realistic applications with Jif [Myers:POPL99, jif], including games [as05], a voting system [Clarkson:Chong:Myers:Oakland08], and web applications [Chong+:SOSP07].

A continuing challenge is to understand what security is obtained when code uses downgrading. This paper contributes a more precise and satisfactory answer to this question, particularly clarifying how the use of endorsement weakens confidentiality. While much work has been done on declassification (usefully summarized by Sabelfeld and Sands [Sabelfeld:Sands:JCS]), there is comparatively little work on the interaction between confidentiality and endorsement.

To see such an interaction, consider the following notional code example, in which a service holds both old data (“old_data”) and new data (“new_data”), but the new data is not to be released until time “embargo_time”. The variable “new_data” is considered confidential, and must be declassified to be released:

if request_time >= embargo_time
  then return declassify(new_data)
  else return old_data

Because the requester is not trusted, the requester must be treated as a possible attacker. Suppose the requester has control over the variable “request_time”, which we can model by considering that variable to be low-integrity. Because the intended security policy depends on “request_time”, the attacker controls the policy that is being enforced, and can obtain the confidential new data earlier than intended. This example shows that the integrity of “request_time” affects the confidentiality of “new_data”. Therefore, the program should be considered secure only when the guard expression, “request_time >= embargo_time”, is high-integrity.
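
To make the attack concrete, the following sketch reuses the notional syntax of the example above; the assignment to “request_time” stands for an arbitrary attacker-chosen, low-integrity input, and “far_future_time” is a hypothetical value well past the embargo:

request_time := far_future_time      // attacker-chosen, low-integrity input
if request_time >= embargo_time      // guard is satisfied regardless of the actual time
  then return declassify(new_data)   // new data released before the embargo expires
  else return old_data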

A different but reasonable security policy is that the requester may specify the request time as long as the request time is in the past. This policy could be enforced in a language with endorsement by first checking the low-integrity request time to ensure it is in the past; then, if the check succeeds, endorsing it to be high-integrity and proceeding with the information release. The explicit endorsement is justifiable because the attacker’s actions are permitted to affect the release of confidential information as long as adversarial inputs have been properly sanitized. This is a common pattern in servers that process possibly adversarial inputs.
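
As a sketch only, reusing the embargo example and notional syntax from above (“current_time” and the “endorse” form are assumed here for illustration and are not the paper’s formal constructs), the check-then-endorse pattern might be written as follows:

if request_time <= current_time then        // sanitize: requested time must be in the past
    request_time := endorse(request_time);  // after the check, treat it as high-integrity
    if request_time >= embargo_time
      then return declassify(new_data)      // release governed by a high-integrity guard
      else return old_data
  else
    return old_data                         // reject the unsanitized request

Because the declassification is now guarded only by data that has passed the sanitizing check, the release decision is no longer dictated by an arbitrary attacker-supplied time.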

_Robust declassification_ has been introduced in prior work [zm01b, Myers:Sabelfeld:Zdancewic:JCS06, Chong:Myers:CSFW06] as a semantic condition for secure interactions between integrity and confidentiality. The prior work also develops type systems for enforcing robust declassification, which are implemented as part of Jif [jif]. However, prior security conditions for robustness are not satisfactory, for two reasons. First, these prior conditions characterize information security only for terminating programs. A program that does not terminate is automatically considered to satisfy robust declassification, even if it releases information improperly during execution. Therefore the security of programs that do not terminate, such as servers, cannot be described. A second and perhaps even more serious limitation is that prior security conditions largely ignore the possibility of endorsement, with the exception of _qualified robustness_ [Myers:Sabelfeld:Zdancewic:JCS06]. Qualified robustness gives the “endorse” operation a somewhat ad-hoc, nondeterministic semantics, to reflect the attacker’s ability to choose the endorsed value. This approach operationally models what the attacker can do, but does not directly describe the attacker’s control over confidentiality. The introduction of nondeterminism also makes the security property possibilistic. However, possibilistic security properties have been criticized because they can weaken under refinement [Roscoe95, SV98].

The main contribution of this paper is a general, language-based semantic framework for expressing information flow security and semantically capturing the ability of the attacker to influence both the confidentiality and integrity of information. The key building blocks for this semantics are _attacker knowledge_ [Askarov:Sabelfeld:SP07] and its (novel) dual, _attacker impact_, which respectively describe what attackers can know and what they can affect. Building upon attacker knowledge, the interaction of confidentiality and integrity, which we term _attacker control_, can be characterized formally. The robust interaction of confidentiality and integrity can then be captured cleanly as a constraint on attacker control. Further, endorsement is naturally represented in this framework as a form of attacker control, and a more satisfactory version of qualified robustness can be defined. All these security conditions can be formalized in both _progress-sensitive_ and _progress-insensitive_ variants, allowing us to describe the security of both terminating and nonterminating systems.

We show that the progress-insensitive variants of these improved security conditions are enforced soundly by a simple security type system. Recent versions of Jif have added a _checked endorsement_ construct that is useful for expressing complex security policies [Chong+:SOSP07], but whose semantics were not precisely defined; this paper gives semantics, typing rules, and a semantic security condition for checked endorsement, and shows that checked endorsement can be translated faithfully into simple endorsement at both the language and the semantic level. Our type system can easily be adjusted to enforce the progress-sensitive variants of the security conditions, as has been shown in the literature [Volpano:Smith:Probabilistic:CSFW, ONeil+:CSFW06].

The rest of this paper is structured as follows. Section 2 shows how to define information security in terms of attacker knowledge. Section 3 introduces attacker control. Section 4 defines progress-sensitive and progress-insensitive robustness using the new framework. Section 5 extends this to improved definitions of robustness that allow endorsements, generalizing qualified robustness. A type system for enforcing these robustness conditions is presented in Section 6. The checked endorsement construct appears in Section 7, which introduces a new notion of robustness that allows checked endorsements, and shows that it can be understood in terms of robustness extended with simple endorsements. Section 8 introduces attacker impact. Additional examples are presented in Section 9, related work is discussed in Section 10, and Section 11 concludes.

This paper is an extended version of a previous paper by the same authors [am10]. The significant changes include proofs of all the main theorems, a semantic rather than syntactic definition of fair attacks, and a renaming of “attacker power” to “attacker impact”.

2 Semantics

Information flow levels

We assume two security levels for confidentiality, _public_ and _secret_, and two security levels for integrity, _trusted_ and _untrusted_. These levels are denoted respectively $\mathrm{P}$, $\mathrm{S}$, $\mathrm{T}$, and $\mathrm{U}$. We define an information flow ordering $\sqsubseteq$ between these levels: $\mathrm{P} \sqsubseteq \mathrm{S}$, and $\mathrm{T} \sqsubseteq \mathrm{U}$. The four levels define a security lattice, as shown in Figure 1. Every point on this lattice has two security components: one for confidentiality and one for integrity. We extend the information flow ordering to elements of this lattice: $\ell_1 \sqsubseteq \ell_2$ if the ordering holds between the corresponding components. As is standard, we define the _join_ $\ell_1 \sqcup \ell_2$ as the least upper bound of $\ell_1$ and $\ell_2$, and the _meet_ $\ell_1 \sqcap \ell_2$ as the greatest lower bound of $\ell_1$ and $\ell_2$. All four lattice elements are meaningful; for example, it is possible for information to be both secret and untrusted when it depends on both secret and untrusted (i.e., attacker-controlled) values. This lattice is the simplest possible choice for exploring the topics of this paper; however, the results of this paper straightforwardly generalize to the richer security lattices used in other work on robustness [Chong:Myers:CSFW06].
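
Written out explicitly, using the P, S, T, U abbreviations assumed above, the lattice is the product of the two two-point orders, with componentwise ordering, join, and meet; for instance, joining a secret trusted value with a public untrusted one yields a secret untrusted result:

\[
  \mathcal{L} \;=\; \{\mathrm{P},\mathrm{S}\} \times \{\mathrm{T},\mathrm{U}\}, \qquad
  (c_1,i_1) \sqsubseteq (c_2,i_2) \iff c_1 \sqsubseteq c_2 \,\wedge\, i_1 \sqsubseteq i_2
\]
\[
  (c_1,i_1) \sqcup (c_2,i_2) = (c_1 \sqcup c_2,\, i_1 \sqcup i_2), \qquad
  (c_1,i_1) \sqcap (c_2,i_2) = (c_1 \sqcap c_2,\, i_1 \sqcap i_2), \qquad
  (\mathrm{S},\mathrm{T}) \sqcup (\mathrm{P},\mathrm{U}) = (\mathrm{S},\mathrm{U})
\]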

Figure 1: Information flow lattice