Formal Security Analysis of Neural Networks using Symbolic Intervals


Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, Suman Jana
Columbia University



Due to the increasing deployment of Deep Neural Networks (DNNs) in real-world security-critical domains including autonomous vehicles and collision avoidance systems, formally checking security properties of DNNs, especially under different attacker capabilities, is becoming crucial. Most existing security testing techniques for DNNs try to find adversarial examples without providing any formal security guarantees about the non-existence of adversarial examples. Recently, several projects have used different types of Satisfiability Modulo Theory (SMT) solvers to formally check security properties of DNNs. However, all of these approaches are limited by the high overhead caused by the solver.

In this paper, we present a new direction for formally checking security properties of DNNs without using SMT solvers. Instead, we leverage interval arithmetic to formally check security properties by computing rigorous bounds on the DNN outputs. Our approach, unlike existing solver-based approaches, is easily parallelizable. We further present symbolic interval analysis along with several other optimizations to minimize overestimations.

We design, implement, and evaluate our approach as part of ReluVal, a system for formally checking security properties of ReLU-based DNNs. Our extensive empirical results show that ReluVal outperforms Reluplex, a state-of-the-art solver-based system, by 200 times on average for the same security properties. ReluVal is able to prove, within 4 hours on a single 8-core machine without GPUs, a security property that Reluplex deemed inconclusive due to timeout (more than 5 days). Our experiments demonstrate that symbolic interval analysis is a promising new direction towards rigorously analyzing different security properties of DNNs.

1 Introduction

In the last five years, Deep Neural Networks (DNNs) have enjoyed tremendous progress, achieving or surpassing human-level performance in many tasks such as speech recognition [14], image classification [25], and game playing [40]. We are already adopting DNNs in security- and mission-critical domains like collision avoidance and autonomous driving [5, 1]. For example, the unmanned variant of the Aircraft Collision Avoidance System X (ACAS Xu) uses DNNs to predict the best actions according to the location and speed of the attacker/intruder planes in the vicinity. It was successfully tested by NASA and the FAA [2, 29] and is on schedule to be installed in over 30,000 passenger and cargo aircraft worldwide [36] and in the US Navy's fleets [3].

Unfortunately, despite our increasing reliance on DNNs, they remain susceptible to incorrect corner-case behaviors: small, human-imperceptible perturbations of test inputs, i.e., adversarial examples [42], can unexpectedly and arbitrarily change a DNN's predictions. In a security-critical system like ACAS Xu, an incorrectly handled corner case can easily be exploited by an attacker to cause significant damage, potentially costing thousands of lives.

Existing methods to test DNNs against corner cases focus on finding adversarial examples [38, 11, 33, 7, 37, 27, 28, 35, 44] without providing formal guarantees about the non-existence of adversarial inputs even within very small input ranges. In this paper, we focus on the problem of formally guaranteeing that a DNN never violates a security property (e.g., no collision) for any malicious input provided by an attacker within a given input range (e.g., for attacker aircraft speeds within a given range).

Due to non-linear activation functions like Relu, the general function a DNN represents is highly non-linear and non-convex. Therefore it is difficult to estimate the output range accurately. To tackle these challenges, all prior work on the formal security analysis of neural networks [20, 9, 16, 6] rely on different types of Satisfiability Modulo Theories (SMT) solvers and are thus severely limited by the efficiency of the solvers.

We present ReluVal, a new direction for formally checking security properties of DNNs without using SMT solvers. Our approach leverages interval arithmetic [39] to compute rigorous bounds on the outputs of a DNN. Given the ranges of the operands, interval arithmetic computes the output range efficiently using only their lower and upper bounds (e.g., the sum of two intervals [a, b] and [c, d] is simply [a + c, b + d]). Compared to SMT solvers, we found interval arithmetic to be significantly more efficient and flexible for formal analysis of a DNN's security properties.

Operationally, given an input range and a security property, ReluVal propagates the range layer by layer to calculate the output range, applying a variety of optimizations to improve accuracy. ReluVal finishes with two possible outcomes: (1) a formal guarantee that no value in the input range violates the property ("secure"); or (2) an adversarial example in the input range violating the property ("insecure"). Optionally, ReluVal can also guarantee that no value in a set of subintervals of the input range violates the property ("secure subintervals") and that all remaining subintervals each contain at least one concrete adversarial example ("insecure subintervals").

A key challenge in ReluVal is the inherent overestimation caused by input dependencies [8, 39] when interval arithmetic is applied to complex functions. Specifically, the operands of each hidden neuron depend on the same inputs to the DNN, but interval arithmetic assumes that they are independent and may thus compute an output range much larger than the true range. For example, consider a simplified neural network in which the input x is fed to two neurons that compute 2x and −x respectively, and the intermediate outputs are summed to generate the final output f(x) = 2x + (−x) = x. If the input range of x is [0, 1], the true output range of f is [0, 1]. However, naive interval arithmetic will compute the range of f as [0, 2] + [−1, 0] = [−1, 2], introducing a huge overestimation error. Much of our research effort focuses on mitigating this challenge; below we describe two effective optimizations to tighten the bounds.

First, ReluVal uses symbolic intervals whenever possible to track the symbolic lower and upper bounds of each neuron. In the preceding example, ReluVal tracks the intermediate outputs symbolically (as 2x and −x, respectively), so their sum simplifies to x and the range of the final output is computed as [0, 1]. When propagating symbolic bound constraints across a DNN, ReluVal correctly handles non-linear functions such as ReLU and calculates proper symbolic upper and lower bounds. It concretizes symbolic intervals when needed to preserve a sound approximation of the true ranges. Symbolic intervals enable ReluVal to accurately handle input dependencies, reducing output bound estimation errors by 85.67% compared to naive extension, based on our evaluation.

Second, when the output range of the DNN is too large to be conclusive, ReluVal iteratively bisects the input range and repeats the range propagation on the smaller input ranges. We term this optimization iterative interval refinement because it is in spirit similar to abstraction refinement [4, 13]. Interval refinement is also amenable to massive parallelization, an additional advantage of ReluVal over hard-to-parallelize SMT solvers.

Mathematically, we prove that interval refinement on DNNs always converges in a finite number of steps as long as the DNN is Lipschitz continuous, which is met when a DNN is stable [42]. To make interval refinement converge faster, ReluVal uses additional optimizations that analyze how each input variable influences the output of a DNN by computing each layer's gradients with respect to the input variables. For instance, when bisecting an input range, ReluVal picks the input variable range that influences the output the most. Further, it looks for input variable ranges that influence the output monotonically, and uses only the lower and upper bounds of each such range for sound analysis of the output range, avoiding splitting any of these ranges.

We implemented ReluVal in around 3,000 lines of C code. We evaluated ReluVal on two different DNNs, ACAS Xu and an MNIST network, using 15 security properties (out of which 10 are the same ones used in [20]). Our results show that ReluVal can provide formal guarantees for all 15 properties, and is on average 200 times faster than Reluplex, a state-of-the-art DNN verifier using a specialized solver [20]. ReluVal is even able to prove within 4 hours a security property that Reluplex [20] deemed inconclusive due to timeout after 5 days. For MNIST, ReluVal verified 39.4% of 5,000 randomly selected test images to be robust against perturbation attacks up to a given bound.

This paper makes three main contributions.

  • To the best of our knowledge, ReluVal is the first system that leverages interval arithmetic to provide formal guarantees of DNN security.

  • Naive application of interval arithmetic to DNNs is ineffective. We present two optimizations – symbolic intervals and iterative refinement – that significantly improve the accuracy of interval arithmetic on DNNs.

  • We designed, implemented, and evaluated our techniques as part of ReluVal and demonstrated that it is on average 200 times faster than Reluplex, a state-of-the-art DNN verifier using a specialized solver [20].

2 Background

2.1 Preliminary of Deep Learning

A typical feedforward DNN can be thought of as a function f mapping inputs (e.g., images, text) to outputs (e.g., labels in image classification, text in machine translation). Specifically, f is composed of a sequence of parametric functions f = f_l ∘ f_{l-1} ∘ ... ∘ f_1, where l denotes the number of layers in the DNN, f_k denotes the transformation performed by the k-th layer, and w_k denotes the weight parameters of the k-th layer. Each f_k performs two operations: (1) a linear transformation of its input (either the DNN's input x or the output of the preceding layer), i.e., w_k · x_{k-1}, where x_{k-1} is the intermediate output of layer k-1 while processing x (with x_0 = x), and (2) a nonlinear transformation σ(w_k · x_{k-1}), where σ is the nonlinear activation function. Common activation functions include sigmoid, hyperbolic tangent, and ReLU (Rectified Linear Unit) [34]. In this paper, we focus on DNNs using ReLU as the activation function, given that it is the most popular one used in modern state-of-the-art DNN architectures [12, 15, 41].

2.2 Threat Model

Target system. In this paper, we consider any type of security-critical system, e.g., an airborne collision avoidance system for unmanned aircraft like ACAS Xu [29], which uses DNNs for decision making in the presence of an adversary/intruder. The decision logic predicts the best action based on sensor data about the current speed and course of the vehicle (the aircraft itself), those of the adversary, and the distances between the aircraft and nearby intruders. DNNs are becoming increasingly popular in such systems due to better accuracy and less performance overhead than traditional rule-based systems [19].

Figure 1: The DNN in the victim aircraft (ownship) should advise a left turn (upper figure) but unexpectedly advises right (lower figure) due to the presence of adversarial inputs (e.g., the attacker approaching at certain angles), causing a collision with the intruder.

Security properties. In this paper, we focus on input-output security properties that ensure the correct action in the presence of adversarial inputs within certain ranges, applicable to a wide range of DNN-based systems. Input-output security properties are well suited for DNN-based systems, as their decision logic is often opaque even to their designers. Therefore, unlike for traditional programs, writing complete specifications involving internal states is often hard.

For example, consider a security property for a self-driving car's crash-avoidance system that uses a DNN to predict the steering angle: the car should steer left if an attacker car approaches it from the right. In this setting, even though the final output is easy to predict, the correct outputs of the internal neurons are hard to predict even for the designer of the DNN.

Attacker model. We assume that the inputs an adversary can provide are bounded within an interval specified by a security property. For example, an attacker aircraft has a maximum speed (e.g., it can only move between 0 and 500 mph); the attacker is free to choose any value within that range. This attacker model is, in essence, similar to the ones used for adversarial attacks on vision-based DNNs, where the attacker searches for visually imperceptible perturbations (within a certain bound) of the original image such that the prediction changes to an incorrect label. Note that, in this setting, imperceptibility is measured using a norm-based distance.

Formally, given a computer-vision DNN f and an input x, the attacker solves the following optimization problem: minimize ||δ|| such that f(x + δ) ≠ f(x), where ||·|| denotes a norm and δ is the perturbation applied to the original input x. In other words, the security property of a vision DNN being robust against such adversarial perturbations is defined as: for all x' within an ε-distance ball of x in the input space, f(x') = f(x).

Unlike for adversarial images, we extend the attacker model to allow different amounts of change for different features. Specifically, instead of requiring the overall perturbation of the input features to be bounded by an L-norm, our security properties allow different input features to be transformed within different intervals. Moreover, for DNNs whose outputs are not explicit labels, we do not require the predicted label to remain the same; we support properties specifying arbitrary output intervals.

An example. As shown in Figure 1, normally, when the distance (one feature to the DNN) between the victim ship (ownship) and the intruder is large, the victim ship's advisory system will advise left to avoid the collision and then advise right to get back to the original track. However, if the DNN is not verified, there may exist a specific situation in which the advisory system, for certain approaching angles of the attacker ship, incorrectly advises a right turn instead of left, leading to a fatal collision. An attacker who knows about the presence of such an adversarial case can deliberately approach the ship at the adversarial angle to cause a collision.

2.3 Interval Analysis

Interval arithmetic studies arithmetic operations on intervals rather than concrete values. As discussed above, since (1) checking a safety property requires treating the input features as intervals and checking the output intervals for violations, and (2) the operations in a DNN include only additions and multiplications (in linear transformations) and simple nonlinear operations (ReLU), interval analysis is a natural fit for our problem.

Formally, let x denote a concrete real value and X = [x_l, x_u] denote an interval. An interval extension of a function f(x) is an interval-valued function F(X) such that the true output range of f on X, written f(X) = {f(x) : x ∈ X}, is a subset of F(X). An ideal interval extension approaches the true bound f(X).

Let X, Y ⊆ R^d, where d is the dimension of the inputs. An interval-valued function F is inclusion isotonic if, for Y ⊆ X:

F(Y) ⊆ F(X)

An interval extension function F defined on an interval X_0 is said to be Lipschitz continuous if there is a constant K such that, for every X ⊆ X_0:

w(F(X)) ≤ K · w(X)

where w(X) is the width of interval X, and for multidimensional X, w(X) here denotes max_i w(X_i) [39].

3 Overview

Figure 2: Running example to demonstrate our technique.

Interval analysis is a natural fit for the goal of verifying safety properties of neural networks, as discussed in Section 2.3. Naively, by setting the input features to intervals, we can follow the same arithmetic performed by the DNN to compute the output intervals. Based on the output interval, we can verify whether the input perturbations can lead to violations (e.g., output intervals going beyond a certain bound).

Nevertheless, naively computing output intervals in this way yields extremely loose bounds due to the dependency problem. In particular, it produces only a highly conservative estimate of the output range, one so wide that it includes many false positives. In this section, we illustrate the inter-dependency problem with a motivating example. Then, based on the example, we describe the core techniques we develop to mitigate the dependency problem in the setting of verifying DNNs.

(a) Naive interval propagation
(b) Symbolic interval propagation
(c) Iterative bisection and refinement
Figure 3: Examples showing (a) naive interval extension, where the output interval is loose due to the inter-dependency of the input variables, (b) symbolic interval analysis, which eliminates the input dependency, and (c) bisection, which reduces the input dependency.

A working example. We use the small motivating example shown in Figure 2 to illustrate the inter-dependency problem, and Figure 3 to illustrate our techniques for dealing with it.

Let us assume the sample NN is deployed in an unmanned aerial vehicle that takes two inputs, (1) the distance from the intruder and (2) the intruder approaching angle, and outputs the steering angle. The NN consists of five neurons; the weight parameters, attached to each edge, are also shown in Figure 3.

Assume the possible distance from the intruder is between 4 and 6 and the possible angle of the approaching intruder is between 1 and 5, and we aim to verify that the steering angle is safe, defining any steering angle less than 7 as unsafe. Specifically, let x denote the distance to the intruder and y denote the approaching angle of the intruder. Given x ∈ [4, 6] and y ∈ [1, 5], we aim to assert that the output is never less than 7. Figure 3(a) illustrates the naive baseline interval propagation in this NN. Performing the interval multiplications and additions, along with the ReLU activation function, yields an output interval whose lower bound is 6. Note that this is an overestimation, because the lower bound 6 cannot actually be achieved: it occurs only when the left hidden neuron outputs 11 and the right one outputs 5. However, the input values required for the left hidden neuron to output 11 contradict those required for the right hidden neuron to output 5, so the two conditions cannot be satisfied simultaneously, i.e., the output value 6 can never be achieved. This effect is known as the dependency problem [39].

Since we defined a safe steering angle as one not less than 7, we cannot guarantee the non-existence of violations: according to the naive interval propagation above, the steering angle may be as low as 6.

Symbolic interval propagation. Figure 3(b) demonstrates that we can effectively keep track of symbolic intervals for both the upper and lower bounds, representing the dependencies in the intermediate neurons' outputs. We propagate these symbolic intervals through the layers of the NN as far as possible, and concretize the bounds only at the final output layer or at intermediate layers where an accurate symbolic interval cannot be maintained due to large non-linearities. Then, by evaluating the closed-form symbolic interval at the final output layer, we obtain a tighter concrete interval.

For example, the left hidden neuron keeps its symbolic lower and upper bounds as the linear expression produced by the preceding linear transformation (the dot product of the input and the weight parameters). Since this expression is always positive for the given input ranges x ∈ [4, 6] and y ∈ [1, 5], we can safely propagate the symbolic interval through the ReLU activation. However, the right hidden neuron's input can potentially be negative. Thus, when propagating through the ReLU, we partially concretize the lower bound to 0 (ReLU maps any negative input to 0) and keep the upper bound as a symbolic equation. In the final layer, solving the propagated symbolic bound yields a concrete interval that is tighter than the naive baseline interval and whose lower bound is not below 7, allowing us to guarantee the non-existence of any unsafe steering angle.

In summary, symbolic interval propagation explicitly represents the intermediate computation results as symbolic intervals that encode the inter-dependency of the input features, minimizing overestimation. However, as the example shows, we may have to concretize the bounds at certain neurons as we propagate through the layers, losing some dependency relationships. Therefore, we introduce another optimization, iterative refinement, described below. As shown in Section 7, combining these two techniques achieves very tight bounds.

Iterative refinement. Figure 3(c) illustrates another optimization we introduce to mitigate the dependency problem. Here, we leverage the fact that the dependency error for Lipschitz continuous functions (most popular DNNs are Lipschitz continuous [42]) decreases with the width of the input intervals. Therefore, we can bisect the input interval, evenly dividing it into the union of two consecutive sub-intervals, to reduce the overestimation. The output bound is thus tightened, as shown in the example: the union of the refined output intervals no longer contains any value below 7, which proves the non-existence of the violation. Note that we can iteratively refine the output interval by continuing to split the intervals, and the operation is highly parallelizable, as the split sub-intervals can be checked independently (Section 7). In Section 4, we prove that iterative refinement can reduce the width of the output range to arbitrary precision within a finite number of steps for any Lipschitz continuous DNN.

4 Proof of Correctness

Section 3 demonstrates the basic idea of naive interval extension and the optimization of iterative refinement. In this section, we give a detailed proof of the correctness of interval analysis/estimation on DNNs, also known as interval extension estimation, and of the convergence of iterative refinement. The proofs are based on two main properties of neural networks: inclusion isotonicity and Lipschitz continuity. In general, the correctness guarantee of interval extension holds for most finite DNNs, while the convergence guarantee requires the Lipschitz property. In the following, we give the proof of correctness for the two most important techniques we use throughout the paper, but the proof is generic and works for our other optimizations such as symbolic interval analysis, influence analysis, and monotonicity, as described in Section 5.

Let f denote an NN and F denote its naive interval extension. That is, F is an interval-valued function defined for all input intervals X such that F(x) = f(x) for any concrete input x, obtained by replacing each concrete operation in f with its naive interval counterpart. All other types of interval extensions can be easily analyzed based on the following proof.

4.1 Correctness of Overestimation

We are going to demonstrate that the naive interval extension F of f always overestimates the theoretically tightest output range f(X). According to our definition of inclusion isotonicity in Section 2, it suffices to prove that the naive interval extension of an NN is inclusion isotonic. Note that we consider only neural networks with ReLU as the activation function in the following proof, but the proof can easily be extended to other popular activation functions like tanh or sigmoid.

First, we need to demonstrate that ReLU is inclusion isotonic. Because ReLU is monotonic, we can simply take its interval extension to be Relu([a, b]) = [max(0, a), max(0, b)]. Therefore, for any [a, b] ⊆ [c, d], we have max(0, a) ≥ max(0, c) and max(0, b) ≤ max(0, d), so Relu([a, b]) ⊆ Relu([c, d]). Most common activation functions are inclusion isotonic; we refer interested readers to [39] for a list of common functions that are inclusion isotonic.

We note that f is a composition of activation functions and linear functions, and that linear functions, as well as common activation functions, are inclusion isotonic [39]. Because any composition of inclusion isotonic functions is still inclusion isotonic, the interval extension F of f is inclusion isotonic.

Next, we show for an arbitrary input interval X that:

f(X) ⊆ F(X)

Consider any x ∈ X, so that the degenerate interval [x, x] satisfies [x, x] ⊆ X. Applying the previously shown inclusion isotonicity of F, we get:

F([x, x]) ⊆ F(X)

Now, since F is an interval extension of f, we have F([x, x]) = f(x). We thus get:

f(x) ∈ F(X) for every x ∈ X, i.e., f(X) ⊆ F(X)

which is exactly the desired result.

We thus obtain the result shown in Equation 1: for every input interval X, the interval extension F(X) of f always contains the true codomain (the theoretically tightest bound) f(X).

4.2 Convergence in Finite Steps of Splits

We have now seen that the naive interval extension of f overestimates the true output range. Next, we show that iteratively splitting the input is an effective way to refine F and reduce this overestimation error. In particular, a finite number of splits allows us to approximate f(X) with F to arbitrary accuracy; this is guaranteed by the Lipschitz continuity of the NN.

First, we need to prove that F satisfies interval Lipschitz continuity. The interval extensions of common activation functions and of linear functions in NNs were proved to be interval Lipschitz continuous in [39]. Here we take ReLU as an example. ReLU is a Lipschitz function because |max(0, x) − max(0, y)| ≤ |x − y|. For its interval extension, by monotonicity we get, for X = [x_l, x_u], w(Relu(X)) = max(0, x_u) − max(0, x_l) ≤ x_u − x_l = w(X). Thus, the interval extension of ReLU is Lipschitz continuous. As an NN is composed of Lipschitz continuous functions, its interval extension is still Lipschitz continuous.

Then we demonstrate that, by splitting the input into N smaller pieces and taking the union of their corresponding outputs, we can make the overestimation error at least N times smaller:

We define an N-split uniform subdivision of the input X = (X_1, ..., X_d) as a collection of sets X_{i,j}:

X_i = X_{i,1} ∪ X_{i,2} ∪ ... ∪ X_{i,N},  i = 1, ..., d

where w(X_{i,j}) = w(X_i)/N for all j. We note that this is exactly a partition of each X_i into N consecutive pieces of equal width. We then define the refinement of F over X with N splits as:

F^(N)(X) = ∪_{j_1, ..., j_d} F(X_{1,j_1} × ... × X_{d,j_d})

Finally, we define the overestimation error produced by the naive interval extension of an NN after N-split refinement as:

E_N(X) = w(F^(N)(X)) − w(f(X))

Because the naive interval extension F is Lipschitz continuous, so is its N-split refinement F^(N). Thus, for each sub-interval X' of the subdivision, w(F(X')) ≤ K · w(X') = K · w(X)/N, where K is the Lipschitz constant of the naive interval extension F.

After N-split refinement, there are at most two separate error terms created by the naive interval extension, one below the true lower bound and one above the true upper bound, each no larger than K · w(X)/N. Therefore:

E_N(X) ≤ 2K · w(X)/N

Equation 2 shows that the error width of the N-split refinement gradually converges to 0 as we increase N. That is, we can achieve arbitrary accuracy when using F^(N) to approximate f, given a sufficiently large N.

5 Methodology

Figure 4 shows the main workflow and components of ReluVal. Specifically, ReluVal first takes the input range and uses symbolic interval analysis to get a tight estimate of the output range. If the estimated output interval is tight enough to satisfy the security property, it outputs "secure". If the checking process identifies any adversarial case, i.e., a concrete input violating the security property, it outputs the case as a counterexample. Otherwise, if the preliminary output interval is still so loose/conservative that unsafe outputs may exist within the estimate, ReluVal uses iterative interval refinement to further tighten the output interval toward the theoretically tightest bound, then returns to the first step and repeats the process. Once the number of iterations reaches a preset threshold, ReluVal outputs "timeout", denoting that it cannot verify the security property within the specified period.

Figure 4: Workflow of ReluVal in checking security property of DNN.

As discussed in Section 3, simple interval extension obtains only loose/conservative intervals due to the input dependency problem, possibly including many false positives. In the following, we describe the details of symbolic interval analysis and iterative interval refinement, both of which reduce the effect of the dependency problem and tighten the output interval estimation. We also describe other optimizations we propose to further improve the performance of these two techniques.

5.1 Symbolic Interval Propagation

Symbolic interval propagation is one of our core contributions for mitigating the input dependency problem and tightening the output interval estimation. If a DNN consisted only of linear transformations, keeping symbolic equations throughout the intermediate computation would perfectly eliminate input dependency errors. However, as shown in Section 3, ReLU essentially drops an equation and replaces it with 0 whenever the equation evaluates to a negative value. Therefore, at each neuron, we keep the lower and upper bound equations and concretize a bound whenever it can evaluate to a negative value while passing through a ReLU. For example, in our running example, we keep the left hidden neuron's equation because it always evaluates to a positive value, whereas for the right hidden neuron, whose lower bound can be negative, we concretize the lower bound to 0. The benefit of maintaining equations is that, when they are propagated to deeper layers, terms can cancel out, implicitly encoding the variable dependencies. In contrast, naive interval propagation blindly treats the operand intervals as independent variables and performs interval additions or multiplications, overlooking the underlying dependency relations.

Inputs: network (tested neural network),
        input (input interval)
1:  Initialize eq = input;  // symbolic bound equations, one per neuron
2:  // cache mask matrix R needed in backward propagation
3:  // loop over each layer
4:  for layer = 1 to numlayer do
5:      // multiply the equations with the layer's weight matrix
6:      eq = weight[layer] * eq;
7:      // estimate concrete output ranges for each neuron
8:      upper = UPPER(eq); lower = LOWER(eq);
9:      // update the output ranges for each node
10:     if layer != lastLayer then
11:         for i = 1 to layerSize[layer] do
12:             if upper[i] <= 0 then           // ReLU always outputs 0
13:                 R[layer][i] = [0, 0];
14:                 eq.up[i] = eq.low[i] = 0;
15:             else if lower[i] >= 0 then      // ReLU acts as the identity
16:                 R[layer][i] = [1, 1];
17:             else                            // lower[i] < 0 < upper[i]
18:                 R[layer][i] = [0, 1];
19:                 eq.low[i] = 0;              // concretize lower bound to 0
20:                 if LOWER(eq.up[i]) < 0 then // upper equation spans 0
21:                     eq.up[i] = upper[i];    // concretize upper bound
22:     else
23:         output = {lower, upper};
24: return R, output;
Algorithm 1 Forward symbolic interval analysis

Algorithm 1 elaborates the procedure of propagating symbolic intervals/equations in the interval computation of DNN. In the following, we describe the core components and the details of this technique.

Constructing symbolic intervals. Consider a particular neuron n. (1) If n is in the first layer, we can write its bound equations as:

Eq_low(n) = Eq_up(n) = W · x

where x denotes the symbolic input variables and W the weights of the first layer. With the constraint of the input interval X, we can evaluate the equation to obtain a concrete bound. (2) If n belongs to an intermediate layer, we initialize the symbolic interval of n's output as:

[W⁺ · Eq_low + W⁻ · Eq_up,  W⁺ · Eq_up + W⁻ · Eq_low]

where Eq_low and Eq_up are the bound equations from the previous layer, and W⁺ and W⁻ denote the positive and negative weights of the current layer, respectively. The idea is that a positive weight w multiplying the interval [l, u] yields [w·l, w·u], while a negative weight flips the bounds, yielding [w·u, w·l].

Concretization. When a ReLU takes in a symbolic equation, we evaluate the concrete values of the equation's upper and lower bounds. In particular, if the concrete lower bound is non-negative, we keep the lower equation and pass it to the next layer; otherwise, we concretize it to 0. Likewise, if the concrete upper bound is non-negative, we keep the upper equation and pass it to the next layer; otherwise, we concretize it to 0.

Correctness. We first distinguish three different output intervals: (1) the theoretically tightest bound f(X), (2) the naive interval extension bound F(X), and (3) the symbolic interval bound F_s(X). We show that the symbolic bound is a superset of the theoretically tightest bound and a subset of the naive interval extension bound:

f(X) ⊆ F_s(X) ⊆ F(X)

As a given input range is propagated to the output layer, it passes through both linear transformations and non-linear ReLUs. Symbolic interval analysis keeps exact bounds through the linear transformations and invokes concretization to handle the non-linearity. Compared to the theoretically tightest bound, the only approximation introduced is the concretization of ReLUs, which is an over-approximation as proved before. Naive interval extension, on the other hand, is an extreme version of symbolic interval analysis that keeps no symbolic constraints at all. Therefore, symbolic interval analysis partially over-approximates the theoretically tightest bound and is itself fully over-approximated by naive interval extension, as expressed in Equation 3.

Error analysis. We also analyze how tight the bound obtained by symbolic interval analysis is by measuring its error, i.e., its distance from the theoretically tightest bound. The first layer introduces no over-approximation; error first appears at the second layer's neurons that require concretization, and this error then compounds through the remaining layers to the output. Empirically, we find the number of concretized neurons per layer is only 1 or 2 when the input range is small. Therefore, symbolic interval analysis achieves a significant improvement over naive interval extension, which we also show in Section 7.

5.2 Iterative Interval Refinement

Although symbolic interval analysis alone can produce relatively tight bounds, the estimated output intervals are occasionally still not tight enough for verifying properties, especially when the input intervals are comparatively large and trigger many concretizations. As discussed in Section 5, we then resort to our other core technique, iterative interval refinement. In addition, we propose two optimizations, influence analysis and monotonicity, which further refine the estimated output ranges on top of iterative interval refinement.

Baseline iterative refinement. In Section 4, we proved that the theoretically tightest bound can be approached if the input can be split an arbitrarily large number of times; we also showed that as we split more, the output interval estimate moves increasingly closer to the ideal tightest bound. Therefore, we iteratively bisect each input interval until the output interval is tight enough to verify the security property, or until we time out, as shown in Figure 4. The iterative bisection process can be represented as a bisection tree, as shown in Figure 5. Each bisection of one input yields two children denoting two consecutive sub-intervals, whose union forms their parent.

Specifically, the bisection tree has some deepest depth, and the leaf intervals created by iterative interval refinement form the set of ranges proved to contain no property-violating inputs. To check whether any adversarial example exists in a bisected input range, we sample a few input points (the current default is the middle point of each range) and test whether the concrete output violates the property. If so, we output the adversarial example, mark the range as definitely containing adversarial examples, and conclude the analysis for that range. Otherwise, we repeat the symbolic interval analysis for the range. This default configuration is tailored towards deriving a conclusive answer of “secure” or “insecure” for the entire input interval. Users of ReluVal can configure it to further split an insecure interval to potentially discover secure sub-intervals within it, trading off analysis time for more secure intervals.
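The refinement loop described above can be sketched as follows. This is a simplified recursive version of our own; `bounds_fn` stands for any sound over-approximation (e.g., symbolic interval analysis), `is_safe` encodes the security property on an output interval, and the midpoint counterexample check mirrors the default sampling described above.

```python
def bisect_verify(bounds_fn, x_lo, x_hi, is_safe, max_depth=20, depth=0):
    """Recursively bisect the input box until the over-approximated output
    interval proves the property, a concrete violation is found, or the
    depth budget is exhausted (True / False / None = inconclusive)."""
    out_lo, out_hi = bounds_fn(x_lo, x_hi)
    if is_safe(out_lo, out_hi):          # sound bound already proves the property
        return True
    mid = [(l + h) / 2.0 for l, h in zip(x_lo, x_hi)]
    p_lo, p_hi = bounds_fn(mid, mid)     # exact output of the midpoint sample
    if not is_safe(p_lo, p_hi):
        return False                     # concrete adversarial input found
    if depth >= max_depth:
        return None                      # inconclusive: depth budget exhausted
    # bisect along the widest input dimension
    i = max(range(len(x_lo)), key=lambda j: x_hi[j] - x_lo[j])
    left_hi, right_lo = list(x_hi), list(x_lo)
    left_hi[i], right_lo[i] = mid[i], mid[i]
    left = bisect_verify(bounds_fn, x_lo, left_hi, is_safe, max_depth, depth + 1)
    if left is False:
        return False
    right = bisect_verify(bounds_fn, right_lo, x_hi, is_safe, max_depth, depth + 1)
    if right is False:
        return False
    return True if (left is True and right is True) else None
```

For instance, the naive bound of f(x) = x − x over [0, 1] is [−1, 1], which fails to prove f(x) < 0.5 directly, but a few bisections shrink the over-approximation enough to prove it on every leaf.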

Here, each node denotes an input interval at a given split depth; one bisection of a node creates its two children, which together cover the parent interval.

Correctness. Compared to N-split refinement, iterative interval refinement with deepest depth n achieves at least the same effect once the 2^n leaves match the N^d sub-ranges of N-split refinement (where d is the dimension of the input), which shrinks the width of the overestimated error to:

Therefore, as the depth of iterative interval refinement increases, its error approaches zero, and the estimated output interval monotonically approaches the tightest bound.

Figure 5: A bisection tree of split depth n. Each node denotes a bisected sub-interval.

Optimizing iterative refinement. Based on the baseline iterative refinement, we develop two optimizations, namely influence analysis and monotonicity, to further cut the average bisection depths.

(1) Influence analysis. When deciding which input intervals to bisect first, the baseline method follows a purely random strategy. In this optimization, we instead compute the gradient, or Jacobian, of the output with respect to each input feature and bisect the feature with the largest influence first. The high-level intuition is that the gradient measures the sensitivity of the output to each input feature and thus the influence of that input on the output.

Inputs: network — tested neural network
        R — gradient mask recorded during forward propagation
1: // initialize upper and lower gradient bounds
2: g = weights[lastLayer];
3: for layer = numlayer−1 down to 1 do
4:     for i = 1 to layerSize[layer] do
5:         // g[i] is an interval containing the lower and upper gradient
6:         // interval Hadamard product with the ReLU gradient mask
7:         g[i] = R[layer][i] ⊙ g[i];
8:     // interval matrix multiplication
9:     g = weights[layer] · g;
10: return g;
Algorithm 2 Backward propagation for gradient interval

Algorithm 2 includes the steps for backward computation of input feature influence. Note that, instead of working on concrete values, this version works on intervals. The basic idea is to account for the influence of ReLUs. If there were no ReLUs in the target DNN, the Jacobian matrix would be determined solely by the weight parameters and thus fixed regardless of the input. Since a ReLU’s gradient is either 0 (negative input) or 1 (positive input), we only need to record which ReLUs have 0 gradient during forward propagation and use this mask during backward propagation. However, given a range of inputs, a node could be both negative and positive, in which case the gradient of this node becomes the interval [0, 1]. For nodes that are always positive or always negative over the input range, we regard their gradient as the degenerate interval [1, 1] or [0, 0], respectively.
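A minimal runnable rendering of this backward pass follows (our own sketch of Algorithm 2's idea, not ReluVal's C code; encoding each ReLU mask as a per-node (lo, hi) pair is an assumption for illustration).

```python
import numpy as np

def interval_gradient(weights, relu_masks):
    """Backward-propagate an interval enclosure of the Jacobian of the
    outputs w.r.t. the inputs.  relu_masks[k][j] is (0, 0), (1, 1), or (0, 1)
    when hidden node j of layer k is always inactive, always active, or
    undecided over the current input box."""
    g_lo = weights[-1].astype(float).copy()   # last layer: gradient is exact
    g_hi = weights[-1].astype(float).copy()
    for k in range(len(weights) - 2, -1, -1):
        m_lo = np.array([m[0] for m in relu_masks[k]], dtype=float)
        m_hi = np.array([m[1] for m in relu_masks[k]], dtype=float)
        # interval Hadamard product with the mask; valid since masks are >= 0
        h_lo = np.minimum(g_lo * m_lo, g_lo * m_hi)
        h_hi = np.maximum(g_hi * m_lo, g_hi * m_hi)
        # interval-times-constant matrix product, split by weight sign
        Wp = np.maximum(weights[k], 0.0)
        Wn = np.minimum(weights[k], 0.0)
        g_lo = h_lo @ Wp + h_hi @ Wn
        g_hi = h_hi @ Wp + h_lo @ Wn
    return g_lo, g_hi
```

For example, for y = ReLU(x0) + ReLU(−x1) with x0 always active and −x1 undecided, the sketch yields ∂y/∂x0 ∈ [1, 1] and ∂y/∂x1 ∈ [−1, 0], which encloses the true sub-gradients.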

Note that we still cannot obtain the exact range of the partial derivative for each input interval, as finding it is extremely hard. Instead, we compute an interval extension that approximates it; its soundness is guaranteed by the inclusion isotonicity property defined in Section 2.3.

We then resort to the smear function [21, 22]: the input with the largest smear value is the most effective interval input for the targeted output. At each bisection, we pick the most effective input to bisect so that we can, to the greatest extent, reduce the overestimation error created by interval extension. Our empirical results in Section 7 indicate that the smear function effectively prioritizes bisection of the input that most reduces the over-approximation error, and thus decreases the average split depth. The algorithm for using the smear function to identify the most influential input interval to bisect is shown in Algorithm 3.

Inputs: network — tested neural network
        input — input interval
        g — gradient interval calculated by backward propagation
1: for i = 1 to input.length do
2:     // r is the width of each input interval
3:     r = input[i].upper − input[i].lower;
4:     // e is the influence (smear value) of input i on the output
5:     e = max(|g[i].lower|, |g[i].upper|) · r;
6:     if e > largest then // most effective feature
7:         largest = e;
8:         splitFeature = i;
9: return splitFeature;
Algorithm 3 Using influence analysis to choose the most influential feature to split
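A runnable rendition of this selection step is below (a sketch assuming the gradient interval for the targeted output is given per input dimension; function and variable names are ours).

```python
def choose_split_feature(x_lo, x_hi, g_lo, g_hi):
    """Pick the input dimension with the largest smear value
    |g|_max * width, where [g_lo[i], g_hi[i]] encloses d(output)/d(input i)."""
    largest, split_feature = -1.0, 0
    for i in range(len(x_lo)):
        width = x_hi[i] - x_lo[i]                          # r: interval width
        smear = max(abs(g_lo[i]), abs(g_hi[i])) * width    # e: influence
        if smear > largest:
            largest, split_feature = smear, i
    return split_feature
```

Note that a wide but weakly influential input can still win: a dimension of width 10 with |g| ≤ 0.5 has smear 5 and is split before a dimension of width 1 with |g| = 1.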

(2) Monotonicity. Computing the Jacobian over intervals also helps reason about the monotonicity of the output with respect to an input interval. In particular, if the gradient interval for an input lies entirely above or entirely below zero, the partial derivative with respect to that input is always positive or always negative over the given input interval. In these cases, we can replace the interval with its two endpoint values: if neither endpoint causes a violation, no value in between can trigger one. Our empirical results in Section 7 also indicate that monotonicity helps decrease the number of splits required for proving a security property.
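The monotonicity shortcut can be sketched as follows (our own illustration: every dimension whose gradient interval has a fixed sign is pinned to its two endpoints, so only degenerate boxes remain to be checked along those dimensions).

```python
def collapse_monotone_dims(x_lo, x_hi, g_lo, g_hi):
    """Return sub-boxes in which every monotone dimension is pinned to one of
    its endpoints; checking these degenerate boxes suffices for the property."""
    boxes = [(list(x_lo), list(x_hi))]
    for i in range(len(x_lo)):
        if g_lo[i] >= 0 or g_hi[i] <= 0:       # fixed-sign partial derivative
            new_boxes = []
            for lo, hi in boxes:
                for v in (x_lo[i], x_hi[i]):   # pin dimension i to an endpoint
                    l, h = list(lo), list(hi)
                    l[i] = h[i] = v
                    new_boxes.append((l, h))
            boxes = new_boxes
    return boxes
```

For a 2-D box where only the first dimension is monotone, this yields two boxes, each a 1-D interval in the remaining dimension.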

6 Implementation

Setup. We implement ReluVal in C and leverage OpenBLAS for efficient matrix multiplication. We evaluate ReluVal on a Linux server running Ubuntu 16.04 with 16 CPU cores and 256GB of memory.

Parallelization. One unique advantage of ReluVal over other security property checking systems like Reluplex is that interval arithmetic, in the setting of verifying DNNs, is highly parallelizable by nature. During iterative interval refinement, the input ranges newly created by splitting are independent. This allows us to create many threads, each handling a specific input range, and achieve considerable speedup by distributing different input ranges to different workers.

However, two key challenges remain in the parallel setting. First, as shown in Section 5.2, the bisection tree is often unbalanced, leading to substantially different running times across threads; we found the worst-case running time on a single thread to be nontrivial. In other words, most of the available workers sit idle while only a few threads are still running. Second, since we cannot predict in advance how deep each thread will need to bisect, scheduling overhead is high: ReluVal tends to create new threads that often terminate within seconds.

To solve these two problems, we develop a dynamic thread rebalancing algorithm that can approximately pinpoint potentially difficult tasks and efficiently split and assign them to newly created threads. Specifically, we keep a window of the latest ten depths for each thread to track its current average depth and to estimate the depths of the following subtrees. Based on this estimated average depth, ReluVal creates child threads to help the slowest thread while minimizing the cost of thread creation.

Outward roundup. The large number of floating-point matrix multiplications in a DNN can lead to a severe precision drop after rounding [10]. For example, suppose the output of one neuron is [0.00000001, 0.00000002]. If this falls below the representable precision, it is automatically rounded to [0.0, 0.0]. After one more layer of propagation with a following weight of 1000, the correct output should be [0.00001, 0.00002], but it becomes [0.0, 0.0] due to the rounding. Though this rounding error is minor for the current layer's output, as the interval propagates through the network the error accumulates and significantly affects the output precision. In fact, our empirical tests show that some adversarial examples reported by Reluplex [20] are false positives due to such rounding problems.

Therefore, we adopt outward rounding in ReluVal: for every newly calculated concrete or symbolic interval, we always round the lower bound down and the upper bound up. This outward rounding strategy ensures that the computed output range is an overestimation of the true output range. We implement outward rounding using 32-bit floats; this precision suffices for verifying properties on the ACAS Xu models, though it can easily be extended to 64-bit doubles.
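The idea can be sketched as follows (our own Python illustration, not ReluVal's C implementation; it uses `math.nextafter`, available in Python 3.9+, to widen each bound by one unit in the last place in the outward direction).

```python
import math

def outward_round(lo, hi):
    """Widen an interval by one ulp on each side so that floating-point
    rounding can never make the computed enclosure miss the true range."""
    return math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf)

def mul_interval(a_lo, a_hi, w):
    """Interval-times-scalar product with outward rounding of the result."""
    lo, hi = (a_lo * w, a_hi * w) if w >= 0 else (a_hi * w, a_lo * w)
    return outward_round(lo, hi)
```

A production implementation would instead switch the hardware rounding mode per direction; widening by one ulp after round-to-nearest is a simple, slightly looser, but still sound alternative.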

7 Evaluation

7.1 Evaluation Setup

In the evaluation, we consider two general categories of DNN models deployed for two different tasks. The first category is airborne collision avoidance systems (ACAS), which are crucial for alerting pilots to and preventing collisions between aircraft. The traffic alert and collision avoidance system (TCAS) belongs to this category and is currently mandated on all large commercial aircraft worldwide [26] to avoid midair collisions. We focus our evaluation on the latest system in this category, ACAS Xu for unmanned aircraft [23], since Reluplex was also evaluated on the same system; ReluVal substantially outperforms Reluplex on all of the models tested in the Reluplex paper. The second category includes models deployed to recognize hand-written digits (the MNIST dataset). Our preliminary results demonstrate that ReluVal can also scale to larger networks that no solver-based verification tool has succeeded in checking.

ACAS Xu. The ACAS Xu system consists of forty-five different NN models. Each network is composed of an input layer taking five inputs, an output layer generating five outputs, and six hidden layers, each containing fifty neurons. Specifically, as shown in Figure 6, the five inputs are {ρ, θ, ψ, v_own, v_int}: ρ denotes the distance between ownship and intruder, θ denotes the heading direction angle of the ownship relative to the intruder, ψ denotes the heading direction angle of the intruder relative to the ownship, v_own is the speed of the ownship, and v_int is the speed of the intruder. The outputs of the NN are {COC, weak left, weak right, strong left, strong right}: COC denotes clear of conflicts, weak left/right means heading left/right at 1.5°/s, and strong left/right means heading left/right at 3.0°/s. Each output of the NN is the score of the corresponding action advisory, where the minimal score labels the best decision to take.

Figure 6: Horizontal view of ACAS Xu operating scenarios.

DNNs on MNIST. For classifying hand-written digits, we test a neural network with 784 inputs, 10 outputs, and two hidden layers, each with 512 neurons. On the MNIST test data set, it achieves 98.28% classification accuracy.

Source     Property  Networks  Reluplex Time (sec)  ReluVal Time (sec)  Speedup
from [20]  φ1        45        443,560.73*          14,603.27           30×
           φ2        34†       123,420.40           117,243.26          1×
           φ3        42        35,040.28            19,018.90           2×
           φ4        42        13,919.51            441.97              32×
           φ5        1         23,212.52            216.88              107×
           φ6        1         220,330.82           46.59               4729×
           φ7        1         86,400.0*            9,240.29            9×
           φ8        1         43,200.01            40.41               1069×
           φ9        1         116,441.97           15,639.52           7×
           φ10       1         23,683.07            10.94               2165×
ours       φ11       1         4,394.91             27.89               158×
           φ12       1         2,556.28             0.104               24580×
           φ13       1         172,800.0*           148.21              1166×
           φ14       2         172,810.86*          288.98              598×
           φ15       2         31,328.26            876.80              36×
* Reluplex has different thresholds in determining time out for different properties.
† We remove model_4_2 and model_5_3 because Reluplex found incorrect adversarial examples due to roundup problems (these models do not have any adversarial cases).
Table 1: Verifying properties of ACAS Xu compared with Reluplex. φ1 to φ10 are the properties proposed in Reluplex [20]. φ11 to φ15 are our additional properties.

7.2 Performance on ACAS Xu Models

In this section, we first present a detailed comparison of ReluVal and Reluplex in terms of verification performance. Then, we compare ReluVal with a state-of-the-art adversarial attack on DNNs, CW [7], showing that on average ReluVal consistently finds 50% more adversarial examples. Finally, we show that ReluVal can accurately narrow down all possible adversarial ranges, providing more insight into the distribution of adversarial corner cases in the input space.

Comparison to Reluplex. Table 1 compares the time needed by ReluVal with that of Reluplex for verifying the ten original properties described in the Reluplex paper [20]. In addition, we include five more properties that are also safety-critical but were not discussed in the original paper. The detailed description of each property is in the Appendix. To enable a fair comparison, we cloned the Reluplex source code and performed all comparisons on the same hardware. Table 1 shows that ReluVal outperforms Reluplex on all fifteen security properties. For properties on which Reluplex times out, ReluVal terminates in a much shorter time. On average, ReluVal achieves a 200× speedup.

# Seeds CW CW Miss ReluVal ReluVal Miss
50 24/40 40.0% 40/40 0%
40 21/40 47.5% 40/40 0%
30 17/40 57.5% 40/40 0%
20 10/40 75.0% 40/40 0%
10 6/40 85.0% 40/40 0%
Table 2: The number of adversarial inputs CW and ReluVal find out of 40 adversarial ACAS Xu cases. The “Miss” columns show the percentage of adversarial cases each tool fails to find.

Finding adversarial inputs. In terms of the number of adversarial examples detected, ReluVal also outperforms popular optimization-based methods that use gradients to find adversarial examples. We compare against the Carlini and Wagner (CW) attack [7], a state-of-the-art gradient-based attack that minimizes the proposed CW loss function. In verifying the fifteen properties on the forty-five ACAS Xu models, there are forty models for which we have verified that adversarial examples exist for some property. We therefore run CW on the same forty models against the properties for which adversarial examples exist and check for how many models CW can produce an adversarial case. Table 2 shows that the CW attack consistently misses some models for which it fails to find adversarial examples. Because optimization-based gradient descent starts from a seed input and iteratively searches for adversarial examples, the choice of seeds can strongly influence whether an adversarial input is found. We therefore try different numbers of seed inputs to give CW more flexibility during input generation. Note that ReluVal does not need any seed input; it is thus not restricted by a potentially poor starting seed and can fully explore the input space. On average, CW misses 61.2% of the models that do have adversarial inputs.

Narrowing down adversarial ranges. A unique and useful feature of ReluVal is that, given an input range, it can isolate the adversarial sub-ranges from the non-adversarial ones. This is useful because it accurately narrows down all possible adversarial ranges in the input space to a certain precision, so that users can identify suspicious adversarial regions at that granularity. Here we set the precision so that interval ranges are never split into pieces smaller than the chosen threshold. Table 3 shows the results for three properties we checked. For example, one property specifies that model_4_1 should output strong right for a given input range. For this property, ReluVal splits the input range into 262,144 small pieces, proving 163,915 of them to contain no adversarial input, proving 98,229 of them adversarial, and leaving 1 unsure.

P Adv Range Adv Unsure Non-adv
98229 1 163915
and 18121 2 14645
17738 1 15029
Table 3: The second column shows the proved adversarial ranges; the remaining ranges are proved non-adversarial. The last three columns show the detailed numbers of split ranges at the given precision.

7.3 Preliminary Tests on MNIST Model

Besides ACAS Xu, we also test ReluVal on the MNIST model, which achieves decent accuracy (98.28%). Given a particular seed image, we allow arbitrary perturbations of every pixel value, with the total perturbation bounded in norm. In particular, out of 1,000 randomly selected test images, ReluVal can prove 956 seed images robust at the smaller norm bound and 721 robust at the larger one. We occasionally time out due to currently limited computational resources; further optimizing the system and running on GPUs would let us verify these properties with larger norm bounds as well. Figure 7 shows the detailed results: as the norm bound increases, the percentage of images with no adversarial perturbation drops quickly toward 0.

Figure 7: Percentage of images proved non-adversarial by ReluVal on the MNIST model, out of 1000 random MNIST test images.

7.4 Optimizations

In this subsection, we evaluate the effectiveness of optimizations proposed in Section 5 compared to basic naive interval extension with iterative interval refinement. The results are reported in Table 4.

Methods Deepest Dep (%) Avg Dep (%) Time (%)
S.C.P 42.06 49.28 99.99
I.A. 10.65 10.85 96.04
Mono 0.325 0.497 16.91
Table 4: The percentage reductions in deepest depth, average depth, and running time for average property verification achieved by the three main techniques (symbolic interval analysis, influence analysis, and monotonicity), compared with naive interval extension.

Symbolic interval propagation. Table 4 shows that symbolic interval analysis reduces the deepest and average depths of the bisection tree (Figure 5) by up to 42.06% and 49.28%, respectively, over naive interval extension.

Influence analysis. As one of the optimizations used in iterative refinement, influence analysis prioritizes the input ranges most influential to the output and splits them first. Compared to sequentially or randomly picking features to bisect, it reduces the average depth by 10.85%, saving up to 96.04% of the running time.

Monotonicity. As the other optimization used in iterative interval refinement, monotonicity saves relatively little in terms of tree depth. However, it still reduces the average running time by 16.91%. This is because monotonicity cannot help at shallow depths, where the interval Jacobian computation is imprecise, but it helps substantially when the average depth is large.

8 Related Work

Adversarial machine learning. The vulnerability of emerging machine learning systems has drawn growing attention. A typical attacker generates adversarial examples that aim to fool a targeted machine learning model by adding minimal perturbations to original inputs. Szegedy et al. [42] first proposed generating adversarial examples using a second-order optimization technique, and conjectured that reducing the Lipschitz upper bound of a DNN can help mitigate its vulnerability to adversarial examples. Goodfellow et al. [11] then proposed the fast gradient sign method, which drastically reduces the overhead of generating adversarial examples, and demonstrated that adversarial examples arise because DNNs are “too linear.” Later, many efficient variants were proposed to find diverse categories of adversarial examples [33, 7, 38]. On the defense side, adversarial retraining has so far been shown to be one of the most effective ways to improve robustness against such adversaries [11, 37, 27, 7, 28, 35, 44]. However, most of this arms race is restricted to one type of adversary/security property (e.g., overall perturbations bounded by some norm). Moreover, neither the attacks nor the defenses provide any provable guarantee about the non-existence of adversarial examples that would ensure the security properties of the neural network. Unlike these approaches, ReluVal provides a provable security analysis of given input ranges, systematically narrowing down and detecting all adversarial ranges.

Verifying machine learning systems. Recently, SMT solver-based approaches have been applied to verify small DNNs like ACAS Xu, including the Reluplex solver [20], the PLANET solver [9], and an automated verification framework [16]. These techniques are largely limited by the scalability of the underlying solver: they either cannot scale to networks beyond a few hundred neurons [20] or lack provable guarantees [16]. Compared to solver-based verification systems, ReluVal substantially outperforms them on the same properties and shows great potential to scale to much larger networks. Separately, Kolter and Wong [24] transform the verification problem into convex optimization via relaxation and solve the dual problem. However, they do not consider violations within an individual input region but instead focus on theoretical error bounds over a set of inputs. ReluVal can accurately pinpoint adversarial input ranges or prove the non-existence of any security property violation for a particular input region.

Interval optimization. Interval analysis has long been used to solve non-linear equations and global optimization problems, with great success in many application domains [18, 31, 30]. Because it provides rigorous enclosures of solutions when modeling equations, many numerical optimization techniques [17, 43] leverage interval analysis to achieve near-precise approximations of models and constraints. We note that the computation inside a NN is mostly a sequence of simple linear transformations with a simple type of non-linear activation function; these computations closely resemble those in the traditional domains where interval analysis has succeeded. Therefore, building on the foundations of interval analysis laid by Moore et al. [39, 32], we leverage interval analysis for DNNs, and our empirical results show a substantial performance boost in proving their security properties.

9 Future Work and Conclusion

Although this paper focuses on verifying security properties of DNNs, ReluVal itself is a generic framework that efficiently leverages interval analysis to understand and analyze DNN computations. In the future, we hope to develop a full-fledged DNN security analysis tool based on ReluVal that, like traditional program analysis tools, can not only efficiently check arbitrary security properties of DNNs but also provide insights into the behaviors of hidden neurons with provable guarantees.

In this paper, we designed, developed, and evaluated ReluVal, a formal security analysis system for neural networks. We introduced several novel techniques, including symbolic interval arithmetic, to perform formal analysis without resorting to SMT solvers. ReluVal performed, on average, 200 times faster than the current state-of-the-art solver-based approaches.


  • [1] Baidu Apollo Autonomous Driving Platform.
  • [2] NASA, FAA, Industry Conduct Initial Sense-and-Avoid Test.
  • [3] NAVAIR plans to install ACAS Xu on MQ-4C fleet.
  • [4] T. Ball and S. K. Rajamani. The SLAM project: Debugging system software via static analysis. In ACM SIGPLAN Notices, volume 37, pages 1–3. ACM, 2002.
  • [5] C. Bloom, J. Tan, J. Ramjohn, and L. Bauer. Self-driving cars and data collection: Privacy perceptions of networked autonomous vehicles. In Symposium on Usable Privacy and Security (SOUPS), 2017.
  • [6] N. Carlini, G. Katz, C. Barrett, and D. L. Dill. Ground-truth adversarial examples. arXiv preprint arXiv:1709.10207, 2017.
  • [7] N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy, pages 39–57. IEEE, 2017.
  • [8] L. H. De Figueiredo and J. Stolfi. Affine arithmetic: concepts and applications. Numerical Algorithms, 37(1):147–158, 2004.
  • [9] R. Ehlers. Formal verification of piece-wise linear feed-forward neural networks. arXiv preprint arXiv:1705.01320, 2017.
  • [10] D. Goldberg. What every computer scientist should know about floating-point arithmetic. ACM Computing Surveys (CSUR), 23(1):5–48, 1991.
  • [11] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
  • [12] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • [13] T. A. Henzinger, R. Jhala, R. Majumdar, and G. Sutre. Lazy abstraction. ACM SIGPLAN Notices, 37(1):58–70, 2002.
  • [14] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97, 2012.
  • [15] G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, volume 1, page 3, 2017.
  • [16] X. Huang, M. Kwiatkowska, S. Wang, and M. Wu. Safety verification of deep neural networks. In International Conference on Computer Aided Verification, pages 3–29. Springer, 2017.
  • [17] D. Ishii, K. Yoshizoe, and T. Suzumura. Scalable parallel numerical constraint solver using global load balancing. In Proceedings of the ACM SIGPLAN Workshop on X10, pages 33–38. ACM, 2015.
  • [18] L. Jaulin and E. Walter. Guaranteed nonlinear parameter estimation from bounded-error data via interval analysis. Mathematics and computers in simulation, 35(2):123–137, 1993.
  • [19] K. D. Julian, J. Lopez, J. S. Brush, M. P. Owen, and M. J. Kochenderfer. Policy compression for aircraft collision avoidance systems. In Digital Avionics Systems Conference (DASC), 2016 IEEE/AIAA 35th, pages 1–10. IEEE, 2016.
  • [20] G. Katz, C. Barrett, D. Dill, K. Julian, and M. Kochenderfer. Reluplex: An efficient smt solver for verifying deep neural networks. arXiv preprint arXiv:1702.01135, 2017.
  • [21] R. B. Kearfott. Rigorous global search: continuous problems, volume 13. Springer Science & Business Media, 2013.
  • [22] R. B. Kearfott and M. Novoa III. Algorithm 681: Intbis, a portable interval newton/bisection package. ACM Transactions on Mathematical Software (TOMS), 16(2):152–157, 1990.
  • [23] M. J. Kochenderfer, J. E. Holland, and J. P. Chryssanthacopoulos. Next-generation airborne collision avoidance system. Technical report, Massachusetts Institute of Technology-Lincoln Laboratory Lexington United States, 2012.
  • [24] J. Z. Kolter and E. Wong. Provable defenses against adversarial examples via the convex outer adversarial polytope. arXiv preprint arXiv:1711.00851, 2017.
  • [25] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
  • [26] J. Kuchar and A. C. Drumm. The traffic alert and collision avoidance system. Lincoln Laboratory Journal, 16(2):277, 2007.
  • [27] A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016.
  • [28] Y. Liu, X. Chen, C. Liu, and D. Song. Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770, 2016.
  • [29] M. Marston and G. Baca. Acas-xu initial self-separation flight tests. NASA Technical Reports Server, 2015.
  • [30] R. Moore and W. Lodwick. Interval analysis and fuzzy set theory. Fuzzy sets and systems, 135(1):5–9, 2003.
  • [31] R. E. Moore. Interval arithmetic and automatic error analysis in digital computing. Technical report, STANFORD UNIV CALIF APPLIED MATHEMATICS AND STATISTICS LABS, 1962.
  • [32] R. E. Moore. Methods And Applications Of Interval Analysis, volume 2. Siam, 1979.
  • [33] S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2574–2582, 2016.
  • [34] V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807–814, 2010.
  • [35] A. Nguyen, J. Yosinski, and J. Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 427–436, 2015.
  • [36] M. T. Notes. Airborne collision avoidance system x. MIT Lincoln Laboratory, 2015.
  • [37] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami. Practical black-box attacks against deep learning systems using adversarial examples. arXiv preprint arXiv:1602.02697, 2016.
  • [38] K. Pei, Y. Cao, J. Yang, and S. Jana. Deepxplore: Automated whitebox testing of deep learning systems. arXiv preprint arXiv:1705.06640, 2017.
  • [39] M. J. C. Ramon E. Moore, R. Baker Kearfott. Introduction to Interval Analysis. SIAM, 2009.
  • [40] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, et al. Mastering the game of go without human knowledge. Nature, 550(7676):354, 2017.
  • [41] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826, 2016.
  • [42] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
  • [43] R. Vaidyanathan and M. El-Halwagi. Global optimization of nonconvex nonlinear programs via interval analysis. Computers & Chemical Engineering, 18(10):889–897, 1994.
  • [44] W. Xu, Y. Qi, and D. Evans. Automatically evading classifiers. In Proceedings of the 2016 Network and Distributed Systems Symposium, 2016.

Appendix A: Formal Definitions for ACAS Xu Properties

Inputs. Inputs for each ACAS Xu DNN model are:

ρ: the distance between ownship and intruder;

θ: the heading direction angle of ownship relative to intruder;

ψ: the heading direction angle of intruder relative to ownship;

v_own: speed of ownship;

v_int: speed of intruder.

Outputs. Outputs for each ACAS Xu DNN model are:

COC: Clear of Conflicts;

weak left: heading left with angle 1.5°/s;

weak right: heading right with angle 1.5°/s;

strong left: heading left with angle 3.0°/s;

strong right: heading right with angle 3.0°/s.

The advisory with the minimal score is the one the network selects.

45 Models. There are 45 different models indexed by two extra inputs, a_prev and τ; model_x_y denotes the model used when a_prev takes its x-th value and τ its y-th value:

a_prev: previous advisory, indexed as {COC, weak left, weak right, strong left, strong right};

τ: time until loss of vertical separation, indexed as {0, 1, 5, 10, 20, 40, 60, 80, 100}.
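Assuming only the indexing scheme above, the model lookup can be sketched as follows (the names `A_PREV`, `TAU`, and `model_name` are illustrative, not from the paper's code):

```python
# Hypothetical helper mapping the two extra inputs to a model identifier
# under the model_x_y naming scheme described above.

A_PREV = ["COC", "weak left", "weak right", "strong left", "strong right"]
TAU = [0, 1, 5, 10, 20, 40, 60, 80, 100]

def model_name(a_prev: str, tau: int) -> str:
    """Return the model_x_y identifier for a previous advisory and tau."""
    x = A_PREV.index(a_prev) + 1  # 1-based index into the advisory list
    y = TAU.index(tau) + 1        # 1-based index into the tau list
    return f"model_{x}_{y}"

assert len(A_PREV) * len(TAU) == 45  # 5 x 9 = 45 models
```

For example, `model_name("strong right", 100)` yields `"model_5_9"`.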

Property φ1: If the intruder is distant and is significantly slower than the ownship, the score of a COC advisory will always be below a certain fixed threshold.

Tested on: all 45 networks.

Input ranges: , , .

Desired output: the score for COC is at most 1500.
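As a toy illustration of how such a threshold property can be checked with interval arithmetic (the naive baseline that the paper's symbolic interval analysis tightens), the sketch below propagates an input box through a small ReLU network and tests whether the upper bound of the single "COC" output stays below 1500. The weights are made-up values, not an actual ACAS Xu model.

```python
# Naive interval propagation through a ReLU network: a toy example, not
# the paper's full symbolic interval analysis, and not a real ACAS Xu model.
import numpy as np

def interval_forward(lo, up, weights, biases):
    """Soundly propagate the input box [lo, up] through a ReLU network."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        pos, neg = np.maximum(W, 0), np.minimum(W, 0)
        # Affine layer: positive weights keep like bounds, negative swap them.
        lo, up = pos @ lo + neg @ up + b, pos @ up + neg @ lo + b
        if i < len(weights) - 1:  # ReLU on hidden layers only
            lo, up = np.maximum(lo, 0), np.maximum(up, 0)
    return lo, up

# 2-input toy network with one hidden layer and a single "COC" output.
weights = [np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([[1.0, 1.0]])]
biases = [np.zeros(2), np.zeros(1)]
lo, up = interval_forward(np.array([0.0, 0.0]), np.array([1.0, 1.0]),
                          weights, biases)
proved = up[0] <= 1500.0  # the property holds if the upper bound is safe
```

Because naive intervals ignore input dependencies across neurons, the bound `up[0]` can be a large overestimate; this is exactly the looseness that the paper's symbolic intervals reduce.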

Property φ2: If the intruder is distant and is significantly slower than the ownship, the score of a COC advisory will never be maximal.

Tested on: model_x_y, x ≥ 2, except model_5_3 and model_4_2

Input ranges: , , .

Desired output: the score for COC is not the maximal score.
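Properties of the form "X is never the maximal score" can be conservatively decided from per-output interval bounds: if some other output's lower bound exceeds X's upper bound on the whole input region, X can never be the maximum. A minimal sketch with made-up bounds follows (the paper's symbolic intervals give tighter bounds by tracking input dependencies):

```python
# Illustrative per-output bounds only; an output interval is [lo[j], up[j]].
def never_maximal(lo, up, x):
    """True if output x provably cannot be the maximal score on the region:
    some other output's lower bound already exceeds x's upper bound."""
    return any(lo[j] > up[x] for j in range(len(lo)) if j != x)

# With COC at index 0, output 3 always scores above COC's upper bound,
# so COC is provably never maximal on this (made-up) region.
assert never_maximal(lo=[1.0, 0.2, 0.5, 2.0, 0.1],
                     up=[1.8, 0.9, 1.2, 3.0, 0.6], x=0)
```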

Property φ3: If the intruder is directly ahead and is moving towards the ownship, the score for COC will not be minimal.

Tested on: all models except model_1_7, model_1_8 and model_1_9

Input ranges: , , , , .

Desired output: the score for COC is not the minimal score.

Property φ4: If the intruder is directly ahead and is moving away from the ownship but at a lower speed than that of the ownship, the score for COC will not be minimal.

Tested on: all models except model_1_7, model_1_8 and model_1_9

Input ranges: , , , , .

Desired output: the score for COC is not the minimal score.

Property φ5: If the intruder is near and approaching from the left, the network advises “strong right”.

Tested on: model_1_1

Input ranges: , , , , .

Desired output: the score for “strong right” is the minimal score.

Property φ6: If the intruder is sufficiently far away, the network advises COC.

Tested on: model_1_1

Input ranges: , , , , .

Desired output: the score for COC is the minimal score.
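The dual check covers properties of the form "X is the minimal score" (as in this property and several below): X is provably minimal whenever its upper bound lies below every other output's lower bound. A minimal sketch with illustrative numbers:

```python
def provably_minimal(lo, up, x):
    """True if output x is provably the minimal score on the region:
    its upper bound lies below every other output's lower bound."""
    return all(up[x] < lo[j] for j in range(len(lo)) if j != x)

# Made-up bounds with COC at index 0: its upper bound 0.4 is below every
# other lower bound, so COC is provably the minimal score here.
assert provably_minimal(lo=[0.1, 0.5, 0.7, 1.0, 0.9],
                        up=[0.4, 0.8, 1.1, 1.5, 1.2], x=0)
```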

Property φ7: If vertical separation is large, the network will never advise a strong turn.

Tested on: model_1_9

Input ranges: , , , , .

Desired output: the scores for “strong right” and “strong left” are never the minimal scores.

Property φ8: For a large vertical separation and a previous “weak left” advisory, the network will either output COC or continue advising “weak left”.

Tested on: model_2_9

Input ranges: , , , , .

Desired output: the score for “weak left” is minimal or the score for COC is minimal.
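When one interval pass is too loose to decide a property such as this disjunction, the paper falls back on iterative interval refinement: bisect the widest input dimension and re-check each half. A simplified sketch, assuming a sound checker `check(lo, up)` that returns True only when the property provably holds on the whole box:

```python
def prove_by_bisection(lo, up, check, depth=10):
    """Prove a property on the box [lo, up] by splitting the widest input
    dimension whenever the sound checker is inconclusive. Returns True only
    if the property is proved on every sub-box within the depth budget."""
    if check(lo, up):
        return True
    if depth == 0:
        return False  # inconclusive: bounds stayed too loose
    d = max(range(len(lo)), key=lambda i: up[i] - lo[i])  # widest dimension
    mid = (lo[d] + up[d]) / 2.0
    left_up, right_lo = list(up), list(lo)
    left_up[d], right_lo[d] = mid, mid
    return (prove_by_bisection(lo, left_up, check, depth - 1) and
            prove_by_bisection(right_lo, up, check, depth - 1))

# Toy checker with deliberate overestimation: it only accepts boxes whose
# first dimension is at most 1 wide. One split of [0,2] x [0,2] suffices.
check = lambda lo, up: up[0] - lo[0] <= 1.0
assert prove_by_bisection([0.0, 0.0], [2.0, 2.0], check)
```

Splitting shrinks input ranges, which shrinks the interval overestimation, so many properties that are undecidable on the full input region become provable on its sub-regions.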

Property φ9: Even if the previous advisory was “weak right”, the presence of a nearby intruder will cause the network to output a “strong left” advisory instead.

Tested on: model_3_3

Input ranges: , , , , .

Desired output: the score for “strong left” is minimal.

Property φ10: For a faraway intruder, the network advises COC.

Tested on: model_4_5

Input ranges: , , , , .

Desired output: the score for COC is minimal.

Property φ11: If the intruder is near and approaching from the left but the vertical separation is comparatively large, the network still tends to advise “strong right” over COC.

Tested on: model_1_1

Input ranges: , , , , .

Desired output: the score for “strong right” is always smaller than the score for COC.

Property φ12: If the intruder is distant and is significantly slower than the ownship, the score of a COC advisory will be minimal.

Tested on: model_3_3

Input ranges: , , .

Desired output: the score for COC is the minimal score.

Property φ13: For a faraway intruder with a small vertical distance, the network always advises COC regardless of the heading directions.

Tested on: model_1_1

Input ranges: , , , , .

Desired output: the score for COC is the minimal score.

Property φ14: If the intruder is near and approaching from the left and the vertical distance is small, the network always advises “strong right” regardless of whether the previous advisory was “strong right” or “strong left”.

Tested on: model_4_1, model_5_1

Input ranges: , , , , .

Desired output: the score for “strong right” is always the minimal score.

Property φ15: If the intruder is near and approaching from the right and the vertical distance is small, the network always advises “strong left” regardless of whether the previous advisory was “strong right” or “strong left”.

Tested on: model_4_1, model_5_1

Input ranges: , , , , .

Desired output: the score for “strong left” is always the minimal score.

Property φ16:

Tested on: model_4_1

Input ranges: , , , , .

Desired output: the score for “strong right” is the minimal score.

Property φ17:

Tested on: model_4_1

Input ranges: , , , , .

Desired output: the score for “strong right” is the minimal score.

Property φ18:

Tested on: model_1_2

Input ranges: , , , , .

Desired output: the score for “strong right” is the minimal score.
