Adversarial Detection of Flash Malware: Limitations and Open Issues


Davide Maiorca, Battista Biggio, Maria Elena Chiappe, and Giorgio Giacinto

D. Maiorca, B. Biggio, M. E. Chiappe, and G. Giacinto are with the Department of Electrical and Electronic Engineering, University of Cagliari, Piazza d’Armi, 09123 Cagliari, Italy.

During the past two years, Flash malware has become one of the most insidious threats to detect, with numerous critical vulnerabilities targeting Adobe Flash Player disclosed in the wild. Research has shown that machine learning can be successfully used to tackle this increasing variability and sophistication of Flash malware, by simply leveraging static analysis to extract information from the structure of the file or from its bytecode. However, the robustness of such systems against well-crafted evasion attempts - also known as adversarial examples - has never been investigated. In this paper, we first discuss how to craft adversarial Flash malware examples, and show that it suffices to only slightly manipulate them to evade detection. We then empirically demonstrate that popular defense techniques proposed to mitigate this threat, including re-training on adversarial examples, may not always be effective. We argue that this occurs when the feature vectors extracted from adversarial examples become indistinguishable from those of benign data, meaning that the given feature representation is intrinsically vulnerable. In this respect, we are the first to formally define and quantitatively characterize this vulnerability, highlighting when an attack can be countered by solely improving the security of the learning algorithm, or when it also requires considering additional features. We conclude the paper by suggesting alternative research directions to improve the security of learning-based Flash malware detectors.

Flash-based Malware Detection, Static Analysis, Secure Machine Learning, Adversarial Training, Computer Security

1 Introduction

The significant evolution of network-based services has allowed users to easily access a comprehensive multimedia infrastructure that greatly improves the overall Internet surfing experience. It is common to visualize high-quality streams from web pages, to navigate websites through complex animations, and to play graphically-advanced games from social networks. Although HTML is nowadays considered the de facto standard to efficiently render multimedia contents, Adobe Small Web Format (SWF, previously known as ShockWave Flash) is still widely used, even though many reports predict a fast decline for this technology. Major streaming websites such as CBS, CNN, Spotify, and HBO still rely on SWF technology [25].

SWF has suffered from numerous security problems in the past two years, with a large number of vulnerabilities publicly disclosed according to CVE Details. That, of course, does not include zero-day vulnerabilities such as the one found on the Hacking Team servers during the well-known attack against the company [4]. The Hacking Team incident also provoked a massive media uproar, forcing browser vendors to take additional security measures concerning Flash Player, such as making updates compulsory for the user in order to visualize multimedia content. However, this is just a weak palliative that does not completely solve the problem, for at least two reasons. First, security updates only concern publicly disclosed vulnerabilities; second, not all security issues are rapidly patched. A more robust defense strategy is therefore required, not only to ensure protection against zero-day attacks, but also to foresee novel threats.

Signature-based detection of malware, which is still widely employed by many free and commercial anti-malware systems, is progressively becoming inadequate against the myriad of polymorphic attacks generated every second in the wild. This also applies to the detection of SWF-based malware, whose scripting code (ActionScript) contains heavily obfuscated routines that aim to confuse both automatic and manual analysis. The adoption of machine learning, combined with static or dynamic analysis of the embedded scripting code, has become a compelling alternative to signature- or rule-based systems. With respect to SWF malware, static analysis showed promising results in detecting malicious samples in the wild by extracting information either from the structure of the file [27] or from the ActionScript bytecode [34].

However, previous work has not assessed the robustness of such approaches against evasion attacks, namely, well-crafted manipulations of the input sample at test time to evade detection. More specifically, research has shown that machine-learning algorithms are vulnerable to such attacks [8, 11] - also recently referred to as adversarial examples in the context of deep learning [28, 17] - if the attacker possesses enough information about the system, and if she can perform enough changes to the extracted features. Such attacks have also been reported against Android and PDF malware detectors, but not against learning-based Flash malware detection tools [8, 26, 18, 15]. Several countermeasures have been proposed to mitigate the impact of such attacks, with the goal of reshaping the decision function of the classifier with different techniques [9, 26, 15, 17, 13, bulo16-tnnls]. For instance, game-theoretical approaches require retraining the classifier on simulated attacks until a game equilibrium between the classifier and the attacker is reached, providing formal guarantees for its existence and uniqueness [13, bulo16-tnnls]. In the context of deep neural networks, although the underlying assumptions behind existence and uniqueness of a game equilibrium are not typically satisfied, the idea of retraining the learning algorithm on the attack samples, referred to as adversarial training, has been empirically shown to be promising [17].

The main goal of this paper is thus to understand whether and to which extent systems that statically analyze SWF files can be robust against adversarial attacks, i.e., to propose a thorough security evaluation procedure for learning-based SWF malware detectors based on static analysis. To this end, we first propose a representative system for Flash malware detection, named FlashBuster. It is a static machine-learning system that employs information extracted from both the structure and the content of SWF files. This allows for a more comprehensive assessment of the extracted static information, by combining the kinds of content employed by previous state-of-the-art systems. We show that FlashBuster is able to detect the majority of malware in the wild, thus confirming results obtained in previous work. We then evaluate the security of FlashBuster by simulating evasion attacks with different levels of the attacker’s knowledge about the targeted system [11, 8] (also referred to as white-box and black-box attacks), against an increasing number of modifications to the input samples. The corresponding security evaluation curves, depicting how the detection rate decreases against attack samples that are increasingly manipulated, allow us to comprehensively understand and assess the vulnerability of FlashBuster under attack. We finally discuss the effectiveness of adversarial training against such attacks. To this end, we re-train FlashBuster on the evasion attack samples used against it, and surprisingly show that this strategy can be ineffective. We argue that this is due to an intrinsic vulnerability of the feature representation, i.e., to the fact that evasion attacks completely mimic the feature values of benign data, thus becoming totally indistinguishable to the learning algorithm.
We define this vulnerability in formal terms, and quantitatively evaluate it through the definition of a specific metric which measures the extent to which the attack samples converge towards benign data.

Our findings highlight a crucial problem that must be considered when designing secure machine-learning systems, namely, that of evaluating in advance the vulnerability of the given features. Indeed, vulnerable information may compromise the whole system even if the employed decision function is robust. In this respect, we sketch possible research directions that may lead one to design more secure machine learning-based malware detectors.

This paper is structured as follows. Section 2 provides the basics to understand the SWF format. Section 3 provides the fundamentals of the ActionScript bytecode. Section 4 describes the architecture of FlashBuster. Section 5 describes the threat model and the possible attack scenarios. Section 6 discusses the vulnerabilities that affect learning-based systems, and introduces a quantitative measure of feature and learning vulnerabilities. Section 7 provides the experimental evaluation. Section 8 describes the related work in the field. Section 9 concludes the paper.

2 ShockWave Flash File Format

ShockWave Flash (SWF) is a format that efficiently delivers multimedia contents, and it is processed by Adobe software such as Adobe Flash Player. Because of its compactness and adaptability to the most popular web browsers, SWF has been widely used for online applications such as games, animations, etc.

In this Section, we provide an overview of the SWF format, focusing on the analysis of its main components. Moreover, we provide an insight into the ActionScript Virtual Machine (ASVM), which parses and executes the scripted content that might be found in the SWF file.

2.1 SWF Basics

This Section describes the main blocks of a SWF file, which is composed of three basic elements: (1) a header that describes important file properties such as the presence of compression, the version of the SWF format, and the number of video frames; (2) a list of tags, i.e., data structures that establish and control the operations performed by the reader on the file data; (3) a special tag called End that terminates the file.

The SWF format supports two types of tags: definition and control. Definition tags assign a number called character ID to each object shown by the reader. For example, the DefineFont and DefineSprite tags assign an ID, respectively, to a font and a sprite. Such IDs are then placed in a dictionary that is accessed by control tags to establish which elements will be visualized on each frame.

Action tags represent special types of control tags whose functionalities are triggered by user actions, such as pressing a button, moving the mouse, and so forth. In later versions of the SWF format, such actions were expanded by the introduction of a scripting language called ActionScript. With the release of its latest version (3.0), ActionScript code is now compiled to an entirely new bytecode called ActionScript Bytecode (ABC), and run by the ActionScript Virtual Machine 2 (ASVM 2). More about the SWF file structure can be found in the official SWF reference [3].
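To make the header layout described above concrete, the following Python sketch parses the fixed part of a SWF header (signature, version, and uncompressed file length). The function name and the synthetic input are illustrative; real files would also require parsing the RECT frame-size field and the tag list that follow.

```python
import struct
import zlib

def parse_swf_header(data: bytes) -> dict:
    """Parse the fixed part of a SWF header (sketch; not a full parser)."""
    signature = data[:3].decode("ascii")  # "FWS" (plain), "CWS" (zlib), "ZWS" (LZMA)
    if signature not in ("FWS", "CWS", "ZWS"):
        raise ValueError("not a SWF file")
    version = data[3]                                 # one-byte SWF version
    file_length = struct.unpack("<I", data[4:8])[0]   # uncompressed length, little-endian
    body = data[8:]
    if signature == "CWS":                            # zlib-compressed body
        body = zlib.decompress(body)
    # "ZWS" (LZMA) decompression is omitted in this sketch.
    return {"signature": signature, "version": version,
            "file_length": file_length, "body": body}

# Minimal synthetic example: an uncompressed header followed by an empty body.
hdr = parse_swf_header(b"FWS" + bytes([10]) + struct.pack("<I", 8))
```

A structure-based detector such as the one discussed later would continue by walking the tag records inside `body`.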

2.2 ActionScript Virtual Machine

This Section describes the essentials of ASVM 2, the version of the ActionScript Virtual Machine released in 2006. The introduction of characteristics such as Just-In-Time (JIT) compilation, namespaces, and packaging brought huge speed improvements compared to the previous version. Computation in the ASVM is based on the execution of method bodies composed of instructions. Each method body runs in a specific context that defines information such as default parameters.

ASVM 2 runs ABC files in four stages: (i) loading, where the ABC file is copied into memory; (ii) linking, where complex data structures are created; (iii) verification, where all the resolved data structures are checked for coherence; (iv) execution, where the file is JIT-compiled and executed.

More about ASVM can be found on the official VM references [2]. For the purposes of our paper, it is now important to provide a description of the bytecode contents, as FlashBuster relies on analyzing such information to perform detection.

3 ActionScript Bytecode (ABC)

To simplify the comprehension of the static methodology employed in this paper, we provide an example of how a simple method written in ActionScript is compiled into ABC bytecode, by disassembling and decompiling an application. Note that there might be multiple ways to express the disassembled code as ActionScript source code.

This method is a constructor of the class PingServers that performs three operations: (i) the superclass constructor is invoked; (ii) the method addCallback of the ExternalInterface class is called to make the ActionScript code interact with external containers (e.g., JavaScript, HTML files, and so forth); in particular, this call registers the function pingCallback so that it can be invoked by the container under the constant name RESPOND_PINGDATA; (iii) the instance variables pingTimer and pingTimeouts are initialized to zero and null, respectively. More about ActionScript programming can be found in the official ActionScript reference [1].

From the perspective of the ActionScript bytecode, such operations can be summed up as follows: (i) the superclass is invoked through the instructions get_local_0, pushscope and constructsuper (lines -); as such instructions are not relevant for the scope of this paper, we refer the reader to the ActionScript specifications for more insights [2]; (ii) the addCallback method is invoked through the instructions findpropstrict and getproperty (lines -), which respectively look up and fetch the required packages, classes and methods; the resolution of such names is performed through data structures called Names, which are very important for the scope of our paper and which we describe in more detail later on; (iii) the fields pingTimer and pingTimeouts are set through the instructions get_local_0, pushbyte, pushnull and initproperty; again, Name structures define which package and class the fields belong to.

 1  //ActionScript decompiled code
 2  public function PingServers()
 4      super();
 5      ExternalInterface.addCallback(Cmd.RESPOND_PINGDATA,this.pingCallback);
 6      this.pingTimer = 0;
 7      this.pingTimeouts = null;

11  //ActionScript equivalent bytecode
15  constructsuper 0
16  findpropstrict Qname(PackageNamespace("flash.external"),"ExternalInterface")
17  getproperty Qname(PackageNamespace("flash.external"),"ExternalInterface")
18  findpropstrict Qname(PackageNamespace(""),"Cmd")
19  getproperty Qname(PackageNamespace(""),"Cmd")
20  getproperty Qname(PackageNamespace(""),"RESPOND_PINGDATA")
22  getproperty Qname(PrivateNamespace(""),"pingCallback")
23  callproperty Qname(PackageNamespace(""),"addCallback") 2
26  pushbyte 0
27  initproperty Qname(PrivateNamespace(""),"pingTimer")
30  initproperty Qname(PrivateNamespace(""),"pingTimeouts")
Listing 1: An example of how ActionScript code (lines 1-8) is compiled into ABC bytecode (lines 11-31). Debug information has been removed for the sake of brevity, which explains the gaps in the line numbering.

From the description above, it is clear that Name structures play a very important role in defining which classes and methods are invoked by the ActionScript code. We now provide an insight into such structures.

3.1 Names

Names are data structures composed of one unqualified name (for example, a class name) and one or more namespaces, which typically represent the packages from which classes or methods are resolved. Name resolution can occur either at compile time or at runtime. Additionally, there might be multiple namespaces from which the same unqualified name can be obtained; the choice of the appropriate namespace from a list of candidates typically occurs at runtime.

According to the number of namespaces, Names can be subdivided into two categories:

QName. A data structure composed of one unqualified name and one namespace. This is the simplest Name structure that can be found. Both the name and the namespace are resolved at compile time.

Multiname. A data structure composed of one unqualified name and a set of namespaces. The namespace set is resolved at compile time; hence, any of the namespaces in the set can be used to qualify the name.

When a Name is resolved at runtime, it is loaded from the constant pool on demand.

3.2 SWF Malware

To better understand the approach proposed in this paper, Listing 2 shows a typical action performed by ActionScript-based malware (MD5: 92711a995ecdd0663b608bd664b2ca89), which exploits the CVE-2015-3133 vulnerability (according to VirusTotal), a memory-corruption flaw that allows remote execution of arbitrary code. As in the first part of Section 3, we describe the aforementioned actions by showing both the decompiled ActionScript source and the disassembled bytecode. The code in the Listing reads an unsigned byte from _loc1_, an instance of the class IG (note: this name was originally obfuscated with non-ASCII characters). Such class inherits (see the coerce instruction) from the built-in flash.utils.ByteArray class. The code then performs a subtraction and assigns the result to the variable isAS3. This value will then be copied to another array of bytes (we do not report this action for space reasons). Note how the read follows the little-endian byte order (by means of flash.utils.Endian). From the name of the decompiled variable, we assume that the byte sequences might belong to code that is copied elsewhere. Manipulating information in this way is very common in Flash-based malware. It is interesting to observe how system API methods and classes are essential for the attacker to build shellcodes or to perform buffer overflow or heap-spraying attacks. This often happens because the official ActionScript API makes it easy to manage low-level data structures.

 2  //Decompiled ActionScript
 4  _loc1_ = new IG();
 5  _loc1_.endian = Endian.LITTLE_ENDIAN;
 6  _loc1_.position = 0;
 7  this.isAS3 = _loc1_.readUnsignedByte() - 1;

 9  //Disassembled Bytecode
11  findpropstrict Qname(PackageNamespace(""),"IG")
12  constructprop Qname(PackageNamespace(""),"IG") 0
13  coerce Qname(PackageNamespace("flash.utils"),"ByteArray")
16  getlex Qname(PackageNamespace("flash.utils"),"Endian")
17  getproperty Qname(PackageNamespace(""),"LITTLE_ENDIAN")
18  setproperty Qname(PackageNamespace(""),"endian")
20  pushbyte 0
21  setproperty Qname(PackageNamespace(""),"position")
24  callproperty Qname(PackageNamespace(""),"readUnsignedByte") 0
Listing 2: Part of the malicious code contained in the 92711a995ecdd0663b608bd664b2ca89 sample, shown as its decompiled output (lines 4-7) and its equivalent bytecode (lines 11-25).

4 FlashBuster Architecture

FlashBuster is a static, machine learning-based system whose goal is to detect malicious SWF files and to distinguish them from benign ones. This is achieved by leveraging information provided by the tag structure and the ActionScript bytecode of the file. Our goal was in particular to reproduce a combination of the main characteristics of previous state-of-the-art systems (see Section 8), which proved to be effective at detecting SWF malware.

Figure 1 shows the general architecture of the system, which can be divided into three modules:

Parser. This module analyzes the SWF file and extracts information about its structure and its ActionScript bytecode.

Feature Extractor. This module transforms the information obtained from the parser into a vector of numbers, which characterizes the whole SWF file.

Classifier. This module decides on the maliciousness of the SWF file based on the feature vector it receives as input. The module is a mathematical function that tunes its parameters on a number of examples taken from a so-called training set. Once its parameters have been set, the classifier is able to recognize malicious files that were not included in the training examples.

Fig. 1: Graphical architecture of FlashBuster.
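The three modules above can be sketched as a minimal pipeline. All function bodies here are hypothetical stand-ins: the real parser builds on a Flash disassembler, and the real classifier is a trained learning algorithm, not a hand-written rule.

```python
# Hypothetical stand-ins for the three FlashBuster modules; illustrative only.
def parse(swf_bytes):
    # would return the tag list and the disassembled ABC bytecode
    return ["ShowFrame", "DoABC"], 'findpropstrict Qname(PackageNamespace(""),"Cmd")'

def extract_features(tags, bytecode):
    # a vector of numbers characterizing the whole file
    return [tags.count("ShowFrame"), tags.count("DoABC"), bytecode.count("Qname")]

def classify(features):
    # stand-in decision function: a trained model would be used instead
    return "malicious" if features[1] > 0 and features[0] <= 1 else "benign"

verdict = classify(extract_features(*parse(b"FWS...")))
```

The toy rule encodes the intuition developed later in this section: files with scripting code but almost no visual content are suspicious.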

In the following, we provide a more detailed description of each component of the system.

4.1 Parser

As previously mentioned, this module performs data preprocessing and selects the information that will be further processed by the other modules. FlashBuster leverages a modified version of JPEXS, a powerful, Java-based Flash disassembler and decompiler. This software is based on RABCDasm, one of the most popular Flash disassemblers, and adds new features such as de-obfuscation, file debugging and preview, etc.

In particular, the parser featured by FlashBuster performs the following operations: (i) it statically de-obfuscates the ActionScript code, which is important since some malicious files use name obfuscation or other popular techniques to conceal their attacks; (ii) it extracts the complete SWF structure in terms of tags; (iii) it disassembles the ABC bytecode so that it can be read as a plain-text file. Both the tag structure and the ABC bytecode are then sent to the feature extractor module for further analysis.

4.2 Feature Extraction

This module is the core of the whole system. It converts the information extracted by the parser into a vector of numbers that is then sent to the classifier. As this information relates to both the structure and the content of the SWF file, we now provide a detailed description of the features extracted in the two cases.

4.2.1 Structural Features (Tags)

These features are related to the information that can be extracted from the SWF tags, and are crucial to understanding which objects and actions are executed by the SWF file. The main idea here is that malware typically does not contain particularly complex multimedia content, such as videos with many frames or audio files. Many malware samples simply display images such as rectangles or blank backgrounds. For this reason, we extract the following features from the file structure, corresponding to the number of occurrences of specific SWF tags within the file:

Frames. This feature counts the tag ShowFrame, which is used to display frames.

Shapes. This feature counts the tag DefineShape (in any of its four variants), used to define new shapes that will be plotted on the screen.

Sounds. This feature checks for the presence of sound-related events by counting any of the following tags: DefineSound, SoundStreamHead, SoundStreamHead2 and SoundStreamBlock.

BinaryData. This feature counts groups of embedded data, represented by the tag DefineBinaryData.

Scripts. This feature counts how many ActionScript code blocks are contained in the file. A SWF file does not necessarily require such code to perform its operations, especially in benign files (ActionScript was initially conceived as an aid to the execution of SWF files). This is done by analyzing the following tags: DoABC, DoABCDefine, DoInitAction, DoAction.

Fonts. This feature counts font-related objects by detecting any of the following tags: DefineFont (in all its variants), DefineCompactedFont, DefineFontInfo (in all its variants), DefineFontName.

Sprites. This feature counts the number of sprites by examining the tag DefineSprite.

MorphShapes. This feature counts the number of morphed shapes (i.e., shapes that might transform into new ones) by examining the tag DefineMorphShape (and its variants).

Texts. This feature counts text-related objects by checking any of the following tags: DefineText (along with its variants) and DefineEditText.

Images. This feature counts the images contained in the file by examining any of the following tags (and their variants): DefineBits, DefineBitsJPEG, JPEGTables and DefineBitsLossless.

Videos. This feature counts the number of embedded videos by examining the tags DefineVideoStream and VideoFrame.

Buttons. This feature counts the buttons through which the user can interact with the SWF content. This is done by examining the tag DefineButton (along with its variants).

Errors. This feature counts the errors encountered by the parser when analyzing specific tags. This often happens, for example, when the SWF file is malformed due to errors in its compression.

Unknown. This feature counts the tags that do not belong to the SWF specifications (probably malformed tags).

The reader can find more information about these tags on the official SWF specification [3].
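As an illustration, the structural features above amount to counting tag occurrences per group. The mapping below covers only a few of the listed groups and is a sketch, not the exact feature set used by FlashBuster.

```python
from collections import Counter

# Illustrative mapping from feature names to the SWF tags they count
# (a subset of the groups listed above; tag names follow the SWF specification).
TAG_GROUPS = {
    "Frames":  {"ShowFrame"},
    "Shapes":  {"DefineShape", "DefineShape2", "DefineShape3", "DefineShape4"},
    "Sounds":  {"DefineSound", "SoundStreamHead", "SoundStreamHead2", "SoundStreamBlock"},
    "Scripts": {"DoABC", "DoABCDefine", "DoInitAction", "DoAction"},
    "Sprites": {"DefineSprite"},
}

def structural_features(tag_names):
    """Map a parsed tag list to per-group occurrence counts."""
    counts = Counter(tag_names)
    return {feat: sum(counts[t] for t in tags) for feat, tags in TAG_GROUPS.items()}

fv = structural_features(["ShowFrame", "ShowFrame", "DefineShape", "DoABC"])
```

Here a file with two frames, one shape, and one script block yields the counts {Frames: 2, Shapes: 1, Scripts: 1, ...}.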

Despite being effective, structural features must be treated carefully, as benign and malicious files can resemble each other in terms of their tag structure. For this reason, structural features alone are not enough to ensure reliable detection, and must be integrated with information from the scripted content of the file.

4.2.2 ActionScript Bytecode Features (API calls)

As structural features (i.e., tags) might suffer from the limitations mentioned in Sect. 4.2.1, we employ an additional set of features that focuses on the content of the scripting code that might be included in the file. Although ActionScript is not strictly necessary for benign operations, it is essential for executing attacks. In particular, as shown in Sect. 3.2, the attacker usually needs to resort to system APIs to perform memory manipulation or to trigger specific events. Moreover, APIs can be used to communicate with external interfaces or to contact an external URL that automatically drops malicious content on the victim’s system.

System APIs belong to the official Adobe ActionScript specifications [1]. For this reason, we created an additional feature set covering all the classes and methods belonging to such specifications. More specifically, each of these features represents the number of occurrences of a specific system method or class inside the bytecode. We chose to use only system-based APIs for two reasons: (i) the feature vector does not include user-defined APIs, so the feature list is independent of the training data considered for the analysis; (ii) system-based calls are more difficult to obfuscate, as they are not directly implemented by the user.

With respect to the example described in Sect. 3.2, we therefore consider as features the classes flash.utils.ByteArray and flash.utils.Endian, and the method readUnsignedByte. Conversely, we do not consider the class IG, as it was directly implemented by the user. The rationale behind counting the occurrences of system-based methods and classes is that an attacker might systematically use functions such as readByte or writeByte to manipulate the memory. Alternatively, she might attempt to repeatedly trigger events or to access specific interfaces. Counting occurrences might also increase the effort required of the attacker to evade the classification algorithm.
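A minimal sketch of this extraction step, assuming the disassembled bytecode is available as plain text (as produced by the parser module): only names found in a small stand-in whitelist of system APIs are counted, so the user-defined class IG from Sect. 3.2 is ignored.

```python
import re

# Tiny stand-in for the full list of system APIs from the ActionScript specs [1].
SYSTEM_APIS = {"flash.utils.ByteArray", "flash.utils.Endian", "readUnsignedByte"}

# Matches e.g. Qname(PackageNamespace("flash.utils"),"ByteArray")
QNAME = re.compile(r'Qname\((?:Package|Private)Namespace\("([^"]*)"\),"([^"]+)"\)')

def api_features(disassembly: str):
    """Count occurrences of whitelisted system classes/methods in the bytecode."""
    counts = {api: 0 for api in SYSTEM_APIS}
    for ns, name in QNAME.findall(disassembly):
        # try the fully qualified name first, then the bare method name
        for key in (f"{ns}.{name}" if ns else name, name):
            if key in counts:
                counts[key] += 1
                break
    return counts

code = '''coerce Qname(PackageNamespace("flash.utils"),"ByteArray")
getlex Qname(PackageNamespace("flash.utils"),"Endian")
callproperty Qname(PackageNamespace(""),"readUnsignedByte") 0
constructprop Qname(PackageNamespace(""),"IG") 0'''
feats = api_features(code)
```

On the excerpt from Listing 2, each of the three system APIs is counted once, while the user-defined IG contributes nothing.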

4.3 Classification

The features extracted by FlashBuster can be combined with different classification algorithms. In the experimental evaluation described in Section 7, we focused on SVMs and Random Forests, as these were successfully employed in several other malware detection tasks [34, 27].

5 Attack Model and Scenarios

To assess the security of FlashBuster against adversarial manipulation of the input data (which can be either performed at training time or at test time), we leverage an attack model originally defined in the area of adversarial machine learning [11, 10]. It builds on the well-known taxonomy of Barreno et al. [7, 6, 19] which categorizes potential attacks against machine-learning algorithms along three axes: security violation, attack specificity and attack influence. By exploiting this taxonomy, the attack model enables defining a number of potential attack scenarios, in terms of explicit assumptions on the attacker’s goal, knowledge of the system, and capability of manipulating the input data.

5.1 Attacker’s Goal

The attacker’s goal is defined in terms of two characteristics, i.e., the security violation and the attack specificity.

Security violation. In security engineering, a system can be violated by compromising its integrity, availability, or privacy. Violating the integrity of FlashBuster amounts to having malware samples undetected; its availability is compromised if it misclassifies benign samples as malware, causing a denial of service to legitimate users; and privacy is violated if it leaks confidential information about its users.

Attack specificity. The specificity of the attack can be targeted or indiscriminate, based on whether the attacker aims to have only specific samples misclassified (e.g., a specific malware sample to infect a particular device or user), or if any misclassified sample meets her goal (e.g., if the goal is to launch an indiscriminate attack campaign).

We formalize the attacker’s goal here in terms of an objective function that evaluates the extent to which the manipulated attack samples meet the attacker’s goal.

5.2 Attacker’s Knowledge

The attacker may have different levels of knowledge of the targeted system [11, 10, 7, 6, 19, 32]. In particular, she may have complete, partial, or no knowledge at all of: (i) the training data D; (ii) the feature set X, i.e., how input data is mapped onto a vector of feature values; (iii) the learning algorithm and its decision function f, including its (trained) parameters w (e.g., feature weights and bias in linear classifiers), if any. In some applications, the attacker may also exploit feedback on the classifier’s decisions to improve her knowledge of the system and, more generally, her attack strategy [11, 10, 7, 19].

The attacker’s knowledge can be represented in terms of a space Θ that encodes knowledge of the data D, the feature space X, the learning algorithm and its decision function f, along with its trained parameters w; a given scenario is thus encoded as a vector θ = (D, X, f, w) ∈ Θ. In this work, we will simulate both limited- and perfect-knowledge attacks, as detailed below.

5.2.1 Limited-Knowledge (LK) Black-Box Attacks

Under this scenario, the attacker is typically only assumed to know the feature representation X and the learning algorithm, but not the training data D and the trained classifier f. This is a common assumption under the security-by-design paradigm: the goal is to show that the system may be reasonably secure even if the attacker knows how it works but does not know any detail of the specific deployed instance [11, 10, 7, 19, 8, 15].

In particular, following the definition proposed by Biggio et al. [23], we distinguish the cases in which either the training data or the trained classifier is unknown. In the first case, to which we refer as LK attacks with surrogate data, it is often assumed that the attacker is able to collect a surrogate dataset D̂ and learn a surrogate classifier on it to approximate the true f [8, 24]. Note also that the class labels of D̂ can be refined using feedback provided by the targeted classifier f, when available (e.g., as an online service providing class labels for input data). The knowledge-parameter vector can thus be encoded as θ = (D̂, X, f, ŵ).

In the second case, to which we refer as LK attacks with surrogate learners, we assume that the attacker knows the training data D but not the learning model. She thus trains a surrogate function f̂ on the same training data, and the knowledge-parameter vector can be encoded as θ = (D, X, f̂, ŵ).
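A minimal sketch of the surrogate idea, with all data and the target’s decision rule invented for illustration: the attacker labels her own samples by querying the target as a black box, then fits a simple surrogate (here, a nearest-centroid rule) on those labels.

```python
# Hypothetical black-box target; the attacker can only query it for labels.
def target(x):
    return 1 if 2 * x[0] - x[1] > 0 else 0   # 1 = malicious, 0 = benign

surrogate_data = [[0, 5], [1, 4], [4, 0], [5, 1]]      # attacker-collected samples
labels = [target(x) for x in surrogate_data]           # feedback from the deployed system

def fit(data, labels):
    """Fit a nearest-centroid surrogate on the queried labels."""
    cents = []
    for c in (0, 1):
        pts = [x for x, y in zip(data, labels) if y == c]
        cents.append([sum(p[i] for p in pts) / len(pts) for i in range(2)])
    return cents

def predict(cents, x):
    d = [sum((a - b) ** 2 for a, b in zip(x, c)) for c in cents]
    return 0 if d[0] < d[1] else 1

surrogate = fit(surrogate_data, labels)
```

The surrogate then stands in for the unknown f when optimizing the attack, under the assumption that evasion transfers from the surrogate to the target.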

5.2.2 Perfect-Knowledge (PK) White-Box Attacks

This is the worst-case setting, in which also the targeted classifier is fully known to the attacker, i.e., θ = (D, X, f, w). Although it is unlikely in practice that the attacker gets to know even the trained classifier’s parameters, this white-box setting is particularly interesting as it provides an upper bound on the performance degradation incurred by the system under attack, and can be used as a reference to evaluate the effectiveness of the system under the other (less pessimistic) attack scenarios.

5.3 Attacker’s Capability

The attacker’s capability of manipulating the input data is defined in terms of the so-called attack influence and of some application-specific constraints.

Attack Influence. This defines whether the attacker can only manipulate data at test time (exploratory influence), or if she can also contaminate the training data (causative influence). This is possible, for instance, if the system is retrained online using data collected during operation which can be manipulated by the attacker [7, 19, 11].

Application-specific constraints. These constraints define how and to which extent the input data (and its features) can be modified to reach the attacker’s goal, according to the given application. In many cases, these constraints can be directly encoded in terms of distances in feature space, computed between the source malware data and its manipulated versions [14, 21, 16, 29, 13, 8]. FlashBuster is not an exception to this rule, as we will discuss in the remainder of this section. In general, the attacker’s capability can thus be represented in terms of a set of possible modifications performed on the input samples.

5.4 Attack Strategy

The attack strategy amounts to formalizing the derivation of the attack in terms of an optimization problem [8, 11]. Given the attacker’s goal, along with a knowledge-parameter vector and a set of manipulated attacks, the attack strategy is given as:


Under this formulation, one can characterize different attack scenarios. The two main ones often considered in adversarial machine learning are referred to as classifier evasion and poisoning [8, 12, 35, 22, 23, 7, 6, 19, 11, 10]. In the remainder of this work we focus on classifier evasion, while we refer the reader to [11, 35, 22, 23] for further details on classifier poisoning.

5.5 Evasion Attacks

Evasion attacks consist of manipulating malicious samples at test time to have them misclassified as benign by a trained classifier. The attacker’s goal is thus to violate system integrity, either with a targeted or with an indiscriminate attack, depending on whether the attacker is targeting a specific machine or running an indiscriminate attack campaign. More formally, evasion attacks can be written in terms of the following optimization problem:


where the optimization runs over the feature vector associated with the modified attack sample, which is constrained with respect to the feature vector of the source (unmodified) malware sample, while the objective being minimized is the surrogate classifier estimated by the attacker; the feature extraction function maps each sample onto its feature vector. With respect to Eq. (1), note that here samples can be optimized one at a time, as they can be independently modified.

As in previous work [11, 8, 15], we first simulate the attack at the feature level, i.e., we directly manipulate the feature values of malicious samples without constructing the corresponding real-world samples while running the attack. We discuss in Sect. 5.7 how to create the corresponding real-world evasive malware samples. The above problem can thus be simplified as:


where we have also made explicit the manipulation constraints used to attack FlashBuster. In particular, the box constraint (in which the inequality holds for each element of the vector) bounds the minimum and maximum feature values of the attack sample. For FlashBuster, we will only consider feature injection, i.e., we will only allow injection of structural and bytecode features within the SWF file, to avoid compromising the intrusive functionality of the malware samples. This can be simply accounted for by setting the lower bound of the box constraint equal to the feature values of the source malware sample. The additional distance constraint then sets the maximum number of structural and bytecode features (i.e., tags and API calls) that can be injected into the file. The solution to the above optimization problem amounts to identifying which features should be modified to maximally decrease the value of the classification function, i.e., to maximize the probability of evading detection [11, 8]. Clearly, this set of features varies depending on the input sample.

Input: the malicious sample; the initial location of the attack sample; the surrogate classifier (Eq. 3); the maximum number of injected structural and bytecode features (Eq. 4); the box-constraint bounds (Eq. 4); a small positive constant.
Output: the evasion attack sample.
Algorithm 1 Evasion Attack

5.6 Evasion Attack Algorithm

If the objective function (i.e., the decision function of the classifier) is not linear, as for kernelized SVMs and random forests, Problem (3)-(4) corresponds to a nonlinear programming problem with linear constraints. The solution is therefore typically found at a local minimum of the objective function. Problem (3)-(4) can be solved with standard algorithms, but this is typically not very efficient, as such solvers do not exploit specific knowledge about the evasion problem. We thus devise an ad-hoc solver based on exploring a descent direction aligned with the gradient using a bisect line search, similar to that used in our previous work [26]. Its basic structure is given as Algorithm 1. To minimize the number of iterations, we explore one feature at a time (starting from the most promising feature, i.e., the one exhibiting the highest gradient variation in absolute value), leveraging the fact that the solution will be sparse (as the problem is constrained). We also minimize the number of gradient and function evaluations to further speed up our evasion algorithm; e.g., we only re-compute the gradient of the objective when no better point is found along the direction under exploration. Finally, we run the attack twice with different initializations (first starting from the source malware sample, and then from a benign sample projected onto the feasible domain), to mitigate the problem of ending up in a local minimum that does not evade detection. This problem was first pointed out in [8], where the authors introduced a mimicry term to overcome it. Here we just consider a different initialization mechanism, which allows us to get rid of the complicated mimicry term in the objective function.
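As an illustration of the greedy, sparse descent idea (not the authors' actual solver), the following sketch performs feature injection against a linear surrogate, for which the gradient of the decision function is simply the weight vector; the data, budget and bounds are hypothetical:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

# Toy data: 20 count-like features in [0, 1); class 1 plays "malicious".
X = rng.random((300, 20))
y = (X[:, :5].sum(axis=1) > 2.5).astype(int)
clf = LinearSVC(C=1.0, max_iter=5000).fit(X, y)

def evade(x, clf, k=5, x_max=1.0):
    """Greedy feature injection: increase at most k features (up to the
    box bound x_max) to lower the surrogate's decision score, starting
    from the most promising ones (most negative gradient components)."""
    x_adv = x.copy()
    w = clf.coef_.ravel()          # gradient of a linear decision function
    for j in np.argsort(w)[:k]:    # most negative weights first
        if w[j] >= 0:
            break                  # further injection cannot help
        x_adv[j] = x_max           # inject up to the box bound
    return x_adv

x = X[y == 1][0]                   # a "malicious" sample
x_adv = evade(x, clf)
score_before = clf.decision_function(x.reshape(1, -1))[0]
score_after = clf.decision_function(x_adv.reshape(1, -1))[0]
```

Since only features with negative weights are increased, the decision score can only decrease, and the injection-only constraint (the adversarial vector never falls below the original one) is respected by construction.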

5.7 Constructing Adversarial Malware Examples

A common problem when performing adversarial attacks against machine learning is evaluating whether they can truly be performed in practice. As gradient-descent attacks are performed at the feature level, the attacker is then supposed to solve the so-called inverse feature-mapping problem, i.e., to reconstruct from the obtained features the sample that can be deployed against the classifier itself [19, 11, 15].

Most of the time, such an operation is not easy to perform, not only from a theoretical standpoint (as discussed in [19]), but also from a practical perspective. In the specific case of Flash malware, and of malware in general, generating the corresponding real-world adversarial examples may be complicated, as a single wrong operation can compromise the intrusive functionality of the embedded exploitation code [15]. For example, removing a structural feature such as a frame or a script might break the SWF file entirely. This is why we only consider injection of additional content into the SWF file.

Constructing the real-world adversarial example (i.e., the malicious evasive SWF file) is a rather straightforward process in our case. In particular, it is possible to inject structural features by using JPEXS, whose graphical interface allows control and structural tags to be deliberately added. It is also possible to inject bytecode features by editing the output disassembled by RABCDasm and re-assembling the file (this can also be done graphically with JPEXS).

For the purposes of this paper, we created some working proofs of concept, in which structural and content-based features were deliberately and manually added using the aforementioned tools. However, we plan to make the creation process automatic in future work.

6 On the Vulnerability of Feature Spaces and Learning Algorithms

We discuss here an interesting aspect related to the vulnerability of learning-based systems, first highlighted in [9, 26] and conceptually represented in Fig. 2, which shows two classifiers on a two-feature space. The classifiers can be seen as surfaces closed around malicious (left) and benign (right) samples. The red, blue and green samples represent, respectively, malicious, benign and attack samples. An evasion attack sample is typically misclassified because either: (i) its feature vector is far enough from those of the rest of the known training samples (both malicious and benign), or (ii) it is indistinguishable from those exhibited by benign data. In the former case, usually referred to as blind-spot evasion, retraining the classifier on the adversarial examples (with adversarial training) should successfully enable their detection, improving classifier security. This means that the classification error induced by such attacks could have been reduced in advance, by designing a learning algorithm capable of anticipating this threat; for instance, by building a classifier that better encloses benign data, and classifies as malicious the regions of the feature space where training data is absent or scarce (see, e.g., the classifier in the right plot of Fig. 2). We thus refer to this vulnerability as a vulnerability induced by the learning algorithm (left plot in Fig. 2). In the latter case, instead, retraining the classifier would be useless, as the whole distribution of the evasion samples overlaps that of benign data in feature space, i.e., the attack increases the Bayesian (non-reducible) error. We thus refer to this attack as mimicry evasion, and to the corresponding vulnerability as a vulnerability induced by the feature representation (right plot in Fig. 2).
In fact, if a malware sample can be modified to exhibit the same feature values of benign data, it means that the given features are intrinsically weak, and no secure learning algorithm can prevent this issue.

This notion can also be motivated in formal terms, similarly to the risk analysis reported in [9]. From a Bayesian perspective, learning algorithms assume an underlying (though unknown) distribution governing the generation of benign and malicious data, and aim to minimize the classification error, i.e., the expectation of the zero-one loss between the true class label and the label predicted by the classification function. Let us consider the optimal classifier achieving the minimum (Bayesian) error on this distribution. It is clear that, if there is no evidence of (training) data in some regions of the feature space (usually referred to as blind spots), such regions can be arbitrarily classified by the optimal classifier as either benign or malicious, with no impact on the classification error (the expectation will anyway be zero in those regions). This is precisely the underlying reason behind the vulnerability of learning algorithms to blind-spot evasion.

Within this setting, evasion attacks can indeed be thought of as a manipulation of the input samples through a function which essentially introduces a deviation from the source distribution. By comparing the error of the trained classifier on the manipulated samples with that of the optimal (Bayesian) classifier on such manipulated data, we can decompose the increase in the classification error on the manipulated data as:


The first term is the increase in Bayesian error before and after the attack (which characterizes the vulnerability of the feature representation), while the second represents the classification error reducible by retraining on the attack samples (i.e., the vulnerability of the learning algorithm).

Fig. 2: Conceptual representation of learning (left) and feature (right) vulnerability. Red, blue and green samples represent, respectively, malicious, benign and attack samples.

Under this interpretation, we can introduce a metric to quantitatively assess the feature vulnerability. To this end, we first consider the so-called Bhattacharyya Coefficient (BC):

$BC = \int_{\mathcal{X}} \sqrt{p(\boldsymbol{x} \,|\, \mathcal{B}) \; p(\boldsymbol{x} \,|\, \mathcal{A})} \, d\boldsymbol{x}$
This coefficient essentially evaluates the overlap between the distributions of benign and manipulated attack samples over the whole feature space. If the two distributions are exactly the same, $BC = 1$, while if they are perfectly separated, $BC = 0$. A convenient aspect of this metric is that it has a closed form for several known distributions; e.g., in the case of multivariate Gaussian distributions, it is given as $BC = e^{-B}$, where $B = \frac{1}{8} (\boldsymbol{\mu}_a - \boldsymbol{\mu}_b)^\top \boldsymbol{\Sigma}^{-1} (\boldsymbol{\mu}_a - \boldsymbol{\mu}_b) + \frac{1}{2} \ln \frac{\det \boldsymbol{\Sigma}}{\sqrt{\det \boldsymbol{\Sigma}_a \det \boldsymbol{\Sigma}_b}}$ is the Bhattacharyya distance, $\boldsymbol{\Sigma} = \frac{1}{2}(\boldsymbol{\Sigma}_a + \boldsymbol{\Sigma}_b)$, and $\boldsymbol{\mu}_b$, $\boldsymbol{\mu}_a$, $\boldsymbol{\Sigma}_b$, $\boldsymbol{\Sigma}_a$ are the means and covariance matrices of benign and attack data, respectively. To assess feature vulnerability, we use this expression for $BC$, and exploit the well-known result that the Bayesian error is upper bounded by $\frac{1}{2} BC$ (for equal priors). Accordingly, we measure the difference between this bound computed after and before the attack, which gives us an (approximate) indication of the increase in the Bayesian error induced by the attack, and thus a quantitative measure of the feature vulnerability (i.e., of the first term in Eq. 5).
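Under the simplifying assumptions adopted later in the experiments (Gaussian classes with a shared covariance), the coefficient and the resulting error bound can be computed as in this sketch (function names are ours):

```python
import numpy as np

def bhattacharyya_coefficient(mu_b, mu_a, cov=None):
    """BC = exp(-B) for two Gaussians with a shared covariance; with
    the identity covariance, B reduces to ||mu_b - mu_a||^2 / 8."""
    d = np.asarray(mu_b, dtype=float) - np.asarray(mu_a, dtype=float)
    if cov is None:
        B = d @ d / 8.0  # identity-covariance case
    else:
        B = d @ np.linalg.solve(np.asarray(cov, dtype=float), d) / 8.0
    return np.exp(-B)

# Identical distributions overlap fully (BC = 1); distant ones do not.
bc_same = bhattacharyya_coefficient([0.0, 0.0], [0.0, 0.0])
bc_far = bhattacharyya_coefficient([0.0, 0.0], [10.0, 0.0])

# With equal priors, the Bayesian error is upper bounded by BC / 2.
bayes_bound = bc_far / 2.0
```

An attack that pulls the malicious distribution towards the benign one drives BC towards 1, raising the bound on the non-reducible error.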

7 Experimental Evaluation

The experimental evaluation proposed in this paper is divided into three parts.

Standard Evaluation. FlashBuster was trained on a dataset of randomly chosen malicious and benign SWF files, and tested against a number of previously unseen malicious and benign files. This experiment provides information on the system’s general performance in terms of true and false positives.

Adversarial Evaluation. In this experiment (directly linked to the previous one), we evaluated the performance of FlashBuster against adversarial attacks performed according to a gradient-descent strategy (see Section 5.5).

Temporal Evaluation. FlashBuster was trained on a dataset of samples first seen before a certain year, and tested against a set of samples released after that year. This test assesses the capability of FlashBuster to predict novel vulnerabilities and attacks.

In the following, we describe the dataset we employed for our experiments, as well as the basic setup of the preprocessing and feature extractor modules. This setup is common to all the evaluations described in this Section.

7.1 Basic Setup

Dataset. The dataset used for our experiments is composed of files, of which are malicious and are benign. Notably, every analyzed file (including benign ones) contains ActionScript code; this avoids analyzing files that contain no ActionScript code and are therefore most likely benign. The malicious files, as well as part of the benign ones, were retrieved from the VirusTotal service. Other benign files were retrieved from the DigitalCorpora repository.

Preprocessing. As mentioned in Section 4.1, the original JPEXS parser was modified to allow faster analysis of multiple SWF files, as well as better integration with the other components of FlashBuster. All data related to tags and bytecode are extracted and dumped to files, in order to allow subsequent analysis by the other modules of FlashBuster. The extraction time may vary from milliseconds to some minutes for very large files.

Feature Extraction. As we count the occurrences of each feature, values can differ considerably from one feature to another. Considering the possibility of adversarial attacks, this might give the attacker more degrees of freedom to confuse the classifier. For this reason, we adopted a feature normalization and selection strategy composed of three stages:

(i) We established an upper limit for the feature values in our dataset. For our experiments, we chose a reasonable value that limits the number of injectable features in a specific sample (in particular, the maximum amount is given by the product of the number of employed features and the maximum feature value). Without limiting the feature values, it would be possible to generate samples with anomalous values for specific features, which would be rather difficult to find in practice. Moreover, performing perfect-knowledge attacks without limiting the feature values is computationally very expensive.

To confirm that limiting the feature values does not influence classification performance, we repeated our experiments with higher upper limits, without noticing significant differences in performance.

(ii) We selected the most frequently occurring features in the dataset. This threshold was chosen as the minimum value that did not affect classification performance in our experiments. In particular, we extracted the features with the highest score, given by the following:


where the score of each feature is computed with respect to the labels of the malicious and benign file categories. Each feature is taken only once.

(iii) All features were normalized with the popular tf-idf strategy [5]. This was particularly crucial for SVM classifiers, which perform best with normalized features.
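A minimal sketch of stages (i) and (iii) follows; the cap value and the random counts are purely illustrative, and the feature-selection stage (ii) is omitted:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfTransformer

rng = np.random.default_rng(2)

# Raw tag/API-call occurrence counts (random stand-ins).
counts = rng.integers(0, 500, size=(100, 30))

# Stage (i): clip feature values to a hypothetical upper limit.
CAP = 50
clipped = np.minimum(counts, CAP)

# Stage (iii): tf-idf normalization, applied before training the SVMs.
tfidf = TfidfTransformer().fit(clipped)
X = tfidf.transform(clipped).toarray()

# Each row is L2-normalized by default, which suits SVM classifiers.
row_norms = np.linalg.norm(X, axis=1)
```

Clipping first, then scaling, ensures that no single feature can dominate the representation or offer the attacker an unbounded injection budget.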

Classification. We used the popular machine learning suite Scikit-Learn, which provides the classifiers and the normalization strategy used in our evaluation.

Training Procedure and Classifiers. All the performed evaluations (except the temporal one, which was carried out on a completely different dataset) share the following elements: (a) The dataset was randomly split by considering 50% of it as training set and the remaining 50% as test set. All the classifier parameters were tuned with a 5-fold cross validation performed on the training set. We repeated the whole procedure five times, in order to rule out possible biases related to specific train/test splits. (b) We performed our tests on three classifiers: (i) Random Forest; (ii) SVM with linear kernel; (iii) SVM with non-linear (RBF) kernel. Additionally, considering the possibility of the targeted attacks described in Section 5.5, we retrained the SVM RBF and Random Forest classifiers by adding to the original training set samples generated with a gradient-descent strategy. In particular, for each training split, we generated new attack samples by performing different numbers of changes on randomly chosen malicious training samples, and added the resulting samples to the original training set split. Notably, adding too many features to the samples would have led to the creation of anomalous samples with unrealistic feature values.
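The retraining step (b) can be sketched as follows; the perturbation below is a crude random stand-in for the gradient-descent attack, and all data and parameters are illustrative:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Toy training data; class 1 plays "malicious".
X = rng.random((200, 10))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

def perturb(x, k=3):
    """Stand-in attack: inject (raise to the bound) k random features."""
    x_adv = x.copy()
    idx = rng.choice(len(x_adv), size=k, replace=False)
    x_adv[idx] = 1.0
    return x_adv

# Craft attack samples from randomly chosen malicious training samples.
X_mal = X[y == 1]
X_adv = np.array([perturb(x) for x in X_mal[:20]])

# Augment the training split: attack samples keep the malicious label.
X_aug = np.vstack([X, X_adv])
y_aug = np.concatenate([y, np.ones(len(X_adv), dtype=int)])

retrained = SVC(kernel="rbf", gamma="scale").fit(X_aug, y_aug)
```

The key point is that the injected samples keep their malicious label, so the retrained decision boundary is pushed to cover the attack region.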

7.2 Standard Evaluation

The standard evaluation was performed by following the criteria described in Section 7.1. In particular, for each classifier we computed the average (among its splits) Receiver Operating Characteristic (ROC) curve. Figure 3 shows the results of the evaluation.

Fig. 3: Average ROC curves obtained on train-test splits. The test was performed on three classifiers. In particular, we also report the results for the retrained variant of SVM RBF and Random Forests (RF). Vertical bars represent the standard deviation among the splits.
(a) PK
(b) LK
Fig. 4: Detection rate (at false positive rate = %) of FlashBuster for SVM (under PK) and Random Forest (under LK) classifiers against gradient-descent attacks. Results are averaged over runs. SVM RBF and Random Forest have also been retrained with samples modified with the gradient-descent strategy under increasing numbers of changes.

From this experiment, we see that Random Forest was generally the best-performing classifier at detecting malicious SWF files. At % false positives, the classifier was able to detect more than % of threats, whilst at % the number of detected threats was higher than %. SVM with RBF kernel exhibited similar performance. Linear models, on the contrary, performed poorly under %.

One may expect that, since the generated attacks are more similar to benign samples, adding them to the training set would decrease the performance of the classifiers. However, in our case, this operation only barely affected the examined classifiers. We attribute this result to the fact that gradient-descent attacks create, in this case, malicious samples that are still considerably different from the original benign training distribution. For this reason, the classifiers are still able to correctly discriminate between malicious and benign test samples.

7.3 Adversarial Evaluation

The adversarial evaluation aimed to assess the performance of the classifiers employed in the previous experiment under the gradient-descent attacks described in Section 5.5. In this case, we evaluated the performance of the classifiers in terms of true positives (at 5% false positive rate) as a function of the number of changes performed to the feature vector. Of course, the more changes the algorithm performs, the more effective the attack is.

It is important to observe that the experiments on SVMs and Random Forests were carried out under different levels of knowledge. Notably, Random Forests have a non-differentiable classification function, and for this reason the attacker cannot directly perform gradient-based evasion on the target model. In this case, according to the taxonomy described in Section 5.5, we performed a Limited-Knowledge (LK) attack with surrogate learners, in which a differentiable surrogate model is employed to craft the adversarial examples instead of the target algorithm. To this end, in our experiments, we used an SVM with the RBF kernel (whose parameters were always tuned by means of a 5-fold cross validation performed on the training set), trained on the same data as the target classifier. Conversely, when the attack was directly performed against the SVM classifiers, we simulated a Perfect-Knowledge (PK) attack scenario.

Figure 4 provides the results of the evaluation. While all SVM-based classifiers were completely evaded after changes, Random Forests could still detect a portion of the attacks. This may however be due to the fact that the surrogate model (SVM RBF) does not properly approximate the classification function of a Random Forest, i.e., the adversarial examples crafted against the SVM do not transfer well to Random Forests. For this reason, we cannot state with certainty that Random Forests are generally more secure (a more powerful white-box attack, such as that in [20], may evade them with higher probability). We leave a more detailed investigation of this aspect to future work.

Inefficacy of adversarial retraining. Notably, the employed retraining strategy brings only little improvement to the robustness of the classifiers. This can be better explained by observing the distributions of average feature values for one malicious test set before and after the attack against SVM RBF, as well as the variation of the related Bhattacharyya distance between the distributions (described in Section 6). It is worth noting that this value was calculated under specific assumptions: (i) we approximated our distributions as multivariate Gaussians; (ii) we assumed an identical isotropic covariance (the identity matrix) for the two classes.

Figure 5 shows that the BC value increases both (i) with the number of changes, and (ii) when the attack is performed against a retrained classifier. This can be further quantified through the difference between the Bayesian upper-bound errors with and without the attack, which we define here as the mimicry parameter; it is computed from the Bhattacharyya distances calculated after and before the attack, respectively. Table I shows how this value increases as more changes are performed, and as classifiers are retrained.

Attack m

TABLE I: Values of the mimicry parameter under multiple attack scenarios.
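The mimicry parameter can be computed as in this small sketch (the distance values are made up for illustration; the 1/2 factor comes from the equal-priors error bound of Section 6):

```python
import math

def mimicry(B_after, B_before):
    """Difference between the Bayesian-error upper bounds (BC/2, with
    BC = exp(-B)) computed after and before the attack."""
    return 0.5 * math.exp(-B_after) - 0.5 * math.exp(-B_before)

# An attack that pushes malicious samples towards benign data lowers
# the Bhattacharyya distance, so m grows with the attack strength.
m_weak = mimicry(B_after=2.0, B_before=3.0)
m_strong = mimicry(B_after=0.5, B_before=3.0)
```

A larger m means a larger increase in the non-reducible error, i.e., a deeper feature-level vulnerability that retraining cannot fix.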

This shows that, although the classification function was retrained against the attacks to reduce the learning vulnerability, the feature vulnerability related to the employed static features cannot be reduced by simply retraining the classifier. In more detail, the problem here is that none of the employed static features is likely to appear in malware much more frequently than in benign data, i.e., there is no invariant feature that uniquely characterizes malware with respect to benign data (and that cannot be removed) [30]. This in turn means that it is possible to create a malicious sample that is indistinguishable from benign ones (even by only injecting content into the input SWF file) and, thus, additional features are required to correctly detect adversarial SWF malware examples.

Fig. 5: Average feature values for one test set in which malicious samples (red) were modified with a gradient-descent attack ( is the number of changes) against SVM RBF (purple). The same attacks were repeated against a retrained SVM RBF (green). The Bhattacharyya coefficient between the malicious distributions and that of the benign (blue) samples is reported. Note how the coefficient increases when attacking the retrained classifier, meaning that the malicious distribution slowly converges towards the benign samples.

7.4 Temporal Evaluation

In this evaluation, we assessed the capability of FlashBuster to predict novel and zero-day attacks. To this end, we trained the system by only using samples whose first submission date to the VirusTotal service was before ( samples), plus all the benign files in the dataset. The test set was therefore made of the malicious samples released afterwards ( samples). We used all the classifiers of the previous experiments. The goal of this analysis was not to assess the performance of the system at detecting general or adversarial attacks, but to evaluate the predictive power of the employed features against novel attacks. As in the previous experiments, the parameters of the classifiers were tuned with a 5-fold cross validation performed on the training set.

Table II shows the results obtained from this evaluation. Considering non-retrained models, the linear one performed best at detecting novel samples: more than % of the tested samples were detected. However, non-linear models also exhibited good performance, with only a decrement of % in comparison to the linear model.

Notably, retrained models perform better than their non-retrained variants. In particular, the retrained SVM RBF exhibits an increment in comparison to the linear model. This means that generating artificial attacks with the gradient-descent strategy makes the classifier more robust against variants of known attacks. In particular, gradient-descent attacks perform fine-grained, limited changes to the features, and such variations in the feature values are the most likely to occur when new attack variants are generated. This is a reasonable result, as the majority of novel malicious attacks are directly derived from existing ones.

Classifier Accuracy
SVM RBF (Adv.)
SVM Linear
RF (Adv.)

TABLE II: Accuracy of five classifiers on a test set composed of data released after . Each classifier was trained with data released before .

8 Related Work

As Flash-based malicious attacks only started to grow considerably in , the number of detection approaches is rather limited. FlashDetect [31] is one of the first approaches to the detection of ActionScript-based malware. The authors instrumented Lightspark, an open-source Flash viewer, to perform dynamic analysis of malicious Flash files. From this analysis, the system extracts features such as the number of ByteArray-related method calls, the presence of the loadBytes method, and so forth. FlashDetect was employed inside the Wepawet service, which is unfortunately no longer available.

Gordon [34] is an approach that resorts to guided code execution to detect malicious SWF files, in particular by statically analyzing their ActionScript bytecode. More specifically, the system selects the most suspicious security paths from the control-flow graph of the code. Such paths usually contain references to security-critical calls, such as those used for dynamic code loading. Although not publicly available, Gordon proved to be rather effective at detecting Flash malware.

Hidost [33] is a static system that only focuses on the structure of the SWF file. More specifically, it considers sequences of objects belonging to the structure of the SWF file as features. The system evaluates the most frequently occurring paths in the training dataset, and extracts features that are based on the training data. This might be dangerous from the perspective of targeted attacks, as a malicious test file with completely different paths might evade detection. Moreover, the system does not analyze the embedded ActionScript code. In this way, an attacker might simply evade the system by perfectly mimicking the structure of the file, regardless of the type of code it contains.

Besides scientific works, there are also some off-the-shelf tools that can be used to obfuscate SWF files (e.g., DoSWF). Notably, in this paper we did not analyze FlashBuster’s performance against samples obfuscated by such tools. This is a different problem with respect to the one analyzed in Section 5.5: obfuscation is performed without any knowledge of the target detection system, so the attacker does not control which features are changed by the attack. In this paper, we preferred to focus on the worst-case scenario, in which the attacker possesses complete knowledge of the target system.

We consider FlashBuster a more complete variant of the aforementioned static systems, as it extracts information from both the structure and the content of the file.

9 Conclusions and Future Work

In this paper, we proposed a security evaluation of static detectors of malicious SWF files by introducing FlashBuster, a system that combines the information analyzed by previous state-of-the-art detectors (i.e., file structure and content). The proposed security evaluation showed an intrinsic vulnerability of the static features used by SWF detectors. In particular, by using gradient-descent attacks, we demonstrated that even retraining strategies are not always effective at ensuring robustness. More specifically, we measured and showed how gradient-descent attacks make samples more similar to their benign counterparts. We plan to address some of the system’s limitations in future work: for example, reducing its dependence on JPEXS, whose possible failures could compromise the analysis of the whole file. We also plan to perform more experiments on SWF files obfuscated with off-the-shelf tools, in order to evaluate the resilience of FlashBuster against them.

In general, our claim for future research is that focusing on improving the classifier’s decision function can be effective only if the employed features are intrinsically robust. This means that there should be specific features that are truly characteristic of malicious behavior and that cannot be mimicked in benign files. Efforts to guarantee more security should therefore be directed towards a security-oriented design of the features.



Davide Maiorca (M’16) received the M.Sc. degree (Hons.) in Electronic Engineering from the University of Cagliari, Italy, in 2012. He is a Ph.D. student in Electronic Engineering and Computer Science at the University of Cagliari, Italy. In 2013, he visited the Systems Security group at Ruhr-Universität Bochum, led by Prof. Dr. Thorsten Holz, and worked on advanced obfuscation of Android malware. His current research interests include adversarial machine learning, malware in documents and Flash applications, Android malware, and mobile fingerprinting. He has been a member of the 2016 IEEE Security & Privacy Student Program Committee.

Battista Biggio (SM’17) received the M.Sc. degree (Hons.) in Electronic Engineering and the Ph.D. degree in Electronic Engineering and Computer Science from the University of Cagliari, Italy, in 2006 and 2010. Since 2007, he has been with the Department of Electrical and Electronic Engineering, University of Cagliari, where he is currently an Assistant Professor. In 2011, he visited the University of Tübingen, Germany, and worked on the security of machine learning against training data poisoning. His research interests include secure machine learning, multiple classifier systems, kernel methods, biometrics, and computer security. Dr. Biggio serves as a reviewer for several international conferences and journals. He is a senior member of the IEEE and a member of the IAPR.

Maria Elena Chiappe received her B.Sc. degree (Hons.) in Electronic Engineering from the University of Cagliari, Italy, in 2015. She is currently pursuing her M.Sc. degree in Electronic Engineering at the University of Cagliari. Her main research interests include computer security, detection of DGA and fast-flux networks, and machine learning.

Giorgio Giacinto (SM’10) is Associate Professor of Computer Engineering at the University of Cagliari, Italy. He obtained the M.S. degree in Electrical Engineering in 1994, and the Ph.D. degree in Computer Engineering in 1999. In 1995, he joined the PRA Lab of the DIEE, University of Cagliari, Italy. His research interests are in the area of pattern recognition and its applications to computer security, and to image classification and retrieval. During his career, he has published more than 120 papers in international journals, conferences, and books. He is a senior member of the ACM and the IEEE. He has been involved in the scientific coordination of several research projects in the fields of pattern recognition and computer security, at the local, national, and international level. Since 2012, he has been the organizer of the Summer School on Computer Security and Privacy “Building Trust in the Information Age”.
