# Adversarial Regression with Multiple Learners

Liang Tong    Sixie Yu    Scott Alfeld    Yevgeniy Vorobeychik
###### Abstract

Despite the considerable success enjoyed by machine learning techniques in practice, numerous studies have demonstrated that many approaches are vulnerable to attacks. An important class of such attacks involves adversaries changing features at test time to cause incorrect predictions. Previous investigations of this problem pit a single learner against an adversary. However, in many situations an adversary’s decision is aimed at a collection of learners, rather than specifically targeted at each independently. We study the problem of adversarial linear regression with multiple learners. We approximate the resulting game by exhibiting an upper bound on learner loss functions, and show that the resulting game has a unique symmetric equilibrium. We present an algorithm for computing this equilibrium, and show through extensive experiments that equilibrium models are significantly more robust than conventional regularized linear regression.


## 1 Introduction

Increasing use of machine learning in adversarial settings has motivated a series of efforts investigating the extent to which learning approaches can be subverted by malicious parties. An important class of such attacks involves adversaries changing their behaviors, or features of the environment, to effect an incorrect prediction. Most previous efforts study this problem as an interaction between a single learner and a single attacker (Brückner & Scheffer, 2011; Dalvi et al., 2004; Li & Vorobeychik, 2014; Zhou et al., 2012). However, in reality attackers often target a broad array of potential victim organizations. For example, they craft generic spam templates and generic malware, and then disseminate these widely to maximize impact. The resulting ecology of attack targets reflects not a single learner, but many such learners, all making autonomous decisions about how to detect malicious content, although these decisions often rely on similar training datasets.

We model the resulting game as an interaction between multiple learners, who simultaneously learn linear regression models, and an attacker, who observes the learned models (as in white-box attacks (Šrndic & Laskov, 2014)), and modifies the original feature vectors at test time in order to induce incorrect predictions. Crucially, rather than customizing the attack to each learner (as in typical models), the attacker chooses a single attack for all learners. We term the resulting game a Multi-Learner Stackelberg Game, to allude to its two stages, with learners jointly acting as Stackelberg leaders, and the attacker being the follower. Our first contribution is the formal model of this game. Our second contribution is to approximate this game by deriving upper bounds on the learner loss functions. The resulting approximation yields a game in which there always exists a symmetric equilibrium, and this equilibrium is unique. In addition, we prove that this unique equilibrium can be computed by solving a convex optimization problem. Our third contribution is to show that the equilibrium of the approximate game is robust, both theoretically (by showing it to be equivalent to a particular robust optimization problem), and through extensive experiments, which demonstrate it to be much more robust to attacks than standard regularization approaches.

Related Work Both attacks on and defenses of machine learning approaches have been studied within the literature on adversarial machine learning (Brückner & Scheffer, 2011; Dalvi et al., 2004; Li & Vorobeychik, 2014; Zhou et al., 2012; Lowd & Meek, 2005). These approaches commonly assume a single learner, and consider either the problem of finding evasions against a fixed model (Dalvi et al., 2004; Lowd & Meek, 2005; Šrndic & Laskov, 2014), or algorithmic approaches for making learning more robust to attacks (Russu et al., 2016; Brückner & Scheffer, 2011; Dalvi et al., 2004; Li & Vorobeychik, 2014, 2015). Most of these efforts deal specifically with classification learning, but several consider adversarial tampering with regression models (Alfeld et al., 2016; Grosshans et al., 2013), although still within a single-learner and single-attacker framework. Stevens & Lowd (2013) study the algorithmic problem of attacking multiple linear classifiers, but do not consider the associated game among classifiers.

Our work also has a connection to the literature on security games with multiple defenders (Laszka et al., 2016; Smith et al., 2017; Vorobeychik et al., 2011). The key distinction with our paper is that in multi-learner games, the learner strategy space is the space of possible models in a given model class, whereas prior research has focused on significantly simpler strategies (such as protecting a finite collection of attack targets).

## 2 Model

We investigate the interactions between a collection of learners and an attacker in regression problems, modeled as a Multi-Learner Stackelberg Game (MLSG). At a high level, this game involves two stages: first, all learners choose (train) their models from data, and second, the attacker transforms test data (such as features of the environment, at prediction time) to achieve malicious goals. Below, we first formalize the model of the learners and the attacker, and then formally describe the full game.

### 2.1 Modeling the Players

At training time, a set of training data $(X, y)$ is drawn from an unknown distribution $\mathcal{D}$. $X \in \mathbb{R}^{m \times d}$ is the training sample and $y \in \mathbb{R}^m$ is a vector of values of each data point in $X$. We let $x_j$ denote the $j$th instance in the training sample, associated with a corresponding value $y_j$ from $y$. Hence, $X = [x_1, \dots, x_m]^\top$ and $y = (y_1, \dots, y_m)$. On the other hand, test data can be generated either from $\mathcal{D}$, the same distribution as the training data, or from $\mathcal{D}'$, a modification of $\mathcal{D}$ generated by an attacker. The nature of such malicious modifications is described below. We let $\beta \in [0, 1]$ represent the probability that a test instance is drawn from $\mathcal{D}'$ (i.e., the malicious distribution), and $1 - \beta$ be the probability that it is generated from $\mathcal{D}$.

The action of the $i$th learner is to select a vector $\theta_i \in \mathbb{R}^d$ as the parameter of the linear regression function $\hat{y} = X\theta_i$, where $\hat{y}$ is the vector of predicted values for data $X$. The expected cost function of the $i$th learner at test time is then

$$c_i(\theta_i, \mathcal{D}') = \beta\,\mathbb{E}_{(X', y)\sim\mathcal{D}'}\big[\ell(X'\theta_i, y)\big] + (1 - \beta)\,\mathbb{E}_{(X, y)\sim\mathcal{D}}\big[\ell(X\theta_i, y)\big], \tag{1}$$

where $\ell(\hat{y}, y) = \|\hat{y} - y\|_2^2$ is the squared loss. That is, the cost function of a learner is a combination of its expected cost from both the attacker and the honest source.

Every instance $(x, y)$ generated according to $\mathcal{D}$ is, with probability $\beta$, maliciously modified by the attacker into another, $x'$, as follows. We assume that the attacker has an instance-specific target $z(x)$, and wishes that the prediction made by each learner on the modified instance $x'$ is close to this target. We measure this objective for the attacker by $\ell(\hat{y}, z)$ for a vector of predicted and target values $\hat{y}$ and $z$, respectively. In addition, the attacker incurs a cost of transforming a distribution $\mathcal{D}$ into $\mathcal{D}'$, denoted by $R(\mathcal{D}', \mathcal{D})$.

After a dataset is generated in this way by the attacker, it is used simultaneously against all the learners. This is natural in most real attacks: for example, spam templates are commonly generated to be used broadly, against many individuals and organizations, and, similarly, malware executables are often produced to be generally effective, rather than custom made for each target. The expected cost function of the attacker is then a sum of its total expected cost for all learners plus the cost of transforming $\mathcal{D}$ into $\mathcal{D}'$ with coefficient $\lambda$:

$$c_a(\{\theta_i\}_{i=1}^n, \mathcal{D}') = \sum_{i=1}^n \mathbb{E}_{(X', y)\sim\mathcal{D}'}\big[\ell(X'\theta_i, z)\big] + \lambda R(\mathcal{D}', \mathcal{D}). \tag{2}$$

As is typical, we estimate the cost functions of the learners and the attacker using training data $(X, y)$, which is also used to simulate attacks. Consequently, the cost functions of each learner and the attacker are estimated by

$$c_i(\theta_i, X') = \beta\,\ell(X'\theta_i, y) + (1 - \beta)\,\ell(X\theta_i, y) \tag{3}$$

and

$$c_a(\{\theta_i\}_{i=1}^n, X') = \sum_{i=1}^n \ell(X'\theta_i, z) + \lambda R(X', X), \tag{4}$$

where the attacker’s modification cost is measured by $R(X', X) = \|X' - X\|_F^2$, the squared Frobenius norm.
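To make Eqs. (3) and (4) concrete, the following sketch evaluates both estimated costs with NumPy (an illustration under our own variable and function names, using the squared $\ell_2$ loss and Frobenius cost defined above):

```python
import numpy as np

def sq_loss(y_hat, y):
    # l(y_hat, y) = ||y_hat - y||_2^2, the squared L2 loss used throughout.
    return float(np.sum((y_hat - y) ** 2))

def learner_cost(theta_i, X, X_adv, y, beta):
    # Eq. (3): c_i(theta_i, X') = beta * l(X' theta_i, y) + (1 - beta) * l(X theta_i, y)
    return beta * sq_loss(X_adv @ theta_i, y) + (1 - beta) * sq_loss(X @ theta_i, y)

def attacker_cost(thetas, X, X_adv, z, lam):
    # Eq. (4): c_a({theta_i}, X') = sum_i l(X' theta_i, z) + lam * ||X' - X||_F^2
    pred = sum(sq_loss(X_adv @ t, z) for t in thetas)
    return pred + lam * float(np.sum((X_adv - X) ** 2))
```

For example, when the attacker leaves the data untouched ($X' = X$), the modification cost vanishes and each learner's cost reduces to the ordinary squared loss on $(X, y)$.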

### 2.2 The Multi-Learner Stackelberg Game

We are now ready to formally define the game between the learners and the attacker. The MLSG has two stages: in the first stage, learners simultaneously select their model parameters $\theta = \{\theta_i\}_{i=1}^n$, and in the second stage, the attacker makes its decision (manipulating $X$ into $X'$) after observing the learners’ model choices $\theta$. We assume that the proposed game satisfies the following assumptions:

1. Players have complete information about the parameters $\beta$, $\lambda$, and $z$ (common to all learners). This is a strong assumption, and we relax it in our experimental evaluation (Section 6), providing guidance on how to deal with uncertainty about these parameters.

2. Each learner has the same action (model parameter) space $\Theta \subseteq \mathbb{R}^d$, which is nonempty, compact and convex. The action space of the attacker is $\mathbb{R}^{m \times d}$.

3. The columns of the training data $X$ are linearly independent.

We use Multi-Learner Stackelberg Equilibrium (MLSE) as the solution for the MLSG, defined as follows.

###### Definition 1 (Multi-Learner Stackelberg Equilibrium (MLSE)).

An action profile $(\theta^*, X^*(\theta^*))$ with $\theta^* = \{\theta_i^*\}_{i=1}^n$ is an MLSE if it satisfies

$$\theta_i^* = \arg\min_{\theta_i \in \Theta} c_i(\theta_i, X^*(\theta)),\ \forall i \in \mathcal{N},\quad \text{s.t. } X^*(\theta) = \arg\min_{X' \in \mathbb{R}^{m \times d}} c_a(\{\theta_i\}_{i=1}^n, X'), \tag{5}$$

where $\theta = \{\theta_i\}_{i=1}^n$ constitutes the joint actions of the learners.

At a high level, the MLSE is a blend between a Nash equilibrium (among all learners) and a Stackelberg equilibrium (between the learners and the attacker), in which the attacker plays a best response to the observed models chosen by the learners, and given this behavior by the attacker, all learners’ models are mutually optimal.

The following lemma characterizes the best response of the attacker to arbitrary model choices by the learners.

###### Lemma 1 (Best Response of the Attacker).

Given $\{\theta_i\}_{i=1}^n$, the best response of the attacker is

$$X^* = \Big(\lambda X + z\sum_{i=1}^n \theta_i^\top\Big)\Big(\lambda I + \sum_{i=1}^n \theta_i\theta_i^\top\Big)^{-1}. \tag{6}$$
###### Proof.

We derive the best response of the attacker by using the first-order condition. Let $\nabla_{X'} c_a$ denote the gradient of $c_a$ with respect to $X'$. Then

$$\nabla_{X'} c_a = 2\sum_{i=1}^n (X'\theta_i - z)\theta_i^\top + 2\lambda(X' - X).$$

Due to the convexity of $c_a$ in $X'$, setting $\nabla_{X'} c_a = 0$, we have

$$X^* = \Big(\lambda X + z\sum_{i=1}^n \theta_i^\top\Big)\Big(\lambda I + \sum_{i=1}^n \theta_i\theta_i^\top\Big)^{-1}.$$ ∎

Lemma 1 shows that the best response of the attacker, $X^*$, has a closed-form solution as a function of the learner model parameters $\{\theta_i\}_{i=1}^n$. Let $\theta_{-i} = \{\theta_j\}_{j \neq i}$; then $c_i$ in Eq. (5) can be rewritten as

$$c_i(\theta_i, \theta_{-i}) = \beta\,\ell(X^*(\theta_i, \theta_{-i})\theta_i, y) + (1 - \beta)\,\ell(X\theta_i, y). \tag{7}$$
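Both Lemma 1 and Eq. (7) are easy to check numerically. The sketch below (function names are ours) computes the closed-form best response of Eq. (6), verifies the first-order condition from the proof of Lemma 1, and evaluates the induced learner cost:

```python
import numpy as np

def attacker_best_response(X, z, thetas, lam):
    # Eq. (6): X* = (lam*X + z * sum_i theta_i^T)(lam*I + sum_i theta_i theta_i^T)^{-1}
    d = X.shape[1]
    A = lam * np.eye(d) + sum(np.outer(t, t) for t in thetas)  # always invertible for lam > 0
    B = lam * X + np.outer(z, sum(thetas))
    return B @ np.linalg.inv(A)

def attacker_gradient(X_adv, X, z, thetas, lam):
    # grad_{X'} c_a = 2 sum_i (X' theta_i - z) theta_i^T + 2 lam (X' - X); zero at X*.
    G = 2.0 * lam * (X_adv - X)
    for t in thetas:
        G = G + 2.0 * np.outer(X_adv @ t - z, t)
    return G

def induced_cost(i, thetas, X, y, z, beta, lam):
    # Eq. (7): learner i's cost when the attacker plays its best response X*(theta).
    Xs = attacker_best_response(X, z, thetas, lam)
    fit_adv = float(np.sum((Xs @ thetas[i] - y) ** 2))
    fit_cln = float(np.sum((X @ thetas[i] - y) ** 2))
    return beta * fit_adv + (1 - beta) * fit_cln
```

At the matrix returned by `attacker_best_response`, the gradient of the attacker's cost vanishes, which is exactly the first-order condition used in the proof of Lemma 1.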

Using Eq. (7), we can then define a Multi-Learner Nash Game (MLNG):

###### Definition 2 (Multi-Learner Nash Game (MLNG)).

A static game, denoted $\langle\mathcal{N}, \Theta, (c_i)\rangle$, is a Multi-Learner Nash Game if

1. The set of players is the set of learners $\mathcal{N} = \{1, \dots, n\}$,

2. the cost function of each learner is defined in Eq. (7),

3. all learners simultaneously select $\theta_i \in \Theta$.

We can then define the Multi-Learner Nash Equilibrium (MLNE) of the game $\langle\mathcal{N}, \Theta, (c_i)\rangle$:

###### Definition 3 (Multi-Learner Nash Equilibrium (MLNE)).

An action profile $\theta^* = \{\theta_i^*\}_{i=1}^n$ is a Multi-Learner Nash Equilibrium of the MLNG if it is the solution of the following set of coupled optimization problems:

$$\min_{\theta_i \in \Theta} c_i(\theta_i, \theta_{-i}),\ \forall i \in \mathcal{N}. \tag{8}$$

Combining the results above, the following result is immediate.

###### Theorem 1.

An action profile $(\theta^*, X^*(\theta^*))$ is an MLSE of the multi-learner Stackelberg game if and only if $\theta^*$ is an MLNE of the multi-learner Nash game $\langle\mathcal{N}, \Theta, (c_i)\rangle$, with $X^*(\theta^*)$ defined in Eq. (6) for $\theta = \theta^*$.

Theorem 1 shows that we can reduce the original $(n+1)$-player Stackelberg game to an $n$-player simultaneous-move game $\langle\mathcal{N}, \Theta, (c_i)\rangle$. In the remaining sections, we focus on analyzing the Nash equilibrium of this multi-learner Nash game.

## 3 Theoretical Analysis

In this section, we analyze the game $\langle\mathcal{N}, \Theta, (c_i)\rangle$. As Eq. (6) shows, computing the best response of the attacker requires inverting a complicated matrix. Hence, the cost function shown in Eq. (7) is intractable. To address this challenge, we first derive a new game $\langle\mathcal{N}, \Theta, (\tilde{c}_i)\rangle$, with a tractable cost function for its players, to approximate $\langle\mathcal{N}, \Theta, (c_i)\rangle$. Afterward, we analyze the existence and uniqueness of the Nash equilibrium of $\langle\mathcal{N}, \Theta, (\tilde{c}_i)\rangle$.

### 3.1 Approximation of ⟨N,Θ,(ci)⟩

We start our analysis by computing $X^*$ presented in Eq. (6). Let matrix $A_n = \lambda I + \sum_{i=1}^n \theta_i\theta_i^\top$, and $A_{-i} = A_n - \theta_i\theta_i^\top$. Then, $A_n = A_{-i} + \theta_i\theta_i^\top$. Similarly, let matrix $B_n = \lambda X + z\sum_{i=1}^n \theta_i^\top$, and $B_{-i} = B_n - z\theta_i^\top$, which implies that $B_n = B_{-i} + z\theta_i^\top$. The best response of the attacker can then be rewritten as $X^* = B_n A_n^{-1}$. We then obtain the following results.

###### Lemma 2.

$A_n$ and $A_{-i}$ satisfy:

1. $A_n$ and $A_{-i}$ are invertible, and the corresponding inverse matrices, $A_n^{-1}$ and $A_{-i}^{-1}$, are positive definite.

2. $A_n^{-1} = A_{-i}^{-1} - \dfrac{A_{-i}^{-1}\theta_i\theta_i^\top A_{-i}^{-1}}{1 + \theta_i^\top A_{-i}^{-1}\theta_i}$.

3. $\theta_i^\top A_{-i}^{-1}\theta_i \leq \dfrac{1}{\lambda}\theta_i^\top\theta_i$.

###### Proof.
1. First, we prove that $A_n$ is invertible, and its inverse matrix, $A_n^{-1}$, is positive definite, by using mathematical induction.

When $n = 1$, $A_1 = \lambda I + \theta_1\theta_1^\top$. As $\lambda I$ is an invertible square matrix and $\theta_1$ is a column vector, by using the Sherman-Morrison formula, $A_1$ is invertible, and

$$A_1^{-1} = \frac{1}{\lambda}\Big(I - \frac{\theta_1\theta_1^\top}{\lambda + \theta_1^\top\theta_1}\Big).$$

For any non-zero column vector $u \in \mathbb{R}^d$, we have

$$u^\top A_1^{-1} u = \frac{\lambda u^\top u + u^\top u\,\theta_1^\top\theta_1 - u^\top\theta_1\theta_1^\top u}{\lambda(\lambda + \theta_1^\top\theta_1)}.$$

As $\lambda > 0$ and $u \neq \mathbf{0}$, according to the Cauchy-Schwarz inequality,

$$u^\top u\,\theta_1^\top\theta_1 \geq u^\top\theta_1\theta_1^\top u.$$

Then, $u^\top A_1^{-1} u > 0$. Thus, $A_1^{-1}$ is a positive definite matrix.

We then assume that when $n = k$, $A_k$ is invertible and $A_k^{-1}$ is positive definite. Then, when $n = k + 1$,

$$A_{k+1} = A_k + \theta_{k+1}\theta_{k+1}^\top.$$

As $A_k$ is invertible and $\theta_{k+1}$ is a column vector, by using the Sherman-Morrison formula, we have that $A_{k+1}$ is invertible, and

$$A_{k+1}^{-1} = A_k^{-1} - \frac{A_k^{-1}\theta_{k+1}\theta_{k+1}^\top A_k^{-1}}{1 + \theta_{k+1}^\top A_k^{-1}\theta_{k+1}}.$$

Then,

$$u^\top A_{k+1}^{-1} u = \frac{u^\top A_k^{-1} u + u^\top A_k^{-1} u \cdot \theta_{k+1}^\top A_k^{-1}\theta_{k+1} - u^\top A_k^{-1}\theta_{k+1}\cdot\theta_{k+1}^\top A_k^{-1} u}{1 + \theta_{k+1}^\top A_k^{-1}\theta_{k+1}}.$$

As $A_k^{-1}$ is a positive definite matrix, we have $u^\top A_k^{-1} u > 0$ and $\theta_{k+1}^\top A_k^{-1}\theta_{k+1} \geq 0$. By using the extended Cauchy-Schwarz inequality, we have

$$u^\top A_k^{-1} u \cdot \theta_{k+1}^\top A_k^{-1}\theta_{k+1} \geq u^\top A_k^{-1}\theta_{k+1}\cdot\theta_{k+1}^\top A_k^{-1} u.$$

Then, $u^\top A_{k+1}^{-1} u > 0$, so $A_{k+1}^{-1}$ is positive definite. Hence, $A_n$ is invertible, and $A_n^{-1}$ is positive definite. Similarly, we can prove that $A_{-i}$ is invertible, and $A_{-i}^{-1}$ is positive definite.

2. We have proved that $A_n$ and $A_{-i}$ are invertible. The result can then be obtained by using the Sherman-Morrison formula.

3. Let $A_{-i,-j} = A_{-i} - \theta_j\theta_j^\top$ for some $j \neq i$. As $A_{-i}$ is a symmetric matrix, its inverse, $A_{-i}^{-1}$, is also symmetric. Using a similar approach to the one above, we can prove that $A_{-i,-j}$ is invertible and $A_{-i,-j}^{-1}$ is positive definite. By using the Sherman-Morrison formula, we have

$$A_{-i}^{-1} = A_{-i,-j}^{-1} - \frac{A_{-i,-j}^{-1}\theta_j\theta_j^\top A_{-i,-j}^{-1}}{1 + \theta_j^\top A_{-i,-j}^{-1}\theta_j}.$$

Then,

$$\theta_i^\top A_{-i}^{-1}\theta_i = \theta_i^\top A_{-i,-j}^{-1}\theta_i - \frac{\theta_i^\top A_{-i,-j}^{-1}\theta_j\cdot\theta_j^\top A_{-i,-j}^{-1}\theta_i}{1 + \theta_j^\top A_{-i,-j}^{-1}\theta_j} = \theta_i^\top A_{-i,-j}^{-1}\theta_i - \frac{\big(\theta_i^\top A_{-i,-j}^{-1}\theta_j\big)^2}{1 + \theta_j^\top A_{-i,-j}^{-1}\theta_j} \leq \theta_i^\top A_{-i,-j}^{-1}\theta_i.$$

We then iteratively apply the Sherman-Morrison formula and get

$$\theta_i^\top A_{-i}^{-1}\theta_i \leq \theta_i^\top A_0^{-1}\theta_i = \frac{1}{\lambda}\theta_i^\top\theta_i,$$

where $A_0 = \lambda I$. ∎
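The two workhorses of this proof, the Sherman-Morrison rank-one update and the bound in part 3 of Lemma 2, can be spot-checked numerically (a sketch with our own helper names and arbitrary random data):

```python
import numpy as np

def sherman_morrison(A_inv, u):
    # (A + u u^T)^{-1} = A^{-1} - (A^{-1} u u^T A^{-1}) / (1 + u^T A^{-1} u)
    Au = A_inv @ u
    return A_inv - np.outer(Au, Au) / (1.0 + u @ Au)

def build_A(thetas, lam, skip=None):
    # A_n = lam*I + sum_i theta_i theta_i^T; pass skip=i to obtain A_{-i}.
    d = thetas[0].shape[0]
    A = lam * np.eye(d)
    for j, t in enumerate(thetas):
        if j != skip:
            A += np.outer(t, t)
    return A
```

Adding the rank-one term $\theta_i\theta_i^\top$ back to $A_{-i}$ via `sherman_morrison` reproduces $A_n^{-1}$ (part 2 of Lemma 2), and the quadratic form $\theta_i^\top A_{-i}^{-1}\theta_i$ never exceeds $\frac{1}{\lambda}\theta_i^\top\theta_i$ (part 3).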

Lemma 2 allows us to relax $\ell(X^*(\theta_i, \theta_{-i})\theta_i, y)$ as follows:

###### Lemma 3.
$$\ell(X^*(\theta_i, \theta_{-i})\theta_i, y) \leq \ell(B_{-i}A_{-i}^{-1}\theta_i, y) + \frac{1}{\lambda^2}\|z - y\|_2^2(\theta_i^\top\theta_i)^2. \tag{9}$$
###### Proof.

Firstly, by using the Sherman-Morrison formula, we have

$$X^*\theta_i = B_n A_n^{-1}\theta_i = \frac{B_n A_{-i}^{-1}\theta_i}{1 + \theta_i^\top A_{-i}^{-1}\theta_i}.$$

Then,

$$\begin{aligned}\ell(X^*\theta_i, y) &= \Big\|\frac{B_n A_{-i}^{-1}\theta_i}{1 + \theta_i^\top A_{-i}^{-1}\theta_i} - y\Big\|_2^2 = \Big\|\frac{B_n A_{-i}^{-1}\theta_i - y - \theta_i^\top A_{-i}^{-1}\theta_i\, y}{1 + \theta_i^\top A_{-i}^{-1}\theta_i}\Big\|_2^2\\
&\leq \big\|B_n A_{-i}^{-1}\theta_i - y - \theta_i^\top A_{-i}^{-1}\theta_i\, y\big\|_2^2 = \big\|(B_{-i} + z\theta_i^\top)A_{-i}^{-1}\theta_i - y - \theta_i^\top A_{-i}^{-1}\theta_i\, y\big\|_2^2\\
&= \big\|B_{-i}A_{-i}^{-1}\theta_i - y + (z - y)\,\theta_i^\top A_{-i}^{-1}\theta_i\big\|_2^2 \leq \ell(B_{-i}A_{-i}^{-1}\theta_i, y) + \|z - y\|_2^2\big(\theta_i^\top A_{-i}^{-1}\theta_i\big)^2.\end{aligned}$$

By using Lemma 2, we have $\theta_i^\top A_{-i}^{-1}\theta_i \leq \frac{1}{\lambda}\theta_i^\top\theta_i$, which completes the proof. ∎

Note that in Eq. (9), $B_{-i}$ and $A_{-i}$ only depend on $\theta_{-i}$. Hence, the RHS of Eq. (9) is a strictly convex function with respect to $\theta_i$. Lemma 3 shows that $\ell(X^*(\theta_i, \theta_{-i})\theta_i, y)$ can be relaxed by moving $\theta_i$ out of $X^*(\theta_i, \theta_{-i})$ and adding a regularizer $(\theta_i^\top\theta_i)^2$ with coefficient $\frac{1}{\lambda^2}\|z - y\|_2^2$. Motivated by this method, we iteratively relax $\ell(B_{-i}A_{-i}^{-1}\theta_i, y)$ by adding corresponding regularizers. We now identify a tractable upper bound function for $c_i(\theta_i, \theta_{-i})$.

###### Theorem 2.
$$c_i(\theta_i, \theta_{-i}) \leq \bar{c}_i(\theta_i, \theta_{-i}) = \ell(X\theta_i, y) + \frac{\beta}{\lambda^2}\|z - y\|_2^2\sum_{j=1}^n(\theta_j^\top\theta_i)^2 + \epsilon, \tag{10}$$

where $\epsilon$ is a positive constant.

###### Proof.

We prove this by extending the results in Lemma 3 and iteratively relaxing the cost function. As presented in Lemma 3, we have

$$\ell(X^*\theta_i, y) \leq \ell(B_{-i}A_{-i}^{-1}\theta_i, y) + \frac{1}{\lambda^2}\|z - y\|_2^2(\theta_i^\top\theta_i)^2.$$

By using the Sherman-Morrison formula,

$$\ell(B_{-i}A_{-i}^{-1}\theta_i, y) = \Big\|B_{-i}\Big(A_{-i,-j}^{-1} - \frac{A_{-i,-j}^{-1}\theta_j\theta_j^\top A_{-i,-j}^{-1}}{1 + \theta_j^\top A_{-i,-j}^{-1}\theta_j}\Big)\theta_i - y\Big\|_2^2 \leq \Big\|\frac{B_{-i}A_{-i,-j}^{-1}\theta_i}{1 + \theta_j^\top A_{-i,-j}^{-1}\theta_j} - y\Big\|_2^2 + \triangle_1(\theta),$$

where $\triangle_1(\theta)$ is a continuous function of $\theta$. As the action space $\Theta$ is bounded, $\triangle_1(\theta)$ is bounded. Hence, we have

$$\ell(B_{-i}A_{-i}^{-1}\theta_i, y) \leq \ell(B_{-i,-j}A_{-i,-j}^{-1}\theta_i, y) + \|z - y\|_2^2\big(\theta_j^\top A_{-i,-j}^{-1}\theta_i\big)^2 + \triangle_2(\theta),$$

where $\triangle_2(\theta)$ is a continuous and bounded function of $\theta$, and $B_{-i,-j} = B_{-i} - z\theta_j^\top$. Let $A_{-i,-j,-k} = A_{-i,-j} - \theta_k\theta_k^\top$ for some $k \neq i, j$; then, similarly, $(\theta_j^\top A_{-i,-j}^{-1}\theta_i)^2$ can be further relaxed as follows.

$$\big(\theta_j^\top A_{-i,-j}^{-1}\theta_i\big)^2 = \Big(\theta_j^\top\Big(A_{-i,-j,-k}^{-1} - \frac{A_{-i,-j,-k}^{-1}\theta_k\theta_k^\top A_{-i,-j,-k}^{-1}}{1 + \theta_k^\top A_{-i,-j,-k}^{-1}\theta_k}\Big)\theta_i\Big)^2 \leq \big(\theta_j^\top A_{-i,-j,-k}^{-1}\theta_i\big)^2 + \triangle_3(\theta),$$

where $\triangle_3(\theta)$ is continuous and bounded. Using the same approach, $(\theta_j^\top A_{-i,-j}^{-1}\theta_i)^2$ can be further and iteratively relaxed as follows,

$$\big(\theta_j^\top A_{-i,-j}^{-1}\theta_i\big)^2 \leq \big(\theta_j^\top A_0^{-1}\theta_i\big)^2 + \triangle_4(\theta) = \frac{1}{\lambda^2}(\theta_j^\top\theta_i)^2 + \triangle_4(\theta),$$

where $\triangle_4(\theta)$ is continuous and bounded. Combining the results above, we can iteratively relax $\ell(B_{-i}A_{-i}^{-1}\theta_i, y)$ as follows,

$$\ell(B_{-i}A_{-i}^{-1}\theta_i, y) \leq \ell(B_{-i,-j}A_{-i,-j}^{-1}\theta_i, y) + \frac{1}{\lambda^2}\|z - y\|_2^2(\theta_j^\top\theta_i)^2 + \triangle_5(\theta) \leq \ell(X\theta_i, y) + \frac{1}{\lambda^2}\|z - y\|_2^2\sum_{j\neq i}(\theta_j^\top\theta_i)^2 + \triangle(\theta),$$

where $\triangle_5(\theta)$ and $\triangle(\theta)$ are continuous and bounded, and $\ell(X\theta_i, y)$ follows from $B_0 A_0^{-1} = (\lambda X)(\lambda I)^{-1} = X$. Then,

$$\ell(X^*\theta_i, y) \leq \ell(B_{-i}A_{-i}^{-1}\theta_i, y) + \frac{1}{\lambda^2}\|z - y\|_2^2(\theta_i^\top\theta_i)^2 \leq \ell(X\theta_i, y) + \frac{1}{\lambda^2}\|z - y\|_2^2\sum_{j=1}^n(\theta_j^\top\theta_i)^2 + \triangle(\theta).$$

Hence,

$$c_i(\theta_i, \theta_{-i}) = \beta\,\ell(X^*\theta_i, y) + (1 - \beta)\,\ell(X\theta_i, y) \leq \ell(X\theta_i, y) + \frac{\beta}{\lambda^2}\|z - y\|_2^2\sum_{j=1}^n(\theta_j^\top\theta_i)^2 + \epsilon,$$

where $\epsilon$ is a constant such that $\epsilon \geq \beta\triangle(\theta)$ for all $\theta$; such an $\epsilon$ exists because $\triangle(\theta)$ is bounded. ∎

As represented in Eq. (10), $\bar{c}_i(\theta_i, \theta_{-i})$ is strictly convex with respect to $\theta_i$ and continuous in $\theta_{-i}$. We then use the game $\langle\mathcal{N}, \Theta, (\bar{c}_i)\rangle$ as an approximation of $\langle\mathcal{N}, \Theta, (c_i)\rangle$. Let

$$\tilde{c}_i(\theta_i, \theta_{-i}) = \bar{c}_i(\theta_i, \theta_{-i}) - \epsilon = \ell(X\theta_i, y) + \frac{\beta}{\lambda^2}\|z - y\|_2^2\sum_{j=1}^n(\theta_j^\top\theta_i)^2, \tag{11}$$

then $\langle\mathcal{N}, \Theta, (\tilde{c}_i)\rangle$ has the same Nash equilibrium as $\langle\mathcal{N}, \Theta, (\bar{c}_i)\rangle$ if one exists, as adding or deleting a constant term does not affect the optimal solution. Hence, we use $\langle\mathcal{N}, \Theta, (\tilde{c}_i)\rangle$ to approximate $\langle\mathcal{N}, \Theta, (c_i)\rangle$, and analyze the Nash equilibrium of $\langle\mathcal{N}, \Theta, (\tilde{c}_i)\rangle$ in the remaining sections.
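The surrogate cost of Eq. (11) involves no matrix inverse, so it can be evaluated directly (a minimal sketch; `approx_cost` is our own name):

```python
import numpy as np

def approx_cost(i, thetas, X, y, z, beta, lam):
    # Eq. (11): tilde c_i = l(X theta_i, y)
    #           + (beta / lam^2) * ||z - y||_2^2 * sum_j (theta_j^T theta_i)^2
    fit = float(np.sum((X @ thetas[i] - y) ** 2))
    reg = sum(float(t @ thetas[i]) ** 2 for t in thetas)
    return fit + (beta / lam ** 2) * float(np.sum((z - y) ** 2)) * reg
```

Note that the sum over $j$ includes $j = i$, so the surrogate penalizes both the norm of $\theta_i$ itself and its alignment with the other learners' models.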

### 3.2 Existence of Nash Equilibrium

As introduced in Section 2, the learners have identical action spaces, and they are trained with the same dataset. We exploit this symmetry to analyze the existence of a Nash equilibrium of the approximation game $\langle\mathcal{N}, \Theta, (\tilde{c}_i)\rangle$.

We first define a Symmetric Game (Cheng et al., 2004):

###### Definition 4 (Symmetric Game).

An $n$-player game is symmetric if the players have the same action space, and their cost functions satisfy

$$c_i(\theta_i, \theta_{-i}) = c_j(\theta_j, \theta_{-j}),\ \forall i, j \in \mathcal{N}, \tag{12}$$

whenever $\theta_i = \theta_j$ and $\theta_{-i} = \theta_{-j}$.

In a symmetric game it is natural to consider a Symmetric Equilibrium:

###### Definition 5 (Symmetric Equilibrium).

An action profile $\theta^* = \{\theta_i^*\}_{i=1}^n$ of $\langle\mathcal{N}, \Theta, (\tilde{c}_i)\rangle$ is a symmetric equilibrium if it is a Nash equilibrium and $\theta_i^* = \theta_j^*$, $\forall i, j \in \mathcal{N}$.

We now show that our approximate game is symmetric, and always has a symmetric Nash equilibrium.

###### Theorem 3 (Existence of Nash Equilibrium).

$\langle\mathcal{N}, \Theta, (\tilde{c}_i)\rangle$ is a symmetric game, and it has at least one symmetric equilibrium.

###### Proof.

As described above, the players of $\langle\mathcal{N}, \Theta, (\tilde{c}_i)\rangle$ share the same action space and have complete information about the others. Hence, the cost function defined in Eq. (11) is symmetric, making $\langle\mathcal{N}, \Theta, (\tilde{c}_i)\rangle$ a symmetric game. As $\langle\mathcal{N}, \Theta, (\tilde{c}_i)\rangle$ has a nonempty, compact and convex action space, and the cost function $\tilde{c}_i$ is continuous in $\theta$ and convex in $\theta_i$, according to Theorem 3 in Cheng et al. (2004), $\langle\mathcal{N}, \Theta, (\tilde{c}_i)\rangle$ has at least one symmetric Nash equilibrium. ∎

### 3.3 Uniqueness of Nash Equilibrium

While we showed that the approximate game $\langle\mathcal{N}, \Theta, (\tilde{c}_i)\rangle$ always admits a symmetric Nash equilibrium, this leaves open the possibility that there may be multiple symmetric equilibria, as well as equilibria which are not symmetric. We now demonstrate that this game in fact has a unique equilibrium (which must therefore be symmetric).

###### Theorem 4 (Uniqueness of Nash Equilibrium).

$\langle\mathcal{N}, \Theta, (\tilde{c}_i)\rangle$ has a unique Nash equilibrium, and this unique NE is symmetric.

###### Proof.

We have shown that $\langle\mathcal{N}, \Theta, (\tilde{c}_i)\rangle$ has at least one NE, and each learner has a nonempty, compact and convex action space $\Theta$. Hence, we can apply Theorem 2 and Theorem 6 of Rosen (1965). That is, for some fixed $r = (r_1, \dots, r_n)$ with each $r_i > 0$, if the matrix $J_r(\theta)$ in Eq. (13) is positive definite, then $\langle\mathcal{N}, \Theta, (\tilde{c}_i)\rangle$ has a unique NE.

$$J_r(\theta) = \begin{bmatrix} r_1\nabla_{\theta_1,\theta_1}\tilde{c}_1(\theta) & \dots & r_1\nabla_{\theta_1,\theta_n}\tilde{c}_1(\theta)\\ \vdots & & \vdots\\ r_n\nabla_{\theta_n,\theta_1}\tilde{c}_n(\theta) & \dots & r_n\nabla_{\theta_n,\theta_n}\tilde{c}_n(\theta)\end{bmatrix} \tag{13}$$

By taking second-order derivatives, we have

$$\nabla_{\theta_i,\theta_i}\tilde{c}_i(\theta) = 2X^\top X + \frac{2\beta\|z - y\|_2^2}{\lambda^2}\Big(4\theta_i\theta_i^\top + 2\theta_i^\top\theta_i I + \sum_{j\neq i}\theta_j\theta_j^\top\Big)$$

and

$$\nabla_{\theta_i,\theta_j}\tilde{c}_i(\theta) = \frac{2\beta\|z - y\|_2^2}{\lambda^2}\big(\theta_i^\top\theta_j I + \theta_j\theta_i^\top\big).$$
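As a numerical sanity check on the diagonal block $\nabla_{\theta_i,\theta_i}\tilde{c}_i(\theta)$, the following sketch (helper names are ours) restates $\tilde{c}_i$ from Eq. (11) and builds the Hessian block in closed form, so it can be compared against finite differences:

```python
import numpy as np

def approx_cost(i, thetas, X, y, z, beta, lam):
    # Eq. (11), restated so this sketch is self-contained.
    fit = float(np.sum((X @ thetas[i] - y) ** 2))
    reg = sum(float(t @ thetas[i]) ** 2 for t in thetas)
    return fit + (beta / lam ** 2) * float(np.sum((z - y) ** 2)) * reg

def hess_ii(i, thetas, X, y, z, beta, lam):
    # Diagonal block: 2 X^T X
    #   + (2*beta*||z-y||^2/lam^2) * (4 t_i t_i^T + 2 (t_i^T t_i) I + sum_{j!=i} t_j t_j^T)
    d = X.shape[1]
    k = 2.0 * beta * float(np.sum((z - y) ** 2)) / lam ** 2
    rest = sum((np.outer(t, t) for j, t in enumerate(thetas) if j != i),
               np.zeros((d, d)))
    ti = thetas[i]
    return 2.0 * X.T @ X + k * (4.0 * np.outer(ti, ti)
                                + 2.0 * float(ti @ ti) * np.eye(d) + rest)
```

Since $2X^\top X$ is positive definite when the columns of $X$ are linearly independent, and the remaining terms are positive semi-definite, this diagonal block is always positive definite.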

We first let $r_i = \frac{1}{n}$ for all $i$, and decompose $J_r(\theta)$ as follows,

$$J_r(\theta) = \frac{2}{n}P + \frac{2\beta\|z - y\|_2^2}{\lambda^2 n}(Q + S + T), \tag{14}$$

where $P = \mathrm{diag}(X^\top X, \dots, X^\top X)$ and $Q$ are block diagonal matrices, and $S$ and $T$ are block symmetric matrices, with the blocks of $Q$, $S$ and $T$ collecting the inner-product terms $\theta_i^\top\theta_j I$ and outer-product terms $\theta_j\theta_i^\top$ of the second-order derivatives above, so that Eq. (14) reproduces $J_r(\theta)$ block by block.

Next, we prove that $P$ is positive definite, and $Q$, $S$ and $T$ are positive semi-definite. Let $u = (u_1^\top, \dots, u_n^\top)^\top$ be an $nd \times 1$ vector, where the $u_i \in \mathbb{R}^d$ are not all zero vectors.

1. $u^\top P u = \sum_{i=1}^n u_i^\top X^\top X u_i$. As the columns of $X$ are linearly independent and the $u_i$ are not all zero vectors, there exists at least one $u_i$ such that $Xu_i \neq \mathbf{0}$. Hence, $u^\top P u > 0$, which indicates that $P$ is positive definite.

2. Similarly, $u^\top Q u \geq 0$, which indicates that $Q$ is a positive semi-definite matrix.

3. Let $G$ be a symmetric matrix such that $G_{ij} = \theta_i^\top\theta_j$, $\forall i, j \in \mathcal{N}$. Hence, $S_{ij} = G_{ij} I$, $\forall i, j \in \mathcal{N}$. Note that $G$ is a positive semi-definite matrix; as it is also symmetric, there exists at least one lower triangular matrix