MimicGAN: Robust Projection onto Image Manifolds with Corruption Mimicking


Abstract

In the past few years, generative models like Generative Adversarial Networks (GANs) have dramatically advanced our ability to represent and parameterize high-dimensional, non-linear image manifolds. As a result, they have been widely adopted across a variety of applications, ranging from challenging inverse problems like image completion, to serving as a prior in problems such as anomaly detection and adversarial defense. A recurring theme in many of these applications is the notion of projecting an image observation onto the manifold that is inferred by the generator. In this context, Projected Gradient Descent (PGD) has been the most popular approach: it searches for a latent representation that minimizes the discrepancy between a generated image and the given observation. However, PGD is an extremely brittle optimization technique that fails to identify the right projection when the observation is corrupted, even by a small amount. Unfortunately, such corruptions are common in the real world, for example arbitrary images with unknown crops, rotations, missing pixels, or other kinds of distribution shifts, which call for a more robust projection technique. In this paper, we propose corruption-mimicking, a new strategy that utilizes a surrogate network to approximate the unknown corruption directly at test time, without the need for additional supervision or data augmentation. The proposed projection technique significantly improves the robustness of PGD under a wide variety of corruptions, thereby enabling a more effective use of GANs in real-world applications. More importantly, we show that our approach produces state-of-the-art performance in several GAN-based applications that rely on an accurate projection: anomaly detection, domain adaptation, and adversarial defense.


1 Introduction

Generative Adversarial Networks (GANs) [20] have been widely adopted in part due to their ability to parameterize complex, high-dimensional, non-linear image manifolds. As a result, they have been effective in several applications, ranging from super-resolution [34] and image editing [43, 57] to image-to-image translation [60, 36]. Additionally, an accurate parameterization of the image manifold provides a powerful regularization – sometimes referred to as the “GAN prior” – for a range of problems such as defense against adversarial attacks [47, 26], anomaly detection [4, 58], and compressive sensing [51]. Despite the variability in their goals and formulations, an overarching requirement in all these applications is the ability to project an image observation onto the image manifold at test time.

Figure 1: Demonstration of the alternating optimization process. Though initialized randomly, the corruption-mimicking network effectively guides the search in the latent space of the generator to produce an estimate of the true image. In this example from the FFHQ dataset [29], the observed image is a cropped (“zoomed-in”) version of the ground truth.

Projecting onto an image manifold essentially involves an encoding step to project onto the latent space of a generator, followed by a decoding step to obtain the image from the image manifold. While the latter is typically carried out using a pre-trained generator model (e.g., a convolutional neural network or CNN), the former step is typically implemented using two distinct families of approaches: (a) Projected gradient descent (PGD): a simple, yet powerful, latent space optimization strategy that minimizes the discrepancy between an observed sample and the best estimate of that sample from the generator [57, 47]; and (b) Coupling the GAN with an explicit encoder: techniques such as BiGAN [15] and ALI [16] include an additional network that explicitly provides a latent space representation for an image. This broad class of approaches also encompasses other generative modeling techniques such as variational autoencoders (VAEs) [30] and related approaches [32, 11].

Though these techniques have been shown to provide effective projections, they often fail when the sample to be projected has undergone even minor distribution shifts compared to the data used for training the generator, or simple corruptions such as missing pixels, translation, rotation, or scaling. In practice, both PGD and encoder-coupled GANs fail drastically under these conditions. Inference in the latent space of GANs has recently gained a lot of attention [10, 3, 53] with the advent of high-quality GANs such as BigGAN [13] and StyleGAN [29], thus strengthening the need for robust projections. While existing solutions such as unsupervised image-to-image translation [60, 36] have been remarkably successful in modeling complex distributional shifts across image domains, they rely on the availability of a large set of samples from the two domains. In contrast, we are interested in robustness to simpler distribution shifts such as affine transformations, corruptions, or missing information, which necessitate robust projections even when very few samples are available from the corrupted domain at test time.

In this paper, we propose MimicGAN, a test-time approach for projecting images into the latent space of pre-trained generative models like GANs, such that the decoder remains robust to a wide range of distribution shifts. Our main contribution is the process of corruption-mimicking, which uses a surrogate neural network to mimic the corruption process while simultaneously computing the projection. Given the corrupted observation(s), we perform an alternating optimization: we train the surrogate conditioned on the current best estimate of the projections from the GAN, and then optimize in the latent space to identify the best projections conditioned on the current estimate of the surrogate network. Since the surrogate is a shallow network, and the GAN prior acts as a powerful regularizer, robust projection can be achieved with as few as a single sample in some cases, or as many as required by the complexity of the application. Importantly, because we estimate the corruption on the fly, MimicGAN does not need to know in advance how the images have been corrupted, nor does it make any prior assumptions on the type of distribution shift that has occurred. The alternating optimization procedure of MimicGAN is demonstrated in Figure 1.

Main Findings We observe that corruption-mimicking leads to projections that are highly robust to changes in scale, translation, and rotation; to missing or partial information; and to domain shifts across similar datasets. It needs to be emphasized that this robustness is not achieved using any kind of data augmentation strategy, but rather as a consequence of corruption-mimicking, while keeping all other conditions the same. Broadly, MimicGAN demonstrates robustness to the class of functions that can be expressed by the surrogate network considered. Moreover, PGD is a special case of MimicGAN in which the surrogate network is assumed to be the identity function, and this simplification leads to highly sub-optimal projections in all applications considered.

Contributions
  • We propose a robust version of the popularly adopted PGD approach, referred to as MimicGAN, that achieves robustness across a wide variety of test-time corruptions such as scaling, rotation, and missing or partial information.

  • Our method relies on a corruption-mimicking surrogate network that estimates unknown distributional shifts or corruptions on the fly, resulting in a single algorithm that works across a wide range of test-time distribution shifts, without any prior knowledge.

  • Using comprehensive experimental evaluation, we show that with no additional supervision, MimicGAN  is significantly more robust in terms of projection error compared to several competitive baseline methods trained on the same data.

  • Finally, we demonstrate significant performance improvements with MimicGAN in applications that rely on an accurate projection: adversarial defense, anomaly detection, domain adaptation, and adapting GANs across datasets (e.g., MNIST [33], USPS handwritten digits, CelebA [38], FFHQ-Thumbnails [29], LFW [9]).

2 Related Work

MimicGAN bridges the gap between two diverse yet related classes of GAN-based methods in the literature. On the one hand, a growing number of applications rely on the GAN prior as a powerful regularizer for inverse problems, adversarial defense, anomaly detection, etc. On the other hand, GANs have emerged as a powerful tool for image-to-image translation, able to map across very complex distributional shifts. MimicGAN enjoys the merits of both approaches: it is highly effective at leveraging the GAN prior for adversarial defense, while also modeling the image translation problem as computing projections onto the manifold. In this section, we briefly review the current art in these two broad classes of approaches and present comparisons to the proposed MimicGAN.

Image-to-Image Translation While several recent techniques have been proposed to handle very complex distributional shifts, such as unpaired image-to-image translation [60, 36] and Pix2Pix [27], they involve training networks that map from one distribution to the other. More recently, Non-Adversarial Mappings (NAMs) [24, 23] have shown preliminary evidence of finding translations across domains without adversarial training. NAMs are conceptually similar to our approach, since they parameterize the distribution shift by a neural network that relies on a pre-trained generator. However, their approach expects a large set of observations for each type of corruption [24], and as a result needs to be re-trained each time. Further, none of the existing image-to-image translation techniques effectively leverage the GAN prior for projection, which is the primary focus of this work. For example, it is not clear how they could be used for applications such as adversarial defense or anomaly detection, because of their large observed-data requirement. Our goal is to perform robust projection at test time that can work with as few as 2-5 observations. Consequently, MimicGAN is able to project robustly across several distribution shifts using the same system and hyper-parameters, with no additional tuning across the distribution shifts.

Leveraging the GAN prior: Our work improves on the notion of GAN priors [57, 51], [6], [12] – the idea that optimizing in the latent space of a pre-trained GAN provides a powerful prior for solving several traditionally hard problems. Yeh et al. [57] first introduced the projected gradient descent (PGD) method with GANs for filling arbitrary holes in an image in a semantically meaningful manner. Subsequently, several efforts have pursued solving inverse problems using PGD, for example compressive recovery [12], [51] and deblurring [6]. Asim et al. [6] proposed an alternative approach for image deblurring, wherein they used two separate GANs – one for the blur kernel, and another for the images. MimicGAN differs from this approach in that we make no assumptions on the types of corruptions that can be recovered, and hence require only a single GAN for modeling the image manifold. There have also been improvements to the recovery of latent vectors for GANs [35], where it is shown that stochastic clipping can provide more accurate projections. This technique works effectively on clean data, but suffers from all the problems of PGD in being non-robust; at the same time, it is a generic approach that can be combined with MimicGAN for improved performance. Finally, MimicGAN is also related to iGAN [59], where the projection onto the image manifold is used for user-guided image manipulation. The projection is achieved by an encoder that is trained at test time to find the right initialization in the GAN latent space. We compare with iGAN in our experiments and show that, while it is more robust than PGD, it is susceptible to many of the same weaknesses.

Figure 2: MimicGAN: Illustration of the overall approach for projecting test images onto a known image manifold parameterized using a GAN.

In addition to its effectiveness for image recovery, the GAN prior has also become a reasonably successful way to defend against adversarial attacks – small perturbations in the pixel space that are designed to cause a particular classifier to fail dramatically. For example, Defense-GAN [47] uses an approach called clean and classify, where the observed adversarial example is projected onto the image manifold, with the hope of eliminating the adversarial perturbation in the process. A very related idea is explored in [26], referred to as Invert and Classify (INC), which also relies on PGD, similar to Defense-GAN. Another approach proposes to use the discriminator in addition to the generator to detect adversarial examples [49], and we show in our experiments that, in comparison, the MimicGAN defense is significantly more robust. Interestingly, all three aforementioned approaches become special cases of MimicGAN when the surrogate network in our approach is assumed to be the identity. Note that both Defense-GAN and MimicGAN are applicable to white-box as well as black-box attacks, since they do not need access to the classifier model in order to clean the data. It should be noted that most existing GAN-based defenses are effective only when the attack is designed purely on the classifier; recent evidence shows that these defenses can be broken when the attack is designed on the GAN and the classifier together [7]. We test MimicGAN against such an attack and observe that it is also vulnerable to a GAN-based attack, but to a lower degree than existing GAN-based defense strategies.

Structural network priors Finally, our work is also related to the category of recent approaches that leverage the prior afforded by an untrained neural network itself to solve traditionally challenging inverse problems. The use of an untrained surrogate network to estimate the corruption draws inspiration from previous methods such as Deep Image Prior [55] and Deep Internal Learning [52], which work directly at test time. In the case of MimicGAN, the surrogate network architecture imposes a structural prior on the types of corruptions that can be recovered accurately, as described in section 3.1.1.

3 Proposed Approach

In this section, we describe MimicGAN in detail, wherein the core idea is to jointly obtain an estimate of the unknown corruption process and perform projection via latent space optimization. The proposed algorithm is generic in that it can work with any type of generative model including the families of GANs and variational autoencoders, as we make no assumptions on the generator except that we are able to sample from a latent space in a differentiable manner.

3.1 Formulation: Corruption-Mimicking for Robust Projection

Let us denote the generator of a pre-trained GAN as $\mathcal{G}: \mathbb{R}^d \rightarrow \mathcal{X}$, where $\mathcal{X}$ represents the image manifold and $d$ is the dimensionality of the latent space. We assume the observation that needs to be projected onto $\mathcal{X}$ is given by $\mathbf{y} = f(\mathbf{x})$, where $f$ is some unknown corruption function and $\mathbf{x} \in \mathcal{X}$ is the true image on the manifold. Note that $f$ belongs to a broad class of functions including geometric transformations (e.g., affine shift), missing information (e.g., missing pixels), or a systematic distributional shift (e.g., changes to pixel intensities). The goal is to estimate $\mathbf{x}$ by projecting $\mathbf{y}$ onto $\mathcal{X}$, without any prior knowledge about $f$ or paired examples representing the function. Formally, let $\mathbf{y}_i, i = 1, \dots, N$ be observations which are assumed to be produced by a corruption process along with an unknown noise component, i.e., $\mathbf{y}_i = f(\mathbf{x}_i; \theta) + \mathbf{n}_i$, where the $\mathbf{x}_i$'s are assumed to be i.i.d., drawn at random from the image manifold parameterized by the generator $\mathcal{G}$, $\theta$ corresponds to the unknown corruption parameters, and $\mathbf{n}_i \sim \mathcal{N}(\mu, \sigma)$, with $(\mu, \sigma)$ denoting parameters of the Normal distribution. In this formulation, $f$ can be inherently stochastic, i.e., different observations can be corrupted in slightly different ways. In the rest of the text, we drop the parameter $\theta$ for convenience.

Our main contribution in this paper is a technique called corruption-mimicking, which estimates the likely corruption function while simultaneously projecting the observation onto the image manifold. We propose to parameterize the corruption function by a neural network, $\hat{f}$, that “mimics” the unknown corruption process, conditioned upon the current best estimate of the projection $\mathcal{G}(\hat{\mathbf{z}})$. Next, we estimate the best possible projection conditioned on the current estimate of the function $\hat{f}$ using (2). This alternating optimization progresses by incrementally refining $\hat{f}$, while ensuring that there always exists a valid solution $\hat{\mathbf{z}}$ such that $\hat{f}(\mathcal{G}(\hat{\mathbf{z}})) \approx \mathbf{y}$. The generator is kept frozen during the entire optimization loop, and hence this is implemented as an entirely test-time solution. Figure 2 shows an overview of the proposed approach, and how the gradients are used to update $\hat{f}$ and $\hat{\mathbf{z}}$, respectively. Figure 1 shows an example of the alternating optimization procedure, which demonstrates how a better estimate of $\hat{f}$ can lead to a reliable projection. In the examples shown, the corruption function is arbitrary cropping (“zoom in”) applied to the original image.

The proposed solution is applicable over a wide class of functions $f \in \mathcal{H}$, where $\mathcal{H}$ is the hypothesis space of corruption functions that can be modeled using the surrogate neural network. Mathematically, we reformulate (2) as follows:

$$\{\hat{\mathbf{z}}_i\},\ \hat{f} = \arg\min_{\{\mathbf{z}_i\},\ \hat{f} \in \mathcal{H}} \sum_{i=1}^{N} \mathcal{L}\left(\mathbf{y}_i, \hat{f}(\mathcal{G}(\mathbf{z}_i))\right) \qquad (1)$$

Since both $\hat{f}$ and $\mathcal{G}$ are differentiable, we can evaluate the gradients of the objective in (1) using backpropagation and utilize existing gradient-based optimizers. In addition to computing gradients with respect to $\mathbf{z}_i$, we also perform clipping in order to restrict it within the desired range (e.g., $[-1, 1]$), resulting in a PGD-style optimization. Solving this alternating optimization problem produces high-quality estimates for both $\hat{f}$ and the projections $\hat{\mathbf{x}}_i = \mathcal{G}(\hat{\mathbf{z}}_i)$.

In the special case where the corruption function $f$ is known a priori, the projection problem can be simplified:

$$\{\hat{\mathbf{z}}_i\} = \arg\min_{\{\mathbf{z}_i\}} \sum_{i=1}^{N} \mathcal{L}\left(\mathbf{y}_i, f(\mathcal{G}(\mathbf{z}_i))\right) \qquad (2)$$

where the optimal projections are given by $\hat{\mathbf{x}}_i = \mathcal{G}(\hat{\mathbf{z}}_i)$, $\mathcal{L}$ is a loss function (e.g., mean squared error) for reconstructing the observations using the known $f$, and $\mathbf{z}_i$ is the latent noise vector. This is commonly adopted in several state-of-the-art solutions for image inpainting [57], adversarial defense [47], compressed sensing [12], [51], and deblurring [6].

While (2) can be highly effective when $f$ is known, it fails dramatically when $f$ is unknown. In practice, this is often handled by making the naïve assumption that $f$ is the identity and employing PGD, which, as we show in Section 4, tends to produce non-robust projections onto the image manifold under a wide range of commonly occurring corruptions. By producing highly unreliable projections, this eventually makes the GAN unusable in downstream applications.

Architecture

Figure 3: Architecture of the surrogate network used to model the distribution shift at test time. The same architecture is used for the CelebA, LFW, and FFHQ datasets. The mask acts independently on the RGB channels, as a multiplication without any bias. The addition symbol denotes a passthrough that is just an addition operation; we do not use any non-linearity after the sum.

The surrogate neural network plays a crucial role in obtaining robust projections. We first use a spatial transformer layer (STL) [28], whose parameters model affine transforms corresponding to scale, rotation, and shearing. Next, we include convolutional layers, followed by a masking layer, where the mask is randomly initialized with the same size as the image. The architecture is described in Figure 3.

Justification for the architecture: The robustness afforded by MimicGAN corresponds to the class of functions that can be expressed using the surrogate network, $\hat{f}$. Consequently, we choose the layers based on commonly occurring corruptions. The spatial transformer captures affine transformations, as a result of which we obtain robustness to geometric corruptions such as scale, rotation, and shift. Next, convolutional layers with non-linearities allow us to model various image transformations such as blurs, edges, color shifts, etc. The masking layer allows us to model corruptions such as arbitrary masks, missing pixels, noise, etc. Finally, we create a shortcut connection [21] to make it easy for the network to model the identity transformation.

The architecture of the surrogate network chosen in this work is based on “typical” corruptions we expect to see in the wild. The layers in the network can be substituted based on the application at hand; for example, if one only expects to see affine corruptions, then an ST layer alone should suffice for MimicGAN. More generally, by choosing the architecture of the surrogate network, we impose a structural prior on the kinds of corruptions MimicGAN can handle. As a result, this can be a limitation if the corruption cannot be modeled by existing CNN layers (for example, affine transforms without the ST layer).
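To make this concrete, below is a minimal PyTorch-style sketch of such a surrogate (our implementation uses TensorFlow [2]; the layer widths, kernel sizes, and the exact placement of the shortcut here are illustrative assumptions, not the exact configuration of Figure 3):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Surrogate(nn.Module):
    """Corruption-mimicking surrogate: spatial transformer -> shallow
    conv stack (with a shortcut connection) -> multiplicative mask."""

    def __init__(self, channels=3, image_size=64):
        super().__init__()
        # Affine parameters of the spatial transformer, initialized to identity.
        self.theta = nn.Parameter(torch.tensor([[1., 0., 0.],
                                                [0., 1., 0.]]).unsqueeze(0))
        # Shallow convolutional stack for photometric corruptions
        # (blurs, edges, color shifts).
        self.conv = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, kernel_size=3, padding=1))
        # Image-sized multiplicative mask, acting per RGB channel, no bias.
        self.mask = nn.Parameter(torch.rand(1, channels, image_size, image_size))

    def forward(self, x):
        # Spatial transformer: scale / rotation / shear / shift.
        grid = F.affine_grid(self.theta.expand(x.size(0), -1, -1),
                             list(x.size()), align_corners=False)
        x = F.grid_sample(x, grid, align_corners=False)
        # Shortcut connection makes the identity map easy to represent;
        # no non-linearity is applied after the sum.
        return self.mask * (x + self.conv(x))
```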

Given the highly underdetermined nature of this problem, in many cases $\hat{f}$ need not emulate the corruption exactly. Nevertheless, the process of learning this surrogate sufficiently regularizes the problem of computing the optimal projection. The balance between updating $\hat{f}$ and re-estimating the projection using PGD is critical to the convergence behavior. Since $\hat{f}$ is updated using a different set of images in each iteration (due to updates in $\hat{\mathbf{z}}$), it rarely demonstrates overfitting behavior. However, we note that overfitting can occur if the surrogate network is made very deep.

Loss Function

Next, we describe the construction of the loss function in (1), which comprises two terms:

  • Corruption-mimicking loss: Measures the discrepancy between the true observations and the estimates from MimicGAN,

    $$\mathcal{L}_{mimic} = \frac{1}{N}\sum_{i=1}^{N} \left\| \mathbf{y}_i - \hat{f}(\mathcal{G}(\mathbf{z}_i)) \right\|_1,$$

    where $\|\cdot\|_1$ is the $\ell_1$ norm.

  • Adversarial loss: Using the discriminator $\mathcal{D}$ and the generator $\mathcal{G}$ from the pre-trained GAN, we measure:

    $$\mathcal{L}_{adv} = \frac{1}{N}\sum_{i=1}^{N} \log\left(1 - \mathcal{D}(\mathcal{G}(\mathbf{z}_i))\right).$$

    Note, this is the same as the generator loss used for training a GAN.

Given these components, the overall loss is defined as:

$$\mathcal{L} = \mathcal{L}_{mimic} + \lambda\, \mathcal{L}_{adv}, \qquad (3)$$

where the weight $\lambda$ is kept fixed. Note that the projected gradient descent (PGD) technique [57, 47] can be derived as a special case of our approach, when $\hat{f} = \mathcal{I}$, where $\mathcal{I}$ is the identity function. Also note that since the surrogate can never perfectly learn the identity function, MimicGAN will slightly underperform PGD in cases where there is no corruption, i.e., when the corruption is the identity.
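To make (3) concrete, a minimal PyTorch-style sketch of the combined loss is given below (the original implementation is in TensorFlow; the weight value, the $\ell_1$ reconstruction term, and the assumption that the discriminator outputs probabilities are illustrative choices here):

```python
import torch

def mimicgan_loss(y, z, G, D, f_hat, lam=1e-2):
    """Overall loss of eq. (3): corruption-mimicking term plus a weighted
    adversarial term. `lam` is a placeholder, not the paper's setting."""
    x_hat = G(z)  # current projection estimates G(z)
    # Corruption-mimicking loss: l1 discrepancy between observations y and
    # the surrogate applied to the projections.
    loss_mimic = (y - f_hat(x_hat)).abs().flatten(1).sum(dim=1).mean()
    # Generator-style adversarial loss keeps G(z) on the image manifold;
    # D is assumed to output probabilities in (0, 1).
    loss_adv = torch.log(1.0 - D(x_hat) + 1e-8).mean()
    return loss_mimic + lam * loss_adv
```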

Input :  Observed images $\{\mathbf{y}_i\}_{i=1}^{N}$, pre-trained generator $\mathcal{G}$ and discriminator $\mathcal{D}$.
Output :  Recovered images $\hat{\mathbf{x}}_i = \mathcal{G}(\hat{\mathbf{z}}_i)$, surrogate $\hat{f}$.
Initialize :  For all $i$, $\hat{\mathbf{z}}_i$ is initialized as the average of realizations drawn from the latent prior // see text
Initialize :  Random initialization of the surrogate parameters $\mathbf{W}$
for $t = 1$ to $T$ do
       for $t_1 = 1$ to $T_1$ // update surrogate
        do
              $\hat{\mathbf{x}}_i \leftarrow \mathcal{G}(\hat{\mathbf{z}}_i)$; compute loss using (3); gradient step on $\mathbf{W}$;
        end for
       for $t_2 = 1$ to $T_2$ // perform PGD conditioned on surrogate
        do
              compute loss using (3); gradient step on $\hat{\mathbf{z}}_i$; $\hat{\mathbf{z}}_i \leftarrow \Pi(\hat{\mathbf{z}}_i)$;
        end for

end for
return $\{\hat{\mathbf{x}}_i\}$, $\hat{f}$.
Algorithm 1 MimicGAN

Algorithm

The procedure to perform the alternating optimization is shown in Algorithm 1. We run the inner loops for updating $\hat{f}$ and $\hat{\mathbf{z}}$ for $T_1$ and $T_2$ iterations, respectively. The projection operation, denoted by $\Pi$, is the clipping operation, where we restrict the $\hat{\mathbf{z}}_i$'s to lie in the range $[-1, 1]$. We use the RMSProp optimizer to perform the gradient descent step in each case, with a separate learning rate for each of the two steps. Note that, since our approach requires only the observations to compute the projection, it lends itself to task-agnostic inference, wherein the user does not need to specify the type of corruption or acquire examples a priori.
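A minimal PyTorch-style sketch of this alternating loop follows, using the loss sketch above (the default iteration counts and learning rates here are placeholders, not the paper's settings; $\mathcal{G}$ and $\mathcal{D}$ are frozen throughout):

```python
import torch

def mimicgan_project(y, G, D, f_hat, z, T=50, T1=5, T2=10,
                     lr_f=1e-3, lr_z=1e-2):
    """Alternating optimization of Algorithm 1 (sketch).
    `z` is a leaf tensor with requires_grad=True; G and D are frozen."""
    opt_f = torch.optim.RMSprop(f_hat.parameters(), lr=lr_f)
    opt_z = torch.optim.RMSprop([z], lr=lr_z)
    for _ in range(T):
        for _ in range(T1):            # update surrogate, latents frozen
            opt_f.zero_grad()
            mimicgan_loss(y, z.detach(), G, D, f_hat).backward()
            opt_f.step()
        for _ in range(T2):            # PGD on latents, surrogate frozen
            opt_z.zero_grad()
            mimicgan_loss(y, z, G, D, f_hat).backward()
            opt_z.step()
            with torch.no_grad():      # projection step: clip to [-1, 1]
                z.clamp_(-1.0, 1.0)
    return G(z).detach(), f_hat
```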

Initialization

MimicGAN depends on an initial seed to begin the alternating optimization, and we observed large variability in convergence behavior due to the choice of the seed. To avoid this, we initialize the estimate of the projected images with an average sample on the image manifold, obtained by averaging latent samples drawn from the random uniform distribution. When the latent space is drawn from a zero-centered uniform distribution, this effectively initializes them with zero values. Note that we initialize the estimates for all observations with the same mean image. We observe that this not only speeds up convergence, but is also stable across several random seeds. There are more sophisticated initialization techniques, such as in iGAN [59]; however, these are effective only under little or no corruption, and fail severely otherwise. Zero-initialization works just as effectively on the GANs considered here, but the proposed averaging technique is applicable more generally when latent space distributions are not zero-centered.
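A sketch of this initialization (the number of latent draws is an illustrative choice):

```python
import torch

def init_latents(n_obs, latent_dim, n_draws=1000):
    # Average latent draws from the U(-1, 1) prior; the mean is ~0, so every
    # observation starts from (roughly) the same mean image G(0).
    z_draws = torch.empty(n_draws, latent_dim).uniform_(-1.0, 1.0)
    z0 = z_draws.mean(dim=0, keepdim=True)
    # One copy per observation, as an optimizable leaf tensor.
    return z0.repeat(n_obs, 1).requires_grad_()
```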

4 Robustness Experiments

(a) Robustness to rotations
(b) Robustness to scale
(c) Projections under missing pixels
(d) Projections under missing context
Figure 8: Reprojection error (eq. (4)), shown here for 100 test images, along with its standard deviation. In all four challenging cases considered here, MimicGAN significantly outperforms the baselines, which are widely used to explore the latent space of a GAN.
(a) Robustness to rotations (arrow indicates setting used during training)
(b) Robustness to scale (arrow indicates setting used during training)
(c) Robustness to missing pixels.
(d) Robustness to missing contextual information.
Figure 13: Qualitative results demonstrating the invariance properties for different test images under all the corruptions considered here. Each image is from a different run with the specified corruption setting, and we see that MimicGAN provides significantly lower reprojection error in all the cases above.

In this section, we demonstrate how corruption-mimicking with MimicGAN improves projection quality across various deformations and transformations of a held-out test set that is not accessed by the GAN during training. As described in the previous section, we assume that the GAN has been trained on clean, uncorrupted data. In addition, we expect the corrupted images to be available only at test time, with no prior knowledge of the corruption function. For all empirical evaluation in this section, we measure the projection error as:

$$\mathcal{E} = \frac{1}{N}\sum_{i=1}^{N} \left\| \mathbf{x}_i^{*} - \mathcal{G}(\hat{\mathbf{z}}_i) \right\|_2^2, \qquad (4)$$

where $\mathbf{x}_i^{*}$ is the ground truth (i.e., the test image with no corruption, exactly matching the crop and rotation settings used at train time) and $\hat{\mathbf{z}}_i$ is the latent vector obtained upon convergence of Algorithm 1. Next, we describe the different transformations and distortions considered for this study, along with experiment details.
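Concretely, the metric in (4) amounts to the following computation (a sketch assuming the mean-squared form given above):

```python
import torch

def projection_error(x_true, z_hat, G):
    # Eq. (4): discrepancy between the uncorrupted ground truth and the
    # converged projections G(z_hat), averaged over the batch.
    with torch.no_grad():
        return ((x_true - G(z_hat)) ** 2).flatten(1).sum(dim=1).mean()
```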

Experiment Setup: We perform all our robustness experiments on the CelebA faces dataset [38], which contains 202,599 images. We take a center crop of each image and resize it to $64\times64$ pixels. We use the DCGAN [45] architecture, which is trained on a subset of the images, while the rest are held out for evaluation. Following [45], we rescale all images to be in the range $[-1, 1]$. Finally, we train the GANs on the original, clean images, and introduce corruptions only at test time. Note, all our experiments were carried out using TensorFlow [2]. To corrupt the samples in each experiment, we draw random samples from the test set and apply the same corruption to all of them, for simplicity. MimicGAN can work even when each sample is corrupted with slightly different parameters, e.g., when each image is rotated by a slightly different angle. We test MimicGAN in this more challenging setting in the adversarial defense experiment in section 5.2, and show that it is still robust.

Hyperparameters: In general, we find MimicGAN to be highly stable to the hyper-parameter choices in algorithm 1, namely the iteration counts $T$, $T_1$, $T_2$ and the learning rates; all results reported here use a single fixed setting. Though these can be refined for different datasets, we observed this setting to produce consistent performance in all cases. It is important to note that in all the following experiments, the held-out set is directly presented to MimicGAN without any additional tuning or training.

Baselines: Following the state of practice, we compare MimicGAN against the following baseline techniques:

  • Projected Gradient Descent (PGD): This is a special case of MimicGAN where the surrogate $\hat{f}$ is the identity. We use a fixed learning rate for projected gradient descent. Note that PGD has been successfully used in several applications involving GANs [6, 12, 51, 57]. The optimal projection is given by $\hat{\mathbf{x}} = \mathcal{G}(\hat{\mathbf{z}})$, where $\hat{\mathbf{z}} = \arg\min_{\mathbf{z}} \mathcal{L}(\mathbf{y}, \mathcal{G}(\mathbf{z}))$; we drop the sample index for convenience.

  • ResNet + PGD: We repeat PGD, but this time compute the loss in the feature space of a pre-trained ResNet-50 [21]. A version of this idea with VGG features has been used in [24], but we observed improved performance with ResNet features compared to VGG. We expect this to be a better baseline than simply using raw pixel intensities, since the ResNet has been trained on ImageNet [46], which includes a wide range of image variations. However, it has been previously reported that even complex models such as the ResNet fail to generalize to even small perturbations [8]. The optimal projection in this case is given by $\hat{\mathbf{x}} = \mathcal{G}(\hat{\mathbf{z}})$, where $\hat{\mathbf{z}} = \arg\min_{\mathbf{z}} \mathcal{L}\left(\Phi(\mathbf{y}), \Phi(\mathcal{G}(\mathbf{z}))\right)$ and $\Phi(\cdot)$ extracts the ResNet-50 features of an image. We follow [59] and include a mixture of losses in both pixel space (like PGD) and ResNet feature space, with a fixed mixing weight.

  • iGAN [59]: Interactive GAN learns an encoder on the fly to determine the best projection. We use an encoder with a similar architecture as the discriminator, except that the last layer contains as many neurons as the latent dimension. This network, $E$, is trained on the given observations such that the optimal projection is given by $\hat{\mathbf{x}} = \mathcal{G}(\hat{\mathbf{z}})$, where $\hat{\mathbf{z}} = E(\mathbf{y})$.

  • BiGAN/ALI [15], [16]: This technique learns an encoder that maps into the latent space directly, while training the GAN. Since our final goal is to project onto the image manifold, one can use the learned encoder directly. The optimal projection is obtained by passing the image through the encoder and the decoder, as $\hat{\mathbf{x}} = \mathcal{G}(E(\mathbf{y}))$, where $E$ is the encoder in the BiGAN.

    In all cases, we measure the quality of projection as the error between the original image and the one obtained from the generator after encoding. We also use the same pre-trained GAN in all the baselines, so that the only factor of variation is the projection technique.

4.1 Robustness to Affine Transforms

In this experiment, we study how MimicGAN can be used to obtain scale- and shift-invariant projections onto the image manifold. These are among the most common corruptions one can expect to observe in the wild. We consider two such transformations: (a) scale, where we provide more context in the image (“zooming out”) than what the GAN observed during training, and (b) rotation, where we rotate the images with a small crop to avoid edge artifacts. As we will demonstrate, even these seemingly simple transformations can significantly throw off modern deep learning systems, unless they have appeared in the training set [8].

Interestingly, MimicGAN provides robustness to these transformations over a very wide range, as shown in Figures 8(b) and 8(a), respectively. Here, we chose random examples from the held-out test set and compared the projection error (mean and standard deviation) obtained for different amounts of scaling or rotation. It is immediately obvious that the baseline techniques are extremely sensitive to even small changes in these settings. In comparison, MimicGAN demonstrates invariance over a very large magnitude of perturbations. Additionally, in Figures 13(b) and 13(a), we show examples of all crops and rotations on a particular test image and demonstrate the effectiveness of MimicGAN. Note, the canonical setting used during training is marked with an arrow. It is interesting to observe that all baseline methods fail somewhat differently.

4.2 Projections with missing or partial information

Next, we study the robustness of MimicGAN to missing information or a lack of complete context while projecting onto the manifold. Specifically, we consider two commonly observed cases: (a) Missing pixels: we randomly drop pixels from the image, varying the fraction of missing pixels over a wide range. (b) Zoom-in: we intentionally leave out much of the context of the original image by cropping very close to the face. In both cases, obtaining an accurate projection is an ill-posed inverse problem, since there can be infinitely many solutions for the same observation, especially when the corruption is extreme. We show that, even in this ill-posed setting, MimicGAN is able to provide meaningful and highly plausible projections.

Projection results obtained under these corruptions are shown in Figures 13(c) and 13(d), respectively. Quantitative results, obtained by averaging the errors over randomly chosen test examples, are shown in Figures 8(c) and 8(d). We observe that BiGAN fails very easily with even small amounts of missing pixels, while PGD appears to be more stable but eventually fails as the fraction of missing pixels grows. In comparison, MimicGAN remains robust to increasing levels of corruption and produces significantly lower projection errors.

4.3 Properties of MimicGAN

Finally, we study the behavior of MimicGAN under a varying number of observations, its computational cost, and how it generalizes to larger images of size $128\times128$.

Relationship to number of observations Since the surrogate network is one of the most important aspects of MimicGAN, the number of available observations can be expected to play a significant role in the quality of projection. For this experiment, we take a total of 128 images from the CelebA dataset and project them using MimicGAN, varying the batch size over [1, 2, 4, 8, 16, 32, 64, 128]. We compare two metrics: quality of reprojection (same as eq. (4)) and corruption-mimicking error, which is defined as the loss between the observation and the output of the surrogate network. Ideally, we want both to be as small as possible.

In Figure 17(a), we show the results for this experiment, where the corruption considered is rotation. As expected, the quality improves with more observations. However, it is interesting to note that typically 8-10 samples are sufficient to recover the same projection error as 100 observations. Moreover, as expected, with a single observation the surrogate overfits, leading to very low corruption-mimicking error but very high reprojection error. The high variance in the reprojection error is explained by the fact that, when only a single observation is available, we run into an identifiability issue – i.e., MimicGAN can no longer distinguish the corruption from the original image. We note that we obtain lower reprojection error and higher visual quality even with just 2 observations, when compared to the baseline, ResNet+PGD.

Time Complexity Trade-Off Naturally, using a surrogate network incurs a computational burden compared with vanilla PGD, because of the need to train the surrogate. In Figure 17(c), we show the time taken to project 100 random samples for images of size $64\times64$ and $128\times128$. We compute the time taken per iteration for each of the methods, on the same NVIDIA P100 GPU with 16GB of memory. We observe that, as expected, MimicGAN takes a constant factor more time than PGD, but is significantly faster than iGAN or ResNet+PGD at both image sizes. This is primarily because computing ResNet features becomes the bottleneck in these methods.

Generalization to larger images We also show that robust projection can be achieved with larger images, using exactly the same surrogate model (adjusted for size) and hyper-parameter settings. We see similar robustness advantages even for these images. In Figure 17(b), we show how MimicGAN provides robust projections for $128\times128$-sized images, when the corruption is an unknown rotation.

(a) Quality of projection under different numbers of observations. The high variance in projection with just a single observation is due to both an overfit surrogate and the identifiability issue arising from not being able to tell apart the underlying true signal from the corruption.
(b) Robustness on the FFHQ dataset with $128\times128$ images
(c) Time complexity per 100 images
Figure 17: Properties of MimicGAN: (a) number of observations vs. quality of projection and corruption mimicking; (b) robustness properties generalize to larger images; and (c) time taken to compute the projection compared to different baselines.

5 Applications of Robust Projection

(a) CelebA → LFW: MimicGAN is robust to distribution shifts, as seen here, where standard projected gradient descent (PGD) completely fails to recover accurate projections.
(b) Mix ’n Match: MimicGAN provides the flexibility to use different GANs for datasets other than what was used during training.
Figure 20: Here we show projections obtained with MimicGAN, i.e., $\mathcal{G}(\hat{\mathbf{z}})$, from three datasets – CelebA [38], FFHQ [29], and LFW [44] – where held-out samples are projected onto GANs trained on each of these datasets. The images to be projected are randomly chosen from each of the datasets. It can be seen that MimicGAN provides very meaningful projections across distributions, without explicitly knowing how these datasets are related to each other.

In this section, we study how robust projections can lead to performance gains in different applications involving GANs. The idea of projecting onto a known image manifold has been leveraged in applications such as adversarial defense [47, 26, 49], anomaly detection [58, 4, 5], domain adaptation, etc. We use the following experiments to verify our hypothesis that a more robust projection can make GAN based solutions more effective in these applications.

5.1 Distribution Alignment

An important application of GANs has been in the context of handling distribution shifts across datasets, and we evaluate the effectiveness of  MimicGAN  in solving this challenging problem. First, we consider the problem of projecting face images onto a manifold inferred using face datasets characterized by systematic distributional shifts. For example, projecting a child’s face using GANs trained extensively using adult faces is quite challenging in practice. Subsequently, we consider the problem of unsupervised domain adaptation, which attempts to leverage labeled data from a source dataset to build a classifier for an unlabeled target dataset, when there is an unknown distributional shift between the two datasets.

Adapting Face GANs

In this application, we consider three different face image datasets, namely CelebA, LFW [9] and FFHQ [29], and evaluate the quality of projecting faces from one dataset onto GANs trained using another dataset. With existing techniques, the projection operation can completely fail unless we know a priori how to normalize the distribution shift, which is challenging in practice. Instead, we show how MimicGAN  can help in accounting for this distribution shift such that any relevant GAN “backend” can be used without loss of functionality.

Datasets:
  • LFW: A combination of PubFig83 [44] and the Labeled Faces in the Wild (LFW) aligned dataset [25], following [9]. This dataset contains 25,068 images of celebrities. We use the original settings proposed in [9], with a crop of 200 pixels that contains only the close-up of the face, followed by resizing to $64\times64$.

  • FFHQ: [29] contains 70,000 images scraped from Flickr, at multiple resolutions; we use the thumbnail version with images of size $128\times128$, without any additional crops or resizing.

Note that, for all datasets, we used a DCGAN architecture similar to the one described in Section 5.1.2. The only difference was in the case of FFHQ, wherein we included an additional layer in both the generator and the discriminator to account for the change in image size ($128\times128$). In Figure 20(b), we illustrate the results obtained by projecting CelebA faces onto an LFW GAN. For comparison, we also show the corresponding projections obtained using the baseline PGD in Figure 20(a). It is seen that PGD tries to identify images that exactly match the given inputs, which, if the mode does not exist in the distribution, results in sub-optimal projections. In contrast, MimicGAN produces significantly higher-quality projections by automatically accounting for the distribution shifts.

Unsupervised Domain Adaptation

The task considered here is the alignment between two related yet distinct domains (source and target), while building a classifier for the target without access to any labeled data. For this experiment, we use a commonly benchmarked pair of domains in handwritten digit recognition, namely MNIST and USPS. We argue that, with corruption mimicking, aligning two distributions essentially boils down to projecting target data onto the source GAN. This task is challenging due to unknown variations in scale, skew, rotation, and other statistical properties across the two domains. We show that with MimicGAN, this can be achieved remarkably well. We follow the experimental setup of several domain adaptation works in the literature and employ simple 1-NN classifiers, in order to evaluate only the alignment quality rather than more complex classifiers.

(a) USPS → MNIST
(b) MNIST → USPS
Figure 23: MimicGAN allows for accurate projection under distribution shifts, as shown here with the MNIST and USPS datasets. The images to be projected are randomly chosen from the other dataset, and we show a few examples here. The domain adaptation results on the full 65000 MNIST and 9298 USPS digits are shown in Table 1.

Setup: We follow the setup in [50], where we perform adaptation on the entire 65000 digits of MNIST and 9298 digits of USPS. We resized the MNIST digits to 16x16 so they are the same size as the USPS digits, and performed the projection in batches for efficiency. The hyper-parameters are not very sensitive, and were chosen based on a validation set of 100 images.

Baselines: We compare the performance of MimicGAN  in distribution alignment with the following baselines:

  • No Adaptation: Here we use a nearest neighbor classifier trained on the source dataset directly on the target dataset.

  • PGD: We compare performance when no corruption-mimicking is performed, with the same GAN backend as used by MimicGAN.

  • Subspace Alignment [17]: A classical domain adaptation approach that aligns two distributions using a single subspace that minimizes the Frobenius norm between the target and the subspace-aligned source. The source-aligned coordinate system is given by $X_a = X_s X_s^{\top} X_t$, where $X_s$ is the subspace fit to the source dataset and $X_t$ is the subspace of the target dataset.

  • Large Scale Optimal Transport (OT) [50] A recent technique that leverages an efficient version of optimal transport to align large datasets.

Additionally, we compare the performance of MimicGAN to more recent domain adaptation approaches in Table 1. For this, we follow the standard protocol of training a CNN on the source dataset and testing it on the aligned target dataset; we verify that our no-adaptation numbers are very close to those reported in most recent papers. A sketch of the task-agnostic 1-NN evaluation protocol is given below.
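The following is our reading of the 1-NN protocol as a runnable sketch (function and variable names are ours): fit a 1-NN classifier on raw source pixels, then classify target digits after they have been projected onto the source GAN with MimicGAN.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def evaluate_alignment(x_src, y_src, x_tgt_proj, y_tgt):
    """Task-agnostic alignment accuracy: 1-NN on raw source pixels,
    applied to target samples projected onto the source GAN."""
    knn = KNeighborsClassifier(n_neighbors=1)
    knn.fit(x_src.reshape(len(x_src), -1), y_src)
    pred = knn.predict(x_tgt_proj.reshape(len(x_tgt_proj), -1))
    return float(np.mean(pred == y_tgt))
```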

Method                    | M → U | U → M
Task Agnostic (1-NN):
No Adaptation             | 71.26 | 31.65
PGD                       | 58.15 | 24.96
Subspace Alignment [17]   | 59.53 | 44.61
Large Scale OT [50]       | 77.92 | 60.50
MimicGAN (ours)           | 80.75 | 65.50
Task Agnostic (CNN):
No Adaptation             | 81.6  | 60.3
PGD                       | 61.7  | 48.2
MimicGAN (ours)           | 83.7  | 63.3
Task Specific:
CoGAN [37]                | 91.2  | 89.1
ADDA [54]                 | 92.4  | 93.8
Gen to Adapt [48]         | 95.3  | 90.8
DANN [18]                 | 95.7  | 90.0
CyCADA [22]               | 95.6  | 96.5
Table 1: An application of distribution alignment in unsupervised domain adaptation (UDA). MimicGAN outperforms many task-agnostic alignment strategies, i.e., methods whose alignment does not have access to source or target labels, using both a pixel-based 1-nearest neighbor (1-NN) classifier and a convolutional neural network (CNN). For context, we also compare with recent task-specific adaptation techniques.

Results: We show the results of projection for both datasets in Figures 23(a) and 23(b), and it is clearly evident that corruption-mimicking significantly improves the quality of projection compared to PGD. It is also interesting to note that PGD never recovers the true samples from the target distribution, and only recovers images very similar to those the GAN has previously seen. We also quantitatively evaluate the 1-NN classifier performance after performing the alignment using MimicGAN. Table 1 shows that alignment under MimicGAN is superior even to large-scale optimal transport [50], which computes a sample-wise alignment between the two entire distributions. More importantly, using MimicGAN improves over PGD by over 20 and 40 percentage points for MNIST→USPS and USPS→MNIST, respectively. This significant boost is attributed to the improved quality of projection.

For the CNN-based methods, it should be noted that the current state-of-the-art methods in domain adaptation all perform task-specific adaptation, i.e., the alignment is closely tied to the classifier and uses labels from the source domain. On the other hand, MimicGAN aligns the distributions in a task-agnostic manner, i.e., with no knowledge of either the CNN classifier or the source labels. As a result, MimicGAN is inferior to the existing task-specific methods, as expected. However, it is worth noting that among the task-agnostic methods (pure distribution alignment), MimicGAN obtains the highest adaptation accuracy.

(a) Defense against universal perturbations [40]
(b) Defense accuracy (%) under image-dependent attacks:

Attack                                | No Defense | Cowboy [49] | Defense-GAN [47] | MimicGAN (ours)
BIM [31]                              | 05.60      |             |                  |
DF [41]                               | 06.20      |             |                  |
FGSM [19]                             | 11.60      |             |                  |
CWL [14]                              | 00.40      |             |                  |
PGDM [39]                             | 05.40      |             |                  |
Obfuscated [7] (attack includes GAN)  |            |             |                  |

(c) MimicGAN for anomaly detection using a leave-one-class-out setup.
Figure 27: Adversarial Defense & Anomaly Detection: In panel (c), the class indicated as the anomaly is left out while training a GAN, and the anomalous class is then reprojected onto this GAN; we expect a better projection technique to identify out-of-distribution samples more effectively. The MimicGAN defense shows significantly higher robustness to several strong adversarial attacks. In panel (b), we show the average defense on adversarial attacks obtained using 3 different GANs on the same dataset. Across the board, we see that MimicGAN provides a stronger defense, even in the case of the Obfuscated Gradients attack [7], where the GAN is included in designing the adversarial attack.

5.2 Adversarial Defense

Here we study how MimicGAN, by design, can provide an effective defense against several state-of-the-art adversarial attacks. We argue that a robust projection onto the image manifold results in a very effective cleaning of adversarial data. In this context, the MimicGAN defense can be viewed as a generalization of recent GAN-based defenses [47, 26, 49] that assume the corruption function to be the identity, similar to the PGD baseline. The more recent Cowboy defense [49] adds an adversarial loss term, computed with the discriminator, to the Defense-GAN loss [47]; we implement it in the same fashion.

We consider a variety of strong attacks to benchmark our defense, and we find that in every single case the MimicGAN defense is significantly stronger than existing techniques. While we outperform Defense-GAN [47], we retain its advantages – i.e., the MimicGAN defense is a test-time-only algorithm that does not require any additional training. It is also entirely unsupervised and does not need knowledge of the classifier prior to deploying the defense, thus leading to a practical defense strategy.

Setup: We use a CNN classifier trained to high test accuracy on the Fashion-MNIST dataset [56]. We design a variety of attacks using the cleverhans toolbox [42], and test our defense on this classifier in all the following experiments. The proposed defense involves the projection operation, following Algorithm 1, where the unknown corruptions are adversarial perturbations. The performance of different defense strategies is measured using randomly chosen test images from the dataset, which are cleaned in batches. We use hyper-parameter settings determined using a validation set of 100 examples. We observe that, as before, these settings are not very sensitive, and remain effective over a large range of values.

Universal perturbations: MimicGAN also provides an effective defense against universal perturbations [40, 1], which belong to the class of image-agnostic perturbations: the attack is a single vector which, when added to the entire dataset, can fool a classifier. To test this defense, we first design a targeted universal perturbation using the Fast Gradient Sign Method (FGSM) [19] by computing the mean adversarial perturbation over a set of test images, i.e., for adversarially perturbed images $\mathbf{x}_i^{adv} = \mathbf{x}_i + \delta_i$, we define the universal perturbation to be $\mathbf{v} = \frac{1}{N}\sum_i \delta_i$. We can also increase the magnitude of the attack by scaling it as $\alpha\mathbf{v}$. Typically, a larger magnitude implies a stronger attack, up to a certain point after which it becomes noise and reduces to a trivial attack. In Figure 27(a), we observe that, compared to the state-of-the-art Defense-GAN, our defense is significantly more robust.
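A sketch of how such a universal perturbation can be constructed (`fgsm_perturb` is a hypothetical helper returning the FGSM perturbation of a single image for the target classifier):

```python
import torch

def universal_perturbation(x_batch, fgsm_perturb, alpha=1.0):
    # Average per-image FGSM perturbations to obtain a single image-agnostic
    # attack vector v, then scale by alpha to control attack strength.
    deltas = torch.stack([fgsm_perturb(x) for x in x_batch])
    v = deltas.mean(dim=0)
    return alpha * v
```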

Image-dependent perturbations: Finally, we test MimicGAN against the following image-specific attacks: (a) the Carlini-Wagner L2 attack (CWL) [14], (b) the Fast Gradient Sign Method (FGSM) [19], (c) the Projected Gradient Descent Method (PGDM) [39], (d) DeepFool [41], (e) the Basic Iterative Method (BIM) [31], and (f) Obfuscated Gradients [7]. We hypothesize that even though the perturbation on each image is different, the surrogate learns an average perturbation when presented with a few adversarial examples. As seen in Table 27(b), this turns out to be a strong regularization, resulting in a significantly improved defense compared to baseline approaches such as Defense-GAN [47] or Cowboy [49]. We use the same GAN backend for these baselines, so that the only variable in the adversarial defense is the projection technique. The defense performance reported in Figure 27(b) is obtained using 3 separate GAN backends, each trained with a random subset containing 45000 images (75%) of the training set, to ensure that they result in sufficiently different GANs. The obfuscated gradients attack [7] is the strongest attack considered here, as it targets the GAN in addition to the classifier; we attack all three GANs and report the performance. While MimicGAN is vulnerable to such an attack, it affords a stronger defense than plain GAN-based defenses, as seen in Table 27(b).

5.3 Anomaly Detection

GANs have become a popular choice for unsupervised anomaly detection, since out-of-distribution samples manifest as samples with a relatively high reprojection error, as they are not well represented by the image manifold inferred by the generator. A typical experimental setup [58, 4, 5] for a $K$-class dataset is to train a GAN on $K-1$ classes and use the left-out class as the anomaly; this is repeated for every class. We test the effectiveness of MimicGAN in such a task, which we expect to improve with more robust projections. We also compare our method with recent manifold-projection-based anomaly detection techniques [5, 4, 58].

Experimental Details: We perform the anomaly detection task on the MNIST digits training set, where we leave one class out and train a GAN on the remaining classes. We use the same hyper-parameters as in the domain adaptation setting described in section 5.1.2. The training set contains a large fraction of the normal data, and the test set contains the remaining normal samples along with all the anomalous class samples, similar to [58]. We use the final projection error given in eq. (4) to distinguish normal samples from anomalous ones, the hypothesis being that a normal sample should have a significantly lower error than one that is out of distribution. Though some recent approaches have also used the distance in the feature space of the discriminator as a measure for detection, we found the projection error to be more effective. We compute the area under the ROC curve as the evaluation metric for detection performance.
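The detection metric can be sketched as follows (using scikit-learn; the scoring convention, with higher projection error indicating a more anomalous sample, follows the hypothesis above):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def anomaly_auroc(errors_normal, errors_anomalous):
    # Projection error (computed against the observation itself, since no
    # clean ground truth exists at test time) serves as the anomaly score.
    scores = np.concatenate([errors_normal, errors_anomalous])
    labels = np.concatenate([np.zeros(len(errors_normal)),
                             np.ones(len(errors_anomalous))])
    return roc_auc_score(labels, scores)
```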

Table 2 shows the average detection performance across all classes in the MNIST dataset, in comparison to several recent techniques. We see that using corruption-mimicking with MimicGAN significantly improves detection performance. In addition, we match the performance of a state-of-the-art technique specifically designed for anomaly detection, simply through robust reprojection. It is also worth noting that, over our baseline approach PGD, MimicGAN increases detection performance by nearly 30 percentage points.

Method Area under ROC
VAE[5]
AnoGAN[5]
GANomaly (BIGAN) [4]
EGBAD (PGD) [58]
MimicGAN (ours)
Table 2: Anomaly detection on the MNIST leave-one-class-out experiment. Average performance over all 10 classes is reported. Corruption mimicking boosts PGD by nearly 30 points, matching state-of-the-art techniques.

6 Discussions and Future Work

In this paper, we presented MimicGAN, an entirely unsupervised system that can accurately project images onto the image manifold even in the presence of a variety of unknown corruptions. We achieve this by introducing a corruption-mimicking surrogate in addition to the GAN prior, which works without any additional data augmentation or supervision. The properties of the surrogate network enable robustness to a variety of corruptions; for example, we show that a surrogate with a spatial transformer layer provides robustness to affine transformations. We also show that by simply improving the robustness of projections across these corruptions, a huge boost in performance can be obtained in a wide variety of applications leveraging GAN priors, such as adversarial defense, domain adaptation, and anomaly detection. The results in this study indicate that the GAN prior can be much more powerful than previously understood, when coupled with robust projection strategies.

There are several avenues for future study, particularly with respect to the surrogate network, which is central to MimicGAN. For applications like domain adaptation, including available supervision in the source domain may lead to improved task-specific alignment, thereby improving adaptation quality. Next, since the surrogate network is trained at test time with a few observations, it may be easy to break it using a few adversarial examples crafted with knowledge of the surrogate network. The current framework cannot handle such shifts and will require generalizations that can provide reliable projections even under adversarially perturbed observations. Finally, exploring how robust projections improve problems in inverse imaging remains to be addressed. In this regard, making the projection variational can also be useful in recovering multiple plausible solutions to under-determined inverse problems with the GAN prior.

Acknowledgement

This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

Disclaimer

This document was prepared as an account of work sponsored by an agency of the United States government. Neither the United States government nor Lawrence Livermore National Security, LLC, nor any of their employees makes any warranty, expressed or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States government or Lawrence Livermore National Security, LLC. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States government or Lawrence Livermore National Security, LLC, and shall not be used for advertising or product endorsement purposes.

Footnotes

  1. Center for Applied Scientific Computing (CASC),
     Lawrence Livermore National Laboratory

References

  1. T. A. Hogan and B. Kailkhura (2018) Universal decision-based black-box perturbations: breaking security-through-obscurity defenses. arXiv preprint arXiv:1811.03733.
  2. M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean and M. Devin (2016) TensorFlow: large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467.
  3. R. Abdal, Y. Qin and P. Wonka (2019) Image2StyleGAN: how to embed images into the StyleGAN latent space? arXiv preprint arXiv:1904.03189.
  4. S. Akcay, A. Atapour-Abarghouei and T. P. Breckon (2018) GANomaly: semi-supervised anomaly detection via adversarial training. arXiv preprint arXiv:1805.06725.
  5. J. An and S. Cho (2015) Variational autoencoder based anomaly detection using reconstruction probability. Special Lecture on IE 2, pp. 1–18.
  6. M. Asim, F. Shamshad and A. Ahmed (2018) Solving bilinear inverse problems using deep generative priors. arXiv preprint arXiv:1802.04073.
  7. A. Athalye, N. Carlini and D. Wagner (2018) Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420.
  8. A. Azulay and Y. Weiss (2018) Why do deep convolutional networks generalize so poorly to small image transformations? arXiv preprint arXiv:1805.12177.
  9. B. Becker and E. Ortiz (2013) Evaluating open-universe face identification on the web. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 904–911.
  10. (2018) (Website).
  11. P. Bojanowski, A. Joulin, D. Lopez-Paz and A. Szlam (2017) Optimizing the latent space of generative networks. arXiv preprint arXiv:1707.05776.
  12. A. Bora, A. Jalal, E. Price and A. G. Dimakis (2017) Compressed sensing using generative models. arXiv preprint arXiv:1703.03208.
  13. A. Brock, J. Donahue and K. Simonyan (2018) Large scale GAN training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096.
  14. N. Carlini and D. Wagner (2017) Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57.
  15. J. Donahue, P. Krähenbühl and T. Darrell (2016) Adversarial feature learning. arXiv preprint arXiv:1605.09782.
  16. V. Dumoulin, I. Belghazi, B. Poole, O. Mastropietro, A. Lamb, M. Arjovsky and A. Courville (2016) Adversarially learned inference. arXiv preprint arXiv:1606.00704.
  17. B. Fernando, A. Habrard, M. Sebban and T. Tuytelaars (2013) Unsupervised visual domain adaptation using subspace alignment. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2960–2967.
  18. Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand and V. Lempitsky (2016) Domain-adversarial training of neural networks. The Journal of Machine Learning Research 17 (1), pp. 2096–2030.
  19. I. J. Goodfellow, J. Shlens and C. Szegedy (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
  20. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), pp. 2672–2680.
  21. K. He, X. Zhang, S. Ren and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
  22. J. Hoffman, E. Tzeng, T. Park, J. Zhu, P. Isola, K. Saenko, A. A. Efros and T. Darrell (2017) CyCADA: cycle-consistent adversarial domain adaptation. arXiv preprint arXiv:1711.03213.
  23. Y. Hoshen and L. Wolf (2018) NAM: non-adversarial unsupervised domain mapping. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 436–451.
  24. Y. Hoshen (2018) Non-adversarial mapping with VAEs. In Advances in Neural Information Processing Systems, pp. 7528–7537.
  25. G. B. Huang, M. Ramesh, T. Berg and E. Learned-Miller (2007) Labeled faces in the wild: a database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst.
  26. A. Ilyas, A. Jalal, E. Asteri, C. Daskalakis and A. G. Dimakis (2017) The robust manifold defense: adversarial training using generative models. arXiv preprint arXiv:1712.09196.
  27. P. Isola, J. Zhu, T. Zhou and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5967–5976.
  28. M. Jaderberg, K. Simonyan and A. Zisserman (2015) Spatial transformer networks. In Advances in Neural Information Processing Systems, pp. 2017–2025.
  29. T. Karras, S. Laine and T. Aila (2018) A style-based generator architecture for generative adversarial networks. arXiv preprint arXiv:1812.04948.
  30. D. P. Kingma and M. Welling (2013) Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
  31. A. Kurakin, I. Goodfellow and S. Bengio (2016) Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533.
  32. A. B. L. Larsen, S. K. Sønderby, H. Larochelle and O. Winther (2015) Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300.
  33. Y. LeCun (1998) The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/.
  34. C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz and Z. Wang (2016) Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint arXiv:1609.04802.
  35. Z. C. Lipton and S. Tripathi (2017) Precise recovery of latent vectors from generative adversarial networks. arXiv preprint arXiv:1702.04782.
  36. M. Liu, T. Breuel and J. Kautz (2017) Unsupervised image-to-image translation networks. In Advances in Neural Information Processing Systems, pp. 700–708.
  37. M. Liu and O. Tuzel (2016) Coupled generative adversarial networks. In Advances in Neural Information Processing Systems, pp. 469–477.
  38. Z. Liu, P. Luo, X. Wang and X. Tang (2015) Deep learning face attributes in the wild. In Proceedings of the International Conference on Computer Vision (ICCV).
  39. A. Madry, A. Makelov, L. Schmidt, D. Tsipras and A. Vladu (2018) Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations.
  40. S. Moosavi-Dezfooli, A. Fawzi, O. Fawzi and P. Frossard (2017) Universal adversarial perturbations. arXiv preprint arXiv:1610.08401.
  41. S. Moosavi-Dezfooli, A. Fawzi and P. Frossard (2016) DeepFool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574–2582.
  42. N. Papernot, I. Goodfellow, R. Sheatsley, R. Feinman and P. McDaniel (2016) cleverhans v1.0.0: an adversarial machine learning library. arXiv preprint arXiv:1610.00768.
  43. D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell and A. A. Efros (2016) Context encoders: feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2536–2544.
  44. N. Pinto, Z. Stone, T. Zickler and D. Cox (2011) Scaling up biologically-inspired computer vision: a case study in unconstrained face recognition on Facebook. In CVPR 2011 Workshops, pp. 35–42.
  45. A. Radford, L. Metz and S. Chintala (2016) Unsupervised representation learning with deep convolutional generative adversarial networks. International Conference on Learning Representations (ICLR).
  46. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla and M. Bernstein (2015) ImageNet large scale visual recognition challenge. International Journal of Computer Vision 115 (3), pp. 211–252.
  47. P. Samangouei, M. Kabkab and R. Chellappa (2018) Defense-GAN: protecting classifiers against adversarial attacks using generative models. arXiv preprint arXiv:1805.06605.
  48. S. Sankaranarayanan, Y. Balaji, C. D. Castillo and R. Chellappa (2018) Generate to adapt: aligning domains using generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8503–8512.
  49. G. K. Santhanam and P. Grnarova (2018) Defending against adversarial attacks by leveraging an entire GAN. arXiv preprint arXiv:1805.10652.
  50. V. Seguy, B. B. Damodaran, R. Flamary, N. Courty, A. Rolet and M. Blondel (2017) Large-scale optimal transport and mapping estimation. arXiv preprint arXiv:1711.02283.
  51. V. Shah and C. Hegde (2018) Solving linear inverse problems using GAN priors: an algorithm with provable guarantees. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4609–4613.
  52. A. Shocher, N. Cohen and M. Irani (2018) “Zero-shot” super-resolution using deep internal learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3118–3126.
  53. (2019) (Website).
  54. E. Tzeng, J. Hoffman, K. Saenko and T. Darrell (2017) Adversarial discriminative domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7167–7176.
  55. D. Ulyanov, A. Vedaldi and V. Lempitsky (2017) Deep image prior. arXiv preprint arXiv:1711.10925.
  56. H. Xiao, K. Rasul and R. Vollgraf (2017) Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747.
  57. R. A. Yeh, C. Chen, T. Y. Lim, A. G. Schwing, M. Hasegawa-Johnson and M. N. Do (2017) Semantic image inpainting with deep generative models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5485–5493.
  58. H. Zenati, C. S. Foo, B. Lecouat, G. Manek and V. R. Chandrasekhar (2018) Efficient GAN-based anomaly detection. arXiv preprint arXiv:1802.06222.
  59. J. Zhu, P. Krähenbühl, E. Shechtman and A. A. Efros (2016) Generative visual manipulation on the natural image manifold. In European Conference on Computer Vision, pp. 597–613.
  60. J. Zhu, T. Park, P. Isola and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2223–2232.