Realistic Adversarial Examples in 3D Meshes


   Dawei Yang*     Chaowei Xiao*     Bo Li   Jia Deng   Mingyan Liu
                     1University of Michigan 2Princeton University 3UIUC
* indicates equal contributions.
Abstract

Highly expressive models such as deep neural networks (DNNs) have been widely applied to various applications and achieved increasing success. However, recent studies show that such machine learning models are vulnerable to adversarial examples. So far, adversarial examples have been heavily explored for 2D images, while few works have studied the vulnerabilities of 3D objects which exist in the real world and are projected to the 2D domain, e.g. by taking photos, for different learning (recognition) tasks. In this paper we consider adversarial behaviors in practical scenarios by manipulating the shape and texture of a given 3D mesh representation of an object. Our goal is to project the optimized “adversarial meshes" to 2D with a photorealistic renderer, such that the rendered images still mislead different machine learning models. Extensive experiments show that by generating unnoticeable 3D adversarial perturbation on the shape or texture of a 3D mesh, the corresponding projected 2D instance can either lead classifiers to misclassify the victim object as an arbitrary malicious target, or hide a target object within the scene from object detectors. We conduct human studies to show that our optimized adversarial 3D perturbation is highly unnoticeable to human vision systems. In addition to the subtle perturbation for a given 3D mesh, we also propose to synthesize a realistic 3D mesh and place it in a scene mimicking similar rendering conditions, and therefore attack different machine learning models. In-depth analysis of transferability among various 3D renderers and of vulnerable regions of meshes is provided to help better understand adversarial behaviors in the real world.


1 Introduction

Machine learning, especially deep neural networks, has achieved great success in various domains, including image recognition (He et al., 2016), natural language processing (Collobert & Weston, 2008), speech to text translation (Deng et al., 2013), and robotics (Silver et al., 2016). Despite these successes, machine learning models have been found vulnerable to adversarial examples: a small-magnitude perturbation added to the input, such as an image, can mislead different learning models into making targeted incorrect predictions. Such adversarial examples have been widely studied in the 2D domain (Szegedy et al., 2013; Goodfellow et al., 2014; Moosavi-Dezfooli et al., 2016; Papernot et al., 2016; Carlini & Wagner, 2017; Xiao et al., 2018b; c; a), while in practice directly manipulating pixel values of real-world observations is hard. As a result, it is important to explore the vulnerabilities of 3D meshes in practice. In addition, synthetic datasets have been widely used to improve the performance of machine learning models for various tasks, including viewpoint estimation (Su et al., 2015), semantic understanding in street views (Richter et al., 2016), human pose estimation (Varol et al., 2017; Chen et al., 2015), 3D shape reconstruction (Yang & Deng, 2018; Massa et al., 2016), and indoor scene understanding (Song et al., 2017; Zhang et al., 2017; Handa et al., 2016; McCormac et al., 2017). In these tasks, images and videos are captured through a physically-based renderer, and such synthetic data are usually generated with different scene configurations and viewpoints, while the 3D assets are from large-scale datasets such as ShapeNet (Chang et al., 2015), ModelNet (Z. Wu, 2015) and SUNCG (Song et al., 2017). Therefore, it is critical to explore the possibilities of manipulating such 3D shape datasets and the corresponding severe consequences when rendering the “adversarial 3D meshes" for learning tasks.

Physical adversarial examples have been studied (Kurakin et al., 2016; Evtimov et al., 2017; Eykholt et al., 2018; Athalye & Sutskever, 2017), but they do not focus on the 3D object itself or the real-world rendering process. In this paper, we propose meshAdv to generate adversarial perturbation for 3D meshes by manipulating the shape or texture information, which can eventually be rendered to the 2D domain to attack different machine learning models. We also propose to place a 3D mesh (here we use a bunny) rendered in physically based scenes to fulfill adversarial goals. We target both classifiers (Inception-v3 by Szegedy et al. and DenseNet by Huang et al.) and object detectors (Yolo-v3 by Redmon & Farhadi). Our proposed 3D mesh based attack pipeline is shown in Figure 1. In particular, we leverage a physically-based renderer, which computes screen pixel values by raycasting view directions and simulating shape and light interactions with physics models, to project 3D scenes to the 2D domain. First, we propose to generate perturbation for either the shape or the texture of a 3D mesh, and guarantee that the rendered 2D instance is misclassified by traditional classifiers into the targeted class. Here we do not control the rendering conditions (e.g. lighting and viewpoints) and show that the “3D adversarial mesh" is able to attack the classifier under various rendering conditions with almost 100% attack success rate. Second, we generate adversarial perturbation for 3D meshes to attack an object detector in a synthetic indoor scene and show that by adding a bunny with subtle perturbation, the detector can mis-detect various existing objects within the scene. Third, we also propose to place a 3D mesh in a random outdoor scene and render it under physical conditions similar to the existing objects to guarantee a realistic observation. We show that such an added object can lead object detectors to miss the target object within the given real-world scene. To better evaluate adversarial perturbation on 3D meshes, we propose to use a smoothing loss (Vogel & Oman, 1996) and a root-mean-square distance as measurement metrics for shape and texture based perturbation respectively, and report the magnitude of adversarial perturbation in various settings (best, average, worst) to serve as a baseline for future possible adversarial attacks on 3D meshes. We conduct a user study to allow real humans to identify the categories of the generated adversarial 3D meshes, and the collected statistical results show that users recognize the adversarial meshes as ground truth with probability .

In addition, we analyze transferability across different types of renderers, where perturbation against one renderer can be transferred to another. We show that transferability among different renderers is high for untargeted attacks but low for targeted attacks, which provides in-depth understanding of the properties of different renderers. The transferability makes black-box attacks possible against different renderers. For instance, an attacker can attack a differentiable renderer and transfer the perturbation to non-differentiable ones that incur high computational cost. In our experiments, we show that we can attack a differentiable renderer, the neural mesh renderer (Kato et al., 2018), and transfer the perturbation to the non-differentiable renderer Mitsuba (Jakob, 2010). Finally, to better understand the corresponding vulnerable regions of 3D meshes, we also analyze the manipulation flow for shape based perturbation, and find that the vulnerable regions of 3D meshes usually lie on the parts that are close to the viewpoint and have large curvature. This leads to a better understanding of adversarial behaviors for real-world 3D objects and provides potential directions to enhance model robustness, such as designing adaptive attention mechanisms or leveraging deformation information to improve machine learning models.

Figure 1: The pipeline of adversarial mesh generation by meshAdv.

In summary, our contributions are listed below: 1) We propose meshAdv to generate adversarial perturbation based on the shape or texture information of 3D meshes and use a physically-based renderer to project such adversarial meshes to 2D, thereby attacking existing classifiers under various rendering conditions with almost 100% attack success rate; 2) We propose to place a 3D mesh in both indoor and outdoor scenes mimicking the same physical rendering conditions, and show that existing objects can be missed by object detectors; 3) We evaluate adversarial 3D meshes based on different metrics and provide a baseline for adversarial attacks on 3D meshes; 4) We evaluate the transferability of adversarial 3D meshes among different renderers and show that untargeted attacks can achieve high transferability; 5) We propose a pipeline for black-box attacks that attacks a differentiable renderer and transfers the perturbation to a non-differentiable renderer in both known and unknown environments; 6) We provide in-depth analysis of the vulnerable regions of 3D meshes, leading to a discussion of adversarial behaviors in the physical world.

2 Related Work

Differentiable Renderers Our method combines gradient-based optimization with a differentiable 3D renderer in an end-to-end pipeline. From this perspective, there are works taking on different tasks with a similar approach. For example, Barron & Malik use a spherical-harmonics-lighting-based differentiable renderer (Ramamoorthi & Hanrahan, 2001) to jointly estimate shape, reflectance and illumination from shading by optimizing to satisfy the rendering equation. Kato et al. propose the Neural Mesh Renderer for neural networks and perform single-image 3D mesh reconstruction and gradient-based 3D mesh editing with the renderer. Genova et al. integrate a differentiable renderer during training to regress 3D morphable model parameters from image pixels. Mordvintsev et al. show that through differentiable rendering, they can perform texture optimization and style transfer directly in screen space to achieve better visual quality and 3D properties. These gradient-based optimization methods with integrated renderers, along with our work, are largely enabled by readily accessible differentiable renderers such as OpenDR (Loper & Black, 2014), differentiable mesh renderers designed for neural networks (Kato et al., 2018; Genova et al., 2018), RenderNet (Nguyen-Phuoc et al., 2018), and the irradiance renderer (Ramamoorthi & Hanrahan, 2001; Barron & Malik, 2015). We exploit differentiable rendering techniques with a different optimization target: we try to minimize our modification to the 3D contents, while deceiving a machine learning model into misclassification or misdetection of objects in the rendered images.

Adversarial Attacks Adversarial examples have been heavily explored in the 2D domain (Szegedy et al., 2013; Goodfellow et al., 2014; Moosavi-Dezfooli et al., 2016; Papernot et al., 2016). Physical adversarial examples are studied by Kurakin et al. (2016), Evtimov et al. (2017) and Athalye & Sutskever (2017). However, these works focus on manipulating the paint color of objects rather than the 3D object itself. In this work, we aim to explore the adversarial 3D mesh itself.

Zeng et al. (2017) perturbed physical parameters (normal, illumination and material) for untargeted attacks against 3D shape classification and a visual question answering system. However, they represent the shape with a pixelwise normal map, which still operates in 2D space, and such a normal map may not be physically plausible. A concurrent work (Liu et al., 2018) proposes to manipulate lighting and geometry to attack 3D renderers. There are several major differences compared with our work: 1) Magnitude of perturbation. The perturbation in Liu et al. (2018), such as lighting change, is visible, while low-magnitude perturbation is the most important part of adversarial behaviors. In our proposed attacks, we explicitly constrain the perturbation to be of small magnitude, and we conduct human subject experiments to confirm that the perturbation is unnoticeable. 2) Targeted attack. Based on the objective function and the results of their experiments, Liu et al. (2018) can only mislead objects from one category to other close categories such as jaguar and elephant. In our work, we explicitly force the object from each class to be targeted-attacked into all of the remaining classes with almost 100% attack success rate. 3) Renderers. We perform attacks based on a state-of-the-art renderer (Kolotouros, 2018), while Liu et al. (2018) built a customized renderer, making it hard to tell whether such vulnerabilities come from the customized renderer or from the manipulated object. 4) Realistic attacks. Manipulating lighting is less realistic in open environments. Compared with their attacks on lighting and shape, we propose to manipulate the shape and texture of meshes, which is easier to realize in practice. In addition, we evaluate our attacks in random physically realistic scenes to demonstrate the robustness of the attacks under various physical conditions. 5) Victim learning models. We attack both classifiers and object detectors, which are widely used in safety-sensitive applications such as autonomous driving, while Liu et al. (2018) only attack classifiers. 6) Analysis. We provide in-depth analysis of the 3D adversarial examples, such as their vulnerable regions, to help build a better understanding.

3 3D Adversarial Meshes

Here we formally define the adversarial 3D mesh problem first, and then introduce the adversarial optimization objectives and loss functions.

3.1 Problem definition

Let $g$ be a machine learning model trained in the 2D domain. $I$ denotes a 2D image with corresponding label $y$. $g$ aims to learn a mapping from the image domain $\mathcal{I}$ to the label space $\mathcal{Y}$, where $I \in \mathcal{I}$ and $y \in \mathcal{Y}$. A physically based renderer $R$ computes a 2D image $I = R(M; C, L)$ from a 3D mesh $M$ with camera parameters $C$ and illumination parameters $L$. A 3D mesh $M$ can be further represented by vertices $V$, texture $T$ and faces $F$. The renderer $R$ is differentiable under certain assumptions; please refer to the appendix for more details. An attacker aims to generate a “3D adversarial mesh” $M^{adv}$ by manipulating the shape (vertices $V$) and texture information $T$ of the original 3D mesh $M$, which is eventually rendered to a 2D image $I^{adv} = R(M^{adv}; C, L)$ to mislead a machine learning model $g$ so that $g(I^{adv}) \neq y$ (untargeted attack) or $g(I^{adv}) = y'$ (targeted attack), where $y'$ is the malicious target output and $y$ is the ground truth.
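To make the notation above concrete, the sketch below shows one possible way to organize a mesh $M = (V, T, F)$ and the renderer interface $R(M; C, L)$ in PyTorch. The Mesh container and the render signature are illustrative assumptions, not the actual NMR API.

```python
# A minimal sketch of the notation in this section, assuming PyTorch tensors.
from dataclasses import dataclass
import torch

@dataclass
class Mesh:
    vertices: torch.Tensor  # V: (num_vertices, 3) positions
    textures: torch.Tensor  # T: (num_faces, 3) one RGB color per face
    faces: torch.Tensor     # F: (num_faces, 3) vertex indices into V

def render(vertices, textures, faces, camera, lights):
    """Placeholder for the differentiable renderer I = R(M; C, L); it should
    return a (3, H, W) image whose gradients flow back to vertices/textures."""
    raise NotImplementedError
```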

Achieving the above goals is non-trivial due to the following challenges. 1) Rendering 3D content to 2D space is complicated: a) the 2D image space is largely reduced because we parameterize 2D images as the product of 3D shape/texture/illumination, and such reduction can affect adversarial behaviors; b) 3D contents (shape/texture/illumination) are entangled together to generate the pixel values in a 2D image, so perturbing one will affect the others; c) the rendering process in general is not differentiable unless we make substantial assumptions such as simple surface and lighting models. 2) The 3D space itself is complicated: a) 3D constraints such as physically possible shape geometry and texture are not directly reflected in 2D (Zeng et al., 2017); b) human perception of 3D objects is based on 3D understanding. Changes of 2D pixel values may not affect 3D features of meshes, but manipulation of 3D meshes directly affects their 3D features, so it is challenging to generate unnoticeable perturbation for 3D meshes.

3.2 Adversarial Optimization Objective

Our objective is to generate a “3D adversarial mesh” by adding subtle perturbation, such that machine learning models make incorrect predictions based on its rendered instance. In the meantime, we hope the adversarial meshes remain perceptually realistic to the human vision system. Specifically, the optimization objective function is as follows:

$\min_{M^{adv}} \; \mathcal{L}_{adv}\big(g(R(M^{adv}; C, L))\big) + \lambda\, \mathcal{L}_{perceptual}(M^{adv}, M)$   (1)

In this equation, $\mathcal{L}_{adv}$ is the adversarial loss to fool the model $g$, such that $g$ will misclassify the rendered image $I^{adv} = R(M^{adv}; C, L)$ into a specified target (i.e. $g(I^{adv}) = y'$). $\mathcal{L}_{perceptual}$ is a loss to keep the 3D adversarial mesh $M^{adv}$ perceptually realistic, and $\lambda$ is a hyper-parameter to balance the two losses. Given the optimization objective, we generate the 3D adversarial mesh by manipulating its shape and texture respectively. We denote this method as meshAdv.

We evaluate meshAdv on two common tasks: image classification and object detection. We further instantiate $\mathcal{L}_{adv}$ and $\mathcal{L}_{perceptual}$ in the next subsections, regarding different tasks and perturbation sources (vertices or texture).
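As a concrete illustration of Equation 1, the sketch below optimizes a vertex displacement with Adam under a targeted classification loss plus a weighted perceptual term. The render, model and flow_smoothness callables, as well as the default lambda, learning rate and iteration count, are illustrative assumptions rather than the exact settings used in the paper.

```python
import torch
import torch.nn.functional as F

def mesh_adv_shape(mesh, camera, lights, model, target, render, flow_smoothness,
                   lam=1e-3, iters=1000, lr=1e-3):
    """Sketch of the meshAdv objective (Eq. 1) for shape-based perturbation."""
    delta = torch.zeros_like(mesh.vertices, requires_grad=True)  # vertex flow
    optimizer = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target])
    for _ in range(iters):
        adv_vertices = mesh.vertices + delta
        image = render(adv_vertices, mesh.textures, mesh.faces, camera, lights)
        logits = model(image.unsqueeze(0))           # g(I_adv)
        loss_adv = F.cross_entropy(logits, target)   # targeted adversarial loss
        loss = loss_adv + lam * flow_smoothness(delta, mesh.faces)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return (mesh.vertices + delta).detach()
```

Texture-based perturbation would follow the same loop with the perturbation applied to mesh.textures instead of the vertices.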

3.2.1 Adversarial Losses

Classification For a classification model $g$, the output is a probability distribution over object categories, given an image of the object as the input. We use the cross entropy loss (De Boer et al., 2005) as the adversarial loss for meshAdv:

$\mathcal{L}_{adv}\big(g(I^{adv}), y'\big) = -\log P_{g}\big(y' \mid I^{adv}\big)$   (2)

Note that the image $I^{adv}$ is the rendered image of $M^{adv}$: $I^{adv} = R(M^{adv}; C, L)$.
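In code, Equation 2 reduces to the negative log-probability of the target class; a minimal sketch, assuming the classifier returns raw logits, is:

```python
import torch.nn.functional as F

def adv_loss_classification(logits, target_class):
    # Eq. 2: -log P_g(y' | I_adv), i.e. cross entropy against the target label.
    log_probs = F.log_softmax(logits, dim=-1)
    return -log_probs[..., target_class].mean()
```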

Object Detection For object detection, the adversary’s goal is to make the victim object disappear from the object detector, which is called a disappearance attack. For instance, we choose the state-of-the-art model Yolo-v3 (Redmon & Farhadi, 2018) as our target model $g$. It divides the input image into grid cells. For each grid cell, Yolo-v3 predicts the locations and label confidence values of bounding boxes. For each bounding box, it generates 5 values (4 for the coordinates and 1 for the objectness score) and a probability distribution over classes. We use the disappearance attack loss (Eykholt et al., 2018) as our adversarial loss:

$\mathcal{L}_{adv} = \max_{i,\, j} \; P\big(g(I^{adv}),\, i,\, j,\, y\big)$   (3)

where $g(I^{adv})$ is the output of Yolo-v3 obtained by feeding the rendered image $I^{adv}$ to $g$, and $P(g(I^{adv}), i, j, y)$ is a function that extracts the probability of the bounding box $j$ in grid cell $i$ being labeled as the victim class $y$, based on the Yolo-v3 output.
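A sketch of Equation 3 is shown below. It assumes the detector output has been flattened to one row per candidate box with layout (4 box coordinates, objectness, per-class scores); the exact tensor layout of a given Yolo-v3 implementation may differ.

```python
import torch

def disappearance_loss(detections, victim_class):
    """Maximum probability of any box being labeled as the victim class;
    minimizing this pushes the victim object below the detection threshold."""
    objectness = torch.sigmoid(detections[:, 4])
    class_score = torch.sigmoid(detections[:, 5 + victim_class])
    return (objectness * class_score).max()
```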

3.2.2 Perceptual Losses

To keep the “3D adversarial mesh” perceptually realistic, we leverage a smoothing loss similar to the total variation loss (Vogel & Oman, 1996) as our perceptual loss:

$\mathcal{L}_{perceptual}(I^{adv}) = \sum_{i,j} \Big( \big(I^{adv}_{i+1,j} - I^{adv}_{i,j}\big)^2 + \big(I^{adv}_{i,j+1} - I^{adv}_{i,j}\big)^2 \Big)$   (4)

where $(i, j)$ are the pixel coordinates of the image $I^{adv}$ rendered from the adversarial mesh $M^{adv}$.
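A minimal sketch of the 2D smoothing term in Equation 4, assuming the rendered image is a (3, H, W) tensor:

```python
def smoothing_loss_2d(image):
    # Total-variation-style smoothness over neighboring pixels (Eq. 4).
    dh = image[:, 1:, :] - image[:, :-1, :]   # vertical differences
    dw = image[:, :, 1:] - image[:, :, :-1]   # horizontal differences
    return (dh ** 2).sum() + (dw ** 2).sum()
```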

We apply this smoothing loss when generating texture based perturbation for the 3D adversarial mesh $M^{adv}$. However, for shape based perturbation, manipulation of vertices may introduce unwanted mesh topology changes, as reported in (Kato et al., 2018). In our task, we do not perform smoothing on the vertices directly, but on the displacement of the vertices from their original positions. Therefore, we extend our 2D smoothing loss to a 3D vertex flow:

$\mathcal{L}_{perceptual}(\Delta V) = \sum_{i} \sum_{q \in \mathcal{N}(i)} \big\| \Delta v_i - \Delta v_q \big\|_2^2$   (5)

where $\Delta v_i$ is the displacement of the vertex $v_i$ from its position in the pristine mesh, and $\mathcal{N}(i)$ denotes the indices of the vertices that are on the same face (neighbors) as $v_i$.
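The sketch below implements Equation 5 by summing squared differences of displacements across the edges of each face; this is the flow_smoothness term used in the earlier optimization sketch. Treating the three edges of every triangle as the neighbor relation is an assumption consistent with the description above.

```python
def flow_smoothness(delta, faces):
    """delta: (num_vertices, 3) displacements; faces: (num_faces, 3) indices."""
    loss = 0.0
    for a, b in [(0, 1), (1, 2), (2, 0)]:       # the three edges of each face
        i, j = faces[:, a], faces[:, b]
        loss = loss + ((delta[i] - delta[j]) ** 2).sum()
    return loss
```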

4 Transferability of 3D Adversarial Meshes

By optimizing the aforementioned adversarial objective end-to-end, we are able to obtain “3D adversarial meshes” $M^{adv}$ and fool machine learning models based on the images rendered by the differentiable renderer $R_d$. In addition, it is particularly interesting to see whether a black-box attack against an unknown renderer is possible, and whether we can attack a renderer $R_d$ with low computational cost and transfer such perturbation to a renderer $R_p$ that requires high computational cost, such as industrial-level renderers. Here we call $R_p$ a photorealistic renderer, because multiple-bounce interreflection, occlusion, high-quality stratified sampling and reconstruction, and complicated illumination models are all present in $R_p$, such that the final image is an accurate reflection of real-world physics as captured by a camera. We analyze two scenarios: transferring to different renderers in known and unknown physical environments.

Transferability towards a Photorealistic Renderer in Known Environment In this scenario, our purpose is to test our 3D adversarial mesh $M^{adv}$ directly under the same rendering configuration (lighting parameters $L$, camera parameters $C$), only replacing the differentiable renderer $R_d$ with the photorealistic renderer $R_p$. In other words, while $R_d(M^{adv}; C, L)$ can fool the network as expected, we would like to see whether $R_p(M^{adv}; C, L)$ can attack successfully.

Transferability towards a Photorealistic Renderer in Unknown Environment In this scenario, we would like to attack a non-differentiable system $R_p$ in a fixed unknown environment, using a differentiable renderer $R_d$. We still have access to the shape $M$ and its mask in the original photorealistic rendering, as well as the network $g$. But the rendering process of $R_p$ is not differentiable, and we hope to employ the differentiable renderer $R_d$ to generate the adversarial perturbation and transfer the adversarial behavior to $R_p$ such that $R_p(M^{adv})$ will fool $g$. To achieve this, we propose the following attacking pipeline: 1) Estimate the camera parameters $C$ by minimizing the difference between the two object masks, where $R_d$ renders the silhouette of the object as the object mask; 2) Estimate the lighting parameters $L$ using the estimated camera parameters by minimizing the difference of the two rendered images in the masked region (note that $R_d$ does not require the same lighting model as $R_p$, since we can still use simple lighting models in $R_d$ to approximate the complicated lighting of $R_p$); 3) Generate the adversarial mesh $M^{adv}$ using our adversarial and perceptual losses with the estimated lighting and camera parameters. Here we add randomness to the lighting parameters, camera parameters, and the object position to improve robustness against those variations, since we do not have an exact estimation; 4) Test our adversarial mesh in the photorealistic renderer: $R_p(M^{adv})$.
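The first two steps of this pipeline are plain gradient-based fitting problems. A sketch, under the assumption that the differentiable renderer exposes a silhouette mode and an image mode (render_silhouette and render_image below are placeholders), is:

```python
import torch

def estimate_camera(render_silhouette, target_mask, init_cam, iters=500, lr=1e-2):
    """Step 1: fit camera parameters C so the rendered silhouette matches the
    object mask extracted from the photorealistic rendering."""
    cam = init_cam.clone().requires_grad_(True)
    opt = torch.optim.Adam([cam], lr=lr)
    for _ in range(iters):
        loss = ((render_silhouette(cam) - target_mask) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return cam.detach()

def estimate_lighting(render_image, target_image, mask, cam, init_light,
                      iters=500, lr=1e-2):
    """Step 2: fit simple directional + ambient lighting L by matching the
    photorealistic image inside the object mask, with the camera fixed."""
    light = init_light.clone().requires_grad_(True)
    opt = torch.optim.Adam([light], lr=lr)
    for _ in range(iters):
        diff = (render_image(cam, light) - target_image) * mask
        loss = (diff ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return light.detach()
```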

5 Experimental Results

In this section, we first show the attack effectiveness of the “3D adversarial meshes" generated by meshAdv against classifiers under various settings. We then visualize the perturbation flow of vertices to better understand the vulnerable regions of those 3D objects. In addition, we show examples of applying meshAdv to object detectors in physically realistic scenes. Finally, we evaluate the transferability of 3D adversarial meshes from the differentiable renderer to a photorealistic non-differentiable renderer under both known and unknown rendering environments.

5.1 Experimental Setup

In our experiments, we choose DenseNet (Huang et al., 2017) and Inception-v3 (Szegedy et al., 2016) trained on ImageNet (Deng et al., 2009) as our victim models for classification, and Yolo-v3 trained on COCO (Lin et al., 2014) for object detection. We preprocess CAD models in PASCAL3D+ (Xiang et al., 2014) with uniform mesh resampling in MeshLab (Cignoni et al., 2008) to increase the triangle density, and use the processed CAD models as the 3D meshes to attack. Since these 3D objects have constant texture values, for texture perturbation we also start from a constant pristine texture. For the differentiable renderer, we use the off-the-shelf PyTorch implementation (Paszke et al., 2017; Kolotouros, 2018) of the Neural Mesh Renderer (NMR) (Kato et al., 2018) to generate “adversarial meshes”. We create PASCAL3D+ renderings for classification as our evaluation dataset. The details of their creation are given in the appendix. We generate a total of 72 samples with 7 different classes of 3D objects. We refer to these data as PASCAL3D+ renderings in the rest of this paper. For optimizing the objective, we use Adam (Kingma & Ba, 2014) as our solver. In addition, we select the hyperparameter $\lambda$ in Equation 1 using binary search, with 5 rounds of search and 1000 iterations for each round.
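As an illustration of the lambda selection, the sketch below performs the 5-round binary search on a log scale; the assumption that we keep the largest lambda for which the attack still succeeds (i.e. the smoothest successful perturbation), as well as the search bounds, are our own illustrative choices.

```python
def search_lambda(run_attack, lam_lo=1e-4, lam_hi=1.0, rounds=5):
    """run_attack(lam) is assumed to run 1000 optimization iterations and
    return True if the targeted attack succeeds for that lambda."""
    best = None
    for _ in range(rounds):
        lam = (lam_lo * lam_hi) ** 0.5    # geometric midpoint
        if run_attack(lam):
            best, lam_lo = lam, lam       # success: try a larger smoothness weight
        else:
            lam_hi = lam                  # failure: reduce the smoothness weight
    return best
```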

5.2 MeshAdv on Classification

[Table 1 layout: rows for perturbation type (Shape, Texture) × model (DenseNet, Inception-v3); columns for Test Accuracy and (distance, prob) under the Best, Average and Worst Case; the numeric values are not recoverable here.]

Table 1: Accuracy and attack success probability against different models on pristine data (Test Accuracy) and adversarial examples generated by meshAdv with PASCAL3D+ renderings. We show the average distance (distance) and the attack success probability (prob) under different settings.

In this section, we evaluate the quantitative and qualitative performance of meshAdv against classifiers. For each sample in our PASCAL3D+ renderings, we perform targeted attacks into each of the other 6 categories. Next, for each perturbation type (shape and texture) and each model (DenseNet and Inception-v3), we split the results into three different cases, similar to Carlini & Wagner (2017): Best Case means we attack samples within one class into the other classes and report on the target class that is easiest to attack. Average Case means we do the same but report the performance over all of the target classes. Similarly, Worst Case means that we report on the target class that is hardest to attack. Table 1 shows the attack success probabilities of 3D adversarial meshes, and their evaluation metrics for shape and texture based perturbation. For shape based perturbation, we use the smoothing loss from Equation 5 as the evaluation metric. For texture based perturbation, we compute the root-mean-square distance of the texture values over the faces of the mesh: $d = \sqrt{\tfrac{1}{N_f}\sum_{i=1}^{N_f} \| t^{adv}_i - t_i \|_2^2}$, where $t_i$ is the texture color of the $i$-th of the mesh's $N_f$ faces. This is because the smoothing loss for texture is computed on the 2D image, and we need the metric to be in 3D, i.e. independent of the rendered images. The results show that meshAdv can achieve almost 100% attack success rate for both adversarial perturbation types.
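For reference, a small sketch of the per-face root-mean-square texture metric described above (one plausible reading of the formula, with textures stored as (num_faces, 3) tensors):

```python
import torch

def texture_rms_distance(adv_textures, textures):
    # sqrt of the mean squared per-face color difference between the
    # adversarial and pristine textures.
    return ((adv_textures - textures) ** 2).sum(dim=1).mean().sqrt()
```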

[Figure 2 panels: (a) perturbation on shape; (b) perturbation on texture; columns correspond to the target classes aeroplane, bicycle, boat, bottle, chair, diningtable, sofa.]
Figure 2: Benign images (diagonal) and corresponding adversarial examples generated by meshAdv on PASCAL3D+ renderings tested on Inception-v3. Adversarial target classes are shown at the top. We show perturbation on (a) shape and (b) texture.

Figure 2 shows the generated “3D adversarial meshes” against Inception-v3 after manipulating the vertices and the texture respectively. The diagonal shows the images rendered with the pristine meshes. The target class of each adversarial mesh is shown at the top; please see the appendix for more results against DenseNet. Note that for each class, we randomly select one sample to show in the image, i.e. these images are not manually curated. It is worth noting that the perturbation on object shape or texture generated by meshAdv is barely noticeable to humans, while being able to mislead classifiers. To help assess the vertex displacement, we discuss the flow visualization and the human perceptual study in the following subsections.

[Figure 3 panels: (a) rendered view; (b) canonical view; columns correspond to the target classes aeroplane, bicycle, boat, bottle, chair, diningtable, sofa.]
(c) Flow visualization of a 3D adversarial mesh targeting “bicycle”
Figure 3: Heatmap visualization of adversarial vertex flows on Inception-v3.

Visualizing Vertex Manipulation In order to better understand the vulnerable regions of 3D objects, in Figure 3 we convert the magnitude of the vertex manipulation flow to a heatmap visualization. The heatmaps in the figure correspond to the samples in Figure 2(a). We adopt two viewpoints in this figure: the rendered view (a), which is the same as the one used for rendering the images; and the canonical view (b), which is obtained by fixing the camera parameters for all shapes. We observe that the regions that are close to the camera and have large curvature, such as edges, are more vulnerable. We find this reasonable, since vertex displacement in those regions brings significant change to the normals, thus affecting the shading from the light sources and causing the screen pixel values to change drastically.

Since the heatmap only shows the magnitude of the vertex displacement, we would also like to observe the adversarial flow directions. Figure 3(c) shows a close-up 3D quiver plot of the vertex flow in the vertical stabilizer region of an aeroplane. In this example, the perturbed aeroplane mesh is classified as “bicycle” in its rendering. From this figure, we observe that the flows of adjacent vertices tend to go in similar directions, which illustrates the effect of our smoothing loss operating on the vertex flow.
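A small sketch of how such a heatmap can be produced from the vertex displacements (the colormap choice and normalization are illustrative):

```python
import numpy as np
import matplotlib

def flow_magnitude_colors(vertices, adv_vertices, cmap="jet"):
    """Map per-vertex displacement magnitude to RGBA colors for visualization;
    vertices and adv_vertices are (N, 3) numpy arrays."""
    mag = np.linalg.norm(adv_vertices - vertices, axis=1)
    mag = (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)  # normalize to [0, 1]
    return matplotlib.colormaps[cmap](mag)                     # (N, 4) RGBA per vertex
```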

Human Perceptual Study The detailed description of our human study settings is given in the appendix. In total, we collected 3820 annotations from 49 participants. of the trials were classified correctly, which indicates that our adversarial perturbation is unnoticeable to humans, as they can still tell the ground truth class of these adversarial meshes.

5.3 MeshAdv on Object Detection

For object detection, we use Yolo-v3 as our target model. In this experiment, we evaluate meshAdv in two scenarios (indoor/outdoor) to demonstrate that it can attack the object detector successfully.

Indoor Scene The indoor scene is a synthetic scene. We compose the scene manually with a desk and a chair to simulate an indoor environment, and place in the scene a single directional light with low ambient light. We then put the Stanford Bunny mesh (Turk & Levoy, 1994) onto the desk, and show that by manipulating either the shape or the texture of the adversarial mesh, we can achieve the goal of removing the table detection or removing all detections, while keeping the perturbation unnoticeable, as is shown in Figure 4.

(a) Benign
(b) Table | Shape
(d) Table | Texture
(e) All | Texture
Figure 4: 3D adversarial meshes generated by meshAdv in a synthetic indoor scene. (a) represents the benign rendered image and (b)-(e) represent the rendered images from “adversarial meshes” by manipulating the shape or texture. We use the format “adversarial target | perturbation type” to denote the victim object aiming to hide and the type of perturbation respectively.

Outdoor Scene For the outdoor scene, we take a different approach: given a real photo of the outdoor scene, we first estimate the parameters of a sky lighting model (Hosek & Wilkie, 2012), using the API provided by Hold-Geoffroy et al., and then estimate a directional light and the ambient light. With this light estimation, our adversarial mesh will be realistically rendered and put into the real scene. In the real scene, we select the dog and the bicycle as our target objects. Different from adversarial goals in the synthetic indoor scene, we aim to remove the target objects respectively to increase the diversity of adversarial targets. By placing the rendered bunny in the scene, we successfully fool the network with barely noticeable perturbation using meshAdv, and the results are shown in Figure 5.

(a) $M$ | ground truth
(b) $M^{adv}$ | Dog
(c) $M$ | ground truth
(d) $M^{adv}$ | Bicycle
Figure 5: 3D adversarial meshes generated by meshAdv in an outdoor simulated scene. (a) and (c) show images rendered with pristine meshes as control experiments, while (b) and (d) contain “adversarial meshes” obtained by manipulating the shape. We use the format “$M$ / $M^{adv}$ | target” to denote the benign/adversarial 3D meshes and the target to hide from the detector, respectively.

5.4 Transferability to a Photorealistic Renderer in Known Environment

Considering the fact that it is not always possible to access the renderer of real-world systems, here we evaluate whether we can still generate effective 3D adversarial meshes given black-box renderers. First, we directly render the “adversarial meshes” generated in Section 5.2 using a photorealistic renderer called Mitsuba (Jakob, 2010), with the same lighting and camera parameters. We then evaluate the transferability based on the targeted/untargeted attack success rates of the adversarial meshes under the black-box renderer. The transferability of untargeted attacks is shown in Table 2. (Additional results, the confusion matrices of targeted attacks, are shown in the appendix.) We observe that untargeted attacks achieve high transferability, while that of targeted attacks is low except for some categories.

[Table 2 layout: rows for DenseNet and Inception-v3; columns for the target classes aeroplane, bicycle, boat, bottle, chair, diningtable, sofa, and their average; the numeric success rates are not recoverable here.]

Table 2: Untargeted attack success rate against Mitsuba by transferring adversarial meshes generated by attacking a differentiable renderer targeting different classes.

5.5 Transferability to a Photorealistic Renderer in Unknown Environment

For transferability in an unknown environment, we compose two tasks: adversarial meshes against a classifier and against an object detector, respectively. Following the estimation pipeline proposed in Section 4, we first optimize the camera parameters using the Adam optimizer (Kingma & Ba, 2014), and then estimate the lighting using 5 directional lights and an ambient light. We then manipulate the shape in the scene until the image rendered by NMR successfully performs a targeted attack on the classifier or the object detector with high confidence. During this process, we add small random perturbations to the camera parameters $C$, the lighting parameters $L$ and the 3D object position, such that our generated “adversarial meshes” will be more robust under uncertainties. After successfully generating $M^{adv}$, we re-render the original scene with $M^{adv}$, and test the rendered image on the model $g$.
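The random perturbation mentioned above can be as simple as Gaussian jitter applied at every optimization iteration; a sketch follows (the 5% scale is an illustrative choice, not the value used in the paper):

```python
import torch

def jitter(cam, light, position, scale=0.05):
    """Add small random noise to camera, lighting and object position so the
    adversarial mesh remains effective under estimation errors."""
    noisy = lambda x: x + scale * torch.randn_like(x)
    return noisy(cam), noisy(light), noisy(position)
```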

Transferability for Classification We place an aeroplane object from PASCAL3D+ in an outdoor scene with sky light, and render it using Mitsuba. As shown in Figure 6, we successfully attack the classification system into outputting the target “hammerhead” by replacing the pristine mesh with our “adversarial mesh”. Note that even though we do not have a very accurate lighting estimate, we still achieve transferability by adding perturbation to the lighting parameters.

Figure 6: Transferability of 3D adversarial meshes against classifiers in an unknown environment. We estimate the camera viewpoint and lighting parameters using the differentiable renderer NMR, and apply the generated adversarial mesh to the non-differentiable renderer Mitsuba to evaluate attack transferability. The “airliner" is misclassified as the target class “hammerhead" after being rendered by Mitsuba.

Transferability for Object Detection For object detection, we modified a scene from Bitterli (2016), and placed the Stanford Bunny object into the scene. The adversarial goal here is to remove the leftmost chair of the image rendered by Mitsuba via attacking the differentiable renderer NMR. Without an accurate lighting estimate, Figure 7 shows that the “adversarial meshes” can successfully remove the target (the leftmost chair) from the detector in this black-box setting.

(a) Benign
(b) $M$ | NMR
(c) $M$ | Mitsuba
(d) $M^{adv}$ | NMR
(e) $M^{adv}$ | Mitsuba
Figure 7: Transferability of 3D adversarial meshes against object detectors in an unknown environment. (b) and (c) are control experiments. $M^{adv}$ is generated using NMR (d), targeting to hide the leftmost chair, and the adversarial mesh is then tested on Mitsuba (e). We use the format “$M$ / $M^{adv}$ | renderer" to denote whether the added object is adversarially optimized and the renderer we aim to attack with transferability, respectively.

6 Conclusion

In this paper, we proposed meshAdv to generate “3D adversarial meshes” by manipulating the shape or the texture of a mesh. These “3D adversarial meshes” can be rendered to 2D domains to mislead different machine learning models. We provide in-depth analysis for the vulnerable regions of 3D meshes based on the visualization of the vertex flow, and we also analyze the transferability for “3D adversarial meshes” among different renderers. This provides us a better understanding of adversarial behaviors in practice, and motivates potential defenses.

Acknowledgement We thank Lei Yang, Pin-Yu Chen for their valuable discussions on this work. This work is partially supported by the National Science Foundation under Grant CNS-1422211, CNS-1616575 and IIS-1617767.

References

  • Athalye & Sutskever (2017) Anish Athalye and Ilya Sutskever. Synthesizing robust adversarial examples. arXiv preprint arXiv:1707.07397, 2017.
  • Barron & Malik (2015) Jonathan T Barron and Jitendra Malik. Shape, illumination, and reflectance from shading. TPAMI, 2015.
  • Bitterli (2016) Benedikt Bitterli. Rendering resources, 2016. https://benedikt-bitterli.me/resources/.
  • Carlini & Wagner (2017) Nicholas Carlini and David A. Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy, SP 2017, San Jose, CA, USA, May 22-26, 2017, pp. 39–57, 2017. doi: 10.1109/SP.2017.49. URL https://doi.org/10.1109/SP.2017.49.
  • Chang et al. (2015) Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An Information-Rich 3D Model Repository. Technical Report arXiv:1512.03012 [cs.GR], Stanford University — Princeton University — Toyota Technological Institute at Chicago, 2015.
  • Chen et al. (2015) Wenzheng Chen, Huan Wang, Yangyan Li, Hao Su, Zhenhua Wang, Changhe Tu, Dani Lischinski, Daniel Cohen-Or, and Baoquan Chen. Synthesizing training images for boosting human 3d pose estimation. In 3D Vision (3DV), 2015.
  • Cignoni et al. (2008) Paolo Cignoni, Marco Callieri, Massimiliano Corsini, Matteo Dellepiane, Fabio Ganovelli, and Guido Ranzuglia. Meshlab: an open-source mesh processing tool. In Eurographics Italian chapter conference, volume 2008, pp. 129–136, 2008.
  • Collobert & Weston (2008) Ronan Collobert and Jason Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, pp. 160–167. ACM, 2008.
  • De Boer et al. (2005) Pieter-Tjerk De Boer, Dirk P Kroese, Shie Mannor, and Reuven Y Rubinstein. A tutorial on the cross-entropy method. Annals of operations research, 134(1):19–67, 2005.
  • Deng et al. (2009) J. Deng, W. Dong, R. Socher, L. Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255, June 2009. doi: 10.1109/CVPR.2009.5206848.
  • Deng et al. (2013) Li Deng, Jinyu Li, Jui-Ting Huang, Kaisheng Yao, Dong Yu, Frank Seide, Michael L Seltzer, Geoffrey Zweig, Xiaodong He, Jason D Williams, et al. Recent advances in deep learning for speech research at microsoft. In ICASSP, volume 26, pp.  64, 2013.
  • Evtimov et al. (2017) Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li, Atul Prakash, Amir Rahmati, and Dawn Song. Robust physical-world attacks on deep learning models. arXiv preprint arXiv:1707.08945, 1, 2017.
  • Eykholt et al. (2018) Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian Tramer, Atul Prakash, Tadayoshi Kohno, and Dawn Song. Physical adversarial examples for object detectors. arXiv preprint arXiv:1807.07769, 2018.
  • Genova et al. (2018) Kyle Genova, Forrester Cole, Aaron Maschinot, Aaron Sarna, Daniel Vlasic, and William T. Freeman. Unsupervised training for 3d morphable model regression. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
  • Goodfellow et al. (2014) Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
  • Handa et al. (2016) Ankur Handa, Viorica Patraucean, Vijay Badrinarayanan, Simon Stent, and Roberto Cipolla. Understanding realworld indoor scenes with synthetic data. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4077–4085, 2016.
  • He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.
  • Hold-Geoffroy et al. (2017) Yannick Hold-Geoffroy, Kalyan Sunkavalli, Sunil Hadap, Emiliano Gambaretto, and Jean-François Lalonde. Deep outdoor illumination estimation. In IEEE International Conference on Computer Vision and Pattern Recognition, 2017.
  • Hosek & Wilkie (2012) Lukas Hosek and Alexander Wilkie. An analytic model for full spectral sky-dome radiance. ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH 2012), 31(4), July 2012. To appear.
  • Huang et al. (2017) Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In CVPR, volume 1, pp.  3, 2017.
  • Immel et al. (1986) David S. Immel, Michael F. Cohen, and Donald P. Greenberg. A radiosity method for non-diffuse environments. In Proceedings of the 13th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’86, pp. 133–142, New York, NY, USA, 1986. ACM. ISBN 0-89791-196-2. doi: 10.1145/15922.15901. URL http://doi.acm.org/10.1145/15922.15901.
  • Jakob (2010) Wenzel Jakob. Mitsuba renderer, 2010. http://www.mitsuba-renderer.org.
  • Kajiya (1986) James T. Kajiya. The rendering equation. In Proceedings of the 13th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’86, pp. 143–150, New York, NY, USA, 1986. ACM. ISBN 0-89791-196-2. doi: 10.1145/15922.15902. URL http://doi.acm.org/10.1145/15922.15902.
  • Kato et al. (2018) Hiroharu Kato, Yoshitaka Ushiku, and Tatsuya Harada. Neural 3d mesh renderer. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • Kingma & Ba (2014) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • Kolotouros (2018) Nikos Kolotouros. Pytorch implementation of the neural mesh renderer. https://github.com/daniilidis-group/neural_renderer, 2018. Accessed: 2018-09-10.
  • Kurakin et al. (2016) Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016.
  • Lin et al. (2014) Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pp. 740–755. Springer, 2014.
  • Liu et al. (2018) Hsueh-Ti Derek Liu, Michael Tao, Chun-Liang Li, Derek Nowrouzezahrai, and Alec Jacobson. Adversarial geometry and lighting using a differentiable renderer. CoRR, abs/1808.02651, 2018.
  • Loper & Black (2014) Matthew M. Loper and Michael J. Black. Opendr: An approximate differentiable renderer. In Computer Vision – ECCV 2014, pp. 154–169, Cham, 2014. Springer International Publishing. ISBN 978-3-319-10584-0.
  • Massa et al. (2016) Francisco Massa, Bryan Russell, and Mathieu Aubry. Deep exemplar 2d-3d detection by adapting from real to rendered views. In Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • McCormac et al. (2017) John McCormac, Ankur Handa, Stefan Leutenegger, and Andrew J. Davison. Scenenet rgb-d: Can 5m synthetic images beat generic imagenet pre-training on indoor segmentation? In The IEEE International Conference on Computer Vision (ICCV), Oct 2017.
  • Moosavi-Dezfooli et al. (2016) Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574–2582, 2016.
  • Mordvintsev et al. (2018) Alexander Mordvintsev, Nicola Pezzotti, Ludwig Schubert, and Chris Olah. Differentiable image parameterizations. Distill, 2018. https://distill.pub/2018/differentiable-parameterizations.
  • Nguyen-Phuoc et al. (2018) Thu Nguyen-Phuoc, Chuan Li, Stephen Balaban, and Yong-Liang Yang. Rendernet: A deep convolutional network for differentiable rendering from 3d shapes. In Advances in Neural Information Processing Systems. 2018.
  • Papernot et al. (2016) Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. In Security and Privacy (EuroS&P), 2016 IEEE European Symposium on, pp. 372–387. IEEE, 2016.
  • Paszke et al. (2017) Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017.
  • Ramamoorthi & Hanrahan (2001) Ravi Ramamoorthi and Pat Hanrahan. An efficient representation for irradiance environment maps. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’01, pp. 497–500. ACM, 2001. ISBN 1-58113-374-X. doi: 10.1145/383259.383317.
  • Redmon & Farhadi (2018) Joseph Redmon and Ali Farhadi. Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767, 2018.
  • Richter et al. (2016) Stephan R. Richter, Vibhav Vineet, Stefan Roth, and Vladlen Koltun. Playing for data: Ground truth from computer games. In Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling (eds.), European Conference on Computer Vision (ECCV), volume 9906 of LNCS, pp. 102–118. Springer International Publishing, 2016.
  • Silver et al. (2016) David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. nature, 529(7587):484, 2016.
  • Song et al. (2017) Shuran Song, Fisher Yu, Andy Zeng, Angel X Chang, Manolis Savva, and Thomas Funkhouser. Semantic scene completion from a single depth image. IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • Su et al. (2015) Hao Su, Charles R. Qi, Yangyan Li, and Leonidas J. Guibas. Render for cnn: Viewpoint estimation in images using cnns trained with rendered 3d model views. In The IEEE International Conference on Computer Vision (ICCV), December 2015.
  • Szegedy et al. (2013) Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
  • Szegedy et al. (2016) Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818–2826, 2016.
  • Turk & Levoy (1994) Greg Turk and Marc Levoy. Zippered polygon meshes from range images. In Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’94, pp. 311–318, New York, NY, USA, 1994. ACM. ISBN 0-89791-667-0. doi: 10.1145/192161.192241. URL http://doi.acm.org/10.1145/192161.192241.
  • Varol et al. (2017) Gül Varol, Javier Romero, Xavier Martin, Naureen Mahmood, Michael J. Black, Ivan Laptev, and Cordelia Schmid. Learning from synthetic humans. In CVPR, 2017.
  • Vogel & Oman (1996) Curtis R Vogel and Mary E Oman. Iterative methods for total variation denoising. SIAM Journal on Scientific Computing, 17(1):227–238, 1996.
  • Xiang et al. (2014) Yu Xiang, Roozbeh Mottaghi, and Silvio Savarese. Beyond pascal: A benchmark for 3d object detection in the wild. In Applications of Computer Vision (WACV), 2014 IEEE Winter Conference on, pp. 75–82. IEEE, 2014.
  • Xiao et al. (2018a) Chaowei Xiao, Ruizhi Deng, Bo Li, Fisher Yu, Dawn Song, et al. Characterizing adversarial examples based on spatial consistency information for semantic segmentation. In Proceedings of the (ECCV), pp. 217–234, 2018a.
  • Xiao et al. (2018b) Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, and Dawn Song. Generating adversarial examples with adversarial networks. arXiv preprint arXiv:1801.02610, 2018b.
  • Xiao et al. (2018c) Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, and Dawn Song. Spatially transformed adversarial examples. arXiv preprint arXiv:1801.02612, 2018c.
  • Yang & Deng (2018) Dawei Yang and Jia Deng. Shape from shading through shape evolution. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
  • Z. Wu (2015) Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3d shapenets: A deep representation for volumetric shapes. In Computer Vision and Pattern Recognition, 2015.
  • Zeng et al. (2017) Xiaohui Zeng, Chenxi Liu, Weichao Qiu, Lingxi Xie, Yu-Wing Tai, Chi Keung Tang, and Alan L Yuille. Adversarial attacks beyond the image space. arXiv preprint arXiv:1711.07183, 2017.
  • Zhang et al. (2017) Yinda Zhang, Shuran Song, Ersin Yumer, Manolis Savva, Joon-Young Lee, Hailin Jin, and Thomas Funkhouser. Physically-based rendering for indoor scene understanding using convolutional neural networks. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

Appendix

Creation of PASCAL3D+ Renderings

We describe the creation of our PASCAL3D+ renderings for classification. First, we use NMR to generate synthetic renderings of the objects in PASCAL3D+ under different settings such as viewpoints and lighting (intensity and direction). Then, we create a table mapping the object classes in PASCAL3D+ to the corresponding classes in ImageNet. Next, we feed the synthetic renderings to DenseNet and Inception-v3 and filter out the samples that are misclassified by either network, which means both models have 100% prediction accuracy on our PASCAL3D+ renderings.
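A sketch of this filtering step, assuming the renderings are image tensors and each model returns class logits; by construction every retained sample is classified correctly by all models:

```python
import torch

def filter_renderings(renderings, labels, models):
    """Keep only renderings that every classifier predicts correctly."""
    kept = []
    with torch.no_grad():
        for image, label in zip(renderings, labels):
            preds = [m(image.unsqueeze(0)).argmax(dim=-1).item() for m in models]
            if all(p == label for p in preds):
                kept.append((image, label))
    return kept
```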

Differentiable Rendering Formulation

A physically-based renderer computes a 2D image $I = R(M; C, L)$ from a 3D mesh $M$ with camera parameters $C$ and illumination parameters $L$ by approximating real-world physics, e.g. the rendering equation (Kajiya, 1986; Immel et al., 1986). A differentiable renderer makes such computation differentiable w.r.t. the input by making assumptions on the lighting models and surface reflectance, and by simplifying the ray-casting process. Following common practice, we use 3D triangular meshes for shape representation, Lambertian surfaces for surface modeling, directional lighting with a uniform ambient term for illumination, and ignore interreflection and occlusion. Here, we further explain the details regarding the 3D object $M$, illumination $L$ and camera parameters $C$ used in the differentiable renderer.

For a 3D object in 3D triangular mesh representation, let $V = \{v_i\}_{i=1}^{N_v}$ with $v_i \in \mathbb{R}^3$ be the set of its vertices in 3D space, and $F = \{f_i\}_{i=1}^{N_f}$ be the indices of its faces. Traditionally, textures are represented by mapping to 2D images with a mesh surface parameterization. For simplicity, here we attach to each triangular face $f_i$ a single color as its texture: $T = \{t_i\}_{i=1}^{N_f}$ with $t_i \in \mathbb{R}^3$.

For illumination, we use $K$ directional light sources plus an ambient light. The lighting directions are denoted $d_1, \dots, d_K$, where each $d_k$ is a unit vector. Similarly, the lighting colors are denoted $l_1, \dots, l_K$ for the directional light sources and $l_a$ for the ambient light, with all colors in RGB color space.

We put the object at the origin, and set up our perspective camera following common practice: the camera viewpoint is described by a quadruple $(\rho, \theta_{az}, \theta_{el}, \theta_{tilt})$, where $\rho$ is the distance of the camera to the origin, and $\theta_{az}$, $\theta_{el}$, $\theta_{tilt}$ are the azimuth, elevation and tilt angles respectively. Note that here we assume the camera intrinsics are fixed and we only need gradients for the extrinsic parameters $C = (\rho, \theta_{az}, \theta_{el}, \theta_{tilt})$.

Given the above description, the 2D image produced by the differentiable renderer can be derived as follows:

$I = R(M; C, L) = \mathrm{rasterize}\big(\mathrm{color}\big(T, \mathrm{shading}(\mathrm{normal}(V, F), L)\big),\; V, F, C\big)$   (6)

normal computes the normal direction for each triangular face in the mesh, by computing the cross product of the vectors along two edges of the face:

$n_i = \dfrac{(v_{i_2} - v_{i_1}) \times (v_{i_3} - v_{i_1})}{\big\| (v_{i_2} - v_{i_1}) \times (v_{i_3} - v_{i_1}) \big\|_2}, \quad \text{for face } f_i = (i_1, i_2, i_3)$   (7)

shading computes the shading intensity on the face given the face normal direction and lighting parameters:

$s_i = l_a + \sum_{k=1}^{K} \max\big(0, \langle n_i, d_k \rangle\big)\, l_k$   (8)

Given the texture $t_i$ for each face $f_i$, we compute the color $c_i$ of each face by elementwise multiplication:

$c_i = t_i \odot s_i$   (9)

rasterize projects the computed face colors in 3D space onto the 2D camera plane by raycasting and depth testing. We also clamp the colors to be within the range $[0, 1]$.
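The per-face quantities in Equations 7-9 translate directly into a few tensor operations; a sketch follows (batching and the rasterization step itself are omitted, and the clamping range follows the text above):

```python
import torch

def face_normals(vertices, faces):
    # Eq. 7: unit normal of each triangle from the cross product of two edges.
    v0, v1, v2 = vertices[faces[:, 0]], vertices[faces[:, 1]], vertices[faces[:, 2]]
    n = torch.cross(v1 - v0, v2 - v0, dim=1)
    return n / (n.norm(dim=1, keepdim=True) + 1e-12)

def face_shading(normals, light_dirs, light_colors, ambient):
    # Eq. 8: Lambertian shading from K directional lights plus an ambient term.
    cos = torch.clamp(normals @ light_dirs.t(), min=0.0)  # (num_faces, K)
    return ambient + cos @ light_colors                    # (num_faces, 3)

def face_colors(textures, shading):
    # Eq. 9: elementwise product of per-face texture and shading, clamped to [0, 1].
    return torch.clamp(textures * shading, 0.0, 1.0)
```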

Human Perceptual Study

We conduct a user study on Amazon Mechanical Turk (AMT) in order to quantify the realism of the adversarial meshes generated by meshAdv. We uploaded the adversarial images on which DenseNet and Inception-v3 misclassify the object. Participants were asked to classify each adversarial image into one of two classes (the ground-truth class and the target class). The order of the two classes was randomized, and the adversarial image appeared for 2 seconds in the middle of the screen on each trial. After it disappeared, the participant had unlimited time to select the more plausible class according to their perception. Each participant could conduct at most 50 trials, and each adversarial image was shown to 5 different participants.

Additional Figures

Figure 8 shows the generated “3D adversarial meshes” against DenseNet, similar to Figure 2. Figure 9 lists confusion matrices of targeted attack for evaluating transferability to Mitsuba under known environment.

[Figure 8 panels: (a) perturbation on shape; (b) perturbation on texture; columns correspond to the target classes aeroplane, bicycle, boat, bottle, chair, diningtable, sofa.]
Figure 8: Benign images and the corresponding adversarial examples generated by meshAdv on PASCAL3D+ shapes on DenseNet. (a) presents the “adversarial meshes” by manipulating shape while (b) by manipulating texture.

(a) DenseNet

(b) Inception-v3

Figure 9: Targeted-attack transferability to Mitsuba. Each cell (row, column) represents the attack success rate of the “adversarial meshes” with ground truth label given by the row and adversarial target given by the column, evaluated on images rendered by Mitsuba.