Image-to-image translation for cross-domain disentanglement


Abel Gonzalez-Garcia
Computer Vision Center
agonzalez@cvc.uab.es

Joost van de Weijer
Computer Vision Center
Universitat Autònoma de Barcelona

Yoshua Bengio
Montreal Institute for Learning Algorithms
Université de Montréal
Abstract

Deep image translation methods have recently shown excellent results, outputting high-quality images that cover multiple modes of the data distribution. There has also been increased interest in disentangling the internal representations learned by deep methods to further improve their performance and achieve finer control. In this paper, we bridge these two objectives and introduce the concept of cross-domain disentanglement. We aim to separate the internal representation into three parts. The shared part contains information that is common to both domains. The exclusive parts, on the other hand, contain only factors of variation that are particular to each domain. We achieve this through bidirectional image translation based on Generative Adversarial Networks and cross-domain autoencoders, a novel network component. The resulting model offers multiple advantages. We can output diverse samples covering multiple modes of the distributions of both domains. We can perform cross-domain retrieval without the need for labeled data. Finally, we can perform domain-specific image transfer and interpolation. We compare our model to the state-of-the-art in multi-modal image translation and achieve better results.

 


Preprint. Work in progress.

1 Introduction

Deep learning has greatly improved the quality of image-to-image translation methods. These methods aim to learn a mapping that transforms images from one domain to another. Examples include colorization, where the aim is to map a grayscale image to a plausible colored image of the same scene [1, 2], and semantic segmentation, where an RGB image is translated to a map indicating the semantic class of each pixel [3, 4]. A general-purpose image-to-image translation method is proposed by Isola et al. [5]. Their method is successfully applied to a wide range of problems when paired data is available. The theory is further extended to unpaired data by introducing a cycle consistency loss [6]. The U-Net [7] architecture is commonly used for image-to-image translation. This network can be interpreted as an encoder-decoder network: the encoder extracts the relevant information from the input domain and passes it on to the decoder, which then transforms this information to the output domain. In spite of the current popularity of these models, the learned representation (the output of the encoder) has not been studied. Here we investigate and impose structure on the representation learned in image-to-image translation models.

Disentangling accidental scene events, such as illumination, shadows, viewpoint and object orientation, from the intrinsic scene properties has been a long desired goal of computer vision [8, 9]. When applied to deep learning, this allows deep models to be aware of isolated factors of variation affecting the represented entities [10, 11]. Therefore, models can marginalize out information along a particular factor of variation, should it not be relevant for the task at hand. Such a process can be especially beneficial for tasks that are hindered when particular factors are present, for example, varying illumination conditions in object recognition. Moreover, disentangled representations grant more precise control for those tasks that perform actions based on the representation.

In this paper, we combine the disentanglement objective with image-to-image translation and introduce the concept of cross-domain disentanglement. The aim is to disentangle the domain-specific factors from the factors which are shared across the domains. To do so, we partition the representation into three parts: the shared part, containing information that is common to both domains, and two exclusive parts, which only represent those factors of variation that are particular to each domain (see example in figure 1).

Cross-domain disentanglement for image-to-image translation has several advantages that allow for applications that would otherwise not be feasible:

  • Sample diversity: we can generate a distribution of images conditioned on the input image, whereas most image-to-image architectures can only generate deterministic results [5, 6]. Our approach is similar to the recent work of Zhu et al. [12]; however, we explicitly model variations in both domains, whereas they only consider variations in the output domain.

  • Cross-domain retrieval: we can retrieve similar images in both domains based on the part of the representation that is shared between the domains. Contrary to [13], we do not require labeled data to learn the shared representation.

  • Domain-specific image transfer: domain-specific features can be transferred between images.

  • Domain-specific image interpolation: interpolation between two images with respect to the domain-specific features.

Our model is based on bidirectional image translation across domains, using a pair of Generative Adversarial Networks (GANs) [14]. We enforce a successful disentangled structure in the learned representation through an adequate combination of multiple losses, a novel use of a recent type of network layer [15], and a new network component called the cross-domain autoencoder. We demonstrate the disentanglement properties of our method on variations of the MNIST dataset [16], and apply it to bidirectional multi-modal image translation on more complex datasets [17, 18], achieving better results than state-of-the-art methods [5, 12] due to the finer control and generality granted by our disentangled representation.

Figure 1: (Left) Example of a pair of domains, containing images with colored digits on black background or white digits on colored background. (Right) Disentangled representation, separated into shared part across domains (digit) and domain-exclusive parts (color in the background or in the digit).

2 Cross-domain disentanglement networks

The goal of our method is to learn deep structured representations that are clearly separated in three parts. Let $\mathcal{X}$ and $\mathcal{Y}$ be two image domains (e.g. fig. 1) and let $R$ be an image representation in either domain. We split $R$ into sub-representations depending on whether the information contained in that part belongs exclusively to domain $\mathcal{X}$ ($E^x$), to domain $\mathcal{Y}$ ($E^y$), or is shared between both domains ($S^x$/$S^y$). Figure 1 depicts an example of this representation for images of digits with colors in different areas (digit or background). In this case, the shared part of the representation is the actual digit without color information, i.e. “the image contains a 5”. The exclusive parts are the color information in the different parts of the image, e.g. “the digit is yellow” or “the background is purple”.

Figure 2: Overview of our model. (Left) Image translation blocks $G_{\mathcal{X}\to\mathcal{Y}}$ and $G_{\mathcal{Y}\to\mathcal{X}}$, based on an encoder-decoder architecture. We enforce representation disentanglement through the combination of several losses and a GRL. (Right) Cross-domain autoencoders help align the latent space and impose further disentanglement constraints.

Figure 2 presents an overview of our model, which can be separated into image translation modules (left) and cross-domain autoencoders (right). The translation modules $G_{\mathcal{X}\to\mathcal{Y}}$ and $G_{\mathcal{Y}\to\mathcal{X}}$ translate images from domain $\mathcal{X}$ to domain $\mathcal{Y}$, and from $\mathcal{Y}$ to $\mathcal{X}$, respectively. They follow an encoder-decoder architecture. The encoders process the input image through a series of convolutional layers and output a latent representation $R$. Traditionally in these architectures (e.g. [5, 6, 12]), the decoder takes the full representation $R$ and generates an image in the corresponding output domain. In our model, however, the latent representation is split into shared and exclusive parts, i.e. $R^x = (S^x, E^x)$, and only the shared part of the representation is used for translation. The decoders combine the shared part with random noise that accounts for the missing exclusive part, which is unknown for the other domain at test time. This enables the generation of multiple plausible translations given an input image. The other component of the model, the cross-domain autoencoders, is a new type of module that helps align the latent distributions and enforces representation disentanglement. The following sections describe all the components of the model and detail how we achieve the necessary constraints on the learned representation. For simplicity, we focus on input domain $\mathcal{X}$; the model for $\mathcal{Y}$ is analogous.

2.1 Image translation modules

Generative Adversarial Networks (GANs) [14] are a popular framework consisting of two networks that compete against each other. The generator tries to synthesize realistic images to fool a discriminator, whose task is to detect whether images come from the generator or from the real data distribution. When the generated images are conditioned on an input image, the task becomes image translation. Our image translation modules are inspired by the successful architecture used in pix2pix [5], based on convolutional GANs [19]. The generator encoder consists of several convolutional layers of stride 2, followed by batch normalization [20] and leaky ReLU activations. The decoder uses fractionally strided convolutions to upsample the internal representation back to the image resolution. We adapt this architecture for the disentanglement problem with the following modifications.
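As an illustration of this backbone, the sketch below outlines an encoder with parallel heads for the shared and exclusive parts of the representation (see also Appendix A). It is a minimal PyTorch sketch, not the exact implementation; the channel counts and the pooling choice in the exclusive head are assumptions.

```python
import torch
import torch.nn as nn

def down_block(in_ch, out_ch):
    # 4x4 convolution with stride 2, batch normalization and leaky ReLU
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

class Encoder(nn.Module):
    """Maps an image to a (shared, exclusive) pair of features."""
    def __init__(self, exclusive_dim=8):
        super().__init__()
        self.backbone = nn.Sequential(
            down_block(3, 64), down_block(64, 128),
            down_block(128, 256), down_block(256, 512),
        )
        # parallel last layers: convolutional for the shared part,
        # fully connected (here after pooling) for the exclusive vector
        self.shared_head = down_block(512, 512)
        self.exclusive_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, exclusive_dim))

    def forward(self, x):
        h = self.backbone(x)
        return self.shared_head(h), self.exclusive_head(h)
```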

Exclusive representation.   The exclusive representation $E^x$ of an image must not contain information about domain $\mathcal{Y}$. Therefore, it should not be possible to use only $E^x$ to generate an image in $\mathcal{Y}$. To enforce this desirable behavior, we try to generate images from $E^x$ but also actively guide the feature learning to prevent this from happening. For this, we propose a novel application of the Gradient Reversal Layer (GRL), originally introduced in [15] to learn domain-agnostic features. During the forward pass of the network, this layer acts as the identity function. On the backward pass, however, the GRL reverses the gradients flowing back from the corresponding branch. Inspired by this idea, our model includes a small decoder in each image translation module (in the $\mathcal{X}$ case, one that tries to generate images in $\mathcal{Y}$ with $E^x$ as input). We add a GRL at the beginning of this exclusive decoder, immediately after $E^x$ (orange dashed line in fig. 2). The GRL inverts the sign of the gradient that is backpropagated to the encoder, affecting only those units involved in the generation of the exclusive features $E^x$. We train the exclusive decoder with an adversarial loss on the generated images. In theory, this approach will force $E^x$ not to contain information that might generate images in the $\mathcal{Y}$ domain.
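A gradient reversal layer can be implemented with a custom autograd function; below is a minimal PyTorch sketch (the scaling factor follows the GRL weighting parameter of 1 given in Appendix A).

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lambd on backward."""
    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # reverse (and optionally scale) the gradient flowing back to the encoder
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# usage: pass E_x through grad_reverse(E_x) before the exclusive decoder
```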

Shared representation.   The shared parts $S^x$ and $S^y$ of the representations of a pair of corresponding images should contain similar information and be invariant to the domain. Some domain adaptation approaches [15, 21, 22] have successfully used a GRL to create domain-invariant features. However, we have found that this approach can quickly become unstable when the loss starts diverging. A possible solution consists in bounding the loss by the performance of random chance [23]. In our case, and due to the fact that our images are paired, we can attain the desired invariance simply by adding an L1 loss on these features, which forces them to be indistinguishable for both domains:

$$\mathcal{L}_{S} = \mathbb{E}_{(x,y)}\big[\,\|S^x - S^y\|_1\big]. \qquad (1)$$

Adding noise in the representation.   A drawback of loss (1) on the shared representation is that it encourages the model to shrink the magnitude of the shared features, which reduces the loss but does not increase the similarity between $S^x$ and $S^y$. We have found that adding small Gaussian noise to the output of the encoder, as in [24], prevents this from happening and leads to better results (we also investigated explicitly constraining the shared representation, but found this to be less stable).
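A possible implementation of the alignment loss of Eq. (1) together with the noise injection is sketched below; the noise standard deviation is a placeholder, as its value is not specified in the text.

```python
import torch
import torch.nn.functional as F

def shared_alignment_loss(s_x, s_y):
    # Eq. (1): L1 distance between the shared features of a paired (x, y)
    return F.l1_loss(s_x, s_y)

def add_noise(shared, std=0.1, training=True):
    # small Gaussian noise on the encoder output discourages shrinking the
    # shared features just to reduce the L1 loss (std is an assumed value)
    if training:
        shared = shared + std * torch.randn_like(shared)
    return shared
```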

Architectural bottleneck.   Most image translation approaches [5, 6, 12, 25, 26] are devised for pairs of domains that retain the spatial structure (e.g. grayscale to color images), and thus a great amount of information is shared between input and output. In the usual encoder-decoder architecture, all this information passes through a bottleneck, here called the latent representation, that connects the two components. To prevent the loss of details at higher resolutions, it is common to use skip connections (e.g. U-Net [7, 5, 6, 12, 25, 26]). When disentangling the latent representation, however, skip connections pose a problem. The higher-resolution features operated on by the encoder contain both shared and exclusive information, but the decoder must receive only the shared part of the representation from the encoder. Therefore, instead of using skip connections, we reduce the architectural bottleneck by increasing the size of the latent representation; in fact, we only increase the spatial dimensions of the shared part of the representation. We found that in the considered domains, the exclusive part can be successfully modeled by a low-dimensional vector, which is later tiled and concatenated with the shared part before decoding. We implement the different sizes of the latent representation by parallel last layers in the encoder, convolutional for the shared part and fully connected for the exclusive part.
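A decoder consuming this split representation could look as follows: the exclusive vector (or, at translation time, a noise vector) is tiled over the spatial grid of the shared feature map and concatenated along channels before upsampling. This is a minimal sketch with illustrative layer sizes, not the exact architecture.

```python
import torch
import torch.nn as nn

def up_block(in_ch, out_ch):
    # fractionally strided (transposed) convolution, batch norm, ReLU
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class Decoder(nn.Module):
    def __init__(self, shared_ch=512, exclusive_dim=8):
        super().__init__()
        self.up = nn.Sequential(
            up_block(shared_ch + exclusive_dim, 256),
            up_block(256, 128), up_block(128, 64),
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, shared, exclusive):
        # tile the exclusive/noise vector over HxW and concatenate with the shared map
        b, _, h, w = shared.shape
        tiled = exclusive.view(b, -1, 1, 1).expand(b, exclusive.size(1), h, w)
        return self.up(torch.cat([shared, tiled], dim=1))
```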

Reconstructing the latent space.   The input of the translation decoders is the shared representation $S^x$ and random input noise $z$ that takes the role of the exclusive part of the representation. Concretely, we use an 8-dimensional noise vector sampled from $\mathcal{N}(0, I)$. The exclusive representation $E^x$ must be approximately distributed like the input noise, as both take the same place in the input of the decoder (see sec. 2.2). To achieve this, we add a discriminator that tries to distinguish between the exclusive representation $E^x$ and the input noise $z$, and train it with the original GAN loss [14]. This pushes the distribution of $E^x$ towards $\mathcal{N}(0, I)$ and makes the input of the decoder consistent.

Commonly, adversarial image translation approaches [5, 6, 12] attempt to achieve some stochasticity by adding random noise to the input or the internal features. However, in many cases this noise is mostly ignored and the generated outputs are uni-modal [5, 6]. We follow the ideas explored in [27, 28, 12] to avoid this and reconstruct the latent representation from the generated image by feeding it back to the encoder. The reconstructed representation should match the decoder input, so we add an L1 loss between the original shared part $S^x$ and its reconstruction $\hat{S}$, as well as between the input noise $z$ and the reconstructed exclusive part $\hat{E}$:

$$\mathcal{L}_{recon} = \mathbb{E}\big[\,\|S^x - \hat{S}\|_1\big] + \mathbb{E}\big[\,\|z - \hat{E}\|_1\big]. \qquad (2)$$
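The sketch below illustrates both constraints: a small discriminator on the exclusive features (trained here with a non-saturating GAN loss) and the latent reconstruction of Eq. (2). The discriminator architecture and module names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# small discriminator that tries to tell exclusive features from N(0, I) noise
feature_disc = nn.Sequential(
    nn.Linear(8, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1), nn.Sigmoid(),
)

def exclusive_noise_gan_losses(e_x, z):
    # push the distribution of E^x towards the noise prior
    d_real, d_fake = feature_disc(z), feature_disc(e_x.detach())
    d_loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    d_gen = feature_disc(e_x)
    g_loss = F.binary_cross_entropy(d_gen, torch.ones_like(d_gen))
    return d_loss, g_loss

def latent_reconstruction_loss(s_x, z, encoder_y, fake_y):
    # Eq. (2): re-encode the generated image and compare with the decoder inputs
    s_hat, e_hat = encoder_y(fake_y)
    return F.l1_loss(s_hat, s_x) + F.l1_loss(e_hat, z)
```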

WGAN-GP loss.   Since the original formulation [14], more advanced GAN losses have appeared. For example, the use of the Wasserstein-1 distance in WGAN [29] has been shown to provide desirable convergence properties and to correlate well with the perceptual quality of the generated images. We adopt the more stable Gradient Penalty variant [30] (WGAN-GP) for our model. Let $D$ be a convolutional discriminator with a single scalar output. Following [5], we condition $D$ on the corresponding paired image in the input domain, which is concatenated to the real or generated image (omitted from the following notation). Our discriminator and generator losses are then defined as

$$\mathcal{L}_{D} = \mathbb{E}_{\tilde{y}\sim\mathbb{P}_g}\big[D(\tilde{y})\big] - \mathbb{E}_{y\sim\mathbb{P}_r}\big[D(y)\big] + \lambda\,\mathbb{E}_{\hat{y}\sim\mathbb{P}_{\hat{y}}}\big[(\|\nabla_{\hat{y}} D(\hat{y})\|_2 - 1)^2\big], \qquad (3)$$

$$\mathcal{L}_{G} = -\mathbb{E}_{\tilde{y}\sim\mathbb{P}_g}\big[D(\tilde{y})\big], \qquad (4)$$

where $\mathbb{P}_r$ and $\mathbb{P}_g$ are the real and generated data distributions, and $\mathbb{P}_{\hat{y}}$ is the distribution obtained by randomly interpolating between real images $y$ and generated images $\tilde{y}$ [30]. Contrarily to [5], we do not include a reconstruction term in the generator loss. Our outputs should cover multiple modes of the output distribution and thus they do not necessarily match the paired image in the other domain.
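A sketch of these losses, following the standard WGAN-GP recipe of [30] (the gradient penalty weight of 10 is the value recommended there; the conditioning concatenation is omitted, as in the notation above):

```python
import torch

def gradient_penalty(disc, real, fake, gp_weight=10.0):
    """WGAN-GP penalty: (||grad D(x_hat)||_2 - 1)^2 on random interpolates."""
    b = real.size(0)
    eps = torch.rand(b, 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    d_hat = disc(x_hat)
    grads = torch.autograd.grad(outputs=d_hat, inputs=x_hat,
                                grad_outputs=torch.ones_like(d_hat),
                                create_graph=True, retain_graph=True)[0]
    grad_norm = grads.view(b, -1).norm(2, dim=1)
    return gp_weight * ((grad_norm - 1) ** 2).mean()

def wgan_losses(disc, real, fake):
    # Eq. (3)/(4): critic loss with gradient penalty, and generator loss
    d_loss = disc(fake.detach()).mean() - disc(real).mean() \
             + gradient_penalty(disc, real, fake.detach())
    g_loss = -disc(fake).mean()
    return d_loss, g_loss
```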

2.2 Cross-domain autoencoders

The image translation modules impose three main constraints: (1) the shared part of the representation must be identical for both domains, (2) the exclusive part only has information about its own domain, and (3) the generated output must belong to the other domain. However, there is no force that aligns the generated output with the corresponding input image to show the same concept (e.g. same number) but in different domains. In fact, the generated images need not correspond to the input if the encoders learn to map different concepts to the same shared latent representation. In order to achieve consistency across domains, we introduce the idea of cross-domain autoencoders (fig. 2, right).

A classic autoencoder would take the full representation $(S^x, E^x)$ encoded for input image $x$ and feed it to the decoder that outputs images in $\mathcal{X}$, with the goal of reconstructing $x$. Since the shared representations in our model must be indistinguishable, we can use the shared representation $S^y$ from the other domain instead of $S^x$. This provides an extra incentive for the encoders to place useful information about domain $\mathcal{X}$ in the shared part, as $S^y$ does not contain any domain-exclusive information. Our cross-domain autoencoders use the combination $(S^y, E^x)$ to generate the reconstructed input $\hat{x}$. We train them with the standard L1 reconstruction loss

$$\mathcal{L}_{auto}^{\mathcal{X}} = \mathbb{E}\big[\,\|x - \hat{x}\|_1\big]. \qquad (5)$$
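A sketch of this cross-domain reconstruction, assuming encoders and decoders with the interfaces of the earlier sketches (names are illustrative):

```python
import torch.nn.functional as F

def cross_domain_autoencoder_loss(x, y, enc_x, enc_y, dec_x):
    # Eq. (5): reconstruct x from the *other* domain's shared part S^y
    # combined with x's own exclusive part E^x
    s_x, e_x = enc_x(x)
    s_y, _ = enc_y(y)
    x_rec = dec_x(s_y, e_x)
    return F.l1_loss(x_rec, x)
```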

2.3 Bi-directional image translation

Given the multi-modal nature of our system in both domains, our architecture is unified to perform image translation in the two directions simultaneously. This is paramount for learning how to disentangle which parts of the representation can be shared across domains and which are exclusive to each. We train our model jointly in an end-to-end manner, minimizing the following total loss

$$\mathcal{L} = \mathcal{L}_{WGAN}^{\mathcal{X}\to\mathcal{Y}} + \mathcal{L}_{WGAN}^{\mathcal{Y}\to\mathcal{X}} + \mathcal{L}_{GAN}^{ex} + \mathcal{L}_{GAN}^{z} + \lambda_S\,\mathcal{L}_{S} + \lambda_{recon}\,\mathcal{L}_{recon} + \lambda_{auto}\,\mathcal{L}_{auto}, \qquad (6)$$

where the GAN losses contain both the generator and discriminator loss, and each term aggregates the corresponding losses of both translation directions.
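A sketch of how these terms could be combined into a single objective; the weights are placeholders except for the exclusive-decoder weight of 0.1 mentioned in Appendix A, and the dictionary keys are illustrative names for the losses defined above.

```python
def total_loss(losses, w_s=1.0, w_recon=1.0, w_auto=1.0, w_ex=0.1):
    """losses: dict holding the individual terms for both translation directions."""
    return (losses["wgan_x2y"] + losses["wgan_y2x"]             # Eqs. (3)-(4)
            + w_ex * (losses["gan_ex_x"] + losses["gan_ex_y"])  # exclusive (GRL) decoders
            + losses["gan_z_x"] + losses["gan_z_y"]             # E vs. noise discriminators
            + w_s * losses["shared_l1"]                         # Eq. (1)
            + w_recon * (losses["recon_x"] + losses["recon_y"]) # Eq. (2)
            + w_auto * (losses["auto_x"] + losses["auto_y"]))   # Eq. (5)
```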

3 Related work

Disentangling deep representations.   A desirable property of learned representations is the ability to disentangle the factors of variation [10]. For this reason, there has been substantial interest in learning disentangled representations [31, 32], including some works based on generative models [33, 34, 11]. One of the earliest architectures for learning disentangled representations using deep learning was applied to the task of emotion recognition [35]. The work of [11] combines a Variational Autoencoder (VAE) with a GAN to disentangle representations depending on what is specified (i.e. labeled in the dataset) and the remaining, unspecified factors of variation. In a similar intra-domain spirit, InfoGAN [36] optimizes a lower bound on the mutual information between the representation and the images, successfully controlling some factors of variation in the considered images. Reed et al. [33] propose learning each factor of variation of the image manifold as its own sub-manifold using a higher-order Boltzmann machine. Alternatively, the analogy-making approach of [17] attempts to disentangle the factors of variation by using representation arithmetic. Finally, some domain adaptation approaches [15, 21, 22] aim at obtaining invariant features for the classification task, granting some level of disentanglement but depending on class labels. Even though representation disentangling has been widely studied, we are unaware of any work studying true cross-domain representation disentangling, which is the focus of this paper.

Image translation.   Lately, image generation using adversarial training methods has attracted a great amount of attention [14, 29]. We consider the image translation task, in which the generative process is conditioned on an input image [2, 5, 12]. While some approaches had previously applied adversarial losses to specific image translation tasks such as style transfer [37] or colorization [2], the approach of Isola et al. [5], called pix2pix, was the first GAN-based image translation approach that was not tailored to a specific application. Despite the excellent results of these models, they are limited by the lack of variation of their generated outputs, which are virtually deterministic, as the input noise is mostly ignored. As a consequence, they can only provide a one-to-one mapping across image domains, a phenomenon named mode collapse [38].

In order to reduce this limitation, Zhu et al. [12] extended the pix2pix framework. They minimize the reconstruction error of the latent code using a reverse decoder that takes the generated output as input, forcing the generator to take the input noise into account. Furthermore, they combine a conditional GAN with a conditional VAE, whose goal is to provide a plausible latent vector given a target image. The resulting model, called BicycleGAN, effectively achieves one-to-many image translations. However, there are several differences with our method. Theirs is restricted to a single translation direction, whereas our method operates in a many-to-many setting. Moreover, our representation grants finer control over the stochastic factors of the generated images, as we also model variations on the input side, allowing us to keep selected properties fixed. Finally, the obtained disentangled image features are useful for additional tasks beyond image translation, such as cross-domain retrieval or visual analogies.

Figure 3: (a) Samples generated by our model using random noise as exclusive representation, where $\mathcal{X}$ = MNIST-CD and $\mathcal{Y}$ = MNIST-CB. (b) Visual analogies created by combining the shared and exclusive parts of the representation for different samples.

4 Experiments

4.1 Representation disentangling on MNIST variations

We evaluate the properties of our representation by following the protocol introduced in [11] to measure the disentanglement of a representation. However, [11] operates within one domain only, whereas we learn cross-domain representations. For this reason, we extend MNIST [16], the handwritten-digit dataset used in [11], with variations that correspond to two different domains. In our variations, we either colorize the digit (MNIST-CD) or the background (MNIST-CB) with a randomly chosen color (fig. 1). We use the standard splits for train (50K images) and test (10K images). We detail the architectures and hyperparameters used for all experiments in appendix A.

Sample diversity.   Figure 3a shows samples generated by our model in both domains using random input noise. We can observe how our model successfully generates diverse samples for different noise values, varying the color where appropriate but maintaining the digit information. Note, however, that the model has no knowledge of the digit in the image, as labels are not provided; it effectively learns what information is shared across both domains. This demonstrates that we achieve many-to-many image translation through proper manipulation of the disentangled latent representation.

Domain-specific image transfer.   We use visual analogies to evaluate our domain-specific transferring capabilities. The task of visual analogy generation consists in applying a particular property of a given reference image to a query image [17]. For example, in MNIST-CD we could change the color of the digit in one image to another digit's color (fig. 3b). Our disentangled representation grants us precise control over the image generation process, facilitating the visual analogy task, as it can be seen as applying domain-specific properties (encoded in the exclusive part of the representation) from one image to another. We can generate visual analogies with our model as follows. Let us consider two input images from the same domain and their disentangled representations $(S_1, E_1)$ and $(S_2, E_2)$, respectively. We use the first as query and the second as reference. We can generate the desired visual analogy by simply combining the shared part of the query with the exclusive part of the reference, $(S_1, E_2)$, and running it through the corresponding decoder, as sketched below. Fig. 3b illustrates this process and shows some qualitative results. We can confirm that the query images acquire the corresponding properties of the reference images, as the output images have the correct digit and color. Note how our model has not been explicitly trained to achieve this behavior; it is a natural consequence of a correctly disentangled representation.
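A minimal sketch of this analogy generation, assuming an encoder and decoder with the interfaces used in the earlier sketches:

```python
import torch

@torch.no_grad()
def visual_analogy(query_img, reference_img, encoder, decoder):
    # keep the shared (digit) content of the query, borrow the exclusive
    # (color/style) part of the reference, and decode back into the same domain
    s_query, _ = encoder(query_img)
    _, e_ref = encoder(reference_img)
    return decoder(s_query, e_ref)
```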

Domain-specific image interpolation.   Beyond transferring domain-specific properties between images, our representation allows us to interpolate between two images along domain-specific properties. To do this, we simply keep one part of the representation fixed while we interpolate between two samples in the other part. Finally, we combine each interpolated value with the fixed part and run it through the decoder to generate the image. Figure 4a shows results for interpolations on the exclusive and shared parts for various random samples of both domains. When interpolating on the exclusive part, we generate samples along domain-specific factors of variation, i.e. color, and maintain the digit, which is the shared information. Analogously, when we interpolate on the shared part, the domain-specific properties stay stable while one digit smoothly transforms into the other.
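A sketch of interpolation along the exclusive part while keeping the shared part fixed (interpolating the shared part instead is symmetric); module names follow the earlier sketches.

```python
import torch

@torch.no_grad()
def exclusive_interpolation(img_a, img_b, encoder, decoder, steps=8):
    s_a, e_a = encoder(img_a)
    _, e_b = encoder(img_b)
    outputs = []
    for t in torch.linspace(0.0, 1.0, steps):
        e_t = (1 - t) * e_a + t * e_b      # interpolate domain-specific factors
        outputs.append(decoder(s_a, e_t))  # keep the shared content of img_a fixed
    return torch.cat(outputs, dim=0)
```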

Cross-domain and domain-agnostic retrieval.   Given an image query and a database of images, the goal of retrieval is to select those images that are similar to the query, either semantically or visually. Domains are generally arranged so that the shared information is semantic whereas the exclusive information is stylistic. In this case, our disentangled representation enables both semantic retrieval using the shared part and visual retrieval using the exclusive part. In the usual cross-domain retrieval scenario [13, 39, 40], the query and the image database belong to different domains. To test our model in this setting, we compute the Euclidean distance between shared features of images in different domains, and compare it with a simple baseline using distances on image pixels. Table 1a presents the results in terms of Recall@1, a commonly used retrieval metric [13, 39, 40]. The high values obtained by the shared features demonstrate that our disentangled representation provides an effective way of performing cross-domain retrieval, clearly superior to directly using the images. Moreover, we do not need image labels, as opposed to specialized approaches such as [13].
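A sketch of this evaluation on shared features (Recall@1 with Euclidean distances over flattened features; the flattening step is an assumption):

```python
import torch

@torch.no_grad()
def recall_at_1(query_feats, db_feats):
    """query_feats, db_feats: paired tensors where row i on each side is a true match.
    Returns the fraction of queries whose nearest database item is the true match."""
    q = query_feats.flatten(1)
    db = db_feats.flatten(1)
    dists = torch.cdist(q, db)                  # pairwise Euclidean distances
    nearest = dists.argmin(dim=1)
    targets = torch.arange(q.size(0), device=q.device)
    return (nearest == targets).float().mean().item()
```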

Figure 4: (a) Interpolation between samples on either part of the representation. (b) Qualitative results for domain-agnostic retrieval. We show a random image query and the 10 nearest neighbors in the union of both domains, using Euclidean distance on image pixels, the shared representation, and the exclusive representation.
Method     | Cross-domain    | Domain-agnostic
           |  CD      CB     |  CD      CB
-----------|-----------------|----------------
pixels     | 30.45   40.02   | 90.85   66.66
shared     | 99.99   99.95   | 99.99   99.93
exclusive  |  9.93    9.80   | 11.22   19.32
(a)

Method     |      Cars       |     Chairs
-----------|-----------------|----------------
pix2pix    | 0.094   0.073   | 0.095   0.085
BicycleGAN | 0.106   0.066   | 0.086   0.094
Ours       | 0.083   0.047   | 0.083   0.095
(b)
Table 1: (a) Retrieval results in terms of Recall@1 on MNIST-CD and MNIST-CB for the cross-domain and domain-agnostic setups. (b) LPIPS metric on samples generated for cars [17] and chairs [18].

Domain-agnostic retrieval tries to retrieve images from both domains without prioritizing images of the query domain. This rarely studied setting might be useful for reducing damaging biases (e.g. race, gender) in some retrieval scenarios. Our disentangled representation is ideal for this task, as the shared part does not contain domain-exclusive information. Figure 4b shows examples of random queries in one domain and the 10 nearest neighbors from both domains, using distances on image pixels, shared, and exclusive features. Shared features retrieve images from both domains (46%), whereas the other two clearly prioritize images from the query domain (almost 100%). Table 1a confirms this, as the performance for shared features in this setting is about the same as in the cross-domain setup, whereas it increases considerably for image pixels (which mostly retrieve from the query domain). Moreover, exclusive features have little information about the digit, retrieving images that are visually similar regardless of the digit, in contrast to the image pixel baseline. Therefore, our exclusive features may prove useful for visual retrieval.

Ablation study.   We remove individual network components and observe the effect on the model (table 2). We measure performance as the ability to create visual analogies, which guarantees a minimum amount of disentanglement. We create the ground-truth target analogy (e.g. the digit of the query with the color of the reference) and compute the Euclidean distance to the output of our model. Some components, such as the cross-domain autoencoders, are crucial for this task: when they are removed (No auto.) or replaced by normal autoencoders (Normal auto.), the performance decreases significantly. Our model still manages to create visual analogies without some components, but this negatively affects other tasks (e.g. diverse sample generation) as well as the training stability.

Dataset   | Full  | No auto. | Normal auto. | No GRL | No noise | No
----------|-------|----------|--------------|--------|----------|------
MNIST-CB  | 13.04 | 35.08    | 39.96        | 11.35  | 12.95    | 16.62
MNIST-CD  | 10.18 | 14.99    | 16.76        | 11.51  | 11.93    | 12.39
Table 2: Ablation study based on performance of visual analogies, measured as the distance to the ground truth.

4.2 Many-to-many image translation

In this section, we demonstrate the performance of our method for the task of bi-directional multi-modal image translation. We use pix2pix [5] and the state-of-the-art multi-modal approach BicycleGAN [12] as baselines, combining two independently trained models, one for either direction. Despite the truly remarkable results of these approaches, they are limited by the underlying assumption of spatial correspondence between images across domains, and thus cannot be applied when domains undergo significant structural changes such as viewpoint changes, as confirmed experimentally. Our method, on the other hand, removes this assumption as it does not rely on additional side information to generate its samples, only on the learned latent representation. To provide a fair comparison, we remove the skip connections in [5, 12] and increase the latent space to the same size used in our architecture. We measure performance quantitatively with the Learned Perceptual Image Patch Similarity (LPIPS) metric of [41], which is based on differences between network features and correlates very well with human judgments. We use the official implementation and default settings provided by the authors [41].

Figure 5: (a) Generated samples for the 3D car dataset [17] by our method, pix2pix [5], and BicycleGAN [12]. (b) Car analogies. (c) Samples when $\mathcal{Y}$ is all viewpoints except frontal. (d) Our generated 3D chairs [18].

3D car models.   We use the 3D car images of [17], corresponding to the 199 CAD models in [42], rendered from 24 equally spaced viewpoints. Let $\mathcal{X}$ be the frontal/rear car views, and $\mathcal{Y}$ the profile views. We set 5 random cars aside for test and train with the remaining 796 images. Fig. 5a shows samples generated by our method and the baselines. The deterministic nature of pix2pix conflates both views into one, making it unable to output realistic cars with a specific viewpoint. BicycleGAN generates better samples, but the quality is still rather poor. Our method generates good-quality samples and covers multiple modes of the output distribution. Moreover, it maintains shared information across domains (e.g. car color), whereas BicycleGAN's samples might not correspond to the input image. We attribute this to our finer control over the latent representation, as we also model image variations in the input domain.

Table 1b presents quantitative results. For each image in the test set (two views per car model and domain), we generate three samples and compute the LPIPS [41] metric between them and both possible ground truths, as domains are bi-modal (e.g. left and right profiles). We then select the minimum distance to either ground truth and average over samples. In both directions, our model outperforms the two baselines, generating samples that are perceptually more similar to the actual examples.
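A sketch of this protocol using the `lpips` package, which is the official implementation of [41]; the preprocessing (inputs scaled to [-1, 1]) and the backbone choice are assumptions.

```python
import torch
import lpips  # pip install lpips; implementation released by the authors of [41]

loss_fn = lpips.LPIPS(net='alex')  # AlexNet backbone, the package's recommended default

@torch.no_grad()
def sample_lpips(samples, gt_a, gt_b):
    """samples: (3, C, H, W) generated samples in [-1, 1];
    gt_a, gt_b: (1, C, H, W) the two possible ground-truth views.
    Returns the average over samples of the minimum distance to either ground truth."""
    dists = []
    for s in samples:
        s = s.unsqueeze(0)
        dists.append(torch.min(loss_fn(s, gt_a), loss_fn(s, gt_b)))
    return torch.stack(dists).mean().item()
```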

Fig. 5b shows visual analogies for cars (created as in fig. 3b). With our exclusive representation, we can apply the orientation of one car to another while maintaining other properties such as style or color. Finally, we show in fig. 5c samples of our model when domain $\mathcal{X}$ is the frontal viewpoint and $\mathcal{Y}$ contains all other viewpoints. Even for this more challenging case, we manage to output samples covering many modes of the output distribution. Moreover, outputs are uni-modal when necessary (last row), as only domain-exclusive information is varied during generation and $\mathcal{X}$ contains only one viewpoint.

3D chair models.   Similarly to the cars dataset, [18] offers rendered images of 1393 CAD chair models from different viewpoints. We arrange $\mathcal{X}$ and $\mathcal{Y}$ as before, and train the model with 5,372 images from 1,343 chairs, leaving 50 chairs for test. Figure 5d shows examples of the chairs generated by our model. In this case, the samples also effectively cover both modes of the target distribution. Quantitatively (table 1b), we achieve the best results in one direction but worse than pix2pix in the other. This could be due to the fact that viewpoint changes in this dataset are less extreme than for cars, as the aspect ratios of frontal and profile chair views are quite similar.

5 Conclusions

We have presented the concept of cross-domain disentanglement and proposed a model to solve it. Our model effectively disentangles the representation into a part shared across domains and two parts exclusive to each domain. We applied this to multiple tasks such as diverse sample generation, cross-domain retrieval, and domain-specific image transfer and interpolation. We also introduced the many-to-many image translation setting and paved the way to overcome some limitations of current approaches through the use of a disentangled representation.

References

  • [1] Iizuka, S., Simo-Serra, E., Ishikawa, H.: Let there be color!: joint end-to-end learning of global and local image priors for automatic image colorization with simultaneous classification. ACM Transactions on Graphics (TOG) 35(4) (2016) 110
  • [2] Zhang, R., Isola, P., Efros, A.A.: Colorful image colorization. In: European Conference on Computer Vision, Springer (2016) 649–666
  • [3] Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. (2015) 3431–3440
  • [4] Eigen, D., Fergus, R.: Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In: Proceedings of the IEEE International Conference on Computer Vision. (2015) 2650–2658
  • [5] Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2017)
  • [6] Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2017) 2223–2232
  • [7] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical image computing and computer-assisted intervention, Springer (2015) 234–241
  • [8] Barrow, H., Tenenbaum, J.: Recovering intrinsic scene characteristics. Comput. Vis. Syst 2 (1978)
  • [9] Tappen, M.F., Freeman, W.T., Adelson, E.H.: Recovering intrinsic images from a single image. In: Advances in neural information processing systems. (2003) 1367–1374
  • [10] Bengio, Y., Courville, A., Vincent, P.: Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence 35(8) (2013) 1798–1828
  • [11] Mathieu, M.F., Zhao, J.J., Zhao, J., Ramesh, A., Sprechmann, P., LeCun, Y.: Disentangling factors of variation in deep representation using adversarial training. In: Advances in Neural Information Processing Systems. (2016) 5040–5048
  • [12] Zhu, J.Y., Zhang, R., Pathak, D., Darrell, T., Efros, A.A., Wang, O., Shechtman, E.: Toward multimodal image-to-image translation. In: Advances in Neural Information Processing Systems. (2017) 465–476
  • [13] Aytar, Y., Castrejon, L., Vondrick, C., Pirsiavash, H., Torralba, A.: Cross-modal scene networks. IEEE transactions on pattern analysis and machine intelligence (2017)
  • [14] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in Neural Information Processing Systems. (2014) 2672–2680
  • [15] Ganin, Y., Lempitsky, V.: Unsupervised domain adaptation by backpropagation. In: International Conference on Machine Learning. (2015) 1180–1189
  • [16] LeCun, Y.: The mnist database of handwritten digits. http://yann.lecun.com/exdb/mnist/ (1998)
  • [17] Reed, S.E., Zhang, Y., Zhang, Y., Lee, H.: Deep visual analogy-making. In: Advances in neural information processing systems. (2015) 1252–1260
  • [18] Aubry, M., Maturana, D., Efros, A.A., Russell, B.C., Sivic, J.: Seeing 3d chairs: exemplar part-based 2d-3d alignment using a large dataset of cad models. In: Proceedings of the IEEE conference on computer vision and pattern recognition. (2014) 3762–3769
  • [19] Radford, A., Metz, L., Chintala, S.: Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 (2015)
  • [20] Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: International conference on machine learning. (2015) 448–456
  • [21] Bousmalis, K., Trigeorgis, G., Silberman, N., Krishnan, D., Erhan, D.: Domain separation networks. In: Advances in Neural Information Processing Systems. (2016) 343–351
  • [22] Bousmalis, K., Silberman, N., Dohan, D., Erhan, D., Krishnan, D.: Unsupervised pixel-level domain adaptation with generative adversarial networks. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Volume 1. (2017)  7
  • [23] Feutry, C., Piantanida, P., Bengio, Y., Duhamel, P.: Learning anonymized representations with adversarial neural networks. arXiv preprint arXiv:1802.09386 (2018)
  • [24] Wang, Y., van de Weijer, J., Herranz, L.: Mix and match networks: encoder-decoder alignment for zero-pair image translation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2018)
  • [25] Badrinarayanan, V., Kendall, A., Cipolla, R.: Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE transactions on pattern analysis and machine intelligence 39(12) (2017) 2481–2495
  • [26] Mathieu, M., Couprie, C., LeCun, Y.: Deep multi-scale video prediction beyond mean square error. In: International Conference on Learning Representations. (2016)
  • [27] Dumoulin, V., Belghazi, I., Poole, B., Mastropietro, O., Lamb, A., Arjovsky, M., Courville, A.: Adversarially learned inference. In: International Conference on Learning Representations. (2017)
  • [28] Donahue, J., Krähenbühl, P., Darrell, T.: Adversarial feature learning. In: International Conference on Learning Representations. (2017)
  • [29] Arjovsky, M., Chintala, S., Bottou, L.: Wasserstein generative adversarial networks. In: International Conference on Machine Learning. (2017) 214–223
  • [30] Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., Courville, A.C.: Improved training of wasserstein gans. In: Advances in Neural Information Processing Systems. (2017) 5769–5779
  • [31] Tenenbaum, J.B., Freeman, W.T.: Separating style and content. In: Advances in neural information processing systems. (1997) 662–668
  • [32] Hinton, G.E., Krizhevsky, A., Wang, S.D.: Transforming auto-encoders. In: International Conference on Artificial Neural Networks, Springer (2011) 44–51
  • [33] Reed, S., Sohn, K., Zhang, Y., Lee, H.: Learning to disentangle factors of variation with manifold interaction. In: International Conference on Machine Learning. (2014) 1431–1439
  • [34] Kingma, D.P., Mohamed, S., Rezende, D.J., Welling, M.: Semi-supervised learning with deep generative models. In: Advances in Neural Information Processing Systems. (2014) 3581–3589
  • [35] Rifai, S., Bengio, Y., Courville, A., Vincent, P., Mirza, M.: Disentangling factors of variation for facial expression recognition. In: European Conference on Computer Vision, Springer (2012) 808–822
  • [36] Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., Abbeel, P.: Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In: Advances in Neural Information Processing Systems. (2016) 2172–2180
  • [37] Li, C., Wand, M.: Precomputed real-time texture synthesis with markovian generative adversarial networks. In: European Conference on Computer Vision, Springer (2016) 702–716
  • [38] Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., Chen, X.: Improved techniques for training gans. In: Advances in Neural Information Processing Systems. (2016) 2234–2242
  • [39] Ji, X., Wang, W., Zhang, M., Yang, Y.: Cross-domain image retrieval with attention modeling. In: Proceedings of the 2017 ACM on Multimedia Conference, ACM (2017) 1654–1662
  • [40] Pang, K., Song, Y.Z., Xiang, T., Hospedales, T.: Cross-domain generative learning for fine-grained sketch-based image retrieval, BMVC (2017)
  • [41] Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep networks as a perceptual metric. In: CVPR. (2018)
  • [42] Fidler, S., Dickinson, S., Urtasun, R.: 3d object detection and viewpoint estimation with a deformable 3d cuboid model. In: Advances in neural information processing systems. (2012) 611–619

Appendix A Network architecture and hyperparameters

In this section, we disclose the network architecture and implementation details used for each experiment. We train our models with the recommended loss weights of pix2pix. We set the weight of the exclusive decoder loss to 0.1; we found experimentally that a lower value in this case achieves a good representation disentanglement without excessively altering the quality of the generated images. The weighting parameter of the Gradient Reversal Layer is fixed to 1. We train with Adam and a fixed learning rate of 0.0002.

Generator encoder - common:   5 convolutional layers of size 4x4 and stride 2 with 64, 128, 256, and 512 filters, LeakyReLU of 0.2, and batch normalization.

Generator encoder - shared:   1 convolutional layer of size 4x4, stride 2, and 512 filters.

Generator encoder - exclusive:   1 fully connected layer with 8 output units.

Generator decoder:   5 convolutional layers of size 4x4 and stride 1/2 with 512, 256, 128, 64, and 3 filters, ReLU, and batch normalization. Dropout with probability 0.5 on the first 3 layers, only at training time.

Exclusive generator decoder:   same architecture as generator decoder, but with a Gradient Reversal Layer at its input.

Discriminator WGAN-GP:   3 convolutional layers of size 4x4 and stride 2 with 64, 128, and 256 channels, LeakyReLU of 0.2, and 1 fully connected layer with 1 output channel and no non-linearity.

Discriminator GAN:   3 convolutional layers of size 4x4 and stride 2 with 64, 128, and 256 channels, LeakyReLU of 0.2, batch normalization, and 1 last convolutional layer with 1 output channel and a sigmoid non-linearity.

Input resolution:    , input image resized when necessary.

Epochs:

  • MNIST-CD/CB: 15

  • 3D cars: 900

  • 3D chairs: 75
