DRIT++: Diverse Image-to-Image Translation via Disentangled Representations

Abstract

Image-to-image translation aims to learn the mapping between two visual domains. There are two main challenges for this task: 1) lack of aligned training pairs and 2) multiple possible outputs from a single input image. In this work, we present an approach based on disentangled representation for generating diverse outputs without paired training images. To synthesize diverse outputs, we propose to embed images onto two spaces: a domain-invariant content space capturing shared information across domains and a domain-specific attribute space. Our model takes the encoded content features extracted from a given input and attribute vectors sampled from the attribute space to synthesize diverse outputs at test time. To handle unpaired training data, we introduce a cross-cycle consistency loss based on disentangled representations. Qualitative results show that our model can generate diverse and realistic images on a wide range of tasks without paired training data. For quantitative evaluations, we measure realism with a user study and the Fréchet inception distance, and measure diversity with the perceptual distance metric, the Jensen-Shannon divergence, and the number of statistically-different bins.

Figure 1: Unpaired diverse image-to-image translation. (a) Our model learns to perform diverse translation between two collections of images without aligned training pairs. (b) Multi-domain image-to-image translation.
(a) CycleGAN Zhu et al. (2017a)
(b) UNIT Liu et al. (2017)
(c) MUNIT Huang et al. (2018), DRIT Lee et al. (2018)
Figure 2: Comparisons of unsupervised I2I translation methods. Denote $x$ and $y$ as images in domains $\mathcal{X}$ and $\mathcal{Y}$: (a) CycleGAN Zhu et al. (2017a) maps $x$ and $y$ onto separate latent spaces. (b) UNIT Liu et al. (2017) assumes $x$ and $y$ can be mapped onto a shared latent space. (c) Our approach disentangles the latent spaces of $\mathcal{X}$ and $\mathcal{Y}$ into a shared content space $\mathcal{C}$ and an attribute space of each domain.

1 Introduction

Image-to-Image (I2I) translation aims to learn the mapping between different visual domains. Numerous vision and graphics problems can be formulated as I2I translation problems, such as colorization Larsson et al. (2016); Zhang et al. (2016) (grayscale → color), super-resolution Lai et al. (2017); Ledig et al. (2017); Li et al. (2016, 2019) (low-resolution → high-resolution), and photorealistic image synthesis Chen and Koltun (2017); Park et al. (2019); Wang et al. (2018) (label → image). In addition, I2I translation can be applied to synthesize images for domain adaptation Bousmalis et al. (2017); Chen et al. (2019); Hoffman et al. (2018); Murez et al. (2018); Shrivastava et al. (2017).

Learning the mapping between two visual domains is challenging for two main reasons. First, aligned training image pairs are either difficult to collect (e.g., day scene ↔ night scene) or do not exist (e.g., artwork ↔ real photo). Second, many such mappings are inherently multimodal: a single input may correspond to multiple possible outputs. To handle multimodal translation, one possible approach is to inject a random noise vector into the generator to model the multimodal data distribution in the target domain. However, mode collapse may still occur easily since the generator often ignores the additional noise vectors.

Several recent efforts have been made to address these issues. The Pix2pix Isola et al. (2017) method applies a conditional generative adversarial network to I2I translation problems. Nevertheless, the training process requires paired data. A number of recent approaches Choi et al. (2018a); Liu et al. (2017); Taigman et al. (2017); Yi et al. (2017); Zhu et al. (2017a) relax the dependency on paired training data for learning I2I translation. These methods, however, generate a single output conditioned on the given input image. As shown in Isola et al. (2017); Zhu et al. (2017b), the strategy of incorporating noise vectors as additional inputs to the generator does not increase variations of the generated outputs due to the mode collapse issue. The generators in these methods are likely to overlook the added noise vectors. Most recently, the BicycleGAN Zhu et al. (2017b) algorithm tackles the problem of generating diverse outputs in I2I translation by encouraging a one-to-one relationship between the output and the latent vector. Nevertheless, the training process of BicycleGAN requires paired images.

In this paper, we propose a disentangled representation framework for learning to generate diverse outputs with unpaired training data. We propose to embed images onto two spaces: 1) a domain-invariant content space and 2) a domain-specific attribute space as shown in Figure 2. Our generator learns to perform I2I translation conditioned on content features and a latent attribute vector. The domain-specific attribute space aims to model variations within a domain given the same content, while the domain-invariant content space captures information across domains. We disentangle the representations by applying a content adversarial loss to encourage the content features not to carry domain-specific cues, and a latent regression loss to encourage the invertible mapping between the latent attribute vectors and the corresponding outputs. To handle unpaired datasets, we propose a cross-cycle consistency loss using the proposed disentangled representations. Given a pair of unaligned images, we first perform a cross-domain mapping to obtain intermediate results by swapping the attribute vectors from both images. We can then reconstruct the original input image pair by applying the cross-domain mapping one more time and use the proposed cross-cycle consistency loss to enforce the consistency between the original and the reconstructed images. Furthermore, we apply the mode seeking regularization Mao et al. (2019) to further improve the diversity of generated images. At test time, we can use either 1) randomly sampled vectors from the attribute space to generate diverse outputs or 2) the transferred attribute vectors extracted from existing images for example-guided translation. Figure 1 shows examples of diverse outputs produced by our model.

We evaluate the proposed model with extensive qualitative and quantitative experiments. For various I2I tasks, we show diverse translation results with randomly sampled attribute vectors and example-guided translation with attribute vectors transferred from existing images. In addition to the common dual-domain image-to-image translation, we extend our proposed framework to the more general multi-domain image-to-image translation and demonstrate diverse translation among domains. We measure the realism of our results with a user study and the Fréchet inception distance (FID) Heusel et al. (2017), and evaluate diversity using the perceptual distance metric Zhang et al. (2018b). However, the diversity metric alone does not effectively measure the similarity between the distribution of generated images and that of real data. Therefore, we also use the Jensen-Shannon divergence (JSD), which measures the similarity between distributions, and the Number of Statistically-Different Bins (NDB) Richardson and Weiss (2018) metric, which compares the relative proportions of samples within clusters predetermined by real data.

We make the following contributions in this work:

1) We introduce a disentangled representation framework for image-to-image translation. We apply a content discriminator to facilitate the factorization of domain-invariant content space and domain-specific attribute space, and a cross-cycle consistency loss that allows us to train the model with unpaired data.

2) Extensive qualitative and quantitative experiments show that our model performs favorably against existing I2I models. Images generated by our model are both diverse and realistic.

3) The proposed disentangled representation and cross-cycle consistency can be applied to multi-domain image-to-image translation for generating diverse images.

2 Related Work

Generative adversarial networks. Recent years have witnessed rapid advances in generative adversarial networks (GANs) Arjovsky et al. (2017); Goodfellow et al. (2014); Radford et al. (2016) for image generation. The core idea of GANs lies in the adversarial loss that enforces the distribution of generated images to match that of the target domain. The generators in GANs can map from noise vectors to realistic images. Several recent efforts exploit conditional GANs in various contexts, including generation conditioned on text Reed et al. (2016), audio Lee et al. (2019), low-resolution images Ledig et al. (2017), human pose Ma et al. (2017); AlBahar and Huang (2019), video frames Vondrick et al. (2016), and images Isola et al. (2017). Our work focuses on using a GAN conditioned on an input image. In contrast to several existing conditional GAN frameworks that require paired training data, our model generates diverse outputs without paired data. As such, our method has wider applicability to problems where paired training datasets are scarce or unavailable.

Image-to-image translation. I2I translation aims to learn the mapping from a source image domain to a target image domain. The Pix2pix Isola et al. (2017) method applies a conditional GAN to model the mapping function. Although high-quality results have been shown, the model training requires paired training data. To train with unpaired data, the CycleGAN Zhu et al. (2017a), DiscoGAN Kim et al. (2017), and UNIT Liu et al. (2017) schemes leverage cycle consistency to regularize the training. However, these methods perform generation conditioned solely on an input image and thus produce one single output. Simply injecting a noise vector to a generator is usually not an effective solution to achieve multimodal generation due to the lack of regularization between the noise vectors and the target domain. On the other hand, the BicycleGAN Zhu et al. (2017b) algorithm enforces the bijection mapping between the latent and target space to tackle the mode collapse problem. Nevertheless, the method is only applicable to problems with paired training data. Unlike existing work, our method enables I2I translation with diverse outputs in the absence of paired training data.

We note several concurrent methods Almahairi et al. (2018); Cao et al. (2018); Huang et al. (2018); Lin et al. (2018a, b); Ma et al. (2018) (all independently developed) also adopt disentangled representations similar to our work for learning diverse I2I translation from unpaired training data. Furthermore, several approaches Choi et al. (2018b); Liu et al. (2018) extend the conventional dual-domain I2I to general multi-domain settings. However, these methods can only achieve one-to-one mapping among domains.

(a) Training with unpaired images
(b) Testing with random attributes
(c) Testing with a given attribute
Figure 3: Method overview. (a) With the proposed content adversarial loss (Section 3.1) and the cross-cycle consistency loss (Section 3.2), we are able to learn the multimodal mapping between the domains $\mathcal{X}$ and $\mathcal{Y}$ with unpaired data. Thanks to the proposed disentangled representation, we can generate output images conditioned on either (b) random attributes or (c) a given attribute at test time.

Disentangled representations. The task of learning disentangled representation aims at modeling the factors of data variations. Previous work makes use of labeled data to factorize representations into class-related and class-independent components Cheung et al. (2015); Kingma et al. (2014); Makhzani et al. (2016); Mathieu et al. (2016). Recently, numerous unsupervised methods have been developed Chen et al. (2016); Denton and Birodkar (2017) to learn disentangled representations. The InfoGAN Chen et al. (2016) algorithm achieves disentanglement by maximizing the mutual information between latent variables and data variation. Similar to DrNet Denton and Birodkar (2017) that separates time-independent and time-varying components with an adversarial loss, we apply a content adversarial loss to disentangle an image into domain-invariant and domain-specific representations to facilitate learning diverse cross-domain mappings.

3 Disentangled Representation for I2I Translation

Our goal is to learn a multimodal mapping between two visual domains $\mathcal{X}$ and $\mathcal{Y}$ without paired training data. As illustrated in Figure 3, our framework consists of content encoders $\{E^c_{\mathcal{X}}, E^c_{\mathcal{Y}}\}$, attribute encoders $\{E^a_{\mathcal{X}}, E^a_{\mathcal{Y}}\}$, generators $\{G_{\mathcal{X}}, G_{\mathcal{Y}}\}$, and domain discriminators $\{D_{\mathcal{X}}, D_{\mathcal{Y}}\}$ for both domains, and a content discriminator $D^c$. Taking domain $\mathcal{X}$ as an example, the content encoder $E^c_{\mathcal{X}}$ maps images onto a shared, domain-invariant content space ($E^c_{\mathcal{X}}: \mathcal{X} \rightarrow \mathcal{C}$) and the attribute encoder $E^a_{\mathcal{X}}$ maps images onto a domain-specific attribute space ($E^a_{\mathcal{X}}: \mathcal{X} \rightarrow \mathcal{A}_{\mathcal{X}}$). The generator $G_{\mathcal{X}}$ synthesizes images conditioned on both content and attribute vectors ($G_{\mathcal{X}}: \{\mathcal{C}, \mathcal{A}_{\mathcal{X}}\} \rightarrow \mathcal{X}$). The discriminator $D_{\mathcal{X}}$ aims to discriminate between real images and translated images in the domain $\mathcal{X}$. In addition, the content discriminator $D^c$ is trained to distinguish the extracted content representations between the two domains. To synthesize multimodal outputs at test time, we regularize the attribute vectors so that they can be drawn from a prior Gaussian distribution $N(0, I)$.
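To make the test-time behavior concrete, the following is a minimal PyTorch sketch (not the released implementation) of diverse sampling. Here `enc_c_x` and `gen_y` are hypothetical handles to a trained content encoder and generator, and the attribute dimension is an illustrative assumption.

```python
import torch

@torch.no_grad()
def sample_diverse_translations(enc_c_x, gen_y, x, num_samples=5, attr_dim=8):
    """Translate one image batch x (domain X) into several images in domain Y."""
    content = enc_c_x(x)                              # domain-invariant content code
    outputs = []
    for _ in range(num_samples):
        z_attr = torch.randn(x.size(0), attr_dim)     # attribute vector drawn from N(0, I)
        outputs.append(gen_y(content, z_attr))        # each sample yields a different output
    return outputs
```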

Figure 4: Additional loss functions. In addition to the cross-cycle reconstruction loss and the content adversarial loss described in Figure 3, we apply several additional loss functions in our training process. The self-reconstruction loss facilitates training with self-reconstruction; the KL loss aims to align the attribute representation with a prior Gaussian distribution; the adversarial loss encourages the generators to produce realistic images in each domain; and the latent regression loss enforces the reconstruction of the latent attribute vector. Finally, the mode seeking regularization further improves the diversity. More details can be found in Section 3.3 and Section 3.4.

3.1 Disentangle Content and Attribute Representations

Our approach embeds input images onto a shared content space $\mathcal{C}$, and domain-specific attribute spaces $\mathcal{A}_{\mathcal{X}}$ and $\mathcal{A}_{\mathcal{Y}}$. Intuitively, the content encoders should encode the common information that is shared between domains onto $\mathcal{C}$, while the attribute encoders should map the remaining domain-specific information onto $\mathcal{A}_{\mathcal{X}}$ and $\mathcal{A}_{\mathcal{Y}}$:

$\{z^c_x, z^a_x\} = \{E^c_{\mathcal{X}}(x), E^a_{\mathcal{X}}(x)\}, \quad \{z^c_y, z^a_y\} = \{E^c_{\mathcal{Y}}(y), E^a_{\mathcal{Y}}(y)\}, \quad z^c_x, z^c_y \in \mathcal{C},\ z^a_x \in \mathcal{A}_{\mathcal{X}},\ z^a_y \in \mathcal{A}_{\mathcal{Y}}$  (1)

To achieve representation disentanglement, we apply two strategies: weight-sharing and a content discriminator. First, similar to Liu et al. (2017), based on the assumption that the two domains share a common latent space, we share the weights between the last layer of $E^c_{\mathcal{X}}$ and $E^c_{\mathcal{Y}}$ and the first layer of $G_{\mathcal{X}}$ and $G_{\mathcal{Y}}$. Through weight sharing, we enforce the content representations to be mapped onto the same space. However, sharing the same high-level mapping functions does not guarantee that the content representations encode the same information for both domains. Thus, we propose a content discriminator $D^c$ which aims to distinguish the domain membership of the encoded content features $z^c_x$ and $z^c_y$. On the other hand, the content encoders learn to produce encoded content representations whose domain membership cannot be distinguished by the content discriminator $D^c$. We express this content adversarial loss as:

$L^{content}_{adv}(E^c_{\mathcal{X}}, E^c_{\mathcal{Y}}, D^c) = \mathbb{E}_{x}\big[\tfrac{1}{2}\log D^c(E^c_{\mathcal{X}}(x)) + \tfrac{1}{2}\log(1 - D^c(E^c_{\mathcal{X}}(x)))\big] + \mathbb{E}_{y}\big[\tfrac{1}{2}\log D^c(E^c_{\mathcal{Y}}(y)) + \tfrac{1}{2}\log(1 - D^c(E^c_{\mathcal{Y}}(y)))\big]$  (2)
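A sketch of how this content adversarial objective can be implemented, assuming a hypothetical content discriminator `d_content` that outputs a single logit per content feature. The binary cross-entropy form with a 0.5 target for the encoders is one common way to realize this adversarial game and is shown for illustration only.

```python
import torch
import torch.nn.functional as F

def content_discriminator_loss(d_content, z_c_x, z_c_y):
    # D^c tries to predict the domain of each content feature: X -> 1, Y -> 0.
    logit_x = d_content(z_c_x.detach())
    logit_y = d_content(z_c_y.detach())
    return (F.binary_cross_entropy_with_logits(logit_x, torch.ones_like(logit_x)) +
            F.binary_cross_entropy_with_logits(logit_y, torch.zeros_like(logit_y)))

def content_encoder_adv_loss(d_content, z_c_x, z_c_y):
    # The encoders are trained so that D^c cannot tell the two domains apart,
    # i.e., both content features are pushed toward the 0.5 decision boundary.
    logit_x, logit_y = d_content(z_c_x), d_content(z_c_y)
    return (F.binary_cross_entropy_with_logits(logit_x, torch.full_like(logit_x, 0.5)) +
            F.binary_cross_entropy_with_logits(logit_y, torch.full_like(logit_y, 0.5)))
```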

3.2 Cross-cycle Consistency Loss

With the disentangled representation where the content space is shared among domains and the attribute space encodes intra-domain variations, we can perform I2I translation by combining a content representation from an arbitrary image and an attribute representation from an image of the target domain. We leverage this property and propose a cross-cycle consistency. In contrast to the cycle consistency constraint in Zhu et al. (2017a) (i.e., $\mathcal{X} \rightarrow \mathcal{Y} \rightarrow \mathcal{X}$), which assumes a one-to-one mapping between the two domains, the proposed cross-cycle constraint exploits the disentangled content and attribute representations for cyclic reconstruction.

Our cross-cycle constraint consists of two stages of I2I translation.

Forward translation. Given a non-corresponding pair of images $x$ and $y$, we encode them into $\{z^c_x, z^a_x\}$ and $\{z^c_y, z^a_y\}$. We then perform the first translation by swapping the attribute representations (i.e., $\{z^c_x, z^a_y\}$ and $\{z^c_y, z^a_x\}$) to generate $\{u, v\}$, where $u \in \mathcal{X}$ and $v \in \mathcal{Y}$:

$u = G_{\mathcal{X}}(z^c_y, z^a_x), \qquad v = G_{\mathcal{Y}}(z^c_x, z^a_y)$  (3)

Backward translation. After encoding $u$ and $v$ into $\{z^c_u, z^a_u\}$ and $\{z^c_v, z^a_v\}$, we perform the second translation by once again swapping the attribute representations (i.e., $\{z^c_u, z^a_v\}$ and $\{z^c_v, z^a_u\}$):

$\hat{x} = G_{\mathcal{X}}(z^c_v, z^a_u), \qquad \hat{y} = G_{\mathcal{Y}}(z^c_u, z^a_v)$  (4)

Here, after the two I2I translation stages, the translation should reconstruct the original images $x$ and $y$ (as illustrated in Figure 3). To enforce this constraint, we formulate the cross-cycle consistency loss as:

$L^{cc}_1(G_{\mathcal{X}}, G_{\mathcal{Y}}, E^c_{\mathcal{X}}, E^c_{\mathcal{Y}}, E^a_{\mathcal{X}}, E^a_{\mathcal{Y}}) = \mathbb{E}_{x,y}\big[\|G_{\mathcal{X}}(E^c_{\mathcal{Y}}(v), E^a_{\mathcal{X}}(u)) - x\|_1 + \|G_{\mathcal{Y}}(E^c_{\mathcal{X}}(u), E^a_{\mathcal{Y}}(v)) - y\|_1\big]$  (5)

where $u = G_{\mathcal{X}}(E^c_{\mathcal{Y}}(y), E^a_{\mathcal{X}}(x))$ and $v = G_{\mathcal{Y}}(E^c_{\mathcal{X}}(x), E^a_{\mathcal{Y}}(y))$, respectively.
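The two-stage cross-cycle reconstruction can be sketched as follows; all function handles are hypothetical stand-ins for the encoders and generators of the two domains.

```python
import torch.nn.functional as F

def cross_cycle_loss(enc_c_x, enc_a_x, enc_c_y, enc_a_y, gen_x, gen_y, x, y):
    # Encode the non-corresponding pair.
    z_c_x, z_a_x = enc_c_x(x), enc_a_x(x)
    z_c_y, z_a_y = enc_c_y(y), enc_a_y(y)
    # Forward translation: swap attributes to get intermediate images u, v.
    u = gen_x(z_c_y, z_a_x)          # content of y, attribute of x -> domain X
    v = gen_y(z_c_x, z_a_y)          # content of x, attribute of y -> domain Y
    # Backward translation: encode u, v and swap attributes again.
    z_c_u, z_a_u = enc_c_x(u), enc_a_x(u)
    z_c_v, z_a_v = enc_c_y(v), enc_a_y(v)
    x_hat = gen_x(z_c_v, z_a_u)      # should reconstruct the original x
    y_hat = gen_y(z_c_u, z_a_v)      # should reconstruct the original y
    return F.l1_loss(x_hat, x) + F.l1_loss(y_hat, y)
```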

3.3 Other Loss Functions

In addition to the proposed content adversarial loss and cross-cycle consistency loss, we also use several other loss functions to facilitate network training. We illustrate these additional losses in Figure 4. Starting from the top-right, in the counter-clockwise order:

Domain adversarial loss. We impose an adversarial loss where $D_{\mathcal{X}}$ and $D_{\mathcal{Y}}$ attempt to discriminate between real images and generated images in each domain, while $G_{\mathcal{X}}$ and $G_{\mathcal{Y}}$ attempt to generate realistic images.

Self-reconstruction loss. In addition to the cross-cycle reconstruction, we apply a self-reconstruction loss to facilitate the training process. With the encoded content and attribute features $\{z^c_x, z^a_x\}$ and $\{z^c_y, z^a_y\}$, the decoders $G_{\mathcal{X}}$ and $G_{\mathcal{Y}}$ should decode them back to the original inputs $x$ and $y$. That is, $\hat{x} = G_{\mathcal{X}}(E^c_{\mathcal{X}}(x), E^a_{\mathcal{X}}(x))$ and $\hat{y} = G_{\mathcal{Y}}(E^c_{\mathcal{Y}}(y), E^a_{\mathcal{Y}}(y))$.

Latent regression loss. To encourage an invertible mapping between the image and the latent space, we apply a latent regression loss similar to Zhu et al. (2017b). We draw a latent vector $z$ from the prior Gaussian distribution as the attribute representation and attempt to reconstruct it with $\hat{z} = E^a_{\mathcal{X}}(G_{\mathcal{X}}(z^c_y, z))$ and $\hat{z} = E^a_{\mathcal{Y}}(G_{\mathcal{Y}}(z^c_x, z))$.

The full objective function of our network is:

$\mathcal{L} = \lambda^{content}_{adv} L^{content}_{adv} + \lambda^{cc}_{1} L^{cc}_{1} + \lambda^{domain}_{adv} L^{domain}_{adv} + \lambda^{recon}_{1} L^{recon}_{1} + \lambda^{latent}_{1} L^{latent}_{1} + \lambda_{KL} L_{KL}$  (6)
$\min_{G, E^c, E^a}\ \max_{D, D^c}\ \mathcal{L}$  (7)

where the hyper-parameters $\lambda$ control the importance of each term.
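A sketch of the additional loss terms and their weighted combination. The KL term is written as a simple moment-matching surrogate, and the lambda values are placeholders for illustration, not the weights used in the paper.

```python
import torch
import torch.nn.functional as F

def self_reconstruction_loss(enc_c, enc_a, gen, img):
    # G should reproduce the input from its own content and attribute codes.
    return F.l1_loss(gen(enc_c(img), enc_a(img)), img)

def latent_regression_loss(enc_a, gen, content, z):
    # Draw z ~ N(0, I), generate, and require the attribute encoder to recover z.
    return F.l1_loss(enc_a(gen(content, z)), z)

def kl_surrogate_loss(z_a):
    # Penalize deviation of the attribute code from a standard Gaussian
    # (a simplified second-moment surrogate, used here for illustration).
    return torch.mean(z_a ** 2)

def total_loss(l_content_adv, l_cc, l_domain_adv, l_recon, l_latent, l_kl, lam=None):
    # Placeholder weights; the paper's exact lambda values are not reproduced here.
    lam = lam or {'content': 1.0, 'cc': 10.0, 'adv': 1.0, 'recon': 10.0, 'latent': 10.0, 'kl': 0.01}
    return (lam['content'] * l_content_adv + lam['cc'] * l_cc + lam['adv'] * l_domain_adv +
            lam['recon'] * l_recon + lam['latent'] * l_latent + lam['kl'] * l_kl)
```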

Figure 5: Multi-domain I2I framework. We further extend the proposed disentangled representation framework to a more general multi-domain setting. Different from the domain-specific encoders, generators, and discriminators used in dual-domain I2I, all networks in the multi-domain setting are shared among all domains. Furthermore, one-hot domain codes are used as inputs and the discriminator performs domain classification in addition to discrimination.

3.4 Mode Seeking Regularization

We incorporate the mode seeking regularization Mao et al. (2019) method to alleviate the mode-collapse problem in conditional generation tasks. Given a conditional image $c$, latent vectors $z_1$ and $z_2$, and a conditional generator $G$, we use the mode seeking regularization term to maximize the ratio of the distance between $G(c, z_1)$ and $G(c, z_2)$ with respect to the distance between $z_1$ and $z_2$:

$\mathcal{L}_{ms} = \max_{G}\left(\frac{d_I(G(c, z_1), G(c, z_2))}{d_z(z_1, z_2)}\right)$  (8)

where $d_*(\cdot, \cdot)$ denotes the distance metric.

The regularization term can be easily incorporated into the proposed framework:

$\mathcal{L}_{full} = \mathcal{L} + \lambda_{ms}\mathcal{L}_{ms}$  (9)

where $\mathcal{L}$ denotes the full objective.
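A sketch of the mode seeking term, implemented here in the common "minimize the inverse ratio" form; the L1 distances and the epsilon constant are illustrative choices rather than the paper's exact configuration.

```python
import torch

def mode_seeking_loss(gen, content, z1, z2, eps=1e-5):
    """Encourage two latent codes to map to two distinct outputs."""
    out1, out2 = gen(content, z1), gen(content, z2)
    d_image = torch.mean(torch.abs(out1 - out2))    # distance in image space
    d_latent = torch.mean(torch.abs(z1 - z2))       # distance in latent space
    # Minimizing this ratio maximizes image-space distance relative to latent distance.
    return d_latent / (d_image + eps)
```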

3.5 Multi-Domain Image-to-Image Translation

In addition to translation between two domains, we apply the proposed disentangled representation to the multi-domain setting. Different from typical I2I designed for two domains, multi-domain I2I aims to perform translation among multiple domains with a single generator $G$.

We illustrate the framework for multi-domain I2I in Figure 5. Given $n$ domains $\{\mathcal{X}_1, \mathcal{X}_2, \dots, \mathcal{X}_n\}$, two images $\{x_i, x_j\}$ and their one-hot domain codes $\{d_i, d_j\}$ are randomly sampled ($x_i \in \mathcal{X}_i$, $x_j \in \mathcal{X}_j$). We encode the images onto a shared content space $\mathcal{C}$ and domain-specific attribute spaces $\{\mathcal{A}_1, \mathcal{A}_2, \dots, \mathcal{A}_n\}$:

(10)

We then perform the forward and backward translation similar to the dual-domain translation.

(11)

In addition to the loss functions used in the dual-domain translation, we leverage the discriminator as an auxiliary domain classifier. That is, the discriminator not only aims to discriminate between real images and translated images, but also performs domain classification:

(12)

Thus, our new objective function is:

(13)
(14)
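For the multi-domain model, the discriminator doubles as a domain classifier. The two-head design below is an illustrative sketch, not the exact architecture used in the paper.

```python
import torch.nn as nn
import torch.nn.functional as F

class MultiDomainDiscriminator(nn.Module):
    def __init__(self, in_ch=3, ch=64, num_domains=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, ch, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.adv_head = nn.Linear(ch * 2, 1)            # real vs. translated
        self.cls_head = nn.Linear(ch * 2, num_domains)  # which domain

    def forward(self, img):
        h = self.features(img)
        return self.adv_head(h), self.cls_head(h)

def domain_classification_loss(cls_logits, domain_labels):
    # Cross-entropy on real images trains the classifier; the same loss on
    # translated images drives the generator toward the target domain.
    return F.cross_entropy(cls_logits, domain_labels)
```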
Method | mode-seeking | multi-domain | high-resolution
DRIT | - | - | -
DRIT++ (two-domain) | ✓ | - | -
DRIT++ (multi-domain) | ✓ | ✓ | -
DRIT++ (high-resolution) | ✓ | - | ✓
Table 1: Summary of the components used in each method. We describe the differences among DRIT, DRIT++, and its variants.
Input Generated images
Figure 6: Sample results. We show example results produced by our model. The left column shows the input images in the source domain. The other five columns show the output images generated by sampling random vectors in the attribute space. The mappings from top to bottom are: photo → Monet, winter → summer, photograph → portrait, and cat → dog.
(a)
Figure 7: Baseline artifacts. On the winter → summer translation task, our model produces more diverse and realistic samples than the baselines.
(a)
Figure 8: Effectiveness of mode seeking regularization. Mode seeking regularization helps improve the diversity of translated images while maintaining the visual quality.
(a)
Figure 9: Linear interpolation between two attribute vectors. Translation results with linear-interpolated attribute vectors between two attributes (highlighted in red).
Content Attribute Output
(a) Inter-domain attribute transfer
Content Attribute Output
(b) Intra-domain attribute transfer
Figure 10: Attribute transfer. At test time, in addition to random sampling from the attribute space, we can also perform translation using query images with the desired attributes. Since the content space is shared across the two domains, we can achieve not only (a) inter-domain but also (b) intra-domain attribute transfer. Note that we do not explicitly involve intra-domain attribute transfer during training.

4 Experimental Results

Implementation details. We implement the proposed model with PyTorch Paszke et al. (2017) and use the same input image size for all of our experiments. For the content encoder $E^c$, we use an architecture consisting of three convolution layers followed by four residual blocks. For the attribute encoder $E^a$, we use a CNN architecture with four convolution layers followed by fully-connected layers; the size of the attribute vector is fixed across all experiments. For the generator $G$, we use an architecture consisting of four residual blocks followed by three fractionally strided convolution layers.
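A sketch that follows the stated layer counts (three convolution layers plus four residual blocks for the content encoder; four residual blocks plus three fractionally strided convolutions for the generator). Channel widths, strides, and normalization choices are assumptions, and the attribute-vector injection is omitted for brevity.

```python
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, 1, 1), nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, 1, 1), nn.InstanceNorm2d(ch))

    def forward(self, x):
        return x + self.body(x)

def make_content_encoder(in_ch=3, ch=64):
    # Three strided convolutions followed by four residual blocks.
    layers = [nn.Conv2d(in_ch, ch, 4, 2, 1), nn.ReLU(inplace=True),
              nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.ReLU(inplace=True),
              nn.Conv2d(ch * 2, ch * 4, 4, 2, 1), nn.ReLU(inplace=True)]
    layers += [ResBlock(ch * 4) for _ in range(4)]
    return nn.Sequential(*layers)

def make_generator_backbone(ch=256, out_ch=3):
    # Four residual blocks followed by three fractionally strided convolutions.
    layers = [ResBlock(ch) for _ in range(4)]
    layers += [nn.ConvTranspose2d(ch, ch // 2, 4, 2, 1), nn.ReLU(inplace=True),
               nn.ConvTranspose2d(ch // 2, ch // 4, 4, 2, 1), nn.ReLU(inplace=True),
               nn.ConvTranspose2d(ch // 4, out_ch, 4, 2, 1), nn.Tanh()]
    return nn.Sequential(*layers)
```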

For training, we use the Adam optimizer Kingma and Ba (2015) with exponential decay rates for the moment estimates. In all experiments, we fix the hyper-parameters that weight the individual loss terms, and we additionally apply an L1 weight regularization on the content representation. We follow the procedure in DCGAN Radford et al. (2016) for training the model with the adversarial loss. More results can be found at http://vllab.ucmerced.edu/hylee/DRIT_pp/. The source code and trained models will be made available to the public.
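A minimal optimizer-setup sketch. The learning rate, decay rates, and parameter grouping shown here are illustrative placeholders rather than the exact values used for the reported results.

```python
import itertools
import torch

def build_optimizers(nets_g, nets_d, lr=1e-4, betas=(0.5, 0.999)):
    """nets_g: encoders and generators; nets_d: domain and content discriminators."""
    opt_g = torch.optim.Adam(itertools.chain(*[n.parameters() for n in nets_g]),
                             lr=lr, betas=betas)
    opt_d = torch.optim.Adam(itertools.chain(*[n.parameters() for n in nets_d]),
                             lr=lr, betas=betas)
    return opt_g, opt_d
```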

Datasets. We evaluate the proposed model on several datasets, including Yosemite Zhu et al. (2017a) (summer and winter scenes), pets (cat and dog images) cropped from Google images, artworks Zhu et al. (2017a) (Monet paintings), and photo-to-portrait images cropped from subsets of the WikiArt dataset (https://www.wikiart.org/) and the CelebA dataset Liu et al. (2015).

Evaluated methods. We perform the evaluation on the following algorithms:

  • DRIT++: The proposed model.

  • DRIT Lee et al. (2018), and MUNIT Huang et al. (2018): Multimodal generation frameworks trained with unpaired data.

  • DRIT w/o $D^c$: the DRIT model without the content discriminator.

  • Cycle/Bicycle: We construct a baseline using a combination of CycleGAN and BicycleGAN. Here, we first train CycleGAN on unpaired data to generate corresponding images as pseudo image pairs. We then use this pseudo paired data to train BicycleGAN.

  • CycleGAN Zhu et al. (2017a), and BicycleGAN Zhu et al. (2017b)

The proposed DRIT++ method extends the original DRIT method by 1) incorporating mode-seeking regularization to improve sample diversity and 2) generalizing the two-domain model to handle multi-domain image-to-image translation problems. The DRIT++ (multi-domain) algorithm is backward compatible with the DRIT++ (two-domain) and DRIT methods with comparable performance (as shown in Section 4.2). Thus, the DRIT++ (two-domain) method can be viewed as a special case of the DRIT++ (multi-domain) algorithm. Under the two-domain setting, the DRIT++ (two-domain) algorithm slightly improves visual quality over the DRIT++ (multi-domain) scheme by using domain-specific generators and discriminators.

(a)
Figure 11: Multi-domain I2I. We show example results of our model on the multi-domain I2I task. We demonstrate the translation among real images and two artistic styles (Monet and Ukiyoe), and the translation among different weather conditions (sunny, cloudy, snowy, and foggy).
Figure 12: Realism of synthesized images. We conduct a user study and ask subjects to select the result that is more realistic through pairwise comparisons. The number indicates the percentage of preference for that comparison pair. We use the winter → summer and the cat → dog translations for this experiment.

4.1 Qualitative Evaluation

Diversity. We first compare the proposed model with other methods in Figure 6. Figure 7 demonstrates the visual artifacts of images generated by the baseline methods. Both our model without $D^c$ and Cycle/Bicycle can generate diverse results. However, the results contain clearly visible artifacts. Without the content discriminator, our model fails to capture domain-specific details (e.g., the color of trees and sky), so the variations of the synthesized images lie mainly in global color differences. Since the Cycle/Bicycle baseline is trained on pseudo paired data generated by CycleGAN, the limited quality of these pseudo pairs restricts the diversity of the generated images.

To better analyze the learned domain-specific attribute space, we perform linear interpolation between two given attribute vectors and generate the corresponding images, as shown in Figure 9. The interpolation results validate the continuity of the attribute space and show that our model can generalize within the distribution rather than simply retaining visual information.
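The interpolation experiment can be sketched as below, assuming a trained generator `gen` and two encoded attribute vectors; all names are hypothetical.

```python
import torch

@torch.no_grad()
def interpolate_attributes(gen, content, z_a1, z_a2, steps=5):
    """Decode linear blends of two attribute vectors with a fixed content code."""
    images = []
    for alpha in torch.linspace(0.0, 1.0, steps):
        z = (1 - alpha) * z_a1 + alpha * z_a2   # linear interpolation in attribute space
        images.append(gen(content, z))
    return images
```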

Mode seeking regularization. We demonstrate the effectiveness of the mode seeking regularization term in Figure 8. The mode seeking regularization term substantially alleviates the mode collapse issue in DRIT Lee et al. (2018), particularly in the challenging shape-variation translation (i.e., dog-to-cat translation).

Attribute transfer. We demonstrate the results of the attribute transfer in Figure 10. By disentangling content and attribute representations, we are able to perform attribute transfer from images of desired attributes, as illustrated in Figure 3(c). Furthermore, since the content space is shared between two domains, we can generate images conditioned on content features encoded from either domain. Thus our model can achieve not only inter-domain but also intra-domain attribute transfer. Note that intra-domain attribute transfer is not explicitly involved in the training process.

Multi-domain I2I. Figure 11 shows the results of applying the proposed method on the multi-domain I2I. We perform translation among three domains (real images and two artistic styles) and four domains (different weather conditions). Using one single generator, the proposed model is able to perform diverse translation among multiple domains.

4.2 Quantitative Evaluation

Metrics. We conduct quantitative evaluations using the following metrics:

  • FID. To evaluate the quality of the generated images, we use the FID Heusel et al. (2017) metric to measure the distance between the generated distribution and the real one through features extracted by Inception Network Szegedy et al. (2015). Lower FID values indicate better quality of the generated images.

  • LPIPS. To evaluate diversity, we employ LPIPS Zhang et al. (2018b) metric to measure the average feature distances between generated samples. Higher LPIPS scores indicate better diversity among the generated images.

  • JSD and NDB. To measure the similarity between the distributions of real and generated images, we adopt two bin-based metrics, JSD and NDB Richardson and Weiss (2018), which evaluate the extent of mode missing of generative models. Similar to Richardson and Weiss (2018), we first cluster the training samples using K-means into different bins, which can be viewed as modes of the real data distribution. We then assign each generated sample to the bin of its nearest neighbor, and compute the bin proportions of the training samples and the synthesized samples to evaluate the difference between the generated and real data distributions. The NDB and JSD metrics of the bin proportions then measure the level of mode collapse; lower NDB and JSD scores mean the generated distribution fits more modes and thus better approaches the real data distribution. More discussions on these metrics can be found in Richardson and Weiss (2018). A computational sketch is provided after this list.

  • User preference. For evaluating realism of synthesized images, we conduct a user study using pairwise comparison. Given a pair of images sampled from real images and translated images generated from various methods, each subject needs to answer the question “Which image is more realistic?”
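Below is an illustrative sketch of the bin-based NDB/JSD computation described above, operating on pre-extracted feature vectors; the number of bins, the feature choice, and the significance threshold are assumptions for illustration.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.cluster import KMeans

def ndb_jsd(real_feats, gen_feats, num_bins=50):
    """real_feats, gen_feats: 2D arrays of per-sample feature vectors."""
    # Cluster real samples into bins (modes of the real data distribution).
    km = KMeans(n_clusters=num_bins, n_init=10).fit(real_feats)
    p_real = np.bincount(km.labels_, minlength=num_bins) / len(real_feats)
    # Assign each generated sample to its nearest bin.
    p_gen = np.bincount(km.predict(gen_feats), minlength=num_bins) / len(gen_feats)

    # Two-proportion z-test per bin; count bins whose proportions differ significantly.
    n_r, n_g = len(real_feats), len(gen_feats)
    pooled = (p_real * n_r + p_gen * n_g) / (n_r + n_g)
    se = np.sqrt(pooled * (1 - pooled) * (1 / n_r + 1 / n_g)) + 1e-12
    z = np.abs(p_real - p_gen) / se
    ndb = int(np.sum(z > 1.96))                 # roughly a two-sided 5% level

    jsd = jensenshannon(p_real, p_gen) ** 2     # scipy returns the JS distance (sqrt of divergence)
    return ndb, jsd
```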

Realism vs. diversity. We conduct the experiments using winter → summer and cat → dog translation with the Yosemite and pets datasets, respectively. Table 2, Table 5, and Figure 12 present the quantitative comparisons with the state-of-the-art and baseline methods. In Table 2, the DRIT++ method performs well on all metrics: it generates images that are not only realistic, but also diverse and close to the real data distribution. Table 5 validates the effectiveness of the content discriminator, latent regression loss, and mode-seeking regularization in the proposed algorithm. Figure 12 shows the results of the user study, where the DRIT++ algorithm performs favorably against the state-of-the-art approaches as well as the baseline methods.

Multi-domain translation. We compare the performance of DRIT++, StarGAN Choi et al. (2018b), and DosGAN Lin et al. (2018a) in terms of realism on the weather dataset. For each trial, we translate 1000 testing images to one of the four domains and measure the visual quality (in terms of FID) and the diversity (using the LPIPS metric). We report results averaged over 5 trials. Table 6 shows that the disentangled representations learned by our method not only enable diverse translation, but also improve the quality of the generated images. Figure 14 presents qualitative results of the evaluated methods.

Multi-domain model on two-domain translation. Two-domain translation is a special case of the multi-domain translation problem. We conduct an experiment under the same settings described in Tables 2 and 3. As shown in Table 3, our multi-domain model performs comparably on all metrics to the two-domain translation model, which uses domain-specific generators and discriminators.

Ablation study on the content discriminator. In practice, the content discriminator $D^c$ helps align the distributions of the latent content representations of the two domains. We conduct experiments on both the cat2dog and Yosemite datasets to illustrate this. The distance between the means of the content representations from the two domains is measured by:

$d_{content} = \Big\| \frac{1}{|\mathcal{X}|}\sum_{x \in \mathcal{X}} E^c_{\mathcal{X}}(x) - \frac{1}{|\mathcal{Y}|}\sum_{y \in \mathcal{Y}} E^c_{\mathcal{Y}}(y) \Big\|_2$  (15)

Table 4 shows the quantitative results. Furthermore, Figure 13 visualizes the distributions of the latent content representations from the two domains using t-SNE. The distance in (15) between the content representations of the two domains is much smaller with the help of the content discriminator.
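A sketch of the distance in Eq. (15): the L2 distance between the mean content representations of the two domains, with flattening and batching simplified for clarity.

```python
import torch

@torch.no_grad()
def content_mean_distance(enc_c_x, enc_c_y, batch_x, batch_y):
    """L2 distance between the per-domain means of the flattened content features."""
    mean_x = enc_c_x(batch_x).flatten(1).mean(dim=0)
    mean_y = enc_c_y(batch_y).flatten(1).mean(dim=0)
    return torch.norm(mean_x - mean_y, p=2).item()
```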

Datasets: Winter ↔ Summer
Cycle/Bicycle DRIT MUNIT DRIT++
FID
NDB
JSD
LPIPS
Datasets: Cat ↔ Dog
Cycle/Bicycle DRIT MUNIT DRIT++
FID
NDB
JSD
LPIPS
Table 2: Quantitative results on the Yosemite (Summer ↔ Winter) and the Cat ↔ Dog datasets.
DRIT++ (two-domain) DRIT++ (multi-domain)
FID
NDB
JSD
LPIPS
Table 3: Quantitative results of DRIT++ (multi-domain) on the Yosemite (Summer ↔ Winter) dataset.
Dataset DRIT++ DRIT++ w/o content discriminator
cat2dog 10.45 55.45
Yosemite 31.56 58.69
Table 4: Average distance between latent content representations of two domains.
Figure 13: Visualization of the latent content representations of two domains using t-SNE. Each data point is a content representation encoded from an image of that domain.
DRIT w/o $D^c$ | DRIT w/o KL | DRIT w/o $L^{latent}_1$ | DRIT | DRIT++
FID
NDB
JSD
LPIPS
Table 5: Ablation study. We demonstrate the effect of content discriminator, latent regression loss, and mode-seeking regularization in the proposed algorithm.
DRIT++ StarGAN DosGAN
FID
LPIPS
Table 6: Multi-domain translation comparison. We compare the visual quality and diversity of DRIT++ (multi-domain) with two multi-domain translation models on the weather dataset. The results are averaged over 5 trials. StarGAN obtains the highest LPIPS score due to its lower visual quality.
Figure 14: Comparisons of different multi-domain translation models on the weather dataset.
(a)
Figure 15: Multi-scale generator-discriminator. To enhance the quality of generated high-resolution images, we adopt a multi-scale generator-discriminator architecture. We generate low-resolution images from the intermediate features of the generator. An additional adversarial domain loss is applied on the low-resolution images.

4.3 High Resolution I2I

We demonstrate that the proposed scheme can be applied to translation tasks with high-resolution images. We perform image translation on the street scene (GTA Richter et al. (2016) ↔ Cityscapes Cordts et al. (2016)) dataset. During training, we randomly crop the input images for memory efficiency. To enhance the quality of the generated high-resolution images, we adopt a multi-scale generator-discriminator structure similar to the StackGAN Zhang et al. (2018a) scheme. As shown in Figure 15, we extract an intermediate feature map of the generator and pass it through a convolutional layer to generate low-resolution images. We utilize an additional discriminator that takes the low-resolution images as input. This discriminator enforces the first few layers of the generator to capture the distribution of low-level variations such as colors and image structures. We find that such a multi-scale generator-discriminator structure facilitates training and yields more realistic images on the high-resolution translation task. To validate the effectiveness of the multi-scale architecture, we compare (1) adding two more layers to the generators and (2) using the multi-scale generator-discriminator architecture in Table 7 and Figure 16, reporting the FID and LPIPS scores of the generated images on the GTA5 → Cityscapes translation task. As shown in Table 7, the multi-scale architecture generates more photo-realistic images on the high-resolution translation task.
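The multi-scale design can be sketched as below: an intermediate feature map of the generator is projected to a low-resolution image by an extra convolution and supervised by a second discriminator (omitted here). The module split and channel width are illustrative assumptions.

```python
import torch.nn as nn

class MultiScaleGenerator(nn.Module):
    """Wraps a generator split into a `front` and a `back` part."""
    def __init__(self, front, back, mid_ch=128, out_ch=3):
        super().__init__()
        self.front = front                    # layers producing intermediate features
        self.back = back                      # layers producing the full-resolution image
        # Extra convolution that maps intermediate features to a low-resolution image.
        self.to_lowres = nn.Sequential(nn.Conv2d(mid_ch, out_ch, 3, 1, 1), nn.Tanh())

    def forward(self, conditioned_input):
        # `conditioned_input` is the content map already fused with the attribute code.
        feat = self.front(conditioned_input)
        low_res = self.to_lowres(feat)        # fed to an additional low-resolution discriminator
        high_res = self.back(feat)
        return high_res, low_res
```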

DRIT++ w/ 2 more layers DRIT++ (high-resolution)
FID
LPIPS
Table 7: Ablation study on the multi-scale generator-discriminator architecture. We compare adding two more layers to the generator with using the multi-scale generator-discriminator architecture.
(a)
Figure 16: High-resolution translations. We show sample results produced by our model with the multi-scale generator-discriminator architecture. The mappings from top to bottom are: GTA → Cityscapes and Cityscapes → GTA.
(a) Summer → Winter
(b) van Gogh → Monet
Figure 17: Failure examples. Typical cases: (a) Attribute space not fully exploited. (b) Distribution characteristic difference.

4.4 Limitations

The performance of the proposed algorithm is limited in several aspects. First, due to the limited amount of training data, the attribute space is not fully exploited. Our I2I translation fails when the sampled attribute vectors lie in under-sampled regions of the space, as shown in Figure 17(a). Second, translation remains difficult when the domain characteristics differ significantly. For example, Figure 17(b) shows a failure case on a human figure due to the lack of human-related portraits in the Monet collection. Third, we use multiple encoders and decoders for the cross-cycle consistency during training, which requires a large amount of memory and limits the application to high-resolution image-to-image translation.

5 Conclusions

In this paper, we present a novel disentangled representation framework for diverse image-to-image translation with unpaired data. We propose to disentangle the latent space into a content space that encodes common information between domains, and a domain-specific attribute space that models the diverse variations given the same content. We apply a content discriminator to facilitate the representation disentanglement and propose a cross-cycle consistency loss for cyclic reconstruction to train in the absence of paired data. Qualitative and quantitative results show that the proposed model produces realistic and diverse images. We also apply the proposed algorithm to domain adaptation and achieve competitive performance compared to the state-of-the-art methods.

Acknowledgements

This work is supported in part by the NSF CAREER Grant #1149783, the NSF Grant #1755785, and gifts from Verisk, Adobe and Google.

Footnotes

  1. email: {hlee246, htseng6, ylu52, mhyang}@ucmerced.edu
  2. email: qimao@pku.edu.cn
  3. email: jbhuang@vt.edu
  4. email: maneesh.singh@verisk.com

References

  1. Guided image-to-image translation with bi-directional feature transformation. In ICCV.
  2. Augmented CycleGAN: learning many-to-many mappings from unpaired data. In ICML.
  3. Wasserstein GAN. In ICML.
  4. Unsupervised pixel-level domain adaptation with generative adversarial networks. In CVPR.
  5. DiDA: disentangled synthesis for domain adaptation. arXiv preprint arXiv:1805.08019.
  6. Photographic image synthesis with cascaded refinement networks. In ICCV.
  7. InfoGAN: interpretable representation learning by information maximizing generative adversarial nets. In NIPS.
  8. CrDoCo: pixel-level domain transfer with cross-domain consistency. In CVPR.
  9. Discovering hidden factors of variation in deep networks. In ICLR workshop.
  10. StarGAN: unified generative adversarial networks for multi-domain image-to-image translation. In CVPR.
  11. StarGAN: unified generative adversarial networks for multi-domain image-to-image translation. In CVPR.
  12. The Cityscapes dataset for semantic urban scene understanding. In CVPR.
  13. Unsupervised learning of disentangled representations from video. In NIPS.
  14. Generative adversarial nets. In NIPS.
  15. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In NIPS.
  16. CyCADA: cycle-consistent adversarial domain adaptation. In ICML.
  17. Multimodal unsupervised image-to-image translation. In ECCV.
  18. Image-to-image translation with conditional adversarial networks. In CVPR.
  19. Learning to discover cross-domain relations with generative adversarial networks. In ICML.
  20. Adam: a method for stochastic optimization. In ICLR.
  21. Semi-supervised learning with deep generative models. In NIPS.
  22. Deep Laplacian pyramid networks for fast and accurate super-resolution. In CVPR.
  23. Learning representations for automatic colorization. In ECCV.
  24. Photo-realistic single image super-resolution using a generative adversarial network. In CVPR.
  25. Diverse image-to-image translation via disentangled representations. In ECCV.
  26. Dancing to music. In NeurIPS.
  27. Deep joint image filtering. In ECCV.
  28. Joint image filtering with deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 41 (8), pp. 1909–1923.
  29. Exploring explicit domain supervision for latent space disentanglement in unpaired image-to-image translation. arXiv preprint arXiv:1805.08019.
  30. Conditional image-to-image translation. In CVPR.
  31. A unified feature disentangler for multi-domain image translation and manipulation. In NIPS.
  32. Unsupervised image-to-image translation networks. In NIPS.
  33. Deep learning face attributes in the wild. In ICCV.
  34. Exemplar guided unsupervised image-to-image translation. In ICLR.
  35. Pose guided person image generation. In NIPS.
  36. Adversarial autoencoders. In ICLR workshop.
  37. Mode seeking generative adversarial networks for diverse image synthesis. In CVPR.
  38. Disentangling factors of variation in deep representation using adversarial training. In NIPS.
  39. Image to image translation for domain adaptation. In CVPR.
  40. Semantic image synthesis with spatially-adaptive normalization. In CVPR.
  41. Automatic differentiation in PyTorch. In NIPS workshop.
  42. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR.
  43. Generative adversarial text to image synthesis. In ICML.
  44. On GANs and GMMs. In NIPS.
  45. Playing for data: ground truth from computer games. In ECCV.
  46. Learning from simulated and unsupervised images through adversarial training. In CVPR.
  47. Going deeper with convolutions. In CVPR.
  48. Unsupervised cross-domain image generation. In ICLR.
  49. Generating videos with scene dynamics. In NIPS.
  50. High-resolution image synthesis and semantic manipulation with conditional GANs. In CVPR.
  51. DualGAN: unsupervised dual learning for image-to-image translation. In ICCV.
  52. StackGAN++: realistic image synthesis with stacked generative adversarial networks. TPAMI.
  53. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR.
  54. Colorful image colorization. In ECCV.
  55. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV.
  56. Toward multimodal image-to-image translation. In NIPS.