A Variational U-Net for Conditional Appearance and Shape Generation
Deep generative models have demonstrated great performance in image synthesis. However, results deteriorate in case of spatial deformations, since they generate images of objects directly, rather than modeling the intricate interplay of their inherent shape and appearance. We present a conditional U-Net for shape-guided image generation, conditioned on the output of a variational autoencoder for appearance. The approach is trained end-to-end on images, without requiring samples of the same object with varying pose or appearance. Experiments show that the model enables conditional image generation and transfer: either shape or appearance can be retained from a query image, while the other is freely altered. Moreover, appearance can be sampled due to its stochastic latent representation, while preserving shape. In quantitative and qualitative experiments on COCO, DeepFashion [21, 23], shoes, Market-1501 and handbags, the approach demonstrates significant improvements over the state of the art.
Recently there has been great interest in generative models for image synthesis [7, 12, 18, 24, 49, 51, 32]. Generating images of objects requires a detailed understanding of both their appearance and their spatial layout. Therefore, we have to distinguish basic object characteristics. On the one hand, there is the shape and geometrical layout of an object relative to the viewpoint of the observer (a person sitting, standing, or lying down, or a folded handbag). On the other hand, there are inherent appearance properties such as those characterized by color and texture (curly long brown hair vs. a black buzz cut, or the pattern of corduroy). Evidently, objects naturally change their shape while retaining their inherent appearance (bending a shoe does not change its style). However, the picture of the object varies dramatically in the process, e.g., due to translation or even self-occlusion. Conversely, the color or fabric of a dress can change with no impact on its shape, but again clearly alters the image of the dress.
With deep learning, there has lately been great progress in generative models, in particular generative adversarial networks (GANs) [1, 8, 10, 27, 38], variational autoencoders, and their combination [2, 17]. Despite impressive results, these models still perform poorly on image distributions with large spatial variation: while high-resolution images have been generated for perfectly registered faces (e.g., the aligned CelebA dataset) [19, 13], synthesizing the full human body from datasets as diverse as COCO is still an open challenge. The main reason is that these generative models directly synthesize the image of an object, but fail to model the intricate interplay of appearance and shape that produces the image. Therefore, they can easily add facial hair or glasses to a face, as this amounts to recoloring image areas. Contrast this with a person moving their arm, which would be represented as coloring the arm at the old position with background color and turning the background at the new position into an arm. What we are lacking is a generative model that can move and deform objects and not only blend their colors.
Therefore, we seek to model both appearance and shape, and their interplay when generating images. For general applicability, we want to be able to learn from mere still-image datasets, with no need for a series of images of the same object instance showing different articulations. We propose a conditional U-Net architecture for mapping from shape to the target image and condition it on a latent representation of a variational autoencoder for appearance. To disentangle shape and appearance, we utilize easily available information related to shape, such as edges or automatic estimates of body joint locations. Our approach then enables conditional image generation and transfer: to synthesize different geometrical layouts or change the appearance of an object, either shape or appearance can be retained from a query image, whereas the other component can be freely altered or even imputed from other images. Moreover, the model also allows sampling from the appearance distribution without altering the shape.
2 Related work
In the context of deep learning, three different approaches to image generation can be identified: Generative Adversarial Networks (GANs), Autoregressive (AR) models, and Variational Autoencoders (VAEs).
Our method provides control over both appearance and shape. In contrast, many previous methods can control the generative process only with respect to appearance: [15, 26, 38] utilize class labels, others utilize attributes, and [44, 52] use textual descriptions to control the appearance.
Control over shape has mainly been obtained in the image-to-image translation framework. One approach uses a discriminator to obtain realistic outputs, but is limited to the synthesis of a single, uncontrollable appearance. To obtain a larger variety of appearances, another approach first generates a segmentation mask of fashion articles and then synthesizes an image. This leads to larger variation in appearance but does not allow changing the pose of a given appearance.
Segmentation masks have also been used to produce images of street scenes. Instead of relying on adversarial training, this line of work directly learns a multimodal distribution for each segmentation label. The number of appearances that can be produced is given by the number of combinations of modes, resulting in very coarse modeling of appearance. In contrast, our method makes no assumption that the data can be well represented by a limited number of modes, does not require segmentation masks, and includes an inference mechanism for appearance.
Other works utilize the GAN framework or the autoregressive framework to provide control over shape and appearance. However, the appearance is specified by very coarse text descriptions. Furthermore, both methods have problems producing the desired shape consistently.
In contrast to our generative approach, [4, 3] have pursued unsupervised learning of human posture similarity for retrieval in still images, and [25, 5] in videos. Rendering images of persons in different poses has been considered for a fixed, discrete set of target poses as well as for general poses. In the latter case, the authors use a two-stage model. The first stage performs pixelwise regression to a target image from a conditional image and the pose of the target image. Thus the method is fully supervised and requires labeled examples of the same appearance in different poses. As the result of the first stage is in most cases too blurry, a second stage employs adversarial training to produce more realistic images. Our method is never directly trained on the transfer task and therefore does not require such specific datasets. Instead, we carefully model the separation between shape and appearance and, as a result, obtain an explicit representation of the appearance which can be combined with new poses.
Let x be an image of an object from a dataset. We want to understand how images are influenced by two essential characteristics of the objects that they depict: their shape y and appearance z. Although the precise semantics of y can vary, we assume it characterizes geometrical information, particularly location, shape, and pose. z then represents the intrinsic appearance characteristics.
If y and z capture all variations of interest, the variance of a probabilistic model p(x|y, z) for images conditioned on those two variables is only due to noise. Hence, the maximum a posteriori estimate serves as an image generator controlled by y and z. How can we model this generator?
3.1 Variational Autoencoder based on latent shape and appearance
If y and z are both latent variables, a popular way of learning the generator is to use a VAE. To learn it, we need to maximize the log-likelihood of the observed data x and marginalize out the latent variables y and z. To avoid the intractable integral, one introduces an approximate posterior q(y, z|x) to obtain the evidence lower bound (ELBO) from Jensen's inequality,
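The display referenced below as Eq. 1 is missing here; a plausible reconstruction of the standard two-latent ELBO, with x denoting the image, y the shape, and z the appearance (symbols assumed from the surrounding text):

```latex
\log p(x) \;\geq\; \mathbb{E}_{q(y,z\mid x)}\!\left[\log p(x\mid y,z)\right] \;-\; \mathrm{KL}\!\left(q(y,z\mid x)\,\|\,p(y,z)\right) \tag{1}
```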
As one can see, Eq. 1 contains the prior p(y, z), which is assumed to be a standard normal distribution in the VAE framework. With this joint prior we cannot guarantee that the two variables y and z would be separated in the latent space. Thus, our overall goal of separately altering shape and appearance cannot be met. A standard normal prior can model z, but it is not suited to describe the spatial information contained in y, which is localized and easily gets lost in the bottleneck. Therefore, we need additional information to disentangle y and z when learning the generator p(x|y, z).
3.2 Conditional Variational Autoencoder with appearance
In the previous section we have shown that a standard VAE with two latent variables is not suitable for learning disentangled representations of y and z. Instead we assume that we have an estimator for the shape variable y, yielding an estimate ŷ. For example, such an estimator could provide information on shape by extracting edges or automatically estimating body joint locations [6, 41]. Following up on Eq. 1, the task is now to infer the latent variable z from the image x and the estimate ŷ by maximizing their conditional log-likelihood.
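A plausible reconstruction of the missing display (Eq. 2), the conditional ELBO obtained by conditioning everything on the shape estimate ŷ (notation assumed from context):

```latex
\log p(x\mid \hat{y}) \;\geq\; \mathbb{E}_{q(z\mid x,\hat{y})}\!\left[\log p(x\mid \hat{y},z)\right] \;-\; \mathrm{KL}\!\left(q(z\mid x,\hat{y})\,\|\,p(z\mid \hat{y})\right) \tag{2}
```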
Compared to Eq. 1, the ELBO in Eq. 2 now depends on the conditional prior p(z|ŷ). This distribution can be estimated from the training data and captures potential interrelations between shape and appearance. For instance, a person jumping is less likely to wear a dinner jacket than a T-shirt.
Following prior work, we model p(x|ŷ, z) as a parametric Laplace distribution and q(z|x, ŷ) as a parametric Gaussian distribution. The parameters of these distributions are estimated by two neural networks, a generator and an encoder, respectively. Using the reparametrization trick, these networks can be trained end-to-end using standard gradient descent. The loss function for training follows directly from Eq. 2 and has the form:
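A plausible reconstruction of the missing display (Eq. 3), written as the negative ELBO of the conditional bound so that its second term is the reconstruction term discussed below:

```latex
\mathcal{L} \;=\; \mathrm{KL}\!\left(q(z\mid x,\hat{y})\,\|\,p(z\mid \hat{y})\right) \;-\; \mathbb{E}_{q(z\mid x,\hat{y})}\!\left[\log p(x\mid \hat{y},z)\right] \tag{3}
```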
where KL denotes the Kullback-Leibler divergence. The next section derives the network architecture we use for modeling p(x|ŷ, z) and q(z|x, ŷ).
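To make the training objective concrete, here is a minimal NumPy sketch of the two ingredients just mentioned, the reparametrization trick and the KL divergence between diagonal Gaussians. All names are illustrative and not taken from the paper:

```python
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, exp(logvar_q)) || N(mu_p, exp(logvar_p)) ), summed over dimensions."""
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(
        logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

def reparameterize(mu, logvar, rng):
    """Draw z = mu + sigma * eps with eps ~ N(0, I); in an autodiff framework
    this keeps the sample differentiable with respect to mu and logvar."""
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * logvar) * eps
```

In the model, `mu_q` and `logvar_q` would come from the encoder applied to the image and shape estimate, while `mu_p` and `logvar_p` would come from the learned conditional prior.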
[Figure: columns show ground truth (GT), pix2pix, our reconstruction, and our random samples]
Let us first establish a network which estimates the parameters of the distribution p(x|ŷ, z). We further assume, as is common practice, that the distribution has constant standard deviation and that its mean is a deterministic function G(ŷ, z). As a consequence, the network G can be considered an image generator network and we can replace the second term in Eq. 3 with the reconstruction loss:
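A plausible reconstruction of Eq. 4 under these assumptions, with G(ŷ, z) denoting the deterministic generator output (the symbol G is our labeling, and the L1 norm follows from the Laplace likelihood):

```latex
\mathcal{L} \;=\; \mathrm{KL}\!\left(q(z\mid x,\hat{y})\,\|\,p(z\mid \hat{y})\right) \;+\; \lVert x - G(\hat{y}, z)\rVert_{1} \tag{4}
```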
It is well known that pixelwise statistics of images, such as the L1-norm here, do not model perceptual quality of images well. Instead we adopt a perceptual loss and formulate the final loss function as:
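A plausible reconstruction of the final loss (Eq. 5); here Φ_k denotes the features of the k-th layer of the perceptual network and λ_k the per-layer weights, both symbols assumed rather than recovered:

```latex
\mathcal{L} \;=\; \mathrm{KL}\!\left(q(z\mid x,\hat{y})\,\|\,p(z\mid \hat{y})\right) \;+\; \sum_{k} \lambda_{k}\,\lVert \Phi_{k}(x) - \Phi_{k}(G(\hat{y}, z))\rVert_{1} \tag{5}
```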
where Φ is a network for measuring perceptual similarity (in our case VGG19) and the λ are hyper-parameters that control the contribution of the different layers of Φ to the total loss.
If we forget for a moment about the appearance z, the task of the network is to generate an image x given the estimate ŷ of its shape information. Here it is crucial to preserve the spatial information given by ŷ in the output image. Therefore, we represent ŷ in the form of an image of the same size as x. Depending on the estimator, this is easy to achieve: for example, estimated joints of a human body can be used to draw a stickman for this person. Given such an image representation of ŷ, we require that each keypoint of ŷ is used to estimate x. A U-Net architecture is the most appropriate choice in this case, as its skip connections help to propagate the information directly from input to output. In our case, however, the generator should learn about images by also conditioning on z.
The appearance z is sampled from the Gaussian distribution q(z|x, ŷ), whose parameters are estimated by the encoder network. Its optimization requires balancing two terms. It has to encode enough information about x into z such that p(x|ŷ, z) can describe the data well, as measured by the reconstruction loss in Eq. 4. At the same time, we penalize a deviation from the prior by minimizing the Kullback-Leibler divergence between q(z|x, ŷ) and p(z|ŷ). The design of the generator as a U-Net already guarantees the preservation of spatial information in the output image. Therefore, any additional information about the shape encoded in z, which is not already contained in the prior, incurs a cost without providing new information on the likelihood. Thus, an optimal encoder must be invariant to shape. In this case it suffices to include z at the bottleneck of the generator.
More formally, let our U-Net-like generator consist of two parts: an encoder and a decoder (see Fig. 2). We concatenate the inferred appearance representation z with the bottleneck representation produced by the encoder and let the decoder generate an image from it. Concatenating the shape and appearance features keeps the gradients for training the respective encoders well separated, while the decoder can learn to combine those representations for an optimal synthesis. Together, the generator's encoder and decoder build a U-Net-like network, which guarantees optimal transfer of spatial information from input to output images. On the other hand, the decoder combined with the appearance encoder frames a VAE that allows appearance inference. The prior p(z|ŷ) is estimated by the generator's encoder just before z is concatenated into its representation. We train all three networks jointly by minimizing the loss in Eq. 5.
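As an illustration of the data flow described above, the following NumPy sketch (no learned weights, illustrative names, simple pooling in place of strided convolutions) shows shape information travelling through skip connections while the appearance code enters only at the bottleneck:

```python
import numpy as np

def downsample(x):
    """2x average pooling on NCHW tensors (stand-in for a strided convolution)."""
    n, c, h, w = x.shape
    return x.reshape(n, c, h // 2, 2, w // 2, 2).mean(axis=(3, 5))

def upsample(x):
    """2x nearest-neighbour upsampling (stand-in for subpixel convolution)."""
    return x.repeat(2, axis=2).repeat(2, axis=3)

def unet_with_appearance(shape_img, z, n_scales=3):
    """Toy forward pass: encode the shape image, concatenate the appearance
    code z at the bottleneck, decode with skip connections. This only
    illustrates the information flow, not the real architecture."""
    h, skips = shape_img, []
    for _ in range(n_scales):
        skips.append(h)
        h = downsample(h)
    # broadcast z of shape (N, Cz) over the bottleneck's spatial grid and concat
    n, cz = z.shape
    z_map = np.broadcast_to(z[:, :, None, None], (n, cz, h.shape[2], h.shape[3]))
    h = np.concatenate([h, z_map], axis=1)
    for skip in reversed(skips):
        h = upsample(h)
        h = np.concatenate([h, skip], axis=1)  # skip connection from the encoder
    return h
```

Because z is injected only at the lowest resolution, it cannot carry localized spatial detail; the skips carry that instead, matching the argument that an optimal appearance encoder becomes shape invariant.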
We now demonstrate the advantages of the proposed method by showing image generation results on various datasets with different shape estimators. In addition to visual comparisons with other methods, all results are supported by numerical experiments. Code and additional experiments can be found at https://compvis.github.io/vunet.
[Figure: columns show ground truth (GT), pix2pix, our reconstruction, and our random samples]
Datasets To compare with other methods, we evaluate on: shoes, handbags, Market-1501, DeepFashion [21, 23] and COCO. As baselines for our subsequent comparisons we use the state-of-the-art pix2pix model and PG. To the best of our knowledge, PG is the only other approach able to transfer one person to the pose of another. We show that we improve upon this method and do not require specific datasets for training. With regard to pix2pix, it is the most general image-to-image translation model and can work with different shape estimates. Where applicable we directly compare to the quantitative and qualitative results provided by the authors of the mentioned papers. As pix2pix was not evaluated on Market-1501, DeepFashion and COCO, we train the model on these datasets using the published code.
Shape estimate In the following experiments we work with two kinds of shape estimates: edge images and, in the case of humans, automatically regressed body joint positions. We utilize edges extracted with the HED algorithm. For body joint regression we apply a current state-of-the-art real-time multi-person pose estimator.
Network architecture The generator is implemented as a U-Net architecture with residual blocks: a number of blocks in the encoder part and symmetric blocks in the decoder part. Additional skip connections link each block in the encoder to the corresponding block in the decoder and guarantee direct information flow from input to output. Empirically, we set the number of blocks to a value that worked well for all considered datasets. Each residual block follows the architecture proposed in prior work, without batch normalization. We use strided convolution after each residual block to downsample the input until a bottleneck layer. In the decoder we utilize subpixel convolution to perform the up-sampling between two consecutive residual blocks. All convolutional layers consist of filters of equal size. The appearance encoder follows the same architecture as the generator's encoder.
We train our model separately for each dataset using the Adam optimizer. The initial learning rate decreases linearly during training. We utilize weight normalization and data-dependent initialization of weights. Each λ is set to the reciprocal of the total number of elements in the corresponding layer.
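The layer weighting just described, with each weight the reciprocal of its layer's element count, can be sketched as follows; the function name and the list-of-feature-maps interface are our own simplifications:

```python
import numpy as np

def perceptual_loss(feats_x, feats_y):
    """Weighted sum of L1 feature distances between two images' feature maps.
    Each layer's weight is the reciprocal of that layer's element count,
    so every layer contributes on a comparable scale."""
    total = 0.0
    for fx, fy in zip(feats_x, feats_y):
        lam = 1.0 / fx.size
        total += lam * np.abs(fx - fy).sum()
    return total
```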
In-plane normalization In some difficult cases, e.g., for datasets with high shape variability, it is hard to perform appearance transfer from one object to another when there are no part correspondences between them. This is especially problematic when generating human beings. To cope with it we propose an additional in-plane normalization utilizing the information provided by the shape estimate ŷ. In our case ŷ is given by the positions of body joints, which we use to crop out areas around body limbs. This results in image crops that we stack together and feed as input to the generator instead of the original image. If some limbs are missing (e.g., due to occlusions) we use a black image instead of the corresponding crop.
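A minimal sketch of this cropping step, simplified to square windows around single joints rather than limb regions; names and the crop size are illustrative:

```python
import numpy as np

def limb_crops(image, joints, crop=8):
    """Crop fixed-size windows around each joint of an HWC image; missing
    joints (None) become black crops. The stack replaces the raw image as
    input, normalizing limb positions in-plane."""
    crops = []
    for j in joints:
        patch = np.zeros((crop, crop, image.shape[2]))
        if j is not None:
            y, x = j
            y0, x0 = max(0, y - crop // 2), max(0, x - crop // 2)
            y1 = min(image.shape[0], y0 + crop)
            x1 = min(image.shape[1], x0 + crop)
            patch[: y1 - y0, : x1 - x0] = image[y0:y1, x0:x1]
        crops.append(patch)
    return np.stack(crops)
```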
Let us now investigate the proposed model for conditional image generation based on three tasks: 1) reconstruction of an image given its shape estimate and original appearance; 2) conditional image generation based on a given shape estimate with sampled appearance; 3) conditional image generation from arbitrary combinations of shape and appearance.
4.1 Image reconstruction
Given a query image x and its shape estimate ŷ, we can use the encoder to infer the appearance of the image. Namely, we take the mean of the distribution q(z|x, ŷ) predicted from the single image x as its original appearance. Using this appearance and ŷ, we can ask our generator to reconstruct x from its two components.
We show examples of images reconstructed by our method in Figs. 3 and 4. Additionally, we calculate Structural Similarity (SSIM) and Inception Score (IS) for the reconstructions of the test images in the Market-1501 and DeepFashion datasets (see Table 1). Our method outperforms both pix2pix and PG in terms of SSIM. Note that SSIM compares the reconstructions directly against the original images. As our method differs from both baselines by generating images conditioned on shape and appearance, this underlines the benefit of this conditional representation for image generation. In contrast to SSIM, the inception score is measured on the set of reconstructed images independently of the original images. In terms of IS we achieve results comparable to one baseline and improve on the other.
4.2 Appearance sampling
An important advantage of our model over both baselines is its ability to generate multiple new images conditioned only on the estimate ŷ of an object's shape. This is achieved by randomly sampling the appearance from the learned prior p(z|ŷ) instead of inferring it directly from an image. Thus, appearance can be explored while keeping shape fixed.
Edges-to-images We compare our method to pix2pix by generating images from edge images of shoes and handbags. The results can be seen in Fig. 3. As noted by its authors, the outputs of pix2pix show only marginal diversity at test time, thus looking almost identical. To save space, we therefore present only one of them. In contrast, our model generates high-quality images with large diversity. We also observe that our model generalizes better to sketchy drawings made by humans (see Fig. 5). Due to their higher abstraction level, sketches are quite different from the edges extracted from real images in the previous experiment. In this challenging task our model shows higher coherence with the input edge image as well as fewer artifacts, for example at the carrying strap of the backpack.
Stickman-to-person Here we evaluate our model on the task of learning plausible appearances for rendering human beings. Given a shape estimate, we thus sample an appearance and generate an image. We compare our results with those achieved by pix2pix on the Market-1501 and DeepFashion datasets (see Fig. 4). Due to the marginal diversity in the output of pix2pix, we again show only one sample per row. We observe that our model has learned a significantly more natural latent representation of the distribution of appearance. It also preserves the spatial layout of the human figure better. We support this observation by re-estimating joint positions from the test images generated by each method on all three datasets. For this we apply the same pose estimation algorithm we used to estimate the positions of body joints initially, with its parameters kept fixed. We report the mean error in the positions of detected joints in Table 2. Our approach shows a significantly lower re-localization error, thus demonstrating that body pose has been favorably retained.
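The re-localization check can be sketched as follows; we assume a Euclidean error per joint, since the exact norm is not recoverable from the text, and the function name is our own:

```python
import numpy as np

def mean_joint_error(joints_in, joints_re):
    """Mean distance between the joints used as input and the joints
    re-estimated from the generated image (both lists of (y, x) pairs)."""
    a = np.asarray(joints_in, dtype=float)
    b = np.asarray(joints_re, dtype=float)
    return float(np.mean(np.linalg.norm(a - b, axis=-1)))
```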
4.3 Independent transfer of shape and appearance
We show the performance of our method for conditional image transfer in Fig. 7. Our disentangled representation of shape and appearance can transfer a single appearance over different shapes and vice versa. The model has learned a disentangled representation of both characteristics, so that one can be freely altered without affecting the other. This ability is further demonstrated in Fig. 6, which shows a synthesis across a full turn.
[Table 3: maximal pairwise distance and standard-deviation length of the appearance features per image group]
The only other work we can compare with in this experiment is PG. In contrast to our method, PG was trained fully supervised on the DeepFashion and Market-1501 datasets with pairs of images that share appearance (person ID) but contain different shapes (in this case poses) of the same person. Despite the fact that we never train our model explicitly on pairs of images, we demonstrate both qualitatively and quantitatively that our method improves upon PG. A direct visual comparison is shown in Fig. 8. We further design a new metric to evaluate and compare against PG on appearance and shape transfer. Since code for PG is not available, our comparison is limited to the generated images provided by its authors. The idea behind our metric is to compare how well the appearance of a reference image is preserved when synthesizing it with a new shape estimate. For that we first fine-tune an ImageNet-pretrained VGG16 on Market-1501 on the challenging task of person re-identification. At test time, the network achieves single-query retrieval results (mean average precision and rank-1 accuracy) comparable to those reported in the literature. Due to the nature of Market-1501, which contains images of the same persons from multiple viewpoints, the features learned by the network should be pose invariant and mostly sensitive to appearance. Therefore, we use the difference between two features extracted by this network as a measure of appearance similarity.
For all results on the DeepFashion and Market-1501 datasets reported for PG, we use our method to generate exactly the same images. Further, we build groups of images sharing the same appearance and retain those groups that contain more than one element. As a result we obtain three groups of images (see Table 3), which we analyze independently.
For each image in a group we find its nearest neighbors in the training set using the embedding of the fine-tuned VGG16. We search for the nearest neighbors in the training dataset, as the person IDs and poses were taken from the test dataset. We calculate the mean over each nearest-neighbor set and use this mean as the unique representation of the generated image. For the images in each group we calculate the maximal pairwise distance between these representations as well as the length of the standard-deviation vector. The results over all three image groups are summarized in Table 3. One can see that our method yields more compact feature representations of the images in each group. From these results we conclude that our generated images are more consistent in their appearance than the results of PG.
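The two group statistics can be sketched as follows; the function name and the (n, d) matrix interface are our own:

```python
import numpy as np

def group_compactness(reps):
    """reps: (n, d) feature representations of images sharing one appearance.
    Returns (maximal pairwise L2 distance, length of the std-deviation vector);
    lower values mean the generated appearances are more consistent."""
    reps = np.asarray(reps, dtype=float)
    diffs = reps[:, None, :] - reps[None, :, :]
    max_pair = float(np.linalg.norm(diffs, axis=-1).max())
    std_len = float(np.linalg.norm(reps.std(axis=0)))
    return max_pair, std_len
```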
Generalization to different poses Because we are not limited by the availability of labeled images showing the same appearance in different poses, we can utilize additional large-scale datasets. Results on COCO are shown in Fig. 1. Besides still images, we are able to synthesize videos. Examples can be found at https://compvis.github.io/vunet, demonstrating the transfer of appearances from COCO to poses obtained from a video dataset.
4.4 Ablation study
Finally, we analyze the effect of individual components of our method on the quality of the generated images (see Fig. 9).
Absence of appearance Without appearance information, our generator is a plain U-Net performing a direct mapping from the shape estimate ŷ to the image x. In this case, the output of the generator is the mean of p(x|ŷ). Because we model it as a unimodal Laplace distribution, the output is an estimate of the mean image over all possible images (of the dataset) with the given shape. As a result, the generated outputs do not show any appearance at all (Fig. 9, second row).
Importance of KL-loss We further show what happens if we replace the VAE in our model with a simple autoencoder. In practice this means that we ignore the KL term in the loss function in Eq. 5. In this case, the network has no incentive to learn a shape-invariant representation of the appearance and just learns to copy and paste the appearance inputs to the positions provided by the shape estimate (Fig. 9, third row).
Our full model The last row in Fig. 9 shows that our full model can successfully perform appearance transfer.
We have presented a variational U-Net for conditional image generation by modeling the interplay of shape and appearance. While a variational autoencoder allows sampling appearance, the U-Net preserves object shape. Experiments on several datasets and diverse objects have demonstrated that the model significantly improves the state of the art in conditional image generation and transfer. This work has been supported in part by the Heidelberg Academy of Science and a hardware donation from NVIDIA.
-  M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
-  J. Bao, D. Chen, F. Wen, H. Li, and G. Hua. CVAE-GAN: Fine-grained image generation through asymmetric training. In Proceedings of the International Conference on Computer Vision (ICCV), 2017.
-  M. Bautista, A. Sanakoyeu, and B. Ommer. Deep unsupervised similarity learning using partially ordered sets. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
-  M. Bautista, A. Sanakoyeu, E. Sutter, and B. Ommer. CliqueCNN: Deep unsupervised exemplar learning. In Proceedings of the Conference on Advances in Neural Information Processing Systems (NIPS), 2016.
-  B. Brattoli, U. Büchler, A. S. Wahl, M. E. Schwab, and B. Ommer. LSTM self-supervision for detailed behavior analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. (BB and UB contributed equally.)
-  Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh. Realtime multi-person 2d pose estimation using part affinity fields. In CVPR, 2017.
-  Q. Chen and V. Koltun. Photographic image synthesis with cascaded refinement networks. In Proceedings of the International Conference on Computer Vision (ICCV), 2017.
-  X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. arXiv preprint arXiv:1606.03657, 2016.
-  M. Eitz, J. Hays, and M. Alexa. How do humans sketch objects? ACM Trans. Graph. (Proc. SIGGRAPH), 31(4):44:1–44:10, 2012.
-  I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio. Generative adversarial nets. In Neural Information Processing Systems (NIPS), pages 2672–2680, 2014.
-  K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part IV, pages 630–645, 2016.
-  P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint arXiv:1611.07004, 2016.
-  T. Karras, T. Aila, S. Laine, and J. Lehtinen. Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017.
-  D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
-  D. P. Kingma, S. Mohamed, D. Jimenez Rezende, and M. Welling. Semi-supervised learning with deep generative models. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3581–3589. Curran Associates, Inc., 2014.
-  D. P. Kingma and M. Welling. Auto-encoding variational bayes. CoRR, abs/1312.6114, 2013.
-  A. B. L. Larsen, S. K. Sønderby, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015.
-  C. Lassner, G. Pons-Moll, and P. V. Gehler. A generative model for people in clothing. In Proceedings of the IEEE International Conference on Computer Vision, 2017.
-  C. Ledig, L. Theis, F. Huszar, J. Caballero, A. P. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
-  T. Lin, M. Maire, S. J. Belongie, L. D. Bourdev, R. B. Girshick, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: common objects in context. arXiv preprint arXiv:1405.0312, 2014.
-  Z. Liu, P. Luo, S. Qiu, X. Wang, and X. Tang. Deepfashion: Powering robust clothes recognition and retrieval with rich annotations. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
-  Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), 2015.
-  Z. Liu, S. Yan, P. Luo, X. Wang, and X. Tang. Fashion landmark detection in the wild. In European Conference on Computer Vision (ECCV), 2016.
-  L. Ma, X. Jia, Q. Sun, B. Schiele, T. Tuytelaars, and L. Van Gool. Pose guided person image generation. In Proceedings of the Conference on Advances in Neural Information Processing Systems (NIPS), pages 3846–3854, 2017.
-  T. Milbich, M. Bautista, E. Sutter, and B. Ommer. Unsupervised video understanding by reconciliation of posture similarities. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017.
-  A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier gans. arXiv preprint arXiv:1610.09585, 2017.
-  A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In International Conference on Learning Representations (ICLR), 2016.
-  S. E. Reed, Z. Akata, S. Mohan, S. Tenka, B. Schiele, and H. Lee. Learning what and where to draw. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 217–225. Curran Associates, Inc., 2016.
-  S. E. Reed, A. van den Oord, N. Kalchbrenner, S. Gómez, Z. Wang, D. Belov, and N. de Freitas. Parallel multiscale autoregressive density estimation. In Proceedings of The 34th International Conference on Machine Learning, 2017.
-  O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation, pages 234–241. Springer International Publishing, Cham, 2015.
-  M. Rosca, B. Lakshminarayanan, D. Warde-Farley, and S. Mohamed. Variational approaches for auto-encoding generative adversarial networks. CoRR, abs/1706.04987, 2017.
-  J. C. Rubio, A. Eigenstetter, and B. Ommer. Generative regularization with latent topics for discriminative object recognition. Pattern Recognition, 48(12):3871–3880, 2015.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.
-  T. Salimans, I. J. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. In NIPS, 2016.
-  T. Salimans and D. P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 901–909. Curran Associates, Inc., 2016.
-  W. Shi, J. Caballero, F. Huszar, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1874–1883, 2016.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
-  K. Sohn, H. Lee, and X. Yan. Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems (NIPS), pages 3483–3491, 2015.
-  A. van den Oord, N. Kalchbrenner, O. Vinyals, L. Espeholt, A. Graves, and K. Kavukcuoglu. Conditional image generation with pixelcnn decoders. In Advances in Neural Information Processing Systems (NIPS), pages 4790–4798, 2016.
-  Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: From error visibility to structural similarity. Trans. Img. Proc., 13(4):600–612, Apr. 2004.
-  S. Xie and Z. Tu. Holistically-nested edge detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015.
-  X. Yan, J. Yang, K. Sohn, and H. Lee. Attribute2image: Conditional image generation from visual attributes. In Proceedings of the European Conference on Computer Vision, 2016.
-  A. Yu and K. Grauman. Fine-grained visual comparisons with local learning. In Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
-  H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. Metaxas. StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. In Proceedings of the International Conference on Computer Vision (ICCV), 2017.
-  W. Zhang, M. Zhu, and K. G. Derpanis. From actemes to action: A strongly-supervised representation for detailed action understanding. In Proceedings of the IEEE International Conference on Computer Vision, pages 2248–2255, 2013.
-  B. Zhao, X. Wu, Z. Cheng, H. Liu, and J. Feng. Multi-view image generation from a single-view. arXiv preprint arXiv:1704.04886, 2017.
-  L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian. Scalable person re-identification: A benchmark. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015.
-  Z. Zheng, L. Zheng, and Y. Yang. A discriminatively learned cnn embedding for person re-identification. arXiv preprint arXiv:1611.05666, 2016.
-  J.-Y. Zhu, P. Krähenbühl, E. Shechtman, and A. A. Efros. Generative visual manipulation on the natural image manifold. In Proceedings of European Conference on Computer Vision (ECCV), 2016.
-  J.-Y. Zhu and T. Park. Image-to-Image translation with conditional adversarial nets.
-  J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.
-  S. Zhu, S. Fidler, R. Urtasun, D. Lin, and C. C. Loy. Be your own prada: Fashion synthesis with structural coherence. In Proceedings of the IEEE International Conference on Computer Vision, 2017.
Supplementary materials for Paper Submission 1449:
A Variational U-Net for Conditional Appearance and Shape Generation
Appendix A Network structure
The parameter of residual blocks in the network (see section 4) may vary for different datasets. For all experiments in the paper the value of was set to . Below we provide a detailed visualization of the architecture of the model that generates images and has residual blocks.
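As a minimal illustration of the kind of residual computation stacked in the network, the sketch below implements toy residual blocks in numpy. This is not the authors' exact architecture: the real blocks are convolutional with normalization layers, whereas here linear maps stand in for the learned transforms, and the block count and width are hypothetical placeholders.

```python
import numpy as np


def relu(x):
    return np.maximum(x, 0.0)


class ResidualBlock:
    """Toy residual block: y = x + W2 @ relu(W1 @ x).

    A stand-in for the convolutional residual blocks of section 4;
    convolutions and normalization are omitted for brevity.
    """

    def __init__(self, dim, rng):
        # Small random weights so the residual branch acts as a perturbation.
        self.w1 = 0.1 * rng.standard_normal((dim, dim))
        self.w2 = 0.1 * rng.standard_normal((dim, dim))

    def __call__(self, x):
        return x + self.w2 @ relu(self.w1 @ x)


def stack_blocks(n_blocks, dim, rng):
    """Chain n_blocks residual blocks into a single forward function."""
    blocks = [ResidualBlock(dim, rng) for _ in range(n_blocks)]

    def forward(x):
        for block in blocks:
            x = block(x)
        return x

    return forward


rng = np.random.default_rng(0)
net = stack_blocks(n_blocks=6, dim=64, rng=rng)  # placeholder values
x = rng.standard_normal(64)
y = net(x)
assert y.shape == x.shape  # residual blocks preserve dimensionality
```

Because each block adds its transform to the identity path, stacking more blocks deepens the network without changing feature dimensionality, which is why the block count can be tuned per dataset.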
Appendix B Examples of appearance sampling in different datasets
We show more examples highlighting the ability of our model to produce diverse samples similar to the results shown in Fig. 3 and 4. In Fig. 11 we condition on edge images of shoes and handbags and sample the appearance from the learned prior. We also run pix2pix multiple times to compare the diversity of the produced samples. A similar experiment is shown in Fig. 12, where we condition on human body joints instead of edge images.
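The diversity of these samples comes from drawing appearance codes from the learned prior via the reparameterization trick. The sketch below shows this sampling step in numpy; the prior mean and log-variance would be produced by the shape-conditional prior network, and here they are just placeholder arrays.

```python
import numpy as np


def sample_appearance(mu, logvar, n_samples, rng):
    """Draw appearance codes z ~ N(mu, diag(exp(logvar))).

    Uses the reparameterization z = mu + sigma * eps with eps ~ N(0, I).
    Each sampled row would be decoded together with the same shape
    estimate (edges or body joints) to yield a different appearance.
    """
    std = np.exp(0.5 * logvar)
    eps = rng.standard_normal((n_samples, mu.shape[0]))
    return mu + std * eps


rng = np.random.default_rng(0)
mu = np.zeros(8)      # placeholder prior mean
logvar = np.zeros(8)  # placeholder prior log-variance
z = sample_appearance(mu, logvar, n_samples=5, rng=rng)
assert z.shape == (5, 8)
# Distinct codes lead to distinct decoded appearances for one shape.
assert not np.allclose(z[0], z[1])
```

This is also what distinguishes the comparison with pix2pix: a deterministic mapping produces near-identical outputs across runs, while sampling distinct codes z yields visibly different appearances for the same conditioning input.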
Appendix C Transfer of shape and appearance
We show additional examples of transferring appearances to different shapes and vice versa. We emphasize again that our approach does not require labeled examples of images depicting the same appearance in different shapes. This enables us to apply it to a broad range of datasets, as summarized in Table 4.
Appendix D Quantitative results for the ablation study
We have included quantitative results for the ablation study (see section 4.4) in Table 5. The positive effect of the KL-regularization cannot be quantified by the Inception Score; thus we present qualitative results in Fig. 9.
| our (no appearance) | 2.211 | 0.080 | 2.211 | 0.080 |
| our (no kl)         | 3.168 | 0.296 | 3.594 | 0.199 |
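For reference, the KL term removed in the "our (no kl)" ablation has a closed form for diagonal Gaussians. The sketch below computes it against a standard normal for simplicity; in the model the divergence is taken with respect to the learned shape-conditional prior rather than N(0, I).

```python
import numpy as np


def kl_to_standard_normal(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ).

    This is the regularizer on the appearance posterior whose effect
    the 'our (no kl)' ablation removes. Shown against a standard
    normal prior for simplicity.
    """
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)


# When the posterior already matches the prior, the penalty vanishes.
assert kl_to_standard_normal(np.zeros(4), np.zeros(4)) == 0.0
# Any deviation in mean or variance is penalized.
assert kl_to_standard_normal(np.ones(4), np.zeros(4)) > 0.0
```

Dropping this term lets the posterior drift arbitrarily far from the prior, which improves reconstruction-based scores but degrades the quality of samples drawn from the prior, consistent with the qualitative comparison in Fig. 9.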
Appendix E Limitations
The quality of the generated images depends highly on the dataset used for training. Our method relies on appearance commonalities across the dataset that can be used to learn efficient, pose-invariant encodings. If the dataset provides sufficient support for appearance details, they are faithfully preserved by our model (e.g. hats in DeepFashion, see Fig. 8, third row).
The COCO dataset shows large variance both in visual quality (e.g., lighting conditions, resolution, clutter and occlusion) and in appearance. This leads to little overlap of appearance details across different poses, so the model focuses on those aspects of appearance that can be reused for a large variety of poses in the dataset.
We show some failure cases of our approach in Fig. 19. The first row of Fig. 19 shows an example of rare data: children are underrepresented in COCO. A similar problem occurs in Market-1501, where most images are tight crops around a person and only a few show people from afar; this is shown in the second row, which also contains an incorrect estimate for the left leg. Sometimes the estimated pose correlates with another attribute of a dataset (e.g., gender in DeepFashion [21, 23], where male and female models use characteristic yet distinct sets of poses). In this case our model morphs this attribute with the target appearance, e.g., it generates a woman with distinctly male body proportions (see third row of Fig. 19). Under heavy viewpoint changes, appearance can be entirely unrelated, e.g., a front view showing a white t-shirt that is completely covered from the rear view (see fourth row of Fig. 19); the algorithm, however, assumes that the appearance in both views is related. As the example in the last row of Fig. 19 shows, our model is confused when occluded body parts are annotated, since this is not the case for most training samples.
[Fig. 19 layout: columns show reason, original image, shape estimate, and appearance; the first listed failure reason is pose estimation error.]