Controllable Text-to-Image Generation


Abstract

In this paper, we propose a novel controllable text-to-image generative adversarial network (ControlGAN), which can effectively synthesise high-quality images and also control parts of the image generation according to natural language descriptions. To achieve this, we introduce a word-level spatial and channel-wise attention-driven generator that can disentangle different visual attributes, and allow the model to focus on generating and manipulating subregions corresponding to the most relevant words. Also, a word-level discriminator is proposed to provide fine-grained supervisory feedback by correlating words with image regions, facilitating training an effective generator which is able to manipulate specific visual attributes without affecting the generation of other content. Furthermore, perceptual loss is adopted to reduce the randomness involved in the image generation, and to encourage the generator to manipulate specific attributes required in the modified text. Extensive experiments on benchmark datasets demonstrate that our method outperforms existing state of the art, and is able to effectively manipulate synthetic images using natural language descriptions. Code is available at https://github.com/mrlibw/ControlGAN.

1 Introduction

Generating realistic images that semantically match given text descriptions is a challenging problem and has tremendous potential applications, such as image editing, video games, and computer-aided design. Recently, thanks to the success of generative adversarial networks (GANs) Denton et al. (2015); Goodfellow et al. (2014); Radford et al. (2015) in generating realistic images, text-to-image generation has made remarkable progress Reed et al. (2016a); Xu et al. (2018); Zhang et al. (2017a) by implementing conditional GANs (cGANs) Dong et al. (2017); Reed et al. (2016a, b), which are able to generate realistic images conditioned on given text descriptions.

However, current generative networks are typically uncontrollable: if users change some words of a sentence, the synthetic image can differ significantly from the one generated from the original text, as shown in Fig. 1. When the given text description is changed (e.g., a colour), the corresponding visual attributes of the bird are modified, but other, unrelated attributes (e.g., the pose and position) change as well. This is typically undesirable in real-world applications, where a user wants to further modify the synthetic image to satisfy her preferences.

The goal of this paper is to generate images from text, and also allow the user to manipulate synthetic images using natural language descriptions, in one framework. In particular, we focus on modifying visual attributes (e.g., category, texture, and colour) of objects in the generated images by changing given text descriptions. To achieve this, we propose a novel controllable text-to-image generative adversarial network (ControlGAN), which can synthesise high-quality images, and also allow the user to manipulate objects’ attributes, without affecting the generation of other content.

Our ControlGAN contains three novel components. The first component is the word-level spatial and channel-wise attention-driven generator, where an attention mechanism is exploited to allow the generator to synthesise subregions corresponding to the most relevant words. Our generator follows a multi-stage architecture Xu et al. (2018); Zhang et al. (2018a) that synthesises images from coarse to fine and progressively improves their quality. The second component is a word-level discriminator, where the correlation between words and image subregions is explored to disentangle different visual attributes, providing the generator with fine-grained training signals related to visual attributes. The third component is the adoption of the perceptual loss Johnson et al. (2016) in text-to-image generation, which reduces the randomness involved in the generation and encourages the generator to preserve the visual appearance related to the unmodified text.

Finally, we perform an extensive analysis, which demonstrates that our method can effectively disentangle different attributes and accurately manipulate parts of the synthetic image without losing diversity. Moreover, experimental results on the CUB Wah et al. (2011) and COCO Lin et al. (2014) datasets show that our method outperforms the existing state of the art both qualitatively and quantitatively.

[Figure 1 panels. Original text: "This bird has a yellow back and rump, gray outer rectrices, and a light gray breast." Modified text: "This bird has a red back and rump, yellow outer rectrices, and a light white breast." Columns: Text, Zhang et al. (2017a), Xu et al. (2018), Ours, Original.]
Figure 1: Examples of modifying synthetic images using a natural language description. The current state of the art methods generate realistic images, but fail to generate plausible images when we slightly change the text. In contrast, our method allows parts of the image to be manipulated in correspondence to the modified text description while preserving other unrelated content.

2 Related Work

Text-to-image Generation.

Recently, there has been a lot of work on and interest in text-to-image generation. Mansimov et al. Mansimov et al. (2015) proposed the AlignDRAW model, which uses an attention mechanism over the words of a caption to draw image patches in multiple stages. Nguyen et al. Nguyen et al. (2017) introduced an approximate Langevin approach to synthesise images from text. Reed et al. Reed et al. (2016a) first applied the cGAN to generate plausible images conditioned on text descriptions. Zhang et al. Zhang et al. (2017a) decomposed text-to-image generation into several stages that generate images from coarse to fine. However, all of the above approaches mainly focus on generating a new high-quality image from a given text, and do not allow the user to manipulate the generation of specific visual attributes using natural language descriptions.

Image-to-image translation.

Our work is also closely related to conditional image manipulation methods. Cheng et al. Cheng et al. (2014) produced high-quality image parsing results from verbal commands. Zhu et al. Zhu et al. (2016) proposed to change the colour and shape of an object by manipulating latent vectors. Brock et al. Brock et al. (2016) introduced a hybrid model using VAEs Kingma et al. (2014) and GANs, which achieved an accurate reconstruction without loss of image quality. Recently, Nam et al. Nam et al. (2018) built a model for multi-modal learning on both text descriptions and input images, and proposed a text-adaptive discriminator which utilised word-level text-image matching scores as supervision. However, they adopted a global pooling layer to extract image features, which may lose important fine-grained spatial information. Moreover, the above approaches focus only on image-to-image translation instead of text-to-image generation, which is probably more challenging.

Attention.

The attention mechanism has shown its efficiency in various research fields, including image captioning Xu et al. (2015); Zhang et al. (2017b), machine translation Bahdanau et al. (2014), object detection Oliva et al. (2003); Zhang et al. (2018b), and visual question answering Yang et al. (2016). It can effectively capture task-relevant information and reduce the interference from less important information. Recently, Xu et al. Xu et al. (2018) built the AttnGAN model, which uses word-level spatial attention to guide the generator to focus on subregions corresponding to the most relevant words. However, spatial attention only correlates words with partial regions, without taking channel information into account. Also, different channels of features in CNNs may serve different purposes, and it is crucial not to treat all channels identically, so that the most relevant channels in the visual features can be fully exploited.

Figure 2: The architecture of our proposed ControlGAN. In (b), $\mathcal{L}_{\text{corre}}$ is the correlation loss discussed in Sec. 3.3. In (c), $\mathcal{L}_{\text{per}}$ is the perceptual loss discussed in Sec. 3.4.

3 Controllable Generative Adversarial Networks

Given a sentence $S$, we aim to synthesise a realistic image $I'$ that semantically aligns with $S$ (see Fig. 2), and also to make this generation process controllable: if $S$ is modified into $S'$, the synthetic result should semantically match $S'$ while preserving the text-irrelevant content of $I'$ (shown in Fig. 5). To achieve this, we propose three novel components: 1) a channel-wise attention module, 2) a word-level discriminator, and 3) the adoption of the perceptual loss in text-to-image generation. We elaborate our model as follows.

3.1 Architecture

We adopt the multi-stage AttnGAN Xu et al. (2018) as our backbone architecture (see Fig. 2). Given a sentence $S$, the text encoder – a pre-trained bidirectional RNN Xu et al. (2018) – encodes the sentence into a sentence feature $s \in \mathbb{R}^{D}$ describing the whole sentence, and word features $w \in \mathbb{R}^{D \times L}$ with length $L$ (i.e., the number of words) and dimension $D$. Following Zhang et al. (2017a), we also apply conditioning augmentation (CA) to $s$. The augmented sentence feature is further concatenated with a random vector $z$ to serve as the input to the first stage. The overall framework generates an image from coarse to fine scale in multiple stages, and, in each stage, the network produces a hidden visual feature $v_k$, which is the input to the corresponding generator to produce a synthetic image. The spatial attention Xu et al. (2018) and our proposed channel-wise attention modules take $v_k$ and $w$ as inputs, and output attentive word-context features. These attentive features are further concatenated with the hidden feature $v_k$ and then serve as input for the next stage.
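To make this data flow concrete, the following minimal PyTorch sketch shows how the CA-augmented sentence feature and noise produce coarse hidden features, and how a later stage fuses hidden features with attentive word-context features and emits an image. Module names, channel sizes, and the noise dimension are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch of the multi-stage data flow (assumed shapes/modules, not released code).
import torch
import torch.nn as nn
import torch.nn.functional as F

D, L, C, Z = 256, 18, 32, 100   # text dim and #words follow Sec. 4.2; C and Z are assumed

class InitStage(nn.Module):
    """Map the CA-augmented sentence feature and noise to coarse 64x64 hidden features."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(D + Z, C * 64 * 64)   # crude stand-in for the FC + upsampling blocks

    def forward(self, s_ca, z):
        return self.fc(torch.cat([s_ca, z], dim=1)).view(-1, C, 64, 64)

class NextStage(nn.Module):
    """Fuse hidden features with spatial and channel-wise word-context features, upsample 2x."""
    def __init__(self):
        super().__init__()
        self.fuse = nn.Conv2d(3 * C, C, 3, padding=1)
        self.to_img = nn.Conv2d(C, 3, 3, padding=1)   # per-stage generator head

    def forward(self, v, spatial_ctx, channel_ctx):
        h = torch.cat([v, spatial_ctx, channel_ctx], dim=1)
        h = F.relu(self.fuse(F.interpolate(h, scale_factor=2, mode="nearest")))
        return h, torch.tanh(self.to_img(h))

s_ca = torch.randn(4, D)   # CA-augmented sentence feature
z = torch.randn(4, Z)      # random noise vector
v = InitStage()(s_ca, z)   # stage-1 hidden features: (4, C, 64, 64)

# Placeholders for the attentive word-context features produced by the spatial and
# channel-wise attention modules (Secs. 3.1-3.2); both share the hidden feature's shape.
spatial_ctx, channel_ctx = torch.zeros_like(v), torch.zeros_like(v)
v, img_128 = NextStage()(v, spatial_ctx, channel_ctx)   # 128x128 image at the next stage
print(v.shape, img_128.shape)
```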

The generator exploits the attention mechanism via incorporating a spatial attention module Xu et al. (2018) and the proposed channel-wise attention module. The spatial attention module Xu et al. (2018) can only correlate words with individual spatial locations without taking channel information into account. Thus, we introduce a channel-wise attention module (see Sec. 3.2) to exploit the connection between words and channels. We experimentally find that the channel-wise attention module highly correlates semantically meaningful parts with corresponding words, while the spatial attention focuses on colour descriptions (see Fig. 6). Therefore, our proposed channel-wise attention module, together with the spatial attention, can help the generator disentangle different visual attributes, and allow it to focus only on the most relevant subregions and channels.

Figure 3: The architecture of the proposed channel-wise attention module (a) and word-level discriminator (b).

3.2 Channel-Wise Attention

At the $k^{\text{th}}$ stage, the channel-wise attention module (see Fig. 3 (a)) takes two inputs: the word features $w \in \mathbb{R}^{D \times L}$ and the hidden visual features $v_k \in \mathbb{R}^{C \times (H_k W_k)}$, where $H_k$ and $W_k$ denote the height and width of the feature map at stage $k$. The word features are first mapped into the same semantic space as the visual features via a perception layer $F_k \in \mathbb{R}^{(H_k W_k) \times D}$, producing $\tilde{w}_k = F_k\, w$, where $\tilde{w}_k \in \mathbb{R}^{(H_k W_k) \times L}$.

Then, we calculate the channel-wise attention matrix $m_k \in \mathbb{R}^{C \times L}$ by multiplying the converted word features $\tilde{w}_k$ and the visual features $v_k$, denoted as $m_k = v_k\, \tilde{w}_k$. Thus, $m_k$ aggregates correlation values between channels and words across all spatial locations. Next, $m_k$ is normalised by the softmax function to generate the normalised channel-wise attention matrix $\alpha_k$ as

$\alpha_k^{i,j} = \frac{\exp\!\left(m_k^{i,j}\right)}{\sum_{l=0}^{L-1} \exp\!\left(m_k^{i,l}\right)}$   (1)

The attention weight $\alpha_k^{i,j}$ represents the correlation between the $i^{\text{th}}$ channel in the visual features and the $j^{\text{th}}$ word in the sentence $S$; a higher value indicates a stronger correlation.

Equipped with the channel-wise attention matrix $\alpha_k$, we obtain the final channel-wise attention features $f_k^{\alpha} = \alpha_k\, \tilde{w}_k^{\mathrm{T}} \in \mathbb{R}^{C \times (H_k W_k)}$. Each channel in $f_k^{\alpha}$ is a dynamic representation weighted by the correlation between words and the corresponding channel in the visual features. Thus, channels with high correlation values are enhanced, resulting in a high response to the corresponding words, which facilitates disentangling word attributes into different channels, and also reduces the influence of irrelevant channels by assigning them lower correlations.
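Below is a small sketch of the channel-wise attention under the notation of this section, implementing Eq. (1) and the attention features $f_k^{\alpha}$. The tensor shapes and the use of a linear layer as the perception layer $F_k$ are assumptions consistent with the description above, not the released code.

```python
# Sketch of channel-wise attention (Eq. (1)); shapes/names are assumptions, not released code.
import torch
import torch.nn as nn

def channel_wise_attention(v_k, w, F_k):
    """
    v_k : (B, C, H*W)        hidden visual features at stage k
    w   : (B, D, L)          word features from the text encoder
    F_k : nn.Linear(D, H*W)  perception layer mapping words into the spatial semantic space
    returns f_alpha : (B, C, H*W) channel-wise attention features
    """
    # w~_k = F_k w : (B, H*W, L)
    w_tilde = F_k(w.transpose(1, 2)).transpose(1, 2)
    # m_k = v_k w~_k : (B, C, L), correlation between channels and words over all locations
    m = torch.bmm(v_k, w_tilde)
    # Eq. (1): softmax over the word dimension
    alpha = torch.softmax(m, dim=2)
    # f_alpha = alpha w~_k^T : (B, C, H*W), each channel reweighted by its word correlations
    return torch.bmm(alpha, w_tilde.transpose(1, 2))

B, C, D, L, H, W = 2, 32, 256, 18, 16, 16
F_k = nn.Linear(D, H * W)
f_alpha = channel_wise_attention(torch.randn(B, C, H * W), torch.randn(B, D, L), F_k)
print(f_alpha.shape)   # torch.Size([2, 32, 256])
```

The softmax in Eq. (1) is taken over the word dimension, so each channel distributes its attention across the $L$ words of the sentence.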

3.3 Word-Level Discriminator

To encourage the generator to modify only parts of the image according to the text, the discriminator should provide the generator with fine-grained training feedback, which can guide the generation of subregions corresponding to the most relevant words. The text-adaptive discriminator Nam et al. (2018) also explores word-level information in the discriminator, but it adopts a global average pooling layer to produce a 1D vector as the image feature, and then calculates the correlation between the image feature and each word. By doing this, the image feature may lose important spatial information, which provides crucial cues for disentangling different visual attributes. To address this issue, we propose a novel word-level discriminator inspired by Nam et al. (2018) to explore the correlation between image subregions and each word; see Fig. 3 (b).

Our word-level discriminator takes two inputs: 1) word features $w$ and $\bar{w}$ encoded by the text encoder, which follows the same architecture as the one used in the generator (see Fig. 2 (a)), where $w$ and $\bar{w}$ denote word features encoded from the original text and from a randomly sampled mismatched text, respectively, and 2) visual features $v$ and $v'$, encoded by a GoogLeNet-based Szegedy et al. (2015) image encoder from the real image $I$ and the generated image $I'$, respectively.

For simplicity, in the following we use $v \in \mathbb{R}^{C \times (HW)}$ to represent both visual features $v$ and $v'$, and use $w \in \mathbb{R}^{D \times L}$ for both the original and the mismatched word features. The word-level discriminator contains a perception layer $W_v$ that is used to align the channel dimension of the visual features $v$ and the word features $w$, denoted as $\tilde{v} = W_v\, v$, where $W_v \in \mathbb{R}^{D \times C}$ is a weight matrix to learn. Then, the word-context correlation matrix $m \in \mathbb{R}^{(HW) \times L}$ can be derived via $m = \tilde{v}^{\mathrm{T}} w$, and is further normalised by the softmax function to get a correlation matrix $\beta$:

$\beta^{i,j} = \frac{\exp\!\left(m^{i,j}\right)}{\sum_{l=0}^{(HW)-1} \exp\!\left(m^{l,j}\right)}$   (2)

where $\beta^{i,j}$ represents the correlation value between the $j^{\text{th}}$ word and the $i^{\text{th}}$ subregion of the image. Then, the image subregion-aware word features can be obtained by $b = \tilde{v}\, \beta \in \mathbb{R}^{D \times L}$, which aggregates all spatial information weighted by the word-context correlation matrix $\beta$.

Additionally, to further reduce the negative impact of less important words, we adopt the word-level self-attention Nam et al. (2018) to derive a 1D vector $\gamma$ with length $L$ reflecting the relative importance of each word. Then, we repeat $\gamma$ $D$ times to produce $\tilde{\gamma} \in \mathbb{R}^{D \times L}$, which has the same size as $b$. Next, $b$ is further reweighted by $\tilde{\gamma}$ to get $b'$, denoted as $b' = b \odot \tilde{\gamma}$, where $\odot$ represents element-wise multiplication. Finally, we derive the correlation between the $i^{\text{th}}$ word and the whole image as Eq. (3):

$r_i = \sigma\!\left(\frac{(b'_i)^{\mathrm{T}} w_i}{\lVert b'_i \rVert\, \lVert w_i \rVert}\right)$   (3)

where $\sigma(\cdot)$ is the sigmoid function, $r_i$ evaluates the correlation between the $i^{\text{th}}$ word and the image, and $b'_i$ and $w_i$ represent the $i^{\text{th}}$ column of $b'$ and $w$, respectively.

Therefore, the final correlation value between the image $I$ and the sentence $S$ is calculated by summing all word-context correlations, denoted as $c(I, S) = \sum_{i=0}^{L-1} r_i$, which is used to build the correlation loss $\mathcal{L}_{\text{corre}}$. By doing so, the generator can receive fine-grained feedback from the word-level discriminator for each visual attribute, which further helps supervise the generation and manipulation of each subregion independently.
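The sketch below wires Eqs. (2) and (3) together under the notation of this section. The normalised inner product inside the sigmoid follows the reconstruction of Eq. (3) above, and the linear perception layer is an assumption; this is not the released code.

```python
# Sketch of the word-level correlation used by the discriminator (Eqs. (2)-(3)); illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def word_level_correlation(v, w, W_v, gamma):
    """
    v     : (B, C, H*W)      image features from the GoogLeNet-based encoder
    w     : (B, D, L)        word features (original or mismatched text)
    W_v   : nn.Linear(C, D)  perception layer aligning the channel dimension with the words
    gamma : (B, L)           word-level self-attention weights (relative word importance)
    returns: (B,) correlation between each image and its sentence (sum of per-word correlations)
    """
    v_tilde = W_v(v.transpose(1, 2)).transpose(1, 2)   # (B, D, H*W)
    m = torch.bmm(v_tilde.transpose(1, 2), w)          # (B, H*W, L) word-context correlations
    beta = torch.softmax(m, dim=1)                     # Eq. (2): normalise over image subregions
    b = torch.bmm(v_tilde, beta)                       # (B, D, L) subregion-aware word features
    b_prime = b * gamma.unsqueeze(1)                   # reweight by word importance
    # Eq. (3): sigmoid of the normalised inner product between b'_i and w_i for each word i
    r = torch.sigmoid(F.cosine_similarity(b_prime, w, dim=1))   # (B, L)
    return r.sum(dim=1)                                # sum word-context correlations

B, C, D, L, H, W = 2, 512, 256, 18, 8, 8
score = word_level_correlation(torch.randn(B, C, H * W), torch.randn(B, D, L),
                               nn.Linear(C, D), torch.softmax(torch.randn(B, L), dim=1))
print(score.shape)   # torch.Size([2])
```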

3.4 Perceptual Loss

Without adding any constraint on text-irrelevant regions (e.g., backgrounds), the generated results can be highly random, and may also fail to be semantically consistent with other content. To mitigate this randomness, we adopt the perceptual loss Johnson et al. (2016) based on a 16-layer VGG network Simonyan and Zisserman (2014) pre-trained on the ImageNet dataset Russakovsky et al. (2015). The network is used to extract semantic features from both the generated image $I'$ and the real image $I$, and the perceptual loss is defined as

$\mathcal{L}_{\text{per}}(I', I) = \frac{1}{H_i W_i} \left\lVert \phi_i(I') - \phi_i(I) \right\rVert_2^2$   (4)

where $\phi_i(\cdot)$ is the activation of the $i^{\text{th}}$ layer of the VGG network, and $H_i$ and $W_i$ are the height and width of the corresponding feature map, respectively.
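A minimal sketch of Eq. (4) is given below, using a fixed VGG-16 truncated at relu2_2 as stated in Sec. 4.2. The torchvision weights API, the lack of input normalisation, and the averaging over the batch are implementation assumptions.

```python
# Sketch of the perceptual loss in Eq. (4) with a frozen VGG-16 truncated at relu2_2 (assumed set-up).
import torch
import torch.nn as nn
from torchvision import models

class PerceptualLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # features[:9] ends at the ReLU after conv2_2, i.e. relu2_2
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:9]
        for p in vgg.parameters():
            p.requires_grad_(False)    # the VGG is fixed and only extracts features
        self.vgg = vgg.eval()

    def forward(self, fake, real):
        phi_fake, phi_real = self.vgg(fake), self.vgg(real)
        _, _, h, w = phi_fake.shape
        # Eq. (4): squared L2 distance between feature maps, normalised by the map size,
        # averaged over the batch
        return ((phi_fake - phi_real) ** 2).flatten(1).sum(1).div(h * w).mean()

# toy usage (downloads the ImageNet weights on first use)
loss = PerceptualLoss()(torch.rand(2, 3, 256, 256), torch.rand(2, 3, 256, 256))
print(loss)
```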

To our knowledge, we are the first to apply the perceptual loss Johnson et al. (2016) to controllable text-to-image generation, where it reduces the randomness involved in the image generation by matching images in a deep feature space.

3.5 Objective Functions

The generator and the discriminator are trained alternately by minimising the generator loss $\mathcal{L}_G$ and the discriminator loss $\mathcal{L}_D$.

Generator objective.

The generator loss $\mathcal{L}_G$ in Eq. (5) contains an adversarial loss $\mathcal{L}_{G_k}$, a text-image correlation loss $\mathcal{L}_{\text{corre}}$, a perceptual loss $\mathcal{L}_{\text{per}}$, and a text-image matching loss $\mathcal{L}_{\text{DAMSM}}$ Xu et al. (2018).

$\mathcal{L}_G = \sum_{k=1}^{K} \mathcal{L}_{G_k} + \lambda_1\, \mathcal{L}_{\text{corre}} + \lambda_2\, \mathcal{L}_{\text{per}} + \lambda_3\, \mathcal{L}_{\text{DAMSM}}$   (5)

where $K$ is the number of stages, $I_k$ is the real image sampled from the true image distribution $p_{\text{data}_k}$ at stage $k$, $I'_k$ is the generated image at the $k^{\text{th}}$ stage sampled from the model distribution $p_{G_k}$, $\lambda_1$, $\lambda_2$, and $\lambda_3$ are hyper-parameters controlling the different losses, $\mathcal{L}_{\text{per}}$ is the perceptual loss described in Sec. 3.4, which constrains the generation process to reduce the randomness, $\mathcal{L}_{\text{DAMSM}}$ Xu et al. (2018) measures the text-image matching score based on cosine similarity, and $\mathcal{L}_{\text{corre}}$ reflects the correlation between the generated image and the given text description, taking spatial information into account.

The adversarial loss $\mathcal{L}_{G_k}$ is composed of the unconditional and conditional adversarial losses shown in Eq. (6): the unconditional adversarial loss pushes the synthetic image towards the real image distribution, and the conditional adversarial loss encourages the generated image to match the given text $S$.

$\mathcal{L}_{G_k} = -\tfrac{1}{2}\, \mathbb{E}_{I'_k \sim p_{G_k}}\!\left[\log D_k(I'_k)\right] - \tfrac{1}{2}\, \mathbb{E}_{I'_k \sim p_{G_k}}\!\left[\log D_k(I'_k, S)\right]$   (6)
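The following sketch shows one way the per-stage adversarial terms of Eq. (6) and the weighted terms of Eq. (5) could be accumulated. The discriminators and loss functions are stubs, the weight names mirror $\lambda_1$-$\lambda_3$ above, and the choice to apply the correlation, perceptual, and DAMSM terms to the final-stage image is an assumption; this is a structural illustration, not the released training code.

```python
# Sketch of the generator objective (Eqs. (5)-(6)); callables are stubs, structure only.
import torch

def generator_loss(d_uncond, d_cond, fake_imgs, real_imgs, sentence,
                   corre_fn, per_fn, damsm_fn, lambdas=(1.0, 1.0, 1.0)):
    """
    d_uncond[k](img)      -> unconditional discriminator output (probability in (0, 1))
    d_cond[k](img, s)     -> conditional discriminator output
    fake_imgs / real_imgs -> lists of per-stage images I'_k / I_k
    """
    lam1, lam2, lam3 = lambdas
    loss = 0.0
    for k, fake in enumerate(fake_imgs):
        # Eq. (6): unconditional + conditional adversarial terms for stage k
        loss = loss - 0.5 * torch.log(d_uncond[k](fake)).mean() \
                    - 0.5 * torch.log(d_cond[k](fake, sentence)).mean()
    # Eq. (5): weighted correlation, perceptual, and DAMSM terms (applied here to the
    # final-stage image; where exactly each term is applied is an assumption)
    loss = loss + lam1 * corre_fn(fake_imgs[-1], sentence)
    loss = loss + lam2 * per_fn(fake_imgs[-1], real_imgs[-1])
    loss = loss + lam3 * damsm_fn(fake_imgs[-1], sentence)
    return loss

# toy usage with stubs, just to check the wiring
fake = [torch.rand(2, 3, s, s) for s in (64, 128, 256)]
real = [torch.rand(2, 3, s, s) for s in (64, 128, 256)]
stub_p = lambda *a: torch.full((2,), 0.7)   # stand-in discriminator "probabilities"
stub_l = lambda *a: torch.tensor(0.1)       # stand-in scalar losses
print(generator_loss([stub_p] * 3, [stub_p] * 3, fake, real, None, stub_l, stub_l, stub_l))
```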

Discriminator objective.

The final loss function $\mathcal{L}_D$ for training the discriminator is defined as:

$\mathcal{L}_D = \sum_{k=1}^{K} \mathcal{L}_{D_k} + \lambda_4\, \mathcal{L}_{\text{corre}}(I, S, \bar{S})$   (7)

where $\mathcal{L}_{\text{corre}}$ is the correlation loss determining whether word-related visual attributes exist in the image (see Sec. 3.3), $\bar{S}$ is a mismatched text description that is randomly sampled from the dataset and is irrelevant to $I$, and $\lambda_4$ is a hyper-parameter controlling the importance of the additional loss.

The adversarial loss $\mathcal{L}_{D_k}$ contains two components: the unconditional adversarial loss determines whether the image is real, and the conditional adversarial loss determines whether the given image matches the text description $S$:

$\mathcal{L}_{D_k} = -\tfrac{1}{2}\, \mathbb{E}_{I_k \sim p_{\text{data}_k}}\!\left[\log D_k(I_k)\right] - \tfrac{1}{2}\, \mathbb{E}_{I'_k \sim p_{G_k}}\!\left[\log\!\left(1 - D_k(I'_k)\right)\right] - \tfrac{1}{2}\, \mathbb{E}_{I_k \sim p_{\text{data}_k}}\!\left[\log D_k(I_k, S)\right] - \tfrac{1}{2}\, \mathbb{E}_{I'_k \sim p_{G_k}}\!\left[\log\!\left(1 - D_k(I'_k, S)\right)\right]$   (8)
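Below is a matching sketch of the per-stage discriminator objective in Eq. (8), combined with the correlation term of Eq. (7). The stub discriminators and the scalar stand-in for $\mathcal{L}_{\text{corre}}$ are placeholders for illustration only.

```python
# Sketch of the discriminator objective (Eqs. (7)-(8)); stubs stand in for the real networks.
import torch

def discriminator_stage_loss(d_uncond, d_cond, real, fake, sentence, eps=1e-8):
    """Eq. (8): unconditional and conditional adversarial terms for one stage."""
    loss_uncond = -0.5 * torch.log(d_uncond(real) + eps).mean() \
                  - 0.5 * torch.log(1 - d_uncond(fake) + eps).mean()
    loss_cond = -0.5 * torch.log(d_cond(real, sentence) + eps).mean() \
                - 0.5 * torch.log(1 - d_cond(fake, sentence) + eps).mean()
    return loss_uncond + loss_cond

# toy usage: stub discriminators returning probabilities in (0, 1)
real, fake = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
stub = lambda *a: torch.full((2,), 0.6)
l_corre = torch.tensor(0.2)   # stand-in for the word-level correlation loss of Eq. (7)
lambda_4 = 1.0
total = discriminator_stage_loss(stub, stub, real, fake, None) + lambda_4 * l_corre
print(total)
```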

4 Experiments

To evaluate the effectiveness of our approach, we conduct extensive experiments on the CUB bird (Wah et al., 2011) and MS COCO (Lin et al., 2014) datasets. We compare with two state-of-the-art GAN methods for text-to-image generation, StackGAN++ Zhang et al. (2018a) and AttnGAN Xu et al. (2018). Results for the state of the art are reproduced using the code released by the authors.

4.1 Datasets

Our method is evaluated on the CUB bird (Wah et al., 2011) and the MS COCO (Lin et al., 2014) datasets. The CUB dataset contains 8,855 training images and 2,933 test images, and each image has 10 corresponding text descriptions. As for the COCO dataset, it contains 82,783 training images and 40,504 validation images, and each image has 5 corresponding text descriptions. We preprocess these two datasets based on the methods introduced in Zhang et al. (2017a).

4.2 Implementation

There are three stages ($K = 3$) in our ControlGAN generator, following Xu et al. (2018). The three scales are $64 \times 64$, $128 \times 128$, and $256 \times 256$, and the spatial and channel-wise attention modules are applied at stages 2 and 3. The text encoder is a pre-trained bidirectional LSTM Schuster and Paliwal (1997) that encodes the given text description into a sentence feature with dimension 256 and word features with length 18 and dimension 256. In the perceptual loss, we compute the content loss at layer relu2_2 of VGG-16 Simonyan and Zisserman (2014) pre-trained on ImageNet Russakovsky et al. (2015). The whole network is trained using the Adam optimiser Kingma and Ba (2014) with a learning rate of 0.0002. The hyper-parameters $\lambda_1$, $\lambda_2$, $\lambda_3$, and $\lambda_4$ are set to 0.5, 1, 1, and 5, respectively, for both datasets.
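For reference, a small configuration sketch matching the settings above (Adam with learning rate 0.0002 and the four loss weights); the stand-in modules and the Adam beta values are assumptions, not the released training script.

```python
# Configuration sketch (assumed module stand-ins and betas; lr and loss weights from Sec. 4.2).
import torch
import torch.nn as nn

netG = nn.Sequential(nn.Linear(10, 10))   # stand-in for the multi-stage generator
netD = nn.Sequential(nn.Linear(10, 10))   # stand-in for the discriminators
optG = torch.optim.Adam(netG.parameters(), lr=2e-4, betas=(0.5, 0.999))   # betas assumed
optD = torch.optim.Adam(netD.parameters(), lr=2e-4, betas=(0.5, 0.999))

# loss weights lambda_1 .. lambda_4 from Sec. 4.2
lambda_1, lambda_2, lambda_3, lambda_4 = 0.5, 1.0, 1.0, 5.0
```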

4.3 Comparison with State of the Art

Quantitative results.

We adopt the Inception Score Salimans et al. (2016) to evaluate the quality and diversity of the generated images. However, as the Inception Score cannot reflect the relevance between an image and a text description, we utilise R-precision Xu et al. (2018) to measure the correlation between a generated image and its corresponding text. We compare the top-1 text-to-image retrieval accuracy (Top-1 Acc) on the CUB and COCO datasets following Nam et al. (2018).

Quantitative results are shown in Table 1: our method achieves better IS and R-precision values on the CUB dataset than the state of the art, and competitive performance on the COCO dataset. This indicates that our method can generate higher-quality images with better diversity, which semantically align with the text descriptions.

To further evaluate whether the model can generate controllable results, we compute the reconstruction error Nam et al. (2018) between the image generated from the original text and the one generated from the modified text, shown in Table 1. Compared with the other methods, ControlGAN achieves a significantly lower reconstruction error, which demonstrates that our method better preserves the content of the image generated from the original text.
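As a rough illustration, the reconstruction error can be thought of as a pixel-level distance between the two generated images; the simple per-pixel L1 form below is an assumption for illustration (the paper follows the metric of Nam et al. (2018)).

```python
# Illustrative reconstruction error between images from original vs. modified text (assumed L1 form).
import torch

def reconstruction_error(img_original_text, img_modified_text):
    """Mean absolute pixel difference between the two generated images."""
    return (img_original_text - img_modified_text).abs().mean()

print(reconstruction_error(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)))
```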

Qualitative results.

We show qualitative results in Fig. 4. As can be seen, when the given text descriptions are modified, our approach accurately manipulates the corresponding visual attributes. Also, our method can even handle out-of-distribution queries, e.g., a red zebra on a river, shown in the last two columns of Fig. 4. All of the above indicates that our approach can manipulate different visual attributes independently, which demonstrates its effectiveness in disentangling visual attributes for text-to-image generation.

Fig. 5 shows the visual comparison between ControlGAN, AttnGAN Xu et al. (2018), and StackGAN++ Zhang et al. (2018a). It can be observed that when the text is modified, the two compared approaches are more likely to generate new content, or to change visual attributes that are not relevant to the modified text. For instance, as shown in the first two columns, when we modify the colour attributes, StackGAN++ changes the pose of the bird, and AttnGAN generates a new background. In contrast, our approach accurately manipulates the parts of the image corresponding to the modified text, while preserving the visual attributes related to the unchanged text.

On the COCO dataset, our model again achieves much better results than the compared methods, as shown in Fig. 5. For example, as shown in the last four columns, the compared approaches cannot preserve the shape of objects and even fail to generate reasonable images. Generally speaking, the results on COCO are not as good as those on the CUB dataset. We attribute this to the smaller number of text-image pairs per category and the more abstract captions in the dataset. Although there are many categories in COCO, each category has only a small number of examples, and the captions focus mainly on the category of objects rather than on detailed descriptions, which makes text-to-image generation more challenging.


| Method     | IS (CUB)   | Top-1 Acc % (CUB) | error (CUB) | IS (COCO)   | Top-1 Acc % (COCO) | error (COCO) |
|------------|------------|-------------------|-------------|-------------|--------------------|--------------|
| StackGAN++ | 4.04 ± .05 | 45.28 ± 3.72      | 0.29        | 8.30 ± .10  | 72.83 ± 3.17       | 0.32         |
| AttnGAN    | 4.36 ± .03 | 67.82 ± 4.43      | 0.26        | 25.89 ± .47 | 85.47 ± 3.69       | 0.40         |
| Ours       | 4.58 ± .09 | 69.33 ± 3.23      | 0.18        | 24.06 ± .60 | 82.43 ± 2.43       | 0.17         |

Table 1: Quantitative comparison: Inception Score (IS), R-precision (Top-1 Acc), and reconstruction error of the state of the art and ControlGAN on the CUB and COCO datasets.
[Figure 4 column captions (original / modified): "This bird is yellow with black and has a very short beak." / "This bird is orange with grey and has a very short beak."; "The small bird has a dark brown head and light brown body." / "The small bird has a dark tan head and light grey body."; "A large group of cows on a farm." / "A large group of white cows on a farm."; "A crowd of people fly kites on a hill." / "A crowd of people fly kites on a park."; "A group of zebras on a grassy field with trees in background." / "A group of red zebras on a river with trees in background."]
Figure 4: Qualitative results on the CUB and COCO datasets. Odd-numbered columns show the original text and even-numbered ones the modified text. The last two are an out-of-distribution case.

[Figure 5 rows: Input, StackGAN++ Zhang et al. (2018a), AttnGAN Xu et al. (2018), Ours. Column captions (original / modified): "This bird has a white neck and breast with a turquoise crown and feathers a small short beak." / "This bird has a grey neck and breast with a blue crown and feathers a small short beak."; "This bird has wings that are yellow and has a brown body." / "This bird has wings that are black and has a red body."; "A giraffe is standing on the dirt." / "A giraffe is standing on the dirt in an enclosure."; "A zebra stands on a pathway near grass." / "A sheep stands on a pathway near grass."]
Figure 5: Qualitative comparison of three methods on the CUB and COCO datasets. Odd-numbered columns show the original text and even-numbered ones the modified text.
Figure 6: Top: visualisation of feature channels at stage 3. The number at the top-right corner is the channel number, and the word that has the highest correlation in Eq. 1 with the channel is shown under the image. Bottom: spatial attention produced in stage 3.

4.4 Component Analysis

Effectiveness of channel-wise attention.

Our model incorporates channel-wise attention in the generator, together with the spatial attention, to generate realistic images. To better understand the effectiveness of these attention mechanisms, we visualise the intermediate results and the corresponding attention maps at different stages.

We experimentally find that the channel-wise attention correlates closely with semantic parts of objects, while the spatial attention focuses mainly on colour descriptions. Fig. 6 shows several channels of feature maps that correlate with different semantics, and our channel-wise attention module assigns large correlation values to channels that are semantically related to words describing parts of a bird. This phenomenon is further verified by the ablation study shown in Fig. 7 (left side). Without channel-wise attention, our model fails to generate controllable results when we modify text related to parts of a bird. In contrast, our model with channel-wise attention generates better controllable results.

Effectiveness of word-level discriminator.

To verify the effectiveness of the word-level discriminator, we first conduct an ablation study in which our model is trained without the word-level discriminator, shown in Fig. 7 (right side), and then we construct a baseline model by replacing our discriminator with the text-adaptive discriminator Nam et al. (2018), which also explores the correlation between image features and words. Visual comparisons are shown in Fig. 8 (right side). We can easily observe that the compared baseline fails to manipulate the synthetic images. For example, as shown in the first two columns, the bird generated from the modified text has a totally different shape, and the background has been changed as well. This is due to the fact that the text-adaptive discriminator Nam et al. (2018) uses a global pooling layer to extract image features, which may lose important spatial information.

Effectiveness of perceptual loss.

Furthermore, we conduct an ablation study in which our model is trained without the perceptual loss, shown in Fig. 8 (left side). Without the perceptual loss, images generated from the modified text fail to preserve content related to the unmodified text, which indicates that the perceptual loss can impose a stricter semantic constraint on the image generation and help reduce the randomness involved.

[Figure 7 rows (left): Input, Ours without channel-wise attention, Ours; rows (right): Input, Ours without word-level discriminator, Ours. Column captions (original / modified): "This yellow bird has grey and white wings and a red head." / "... a red belly."; "The bird is small and round with white belly and blue wings." / "... white head and blue wings."; "This bird's wing bar is brown and yellow, and its belly is yellow." / "... its belly is white."; "A small bird with a brown colouring and white belly." / "... and white head."]
Figure 7: Left: ablation study of channel-wise attention; right: ablation study of the word-level discriminator.

[Figure 8 rows (left): Input, Ours without perceptual loss, Ours; rows (right): Input, Ours with text-adaptive discriminator, Ours. Column captions (original / modified): "The bird is small with a pointed bill, has black eyes, and a yellow crown." / "... an orange crown."; "A bird with a white belly and metallic blue wings with a small beak." / "A bird with a white head ..."; "A songbird is white with blue feathers and black eyes." / "A songbird is yellow with blue and green feathers and black eyes."; "A tiny bird, with green flank, white belly, yellow crown, and a pointy bill." / "A tiny bird, with green flank, grey belly, blue crown, and a pointy bill."]
Figure 8: Left: ablation study of the perceptual loss Johnson et al. (2016); right: comparison between our word-level discriminator and text-adaptive discriminator Nam et al. (2018).

5 Conclusion

We have proposed a controllable generative adversarial network (ControlGAN), which can generate images from text and manipulate the generation based on natural language descriptions. Our ControlGAN can successfully disentangle different visual attributes and allow parts of the synthetic image to be manipulated accurately, while preserving the generation of other content. Three novel components are introduced in our model: 1) the word-level spatial and channel-wise attention-driven generator effectively disentangles different visual attributes, 2) the word-level discriminator provides the generator with fine-grained training signals related to each visual attribute, and 3) the adoption of the perceptual loss reduces the randomness involved in the generation and encourages the generator to preserve content related to the unmodified text. Extensive experimental results demonstrate the effectiveness and superiority of our method on two benchmark datasets.

References

  1. D. Bahdanau, K. Cho and Y. Bengio (2014) Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
  2. A. Brock, T. Lim, J. M. Ritchie and N. Weston (2016) Neural photo editing with introspective adversarial networks. arXiv preprint arXiv:1609.07093.
  3. M. Cheng, S. Zheng, W. Lin, V. Vineet, P. Sturgess, N. Crook, N. J. Mitra and P. Torr (2014) ImageSpirit: verbal guided image parsing. ACM Transactions on Graphics (TOG) 34 (1), pp. 3.
  4. E. L. Denton, S. Chintala, A. Szlam and R. Fergus (2015) Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pp. 1486–1494.
  5. H. Dong, S. Yu, C. Wu and Y. Guo (2017) Semantic image synthesis via adversarial learning. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5706–5714.
  6. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680.
  7. J. Johnson, A. Alahi and L. Fei-Fei (2016) Perceptual losses for real-time style transfer and super-resolution. In Proceedings of European Conference on Computer Vision, pp. 694–711.
  8. D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  9. D. P. Kingma, S. Mohamed, D. J. Rezende and M. Welling (2014) Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pp. 3581–3589.
  10. T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár and C. L. Zitnick (2014) Microsoft COCO: common objects in context. In Proceedings of European Conference on Computer Vision, pp. 740–755.
  11. E. Mansimov, E. Parisotto, J. L. Ba and R. Salakhutdinov (2015) Generating images from captions with attention. arXiv preprint arXiv:1511.02793.
  12. S. Nam, Y. Kim and S. J. Kim (2018) Text-adaptive generative adversarial networks: manipulating images with natural language. In Advances in Neural Information Processing Systems, pp. 42–51.
  13. A. Nguyen, J. Clune, Y. Bengio, A. Dosovitskiy and J. Yosinski (2017) Plug & play generative networks: conditional iterative generation of images in latent space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4467–4477.
  14. A. Oliva, A. Torralba, M. S. Castelhano and J. M. Henderson (2003) Top-down control of visual attention in object detection. In Proceedings of International Conference on Image Processing, Vol. 1, pp. 253–256.
  15. A. Radford, L. Metz and S. Chintala (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.
  16. S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele and H. Lee (2016) Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396.
  17. S. E. Reed, Z. Akata, S. Mohan, S. Tenka, B. Schiele and H. Lee (2016) Learning what and where to draw. In Advances in Neural Information Processing Systems, pp. 217–225.
  18. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg and L. Fei-Fei (2015) ImageNet large scale visual recognition challenge. International Journal of Computer Vision 115 (3), pp. 211–252.
  19. T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford and X. Chen (2016) Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pp. 2234–2242.
  20. M. Schuster and K. K. Paliwal (1997) Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45 (11), pp. 2673–2681.
  21. K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
  22. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke and A. Rabinovich (2015) Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9.
  23. C. Wah, S. Branson, P. Welinder, P. Perona and S. Belongie (2011) The Caltech-UCSD Birds-200-2011 dataset.
  24. K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel and Y. Bengio (2015) Show, attend and tell: neural image caption generation with visual attention. In International Conference on Machine Learning, pp. 2048–2057.
  25. T. Xu, P. Zhang, Q. Huang, H. Zhang, Z. Gan, X. Huang and X. He (2018) AttnGAN: fine-grained text to image generation with attentional generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1316–1324.
  26. Z. Yang, X. He, J. Gao, L. Deng and A. Smola (2016) Stacked attention networks for image question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 21–29.
  27. H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang and D. N. Metaxas (2017) StackGAN: text to photo-realistic image synthesis with stacked generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5907–5915.
  28. H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang and D. N. Metaxas (2018) StackGAN++: realistic image synthesis with stacked generative adversarial networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 41 (8), pp. 1947–1962.
  29. X. Zhang, T. Wang, J. Qi, H. Lu and G. Wang (2018) Progressive attention guided recurrent network for salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 714–722.
  30. Z. Zhang, Y. Xie, F. Xing, M. McGough and L. Yang (2017) MDNet: a semantically and visually interpretable medical image diagnosis network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6428–6436.
  31. J. Zhu, P. Krähenbühl, E. Shechtman and A. A. Efros (2016) Generative visual manipulation on the natural image manifold. In Proceedings of European Conference on Computer Vision, pp. 597–613.