GANwriting: Content-Conditioned Generation of Styled Handwritten Word Images


Although current image generation methods have reached impressive quality levels, they are still unable to produce plausible yet diverse images of handwritten words. In contrast, when writing by hand, great variability is observed across different writers, and even when analyzing words scribbled by the same individual, involuntary variations are conspicuous. In this work, we take a step closer to producing realistic and varied artificially rendered handwritten words. We propose a novel method that produces credible handwritten word images by conditioning the generative process on both calligraphic style features and textual content. Our generator is guided by three complementary learning objectives: to produce realistic images, to imitate a certain handwriting style and to convey a specific textual content. Our model is not constrained to any predefined vocabulary, being able to render any input word. Given a sample writer, it is also able to mimic their calligraphic features in a few-shot setup. We significantly advance over prior art and demonstrate with qualitative, quantitative and human-based evaluations the realistic aspect of our synthetically produced images.

Keywords: Generative adversarial networks, style and content conditioning, handwritten word images.

1 Introduction

In just a few years since the conception of Generative Adversarial Networks (GANs) [1], we have witnessed an impressive progress on generating illusory plausible images. From the early low-resolution and hazy results, the quality of the artificially generated images has been notably enhanced. We are now able to synthetically produce high-resolution [2] artificial images that are indiscernible from real ones to the human observer [3].

In the original GAN architecture, inputs were randomly sampled from a latent space, so that it was hard to control which kind of images were being generated. The conception of conditional Generative Adversarial Networks (cGANs) [4] led to an important improvement. By allowing to condition the generative process on an input class label, the networks were then able to produce images from different given types [5]. However, such classes had to be predefined beforehand during the training stage and thus, it was impossible to produce images from other unseen classes during inference.

Figure 1: Turing’s test. Just five of the above words are real. Try to distinguish them from the artificially generated samples.¹

But generative networks have not exclusively been used to produce synthetic images. The generation of data that is sequential in nature has also been largely explored in the literature. Generative methods have been proposed to produce audio signals [6], natural language excerpts [7], video streams [8] or stroke sequences [9, 10, 11, 12] able to trace sketches, drawings or handwritten text. In all of those approaches, in order to generate sequential data, the use of Recurrent Neural Networks (RNNs) has been adopted.

Yet, for the specific case of generating handwritten text, one could also envisage the option of directly producing the final images instead of generating the stroke sequences needed to pencil a particular word. Such a non-recurrent approach presents several benefits. First, the training procedure is more efficient, since recurrences are avoided and the inherent parallelism of convolutional networks is leveraged. Second, since the output is generated all at once, we avoid both the difficulty of learning long-range dependencies and vanishing gradient problems. Finally, online training data (pen-tip location sequences), which is hard to obtain, is no longer needed.

Nevertheless, the different attempts to directly generate raw word images present an important drawback. As with cGANs, most of the proposed approaches are only able to condition the word image generation on a predefined set of words, limiting their practical use. For example, [13] is specifically designed to generate isolated digits, while [14] is restricted to a handful of Chinese characters. To the best of our knowledge, the only exception is the approach by Alonso et al. [15]. In their work they propose a non-recurrent generative architecture conditioned on input content strings. With such a design, the generative process is not restricted to a particular predefined vocabulary and could potentially generate any word. However, the produced results are not realistic, still exhibiting poor quality and sometimes producing barely legible word images. Their approach also suffers from the mode collapse problem, tending to produce images with a unique writing style. In this paper we present a non-recurrent generative architecture conditioned on textual content sequences that is specially tailored to produce realistic handwritten word images, indistinguishable to humans. Real and generated images are indeed difficult to tell apart, as shown in Figure 1. In order to produce diverse styled word images, we propose to condition the generative process not only on textual content, but also on a specific writing style, defined by a latent set of calligraphic attributes.

Therefore, our approach is able to artificially render realistic handwritten word images that match a certain textual content and that mimic style features (text skew, slant, roundness, stroke width, ligatures, etc.) from an exemplar writer. To this end, we guide the learning process with three different learning objectives [16]. First, an adversarial discriminator ensures that the images are realistic and that their visual appearance is as close as possible to real handwritten word images. Second, a style classifier guarantees that the provided calligraphic attributes, characterizing a particular handwriting style, are properly transferred to the generated word instances. Finally, a state-of-the-art sequence-to-sequence handwritten word recognizer [17] verifies that the textual content has been properly conveyed during image generation. To summarize, the main contributions of the paper are the following:

  • Our model conditions the handwritten word generative process both with calligraphic style features and textual content, producing varied samples indistinguishable by humans, surpassing the quality of the current state-of-the-art approaches.

  • We introduce the use of three complementary learning objectives to guide different aspects of the generative process.

  • We propose a character-based content conditioning that allows generating any word, without being restricted to a specific vocabulary.

  • We put forward a few-shot calligraphic style conditioning to avoid the mode collapse problem.

2 Related Work

The generation of realistic synthetic handwritten word images is a challenging task. To this day, the most convincing approaches have involved expensive manual intervention aimed at clipping individual characters or glyphs [18, 19, 20, 21, 22]. When such approaches were combined with appropriate rendering techniques including ligatures among strokes, textures and background blending, the obtained results were indeed impressive. Haines et al. [22] illustrated how such approaches could artificially generate indistinguishable manuscript excerpts as if they were written by Sir Arthur Conan Doyle, Abraham Lincoln or Frida Kahlo. Of course, such manual intervention is extremely expensive, and in order to produce large volumes of manufactured images, the use of TrueType electronic fonts has also been explored [23, 24]. Although such approaches benefit from greater scalability, the realism of the generated images clearly deteriorates.

With the advent of deep learning, the generation of handwritten text was approached differently. As shown in the seminal work by Alex Graves [9], given a reasonable amount of training data, an RNN could learn meaningful latent spaces that encode realistic writing styles and their variations, and then generate stroke sequences that trace a certain text string. However, such sequential approaches [9, 10, 11, 12] need temporal data, obtained by recording with a digital stylus pen real handwritten samples, stroke-by-stroke, in vector form.

Contrary to sequential approaches, non-recurrent generative methods have been proposed to directly produce images. Both variational auto-encoders [25] and GANs [1] were able to learn the MNIST manifold and generate artificial handwritten digit images in the original publications. With the emergence of cGANs [4], able to condition the generative process on an input image rather than a random noise vector, the adversarial-guided image-to-image translation problem started to rise. Image-to-image translation has since been applied to many different style transfer applications, as demonstrated in [26] with the pix2pix network. Since then, image translation approaches have been acquiring the ability to disentangle style attributes from the contents of the input images, producing better style transfer results [27, 28].

Regarding the generation of handwritten textual content, such approaches have mainly been used for the synthesis of Chinese ideograms [29, 30, 14, 31]. However, they are restricted to a predefined set of content classes. The inability to generate out-of-vocabulary (OOV) text limits their practical application. An exception is the work of Alonso et al. [15], where the generation of handwritten word samples is conditioned on character sequences. However, their proposal suffers from the mode collapse problem, hindering the diversity of the generated images. Techniques like FUNIT [32], able to transfer unseen target styles to the generated content images, could alleviate this limitation. In particular, the use of Adaptive Instance Normalization (AdaIN) layers, proposed in [33], allows aligning textual content and style attributes within the generative process.

In summary, state-of-the-art generative methods are still unable to automatically produce plausible yet diverse images of arbitrary handwritten words. In this paper we propose to condition a generative model for handwritten words on unconstrained text sequences and stylistic typographic attributes, so that we are able to generate any word with great diversity over the produced results.

3 Conditioned Handwritten Generation

3.1 Problem Formulation

Let $\mathcal{X} = \{(x_i, t_i, w_i)\}_{i=1}^{N}$ be a multi-writer handwritten word dataset, containing grayscale word images $x_i$, their corresponding transcription strings $t_i$ and their writer identifiers $w_i \in \mathcal{W}$. Let $X_w \subset \mathcal{X}$ be a subset of $K$ randomly sampled handwritten word images from the same given writer $w$. Let $\mathcal{A}$ be the alphabet containing the allowed characters (letters, digits, punctuation signs, etc.), and $\mathcal{A}^{\leq L}$ the set of all possible text strings of length at most $L$. Given a set of images $X_w$ as a few-shot example of the calligraphic style attributes of writer $w$ on the one hand, and a textual content provided by any text string $t \in \mathcal{A}^{\leq L}$ on the other hand, the proposed generative model $H$ combines both sources of information. Its objective is to yield a handwritten word image $\bar{x}$ whose textual content equals $t$ and which shares calligraphic style attributes with writer $w$. Following this formulation, the generative model is defined as

$$\bar{x} = H(t, X_w),$$

where $\bar{x}$ is the artificially generated handwritten word image with the desired properties. Moreover, we denote by $\bar{\mathcal{X}}$ the output distribution of the generative network $H$.

The proposed architecture is divided into two main components. The generative network produces human-readable images conditioned on the combination of calligraphic style and textual content information. The second component is the set of learning objectives that guide the generative process towards producing images that look realistic, exhibit particular calligraphic style attributes and convey a specific textual content. Figure 2 gives an overview of our model.

Figure 2: Architecture of the proposed handwriting generation model.

3.2 Generative Network

The proposed generative architecture consists of a calligraphic style encoder $S$, a textual content encoder $C$ and a conditioned image generator $G$. The overall calligraphic style of the input images is disentangled from their individual textual contents, whereas the input string provides the desired content.

Calligraphic style encoding. Given the set $X_w$ of word images from the same writer $w$, the style encoder $S$ aims at extracting the calligraphic style attributes, i.e. slant, glyph shapes, stroke width, character roundness, ligatures, etc., from the provided input samples. Specifically, our proposed network learns a style latent space mapping, in which the obtained style representations are disentangled from the actual textual contents of the images in $X_w$. The VGG-19-BN [34] architecture is used as the backbone of $S$. In order to process the input image set $X_w$, all the images are resized to the same height, padded to a maximum width and concatenated channel-wise to end up with a single tensor. If we ask a human to write the same word several times, slight involuntary variations appear. In order to imitate this phenomenon, randomly chosen permutations of the subset $X_w$ already produce such characteristic fluctuations. In addition, additive noise is applied to the output latent space to obtain a subtly distorted feature representation $F_s$.
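The assembly of the style input can be sketched as follows. This is a minimal NumPy sketch under assumed sizes: the 64 px height, 216 px maximum width, noise magnitude and the `prepare_style_input` helper are illustrative choices, not values taken from the paper, and the noise is applied to the padded images here merely to illustrate the distortion step.

```python
import numpy as np

def prepare_style_input(images, height=64, max_width=216, noise_std=0.05, seed=0):
    """Stack K grayscale word images of one writer into a single style tensor.

    Each image is assumed already resized to `height`; we right-pad it to
    `max_width` on a white background, randomly permute the set (imitating
    the involuntary variations of rewriting a word) and add Gaussian noise
    as a stand-in for the perturbation of the latent style features.
    """
    rng = np.random.default_rng(seed)
    padded = []
    for img in images:
        h, w = img.shape
        assert h == height and w <= max_width
        canvas = np.ones((height, max_width), dtype=np.float32)  # white bg
        canvas[:, :w] = img
        padded.append(canvas)
    order = rng.permutation(len(padded))                 # shuffle the K samples
    stacked = np.stack([padded[i] for i in order], axis=0)  # (K, H, W)
    return stacked + rng.normal(0.0, noise_std, stacked.shape)
```

The channel-wise stacking lets a standard convolutional backbone consume the whole few-shot set in one forward pass.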

Textual content encoding. The textual content network $C$ is devoted to producing an encoding of the given text string $t$ that we want to artificially write. The proposed architecture outputs content features at two different levels. Low-level features encode the different characters that form a word and their spatial position within the string. A subsequent broader representation aims at guiding the whole word consistency. Formally, let $t = (c_1, \dots, c_L)$ be the input text string; character sequences shorter than $L$ are padded with an empty symbol. Let us define a character-wise embedding function $e\colon \mathcal{A} \to \mathbb{R}^{n}$. The first step of the content encoding stage embeds each character $c_i$, represented by a one-hot vector, into a character-wise latent space with a linear layer. Then, the architecture is divided into two branches.

Character-wise encoding: Let $g_1$ be a Multi-Layer Perceptron (MLP). Each embedded character is processed individually by $g_1$ and the results are then stacked together. In order to combine this representation with the style features, we have to ensure that the content feature map meets the shape of $F_s$. Each character embedding is repeated multiple times horizontally to coarsely align the content features with the visual ones extracted from the style network, and the resulting tensor is finally expanded vertically. The two feature representations are concatenated and fed to the generator $G$. Such a character-wise encoding enables the network to produce OOV words, i.e. words that have never been seen during training.

Global string encoding: Let $g_2$ be another MLP aimed at obtaining a much broader, global string representation. The character embeddings are concatenated into one large one-dimensional vector that is then processed by $g_2$. This global representation vector is later injected into the generator, split into pairs of parameters.

Both functions $g_1$ and $g_2$ consist of three fully-connected layers with ReLU activation functions and batch normalization [35].
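The two content branches can be illustrated with a small NumPy sketch. All sizes (`MAX_LEN`, `EMB_DIM`, the repetition factors) are assumptions for illustration, and `tanh` stands in for the actual three-layer MLPs $g_1$ and $g_2$:

```python
import numpy as np

ALPHABET = list("abcdefghijklmnopqrstuvwxyz") + ["<pad>"]
MAX_LEN, EMB_DIM = 10, 8  # illustrative sizes, not the paper's

def embed(word, rng):
    """One-hot encode, pad to MAX_LEN and apply a linear character embedding."""
    idx = [ALPHABET.index(c) for c in word]
    idx += [ALPHABET.index("<pad>")] * (MAX_LEN - len(word))
    one_hot = np.eye(len(ALPHABET))[idx]           # (MAX_LEN, |A|)
    E = rng.normal(size=(len(ALPHABET), EMB_DIM))  # embedding matrix
    return one_hot @ E                             # (MAX_LEN, EMB_DIM)

def encode_content(word, h=4, w_rep=3, seed=0):
    rng = np.random.default_rng(seed)
    emb = embed(word, rng)
    # character-wise branch (g1 stand-in): repeat horizontally, expand vertically
    char_feats = np.tanh(emb)                        # per-character features
    fmap = np.repeat(char_feats, w_rep, axis=0)      # horizontal repetition
    fmap = np.broadcast_to(fmap, (h,) + fmap.shape)  # vertical expansion
    # global branch (g2 stand-in): one vector for the whole string
    global_vec = np.tanh(emb.reshape(-1))            # (MAX_LEN * EMB_DIM,)
    return fmap, global_vec
```

The first output mimics a feature map aligned with the style features; the second is the single vector later split into AdaIN parameter pairs.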

Generator. Let $F$ be the combination of the calligraphic style attributes $F_s$ and the character-wise textual content information, and let $f = g_2(e(t))$ be the global textual encoding. The generator $G$ is composed of two residual blocks [36] using AdaIN as the normalization layer. Then, four convolutional modules with nearest-neighbor up-sampling and a final activation layer generate the output image $\bar{x}$. AdaIN is formally defined as

$$\mathrm{AdaIN}(z, \alpha, \beta) = \alpha \left( \frac{z - \mu(z)}{\sigma(z)} \right) + \beta,$$

where $z$ is an activation map, and $\mu(\cdot)$ and $\sigma(\cdot)$ are the channel-wise mean and standard deviation. The global content information is injected four times during the generative process by the AdaIN layers. Their parameters $\alpha_i$ and $\beta_i$, $i \in \{1, \dots, 4\}$, are obtained by splitting $f$ into four pairs. Hence, the generative network is defined as

$$\bar{x} = H(t, X_w) = G\big(S(X_w) \oplus g_1(e(t)),\; g_2(e(t))\big),$$

where $g_1(e(t))$ is the character-by-character encoding of the string $t$ and $\oplus$ denotes concatenation.
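The AdaIN operation above is straightforward to sketch in NumPy; the `(C, H, W)` layout is an assumption for illustration:

```python
import numpy as np

def adain(z, alpha, beta, eps=1e-5):
    """Adaptive Instance Normalization on a (C, H, W) activation map.

    Each channel is normalized to zero mean / unit variance and then
    re-scaled and shifted with the (alpha, beta) pair derived from the
    global content vector.
    """
    mu = z.mean(axis=(1, 2), keepdims=True)    # channel-wise mean
    sigma = z.std(axis=(1, 2), keepdims=True)  # channel-wise std
    z_norm = (z - mu) / (sigma + eps)
    return alpha[:, None, None] * z_norm + beta[:, None, None]
```

With $\alpha = 1$ and $\beta = 0$ the operation reduces to plain instance normalization; the content-derived pairs tilt each channel's statistics instead.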

3.3 Learning Objectives

We propose to combine three complementary learning objectives: a discriminative loss $\mathcal{L}_d$, a style classification loss $\mathcal{L}_w$ and a textual content loss $\mathcal{L}_r$. Each of these losses enforces a different property of the desired generated image $\bar{x}$.

Discriminative Loss. Following the paradigm of GANs [1], we make use of a discriminative model $D$ to estimate the probability that samples come from a real source, i.e. the training data $\mathcal{X}$, rather than from the artificially generated distribution $\bar{\mathcal{X}}$. Taking the generative network $H$ and the discriminator $D$, this setting corresponds to a min-max optimization problem. The proposed discriminator starts with a convolutional layer, followed by six residual blocks with LeakyReLU activations and average pooling. A final binary classification layer is used to discern between fake and real images. The discriminative loss thus only controls that the general visual appearance of the generated image looks realistic; it takes into consideration neither the calligraphic style nor the textual content. This loss is formally defined as

$$\mathcal{L}_d(H, D) = \mathbb{E}_{x \sim \mathcal{X}} \left[ \log D(x) \right] + \mathbb{E}_{t, X_w} \left[ \log \left( 1 - D(H(t, X_w)) \right) \right].$$
Style Loss. When generating realistic handwritten word images, encoding information related to calligraphic styles not only provides diversity in the generated samples, but also prevents the mode collapse problem. Calligraphy is a strong identifier of different writers. In that sense, the proposed style loss guides the generative network to produce samples conditioned on a particular writing style by means of a writer classifier $W$. Given a handwritten word image, $W$ tries to identify the writer who produced it. The writer classifier follows the same architecture as the discriminator, with a final classification MLP sized to the number of writers in our training dataset. The classifier is optimized only with real samples drawn from $\mathcal{X}$, but it is used to guide the generation of the synthetic ones. We use the cross-entropy loss, formally defined as

$$\mathcal{L}_w(H, W) = -\,\mathbb{E} \left[ \sum_{j=1}^{|\mathcal{W}|} \hat{p}_j \log p_j \right],$$

where $p$ is the predicted probability distribution over the writers in $\mathcal{W}$ and $\hat{p}$ the real writer distribution. Generated samples should be classified as the writer used to construct the input style conditioning image set $X_w$.

Figure 3: Architecture of the attention-based sequence-to-sequence handwritten word recognizer .

Content Loss. A final handwritten word recognizer network $R$ is used to guide our generator towards producing synthetic word images with a specific textual content. We implemented a state-of-the-art sequence-to-sequence model [17] for handwritten word recognition to examine whether the produced images are actually decoded as the string $t$. The recognizer, depicted in Figure 3, consists of an encoder and a decoder coupled with an attention mechanism. Handwritten word images are processed by the encoder to obtain high-level feature representations. A VGG-19-BN [34] architecture followed by a two-layered Bi-directional Gated Recurrent Unit (B-GRU) [37] is used as the encoder network. The decoder is a one-directional RNN that outputs character-by-character predictions at each time step. The attention mechanism dynamically aligns context features from each time step of the decoder with high-level features from the encoder, ideally corresponding to the next character to decode. The Kullback-Leibler divergence is used as the recognition loss at each time step. It is formally defined as

$$\mathcal{L}_r(H, R) = \mathbb{E} \left[ \sum_{i=1}^{L} \sum_{j=1}^{|\mathcal{A}|} \hat{y}_{i,j} \log \frac{\hat{y}_{i,j}}{y_{i,j}} \right],$$

where $y_i$ is the $i$-th character probability distribution decoded by the word recognizer, $y_{i,j}$ is the probability of the $j$-th symbol of $\mathcal{A}$, and $\hat{y}_i$ is the real distribution corresponding to the $i$-th character of the input text $t$. The empty padding symbol is ignored in the loss computation.
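A per-time-step sketch of this loss, with assumed one-hot ground-truth distributions and a hypothetical `pad_index` marking the empty symbol:

```python
import numpy as np

def recognition_loss(pred, target, pad_index):
    """Sum of per-time-step KL divergences KL(target || pred) between the
    ground-truth and predicted character distributions; time steps whose
    ground truth is the padding/empty symbol are ignored."""
    eps, loss = 1e-12, 0.0
    for p, q in zip(pred, target):      # one (|A|,) distribution per step
        if q.argmax() == pad_index:
            continue                    # skip the empty symbol
        loss += float(np.sum(q * np.log((q + eps) / (p + eps))))
    return loss
```

With one-hot targets the KL term reduces to the usual cross-entropy on the correct character, which is why the recognizer can directly steer the generator toward legible content.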

3.4 End-to-end Training

Overall, the whole architecture is trained end to end with the combination of the three proposed loss functions:

$$\mathcal{L}(H, D, W, R) = \mathcal{L}_d(H, D) + \mathcal{L}_w(H, W) + \mathcal{L}_r(H, R).$$
Algorithm 1 presents the training strategy followed in this work, where $\mathrm{Optim}(\cdot)$ denotes the optimizer function. Note that the parameter optimization is performed in two steps. First, the discriminative loss is computed using both real and generated samples (line 3). The style and content losses are computed by providing only real data (line 4). Even though $W$ and $R$ are optimized using only real data and could therefore be pre-trained independently from the generative network $H$, we obtained better results by initializing all the networks from scratch and jointly training them altogether. The discriminator parameters are optimized by gradient ascent following the GAN paradigm, whereas the parameters of $W$ and $R$ are optimized by gradient descent. Finally, the overall generator loss is computed following Equation 7, where only the generator parameters are optimized (line 8).

Input: Input data $\mathcal{X}$; alphabet $\mathcal{A}$; max training iterations
      Output: Network parameters $\theta_H$, $\theta_D$, $\theta_W$, $\theta_R$.

1: repeat
2:     Get style and content mini-batches $X_w$ and $t$
3:     $\theta_D \leftarrow \mathrm{Optim}(\mathcal{L}_d, \theta_D)$ ▷ Eq. 4, real and generated samples
4:     $\theta_W, \theta_R \leftarrow \mathrm{Optim}(\mathcal{L}_w + \mathcal{L}_r, \theta_W, \theta_R)$ ▷ Eq. 5 + Eq. 6, real samples
5:     $\bar{x} \leftarrow H(t, X_w)$ ▷ Generate samples
6:     Compute $\mathcal{L}_d$, $\mathcal{L}_w$ and $\mathcal{L}_r$ on $\bar{x}$
7:     $\mathcal{L} \leftarrow \mathcal{L}_d + \mathcal{L}_w + \mathcal{L}_r$ ▷ Eq. 7, generated samples
8:     $\theta_H \leftarrow \mathrm{Optim}(\mathcal{L}, \theta_H)$
9: until max training iterations
Algorithm 1 Training algorithm for the proposed model.
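The alternating update order of Algorithm 1 can be expressed as a schematic Python step. All components here are illustrative stubs: `nets`, `optimize` and the loss names are placeholders, not an actual training implementation:

```python
def training_step(batch, nets, optimize):
    """One iteration of the alternating scheme: discriminator first,
    then writer classifier and recognizer on real data only, and
    finally the generator on the combined loss."""
    X_w, t = batch                     # style images, target string
    x_fake = nets["H"](t, X_w)         # generated word image
    # 1) discriminator: updated with real and generated samples (L_d)
    optimize("D", loss="L_d", real=X_w, fake=x_fake)
    # 2) writer classifier + recognizer: updated with real samples only
    optimize("W,R", loss="L_w + L_r", real=X_w)
    # 3) generator: updated with the combined loss on generated samples
    optimize("H", loss="L_d + L_w + L_r", fake=x_fake)
```

The point of the ordering is that the guiding networks always see real data for their own updates, while the generator is updated against their current state.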

4 Experiments

To carry out the different experiments, we have used a subset of the IAM corpus [38] as our multi-writer handwritten dataset. It consists of handwritten word snippets written by 500 different individuals. Each word image has its associated writer and transcription metadata. A test subset of 160 writers has been kept apart during training to check whether the generative model is able to cope with unseen calligraphic styles. We have also used a subset of unique English words from the Brown corpus [39] as the source of strings for the content input. A test set of 400 unique words, disjoint from the IAM transcriptions, has been used to assess performance when producing OOV words. To quantitatively measure the image quality, the diversity and the ability to transfer style attributes of the proposed approach, we use the Fréchet Inception Distance (FID) [40, 41], measuring the distance between the Inception-v3 activation distributions of generated and real samples for each writer separately, and finally averaging over writers. Inception features, trained on ImageNet data, have not been designed to discern between different handwriting images; although this measure might not be ideal for our specific case, it still serves as an indication of the similarity between generated and real images.
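The per-writer Fréchet distance boils down to comparing two fitted Gaussians. The sketch below uses arbitrary feature matrices in place of Inception-v3 activations, and the per-writer averaging is omitted; the eigendecomposition-based matrix square root is one possible implementation choice:

```python
import numpy as np

def _sqrtm_psd(m):
    """Square root of a symmetric positive semi-definite matrix."""
    vals, vecs = np.linalg.eigh(m)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def frechet_distance(feats_a, feats_b):
    """Frechet distance between Gaussians fitted to two feature matrices
    (rows = samples): ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^(1/2))."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    s = _sqrtm_psd(cov_a)
    covmean = _sqrtm_psd(s @ cov_b @ s)  # same trace as (cov_a cov_b)^(1/2)
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```

Identical feature sets yield a distance near zero, and the value grows as the two activation distributions drift apart.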

a) IV-S
b) IV-U
c) OOV-S
d) OOV-U
Figure 4: Word image generation. a) In-Vocabulary (IV) words and seen (S) styles; b) In-Vocabulary (IV) words and unseen (U) styles; c) Out-of-Vocabulary (OOV) words and seen (S) styles and d) Out-of-Vocabulary (OOV) words and unseen (U) styles.

4.1 Generating Handwritten Word Images

We present in Figure 4 an illustrative selection of generated handwritten words, which shows the realistic and diverse aspect of the produced images. Qualitatively, we observe that the proposed approach yields satisfactory results even when dealing with words and calligraphic styles never seen during training. However, when analyzing the different experimental settings in Table 1, we observe that the FID measure slightly degrades when either an OOV word or a style never seen during training is provided as input. Nevertheless, the FID scores obtained in all four settings compare satisfactorily with the baseline achieved by real data.

Real images | IV-S | IV-U | OOV-S | OOV-U
Table 1: FID between generated images and real images of the corresponding set.
Figure 5: t-SNE embedding visualization of generated instances of the word "deep".

In order to show the ability of the proposed method to produce a diverse set of generated images, we present in Figure 5 a t-SNE [42] visualization of different instances produced with a fixed textual content while varying the calligraphic style inputs. Different clusters corresponding to particular slants, stroke widths, character roundnesses, ligatures and cursive writings are observed.

Table 2: Comparison of handwritten word generation with FUNIT [32], given the same style images and textual content inputs.

To further demonstrate the ability of the proposed approach to coalesce content and style information into the generated handwritten word images, we compare in Table 2 our results with those of the state-of-the-art approach FUNIT [32]. Being an image-to-image translation method, FUNIT starts with a content image and then injects the style attributes derived from a second sample image. Although FUNIT performs well on natural scene images, such approaches clearly do not transfer well to the specific case of handwritten words. Starting with a content image instead of a text string confines the generative process to the shapes of the initial drawing. When infusing the style features, the FUNIT method is only able to deform the stroke textures, often resulting in extremely distorted words. Conversely, our proposed generative process is able to produce realistic and diverse word samples given a content text string and a calligraphic style example. We observe how, for the different produced versions of the same word, the proposed approach is able to change style attributes such as stroke width or slant, to produce both cursive words, where all characters are connected through ligatures, and disconnected writings, and even to render the same characters differently, e.g. note the characters n or s in "Thank" and "inside", respectively.

4.2 Latent Space Interpolations

The generator network learns to map feature points in the latent space to synthetic handwritten word images. Such a latent space presents a structure worth exploring. We first interpolate in Figure 6 between two points corresponding to two different calligraphic styles while keeping the textual content fixed. We observe how the generated images smoothly adjust from one style to the other. Again, note how individual characters evolve from one typography to another, e.g. the l in "also", or the f in "final".

Figure 6: Latent space interpolation between two different calligraphic styles for different words while keeping contents fixed.

Contrary to the continuous nature of the style latent space, the content space is discrete in nature. Instead of computing point-wise interpolations, we present in Figure 7 the word images obtained for different styles when following a "word ladder" puzzle game, i.e. going from one word to another, changing one character at a time. Here we observe how different contents influence stylistic aspects: the characters s and i are usually disconnected when rendering the word "sired", but often appear with a ligature when jumping to the word "fired".

"three" "threw" "shrew" "shred" "sired" "fired" "fined" "firer" "fiver" "fever" "sever" "seven"
Figure 7: Word ladder. From "three" to "seven" changing one character at a time, generated for five different calligraphic styles.

4.3 Impact of the Learning Objectives

Throughout this paper, we have proposed to guide the generation process with three complementary objectives: the discriminator loss $\mathcal{L}_d$, controlling the genuine appearance of the generated images $\bar{x}$; the writer classification loss $\mathcal{L}_w$, forcing $\bar{x}$ to mimic the calligraphic style of the input images $X_w$; and the recognition loss $\mathcal{L}_r$, guaranteeing that $\bar{x}$ is readable and conveys the exact text $t$. We analyze in Table 3 the effect of each learning objective.

$\mathcal{L}_d$ | $\mathcal{L}_w$ | $\mathcal{L}_r$ | FID
✓ | – | – | 364.10
✓ | ✓ | – | 207.47
✓ | – | ✓ | 138.80
Table 3: Effect of the different learning objectives when generating the same content for different styles.

The sole use of the discriminative loss $\mathcal{L}_d$ leads to constantly generating an image that is able to fool the discriminator: although the generated image looks like handwritten strokes, the content and style inputs are ignored. When combining the discriminator with the auxiliary task of writer classification ($\mathcal{L}_d + \mathcal{L}_w$), the produced results are more encouraging, but the input text is still ignored, always tending to generate the word "the", since it is the most common word seen during training. When combining the discriminator with the word recognizer loss ($\mathcal{L}_d + \mathcal{L}_r$), the desired word is rendered; however, as in [15], we suffer from the mode collapse problem, always producing unvarying word instances. When combining the three learning objectives, we are able to correctly render the appropriate textual content while mimicking the input styles, producing diverse results. The FID measure also decreases for each successive combination.

4.4 Human Evaluation

Finally, besides providing qualitative results and evaluating the generative process with the FID measure, we also tested whether the generated images were actually indistinguishable from real ones by human judgment. We conducted a human evaluation study as follows: we asked human examiners to assess whether a set of images were written by a human or artificially generated. Appraisers were presented with a total of sixty images, one at a time, and had to decide whether each of them was real or fake. To construct the test, we chose thirty real words from the IAM test partition from ten different writers. We then generated thirty artificial samples by using OOV textual contents and by randomly taking the previous writers as the sources for the calligraphic styles. These sets were not curated; the only filter was that the generated samples had to be correctly transcribed by the word recognizer network $R$. In total, we collected the responses of all examiners.

Table 4: Human evaluation experiment where examiners had to determine whether words were real or artificially generated: a) confusion matrix of actual (genuine/generated) vs. predicted (real/fake) labels, with Recall and False Positive Rate; b) accuracy distribution.

In Table 4 we present the confusion matrix of the human assessments, together with Accuracy (ACC), Precision (P), Recall (R), False Positive Rate (FPR) and False Omission Rate (FOR) values. The study revealed that our generative model was clearly perceived as plausible, since a large portion of the generated samples were deemed genuine. Roughly half of the images were properly identified, a performance similar to that of a random binary classifier. Accuracies over the different examiners were normally distributed. We also observe that the synthetically generated word images were judged as real more often than they were correctly identified as fraudulent, resulting in a high final False Positive Rate.

5 Conclusion

In this paper we have presented a novel image generation architecture that produces realistic and varied artificially rendered samples of handwritten words. Our generative pipeline yields credible handwritten word images by conditioning the generative process on both calligraphic style features and textual content. Furthermore, by jointly guiding our generator with three different cues, a discriminator, a style classifier and a content recognizer, our model is able to render any input word without depending on a predefined vocabulary, while incorporating calligraphic styles in a few-shot setup. Experimental results demonstrate that the proposed method yields images of such realistic quality that they are indistinguishable by humans.


  1. The real words are: "that", "vision", "asked", "hits" and "writer".


  1. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Proceedings of the Neural Information Processing Systems Conference, 2014.
  2. Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. In Proceedings of the International Conference on Learning Representations, 2019.
  3. Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.
  4. Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
  5. Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  6. Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, and Yi-Hsuan Yang. MuseGAN: Multi-track sequential generative adversarial networks for symbolic music generation and accompaniment. In Proceedings of the AAAI Conference on Artificial Intelligence, 2018.
  7. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. SeqGAN: Sequence generative adversarial nets with policy gradient. In Proceedings of the AAAI Conference on Artificial Intelligence, 2017.
  8. Sergey Tulyakov, Ming-Yu Liu, Xiaodong Yang, and Jan Kautz. MoCoGAN: Decomposing motion and content for video generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  9. Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
  10. David Ha and Douglas Eck. A neural representation of sketch drawings. In Proceedings of the International Conference on Learning Representations, 2018.
  11. Yaroslav Ganin, Tejas Kulkarni, Igor Babuschkin, SM Eslami, and Oriol Vinyals. Synthesizing programs for images using reinforced adversarial learning. In Proceedings of the International Conference on Machine Learning, 2018.
  12. Ningyuan Zheng, Yifan Jiang, and Dingjiang Huang. Strokenet: A neural painting environment. In Proceedings of the International Conference on Learning Representations, 2019.
  13. Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. DRAW: A recurrent neural network for image generation. In Proceedings of the International Conference on Machine Learning, 2015.
  14. Bo Chang, Qiong Zhang, Shenyi Pan, and Lili Meng. Generating handwritten Chinese characters using CycleGAN. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision, 2018.
  15. Eloi Alonso, Bastien Moysset, and Ronaldo Messina. Adversarial generation of handwritten text images conditioned on sequences. In Proceedings of the International Conference on Document Analysis and Recognition, 2019.
  16. Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier GANs. In Proceedings of the International Conference on Machine Learning, 2017.
  17. Lei Kang, J Ignacio Toledo, Pau Riba, Mauricio Villegas, Alicia Fornés, and Marçal Rusiñol. Convolve, attend and spell: An attention-based sequence-to-sequence model for handwritten word recognition. In Proceedings of the German Conference on Pattern Recognition, 2018.
  18. Jue Wang, Chenyu Wu, Ying-Qing Xu, and Heung-Yeung Shum. Combining shape and physical models for online cursive handwriting synthesis. International Journal of Document Analysis and Recognition, 7(4):219–227, 2005.
  19. Thomas Konidaris, Basilios Gatos, Kostas Ntzios, Ioannis Pratikakis, Sergios Theodoridis, and Stavros J Perantonis. Keyword-guided word spotting in historical printed documents using synthetic data and user feedback. International Journal of Document Analysis and Recognition, 9(2-4):167–177, 2007.
  20. Zhouchen Lin and Liang Wan. Style-preserving English handwriting synthesis. Pattern Recognition, 40(7):2097–2109, 2007.
  21. Achint Oommen Thomas, Amalia Rusu, and Venu Govindaraju. Synthetic handwritten CAPTCHAs. Pattern Recognition, 42(12):3365–3373, 2009.
  22. Tom SF Haines, Oisin Mac Aodha, and Gabriel J Brostow. My text in your handwriting. ACM Transactions on Graphics, 35(3):1–18, 2016.
  23. Praveen Krishnan and CV Jawahar. Generating synthetic data for text recognition. arXiv preprint arXiv:1608.04224, 2016.
  24. Lei Kang, Marçal Rusiñol, Alicia Fornés, Pau Riba, and Mauricio Villegas. Unsupervised writer adaptation for synthetic-to-real handwritten word recognition. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision, 2020.
  25. Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In Proceedings of the International Conference on Learning Representations, 2014.
  26. Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  27. Yaniv Taigman, Adam Polyak, and Lior Wolf. Unsupervised cross-domain image generation. In Proceedings of the International Conference on Learning Representations, 2017.
  28. Vinaychandran Pondenkandath, Michele Alberti, Michaël Diatta, Rolf Ingold, and Marcus Liwicki. Historical document synthesis with generative adversarial networks. In Proceedings of the International Conference on Document Analysis and Recognition, 2019.
  29. Pengyuan Lyu, Xiang Bai, Cong Yao, Zhen Zhu, Tengteng Huang, and Wenyu Liu. Auto-encoder guided GAN for Chinese calligraphy synthesis. In Proceedings of the International Conference on Document Analysis and Recognition, 2017.
  30. Yuchen Tian. zi2zi: Master Chinese calligraphy with conditional adversarial networks, 2017.
  31. Haochuan Jiang, Guanyu Yang, Kaizhu Huang, and Rui Zhang. W-net: one-shot arbitrary-style Chinese character generation with deep neural networks. In Proceedings of the International Conference on Neural Information Processing, 2018.
  32. Ming-Yu Liu, Xun Huang, Arun Mallya, Tero Karras, Timo Aila, Jaakko Lehtinen, and Jan Kautz. Few-shot unsupervised image-to-image translation. In Proceedings of the IEEE International Conference on Computer Vision, 2019.
  33. Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision, 2017.
  34. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations, 2015.
  35. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, 2015.
  36. Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. Multimodal unsupervised image-to-image translation. In Proceedings of the European Conference on Computer Vision, 2018.
  37. Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. In Proceedings of the NeurIPS Workshop on Deep Learning, 2014.
  38. U-V Marti and Horst Bunke. The IAM-database: an English sentence database for offline handwriting recognition. International Journal on Document Analysis and Recognition, 5(1):39–46, 2002.
  39. Steven Bird, Ewan Klein, and Edward Loper. Natural language processing with Python: analyzing text with the natural language toolkit. O’Reilly Media, Inc., 2009.
  40. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Proceedings of the Neural Information Processing Systems Conference, 2017.
  41. Ali Borji. Pros and cons of GAN evaluation measures. Computer Vision and Image Understanding, 179:41–65, 2019.
  42. Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, 2008.