Sketchforme: Composing Sketched Scenes from Text Descriptions for Interactive Applications


Abstract

Sketching and natural language are effective communication media for interactive applications. We introduce Sketchforme, the first neural-network-based system that can generate sketches based on text descriptions specified by users. Sketchforme is capable of gaining high-level and low-level understanding of multi-object sketched scenes without being trained on sketched scene datasets annotated with text descriptions. The sketches composed by Sketchforme are expressive and realistic: we show in our user study that these sketches convey descriptions better than human-generated sketches in multiple cases, and 36.5% of these sketches are considered to be human-generated. We develop multiple interactive applications using these generated sketches, and show that Sketchforme can significantly improve language learning applications and support intelligent language-based sketching assistants.


Forrest Huang
University of California, Berkeley
Berkeley, U.S.A.
forrest_huang@berkeley.edu
John F. Canny
University of California, Berkeley
Berkeley, U.S.A.
canny@berkeley.edu

I.4.9 Image Processing and Computer Vision: Applications

interactive applications; natural language; sketching; generative models; deep learning; Transformer; interactive machine learning.

Sketching is a natural and effective way for people to communicate artistic and functional ideas. Sketches are abstract drawings widely used by designers, engineers and educators as a thinking tool to materialize their vision while discarding unnecessary details. Sketching is also a popular form of artistic expression among amateur and professional artists. With the pervasive use of sketches across diverse fields, researchers in the HCI and graphics communities have developed sketch-based interactive tools that enable intuitive and rich user experiences, such as assisted sketching systems [?, ?], design tools for prototyping [?], and animation authoring tools [?].

While using sketches as an interactive medium poses numerous benefits, producing meaningful and delightful sketches can be challenging for users and typically requires years of education and practice. This is partially due to the array of skills one needs to master for sketching: finding the appropriate abstraction level for the sketch, composing that abstraction from visual representations of individual objects, and conveying these objects via precise motor operations with drawing media. Researchers have developed computational methods to assist users in creating sketches, but these methods typically rely on rigid modeling of pixel/stroke-level visual details without semantic understanding of the sketched content. This restricts these approaches to improving the sketching mechanics of user-created sketches.

Recent advances in neural-network-based generative models have drastically increased machines’ ability to generate convincing graphical content, including sketches, from high-level concepts. The Sketch-RNN model [?] demonstrates that recurrent neural networks (RNNs) trained on crowd-sourced data can understand and generate original sketches of various concept classes. Building on these techniques, we introduce Sketchforme, the first system capable of synthesizing sketches conditioned on natural language descriptions.

The contribution of this paper is two-fold. First, we contribute Sketchforme, a system that uses a novel two-step neural method for generating sketched scenes from text descriptions. Sketchforme first uses its Scene Composer, a neural network that learns high-level composition principles from datasets of human-annotated natural images containing text captions, bounding boxes of individual objects, and class information, to generate scene composition layouts. Sketchforme then uses its Object Sketcher, a neural network that learns low-level sketching mechanics, to generate sketches adhering to the objects’ aspect ratios in the composition. Sketchforme composes these generated objects into meaningful sketched scenes.

Second, we contribute and evaluate several applications, including a sketch-based language learning system and an intelligent sketching assistant, to exemplify how Sketchforme empowers novel sketch-based applications. In these applications, Sketchforme creates new interactions and user experiences through the interplay between language and sketches. We envision that the connection Sketchforme draws between language and sketching will enable more engaging and natural human-computer interactions, and open up new avenues for self-expression by users.

Prior works have augmented the creative process of sketching with automatically-generated and crowd-sourced drawing guidance. ShadowDraw [?] and EZ-sketching [?] used edge images traced from natural images to suggest realistic sketch strokes to users. The Drawing Assistant [?] extracts geometric structure guides to help users construct accurate drawings. PortraitSketch [?] provides sketching assistance specifically on facial sketches by adjusting geometry and stroke parameters. Researchers also developed crowd-sourced web applications to provide real-time feedback for users to correct and improve sketched strokes [?].

In addition to assisted sketching tools, researchers have developed sketch tutorial tools to improve users’ sketching proficiency. How2Sketch [?] automatically generates multi-step tutorials for sketching 3D objects. Sketch-Sketch Revolution [?] provides first-hand experiences created by sketch experts for novice sketchers.

While these methods help users create refined sketches, none of them can synthesize sketches from semantic descriptions as Sketchforme’s sketch generation process does.

Sketchforme builds upon Sketch-RNN, the first neural-network-based sketch generation model [?]. Sketch-RNN consists of a sequence encoder-decoder model that can unconditionally generate stroke-based sketches of given object classes, and conditionally reconstruct sketches based on users’ input sketches. Sketchforme extends Sketch-RNN’s model for sketching individual objects to support conditional sketch generation based on aspect ratios in the composition layouts.

Generating graphical content from text descriptions is a popular ongoing research problem. Recent work on Generative Adversarial Networks (GANs) [?, ?] shows promising results in generating realistic images from text descriptions. GAN-CLS [?] augments the GAN architecture to consider text descriptions and subsequently generate images based on users’ text input. Extending these works, [?] introduces multiple components to first synthesize a composition and outlines from a text description, and subsequently generate images from this composition and these outlines. This is similar to Sketchforme’s multi-step approach to generating complete sketched scenes from natural language, except in the domain of natural images.

Several prior works in the computer vision community focus on transferring styles between visual content. These works explore image stylization by matching statistics of feature maps (i.e., filters) of pre-trained models and with generative adversarial networks [?, ?]. One possible approach to sketch generation arising from these techniques is to stylize synthetic images generated from text descriptions. However, this approach would likely result in realistic, detailed sketch-style images that contain distracting artifacts. Sketchforme instead focuses on synthesizing abstract sketched scenes from scratch that capture the fundamental ideas of the messages communicated by the scenes.

To support applications that afford sketch- and natural-language-based interactions, we developed Sketchforme, a system that provides the core capability of synthesizing sketched scenes from natural language descriptions. Sketchforme implements a two-step approach to generate a complete scene from a text description, as illustrated in Figure 1. In the first step, Sketchforme uses its Scene Composer to synthesize composition layouts represented by bounding boxes of individual objects. These bounding boxes dictate the location, size and aspect ratio of the objects in the scene. Sketchforme’s Object Sketcher then uses this information in the second step to generate the sketch strokes of these objects in their corresponding bounding boxes. These steps reflect the scene-sketching process suggested by sketching tutorials, where the overall composition of the scene is drafted before filling in the details that characterize each object [?].

Figure 1: Overall system architecture of Sketchforme, which consists of two steps in its sketch generation process.

By taking this two-step approach, Sketchforme obtains both high-level scene understanding and knowledge of the relations between individual objects, enabling a multitude of applications that require such understanding. Moreover, this approach overcomes the difficulty end-to-end sketch generation methods face in capturing the global structure of sequential inputs [?]. End-to-end scene sketch generation would also require datasets of dedicated sketch-caption pairs, which are difficult for crowd-workers to create [?] and would be prohibitively large in scale due to the combinatorial explosion of objects in scenes.

To generate composition layouts of scenes, we first model each layout as a sequence of objects, such that each object $o_i$ generated by the network is represented with 8 values:

$o_i = (x_i, y_i, w_i, h_i, c_i, b_i, s_i, e_i)$

The first five values are the fundamental data describing an object in the scene: the bounding box’s x-position, y-position, width and height, and the object’s class label $c_i$. The last three values are boolean flags used as extra ‘tokens’ to mark actual boxes ($b_i$), the beginning of the sequence ($s_i$), and the end of the sequence ($e_i$).

Using this sequential encoding of scenes, we designed a Transformer-based Mixture Density Network as our Scene Composer to generate realistic composition layouts. Transformer networks [?] are state-of-the-art neural networks for sequence-to-sequence modeling tasks, such as machine translation and question answering. We use the Transformer to perform a novel task: generating a sequence of objects from a text description $d$, a sequence of words. As multiple scenes can correspond to the same text description, we feed the outputs of the Transformer network into Gaussian Mixture Models (GMMs) to model the variation of scenes, forming a Mixture Density Network [?].

The generation process of the composition layouts involves taking the previously generated bounding box $o_{i-1}$ (or the start token) as input and generating the current box $o_i$. At each time-step, the Transformer model produces an output $z_i$ conditioned on the text input and the previously generated boxes using the self-attention and cross-attention mechanisms built into the architecture. This process is repeated until an end token is generated:

$z_i = \mathrm{Transformer}(d, o_1, \ldots, o_{i-1})$  (1)

$z_i$ is then projected to the appropriate dimensionality by separate projection layers to parametrize the GMMs: $W_{xy}$ models $p(x_i, y_i \mid z_i)$, the distribution of the bounding boxes’ positions, and $W_{wh}$ models $p(w_i, h_i \mid x_i, y_i, z_i)$, the distribution of the bounding boxes’ sizes. With these distributions, Sketchforme can generate bounding boxes by sampling as in Equations 2 and 3. The GMMs use the projected values as mixture weights, means and co-variances for mixtures of multi-variate Gaussian distributions. These parameters are passed through appropriate activation functions (e.g., sigmoid, $\tanh$ and $\exp$) to comply with their required ranges.

$(x_i, y_i) \sim \mathrm{GMM}(W_{xy} z_i)$  (2)
$(w_i, h_i) \sim \mathrm{GMM}(W_{wh} [z_i; x_i; y_i])$  (3)

While $p(x_i, y_i)$ is modeled only from the first projection layer $W_{xy}$, we condition the sizes $(w_i, h_i)$ on the positions of the boxes, similar to [?]. To introduce this condition, we concatenate the sampled $x_i$ and $y_i$ values of $o_i$ to the input of the second projection layer $W_{wh}$, as described in Equation 3. The probabilities of the current box’s token type (actual box, start, or end) are generated using a softmax-activated third projection layer $W_t$ from the Transformer output:

$(\hat{b}_i, \hat{s}_i, \hat{e}_i) = \mathrm{softmax}(W_t z_i)$  (4)

In addition, Sketchforme separately uses an LSTM to generate the class label vectors, because the class labels for a given description are assumed not to vary across examples. The full architecture of the Scene Composer is shown in Figure 2.
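To make Equations 1–4 concrete, the following sketch shows one plausible way to implement a single Scene Composer sampling step with PyTorch distributions. All names here (`transformer`, `W_xy`, `W_wh`, `W_t`, the parameter packing, and `n_mix`) are our own illustrative assumptions rather than the paper’s exact implementation; `W_xy` and `W_wh` are assumed to emit $6K$ values per step for $K$ mixture components (weights, 2-D means, 2-D scales, and a correlation each).

```python
import torch
import torch.nn.functional as F
from torch.distributions import Categorical, MultivariateNormal

# Hypothetical single sampling step of the Scene Composer (Equations 1-4).
# `transformer` maps (text tokens, previously generated boxes) to z_i;
# W_xy, W_wh, W_t are the three projection layers described in the text,
# with W_xy / W_wh each emitting 6 * n_mix values per step.

def sample_box(transformer, W_xy, W_wh, W_t, text, prev_boxes, n_mix=5):
    z = transformer(text, prev_boxes)  # Eq. 1: output for the current step

    def sample_gmm(params):
        # Unpack mixture weights, 2-D means, 2-D scales, and correlations.
        logits, mu, log_sigma, corr = torch.split(
            params, [n_mix, 2 * n_mix, 2 * n_mix, n_mix], dim=-1)
        k = Categorical(logits=logits).sample()    # choose one component
        mu = mu.view(n_mix, 2)[k]
        sigma = log_sigma.exp().view(n_mix, 2)[k]  # exp keeps scales positive
        rho = torch.tanh(corr)[k]                  # keep correlation in (-1, 1)
        cov = torch.stack([
            torch.stack([sigma[0] ** 2, rho * sigma[0] * sigma[1]]),
            torch.stack([rho * sigma[0] * sigma[1], sigma[1] ** 2])])
        return MultivariateNormal(mu, cov).sample()

    xy = sample_gmm(W_xy(z))                   # Eq. 2: box position
    wh = sample_gmm(W_wh(torch.cat([z, xy])))  # Eq. 3: size, conditioned on position
    flags = F.softmax(W_t(z), dim=-1)          # Eq. 4: box / start / end probabilities
    return xy, wh, flags
```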

Figure 2: Model architecture of the Scene Composer used in the first step of Sketchforme’s sketch generation process.

After obtaining scene layouts from the Scene Composer, we designed a modified version of Sketch-RNN to generate the individual objects in those layouts. We adopt the decoder-only Sketch-RNN, which generates sketches of individual objects as sequences of strokes. Sketch-RNN’s sequential generation process produces the current stroke based on the previous strokes in the sketched object, as is common in sequence modeling tasks, and uses a GMM to model the variation of sketch strokes.

While the decoder-only Sketch-RNN generates realistic sketches of individual objects in certain concept classes, the aspect ratio of the output sketches cannot be constrained. Hence, sketches generated by the original Sketch-RNN model may be unfit for assembling into scene sketches guided by the layouts generated by the Scene Composer. Further, naive direct resizing of the sketches can produce sketches of unsatisfactory quality for complex scenes.

We modified Sketch-RNN into the Object Sketcher, which factors in the aspect ratio of a sketch when generating individual objects. To incorporate this knowledge, we compute the aspect ratio $r$ of each sketch in the training data and concatenate it with the previous input stroke $S_{t-1}$ at every step of the generation process, as shown in Figure 3. The new formulation and output of the Sketch-RNN at stroke-sequence step $t$ is:

$h_t = \mathrm{LSTM}(h_{t-1}, [S_{t-1}; r]), \quad y_t = W_y h_t + b_y$  (5)

Since each Sketch-RNN model only handles a single object class, we train multiple sketching models and use the appropriate model based on the class label in the layouts generated by the Scene Composer for assembling the final sketched scene.
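The following minimal sketch illustrates the conditioning in Equation 5, assuming the standard 5-dimensional Sketch-RNN stroke format and the 20-mixture output parametrization of the original model; `nn.LSTMCell` stands in for the HyperLSTM cell the Object Sketcher actually uses, and all names are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical Object Sketcher step (Equation 5): the target aspect ratio r
# is appended to every stroke input so the decoder is conditioned on it.
# nn.LSTMCell stands in for the HyperLSTM cell used in the actual system.

class AspectRatioSketchDecoder(nn.Module):
    def __init__(self, stroke_dim=5, hidden_size=2048, n_mix=20):
        super().__init__()
        # +1 input feature for the aspect ratio concatenated at each step
        self.cell = nn.LSTMCell(stroke_dim + 1, hidden_size)
        # 6 parameters per mixture plus 3 pen states, as in Sketch-RNN
        self.out = nn.Linear(hidden_size, 6 * n_mix + 3)

    def forward(self, prev_stroke, aspect_ratio, state):
        # [S_{t-1}; r] -> LSTM -> y_t
        x = torch.cat([prev_stroke, aspect_ratio.unsqueeze(-1)], dim=-1)
        h, c = self.cell(x, state)
        return self.out(h), (h, c)
```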

Figure 3: Model architecture of the extended Sketch-RNN in Sketchforme’s Object Sketcher.

Sketchforme’s Scene Composer and Object Sketcher are trained on different datasets that encapsulate visual-scene-level knowledge and sketching knowledge separately. This relaxes the requirement of training Sketchforme on a natural-language-annotated dataset of sketched scenes that provides highly varied scenes corresponding to realistic scene-caption pairs.

We trained the Scene Composer using the Visual Genome dataset [?], which contains natural language region descriptions and object relations of natural images, demonstrating the Scene Composer’s flexibility in utilizing various types of scene-layout datasets. Each object relation in the dataset contains a ‘subject’ (e.g., ‘person’), a ‘predicate’ (e.g., ‘on’), and an ‘object’ (e.g., ‘car’), represented by class labels and bounding boxes of the participating objects in the image. Natural language region descriptions are represented by bounding boxes of the regions and the description texts that correspond to them. We reconcile these two types of information using the region graphs in the dataset that pair them. With this paired data of natural language descriptions and relations, we train the Scene Composer to generate composition layouts. We selected relations that contain subsets of the 100 most commonly used object classes and 70 predicates in the dataset, yielding 101,968 instances. We split this dataset into a 70% training set, 10% validation set and 20% testing set.
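A minimal sketch of this filtering and splitting step is shown below; the dict-based relation structure and field names are illustrative assumptions, not the actual Visual Genome schema.

```python
import random

# Hypothetical filtering/splitting of Visual Genome relations, assuming each
# relation is a dict holding class labels for its participants.

def filter_and_split(relations, top_classes, top_predicates, seed=0):
    keep = [r for r in relations
            if r['subject'] in top_classes
            and r['object'] in top_classes
            and r['predicate'] in top_predicates]
    random.Random(seed).shuffle(keep)
    n = len(keep)
    train = keep[:int(0.7 * n)]            # 70% training set
    val = keep[int(0.7 * n):int(0.8 * n)]  # 10% validation set
    test = keep[int(0.8 * n):]             # 20% testing set
    return train, val, test
```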

The Object Sketcher is trained with the Quick, Draw! dataset [?], which consists of 70,000 training sketches, 2,500 validation sketches and 2,500 testing sketches for each of its 345 object categories. As mentioned above, we preprocess the data by computing the aspect ratio of each sketch as an additional input to the Object Sketcher alongside the original stroke data.

Using these data sources, we train multiple neural networks with various configurations and loss functions in Sketchforme. The LSTM in the Scene Composer for generating class labels is stacked with 2 hidden layers of size 512; the Transformer network’s configuration was likewise chosen through empirical experiments.

The Scene Composer is trained by minimizing the negative log-likelihoods of the position and size data:

$L_{xy} = -\frac{1}{N}\sum_{i=1}^{N} \log p(x_i, y_i \mid z_i)$  (6)
$L_{wh} = -\frac{1}{N}\sum_{i=1}^{N} \log p(w_i, h_i \mid x_i, y_i, z_i)$  (7)

and a cross-entropy loss for the categorical token outputs $(\hat{b}_i, \hat{s}_i, \hat{e}_i)$:

$L_{p} = -\frac{1}{N}\sum_{i=1}^{N} \left( b_i \log \hat{b}_i + s_i \log \hat{s}_i + e_i \log \hat{e}_i \right)$  (8)

For generating the class labels, note that in our network each predicted class label $\hat{c}_i$ is represented as a 100-dimensional vector, with each value corresponding to the output probability of one class. The class-label loss $L_c$ is thus computed as a cross-entropy against the one-hot ground-truth labels $c_i$:

$L_{c} = -\frac{1}{N}\sum_{i=1}^{N} \sum_{k=1}^{100} c_{i,k} \log \hat{c}_{i,k}$  (9)

We combine these losses with weight hyper-parameters $\lambda_{wh}$, $\lambda_{p}$ and $\lambda_{c}$ to obtain the overall training objective for the Scene Composer:

$L = L_{xy} + \lambda_{wh} L_{wh} + \lambda_{p} L_{p} + \lambda_{c} L_{c}$  (10)

We use the Adam optimizer to minimize the loss function and 5 mixtures in each of the GMMs. The initial learning rate, loss weights and optimizer hyper-parameters were chosen based on empirical experiments.
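A compact sketch of the combined objective in Equations 6–10 is shown below, assuming the GMMs are exposed as `torch.distributions` objects with `log_prob` and that the flag and class targets are integer class indices; the `lam_*` defaults are placeholders, since the actual weights were chosen empirically.

```python
import torch
import torch.nn.functional as F

# Hypothetical combined training objective (Equations 6-10). gmm_xy / gmm_wh
# are torch.distributions objects built from the projection layers' outputs;
# flag_targets / class_targets are integer class indices, which makes
# F.cross_entropy equivalent to the one-hot sums in Equations 8 and 9.

def scene_composer_loss(gmm_xy, gmm_wh, flag_logits, class_logits,
                        xy, wh, flag_targets, class_targets,
                        lam_wh=1.0, lam_p=1.0, lam_c=1.0):
    l_xy = -gmm_xy.log_prob(xy).mean()                       # Eq. 6
    l_wh = -gmm_wh.log_prob(wh).mean()                       # Eq. 7
    l_p = F.cross_entropy(flag_logits, flag_targets)         # Eq. 8
    l_c = F.cross_entropy(class_logits, class_targets)       # Eq. 9
    return l_xy + lam_wh * l_wh + lam_p * l_p + lam_c * l_c  # Eq. 10
```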

The Object Sketcher uses a HyperLSTM cell [?] of size 2048 for the modified Sketch-RNN model. The loss function is identical to the reconstruction loss in the original Sketch-RNN model, maximizing the log-likelihood of the generated probability distribution of the stroke data at each step. The model is trained with gradient clipping of 1.0 and an empirically chosen initial learning rate.

Central to evaluating Sketchforme’s success is assessing its effectiveness in generating realistic and relevant sketches and layouts from text descriptions. We evaluated the data generated at each step of the generation process qualitatively and quantitatively to demonstrate Sketchforme’s effectiveness in generating sketched scenes. We further conducted a user study on the overall utility of the generated sketches to explore their potential in supporting real-world applications.

The composition layouts generated by the Scene Composer in the first step of Sketchforme are represented as bounding boxes of individual objects in the scene. While training the Scene Composer’s GMMs directly maximizes the log-likelihood of the data, we can further evaluate the model’s performance by visualizing and comparing heat-maps created by super-positioning instances of real and generated data.

Because Sketchforme considers the text input when generating composition layouts, we should only compare the generated bounding boxes with bounding boxes from the dataset that are semantically similar to the text input. We obtain semantically similar ground-truth compositions by filtering on the subjects, objects, and predicates of the descriptions. For instance, composition layouts generated from ‘a person riding a horse’ are compared with all actual compositions that have a ‘person’ subject, a riding-related predicate such as ‘on’ or ‘on top of,’ and a ‘horse’ object.

The heat-maps in Figure 4 show the distributions of Sketchforme’s synthetic bounding boxes and the ground-truth bounding boxes from the dataset. From these heat-maps, we obtain a holistic view of the model’s generation performance by visually evaluating the similarity between the heat-maps. We observe similar distributions between the actual relations and the generated composition layouts across all descriptions.

We can further approximate an overlap metric between the distributions using a Monte-Carlo simulation to obtain a quantitative measure of the model’s performance. To estimate the overlap between the generated data distribution and the dataset’s distribution, we generated 100 composition layouts for each prompt and randomly sampled 1000 data points within the bounding boxes of these layouts. We estimate the overlap by counting the number of data points that lie within the intersections between any generated and ground-truth bounding boxes. We compare Sketchforme’s performance with both a heuristic-based bounding box generator and a naive random bounding box generator. The heuristic-based generator simply places the second bounding box below the first for prompts with ‘above’-related predicates, and vice versa; the random generator samples the values that describe the bounding boxes from uniform distributions, serving as a naive baseline. Table 1 shows the percentage of the 1000 data points that lie in the intersections. The overlap between real data and Sketchforme-synthesized data is higher than that of both the heuristic-based generator and the random generator by a large margin, confirming our qualitative visual inspection of the heat-maps.
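The following sketch illustrates one way to compute this Monte-Carlo overlap estimate, assuming boxes are (x, y, w, h) tuples; the function names are our own.

```python
import random

# Hypothetical Monte-Carlo overlap estimate: sample points uniformly inside
# each generated box and count the fraction that also falls inside any
# ground-truth box.

def contains(box, px, py):
    x, y, w, h = box
    return x <= px <= x + w and y <= py <= y + h

def overlap_metric(generated_boxes, ground_truth_boxes, n_points=1000):
    hits = total = 0
    for x, y, w, h in generated_boxes:
        for _ in range(n_points):
            px, py = random.uniform(x, x + w), random.uniform(y, y + h)
            total += 1
            if any(contains(gt, px, py) for gt in ground_truth_boxes):
                hits += 1
    return hits / total
```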

Figure 4: Heat-maps generated by super-positioning generated and Visual Genome (ground-truth) data. Each horizontal pair of heat-maps corresponds to one object under a description.
Description Sketchforme Heuristics Random
a dog on a chair 89.1% 64.4% 61.6%
an elephant under a tree 68.4% 40.3% 30.6%
a person riding a horse 94.0% 57.7% 51.5%
a boat under a bridge 31.8% 15.0% 6.85%
Table 1: Overlap metric from Monte-Carlo simulations for each description, between real data and Sketchforme-generated/heuristic-generated/random data.

The main addition of Sketchforme to the original Sketch-RNN model is an additional input that allows the Object Sketcher to generate sketches based on target aspect ratios $r$ of completed sketches. We evaluate this approach by generating sketches of various aspect ratios. The Object Sketcher is able to adhere to input aspect ratios and generate individual object sketches coherent with those ratios. As shown in Figure 5, trees generated with wide aspect ratios are perceived as significantly shorter than trees generated with tall, narrow ratios.

Figure 5: Generated sketches of trees with various aspect ratios by the extended Sketch-RNN model in Sketchforme.

Combining the composition layouts and the object sketch generation model, Sketchforme generates complete scene sketches directly from text descriptions. Several examples are shown in Figure 6. In these figures, sketches that correspond to ‘a boat under a bridge’ consist of small boats under bridges, whereas using ‘an apple on a tree’ as the input creates sketches with small apples on large trees that follow the actual sizes and proportions of the objects. Moreover, Sketchforme is able to generalize to the novel concept of ‘a cat on top of a horse,’ even though the only relation involving a cat and a horse in the Visual Genome dataset on which the model was trained is ‘a horse facing a cat.’ The sizes of the cats and horses in these sketches are in proportion to their actual sizes, and the cat is adequately placed on the back of the horse.

Figure 6: Complete scene sketches generated by Sketchforme. Sketchforme is able to generalize to novel concepts such as ‘a cat on top of a horse.’

Sketchforme’s high-level goal is to augment users’ ability to communicate in sketches by generating realistic, plausible and coherent sketches for users to interact with and reference in their learning and communication processes. To complement the quantitative and qualitative evaluations of the sketches, we conducted user studies on Amazon Mechanical Turk (AMT) to gauge human subjects’ opinions on the realism of the sketches and their ability to convey the descriptions used to generate them.

We recruited 51 human subjects on AMT and asked them to each review 50 sketches generated by either humans or Sketchforme. These 50 sketches were generated from five descriptions. The human-generated sketches were obtained from a separate AMT task prior to this user study, based on Quick, Draw! [?], and are shown in Figure 7. In this study, subjects are provided with the complete sketched scene and the description the scene is based on. Subjects are required to respond to the following questions:

  1. Do you think this sketch was generated by a computer (AI) or a human?

  2. On a scale of 1-5 (1 = the description is conveyed very poorly, 5 = the description is conveyed very well), how well do you think the message is conveyed by the sketch?

The subjects are given 10 sketches as trial questions with answers to (1) provided to them at the beginning of the task. After completing the trial tasks, the subjects’ answers to the remaining 40 sketches are aggregated as the actual study result. This study protocol is similar to perception studies commonly used to evaluate synthetic visual content generation techniques in the deep learning community [?]. In addition, we collected comments from the users (if any) and their perceived overall difficulty of the task at the end of the task.

Figure 7: Samples of sketches produced by humans and Sketchforme, used in the AMT user study.

The first question probes the realism of the sketches with a Turing-test-style question asking the subjects to determine whether the sketches were created by humans. Subjects on average considered 64.6% of the human-generated sketches to be generated by humans, while they considered 36.5% of Sketchforme-generated sketches to be generated by humans, as shown in Figure 8. Although the percentage of Sketchforme-generated sketches considered human-generated is statistically significantly lower (paired t-test) than that of human-generated sketches, several participants commented that it was difficult to distinguish between human-generated and Sketchforme-generated sketches. P2 mentioned that they "really couldn’t tell the difference in most images." P6 commented that they "didn’t know if it was human or a computer." These results demonstrate Sketchforme’s potential for generating realistic sketched scenes.

We hypothesize one of the possible reasons for the lower percentage of Sketchforme-generated sketches to be considered as human-drawn is that the curves of the synthetic sketches are in general less jittery than human-drawn sketches. We suggest future work explore introducing stroke variation to generate more realistic sketches.

Figure 8: Percentage of sketches considered by users to be human-generated. On average, 64.6% of human-generated sketches are perceived as human-generated, while 36.5% of Sketchforme-generated sketches are perceived as human-generated.

The results for the second question reflect the ability of the sketches to communicate their underlying descriptions. Human-generated sketches achieved a higher average score than Sketchforme-generated sketches overall, as shown in Figure 9. Although the Sketchforme-generated sketches achieved lower scores overall, they achieved statistically better average scores (paired t-test) for two of the descriptions: ‘a boat under a bridge’ and ‘an airplane in front of a mountain.’ There was also no significant difference between the scores of human- and Sketchforme-generated sketches based on ‘a cat on top of a horse.’ This shows the competitive performance of Sketchforme-generated sketches in communicating underlying descriptions for some scenes.

Figure 9: Average scores for how well the sketches convey their descriptions. Sketchforme-generated sketches perform better than human-generated sketches for ‘a boat under a bridge’ and ‘an airplane in front of a mountain.’

In this section, we explore several applications that can benefit from Sketchforme’s ability to synthesize compelling sketches from natural language descriptions.

Sketches have been shown to improve memory [?]. As language learning is a memory-intensive task, Sketchforme could support language education applications based on sketches. These sketches can potentially create engaging and effective learning processes and avoid rote learning.

To explore Sketchforme’s potential for supporting language learning, we built a basic language-learning application that aims to teach learners a translation task from German to English. In this application, learners are presented with a German phrase and asked to translate it into English in the form of multiple choices, similar to learning term definitions from flash-cards. The application also implements the Leitner system [?] with three bins, repeating most frequently the phrases that learners make the most mistakes on. Under this system, phrases move between bins depending on the participants’ familiarity with the translations.
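A minimal sketch of such a three-bin Leitner scheduler appears below; the class and method names are hypothetical, and the promotion/demotion policy follows the description above (mistakes send a phrase back to the most frequently reviewed bin).

```python
# Hypothetical three-bin Leitner scheduler: a correct answer promotes a
# phrase toward the last bin, a mistake demotes it to bin 0, and the
# lowest (most-missed) non-empty bin is always reviewed first.

class LeitnerDeck:
    def __init__(self, phrases, n_bins=3):
        self.bins = [list(phrases)] + [[] for _ in range(n_bins - 1)]

    def next_phrase(self):
        for bin_ in self.bins[:-1]:   # review unmastered phrases first
            if bin_:
                return bin_[0]
        return None                   # all phrases are in the last bin

    def record_answer(self, phrase, correct):
        for i, bin_ in enumerate(self.bins):
            if phrase in bin_:
                bin_.remove(phrase)
                target = min(i + 1, len(self.bins) - 1) if correct else 0
                self.bins[target].append(phrase)
                return

    def mastered(self):
        # Training phase ends once every phrase reaches the last bin.
        return all(not b for b in self.bins[:-1])
```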

We gathered 10 pairs of German-English sentences from a native German speaker and formed two sets of five translations each. In addition, distractor English sentences were added as the other choices in the multiple-choice test. We deployed this application on AMT to test the improvement in learning performance from presenting Sketchforme-generated sketches along with the phrases. The full application, with the sketches presented to users, is shown in Figure 10.

Figure 10: User interface for the language learning application powered by Sketchforme. Sketches significantly reduced the time taken to achieve a similar learning outcome.

The study consists of a training phase and a testing phase for each participant. In the training phase, participants are presented with the correct answers after answering each question. A participant can only advance to the next phase after answering all translations correctly in consecutive order, according to the Leitner system. In the testing phase, participants are given one chance to provide their answers to all translations without seeing the correct answers. The participants are divided into two conditions: a ‘control’ group that only receives phrases on their interface during training, and a ‘treatment’ group that receives both phrases and sketches generated by Sketchforme during the training phase. Both groups receive only the phrases during the testing phase. Moreover, we alternate our two sets of translations between the training and testing phases, so that participants do not get consecutive training and testing phases for the same set of descriptions.

The performance of the participants during the study is monitored with multiple analytical metrics, including the completion time of each phase and scores on the testing phase. At the end of the study, we also provide surveys for them to rate the difficulty of the task and the usefulness of the sketches (if applicable) on five-point Likert scales, and ask them to provide any additional suggestions for the interface.

We recruited 38 participants on AMT. While we did not see significant differences (unpaired t-test) in the correctness of answers during the testing phase between the ‘control’ and ‘treatment’ groups, we discovered that the time taken to complete the learning task was significantly less (unpaired t-test) for the ‘treatment’ group than for the ‘control’ group, as shown in Figure 11. Moreover, from the post-study survey, we discovered that the ‘treatment’ group in general found the sketches helpful for learning.

Figure 11: Average time taken to complete the language learning task in each group. With Sketchforme-generated sketches, participants took significantly less time to complete the study.

As Sketchforme is an automated system capable of generating sketches from free-form text descriptions, and given these promising results on sketch-assisted language learning, we envision Sketchforme supporting and improving large-scale language learning applications in the future.

We designed Sketchforme to support interactive sketching systems using a sequential architecture and a multi-step generation process. To demonstrate Sketchforme’s capability of supporting interactive human-in-the-loop sketching applications, we built a prototype of an intelligent sketching assistant reflective of two potential use-cases:

As Sketchforme’s Scene Composer is a sequential architecture that takes the previous object in the scene to generate the next object, it can complete unfinished user scenes instead of starting from a blank canvas: generation is seeded with both the start token and an existing object in the scene created by the user. Figure 12 shows examples of Sketchforme completing a user’s sketch of a horse (step a) by adding the other object involved in the scene.
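A minimal sketch of this seeding strategy is shown below, assuming a `sample_box_fn` that wraps a sampling step like the one sketched earlier and returns the next box plus an end-token flag; all names are hypothetical.

```python
# Hypothetical use of the Scene Composer for sketch completion: seed the box
# sequence with the user's existing object instead of an empty canvas, then
# keep sampling boxes until the end token.

def complete_scene(sample_box_fn, text, user_box, max_objects=10):
    boxes = [user_box]  # start token implicitly precedes the user's object
    while len(boxes) < max_objects:
        box, is_end = sample_box_fn(text, boxes)
        if is_end:
            break
        boxes.append(box)
    return boxes
```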

Sketchforme’s Scene Composer is capable of generating multiple candidate objects at each step while composing the scene layout. As such, users can select their preferred scene layout from multiple candidates. Figure 12 shows multiple candidates proposed by Sketchforme based on a text description (step b). Moreover, since Sketch-RNN is also capable of generating a variety of sketches, users can also select their preferred sketch of each individual object in the scene.

Figure 12: Intelligent sketching assistant powered by Sketchforme. The user can a) create a partial sketch, for which Sketchforme suggests multiple candidate layouts, and b) choose the sketch they prefer for the description ‘a horse under a tree.’

Sketchforme is trained to encode composition principles from a natural image dataset. In natural images, objects might occlude each other, affecting the sizes and positions of the bounding boxes in the composition layouts. Figure 13a shows several boats that were inadequately placed in front of parts of a bridge that should have occluded them. To overcome this limitation, future systems can augment Sketchforme with advanced vision models that determine the layer order in the original natural images. The current Sketchforme system only considers a naive layer order determined by the generation order of the composition layouts.

Moreover, occluded compositions lead to the generation of overlapping sketches, requiring additional research on realistic methods to handle overlapping sketched objects. For instance, the model that generates composition layouts could enforce constraints to avoid overlaps in the sketches, or follow hand-crafted rules to handle overlaps.

Sketchforme utilizes the aspect ratios of bounding boxes as the primary signal to inform the shapes of individual object sketches. These shapes may consequently determine the poses of the sketched objects. Although aspect ratios can be sufficient to determine correct poses for some object classes, such as the ‘tree’ class, they may be a weak signal for others, suggesting incoherent perspectives or partial sketches such as the examples shown in Figure 13b, where only the faces of the elephants were sketched due to the aspect ratios provided to the extended Sketch-RNN model, which is inappropriate for composing sketched scenes. To mitigate this limitation, future work should model the poses of objects in sketches and natural images to augment the composition knowledge of the models, for example by incorporating complete masks of the objects.

Figure 13: Limitations of Sketchforme’s sketching process. In a) Occluded Objects, the boat is significantly occluded by the bridge, which affects the quality of the generated sketches. In b) Incoherent Poses, the elephant was given a square bounding box, guiding the system to sketch only the elephant’s face, which is inappropriate for the scene.

With the highly interactive media of sketching and language, combined with Sketchforme’s high-level and low-level understanding of each element of the sketched scene, we believe future work should explore conversational interfaces for generating sketches and interactive tutorial systems that guide users to create sketches coherent to text descriptions. Moreover, since the Object Sketcher in Sketchforme’s generation process is capable of completing partial sketches created by users, Sketchforme can suggest possible strokes following incomplete user sketches at the object level, which can be useful in sketch education applications.

The unique interplay between natural language and sketches embodied by Sketchforme creates possibilities of building new applications that utilize the interactive properties of sketching and language. In this paper, we explored the capability of Sketchforme in supporting basic language learning. We believe future work could explore other domains such as science and engineering education as text-annotated sketches are frequently used in these domains.

The current sketches generated by Sketchforme are binary sketch strokes without colors or animations. Future work should explore colored and/or animated sketches to enable richer user experiences. For instance, the natural image datasets that Sketchforme used to train models in the first step of sketch-generation process can be used to determine possible colors of the sketched objects.

This paper presents Sketchforme, a novel sketching system capable of generating abstract scene sketches that involve multiple objects based on natural language descriptions. Sketchforme adopts a novel two-step neural-network-based approach: the Scene Composer obtains high-level understanding of layouts of sketched scenes, and the Object Sketcher obtains low-level understanding of sketching individual objects. These models can be trained without text-annotated datasets of sketched scenes. In the user study evaluating the expressiveness and realism of sketches generated by Sketchforme, human subjects considered Sketchforme-generated sketches more expressive than human-generated sketches for two of the five seeded descriptions. They also considered 36.5% of these sketches to be generated by humans. The sketches generated by Sketchforme significantly improved a sketch-assisted language learning system and enabled compelling intelligent features of a sketching assistant. Sketchforme possesses the potential to support interactive applications that utilize both sketches and natural language as interaction media, and afford large-scale applications in sketching, language education and beyond.
