Sketchformer: Transformer-based Representation for Sketched Structure
Sketchformer is a novel transformer-based representation for encoding free-hand sketches input in vector form, i.e. as a sequence of strokes. Sketchformer effectively addresses multiple tasks: sketch classification, sketch based image retrieval (SBIR), and the reconstruction and interpolation of sketches. We report several variants exploring continuous and tokenized input representations, and contrast their performance. Our learned embedding, driven by a dictionary learning tokenization scheme, yields state of the art performance in classification and image retrieval tasks when compared against baseline representations driven by LSTM sequence-to-sequence architectures: SketchRNN and its derivatives. We show that sketch reconstruction and interpolation are improved significantly by the Sketchformer embedding for complex sketches with longer stroke sequences.
Sketch representation and interpretation remains an open challenge, particularly for complex and casually constructed drawings. Yet the ability to classify, search, and manipulate sketched content remains attractive as gesture and touch interfaces reach ubiquity. Advances in recurrent network architectures within language processing have recently inspired sequence modeling approaches to sketch (e.g. SketchRNN) that encode sketch as a variable length sequence of strokes, rather than in a rasterized or ‘pixel’ form. In particular, long short-term memory (LSTM) networks have shown significant promise in learning search embeddings [32, 5] due to their ability to model higher-level structure and temporal order versus convolutional neural networks (CNNs) on rasterized sketches [2, 17, 6, 21]. Yet the limited temporal extent of LSTM restricts the structural complexity of sketches that may be accommodated in sequence embeddings. In the language modeling domain, this shortcoming has been addressed through the emergence of Transformer networks [8, 7, 28], in which slot masking enhances the ability to learn longer-term temporal structure in the stroke sequence.
This paper proposes Sketchformer, the first Transformer based network for learning a deep representation for free-hand sketches. We build on the language modeling Transformer architecture of Vaswani et al. to develop several variants of Sketchformer that process sketch sequences in continuous and tokenized forms. We evaluate the efficacy of each learned sketch embedding for common sketch interpretation tasks. We make three core technical contributions:
1) Sketch Classification. We show that Sketchformer driven by a dictionary learning tokenization scheme outperforms state of the art sequence embeddings for sketched object recognition over QuickDraw!, the largest and most diverse public corpus of sketched objects.
2) Generative Sketch Model. We show that for more complex, detailed sketches comprising lengthy stroke sequences, Sketchformer improves generative modeling of sketch -- demonstrated by higher fidelity reconstruction of sketches from the learned embedding. We also show that for sketches of all complexities, interpolation in the Sketchformer embedding is stable, generating more plausible intermediate sketches for both inter- and intra-class blends.
3) Sketch based Image Retrieval (SBIR). We show that Sketchformer can be unified with a raster embedding to produce a search embedding for SBIR (after the LSTM approach of ) to deliver improved precision over a large photo corpus (Stock10M).
These enhancements to sketched object understanding, generative modeling and matching demonstrated for a diverse and complex sketch dataset suggest Transformer as a promising direction for stroke sequence modeling.
2 Related Work
Representation learning for sketch has received extensive attention within the domain of visual search. Classical sketch based image retrieval (SBIR) techniques explored spectral, edge-let based, and sparse gradient features, the latter building upon the success of dictionary learning based models (e.g. bag of words) [25, 4, 22]. With the advent of deep learning, convolutional neural networks (CNNs) were rapidly adopted to learn search embeddings. Triplet loss models are commonly used for visual search in the photographic domain [29, 19, 11], and have been extended to SBIR. Sangkloy et al. used a three-branch CNN with triplet loss to learn a general cross-domain embedding for SBIR. Fine-grained (within-class) SBIR was similarly explored by Yu et al. Qi et al. instead use contrastive loss to learn correspondence between sketches and pre-extracted edge maps. Bui et al. [1, 2] perform cross-category retrieval using a triplet model, and combined their technique with a learned model of visual aesthetics  to constrain SBIR using aesthetic cues. A quadruplet loss was proposed by  for fine-grained SBIR. The generalization of sketch embeddings beyond training classes has also been studied [3, 14], and parameterized for zero-shot learning . Such concepts were later applied to sketch-based shape retrieval tasks . Variants of CycleGAN  have also been shown to be useful as generative models for sketch . Sketch-A-Net was a seminal work for sketch classification that employed a CNN with large convolutional kernels to accommodate the sparsity of stroke pixels . Recognition of partial sketches has also been explored by . Wang et al. proposed sketch classification by sampling unordered points of a sketch image to learn a canonical order.
All the above works operate over rasterized sketches, e.g. converting the captured vector representation of a sketch (as a sequence of strokes) to pixel form, discarding the temporal order of strokes and requiring the network to recover higher-level spatial structure. Recent SBIR work has begun to directly input a vector (stroke sequence) representation for sketches , notably SketchRNN, an LSTM based sequence-to-sequence (seq2seq) variational autoencoder proposed by Eck et al., trained on the largest public sketch corpus ‘QuickDraw!’ . The SketchRNN embedding was incorporated in a triplet network by Xu et al. to search for sketches using sketches. A variation using cascaded attention networks was proposed by  to improve vector sketch classification over Sketch-A-Net. Later, LiveSketch  extended SketchRNN to a triplet network to perform SBIR over tens of millions of images, harnessing the sketch embedding to suggest query improvements and guide the user via relevance feedback. The limited temporal scope of LSTM based seq2seq models can prevent such representations from modeling long, complex sketches, a problem mitigated by our Transformer based model, which builds upon the success shown by such architectures for language modeling [28, 7, 8]. Transformers encode long-term temporal dependencies by modeling direct connections between data units. The temporal range of such dependencies was increased via Transformer-XL , and BERT  recently set new state-of-the-art performance on sentence classification and sentence-pair regression tasks using a cross-encoder. Recent work explores transformers beyond sequence modeling, e.g. for 2D images . Our work is the first to apply these insights to the problem of sketch modeling, incorporating the Transformer architecture of Vaswani et al. to deliver a multi-purpose embedding that exceeds the state of the art for several common sketch representation tasks.
3 Sketch Representation
We propose Sketchformer, a multi-purpose sketch representation learned from stroke sequence input. In this section we discuss the pre-processing steps, the adaptations made to the core architecture proposed by Vaswani et al., and the three application tasks.
3.1 Pre-processing and Tokenization
Following Eck et al. we simplify all sketches using the RDP algorithm  and normalize stroke length. Sketches for all our experiments are drawn from QuickDraw50M  (see Sec. 4 for dataset partitions).
1) Continuous. QuickDraw50M sketches are released in the ‘stroke-3’ format, where each point stores its position (Δx, Δy) relative to the previous point together with its binary pen state. To also include the ‘end of sketch’ state, the stroke-5 format is often employed: (Δx, Δy, p1, p2, p3), where the pen states p1 (draw), p2 (lift) and p3 (end) are mutually exclusive . Our experiments with continuous sketch modeling use the ‘stroke-5’ format.
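As an illustration, the stroke-3 to stroke-5 conversion can be sketched as follows (a minimal example with our own function and variable names; the padding layout is an assumption, not the paper's exact implementation):

```python
import numpy as np

def stroke3_to_stroke5(stroke3, max_len):
    """Convert an (N, 3) stroke-3 array [dx, dy, pen_lift] to the
    (max_len, 5) stroke-5 format [dx, dy, p1, p2, p3], where p1 (draw),
    p2 (lift) and p3 (end-of-sketch) are mutually exclusive pen states."""
    n = len(stroke3)
    assert n <= max_len - 1, "sketch too long for target length"
    out = np.zeros((max_len, 5), dtype=np.float32)
    out[:n, :2] = stroke3[:, :2]      # relative pen motion (dx, dy)
    out[:n, 2] = 1.0 - stroke3[:, 2]  # p1: pen touching paper
    out[:n, 3] = stroke3[:, 2]        # p2: pen lifted
    out[n:, 4] = 1.0                  # p3: end-of-sketch padding
    return out
```

Each output row carries exactly one active pen state, matching the mutual exclusivity described above.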
2) Dictionary learning. We build a dictionary of code words to model the relative pen motion, i.e. (Δx, Δy). We randomly sample 100k sketched pen movements from the training set for clustering via K-means. We allocate 20% of sketch points to sampling inter-stroke transitions, i.e. the relative transition when the pen is lifted, to balance against the more common within-stroke transitions. Each transition point is then assigned to its nearest code word, resulting in a sequence of discrete tokens. We also include 4 special tokens: a Start of Sketch (SOS) token at the beginning of every sketch, an End of Sketch (EOS) token at the end, a Stroke End Point (SEP) token inserted between strokes (indicating pen lifting), and a padding (PAD) token to pad the sketch to a fixed length.
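The assignment step can be sketched as below. The codebook itself would be learned offline (e.g. K-means over sampled (Δx, Δy) movements); here we assume it is given as an array. Token ids and function names are our own illustrative choices:

```python
import numpy as np

# Special tokens, following the scheme in the text (the ids are our choice).
SOS, EOS, SEP, PAD = 0, 1, 2, 3
N_SPECIAL = 4

def tokenize_dict(stroke3, codebook, max_len):
    """Map each relative pen movement (dx, dy) to its nearest codeword,
    inserting SEP at pen lifts and padding to a fixed length."""
    tokens = [SOS]
    for dx, dy, lift in stroke3:
        d = np.linalg.norm(codebook - np.array([dx, dy]), axis=1)
        tokens.append(int(np.argmin(d)) + N_SPECIAL)
        if lift:                      # pen lifted: stroke boundary
            tokens.append(SEP)
    tokens.append(EOS)
    tokens += [PAD] * (max_len - len(tokens))
    return tokens[:max_len]
```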
3) Spatial Grid. The sketch canvas is first quantized into square cells; each cell is represented by a token in our dictionary. Given an absolute sketch point, we determine which cell contains it and assign that cell’s token to the point. The same four special tokens above are used to complete the sketch sequence.
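A minimal sketch of the grid tokenizer, under the assumption of a square canvas and row-major cell ids (both our choices, not stated in the text):

```python
import numpy as np

SOS, EOS, SEP, PAD = 0, 1, 2, 3
N_SPECIAL = 4

def tokenize_grid(points, lifts, grid=32, canvas=256.0):
    """Assign each absolute point (x, y) on a square canvas to one of
    grid*grid cell tokens, inserting SEP wherever the pen is lifted."""
    tokens = [SOS]
    for (x, y), lift in zip(points, lifts):
        col = min(int(x / canvas * grid), grid - 1)
        row = min(int(y / canvas * grid), grid - 1)
        tokens.append(row * grid + col + N_SPECIAL)
        if lift:
            tokens.append(SEP)
    tokens.append(EOS)
    return tokens
```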
Fig. 2 visualizes sketch reconstruction under the tokenized methods to explore sensitivity to the quantization parameters. Compared with the stroke-5 (continuous) format, the tokenization methods are more compact. Dictionary-learned tokenization (Tok-Dict) can use a small dictionary and is invariant to translation, since it is derived from stroke-3. However, quantization error can accumulate over longer sketches if the dictionary size is too low, shifting the position of strokes closer to the sequence’s end. The spatial-grid tokenization method (Tok-Grid), on the other hand, does not accumulate error but is sensitive to translation and yields a larger vocabulary.
3.2 Transformer Architecture for Sketch
Sketchformer uses the Transformer network of Vaswani et al. We add stages (e.g. self-attention and a modified bottleneck) and adapt parameters in their design to learn a multi-purpose representation for stroke sequences, rather than language. A transformer network consists of encoder and decoder blocks, each comprising several layers of multi-head attention followed by a feed-forward network. Fig. 1 illustrates the architecture, with dotted lines indicating re-use of architecture stages from . In Fig. 4 we show how our learned embedding is used across multiple applications. Compared to  we use 4 MHA blocks versus 6, and a feed-forward dimension of 512 instead of 2048. Unlike traditional sequence modeling methods (RNN/LSTM), which learn the temporal order of the current time step from previous steps (or future steps, in bidirectional encoding), the attention mechanism in transformers allows the network to decide which time steps to focus on to improve the task at hand. Each multi-head attention (MHA) layer is formulated as:
$$\mathrm{SHA}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^T}{\sqrt{d_k}}\right)V$$
$$\mathrm{MHA}(Q, K, V) = \big[\mathrm{SHA}(QW^Q_1, KW^K_1, VW^V_1); \dots; \mathrm{SHA}(QW^Q_h, KW^K_h, VW^V_h)\big]W^O$$

where $K$, $Q$ and $V$ are the respective Key, Query and Value inputs to the single-head attention (SHA) module. This module computes the similarity between pairs of Query and Key features, normalizes those scores, and finally uses them as a projection matrix for the Value features. The multi-head attention (MHA) module concatenates the output of multiple single heads and projects the result to a lower dimension. $\sqrt{d_k}$ is a scaling constant and $W^Q_i$, $W^K_i$, $W^V_i$, $W^O$ are learnable weight matrices.
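The attention computation described above can be sketched in NumPy as follows (dimensions and weight layouts are illustrative; a real implementation would batch these operations):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sha(Q, K, V):
    """Scaled dot-product (single-head) attention: softmax(QK^T/sqrt(d_k))V."""
    d_k = K.shape[-1]
    scores = softmax(Q @ K.T / np.sqrt(d_k))
    return scores @ V

def mha(Q, K, V, Wq, Wk, Wv, Wo):
    """Multi-head attention: project inputs per head, run SHA on each head,
    concatenate the head outputs and project back with Wo."""
    heads = [sha(Q @ wq, K @ wk, V @ wv) for wq, wk, wv in zip(Wq, Wk, Wv)]
    return np.concatenate(heads, axis=-1) @ Wo
```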
The MHA output is fed to a position-wise feed forward network (FFN), which consists of two fully connected layers with ReLU activation. The MHA-FFN blocks ($T_i$) are the basis of the encoder side of our network ($E$):

$$H = \mathrm{LN}\big(X + \mathrm{MHA}(X, X, X)\big), \quad T_i(X) = \mathrm{LN}\big(H + \mathrm{FFN}(H)\big)$$
$$E(X) = T_N(T_{N-1}(\dots T_1(X)))$$

where $\mathrm{LN}(X)$ indicates layer normalization over $X$ and $N$ is the number of MHA-FFN units.
The decoder takes as inputs the encoder output and the target sequence in an auto-regressive fashion. In our case we are learning a transformer autoencoder, so the target sequence is also the input sequence shifted forward by 1:

$$Y = D\big(E(X), X_s\big)$$

where $E(X)$ is the encoder output and $X_s$ is the shifted auto-regressive version of the input sequence $X$.
The conventional transformer is designed for language translation and thus does not provide a feature embedding as required in Sketchformer (the output of $E$ is also a sequence of vectors of the same length as $X$). To learn a compact representation for sketch we propose to apply self-attention on the encoder output, inspired by :
$$\alpha = \mathrm{softmax}\big(V^{T}\tanh\big(K\,E(X)^{T} + b\big)\big), \qquad F = \sum_{t}\alpha_t E(X)_t$$

which is similar to SHA, except that the Key matrix $K$, Value vector $V$ and bias $b$ are now trainable parameters. This self-attention layer learns a weight vector $\alpha$ describing the importance of each time step in sequence $E(X)$, which is then accumulated to derive the compact embedding $F$. On the decoder side, $F$ is passed through a FFN to resume the original shape of $E(X)$. These are the key novel modifications to the original Transformer architecture of Vaswani et al. (beyond the above-mentioned parameter changes).
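A hedged NumPy sketch of this pooling step, assuming particular shapes for the trainable parameters (the text does not spell these out):

```python
import numpy as np

def self_attention_pool(H, K, V, b):
    """Collapse a (T, d) encoder output H into a single d-dim embedding.
    A trainable key matrix K (d, d), value vector V (d,) and bias b score
    the importance of each time step; the softmax-normalised scores then
    weight a sum over time. (Shapes here are our assumption.)"""
    scores = np.tanh(H @ K + b) @ V   # (T,) importance per time step
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()       # softmax over time steps
    return alpha @ H                  # convex combination of rows -> (d,)
```

Because the output is a convex combination of the per-step encoder features, the pooled embedding always lies within the range of the input features.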
We also modify how masking works on the decoder. The Transformer uses a padding mask to stop attention blocks from assigning weight to out-of-sequence points. Since we want a meaningful embedding for reconstruction and interpolation, we remove this mask from the decoder, forcing our transformer to learn reconstruction without prior knowledge of the sequence length, using only the embedding representation.
We employ two losses in training Sketchformer. A classification (softmax) loss is connected to the sketch embedding to preserve semantic information, while a reconstruction loss ensures the decoder can reconstruct the input sequence from its embedding. If the input sequence is continuous (i.e. stroke-5), the reconstruction loss consists of a regression term modeling the relative transitions and a 3-way classification term modeling the pen states. Otherwise, the reconstruction loss uses softmax over the dictionary of sketch tokens, as per a language model. We found these losses simple yet effective in learning a robust sketch embedding. Fig. 3 visualizes the learned embedding for each of the three pre-processing variants, alongside that of a state of the art sketch encoding model using stroke sequences .
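For the continuous variant, the combined reconstruction loss described above might look as follows (an unweighted sketch; the exact regression loss and term weighting are our assumptions, not reproduced from the text):

```python
import numpy as np

def stroke5_recon_loss(pred_xy, pred_state_logits, target):
    """Reconstruction loss for the continuous (stroke-5) variant: a squared
    error term on the relative (dx, dy) transitions plus a 3-way
    cross-entropy on the one-hot pen states in target[:, 2:]."""
    l2 = np.mean(np.sum((pred_xy - target[:, :2]) ** 2, axis=1))
    # numerically stable log-softmax over the three pen-state logits
    logits = pred_state_logits - pred_state_logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    ce = -np.mean(np.sum(target[:, 2:] * logp, axis=1))
    return l2 + ce
```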
3.3 Cross-modal Search Embedding
To use our learned embedding for SBIR, we follow the joint embedding approach first presented in  and train an auxiliary network that unifies the vector (sketch) and raster (image corpus) representations into a common subspace.
This auxiliary network is composed of four fully connected layers (see Fig. 4) with ReLU activations. These are trained within a triplet framework and have input from three pre-trained branches: an anchor branch that models vector representations (our Sketchformer), plus positive and negative branches extracting representations from raster space.
The first two fully connected layers are domain-specific; we call these sets $F_v$ and $F_r$, referring to vector-specific and raster-specific layers. The final two layers are shared between domains; we refer to this set as $F_s$. Thus the end-to-end mapping from vector sketch and raster sketch/image to the joint embedding is:

$$z_v = F_s\big(F_v(V(x_v))\big), \qquad z_r = F_s\big(F_r(R(x_r))\big)$$

where $x_v$ and $x_r$ are the input vector sketches and raster images respectively, and $z_v$ and $z_r$ their corresponding representations in the common embedding. $V(\cdot)$ is the network that models vector representations and $R(\cdot)$ is the one for raster images. In the original LiveSketch , $V$ is a SketchRNN-based model, while we employ our multi-task Sketchformer encoder instead. For $R$ we use the same off-the-shelf GoogLeNet-based network, pre-trained on a joint embedding task (from ).
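The two-branch mapping can be sketched as a pair of small MLPs sharing their final layers (layer sizes, names and the plain-NumPy formulation are illustrative assumptions):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def mlp(x, layers):
    """Apply a list of (W, b) fully connected layers with ReLU."""
    for W, b in layers:
        x = relu(x @ W + b)
    return x

def joint_embed(feat, domain_layers, shared_layers):
    """Map a pre-trained branch feature (vector-sketch or raster) through
    two domain-specific FC layers, then two shared FC layers, yielding a
    point in the common search embedding."""
    return mlp(mlp(feat, domain_layers), shared_layers)
```

Because the shared layers are common to both branches, features from either domain land in the same embedding space with the same dimensionality.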
The training is performed using a triplet loss regularized with a classification task. Training requires an aligned sketch and image dataset, i.e. a sketch set and image set that share the same category list. This is not the case for QuickDraw, which is a sketch-only dataset without a corresponding image set. Again following , we use the raster sketch as a medium to bridge vector sketch with raster image. The off-the-shelf raster network (from ) was trained to produce a joint embedding model unifying raster sketch and raster image; this allowed the authors to train the domain-specific and shared layer sets using vector and raster versions of sketches only. By following the same procedure, we eliminate the need for an aligned image set for QuickDraw, as our network never sees an image feature during training.
The training is implemented in two phases. In phase one, the anchor and positive samples are vector and raster forms of random sketches in the same category, while the raster input of the negative branch is sampled from a different category. In phase two, we sample hard negatives from the same category as the anchor vector sketch and choose the raster form of the exact instance of the anchor sketch for the positive branch. The triplet loss maintains a margin between the anchor-positive and anchor-negative distances:

$$L_{\mathrm{tri}}(a, p, n) = \max\big(0,\; m + \|a - p\|_2 - \|a - n\|_2\big)$$

where $a$, $p$ and $n$ are the embedded anchor, positive and negative samples, and the margin $m$ takes a different value in each phase.
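Applied to already-embedded samples, the triplet loss is a one-liner (the margin value below is illustrative; the per-phase values are not reproduced here):

```python
import numpy as np

def triplet_loss(a, p, n, margin=0.2):
    """Triplet margin loss on embedded anchor (a), positive (p) and
    negative (n) vectors: max(0, margin + d(a,p) - d(a,n))."""
    d_ap = np.linalg.norm(a - p)
    d_an = np.linalg.norm(a - n)
    return max(0.0, margin + d_ap - d_an)
```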
4 Experiments and Discussion
We evaluate the performance of the proposed transformer embeddings for three common tasks: sketch classification; sketch reconstruction and interpolation; and sketch based image retrieval (SBIR). We compare against two baseline sketch embeddings for encoding stroke sequences: SketchRNN  (also used for search in ) and LiveSketch . We evaluate using sketches from QuickDraw50M  and a large corpus of photos (Stock10M).
QuickDraw50M  comprises over 50M sketches of 345 object categories, crowd-sourced within a gamified context that encouraged casual sketches drawn at speed. Sketches are often messy and complex in their structure, consistent with tasks such as SBIR. QuickDraw50M captures sketches as stroke sequences, in contrast to earlier raster-based and less category-diverse datasets such as TU-Berlin and Sketchy. We sample 2.5M sketches randomly with even class distribution from the public QuickDraw50M training partition to create our training set (QD-2.5M), and use the public test partition of QuickDraw50M (QD-862k), comprising 2.5k sketches per category, to evaluate our trained models. For SBIR and interpolation experiments we sort QD-862k by sequence length and sample three datasets (QD345-S, QD345-M, QD345-L) at centiles 10, 50 and 90 respectively, to create sets of short, medium and long stroke sequences. Each of these three datasets samples one sketch per class at random from the centile, yielding three evaluation sets of 345 sketches each. We sampled an additional query set QD345-Q for use in sketch search experiments, using the same 345 sketches as LiveSketch . The median stroke-sequence lengths of QD345-S, QD345-M and QD345-L are 30, 47 and 75 respectively (after simplification via RDP ).
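The centile-based split can be sketched as follows (our own helper, not code from the paper; it returns the sequence length found at each requested centile of the sorted corpus):

```python
import numpy as np

def centile_samples(lengths, centiles=(10, 50, 90)):
    """Return the sequence length at each given centile of a corpus,
    mirroring the short/medium/long split described above."""
    order = np.sort(np.asarray(lengths))
    idx = [min(int(c / 100.0 * len(order)), len(order) - 1) for c in centiles]
    return [int(order[i]) for i in idx]
```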
Stock67M is a diverse, unannotated corpus of photos used in prior SBIR work  to evaluate large-scale SBIR retrieval performance. We sample 10M of these images at random for our search corpus (Stock10M).
4.1 Evaluating Sketch Classification
We evaluate the class discrimination of the proposed sketch embedding by attaching dense and softmax layers to the transformer encoder stage and training a 345-way classifier on QD-2.5M. Table 1 reports the classification performance over QD-862k for each of the three proposed transformer embeddings, alongside two LSTM baselines -- the SketchRNN  and LiveSketch  variational autoencoder networks. Whilst all transformers outperform the baselines, the tokenized variant of the transformer based on dictionary learning (TForm-Tok-Dict) yields the highest accuracy. We explore this further by shuffling the order of the sketch strokes and retraining the transformer models from scratch. We were surprised to see comparable performance, suggesting this gain is due to spatial continuity rather than temporal information.
4.2 Reconstruction and Interpolation
We explore the generative power of the proposed embedding by measuring the fidelity with which: 1) encoded sketches can be reconstructed via the decoder to resemble the input; 2) a pair of sketches may be interpolated within, and synthesized from, the embedding. The experiments are repeated for short (QD345-S), medium (QD345-M) and long (QD345-L) sketch complexities. We assess the fidelity of sketch reconstruction and the visual plausibility of interpolations via Amazon Mechanical Turk (MTurk). MTurk workers are presented with a set of reconstructions or interpolations and asked to make a 6-way preference choice: 5 methods and a ‘cannot determine’ option. Each task is presented to five unique workers, and we only include results for which there is worker consensus on the choice.
Reconstruction results are shown in Table 2 and favor the LiveSketch  embedding for short and medium length sequences, with the proposed tokenized transformer (TForm-Tok-Dict) producing better results for more complex sketches, aided by the improved representational power of the transformer for longer stroke sequences. Fig. 6 provides representative visual examples for each sketch complexity.
We explore interpolation in Table 3 blending between pairs of sketches within (intra-) class and between (inter-) class. In all cases we encode sketches separately to the embedding, interpolate via slerp (after [12, 5] in which slerp was shown to offer best performance), and decode the interpolated point to generate the output sketch. Fig. 7 provides visual examples of inter- and intra- class interpolation for each method evaluated. In all cases the proposed tokenized transformer (TForm-Tok-Dict) outperforms other transformer variants and baselines, although the performance separation is narrower for shorter strokes echoing results of the reconstruction experiment. The stability of our representation is further demonstrated via local sampling within the embedding in Fig. 5.
4.3 Cross-modal Matching
We evaluate the performance of Sketchformer for sketch based retrieval of sketches (S-S) and images (S-I).
Sketch2Sketch (S-S) Matching. We quantify the accuracy of retrieving sketches in one modality (raster) given a sketched query in another (vector, i.e. stroke sequence) -- and vice-versa. This evaluates the performance of Sketchformer in discriminating between sketched visual structures invariant to their input modality. Sketchformer is trained on QD-2.5M and we query the test corpus QD-862k using QD345-Q as the query set. We measure overall mean average precision (mAP) for both coarse-grain (i.e. class-specific) and fine-grain (i.e. instance-level) similarity, computed as the mean over queries of each query's average precision. As per , for the former we consider a retrieved record a match if it matches the sketched object class. For the latter, exactly the same single sketch must match (in its different modality). To run raster variants, rasterized versions of QD-862k (for V-R) and of QD345-Q (for R-V) are produced by rendering strokes to a pixel canvas using the CairoSVG Python library. Table 4 shows that for both class and instance level retrieval, the R-V configuration outperforms V-R, indicating a performance gain due to encoding this large search index using the vector representation. In contrast to other experiments reported, the continuous variant of Sketchformer appears slightly preferred, matching better for early-ranked results -- see Fig. 9a for the category-level precision-recall curve. Although the Transformer outperforms RNN baselines by 1-3% in the V-R case, the gain is more limited, and performance over baselines is equivocal when the search index is formed of rasterized sketches.
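The class-level retrieval metric can be sketched as follows: average precision (AP) over one ranked result list, with mAP being the mean of AP over all queries (a standard formulation, shown here for clarity rather than taken from the paper's code):

```python
def average_precision(relevant_flags):
    """AP over a ranked result list, given binary relevance flags
    (class-level: a retrieved item is a hit if its class matches the
    query's). mAP is the mean of this value over all queries."""
    hits, precisions = 0, []
    for rank, rel in enumerate(relevant_flags, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)  # precision at each hit
    return sum(precisions) / len(precisions) if precisions else 0.0
```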
Sketch2Image (S-I) Matching. We evaluate sketch based image retrieval (SBIR) over the Stock10M dataset of diverse photos and artworks, as such data is commonly indexed for large-scale SBIR evaluation [6, 5]. We compare against state of the art SBIR algorithms accepting vector (LiveSketch ) and raster (Bui et al. ) sketched queries. Since no ground-truth annotation is possible for a corpus of this size, we crowd-source per-query annotation via Mechanical Turk (MTurk) for the top-k (k=15) results and compute both mAP% and precision@k curves averaged across all QD345-Q query sketches. Table 5 compares the performance of our tokenized variants to these baselines, alongside associated precision@k curves in Fig. 9b. The proposed dictionary-learned transformer embedding (TForm-Tok-Dict) delivers the best performance (visual results in Fig. 8).
5 Conclusion
We presented Sketchformer, a learned representation for sketches based on the Transformer architecture . Several variants were explored using continuous and tokenized input; a dictionary learning based tokenization scheme delivers performance gains of 6% over previous LSTM autoencoder models (SketchRNN and derivatives). We showed that interpolation within the embedding yields plausible blending of sketches within and between classes, and that reconstruction (auto-encoding) of sketches is also improved for complex sketches. Sketchformer was also shown to be effective as a basis for indexing sketch and image collections for sketch based visual search. Future work could further explore our continuous representation variant, or other variants with a more symmetric encoder-decoder structure. We have demonstrated the potential for Transformer networks to learn a multi-purpose representation for sketch, but believe many further applications of Sketchformer exist beyond the three tasks studied here. For example, fusion with additional modalities might enable sketch driven photo generation  using complex sketches, or fusion with a language embedding for novel sketch synthesis applications.
- footnotetext: *These authors contributed equally to this work
- (2016) Generalisation and sharing in triplet convnets for sketch based visual search. CoRR abs/1611.05301. Cited by: §2.
- (2017) Compact descriptors for sketch-based image retrieval using a triplet loss convolutional neural network. Computer Vision and Image Understanding (CVIU). Cited by: §1, §2, Table 5.
- (2018) Sketching out the details: sketch-based image retrieval using convolutional neural networks with multi-stage regression. Elsevier Computers & Graphics. Cited by: §2, Figure 4, §3.3, §3.3, §4.3.
- (2015) Scalable sketch-based image retrieval using color gradient features. In Proc. ICCV Workshops, pp. 1–8. Cited by: §2.
- (2019) LiveSketch: query perturbations for guided sketch-based visual search. In Proc. CVPR, pp. 1–9. Cited by: §1, §1, §2, §3.2.1, §3.3, §3.3, §3.3, §4.1, §4.2, §4.2, §4.3, §4.3, Table 1, Table 2, Table 3, Table 4, Table 5, §4, §4, §4.
- (2017) Sketching with style: visual search with sketches and aesthetic context. In Proc. ICCV, Cited by: §1, §2, §4.3.
- (2019) Transformer-xl: attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860. Cited by: §1, §2.
- (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, v.1, pp. 4171–4186. Cited by: §1, §2.
- (2019) Doodle to search: practical zero-shot sketch-based image retrieval. In Proc. CVPR, Cited by: §2.
- (1973) Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Cartographica: the international journal for geographic information and geovisualization 10 (2), pp. 112–122. Cited by: §3.1, §4.
- (2016) Deep image retrieval: learning global representations for image search. In Proc. ECCV, pp. 241–257. Cited by: §2.
- (2018) A neural representation of sketch drawings. In Proc. ICLR, Cited by: §1, §2, §3.1, §3.1, §3.3, §4.1, §4.2, Table 1, Table 2, Table 3, §4.
- (2018) Sketch-r2cnn: an attentive network for vector sketch recognition. arXiv preprint arXiv:1811.08170. Cited by: §2.
- (2019) Generalising fine-grained sketch-based image retrieval. In Proc. CVPR, Cited by: §2.
- (2019) GauGAN: semantic image synthesis with spatially adaptive normalization. In ACM SIGGRAPH 2019 Real-Time Live!, pp. 2. Cited by: §5.
- (2019) Image transformer. In Proc. NeurIPS, Cited by: §2.
- (2016) Sketch-based image retrieval via siamese convolutional neural network. In Proc. ICIP, pp. 2460–2464. Cited by: §1, §2.
- The Quick, Draw! Dataset. Note: https://github.com/googlecreativelab/quickdraw-dataset, accessed 2018-10-11. Cited by: §1, §2, §3.1, Table 1, §4, §4.
- (2016) CNN image retrieval learns from BoW: unsupervised fine-tuning with hard examples. In Proc. ECCV, pp. 3–20. Cited by: §2.
- (2018) Learning deep sketch abstraction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8014–8023. Cited by: §2.
- (2016) The sketchy database: learning to retrieve badly drawn bunnies. In Proc. ACM SIGGRAPH, Cited by: §1, §2.
- (2014) Sketch classification and classification-driven analysis using fisher vectors. ACM Transactions on Graphics (TOG) 33 (6), pp. 174. Cited by: §2.
- (2017) Quadruplet networks for sketch-based image retrieval. In Proc. ICMR, Cited by: §2.
- (2016) DeepSketch 2: deep convolutional neural networks for partial sketch recognition. In 2016 14th International Workshop on Content-Based Multimedia Indexing (CBMI), pp. 1–6. Cited by: §2.
- (2003) Video google: a text retrieval approach to object matching in videos. In Proc. ICCV, Cited by: §2.
- (2018) Learning to sketch with shortcut cycle consistency. In Proc. CVPR, Cited by: §2.
- (2015) End-to-end memory networks. In Advances in neural information processing systems, pp. 2440–2448. Cited by: §3.2.
- (2017) Attention is all you need. In Proc. NeurIPS, Cited by: §1, §1, §2, Figure 1, §3.2, §3, §5.
- (2014) Learning fine-grained image similarity with deep ranking. In Proc. CVPR, pp. 1386–1393. Cited by: §2.
- (2018) Sketchpointnet: a compact network for robust sketch recognition. In 2018 25th IEEE International Conference on Image Processing (ICIP), pp. 2994–2998. Cited by: §2.
- (2017) Bam! the behance artistic media dataset for recognition beyond photography. In Proc. ICCV, Cited by: §2.
- (2018) SketchMate: deep hashing for million-scale human sketch retrieval. In Proc. CVPR, Cited by: §1, §2, §4.
- (2018) Sketch-based shape retrieval via multi-view attention and generalized similarity. In 2018 7th International Conference on Digital Home (ICDH), pp. 311–317. Cited by: §2.
- (2016) Sketch me that shoe. In Proc. CVPR, pp. 799–807. Cited by: §2.
- (2016) Sketchnet: sketch classification with web images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1105–1113. Cited by: §2.
- (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proc. ICCV, Cited by: §2.