Deep Painterly Harmonization


Copying an element from a photo and pasting it into a painting is a challenging task. Applying photo compositing techniques in this context yields subpar results that look like a collage — and existing painterly stylization algorithms, which are global, perform poorly when applied locally. We address these issues with a dedicated algorithm that carefully determines the local statistics to be transferred. We ensure both spatial and inter-scale statistical consistency and demonstrate that both aspects are key to generating quality results. To cope with the diversity of abstraction levels and types of paintings, we introduce a technique to adjust the parameters of the transfer depending on the painting. We show that our algorithm produces significantly better results than photo compositing or global stylization techniques and that it enables creative painterly edits that would be otherwise difficult to achieve.




Fujun Luan, Sylvain Paris, Eli Shechtman, Kavita Bala
Cornell University  Adobe Research


Our method automatically harmonizes the compositing of an element into a painting. Given the proposed painting and element on the left, we show the compositing results (cropped for best fit) of unadjusted cut-and-paste, Deep Image Analogy [LYY17], and our method.

1 Introduction

Image compositing is a key operation to create new visual content. It allows artists to remix existing materials into new pieces; artists such as Man Ray and David Hockney have created masterpieces using this technique. Compositing can be used in different contexts. In applications like photo collage, visible seams are desirable. But in others, the objective is to make the compositing inconspicuous, for instance, to add an object into a photograph in a way that makes it look like the object was present in the original scene. Many tools have been developed for photographic compositing, e.g., to remove boundary seams [PGB03], match the color [XADR12], or even match fine texture [SJMP10]. However, there is no equivalent for paintings. If one seeks to add an object into a painting, the options are limited. One can paint the object manually or with a painting engine [CKIW15], but this requires time and skills that few people have. As we shall see, resorting to algorithms designed for photographs produces subpar results because they do not handle the brush texture and abstraction typical of paintings. And applying existing painterly stylization algorithms as is also performs poorly because they are meant for global stylization, whereas we seek a local harmonization of color, texture, and structure properties.

In this paper, we address these challenges and enable one to copy an object in a photo and paste it into a painting so that the composite still looks like a genuine painting in the style of the original painting. We build upon recent work on painterly stylization [GEB16] to harmonize the appearance of the pasted object so that it matches that of the painting. Our strategy is to transfer relevant statistics of neural responses from the painting to the pasted object, with the main contribution being how we determine which statistics to transfer. Akin to previous work, we use the responses of the VGG neural network [SZ14] for the statistics that drive the process. In this context, we show that spatial consistency and inter-scale consistency matter. That is, transferring statistics that come from a small set of regions in the painting yields better results than using many isolated locations. Further, preserving the correlation of the neural responses between the layers of the network also improves the output quality. To achieve these two objectives, we introduce a two-pass algorithm: the first pass achieves coarse harmonization at a single scale. This serves as a starting point for the second pass, which implements a fine multi-scale refinement. The teaser figure (right) shows the results from our approach compared to a related technique.

We demonstrate our approach on a variety of examples. Painterly compositing is a demanding task because the synthesized style is juxtaposed with the original painting, making any discrepancy immediately visible. As a consequence, results from global stylization techniques that may be satisfying when observed in isolation can be disappointing in the context of compositing because the inherent side-by-side comparison with the original painting makes it easy to identify even subtle differences. In contrast, we conducted a user study that shows that our algorithm produces composites that are often perceived as genuine paintings.

1.1 Related Work

Image Harmonization.

The simplest way to blend images is to combine the foreground and background color values using linear interpolation, which is often accomplished using alpha matting [PD84]. Gradient-domain compositing (or Poisson blending) was first introduced by Pérez et al. \shortciteperez2003poisson, which considers the boundary condition for seamless cloning. Xue et al. \shortcitexue2012understanding identified key statistical factors that affect the realism of photo composites, such as luminance, color temperature, saturation, and local contrast, and matched the histograms accordingly. Deep neural networks [ZKSE15, TSL17] further improved color properties of the composite by learning to improve the overall photo realism. Multi-Scale Image Harmonization [SJMP10] introduced smooth histogram and noise matching, which handles fine texture on top of color; however, it does not capture more structured textures like brush strokes, which often appear in paintings. Image Melding [DSB12] combines Poisson blending with patch-based synthesis [BSFG09] in a unified optimization framework to harmonize color and patch similarity. Camouflage Images [CHM10] proposed an algorithm to embed objects into certain locations in cluttered photographs with the goal of making the objects hard to notice. While these techniques are mostly designed with photographs in mind, our focus is on paintings.

Style Transfer using Neural Networks.

Recent work on Neural Style transfer [GEB16] has shown impressive results on transferring the style of an artwork by matching the statistics of layer responses of a deep neural network. However, this technique is sensitive to mismatches in the image content and several approaches have been proposed to address this issue. Gatys et al. \shortciteGatys2017a add the possibility for users to guide the transfer with annotations. In the context of photographic transfer, Luan et al. \shortciteluan2017deep limit mismatches using scene analysis. Li and Wand \shortciteli2016combining use nearest-neighbor correspondences between neural responses to make the transfer content-aware. Feed-forward generators propose fast approximations of the original Neural Style formulations [ULVL16, JAFF16, LW16b]. Odena et al. \shortciteodena2016deconvolution study the filters used in these networks and explain how to avoid the grid-like artifacts produced by some techniques. Recent approaches replace the Gram matrix with matching other statistics of neural responses [HB17, LFY17]. Liao et al. \shortciteliao2017visual further improve the quality of the results by introducing bidirectional dense correspondence field matching. All these methods have in common that they change the style of entire images at once. Our work differs in that we focus on local transfer; we shall see that global methods do not work as well when applied locally.

1.2 Background

Our work builds upon the style transfer technique introduced by Gatys and colleagues \shortcitegatys2015neural (Neural Style) and several additional reconstruction losses proposed later to improve its results. We summarize these techniques below before describing our algorithm in the next section (§ 2).

1.2.1 Style Transfer

Parts of our technique have a similar structure to the Neural Style algorithm which proceeds in three steps.

  1. The input image $I$ and the style image $S$ are processed with the VGG network [SZ14] to produce sets of activation values $A_I$ and $A_S$. Intuitively, these capture the statistics that represent the style of each image.

  2. The style activations are mapped to the input ones. In the original approach by Gatys et al., the entire set of style activations is used. Other options have been later proposed, e.g., using nearest neighbors neural patches [LW16a].

  3. The output image is reconstructed through an optimization process that seeks to preserve the content of the input image while at the same time match the visual appearance of the style image. These objectives are modeled using losses that we describe in more detail in the next section.

Our approach applies this three-step process twice, the main variation being the activation matching step (2). Our first pass uses a matching algorithm designed for robustness to large style differences, and our second pass uses a more constrained matching designed to achieve high visual quality.
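The control flow of this two-pass design can be sketched in a few lines. The names below (`single_pass` and the two mapping strategies) are hypothetical stand-ins for the components detailed in the next section, not the actual implementation:

```python
def two_pass_harmonization(I, M, S, single_pass, independent_map, consistent_map):
    """Run the same single-pass routine twice: a robust coarse pass,
    then a high-quality refinement pass seeded with the first result."""
    intermediate = single_pass(I, M, S, independent_map)    # pass #1: robust matching
    return single_pass(intermediate, M, S, consistent_map)  # pass #2: constrained matching
```

The only difference between the two invocations is the activation matching function passed in, which mirrors how Algorithms 1 and 2 are organized.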

1.2.2 Reconstruction Losses

The last step of the pipeline proposed by Gatys et al. is the reconstruction of the final image . As previously discussed, this involves solving an optimization problem that balances several objectives, each of them modeled by a loss function. Originally, Gatys et al. proposed two losses: one to preserve the content of the input image and one to match the visual appearance of the style image . Later, more reconstruction losses have been proposed to improve the quality of the output. Our work builds upon several of them that we review below.

Style and Content Losses.

In their original work, Gatys et al. used the loss below.


$$\mathcal{L}_{\text{Gatys}} = \mathcal{L}_c + w\,\mathcal{L}_s \tag{1a}$$

$$\mathcal{L}_c = \sum_{\ell=1}^{L} \frac{\alpha_\ell}{2 N_\ell D_\ell} \sum_{i,j} \left( F_\ell[O] - F_\ell[I] \right)_{ij}^2 \tag{1b}$$

$$\mathcal{L}_s = \sum_{\ell=1}^{L} \frac{\beta_\ell}{2 N_\ell^2} \sum_{i,j} \left( G_\ell[O] - G_\ell[S] \right)_{ij}^2 \tag{1c}$$

where $L$ is the total number of convolutional layers, $N_\ell$ the number of filters in the $\ell$th layer, and $D_\ell$ the number of activation values in the filters of the $\ell$th layer. $F_\ell[\cdot] \in \mathbb{R}^{N_\ell \times D_\ell}$ is a matrix where the $(i,p)$ coefficient is the activation of the $i$th filter at position $p$ in the $\ell$th layer, and $G_\ell[\cdot] = F_\ell[\cdot]\,F_\ell[\cdot]^{\mathsf T} \in \mathbb{R}^{N_\ell \times N_\ell}$ is the corresponding Gram matrix. $\alpha_\ell$ and $\beta_\ell$ are weights controlling the influence of each layer and $w$ controls the tradeoff between the content (Eq. 1b) and the style (Eq. 1c). The advantage of the Gram matrices is that they represent the statistics of the activation values independently of their location in the image, thereby allowing the style statistics to be “redistributed” in the image as needed to fit the input content. Said differently, the product $F_\ell[\cdot]\,F_\ell[\cdot]^{\mathsf T}$ amounts to summing over the entire image, thereby pooling local statistics into a global representation.
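The location independence of the Gram pooling is easy to verify numerically. The NumPy sketch below (illustrative names, one common normalization convention) computes a layer's Gram matrix and the per-layer style term; permuting the columns of the activation matrix, i.e., shuffling spatial locations, leaves the Gram matrix unchanged:

```python
import numpy as np

def gram_matrix(F):
    """Pool an N_l x D_l activation matrix into an N_l x N_l Gram
    matrix; F @ F.T sums over all spatial positions, so the result is
    independent of where the activations occur in the image."""
    return F @ F.T

def style_layer_loss(F_out, F_style):
    """Per-layer style term: squared Frobenius distance between the
    two Gram matrices, with a 1 / (2 N_l^2) normalization."""
    N = F_out.shape[0]
    diff = gram_matrix(F_out) - gram_matrix(F_style)
    return float(np.sum(diff ** 2)) / (2.0 * N ** 2)
```

For example, shuffling the columns of `F_out` (moving style statistics around the image) yields a zero style loss against the original, which is exactly the "redistribution" property discussed above.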

Histogram Loss.

Wilmot et al. \shortcitewilmot2017stable showed that $\mathcal{L}_{\text{Gatys}}$ is unstable because of ambiguities inherent in the Gram matrices, and proposed the loss below to ensure that activation histograms are preserved, which remedies the ambiguity.


$$\mathcal{L}_{\text{hist}} = \sum_{\ell=1}^{L} \gamma_\ell \sum_{i,j} \left( F_\ell[O] - R_\ell[O] \right)_{ij}^2 \quad \text{with} \quad R_\ell[O] = \mathrm{histmatch}\!\left( F_\ell[O], F_\ell[S] \right) \tag{2}$$

where $\gamma_\ell$ are weights controlling the influence of each layer and $R_\ell[O]$ is the histogram-remapped feature map, obtained by matching $F_\ell[O]$ to $F_\ell[S]$.
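A minimal sketch of the remapping behind this loss, assuming 1-D feature channels of equal length; the rank-order remap below is a simplification of the histogram matching used by Wilmot et al., shown only to make the mechanism concrete:

```python
import numpy as np

def histogram_match(source, reference):
    """Remap `source` (1-D) so its sorted values equal those of
    `reference`, while preserving the rank order of `source`."""
    matched = np.empty_like(source)
    matched[np.argsort(source)] = np.sort(reference)
    return matched

def histogram_layer_loss(F_out, F_style):
    """Sum of squared differences between each output feature channel
    (one row per filter) and its histogram-remapped version."""
    return sum(float(np.sum((f_o - histogram_match(f_o, f_s)) ** 2))
               for f_o, f_s in zip(F_out, F_style))
```

When the output channels already have the style's histograms (e.g., identical channels), the loss is zero, which is the fixed point the reconstruction is driven toward.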

Total Variation Loss.

Johnson et al. \shortcitejohnson2016perceptual showed that the total variation loss introduced by Mahendran and Vedaldi \shortcitemahendran2015understanding improves style transfer results by producing smoother outputs.


$$\mathcal{L}_{\text{tv}} = \sum_{x,y} \left( O_{x,y+1} - O_{x,y} \right)^2 + \left( O_{x+1,y} - O_{x,y} \right)^2 \tag{3}$$

where the sum is over all the pixels of the output image $O$.
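The total variation loss (Eq. 3) translates directly into a few lines of NumPy; this sketch works for single- or multi-channel image arrays:

```python
import numpy as np

def total_variation_loss(O):
    """Sum of squared horizontal and vertical finite differences over
    all pixels of the image O (H x W, or H x W x C)."""
    dy = O[1:] - O[:-1]        # vertical neighbor differences
    dx = O[:, 1:] - O[:, :-1]  # horizontal neighbor differences
    return float(np.sum(dy ** 2) + np.sum(dx ** 2))
```

A constant image has zero loss, and any high-frequency variation increases it, which is why this term produces smoother outputs.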

2 Painterly Harmonization Algorithm

We designed a two-pass algorithm to achieve painterly harmonization. Previous work used a single-pass approach; for example, Gatys et al. \shortcitegatys2015neural match the entire style image to the entire input image and then use the $L_2$ norm on Gram matrices to reconstruct the final result. Li and Wand \shortciteli2016combining use nearest neighbors for matching and the $L_2$ norm on the activation vectors for reconstruction. In our early experiments, we found that such single-pass strategies did not work as well in our context and we were not able to achieve as good results as we hoped. This motivated us to develop a two-pass approach where the first pass aims for coarse harmonization, and the second focuses on fine visual quality (Alg. 1).

The first pass produces an intermediate result that is close to the desired style but we do not seek to produce the highest quality output possible at this point. By relaxing the requirement of high quality, we are able to design a robust algorithm that can cope with vastly different styles. This pass achieves coarse harmonization by first performing a rough match of the color and texture properties of the pasted region to those of semantically similar regions in the painting. We find nearest-neighbor neural patches independently on each network layer (Alg. 3) to match the responses of the pasted region and of the background. This gives us an intermediate result (Fig. 1b) that is a better starting point for the second pass.

Then, in the second pass, we start from this intermediate result and focus on visual quality. Intuitively, since the intermediate image and the style image are visually close, we can impose more stringent requirements on the output quality. In this pass, we work at a single intermediate layer that captures the local texture properties of the image. This generates a correspondence map that we process to remove spatial outliers. We then upsample this spatially consistent map to the finer levels of the network, thereby ensuring that at each output location, the neural responses at all scales come from the same location in the painting (Alg. 4). This leads to more coherent textures and better looking results (Fig. 1c). In the rest of this section, we describe in detail each step of the two passes.

input: input image $I$ and mask $M$
       style image $S$
output: output image $O$
// Pass #1: Robust coarse harmonization (§ 2.1, Alg. 2)
// Treat each layer independently during input-to-style mapping (Alg. 3)
$I' \leftarrow$ SinglePassHarmonization($I$, $M$, $S$, IndependentMapping)
// Pass #2: High-quality refinement (§ 2.2, Alg. 2)
// Enforce consistency across layers
// and in image space during input-to-style mapping (Alg. 4)
$O \leftarrow$ SinglePassHarmonization($I'$, $M$, $S$, ConsistentMapping)
Algorithm 1 TwoPassHarmonization
input: input image $I$ and mask $M$
       style image $S$
       neural mapping function $\pi$
output: output image $O$
// Process input and style images with VGG network.
$F[I] \leftarrow$ ComputeNeuralActivations($I$)
$F[S] \leftarrow$ ComputeNeuralActivations($S$)
// Match each input activation in the mask to a style activation
// and store the mapping from the former to the latter in $P$.
$P \leftarrow \pi(F[I], M, F[S])$
// Reconstruct output image to approximate new activations.
$O \leftarrow$ Reconstruct($I$, $M$, $S$, $P$)
Algorithm 2 SinglePassHarmonization

2.1 First Pass: Robust Coarse Harmonization

We designed our first pass to be robust to the diversity of paintings that users may provide as style images. In our early experiments, we made two observations. First, we applied the technique of Gatys et al. \shortcitegatys2015neural as is, that is, we used the entire style image to build the style loss $\mathcal{L}_s$. This produced results where the pasted element became a “summary” of the style image. For instance, with Van Gogh’s Starry Night, the pasted element had a bit of swirly sky, one shiny star, some of the village structure, and a small part of the wavy trees. While each texture was properly represented, the result was not satisfying because only a subset of them made sense for the pasted element. Then, we experimented with the nearest-neighbor approach of Li and Wand \shortciteli2016combining. The intuition is that by assigning the closest style patch to each input patch, it selects style statistics more relevant to the pasted element. Although the generated texture tended to lack contrast compared to the original painting, the results were more satisfying because the texture was more appropriate. Based on these observations, we designed the algorithm below that relies on nearest-neighbor correspondences and a reconstruction loss adapted from [GEB16].

(a) Cut-and-paste
(b) First pass. Robust harmonization but weak texture (top) and artifacts (bottom).
(c) Second pass. Refined results with accurate texture and no artifacts.
Figure 1: Starting from vastly different input and style images (a), we first harmonize the overall appearance of the pasted element (b) and then refine the result to finely match the texture and remove artifacts (c).

Similarly to Li and Wand, for each layer $\ell$ of the neural network, we stack the activation coefficients at the same location in the different feature maps into an activation vector. Instead of considering $N_\ell$ feature maps, each of them with $D_\ell$ coefficients, we work with a single map that contains $D_\ell$ activation vectors of dimension $N_\ell$. For each activation vector, we consider the patch centered on it. We use nearest neighbors based on the $L_2$ norm on these patches to assign a style vector to each input vector. We call this strategy independent mapping because the assignment is made independently for each layer. Algorithm 3 gives the pseudocode of this mapping. Intuitively, the independence across layers makes the process more robust because a poor match in a layer can be compensated for by better matches in the other layers. The downside of this approach is that the lack of coherence across layers impacts the quality of the output (Fig. 1b). However, as we shall see, these artifacts are limited and our second pass removes them.
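For concreteness, here is a brute-force NumPy sketch of this per-layer matching. The 3×3 patch size and array layout are illustrative assumptions; a real implementation would use an efficient (typically GPU-based) nearest-neighbor search rather than a dense distance matrix:

```python
import numpy as np

def independent_mapping_layer(F_in, F_style, patch=3):
    """Nearest-neighbor matching for one layer. F_in and F_style are
    N x H x W activation maps; every input activation patch (all
    filters stacked into one vector) is assigned the index of the
    closest style patch under the L2 norm."""
    def patches(F):
        N, H, W = F.shape
        return np.array([F[:, y:y + patch, x:x + patch].ravel()
                         for y in range(H - patch + 1)
                         for x in range(W - patch + 1)])
    P_in, P_style = patches(F_in), patches(F_style)
    # squared L2 distance between every input/style patch pair
    d = ((P_in[:, None, :] - P_style[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)  # style patch index for each input patch
```

Running this once per layer, with no coupling between layers, is exactly what makes the mapping "independent".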


Unlike Li and Wand, who use the $L_2$ norm on these activation vectors to reconstruct the output image, we pool the vectors into Gram matrices and use $\mathcal{L}_s$ (Eq. 1). Applying the $L_2$ norm directly on the vectors constrains the spatial location of the activation values; the Gram matrices relax this constraint as discussed in § 1.2.2. Figure 2 shows that using $L_2$ reconstruction directly, i.e., without Gram matrices, does not produce as good results.

Overly weak texture
Severe artifacts
Figure 2: Examples of quality loss when not using a Gram matrix in the first pass. The inputs are the same as in Figure 1.
input: input neural activations $F[I]$ and mask $M$
       style neural activations $F[S]$
output: input-to-style mapping $P$
// For each layer in the network...
for $\ell \in \{1 \ldots L\}$ do
       // For each “activation patch” in the layer...
       //   “activation patch” = vector made of all the activations
       //   in a patch across all the filters of a layer.
       for $p \in \{1 \ldots D_\ell\}$ do // $D_\ell$ = number of patches in the layer
              // Consider only the patches inside the mask $M$
              // resized to the resolution of the layer $\ell$
              if $p \in M_\ell$ then
                     // Assign the style patch closest to the input patch
                     $P(\ell, p) \leftarrow \arg\min_q \| F_\ell[I](p) - F_\ell[S](q) \|^2$
Algorithm 3 IndependentMapping

2.2 Second Pass: High-Quality Refinement

As can be seen in Figure 1(b), the results after the first pass match the desired style but suffer from artifacts. In our early experiments, we tried to fine-tune the first pass, but our attempts only improved some results at the expense of others. Adding constraints to achieve a better quality was making the process less robust to style diversity. We address this challenge with a second pass that focuses on visual quality. The advantage of starting a completely new pass is that we now start from an intermediate image close to the desired result, and robustness is not an issue anymore. We design our pass such that the input-to-style mapping is consistent across layers and space. We ensure that the activation vectors assigned to the same image location on different layers were already collocated in the style image. We also favor the configuration where vectors adjacent in the style image remain adjacent in the mapping. Enforcing such strict requirements directly on the input image often yields poor results (Fig. 4d), but when starting from the intermediate image generated by the first pass, this approach produces high quality outputs (Fig. 4h). We also build on previous work to improve the reconstruction step. We explain the details of each step below.

Figure 3: Setting the reference layer to conv3_1 produces low-quality results due to poor matches between the input and style images (a). Instead we use conv4_1 (our setting), which yields better results (b). Using the deeper layer conv5_1 generates lower-quality texture (c), but the degradation is minor compared to using conv3_1. The inputs are the same as in Figure 1.

We start with a nearest-neighbor assignment similar to the first pass (§ 2.1) but applied only to a single layer, which we call the reference layer $\ell_{\text{ref}}$ (Alg. 4, Step #1). We tried several layers for $\ell_{\text{ref}}$ and found that conv4_1 provided a good trade-off between deeper layers that ignore texture, and shallower layers that ignore scene semantics, e.g., pairing unrelated objects together (Fig. 3).

(a) Style image
(b) Cut-and-paste
(c) Independent mapping (first pass only, our intermediate result)
(d) Consistent mapping (second pass only, bad correspondence)
(e) Entire pipeline without $\mathcal{L}_{\text{hist}}$ and using $\mathcal{L}_s$ instead of $\mathcal{L}_{s1}$
(f) Entire pipeline using $\mathcal{L}_s$ instead of $\mathcal{L}_{s1}$
(g) Entire pipeline without painting estimator (default parameters, style is too weak)
(h) Our final result
Figure 4: Ablation study. (c-h) are cropped for best fit. Zoom in for details.
input: input neural activations $F[I]$ and mask $M$
       style neural activations $F[S]$
output: input-to-style mapping $P$
// Step #1: Find matches for the reference layer $\ell_{\text{ref}}$.
// Do the same as in Alg. 3 but only for the reference layer.
for $p \in \{1 \ldots D_{\ell_{\text{ref}}}\}$ do
       if $p \in M_{\ell_{\text{ref}}}$ then
              $P_0(\ell_{\text{ref}}, p) \leftarrow \arg\min_q \| F_{\ell_{\text{ref}}}[I](p) - F_{\ell_{\text{ref}}}[S](q) \|^2$
              // $P_0$ is an intermediate input-to-style mapping refined
              // in the next step of the algorithm.
// Step #2: Enforce spatial consistency.
for $p \in \{1 \ldots D_{\ell_{\text{ref}}}\}$ do
       if $p \in M_{\ell_{\text{ref}}}$ then
              // Look up the corresponding style patch.
              $q \leftarrow P_0(\ell_{\text{ref}}, p)$
              // Initialize a set of candidate style patches.
              $C \leftarrow \{q\}$
              // For all adjacent patches...
              for each offset $o$ to an adjacent patch do
                     // Duplicate its assignment, i.e.:
                     // 1. Look up the style patch of the adjacent patch $p + o$
                     //     and apply the opposite offset $-o$.
                     // 2. Add the result to the set of candidates.
                     $C \leftarrow C \cup \{ P_0(\ell_{\text{ref}}, p + o) - o \}$
              // Select the candidate the most similar to the style patches
              // associated to the neighbors of $p$.
              $P(\ell_{\text{ref}}, p) \leftarrow \arg\min_{c \in C} \sum_o \| F_{\ell_{\text{ref}}}[S](c + o) - F_{\ell_{\text{ref}}}[S](P_0(\ell_{\text{ref}}, p + o)) \|^2$
// Step #3: Propagate the matches in the ref. layer to the other layers.
// For each layer in the network excluding the reference layer...
for $\ell \in \{1 \ldots L\} \setminus \{\ell_{\text{ref}}\}$ do
       for $p \in \{1 \ldots D_\ell\}$ do
              if $p \in M_\ell$ then
                    // Compute the index $p'$ of the patch in $\ell_{\text{ref}}$
                    // at the same image location as $p$.
                    $p' \leftarrow$ ChangeResolution($\ell$, $\ell_{\text{ref}}$, $p$)
                    // Fetch the matching style patch in the reference layer.
                    $q' \leftarrow P(\ell_{\text{ref}}, p')$
                    // Change the resolution back.
                    $P(\ell, p) \leftarrow$ ChangeResolution($\ell_{\text{ref}}$, $\ell$, $q'$)
Algorithm 4 ConsistentMapping

Then, we process this single-layer mapping to improve its spatial consistency by removing outliers. We favor configurations where all the style vectors assigned to an input region come from the same region in the style image. For each input vector at position $p$, we compare the style vector assigned by the nearest-neighbor correspondence above as well as the vectors obtained by duplicating the assignments of the neighbors of $p$. Among these candidates, we pick the vector pointing to a style feature that is most similar to its neighbors’ features. In practice, this removes small outlier regions that are inconsistent with their neighborhood. This procedure is Step #2 of Algorithm 4.

Last, we propagate these correspondences to the other layers so that the activation values are consistent across layers. For a given location in the input image, all the activation values across all the layers are assigned style activations that come from the same location in the style image (Alg. 4, Step #3).
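The propagation step can be sketched as follows, assuming the reference-layer mapping stores style (y, x) coordinates; the array layout and integer rescaling are illustrative assumptions, not the actual implementation:

```python
import numpy as np

def propagate_to_layer(map_ref, shape_ref, shape_fine):
    """Copy reference-layer correspondences to a finer layer: each
    fine-layer position inherits the style coordinates matched at the
    same image location in the reference layer, rescaled to the finer
    resolution. map_ref is an (H_r, W_r, 2) array of (y, x) coords."""
    Hr, Wr = shape_ref
    Hf, Wf = shape_fine
    out = np.empty((Hf, Wf, 2), dtype=int)
    for y in range(Hf):
        for x in range(Wf):
            yr, xr = y * Hr // Hf, x * Wr // Wf  # same image location, ref. resolution
            sy, sx = map_ref[yr, xr]
            out[y, x] = (sy * Hf // Hr, sx * Wf // Wr)  # scale back to fine resolution
    return out
```

By construction, all fine-layer positions covering one reference-layer location point to the same place in the style image, which is the cross-scale consistency the second pass enforces.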


A first option is to apply the same reconstruction as the first pass (§ 2.1), which already gives satisfying results although some minor defects remain (Fig. 4e). We modify the reconstruction as follows to further improve the output. First, we observe that in some cases, the nearest-neighbor assignment selects the same style vector many times, which generates poor results. We address this issue by selecting each vector at most once and ignoring the additional occurrences, i.e., each vector contributes at most once to the Gram matrix used in the style loss. We name this variant of the style loss $\mathcal{L}_{s1}$. We also add the histogram and total-variation losses, $\mathcal{L}_{\text{hist}}$ and $\mathcal{L}_{\text{tv}}$ (§ 1.2.2). Together, these form the loss $\mathcal{L}_2$ that we use to reconstruct our final output:


$$\mathcal{L}_2 = \mathcal{L}_c + w_s\,\mathcal{L}_{s1} + w_h\,\mathcal{L}_{\text{hist}} + w_{tv}\,\mathcal{L}_{\text{tv}} \tag{4}$$

where the weights $w_s$, $w_h$, and $w_{tv}$ control the balance between the terms. Figure 4 illustrates the benefits of this loss. We explain in Section 3 how to set these weights depending on the type of painting provided as the style image.
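The "each style vector contributes at most once" rule amounts to deduplicating the matched columns before pooling them into a Gram matrix. A sketch with illustrative names:

```python
import numpy as np

def dedup_style_gram(F_style, mapping):
    """Gram matrix for the modified style loss: each matched style
    vector contributes at most once, even when the nearest-neighbor
    mapping selects it several times. F_style is N x D (one activation
    vector per column); mapping holds one column index per input patch."""
    unique_cols = np.unique(mapping)  # drop repeated selections
    F_sel = F_style[:, unique_cols]
    return F_sel @ F_sel.T
```

Without the deduplication, a style vector picked many times would dominate the pooled statistics, which is the failure mode this variant avoids.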


Our constrained mapping was inspired by the nearest-neighbor field upsampling used in the Deep Image Analogy work [LYY17, § 4.4], which constrains the matches at each layer to come from a local region around the location matched at the previous layer. When the input and style images are similar, this technique performs well. In our context, the intermediate image and the style image are even more similar. This encouraged us to be even stricter by forcing the matches to come from the exact same location. Besides this similarity, the other algorithmic parts are different and, as we shall see, our approach produces better results for our application.

We experimented with using the same reconstruction in the first pass as in the second pass. The quality gains were minimal on the intermediate image and mostly non-existent on the final output. This motivated us to use the simpler reconstruction in the first pass as described in Section 2.1 for the sake of efficiency.

(a) Inset
(b) After guided filtering
(c) After patch synthesis
(d) Chrominance channel (before guided filtering)
(e) Chrominance channel (after guided filtering)
(f) Our result after post-processing
Figure 5: Post-processing. Given the deconvolution result with inset (a), we perform chrominance denoising to produce (b) and patch synthesis on (b) to produce (c). (d) and (e) show the insets of a chrominance channel in CIE-Lab space before and after denoising. (f) is the final full-resolution result.

2.3 Post-processing

The two-pass process described thus far yields high quality results at medium and large scales but in some cases, fine-scale details can be inaccurate. Said differently, the results are good from a distance but may not be as satisfactory on close examination. The two-step signal-processing approach below addresses this.

Chrominance Denoising.

We observed that, in our context, high-frequency artifacts primarily affect the chrominance channels while the luminance is comparatively cleaner. We exploit this characteristic by converting the image to CIE-Lab color space and applying the Guided Filter [HST10] to the chrominance channels, with the luminance channel as guide, using the parameters suggested by the authors. This effectively suppresses the highest-frequency color artifacts. However, some larger defects may remain; the next step addresses this issue.
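As an illustration, here is a minimal grayscale guided filter in the spirit of He et al. [HST10] (box-filter formulation; the default radius and epsilon are placeholders, not the paper's settings). Filtering a chrominance channel with the luminance as guide transfers the guide's edge structure while smoothing chrominance noise:

```python
import numpy as np

def box(img, r):
    """Mean filter with window radius r (edge padding), brute force."""
    H, W = img.shape
    pad = np.pad(img, r, mode='edge')
    out = np.zeros((H, W))
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + H, dx:dx + W]
    return out / (2 * r + 1) ** 2

def guided_filter(guide, src, r=2, eps=1e-2):
    """Grayscale guided filter: smooth `src` while following the edges
    of `guide`. Here `guide` would be the luminance channel and `src`
    a chrominance channel of the CIE-Lab image."""
    mean_I, mean_p = box(guide, r), box(src, r)
    var_I = box(guide * guide, r) - mean_I * mean_I
    cov_Ip = box(guide * src, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)  # local linear coefficient
    b = mean_p - a * mean_I
    return box(a, r) * guide + box(b, r)
```

A useful sanity check: a constant chrominance channel passes through unchanged regardless of the guide, since the local covariance with the guide is zero.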

Patch Synthesis.

The last step uses patch synthesis to ensure that every image patch in the output appears in the painting. We use PatchMatch [BSFG09] to find a similar style patch to each output patch. We reconstruct the output by averaging all overlapping style patches, thereby ensuring that no new content is introduced. However, the last averaging step tends to smooth details. We mitigate this effect by separating the image into a base layer and a detail layer using the Guided Filter again (using the same parameters). The base layer is the output of the filter and contains the coarse image structure, and the detail layer is the difference with the original image that contains the high-frequency details. We then apply patch synthesis on the base layer only and add back the details. This procedure ensures that the texture is not degraded by the averaging, thereby producing crisp results.
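The base/detail decomposition is additive by construction, so the detail layer survives patch synthesis on the base unchanged when added back. A sketch with a box blur standing in for the guided filter used in the paper:

```python
import numpy as np

def base_detail_split(img, radius=2):
    """Split an image into a smooth base layer and a residual detail
    layer. Patch synthesis would run on the base only; the detail
    layer is added back afterwards to keep the texture crisp."""
    H, W = img.shape
    pad = np.pad(img, radius, mode='edge')
    k = 2 * radius + 1
    base = np.zeros((H, W))
    for dy in range(k):
        for dx in range(k):
            base += pad[dy:dy + H, dx:dx + W]
    base /= k * k
    return base, img - base
```

Because `base + detail` reconstructs the input exactly, only the averaging applied to the base can smooth the result; the high-frequency details are preserved verbatim.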

3 Painting Estimator

Strength Art style examples
Weak Baroque, High Renaissance
Medium Abstract Art, Post-Impressionism
Strong Cubism, Expressionism
Table 1: Weight categories for selected art styles. Please refer to the supplemental document for the complete list of art styles and parameter weights. The final weight is a linear interpolation over art styles using our trained painting estimator network. The TV weight is computed separately based on the noise level of the painting image (Sec. 4).

The above algorithm has two important parameters that affect the stylistic properties of the output — the style and histogram weights ($w_s$ and $w_h$). We observed that different sets of parameters gave optimal results for different paintings based on their level of stylization. For example, Cubism paintings often contain small multifaceted areas with strong and sharp brush strokes, while High Renaissance and Baroque paintings are more photorealistic. Rather than tweak parameters for each input, we developed a trained predictor of the weights to make our approach to weight selection more robust.

We train a painting estimator that predicts the optimization parameters for our algorithm such that parameters that allow deeper style changes are used when the background painting is more stylized and vice versa. To train this estimator, we split the parameter values into three categories (“Weak”, “Medium” and “Strong”), and manually assign each painting style to one of the categories. Table 1 presents a subset of painting styles and their categories and weight values. Other styles appear in the supplementary material.

We collected 80,000 paintings and fine-tuned the VGG-16 network [SZ14] on classifying 18 different styles. After training, we remove the last classification layer and use weighted linear interpolation on the softmax layer based on style categories to output floating-point values for $w_s$ and $w_h$ indicating the level of stylization. These parameter values (shown in Table 1) are then used in the optimization.
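The weighted linear interpolation over the softmax output can be sketched as follows; the category names and weight values are placeholders, not the trained estimator's actual tables:

```python
def interpolate_weight(softmax_probs, style_category, category_weight):
    """Blend per-category parameter weights using the style
    classifier's softmax output: each predicted style contributes its
    category's weight, scaled by the predicted probability."""
    return sum(p * category_weight[style_category[i]]
               for i, p in enumerate(softmax_probs))
```

A painting classified halfway between a weakly and a strongly stylized style thus receives an intermediate weight rather than snapping to one category.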

4 Implementation Details

(a) Source
(b) Target painting
(c) [PGB03]
(d) [SJMP10]
(e) [DSB12]
(f) Ours
Figure 6: When pasting the face of Ginevra de’ Benci (a) on Mona Lisa (b), Poisson Blending [PGB03] does not match the texture (c), Multi-Scale Harmonization [SJMP10] adds texture but does not reproduce the paint cracks (d), Image Melding [DSB12] adds cracks but not everywhere, e.g., there are no cracks below the eye on the right (e). In comparison, our result generates cracks everywhere (f).

This section describes the implementation details of our approach. We employed the pre-trained VGG-19 network [SZ14] as the feature extractor. For the first-pass optimization, we chose conv4_1 as the content representation ($\alpha_\ell$ nonzero for this layer and zero for all other layers), and conv3_1, conv4_1, and conv5_1 as the style representation ($\beta_\ell$ nonzero for those layers and zero for all other layers), since higher layers have been transformed by the CNN into representations with more of the actual content, which is crucial for the semantics-aware nearest-neighbor search. We used these layer preferences for all the first-pass results. For the second-pass optimization, we chose conv4_1 as the content representation, and conv1_1, conv2_1, conv3_1, and conv4_1 as the style representation. We also employed the histogram loss on conv1_1 and conv4_1 ($\gamma_\ell$ nonzero for these layers and zero for all other layers) as suggested by the original authors. We chose conv4_1 as the reference layer for the nearest-neighbor search in the second-pass optimization. We name $P$ the floating-point output of the painting estimator and set the weights $w_s$, $w_h$, and $w_{tv}$ as functions of $P$ and of the median total variation (Eq. 3) of the painting $S$. We found the parameters for the sigmoid function empirically. The intuition is that we impose less smoothness when the original painting is textured.

Our main algorithm is developed in Torch with CUDA. All our experiments are conducted on a PC with an Intel Xeon E5-2686 v4 processor and an NVIDIA Tesla K80 GPU. We use the L-BFGS solver [LN89] for the reconstruction, with 1000 iterations. The runtime for an image is about 5 minutes. We will release our implementation upon acceptance for non-commercial use and future research.

5 Results

We now evaluate our harmonization algorithm in comparison with related work and through user studies.

Main Results.

In Figures 10 and 11, we compare our method with four state-of-the-art methods: Neural Style [GEB16], CNNMRF [LW16a], Multi-Scale Image Harmonization [SJMP10], and Deep Image Analogy [LYY17] across paintings with various styles and genres. Neural Style tends to produce “style summaries” that rarely work well; for example, background sky texture appears on the foreground Eiffel Tower (Fig. 10(iv)). This is due to the lack of semantic matching, since the Gram matrix is computed over the entire painting. CNNMRF often generates weak style transfers that do not look as good when juxtaposed with the original painting. Multi-Scale Image Harmonization performs noise matching to fit high-frequency inter-scale texture but does not capture the spatially-varying brush strokes common in paintings with heavy styles. Deep Image Analogy is more robust compared to the other three methods, but its results are sometimes blurred due to patch synthesis and voting, e.g., Figures 10(i-iii), 11(vii). Its coarse-to-fine pipeline also sometimes misses parts (Fig. 10(iv)).

User Studies.

We conduct user studies to quantitatively characterize the quality of our results. The first user study, “Edited or Not”, aims to understand whether the harmonization quality is good enough to fool an observer. The second user study, “Comparison”, compares the quality of our results with that of related algorithms.

Figure 7: Results of the “Edited or Not” user study. A higher painting classification rate means better harmonization performance since users were unable to identify the edit. See text for more details. The large symbols represent the average of each category.

Study 1: Edited or Not. We showed the users 20 painterly composites, each edited by one of four algorithms: CNNMRF [LW16a], Multi-Scale Image Harmonization [SJMP10], Deep Image Analogy [LYY17], and ours. We asked the users whether the painting had been edited. If they thought it was, we asked them to click on the part of the image they believed was edited (this records the coordinates of the edited object) so that we could verify the correctness of the answer. This verification is motivated by our pilot study where we found that people would sometimes claim an image is edited by erroneously identifying an element as edited although it was part of the original painting. Such misguided classification is actually a positive result that shows the harmonization is of high quality. We also recorded the time it took users to answer each question.

One potential problem we had to consider is that people might spot the edited object in a painting for reasons other than harmonization quality. For example, an edit in a famous painting is instantly recognizable, as is a semantically implausible composition, e.g., a man’s face on a woman’s head, or a spaceship in a 19th-century painting. To avoid these problems, we selected paintings that are not widely known and made the compositions sensible, for example, adding a park bench in a meadow or a clock on a wall (see the supplementary material for more examples). We further asked the users, for each example, whether they were familiar with the painting; if they were, we eliminated their judgement as tainted by prior knowledge.

Figure 7 shows the results of Study 1 using two metrics: the average painting classification rate and the average answer time. Let n_yes and n_no denote the numbers of users answering that a given image was and was not edited. For an original painting, the painting classification rate is n_no / (n_yes + n_no). For paintings edited with the four algorithms, the rate is (n_no + n_wrong) / (n_yes + n_no), where n_wrong is the number of “edited” answers accompanied by a click at the wrong XY coordinates. This captures all the cases where the viewer was “fooled” by the harmonization result. A higher rate means better harmonization quality, since users were unable to identify the modification. Figure 7 shows that our algorithm achieves a painting classification rate significantly higher than that of the other algorithms and close to that of unedited paintings.
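The metric can be computed with a small helper; the answer format below (a said-edited flag plus a clicked-correct-region flag per user) is a hypothetical encoding of the study's raw data, not the paper's actual logging format:

```python
def painting_classification_rate(answers, edited):
    """answers: list of (said_edited: bool, clicked_correct_region: bool)."""
    n = len(answers)
    if not edited:
        # Original painting: counted as "classified as painting" when the
        # user answered "not edited".
        fooled = sum(not said for said, _ in answers)
    else:
        # Edited painting: also count "edited" answers that clicked the
        # wrong region -- the harmonized object itself went unnoticed.
        fooled = sum((not said) or not correct for said, correct in answers)
    return fooled / n

# 3 of 4 users fooled: two said "not edited", one clicked the wrong spot.
print(painting_classification_rate(
    [(False, False), (True, False), (True, True), (False, False)], True))
# -> 0.75
```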

The answer time has a less straightforward interpretation since it may also reflect how meticulous users are. Nonetheless, Figure 7 shows that the answer time for our algorithm is close to that of unedited paintings and significantly different from that of the other algorithms, which also suggests that our results share more similarities with actual paintings than the outputs of the other methods.

Figure 8: Results of the “Comparison” user study. Our algorithm is often the most preferred among the four algorithms.

Study 2: Comparison. We showed the users 17 paintings, each edited with the same four algorithms as in the first study. We asked the users to select the result that best captures the consistency of the colors and of the texture of the painting. The quantitative results are shown in Figure 8. For most paintings, our algorithm is the most preferred. We provide more detailed results in the supplementary document.

Image Harmonization Comparisons.

We compare our results with Poisson blending [PGB03] and two state-of-the-art harmonization solutions, Multi-Scale Image Harmonization [SJMP10] and Image Melding [DSB12], in Figure 6. Poisson blending achieves good overall color matching but does not capture the texture of the original painting. Multi-Scale Image Harmonization transforms noise to transfer texture properties in addition to color and intensity; however, it is designed for small-scale, noise-like texture and is not well suited to more structured patterns such as brush strokes and cracks. Image Melding improves texture quality by combining patch synthesis with Poisson blending, but the texture disappears in some places. In comparison, our method better captures both the spatial and inter-scale texture and structure properties (Fig. 6f).

Harmonization of a canonical object across styles.

In the above examples, we picked different objects for different paintings to create plausible combinations. In this experiment, we instead paste the same canonical object into a variety of styles to demonstrate that the stylization is independent of the inserted object: we introduce a hot air balloon into randomly selected paintings from our dataset, covering a wide range of styles (Fig. 9).

Figure 9: Canonical object harmonization results for hot air balloon (upper-left).

6 Conclusions

We have described an algorithm to copy an object in a photograph and paste it into a painting seamlessly, i.e., the composite still looks like a genuine painting. We have introduced a two-pass algorithm that first transfers the overall style of the painting to the input and then refines the result to accurately match the painting’s color and texture. This latter pass relies on mapping neural response statistics that ensures consistency across the network layers and in image space. To cope with different painting styles, we have trained a separate network to adjust the transfer parameters as a function of the style of the background painting. Our experiments show that our approach succeeds on a diversity of input and style images, many of which are challenging for other methods. We have also conducted two user studies that show that users often identify our results as unedited paintings and prefer them to the outputs of other techniques.

We believe that our work opens new possibilities for creatively editing and combining images, and we hope that it will inspire artists. From a technical perspective, we have demonstrated that global painterly style transfer methods are not well suited for local transfer, and we have carefully designed an effective local approach. This suggests fundamental differences between the local and global statistics of paintings, and further exploring this difference is an exciting avenue for future work. Other avenues include fast feed-forward network approximations of our optimization framework, as well as an extension to painterly video compositing.



(a) Cut-and-paste
(b) [GEB16]
(c) [LW16a]
(d) [SJMP10]
(e) [LYY17]
(f) Ours
Figure 10: Example results, with insets of the proposed composite, for unadjusted cut-and-paste, four state-of-the-art methods, and our method. Our method captures both spatial and inter-scale color and texture and produces harmonized results on paintings with various styles. Zoom in for details.


(a) Cut-and-paste
(b) [GEB16]
(c) [LW16a]
(d) [SJMP10]
(e) [LYY17]
(f) Ours
Figure 11: Continued.


  • [BSFG09] Barnes C., Shechtman E., Finkelstein A., Goldman D. B.: Patchmatch: A randomized correspondence algorithm for structural image editing. ACM Trans. Graph. 28, 3 (2009), 24–1.
  • [CHM10] Chu H.-K., Hsu W.-H., Mitra N. J., Cohen-Or D., Wong T.-T., Lee T.-Y.: Camouflage images. ACM Trans. Graph. 29, 4 (2010), 51–1.
  • [CKIW15] Chen Z., Kim B., Ito D., Wang H.: Wetbrush: Gpu-based 3d painting simulation at the bristle level. ACM Trans. Graph. 34, 6 (2015), 200.
  • [DSB12] Darabi S., Shechtman E., Barnes C., Goldman D. B., Sen P.: Image melding: Combining inconsistent images using patch-based synthesis. ACM Trans. Graph. 31, 4 (2012), 82–1.
  • [GEB16] Gatys L. A., Ecker A. S., Bethge M.: Image style transfer using convolutional neural networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2016).
  • [GEB17] Gatys L. A., Ecker A. S., Bethge M., Hertzmann A., Shechtman E.: Controlling perceptual factors in neural style transfer. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (July 2017).
  • [HB17] Huang X., Belongie S.: Arbitrary style transfer in real-time with adaptive instance normalization. In The IEEE International Conference on Computer Vision (ICCV) (Oct 2017).
  • [HST10] He K., Sun J., Tang X.: Guided image filtering. In Proceedings of European Conference on Computer Vision (ECCV) (2010), Springer-Verlag, pp. 1–14.
  • [JAFF16] Johnson J., Alahi A., Fei-Fei L.: Perceptual losses for real-time style transfer and super-resolution. In Proceedings of European Conference on Computer Vision (ECCV) (2016), Springer, pp. 694–711.
  • [LFY17] Li Y., Fang C., Yang J., Wang Z., Lu X., Yang M.-H.: Universal style transfer via feature transforms. In Advances in Neural Information Processing Systems (2017).
  • [LN89] Liu D. C., Nocedal J.: On the limited memory bfgs method for large scale optimization. Mathematical programming 45, 1 (1989), 503–528.
  • [LPSB17] Luan F., Paris S., Shechtman E., Bala K.: Deep photo style transfer. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (July 2017).
  • [LW16a] Li C., Wand M.: Combining markov random fields and convolutional neural networks for image synthesis. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2016).
  • [LW16b] Li C., Wand M.: Precomputed real-time texture synthesis with markovian generative adversarial networks. In Proceedings of European Conference on Computer Vision (ECCV) (2016), Springer, pp. 702–716.
  • [LYY17] Liao J., Yao Y., Yuan L., Hua G., Kang S. B.: Visual attribute transfer through deep image analogy. ACM Trans. Graph. 36, 4 (2017), 120:1–120:15.
  • [MV15] Mahendran A., Vedaldi A.: Understanding deep image representations by inverting them. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 5188–5196.
  • [ODO16] Odena A., Dumoulin V., Olah C.: Deconvolution and checkerboard artifacts. Distill 1, 10 (2016), e3.
  • [PD84] Porter T., Duff T.: Compositing digital images. In ACM Siggraph Computer Graphics (1984), vol. 18, ACM, pp. 253–259.
  • [PGB03] Pérez P., Gangnet M., Blake A.: Poisson image editing. In ACM Trans. Graph. (2003), vol. 22, ACM, pp. 313–318.
  • [SJMP10] Sunkavalli K., Johnson M. K., Matusik W., Pfister H.: Multi-scale image harmonization. In ACM Trans. Graph. (2010), vol. 29, ACM, p. 125.
  • [SZ14] Simonyan K., Zisserman A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014).
  • [TSL17] Tsai Y.-H., Shen X., Lin Z., Sunkavalli K., Lu X., Yang M.-H.: Deep image harmonization. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (July 2017).
  • [ULVL16] Ulyanov D., Lebedev V., Vedaldi A., Lempitsky V.: Texture networks: Feed-forward synthesis of textures and stylized images. In Proceedings of International Conference on Machine Learning (2016), pp. 1349–1357.
  • [WRB17] Wilmot P., Risser E., Barnes C.: Stable and controllable neural texture synthesis and style transfer using histogram losses. arXiv preprint arXiv:1701.08893 (2017).
  • [XADR12] Xue S., Agarwala A., Dorsey J., Rushmeier H.: Understanding and improving the realism of image composites. ACM Trans. Graph. 31, 4 (2012), 84.
  • [ZKSE15] Zhu J.-Y., Krahenbuhl P., Shechtman E., Efros A. A.: Learning a discriminative model for the perception of realism in composite images. In The IEEE International Conference on Computer Vision (ICCV) (2015), pp. 3943–3951.