Photo Stylistic Brush: Robust Style Transfer via Superpixel-Based Bipartite Graph

Abstract

With the rapid development of social networks and multimedia technology, customized image and video stylization has been widely used in various social-media applications. In this paper, we explore the problem of exemplar-based photo style transfer, which provides a flexible and convenient way to produce striking visual effects. Rather than investigating fixed artistic patterns to represent certain styles, as was done in previous works, our work emphasizes styles related to a series of visual effects in a photograph, e.g., color, tone, and contrast. We propose a photo stylistic brush, an automatic and robust style transfer approach based on a Superpixel-based BIpartite Graph (SuperBIG). A two-step bipartite graph algorithm with different granularity levels is employed to aggregate pixels into superpixels and find their correspondences. In the first step, a bipartite graph is constructed over the extracted hierarchical features to describe content similarity for pixel partition, producing superpixels. In the second step, the superpixels of the input and reference images are rematched to form a new superpixel-based bipartite graph, and superpixel-level correspondences are generated by bipartite matching. Finally, the refined correspondence guides SuperBIG to perform the transformation in a decorrelated color space. Extensive experimental results demonstrate the effectiveness and robustness of the proposed method in transferring various styles of exemplar images, even in challenging cases such as night images.



Jiaying Liu
Peking University

liujiaying@pku.edu.cn
Wenhan Yang
Peking University

yangwenhan@pku.edu.cn
Xiaoyan Sun
Microsoft Research Asia

xysun@microsoft.com
Wenjun Zeng
Microsoft Research Asia

wezeng@microsoft.com


  Keywords: Image stylization, superpixel, bipartite graph, stylistic brush

    Figure 1: Illustration of the proposed stylistic brush, SuperBIG.

    With the prevalence of multimedia social networking, it has become popular to share photos online. Most people nowadays prefer uploading photos with special artistic enhancement produced by various apps, such as Facebook and Instagram, instead of the original ones. This kind of photo style enhancement makes pictures dramatically more impressive and inspires new imagination. However, existing systems either only allow users to roughly change a photo with a fixed template, or require a series of delicate manual operations performed by experienced photographers with editing software.

    Image style transfer aims to automatically change the stylistic elements of an input image (color, texture, contrast, etc.) to follow a given exemplar, e.g., well-known paintings or fabulous pictures taken by professional photographers. Early works started by transferring one of these elements between images. Color transfer methods either extract the most representative colors from the images and build a conversion between those colors [?, ?], or directly adjust the color distribution via histogram feature fitting [?, ?]. Contrast is usually transferred in a frequency band space, such as the bilateral space [?], Laplacian pyramid [?] or Haar pyramid [?]. Since these methods only consider one specific stylistic element, they may produce some visual effect but are difficult to apply widely in practice.

    Meanwhile, image stylization has also been explored in the computer graphics community, where it is referred to as non-photorealistic rendering (NPR). It aims to generate non-photorealistic style images, such as watercolor painting [?], sketch generation [?] and abstract drawing [?]. Through carefully crafted designs, a set of stylistic elements is extracted to represent the artistic style of an image and further used to transfer artistic visual effects. However, these hand-crafted features, designed for certain types of artwork, inherently lack extensibility and cannot adapt to represent other or new styles.

    In real applications, it is unrealistic to ask most people to give a specific description of exactly what style they want. Usually, what they can offer is a real example they have seen before, e.g., ‘Mona Lisa’, or an abstract word they have read in books, e.g., ‘Baroque’. Knowing little about image editing, they need a tool that defines a set of style settings from these examples and makes the adjustments automatically. Like the format painter of Microsoft Office, the stylistic brush provides a desirable and powerful tool that enables automatic arbitrary style transfer between images. More specifically, this functionality can be implemented by exemplar-based stylization, as shown in Figure 1. The style is extracted dynamically from the reference image (also referred to as the target image), and a new output image is synthesized based on the content of the input image and the extracted style of the reference.

    Therefore, works investigating image stylization that consider the composition of styles instead of a single style element are emerging. Most of these methods are devoted to separating and handling the content and style individually. An early work [?] explored the concept of ‘image analogy’ by building a multiscale autoregression framework to adaptively learn a wide variety of “image filters”. Zhang et al. [?] proposed to perform an image component analysis to decompose an image into three components and constructed a coarse-to-fine Markov random field to propagate colors in the paint and edge components. In [?], a deep network-based method was proposed to separate and recombine the content and style: a composition of the learned CNN features gives a clue to content correspondence and guides the production of new artistic images by transferring the style features.

    These methods suffer from two limitations: 1) From the model aspect, the assumption that the content and the style are separable may be questionable. Common observations, such as sunsets with red color and grass with certain texture patterns, suggest that some styles are highly correlated with image content. Thus, previous methods based on such a separability assumption lose some style information in the transformation. 2) From the application aspect, these methods mainly focus on painting styles and are good at transferring or generating texture styles. However, in real photography applications, people usually pay more attention to the visual effects caused by color, light, contrast, etc., than to textures.

    In this paper, we aim to create a stylistic brush to help people beautify their photos by transferring desirable styles of a chosen exemplar image to the input one. Focusing on photos, we pay more attention to the color, light and contrast of a photograph than to art-related factors such as textures or strokes. Compared to previous methods, we make two more reliable assumptions: 1) For most photos, the Internet enables us to collect a content-similar reference with a favorable style; this is usually the case for certain categories of images, such as landmark or face images. 2) Different from general content-based features, we obtain matched points of the same scene between the reference and input images, via dense correspondence detection methods, as more reliable guidance of content similarity.

    With the above considerations, the proposed stylistic brush is realized by a robust style transfer method based on the Superpixel BIpartite Graph (SuperBIG) framework for image stylization. First, a dense correspondence between the input and reference images is estimated to obtain matched pixels as primitives. By exploiting hierarchical features at different granularities, we measure the distances from pixels to the identified matched points in the feature space to cluster these pixels into superpixels. Then a bipartite graph partition is exploited to assign the unclustered pixels to superpixels by considering both local and global consistency. Afterwards, the superpixels of the two images are rematched to form a new superpixel bipartite graph that refines the final superpixel-level correspondence. Finally, SuperBIG transfers colors within each superpixel correspondence in a decorrelated color space to achieve the stylization.

    The main contributions of our work are summarized as follows:

    • We analyze the challenges of practical photo stylization and propose the “Stylistic Brush” to solve this problem integrally, i.e., stylizing the input image based on the styles of a given exemplar. To the best of our knowledge, this is the first attempt to transfer complex natural photo styles, instead of painting strokes, via example images.

    • We propose an automatic and robust style transfer framework based on the Superpixel BIpartite Graph (SuperBIG). It estimates the superpixels of the input image and performs correspondence matching between these superpixels jointly by a two-step bipartite partition and matching. This step-by-step abstraction effectively integrates the local consistency of superpixels and the global matching of the bipartite graph.

    • Benefiting from the diversity of the proposed hierarchical features at different granularities, as well as the advantages of the unified bipartite graph framework, SuperBIG achieves promising results in terms of effectiveness and robustness in extensive experiments, even in challenging cases such as night images.

    Non-photorealistic rendering was first proposed by Winkenbach and Salesin [?]. It aims to produce images in a wide variety of styles, such as painting, drawing, sketching, illustration and animation, for digital art. With the help of NPR, non-experts can transfer the artistic styles of famous painters to ordinary everyday photos. Nowadays, many ad hoc NPR schemes have been proposed for this task with varying degrees of success [?]. While Li et al. [?] proposed to create and view interactive exploded views of 3D models, Pouli and Reinhard [?] utilized a user-specified target image’s color palette to achieve creative effects. For artistic style rendering, some researchers focus on simulating virtual brush strokes to obtain a particular style [?, ?]. Region-based methods are also used to independently render the interiors of regions [?, ?]. In the meantime, many image processing filters have been applied to produce images in artistic styles [?, ?]. Different from NPR, which studies the rendering of artistic patterns, our work aims to address the challenge of photo style transfer, which faces more diversified styles and photometric properties, such as light and contrast, that change more abruptly within an image.

    Hand-crafted style transfer techniques aim to adjust the color, contrast and tone of images with the aid of signal properties, e.g., the statistics of colors, without considering content-level correspondence. For color transfer, the work in [?] transferred colors by matching the statistics of color distributions. Subsequent works improved the accuracy and robustness of the statistical estimation, such as soft segmentation [?], multi-dimensional distribution matching [?] and minimal displacement mapping [?]. There are also methods [?, ?] that colorize an image with user-defined colors; they propagate colors under an elaborately designed constraint to ensure a natural visual effect in the produced result. For contrast and tone, the adjustment is performed in the frequency domain, such as the bilateral space [?], Laplacian pyramid [?] or Haar pyramid [?]. Our work focuses on transferring photo styles adaptively based on the given references, instead of a crafted architecture designed for the transfer of a certain style.

    Figure 2: The flowchart of the SuperBIG algorithm. (a) Input and reference images. (b) Matched points detected by the dense correspondence method. (c) Hierarchical features for each pixel. (d) Superpixels obtained from the distance between each pixel and the matched points. (e) Superpixels obtained by pixel-level bipartite graph partition. (f) The superpixel correspondence generated by superpixel bipartite graph matching. (g) The stylized result based on the colors of the input and reference images, as well as the superpixel correspondence.

    For image stylization, exploiting only signal properties and statistical correspondence cannot guarantee the correctness of local style decisions. Recently, some methods have explored ways to create and utilize content-level correspondence to benefit the stylization. In [?, ?], the input and reference images are segmented first; colors are then propagated from color images to greyscale images via a set of locally homogeneous patches or basic elements called color scribbles. Charpiat et al. [?] assigned colors to the greyscale image by solving an optimization problem in a graph-cut framework. In [?], after manual segmentation of the major foreground objects, a belief-propagation algorithm colorizes the greyscale image with the help of Internet images. In [?, ?], colors are transferred by estimating a per-pixel registered correspondence between the input and reference images. Kumar et al. [?] proposed to create correspondences between superpixels by fast cascade feature matching, and then refine the transfer results with a voting approach. Cheng et al. [?] proposed a superpixel-based recoloring scheme based on a soft matching embedded with color statistics, texture characteristics and spatial constraints to generate recolored images. There are also works that recommend favorite exemplars based on visual information [?] or patch aggregation [?]. Compared with previous methods, our method addresses the general style transfer of photos instead of a single style element, such as color alone, or certain artistic styles. We devote ourselves to offering an integrated solution that transfers the composition of light, color and contrast automatically.

    The proposed SuperBIG transfers the style of the reference image to the input image by a two-step bipartite graph framework, as shown in Figure 2. SuperBIG first detects the dense correspondence (Figure 2(b)) and calculates the designed hierarchical features (Figure 2(c)). Based on the correspondence and features, SuperBIG then aggregates pixels into superpixels, using a simple clustering algorithm (Figure 2(d)) for the pixels around the matched points and a bipartite graph framework (Figure 2(e)) for the pixels far from the matched points. Afterwards, SuperBIG transfers the colors between corresponding superpixels (Figure 2(f)) in a decorrelated color space.

    A superpixel is a cluster of pixels with similar color and brightness. It was proposed to define coherent regions as the basic elements of over-segmentation, and usually provides an initialization for segmentation [?, ?, ?] or a soft constraint on segmentation [?, ?]. Compared with raw pixels, superpixels are a sparser and more efficient representation, while they provide more reliable and fine-grained regions than segmented objects.

    SuperBIG creates and embeds the superpixels of the input and reference images in a unified bipartite graph framework. It obtains superpixels in two steps. The first is to cluster pixels into superpixels based on distance measurements with the dense correspondence, which is estimated by deep matching [?]. The hierarchical features used to measure the distances between pixels include colors, intensity patterns, textures, etc. The second step is to employ an automatic bipartite partition, in an unsupervised way, to group the pixels that are not covered by any superpixel in the first step. Here we elaborate on the related features.

    We use subscripts to index pixel locations in an image and superscripts to denote features of the input and reference images, respectively; the intensity of a pixel is defined at each location accordingly. We extract a set of features for two purposes: to measure content similarity within the same domain/style (e.g., within an image) or across domains/styles (e.g., between two styled images). The extracted features are thus classified into two categories: style-related (including patch intensity, color, gradient, absolute location) and style-independent (including texture, relative location, locality-constrained linear coding feature). All these features are described below:

    • Intensity vector of a patch:

      where the set contains the locations of the pixels in a patch centered at the given location.

    • Color at the pixel, which is composed of,

      where the three channels of the image are related to the intensity of that pixel as follows,

    • Gradient of a patch:

      where the two components denote the intensity variations of the original image along the horizontal and vertical directions, respectively.

    • Absolute location:

      defined as the pixel location normalized by the height and width of the image, in the original image coordinates.

    • Texture feature of a patch centered at the pixel. Details of its calculation are presented in [?].

    • Relative location. SuperBIG regards the densely matched points as reliable locations and uses them to ‘relocate’ each pixel in a new coordinate system whose basis is formed by the locations of these matched points. The relative location is defined as the representation coefficients of a pixel location over the locations of its several nearest matched points within the image. The locations of the five nearest matched points to a pixel are denoted as,

      The current location is then represented as the product of this basis and a representation coefficient vector,

      and the coefficient vector is solved by,

      where the ridge parameter avoids singular solutions. To generate the relative location, we place the solved coefficients in the dimensions corresponding to their matched points and zeros in the other dimensions.

    • Locality-constrained linear coding (LLC) feature. Similar to the idea of the relative location, we calculate a ‘relative location’ in the feature space to generate a measurement of content similarity that is independent of the style. With the matched points provided by deep matching, we use the features of these matched points as the basis (i.e., the coordinates in the feature space) to calculate the representation coefficients. Assume the five nearest matched points to a given location are represented in the feature space,

      Then, a sparse coefficient vector is calculated by solving,

      We then have,

      where the ridge parameter avoids singular solutions. To generate the LLC feature, we place the solved coefficients in the dimensions corresponding to their matched points and zeros in the other dimensions.

      With the help of the above-mentioned features of the several nearest matched points, the representation coefficients of the unmatched points are obtained for both the input and reference images.
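    Both the relative location and the LLC feature reduce to a small ridge regression over the k nearest matched points. A minimal sketch of that shared step follows; the function name, the basis size, and the ridge value are illustrative assumptions, not from the paper:

```python
import numpy as np

def relative_coefficients(p, basis, ridge=1e-3):
    """Represent a location/feature vector `p` as a ridge-regularized
    combination of its nearest matched points (the rows of `basis`).

    Solves  min_c ||p - basis.T @ c||^2 + ridge * ||c||^2,
    whose closed form is  c = (B B^T + ridge*I)^(-1) B p  with B = basis."""
    B = np.asarray(basis, dtype=float)        # k x d (k nearest matched points)
    p = np.asarray(p, dtype=float)            # d-dimensional location/feature
    G = B @ B.T + ridge * np.eye(B.shape[0])  # k x k regularized Gram matrix
    return np.linalg.solve(G, B @ p)          # k coefficients
```

The solved coefficients would then be scattered into the dimensions of their matched points, with zeros elsewhere, as the text describes.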

    Intuitively, these features are diverse in order to cover most of the information needed to build the content correspondence. As mentioned above, according to whether a feature is capable of measuring content similarity across styles, the features are classified as style-related or style-independent. The former are mainly exploited to measure the similarity between two pixels within the same image, while the latter are utilized to measure the similarity between the input and reference images.

    Here we create superpixels around the matched points and build a mapping based on the correspondences of these points. Intuitively, coupled superpixels around paired matched points share the same style transformation. For each pair of matched point locations, the distance from a pixel in the input image to the corresponding matched point is calculated from the style-related features as follows,

    (1)

    where the weighting parameters balance the effect of each term. The distances in the reference image can be computed similarly. Then, we create superpixel clusters containing all pixels whose distance to the corresponding matched point is less than a given threshold. After that, the superpixels around the matched points are obtained. SuperBIG further handles the remaining unsettled pixels in the bipartite graph framework described hereafter.
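    This thresholded seed-clustering step can be sketched as follows; the function name, the distance metric, and the use of -1 for unassigned pixels are illustrative assumptions:

```python
import numpy as np

def seed_superpixels(features, matched_idx, tau):
    """Assign each pixel to its nearest matched (seed) point in feature
    space when the distance is below the threshold `tau`; pixels beyond
    the threshold stay unassigned (-1) for the bipartite-graph step.

    features    : N x d array of per-pixel feature vectors
    matched_idx : indices of the matched (seed) pixels
    tau         : distance threshold for cluster membership"""
    F = np.asarray(features, dtype=float)
    seeds = F[matched_idx]                                    # m x d
    # pairwise distances from every pixel to every seed: N x m
    d = np.linalg.norm(F[:, None, :] - seeds[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    return np.where(d.min(axis=1) < tau, nearest, -1)
```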

    After obtaining the superpixels around the matched points, SuperBIG constructs a pixel-level bipartite graph from the uncovered pixels that do not belong to any superpixel. A bipartite partition then clusters these unsettled pixels into superpixels.

    Let the hierarchical feature vectors correspond to the pixels of the input and reference images. Because we aim to calculate the content closeness of pixels in two images with different styles, these hierarchical features consist of the style-independent features, such as locations, gradients and textures, defined as follows,

    (2)

    The reference-image counterpart is defined in the same way.

    SuperBIG constructs the pixel bipartite graph using the hierarchical features to calculate the affinities between nodes. Each node corresponds to an unsettled pixel of the input or reference image. An edge connects two nodes in the bipartite graph only when the nearest dense points of their corresponding pixels are largely matched. The edge weights (affinities) are calculated from the hierarchical features and adjusted by a weighting parameter for each kind of feature as follows,

    (3)

    Then, a weighted bipartite graph is constructed, in which two nodes are connected when the nearest dense points of their corresponding pixels are exactly paired in the dense correspondence. The edge weights (affinities) correspond to the similarities, which are independent of the style.

    When performing the graph partition, a natural choice is spectral clustering, which captures the cluster structure of a graph from the spectrum of its Laplacian matrix. It is formulated as a generalized eigen-problem,

    Ly = λDy,    (4)

    where λ is the eigenvalue to be optimized, L = D − W is the Laplacian matrix, W is the affinity (adjacency) matrix of the graph containing the affinity of every pair of nodes, and D = diag(W1) is the degree matrix, with 1 a vector of all ones. For clustering, the Laplacian matrix is approximated by a block-diagonal matrix. The Laplacian can also be defined as the normalized Laplacian D^(-1/2)(D − W)D^(-1/2) or the generalized Laplacian D^(-1)(D − W).

    The problem can be solved with the Lanczos method [?] on the normalized affinity matrix, or with a partial SVD [?] on the normalized cross-affinity matrix. Adopting the latter solution in our method, the bottom eigenvectors of (4) are obtained from the top left and right singular vectors of the normalized cross-affinity matrix,

    W' = D_U^(-1/2) W D_V^(-1/2),    (5)

    where D_U and D_V denote the degree matrices of the input-side and reference-side nodes, respectively. Then, we obtain the superpixel clusters of the two images and pair them into a set of coupled superpixel clusters.
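    The partial-SVD solution can be sketched along the lines of standard bipartite spectral co-clustering; this is a minimal illustration assuming non-zero node degrees, with illustrative names:

```python
import numpy as np

def bipartite_embedding(W, k):
    """Spectral embedding of a bipartite graph from its cross-affinity
    matrix W (rows: input-image nodes, columns: reference-image nodes).

    Normalizes W by the square-root degree matrices of both sides and
    takes the top-k singular vectors; stacking the (rescaled) left and
    right vectors embeds all nodes in a common k-dim space, where any
    standard clustering (e.g. k-means) yields the coupled superpixels."""
    W = np.asarray(W, dtype=float)
    d_u = W.sum(axis=1)                    # degrees of input-side nodes
    d_v = W.sum(axis=0)                    # degrees of reference-side nodes
    Wn = W / np.sqrt(np.outer(d_u, d_v))   # D_u^{-1/2} W D_v^{-1/2}
    U, s, Vt = np.linalg.svd(Wn, full_matrices=False)
    Zu = U[:, :k] / np.sqrt(d_u)[:, None]  # embedding of input-side nodes
    Zv = Vt[:k].T / np.sqrt(d_v)[:, None]  # embedding of reference-side nodes
    return np.vstack([Zu, Zv])
```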

    In the above step, SuperBIG estimates superpixels only for the pixels that are not covered by the superpixels of the matched points; the matched points and their covered pixels are entirely excluded from the constructed pixel-level bipartite graph. This may lead to inaccurate matchings when some superpixels of matched pixels in the input image in fact correspond to superpixels of unmatched pixels in the reference image.

    Thus, SuperBIG constructs a superpixel bipartite graph and performs a graph matching on it. The nodes of the new graph represent the superpixels of the two images. An edge connects two nodes only when their hierarchical features are close enough in the feature space. Considering that the pixels in a superpixel share similar features, for simplicity, the hierarchical features of a superpixel are defined as the mean vector of the hierarchical features of the pixels within it. The affinities of the superpixel bipartite graph are calculated from these superpixel hierarchical features, in the same way as (3). Then, SuperBIG solves the bipartite graph matching with the Hungarian algorithm [?], obtaining the final superpixel correspondences.
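    The superpixel-level matching step amounts to a minimum-cost bipartite assignment on mean-feature distances. The toy sketch below enumerates permutations purely for illustration; in practice the Hungarian algorithm the paper uses (e.g. scipy.optimize.linear_sum_assignment) would be the drop-in solver. Names are illustrative assumptions:

```python
import itertools
import numpy as np

def match_superpixels(feat_in, feat_ref):
    """Brute-force minimum-cost one-to-one matching between superpixels,
    illustrating the objective the Hungarian algorithm optimizes.

    feat_in, feat_ref : (n x d) mean hierarchical features of the
    superpixels in the input and reference images."""
    A = np.asarray(feat_in, dtype=float)
    B = np.asarray(feat_ref, dtype=float)
    cost = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # n x n
    n = cost.shape[0]
    best = min(itertools.permutations(range(n)),
               key=lambda p: sum(cost[i, p[i]] for i in range(n)))
    return list(enumerate(best))  # (input superpixel, reference superpixel)
```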

    After a reliable superpixel correspondence is obtained, the style transfer is built on it. Color and contrast transfer usually changes the dominant color and contrast distribution, mapping them to desirable color and contrast casts. A slightly more general approach is to fit the color statistics of the input image to those of the reference one. However, global methods based on color statistics cannot handle tough cases, such as images containing complex details and diverse colors. Based on the SuperBIG framework, the styles of an image can be transferred locally at the granularity of superpixels.

    SuperBIG transfers colors by manipulating statistics in the CIE-lαβ space, a decorrelated color space, as our local mapping method. The three channels of an RGB image are first mapped by a predefined transformation matrix, and the result is then converted to the logarithmic space,

    This decorrelation makes the three color channels approximately independent. SuperBIG then adjusts the color statistics in this space by matching the mean and variance as follows,

    l_out = (σ_R / σ_I)(l_I − μ_I) + μ_R,    (6)

    where μ and σ denote the mean and standard deviation of a given channel, and the subscripts I and R indicate the input and reference images. With this decorrelated style transfer for local regions, SuperBIG transfers styles between each pair of estimated corresponding superpixels. To avoid boundary effects between superpixels, we finally smooth the transferred result with the guided image filter [?].
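    Per channel and per superpixel pair, this statistic matching can be sketched as follows (the channel is assumed to already be in the decorrelated log space; the function name is an illustrative assumption):

```python
import numpy as np

def transfer_channel(src, ref):
    """Match first- and second-order statistics of one decorrelated
    color channel between a corresponding superpixel pair:
        out = (sigma_ref / sigma_src) * (src - mean_src) + mean_ref
    so the output inherits the reference mean and spread."""
    src = np.asarray(src, dtype=float)
    ref = np.asarray(ref, dtype=float)
    return ref.std() / src.std() * (src - src.mean()) + ref.mean()
```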

    We compare the proposed method (SuperBIG) with the following six state-of-the-art style/color transfer methods: the lαβ decorrelated color space (lαβ) [?], color “mood” transfer (MoodTrans) [?], multi-scale harmonization (Harmonization) [?], landmark sparse color representation (Landmark) [?], the neural algorithm of artistic style (NeuralArt) [?] and superpixel matching (SuperMatch) [?]. The results of these methods are generated by the published codes kindly provided by the authors. When compared to colorization methods, SuperBIG first turns the input image into a greyscale one and then colorizes the generated greyscale image. We set the parameters as: and .

    The comparison results of SuperBIG and other state-of-the-art methods for three input images are presented in Figures 3-5. Please enlarge and view these figures on screen for better comparison. The subjective quality of these results demonstrates the superiority of the proposed SuperBIG. lαβ and Harmonization fail to transfer the color, because of wrong dominant color predictions (panels (b)) as well as heavily blurred or extremely rough sky regions (panels (c)), respectively. Landmark, NeuralArt and SuperMatch suffer from wrong local style predictions (panels (d), (f) and (g)), e.g., the blue color near the edges and corners of the pyramid and the color artifacts on the top of the towers of the Taj Mahal. Thanks to the informative hierarchical features and the effective superpixel bipartite framework modeling both global and local correspondences, SuperBIG transfers the proper styles to local regions, as shown in panels (h).

    (a) Input
    (b) lαβ
    (c) Harmonization
    (d) Landmark
    (e) Reference
    (f) NeuralArt
    (g) SuperMatch
    (h) SuperBIG
    Figure 3: Visual comparisons of style transfer from (a) to (e) among different algorithms.
    (a) Input
    (b) lαβ
    (c) Harmonization
    (d) Landmark
    (e) Reference
    (f) NeuralArt
    (g) SuperMatch
    (h) SuperBIG
    Figure 4: Visual comparisons of style transfer from (a) to (e) among different algorithms.
    (a) Input
    (b) lαβ
    (c) Harmonization
    (d) Landmark
    (e) Reference
    (f) NeuralArt
    (g) SuperMatch
    (h) SuperBIG
    Figure 5: Visual comparisons of style transfer from (a) to (e) among different algorithms.

    The subjective results of SuperBIG in transferring different styles are shown in Figure 6. From the results, we observe that SuperBIG generates results containing clear and natural content while successfully changing their styles based on the reference images, leading to a similar spatial distribution of color and contrast. It is worth noting that, even for the night image shown at the bottom-right of Figure 6(b), where the background light is dim, SuperBIG still achieves the transformation successfully and generates natural-looking results.

    (a) Input
    (b) Output: style-transferred photos from the examples. The insets show the examples.
    Figure 6: Visual comparisons of SuperBIG style transfer for different reference images.

    To compare different stylization results from an observer’s perspective, we employ the paired-comparison approach: participants are shown two stylized images at a time, side by side, and are asked to choose the preferred one by considering both visual quality and style similarity to the exemplar. We have a total of 90 participants, including both domain experts and generally knowledgeable individuals, each given 105 pairwise comparisons over a set of five images with seven different style transfer methods. Figure 7 illustrates the seven methods, ranked by the number of votes received. It can be seen that the proposed SuperBIG outperforms the other methods in four of the five cases and achieves overall superior performance. Even in the exceptional case, the test image Arch, it still shows performance comparable to the first-ranked method. Besides the voting statistics, we also show a stability analysis calculated by the rank product [?]. Table 1 shows the rank product of each method, i.e., the geometric mean of its rankings over all test images. Compared with the others, SuperBIG produces the best consistency among different test cases and achieves the best visual quality.
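    The rank-product statistic used here is simply the geometric mean of a method's per-image rankings; a minimal sketch (the function name is an illustrative assumption):

```python
def rank_product(ranks):
    """Rank product of one method over N test images: the geometric
    mean of its per-image ranks. Lower is better and more consistent."""
    prod = 1.0
    for r in ranks:
        prod *= r
    return prod ** (1.0 / len(ranks))
```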

    Method CIE-L Harmonization Landmark MoodTrans NeutralArt SuperMatch SuperBIG
    Rank 4.04 5.07 4.22 6.35 2.83 2.83 1.15
    Table \thetable: Comparison of the rank product of seven methods.
    Figure \thefigure: The number of votes per testing image and the total ranking of seven methods.
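    The rank product used for the stability analysis is the geometric mean of a method's per-image ranks. A minimal sketch of the computation (the per-image ranks below are hypothetical illustrations, not the study's raw data):

```python
import numpy as np

def rank_product(ranks):
    """Rank product of one method: the geometric mean of its
    per-image ranks (1 = best). A lower rank product indicates
    more consistent top performance across the test images."""
    ranks = np.asarray(ranks, dtype=float)
    return float(np.prod(ranks) ** (1.0 / ranks.size))

# Hypothetical per-image ranks for two methods over five test images.
print(rank_product([1, 1, 1, 2, 1]))  # ≈ 1.149 (consistently near the top)
print(rank_product([4, 5, 3, 6, 4]))  # ≈ 4.28 (consistently mid/low ranked)
```

Using the geometric rather than arithmetic mean penalizes a method that ranks well on some images but poorly on others, which is why it serves as a consistency measure here.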

    To further explore the functionality of each step of SuperBIG, we perform an ablation analysis of each step in the flowchart, as shown in the figure below. We find that deep matching provides a large number of matched points, and panels (b) and (g) show that most of them are visually correct. Taking a given portion of the matched points (those with the highest confidence scores) and computing the hierarchical features, SuperBIG obtains superpixels around the matched points, as shown in (c) and (h). Afterwards, uncovered pixels are handled in a pixel-level bipartite graph to generate the remaining superpixels in (d) and (i). With the correspondence obtained so far, we generate the style transfer result in (e). Because the matching in the previous steps does not consider global information, it produces only a locally consistent result, with some visually unpleasant details. First, there are inaccurate color transfer results in the bottom-right part of the image. Second, the sky in (e) presents abundant textures, unlike the sky in both the input and reference images. SuperBIG therefore reconsiders the matching between all superpixels of the two images. Owing to the features being refined from pixels to superpixels and the global optimization, SuperBIG generates a well-constructed result in (j).

    Figure \thefigure: The ablation analysis of SuperBIG. (a) The input image. (b) Dense correspondence in (a). (c) Superpixels for matched points in (a). (d) Superpixels for other pixels in (a). (e) The transfer result with the superpixel correspondence generated from the pixel-level bipartite graph partition. (f) The reference image. (g) Dense correspondence in (f). (h) Superpixels for matched points in (f). (i) Superpixels for other pixels in (f). (j) The transfer result with the superpixel correspondence generated from the superpixel-level bipartite graph matching.
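    The superpixel-level rematching stage can be viewed as a minimum-cost bipartite assignment between the two superpixel sets. The sketch below uses SciPy's Hungarian-method solver on Euclidean feature distances as a stand-in for the paper's bipartite matching formulation; the superpixel feature vectors are toy values, not the hierarchical features of SuperBIG:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_superpixels(feats_in, feats_ref):
    """One-to-one superpixel correspondence by minimum-cost
    bipartite matching (Hungarian method) on feature distances.

    feats_in:  (n, d) feature vectors of input-image superpixels
    feats_ref: (m, d) feature vectors of reference-image superpixels
    Returns a list of (input_index, reference_index) pairs."""
    # Pairwise Euclidean distances form the cost matrix of the
    # bipartite graph; the solver minimizes the total matching cost.
    cost = np.linalg.norm(feats_in[:, None, :] - feats_ref[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

# Toy example: three superpixels per image with 2-D features.
feats_in = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
feats_ref = np.array([[0.1, 1.0], [0.0, 0.1], [1.1, 0.0]])
print(match_superpixels(feats_in, feats_ref))  # [(0, 1), (1, 2), (2, 0)]
```

Because every input superpixel competes for every reference superpixel under one global cost, the assignment is globally consistent, which mirrors why the superpixel-level rematch removes the locally plausible but globally wrong correspondences of the pixel-level stage.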

    We also explore the effectiveness of each component of the hierarchical features, focusing on the five primitive features: color, distance (absolute and relative), texture, patch intensity vector, and gradient. The figure below shows the results generated by SuperBIG with different compositions of these five features. It can be seen that the composition of color and distance, or the patch intensity vector alone, leads to results containing many falsely transferred regions. Adding the texture feature removes many false regions by enforcing texture consistency; however, the quality of the sky remains limited. The patch intensity vector imposes a local constraint on the transfer and generates a natural-looking result. The gradient feature yields a smoother result with higher visual quality.

    Figure \thefigure: The validation of the hierarchical features in SuperBIG. (a) Color + Distance. (b) Patch intensity vector. (c) Color + Distance + Texture. (d) Color + Distance + Texture + Patch intensity vector. (e) Color + Distance + Texture + Patch intensity vector + Gradient.
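    One plausible way to assemble the five primitive cues into a single per-pixel descriptor is weighted concatenation. The sketch below is illustrative only: the window size, the texture proxy (local intensity standard deviation), and the weights are assumptions for demonstration, not the paper's actual feature definitions:

```python
import numpy as np

def pixel_features(img, y, x, patch=3, w=(1.0, 0.5, 0.5, 1.0, 0.5)):
    """Concatenate five primitive cues for one pixel into a single
    descriptor: color, normalized absolute position, a crude local
    texture statistic, the patch intensity vector, and gradient.
    Window size, texture proxy, and weights `w` are illustrative."""
    h, wth, _ = img.shape
    r = patch // 2
    ys = slice(max(0, y - r), y + r + 1)
    xs = slice(max(0, x - r), x + r + 1)
    gray = img[ys, xs].mean(axis=2)         # intensity of local window
    color = img[y, x]                       # RGB at the pixel
    pos = np.array([y / h, x / wth])        # normalized absolute position
    texture = np.array([gray.std()])        # local contrast as texture proxy
    inten = np.resize(gray, patch * patch)  # patch intensity vector
    gy, gx = np.gradient(gray)
    grad = np.array([gy.mean(), gx.mean()]) # mean local gradient
    parts = [color, pos, texture, inten, grad]
    return np.concatenate([wi * p for wi, p in zip(w, parts)])

img = np.random.rand(16, 16, 3)
print(pixel_features(img, 8, 8).shape)  # (3 + 2 + 1 + 9 + 2,) = (17,)
```

The ablation in the figure corresponds to zeroing out (or dropping) subsets of `parts`: color + distance alone ignores local structure, while the patch intensity and gradient terms add the local and smoothness constraints discussed above.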

    In this paper, we introduce the concept of the image stylistic brush and accordingly design an exemplar-based photo stylization method, SuperBIG, powered by a two-step bipartite graph algorithm. Specifically, a bipartite graph is first constructed from dense correspondences and hierarchical features to partition the pixels of the input and reference images into superpixels. Then, we build a superpixel-level bipartite graph, from which correspondences between superpixels are produced by bipartite matching. The correspondence then guides the style transformation in a decorrelated color space. Extensive experimental results demonstrate that the proposed SuperBIG achieves superior visual quality compared to state-of-the-art methods while producing a style consistent with the reference image.
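    The final transformation step can be illustrated with a Reinhard-style statistics transfer [34], which matches per-channel mean and standard deviation in a decorrelated color space. The sketch below operates on pixel arrays assumed to be already decorrelated (e.g. lαβ); in SuperBIG the analogous operation would be guided by the superpixel correspondences rather than applied globally:

```python
import numpy as np

def transfer_stats(src, ref):
    """Reinhard-style statistics transfer: shift and scale each
    channel of `src` so its mean and standard deviation match
    those of `ref`. Both arrays are (n, 3) pixel sets in a
    decorrelated color space, so channels are treated
    independently. In SuperBIG this would run per matched
    superpixel pair rather than over whole images."""
    s_mu, s_sd = src.mean(axis=0), src.std(axis=0) + 1e-8
    r_mu, r_sd = ref.mean(axis=0), ref.std(axis=0)
    return (src - s_mu) / s_sd * r_sd + r_mu

src = np.random.rand(1000, 3)             # "input" pixels
ref = np.random.rand(1000, 3) * 2.0 + 1.0  # "reference" pixels
out = transfer_stats(src, ref)
print(np.allclose(out.mean(axis=0), ref.mean(axis=0)))  # True
```

Decorrelating the channels first is what makes the independent per-channel scaling safe: in correlated RGB, shifting one channel alone would introduce color casts.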

    Although SuperBIG shows very promising results in extensive experiments, there is still room for improvement. First, SuperBIG assumes that the input and reference images contain the same scene; how to utilize general category information to enable more general exemplar-based stylization is worth exploring. Second, owing to its graph structure, SuperBIG is time-consuming and thus difficult to apply in real applications. We therefore aim to speed up the processing with optimizations such as image rescaling, in order to facilitate real-world applications.

    • [1] S. Bae, S. Paris, and F. Durand. Two-scale tone management for photographic look. ACM Trans. Graphics, 25(3):637–645, 2006.
    • [2] D. Bruff. The assignment problem and the hungarian method. Notes for Math, 20, 2005.
    • [3] W. Cheng, R. Jiang, and C. W. Chen. Color photo makeover via crowd sourcing and recoloring. In Proc. ACM Int’l Conf. Multimedia, pages 943–946, 2015.
    • [4] A. Y.-S. Chia, S. Zhuo, R. K. Gupta, Y.-W. Tai, S.-Y. Cho, P. Tan, and S. Lin. Semantic colorization with internet images. ACM Trans. Graphics, 30(6):156, 2011.
    • [5] C. J. Curtis, S. E. Anderson, J. E. Seims, K. W. Fleischer, and D. H. Salesin. Computer-generated watercolor. In Proc. Conf.  on Computer graphics and interactive techniques, pages 421–430, 1997.
    • [6] X. Z. Fern and C. E. Brodley. Solving cluster ensemble problems by bipartite graph partitioning. In Proc. Int’l Conf. Machine Learning, page 36, 2004.
    • [7] L. A. Gatys, A. S. Ecker, and M. Bethge. A neural algorithm of artistic style. ArXiv, abs/1508.06576, 2015.
    • [8] G. H. Golub and C. F. Van Loan. Matrix computations, volume 3. JHU Press, 2012.
    • [9] B. Gooch, G. Coombe, and P. Shirley. Artistic vision: painterly rendering using computer vision techniques. In Proc. Int’l symposium on Non-photorealistic animation and rendering, pages 83–ff, 2002.
    • [10] R. K. Gupta, A. Y.-S. Chia, D. Rajan, E. S. Ng, and H. Zhiyong. Image colorization using similar images. In Proc. ACM Int’l Conf. Multimedia, pages 369–378, 2012.
    • [11] K. He, J. Sun, and X. Tang. Guided image filtering. In Proc. IEEE European Conf. Computer Vision, pages 1–14. Springer, 2010.
    • [12] A. Hertzmann. Paint by relaxation. In Proc.  Int’l Conf.  Computer Graphics, pages 47–54, 2001.
    • [13] A. Hertzmann. A survey of stroke-based rendering. IEEE Computer Graphics and Applications, (4):70–81, 2003.
    • [14] A. Hertzmann, C. E. Jacobs, N. Oliver, B. Curless, and D. H. Salesin. Image analogies. In Proc. Int’l Conf.  on Computer graphics and interactive techniques, pages 327–340, 2001.
    • [15] T.-W. Huang and H.-T. Chen. Landmark-based sparse color representations for color transfer. In Proc. IEEE Int’l Conf. Computer Vision, pages 199–204, 2009.
    • [16] Y.-C. Huang, Y.-S. Tung, J.-C. Chen, S.-W. Wang, and J.-L. Wu. An adaptive edge detection based colorization algorithm and its applications. In Proc. ACM Int’l Conf. Multimedia, pages 351–354, 2005.
    • [17] Y. Hwang, J.-Y. Lee, I. S. Kweon, and S. J. Kim. Color transfer using probabilistic moving least squares. In Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition, pages 3342–3349, 2014.
    • [18] R. Irony, D. Cohen-Or, and D. Lischinski. Colorization by example. In Eurographics Symp. on Rendering, volume 2, 2005.
    • [19] J. E. Kyprianidis, J. Collomosse, T. Wang, and T. Isenberg. State of the art: A taxonomy of artistic stylization techniques for images and video. IEEE Trans. on Visualization and Computer Graphics, 19(5):866–885, 2013.
    • [20] J. E. Kyprianidis, H. Kang, and J. Döllner. Image and video abstraction by anisotropic kuwahara filtering. In Computer Graphics Forum, volume 28, pages 1955–1963, 2009.
    • [21] A. Levin, D. Lischinski, and Y. Weiss. Colorization using optimization. In ACM Trans. Graphics, volume 23, pages 689–694, 2004.
    • [22] W. Li, M. Agrawala, B. Curless, and D. Salesin. Automated generation of interactive 3D exploded view diagrams. ACM Trans. Graphics, 27(3):101, 2008.
    • [23] Y. Li, L. Sharan, and E. H. Adelson. Compressing and companding high dynamic range images with subband architectures. ACM Trans. Graphics, 24(3):836–844, 2005.
    • [24] Z. Li, X.-M. Wu, and S.-F. Chang. Segmentation using superpixels: A bipartite graph partitioning approach. In Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition, pages 789–796, 2012.
    • [25] L. Lin, K. Zeng, H. Lv, Y. Wang, Y. Xu, and S.-C. Zhu. Painterly animation using video semantics and feature correspondence. In Proc. Int’l symposium on Non-photorealistic animation and rendering, pages 73–80, 2010.
    • [26] X. Liu, L. Wan, Y. Qu, T.-T. Wong, S. Lin, C.-S. Leung, and P.-A. Heng. Intrinsic colorization. ACM Trans. on Graphics (SIGGRAPH Asia 2008 issue), 27(5):152:1–152:9, December 2008.
    • [27] X. Lu, Z. Lin, X. Shen, R. Mech, and J. Z. Wang. Deep multi-patch aggregation network for image style, aesthetics, and quality estimation. In Proc. IEEE Int’l Conf. Computer Vision, pages 990–998, 2015.
    • [28] H. Mobahi, S. R. Rao, A. Y. Yang, S. S. Sastry, and Y. Ma. Segmentation of natural images by texture and boundary compression. Int’l Journal of Computer Vision, 95(1):86–98, 2011.
    • [29] A. Orzan, A. Bousseau, P. Barla, and J. Thollot. Structure-preserving manipulation of photographs. In Proc. Int’l symposium on Non-photorealistic animation and rendering, pages 103–110, 2007.
    • [30] F. Pitie, A. C. Kokaram, and R. Dahyot. N-dimensional probability density function transfer and its application to color transfer. In Proc. IEEE Int’l Conf. Computer Vision, volume 2, pages 1434–1439, 2005.
    • [31] F. Pitié, A. C. Kokaram, and R. Dahyot. Automated colour grading using colour distribution transfer. Computer Vision and Image Understanding, 107(1):123–137, 2007.
    • [32] T. Pouli and E. Reinhard. Progressive histogram reshaping for creative color transfer and tone reproduction. In Proc. Int’l symposium on Non-photorealistic animation and rendering, pages 81–90, 2010.
    • [33] T. Pouli and E. Reinhard. Progressive color transfer for images of arbitrary dynamic range. Computers & Graphics, 35(1):67–80, 2011.
    • [34] E. Reinhard, M. Ashikhmin, B. Gooch, and P. Shirley. Color transfer between images. IEEE Computer graphics and applications, (5):34–41, 2001.
    • [35] J. Revaud, P. Weinzaepfel, Z. Harchaoui, and C. Schmid. Deep convolutional matching. ArXiv, abs/1506.07656, 2015.
    • [36] M. Rubinstein, D. Gutierrez, O. Sorkine, and A. Shamir. A comparative study of image retargeting. ACM Trans. Graphics, 29(6):160, 2010.
    • [37] K. Sunkavalli, M. K. Johnson, W. Matusik, and H. Pfister. Multi-scale image harmonization. ACM Trans. Graphics, 29(4):125, 2010.
    • [38] Y.-W. Tai, J. Jia, and C.-K. Tang. Soft color segmentation and its applications. IEEE Trans. on Pattern Analysis and Machine Intelligence, 29(9):1520–1537, 2007.
    • [39] B. Wang, Y. Yu, T.-T. Wong, C. Chen, and Y.-Q. Xu. Data-driven image color theme enhancement. ACM Trans. Graphics, 29(6):146, 2010.
    • [40] J. Wang, Y. Jia, X.-S. Hua, C. Zhang, and L. Quan. Normalized tree partitioning for image segmentation. In Proc. IEEE Int’l Conf. Computer Vision and Pattern Recognition, pages 1–8, 2008.
    • [41] T. Wang, J. Collomosse, D. Slatter, P. Cheatle, and D. Greig. Video stylization for digital ambient displays of home movies. In Proc. Int’l symposium on Non-photorealistic animation and rendering, pages 137–146, 2010.
    • [42] X. Wang and X. Tang. Face photo-sketch synthesis and recognition. IEEE Trans. on Pattern Analysis and Machine Intelligence, 31(11):1955–1967, 2009.
    • [43] T. Welsh, M. Ashikhmin, and K. Mueller. Transferring color to greyscale images. ACM Trans. Graphics, 21(3):277–280, 2002.
    • [44] G. Winkenbach and D. H. Salesin. Computer-generated pen-and-ink illustration. In Proc. Conf.  on Computer graphics and interactive techniques, pages 91–100, 1994.
    • [45] A. Y. Yang, J. Wright, Y. Ma, and S. S. Sastry. Unsupervised segmentation of natural images via lossy data compression. Computer Vision and Image Understanding, 110(2):212–225, 2008.
    • [46] J. Yuan, D. Wang, and A. Cheriyadat. Factorization-based texture segmentation. IEEE Trans. on Image Processing, 24, 2015.
    • [47] H. Zha, X. He, C. Ding, H. Simon, and M. Gu. Bipartite graph partitioning and data clustering. In Proc. IEEE Int’l Conf. Information and Knowledge Management, pages 25–32, 2001.
    • [48] W. Zhang, C. Cao, S. Chen, J. Liu, and X. Tang. Style transfer via image component analysis. IEEE Trans. on Multimedia, 15(7):1594–1601, 2013.