Fast color transfer from multiple images
Color transfer between images uses the statistical information of the images effectively. We present a novel approach to local color transfer between images based on simple statistics and locally linear embedding. A sketching interface is proposed for quickly and easily specifying the color correspondences between target and source images. The user can specify the correspondences of local regions using scribbles, which transfers the target color to the source image more accurately while smoothly preserving boundaries, and produces more natural output. Our algorithm is not restricted to one-to-one image color transfer and can make use of more than one target image to transfer color to different regions of the source image. Moreover, our algorithm does not require the source and target images to have the same color style or size. We propose a sub-sampling scheme to reduce the computational load. Compared with other approaches, our algorithm handles color blending in the input data much better, and it preserves the other color details of the source image. Various experimental results show that our approach specifies the correspondences of local color regions in the source and target images, expresses the intention of the user, and generates more natural results with better visual effect.
Keywords: Image processing, local color transfer, locally linear embedding, edit propagation
MSC: I.4.9 [Image Processing and Computer Vision]
1 Introduction
Color transfer is an image processing method that imparts the color characteristics of a target image to a source image. Ideally, a color transfer algorithm should apply the color style of the target image to the source image. A good color transfer algorithm should provide quality in both scene details and colors.
Reinhard et al. [1] presented a simple and potent color transfer algorithm that translates and scales an image pixel by pixel in color space according to the means and standard deviations of the color values in the source and target images.
There exist numerous procedures in the literature where probability distributions are used to process an image's colors [3, 15, 20] or to deliver user-controllable adjustment of the colors. Of the latter, the methods are either restricted to local editing [37] or perform global edit propagation [36]. These procedures have been proven to provide satisfactory results, but their common drawback is a somewhat large computational load caused by global optimization.
The technique we develop in this paper is local color transfer between images that utilizes the statistical information of the images effectively. We present a new method of local color transfer based on simple statistics and locally linear embedding (LLE) to optimize the constraints in a newly defined objective function. Our procedure automatically determines the influence of edited samples across the whole image, jointly considering spatial distance, sample location, and appearance. We convey the local color of one image to others using LLE.
In previous work, rough strokes followed by LLE were used to propagate color [32]. Our procedure differs in that we formulate the optimization problem anew, drawing on the non-linear manifold learning formulation. We pose the problem as a global optimization task and show that it can be solved as a sparse linear system. This merges global editing as in An and Pellacini [36], who used a dense solver, with a sparse optimization borrowed from Lischinski et al. [37], which we then modified to our needs.
Their work was also inspired by the manifold-learning methodology [34]; however, An and Pellacini [36] showed that this approach was not suitable for high-quality edit propagation. Instead, An and Pellacini proposed a method that uses a dense least-squares solver, which allowed them to propagate the affinities of all pairs of pixels to each other in order to retain quality. However, their dense linear system most often does not fit into computer memory for images of usual size. On the other hand, the method of Lischinski et al. [37] employs a sparse solver to deliver high-quality edit propagation, but it applies the edits only to nearby pixels that are spatially coherent. It also needs more accurate user input to achieve good results.
In this paper we introduce a formulation of the optimization that achieves global pixel interaction together with a sparse solution. To accomplish this, we interpret the image colors as a manifold in 3D space using the locally linear embedding algorithm [35]. Our work then builds on this and automatically determines the influence of edit samples across the whole image, jointly considering spatial distance, sample location, and appearance. We show how a color manifold can be warped globally to obtain a recoloring while its local relationships are preserved in order to retain the appearance of the source image.
In addition, we introduce another stratagem in order to achieve interactive performance. We sub-sample the image, which greatly reduces the number of color points to be considered. Next, we approximate the manifold using the sub-sampled points and interpolate the remaining values. We then maintain the same sub-sampling for different user inputs and only update the target color values provided by the user. Our procedure has a small memory footprint, scales linearly in the number of pixels, and allows interactive editing.
In summary, this article makes the following contributions:
- local color transfer between images based on simple statistics and locally linear embedding (LLE), which transfers the target color to the source image more accurately while preserving boundaries and produces more natural output,
- an algorithm that is not restricted to one-to-one image color transfer and can use more than one target image to transfer color to different regions of the source image, and
- a sub-sampling scheme that reduces the computational load.
2 Related Work
Nowadays, color transfer is a much-debated research area. Color transfer techniques can be classified into two categories, namely global and local algorithms.
Reinhard et al. [1] were the first to implement a color transfer method by globally transferring colors after translating the color data of the input images from the RGB color space to a decorrelated color space. This transferred colors quickly and efficiently and generated convincing output. The technique was further improved by Xiao et al. [14] and by Pitie et al. [15], who used a refined probabilistic model. Pitie et al. [4] extended their method to better perform non-linear adjustments of the color probability distributions between images.
Similarly, Chang et al. [16, 17] suggested global color transfer by introducing perceptual color categorization for both images and video. Yang et al. [18] initiated a method for color mood transfer that preserves spatial coherence based on histogram matching. This idea was developed further by Xiao et al. [19], who addressed the problem of local fidelity and global transfer in two steps: gradient-preserving optimization and histogram matching. Wang et al. [20, 21] proposed a technique for global color-mood exchange driven by predefined, labeled color palettes and example images. Cohen et al. [22] suggested a methodology that employs color-harmony rules to optimize the overall appearance after some of the colors have been altered by the user. Shapira et al. [23] suggested a solution that navigates the appearance of the image to obtain the desired result. Furthermore, automatic methods for colorizing grayscale images based on examples from internet images [24] and semantic annotations [25] were introduced.
In general, color transfer methods that act globally are not capable of accurately re-coloring small objects or humans.
Other approaches tried to counter the above-mentioned shortcomings by introducing rough control in image editing. Various distance measures and feature spaces were also considered in the literature. To cross textures and separate distinct regions, geodesic distance [28] and diffusion distance [29] were applied. Li et al. [30] championed the use of pixel classification based on user input. Locally linear embedding propagation preserved the manifold structure [32, 33] to tackle color blending.
To prevent the problem of color region mixing, Neumann et al. [2] suggested a 3D histogram matching technique that transfers the color components in hue (H), saturation (S), and intensity (I), respectively, in the HSI color space. Although the transfer of color information between target and source images can be achieved by this method, the result usually contains notable spatial artifacts and also depends on the resolution of the input image. Pitie et al. [3, 4] resolved the same problem by utilizing an N-dimensional probability density function that matches the 3D color histograms of the input images. They used a recursive, nonlinear method that estimates the transformation function from one-dimensional marginal distributions. This technique is potent enough to match the color palettes of the target and source images, but it often demands further processing to remove noise and spatial visual artifacts.
Segmentation-based techniques have been developed to resolve the color region mixing problem and to transfer colors to a local region of an image. Tai et al. [5, 6] attempted to solve these problems with a soft color segmentation method based on a mixture-of-Gaussians approximation that allows indirect user control. To resolve the color region mixing problem, some researchers segmented the input images using a fixed number of color classes [7, 8], an approach similar to that of Tai et al. [5, 6]. But when the color styles of the input images differ considerably from the reference color classes, these approaches fall somewhat short of appropriate segmentation results. Yoo et al. [27] also used soft segmentation for local color transfer between images. They tried to solve the color region mixing problem, but one drawback of their method is that while transferring color to a local region, the colors of other regions are also affected. We intend to solve this problem with our algorithm, transferring color from the target image to the source image without affecting the colors of other regions.
For transferring colors only among the desired regions, manual approaches with user intervention were suggested by some researchers. Maslennikova et al. [9] defined a rectangular area in each input image where color transfer was desired and then, utilizing region propagation, generated a color influence map. Pellacini et al. [10] suggested a stroke-based color transfer technique that employs pairs of strokes to specify the corresponding regions of the target and source images. Although users can change the color of a local region with a few strokes, detailed doctoring, such as of oil paintings and complex images, may require strenuous effort.
Recently, Wang et al. [11] proposed color theme enhancement of an image. They produced a new color style using predefined color templates instead of source images. To perform decently, this needs quite accurate user input.
Color transfer methodology is also utilized to apply colors to grayscale images. Welsh et al. [12] assigned chromaticity values by matching the luminance channels of the target and source images. Abadpour et al. [13] obtained reliable results by employing a principal component analysis method.
Moreover, some researchers have shown a keen interest in the transformation of colors among distinct color spaces. Color transfer warrants a color space in which the major components of a color are mutually independent. Since the colors in the RGB color space are correlated, the decorrelated CIELab color space is usually employed. This requires transforming the colors from RGB to CIELab and back. Xiao et al. [14] proposed an improved solution that circumvents the transformation between correlated color spaces and uses translation, rotation, and scaling to transfer the colors of a target image directly in the RGB color space.
The method of Chen et al. [32] is based on locally linear embedding [34, 35], as is our approach. The main difference is our methodology of local color transfer between images based on simple statistics and LLE, which optimizes the constraints in a newly defined objective function. Our algorithm preserves pairwise distances between pixels of the original image while simultaneously mapping the colors to the user-defined target values, and it uses sub-sampling to reduce the computational load.
3 Formulations for Local Color Propagation
3.1 Local color transfer
Given a source and a target image, we can transfer the color from a region in the target image to a region in the source image by the following equation:
$$S'_c(p) = \frac{\sigma_t^c}{\sigma_s^c}\bigl(S_c(p) - \mu_s^c\bigr) + \mu_t^c, \qquad p \in M, \qquad (1)$$
where $S_c(p)$ and $S'_c(p)$ are the initial and final values of the source image in channel $c$, $\mu_s^c$ and $\mu_t^c$ are the averages of the values of channel $c$ in color space for the source and target images, respectively, $\sigma_s^c$ and $\sigma_t^c$ are the standard deviations of the values of channel $c$ for the source and target images, respectively, and $M$ is the mask region in the source image.
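As an illustrative sketch, the per-channel statistics transfer can be implemented directly with NumPy; the function name, the mask handling, and the guard against a near-zero standard deviation below are our own choices, not part of the paper:

```python
import numpy as np

def local_color_transfer(source, target, mask):
    """Per-channel statistics transfer: scale and shift the masked source
    region so its mean and standard deviation match the target region."""
    result = source.astype(np.float64).copy()
    for c in range(source.shape[2]):
        s = source[..., c].astype(np.float64)
        t = target[..., c].astype(np.float64)
        mu_s, sigma_s = s[mask].mean(), s[mask].std()
        mu_t, sigma_t = t.mean(), t.std()
        # S'_c(p) = (sigma_t / sigma_s) * (S_c(p) - mu_s) + mu_t  for p in M
        result[..., c][mask] = (sigma_t / max(sigma_s, 1e-8)) * (s[mask] - mu_s) + mu_t
    return result
```

Pixels outside the mask are left untouched, which mirrors the restriction of the transfer to the region $M$.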
3.2 Locally Linear Embedding
Our algorithm is inspired by Locally Linear Embedding (LLE) [34], which eliminates the need to estimate pairwise distances between widely separated data points. LLE maps from a high-dimensional to a lower-dimensional manifold based on the simple intuition that every sample can be represented by a linear combination of its neighbors. Let a vector $x_i$ represent a pixel in some feature space. Given a data set $\{x_1, \dots, x_n\}$, for each $x_i$ we find its $k$ nearest neighbors $x_{i_1}, \dots, x_{i_k}$ and compute a set of weights $w_{ij}$ that can best reconstruct $x_i$ from these neighbors. LLE computes the weights by minimizing
$$\varepsilon(W) = \sum_{i=1}^{n} \Big\| x_i - \sum_{j=1}^{k} w_{ij}\, x_{i_j} \Big\|^2 \qquad (2)$$
subject to the constraint $\sum_{j=1}^{k} w_{ij} = 1$. From the weights $w_{ij}$ we can then reconstruct $x_i$ from its neighbors.
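For a single pixel, these weights can be obtained by solving a small regularized linear system over the local Gram matrix and normalizing to enforce the sum-to-one constraint, following the standard LLE construction; in the sketch below the function name and the regularization factor are our choices, not values from the paper:

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Weights w minimizing ||x - sum_j w_j * neighbors[j]||^2
    subject to sum_j w_j = 1 (the standard LLE local fit)."""
    Z = neighbors - x                            # shift neighbors to the query point
    C = Z @ Z.T                                  # local Gram / covariance matrix (k x k)
    C = C + reg * np.trace(C) * np.eye(len(C))   # regularize for numerical stability
    w = np.linalg.solve(C, np.ones(len(C)))      # solve C w = 1
    return w / w.sum()                           # enforce the sum-to-one constraint
```

Normalizing the solution of $Cw = \mathbf{1}$ yields the constrained minimizer, as follows from the Lagrange-multiplier conditions of the local fit.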
3.3 Color propagation
Given a source and a target image, we can propagate color by minimizing the following energy:
$$E(z) = \sum_{i} \Big\| z_i - \sum_{j \in N(i)} w_{ij}\, z_j \Big\|^2 + \lambda \sum_{i \in \Omega} (z_i - t_i)^2, \qquad (3)$$
where $z_i$ and $t_i$ are the values of pixel $i$ in the source and target images, respectively, $N(i)$ is the neighborhood set of pixel $i$, $\Omega$ is the region whose color is to be propagated from the target image into the source image, and $\lambda$ is a parameter that determines the relative importance of the second term compared with the first.
This energy can be further written in matrix form as
$$E(z) = \|(I - W)z\|^2 + \lambda\,(z - t)^T \Lambda\,(z - t), \qquad (4)$$
where $z$ is the vector with elements $z_i$, $I$ is the identity matrix, $W$ is the sparse matrix of reconstruction weights $w_{ij}$, $\Lambda$ is a diagonal matrix with diagonal elements $\Lambda_{ii} = 1$ if $i \in \Omega$ (and 0 otherwise), and $t$ is the vector with elements $t_i$ for $i \in \Omega$. The minimization of the energy is thus equivalent to solving a sparse linear system as follows:
$$\bigl((I - W)^T (I - W) + \lambda \Lambda\bigr)\, z = \lambda\, \Lambda\, t. \qquad (5)$$
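A minimal sketch of this solve with SciPy's sparse machinery is given below; the weight matrix is assumed to be pre-assembled from the LLE reconstruction weights, and the function and variable names are illustrative rather than from the paper:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def propagate(W, t, in_omega, lam=1.0):
    """Solve ((I - W)^T (I - W) + lam * Lambda) z = lam * Lambda * t,
    where Lambda is diagonal with ones for pixels carrying a target value."""
    n = W.shape[0]
    I = sp.identity(n, format="csr")
    A = (I - W).T @ (I - W) + lam * sp.diags(in_omega.astype(float))
    b = lam * in_omega * t                    # Lambda * t, elementwise
    return spla.spsolve(sp.csr_matrix(A), b)
```

With a large $\lambda$ the solution honors the targets closely while the first term spreads the edit smoothly along the manifold encoded in $W$.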
We design the algorithm so that it can work with all pixels in the image. Unfortunately, the algorithm would then require target values for all pixels, which are tiresome to provide, and it would also require a very high computational time. To cope with sparse target values and to significantly reduce the computational load, our strategy is a sub-sampling approach that deals with both of these problems. The key observation is that all color points can be expressed as linear combinations of other points. The idea is therefore to compute a number of substantial sample points (landmark points) and apply the optimization merely to those points. The linear combination of the landmark points is then used to reconstruct the remaining points.
Depending on the application, different sampling strategies may be considered, but so far random sampling [26] has been the standard. Random sampling may work well when the sample size is sufficiently large, but a large sample size increases the workload. In our application we can improve on random sampling while still achieving good performance.
We determine the landmarks using the original point set: we draw a random index set $L$ from the full index set of all points. In order to get significant points into $L$, we require the chosen points to be unique and linearly independent, such that they form a (generalized) Delaunay triangulation in $\mathbb{R}^D$. For each of the remaining points we determine the $D$-dimensional simplex (with $D+1$ vertices) in which it is contained and compute its linear coefficients $\alpha$ with respect to the simplex vertices. Now all points can be reconstructed as linear combinations of the vertices of their Delaunay simplices; the coefficients $\alpha$ are in fact barycentric coordinates. Note that they have to be computed only once, in the preprocessing stage.
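This reconstruction step can be sketched with SciPy's `Delaunay` triangulation, which stores per-simplex affine transforms for computing barycentric coordinates; the nearest-landmark fallback for points outside the convex hull below is our own assumption, not specified in the paper:

```python
import numpy as np
from scipy.spatial import Delaunay

def barycentric_coefficients(landmarks, points):
    """For each point, find the containing Delaunay simplex of the landmark
    set and return (landmark vertex indices, barycentric coordinates)."""
    tri = Delaunay(landmarks)
    found = tri.find_simplex(points)
    coeffs = []
    for p, s in zip(points, found):
        if s == -1:  # outside the convex hull: fall back to the nearest landmark
            j = int(np.argmin(np.linalg.norm(landmarks - p, axis=1)))
            coeffs.append((np.array([j]), np.array([1.0])))
            continue
        T = tri.transform[s]         # affine map stored for simplex s
        b = T[:-1] @ (p - T[-1])     # first D barycentric coordinates
        coeffs.append((tri.simplices[s], np.append(b, 1.0 - b.sum())))
    return coeffs
```

Each point is then reconstructed as the coefficient-weighted sum of its simplex vertices, and the coefficients need to be computed only once in preprocessing.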
Now we solve the problem of Eq. 3 only for the landmark points, and all other points are computed as linear combinations of the known points using the previously computed linear coefficients $\alpha$. Likewise, the target values can be assigned to landmark points only in a user-interaction pass.
The sub-sampling rate controls the ratio of landmark points, which directly affects the computational speed and, more importantly, the quality of the output images. The principle is that a better estimation of the underlying manifold requires more landmark points; the main drawback is then a longer computational time. In the experiments we have performed so far, a moderate sub-sampling rate gives a good tradeoff between quality and speed. Fig. 4 depicts this relationship.
4 Results and applications
The experiment was performed in the computing environment
with Matlab2014a on a PC with an Intel(R) Core (TM)
i5-4690 CPU, 3.50 GHz processor and 8GB RAM under Windows OS.
Furthermore, the time taken by our algorithm with source image of pixels and two target images
of and pixels by setting is about 14.18 seconds.
We set in all the experiments shown in this paper. Our system uses
freehand closed regions to select and transfer colors from each target images to
source image. Then we select some regions in the source image
where the color needs to be transferred and
some regions where it does not required be changed.
This selection of regions can easily be drawn with the mouse freehandedly.
The user interface is quite easy to use even for novice users. Fig. 2 depicts this relationship.
The experimental results of our proposed method are compared with those of existing methods: Reinhard et al. [1], Neumann et al. [2], Pitie et al. [4], Tai et al. [6], and Yoo et al. [27]. A comparison is also made with the stroke-based techniques of Farbman et al. [29] and Chen et al. [32]. Our technique and the existing techniques were tested on six different pairs of images, including landscapes and objects, as shown in Fig. 3. The first row of Fig. 3 shows the source image on the left and the target image on the right. The results of each technique are shown for Reinhard et al. [1] in (a), Neumann et al. [2] in (b), Pitie et al. [4] in (c), Tai et al. [6] in (d), Yoo et al. [27] in (e), and our results in (f).
The comparison with the results of Reinhard et al. is made in Fig. 3(a). They transfer local color between images, but they cannot prevent color transfer to regions where it is not wanted, which produces the artifacts shown in Fig. 3(a). The results of Neumann et al. and Pitie et al. are shown in Fig. 3(b) and 3(c), respectively. Both are histogram-based local color transfer methods. Their drawbacks include the transfer of color to unwanted regions and unexpected changes of color style after the transfer. This is because the luminance-based color mapping, combined with differing color distributions, results in blurred and noisy images; another reason appears to be that the color mapping is carried out between pixels of similar luminance value.
The last two results, by Tai et al. and Yoo et al., which are segmentation-based methods, are depicted in Fig. 3(d) and 3(e). Their methods match regions of similar luminance value; hence the tree of the oil painting image in Result 2 is matched to the sky region of the target image. Result 3 shows that the flowers have different luminance values and regions compared with the input images, as the intuitive region matching between flowers was not performed properly.
The resulting images of our proposed method are depicted in Fig. 3(f), and the comparison between our method and the other existing methods is shown in Fig. 3. Our proposed method transfers the target color to the source image while preserving the boundaries and produces more natural output. Intuitively, the initial region matching is made between meaningful regions regardless of differences in color and luminance distributions, as shown in Fig. 3 by the region of sky in Result 1, the oil painting image in Result 2, the flowers in Results 3 and 5, and the toys in Result 4. One-to-one region matching is not required by our method, since the numbers of dominant regions in the input images are not always the same. Our method also excludes minor colors, since one-to-one matching does not guarantee a satisfactory result when the color styles of the source images are quite different.
In our method, the focus is on preserving the boundaries in the resulting image and on controlling the expansion of color into regions where it should not be transferred. As a result, the image quality remains better, as can be observed from the comparison results in Fig. 3; color expansion into unwanted regions blurs the image and compromises its quality. Our proposed method solves this problem, as is clear from our results. Our algorithm is not restricted to one-to-one color transfer and can use more than one target image to transfer color to different regions of the source image. Consequently, it provides more efficient, natural, and convincing results. Results in different environments with more than one target image are depicted in Figs. 1 and 7.
Moreover, we compare our results with stroke-based techniques in Fig. 5. Farbman et al. [29] diffuse local color using the strokes in the first row. This is a challenging task due to the high-contrast transitions between the buildings. Our method propagates the local color efficiently while preserving the other color details, and it produces results of comparable quality and visual effect to Farbman et al. [29], with the advantage that our method performs local color transfer between images rather than relying on strokes. In a similar fashion, we compare our results with Chen et al. [32], who also used a stroke-based technique. Our method achieves the same goal with results of the same or slightly better quality.
We further compare our results with Pouli et al. [31] in Fig. 6. In the first result it is clearly seen that, while transferring color to the grass, they were not able to preserve the color of the tiger. Moreover, their transferred color on the grass is sharper than the original color in the source image. We transfer the local color to the source image more faithfully while preserving the color of the tiger, and our technique produces a more natural result with better visual effect. In the second result, they transferred the local color from flower to flower but could not preserve the color of the carpel part of the flower, whereas we were able to transfer the color while preserving the carpel, which yields a more natural result with better visual effect.
Fig. 8 shows more results of our proposed method using two target images. They all show color-transferred results that reflect the target colors in the source images effectively. Moreover, boundary preservation in the resulting images is handled successfully. We consider images of different environments, such as beaches, sceneries, and indoor scenes. The color has been transferred from sky to sky, flower to flower, and shirt to shirt with effective boundary preservation and quality maintenance.
Fig. 9 shows results with one source image and one target image, which likewise produce natural and color-preserving results. Although the color transfer here is one-to-one, color is transferred only to those regions that were selected for transfer.
Limitation: One limitation of our algorithm is that we must give prior instructions for all objects present in the image. Besides selecting the regions where color is to be transferred, we must also select the regions where the original color is to be maintained. If we do not select the regions where color should not be transferred, the results will be less natural and have poorer visual effect.
5 Conclusion
We have presented an algorithm for local color transfer between images based on simple statistics and locally linear embedding for edit propagation. We proposed a sketching interface for specifying local color transfer between images. Our technique is very user-friendly and can be applied at commercial scale. The algorithm is not restricted to one-to-one image color transfer and can use more than one target image to transfer color to different regions of the source image. It does not require the source and target images to have the same color styles or sizes. The proposed algorithm can be applied to a broad range of motifs such as humans, landscapes, plants, and animals. Overall, our method is convincing, fast, and user-friendly, and generates natural results with good visual effect. Compared with other existing approaches, our method pursues the same goal but performs better color blending in the input data. In the future, we would like to extend this approach to colorization.
- (1) Reinhard E, Ashikhmin M, Gooch B, Shirley P. Color transfer between images. IEEE Comput Graph Appl 2001;21(5):34-41.
- (2) Neumann L, Neumann A. Color style transfer techniques using hue lightness and saturation histogram matching. in Computational Aesthetics in Graphics, Visualization and Imaging 2005, p. 111-122, Eurographics Association. 2005.
- (3) Pitie F, Kokaram A C, Dahyot R. N-dimensional probability density function transfer and its application to color transfer. in IEEE Int Conf on Computer Vision. IEEE Computer Society. 2005; Vol. 2, p. 1434-1439,
- (4) Pitie F, Kokaram A C, Dahyot R. Automated colour grading using colour distribution transfer. Comput Vis Image Underst 2007;107(1-2):123-137.
- (5) Tai Y W, Jia J, Tang C K. Local color transfer via probabilistic segmentation by expectation-maximization. in IEEE Computer Society Conf on Computer Vision and Pattern Recognition 2005;1:p.747-754.
- (6) Tai Y W, Jia J, Tang C K. Soft color segmentation and its applications. IEEE Trans Pattern Anal Mach Intell 2007; 29(9): 1520-1537.
- (7) Kim J H, Shin D K, Moon Y S. Color transfer in images based on separation of chromatic and achromatic colors. in Proc of the 4th Int Conf on Computer Vision Computer Graphics Collaboration Techniques; 2009; p. 285-296
- (8) Ha H G. Local color transfer using modified color influence map with color category. in IEEE Int Conf on Consumer Electronics-Berlin 2011; p. 194-197.
- (9) Maslennikova A, Vezhnevets V. Interactive local color transfer between images. in Proc of Graphicon 2007.
- (10) An X, Pellacini F. User-controllable color transfer. Comput Graph Forum 2010; 29(2): 263-271.
- (11) Wang B, Yu Y, Wong TT, Chen C, Xu YQ. Data-driven image color theme enhancement. ACM TOG 2010; 29(6): 146.
- (12) Welsh T, Ashikhmin M, Mueller K. Transferring color to grayscale images. ACM TOG 2002; 21(3): 277-280.
- (13) Abadpour A, Kasaei S. A fast and efficient fuzzy color transfer method. in Proc of the Fourth IEEE Int Symp on Signal Processing and Information Technology 2004, p. 491-494.
- (14) Xiao X, Ma L. Color transfer in correlated color space. in Proc ACM Int Conf on Virtual Reality Continuum and Its Applications 2006, p. 305-309.
- (15) Pitie F, Kokaram A. The linear Monge-Kantorovitch linear colour mapping for example-based colour transfer. In 4th European Conference on Visual Media Production(IETCVMP) 2007, p. 1-9.
- (16) Chang Y, Saito S, Nakajima M. Example-based color transformation of image and video using basic color categories. IEEE Trans Image Process 2007; 16(2): 329-336.
- (17) Chang Y, Saito S, Uchikawa K, Nakajima M. Example-based color stylization of images. ACM Trans Appl Percept 2005; 2(3): 322-345.
- (18) Yang C K, Peng L K. Automatic mood-transferring between color images. IEEE Comput Graph Appl 2008; 28(2): 52-61.
- (19) Xiao X, Ma L. Gradient-preserving color transfer. Comput Graph Forum 2009; 28(7): 1879-1886.
- (20) Wang B, Yu Y, Wong T T, Chen C, Xu Y Q. Data-driven image color theme enhancement. ACM Trans Graph 2010; 29(6): 146.
- (21) Wang B, Yu Y, Xu Y Q. Example-based image color and tone style enhancement. ACM Trans Graph 2011; 30(4): 64.
- (22) Cohen O D, Sorkine O, Gal R, Leyvand T, Xu Y Q. Color harmonization. ACM Trans. Graph 2006; 25(3): 624.
- (23) Shapira L, Shamir A, Cohen O D. Image appearance exploration by model-based navigation. Comput Graph Forum 2009; 28(2):629-638.
- (24) Liu X, Wan L, Qu Y, Wong T T, Lin S, Leung C S, Heng P A. Intrinsic colorization. ACM Trans Graph 2008; 27(5): 152.
- (25) Chia A Y S, Zhuo S, Gupta R K, Tai Y W, Cho S Y, Tan P, Lin S. Semantic colorization with Internet images. ACM Trans Graph 2011; 30(6):1
- (26) De Silva V, Tenenbaum J B. Sparse multidimensional scaling using landmark points. Technical report Stanford University (2004)
- (27) Yoo J D, Park M K, Lee K H. Local color transfer between images using dominant colors. Journal of Electronic Imaging 2013; 22(3): 033003.
- (28) Criminisi A, Sharp T, Rother C, Perez P. Geodesic image and video editing. ACM Trans Graph 2010; 29(5): 134.
- (29) Farbman Z, Fattal R, Lischinski D. Diffusion maps for edge-aware image editing. ACM Trans Graph 2010; 29(6): 145.
- (30) Li Y, Adelson E H, Agarwala A. Scribbleboost: Adding classification to edge-aware interpolation of local image and video adjustments. Comput Graph Forum 2008; 27(4):1255-1264.
- (31) Pouli T, Reinhard E. Progressive color transfer for images of arbitrary dynamic range. Comput Graph 2011; 35: 6780.
- (32) Chen X, Zou D, Zhao Q, Tan P. Manifold preserving edit propagation. ACM Trans Graph 2012; 31(6): 132.
- (33) Musialski P, Cui M, Ye J, Razdan A, Wonka P. A framework for interactive image color editing. Vis Comput 2013; 29:1173-1186.
- (34) Roweis S T, Saul L K. Nonlinear dimensionality reduction by locally linear embedding. Science 2000; 290(5500): 2323-2326.
- (35) Saul L K, Roweis S T. Think globally, fit locally: unsupervised learning of low dimensional manifolds. J Mach Learn Res 2003; 4(2): 119-155.
- (36) An X, Pellacini F. AppProp: all-pairs appearance-space edit propagation. ACM Trans Graph 2008; 27(3): 40.
- (37) Lischinski D, Farbman Z, Uyttendaele M, Szeliski R. Interactive local adjustment of tonal values. ACM Trans Graph 2006; 25(3):646.