Image Stylization by Interactive Oil Paint Filtering *
Amir Semmo a,1, Daniel Limberger a, Jan Eric Kyprianidis b,c, Jürgen Döllner a
aHasso Plattner Institute, University of Potsdam, Germany
bTU Berlin, Germany
This paper presents an interactive system for transforming images into an oil paint look. The system comprises two major stages.
First, it derives dominant colors from an input image for feature-aware recolorization and quantization to conform with a global
color palette. Afterwards, it employs non-linear ﬁltering based on the smoothed structure adapted to the main feature contours of the
quantized image to synthesize a paint texture in real-time. Our ﬁltering approach leads to homogeneous outputs in the color domain
and enables creative control over the visual output, such as color adjustments and per-pixel parametrizations by means of interactive
painting. To this end, our system introduces a generalized brush-based painting interface that operates within parameter spaces to
locally adjust the level of abstraction of the ﬁltering eﬀects. Several results demonstrate the various applications of our ﬁltering
approach to diﬀerent genres of photography.
Keywords: oil paint ﬁltering, artistic rendering, colorization, image ﬂow, interactive painting
Image-based artistic rendering has received significant attention in the past decades for visual communication, covering a broad range of techniques to mimic the appeal of artistic media [ ]. Oil paint is considered to be among the most popular of the elementary media because of its qualities for subtle color blending and texturing [ ]. Starting with the advent of semi-automatic painting systems in 1990 [ ], stroke-based techniques that align and blend primitives on a virtual canvas have been the predominant category to simulate oil paint [ ]. While their example-based texturing approach is able to provide high-quality outputs of expressive nature and great opportunities for layering, stroke-based techniques are usually hard to parameterize to simulate paint with soft color blendings or no visible borders, e.g., as practiced in the Renaissance era (such as sfumato [ ]) and prevalent in many figurative art works (Figure 1). To this end, image filtering is a promising alternative approach to produce painterly looks with more subtle color blendings, in particular with the recent advancements in shape-adaptive smoothing [ ], such as anisotropic diffusion [ ] and shock filtering [ ]. Simulating the visual characteristics of oil paint via image filtering, however, is a difficult task with respect to three main issues:
(I1) The color distribution should be optimized to conform to a global color palette while preserving contrasts of important or prioritized features.
(I2) The paint texture should be oriented to the main feature curves to mimic the way an artist might paint with a brush.
This is the authors’ version of the work. The deﬁnitive version will be published
in Computers & Graphics, 2016. doi: 10.1016/j.cag.2015.12.001.
Figure 1: Oil paintings by J. Vermeer (1665) and C. Monet (1873). The artists
use constrained color palettes with soft color blendings, two characteristics we
simulate by our ﬁltering technique.
(I3) The stylization process should be locally adjustable to enable creative control over the visual output.
In this work we present a technique for image stylization that employs (re-)colorization and non-linear image filtering to devise artistic renditions of 2D images with oil paint characteristics. Rather than attempting to simulate oil paint via aligned strokes [ ] or through physically-based techniques [ ], our work formulates I1 to I3 as sub-problems of image filtering (Figure 2). The first problem is solved by performing a recolorization, using the optimization-based approach of Levin et al. [ ] with the dominant colors of the input image for quantization. This approach produces more homogeneous color distributions than local image filtering techniques and gives users more control in refining global color tones. The second problem is solved using the smoothed structure tensor [ ], which is adapted to the feature contours of the quantized output, together with principles of line integral convolution [ ] and Phong shading [ ] to obtain a flow-based paint texture in real-time. Finally, the third problem is addressed by an interactive painting interface that implements GPU-based per-pixel parametrizations via virtual brush models to give users local control for adjusting paint directions, shading effects, and the level of abstraction. Our approach provides versatile parametrization capabilities to resemble paint modes that range from high detail to abstract styles.
Preprint submitted to Computers & Graphics, January 29, 2016.
Figure 2: Exemplary application of our technique to automatically transform a color image (left) to a filtered variant with oil paint characteristics (right).
This paper represents an extended journal version of the
CAe 2015 paper by Semmo et al. [ ]. Compared to the original
paper, the major contributions are twofold: (1) we provide new
methods to parametrize our local ﬁltering eﬀects according to
image masks (e.g., derived from saliency-based metrics) and
create outputs of varying level of abstraction, and (2) we expand
our original algorithms towards an interactive painting system
with brush tools for creative image editing. Accordingly, the
remainder of this work has been restructured as follows. Sec-
tion 2 reviews related work on image stylization and ﬁltering,
color quantization, and paint texture synthesis, now including
topics such as brush-based painting and level of abstraction for
stylized renderings. Section 3 presents the methods used for oil
paint ﬁltering, including extended methods to adjust the level of
abstraction according to importance masks, e.g., using depth or
saliency-based information. Section 4 proposes our interactive
painting interface with brush tools to locally adjust paint conﬁg-
urations and the level of abstraction. Section 5 presents further
results and implementation details, including comparisons to
previous stroke-based techniques and an updated prospect on
future work. Finally, Section 6 concludes this paper.
2. Related Work
Related work is found in the ﬁelds of image stylization and
filtering, color quantization, paint texture synthesis, and brush-based painting interfaces.
2.1. Image Stylization and Filtering
For artistic image stylization, three approaches can be distinguished: (1) stroke-based and example-based methods, (2) region-based techniques, and (3) image filtering [ ]. A classical method for stroke-based stylization is to iteratively align brush strokes of varying color, size, and orientation according to the input image [ ]. For an overview on this topic we refer to the survey by Hegde et al. [ ]. Example-based rendering typically involves texture transfers by image analogy [ ], a method previously used to create portraits with a painterly look [ ] and in neural networks to mimic painting styles [ ], but which typically requires training data as input.
An essential building block for region-based stylization is segmentation. Several methods based on a mean shift have been proposed for image abstraction [ ] and the simulation of artforms and fabrics, such as stained glass [ ] and felt [ ]. However, the rough boundaries of the segmented regions created by these methods would require elaborate post-processing to achieve the color blending characteristics of oil paint.
To substantially modify areas or image regions, local image filtering that operates in the spatial domain may be used, which is often based on anisotropic diffusion [ ]. A popular choice is the bilateral filter, which works by weight-averaging pixel colors in a local neighborhood according to their distances in space and range [ ], e.g., for image-based abstraction [ ]. Flow fields have been used to adapt bilateral filtering [ ] and particle-based techniques [ ] to local image structures. In this work, quantized color outputs are smoothed by flow-based Gaussian filtering to provide smooth interpolations at curved boundaries; however, we refrain from weighting in the color domain to achieve firmer color blendings.
Additional filter categories include morphological operations using dilation and erosion (e.g., for watercolor rendering [ ]) and global optimization schemes for image decomposition, such as weighted least squares [ ], local extrema for edge-preserving smoothing [ ], and gradient minimization [ ]. However, these techniques are less suited for rendering with a constrained color palette, which instead requires filtering schemes found in colorization.
2.2. Color Image Quantization and Abstraction
The typical goal of color quantization is to approximate an image with a relatively small number of colors while minimizing color deviations. Popular approaches are based on the median-cut algorithm [ ], clustering using octrees [ ], k-means [ ], and adaptive segmentation via perceptual models [ ] or roughness measures [ ]. However, these algorithms may absorb colors because of their global optimization scheme, or only operate in the color space. Other approaches also consider the spatial domain via local luminance mapping and thresholding [ ] or in their optimization scheme to preserve image details [ ], but are mainly self-organizing with respect to the derived color palette. By contrast, we propose a technique that derives colors from local image regions using a scoring system for optimal distribution, and uses the derived color palette for image quantization. At this, the optimization framework of Levin et al. [ ] is parametrized to propagate seed pixels—using colors of the derived palette—to the remaining pixels on the premise that pixels in space with similar intensities should have similar colors, with an additional pass for luminance quantization. A related optimization scheme was proposed by Kim et al. [ ] to seamlessly stitch uniformly-colored regions, but for the application of dequantization. The output produced by our method is then post-processed using a flow-based Gaussian filter to provide firm color blendings.
Artists are trained, beyond different painting styles and techniques, to enhance the communication of their emotions and ideas via principles of abstraction and highlighting. At this, the concept of level of abstraction (LoA) plays a major role for guiding a viewer's focus to certain image regions and improving the perception of information that is meant to be of particular interest [ ]. Here, a common method is to parametrize image filters according to image masks to select and seamlessly combine different LoA representations, e.g., by explicitly defined regions of interest [ ], saliency maps [ ], and importance maps derived from edge-based hierarchies [ ].
In this work, we also follow the approach of image masking
to provide diﬀerent LoA representations. Here, our main focus
lies on the deﬁnition of parameter sets to locally control the
ﬁltering eﬀects and their LoA, as well as the development of a
modular interface to inject image-based metrics, such as feature
semantics, image saliency, and the view distance derived from
images with depth information (e.g., for foreground/background
separation). Rosin and Lai [ ] also use image salience and
edge-based criteria to direct a color (de-)saturation and render
images with spot colors, e.g., to make foreground objects stand
out. Their technique also uses hue values for color quantization; however, it merely focuses on three classes to guide the abstraction: dark (black), intermediate (gray), and light (white).
By contrast, we seek an approach that guides the colorization via global and possibly refined color palettes, subject to an optimization scheme that minimizes color differences.
2.3. Interactive Paint Texture Synthesis
Contrary to physically-based paint modeling [ ], we separate the paint texture synthesis from color abstraction. Our computation is based on the smoothed structure tensor [ ] and an eigenanalysis to obtain gradient and tangent information for directed smoothing, similar to the work by Kyprianidis and Döllner [ ]. The smoothing pass is adapted to the main feature contours retrieved via the flow-based Laplacian of Gaussian (FLoG) [ ] to avoid ringing artifacts and provide outputs with adaptive detail. Similar to the work by Hertzmann [ ], bump mapping via Phong-based shading [ ] is used to synthesize a normal-mapped height texture that is aligned to the local feature orientations of the quantized image. The synthesis involves noise textures that are blurred in gradient flow direction to create a painting-like effect, an approach that is similar to line integral convolution [ ] and is also followed in the texture-based design of tensor fields [ ]. By contrast, our computation is defined as a composition of ordered, parametrizable image processing steps and performs in real-time; thus it may also be injected with user-specified motions [9, 57] via painting.
The often tedious, error-prone process of manually tweaking image filtering effects is typically limited to a subset of global parameters. Most consumer photo-editing products offer only rudimentary support for adjusting local parameters via masking and blending. Because our paint texture synthesis is local and fast, we propose a system for per-pixel parametrization that supports the modification and correction of (pre-)computed or intermediate filtering results. In contrast to placing dynamic or static rendering primitives [ ], we extend the concept of specialized local editing tools [ ] to a generalized brush-based painting within effect-parameter spaces. At this, our approach injects local parameters into image processing steps, a concept prominently used for WYSIWYG painting in the intensity [ ] and gradient domain [ ] to locally adjust stylized renderings [ ], which is often coupled with user-defined textures that can be interactively merged, intersected, and overlapped [ ]. Our generalized approach enables aggregating single filter operations and parameters (e.g., brush sizes, smoothing kernels) into high-level brush tools to explicitly control the LoA of the filtering effects, which is exemplified for local orientation and shading corrections of paint textures (e.g., to simulate the glossiness of water features). For a compendium on the synthesis of brush strokes we refer to the work by DiVerdi [ ].
An overview of our stylization technique is shown in Figure 3.
The processing starts with extracting a user-deﬁned number of
dominant colors from the input image (Section 3.1). Next, a
recolorization based on the optimization approach of Levin et al. [ ] is performed to quantize the chrominance and luminance channels using the derived color palette (Section 3.2). In parallel,
contour lines are extracted from the input image via diﬀerence-
of-Gaussians (DoG) and Laplacian-of-Gaussians (LoG) ﬁltering,
where the latter is used for parametrizing the computation of a
flow field (Section 3.3). The computation is based on the structure tensor [ ], which is adaptively smoothed according to the derived contour lines to avoid filtering across feature boundaries.
Figure 3: Overview of the different stages of the proposed stylization technique, which automatically transforms color images (left) to filtered variants with an oil paint look (right): (1) color palette extraction, (2) seed placement and propagation, (3) colorization and luminance quantization, (4) edge detection, (5) flow extraction, (6) paint texture computation, and (7) image smoothing. For a detailed description of the filtering stages the reader is referred to Section 3.
An eigenanalysis is then used to provide gradient and tangent
information for paint texture synthesis. The synthesis performs
in real-time, is based on bump mapping and Phong shading [ ],
and produces outputs that are blended with the quantized image
to compose the ﬁnal result. The ﬁltering stages are presented
in more detail in the following sections, each followed by a
discussion as well as methods for dynamic parametrization to
achieve content-based LoA eﬀects.
3.1. Dominant Color Extraction
To synthesize the way artists paint with a limited number of
base colors, dominant (and contemporary) colors need to be
derived from the input image. Common approaches use global
optimization schemes that cluster similar colors and determine
the base colors from the largest clusters, such as the median-cut algorithm [ ]. These approaches, however, may produce false colors or absorb contemporary colors.
Our approach computes a color palette P with colors derived from local image regions of an image I. A scoring system is applied to classify image regions according to their normalized entropy, penalizing the extraction from features with similar colors. The color extraction is performed incrementally, starting with an empty palette; extracted colors are inserted after each iteration. To find a new color for a given color palette P, we seek an image region R of minimal region score

S(R) = (wE · E(R) − wL · L(R) − wD · D(R)) / |R|, (1)

where |R| denotes the area of the region and is used to normalize the weighted terms: image entropy E(R), average lightness L(R), and average color distance D(R). The entropy of the image is computed from the probabilities pR(c) that a binned color c appears in the image region R:

E(R) = −Σc pR(c) · log pR(c). (2)

The entropy weight is used to favor the color extraction from regions with constant color tones that preferably belong to a single image feature. At this, the probability functions are discretized using color bins of constant size; for all examples in this paper the bin size is set to 256. To favor vivid colors, the region score is weighted according to the lightness L(R), using sRGB gamma-corrected intensities. Finally, to favor colors not yet present, the region score is weighted according to the minimum color distance D(R) to colors of the current palette P. This way, more priority is given to extracting palettes with diverging color tones. In Equation 1, the respective weights wE, wL, and wD balance the normalized entropy, average lightness, and average color distance.
To find a (rectangular) region with minimum score we proceed heuristically, using the following algorithm. The steps are performed iteratively in CIE-Lab color space, n times in total for a target palette size of n:
1. Color Difference Mask: To avoid recomputations in the subsequent region search, the minimal color distance to the colors of the current palette P is precomputed for each pixel, and the results are buffered in a color difference mask.
2. Region Search: Regions are computed for the horizontal and vertical scanlines of the input image. The color is extracted from the region with the better score (Figure 4):
(a) First Pass: The region score S(R) is computed for all regions along the horizontal/vertical scanlines, and the scanline with the optimal score is determined. For all examples in this paper we set τ=3.
Figure 4: Schematic overview of the region selection for the horizontal pass:
(a) horizontal and (b) vertical scanlines with optimal score are determined,
followed by (c) region growing to select Oh—next to Ovfrom the vertical pass.
(b) Second Pass: The first pass is repeated for the orthogonal scanlines, bounded by the region that is defined by the scanline of the first pass. Again, the scanline with the optimal score is selected for further processing.
(c) Area Computation: The region score is determined iteratively for growing horizontal and vertical boundaries around the determined pixel position until a minimum value is reached.
3. Color Extraction: Once a region with minimum score has been identified, a representative color is extracted from the region and inserted into the palette P. At this, the representative color is computed by finding a mode in the box-filtered histograms of the chrominance channels.
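The mode computation of step 3 can be sketched per chrominance channel as follows (a minimal sketch; the function name `representative_color`, the bin count, and the box width are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def representative_color(values, bins=32, box=3):
    """Mode of a box-filtered histogram of one chrominance channel,
    as in step 3 of the color extraction (bin and box sizes are
    illustrative, not the paper's values)."""
    hist, edges = np.histogram(values, bins=bins)
    kernel = np.ones(box) / box
    smoothed = np.convolve(hist, kernel, mode='same')  # box filtering
    k = int(np.argmax(smoothed))                       # mode of the histogram
    return 0.5 * (edges[k] + edges[k + 1])             # center of the modal bin
```

The box filtering makes the mode robust against sparsely populated bins, so a few outlier pixels do not shift the extracted color.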
An example of the iterative computation of dominant colors,
color diﬀerence masks, and optimal region selection is given
in Figure 5. In a final step, the colors in P are sorted by their weighted count in the input image, thresholding the color difference ΔE to previously sorted colors.
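The entropy term of the region score (Eq. 2) can be sketched with binned colors as follows (a minimal sketch; the channel scaling to [0, 256), the bin size, and the function name are assumptions, not the paper's implementation):

```python
import numpy as np

def region_entropy(region_lab, bin_size=32):
    """Entropy of binned colors in a region (cf. Eq. 2): low values
    indicate constant color tones, which the region score favors.

    region_lab: (N, 3) array of color values, assumed scaled to
    [0, 256); bin_size is an illustrative choice."""
    q = (region_lab // bin_size).astype(int)        # quantize channels into bins
    nb = 256 // bin_size
    idx = (q[:, 0] * nb + q[:, 1]) * nb + q[:, 2]   # joint bin index per pixel
    _, counts = np.unique(idx, return_counts=True)
    p = counts / counts.sum()                       # p_R(c)
    return float(-(p * np.log2(p)).sum())
```

A region of constant color yields zero entropy and is thus a preferred candidate for color extraction, while textured regions are penalized.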
Discussion. We empirically determined ΔE = 7 as a good default value for thresholding, using the CIE76 formula to compute the difference between two colors, where ΔE ≈ 2.3 corresponds to a just noticeable color difference [ ]. Our algorithm is compared to the median-cut algorithm in Figure 6.
Notice how colors of single features are accurately represented
and not merged with colors of other features (e.g., the butterﬂy
in Figure 6a, the eagle’s beak in Figure 6c, the guitar in Fig-
ure 6e, red tones in Figure 6f). Further, we noticed that our approach is more resistant to noise, as can be observed from the green background in Figure 6c, where a clustering approach may
also derive false colors. Figure 7a demonstrates this stability
by an image that has been artiﬁcially corrupted with Gaussian
and impulse noise, where only small changes for the derived
contemporary colors are noticeable when using our approach.
In addition, Figure 7b demonstrates the stability for diﬀerent
image resolutions, where a down-sampled image still leads to
plausible palettes with stable estimates for the pre-dominant
colors. Finally, we observed that the number of color extractions significantly affects whether image features are accurately represented. To this end, one could use a metric to control the color coverage, e.g., to derive colors until the maximum and/or mean of the color difference mask falls below a given threshold.
This way, more colors could be derived for colorful images with-
out the need for content-dependent parameter adjustments. Here,
we also believe that the accuracy of our algorithm can be further
improved when using a generalized region growing technique to
derive colors from feature-aligned (non-rectangular) regions.
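The CIE76 difference underlying these thresholds is the plain Euclidean distance in CIE-Lab, which a short sketch makes explicit (function name and tuple interface are illustrative):

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two
    CIE-Lab colors, as used for the thresholds in this section."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))
```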
3.2. Image Quantization using Colorization
Our goal is to quantize the input image
using the extracted
dominant colors. We formulate this task as an optimization
problem, performing a (re-)colorization [
]: Given the intensity
of the input image and a number of colored seed pixels, the
colors should be propagated to the remaining pixels such that
pixels with similar intensities have similar colors.
Figure 5: Color diﬀerence masks computed for an image with 8 colors (including
white). The overlaid rectangles indicate the respective regions with minimum
score used for the color extraction. The region scores for the ﬁfth color guess
are shown at the bottom.
Figure 6: Comparison of the median-cut algorithm to our approach for dominant
color extraction. The ten dominant colors are sorted from left to right. With our
approach, colors of single features are more accurately represented.
Figure 7: Stability tests of our color extraction: a) applied to an image corrupted
with 2% Gaussian and 5% impulse noise, b) applied to a down-sampled image
of the painting The Cardplayers by P. Cézanne (1892). In all cases, plausible palettes with stable pre-dominant colors are derived from the images.
Figure 8: Image quantization using the algorithm described in Section 3.2 with a palette of 26 colors and α = 5 for automatic seed placement. The optimization problem was iteratively solved with the "generalized minimum residual" method [ ].
Figure 9: Comparison of our quantization method with the median-cut algorithm.
Notice how our approach preserves feature contrasts (e.g., the beard) at a better
scale. The smoothed output is based on a flow-based Gaussian filter [ ].
1st Pass: Colorization. The optimization is performed with the constraint that seed pixels are set to the respective color component of the dominant color with minimal distance, if and only if the minimal color distance falls below a threshold α > 0. This replaces the interactive placement of "scribbles" described in [ ]. For a given color channel, the recolorized channel C′ is then computed via the objective function

J(C) = Σr ( C(r) − Σs∈N(r) wrs C(s) )²  subject to C(ri) = c(ri) for i ∈ Seeds, (3)

where wrs denotes a weighting function based on the squared difference between the luminance values of the pixels r and s, with N(r) being the 8-connected neighborhood of r. The objective function yields a large sparse system of linear equations that is solved for both chrominance channels of the input image in CIE-Lab color space using the "generalized minimum residual" method [ ] (Figure 8).
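The structure of this optimization can be illustrated with a toy 1-D sketch (the helper `colorize_1d` and the Gaussian affinity are illustrative assumptions; a dense solver stands in for the sparse GMRES solve used above):

```python
import numpy as np

def colorize_1d(Y, seeds, sigma=0.1):
    """Toy 1-D sketch of the colorization optimization: each non-seed
    pixel is constrained to the luminance-affinity-weighted mean of its
    neighbors; seed rows pin known chrominance values."""
    n = len(Y)
    A = np.eye(n)
    b = np.zeros(n)
    for r in range(n):
        if r in seeds:                       # seed constraint: C(r) = c(r)
            b[r] = seeds[r]
            continue
        nbrs = [s for s in (r - 1, r + 1) if 0 <= s < n]
        w = np.array([np.exp(-(Y[r] - Y[s]) ** 2 / (2 * sigma ** 2))
                      for s in nbrs])
        w /= w.sum()                         # normalized affinities w_rs
        for s, ws in zip(nbrs, w):
            A[r, s] = -ws                    # row encodes C(r) - sum w_rs C(s) = 0
    return np.linalg.solve(A, b)
```

With luminances [0, 0, 0, 1, 1] and seeds at both ends, the chrominance propagates up to the luminance step and not across it, mirroring the premise that similar intensities receive similar colors.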
Figure 10: Example of adjusting the seed placement threshold α (here: 4, 7, and 11) and the number of colors of the derived palette (here: 8, 12, 16, and 20) to adjust the LoA. The results include a post-processing step of flow-based Gaussian smoothing.
2nd Pass: Luminance Quantization. We introduce a second
pass for luminance quantization, where the objective function
is used to recompute the luminance channel. To this end, we
reformulate the weighting function to compute the squared difference between the two hue values at neighboring pixels of the recolorized image. The hue value is obtained by converting the recolorized image to CIE-LCh color space, the cylindrical version of the CIE-Lab color space.
channel is then combined with the color channels of the ﬁrst
pass to yield a recolorized image with potential color transitions.
3rd Pass: Flow-based Gaussian Smoothing. The quantized image is post-processed by a Gaussian filter with standard deviation σq that is adapted to the local structure of image features, derived from the smoothed structure tensor [ ]. This creates smoothed
outputs at curved boundaries and, in general, a more painterly
look (Figure 9).
Discussion. Examples using our method are presented in Fig-
ure 8, Figure 9 and Figure 10. Contrary to schemes based on
global optimization methods, we observed that our approach pro-
duces outputs with better feature contrasts but higher variances
Figure 11: Using image masks to weight the color extraction according to
important or salient image features. Notice how the derived color palette for
the modiﬁed weighting (bottom) represents the tree more accurately than the
standard procedure (top), while the environment becomes more abstract. The
salient region detection is based on the algorithm of Cheng et al. [ ].
compared to the original images (e.g., Figure 9). This behavior can be
explained by deriving colors from local image regions—e.g., in-
stead of prioritizing a global variance minimization—and the
usage of color diﬀerences as weighting factors in the scoring
system introduced in Section 3.1. The threshold for automatic
seed placement should be set greater or equal to the threshold
used for the color extraction to use all colors of the derived
palette. Here, we empirically determined α = 7 as a good
default value. The threshold may also be set lower to initially
place fewer color seeds and thus induce more color blendings, or
may be set higher to result in more details and crisp boundaries
between image features. In addition, the thresholding may be
combined with reducing the number of colors in the derived
palette to further adjust the LoA. Figure 10 illustrates the mutual
impact of these two parametrization possibilities.
Adaptive Image Quantization. An advantage of our quantiza-
tion scheme becomes apparent when the LoA should be adjusted
according to image contents, e.g., based on feature semantics,
image saliency, or a foreground/background separation. Ac-
cording to Figure 10, we see two possibilities for local-based
adjustments: on the one hand, user-deﬁned weights can be in-
jected into Equation 1 to adapt the color extraction, and on the
other hand, the seed threshold α can be adaptively computed.
We experimented with both adjustments by using importance
masks—explicitly deﬁned prior to processing—to guide the
color extraction to features of interest. Figure 11 shows a result
where more contemporary colors are derived from a salient image feature to depict it with more details, using the algorithm of Cheng et al. [ ] for saliency estimation, without changing the
Figure 12: Using adaptive thresholds for the placement of color seeds to control the level of abstraction for image features. Top: static threshold α = 8; bottom: adaptive thresholds α ∈ [8, 11], selected according to an importance mask to increase the LoA for features in the background.
number of color extractions. To this end, the score computation defined in Equation 1 is extended by an additional weight that refers to the importance of pixels in region R, defined by a normalized input mask. In addition, Figure 12 shows a result where seed thresholds α ∈ [8, 11] are linearly interpolated according to an importance mask to increase the level of abstraction in the background regions of an image.
3.3. Paint Texture
Oil painting is a time-consuming process that often comprises
multiple layers of paint and drying phases. During ﬁnishing,
thin protective layers (varnish) may be coated onto the paint
for protection against dirt and dust, and to even out its ﬁnal
appearance. This yields two characteristics of oil paint textures:
(1) reliefs of varying thickness according to the used brushes
and applied number of layers with (2) a matt or glossy tint. The
ﬁrst eﬀect may be perceived as subtle shading that is caused
by external, oﬀ-image illumination, and the second eﬀect as
specular highlighting. To simulate both effects, first, a flow field is synthesized by using the local orientation information obtained from the smoothed structure tensor [ ]; then, paint textures are synthesized by using the flow field for shading.
Flow Field Computation. Local orientation information is derived from an eigenanalysis of the smoothed structure tensor [ ], a method that provides stable estimates and can be computed in real-time [ ]. Line integral convolution [ ] is then performed
along the stream lines deﬁned by the minor eigenvector ﬁeld of
the smoothed structure tensor. The obtained ﬂow ﬁeld, however,
may contain singularities and blurred feature boundaries leading
to visual artifacts in the paint textures. To this end, we make use
of the following enhancements (Figure 13):
Relaxation: The quantized image may provide large areas of solid color tones where gradient information is unreliable or undefined. To this end, structure tensors with low gradient magnitudes are replaced by inpainted information via relaxation (Figure 13, middle).
Adaptive Smoothing: The structure tensor is adaptively
smoothed to avoid ﬁltering over feature boundaries and
to obtain more accurate results (Figure 13 bottom). Here,
the main idea is to use the sign of the ﬂow-based Lapla-
cian of Gaussian (FLoG)—derived from the quantized
color image—for thresholding: the Gaussian smoothing
with standard deviation σs is adapted to exclude pixel
values from weight averaging when the diﬀerence in the
signed FLoG to the origin pixel reaches a given threshold,
e.g., when the sign ﬂips while crossing feature boundaries.
For a detailed description on the relaxation and FLoG computa-
tion we refer to the work by Kyprianidis and Kang [ ].
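The tensor construction and eigenanalysis can be sketched as follows (a minimal sketch: a 3×3 box filter stands in for the Gaussian smoothing, and the relaxation and FLoG-based gating described above are omitted):

```python
import numpy as np

def smoothed_structure_tensor(img):
    """Structure tensor components (E, F, G) of a grayscale image; the
    box average stands in for the (adaptive) Gaussian smoothing."""
    gy, gx = np.gradient(img.astype(float))
    def box(c):
        p = np.pad(c, 1, mode='edge')
        h, w = c.shape
        return sum(p[i:i + h, j:j + w]
                   for i in range(3) for j in range(3)) / 9.0
    return box(gx * gx), box(gx * gy), box(gy * gy)

def minor_eigenvector(E, F, G):
    """Unit eigenvector of [[E, F], [F, G]] for the smaller eigenvalue,
    i.e., the tangent direction along which streamlines are traced."""
    lam = 0.5 * (E + G - np.sqrt((E - G) ** 2 + 4.0 * F ** 2))
    v = np.array([F, lam - E], dtype=float)
    norm = np.linalg.norm(v)
    if norm == 0.0:              # isotropic tensor: no preferred direction
        return np.zeros(2)
    return v / norm
```

For a horizontal luminance ramp the minor eigenvector is vertical, i.e., tangent to the iso-intensity lines, which is the direction used for line integral convolution.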
Paint Texture Synthesis. Procedural monochromatic noise is
blurred in gradient ﬂow direction deﬁned by the minor eigen-
vector ﬁeld of the adaptively smoothed structure tensor. This
results in flow images similar to those produced by line integral convolution [ ], but using a flow-based Gaussian filter kernel with standard deviation σb to elongate the brush reliefs. The blurred noise is then interpreted as a fine-grained height field and illuminated using a directional light source.
Figure 13: Enhancements for the smoothed structure tensor to derive flow fields. Middle: visualized structure tensor (black regions refer to singularities) and relaxation to avoid singularities; bottom: visualized sign of the FLoG, which is thresholded for adaptive smoothing of the structure tensor.
Figure 14: Brush and varnish paint textures computed for the flow field in Figure 13 from white and wood noise, using parameters σb = 8.0, kscale = 10.0, kspecular = 3.0, kshininess = 8.0.
Figure 15: Varying the number of iterations ne of orientation-aligned bilateral filtering of the quantized image, used for FLoG filtering. Top: visualized sign of the FLoG filter; bottom: brush texture with and without adaptive smoothing.
Figure 16: Different noise frequencies and amplitudes to adjust the virtual brush. Constant parameters: σb=20.0, kscale=20.0, kspecular=0.5, kshininess=20.0.
Here, principles of Phong shading are used to render a brush
texture TB and a varnish texture TV with pixels p:

TV(p) = kspecular · (N(p) · L)^kshininess.   (8)

The parameters kspecular and kshininess are used to adjust the varnish texture.
An additional factor kscale scales the height field to control the
relief strength (Figure 14). The standard deviation σb
for noise filtering is a key parameter to control the LoA of the
paint textures, where we observed that our enhancement for
adaptive smoothing of the structure tensor is crucial to preserve
salient feature curves. Figure 15 shows how to further adjust
the LoA when pre-processing the quantized image, used as
input for FLoG filtering, by an orientation-aligned bilateral
filter with a varying number of iterations ne. This approach
has also proven to be effective in cartoon-like filtering
Figure 17: Depth-dependent synthesis (color and depth input rendered with Unreal Engine 4) of paint textures to vary the LoA in the background region of a rendered scene; background (right): KQ=(NA, NA, 10.0, 20.0) and KT=(2, 20.0, 10.0, 0.8, 20.0). The visualized brush textures do not include additional filtering by saturation.
Table 1: Overview of parameters with value ranges used within this paper to
adjust the color image quantization and paint texture synthesis. Left-end values
typically refer to a low LoA, whereas right-end values refer to a high LoA.

Parameter    Value Range   Effect
n            36−8          Size of color palette
α            1.0−4.0       Color seed threshold
σs           2.0−20.0      Structure tensor smoothing (std. dev.)
σq           0.0−20.0      Quantized image smoothing (std. dev.)
ne           0−10          Iterations of bilateral filtering for FLoG
σb           2.0−20.0      Noise smoothing (std. dev.)
kscale       20.0−0.0      Relief strength
kspecular    5.0−0.3       Specularity for varnish
kshininess   30.0−10.0     Shininess for varnish
to adjust contour lines. We also experimented with different
noise implementations and observed that high-frequency noise
simulates brush characteristics quite naturally (Figure 14), but
noise may also be based on lower frequencies and amplitudes to
adjust the brush size (Figure 16).
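Lower-frequency noise can, for instance, be obtained by bilinearly upsampling a coarser random grid. The following sketch shows one possible implementation; the frequency parametrization is illustrative, not the paper's exact formulation:

```python
import numpy as np

def brush_noise(h, w, frequency=1.0, amplitude=1.0, seed=0):
    """Monochromatic noise for the virtual brush: frequency=1.0 yields
    per-pixel white noise; lower frequencies bilinearly upsample a coarser
    random grid (smooth variation) to simulate broader brushes."""
    rng = np.random.default_rng(seed)
    ch = max(1, int(h * frequency))
    cw = max(1, int(w * frequency))
    coarse = rng.random((ch, cw))
    # bilinear upsampling of the coarse grid to the target resolution
    yi = np.linspace(0, ch - 1, h)
    xi = np.linspace(0, cw - 1, w)
    y0 = np.floor(yi).astype(int); x0 = np.floor(xi).astype(int)
    y1 = np.minimum(y0 + 1, ch - 1); x1 = np.minimum(x0 + 1, cw - 1)
    fy = (yi - y0)[:, None]; fx = (xi - x0)[None, :]
    top = coarse[np.ix_(y0, x0)] * (1 - fx) + coarse[np.ix_(y0, x1)] * fx
    bot = coarse[np.ix_(y1, x0)] * (1 - fx) + coarse[np.ix_(y1, x1)] * fx
    return amplitude * (top * (1 - fy) + bot * fy)
```

Scaling the amplitude then controls relief contrast independently of the apparent brush size.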
Image Composition. Finally, the brush texture is multiplied with
the smoothed color image, and the intermediate result is blended
with the varnish texture using linear dodge as blend mode. Op-
tionally, contour lines are enhanced by using a flow-based DoG
filter, and a canvas texture is blended with the output to
further enhance the sensation of depth. For the latter, the paint
textures may also be filtered in image regions of low saturation
to imitate layering at certain feature boundaries (e.g., Figure 2).
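The composition step can be sketched as follows, assuming single-channel brush and varnish textures and a normalized RGB color image (function name illustrative):

```python
import numpy as np

def compose(color, brush, varnish):
    """Final composition: the brush texture modulates the smoothed color
    image (multiply blend), then the varnish texture is added on top using
    linear dodge (additive blend), clamped to [0, 1]."""
    base = color * brush[..., None]                       # multiply blend
    return np.clip(base + varnish[..., None], 0.0, 1.0)   # linear dodge
```

Linear dodge is simply a clamped addition, which is why the varnish highlights brighten the paint without darkening mid-tones.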
Adaptive Paint Texture Synthesis. The paint texture synthesis is
local and fast (Section 5), and thus may also be adaptively com-
puted according to user interaction or image contents. Table 1
summarizes the parameters used to control the LoA on a per-
pixel basis, where KQ = (n, α, σs, σq) adjusts the image quantization
and abstraction, and KT = (ne, σb, kscale, kspecular, kshininess)
adjusts the brush and varnish textures. Again, we experimented
with image masks to parametrize the paint textures. Figure 17
Figure 18: Saliency-based filtering output of a portrait with detailed paint textures in image regions with facial skin. For the computation of the saliency mask, the algorithm of Cheng et al. was used.
demonstrates a depth-dependent linear interpolation between
two parameter sets to depict fewer or more details in the back-
ground regions of an image. In addition, Figure 18 shows an
approach where image saliency is used to direct the granularity
of the brush and varnish textures. Additional effects may be easy
to implement, e.g., based on light-field data to produce stylized
depth-of-field effects or on feature semantics with qualitative
parameter sets for content-aware filtering.
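A per-pixel interpolation between two parameter sets, as used for the depth- and saliency-based examples, can be sketched as follows (the dictionary-based representation is illustrative):

```python
import numpy as np

def blend_parameter_maps(params_lo, params_hi, mask):
    """Per-pixel linear interpolation between two parameter sets (e.g.,
    K_T for fore- and background), driven by a normalized depth or
    saliency mask in [0, 1]; yields one parameter map per entry."""
    return {name: (1.0 - mask) * lo + mask * params_hi[name]
            for name, lo in params_lo.items()}
```

The resulting maps feed the texture synthesis directly, so the LoA varies smoothly across the mask's gradient rather than switching at a hard boundary.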
4. Interactive Painting
When tweaking the output of an image filter, users typically
strive for a global parametrization trade-off that corresponds
to multiple visual requirements in different image regions. Re-
cently, consumer photo-editing products started to extend the
concept of non-destructive per-pixel parametrization from al-
pha masking to a small set of image computations (adjustments)
Figure 19: Failure case of a filtering result (automated parametrization) and its manual correction (locally painted parametrization) using our painting interface for per-pixel parametrization.
by means of parameter masks. We extended this approach by
exposing ﬁlter parameters as well as intermediate ﬁltering re-
sults as parameter maps that can be manipulated via painting
metaphors. This enables (1) artistic control over the stylization
process, and (2) the modification of intermediate filter outputs
with inadequate local parametrizations (Figure 19).
4.1. Local Parameter Painting
Our method extends the concept of a specialized, locally com-
puted parametrization to a generalized configuration by
brush-painting within the parameter spaces of local image filters. Here,
parameters are encoded as parameter maps and adjusted via
virtual brush models according to well-defined action sequences.
We define the following system:

A parameter map is a typed map that substitutes either a
uniform filter parameter or an intermediate computational
result. These maps are usually aligned with the input
image, but might also cover sub-regions and have different
resolutions.
An action defines locally typed computations on param-
eter maps, e.g., replace, add, scale, or blur. It is filter-
independent and can be assigned to equally-typed param-
eter maps.
A brush shape specifies rules and parameters for the dy-
namic creation of two-dimensional weight masks. Here,
atomic shapes, either by functional definition or shapes
from vector graphics, are (randomly) placed and encoded
as distance maps while satisfying specific constraints
(e.g., softness, jittering, scattering).
A brush stroke is derived from a set of sequential user
inputs, e.g., attributed paths with interpolated position,
orientation, and pressure values. Here, a temporal input
mask is created by computing dynamic brush shapes along
the path, enabling per-pixel weighting of actions.
A brush maps a sequence of actions to parameter maps.
While drawing, these actions are applied and weighted by
the temporal input mask (brush stroke).
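A minimal sketch of this system, with a radial brush shape and three typed actions, might look as follows (all names and the stamp falloff are illustrative simplifications):

```python
import numpy as np

def gaussian_stamp(radius, softness=0.5):
    """Dynamic brush shape: a radial falloff weight mask in [0, 1]."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    d = np.sqrt(x * x + y * y) / max(radius, 1)
    return np.clip(1.0 - d, 0.0, 1.0) ** (1.0 / max(softness, 1e-3))

def apply_action(param_map, cx, cy, stamp, action, value):
    """Apply a typed action ('replace', 'add', 'scale') to a parameter map,
    weighted per pixel by the brush stamp (one stroke sample)."""
    r = stamp.shape[0] // 2
    h, w = param_map.shape
    y0, y1 = max(cy - r, 0), min(cy + r + 1, h)
    x0, x1 = max(cx - r, 0), min(cx + r + 1, w)
    s = stamp[y0 - (cy - r):y1 - (cy - r), x0 - (cx - r):x1 - (cx - r)]
    region = param_map[y0:y1, x0:x1]
    if action == "replace":
        target = np.full_like(region, value)
    elif action == "add":
        target = region + value
    elif action == "scale":
        target = region * value
    else:
        raise ValueError(action)
    param_map[y0:y1, x0:x1] = (1.0 - s) * region + s * target
```

A brush stroke would call apply_action for every interpolated path sample, accumulating the per-pixel weights of the temporal input mask.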
This system provides a generic interface and is used to locally
adjust ﬁlter parameters and intermediate ﬁlter results through
Figure 20: Schematic overview of our per-pixel parametrization interface: (1) select brush shape and set up actions, (2) draw path, (3) derive brush stroke, (4) apply actions to parameter maps, (5) (re-)filter the image. Brush strokes are derived from drawn paths and applied to selected parameter maps; the maps' modified local parameters are then used to (re-)filter the image.
Figure 21: Examples of manual corrections made for parameter layers (painted water ripples, painted noisy water) to adjust the paint textures: smoothing of the flow field, flow direction, and shininess.
painting (schematized in Figure 20). The brush implementation
is hardware-accelerated and is used, in particular, to locally
adjust the parameters defined by KQ and KT. Examples are
given in Figure 21 and the accompanying video.
The shape and actions of a brush can be parametrized by constant
values or dynamically mapped to user inputs, e.g., the pressure
and orientation of a digital pen, or gestures for touch-enabled
devices. Technically, our implementation enables users to create
and customize brushes at runtime, which, however, demands a detailed
understanding of the underlying filtering stages. To this end, we
provide a number of predefined brushes:
A relief brush increases or decreases the relief strength by
applying a multiply action to the height scale kscale.
A varnish brush allows adjusting the specularity and shininess
of the varnish texture. It applies two multiply actions
Figure 22: Image warping using grid-based resampling, parameterized by a
virtual brush tool. The warping is performed prior to paint texture synthesis.
to the parameter maps of kshininess and kspecular.
Two LoA brushes: one to adjust the structure tensor smooth-
ing σs, and another to perform bilateral filtering and apply
unsharp masking effects.
A flow brush to adjust the tangential information of the
structure tensor, which is especially helpful to exaggerate
or fix inadequate stroke directions.
A colorization brush that allows fading between the color
output and a grayscale version.
An eraser brush that reverts to initial painting states for
all parameter maps or those of speciﬁc brushes.
Additional brushes, e.g., for painting bristle structures by scal-
ing the noise frequency or adjusting additional filter kernels,
are possible but not yet implemented. To
simplify the mapping of actions to parameter maps, a single tex-
ture for every parameter or intermediate result is used. For large
images, the application of actions is restricted to sub-regions
of the parameter maps, according to the bounds of a brush
stroke, to maintain a responsive painting system.
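The restriction of actions to the bounds of a brush stroke can be sketched as a dirty-rectangle update (names illustrative; the actual system additionally manages CPU-GPU transfers):

```python
import numpy as np

def stroke_bounds(path, radius, h, w):
    """Bounding box of a brush stroke (path points plus brush radius),
    clamped to the image; only this sub-region is re-filtered to keep
    painting responsive on large images."""
    xs = [p[0] for p in path]; ys = [p[1] for p in path]
    x0 = max(min(xs) - radius, 0); y0 = max(min(ys) - radius, 0)
    x1 = min(max(xs) + radius + 1, w); y1 = min(max(ys) + radius + 1, h)
    return x0, y0, x1, y1

def refilter_region(image, param_map, bounds, filt):
    """Re-apply a filter only inside the dirty rectangle."""
    x0, y0, x1, y1 = bounds
    image[y0:y1, x0:x1] = filt(image[y0:y1, x0:x1], param_map[y0:y1, x0:x1])
    return image
```

In practice, filters with spatial support need the rectangle padded by their kernel radius so the sub-region result matches a full-image pass.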
4.3. Image Warping
Artists often exaggerate ﬁgures and shapes to amplify the mood
of their paintings. Prominent examples are art works from the
Expressionism era—such as Edvard Munch’s The Scream (1893–
1910). Image ﬁltering, however, is generally less suited for
intentional shape abstraction. Here, one approach is to comple-
ment our technique by an interactive brush tool to perform local
image warping. Starting with a virtual, regular grid, the user is
able to shift local grid points by brush-based painting to create
eﬀects of local compression and distortion (see Figure 22 and the
accompanying video). A similar approach was used before with
facial constraints to create caricatures from photographs.
The adapted grid is then used for texture parametrization and
to resample the quantized image by bilinear interpolation. Fi-
nally, the warped image serves as input for the paint texture
Figure 23: Touchscreen and user interface for the proposed painting system.
synthesis, which may also be performed during warping for
immediate visual feedback. Alternatively, image segmentation
and mass-spring systems may be used to create outputs with
more deliberate shape abstractions, as demonstrated by Li and Mould.
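The grid-based resampling can be sketched as follows for a single-channel image; `grid` stores per-pixel source coordinates, so displacing its entries produces local compression and distortion (names illustrative):

```python
import numpy as np

def warp_image(image, grid):
    """Resample an image through a (displaced) regular grid: grid holds
    per-pixel source coordinates (y, x); bilinear interpolation, clamped
    at the borders. Single-channel for brevity."""
    h, w = image.shape[:2]
    gy = np.clip(grid[..., 0], 0, h - 1)
    gx = np.clip(grid[..., 1], 0, w - 1)
    y0 = np.floor(gy).astype(int); x0 = np.floor(gx).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    fy = gy - y0; fx = gx - x0
    top = image[y0, x0] * (1 - fx) + image[y0, x1] * fx
    bot = image[y1, x0] * (1 - fx) + image[y1, x1] * fx
    return top * (1 - fy) + bot * fy
```

Brush-based shifting of the grid points then amounts to adding a painted displacement field to an identity grid before resampling.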
We have implemented the dominant color extraction using C++,
the colorization and filtering stages on the GPU with CUDA, and
the painting interface with Qt. All images were processed on
a system with a 3.06 GHz CPU and an NVIDIA GTX 760 GPU
with 4 GB of VRAM. The painting system was tested with an
85-inch multitouch monitor with Ultra-HD resolution (Figure 23).
600 pixel image is processed in 50 seconds for a palette
with 25 colors. Here, the color extraction is currently the lim-
iting stage, followed by the colorization, and the paint texture
synthesis that performs in real-time for images with HD reso-
lution. As demonstrated in Figure 7, the color extraction also
provides stable estimates when processing downscaled images—
up to the second pyramid level to speed up the processing by a
factor of two. To enable interactive performance during painting,
the stages after the image quantization are optimized to process
only those image regions that require a recomputation. Here,
the processing performs three steps per render cycle: (1) local
regions defined by the virtual brush are buffered, extracted
from the quantized image and parameter layers (stored in main
memory), and transferred to GPU memory; (2) the sub-images
are processed according to the brush mode and the results are
blitted to the framebuffer for immediate visual feedback; (3) the
filtered sub-images are transferred back to main memory. Using
pitch-linear memory on the described test system, this procedure
enables interactive adjustments of images of up to 100 MP.
The proposed color quantization provides control over the
level of abstraction. A comparison to a non-quantized version
of an image is shown in Figure 25 and demonstrates how the
quantization is able to filter detail information and produce large
areas of solid color tones. In particular, the latter effect yields
varying scales of the paint texture, i.e., it can simulate thick brushes.
Figure 24 and Figure 26 show comparisons of the proposed tech-
nique to previous works. In contrast to stroke-based rendering,
Figure 24: Comparison of the proposed method with stroke-based rendering techniques (left to right, top to bottom): input, Hertzmann (1998), Hays and Essa (2004), Zhao and Zhu (2010), Zhao and Zhu (2011), and the proposed method. Each method produces visually distinct outputs of varying expressiveness, texture, and feature alignment. For instance, the method of Hertzmann aligns brush strokes in a color-matching scheme, but tends to overdraw features in regions of low contrast. Hays and Essa use real brush stroke textures that are feature-aligned with globally interpolated orientations, yet their approach lacks complementary colors in neighboring strokes. This effect is simulated in the method of Zhao and Zhu (2011) to explicitly emphasize feature contrasts. Zhao and Zhu (2010) simulate the perceptual ambiguities known from abstract art, where shape-simplifying abstraction plays a major role. Finally, the proposed method produces soft color blendings with no visible borders between brush strokes, yet without the capability for explicit shape abstraction.
Figure 25: Two versions of a stylized image: (left) without prior quantization,
(right) with prior quantization. The quantized version produces a more abstract
look with respect to the color range and scale of the paint texture. Parameters
used: KQ=(40, 8.0, 3.0, 14.0) and KT=(0, 14.0, 5.0, 1.0, 16.0).
our approach produces outputs with softer color blendings
(Figure 24), but it is also able to simulate reliefs of varying thick-
ness and strong abstraction (e.g., the background and dress in
Figure 26). In addition, the local parametrization capabilities
are essential to provide artistic control over the depiction of
single image features (e.g., the face in Figure 26), i.e., to provide
an adaptive LoA that is similar to hybrid stylization techniques
(e.g., the work of Zeng et al., Figure 26). More images
processed by our technique are shown in Figure 28, where the
LoA is adapted to the image contents to have more distinct
colors in colorful images (e.g., Venice) and wide ﬁlter kernels
for Gaussian smoothing to obtain soft color blendings (e.g., the
landscape). Moreover, the synthesized noise plays a major role
for abstraction, e.g., wood noise can be used to simulate thick
paint, and white noise is eﬀective to simulate detail brushes.
An inherent limitation of our technique is that it does not reach
the quality of shape abstraction provided by top-down stroke-
based rendering techniques (e.g., the method of Zhao and Zhu).
The proposed painting system with complementary warp-
ing gives users some local control to adjust the LoA. Further,
image features that should be represented with soft color blend-
ings may be filtered with (unwanted) hard transitions, depending
on the seed color placement. This may require manual
effort to locally adjust the thresholds for the color quantization.
Finally, the performance of the colorization currently does not
enable interactive color refinements. One approach to allevi-
ate this issue is to visualize intermediate results of the iterative
solver for visual feedback, or to accelerate the colorization using
fast intrinsic distance computations.
5.2. Future Work
We see multiple directions for future work. A major strength of
our technique is that the stylization uses global color palettes
that may be easily reﬁned interactively. Here, we plan to imple-
ment the transfer of color moods to adapt a derived palette to
a target palette or image. Second, the color extraction may be
improved to support feature-aligned (non-rectangular) regions
for a more robust extraction, e.g., via a generalized region grow-
ing. Third, the adaptive ﬁltering approaches could be extended
to support image semantics and stylize certain features with
different parameter sets, e.g., by image parsing. Fourth, the
extension of our technique to video is of particular interest. Here,
the colorization could be extended by a temporal constraint as
proposed by Levin et al., together with an optical flow to
Figure 26: Comparison of the proposed method with the image filtering technique of Winnemöller et al. and the stroke-based rendering technique of Zeng et al. Parameters: KQ=(38, 7.0, 16.0, 16.0), base: KT=(0, 16.0, 5.0, 1.0, 16.0) and skin: KT=(0, 16.0, 1.0, 1.0, 10.0) with no color quantization in facial regions.
Figure 27: Comparison of image smoothing via L0 gradient minimization (Xu et al., 2011)
with our quantization technique, combined with the output of a DoG filter, for
JPEG artifact removal.
stabilize the paint texture synthesis. Finally, we believe that the
palette-based quantization and colorization are quite generic in
their application and could be applied to further problems. For
instance, we experimented with using our methods for JPEG artifact
removal of clip-arts (Figure 27), where our approach produces
accurate results and may also be used to easily redefine single colors.
We have presented an approach for transforming images into
filtered variants with an oil paint look. The proposed color
extraction and colorization methods make it possible to quantize color
images according to their dominant color palette. Results show
that our quantization scheme is able to represent selected image
features accurately and provides homogeneous outputs in the
color domain. The flow-based image abstraction and the proposed
paint texture synthesis perform in real-time to enable interactive
refinements, and facilitate per-pixel parametrizations to direct
the level of abstraction to user-defined or salient image regions.
Several results demonstrate the manifold applications of our ap-
proach to different genres of photography and to simulating paint
with soft to moderate color blendings.
We would like to thank the anonymous reviewers for their valu-
able comments and Holger Winnemöller for his support on the
flowpaint research project. This work was partly funded by
the Federal Ministry of Education and Research (BMBF), Ger-
many, within the InnoProfile Transfer research group "4DnD-
Vis", and was partly supported by the ERC
through grant ERC-2010-StG 259550 (XSHAPE).
Kyprianidis, J.E., Collomosse, J., Wang, T., Isenberg, T.. State of the
’Art’: A Taxonomy of Artistic Stylization Techniques for Images and
Video. IEEE Trans Vis Comput Graphics 2013;19(5):866–885. doi:
 Scott, M.. Oil Painter’s Bible. Chartwell Books; 2005.
Haeberli, P.. Paint by Numbers: Abstract Image Representations.
SIGGRAPH Comput Graph 1990;24(4):207–214. doi:
Hertzmann, A.. A survey of stroke-based rendering. IEEE Computer
Graphics and Applications 2003;(4):70–81.
 Earls, I.. Renaissance art: a topical dictionary. ABC-CLIO; 1987.
Figure 28: Image stylization results produced with the proposed oil paint ﬁltering technique.
Weickert, J.. Anisotropic diffusion in image processing; vol. 1. Teubner Stuttgart; 1998.
Kang, H., Lee, S.. Shape-simplifying Image Abstraction. In: Computer
Graphics Forum; vol. 27. 2008, p. 1773–1780.
Hertzmann, A.. Painterly Rendering with Curved Brush Strokes of
Multiple Sizes. In: Proc. ACM SIGGRAPH. ACM; 1998, p. 453–460.
Hays, J., Essa, I.. Image and Video Based Painterly Animation. In: Proc.
NPAR. ACM; 2004, p. 113–120. doi:10.1145/987657.987676.
Zeng, K., Zhao, M., Xiong, C., Zhu, S.C.. From Image Parsing to
Painterly Rendering. ACM Trans Graph 2009;29(1):2:1–2:11. doi:
Baxter, W., Wendt, J., Lin, M.C.. IMPaSTo: A Realistic, Interactive
Model for Paint. In: Proc. NPAR. ACM; 2004, p. 45–148. doi:
Lu, J., Barnes, C., DiVerdi, S., Finkelstein, A.. RealBrush: Painting with
Examples of Physical Media. ACM Trans Graph 2013;32(4):117:1–117:12.
Levin, A., Lischinski, D., Weiss, Y.. Colorization Using Optimiza-
tion. ACM Trans Graph 2004;23(3):689–694. doi:
Brox, T., Boomgaard, R., Lauze, F., Weijer, J., Weickert, J., Mrázek, P.,
et al. Adaptive Structure Tensors and their Applications. In: Visualization
and Processing of Tensor Fields. Springer Berlin Heidelberg; 2006, p.
17–47. doi:10.1007/3-540-31272-2_2.
Cabral, B., Leedom, L.C.. Imaging Vector Fields Using Line Integral
Convolution. In: Proc. ACM SIGGRAPH. ACM; 1993, p. 263–270.
Phong, B.T.. Illumination for Computer Generated Pictures. Commun
ACM 1975;18(6):311–317. doi:10.1145/360825.360839.
Semmo, A., Limberger, D., Kyprianidis, J.E., Döllner, J.. Image
Stylization by Oil Paint Filtering Using Color Palettes. In: Proc. CAe. The
Eurographics Association; 2015, p. 149–158.
Gooch, B., Coombe, G., Shirley, P.. Artistic Vision: Painterly Rendering
Using Computer Vision Techniques. In: Proc. NPAR. ACM; 2002, p.
Zhao, M., Zhu, S.C.. Sisley the Abstract Painter. In: Proc. NPAR. ACM;
2010, p. 99–107. doi:10.1145/1809939.1809951.
Lu, J., Sander, P.V., Finkelstein, A.. Interactive Painterly Stylization of
Images, Videos and 3D Animations. In: Proc. ACM I3D. ACM; 2010, p.
Hegde, S., Gatzidis, C., Tian, F.. Painterly rendering techniques: a
state-of-the-art review of current approaches. Comp Anim Virtual Worlds
Hertzmann, A., Jacobs, C.E., Oliver, N., Curless, B., Salesin, D.H..
Image Analogies. In: Proc. ACM SIGGRAPH. ACM; 2001, p. 327–340.
Zhao, M., Zhu, S.C.. Portrait Painting Using Active Templates. In: Proc.
NPAR. ACM; 2011, p. 117–124. doi:10.1145/2024676.2024696.
Wang, T., Collomosse, J., Hunter, A., Greig, D.. Learnable Stroke
Models for Example-based Portrait Painting. In: Proc. British Machine
Vision Conference. BMVA; 2013, p. 36.1–36.11.
Gatys, L.A., Ecker, A.S., Bethge, M.. A Neural Algorithm of Artistic
Style. CoRR 2015;URL: http://arxiv.org/abs/1508.06576.
DeCarlo, D., Santella, A.. Stylization and Abstraction of Photographs.
ACM Trans Graph 2002;21(3):769–776. doi:
Wen, F., Luan, Q., Liang, L., Xu, Y.Q., Shum, H.Y.. Color Sketch
Generation. In: Proc. NPAR. ACM; 2006, p. 47–54. doi:
Mould, D.. A Stained Glass Image Filter. In: Proc. EGRW. 2003, p. 20–25.
O’Donovan, P., Mould, D.. Felt-based Rendering. In: Proc. NPAR. ACM;
2006, p. 55–62. doi:10.1145/1124728.1124738.
Tomasi, C., Manduchi, R.. Bilateral Filtering for Gray and Color Images.
In: Proc. ICCV. IEEE; 1998, p. 839–846. doi:
Winnemöller, H., Olsen, S.C., Gooch, B.. Real-Time Video Abstraction.
ACM Trans Graph 2006;25(3):1221–1226.
Kyprianidis, J.E., Döllner, J.. Image Abstraction by Structure Adaptive
Filtering. In: Proc. EG UK TPCG. The Eurographics Association; 2008, p.
Kang, H., Lee, S., Chui, C.K.. Flow-Based Image Abstraction. IEEE
Trans Vis Comput Graphics 2009;15(1):62–76. doi:
Yoon, J.C., Lee, I.K., Kang, H.. Video Painting Based on a Stabilized
Time-Varying Flow Field. IEEE Trans Vis Comput Graphics 2012;18:58–
Bousseau, A., Kaplan, M., Thollot, J., Sillion, F.X.. Interactive Wa-
tercolor Rendering with Temporal Coherence and Abstraction. In: Proc.
NPAR. ACM; 2006, p. 141–149. doi:10.1145/1124728.1124751.
Farbman, Z., Fattal, R., Lischinski, D., Szeliski, R.. Edge-Preserving
Decompositions for Multi-Scale Tone and Detail Manipulation. ACM
Trans Graph 2008;27(3):67:1–67:10. doi:10.1145/1360612.1360666.
Subr, K., Soler, C., Durand, F.. Edge-preserving Multiscale Im-
age Decomposition based on Local Extrema. ACM Trans Graph
Xu, L., Lu, C., Xu, Y., Jia, J.. Image Smoothing via L0 Gradient
Minimization. ACM Trans Graph 2011;30(6):174:1–174:12.
Heckbert, P.. Color Image Quantization for Frame Buﬀer Display. SIG-
GRAPH Comput Graph 1982;16(3):297–307. doi:
Gervautz, M., Purgathofer, W.. A Simple Method for Color Quan-
tization: Octree Quantization. In: New Trends in Computer Graph-
ics. Springer Berlin Heidelberg; 1988, p. 219–231. doi:
978-3- 642-83492- 9_20.
Kanungo, T., Mount, D.M., Netanyahu, N.S., Piatko, C.D., Silverman,
R., Wu, A.Y.. An eﬃcient k-means clustering algorithm: Analysis and
implementation. IEEE Trans Pattern Anal Mach Intell 2002;24(7):881–
Chen, J., Pappas, T.N., Mojsilovic, A., Rogowitz, B.. Adaptive Percep-
tual Color-Texture Image Segmentation. IEEE Trans Image Processing
Yue, X., Miao, D., Cao, L., Wu, Q., Chen, Y.. An eﬃcient color
quantization based on generic roughness measure. Pattern Recognition
Yu, J., Lu, C., Sato, Y.. Sparsity-based Color Quantization with Preserved
Image Details. In: SIGGRAPH Asia 2014 Posters. ACM; 2014, p. 32:1–
Kim, T.h., Ahn, J., Choi, M.G.. Image Dequantization: Restoration of
Quantized Colors. Comput Graph Forum 2007;26(3):619–626. doi:
Santella, A., DeCarlo, D.. Visual Interest and NPR: an Evaluation
and Manifesto. In: Proc. NPAR. ACM; 2004, p. 71–150. doi:
Cole, F., DeCarlo, D., Finkelstein, A., Kin, K., Morley, K., Santella,
A.. Directing Gaze in 3D Models with Stylized Focus. In: Proc. EGSR.
The Eurographics Association; 2006, p. 377–387. doi:
Cong, L., Tong, R., Dong, J.. Selective Image Abstraction. Vis Comput
2011;27(3):187–198. doi:10.1007/s00371-010- 0522-2.
Collomosse, J., Hall, P.. Genetic Paint: A Search for Salient Paintings.
In: Applications of Evolutionary Computing; vol. 3449. Springer Berlin
Heidelberg; 2005, p. 437–447. doi:
10.1007/978-3- 540-32003- 6_44
Orzan, A., Bousseau, A., Barla, P., Thollot, J.. Structure-preserving
Manipulation of Photographs. In: Proc. NPAR. ACM; 2007, p. 103–110.
Rosin, P.L., Lai, Y.K.. Non-photorealistic Rendering with Spot Colour.
In: Proc. CAe. ACM; 2013, p. 67–75. doi:
Chu, N., Baxter, W., Wei, L.Y., Govindaraju, N.. Detail-preserving
Paint Modeling for 3D Brushes. In: Proc. NPAR. ACM; 2010, p. 27–34.
Kyprianidis, J.E., Kang, H.. Image and Video Abstraction by Coherence-
Enhancing Filtering. Comput Graph Forum 2011;30(2):593–602. doi:
Hertzmann, A.. Fast Paint Texture. In: Proc. NPAR. ACM; 2002, p.
Zhang, E., Hays, J., Turk, G.. Interactive Tensor Field Design and Visual-
ization on Surfaces. IEEE Trans Vis Comput Graphics 2007;13(1):94–107.
Kagaya, M., Brendel, W., Deng, Q., Kesterson, T., Todorovic, S.,
Neill, P., et al. Video Painting with Space-Time-Varying Style Parameters.
IEEE Trans Vis Comput Graphics 2011;17(1):74–87. doi:
Olsen, S.C., Maxwell, B.A., Gooch, B.. Interactive Vector Fields
for Painterly Rendering. In: Proc. Graphics Interface. Canadian Human-
Computer Communications Society; 2005, p. 241–247.
Hanrahan, P., Haeberli, P.. Direct WYSIWYG Painting and Texturing
on 3D Shapes. SIGGRAPH Comput Graph 1990;24(4):215–223. doi:
Schwarz, M., Isenberg, T., Mason, K., Carpendale, S.. Modeling with
Rendering Primitives: An Interactive Non-photorealistic Canvas. In: Proc.
NPAR. ACM; 2007, p. 15–22. doi:10.1145/1274871.1274874.
Anjyo, K.i., Wemler, S., Baxter, W.. Tweakable Light and Shade for
Cartoon Animation. In: Proc. NPAR. ACM; 2006, p. 133–139. doi:
Todo, H., Anjyo, K.i., Baxter, W., Igarashi, T.. Locally Controllable
Stylized Shading. ACM Trans Graph 2007;26(3):17:1–17:7. doi:
McCann, J., Pollard, N.S.. Real-time Gradient-domain Painting. ACM
Trans Graph 2008;27(3):93:1–93:7. doi:10.1145/1360612.1360692.
Baxter, W.V., Lin, M.C.. A Versatile Interactive 3D Brush Model. In:
Proc. Paciﬁc Graphics. IEEE; 2004, p. 319–328.
Ritter, L., Li, W., Curless, B., Agrawala, M., Salesin, D.. Painting
With Texture. In: Proc. EGSR. The Eurographics Association; 2006, p.
DiVerdi, S.. A brush stroke synthesis toolbox. In: Image and Video-
Based Artistic Stylisation; vol. 42. Springer London; 2013, p. 23–44.
doi:10.1007/978-1- 4471-4519- 6_2.
Mahy, M., Eycken, L., Oosterlinck, A.. Evaluation of Uniform Color
Spaces Developed after the Adoption of CIELAB and CIELUV. Color Re-
search & Application 1994;19(2):105–121. doi:
Saad, Y., Schultz, M.H.. GMRES: A generalized minimal residual
algorithm for solving nonsymmetric linear systems. SIAM J Sci and Stat
Comput 1986;7(3):856–869. doi:10.1137/0907058.
Cheng, M.M., Warrell, J., Lin, W.Y., Zheng, S., Vineet, V., Crook, N..
Eﬃcient Salient Region Detection with Soft Image Abstraction. In: Proc.
ICCV. IEEE; 2013, p. 1529–1536. doi:10.1109/ICCV.2013.193.
Bousseau, A.. Non-Linear Aperture for Stylized Depth of Field. In: ACM
SIGGRAPH Talks. ACM; 2009, p. 57:1–57:1. doi:
Gooch, B., Reinhard, E., Gooch, A.. Human Facial Illustrations: Creation
and Psychophysical Evaluation. ACM Trans Graph 2004;23:27–44.
Li, J., Mould, D.. Image Warping for a Painterly Eﬀect. In: Proc. CAe.
The Eurographics Association; 2015, p. 131–140.
Zhao, M., Zhu, S.C.. Customizing Painterly Rendering Styles Using
Stroke Processes. In: Proc. NPAR. ACM; 2011, p. 137–146. doi:
Yatziv, L., Sapiro, G.. Fast image and video colorization using chromi-
nance blending. IEEE Trans Image Processing 2006;15(5):1120–1129.
Original photographs used in Figure 6a/d/e, Figure 7a, Figure 12, Figure 13, and
Figure 28 (car) courtesy Phillip Greenspun. Photographs from ﬂickr.com kindly
provided under Creative Commons license by Anita Priks (Figure 2), Vincent van
der Pas (Figure 3), Gulsen Ozcan (Figure 8), Akaporn Bhothisuwan (Figure 9),
Valerija Fetsch (Figure 11), matthiashn (Figure 16), Rajarshi Mitra (Figure 18),
Mark Pouley (Figure 28 /landscape), Jelto Buurman (Figure 28 /still life),
Harclade (Figure 28 /ballerina), Christophe Chenevier (Figure 28 /girl), and
Florence Ivy (Figure 28 /chameleon).