Interactive Image Filtering with Multiple Levels-of-Control on Mobile Devices *
Amir Semmo, Tobias Dürschmid, Matthias Trapp, Mandy Klingbeil, Jürgen Döllner
Hasso Plattner Institute, University of Potsdam, Germany
Sebastian Pasewaldt
Digital Masterpieces GmbH
[Figure 1 panels: Input; Oil-paint Filtering and Toon Rendering, each shown with Default Preset, Global Parameterization, and Local Parameterization.]
Figure 1: Results obtained with the proposed framework for image stylization. Users are able to parameterize image filters at three levels of
control: presets, global, and local adjustments (here: contour granularity and level of abstraction) enable a creative editing process.
Abstract
With the continuous development of mobile graphics hardware, in-
teractive high-quality image stylization based on nonlinear filter-
ing is becoming feasible and increasingly used in casual creativity
apps. However, these apps often provide only high-level controls to
parameterize image filters and generally lack support for low-level
(artistic) control, thus automating art creation rather than assisting
it. This work presents a GPU-based framework for parameterizing
image filters at three levels of control: (1) presets fol-
lowed by (2) global parameter adjustments can be interactively re-
fined by (3) complementary on-screen painting that operates within
the filters’ parameter spaces for local adjustments. The framework
provides a modular XML-based effect scheme to effectively build
complex image processing chains—using these interactive filters
as building blocks—that can be efficiently processed on mobile de-
vices. Thereby, global and local parameterizations are directed with
higher-level algorithmic support to ease the interactive editing pro-
cess, which is demonstrated by state-of-the-art stylization effects,
such as oil-paint filtering and watercolor rendering.
Keywords: mobile, image filtering, NPR, interaction, GPU
Concepts: • Computing methodologies → Image manipulation;
http://www.hpi3d.de | http://www.digitalmasterpieces.com
© ACM. This is the authors’ version of the work. It is posted here by per-
mission of ACM for your personal use. Not for redistribution. The definitive
version will be published in SA ’16 Symposium on Mobile Graphics and Interac-
tive Applications, December 05-08, 2016, Macao.
<http://dx.doi.org/10.1145/2999508.2999521>
1 Introduction
Image stylization enjoys a growing popularity on mobile platforms
to support casual creativity [Winnemöller 2013]. Today, a tremendous
number of mobile apps exist that implement stylization effects
simulating the aesthetic appeal of artistic media [Dev 2013], such as
oil, watercolor, and pencil. In particular, image filtering is very pop-
ular with the public to transform images into expressive renditions.
However, mobile apps that implement image filters typically provide
high-level controls such as global parameter adjustments and only
limited low-level controls for local adjustments, e.g., via painting,
thus automating art creation rather than assisting it.
Increasing the spectrum of interactivity for these (semi-)automatic
filters towards multiple levels of control, e.g., by integrating
tools for brush-based painting, constitutes a contemporary field of re-
search [Isenberg 2016] to ease visual expression for both artists
and non-artists [Salesin 2002; Gooch et al. 2010]. With the con-
tinuous development of mobile graphics hardware and modalities
(e.g., touch with pressure), interactive tools for filter parameteriza-
tion are becoming feasible but remain challenging in two respects:
1. High-level and low-level controls should not be mutually ex-
clusive, and should permit an iterative parameterization at global and
pixel level to support different user skills and needs.
2. Complex stylization effects that require several passes of
(non-)linear filtering need to be processed at interactive frame
rates to support immediate visual feedback.
Both challenges need to be addressed to enable an interactive edit-
ing that fosters the creativity of both “experts” and casual users.
This paper presents an extensible GPU-based framework for mo-
bile devices that addresses these challenges by shading-based fil-
tering that couples higher-level algorithmic support with low-level
controls for parameterization. The framework provides a modular
XML-based effect scheme to rapidly build interactive image pro-
cessing chains, e.g., for toon rendering [Winnemöller et al. 2006]
or oil-paint filtering [Semmo et al. 2016b]. Here, presets and
global adjustments can be complemented with brush-based painting
that operates within the parameter spaces of the stylization effects,
e.g., to locally adjust colors and the level of abstraction (Figure 1).
[Figure 2 diagram: the effect specification (input data: camera feed, images, videos; asset data: shaders, textures, geometry; effect structure: passes, pipeline, parameters) defines state sets (device state, effect state, rendering state) that configure an effect instance for rendering; the end user (1) chooses presets, (2) adjusts global parameters, (3) modifies parameter masks via painting, and (4) consumes the rendered output, while the effect developer creates the effect specification.]
Figure 2: Overview of the framework from the perspective of the end user (red) and the effect developer (gray).
This way, results can be obtained quickly while advanced control is
given for low-level adjustments.
To summarize, this paper makes the following contributions:
C1 A framework for effectively building complex image process-
ing chains that can be efficiently processed on mobile devices.
C2 Concepts for interactive per-pixel parameterizations of image
filters and their interplay with global parameterizations, which
are demonstrated by stylization effects such as toon rendering
and oil paint filtering using the proposed framework.
2 Related Work
Image processing constitutes an essential building block for mo-
bile applications such as augmented reality and painterly render-
ing [Thabet et al. 2014] to facilitate the expression, recognition,
and communication of image contents [Dev 2013]. Previous works
that deal with image stylization primarily focused on technology
transfers and optimizations to address the inherent limitations of
mobile graphics hardware such as computing power and memory
resources [Capin et al. 2008], while mobile apps primarily explored
modalities for interactive parameterization.
Image filtering and stroke-based rendering [Kyprianidis et al. 2013]
have been particularly used in mobile expressive rendering [Dev
2013] to simulate popular media and effects such as cartoon [Fis-
cher et al. 2008; Kim and Lin 2012], watercolor [Oh et al. 2012;
DiVerdi et al. 2013], and oil paint [Wexler and Dezeustre 2012;
Kang and Yoon 2015]. Most of these works implement optimiza-
tion techniques to enable real-time performance, for instance us-
ing separated filter kernels with optimized branching and varying
floating-point precision [Singhal et al. 2011], genetic search algo-
rithms [Wexler and Dezeustre 2012], and vector graphics [DiVerdi
et al. 2013]. These works, however, typically provide only effect-
specific implementations and solutions. By contrast, we propose a
framework for authoring complex stylization effects with a uniform
XML-based scheme, which provides a modular asset management
for rapid prototyping. This includes shading-based effects that sup-
port nonlinear filtering, e.g., adapted to local image structures by
following the approach of [Kyprianidis and Döllner 2008], to obtain
flow-aligned outputs of deliberate, edge-preserving levels of
abstraction. Thereby, we demonstrate that current mobile GPUs enable
interactive filter-based image stylization that provides features
equivalent to those of the respective desktop versions, such as toon
[Winnemöller et al. 2006], oil paint [Semmo et al. 2016b], and
watercolor [Bousseau et al. 2006] rendering.
The productization of research systems also led to a number of mo-
bile apps, such as PencilFX, ToonPAINT [Winnemöller 2013], and
PaintCan [Benedetti et al. 2014], which typically involves reducing
a large number of technical parameters to a few comprehensive pa-
rameters that are exposed to users. State-of-the-art “filtering” apps,
such as Waterlogue¹ and Brushstroke², typically provide only high-
level parameters for effect tuning (e.g., presets, global adjustments).
By contrast, we follow the approach of coupling low-level painting
with algorithmic support—e.g., as used in ToonPAINT and Paint-
Can [Benedetti et al. 2014]—and extend the concept of specialized
local parameterizations [Anjyo et al. 2006; Todo et al. 2007] to a
generalized brush-based painting within effect-parameter spaces.
Thereby, stylization effects are defined as a composition of ordered,
parametrizable image filters that may be injected by user-specified
motions [Hays and Essa 2004; Olsen et al. 2005] via on-screen
painting [Hanrahan and Haeberli 1990]. We show that this ap-
proach greatly increases the feasibility of implementing interactive
tools that adjust the appearance of filter-based stylization effects at
run-time, which constitutes a contemporary field of research in the NPR
community [Isenberg 2016] to help non-artists explore parameter
spaces more easily [Lum and Ma 2002] and enable right-brained
thinking [Gooch et al. 2010]. Previous works showed that this type
of “user-centric NPR” [Winnemöller 2013] has potential to assist
art creation, e.g., enabling user-defined stroke orientations on desk-
top systems [Salisbury et al. 1997; Gerl and Isenberg 2013] and
mobile devices [Benedetti et al. 2014].
3 Technical Approach
Figure 2 gives an overview of the stages and interfaces—for ef-
fect developers and users—of the proposed framework. The stages
comprise the effect specification (Section 3.1) and parameteriza-
tion (Section 3.2), which are described in the following.
3.1 Effect Specification
For effective design and implementation of complex image process-
ing pipelines, a modular approach is used for specifying: (1) render-
ing resources (e.g., effect shaders, textures, and buffers), (2) control
¹ http://www.tinrocket.com/waterlogue
² http://www.codeorgana.com/brushstroke
[Figure 3 diagram: presets (high level), global parameters (medium level), and technical inputs (low level), connected by values v and transformation functions f in 1-to-1, 1-to-N, and N-to-1 combinations.]
Figure 3: Mechanisms for combining parameter spaces: 1-to-1
(propagation) supports advanced technical control, 1-to-N (com-
position) supports casual usages by reduction, N-to-1 (decompo-
sition) supports single-element settings of vector inputs, and their
respective combinations (here: indicated by the respective colors).
flow and state of rendering passes (such as ping-pong rendering or
image pyramid synthesis), and (3) parameters for different levels of
control using an XML-based format. This way, effect specifications
combine implementation-specific asset data with a platform-independent
effect structure and parameterization that is required by user
interfaces (refer to the supplemental material for
an example). The format can be interpreted and instantiated for
different rendering frameworks on various platforms, e.g., OpenGL
ES with Android, iOS, or WebGL.
Moreover, this modular approach facilitates the implementation of
complex pipelines for effect developers of different skill levels.
While shader developers may have direct control at implementa-
tion level, e.g., shader source code, technical parameters, and ren-
dering state, effect composers are able to create complex effects
by re-using, combining or referencing existing effects or only parts
thereof, e.g., edge enhancement or color quantization.
The core effect specification consists of (1) shader program dec-
larations that reference fragment and vertex shader files, (2) tex-
ture declarations that define OpenGL textures with parameters such
as wrap modes, filter modes and an optional bitmap source, and
(3) rendering pass specifications that configure pass inputs and out-
puts for corresponding shader programs. Using this XML format,
complex processing chains can be defined by creating a pipeline
with multiple parallel or sequential rendering passes connected by
pass input and output textures. An example is shown in Listing A1.
3.2 Effect Parameterization
For effect parameterization, the main idea is to decouple complex
technical parameters—e.g., defined in the effect shaders as uniform
variables—from parameters that are exposed to a user, particularly
to hide technical complexity and facilitate ease-of-use. The con-
ceptual overview shown in Figure 3 divides a parameterization into
three technical layers. Each layer serves as abstraction of the under-
lying effect configuration by grouping it semantically and providing
a meaningful name. Here, presets may provide high-level control
by assigning concrete values v to global parameters (medium-level
control). Global parameters map to technical (shader) parameters
using a transformation function f that can be adjusted by parameter
masks injected by painting to provide low-level control.
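As an illustration of this layered mapping, the following fragment-shader sketch (GLSL ES 1.00) shows how a technical input derived from a global parameter could be adjusted per pixel by a painted parameter mask; the uniform names and the offset scheme are assumptions for illustration, not the framework's actual interface.

precision mediump float;

uniform sampler2D u_ParameterMask;  // mask texture updated by on-screen painting
uniform float u_KernelRadius;       // technical input derived from a global parameter via f

varying vec2 v_texCoord;

void main() {
    // Mask value in [0,1]; 0.5 denotes "no local adjustment" in this sketch.
    float mask = texture2D(u_ParameterMask, v_texCoord).r;

    // Low-level control: offset the technical input around its globally mapped value.
    float radius = clamp(u_KernelRadius + 4.0 * (mask - 0.5), 0.0, 8.0);

    // A real filter pass would steer its kernel with 'radius'; this sketch only
    // visualizes the locally adjusted technical input as a grayscale value.
    gl_FragColor = vec4(vec3(radius / 8.0), 1.0);
}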
3.2.1 Presets (high-level control)
A preset comprises a “convincing” configuration of global param-
eters targeting a characteristic style. A set of presets should reveal
the assorted characteristics and variations of an effect—e.g., differ-
ent hatching styles (Figure 4)—that each serve as a starting point
for fine tuning. Presets adjust values of global parameters that map
to internal, technical parameters defined by the effect developer.
Figure 4: Overview of presets defined for pencil hatching: (A) in-
put, (B) default, (C) fine liner, (D) fine, (E) strong black and white,
(F) lighten black and white, (G) stumpy pencil, (H) sharp pencil.
[Figure 5 panels: original, thickness=1, thickness=2, thickness=3, thickness=4]
Figure 5: Adjusting the global parameter “stroke thickness”. The
thickness and level of abstraction scale with the global parameter.
Effect developers define presets in the XML effect file with a de-
scriptive name, an icon to be displayed in the GUI and target values
for global parameters (Listing A2). End users are then able to select
one of the presets by choosing a named icon from a list.
3.2.2 Global Parameters (medium-level control)
A global parameter maps a user-definable value to interdependent
technical inputs that influence visual attributes of an effect (Fig-
ure 5). A global parameter consists of a value of a specific type
(e.g., integer, float, color, boolean) within a pre-defined value range.
Effect developers map global parameters to input values of effect
shaders using the XML specification. This mapping may include
a basic mathematical transformation—e.g., adding or multiplying
constants and value parameters, or non-linear functions such as log-
arithmic (Listing A3, line 10). This way, global parameter values—
changed by the end user—are automatically propagated to the effect
shaders. Thereby, GUI elements such as value-range sliders, color
pickers, or switch buttons can be used to influence visual attributes
such as the color brightness, amount of color, contour width, or
level of abstraction.
3.2.3 Parameter Masks (low-level control)
Parameter painting is a per-pixel adjustment of the functions that
transform global parameters to technical inputs. Here, the metaphor
of brush-based on-screen painting with touch-enabled inputs is ap-
plied (e.g., using fingers, pen) to locally refine the appearance.
Effect developers define a painting parameter as a distinguished
global parameter with (1) an additional associated mask texture that
is automatically updated by the framework when painting on-screen
and is passed to the effect shaders, and (2) a set of masking brushes
that can be used for parameter painting (Listing A4). Analogous
to [DiVerdi 2013], these masking brush configurations are used to
adjust the painting strength, stroke width, and falloff that controls
the smoothness at the brush border. A mask is either referenced
with an add or subtract operation to adjust or scale a global param-
eter in correspondence with the XML-based effect specification and
effect shaders. Examples for these modes are given in Section 4.
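For illustration, the following fragment-shader sketch (GLSL ES 1.00) outlines how a single brush stamp could be accumulated into the mask texture when the user paints; the uniform names and the circular footprint model are assumptions that mirror the strength, stroke width, and falloff settings described above.

precision mediump float;

uniform sampler2D u_Mask;       // current mask content (ping-pong input)
uniform vec2  u_BrushCenter;    // stamp position in texture coordinates
uniform float u_BrushRadius;    // stroke width
uniform float u_Falloff;        // fraction of the radius with a soft border, in (0, 1]
uniform float u_Strength;       // painting strength per stamp
uniform float u_Sign;           // +1.0 to increase, -1.0 to decrease the parameter

varying vec2 v_texCoord;

void main() {
    float d = distance(v_texCoord, u_BrushCenter);
    // Soft circular footprint: full strength inside, smooth falloff at the brush border.
    float footprint = 1.0 - smoothstep(u_BrushRadius * (1.0 - u_Falloff), u_BrushRadius, d);

    float mask = texture2D(u_Mask, v_texCoord).r;
    mask = clamp(mask + u_Sign * u_Strength * footprint, 0.0, 1.0);
    gl_FragColor = vec4(vec3(mask), 1.0);
}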
Table 1: Overview of presets, global parameters, and technical inputs defined in the effect structure and asset data for toon rendering. The mapping of global parameters to technical parameters is indicated in the right-hand column and covers bilateral filtering (σ_r, σ_d), color quantization (q, ϕ_q), rendering with a spot color (α, β), and XDoG filtering (σ_e, σ_m, p, ϕ, ε) as described in [Winnemöller et al. 2006; Winnemöller et al. 2012]. Each mapping can be locally adjusted by the proposed parameter painting. The color highlights show exemplary categories of parameter mappings (Figure 3).

Preset values (Comic / Details / Papercut / Kriek)      Global Parameter   Value Range   Mapped Technical Inputs
1.5 / 3.0 / 0.5 / 0.5                                   Details            0.0–3.0       bilateral filter (among others)
10 / 24 / 8 / 64                                        Colors             5–64          color quantization
1.6 / 2.5 / 1.0 / 1.0                                   Color Blur         1.0–3.0       color quantization
1.0 / 1.0 / 1.0 / 0.65                                  Color Threshold    0.0–1.0       spot color
#00000000 / #00000000 / #00000000 / #99C5D0FF           Spot Color         colors        spot color
2.5 / 1.0 / 1.1 / 1.1                                   Contour Width      0.4–4.0       XDoG
0.98 / 0.99 / 0.96 / 0.96                               Contour Gran.      0.8–1.0       XDoG
0.0 / 0.0 / 2.2 / 2.2                                   Blackness          0.0–3.0       XDoG
End users are able to adjust a selected parameter in local im-
age regions by touch-enabled inputs such as wiping and dragging.
Further, they can choose between the pre-defined brush configura-
tions in order to customize the painting process. This enables the use
of small hard brushes for fine-granular painting as well as large
smooth brushes for speed painting. Furthermore, users can choose
between increasing and decreasing parameter values (e.g., lighten
vs. darken), which enables them to undo painted adjustments.
Finally, a specialized painting mode—direction painting—is intro-
duced that traces the tangent information of the virtual brush stroke.
As shown in Figure 6C, this mode enables pencil stroke rotations
according to the painting direction by using a three-channel texture
containing the x- and y-component of the tangent and the strength-
/pressure of the stroke.
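A minimal sketch (GLSL ES 1.00) of how such a direction mask could be consumed in the hatching pass is given below; the identifier names and the blending of the painted tangent with the automatic flow orientation are assumptions for illustration.

precision mediump float;

uniform sampler2D u_HatchTexture;    // tonal art map
uniform sampler2D u_DirectionMask;   // rgb = (tangent.x, tangent.y, pressure)
uniform sampler2D u_FlowField;       // orientation from the smoothed structure tensor

varying vec2 v_texCoord;

void main() {
    vec3 paint = texture2D(u_DirectionMask, v_texCoord).rgb;
    vec2 painted = paint.xy * 2.0 - 1.0;                  // decode from [0,1] to [-1,1]
    vec2 flow    = texture2D(u_FlowField, v_texCoord).xy * 2.0 - 1.0;

    // Blend the painted stroke direction over the automatic flow direction by pressure.
    vec2 dir = normalize(mix(flow, painted, paint.z));

    // Rotate the hatching texture coordinates so strokes align with 'dir'.
    mat2 rot = mat2(dir.x, dir.y, -dir.y, dir.x);
    vec2 uv  = rot * v_texCoord;

    gl_FragColor = texture2D(u_HatchTexture, uv);
}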
Figure 6: Parameter painting: (A) global adjustments, (B) locally
adjusted contours, (C) unified stroke directions via painting.
4 Case Studies
The framework and effects were implemented with the Android
SDK using Java, OpenGL ES, and the OpenGL ES Shading Lan-
guage. In addition, the interaction concept was tested on iOS de-
vices. We have implemented a range of state-of-the-art effects that
build on image filtering and example-based texturing, and which
were originally designed for (semi-)automatic stylization, in order to
demonstrate the effectiveness of our methods in increasing the spec-
trum of interactivity. The following effects were deployed on a
Google Pixel C with an NVIDIA Maxwell 256-core GPU.
Toon Filtering. We have implemented the toon effect (Figure 7)
of [Winnemöller et al. 2006], which is based on bilateral filtering,
local luminance thresholding, and difference-of-Gaussians (DoG)
filtering, and enhanced it by rendering with spot colors [Rosin and
Lai 2013]. Here, flow-based filter kernels are used for bilateral
and extended DoG (XDoG) filtering—following the approaches
described in [Kyprianidis and Döllner 2008; Winnemöller et al.
2012]—that are adapted to local image structures to provide smooth
outputs at curved boundaries. In total, eight global parameters
enable interactive control (Table 1). On the one hand, the color
amount and its transitions map to technical quantization parameters
[Winnemöller et al. 2006], the level of detail parameterizes the
bilateral filter, contour granularity, and color amount, and the spot
color is applied with its threshold. On the other hand, the contour
width, contour granularity, and blackness map to technical parameters
of the XDoG filter [Winnemöller et al. 2012]. Each mapping
can be locally adjusted with the proposed parameter-painting con-
cept. XML-specific definitions of this effect are found in the sup-
plemental materials.
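For reference, the following fragment-shader sketch (GLSL ES 1.00) outlines the XDoG thresholding stage following [Winnemöller et al. 2012], assuming that two flow-aligned Gaussian responses have been computed in preceding passes; the uniform names are illustrative, and the parameters p, ε, and ϕ correspond to the technical inputs listed in Table 1.

precision mediump float;

uniform sampler2D u_GaussianE;   // G_sigma_e of the luminance (earlier pass)
uniform sampler2D u_GaussianM;   // G_(k*sigma_e) of the luminance (earlier pass)
uniform float u_P;               // sharpening weight p
uniform float u_Epsilon;         // threshold epsilon
uniform float u_Phi;             // steepness phi of the soft ramp

varying vec2 v_texCoord;

// 1 + tanh(phi * (u - eps)), with tanh written via exp() since GLSL ES 1.00
// provides no hyperbolic functions.
float softRamp(float u, float eps, float phi) {
    float e = exp(2.0 * phi * (u - eps));
    return 1.0 + (e - 1.0) / (e + 1.0);
}

void main() {
    float gE = texture2D(u_GaussianE, v_texCoord).r;
    float gM = texture2D(u_GaussianM, v_texCoord).r;

    // Sharpened reconstruction following the XDoG formulation: S = (1 + p) * G_e - p * G_m.
    float s = (1.0 + u_P) * gE - u_P * gM;

    // Soft thresholding: white above epsilon, smooth tanh ramp below.
    float edge = (s >= u_Epsilon) ? 1.0 : softRamp(s, u_Epsilon, u_Phi);

    gl_FragColor = vec4(vec3(clamp(edge, 0.0, 1.0)), 1.0);
}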
Pencil Hatching. The pencil hatching effect shown in Figure 4–6
aligns tonal art maps [Praun et al. 2001; Webb et al. 2002] with the
main directions of image features—information that is obtained by
an eigenanalysis of the smoothed structure tensor [Brox et al. 2006].
The approach is enhanced by gradations in luminance and comple-
mented by contours derived from a flow-based DoG filter [Kyprianidis
and Döllner 2008]. Here, contour enhancement is handled
separately from tonal depiction, where a painting value parameter
is introduced to locally remove contours. The virtual brush mod-
els for the pencil hatching effect vary with the global parameters.
In particular, brightness and luminance-related parameters use a
small strength and a soft brush to accomplish smooth color tran-
sitions, while parameter masks that adjust the contour granularity
may be drawn with a stronger brush to instantly remove contours.
The painting of the contours has a scale semantic, i.e., the impact of
the value parameter is reduced by the mask. Therefore, users are
able to globally adjust the strength of the contours and locally re-
move them. The painting of the brightness parameter uses the add
and subtract mode—i.e., the painting is interpreted as positive or
negative offset around the value of the global parameter—to locally
lighten or darken image regions.
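The two mask semantics can be sketched as follows (GLSL ES 1.00); the uniform names and the compositing of hatching and contours are simplified assumptions rather than the effect's actual shader code.

precision mediump float;

uniform sampler2D u_ContourMask;     // painted mask for contour removal
uniform sampler2D u_BrightnessMask;  // painted mask for brightness (0.5 = neutral)
uniform float u_ContourStrength;     // global parameter mapped to the DoG contours
uniform float u_Brightness;          // global brightness parameter
uniform sampler2D u_Hatching;        // hatching pass output (luminance)
uniform sampler2D u_Contours;        // flow-based DoG contours (0 = contour)

varying vec2 v_texCoord;

void main() {
    float contourMask    = texture2D(u_ContourMask, v_texCoord).r;
    float brightnessMask = texture2D(u_BrightnessMask, v_texCoord).r;

    // Scale semantic: the painted mask reduces the impact of the global value.
    float contourStrength = u_ContourStrength * (1.0 - contourMask);

    // Add/subtract semantic: painting lightens or darkens around the global value.
    float brightness = u_Brightness + (brightnessMask - 0.5);

    float tone    = texture2D(u_Hatching, v_texCoord).r + brightness;
    float contour = mix(1.0, texture2D(u_Contours, v_texCoord).r, contourStrength);

    gl_FragColor = vec4(vec3(clamp(tone * contour, 0.0, 1.0)), 1.0);
}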
[Figure 7 panels: Toon Rendering and Watercolor Rendering, each shown with global and local parameterization and the corresponding parameter mask.]
Figure 7: Results of toon and watercolor rendering implemented with our framework. The insets show parameter masks with color-encoded
values that were injected by brush-based painting, i.e., blackness for toon rendering and the color threshold for watercolor rendering.
Oil Paint and Watercolor Rendering. We used our framework
to implement the oil-paint filter described in [Semmo et al. 2016c],
using flow-based joint bilateral upsampling [Kopf et al. 2007] to
work on image pyramids, and thus to be able to reduce the num-
ber of texture fetches when decreasing the image and filter kernel
sizes [Semmo et al. 2016c]. This multi-scale approach enables in-
teractive local parameterizations at run-time, e.g., of Gaussian filter
kernel sizes to adjust the level of abstraction (Figure 1) and the flow
direction. In addition, we have implemented watercolor rendering
based on the works described in [Bousseau et al. 2006; Wang et al.
2014] that simulates effects such as wobbling, edge darkening, pig-
ment density variation, and wet-in-wet (Figure 7). Each of these
effects can be locally parameterized, e.g., to adjust the wetness by
varying the filter kernel sizes when scattering noise in gradient di-
rection of feature contours. Both effects provide presets for entry-
level parameterizations. Here, the interested reader is referred to
the supplemental video.
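As an illustration of such local kernel-size control, the following one-dimensional Gaussian pass sketch (GLSL ES 1.00) derives its kernel size per pixel from a painted mask; the fixed tap count, the sigma mapping, and all identifier names are assumptions of this sketch, not the framework's implementation.

precision mediump float;

uniform sampler2D u_Texture;
uniform sampler2D u_AbstractionMask;  // painted mask in [0,1]
uniform vec2 u_TexelSize;             // (1/width, 0) or (0, 1/height)
uniform float u_MaxSigma;             // technical input from the global parameter

varying vec2 v_texCoord;

const int MAX_RADIUS = 12;            // compile-time bound required by GLSL ES 1.00

void main() {
    float sigma  = max(u_MaxSigma * texture2D(u_AbstractionMask, v_texCoord).r, 0.001);
    float radius = min(float(MAX_RADIUS), 3.0 * sigma);

    vec3 sum = vec3(0.0);
    float weightSum = 0.0;
    for (int i = -MAX_RADIUS; i <= MAX_RADIUS; i++) {
        float x = float(i);
        if (abs(x) > radius) continue;                    // skip taps outside the local kernel
        float w = exp(-0.5 * (x * x) / (sigma * sigma));  // Gaussian weight
        sum += w * texture2D(u_Texture, v_texCoord + x * u_TexelSize).rgb;
        weightSum += w;
    }
    gl_FragColor = vec4(sum / weightSum, 1.0);
}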
5 Discussion
We evaluated our framework with respect to performance, user ex-
perience, and developer experience, which are described in the fol-
lowing.
Performance. Evaluations of the nonlinear image effects using
the test system described in Section 4 indicate that the framework
is able to perform at interactive frame rates for images with full HD
resolution, i.e., toon rendering runs at 14 frames per second (fps),
pencil hatching at 8 fps, oil-paint filtering at 4 fps, and watercolor
rendering at 5 fps, primarily due to optimizations such as separated
filter kernels [Singhal et al. 2011] and processing of image pyra-
mids [Semmo et al. 2016c].
User Experience. We have implemented the interaction con-
cept with the oil paint and watercolor rendering in the iOS app
BeCasso [Semmo et al. 2016a] and gathered qualitative feedback
from 26 participants—using their smartphone (iPhone) and tablet
(iPad)—as part of a public beta testing. Overall, they noted that
“many apps currently do this type of image manipulation” but ex-
pressed that the proposed (filtering) results are of “higher quality
than what is [currently] available”. Example outputs using test im-
ages of the NPR benchmark of [Mould and Rosin 2016] are shown
in Figure 9. However, our tests also exposed significant challenges
in usability. These are mainly related to the trade-off in providing
an easy-to-learn interface for novices while offering a highly effi-
cient, but hard-to-learn brush-based painting tool for expert users.
The learnability—the ease of completing a task the first time ex-
ploring the design—and the efficiency—the speed of accomplish-
ing a task after learning the design—as two key components of us-
ability as described by [Nielsen 1993] were affected the most. To
measure the users’ performance, we asked them to apply and edit
a specific preset to a given image. First, users were asked to adjust a set
of parameters of the preset globally, using the appropriate sliders.
As expected, both novices and experts reached a reasonable level of
usage proficiency and efficiency within a short time. Second, we
asked them to adjust a specific parameter locally, using their finger
as brush or eraser to increase or decrease the value of the param-
eter at a specific part of the image. All in all, more than 90% of
the sample group needed assistance to complete this task, as they
were not able to grasp the connection between a chosen parameter
and the brush or eraser. Those results are backed up by our quantita-
tive analysis, exposing the users’ behavior and performance through
event observations within the app.
We achieved much better results in other key components, includ-
ing satisfaction—the pleasure of using the design—and memora-
bility—the ease of reestablishing proficiency after a period of not
using the design. Upon understanding the concept, users willingly
returned to use the app and highlighted that the “parameter paint-
ing function sets the app apart from others”. Due to the number of
mistakes made, the main feature users requested was a precise undo
functionality recording each individual step of global and local changes. In ad-
dition, some users would find it exciting to combine different styles
at different spots of their images.
Developer Experience. Our framework is used for teaching im-
age and video processing on mobile devices with OpenGL ES. We
had 20 graduate and undergraduate students who developed novel
stylization effects such as the proposed pencil hatching and a photo-
graphic plate simulation, using our XML-based effect specification
and asset data as a construction kit. The students’ results (Figure 8)
and evaluations indicate that the proposed modular effect composit-
ing is suitable to engage developers in rapid prototyping, to help them focus
on effect design, and to address usability concerns such as parameter
optimization. Effect-specific features such as render to mipmap or
ping-pong render buffers, however, can be easily integrated with
additional programmatic effort.
We anticipate that the proposed methods and the parameterization
concept will lead to improvements in user-defined image filtering
that fosters casual creativity. Nonetheless, there are open questions
that need to be addressed in future studies, such as the range of
variation and level of engagement that can be achieved.
Figure 8: Additional effects implemented by our undergraduate students as part of a seminar project, using the proposed framework for
development: (left) photographic plate simulation based on unsharp masking, grayscale rendering and noise, (middle) simulation of the
Sheng Qi painting style by rendering with spot colors and visualizing paint drops, (right) bloom effect based on lightening image regions.
6 Conclusions and Future Work
This paper presents a GPU-based framework to deploy
and parameterize image filters on mobile devices for stylization ef-
fects such as toon, oil paint and watercolor rendering. The key con-
tribution of this framework is an effect parameterization at three
levels of control that allows users to rapidly obtain stylized outputs
of aesthetic value, while giving creative control for global and lo-
cal effect adjustments with algorithmic support. Moreover, with
the integrated asset-based effect specification, developers are able
to rapidly compose multi-stage nonlinear image effects and to focus
on the interplay of technical and user-adjustable parameters.
For future work, we plan to resolve usability issues by exposing lo-
cal effect adjustments as a separate set of tools to the users, visually
separated from the preset parameters. For example, this could in-
clude a pencil to add edges to a local spot, a spray bottle and paper
tissues to control the wetness of an image, or brushes to regulate
the amount of paint used. These tools resemble real-world objects,
making the process less technical, but more authentic. Further,
we consider supporting multiple granularities of effect specification
(e.g., expert modes with 1-to-1 parameter mappings), use mobile
sensors for effect parameterization, and employ machine learning
to precompute parameter masks according to image contents.
Acknowledgments
This work was funded by the Federal Ministry of Education and
Research (BMBF), Germany, for the AVA project 01IS15041B and
within the InnoProfile Transfer research group “4DnD-Vis” (www.
4dndvis.de).
References
Anjyo, K.-I., Wemler, S., and Baxter, W. 2006. Tweakable Light and Shade for Cartoon Animation. In Proc. NPAR, 133–139.
Benedetti, L., Winnemöller, H., Corsini, M., and Scopigno, R. 2014. Painting with Bob: Assisted Creativity for Novices. In Proc. ACM UIST, 419–428.
Bousseau, A., Kaplan, M., Thollot, J., and Sillion, F. X. 2006. Interactive Watercolor Rendering with Temporal Coherence and Abstraction. In Proc. NPAR, 141–149.
Brox, T., Weickert, J., Burgeth, B., and Mrázek, P. 2006. Nonlinear structure tensors. Image and Vision Computing 24, 1, 41–55.
Capin, T., Pulli, K., and Akenine-Möller, T. 2008. The State of the Art in Mobile Graphics Research. IEEE Computer Graphics and Applications 28, 4, 74–84.
Dev, K. 2013. Mobile Expressive Renderings: The State of the Art. IEEE Computer Graphics and Applications 33, 3, 22–31.
DiVerdi, S., Krishnaswamy, A., Mech, R., and Ito, D. 2013. Painting with Polygons: A Procedural Watercolor Engine. IEEE Trans. Vis. Comput. Graphics 19, 5, 723–735.
DiVerdi, S. 2013. A Brush Stroke Synthesis Toolbox. In Image and Video-Based Artistic Stylisation. Springer, 23–44.
Fischer, J., Haller, M., and Thomas, B. 2008. Stylized Depiction in Mixed Reality. International Journal of Virtual Reality 7, 4 (December), 71–79.
Gerl, M., and Isenberg, T. 2013. Interactive Example-based Hatching. Computers & Graphics 37, 1–2, 65–80.
Gooch, A. A., Long, J., Ji, L., Estey, A., and Gooch, B. S. 2010. Viewing Progress in Non-photorealistic Rendering through Heinlein's Lens. In Proc. NPAR, 165–171.
Hanrahan, P., and Haeberli, P. 1990. Direct WYSIWYG Painting and Texturing on 3D Shapes. Computer Graphics 24, 4, 215–223.
Hays, J., and Essa, I. 2004. Image and Video Based Painterly Animation. In Proc. NPAR, 113–120.
Isenberg, T. 2016. Interactive NPAR: What Type of Tools Should We Create? In Proc. NPAR, 89–96.
Kang, D., and Yoon, K. 2015. Interactive Painterly Rendering for Mobile Devices. In Entertainment Computing – ICEC 2015. Springer International Publishing, 445–450.
Kim, T. H., and Lin, I. 2012. Real-Time Non-photorealistic Viewfinder on the Tegra 3 Platform. Stanford University, unpublished.
Kopf, J., Cohen, M. F., Lischinski, D., and Uyttendaele, M. 2007. Joint Bilateral Upsampling. ACM Trans. Graph. 26, 3.
Kyprianidis, J. E., and Döllner, J. 2008. Image Abstraction by Structure Adaptive Filtering. In Proc. EG UK TPCG, 51–58.
Kyprianidis, J. E., Collomosse, J., Wang, T., and Isenberg, T. 2013. State of the 'Art': A Taxonomy of Artistic Stylization Techniques for Images and Video. IEEE Trans. Vis. Comput. Graphics 19, 5, 866–885.
Figure 9: Results produced with the proposed framework for mobile image stylization: toon, pencil hatching, oil paint and watercolor. The
input images are obtained from the benchmark of [Mould and Rosin 2016] and are standardized to a width of 1024 pixels (shown cropped).
Lum, E. B., and Ma, K.-L. 2002. Interactivity is the Key to Expressive Visualization. SIGGRAPH Comput. Graph. 36, 3, 5–9.
Mould, D., and Rosin, P. L. 2016. A Benchmark Image Set for Evaluating Stylization. In Proc. NPAR, 11–20.
Nielsen, J. 1993. Usability Engineering. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 23–48.
Oh, J., Maeng, S., and Park, J. 2012. Efficient Watercolor Painting on Mobile Devices. International Journal of Contents 8, 4, 36–41.
Olsen, S. C., Maxwell, B. A., and Gooch, B. 2005. Interactive Vector Fields for Painterly Rendering. In Proc. Graphics Interface, 241–247.
Praun, E., Hoppe, H., Webb, M., and Finkelstein, A. 2001. Real-Time Hatching. In Proc. ACM SIGGRAPH, 581–586.
Rosin, P. L., and Lai, Y.-K. 2013. Non-photorealistic Rendering with Spot Colour. In Proc. CAe, 67–75.
Salesin, D. H. 2002. Non-Photorealistic Animation & Rendering: 7 Grand Challenges. Keynote talk at NPAR.
Salisbury, M. P., Wong, M. T., Hughes, J. F., and Salesin, D. H. 1997. Orientable Textures for Image-based Pen-and-ink Illustration. In Proc. ACM SIGGRAPH, 401–406.
Semmo, A., Döllner, J., and Schlegel, F. 2016a. BeCasso: Image Stylization by Interactive Oil Paint Filtering on Mobile Devices. In Proc. ACM SIGGRAPH Appy Hour, 6:1–6:1.
Semmo, A., Limberger, D., Kyprianidis, J. E., and Döllner, J. 2016b. Image Stylization by Interactive Oil Paint Filtering. Computers & Graphics 55, 157–171.
Semmo, A., Trapp, M., Dürschmid, T., Döllner, J., and Pasewaldt, S. 2016c. Interactive Multi-scale Oil Paint Filtering on Mobile Devices. In Proc. ACM SIGGRAPH Posters, 42:1–42:2.
Singhal, N., Yoo, J. W., Choi, H. Y., and Park, I. K. 2011. Design and Optimization of Image Processing Algorithms on Mobile GPU. In ACM SIGGRAPH Posters, 21:1–21:1.
Thabet, R., Mahmoudi, R., and Bedoui, M. H. 2014. Image processing on mobile devices: An overview. In Proc. IPAS, 1–8.
Todo, H., Anjyo, K.-I., Baxter, W., and Igarashi, T. 2007. Locally Controllable Stylized Shading. ACM Trans. Graph. 26, 3, 17:1–17:7.
Wang, M., Wang, B., Fei, Y., Qian, K., Wang, W., Chen, J., and Yong, J.-H. 2014. Towards Photo Watercolorization with Artistic Verisimilitude. IEEE Trans. Vis. Comput. Graphics 20, 10, 1451–1460.
Webb, M., Praun, E., Finkelstein, A., and Hoppe, H. 2002. Fine tone control in hardware hatching. In Proc. NPAR, 53–58.
Wexler, D., and Dezeustre, G. 2012. Intelligent Brush Strokes. In Proc. ACM SIGGRAPH Talks, 50:1–50:1.
Winnemöller, H., Olsen, S. C., and Gooch, B. 2006. Real-Time Video Abstraction. ACM Trans. Graph. 25, 3, 1221–1226.
Winnemöller, H., Kyprianidis, J. E., and Olsen, S. C. 2012. XDoG: An eXtended difference-of-Gaussians compendium including advanced image stylization. Computers & Graphics 36, 6, 740–753.
Winnemöller, H. 2013. NPR in the Wild. In Image and Video-Based Artistic Stylisation. Springer, 353–374.
Appendix
Listing A1: Color quantization and conversion passes defined with
the XML format used by the proposed framework. The passes are
connected by using a single texture as output buffer of the first
pass (line 15) and as input resource of the second pass (line 22).
1<pass id="colorQuantizationPass" enabled="true"
shaderprogram="colorQuantizationShaderProgram">
2<passinputs>
3<passinput id="u_Texture" type="sampler">
4<value>colorBilateralPass1Texture</value>
5</passinput>
6<passinput id="u_NumBins" type="float">
7<value>2.0</value>
8</passinput>
9<passinput id="u_PhiQ" type="float">
10 <value>3.4</value>
11 </passinput>
12 </passinputs>
13 <passoutputs>
14 <passoutput id="colorQuantizationPassOutput0">
15 <value>colorQuantizationTexture</value>
16 </passoutput>
17 </passoutputs>
18 </pass>
19 <pass id="lab2RgbPass" enabled="true"
shaderprogram="lab2RgbShaderProgram">
20 <passinputs>
21 <passinput id="u_Texture" type="sampler">
22 <value>colorQuantizationTexture</value>
23 </passinput>
24 </passinputs>
25 <passoutputs>
26 <passoutput id="lab2RgbPassOutput0">
27 <value>colorQuantizationTextureLAB</value>
28 </passoutput>
29 </passoutputs>
30 </pass>
Listing A2: Example of a portrait preset defined with the XML
format to adjust the values of referenced global parameters.
1<preset name="Portrait" icon="fx/toon/preset/portrait.png">
2<parameter ref="details">3.0</parameter>
3<parameter ref="colorBlur">2.1</parameter>
4<parameter ref="colorQuantization">9.0</parameter>
5</preset>
Listing A3: Example of the definition of a global parameter in our
XML format. The pass inputs of the defined passes are referenced.
1<valueparameter id="colorQuantization" description="Color
Quantization"
icon="fx/toon/icon/parameter/color_quantization.png"
type="float" hidden="false">
2<default>24</default>
3<minrange>5</minrange>
4<maxrange>30</maxrange>
5<step>1.0</step>
6<setpassinput pass="colorQuantizationPass"
passinput="u_NumBins"/>
7<expressionset pass="colorQuantizationPass"
passinput="u_PhiQ">
8<parameterexpressionvariable ref="colorBlur"/>
9<expression>
10 (colorQuantization *0.0339) + 0.0135 + (3.0 -
colorBlur) *(3.0 - colorBlur)
11 </expression>
12 </expressionset>
13 </valueparameter>
Listing A4: XML-based definition of a painting parameter.
1<paintingvalueparameter description="Paper Structure"
hidden="false"
icon="fx/pencilhatching/icon/parameter/paper_structure.png"
id="paperStructure" type="float">
2<mask>paperStructureMaskTexture</mask>
3<brushes>
4<brush>large</brush>
5<defaultbrush>strong</defaultbrush>
6</brushes>
7<default>5.0</default>
8<minrange>0.0</minrange>
9<maxrange>20.0</maxrange>
10 <step>1.0</step>
11 <setpassinput pass="hatchTexturingPass"
passinput="u_PaperStructure"/>
12 </paintingvalueparameter>