Interactive Multi-scale Oil Paint Filtering on Mobile Devices *
Amir Semmo   Matthias Trapp   Tobias Dürschmid   Jürgen Döllner
Hasso Plattner Institute, University of Potsdam, Germany
Sebastian Pasewaldt
Digital Masterpieces GmbH
Figure 1: Results of the interactive multi-scale oil paint filtering approach that processes image pyramids and uses flow-based joint bilateral
upsampling (FJBU) with the input image (left). Scale factors: 100% / full resolution without FJBU (middle), 33% / with FJBU (right).
This work presents an interactive mobile implementation of a filter that transforms images into an oil paint look. To this end, a multi-scale approach is introduced that processes image pyramids and uses flow-based joint bilateral upsampling to achieve deliberate levels of abstraction at multiple scales and interactive frame rates. The approach facilitates the implementation of interactive tools that adjust the appearance of filtering effects at run-time, which is demonstrated by an on-screen painting interface for per-pixel parameterization that fosters the casual creativity of non-artists.
Keywords: oil paint filter, flow-based joint bilateral upsampling
Concepts: • Computing methodologies → Image manipulation;
1 Introduction and Motivation
Image stylization enjoys a growing popularity on mobile devices to foster casual creativity [Winnemöller 2013]. However, the implementation and provision of high-quality image effects for artistic rendering still faces the inherent limitations of mobile graphics hardware, such as computing power and memory resources. In particular, with the continuous advancements of mobile camera hardware, the interactive processing of high-resolution image data becomes an increasingly challenging task. This especially concerns image-based artistic rendering [Kyprianidis et al. 2013], which requires several passes of (non-)linear filtering. This work addresses these challenges by the example of an interactive oil paint filter. It demonstrates how complex nonlinear image filters can be efficiently processed on mobile GPUs, while providing fine-grained controls for high-level and low-level run-time parameterization to support the visual expression of non-artists, a contemporary field of research in the NPR community [Isenberg 2016].
This is the authors' version of the work. It is posted here for your personal use. Not for redistribution. The definitive version will be published in Proceedings of the 43rd International Conference and Exhibition on Computer Graphics & Interactive Techniques (SIGGRAPH '16).
SIGGRAPH '16, July 24–28, 2016, Anaheim, CA.
ISBN: 978-1-4503-4371-8/16/07
© 2016 Copyright held by the owner/author(s).
2 Technical Approach
The original oil paint filter requires wide kernels for Gaussian filtering (σ ≥ 20) and leads to a high number of texture fetches to achieve firm color blendings [Semmo et al. 2016], a performance-limiting factor on mobile GPUs. Previous works typically employ separable filter kernels to alleviate this problem, but do not ultimately solve it for multi-stage and iterated nonlinear filtering.
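The benefit of separable kernels can be illustrated with a minimal NumPy sketch (all names are illustrative and not taken from the paper's GLSL implementation): because a 2-D Gaussian kernel is the outer product of two 1-D kernels, a k×k convolution (k² texture fetches per pixel) can be replaced by two 1-D passes (2k fetches) with identical output.

```python
import numpy as np

def gauss_kernel_1d(sigma, radius):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def convolve_2d(img, kernel_2d):
    """Brute-force 2-D filtering with zero padding (reference, k^2 fetches)."""
    r = kernel_2d.shape[0] // 2
    padded = np.pad(img, r)
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.sum(padded[y:y + 2*r + 1, x:x + 2*r + 1] * kernel_2d)
    return out

def convolve_separable(img, kernel_1d):
    """Two 1-D passes (rows, then columns): 2k instead of k^2 fetches."""
    r = kernel_1d.size // 2
    # Horizontal pass
    padded = np.pad(img, ((0, 0), (r, r)))
    tmp = np.zeros_like(img)
    for x in range(img.shape[1]):
        tmp[:, x] = padded[:, x:x + 2*r + 1] @ kernel_1d
    # Vertical pass
    padded = np.pad(tmp, ((r, r), (0, 0)))
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        out[y, :] = kernel_1d @ padded[y:y + 2*r + 1, :]
    return out

k1 = gauss_kernel_1d(sigma=2.0, radius=4)
k2 = np.outer(k1, k1)  # the 2-D kernel is the outer product of the 1-D kernel
img = np.random.default_rng(0).random((16, 16))
assert np.allclose(convolve_2d(img, k2), convolve_separable(img, k1))
```

As the text notes, this optimization alone does not suffice for multi-stage nonlinear filters, since the cost savings apply per pass while the number of passes remains unchanged.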
The proposed solution is based on a multi-scale approach that operates on image pyramids and uses joint bilateral upsampling [Kopf et al. 2007] with the high-resolution input (Figure 1). To this end, flow-based joint bilateral upsampling (FJBU) is proposed, which uses the smoothed structure, adapted to the main feature contours of the filtered low-resolution image, to produce a painterly look. The FJBU uses a separable orientation-aligned implementation that filters in the gradient direction and along the flow curves induced by the tangent field. Together with real-time color grading using lookup tables, these enhancements enable interactive performance when processing input images at full HD resolution, and thus allow interactive per-pixel parameterizations via on-screen painting.
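While the flow-based variant additionally aligns the kernels to the smoothed structure, the underlying joint bilateral upsampling step [Kopf et al. 2007] can be sketched as follows (a grayscale NumPy version with illustrative names; the actual implementation runs as GLSL ES shaders): the filtered low-resolution solution is interpolated with weights combining a spatial Gaussian in low-resolution coordinates and a range Gaussian evaluated on the full-resolution input, so upsampled edges follow the high-resolution guidance image.

```python
import numpy as np

def joint_bilateral_upsample(low, guide, sigma_s=1.0, sigma_r=0.1, radius=2):
    """Joint bilateral upsampling (Kopf et al. 2007), grayscale sketch.

    low   : (h, w)  filtered low-resolution solution
    guide : (H, W)  full-resolution input image used as range guidance
    """
    H, W = guide.shape
    h, w = low.shape
    sy, sx = h / H, w / W            # high-res -> low-res coordinate scale
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y * sy, x * sx  # position of pixel p in low-res coordinates
            wsum = vsum = 0.0
            for qy in range(int(cy) - radius, int(cy) + radius + 1):
                for qx in range(int(cx) - radius, int(cx) + radius + 1):
                    if not (0 <= qy < h and 0 <= qx < w):
                        continue
                    # Spatial weight, evaluated in low-res coordinates
                    fs = np.exp(-((qy - cy)**2 + (qx - cx)**2) / (2 * sigma_s**2))
                    # Range weight, evaluated on the high-res guidance image
                    gy = min(int(qy / sy), H - 1)
                    gx = min(int(qx / sx), W - 1)
                    fr = np.exp(-(guide[y, x] - guide[gy, gx])**2 / (2 * sigma_r**2))
                    wsum += fs * fr
                    vsum += fs * fr * low[qy, qx]
            out[y, x] = vsum / wsum if wsum > 0 else low[int(cy), int(cx)]
    return out

# Toy example: a step edge upsampled 4x stays sharp where the guide is sharp.
guide = np.zeros((16, 16)); guide[:, 8:] = 1.0
low = np.zeros((4, 4)); low[:, 2:] = 1.0
up = joint_bilateral_upsample(low, guide)
assert up.shape == (16, 16)
```

The range term is what distinguishes this from plain bilinear upsampling: low-resolution samples whose guidance color differs from the output pixel's color receive negligible weight, so the abstraction level of the low-resolution filter result is transferred without blurring across the input's feature contours.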
The filter was implemented using the OpenGL ES Shading Language and deployed on Android. For images at full HD resolution, it performs at 10 fps (scale factor 25%) and 6 fps (scale factor 50%) on a OnePlus Two with an Adreno 430 GPU.
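The real-time color grading via lookup tables mentioned above can be sketched with a 1-D per-channel table (a simplified stand-in for the 3-D LUT textures typically sampled on the GPU; names and the example curve are illustrative, not from the paper):

```python
import numpy as np

def apply_lut(img, lut):
    """Map each 8-bit channel value through a 256-entry lookup table.

    img : (H, W, 3) uint8 image
    lut : (256,) or (256, 3) uint8 table
    This is the 1-D analogue of the 3-D color LUTs used for real-time
    grading on the GPU, where the table is sampled as a texture.
    """
    img = np.asarray(img, dtype=np.uint8)
    lut = np.asarray(lut, dtype=np.uint8)
    if lut.ndim == 1:
        return lut[img]  # same table for all channels
    # Per-channel tables
    return np.stack([lut[img[..., c], c] for c in range(3)], axis=-1)

# Example: a gamma ~0.5 curve that lifts shadows and compresses highlights.
x = np.arange(256, dtype=np.float64) / 255.0
lut = np.clip(255.0 * np.sqrt(x), 0, 255).astype(np.uint8)
img = np.full((2, 2, 3), 64, dtype=np.uint8)
graded = apply_lut(img, lut)
assert graded.shape == (2, 2, 3)
```

Because the table lookup is a single indexed read per channel, such grading adds effectively constant per-pixel cost on top of the filtering pipeline, which is why it fits within the interactive budget reported above.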
Acknowledgments. This work was partly funded by the Federal
Ministry of Education and Research (BMBF), Germany, within the
InnoProfile Transfer research group “4DnD-Vis”.
References

ISENBERG, T. 2016. Interactive NPAR: What Type of Tools Should We Create? In Proc. NPAR, The Eurographics Association, Goslar, Germany, 89–96.

KOPF, J., COHEN, M. F., LISCHINSKI, D., AND UYTTENDAELE, M. 2007. Joint Bilateral Upsampling. ACM Trans. Graph. 26, 3.

KYPRIANIDIS, J. E., COLLOMOSSE, J., WANG, T., AND ISENBERG, T. 2013. State of the "Art": A Taxonomy of Artistic Stylization Techniques for Images and Video. IEEE Trans. Vis. Comput. Graphics 19, 5, 866–885.

SEMMO, A., LIMBERGER, D., KYPRIANIDIS, J. E., AND DÖLLNER, J. 2016. Image Stylization by Interactive Oil Paint Filtering. Computers & Graphics 55, 157–171.

WINNEMÖLLER, H. 2013. NPR in the Wild. In Image and Video-Based Artistic Stylisation. Springer, 353–374.