This paper presents a service-based approach to content moderation of digital visual media while browsing web pages. It enables the automatic analysis and classification of potentially offensive content, such as images of violence, nudity, or surgery, and applies common image-abstraction techniques at different levels of abstraction to lower their affective impact. The system is implemented using a microservice architecture that is accessible via a browser extension, which can be installed in most modern web browsers. It can be used to facilitate content moderation of digital visual media such as digital images or to enable parental control for child protection.
This paper presents the concept and experience of teaching an undergraduate course on data-driven image and video processing. When designing visual effects that make use of Machine Learning (ML) models for image-based analysis or processing, the availability of training data typically represents a key limitation with respect to feasibility and effect quality. The goal of our course is to enable students to implement new kinds of visual effects by acquiring training datasets via crowdsourcing that are used to train ML models as part of a video processing pipeline. First, we propose our course structure and best practices for crowdsourced data acquisition. We then discuss the key insights we gathered from an exceptional undergraduate seminar project that tackles the challenging domain of video annotation and learning. In particular, we focus on how to practically develop annotation tools and collect high-quality datasets using Amazon Mechanical Turk (MTurk) in the budget- and time-constrained classroom environment. We observe that implementing the full acquisition and learning pipeline is entirely feasible for a seminar project, imparts hands-on problem-solving skills, and promotes undergraduate research. EG-library open access link: https://diglib.eg.org/handle/10.2312/eged20211000
This paper describes an approach for using the Unity game engine for image processing by integrating a custom GPU-based image processor. It describes different application levels and integration approaches for extending the Unity game engine. It further documents the respective software components and implementation details required, and demonstrates use cases such as scene post-processing and material-map processing.
This paper presents a new approach for the vectorization of stylized images using intermediate data representations to interface image stylization and vectorization techniques. It enables the combination of efficient GPU-based implementations of interactive image stylization techniques and the advantages of vectorized image representations. We demonstrate the capabilities of our approach using half-toning and toon stylization techniques.
A convenient post-production video processing approach is to apply image filters on a per-frame basis. This allows image filters—originally designed for still images—to be flexibly extended to videos. However, per-image filtering may lead to temporal inconsistencies perceived as unpleasant flickering artifacts, which is also the case for dense light-fields due to angular inconsistencies. In this work, we present a method for consistent filtering of videos and dense light-fields that addresses these problems. Our assumption is that inconsistencies—due to per-image filtering—are represented as noise across the image sequence. We thus perform denoising across the filtered image sequence and combine per-image filtered results with their denoised versions. To this end, we use saliency-based optimization weights to produce a consistent output while simultaneously preserving details. To control the degree of consistency in the final output, we implemented our approach in an interactive real-time processing framework. Unlike state-of-the-art inconsistency removal techniques, our approach does not rely on optical flow for enforcing coherence. Comparisons and a qualitative evaluation indicate that our method provides better results than state-of-the-art approaches for certain types of filters and applications.
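The combination step described above can be sketched as follows. This is an illustrative, simplified Python sketch, not the paper's implementation: frames are flat lists of grayscale values, a temporal average stands in for a real denoiser, and the saliency-based weights are given directly.

```python
def temporal_denoise(filtered_frames):
    """Average each pixel across time as a stand-in denoiser."""
    n = len(filtered_frames)
    return [sum(f[i] for f in filtered_frames) / n
            for i in range(len(filtered_frames[0]))]

def blend_consistent(filtered_frame, denoised_frame, weights):
    """Combine a per-frame filtered result with its denoised version.
    High weights (salient regions) keep per-frame detail; low weights
    favor the temporally consistent denoised result."""
    return [w * f + (1.0 - w) * d
            for f, d, w in zip(filtered_frame, denoised_frame, weights)]

frames = [[10, 20, 30], [14, 20, 26], [12, 20, 34]]
denoised = temporal_denoise(frames)
out = blend_consistent(frames[0], denoised, [1.0, 0.5, 0.0])
print(out)  # pixel 0 keeps the filtered value, pixel 2 the denoised one
```

In the paper the weights are derived from saliency so that visually important regions retain detail while flicker in unimportant regions is suppressed.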
This paper reports on a service-based approach to enable real-time stream processing of high-resolution videos. It presents a concept for integrating black-box image and video processing operations into a streaming framework. It further describes approaches to optimize data flow between the processing implementation and the framework to increase throughput and decrease latency. This enables the composition of streaming services, allowing them to be scaled for further throughput increases. We demonstrate the effectiveness of our approach by means of two real-time stream-processing application examples.
Presentation of research paper "Controlling Image-Stylization Techniques using Eye Tracking".
With the spread of smartphones capable of taking high-resolution photos and the development of high-speed mobile data infrastructure, digital visual media is becoming one of the most important forms of modern communication. With this development, however, also comes a devaluation of images as a media form, with the focus shifting to the frequency at which visual content is generated instead of the quality of the content. In this work, an interactive system using image-abstraction techniques and an eye tracking sensor is presented, which allows users to experience diverting and dynamic artworks that react to their eye movement. The underlying modular architecture enables a variety of different interaction techniques that share common design principles, making the interface as intuitive as possible. The result is a game-like interaction in which users aim for a reward, the artwork, while being held under constraints, e.g., not blinking. The conscious eye movements that are required by some interaction techniques hint at an interesting possible future extension of this work into the field of relaxation exercises and concentration training.
We present ViVid, a mobile app for iOS that empowers users to express dynamics in stylized Live Photos. This app uses state-of-the-art computer-vision techniques based on convolutional neural networks to estimate motion in the video footage that is captured together with a photo. Based on this analysis and best practices of contemporary art, photos can be stylized with a pencil drawing or cartoon look that includes design elements to visually suggest motion, such as ghosts, motion lines, and halos. Its interactive parameterizations enable users to filter and art-direct composition variables, such as color, size, and opacity. ViVid is based on Apple’s CoreML, Metal, and PhotoKit APIs for optimized on-device processing. Thus, the motion estimation is scheduled to utilize the dedicated neural engine, while shading-based image stylization is able to process the video footage in real-time on the GPU. This way, the app provides a unique tool for creating lively photo stylizations with ease.
Mobile expressive rendering has gained increasing popularity among users seeking casual creativity through image stylization and supports the development of mobile artists as a new user group. In particular, neural style transfer has advanced as a core technology to emulate characteristics of manifold artistic styles. However, when it comes to creative expression, the technology still faces inherent limitations in providing low-level controls for localized image stylization. In this work, we first propose a problem characterization of interactive style transfer representing a trade-off between visual quality, run-time performance, and user control. We then present MaeSTrO, a mobile app for orchestration of neural style transfer techniques using iterative, multi-style generative and adaptive neural networks that can be locally controlled by on-screen painting metaphors. To this end, we enhance state-of-the-art neural style transfer techniques by mask-based loss terms that can be interactively parameterized by a generalized user interface to facilitate a creative and localized editing process. We report on a usability study and an online survey that demonstrate the ability of our app to transfer styles at improved semantic plausibility.
This paper presents an approach to and performance evaluation of service-based image processing using software rendering implemented with Mesa3D. Due to recent advances in cloud computing technology (with respect to both hardware and software) as well as the increased demands of image processing and analysis techniques, often within an ecosystem of devices, it is feasible to research and quantify the impact of service-based approaches in this domain regarding the cost-performance relation. To this end, we provide a performance comparison for service-based processing using GPU-accelerated and software rendering.
This paper presents a GPU-based approach to color quantization that maps arbitrary color palettes to input images using Look-up Tables (LUTs). To this end, different types of LUTs, their GPU-based generation, representation, and respective mapping implementations are described, and their run-time performance is evaluated and compared.
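The basic LUT idea can be illustrated with a minimal Python sketch (not the paper's GPU implementation): a coarse 3D look-up table is precomputed once from a palette, then every pixel is mapped by a simple table lookup. The resolution of 8 levels per channel is an assumption for brevity; real LUTs typically use more levels.

```python
LEVELS = 8              # LUT resolution per channel (illustrative)
STEP = 256 // LEVELS    # width of each LUT cell

def nearest_palette_color(rgb, palette):
    """Nearest palette entry by squared Euclidean distance in RGB."""
    return min(palette, key=lambda p: sum((a - b) ** 2 for a, b in zip(rgb, p)))

def build_lut(palette):
    """Precompute a 3D LUT: each cell stores the palette color nearest to its center."""
    lut = {}
    for r in range(LEVELS):
        for g in range(LEVELS):
            for b in range(LEVELS):
                center = (r * STEP + STEP // 2,
                          g * STEP + STEP // 2,
                          b * STEP + STEP // 2)
                lut[(r, g, b)] = nearest_palette_color(center, palette)
    return lut

def apply_lut(pixels, lut):
    """Quantize each pixel by indexing into the precomputed table."""
    return [lut[(r // STEP, g // STEP, b // STEP)] for (r, g, b) in pixels]

palette = [(0, 0, 0), (255, 255, 255), (200, 30, 30)]
lut = build_lut(palette)
print(apply_lut([(250, 240, 245), (190, 40, 25), (10, 5, 0)], lut))
```

On the GPU, the table would be stored as a 3D texture and sampled in a fragment shader, so the per-pixel cost is a single texture fetch regardless of palette size.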
Presentation of research paper "Evaluating the Perceptual Impact of Rendering Techniques on Thematic Color Mappings in 3D Virtual Environments".
Presentation of research paper "Towards Architectural Styles for Android App Software Product Lines".
Presentation of education paper "Teaching Image-Processing Programming for Mobile Devices".
Mobile expressive rendering has gained increasing popularity amongst users seeking casual creativity through image stylization and supports the development of mobile artists as a new user group. In particular, neural style transfer has advanced as a core technology to emulate characteristics of manifold artistic styles and media without deep prior knowledge of photo processing or editing. However, when it comes to creative expression, the technology still faces inherent limitations in providing low-level controls for localized image stylization, e.g., with respect to image feature semantics or the user's ideas and interests. The goal of this work is to implement and enhance state-of-the-art neural style transfer techniques, providing a generalized user interface with interactive tools for local control that facilitate a creative editing process on mobile devices. To this end, we first propose a problem characterization consisting of three goals that represent a trade-off between visual quality, run-time performance, and ease of control. We then present MaeSTrO, a mobile app for orchestration of three neural style transfer techniques using iterative, multi-style generative and adaptive neural networks that can be locally controlled by on-screen painting metaphors to direct a semantics-based composition and perform location-based filtering. Based on first user tests, we conclude with insights showing different levels of satisfaction for the implemented techniques and user interaction design, pointing out directions for future research.
We present MaeSTrO, a mobile app for image stylization that empowers users to direct, edit and perform a neural style transfer with creative control. The app uses iterative style transfer, multi-style generative and adaptive networks to compute and apply flexible yet comprehensive style models of arbitrary images at run-time. Compared to other mobile applications, MaeSTrO introduces an interactive user interface that empowers users to orchestrate style transfers in a two-stage process for an individual visual expression: first, initial semantic segmentation of a style image can be complemented by on-screen painting to direct sub-styles in a spatially-aware manner. Second, semantic masks can be virtually drawn on top of a content image to adjust neural activations within local image regions, and thus direct the transfer of learned sub-styles. This way, the general feed-forward neural style transfer is evolved towards an interactive tool that is able to consider composition variables and mechanisms of general artwork production, such as color, size and location-based filtering. MaeSTrO additionally enables users to define new styles directly on a device and synthesize high-quality images based on prior segmentations via a service-based implementation of compute-intensive iterative style transfer techniques.
This work presents enhancements to state-of-the-art adaptive neural style transfer techniques, providing a generalized user interface with creativity-tool support for low-level local control to facilitate demanding interactive editing on mobile devices. The approaches are implemented in a mobile app designed for orchestration of three neural style transfer techniques using iterative, multi-style generative and adaptive neural networks that can be locally controlled by on-screen painting metaphors to perform location-based filtering and direct the composition. Based on first user tests, we conclude with insights showing different levels of satisfaction for the implemented techniques and user interaction design, pointing out directions for future research.
We present the first empirical study on using color manipulation and stylization to make surgery images more palatable. While aversion to such images is natural, it limits many people's ability to satisfy their curiosity, educate themselves, and make informed decisions. We selected a diverse set of image processing techniques, and tested them both on surgeons and lay people. While many artistic methods were found unusable by surgeons, edge-preserving image smoothing gave good results both in terms of preserving information (as judged by surgeons) and reducing repulsiveness (as judged by lay people). Color manipulation turned out to be not as effective.
Digital images and image streams represent two major categories of media captured, delivered, and shared on the Web. Techniques for their analysis, classification, and processing are fundamental building blocks in today's digital media applications, ranging from mobile image transformation apps to professional digital production suites. To efficiently process such digital media (1) independent of hardware requirements, (2) at different data complexity scales, while (3) yielding high-quality results, poses several challenges for software frameworks and hardware systems, in particular for mobile devices. With respect to these aspects, using service-based architectures is a common approach to strive for. However, unlike for geodata, there is currently no standard approach for service definition, implementation, and orchestration in the domain of digital images and videos. This paper presents an approach for service-based image processing and provisioning of processing techniques using the example of image-abstraction techniques. The generality and feasibility of the proposed system is demonstrated by different client applications that have been implemented for the Android operating system, for Google's G-Suite Software-as-a-Service infrastructure, as well as for desktop systems. The performance of the system is discussed using the example of complex, resource-intensive image-abstraction techniques, such as watercolor rendering.
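Since the abstract does not prescribe an API, the following is a hypothetical Python sketch of the service-registration idea: named image-processing operations are registered once and invoked by clients through a single dispatch point, standing in for HTTP-routed microservices. All service names and operations are illustrative.

```python
SERVICES = {}

def register(name):
    """Decorator that registers an image-processing operation under a service name."""
    def wrap(fn):
        SERVICES[name] = fn
        return fn
    return wrap

@register("invert")
def invert(pixels):
    return [255 - p for p in pixels]

@register("threshold")
def threshold(pixels, cut=128):
    return [255 if p >= cut else 0 for p in pixels]

def process(service_name, pixels, **params):
    """Single entry point: dispatch a request to the named service."""
    if service_name not in SERVICES:
        raise KeyError(f"unknown service: {service_name}")
    return SERVICES[service_name](pixels, **params)

print(process("invert", [0, 100, 255]))            # [255, 155, 0]
print(process("threshold", [0, 100, 255], cut=90))  # [0, 255, 255]
```

In a real deployment each registered operation would run as its own service, and `process` would be replaced by a routing layer in front of the service endpoints.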
In this paper we present the concept of a research course that teaches students image processing as a building block of mobile applications. Our goal with this course is to teach theoretical foundations, practical skills in software development, as well as scientific working principles to qualify graduates to start as fully-fledged software developers or researchers. The course includes teaching and learning focused on the nature of small-team research and development as encountered in the creative industries dealing with computer graphics, computer animation, and game development. We discuss our curriculum design and issues in conducting undergraduate and graduate research that we have identified through four iterations of the course. Joint scientific demonstrations and publications of the students and their supervisors, as well as quantitative and qualitative evaluation by students, underline the success of the proposed concept. In particular, we observed that developing with a common software framework helps the students to jump-start their course projects, while industry software processes such as branching, coupled with a three-tier breakdown of project features, help them to structure and assess their progress.
Photo filtering apps successfully deliver image-based stylization techniques to a broad audience, in particular in the ubiquitous domain (e.g., smartphones, tablet computers). Interacting with these inherently complex techniques has so far mostly been approached in two different ways: (1) by exposing many (technical) parameters to the user, resulting in a professional application that typically requires expert domain knowledge, or (2) by hiding the complexity via presets that only allow the application of filters but prevent creative expression thereon. In this work, we outline challenges of and present approaches for providing interactive image filtering on mobile devices, thereby focusing on how to make them usable for people in their daily life. This is discussed by the example of BeCasso, a user-centric app for assisted image stylization that targets two user groups: mobile artists and users seeking casual creativity. Through user research and qualitative and quantitative user studies, we identify and outline usability issues that were shown to prevent both user groups from reaching their objectives when using the app. On the one hand, user-group targeting has been improved by an optimized user experience design. On the other hand, multiple levels of control have been implemented to ease the interaction and hide the underlying complex technical parameters. Evaluations underline that the presented approach can increase the usability of complex image stylization techniques for mobile apps.
This work presents advances in the design and implementation of Pictory, an iOS app for artistic neural style transfer and interactive image editing using the CoreML and Metal APIs. Pictory combines the benefits of neural style transfer, e.g., a high degree of abstraction on a global scale, with the interactivity of GPU-accelerated state-of-the-art image-based artistic rendering on a local scale. Thereby, the user is empowered to create high-resolution, abstracted renditions in a two-stage approach. First, a photo is transformed using a pre-trained convolutional neural network to obtain an intermediate stylized representation. Second, image-based artistic rendering techniques (e.g., watercolor, oil paint or toon filtering) are used to further stylize the image. Thereby, fine-scale texture noise—introduced by the style transfer—is filtered and interactive means are provided to individually adjust the stylization effects at run-time. Based on qualitative and quantitative user studies, Pictory has been redesigned and optimized to support casual users as well as mobile artists by providing effective, yet easy-to-understand, tools to facilitate image editing at multiple levels of control.
With the continuous advances of mobile graphics hardware, high-quality image stylization—e.g., based on image filtering, stroke-based rendering, and neural style transfer—is becoming feasible and increasingly used in casual creativity apps. The creative expression facilitated by these mobile apps, however, is typically limited with respect to the usage and application of pre-defined visual styles, which ultimately does not include their design and composition—an inherent requirement of prosumers. We present ProsumerFX, a GPU-based app that enables users to interactively design parameterizable image stylization components on-device by reusing building blocks of image processing effects and pipelines. Furthermore, the presentation of the effects can be customized by modifying the icons, names, and order of parameters and presets. Thereby, the customized visual styles are defined as platform-independent effects and can be shared with other users via a web-based platform and database. Together with the presented mobile app, this system approach supports collaborative work on designing visual styles, including their rapid prototyping, A/B testing, publishing, and distribution. Thus, it satisfies the needs for creative expression of both professionals as well as the general public.
With the continuous advances of mobile graphics hardware, high-quality image stylization, e.g., based on image filtering, stroke-based rendering, and neural style transfer, is becoming feasible and increasingly used in casual creativity apps. Nowadays, users want to create and distribute their own works and become prosumers, i.e., both consumers and producers. However, the creativity facilitated by contemporary mobile apps is typically limited with respect to the usage and application of pre-defined visual styles, which ultimately does not include their design and composition – an inherent requirement of prosumers. This thesis presents the concept and implementation of a GPU-based mobile application that enables users to interactively design parameterizable image stylization effects on-device by reusing building blocks of image processing effects and pipelines. The parameterization is supported by three levels of control: (1) convenience presets, (2) global parameters, and (3) local parameter adjustments using on-screen painting. Furthermore, the created visual styles are defined using a platform-independent document format and can be shared with other users via a web-based community platform. The presented app is evaluated with regard to variety and visual quality of the styles, run-time performance measures, memory consumption, and implementation complexity metrics to demonstrate the feasibility of the concept. The results show that the app supports the interactive combination of complex effects such as neural style transfer, watercolor filtering, oil paint filtering, and pencil hatching filtering to create unique high-quality effects. This approach supports collaborative work on designing visual styles, including their rapid prototyping, A/B testing, publishing, and distribution. Hence, it satisfies the needs for creative expression of both professionals and novice users, i.e., the general public.
This work presents Pictory, a mobile app that empowers users to transform photos into artistic renditions by using a combination of neural style transfer with user-controlled state-of-the-art nonlinear image filtering. The combined approach features merits of both artistic rendering paradigms: deep convolutional neural networks can be used to transfer style characteristics at a global scale, while image filtering is able to simulate phenomena of artistic media at a local scale. Thereby, the proposed app implements an interactive two-stage process: first, style presets based on pre-trained feed-forward neural networks are applied using GPU-accelerated compute shaders to obtain initial results. Second, the intermediate output is stylized via oil paint, watercolor, or toon filtering to inject characteristics of traditional painting media such as pigment dispersion (watercolor) as well as soft color blendings (oil paint), and to filter artifacts such as fine-scale noise. Finally, on-screen painting facilitates pixel-precise creative control over the filtering stage, e.g., to vary the brush and color transfer, while joint bilateral upsampling enables outputs at full image resolution suited for printing on real canvas.
In this meta paper we discuss image-based artistic rendering (IB-AR) based on neural style transfer (NST) and argue that, while NST may represent a paradigm shift for IB-AR, it also has to evolve as an interactive tool that considers the design aspects and mechanisms of artwork production. IB-AR received significant attention in the past decades for visual communication, covering a plethora of techniques to mimic the appeal of artistic media. Example-based rendering represents one of the most promising paradigms in IB-AR to (semi-)automatically simulate artistic media with high fidelity, but has so far been limited because it relies on pre-defined image pairs for training or considers only low-level image features for texture transfers. Advancements in deep learning have been shown to alleviate these limitations by matching content and style statistics via activations of neural network layers, thus making a generalized style transfer practicable. We categorize style transfers within the taxonomy of IB-AR, then propose a semiotic structure to derive a technical research agenda for NSTs with respect to the grand challenges of NPAR. We finally discuss the potentials of NSTs, thereby identifying applications such as casual creativity and art production.
Software product line development for Android apps is difficult due to an inflexible design of the Android framework. However, since mobile applications are becoming more and more complex, increased code reuse and thus reduced time-to-market play an important role, both of which can be improved by software product lines. We propose five architectural styles for developing software product lines of Android apps: (1) activity extensions, (2) activity connectors, (3) dynamic preference entries, (4) decoupled definition of domain-specific behavior via configuration files, and (5) feature models using Android resources. We demonstrate the benefits in an early case study using an image processing product line that enables more than 90% code reuse.
In this work we present new weighting functions for the anisotropic Kuwahara filter. The anisotropic Kuwahara filter is an edge-preserving filter that is especially useful for creating stylized abstractions from images or videos. It is based on a generalization of the Kuwahara filter that is adapted to the local shape of features. For the smoothing process, the anisotropic Kuwahara filter uses weighting functions that use convolution in their definition. For an efficient implementation, these weighting functions are usually sampled into a texture map. By contrast, our new weighting functions do not require convolution and can be efficiently computed directly during the filtering in real-time. We show that our approach creates output of similar quality as the original anisotropic Kuwahara filter and present an evaluation scheme to compute the new weighting functions efficiently by using rotational symmetries.
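For context, the classic (isotropic) Kuwahara filter that the anisotropic variant generalizes can be sketched in a few lines of Python. This is an illustrative CPU implementation on a grayscale image, not the paper's GPU version or its new weighting functions: for each pixel, the mean of the most homogeneous of four overlapping quadrant windows is selected, which smooths flat regions while keeping edges sharp.

```python
def kuwahara(img, r=1):
    """Classic Kuwahara filter on a 2D list of grayscale values."""
    h, w = len(img), len(img[0])

    def clamp(y, x):  # clamp-to-edge border handling
        return img[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best_mean, best_var = 0.0, float("inf")
            # four overlapping (r+1)x(r+1) quadrant windows around (y, x)
            for dy, dx in ((-r, -r), (-r, 0), (0, -r), (0, 0)):
                vals = [clamp(y + dy + i, x + dx + j)
                        for i in range(r + 1) for j in range(r + 1)]
                mean = sum(vals) / len(vals)
                var = sum((v - mean) ** 2 for v in vals) / len(vals)
                if var < best_var:  # keep mean of the most homogeneous sector
                    best_var, best_mean = var, mean
            out[y][x] = best_mean
    return out

step = [[0, 0, 255, 255]] * 4
print(kuwahara(step, r=1)[1])  # edge stays sharp: [0.0, 0.0, 255.0, 255.0]
```

The anisotropic variant replaces the axis-aligned quadrants with sectors adapted to the local feature orientation, weighted by the functions discussed in the paper.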
Aerial images represent a fundamental type of geodata with a broad range of applications in GIS and geovisualization. The perception and cognitive processing of aerial images by humans, however, is still faced with the specific limitations of photorealistic depictions, such as low-contrast areas, unsharp object borders, and visual noise. In this paper we present a novel technique to automatically abstract aerial images that enhances visual clarity and generalizes the contents of aerial images to improve their perception and recognition. The technique applies non-photorealistic image processing by smoothing local image regions with low contrast and emphasizing edges in image regions with high contrast. To handle the abstraction of large images, we introduce an image tiling procedure that is optimized for post-processing images on GPUs and avoids visible artifacts across junctions. This is technically achieved by filtering additional connection tiles that overlap the main tiles of the input image. The technique also allows the generation of different levels of abstraction for aerial images by computing a mipmap pyramid, where each of the mipmap levels is filtered with adapted abstraction parameters. These mipmaps can then be used to perform level-of-detail rendering of abstracted aerial images. Finally, the paper contributes a study on aerial image abstraction by analyzing the results of the abstraction process on distinctive visible elements in common aerial image types. In particular, we have identified a high abstraction potential in landscape images and a higher benefit from edge enhancement in urban environments.
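The overlap-tiling idea can be illustrated in 1D with a hypothetical Python sketch (box blur as a stand-in filter): if each tile is extended by at least the filter radius before filtering and the overlap is cropped afterwards, the tiled result matches whole-signal filtering, so no artifacts appear at tile junctions.

```python
def box_blur(signal, r):
    """Box blur with clamp-to-edge borders."""
    n = len(signal)
    return [sum(signal[min(max(i + k, 0), n - 1)] for k in range(-r, r + 1))
            / (2 * r + 1) for i in range(n)]

def filter_tiled(signal, r, tile_size):
    """Filter in tiles, extending each tile by the filter radius r."""
    n, out = len(signal), []
    for start in range(0, n, tile_size):
        end = min(start + tile_size, n)
        s0 = max(0, start - r)                    # extend tile into neighbors
        tile = signal[s0:min(n, end + r)]
        blurred = box_blur(tile, r)
        out.extend(blurred[start - s0:end - s0])  # crop the overlap back off
    return out

sig = [0, 0, 0, 9, 9, 9, 0, 0]
print(filter_tiled(sig, r=1, tile_size=3) == box_blur(sig, r=1))  # True
```

The same argument carries over to 2D tiles on the GPU: as long as the overlap is at least the filter's support radius, each output pixel sees its full neighborhood and junctions remain seamless.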
Most techniques for image-based artistic rendering are based on edge-preserving filtering to reduce image details without loss of salient structures, e.g., by locally weighted averaging in the image domain and range, or by solving an optimization problem for image decomposition [Kyprianidis et al. 2013]. However, these approaches are typically limited in that they cannot process image contents according to a pre-defined color palette, which is often a feature desired by artists to have creative control over the filtered output.
This paper presents VideoMR: a novel map and reduce framework for real-time video processing on graphics processing units (GPUs). Using the advantages of implicit parallelism and bounded memory allocation, our approach enables developers to focus on implementing video operations without taking care of GPU memory handling or the details of code parallelization. To this end, a new concept for map and reduce is introduced, redefining both operations to fit the specific requirements of video processing. A prototypic implementation using OpenGL facilitates various operating platforms, including mobile development, and is widely interoperable with other state-of-the-art video processing frameworks.
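The map-and-reduce idea for frame streams can be sketched as follows. This is an illustrative pure-Python analogue of the GPU concept, with frames as flat pixel lists and all names chosen for illustration: map applies a per-pixel operation to each frame, while reduce folds frames into a single result.

```python
def video_map(frames, op):
    """Apply a per-pixel operation to every frame (implicitly parallel on a GPU)."""
    return [[op(p) for p in frame] for frame in frames]

def video_reduce(frames, op, init):
    """Fold frames pixel-wise into one accumulator frame."""
    acc = list(init)
    for frame in frames:
        acc = [op(a, p) for a, p in zip(acc, frame)]
    return acc

frames = [[10, 200, 30], [90, 40, 60]]
doubled = video_map(frames, lambda p: min(255, p * 2))   # brightness boost
trail = video_reduce(frames, max, [0, 0, 0])             # running maximum
print(doubled)  # [[20, 255, 60], [180, 80, 120]]
print(trail)    # [90, 200, 60]
```

On the GPU, the map stage corresponds to a fragment or compute shader over the frame, and the reduce stage to a ping-pong accumulation within bounded memory, which is the property the paper's redefinition targets.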
We present Pic2Comic, an application that enables users to interactively transform photos into cartoons and caricatures. A caricature (from Italian "caricare": to load, to exaggerate) is a picture, description, or imitation of a person or thing in which certain striking characteristics are exaggerated in order to create a comic or grotesque effect. Caricatures are a widely used medium to communicate humorous and satiric messages. The creation of caricatures of portraits typically involves image stylization and deformation of the face’s characteristics. Modern non-photorealistic rendering techniques allow the automatic transformation of real photographs into stylized cartoon images. In contrast, caricaturing requires an artist’s experience to identify the distinctive features of a face and to present them in an exaggerated way. Given intuitive tools that leverage image stylization and deformation techniques, even inexperienced users could easily and instantly create caricatures from real photographs.
Interactive image deformation is an important feature of modern image processing pipelines. It is often used to create caricatures and animation for input images, especially photos. State-of-the-art image deformation techniques are based on transforming vertices of a mesh, which is textured by the input image, using affine transformations such as translation and scaling. However, the resulting visual quality of the output image depends on the geometric resolution of the mesh. Performing these transformations on the CPU often further inhibits performance and quality. This is especially problematic on mobile devices, where the limited computational power reduces the maximum achievable quality. To overcome these issues, we propose the concept of an intermediate deformation buffer that stores deformation information at a resolution independent of the mesh resolution. This allows the combination of a high-resolution buffer with a low-resolution mesh for interactive previews, as well as a high-resolution mesh to export the final image. Further, we present a fully GPU-based implementation of this concept, taking advantage of modern OpenGL ES features such as compute shaders.
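The buffer idea can be illustrated in 1D with a hypothetical Python sketch (not the paper's OpenGL ES implementation): displacements live in a small fixed-resolution buffer and are linearly interpolated at whatever output resolution is needed, so preview and export can sample the same deformation at different densities.

```python
def sample_buffer(buf, t):
    """Linearly interpolate the deformation buffer at position t in [0, 1]."""
    pos = t * (len(buf) - 1)
    i = min(int(pos), len(buf) - 2)
    f = pos - i
    return buf[i] * (1 - f) + buf[i + 1] * f

def deform(signal, buf):
    """Warp a 1D signal by the displacements stored in the buffer."""
    n = len(signal)
    out = []
    for x in range(n):
        src = x + sample_buffer(buf, x / (n - 1))  # displaced source position
        out.append(signal[min(max(int(round(src)), 0), n - 1)])
    return out

signal = [0, 10, 20, 30, 40, 50]
buf = [0.0, 2.0, 0.0]  # shift samples near the middle two positions rightward
print(deform(signal, buf))  # [0, 20, 40, 50, 50, 50]
```

Because `buf` has only three entries while the signal has six samples, the sketch shows the key property: the deformation resolution is decoupled from the output (mesh) resolution.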
Image stylization enjoys growing popularity on mobile devices to foster casual creativity. However, the implementation and provision of high-quality image filters for artistic rendering still faces the inherent limitations of mobile graphics hardware, such as computing power and memory resources. This work presents a mobile implementation of a filter that transforms images into an oil paint look, thereby highlighting concepts and techniques for performing multi-stage nonlinear image filtering on mobile devices. The proposed implementation is based on OpenGL ES and the OpenGL ES shading language, and supports on-screen painting to interactively adjust the appearance in local image regions, e.g., to vary the level of abstraction, brush, and stroke direction. Evaluations of the implementation indicate interactive performance and results of aesthetic quality similar to the original desktop variant.
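To illustrate what one nonlinear stage in such a multi-stage pipeline looks like, here is a minimal CPU sketch of a Kuwahara-style filter, a classic edge-preserving smoothing operator often used as a building block in painterly pipelines. This is not the paper's actual filter (which runs as OpenGL ES shader passes), just a stand-in for the class of technique.

```python
import numpy as np

def kuwahara(img, r=2):
    """Single-stage Kuwahara filter on a grayscale (H, W) image:
    each pixel takes the mean of the least-varying of its four
    (r+1)x(r+1) corner windows, flattening texture inside regions
    while keeping edges crisp."""
    h, w = img.shape
    pad = np.pad(img, r, mode='edge')
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 2 * r + 1, x:x + 2 * r + 1]
            quads = [win[:r + 1, :r + 1], win[:r + 1, r:],
                     win[r:, :r + 1], win[r:, r:]]
            means = [q.mean() for q in quads]
            variances = [q.var() for q in quads]
            out[y, x] = means[int(np.argmin(variances))]
    return out
```

On the GPU the same per-pixel logic maps naturally onto a fragment or compute shader, which is what makes such filters feasible on mobile hardware.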
This paper presents an approach for transforming images into an oil paint look. To this end, a color quantization scheme is proposed that performs feature-aware recolorization using the dominant colors of the input image. In addition, an approach for real-time computation of paint textures is presented that builds on the smoothed structure adapted to the main feature contours of the quantized image. Our stylization technique leads to homogeneous outputs in the color domain and enables creative control over the visual output, such as color adjustments and per-pixel parametrizations by means of interactive painting.
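The palette-derivation step can be sketched as follows: estimate the input's dominant colors and recolor every pixel to its nearest palette entry. The k-means clustering below is a simple stand-in for however the paper actually derives its palette, and all names are illustrative.

```python
import numpy as np

def dominant_colors(pixels, k=4, iters=10, seed=0):
    """Estimate k dominant colors of an (N, 3) pixel array with a
    few rounds of plain k-means."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign every pixel to its nearest center, then recompute means.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return centers, labels

def quantize(img, k=4):
    """Recolor an (H, W, 3) image to its k dominant colors."""
    flat = img.reshape(-1, 3).astype(float)
    centers, labels = dominant_colors(flat, k)
    return centers[labels].reshape(img.shape)
```

Quantizing against a global palette like this is what yields the homogeneous color-domain output the abstract describes; the feature-aware part of the recolorization is beyond this sketch.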
This paper presents an interactive system for transforming images into an oil paint look. The system comprises two major stages. First, it derives dominant colors from an input image for feature-aware recolorization and quantization to conform with a global color palette. Afterwards, it employs non-linear filtering based on the smoothed structure adapted to the main feature contours of the quantized image to synthesize a paint texture in real time. Our filtering approach leads to homogeneous outputs in the color domain and enables creative control over the visual output, such as color adjustments and per-pixel parametrizations by means of interactive painting. To this end, our system introduces a generalized brush-based painting interface that operates within parameter spaces to locally adjust the level of abstraction of the filtering effects. Several results demonstrate the various applications of our filtering approach to different genres of photography.
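Painting "within parameter spaces" means brush strokes edit a per-pixel parameter mask rather than pixel colors; the mask then drives the filter locally. A minimal sketch of one such brush dab, with an illustrative linear falloff (the actual interface and falloff model are the paper's):

```python
import numpy as np

def paint_parameter(mask, cx, cy, radius, target, opacity=0.5):
    """One brush dab on a per-pixel parameter mask (H, W): inside the
    radius, blend the stored parameter toward `target` with a smooth
    falloff, so repeated strokes locally raise or lower e.g. the level
    of abstraction of a downstream filter."""
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
    falloff = np.clip(1.0 - dist / radius, 0.0, 1.0)  # 1 at center, 0 at rim
    weight = opacity * falloff
    return (1.0 - weight) * mask + weight * target
```

Because the mask stores parameters rather than colors, the same stroke gesture can steer any filter parameter (abstraction level, stroke direction, contour strength) depending on which mask it is applied to.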
BeCasso is a mobile app that enables users to transform photos into high-quality, high-resolution non-photorealistic renditions, such as oil and watercolor paintings, cartoons, and colored pencil drawings, which are inspired by real-world painting or drawing techniques. In contrast to neural network and physically-based approaches, the app employs state-of-the-art nonlinear image filtering. For example, oil paint and cartoon effects are based on smoothed structure information to interactively synthesize renderings with soft color transitions. BeCasso empowers users to easily create aesthetic renderings by implementing a two-fold strategy: First, it provides parameter presets that may serve as a starting point for a custom stylization based on global parameter adjustments. Thereby, users can obtain initial renditions that may be fine-tuned afterwards. Second, it enables local style adjustments: using on-screen painting metaphors, users are able to locally adjust different stylization features, e.g., to vary the level of abstraction, pen, brush, and stroke direction, or the contour lines. In this way, the app provides tools for both higher-level interaction and low-level control to serve the different needs of non-experts and digital artists.
With the continuous development of mobile graphics hardware, interactive high-quality image stylization based on nonlinear filtering is becoming feasible and is increasingly used in casual creativity apps. However, these apps often offer only high-level controls to parameterize image filters and generally lack support for low-level (artistic) control, thus automating art creation rather than assisting it. This work presents a GPU-based framework that enables image filters to be parameterized at three levels of control: (1) presets, followed by (2) global parameter adjustments, can be interactively refined by (3) complementary on-screen painting that operates within the filters' parameter spaces for local adjustments. The framework provides a modular XML-based effect scheme to effectively build complex image processing chains, using these interactive filters as building blocks, that can be processed efficiently on mobile devices. Thereby, global and local parameterizations are guided by higher-level algorithmic support to ease the interactive editing process, which is demonstrated by state-of-the-art stylization effects such as oil paint filtering and watercolor rendering.
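The modular-chain idea can be sketched as follows: an XML document lists filter passes in order, and a small interpreter looks each pass up in a registry and applies it. The schema, element names, and the toy numpy filters below are hypothetical; the framework's actual effects are GPU passes and its XML scheme is its own.

```python
import xml.etree.ElementTree as ET
import numpy as np

# Two toy filter "building blocks"; in the framework these would be
# GPU shader passes registered under their effect names.
FILTERS = {
    "brightness": lambda img, p: img * float(p.get("factor", 1.0)),
    "invert":     lambda img, p: 1.0 - img,
}

def run_effect(xml_text, img):
    """Parse a hypothetical XML effect scheme and apply its passes
    in document order, each pass receiving its attributes as
    parameters."""
    root = ET.fromstring(xml_text)
    for node in root:  # each child element is one pass in the chain
        img = FILTERS[node.tag](img, node.attrib)
    return img

effect = """
<effect name="demo">
  <brightness factor="0.5"/>
  <invert/>
</effect>
"""
```

Keeping the chain declarative like this is what lets presets, global parameter edits, and painted local masks all target the same pipeline description.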