Incorporating Kinesthetic Creativity and Gestural Play into
Immersive Modeling
Sung-A Jang
Korea Culture Technology Institute
Gwangju, Korea
sjang@gist.ac.kr
Graham Wakefield
Arts, Media, Performance & Design
York University
Toronto, Canada
grrrwaaa@yorku.ca
Sung-Hee Lee
Graduate School of Culture &
Technology
KAIST
Daejeon, Korea
sunghee.lee@kaist.ac.kr
Figure 1: (Left): Spring-mass brush in velocity mode. Variations in stroke weight reflect velocity changes. (Center): Experiments in shape-making with the spring-mass brush. (Right): Variations in curvatures achieved by adjusting the spring's dynamic settings.
ABSTRACT
The 3D modeling methods and approach presented in this paper attempt to bring the richness and spontaneity of human kinesthetic interaction in the physical world to the process of shaping digital form, by exploring playfully creative interaction techniques that augment gestural movement. The principal contribution of our research is a novel dynamics-driven approach for immersive freeform modeling, which extends our physical reach and supports new forms of expression. In this paper we examine three augmentations of freehand 3D interaction that are inspired by the dynamics of physical phenomena. These are experienced via immersive augmented reality to intensify the virtual physicality and heighten the sense of creative empowerment.
CCS CONCEPTS
• Human-centered computing → Gestural input; Interaction design; Virtual reality; Mixed / augmented reality.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
MOCO'17, 28-30 June 2017, London, United Kingdom
© 2017 Copyright held by the owner/author(s). Publication rights licensed to Association for Computing Machinery.
ACM ISBN 978-1-4503-5209-3/17/06...$15.00
https://doi.org/10.1145/3077981.3078045
KEYWORDS
Embodied interaction; kinesthetic interaction; gestural augmentation; immersive modeling; 3D modeling; 3D user interface; augmented reality
ACM Reference format:
Sung-A Jang, Graham Wakefield, and Sung-Hee Lee. 2017. Incorporating Kinesthetic Creativity and Gestural Play into Immersive Modeling. In Proceedings of MOCO'17, London, United Kingdom, 28-30 June 2017, 8 pages. https://doi.org/10.1145/3077981.3078045
1 INTRODUCTION
Myron Krueger, a computer artist and pioneer in virtual reality (VR) interaction, argued that the real power of VR lies not in its capacity for illusion, but in its potential to extend our physical reach, and that what is critical in constituting "reality" to our perception is the "degree of physical involvement" [28]. In the same spirit, our research utilizes immersive technologies, continuous gesture capture, and physically inspired simulation to extend our bodily powers into the creation of virtual sculptural forms that would be nearly impossible to achieve in the physical world.
The main contribution is a new dynamics-driven approach for immersive modeling, bringing to HMD-based VR an enrichment of creative processes afforded by "gestural augmentation" inspired by physical simulations [16]. The dynamic models that virtually augment gesture in this research are physically inspired yet under the creative control we exert over and through our own bodies. The interaction is designed to incorporate sophisticated movements on an intimate scale – the fine-tuned physical control we can exert using our hands and fingers – in tandem with the intuitions we have of the physical dynamics of familiar materials and objects. The focus here is on expressive capacity supporting creative processes, rather than efficiently or effectively obtaining a specific output. VR and augmented reality (AR) technologies are primarily utilized as a means of embodied interaction that expands our own creative capacities. The ultimate goal is to effectively extend our physical reach and support new forms of expression: to dexterously create visually expressive forms that would be arduous to achieve otherwise, all through playful experimentation.
2 RELATED WORK
2.1 Augmented Gestural Strokes
In 1989 Paul Haeberli developed a "dynamic drawing technique" for his 2D drawing program Dynadraw [8]. He re-imagined the brush as a physical mass attached to the mouse position by a damped spring and tugged around whenever the mouse moved, instead of sitting at the exact point of the mouse itself. By augmenting gestural strokes with a spring-mass simulation, Dynadraw creates expressive strokes that amplify the qualities of the gesture. Scott Snibbe's Dynasculpt [25] was directly inspired by Dynadraw and adapted its drawing method to 3D. Our approach expands upon Snibbe's exploration of the novel opportunities afforded by physically inspired dynamic models unconstrained by real-world laws. The interaction design of our system also uses physical simulation to enrich the dynamic between the user and the system and thus expand the expressive capacities of gestural movement. Unlike Dynasculpt, our spring-mass prototype has users directly steer the mass with the unfiltered movement of their fingers, and their drawings materialize directly where they perceive the mass to be, augmented within their own physical space. Head movements naturally correspond with changes of view of the emerging sculptural form. With the removal of these perceptual barriers, we theorize that users would not show a "tendency to draw in planes" [25] as with Dynasculpt.
2.2 Gestural 3D Modeling
A detailed introduction to early graphics research in using sweeping 3D input for modeling can be found in [14], which outlines the substantial advantages of using a stereoscopic or immersive display for anything from simple CAD-style manipulations to the complex operations of freeform extrusion.
Most recent freeform modeling approaches that incorporate
sweeping 3D input belong to two broad categories: those that rely
on the in-air movements of a tracked device, and those that employ
haptic mediation to enhance control over input or mimic tactile
interactions with clay-like substances in physical reality [11].
Surface Drawing [23] used glove-based input in a semi-immersive environment and used sweeping movements of the hand to generate a surface. One of its biggest drawbacks was having to use a custom-made data glove with a tabletop VR device. CavePainting [12], a full-featured 3D painting medium for artists and designers, allowed users to create 3D brushstrokes with physical props within an immersive CAVE environment. While CavePainting worked well as a new art medium that allowed artists to paint in 3D space, its aesthetics and interactivity remained generally tethered to emulating a 2D medium in its painterly style and method of mark-making. It also required a highly specialized environment with expensive equipment (the CAVE itself). While Surface Drawing and CavePainting were both successful in demonstrating the potential application of direct, gestural 3D input for art and design, they were dependent on custom-made devices and used immobile platforms that were fixed to a physical device or environment. Their methods were also focused on visualizing the gestural stroke as accurately as possible and refrained from using any form of gestural augmentation. Mäkelä's experiments with the Näprä prototype [17–19] proposed a slightly different and more delicate approach to 3D mark making. Näprä–a wearable mechanical device for both hands that wraps around the fingers–is designed to track the fine and subtle movements of the fingertips for fuller expression and interaction in a CAVE-like immersive environment [22]. Her findings suggest that real-time fingertip interaction allows for fine control and intuitive command, and works especially well for two-handed tasks that require both hands to work simultaneously in different capacities. While the prototype presented in this paper relies solely on vision-based fingertip tracking, its interface is also built around the expressive potential and control capacity of our fingertips.
2.3 Immersive Painting and Sculpting Systems
The substantial advantages of an immersive modeling environment, especially the benefits of using our spatial intuitions to create and manipulate virtual models, have been confirmed and discussed in numerous studies [1, 4–7, 9, 10, 17–19, 24, 30]. The potential of immersive technologies for supporting the early stages of the creative process in design practices is methodically explored in [1, 9, 10, 30]. Many concrete potential advantages of immersive 3D sketching for creative design, such as being able to sketch life-size models in proportion to our bodies and observe their spatial impact in the process, were identified through empirical studies and expert discussions [10, 30]. A recent in-depth user study conducted over a two-week period [9] showed that it was possible for designers to develop their own unique creative strategies of gaining a degree of mastery in handling digital substance, in the absence of material constraints in an immersive modeling environment. [20] and [26] exemplify recent research endeavors to integrate immersive interaction techniques into existing 3D modeling software, specifically Blender and SketchUp, to combine the effectiveness of spatial interaction with the powerful capacities of established full-featured modeling tools. These experiments in VR adaptation, however, are only in their preliminary stages and have been limited to simple manipulation tasks that do not involve freeform gestural input.
Rapid technological advances in VR and AR are fueling the development of new immersive modeling software for emerging head-mounted display (HMD) platforms such as the HTC Vive and Oculus Rift. VRClay [29] is a 3D sculpting application made for the Rift that supports two-handed interaction with Razer Hydra controllers. With VRClay, users can enjoy the powers of digital sculpting directly in 3D, with one hand controlling the model while the other sculpts. Tilt Brush [27] is also a VR HMD application; it allows users to paint on 2D planes that can be moved and rotated in 3D space to create 3D paintings. Graffiti 3D [13], which allows users to draw directly in 3D with their fingers in AR with a Leap Motion sensor mounted onto the Oculus Rift, was developed using the same Leap Motion VR platform that was used to build our prototype system. All the immersive drawing and sculpting tools mentioned above capitalize on the affordances of an immersive environment to provide a novel and more intuitive experience of creating 3D virtual content, liberated from the rigid confines of a 2D static monitor. None of them, however, incorporates dynamic models to augment gestural expression.
3 SYSTEM DESIGN
The system presents three different physically inspired augmentations of freehand 3D gesture, within an HMD-based VR or see-through AR immersive experience.
Figure 2: The working environment.
3.1 Hardware and working environment
The system's hardware consists of a VR HMD (Oculus Rift Development Kit v.2) including an external camera-based tracking sensor; a hand-tracking motion sensor (Leap Motion Controller) affixed to the front faceplate of the HMD; and a computer (we used a quad-core 2.6 GHz Intel Core i7 Mac Mini). The working environment is shown in Figure 2. The user wearing the HMD is seated in front of a desk, facing the HMD's tracking sensor at about one meter's distance. The user typically keeps one hand on the keyboard for switching modes and triggering actions, and the other hand in the air for gestural motion tracking.
The HMD provides 960×1080 resolution per eye, a 100° field of view (FOV), a 75 Hz refresh rate, and minimal motion blurring [21]. The external tracking system provides accurate, low-latency orientation and position tracking over the entire working environment. The Leap Motion Controller provides sub-millimeter tracking of hands and fingertips, including gesture recognition, over a wide 150° field of view with an effective range of roughly 25 to 600 millimeters (hand detection and tracking work best, however, when the user's hands are well within this range and close to the center of the user's view). It also provides an infrared video feed capable of supporting see-through AR (see Figure 3).
Figure 3: In-HMD view of virtual hand models overlapping
infrared imagery of the real hands [15].
3.2 Software Architecture
The system's software was developed with Unity® 3D v.5. The software continuously extracts tracking information and raw infrared imagery through the Leap Motion API. The extracted information is used to drive the gestural augmentations that generate virtual forms within the context of real-world imagery captured by the sensor's two infrared cameras. The output is rendered onto the Oculus Rift display at 75 Hz via Unity's Oculus Rift API.
The hand tracking data is used to construct virtual hand models that estimate the posture, position, orientation, and velocity of every bone, finger, and fingertip relative to the virtual position of the Leap device. Hand models are visualized as virtual skeletons and overlaid onto Leap Motion's video pass-through imagery, with tracking confidence represented via the opacity of the virtual hand models. These virtual hands are rendered to the HMD in the same stereoscopic 3D space as user-generated sculptural forms.
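To make this concrete, the confidence-to-opacity mapping could look roughly like the following (a minimal sketch in Unity C#, assuming the Leap Motion v2 API; the single shared `handMaterial` with a transparency-capable shader is our illustrative simplification, not the paper's implementation):

```csharp
using UnityEngine;
using Leap; // Leap Motion v2 C# bindings

// Sketch: fade a virtual hand model in proportion to tracking confidence,
// so poorly tracked hands dim out instead of popping or jittering opaquely.
public class HandConfidenceOpacity : MonoBehaviour
{
    public Material handMaterial; // hypothetical material of the skeletal hand model
    Controller controller;        // connection to the Leap tracking service

    void Start() { controller = new Controller(); }

    void Update()
    {
        Frame frame = controller.Frame(); // most recent tracking frame
        foreach (Hand hand in frame.Hands)
        {
            // Hand.Confidence is reported in [0, 1]; use it directly as alpha.
            Color c = handMaterial.color;
            c.a = hand.Confidence;
            handMaterial.color = c;
        }
    }
}
```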
3.2.1 3D Interface Design. The principal focus of interaction is freeform, continuous mark making in space, with one of three physical augmentation brushes as described in the next section. However, the dynamics, shape, and appearance of these brushes are partly determined by a number of settings that are configured through a set of interface palettes, which we describe first.
3.2.2 Palette Mode. All configurable system settings are accessed through the system's three palettes (Figure 4). When requested (by pressing the spacebar key), three large palettes are situated in space at an approachable distance from the user, at an accessible scale, and in a spatial cockpit arrangement that takes advantage of the HMD's wide field of view. The palettes situate instantly familiar 2D desktop metaphors of clickable buttons and draggable sliders in the 3D space.
The palettes offer a breadth of options to maximize the user's creative control over the brush's properties. The leftmost palette modifies a brush's shape, orientation, scale, extrusion, and thickness variation. The right palette selects color and material. The central palette fine-tunes critical parameters of the dynamic behavior of the current brush. Changes made with the palettes are promptly visualized in the form and movement of all active brushes.
Pressing the spacebar once more hides the palettes and returns
to the unobstructed drawing mode. The two modes keep the virtual
creations and palettes separate and prevent any overlap or occlusion.
An additional overlay can be used to save, load, or reset the current
scene.
Users can also swipe the hand vertically just in front of the HMD to toggle between VR and AR modes. The swipe gesture semantically corresponds to its function and makes it easy to transition between modes while drawing or editing.
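One plausible realization of these two toggles is sketched below (Unity C#; the palm-velocity threshold, the cooldown, and the `palettes`/`arPassthrough` references are illustrative assumptions, and the swipe is detected from palm velocity rather than the Leap gesture API):

```csharp
using UnityEngine;
using Leap;

// Sketch: spacebar shows/hides the palettes; a fast vertical palm motion
// in front of the HMD toggles between VR and AR rendering.
public class ModeSwitcher : MonoBehaviour
{
    public GameObject palettes;    // hypothetical root object of the three palettes
    public Renderer arPassthrough; // hypothetical quad showing the IR camera feed
    Controller controller;
    bool arMode = true;
    float lastSwipeTime;

    void Start() { controller = new Controller(); }

    void Update()
    {
        // Palette mode: spacebar toggles the configuration palettes.
        if (Input.GetKeyDown(KeyCode.Space))
            palettes.SetActive(!palettes.activeSelf);

        // VR/AR toggle: a rapid vertical palm motion (Leap reports mm/s),
        // with a short cooldown so one swipe only fires once.
        foreach (Hand hand in controller.Frame().Hands)
        {
            if (Mathf.Abs(hand.PalmVelocity.y) > 1000f &&
                Time.time - lastSwipeTime > 0.5f)
            {
                arMode = !arMode;
                arPassthrough.enabled = arMode; // hide/show the IR backdrop
                lastSwipeTime = Time.time;
            }
        }
    }
}
```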
Figure 4: In-HMD view of the configuration palettes.
3.2.3 Dynamics-Based Interaction Design. Three types of brush prototypes inspired by dynamic models are presented in this system: the spring-mass brush, the ribbon brush, and the rope brush. The spring-mass brush builds upon Snibbe's Dynasculpt model [25] and adapts its concept to an immersive platform with freehand 3D interaction. The ribbon and rope brushes also utilize dynamic simulation-driven models inspired by real-world physical phenomena. The three brush prototypes were chosen to evaluate the creative potential of connecting physically inspired dynamics with the creation of form in ways that are only possible in virtual space. Although the focus is primarily on airborne gestural strokes of the tracked fingertips, specific keys on the physical keyboard are also used as a convenient means of invoking subsidiary functions.
Spring-Mass Brush. Snibbe's Dynasculpt was a 3D adaptation of Haeberli's Dynadraw application, which focused on exploring how dynamic models transform the creative process. Our system's spring-mass brush prototype is based on the same spring-mass-damper model, but differs from prior work in that users' hand movements provide direct 3D input and its immersive display allows virtual forms to directly materialize at the user's fingertips, including multiple fingertips simultaneously. The fingertips of all extended fingers in view are respectively augmented by a virtual spring with a mass attached to its other end (Figure 5). The line that extends from the fingertip to the mass represents a spring that can stretch and contract as it tows the mass along. The attached spring and mass follow the fingertip in accordance with its dynamic settings as long as the finger remains extended. The masses attached to each finger can collide with those of other extended fingers, and rebound.
To begin drawing with the brush, users must press their designated "draw" key (the left Control key by default). The 3D stroke begins from the virtual mass and continues along its 3D path while the draw key is held down (Figure 6), and ends when the key is lifted. The virtual brush "springs" into action once the distance between the fingertip and the mass grows past its state of equilibrium. The virtual matter that appears in its trail is promptly erased if the brushstroke is cut prematurely and fails to travel a pre-defined minimal distance.
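This stroke lifecycle could be organized roughly as follows (a sketch; `virtualMass`, `minStrokeLength`, and `CommitStroke` are hypothetical names, and the actual mesh extrusion along the path is elided):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: record a 3D stroke along the towed virtual mass while the draw
// key is held; discard strokes that never travel a minimum length.
public class StrokeRecorder : MonoBehaviour
{
    public Transform virtualMass;         // the mass towed by the fingertip's spring
    public float minStrokeLength = 0.05f; // metres; shorter strokes are erased
    List<Vector3> points = new List<Vector3>();
    float strokeLength;

    void Update()
    {
        if (Input.GetKey(KeyCode.LeftControl)) // the default "draw" key
        {
            Vector3 p = virtualMass.position;
            if (points.Count > 0)
                strokeLength += Vector3.Distance(points[points.Count - 1], p);
            points.Add(p); // mesh extrusion would follow this path
        }
        else if (points.Count > 0) // key lifted: the stroke ends
        {
            if (strokeLength >= minStrokeLength)
                CommitStroke(points); // hypothetical: bake the path into a mesh
            points = new List<Vector3>(); // otherwise the trail is simply erased
            strokeLength = 0f;
        }
    }

    void CommitStroke(List<Vector3> path) { /* mesh generation elided */ }
}
```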
Figure 5: (Left) Virtual masses attached to fingertips by virtual springs. (Right) All extended fingers receive masses.
Figure 6: A drawing progressing with the spring-mass brush.
Figure 7: Examples of sketches showing a greater degree of control. Both were created by an artist on the first trial of the system, after less than one hour of experimentation.
Users are given direct control over three specific parameters that define the brush's character and quality of movement: (1) the weight of the mass, (2) the spring constant, and (3) the damper value. Other subsidiary factors such as air drag are kept at an optimal constant. Adjustments to parameter values are made through the dynamics palette, and any changes are instantly applied and observable in the brush's behavior. The default base shape is a flat rectangle, chosen for the ribbon-like forms it creates, which reveal dynamic changes in orientation. The default extrusion type is a flare, which shows variations in thickness with less computational cost than the velocity type (which adjusts the stroke's thickness according to the speed of the stroke). The shape and extrusion type of the brush, as well as its stroke weight and thickness contrast level, are changeable via the palette.
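The dynamic core behind these three parameters can be sketched as a damped spring integrated once per frame (a minimal sketch assuming semi-implicit Euler integration; the paper does not specify its integrator, and the constant air drag is folded into the damping term here):

```csharp
using UnityEngine;

// Sketch: a point mass towed toward the fingertip by a damped spring,
// exposing the three user-tunable parameters from the dynamics palette.
public class SpringMassBrush : MonoBehaviour
{
    public Transform fingertip;  // tracked fingertip anchoring the spring
    public float mass = 1f;      // (1) weight of the mass
    public float springK = 50f;  // (2) spring constant
    public float damping = 4f;   // (3) damper value

    Vector3 velocity;

    void Update()
    {
        // Hooke's law toward the fingertip plus a velocity-proportional damper.
        Vector3 force = springK * (fingertip.position - transform.position)
                      - damping * velocity;

        // Semi-implicit Euler: update velocity first, then position,
        // which keeps the oscillation stable at modest frame rates.
        velocity += (force / mass) * Time.deltaTime;
        transform.position += velocity * Time.deltaTime;
    }
}
```

In velocity mode, the stroke's thickness at each sample could then be driven by `velocity.magnitude`.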
Figure 8: A ribbon is captured in its current position and another one immediately appears in its place.
Ribbon Brush. The ribbon prototype was inspired by rhythmic gymnasts' manipulations of the ribbon, with which they create dynamically changing shapes in mid-air to augment their performance. While the ribbon is in constant motion, it routinely holds its form long enough to appear magically suspended in air. Photographs sometimes manage to capture these exquisite moments in a ribbon's ephemeral dance. The ribbon brush prototype attempts to capture this fleeting beauty of a ribbon at a select moment in time, in its full-dimensional form.
In the prototype, the user's extended fingers are virtually augmented with a long strip of cloth in the form of a ribbon (Figure 8). One end of the ribbon is fixed onto the fingertip like a gymnast's ribbon on the end of a stick. The ribbon exists in a zero-gravity zone and stays still in the air in the absence of movement. When the finger or hand moves, the ribbon naturally tugs along with it. Users have the power to freeze the dancing ribbon in time and space by pressing the designated "capture" key. Multiple active ribbons are captured simultaneously and promptly replaced by newly generated ones.
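The capture step amounts to freezing the simulated ribbon in place and immediately spawning a replacement (a sketch; the Return key, `ribbonPrefab`, and the blanket disabling of simulation components are illustrative assumptions):

```csharp
using UnityEngine;

// Sketch: freeze the active ribbon as a static mesh on the "capture" key
// and attach a fresh ribbon to the fingertip in its place.
public class RibbonCapture : MonoBehaviour
{
    public GameObject ribbonPrefab; // hypothetical zero-gravity ribbon simulation
    public Transform fingertip;
    GameObject activeRibbon;

    void Start() { SpawnRibbon(); }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Return)) // hypothetical "capture" key
        {
            // Detach and stop simulating: the ribbon keeps its current shape.
            activeRibbon.transform.SetParent(null, true);
            foreach (var sim in activeRibbon.GetComponents<MonoBehaviour>())
                sim.enabled = false;
            SpawnRibbon(); // a new ribbon appears immediately
        }
    }

    void SpawnRibbon()
    {
        activeRibbon = (GameObject)Instantiate(
            ribbonPrefab, fingertip.position, Quaternion.identity);
        activeRibbon.transform.SetParent(fingertip, true); // fixed to the fingertip
    }
}
```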
Rope Brush. The rope model was inspired by how rope as a material is skillfully manipulated and appropriated for a great variety of purposes. Rope physics is prominently used in game development to simulate the real-world behavior of malleable linear objects in dynamic contexts. A piece of rope invites tactile engagement and manipulation, and our hands have a highly developed understanding of how it behaves and what we can do with it. When both hands are closed to form two fists, a virtual rope appears that loosely spans from one hand to the other, with its ends fixed to the palm of each hand (Figure 9). While both hands remain closed, the rope dangles in between the two hands. The rope may be stretched, twisted, and swung up and down by naturally moving one's hands (Figure 9).
The rope turns into a static mesh the moment it is let loose; users must close both hands again to generate another rope. The rope's length is determined by the distance between the two hands at the instant it is made.
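Rope creation and release could be driven by the Leap grab measure as follows (a sketch; the `GrabStrength` threshold, the `ropePrefab`, and its `SetLength` message are illustrative assumptions):

```csharp
using UnityEngine;
using Leap;

// Sketch: when both visible hands close into fists, spawn a rope whose rest
// length is the distance between the palms at that instant; opening the
// hands freezes it in place.
public class RopeSpawner : MonoBehaviour
{
    public GameObject ropePrefab; // hypothetical simulated rope
    Controller controller;
    GameObject activeRope;

    void Start() { controller = new Controller(); }

    void Update()
    {
        Frame frame = controller.Frame();
        if (frame.Hands.Count < 2) return;

        Hand h0 = frame.Hands[0], h1 = frame.Hands[1]; // ordering not guaranteed
        bool bothFists = h0.GrabStrength > 0.9f && h1.GrabStrength > 0.9f;

        if (bothFists && activeRope == null)
        {
            // Leap positions are in millimetres; convert to Unity metres.
            Vector3 a = ToUnity(h0.PalmPosition), b = ToUnity(h1.PalmPosition);
            activeRope = (GameObject)Instantiate(
                ropePrefab, (a + b) * 0.5f, Quaternion.identity);
            activeRope.SendMessage("SetLength", Vector3.Distance(a, b));
        }
        else if (!bothFists && activeRope != null)
        {
            // Hands opened: stop simulating so the rope becomes a static mesh.
            foreach (var sim in activeRope.GetComponents<MonoBehaviour>())
                sim.enabled = false;
            activeRope = null; // closing both fists again spawns a new rope
        }
    }

    static Vector3 ToUnity(Vector v) { return new Vector3(v.x, v.y, v.z) * 0.001f; }
}
```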
4 INITIAL EXPLORATIONS
The spring-mass prototype was informally tested by students, artists,
and designers during various stages of its development. Most had
never used a VR or AR system for virtual content creation before
and found the hand-based 3D interface and interaction techniques
highly intuitive and enjoyable; many voluntarily came back to test
upgrades or new features.
Figure 9: (Top) The rope is held by its two ends and stretched. (Bottom) The rope, once swung up and then released, stays frozen in its spot.
Users were quick to begin using and experimenting with the brush, though some found it challenging to exercise fine-grained control. Based on user feedback, the 3D positions of active fingertips were made more prominent by adding distinct white spheres as visual indicators, and the virtual representation of the spring was made more readily distinguishable through brighter and more vivid colors. The default dynamic parameters of the spring and mass were also adjusted to better match general user expectations and preferences.
While some users were immediately adept at handling the 3D palette interface, others had trouble determining how far they needed to reach to click buttons (an apparent misjudgment of depth). In the absence of physical feedback, some users needed more practice in manipulating slider handles to adjust their values with precision. Visual cues and feedback, such as motion effects that mimic a button being clicked or changes in the color brightness of slider handles when pressed, were amplified to facilitate these interactions.
The layout of the palettes, including the placement, scale, and trigger distance of their buttons and sliders, was also reconfigured and adjusted to make them more accessible to all users. Toggle buttons for each palette were added for users who did not want to keep all palettes open at once.
Originally the palettes had been conceived as co-existing with the space of the 3D sculptural forms being made; however, it was immediately apparent in user trials that this negatively constrained the spaces in which users worked and caused distracting occlusions. Giving users the ability to freely move the palettes wherever they want appeared to be a viable solution, but we eventually settled for the simpler solution of dividing the two modes.
5 EVALUATION
Following these initial studies, expert interviews were conducted
for a more informed analysis of the system’s design and approach.
This was followed by a broader user study to obtain an objective
assessment of the system’s usability and user experience from the
perspective of prospective users.
5.1 Expert Interviews
We sought expert opinions as an efficient and effective means of assessing the following: (1) the conceptual originality of the system's approach; (2) the effectiveness of its interaction design and technical realization; (3) the effectiveness of the VR versus AR modes; (4) its viability as an artistic tool, or other potential applications; and (5) the identification of specific areas for continued development, or critical factors that may have been overlooked in the design.
5.1.1 Selected Experts. Two experienced professionals–an artist who works with immersive systems and a research scientist specializing in AR and VR interactions–were consulted independently for proper assessment and feedback on the above issues. Dr. Ji is a 3D sculptor, media artist, and educator with over 10 years of research experience in immersive art and technologies and over 20 years of experience in sculpture. Dr. Ha is a researcher and entrepreneur in immersive technologies with over 10 years of research experience in Mixed Reality (MR), AR, and 3D user interaction.
5.1.2 Procedure. The expert evaluation was conducted in four
stages: (1) a brief overview of related works (5 min), (2) a video
overview of the spring-mass prototype and its features (5 min),
(3) a trial run of the working prototype (20 min), and (4) an in-
depth interview (30~40 min). System trials were video-captured and
interviews were audio-recorded with prior consent. The interviews
were open-ended and loosely structured around a list of questions
to facilitate discussion.
5.1.3 Results and Analysis. Dr. Ha found the spring-mass prototype highly engaging and fun to draw with, and felt more motivated and proficient in drawing with the 3D brush than with pencil and paper. He was particularly struck by its intuitive and playful interactivity and thought it would be a promising creative tool for children. He found the 3D interface design effective in how it organized and presented complex information, and opportune for an immersive environment. Dr. Ha considered the use of fingertips somewhat arbitrary and suggested potentially using virtual or physical props that users could easily grasp and maneuver. Moreover, in the prototype's current state, he thought the real-world backdrop in AR mode was more distracting than helpful, and the greater advantages of an AR modeling environment were in his view not fully explored.
Dr. Ji also identified playfulness–a quality she described as "the joy of creating"–as one of the system's greatest assets. She noted how the augmented dynamic behavior of the brush expanded the expressive capacity of hand movement and made it easy to create highly complex, sweeping curves, whereas creating such curves with such immediacy and spontaneity is very difficult even for advanced users of traditional NURBS-based modeling tools. She thus saw strong potential applications of this modeling approach in rapid 3D sketching and prototyping for sculptors and architects. She posited that it would be advantageous for creating rough sketches to be later edited and refined in high-end modeling software, and even 3D-printed as physical mock-ups. However, to strengthen it as an artistic tool, Dr. Ji stressed the importance of offering a diverse range of dynamic brushes and enabling more sophisticated control over their dynamic behavior. She highly recommended a two-handed interface that allows one hand to control the model's orientation while the other sculpts or draws.
In contrast to Dr. Ha, she found it preferable to work in AR rather than VR mode. Although she recognized that the VR mode could permit users to fully customize their virtual workspace, she suggested the AR mode offered four significant advantages: (1) an intuitive sense of shape, scale, and position relative to one's body and hands; (2) real-world dimensions for models being designed for a specific space; (3) the ability to visually identify supplementary means of input such as the keyboard; and (4) awareness of the presence of others, which opens up the potential for collaboration.
5.2 User Study
The main purpose of the general user study was to assess the system's ease of use, the effectiveness of its interaction design, and user satisfaction.
5.2.1 Participants. A total of ten graduate students–six male,
four female–participated in the study. Half of the participants had
used 3D modeling software before, mostly in a beginner capacity.
Three had experience with VR or AR systems, and none had any
experience in digital sculpting. Only one student had substantial
experience working with physical media for creative output.
5.2.2 Procedure. The user study was conducted in the following six steps: (1) video-based introduction to the spring-mass prototype and its main features (5 min), (2) training and practice in using the spring-mass brush (10 min), (3) free doodling session (10~15 min), (4) video overview of the ribbon and rope prototypes, (5) System Usability Scale (SUS) questionnaire, and (6) short-answer questionnaire.
SUS is widely used across fields as a simple and robust tool for a general assessment of a system's usability and learnability [3]. The questionnaire consists of ten statements and uses a 5-point rating scale ranging from "strongly disagree" to "strongly agree". Instructions were added to the top of the SUS form asking people to record their most immediate and instinctive responses, as recommended by [2]. The short-answer questionnaire consisted of open-ended questions, including what users liked and disliked most about the system.
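For reference, standard SUS scoring [3] normalizes the ten 1–5 responses and scales their sum to a 0–100 score; a small helper makes the arithmetic explicit:

```csharp
// Standard SUS scoring: odd-numbered items contribute (response - 1),
// even-numbered items contribute (5 - response); the sum is multiplied
// by 2.5 to yield a 0-100 score.
static float SusScore(int[] responses) // ten responses, each in 1..5
{
    float sum = 0f;
    for (int i = 0; i < 10; i++)
        sum += (i % 2 == 0) ? responses[i] - 1  // items 1, 3, 5, 7, 9
                            : 5 - responses[i]; // items 2, 4, 6, 8, 10
    return sum * 2.5f;
}
```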
5.2.3 Results and Discussion. The average SUS score was 67.5, which lies between the thresholds of an "OK" and a "Good" result [2]. For detailed analysis, mean scores and standard deviations were calculated for each item. The item that received the highest score (mean: 4.1, SD: 0.7) indicated that users did not find the system unnecessarily complex. High scores were also given to the system's consistency (mean: 4, SD: 0.63) and the user's confidence in using it (mean: 4, SD: 0.45). Items that received the poorest ratings suggested that many found the system difficult to use (mean: 3.3, SD: 1.19) and that it required a lot of learning (mean: 3.3, SD: 1.0). This particular response was unexpected, since preliminary informal testing had suggested the very opposite. We believe that the perceived usability of the system dropped because of the increased set of features in the palettes–more time is spent configuring advanced options in these interfaces, drawing attention away from the simplicity of the core sculpting gestures–and thus it may be preferable to initially present a curated palette of popular brush modes rather than a fully-featured configuration panel. Participants were clearly very quick to understand how the brush works and began drawing at once when given the opportunity. They rapidly became adept at maneuvering the brush, and enjoyed playing with it and discovering the forms it was capable of creating. Many appreciated having direct
control over the dynamic behavior of the brush and the freedom to tweak it to their liking. Some noted that the brush prototype is not ideal for detailed operations, and that it would take time to gain sufficient mastery for more sophisticated control. What users identified as liking most about the system was the fully 3D nature of drawing and the ability to take any position-tracked perspective desired. What was most often criticized was the 2D palette interface: some had difficulty reaching certain buttons or sliders, or would accidentally trigger others, and some found it tiresome to learn all its options. Most agreed that having both VR and AR modes was advantageous: the VR mode helped focus concentration on the drawing, while the AR mode granted a better sense of scale. Some noted that the AR mode would be beneficial for modeling works intended for an actual physical space.
6 DISCUSSION
During the evaluation we observed how, with nuanced maneuvering, users were able to steer the virtual spring-mass system to fluidly create curvaceous strokes, even when stretching much further than the finger's scope of movement. Users generally showed instant familiarity with and an intuitive understanding of its mechanics, and gained greater command of the brush for more nuanced expression with a short amount of practice. Results showed an immediate expressivity that would require extensive training to achieve with traditional 3D software.
The intuitive modeling process, however, was intermittently interrupted by the system's susceptibility to errors in hand detection and tracking. In particular, users' gestural strokes could be cut off prematurely due to self-occlusion problems in the hand tracking. When users were informed about why this was happening, however, many found ways of working around these limitations and, through conscious effort, grew more proficient at keeping their fingertips visible while wielding the interactive brush. We anticipate that future improvements to tracking hardware and software will overcome this issue. In the interim, an undo key was added as an edit function to allow users to delete any 3D strokes that were abruptly cut short.
The interaction design of the palettes also made their functioning vulnerable to tracking errors. More stable and robust palette designs that enable swift changes in brush properties and fluid transitions between modes would be vital for robust use of the dynamics-driven brush. Substituting the "draw" key with a foot pedal or voice-activated input would also free users to draw with both hands and control the stroke weight.
The ribbon and rope prototypes show the exciting new creative opportunities that emerge from expanding our dynamics-driven approach to virtual form creation. Their interaction designs take advantage of our bodily intuitions in manipulating various objects and materials, and of the freedom to suspend the laws of physics in virtual contexts. The spontaneous, expressive forms users are capable of creating with these systems are immensely difficult to approximate with conventional modeling tools. In their current state, however, the prototypes fall short of showing the full creative potential of this approach. Enhanced realism of virtual simulations, a diverse range of changeable form factors and interaction methods, and sophisticated controls over dynamic behavior would greatly expand their utility and expressive capacity for artistic output.
It would be beneficial to conduct a more comprehensive and meticulously designed user evaluation with an updated spring-mass prototype to specify and validate the advantages of a brush that uses dynamic models to augment gestural input over one without any gestural augmentation. It would also be insightful to compare differences in user performance in learning and creating with the prototype when a monoscopic desktop monitor is used as opposed to a fully immersive display.
7 CONCLUSION
An immense effort was put into making the physical interaction as fluid and seamless as possible in the above systems. Augmenting gestural strokes with dynamic simulations for immersive modeling has proven to encourage creative exploration and experimentation. Gestural interaction and dynamic models drawn from real-world behavior engage our natural spatial instincts and intelligence toward the shaping of form. The brush prototypes successfully combined physically inspired interaction techniques with the unique plasticity of the virtual world to extend our expressive capacity and create new forms of expression. Users generally found the experience creatively satisfying and liberating. Dynamic interactive brushes may find application in virtual sculpting, design ideation, rapid prototyping, and the authoring of animated virtual environments.
Each prototype in its current state still imposes a strong visual style specific to its dynamic model. Much more experimentation with dynamic parameters and applicable forces would be necessary to find robust ways of diversifying its creative output.
As identified by expert interviewee Ji, supporting the export of models will be essential both for refinement and for re-use in other environments or for 3D printing. Moreover, supporting collaborative creation in a shared augmented space could add significant value to the experience. As identified by expert interviewee Ha, the prototypes would also benefit from taking more advantage of the unique affordances of an AR environment to enable richer forms of interaction that further integrate the physical and virtual, such as detecting recognizable surfaces and objects in the physical context and granting them functional roles in the simulated dynamics.
REFERENCES
[1] Julian Adenauer, Johann Habakuk Israel, and Rainer Stark. 2013. Virtual Reality Technologies for Creative Design. In CIRP Design 2012, Amaresh Chakrabarti (Ed.). Springer London, 125–135. https://doi.org/10.1007/978-1-4471-4507-3_13
[2] Aaron Bangor, Philip Kortum, and James Miller. 2009. Determining What Individual SUS Scores Mean: Adding an Adjective Rating Scale. J. Usability Studies 4, 3 (May 2009), 114–123. http://dl.acm.org/citation.cfm?id=2835587.2835589
[3] John Brooke. 1996. SUS: A quick and dirty usability scale. In Usability Evaluation in Industry. CRC Press. https://www.crcpress.com/product/isbn/9780748404605 ISBN: 9780748404605.
[4] Jeff Butterworth, Andrew Davidson, Stephen Hench, and Marc T. Olano. 1992. 3DM: A Three Dimensional Modeler Using a Head-mounted Display. In Proceedings of the 1992 Symposium on Interactive 3D Graphics (I3D '92). ACM, New York, NY, USA, 135–138. https://doi.org/10.1145/147156.147182
[5] James H. Clark. 1976. Hierarchical Geometric Models for Visible Surface Algorithms. Commun. ACM 19, 10 (Oct. 1976), 547–554. https://doi.org/10.1145/360349.360354
[6] Michael F. Deering. 1995. HoloSketch: A Virtual Reality Sketching/Animation Tool. ACM Trans. Comput.-Hum. Interact. 2, 3 (Sept. 1995), 220–238. https://doi.org/10.1145/210079.210087
[7] Michael F. Deering. 1996. The HoloSketch VR Sketching System. Commun. ACM 39, 5 (May 1996), 54–61. https://doi.org/10.1145/229459.229466
[8] Paul Haeberli. 1989. Dynadraw. (1989). Retrieved January 10, 2016 from http://www.graficaobscura.com/dyna/index.html
[9] Johann Habakuk Israel, Laurence Mauderli, and Laurent Greslin. 2013. Mastering Digital Materiality in Immersive Modelling. In Proceedings of the International Symposium on Sketch-Based Interfaces and Modeling (SBIM '13). ACM, New York, NY, USA, 15–22. https://doi.org/10.1145/2487381.2487388
[10] Johann Habakuk Israel, Eva Wiese, Magdalena Mateescu, Christian Zöllner, and Rainer Stark. 2009. Investigating three-dimensional sketching for early conceptual design – Results from expert discussions and user studies. Computers & Graphics 33, 4 (2009), 462–473. https://doi.org/10.1016/j.cag.2009.05.005
[11] Daniel F. Keefe. 2007. Interactive 3D Drawing for Free-form Modeling in Scientific Visualization and Art: Tools, Methodologies, and Theoretical Foundations. Ph.D. Dissertation. Brown University, Providence, RI, USA. Advisor(s) Laidlaw, David H. AAI3271999.
[12] Daniel F. Keefe, Daniel Acevedo Feliz, Tomer Moscovich, David H. Laidlaw, and Joseph J. LaViola, Jr. 2001. CavePainting: A Fully Immersive 3D Artistic Medium and Interactive Experience. In Proceedings of the 2001 Symposium on Interactive 3D Graphics (I3D '01). ACM, New York, NY, USA, 85–93. https://doi.org/10.1145/364338.364370
[13] Scott Kuehnert. 2015. Express Yourself! Augmenting Reality with Graffiti. (2015). Retrieved June 23, 2015 from http://blog.leapmotion.com/express-augmenting-reality-graffiti-3d
[14] Joseph J. LaViola and Daniel F. Keefe. 2011. 3D Spatial Interaction: Applications for Art, Design, and Science. In ACM SIGGRAPH 2011 Courses (SIGGRAPH '11). ACM, New York, NY, USA, Article 1, 75 pages. https://doi.org/10.1145/2037636.2037637
[15] Leap Motion. 2015. 3D Motion and Gesture Control for PC & Mac. (2015). Retrieved June 2, 2015 from https://www.leapmotion.com/product/vr
[16] Golan Levin. 2000. Painterly interfaces for audiovisual performance. Master's thesis. Massachusetts Institute of Technology, Cambridge, MA. Advisor(s) Maeda, John. http://hdl.handle.net/1721.1/61848
[17] Wille Mäkelä. 2005. Working 3D meshes and particles with finger tips, towards an immersive artists' interface. In Proc. IEEE Virtual Reality Workshop. Citeseer.
[18] Wille Mäkelä and Tommi Ilmonen. Drawing, painting and sculpting in the air: Development studies about an immersive free-hand interface for artists.
[19] Wille Mäkelä, Markku Reunanen, and Tapio Takala. 2004. Possibilities and Limitations of Immersive Free-hand Expression: A Case Study with Professional Artists. In Proceedings of the 12th Annual ACM International Conference on Multimedia (MULTIMEDIA '04). ACM, New York, NY, USA, 504–507. https://doi.org/10.1145/1027527.1027649
[20] Mark Mine, Arun Yoganandan, and Dane Coffey. 2015. Principles, interactions and devices for real-world immersive modeling. Computers & Graphics 48 (2015), 84–98.
[21] Oculus VR. 2015. Oculus Best Practices Guide. (2015). Retrieved June 15, 2015 from http://developer.oculusvr.com/best-practices
[22] M. Reunanen, K. Palovuori, T. Ilmonen, and W. Mäkelä. 2005. NäPrä: Affordable Fingertip Tracking with Ultrasound. In Proceedings of the 11th Eurographics Conference on Virtual Environments (EGVE '05). Eurographics Association, Aire-la-Ville, Switzerland, 51–58. https://doi.org/10.2312/EGVE/IPT_EGVE2005/051-058
[23] Steven Schkolne, Michael Pruett, and Peter Schröder. 2001. Surface Drawing: Creating Organic 3D Shapes with the Hand and Tangible Tools. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '01). ACM, New York, NY, USA, 261–268. https://doi.org/10.1145/365024.365114
[24] Christopher Schmandt. 1983. Spatial Input/Display Correspondence in a Stereoscopic Computer Graphic Work Station. SIGGRAPH Comput. Graph. 17, 3 (July 1983), 253–261. https://doi.org/10.1145/964967.801156
[25] Scott Snibbe, Sean Anderson, and Bill Verplank. Springs and constraints for 3D drawing. In Proceedings of the Third Phantom Users Group Workshop.
[26] T. M. Takala, M. Mäkäräinen, and P. Hämäläinen. 2013. Immersive 3D modeling with Blender and off-the-shelf hardware. In 2013 IEEE Symposium on 3D User Interfaces (3DUI). 191–192. https://doi.org/10.1109/3DUI.2013.6550243
[27] Tilt Brush. 2015. Tilt Brush. (2015). Retrieved June 22, 2015 from http://www.tiltbrush.com
[28] Jeremy Turner. 2002. Myron Krueger Live. (2002). http://www.ctheory.net/articles.aspx?id=328
[29] VRClay. 2015. Sculpting in Virtual Reality with Oculus Rift and Razer Hydra. (2015). Retrieved June 22, 2015 from http://vrclay.com
[30] E. Wiese, J. H. Israel, A. Meyer, and S. Bongartz. 2010. Investigating the Learnability of Immersive Free-hand Sketching. In Proceedings of the Seventh Sketch-Based Interfaces and Modeling Symposium (SBIM '10). Eurographics Association, Aire-la-Ville, Switzerland, 135–142. http://dl.acm.org/citation.cfm?id=1923363.1923387