Incorporating Kinesthetic Creativity and Gestural Play into
Immersive Modeling
Sung-A Jang
Korea Culture Technology Institute
Gwangju, Korea
sjang@gist.ac.kr
Graham Wakeeld
Arts, Media, Performance & Design
York University
Toronto, Canada
grrrwaaa@yorku.ca
Sung-Hee Lee
Graduate School of Culture &
Technology
KAIST
Daejeon, Korea
sunghee.lee@kaist.ac.kr
Figure 1: (Left): Spring-mass brush in velocity mode. Variations in stroke weight reflect velocity changes. (Center): Experiments in shape-making with the spring-mass brush. (Right): Variations in curvatures achieved by adjusting the spring's dynamic settings.
ABSTRACT
The 3D modeling methods and approach presented in this paper attempt to bring the richness and spontaneity of human kinesthetic interaction in the physical world to the process of shaping digital form, by exploring playfully creative interaction techniques that augment gestural movement. The principal contribution of our research is a novel dynamics-driven approach for immersive freeform modeling, which extends our physical reach and supports new forms of expression. In this paper we examine three augmentations of freehand 3D interaction that are inspired by the dynamics of physical phenomena. These are experienced via immersive augmented reality to intensify the virtual physicality and heighten the sense of creative empowerment.
CCS CONCEPTS
• Human-centered computing → Gestural input; Interaction design; Virtual reality; Mixed / augmented reality;
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
MOCO'17, 28-30 June 2017, London, United Kingdom
© 2017 Copyright held by the owner/author(s). Publication rights licensed to Association for Computing Machinery.
ACM ISBN 978-1-4503-5209-3/17/06...$15.00
https://doi.org/10.1145/3077981.3078045
KEYWORDS
Embodied interaction; kinesthetic interaction; gestural augmentation; immersive modeling; 3D modeling; 3D user interface; augmented reality
ACM Reference format:
Sung-A Jang, Graham Wakeeld, and Sung-Hee Lee. 2017. Incorporating
Kinesthetic Creativity and Gestural Play into Immersive Modeling. In Pro-
ceedings of MOCO’17, London, United Kingdom, 28-30 June 2017, 8 pages.
https://doi.org/http://dx.doi.org/10.1145/3077981.3078045
1 INTRODUCTION
Myron Krueger, a computer artist and pioneer in virtual reality (VR) interaction, argued that the real power of VR is not in its capacity for illusion, but in its potential to extend our physical reach; and that what is critical in constituting "reality" to our perception is the "degree of physical involvement" [28]. In the same spirit, our research utilizes immersive technologies, continuous gesture capture, and physically-inspired simulation to extend our bodily powers into the creation of virtual sculptural forms that would be near impossible to achieve in the physical world.
The main contribution is a new dynamics-driven approach for immersive modeling, bringing to HMD-based VR an enrichment of creative processes afforded by "gestural augmentation" inspired by physical simulations [16]. The dynamic models that virtually augment gesture in this research are physically inspired yet under the creative control we exert over and through our own bodies. The interaction is designed to incorporate sophisticated movements on an intimate scale – the fine-tuned physical control we can exert using our hands and fingers – in tandem with the intuitions we have of the physical dynamics of familiar materials and objects. The focus here is on expressive capacity supporting creative processes, rather than efficiently or effectively obtaining a specific output. VR and augmented reality (AR) technologies are primarily utilized as a means of embodied interaction that expands our own creative capacities. The ultimate goal is to effectively extend our physical reach and support new forms of expression; to dexterously create visually expressive forms that would be arduous to achieve otherwise, all through playful experimentation.
2 RELATED WORK
2.1 Augmented Gestural Strokes
In 1989 Paul Haeberli developed a "dynamic drawing technique" for his 2D drawing program Dynadraw [8]. He re-imagined the brush as a physical mass attached to the mouse position by a damped spring and tugged around whenever the mouse moved, instead of sitting at the exact point of the mouse itself. By augmenting gestural strokes with a spring-mass simulation, Dynadraw creates expressive strokes that amplify the qualities of the gesture. Scott Snibbe's Dynasculpt [25] was directly inspired by Dynadraw and attempted to use its drawing method to draw in 3D. Our approach expands upon Snibbe's exploration of the novel opportunities afforded by physically inspired dynamic models unconstrained by real-world laws. The interaction design of our system also uses physical simulation to enrich the dynamic between the user and the system and thus expand the expressive capacities of gestural movement. Unlike Dynasculpt, our spring-mass prototype has users directly steer the mass with the unfiltered movement of their fingers, and their drawings materialize directly where they perceive the mass to be, augmented within their own physical space. Head movements naturally correspond with changes of view of the emerging sculptural form. With the removal of these perceptual barriers, we theorize that users would not show a "tendency to draw in planes" [25] as with Dynasculpt.
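To make the shared mechanic concrete, the following minimal Python sketch integrates a damped spring-mass brush of the kind at the heart of Dynadraw, Dynasculpt, and our spring-mass prototype; the integrator and all parameter values are illustrative assumptions, not the actual settings of any of these systems.

```python
import numpy as np

# Illustrative damped spring-mass brush (after Dynadraw/Dynasculpt).
# A virtual mass is pulled toward the input point (mouse cursor or
# fingertip) by a spring and slowed by a damper; the stroke is the
# path traced by the mass, not by the input itself.
mass = 1.0        # weight of the virtual mass (assumed value)
k = 40.0          # spring constant (assumed value)
damping = 6.0     # damper value (assumed value)
dt = 1.0 / 75.0   # one step per 75 Hz frame

pos = np.zeros(3)  # mass position (3D for the immersive case)
vel = np.zeros(3)

def step(target):
    """Advance the mass one frame toward the tracked input point."""
    global pos, vel
    force = k * (target - pos) - damping * vel  # spring + damper
    vel = vel + (force / mass) * dt             # semi-implicit Euler
    pos = pos + vel * dt
    return pos.copy()                           # next stroke sample

# Abrupt gestures overshoot and settle, yielding the smoothed,
# springy curves that amplify the qualities of the input motion.
stroke = [step(np.array([0.1 * t, 0.0, 0.0])) for t in range(100)]
```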
2.2 Gestural 3D Modeling
A detailed introduction to early graphics research in using sweeping 3D input for modeling can be found in [14], which outlines the substantial advantages of using a stereoscopic or immersive display for anything from simple CAD-style manipulations to the complex operations of freeform extrusion.
Most recent freeform modeling approaches that incorporate sweeping 3D input belong to two broad categories: those that rely on the in-air movements of a tracked device, and those that employ haptic mediation to enhance control over input or mimic tactile interactions with clay-like substances in physical reality [11].
Surface Drawing [23] used glove-based input in a semi-immersive environment and used sweeping movements of the hand to generate a surface. One of its biggest drawbacks was having to use a custom-made data glove with a tabletop VR device. CavePainting [12], a full-featured 3D painting medium for artists and designers, allowed users to create 3D brushstrokes with physical props within an immersive CAVE environment. While CavePainting worked well as a new art medium that allowed artists to paint in 3D space, its aesthetics and interactivity remained generally tethered to emulating a 2D medium in its painterly style and method of mark-making. It also required a highly specialized environment with expensive equipment (the CAVE itself). While Surface Drawing and CavePainting were both successful in demonstrating the potential application of direct, gestural 3D input for art and design, they were dependent on custom-made devices and used immobile platforms that were fixed to a physical device or environment. Their methods were also focused on visualizing the gestural stroke as accurately as possible and refrained from using any form of gestural augmentation. Mäkelä's experiments with the Näprä prototype [17–19] proposed a slightly different and more delicate approach to 3D mark-making. Näprä, a wearable mechanical device for both hands that wraps around the fingers, is designed to track the fine and subtle movements of the fingertips for fuller expression and interaction in a CAVE-like immersive environment [22]. Her findings suggest that real-time fingertip interaction allows for fine control and intuitive command, and works especially well for two-handed tasks that require both hands to work simultaneously in different capacities. While the prototype presented in this paper relies solely on vision-based fingertip tracking, its interface is also built around the expressive potential and control capacity of our fingertips.
2.3 Immersive Painting and Sculpting Systems
The substantial advantages of an immersive modeling environment, especially the benefits of using our spatial intuitions to create and manipulate virtual models, have been confirmed and discussed in numerous studies [1, 4–7, 9, 10, 17–19, 24, 30]. The potential of immersive technologies for supporting the early stages of the creative process in design practices is methodically explored in [1, 9, 10, 30].
Many concrete potential advantages of immersive 3D sketching for creative design, such as being able to sketch life-size models in proportion to our bodies and observe their spatial impact in the process, were identified through empirical studies and expert discussions [10, 30]. A recent in-depth user study conducted over a two-week period [9] showed that it was possible for designers to develop their own unique creative strategies for gaining a degree of mastery in handling digital substance, in the absence of material constraints in an immersive modeling environment. [20] and [26] exemplify recent research endeavors to integrate immersive interaction techniques into existing 3D modeling software, specifically Blender and SketchUp, to combine the effectiveness of spatial interaction with the powerful capacities of established full-featured modeling tools. These experiments in VR adaptation, however, are only in their preliminary stages and have been limited to simple manipulation tasks that do not involve freeform gestural input.
Rapid technological advances in VR and AR are fueling the development of new immersive modeling software for emerging head-mounted display (HMD) platforms such as the HTC Vive and Oculus Rift. VRClay [29] is a 3D sculpting software made for the Rift that supports two-handed interaction with Razer Hydra controllers. With VRClay users can enjoy the powers of digital sculpting directly in 3D, with one hand controlling the model while the other sculpts. Tilt Brush [27] is also a VR HMD application; it allows users to paint on 2D planes that can be moved and rotated in 3D space to create 3D paintings. Graffiti 3D [13], which allows users to draw directly in 3D with their fingers in AR with a Leap Motion sensor mounted onto the Oculus Rift, was developed using the same Leap Motion VR platform that was used to build our prototype system. All the immersive drawing and sculpting tools mentioned above capitalize on the affordances of an immersive environment to provide a novel and more intuitive experience of creating 3D virtual content, liberated from the rigid confines of a 2D static monitor. None of them, however, incorporates dynamic models to augment gestural expression.
3 SYSTEM DESIGN
The system presents three dierent physically inspired augmen-
tations of freehand 3D gesture, within an HMD-based VR or see-
through AR immersive experience.
Figure 2: The working environment.
3.1 Hardware and working environment
The system's hardware consists of a VR HMD (Oculus Rift Development Kit v.2) including an external camera-based tracking sensor; a hand-tracking motion sensor (Leap Motion Controller) affixed to the front faceplate of the HMD; and a computer (we used a quad-core 2.6 GHz Intel Core i7 Mac Mini). The working environment is shown in Figure 2. The user wearing the HMD is seated in front of a desk, facing the HMD's tracking sensor at about one meter's distance ahead. The user typically keeps one hand on the keyboard for switching modes and triggering actions, and the other hand in the air for gestural motion tracking.
The HMD provides 1080p resolution per eye, a 100° field of view (FOV), a 75 Hz refresh rate, and minimal motion blurring [21]. The external tracking system provides accurate and low-latency orientation and positional coordination over the entire working environment. The Leap Motion Controller provides sub-millimeter tracking of hands and fingertips, including gesture recognition, over a wide 150° field of view with an effective range of roughly 25 to 600 millimeters (however, hand detection and tracking work best when the user's hands are well within this range and closer to the center of the user's view). It also provides an infrared video feed capable of supporting see-through AR (see Figure 3).
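As an illustration of how such range limits might be respected, the sketch below gates fingertip samples against the effective range quoted above; the function and constants are assumptions for exposition, not part of the Leap Motion SDK.

```python
import math

# Hypothetical gate for fingertip samples using the effective range
# quoted above (roughly 25-600 mm from the sensor); samples outside
# this shell are treated as unreliable and dropped.
MIN_RANGE_MM = 25.0
MAX_RANGE_MM = 600.0

def in_effective_range(tip_mm):
    """tip_mm: (x, y, z) fingertip position in sensor coordinates, mm."""
    d = math.sqrt(sum(c * c for c in tip_mm))
    return MIN_RANGE_MM <= d <= MAX_RANGE_MM

# Example: a fingertip about 300 mm from the sensor is usable.
assert in_effective_range((0.0, 150.0, 260.0))
```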
Figure 3: In-HMD view of virtual hand models overlapping
infrared imagery of the real hands [15].
3.2 Software Architecture
The system's software was developed with Unity® 3D v.5. The software continuously extracts tracking information and raw infrared imagery through the Leap Motion API. The extracted information is used to drive the gestural augmentations that generate virtual forms within the context of real-world imagery captured by the sensor's two infrared cameras. The output is rendered onto the Oculus Rift display at 75 Hz via Unity's Oculus Rift API.
The hand tracking data is used to construct virtual hand models with the estimated posture, position, orientation, and velocity of every bone, finger, and fingertip, relative to the virtual position of the Leap device. Hand models are visualized as virtual skeletons and overlaid onto Leap Motion's video pass-through imagery, with tracking confidence represented via the opacity of the virtual hand models. These virtual hands are rendered to the HMD in the same stereoscopic 3D space as user-generated sculptural forms.
3.2.1 3D Interface Design. The principal focus of interaction is freeform, continuous mark-making in space, with one of three physical augmentation brushes as described in the next section. However, the dynamics, shape, and appearance of these brushes are partly determined by a number of settings that are configured through a set of interface palettes, which we describe first.
3.2.2 Palee Mode. All congurable system settings are ac-
cessed through the system’s three palettes (Figure 4). When re-
quested (by pressing the spacebar key), three large palettes are
situated in space at an approachable distance from the user, at an
accessible scale, and in a spatial cockpit arrangement to take advan-
tage of the HMD’s wide eld of view. The palettes situate instantly
familiar 2D desktop metaphors of clickable buttons and draggable
sliders in the 3D space.
The palettes oer a breadth of options to maximize the user’s
creative control over the brush’s properties. The leftmost palette
modies a brush’s shape, orientation, scale, extrusion and thickness
variation. The right palette selects color and material. The central
palette ne-tunes critical parameters of the dynamic behavior of
the current brush. Changes made with the palettes are promptly
visualized in the form and movement of all active brushes.
Pressing the spacebar once more hides the palettes and returns
to the unobstructed drawing mode. The two modes keep the virtual
creations and palettes separate and prevent any overlap or occlusion.
An additional overlay can be used to save, load, or reset the current
scene.
Users can also swipe the hand vertically just in front of the HMD to toggle between VR and AR modes. The swipe gesture semantically corresponds to its function and makes it easy to transition between modes while drawing or editing.
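One plausible way to implement such a toggle is to threshold the palm's vertical speed while the hand is near the headset, as in the following sketch; the thresholds, cooldown, and sample format are assumptions, not the system's actual recognizer.

```python
import time

# Schematic vertical-swipe detector for toggling between VR and AR.
# Thresholds, distances, and the sample format are assumptions.
SWIPE_SPEED = 0.8   # m/s of vertical palm motion (assumed)
NEAR_HMD = 0.15     # palm within 15 cm of the headset (assumed)
COOLDOWN_S = 1.0    # ignore repeat triggers within one second

ar_mode = False
_last_toggle = 0.0

def on_palm_sample(palm_pos, palm_vel, hmd_pos):
    """Toggle modes when the palm swipes vertically just in front
    of the head-mounted display."""
    global ar_mode, _last_toggle
    dist = sum((p - h) ** 2 for p, h in zip(palm_pos, hmd_pos)) ** 0.5
    vertical_speed = abs(palm_vel[1])  # y is up (assumed convention)
    if (dist < NEAR_HMD and vertical_speed > SWIPE_SPEED
            and time.time() - _last_toggle > COOLDOWN_S):
        ar_mode = not ar_mode
        _last_toggle = time.time()
```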
Figure 4: In-HMD view of the conguration palettes.
3.2.3 Dynamics-Based Interaction Design. Three types of brush prototypes inspired by dynamic models are presented in this system: the spring-mass brush, ribbon brush, and rope brush. The spring-mass brush builds upon Snibbe's Dynasculpt model [25] and adapts its concept to an immersive platform with freehand 3D interaction. The ribbon and rope brushes also utilize dynamic simulation-driven models inspired by real-world physical phenomena. The three brush prototypes were chosen to evaluate the creative potential of connecting physically inspired dynamics with the creation of form in ways that are only possible in virtual space. Although the focus is primarily on airborne gestural strokes of the tracked fingertips, specific keys on the physical keyboard are also used as a convenient means of invoking subsidiary functions.
Spring-Mass Brush. Snibbe's Dynasculpt was a 3D adaptation of Haeberli's Dynadraw application, which focused on exploring how dynamic models transform the creative process. Our system's spring-mass brush prototype is based on the same spring-mass-damper model, but differs from prior work in that users' hand movements provide direct 3D input and its immersive display allows virtual forms to materialize directly at the user's fingertips, including multiple fingertips simultaneously. The fingertips of all extended fingers in view are respectively augmented by a virtual spring with a mass attached to its other end (Figure 5). The line that extends from the fingertip to the mass represents a spring that can stretch and contract as it tows the mass along. The attached spring and mass follow the fingertip in accordance with its dynamic settings as long as the finger remains extended. The masses attached to each finger can collide with those of other extended fingers, and rebound.
To begin drawing with the brush, users must press their designated "draw" key (the left Control key by default). The 3D stroke begins from the virtual mass and continues along its 3D path while the draw key is held down (Figure 6), ending when the key is lifted. The virtual brush "springs" into action once the distance between the fingertip and the mass grows past its state of equilibrium. The virtual matter that appears in its trail is promptly erased if the brushstroke is cut prematurely and fails to travel a pre-defined minimal distance.
Figure 5: (Left) Virtual masses attached to ngertips by vir-
tual springs. (Right) All extended ngers receive masses.
Figure 6: A drawing progressing with the spring-mass brush.
Figure 7: Examples of sketches showing a greater degree of control. Both were created by an artist on the first trial of the system, after less than one hour of experimentation.
Users are given direct control over three specic parameters that
dene the brush’s character and quality of movement: 1. Weight
of the mass, 2. Spring constant, 3. Damper value. Other subsidiary
factors such as air drag are kept at an optimal constant. Adjustments
in parameter values are made through the dynamics palette, and
any changes are instantly applied and observable in the brush’s
behavior. The default base shape is a at rectangle, chosen for
the ribbon-like forms that reveal dynamic changes in orientation it
creates. The default extrusion type is a are, which shows variations
in thickness with less computational power than the velocity type
(which adjusts the stroke’s thickness according to speed of the
stroke). The shape and extrusion type of the brush, as well as its
stroke weight and thickness contrast level are changeable via the
palette.
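As a sketch of the velocity extrusion type and the minimum-distance rule described above, the following maps stroke speed to thickness and discards strokes that end too soon; the mapping and all constants are illustrative assumptions.

```python
# Illustrative velocity-mode extrusion and minimum-distance rule:
# thickness follows the speed of the mass, and strokes that travel
# less than a minimum distance before the draw key is released are
# discarded. All constants are assumed values.
BASE_THICKNESS = 0.01  # meters (assumed)
SPEED_GAIN = 0.02      # extra thickness per m/s of speed (assumed)
MIN_STROKE_LEN = 0.05  # minimum total travel in meters (assumed)

def thickness(speed):
    return BASE_THICKNESS + SPEED_GAIN * speed

def finalize_stroke(samples):
    """samples: list of (position, speed) pairs recorded while the
    draw key was held. Returns (position, thickness) cross-sections,
    or None if the stroke was cut prematurely and should be erased."""
    length = sum(
        sum((a - b) ** 2 for a, b in zip(p0, p1)) ** 0.5
        for (p0, _), (p1, _) in zip(samples, samples[1:]))
    if length < MIN_STROKE_LEN:
        return None
    return [(pos, thickness(spd)) for pos, spd in samples]
```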
Figure 8: A ribbon is captured in its current position and another one immediately appears in its place.
Ribbon Brush. The ribbon prototype was inspired by rhythmic gymnasts' manipulations of the ribbon, with which they create dynamically changing shapes in mid-air to augment their performance. While the ribbon is in constant motion, it routinely holds its form long enough to appear magically suspended in air. Photographs sometimes manage to capture these exquisite moments in a ribbon's ephemeral dance. The ribbon brush prototype attempts to capture this fleeting beauty of a ribbon at a select moment in time in its full-dimensional form.
In the prototype, the user's extended fingers are virtually augmented with a long strip of cloth in the form of a ribbon (Figure 8). One end of the ribbon is fixed onto the fingertip like a gymnast's ribbon on the end of a stick. The ribbon exists in a zero-gravity zone and stays still in the air in the absence of movement. When the finger or hand moves, the ribbon naturally tugs along with it. Users have the power to freeze the dancing ribbon in time and space by pressing the designated "capture" key. Multiple active ribbons are captured simultaneously and promptly replaced by newly generated ones.
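One way to approximate such a zero-gravity ribbon is as a chain of particles pinned to the fingertip and relaxed with distance constraints, frozen into a static copy on capture; the Verlet-style sketch below is an assumed approximation, not the system's implementation.

```python
import numpy as np

# Assumed zero-gravity ribbon: a chain of particles whose head is
# pinned to the fingertip, integrated with damped Verlet steps and
# distance constraints. With no gravity term it drifts to rest and
# appears suspended in air when the hand stops moving.
N = 40        # particles along the ribbon (assumed)
SEG = 0.02    # rest length between neighbors, meters (assumed)
DAMP = 0.96   # velocity damping so the ribbon settles (assumed)

pts = np.zeros((N, 3))
prev = pts.copy()

def step(fingertip):
    global pts, prev
    vel = (pts - prev) * DAMP   # no gravity: the "zero gravity zone"
    prev = pts.copy()
    pts = pts + vel
    pts[0] = fingertip          # head follows the finger
    for _ in range(4):          # relax the distance constraints
        for i in range(N - 1):
            d = pts[i + 1] - pts[i]
            n = np.linalg.norm(d) or 1e-9
            corr = d * (1.0 - SEG / n) * 0.5
            pts[i] += corr
            pts[i + 1] -= corr
        pts[0] = fingertip      # keep the head pinned

def capture():
    """Freeze the ribbon: hand back a static copy; the live ribbon
    is immediately re-seeded in place, as described above."""
    return pts.copy()
```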
Rope Brush. The rope model was inspired by how rope as a material is skillfully manipulated and appropriated for a great variety of purposes. Rope-based physics is prominently used in game development to simulate the real-world behavior of malleable linear objects in dynamic contexts. A piece of rope invites tactile engagement and manipulation, and our hands have a highly developed understanding of how it behaves and what we can do with it. When both hands are closed to form two fists, a virtual rope appears that loosely spans from one hand to the other, with its ends fixed to the palm of each hand (Figure 9). While both hands remain closed, the rope dangles in between the two hands. The rope may be stretched, twisted, and swung up and down by naturally moving one's hands (Figure 9).
The rope turns into a static mesh the moment both hands open; once the rope is let loose, users must close both hands again to generate another rope. The rope's length is determined by the distance between the two hands at the instant it is made.
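The rope's lifecycle can be read as a small state machine: created when both fists close, towed while they stay closed, and frozen the moment a hand opens. The sketch below assumes boolean grip flags and NumPy positions purely for illustration.

```python
import numpy as np

# Assumed rope lifecycle: a rope is created when both fists close,
# its rest length fixed by the hand separation at that instant, and
# frozen into a static mesh the moment either hand opens.
class RopeSession:
    def __init__(self):
        self.rope = None   # the single live rope, if any
        self.frozen = []   # captured static ropes left in the scene

    def update(self, left_pos, right_pos, left_fist, right_fist):
        both_closed = left_fist and right_fist
        if both_closed and self.rope is None:
            # Rest length is the hand separation at creation time.
            rest = float(np.linalg.norm(right_pos - left_pos))
            self.rope = {"ends": (left_pos, right_pos), "rest": rest}
        elif both_closed:
            # Hands tow the rope; a simulation would update its body.
            self.rope["ends"] = (left_pos, right_pos)
        elif self.rope is not None:
            # A hand opened: the rope stays frozen where it is.
            self.frozen.append(self.rope)
            self.rope = None  # close both hands to get a new rope
```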
4 INITIAL EXPLORATIONS
The spring-mass prototype was informally tested by students, artists,
and designers during various stages of its development. Most had
never used a VR or AR system for virtual content creation before
and found the hand-based 3D interface and interaction techniques
highly intuitive and enjoyable; many voluntarily came back to test
upgrades or new features.
Figure 9: (Top): The rope is being held by its two ends and
stretched. (Bottom) The rope, once swung up then released,
stays frozen in its spot.
Users were quick to begin using and experimenting with the brush, though some found it challenging to exercise fine-grained control. Based on user feedback, the 3D positions of active fingertips were made more prominent by adding distinct white spheres as visual indicators, and the virtual representation of the spring was made more readily distinguishable through brighter and more vivid colors. The default dynamic parameters of the spring and mass were also adjusted to better match general user expectations and preferences.
While some users were immediately adept at handling the 3D palette interface, others had trouble determining how far they needed to reach to click buttons (an apparent misjudgment of depth). In the absence of physical feedback, some users needed more practice in manipulating slider handles to adjust their values with precision. Visual cues and feedback, such as motion effects that mimic a button being clicked or changes in the color brightness of slider handles when pressed, were amplified to facilitate these interactions.
The layout of the palettes, including the placement, scale, and trigger distance of their buttons and sliders, was also reconfigured and adjusted to make them more accessible to all users. Toggle buttons for each palette were added for users who did not want to keep all palettes open at once.
Originally the palettes had been conceived as co-existing with the space of the 3D sculptural forms being made; however, it was immediately apparent in user trials that this negatively constrained the spaces in which users worked and caused distracting occlusions. Giving users the ability to freely move the palettes wherever they wanted appeared to be a viable solution, but we eventually settled on the simpler solution of dividing the two modes.
5 EVALUATION
Following these initial studies, expert interviews were conducted
for a more informed analysis of the system’s design and approach.
This was followed by a broader user study to obtain an objective
assessment of the system’s usability and user experience from the
perspective of prospective users.
5.1 Expert Interviews
We sought expert opinions as an ecient and eective means of as-
sessing the following: (1) The conceptual originality of the system’s
MOCO’17, 28-30 June 2017, London, United Kingdom S. Jang et al.
approach; (2) The eectiveness of its interaction design and techni-
cal realization; (3) The eectiveness of the VR versus AR modes; (4)
Its viability as an artistic tool, or other potential applications; (5)
The identication of specic areas for continued development, or
critical factors that may have been overlooked in the design.
5.1.1 Selected Experts. Two experienced professionals–an artist who works with immersive systems and a research scientist specializing in AR and VR interactions–were consulted independently for proper assessment of and feedback on the above issues. Dr. Ji is a 3D sculptor, media artist, and educator with over 10 years of research experience in immersive art and technologies and over 20 years of experience in sculpture. Dr. Ha is a researcher and entrepreneur in immersive technologies with over 10 years of research experience in Mixed Reality (MR), AR, and 3D user interaction.
5.1.2 Procedure. The expert evaluation was conducted in four stages: (1) a brief overview of related works (5 min), (2) a video overview of the spring-mass prototype and its features (5 min), (3) a trial run of the working prototype (20 min), and (4) an in-depth interview (30–40 min). System trials were video-captured and interviews were audio-recorded with prior consent. The interviews were open-ended and loosely structured around a list of questions to facilitate discussion.
5.1.3 Results and Analysis. Dr. Ha found the spring-mass prototype highly engaging and fun to draw with, and felt more motivated and proficient in drawing with the 3D brush than with pencil and paper. He was particularly struck by its intuitive and playful interactivity and thought it would be a promising creative tool for children. He found the 3D interface design effective in how it organized and presented complex information, and opportune for an immersive environment. Dr. Ha considered the use of fingertips somewhat arbitrary and suggested potentially using virtual or physical props that users could easily grasp and maneuver. Moreover, in the prototype's current state, he thought the real-world backdrop in AR mode was more distracting than helpful, and the greater advantages of an AR modeling environment were in his view not fully explored. Dr. Ji also identified playfulness–a quality she described as "the joy of creating"–as one of the system's greatest assets. She noted how the augmented dynamic behavior of the brush expanded the expressive capacity of hand movement and made it easy to create highly complex, sweeping curves, whereas creating such curves with such immediacy and spontaneity is very difficult even for advanced users of traditional NURBS-based modeling tools. She thus saw strong potential applications of this modeling approach in rapid 3D sketching and prototyping for sculptors and architects. She posited that it would be advantageous for creating rough sketches to be later edited and refined in high-end modeling software, and even 3D-printed as physical mock-ups. However, to strengthen it as an artistic tool, Dr. Ji stressed the importance of offering a diverse range of dynamic brushes and enabling more sophisticated control over their dynamic behavior. She highly recommended a two-handed interface that allows one hand to control the model's orientation while the other sculpts or draws.
In contrast to Dr. Ha, she found it preferable to work in AR rather than VR mode. Although she recognized that the VR mode could permit users to fully customize their virtual workspace, she suggested the AR mode offered four significant advantages: (1) an intuitive sense of shape, scale, and position relative to one's body and hands, (2) real-world dimensions for models being designed for a specific space, (3) the ability to visually identify supplementary means of input such as the keyboard, and (4) awareness of the presence of others, which opens up the potential for collaboration.
5.2 User Study
The main purpose of the general user study was to assess the system's ease of use, the effectiveness of its interaction design, and user satisfaction.
5.2.1 Participants. A total of ten graduate students–six male,
four female–participated in the study. Half of the participants had
used 3D modeling software before, mostly in a beginner capacity.
Three had experience with VR or AR systems, and none had any
experience in digital sculpting. Only one student had substantial
experience working with physical media for creative output.
5.2.2 Procedure. The user study was conducted in the following six steps: (1) a video-based introduction to the spring-mass prototype and its main features (5 min), (2) training and practice in using the spring-mass brush (10 min), (3) a free doodling session (10–15 min), (4) a video overview of the ribbon and rope prototypes, (5) the System Usability Scale (SUS) questionnaire, and (6) a short-answer questionnaire.
SUS is widely used across elds as simple and robust tool for
a general assessment of a system’s usability and learnability [
3
].
The questionnaire consists of ten statements and uses a 5-point
rating scale ranging from “strongly disagree” to “strongly agree”.
Instructions were added to the top of the SUS form to have peo-
ple record their most immediate and instinctive responses, as was
recommended by [
2
]. The short answer questionnaire consisted of
open-ended questions including what users liked and disliked most
about the system.
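For reference, SUS scores such as those reported below follow Brooke's standard scoring: odd-numbered items contribute (rating − 1), even-numbered items contribute (5 − rating), and the summed contributions are scaled by 2.5 onto a 0–100 range. A minimal sketch, with made-up placeholder responses, follows.

```python
def sus_score(responses):
    """responses: ten ratings (1-5) in questionnaire order.
    Standard SUS scoring (Brooke, 1996): odd-numbered items
    contribute (rating - 1), even-numbered items (5 - rating);
    the summed contributions are scaled by 2.5 to give 0-100."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Placeholder example (not actual study data):
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0
```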
5.2.3 Results and Discussion. The average SUS score was 67.5, which lies between the thresholds for an "OK" and a "Good" result [2]. For detailed analysis, mean scores and standard deviations were calculated for each item. The item that received the highest score (mean: 4.1, SD: 0.7) indicated that users did not find the system unnecessarily complex. High scores were also given to the system's consistency (mean: 4, SD: 0.63) and the user's confidence in using it (mean: 4, SD: 0.45). The items that received the poorest ratings suggested that many found the system difficult to use (mean: 3.3, SD: 1.19) and that it required a lot of learning (mean: 3.3, SD: 1.0). This particular response was unexpected, since preliminary informal testing had suggested the very opposite. We believe that the perceived usability of the system dropped because of the increased set of features in the palettes–more time is spent configuring advanced options in these interfaces, drawing attention away from the simplicity of the core sculpting gestures–and thus it may be preferable to initially present a curated palette of popular brush modes rather than a fully featured configuration panel. Participants were clearly very quick to understand how the brush works and began drawing at once when given the opportunity. They became rapidly adept at maneuvering the brush, and enjoyed playing with it and discovering the forms it was capable of creating. Many appreciated having direct
control over the dynamic behavior of the brush and the freedom to tweak it to their liking. Some noted that the brush prototype is not ideal for detailed operations, and that it would take time to gain sufficient mastery for more sophisticated control. What users identified as liking most about the system was the fully 3D nature of drawing and the ability to take any position-tracked perspective desired. What was most often criticized was the 2D palette interface: some had difficulty reaching certain buttons or sliders, or would accidentally trigger others, and some found it tiresome to learn all its options. Most agreed that having both VR and AR modes was advantageous: the VR mode helped focus concentration on the drawing, while the AR mode granted a better sense of scale. Some noted that the AR mode would be beneficial for modeling works intended for an actual physical space.
6 DISCUSSION
During the evaluation we observed how, with nuanced maneuvering, users were able to steer the virtual mass-spring system to fluidly create curvaceous strokes, even when stretching much further than the finger's scope of movement. Users generally showed instant familiarity with and intuitive understanding of its mechanics, and gained greater command of the brush for more nuanced expressions with a short amount of practice. The results showed immediate expressivity that would require extensive training to achieve with traditional 3D software.
The intuitive modeling process, however, was intermittently interrupted due to the system's susceptibility to errors in hand detection and tracking. In particular, users' gestural strokes could be cut off prematurely due to self-occlusion problems in the hand tracking. When users were informed about why this was happening, however, many found ways of working around these limitations and through conscious effort grew more proficient in keeping fingertips visible while wielding the interactive brush. We anticipate future improvements to tracking hardware and software to overcome this issue. In the interim, an undo key was added as an edit function to allow users to delete any 3D strokes that were abruptly cut short.
The interaction design for the palettes also made their functioning vulnerable to tracking errors. More stable and robust palette designs that enable swift changes in brush properties and fluid transitions between modes would be vital for robust use of the dynamics-driven brush. Substituting the "draw" key with a foot pedal or voice-activated input would also free users to draw with both hands while controlling the stroke weight.
The ribbon and rope prototypes show the exciting new creative opportunities that emerge from expanding our dynamics-driven approach to virtual form creation. Their interaction designs take advantage of our bodily intuitions in manipulating various objects and materials, and of the freedom to suspend the laws of physics in virtual contexts. The spontaneous, expressive forms users are capable of creating with these systems are immensely difficult to approximate with conventional modeling tools. In their current state, however, the prototypes fall short of showing the full creative potential of this approach. Enhanced realism of virtual simulations, a diverse range of changeable form factors and interaction methods, and sophisticated controls over dynamic behavior would greatly expand their utility and expressive capacity for artistic output.
It would be beneficial to conduct a more comprehensive and meticulously designed user evaluation with an updated spring-mass prototype to specify and validate the advantages of a brush that uses dynamic models to augment gestural input, as opposed to one without any gestural augmentation. It would also be insightful to compare differences in user performance in learning and creating with the prototype when a monoscopic desktop monitor is used as opposed to a fully immersive display.
7 CONCLUSION
An immense eort was put into making the physical interaction as
uid and seamless as possible in the above systems. Augmenting
gestural strokes with dynamic simulations for immersive modeling
has proven to encourage creative exploration and experimentation.
Gestural interaction and dynamic models drawn from real-world be-
havior engage our natural spatial instincts and intelligence toward
the shaping of form. The brush prototypes successfully combined
physically inspired interaction techniques and the unique plastic-
ity the virtual world to extend our expressive capacity and create
new forms of expression. Users generally found the experience
creatively satisfying and liberating. Dynamic interactive brushes
may nd application in virtual sculpting, design ideation, rapid
prototyping, and authoring animated virtual environments.
Each prototype in its current state still imposes a strong visual
style specic to its dynamic model. Much more experimentation
with dynamic parameters and applicable forces would be necessary
to nd robust ways of diversifying its creative output.
As identied by expert interviewee Ji, supporting the export of
models will be essential both for renement and for re-use in other
environments or for 3D printing. Moreover, supporting collabora-
tive creation in a shared augmented space could add signicant
value to the experience. As identied by expert interviewee Ha, the
prototypes would also benet from taking more advantage of the
unique aordances of an AR environment to enable richer forms of
interaction that further integrate the physical and virtual, such as
detecting recognizable surfaces and objects in the physical context
and granting them functional roles in the simulated dynamics.
REFERENCES
[1] Julian Adenauer, Johann Habakuk Israel, and Rainer Stark. 2013. Virtual Reality Technologies for Creative Design. In CIRP Design 2012, Amaresh Chakrabarti (Ed.). Springer London, 125–135. https://doi.org/10.1007/978-1-4471-4507-3_13
[2] Aaron Bangor, Philip Kortum, and James Miller. 2009. Determining What Individual SUS Scores Mean: Adding an Adjective Rating Scale. J. Usability Studies 4, 3 (May 2009), 114–123. http://dl.acm.org/citation.cfm?id=2835587.2835589
[3] John Brooke. 1996. SUS: A quick and dirty usability scale. In Usability Evaluation in Industry. CRC Press. ISBN: 9780748404605.
[4] Jeff Butterworth, Andrew Davidson, Stephen Hench, and Marc T. Olano. 1992. 3DM: A Three Dimensional Modeler Using a Head-mounted Display. In Proceedings of the 1992 Symposium on Interactive 3D Graphics (I3D '92). ACM, New York, NY, USA, 135–138. https://doi.org/10.1145/147156.147182
[5] James H. Clark. 1976. Hierarchical Geometric Models for Visible Surface Algorithms. Commun. ACM 19, 10 (Oct. 1976), 547–554. https://doi.org/10.1145/360349.360354
[6] Michael F. Deering. 1995. HoloSketch: A Virtual Reality Sketching/Animation Tool. ACM Trans. Comput.-Hum. Interact. 2, 3 (Sept. 1995), 220–238. https://doi.org/10.1145/210079.210087
[7] Michael F. Deering. 1996. The HoloSketch VR Sketching System. Commun. ACM 39, 5 (May 1996), 54–61. https://doi.org/10.1145/229459.229466
[8] Paul Haeberli. 1989. Dynadraw. Retrieved January 10, 2016 from http://www.graficaobscura.com/dyna/index.html
[9] Johann Habakuk Israel, Laurence Mauderli, and Laurent Greslin. 2013. Mastering Digital Materiality in Immersive Modelling. In Proceedings of the International Symposium on Sketch-Based Interfaces and Modeling (SBIM '13). ACM, New York, NY, USA, 15–22. https://doi.org/10.1145/2487381.2487388
[10] Johann Habakuk Israel, Eva Wiese, Magdalena Mateescu, Christian Zöllner, and Rainer Stark. 2009. Investigating three-dimensional sketching for early conceptual design - Results from expert discussions and user studies. Computers & Graphics 33, 4 (2009), 462–473. https://doi.org/10.1016/j.cag.2009.05.005
[11] Daniel F. Keefe. 2007. Interactive 3D Drawing for Free-form Modeling in Scientific Visualization and Art: Tools, Methodologies, and Theoretical Foundations. Ph.D. Dissertation. Providence, RI, USA. Advisor(s): David H. Laidlaw. AAI3271999.
[12] Daniel F. Keefe, Daniel Acevedo Feliz, Tomer Moscovich, David H. Laidlaw, and Joseph J. LaViola, Jr. 2001. CavePainting: A Fully Immersive 3D Artistic Medium and Interactive Experience. In Proceedings of the 2001 Symposium on Interactive 3D Graphics (I3D '01). ACM, New York, NY, USA, 85–93. https://doi.org/10.1145/364338.364370
[13] Scott Kuehnert. 2015. Express Yourself! Augmenting Reality with Graffiti. Retrieved June 23, 2015 from http://blog.leapmotion.com/express-augmenting-reality-graffiti-3d
[14] Joseph J. LaViola and Daniel F. Keefe. 2011. 3D Spatial Interaction: Applications for Art, Design, and Science. In ACM SIGGRAPH 2011 Courses (SIGGRAPH '11). ACM, New York, NY, USA, Article 1, 75 pages. https://doi.org/10.1145/2037636.2037637
[15] Leap Motion. 2015. 3D Motion and Gesture Control for PC & Mac. Retrieved June 2, 2015 from https://www.leapmotion.com/product/vr
[16] Golan Levin. 2000. Painterly interfaces for audiovisual performance. Master's thesis. Massachusetts Institute of Technology, Cambridge, MA. Advisor: John Maeda. http://hdl.handle.net/1721.1/61848
[17] Wille Mäkelä. 2005. Working 3D meshes and particles with finger tips, towards an immersive artists' interface. In Proc. IEEE Virtual Reality Workshop. Citeseer.
[18] Wille Mäkelä and Tommi Ilmonen. Drawing, painting and sculpting in the air: Development studies about an immersive free-hand interface for artists.
[19] Wille Mäkelä, Markku Reunanen, and Tapio Takala. 2004. Possibilities and Limitations of Immersive Free-hand Expression: A Case Study with Professional Artists. In Proceedings of the 12th Annual ACM International Conference on Multimedia (MULTIMEDIA '04). ACM, New York, NY, USA, 504–507. https://doi.org/10.1145/1027527.1027649
[20] Mark Mine, Arun Yoganandan, and Dane Coffey. 2015. Principles, interactions and devices for real-world immersive modeling. Computers & Graphics 48 (2015), 84–98.
[21] Oculus VR. 2015. Oculus Best Practices Guide. Retrieved June 15, 2015 from http://developer.oculusvr.com/best-practices
[22] M. Reunanen, K. Palovuori, T. Ilmonen, and W. Mäkelä. 2005. Näprä: Affordable Fingertip Tracking with Ultrasound. In Proceedings of the 11th Eurographics Conference on Virtual Environments (EGVE '05). Eurographics Association, Aire-la-Ville, Switzerland, 51–58. https://doi.org/10.2312/EGVE/IPT_EGVE2005/051-058
[23] Steven Schkolne, Michael Pruett, and Peter Schröder. 2001. Surface Drawing: Creating Organic 3D Shapes with the Hand and Tangible Tools. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '01). ACM, New York, NY, USA, 261–268. https://doi.org/10.1145/365024.365114
[24] Christopher Schmandt. 1983. Spatial Input/Display Correspondence in a Stereoscopic Computer Graphic Work Station. SIGGRAPH Comput. Graph. 17, 3 (July 1983), 253–261. https://doi.org/10.1145/964967.801156
[25] Scott Snibbe, Sean Anderson, and Bill Verplank. Springs and constraints for 3D drawing. In Proceedings of the Third Phantom Users Group Workshop.
[26] T. M. Takala, M. Mäkäräinen, and P. Hämäläinen. 2013. Immersive 3D modeling with Blender and off-the-shelf hardware. In 2013 IEEE Symposium on 3D User Interfaces (3DUI). 191–192. https://doi.org/10.1109/3DUI.2013.6550243
[27] Tilt Brush. 2015. Tilt Brush. Retrieved June 22, 2015 from http://www.tiltbrush.com
[28] Jeremy Turner. 2002. Myron Krueger Live. http://www.ctheory.net/articles.aspx?id=328
[29] VRClay. 2015. Sculpting in Virtual Reality with Oculus Rift and Razer Hydra. Retrieved June 22, 2015 from http://vrclay.com
[30] E. Wiese, J. H. Israel, A. Meyer, and S. Bongartz. 2010. Investigating the Learnability of Immersive Free-hand Sketching. In Proceedings of the Seventh Sketch-Based Interfaces and Modeling Symposium (SBIM '10). Eurographics Association, Aire-la-Ville, Switzerland, 135–142. http://dl.acm.org/citation.cfm?id=1923363.1923387
... • Multi-view projections that represent the shape from more 25 than one angle so that collectively, the different views pro- 26 vide useful information about all three dimensions. A set 27 of principal views in third angle orthographic projection is models that result -have a lot in common with freehand PB 23 sketching. ...
... A set 27 of principal views in third angle orthographic projection is models that result -have a lot in common with freehand PB 23 sketching. One particular feature of these new tools is that the 24 user wears a head-mounted display (HMD) and waves hand-25 held controllers around to make strokes, which are then per- 26 sistently suspended in space (from the perspective of the user, 27 see Figure 2). The user can move around these strokes dur- 28 ing the act of sketching and can inspect, add, modify and delete 29 strokes from any angle of view (as well as performing other 30 operations). ...
... Many 18 users wanted immersive 3D sketching tools to include stroke 19 snapping, scaling, haptic feedback, visual depth cues, motion 20 parallax and editing tools to circumvent their lack of sensori-21 motor control [1,3,6]. Such findings motivated researchers to 22 conclude that freehand immersive 3D sketching is insufficient, 23 prompting the development of tools with assistive features: 24 that beautify stroke-based freehand sketches to make them 25 look aesthetically pleasing [7]; tools that make the user sketch 26 on virtual planes [3,7,8]; tools that create geometries and 27 strokes from hand gestures made by the user [9,10,11]; tools 28 that use real-world shapes to guide the placement of sketched 29 strokes [4]. However, immersive 3D sketching tools remain 30 undervalued, misunderstood and unused by many designers 31 who view them as frivolous for serious design tasks [12,3,13]. ...
Article
Full-text available
Paper-based (PB) sketching involves the challenge of representing three-dimensional (3D) shapes on two-dimensional (2D) surfaces. The recent generation of virtual reality (VR) sketching tools offer a way to overcome this challenge. These immersive 3D sketching environments permit the rapid construction of freehand stroke-based 3D models in 3D space while replicating the immediate experience of PB sketching. To explore the potential advantages of VR sketching in visual thinking and visual communication, we conducted investigations with sixteen architectural students engaged in PB and VR sketching tasks. We observed their visualization behavior during VR sketching and their behavior in transitioning between PB and VR sketching. The participants experiences of the two media were also recorded in semi-structured interviews and questionnaires. Our observations show that immersive 3D sketching is a unique form of visual representation that facilitates the rapid and flexible creation of large and detailed (but inaccurate) 3D computer models. It is a multimodal medium that supports visual thinking and communication behaviors associated with PB sketching, CAD modeling, physical model-making and gesturing, all within the same space. This unique combination enables users to engage in visual thinking and visual communication activities in ways that cannot presently be achieved with any other single representation technique.
... Dr. Graham Wakefield's Alice Lab Recent progress in immersive and curious content generation spans many fields and the complex behaviours are often described using trans-disciplinary models of creativity. Multisensory immersive modalities (McCormack 2018), frameworks of socially creative multi-agent AI systems (Wiggins 2006;Wiggins and Bhattacharya 2014), and kinaesthetic virtual expression (Jang 2017;Vi 2017) provide context and implementable strategies that form a basis for the work presented here. Complex visual effects (VFX), referred to here as those which are 3-Dimensional (3D), high-quality and cinematically suitable, is not readily found for generating virtual content from within. ...
... Augmenting kinaesthetic gestures in VR/XR through dynamics-driven simulations (Jang 2017;OMITTED 2016) makes unique artistic integration possible (Morrison 2011). For example, fluidic paint-like trails can leave the user's fingertips with a life of their own; twisting, floating, diffusing, and combining into new forms of artificial, yet curious, life (OMITTED 2019). ...
Preprint
This paper summarizes the development of a novel application that addresses creativity across multiple domains including music, games, visual arts, entertainment , and programming. Through a participatory iterative design and evaluation research methodology this creative application pursues new depths of mixed reality (XR) human-machine interaction (HCI). The nature of the co-creative framework emphasizes the machine's role as a central agent in virtual world-building, and whose creative and artistic decisions are separate from, but of a collaboratory nature to, a Human actor. By harnessing powerful procedural 3D animation and visual effects tools designed and used as industry standards by digital artists to create the highest-quality cinematic results, complex world-making from within Virtual Reality (VR) is made possible. Presented here is a unique system that combines Artificial Intelligence (AI), VR, and complex content generation that utilizes Web-and Cloud-based frameworks to integrate Real-time 3D rendering with procedural modelling and dynamic simulation. Collaborative creativity (CC) is therefore made accessible to both multiple Human agents through tele-presence and to Artificial agents-creatively responding to their constantly evolving virtual world.
... Enhancing the creation of 3D objects by using immersive technologies has a long history in HCI research, from very early works by Butterworth et al. [13], Deering [17], and Wesche and Seidel [54] to very recent ones for free form sketching of curvatures [29,5] and surface modeling [37,30]. Even commercially developed tools like Google TiltBrush [2] are available. ...
... For the scope of this work, we chose to exclude outdoor systems like work by Piekarski [44] or Zlatanova [58] and focus on indoor design environments. In almost all related work, a rough distinction can be made between the two main categories of 3D sketching [5,18,29,32,33,34,55], i.e., drawing lines or contours, and 3D modeling [9,15,30,38,42,43,47,49,57], i.e., creating polygonal or parametric surface models. Of those research projects that employ some form of Mixed Reality, most use Virtual Reality [32,29,37,38,42,47,57], some use Augmented Reality [5,15,18,49], and even unconventional technologies like Light Field Displays [50]. ...
Conference Paper
We present DesignAR, an augmented design workstation for creating 3D models. Our approach seamlessly integrates an interactive surface displaying 2D views with head-mounted, stereoscopic Augmented Reality (AR). This creates a combined output space that expands the screen estate and enables placing 3D objects beyond display borders. For the effective combination of 2D and 3D views, we define different levels of proximity and alignment. Regarding input, multi-touch and pen mitigate issues of precision and ergonomics commonly found in mid-air VR/AR interaction. For creating and refining 3D models, we propose a set of pen and touch techniques with immediate AR feedback, including sketching of rotational solids or tracing physical objects on the surface. To further support a designer's modeling process, we additionally propose orthographic model views and UI offloading in AR as well as freely placeable model instances with real-world reference. Based on our DesignAR prototype, we report on challenges and insights regarding this novel type of display augmentation. The combination of high-resolution, high-precision interactive surfaces with carefully aligned AR views opens up exciting possibilities for future work and design environments, a vision we call Augmented Displays.
... One VR-specific affordance we are exploring right now is embedding nuanced gestural articulations with motion-tracked hands via brush-like tools for 'painting' control sequences and graphic scores. Light painting is a proven application of VR, and we have elsewhere demonstrated the value of extensions of nuanced gesture through simulated dynamics (Jang, Wakefield, & Lee, 2017). We believe this is a unique and distinct use of VR affordances with great potential -and we suspect to some degree that it may mitigate the challenges of precise positioning without haptic feedback, evident in the difficulty of performing a specific note on a Theremin, for example. ...
Article
Despite decades of virtual reality (VR) research, current creative workflows remain far from VR founder Jaron Lanier’s musically inspired dream of collaboratively ‘improvising reality’ from within. Drawing inspiration from modular synthesis as a distinctive musically immersed culture and practice, this article presents a new environment for visual programming within VR that supports live, fine-grained, multi-artist collaboration, through a new framework for operational transformations on graph structures. Although presently focused on audio synthesis, it is articulated as a first step along a path to synthesising worlds.
... Following work in Jang et al. [7], I also felt compelled to create using the perceivable limitless 'space' afforded by a virtual environment-in particular, choices in vertical alignment and spatial extension in the vertical axis were driven by their effective affect induced in an interactant's perceived immersive embodiment and sensitivity to change along this axis. Triggering events in the full 360 degrees available in a physically immersive virtual reality environment sought to encourage a more kinesthetic experience. ...
Conference Paper
Full-text available
The "Curious Creatures" project is an exploratory Research-Creation journey. Here, digital media practices in virtual reality (VR) are developed through an ongoing and evolving methodology. Sensorial engagement and embodiment practices are explored through practical exposure and theoretical study. Interactions between a user and their (VR) environment (as both agents of design and agents of use during the creation process) mirror intellectual and emotional decisions faced throughout the ongoing construction process. Through the study of and participation in the creative process, human agency is tested through these human-computer interactions where virtual environments are constructed with the anticipation of controlling the user's actions. Parallels are drawn to existing art, conceptual frameworks, engineering practices, and technology that inspire this curiosity driven exploration.
Article
This paper describes the process of developing a software tool for digital artistic exploration of 3D human figures. Previously available software for modeling mesh-based 3D human figures restricts user output based on normative assumptions about the form that a body might take, particularly in terms of gender, race, and disability status, which are reinforced by ubiquitous use of range-limited sliders mapped to singular high-level design parameters. CreatorCustom, the software prototype created during this research, is designed to foreground an exploratory approach to modeling 3D human bodies, treating the digital body as a sculptural landscape rather than a presupposed form for rote technical representation. Building on prior research into serendipity in Human-Computer Interaction and 3D modeling systems for users at various levels of proficiency, among other areas, this research comprises two qualitative studies and investigation of the impact on the first author's artistic practice. Study 1 uses interviews and practice sessions to explore the practices of six queer artists working with the body and the language, materials, and actions they use in their practice; these then informed the design of the software tool. Study 2 investigates the usability, creativity support, and bodily implications of the software when used by thirteen artists in a workshop. These studies reveal the importance of exploration and unexpectedness in artistic practice, and a desire for experimental digital approaches to the human form.
Conference Paper
Gestural music compositions through painted animations in VR
Conference Paper
We present an immersive 3D modeling application with stereoscopic graphics, head tracking, and 3D input devices. The application was built in three weeks on top of Blender, an open-source 3D modeling package, and relies solely on affordable, off-the-shelf hardware such as PlayStation Move controllers. Our goal was to create an easy-to-use 3D modeling environment that employs both 2D and 3D interaction techniques and contains several modeling tools. We conducted a basic user study in which novice and professional 3D artists created 3D models with our application. The study participants found the application fun and intuitive to use, but accurate posing of objects difficult. We also examined the participants' beliefs about the future use of immersive technology in 3D modeling. The short implementation time of the application, its many features, and the 3D models created by the study participants set an example of what can be achieved with open-source software and off-the-shelf hardware.
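As a rough sketch of how external 6-DOF input can be hooked into Blender (assuming Blender's Python API; the tracker stub and operator below are hypothetical, not the paper's implementation), a modal operator can poll a controller on a timer and drive the active object:

```python
import bpy

def read_controller_pose():
    """Stub for a 6-DOF tracker; a real system would poll the device here."""
    return (0.0, 0.0, 1.5)  # x, y, z position in metres

class VIEW3D_OT_tracked_brush(bpy.types.Operator):
    """Move the active object from a tracked controller on each timer tick."""
    bl_idname = "view3d.tracked_brush"
    bl_label = "Tracked Controller Brush"

    def modal(self, context, event):
        if event.type == 'ESC':
            context.window_manager.event_timer_remove(self._timer)
            return {'CANCELLED'}
        if event.type == 'TIMER' and context.active_object:
            context.active_object.location = read_controller_pose()
        return {'RUNNING_MODAL'}

    def invoke(self, context, event):
        wm = context.window_manager
        self._timer = wm.event_timer_add(0.02, window=context.window)
        wm.modal_handler_add(self)
        return {'RUNNING_MODAL'}

bpy.utils.register_class(VIEW3D_OT_tracked_brush)
```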
Article
Building a real-world immersive 3D modeling application is hard. In spite of the many supposed advantages of working in the virtual world, users quickly tire of waving their arms about, and the resulting models remain simplistic at best. The dream of creation at the speed of thought has largely remained unfulfilled due to factors such as the lack of suitable menu and system controls, the inability to perform precise manipulations, the lack of numeric input, challenges with ergonomics, and difficulties with maintaining user focus and preserving immersion. The focus of our research is on building virtual-world applications that go beyond the demo and can be used for real-world work. The goal is to develop interaction techniques that support the richness and complexity required to build complex 3D models, yet minimize the expenditure of user energy and maximize user comfort. We present an approach that combines the natural and intuitive power of virtual reality (VR) interaction, the precision and control of 2D touch surfaces, and the richness of a commercial modeling package. We discuss the benefits of collocating 2D touch with 3D bimanual spatial input, the challenges in designing a custom controller to achieve this collocation, and the new avenues the collocation creates. We describe our Touch Where You Can technique, which adapts the user interface to a wide array of hand sizes, minimizing the ergonomic impact on the user. Finally, we demonstrate new interface designs that are better suited to the thumbs-only touch interactions favored by our system.
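The idea behind adapting a touch layout to hand size can be sketched as pure geometry; everything below is a hypothetical illustration of that idea, not the authors' Touch Where You Can implementation.

```python
import math

def fit_layout_to_thumb(anchor, thumb_reach, button_count, button_size=0.012):
    """Place touch buttons on a quarter-circle arc inside the measured
    thumb reach (metres), so all controls stay touchable without regripping."""
    radius = max(button_size * 2, thumb_reach - button_size)
    step = math.radians(90) / max(1, button_count - 1)
    ax, ay = anchor  # 2D grip position of the thumb base on the touch surface
    return [(ax + radius * math.cos(i * step), ay + radius * math.sin(i * step))
            for i in range(button_count)]
```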
Article
As immersive 3D user interfaces reach broader acceptance, their use as sketching media is attracting both commercial and academic interest. So far, little is known about user requirements and the cognitive aspects of immersive 3D sketching, and its integration into the workflow of virtual product development is far from solved. We present results from two focus group expert discussions, a comparative user study on immersive 3D sketching conducted among professional furniture designers, and a qualitative content analysis of user statements. The focus group discussions show a strong interest in using three-dimensional (3D) space as a medium for conceptual design. Users expect it to provide new means for the sketching process, namely spatiality, one-to-one proportions, associations, and formability. Eight groups of functions required for 3D sketching were outlined during the discussions. The comparative study was intended to find and investigate advantages of immersive three-dimensional space and its additional degrees of freedom for creative/reflective externalization processes. We compared a 3D condition and a 2D baseline condition in the same technical environment, a VR-Cave system. In neither condition was haptic feedback provided, and the 2D condition was not intended to simulate traditional 2D sketching on paper. The results of our user study show that both the sketching process and the resulting sketches differ between the 2D and 3D conditions, namely in the perceived fluency of sketch creation, the perceived appropriateness for the task, the perceived stimulation by the medium, the movement speed, the sketch sizes, the degree of detail, the functional aspects, and the usage time. To validate the results of the focus group discussions, we produced a questionnaire to check for the subjectively perceived advantages and disadvantages of both the 2D and 3D conditions. A qualitative content analysis of the user statements revealed that the biggest advantage of 3D sketching lies in the sketching process itself. In particular, the participants emphasized the system's ability to foster inspiration and to improve the recognition of spatiality and spatial thinking. We argue that both 2D and 3D sketching are relevant for early conceptual design. As we progress towards 3D sketching, new tangible interactive tools are needed which account for the user's perceptual and cognitive abilities.
Conference Paper
Due to its immense visualization and interaction possibilities, virtual reality (VR) is often regarded as the "ultimate" future technology for product development and engineering tasks. Although the majority of current industrial use cases for VR lie in reviewing and validating, this article argues that VR has enormous potential to support creative design in the early conceptual phases of product development. To demonstrate this potential, three sample VR systems are presented and matched to the "design principles for tools to support creative thinking" developed by Shneiderman et al. [1]. Based on these exemplary systems and related studies, it is argued that VR holds high potential to considerably support creative design and to improve the early phases of product development.
Conference Paper
In theory, the potential of virtual reality systems for creating visually rich and free-spirited models and prototypes is high. In contrast, immersive modelling plays no relevant role in today's design practice, and design researchers are often sceptical that it will ever be possible to use virtual environments (i.e. virtual material) with the same fidelity as physical materials. The aim of this paper is to search for bridges that allow designers to exploit the potential of immersive modelling even though no materiality (i.e. no touchable material) is present. It describes four approaches to mastering digital materiality which emerged during a design study among four design students who used an immersive modelling system all day for two weeks. All approaches imply different means of substituting for the missing material constraints. Furthermore, the results of a post-study survey among the participants are discussed. The results of this study suggest that designers can find individual ways to handle digital material in immersive environments which may satisfy their professional expectations and standards. They may be able to develop a professional level of manipulative skill within virtual environments comparable to their work with physical material. More approaches to immersive modelling can be expected to appear as the technology advances and designers become engaged with it.
Article
3D interfaces use motion sensing, physical input, and spatial interaction techniques to effectively control highly dynamic virtual content. Now, with the advent of the Nintendo Wii, Sony Move, and Microsoft Kinect, game developers and researchers must create compelling interface techniques and game-play mechanics that make use of these technologies. At the same time, it is becoming increasingly clear that emerging game technologies are not just going to change the way we play games; they are also going to change the way we make and view art, design new products, analyze scientific datasets, and more. This introduction to 3D spatial interfaces demystifies the workings of modern videogame motion controllers and provides an overview of how they are used to create 3D interfaces for tasks such as 2D and 3D navigation, object selection and manipulation, and gesture-based application control. Topics include the strengths and limitations of the various motion-controller sensing technologies in today's peripherals, useful techniques for working with these devices, and current and future applications of these technologies to areas beyond games. The course presents valuable information on how to utilize existing 3D user-interface techniques with emerging technologies, how to develop new interface techniques, and how to learn from the successes and failures of spatial interfaces created for a variety of application domains.
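A staple of the object-selection techniques such a course covers is ray-casting from a tracked controller. Below is a minimal sketch, assuming each object carries a bounding sphere and the ray direction is normalized; it is an illustration of the classic technique, not code from the course itself.

```python
import math

def ray_pick(origin, direction, objects):
    """Classic 3DUI ray-casting selection: cast a ray from the controller
    pose and return the nearest object whose bounding sphere it hits.
    `objects` is a list of dicts with 'centre' (x, y, z) and 'radius'."""
    best, best_t = None, math.inf
    for obj in objects:
        # Solve |origin + t*direction - centre|^2 = radius^2 for t,
        # assuming |direction| == 1.
        oc = [o - c for o, c in zip(origin, obj['centre'])]
        b = sum(d * e for d, e in zip(direction, oc))
        c = sum(e * e for e in oc) - obj['radius'] ** 2
        disc = b * b - c
        if disc >= 0.0:
            t = -b - math.sqrt(disc)  # nearer of the two intersections
            if 0.0 < t < best_t:
                best, best_t = obj, t
    return best
```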