22 January/February 2016 Published by the IEEE Computer Society 0272-1716/16/$33.00 © 2016 IEEE
Applications Editor: Mike Potel
Virtual Sculpting and 3D Printing for
Young People with Disabilities
Leigh Mcloughlin and Oleg Fryazinov
Bournemouth University
Mark Moseley
Victoria Education Centre
Mathieu Sanchez, Valery Adzhiev, Peter Comninos, and Alexander Pasko
Bournemouth University
Artistic activities are an important educational subject in their own right, but they can also provide strong links with other core subjects and life skills, including spatial awareness, object recognition, and aspects such as self-expression and building self-confidence. However, using clay or any other sculpting material is nontrivial for disabled individuals with little or no limb control.
Young people with disabilities may have a very different experience of the physical world than those without. This experience may be influenced by their range of movement, limited gross or fine motor control, or having spent their lives in wheelchairs. Because of these physical difficulties, they may not have had the opportunity to explore the physical properties of different objects and materials in a conventional sense. New technologies are helping to provide ways for young people with disabilities to have these experiences in a virtual sense.
The SHIVA project was designed to use computer-based technologies to extend access to artistic tools for particularly vulnerable population groups: people in rehabilitation (through French partners at the Lille 1 University and through the HOPALE Foundation) and young people with various types of disabilities (through Bournemouth University and Victoria Education Centre in the UK). To achieve this, we built a generic, accessible GUI and a suitable geometric modeling system and used these to produce two prototype modeling exercises. These tools were deployed in a school for students with complex disabilities and are now being used for a variety of educational and developmental purposes. Specifically, our goal was to enable such young people to learn about manipulating objects by providing a few basic virtual sculpting tools.
Given these motivations, our primary research objectives were to

■ consider and establish the ranges of interface requirements needed by young people with disabilities to enable them to use software of varying complexity;
■ examine how a generic user interface system can be designed to meet these requirements to give access to users with a broad range of physical input requirements; and
■ identify and develop virtual sculpting methods that would be appropriate for the target audience (young people with physical and/or cognitive disabilities) and accessible to them given their input requirements, while allowing 3D printing for real-world output.
The current state of the art in accessible technology and virtual sculpting (see the sidebar) includes a range of impressive tools. However, we found that the available virtual sculpting software would not allow us to achieve the goals of the SHIVA project. Our team had to develop new solutions for virtual sculpting based on the group’s
Accessible Technologies and Virtual Sculpting
Two aspects were essential to the SHIVA project: accessible user interface technologies and virtual sculpting. The brief summaries provided here review the state of the art in both areas.
User Models and Accessible User Interfaces
User interfaces provide both the hardware and the software interface layers between the user and the software application. Most computers currently support user input from a keyboard, mouse, touch screen, and to some extent, voice command, and most output modalities rely on information-rich visual displays, which can be difficult for the visually impaired. Although some of the students in our project could use a mouse or touch screen, others struggle to press a single large button or have absolutely no limb control and can only access software through an eye-gaze system.
One of the key aspects of access for disabled users is that each individual has different and specific interface requirements, these requirements can vary according to their specific physical or cognitive abilities, and these abilities are dynamic and liable to change, often throughout each day. The user interface must therefore store some information about its users in order to be configured to their requirements. This information is called a user model and can generally be stored in two ways: a medical approach stores information about the user’s physical capabilities and limitations, and a functional approach stores the user’s interface requirements.
To overcome modern operating system limitations, several specialist accessible interface toolkits have been produced. The Grid 2 is a commercial system commonly used in special schools that provides disabled access primarily for communication as well as for other features (see http://sensorysoftware.com/grid-software-for-aac/). It has a deep level of support for communication, such as offering phonetics for synthesized vocal output, but these features require a specialist speech or language therapist for use and setup. The software provides access through switch, touch screen, eye gaze, head pointer, and mouse and keyboard. This covers a wide range of inputs and features but is not open or flexible enough for a heavyweight application such as interactive 3D modeling.
The MyUI project included an adaptive interface but was targeted mostly at elderly people.1 The UI presents a set of predefined applications including email access, TV, games, and similar consumables. The system automatically adjusts the user profile, which covers vision, hearing, language, information processing, memory, computer skills, speech, dexterity, and muscle strength, while the user is working with it. However, a limited number of sensors are used to detect the user’s abilities and only simple data is stored, so it is unclear if this data can be used in any practical sense.
The GUIDE project has a user-centered approach to creating basic UIs with a variety of input and accessibility options.2 The project, which is mainly aimed at and tested with elderly people, simplifies access to entertainment (TV) and communication (video calls). The system incorporates user profile setup by guiding users through a simple test that determines their cognitive abilities as well as some disabilities such as color blindness. The interface’s input devices include tablets, speech, gestures, and eye gaze. For our purposes, the main limitation of the system is that it is oriented toward information consumption rather than information creation.
Virtual Sculpting
Virtual sculpting is a computer-aided technology that allows for the creation of sculptural artifacts. It can be performed in various ways: using 2D/3D input, an interactive modeling technique that employs pressure-sensitive or haptic interactions, or alternatively, a set of VR interface tools such as cybergloves or digital clay.
Virtual sculpting techniques employ a variety of sculptural metaphors,3 such as the construction of 3D shapes using constructive solid geometry (CSG) techniques or global and local deformations that are suitable for disabled users. Interactive local modifications can be done using different sculpting metaphors. The concept of virtual clay is perhaps the most natural metaphor for virtual sculpting.4 It is supported by a number of commercial and research products such as Geomagic Freeform and Claytools, Cubify Sculpt, Pixologic’s ZBrush, and Sculptris.
In our earlier Augmented Sculpture Project,3 we created a specific interactive environment with embedded sculptural means. Users experience an immersion into a virtual space where they can generate new shapes using either metamorphosis between several predefined sculpture models or the virtual carving tool with such operations as subtraction, offsetting, and blending. Finally, we 3D printed the resulting sculpting artifacts to produce new physical sculptures. The project had both artistic and educational merits5 and the tools and lessons learned fed directly into the SHIVA project.
References
1. M. Peissner et al., “MyUI: Generating Accessible User Interfaces
from Multimodal Design Patterns,” Proc. 4th ACM SIGCHI Symp.
Eng. Interactive Computing Systems (EICS), 2012, pp. 81–90.
2. C. Jung et al., “GUIDE: Personalisable Multi-modal User Interfaces for Web Applications on TV,” Proc. NEM Summit, 2012, pp. 35–40.
3. V. Adzhiev et al., “Functionally Based Augmented Sculpting,” J. Visualization and Computer Animation, vol. 16, no. 1, 2005, pp. 25–39.
4. K.T. McDonnell, H. Qin, and R.A. Wlodarczyk, “Virtual Clay: A Real-Time Sculpting System with Haptic Toolkits,” Proc. ACM Symp. Interactive 3D Graphics, 2001, pp. 179–190.
5. A. Pasko and V. Adzhiev, “Constructive Function-Based
Modeling in Multilevel Education,” Comm. ACM, vol. 52, no.
9, 2009, pp. 118–122.
long-standing research in geometric modeling focused on the Function Representation (FRep)1 due to its feature set and natural 3D printing suitability. We also decided that existing accessible user interface systems were unsuitable for the project’s needs, so we developed a new solution.
This article presents the SHIVA project’s motivations, approach, and implementation details together with initial results, including examples of the 3D printed objects designed by young people with disabilities.
Our Approach
The core of the SHIVA project is a set of interactive applications that allow users to manipulate virtual objects in order to create shapes in a 3D environment. All the implemented software applications incorporate two main components: an accessible GUI and an interactive solid modeling system that allows the manipulation of geometric shapes.
Accessible GUI
To achieve the project’s research objectives and overcome the limitations of the current state of the art (see the sidebar), we needed to develop new interface tools to allow each disabled user to interact with the software. An interface solution must therefore provide a vast range of flexibility and the ability to store settings for each user.
For this user modeling, a functional approach was determined the most suitable because it directly maps the user’s interface needs with the software settings. The choice of input mode and the specific settings would be stored in a user profile, which was to be created for each individual student to meet their needs.
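A functional profile of this kind can be sketched as follows. The field names and values here are illustrative assumptions, not the SHIVA implementation; the point is that the profile records interface settings directly, rather than a medical description of abilities.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Hypothetical functional user model: interface needs, not diagnoses."""
    name: str
    input_mode: str                 # e.g. "switch", "touch", "eye_gaze"
    scan_interval_s: float = 1.0    # switch-scanning highlight time
    dwell_time_s: float = 0.8       # eye-gaze dwell-to-select time
    extras: dict = field(default_factory=dict)

    def to_settings(self) -> dict:
        """Map the profile directly onto GUI configuration values."""
        cfg = {"input_mode": self.input_mode,
               "scan_interval_s": self.scan_interval_s,
               "dwell_time_s": self.dwell_time_s}
        cfg.update(self.extras)
        return cfg

# A switch user who needs a slower scan than the default:
profile = UserProfile("student_a", "switch", scan_interval_s=2.5)
```

Because the mapping is direct, reconfiguring the GUI for a different student is just a matter of loading a different profile.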
The user’s physical needs were gradually identified and discussed, often using simple test applications, in a way that focused on the direct requirements for the interface and identified the required features and their ranges. This informed the development of a generic GUI system that could be used to map all sculpting features to on-screen buttons. This provided a lowest common denominator from which all interface devices and modalities could be successfully employed. For single-button access, the interface used switch scanning,2 where each GUI button is highlighted in turn until the user presses a switch to select the current element. The use of switch scanning allows an interactive system to work with a range of physical user interface technologies ranging from gaze/blink systems to sip-and-puff tubes; finger, foot, or head motions; and others that are adaptable for users across a broad range of disabilities.
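The switch-scanning behavior described above can be sketched as a simple loop; the function and event model are illustrative, not the SHIVA code.

```python
# Minimal switch-scanning sketch: each GUI button is highlighted in turn;
# pressing the single switch selects the currently highlighted element.
def switch_scan(buttons, switch_events):
    """buttons: list of button names.
    switch_events: one bool per scan step, True if the switch was
    pressed while that button was highlighted."""
    highlighted = 0
    for pressed in switch_events:
        if pressed:
            return buttons[highlighted]                  # select current element
        highlighted = (highlighted + 1) % len(buttons)   # advance the highlight
    return None  # user never pressed the switch

# The highlight visits "sphere", then "box"; the press lands on "cylinder".
choice = switch_scan(["sphere", "box", "cylinder"], [False, False, True])
```

In a real GUI the scan steps would be driven by a timer whose interval comes from the user profile, which is why the adjustable timing parameters listed below matter.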
The final SHIVA GUI system features include switch-scanning support with adjustable timing parameters; direct progression with multiple switches; mouse or touchscreen control; button debouncing options; key-mapping options with activation on trailing or leading edges; basic eye-gaze support with adjustable dwell time and configurable rest zones; fully configurable GUI layouts that can be saved and loaded from a user profile; visual styling in themes for use across multiple profiles; visual adjustment in themes and profiles; and configurable graphics for buttons, symbols, and text, including sophisticated color replacement in graphics.
Shape Modeling System Core
Another project objective was to allow the users to perform operations on geometric shapes in a virtual environment (sculpt virtual objects). To keep the modeling core of the system extensible, the layer that handles the representation and manipulation of geometry should be universal. In our system we represent geometry in implicit form by using FRep,1 which represents geometric objects using continuous real functions over a set of coordinates. This representation lets us describe a vast number of geometric primitives and perform operations in the shape modeling system more simply and efficiently compared with other traditional representations, such as polygonal meshes. Easy formulation allows us to work with traditional geometric primitives such as a sphere, box, and cylinder.
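As a rough sketch of the idea (our own simplified illustration, not the FRep library itself): a shape is a real function that is positive inside, zero on the surface, and negative outside, and set operations can be approximated with min/max.

```python
# FRep sketch: a shape is f(x, y, z) with f >= 0 inside, f = 0 on the
# surface, f < 0 outside.
def sphere(r):
    return lambda x, y, z: r * r - (x * x + y * y + z * z)

def box(a, b, c):
    # Intersection of three slabs via min (a simplified R-function).
    return lambda x, y, z: min(a - abs(x), b - abs(y), c - abs(z))

def union(f, g):
    return lambda x, y, z: max(f(x, y, z), g(x, y, z))

def subtract(f, g):
    return lambda x, y, z: min(f(x, y, z), -g(x, y, z))

# A unit sphere unioned with a tall thin box:
shape = union(sphere(1.0), box(0.5, 0.5, 2.0))
inside = shape(0.0, 0.0, 1.5) >= 0  # point is inside the tall box
```

Membership queries like `inside` are all a renderer or polygonizer ultimately needs from the representation.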
Beyond this, more complex geometric primitives such as polygonal meshes can be represented efficiently in the form of signed distance fields,3 which are a natural subset of FRep. Traditionally, the main disadvantage of such a representation is that the object’s geometry cannot be rendered using the common software for standard graphics hardware. Instead, in our system we use direct rendering of the object in the form of real-time ray-casting, accelerated with graphics hardware.4
The objects and operations are represented in the form of a tree structure that generates the defining function for the model. In the leaves of such a tree, we have the geometric primitives, while in the nodes we have operations over other nodes and leaves.5 This allows us to perform operations in the modeling system by modifying the structure of the tree itself by adding and removing nodes. Existing models are fully parameterized by modifying the values for the parameters of primitives and operations.
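A minimal version of such a constructive tree can be sketched as follows; the classes are illustrative, but they show how editing the model reduces to editing the tree.

```python
# Leaves hold primitives (defining functions); interior nodes hold
# operations over their children.
class Leaf:
    def __init__(self, f):
        self.f = f
    def eval(self, x, y, z):
        return self.f(x, y, z)

class Node:
    def __init__(self, op, children):
        self.op, self.children = op, children
    def eval(self, x, y, z):
        return self.op(*[c.eval(x, y, z) for c in self.children])

sphere = Leaf(lambda x, y, z: 1.0 - (x*x + y*y + z*z))  # unit sphere
slab = Leaf(lambda x, y, z: 0.25 - z*z)                 # thin horizontal slab
model = Node(max, [sphere, slab])                       # union via max

# Editing the model = editing the tree: wrap the root in a new operation.
hole = Leaf(lambda x, y, z: 0.04 - (x*x + y*y))         # vertical cylinder
model = Node(lambda f, g: min(f, -g), [model, hole])    # subtract the hole
```

Evaluating `model.eval` at any point yields the defining function of the edited shape, so no geometry ever needs to be rebuilt when the tree changes.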
As an intermediate format for the geometry, we use the volumetric object format developed by the Norwegian company Uformia. This supports most of the operations and primitives existing in the current state of the art in modeling with geometry represented in implicit form. This format allows the interchange of models between applications, using them as the source for direct fabrication and also converting these models to other formats for further operations in other modeling systems and applications. For example, most 3D printing hardware takes only polygonal meshes as input, so we must convert the model from implicit form to a polygonal mesh by using polygonization methods.
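The first step of any such polygonization can be sketched as follows: sample the defining function on a regular grid and find the cells whose corners straddle the surface (a sign change). A marching-cubes-style method would then emit triangles for exactly these cells; this illustration, our own simplification, only counts them.

```python
# Locate grid cells that the implicit surface passes through.
def surface_cells(f, n=16, lo=-1.5, hi=1.5):
    h = (hi - lo) / n
    def corner(i, j, k):
        return f(lo + i * h, lo + j * h, lo + k * h)
    cells = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                vals = [corner(i + di, j + dj, k + dk)
                        for di in (0, 1) for dj in (0, 1) for dk in (0, 1)]
                if min(vals) < 0.0 <= max(vals):  # sign change: surface here
                    cells += 1
    return cells

sphere = lambda x, y, z: 1.0 - (x * x + y * y + z * z)
found = surface_cells(sphere) > 0  # the unit sphere's surface is detected
```

The grid resolution `n` trades mesh fidelity against sampling cost, which is the same trade-off a production polygonizer faces.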
Applications
Within the SHIVA project, we developed two exercise applications, Metamorphosis and Totem Pole, with different levels of complexity for both the user interface and geometric modeling sides.
Metamorphosis Exercise
The first prototype software using the accessible GUI was a metamorphosis exercise, specifically for younger or less cognitively able students. Here, the user chooses two objects and can produce an intermediate shape that is a blend of the two objects. Primary interaction for this is through a slider, and the blended shape is displayed to the user and updated interactively. The user can then rotate their object and apply a color to it.
Figure 1 shows the interface of the Metamorphosis application. The interface is deliberately simple, allowing even users with severe disabilities to interactively perform the transition operation between two shapes.
From the geometric point of view, in this application we perform metamorphosis operations over two FRep objects. Given the nature of the application, we used polygonal meshes of existing real-world objects converted to scalar fields as sources for the input shapes. The metamorphosis operation in its simplest form can be seen as a linear interpolation between the values of the scalar field for the initial object and the target object. We have also tried more complex metamorphosis operations6 such that the user can influence the process in order to obtain more artist-friendly intermediate shapes.
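The simplest form of the metamorphosis, the linear interpolation described above, can be sketched directly; the slider value t in [0, 1] controls the blend.

```python
# Metamorphosis as linear interpolation between two scalar fields:
# f_t = (1 - t) * f_source + t * f_target, with f >= 0 meaning "inside".
def morph(f_source, f_target, t):
    return lambda x, y, z: ((1.0 - t) * f_source(x, y, z)
                            + t * f_target(x, y, z))

# Two illustrative fields: a unit sphere and a radius-2 sphere.
small = lambda x, y, z: 1.0 - (x*x + y*y + z*z)
big = lambda x, y, z: 4.0 - (x*x + y*y + z*z)

halfway = morph(small, big, 0.5)  # f = 2.5 - |p|^2: intermediate size
```

At t = 0 the field is exactly the source object, at t = 1 exactly the target, so the slider sweeps continuously between the two shapes.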
The output of the application is a solid object representing the intermediate stage of the metamorphosis between two objects. Because the object is solid, it can be further used as an input object for the application or used as an input for 3D printing.
Totem Pole Exercise
The second software prototype was a totem pole exercise, which provides a more complex sculpting environment. Here, the user stacks a small number of objects together and then performs simple modeling operations such as affine transformations on individual objects within the stack, or operations such as blending and drilling on the entire stack. The accessible UI is more advanced for this application (see Figure 2), compared with the Metamorphosis application, because it supports more operations, and more input is required from the user. The general approach remains the same, however.
Figure 1. The Metamorphosis exercise. The student can (a) choose two objects, (b) make a morph between them, and (c) then rotate and color the resulting shape. Here, a morph between a sheep and a frog is selected, with an intermediate shape seen in green.
In this application, we allow the user to choose from a number of simple geometric shapes as input. In the current implementation, these shapes include a sphere, box, cylinder, and cone. The set of operations over these objects includes set-theoretic operations (union and subtraction), smooth union blending between objects, and affine transformations. From the implementation point of view, the operations are achieved by modifying the tree representing the object.
As an example, for the drill operation (see Figure 2b) the following steps are performed on the existing tree in order to modify the object based on the user’s input:

1. The parameters for the drill are obtained from the camera position and direction and from the predefined (or set by the user from the GUI) diameter. These parameters define a cylinder and a parent affine transformation node that gives it the correct orientation.
2. A node describing a subtraction operation is added to the tree. The branches of this node are the cylinder’s transform node from step 1 and the top node for the model.
3. The top node for the model becomes the subtraction node added in the previous step.
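The three steps above can be sketched on a tiny CSG tree. The classes and parameter names are illustrative placeholders, not the SHIVA implementation.

```python
# Minimal tree node types for the sketch.
class Primitive:
    def __init__(self, name, **params):
        self.name, self.params = name, params

class Op:
    def __init__(self, name, *children):
        self.name, self.children = name, list(children)

def drill(model_root, camera_pos, camera_dir, diameter):
    # Step 1: a cylinder plus an affine transform orienting it along the
    # camera ray, with the preset (or user-chosen) diameter.
    cyl = Primitive("cylinder", diameter=diameter)
    oriented = Op("affine", cyl)
    oriented.position, oriented.direction = camera_pos, camera_dir
    # Step 2: a subtraction node whose branches are the current top node
    # of the model and the oriented cylinder.
    sub = Op("subtract", model_root, oriented)
    # Step 3: the subtraction node becomes the model's new top node.
    return sub

model = Op("union", Primitive("sphere"), Primitive("box"))
model = drill(model, (0, 0, 5), (0, 0, -1), 0.2)
```

The original model is untouched; the edit simply grows the tree by one subtraction node, which is what makes the operations cheap and reversible.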
Thus, adding objects to the stack is performed with union operations, with affine transform nodes giving each object an offset position. Orientation operations on each object then adjust the parameters of this transform node, and blending is achieved with a smooth-blended union instead. As we discussed earlier, the easy definition of the geometric objects and of the operations lets us create a simple definition of the objects and therefore use these as input for a direct rendering system in the form of real-time ray-casting for interactive visualization of the intermediate stages of the modeling process.
Results
The prototype software implementations were installed at the Victoria Education Centre (VEC) and, thus far, have been used by 11 students with disabilities, including two eye-gaze users (see Figure 3). As part of the active development phase in 2013, we were able to collect feedback from actual users, which was vital for refining the GUI system and software prototypes. Since then, the software has been used by teachers and speech therapists with a variety of students, leading to actual 3D printed results. In each case, the student was under the close supervision and guidance of an assistive technologist in addition to relevant educators and speech therapists.
3D Printed Sculptures
The primary technical goal of the SHIVA project was to allow users with disabilities to create virtual and then 3D-printed sculptural artifacts. Using the system, students have been able to successfully produce a range of objects, thereby validating the software and the process. Figure 4 shows some results.
Educational Integration
Some of the more able students were invited to try the software during the development phase. This was introduced to them in a careful, controlled manner because their reactions to potentially unstable prototype software were unknown. In reality, the students were extremely enthusiastic and took particular delight in discovering software bugs.
The SHIVA software has been successfully used
to help teach students about spatial relationships
between objects and general spatial awareness. It
Figure 2. The Totem Pole exercise interface. (a) The main construction screen includes buttons for adding primitive geometric shapes to the stack, navigation buttons, and a designed shape showing primitive stacking, blending, and drilling. (b) The drill operation screen has cross-hair controls for drill location and controls for rotating the object.
has been used to teach and reinforce spatial concepts such as up, down, behind, and rotate. This has also helped the teachers to understand how young people with severe physical difficulties perceive certain concepts.
For example, the SHIVA Totem Pole software was used by two eye-gaze users, and it was discovered that they both had similar difficulties with the concept of a stack. Teachers used a physical stack of primitive objects and asked the users to reproduce the stack using the software. In the SHIVA Totem Pole software, objects are added in a stack that starts from the bottom, adding one object on top of the other. However, both eye-gaze users would consistently try to start the stack from the topmost shape first. The teachers believe the reason for this is that these users have simply never had the experience of physically placing one object upon another, so the concept is new to them. Identifying this has then helped the teachers in their understanding of which fundamental spatial concepts need to be introduced to such students and in assessing their comprehension.
The software is frequently used to help students understand how the shape of an object may be constructed from a set of simple primitive shapes and to help them understand the differences between representations of 2D and 3D shapes. A teacher will often draw an idea on a sheet of paper and then ask the student to reconstruct it using the SHIVA software, giving instructions and answering questions during the process. The teaching approach focuses on the students enjoying the experience, rather than on them producing a “correct” end result. However, students have already shown progress. One student who started by creating essentially random objects is now able to create identifiable models of simple objects such as a cat or a teddy bear.
Speech therapists at VEC have successfully used
the software with students in their regular activi-
ties to help with speaking and listening as well
as cognitive development aspects. They work with
students on the concepts of sequencing, following
instructions, communicating ideas, and collabora-
tive work. Early observations suggest this approach
will lead to good results for the students, espe-
cially due to the high levels of engagement with
the software.
The SHIVA software has started to be incor-
porated into regular scheduled art lessons at
VEC. Figure 5 shows examples from a shoe-box
landscape project inspired by the work of British
sculptor Andy Goldsworthy. This project involves
students creating sculptural arches as viewing
windows, similar to those in Goldsworthy’s fa-
mous works. This was well received by students
and led to increased engagement with the project.
Figure 5 shows installations created by students.
The physical benefits of the software are also being investigated. Therapists are starting to use it as an aid for improving manual dexterity or, through the touch-screen interface, to gradually encourage students to increase their range of movement by asking them to reach for more distant buttons and input controls.
Future Work
The SHIVA software prototypes provided a relatively small subset of modeling features, which limits the range of objects that can be created with the system. A more flexible system would be desired for more advanced users (children and adults), although it is unclear which modeling paradigms would be the most suitable for disabled access.
The SHIVA GUI represented a state-of-the-art
prototype system, but there were a number of issues
and further needs resulting from the project. For
example, the system had a heavy requirement on
Figure 3. A student using the Totem Pole exercise at the Victoria
Education Centre (VEC). In this example, the student is utilizing the
interface’s eye-gaze control.
Figure 4. Designs produced using the SHIVA Totem Pole exercise and
printed at the VEC on a 3D printer. The students largely used touch-
screen interaction or eye-gaze tracking.
technical support staff for profile creation, which was stored in an XML format. Also, automatic user adaptation was identified as important because it would help adjust to the user’s needs. In the future, we plan to include support for additional input modalities, such as brain-computer interfaces, gestures, multitouch (including gesture interpretation and collaborative input), tablet devices, and more general and complete eye-gaze support.
Acknowledgments
We thank the VEC staff and students for their support of this project as well as our colleagues for helping with the writing of this article. The SHIVA project was funded by the European Union INTERREG IV Two-Seas Program. The SHIVA project received a 2015 Times Higher Education Award in the UK in the category of Outstanding Digital Innovation in Teaching or Research.
References
1. A. Pasko et al., “Function Representation in
Geometric Modeling: Concepts, Implementation
and Applications,” The Visual Computer, vol. 11, no.
8, 1995, pp. 429–446.
2. D. Colven and S. Judge, Switch Access to Technology, ACE Centre Advisory Trust, 2006.
3. M. Sanchez, O. Fryazinov, and A. Pasko, “Efficient Evaluation of Continuous Signed Distance to a Polygonal Mesh,” Proc. 28th Spring Conf. Computer Graphics (SCCG), 2012, pp. 101–108.
4. O. Fryazinov and A. Pasko, “Interactive Ray Shading of FRep Objects,” Proc. Int’l Conf. Central Europe on Computer Graphics, Visualization, and Computer Vision (WSCG), 2008, pp. 145–152; http://wscg.zcu.cz/wscg2008/Papers_2008/short/C83-full.pdf.
5. A. Pasko and V. Adzhiev, “Function-Based Shape Modeling: Mathematical Framework and Specialized Language,” Automated Deduction in Geometry, LNAI 2930, Springer-Verlag, 2004, pp. 132–160.
6. M. Sanchez et al., “Morphological Shape Generation
through User-Controlled Group Metamorphosis,”
Computers & Graphics, vol. 37, no. 6, 2013, pp.
620–627.
Leigh Mcloughlin is a lecturer at Bournemouth University, UK. Contact him at lmcloughlin@bournemouth.ac.uk.

Oleg Fryazinov is a senior lecturer at Bournemouth University. Contact him at ofryazinov@bournemouth.ac.uk.

Mark Moseley is an assistive technologist at the Victoria Education Centre. Contact him at mmoseley@victoria.poole.sch.uk.

Mathieu Sanchez is a PhD student at Bournemouth University. Contact him at mathieu.p.sanchez@gmail.com.

Valery Adzhiev is a senior research lecturer at Bournemouth University. Contact him at vadzhiev@bournemouth.ac.uk.

Peter Comninos is a professor at Bournemouth University. Contact him at peterc@bournemouth.ac.uk.

Alexander Pasko is a professor at Bournemouth University. Contact him at apasko@bournemouth.ac.uk.
Contact department editor Mike Potel at potel@wildcrest.com.
Figure 5. Example student projects. This shoe-box landscape project was inspired by the work of artist Andy Goldsworthy.