The Impact of Tangible User Interfaces on Designers’
Spatial Cognition
Mi Jeong Kim
Key Centre of Design Computing and Cognition, University of Sydney
Mary Lou Maher
Key Centre of Design Computing and Cognition, University of Sydney
RUNNING HEAD: TANGIBLE USER INTERFACES AND SPATIAL
COGNITION
Corresponding Author’s Contact Information:
Mi Jeong Kim, PhD student
Key Centre of Design Computing and Cognition, University of Sydney
Wilkinson Building G04, Sydney, NSW2006, Australia,
Phone +61 2 9351 2053
Email: mkim9133@arch.usyd.edu.au
Brief Authors’ Biographies:
Mary Lou Maher is a design researcher with an interest in novel user interfaces to
support designing and computer support for collaborative design; she is the Professor of
Design Computing at the University of Sydney. Mi Jeong Kim is a design researcher
with an interest in designers’ cognition while using tangible user interfaces; she is a PhD
student in the Key Centre of Design Computing and Cognition at the University of
Sydney.
ABSTRACT
Most studies on tangible user interfaces for the tabletop design systems are being
undertaken from a technology viewpoint. While there have been studies that focus on the
development of new interactive environments employing tangible user interfaces for
designers, there is a lack of evaluation with respect to designers’ spatial cognition. In this
research we study the effects of tangible user interfaces on designers’ spatial cognition to
provide empirical evidence for the anecdotal views of the effect of tangible user
interfaces. In order to highlight the expected changes in spatial cognition while using
tangible user interfaces, we compared designers using a tangible user interface on a
tabletop system with 3D blocks to designers using a graphical user interface on a desktop
computer with a mouse and keyboard. The ways in which designers use the two different
interfaces for 3D design were examined using a protocol analysis method. The result
reveals that designers using 3D blocks perceived more spatial relationships among
multiple objects and spaces, and discovered new visuo-spatial features when revisiting
their design configurations. The designers using the tangible interfaces spent more time
in relocating objects to different locations to test the moves, and interacted with the
external representation through large body movements implying an immersion in the
design model. These two physical actions assist in designers’ spatial cognition by
reducing cognitive load in mental visual reasoning. Further, designers using the tangible
interfaces spent more time in restructuring the design problem by introducing new
functional issues as design requirements, and produced more discontinuities to the design
processes, which provides opportunity for reflection and modification of the design.
Therefore this research shows that tangible user interfaces change designers’ spatial
cognition, and that these changes in spatial cognition are associated with creative design
processes.
CONTENTS
1. INTRODUCTION
2. SPATIAL COGNITION IN DESIGNING
2.1. Epistemic action vs. Pragmatic action
2.2. Spatial Cognition
2.3. Creative Design Process
2.4. Hypotheses
3. COMPARING GUI TO TUI
3.1. Experiment Design
Interfaces: 3D blocks vs. Mouse and Keyboard
Systems: Tabletop vs. Desktop
Applications: ARToolKit vs. ArchiCAD
Design Tasks: Home office and Design office
Participants
3.2. Experiment Set-ups
TUI session
GUI session
3.3. Experiment Procedure
Training
Experiment
4. METHOD: PROTOCOL ANALYSIS
4.1. Protocol Analysis in Design Research
4.2. Coding Scheme
4.3. Protocol Coding
Segmentation
Coding Process
5. ANALYSIS OF DESIGNERS’ SPATIAL COGNITION
5.1. Overall Observations
5.2. Analysis of the Three Levels of Designers’ Spatial Cognition
Action Level: 3D modeling and Gesture Actions
Perception Level: Perceptual and Functional Activities
Process Level: Set-up Goal Activities and Co-evolution
5.3. Correlation between Physical Actions and Perceptual Activities
3D modeling Actions and Perceptual Activities
Gesture Actions and Perceptual Activities
6. DISCUSSION AND CONCLUSION
6.1. Hypotheses Validation and Conclusion
6.2. Implications and Future Direction
1. INTRODUCTION
A current paradigm in the study of Human-Computer Interaction (HCI) is to develop
novel user interfaces which afford a natural interaction that takes advantage of both human
and computer perceptual capabilities (Turk 1998). People are developing tangible user
interfaces (TUIs) as alternatives to traditional graphical user interfaces (GUIs) to meet a
need for a more natural and direct interaction with computers. The term ‘TUIs’ was
introduced by Ullmer and Ishii (1997) as an extension of the ideas of
‘graspable user interfaces1’; they argued that TUIs allow users to ‘grasp & manipulate’
bits by coupling digital information with physical objects and architectural surfaces.
Numerous tabletop systems have been customized for design applications and
demonstrate many potential uses for TUIs (Coquillart and Wessche 1999; Fjeld et al.
1998; Obeysekare et al. 1996; Ullmer and Ishii 1997). They restore some of the
tangibility by providing various physical interfaces through which designers create and
interact with digital models. We are particularly interested in TUIs employed in tabletop
systems for design applications since the tangible interaction afforded by the TUIs has
potential to offer significant benefit to designers for 3D design.
Most studies on TUIs for tabletop systems are being undertaken from a technology
viewpoint (Fitzmaurice et al. 1995; Regenbrecht et al. 2002; Underkoffler and Ishii
1999). They described the fundamental ideas behind the systems and implemented
prototypes for possible applications. Some initial user studies were conducted for the
implementation of the prototypes, but the focus has been on the functionality of the
prototypes, and the prototypes have not been evaluated from a cognitive perspective.
Further, many researchers have argued that TUIs improve designers’ spatial cognition,
but there has been no empirical evidence to support this (Fjeld et al. 1998; Lee et al.
2003; Ma et al. 2003). Although some researchers have reported on the users’ perception
of TUIs using survey questionnaires or designer comments, the subjective nature of self-
reports questions their validity as measures of cognitive ability (Vega et al. 1996). Our
research starts from these gaps in the existing research on TUIs: technology-oriented
studies, anecdotal views, and the subjective measurement of cognition.
In this research we study the effects of TUIs on designers’ spatial cognition using
protocol analysis. In the context of this research, spatial cognition is defined as
perceiving and reasoning about visuo-spatial information in an external representation in
architectural design. TUIs can be easily and rapidly manipulated because of the natural
interaction afforded by the physical artifacts. However, this raises the question of whether
such physical interaction improves designers’ spatial cognition in a real design task. We
believe that a more in depth understanding of the effects of TUIs on designers’ spatial
cognition would provide a perspective other than usability and is essential for the
1 Fitzmaurice (Fitzmaurice, G. (1996). Graspable User Interfaces. PhD Thesis, University of Toronto; Fitzmaurice,
G.W., Ishii, H. and Buxton, W. (1995). Bricks: Laying the Foundations for Graspable User Interfaces. In I. Katz, R.
Mack, L. Marks (Eds.), Proceedings of the CHI'95 Conference on Human Factors in Computing Systems, ACM Press,
New York, 442-449) defines and explores graspable user interfaces, presenting five basic defining properties: space-
multiplexed input and output; concurrent access and manipulation of interface components; strong specific devices;
spatially-aware computational devices; and spatial re-configurability of devices.
development of tabletop systems. Based on a literature study we propose that TUIs on a
tabletop system will change some aspects of designers’ spatial cognition for 3D design.
Furthermore the changes in spatial cognition may be associated with creative design
processes. Through the comparison of design collaboration using a GUI vs. a TUI in a
pilot study, we found improved spatial understanding of object relationships in the TUI
collaboration (Maher and Kim 2005). In this paper we report the results of an experiment
using a protocol analysis, in which designers’ cognitive activities are collected using the
think aloud method. The significance of this research is to empirically examine the ways
in which designers perform spatial design using TUIs, in terms of spatial cognition.
2. SPATIAL COGNITION IN DESIGNING
This research is concerned with designers’ spatial cognition while carrying out 3D
spatial configuration using user interfaces in a digital environment. Cognitive design
studies have put much emphasis on the localized information processing at the individual
designer level. However, we approach the study of ‘spatial cognition in designing’ from
three different perspectives: action, perception and process. This distributed cognition
approach emphasizes the interaction of a person with tools and artifacts (Halverson 1994;
Rogers and Ellis 1994). It is important to understand how user interfaces to digital
models affect designers’ actions, how spatial cognition2 is defined in designing, and then
what aspects of the design process are associated with designers’ spatial cognition. A
consideration of these three perspectives formed the basis for the development of our
hypotheses and coding scheme.
2.1. Epistemic action vs. Pragmatic action
Fitzmaurice (1996) discussed the notion of epistemic and pragmatic
actions to provide the underlying theoretical support for graspable user interfaces.
Epistemic actions refer to ‘exploratory’ motor activity to uncover information that is hard
to compute mentally. One example of an epistemic action is a novice player’s move
in chess, which offloads some internal cognitive resources into the external world using
physical actions. Many players move pieces around to candidate positions to assess the
moves and possible counter-moves by an opponent. In contrast, pragmatic actions refer to
‘performatory’ motor activity that directs the user closer to the final goal. For example,
the user expecting only pragmatic actions would set a goal first, and perform the minimal
motor action to reach the goal (Fitzmaurice 1996; Gibson 1962; Kirsh and Maglio 1994).
Our interest is in the argument that interfaces with physical objects may offer more
opportunities for epistemic actions (Fitzmaurice 1996). The potential affordances of the
TUIs such as rapid manipulability and physical arrangements may reduce the designers’
cognitive loads, thus resulting in changes in designers’ spatial cognition. Gray et al.
(2000) demonstrated that small features of an interface constrain interactive
behaviors, thereby having effects on cognitive behaviors. The coincidence of action and
2 In this research the term ‘designing’ refers to a design activity and the term ‘design’ refers to the result of the design
activity.
perception spaces inherent in the design of TUIs enables epistemic actions, which
provides direct-interpreted 3D design platforms (Fjeld et al. 2001; Lee et al. 2003).
In addition, we consider designers’ hand movements along with their design activity
as possibly beneficial for cognitive processing because such movements characterize the
spatial relationships among entities, thus promoting spatial reasoning (Goldin-Meadow
2003; Lavergne and Kimura 1987). In the ‘coin-counting’ experiment, Kirsh (Kirsh 1995;
Kirsh and Maglio 1994) demonstrated that organizing activities such as positioning and
arranging nearby objects reduce cognitive load as a complementary
strategy for task performance. Thus, we explored the role of hand movements or
gestures in terms of a complementary strategy for designing, which would serve a similar
function to the 3D modeling actions that are integral to cognition.
2.2. Spatial Cognition
‘Spatial’, or ‘visuo-spatial’, cognition is a broad field of enquiry emerging from a
range of disciplines (Foreman and Gillett 1997; Knauff et al. 2002). According to De
Vega (Vega et al. 1996), people process visuo-spatial information in at least two different
ways. The first way is to pick up information through the visual perception about the
visuo-spatial features of objects and spatial relations among them. Such visual perception
deals with the transition from sensation to perception, in which perceptual images of
spatial scenes are constructed in a bottom-up fashion. The second way is to process
visuo-spatial information without sensory support, derived from the top-down retrieval,
or generation of virtual images that are used in the context of explicit or implicit task
demands. Through the construction of mental representations, people can combine visuo-
spatial elements in new ways, perform and simulate mental transformations on them, and
engage in reasoning and problem solving.
While designing, designers are involved in spatial cognition through constructing
either the external or the internal representations (Bruner 1973; Tversky 2005), in which
they do abstract reasoning from the representations, more specifically functional
inferences related to the behaviors of entities in problem-solving tasks (Carroll et al.
1980). Each level of representation leads designers to evolve their interpretations and
ideas for solutions through the execution of action and reflection (Goldschmidt and Porter
2004; Norman 1993; Schön 1992). For this research we define designers’ spatial
cognition as reflective interaction between the external representation and the internal
representation of the problem-solution processed by the perception and reasoning about
visuo-spatial information. By 'perceiving' we mean the process of receiving and
interpreting information from the representations and by ‘reasoning’, the thinking and
problem-solving activity which goes beyond the information given, and which is closely
related to functions of a physical artifact and space.
2.3. Creative Design Process
Cognitive psychology associates ‘creative’ with certain processes that have the
potential to produce ‘creative’ artifacts in designing (Gero 1992; Visser 2004). In this
research, we adopt the notions of ‘S-invention’ and ‘co-evolution’ for the ‘problem-
finding’ behaviors associated with the creative design process. First, Suwa et al. (Suwa et al.
2000) propose situated invention of new design requirements (S-invention) as a key to
obtaining a creative outcome. ‘S-invention’ refers to the set-up goal activities of
introducing new functions as design requirements for the first time in the current task in a
situated way. The introduction of new constraints captures important aspects of the given
problem, going beyond a synthesis of solutions that satisfy the initially given
requirements. In a similar context, Cross and Dorst (1999) propose that
creative design can be modeled in terms of the co-evolution of problem and solution
spaces. Co-evolutionary design is an approach to problem-solving in which the design
requirements and solutions evolve separately, but affect each other (Maher et al. 1996).
The restructuring of a problem reflects a change in the designer’s perception of a
problem situation. With regard to the designers’ perception of a problem situation, Suwa
et al. (Suwa et al. 2000) propose ‘unexpected discoveries of attending to implicit visuo-
spatial features in an unexpected way’ as a key to gaining a creative outcome. Suwa and
Tversky (Suwa and Tversky 2001, 2002) propose the co-generation of new conceptual
thought and perceptual discoveries as ‘constructive perception’. Such ‘constructive
perception’ allows the designer to perceive in another way, which may evoke the ‘re-
interpretation’ that provides the opportunity for the designer to be more creative (Gero
and Damski 1997). For the generation of ‘re-interpretations’ in external representations,
Gero et al. (Gero and Damski 1997; Gero and Kelly 2005; Gero and Yan 1993; Jun and
Gero 1997) emphasize the process of ‘re-representation’ producing multiple
representations since it allows emergence to occur, thereby introducing new variables for
the revision of design ideas and, as a consequence, leading to creative design.
2.4. Hypotheses
The background study on TUIs has argued that interfaces employing manipulable
physical objects have the potential to afford epistemic actions, reducing cognitive load.
We may argue that TUIs support designers’ spatial cognition if the 3D modeling actions
produced by TUIs can be characterized as epistemic actions while designing. In a similar
way, designers’ gestures are also considered as organizing activities which reduce
cognitive load. Therefore, at the Action level, we hypothesized about designers’ physical
actions while using TUIs as follows:
Hypothesis 1: The use of TUIs can change designers’ 3D modeling actions in
designing - 3D modeling actions may be dominated by epistemic actions.
Hypothesis 2: The use of TUIs can change designers’ gesture actions in designing –
more gesture actions may serve as complementary functions to 3D modeling actions
in assisting in designers’ cognition.
The unstructured forms of pictorial representation in sketches can potentially be
perceived in different ways (Purcell and Gero 1998). However, the 3D spatial configuration
dealt with in this research does not present such ambiguous representations. Rather,
the functions of objects and spaces associated with the external representations are
ambiguous. We expected that the tactile interaction afforded by TUIs may stimulate
designers to attend to the dynamic spatial relationships among elements rather than single
elements. Further, the multiple representations produced by the TUIs may encourage
designers to create new visuo-spatial features. The perception of spatial relationships
is especially functional, and this abstract relationship can be linked to more conceptual
information. Therefore, at the Perception level, we hypothesized about designers’ perceptual
activities while using TUIs as follows:
Hypothesis 3: The use of TUIs can change certain types of designers’ perceptual
activities - designers may perceive more spatial relationships between elements, and
create more and attend to new visuo-spatial features through the production of
multiple representations.
Our research is based on the assumption that the changes in designers’ spatial
cognition may affect the design process. If the ‘problem-finding’ behaviors and the
process of ‘re-representation’ increase while using TUIs, we may argue that the design
process is affected by the changes of designers’ spatial cognition, ultimately leading to
creative design. Therefore, at the Process level, we hypothesized the effect of the changes
of spatial cognition on the design processes while using TUIs as follows:
Hypothesis 4: The use of TUIs can change the design process – the changes in
designers’ spatial cognition may increase problem-finding behaviors and the process
of ‘re-representation’, which are associated with creative designing.
3. COMPARING GUI TO TUI
In order to highlight the expected changes in spatial cognition while using TUIs, we
compare designers in the following two settings: a tabletop design environment with
TUIs and a desktop design environment with GUIs. The use of two interfaces is the major
variable in the study, while the remaining variables are set in order to facilitate the
experiments but not influence the results.
3.1. Experiment Design
Interfaces: 3D blocks vs. Mouse & Keyboard
Based on the literature study, we decided to use 3D blocks as tangible input devices
for a TUI and a mouse and keyboard as input devices for a GUI in the experiments.
Among various types of 3D blocks, we adopted the same method used by Daruwala et al.
(Daruwala 2004; Maher et al. 2004) because of its simplicity and relevance to our study.
Multiple 3D blocks allow direct control of virtual objects as space-multiplexed input
devices, each specific to a function and independently accessible. The binary patterns
attached to the 3D blocks were made in ARToolKit for the display of the 3D virtual
models (McCarthy and Monk 1994). In terms of GUIs, a mouse and keyboard are highly
generalized time-multiplexed input devices which control different functions at different
times. These two general input devices serve as a baseline against which to compare the 3D
blocks of a TUI. They are used to manipulate a set of GUI elements such as windows,
icons and menus that reside in a virtual form (Fitzmaurice et al. 1995; Turk 1998).
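The space-multiplexed vs. time-multiplexed distinction described above can be sketched as a toy model (the class and object names are hypothetical, for illustration only; this is not the experiment software):

```python
class Block:
    """Space-multiplexed device: one physical block permanently bound
    to one virtual object, so moving it manipulates that object directly."""
    def __init__(self, model):
        self.model = model            # the virtual object it always controls
        self.pose = (0.0, 0.0, 0.0)

    def move(self, pose):             # no selection step is ever needed
        self.pose = pose
        return (self.model, pose)

class Mouse:
    """Time-multiplexed device: a single device re-bound to different
    objects over time, requiring an acquisition step before each move."""
    def __init__(self):
        self.selected = None

    def select(self, model):          # extra step absent from the TUI case
        self.selected = model

    def move(self, pose):
        return (self.selected, pose)

# TUI: each block is its own handle, all concurrently accessible.
desk, shelf = Block("desk"), Block("shelf")
desk.move((1, 2, 0))
shelf.move((3, 0, 0))

# GUI: the one mouse must re-select before controlling another object.
mouse = Mouse()
mouse.select("desk")
mouse.move((1, 2, 0))
mouse.select("shelf")
mouse.move((3, 0, 0))
```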
Figure 1 (a) a 3D block with a pattern; (b) a ‘shelf panel’ virtual model; (c) multiple 3D blocks
Systems: Tabletop vs. Desktop
An existing tabletop system was chosen as the design environment for using a TUI as
opposed to a conventional desktop computer system for a GUI. The tabletop system
constructed by Maher et al. (Daruwala 2004; Maher et al. 2004) is the medium in which
the tangible input and output devices reside and tangible interaction takes place. Figure 2
shows the tabletop system with its horizontal display, an LCD vertical display, and input
devices (Daruwala 2004). The vertical screen was used to extend the visualization of the
spatial system shown in plan view on the horizontal display surface. The desktop system
is a typical desktop PC comprising an LCD screen, a mouse and keyboard. The physical
control space with the mouse and keyboard is separated from the virtual output space by
a vertical screen.
Figure 2. Tabletop system; (a) Horizontal table (b) Vertical screen (c) 3D blocks (Daruwala 2004)
Applications: ARToolKit vs. ArchiCAD
Since augmented reality (AR) is closely related to TUIs, it was used as the framework for the TUI
experiments. ARToolKit3 was chosen for its suitability for allowing the objects to retain
their physical forms and augmenting the 3D visual outputs on the vertical display. For a
GUI, we used ArchiCAD because designers are already familiar with CAD software, and
ArchiCAD is a popular CAD system with typical GUI features. ARToolKit (Billinghurst
et al. 2003) determines the virtual camera’s viewpoint by detecting fiducial tracking
markers using vision-based methods, as shown in Figure 3. In order to create the
database for the design tasks, 30 furniture models were selected from the library in
ArchiCAD and converted into VRML models. This was done to allow the same furniture
models to be used in the two different design environments.
3 ARToolKit is free AR software including tracking libraries and source codes for the libraries.
Figure 3. Diagram showing the image processing used in ARToolKit; (a) a live video image (b) a binary image;
(c) a virtual overlay from http://www.fhbb.ch/hgk/af/livingroom/livingroom1/sources/ARToolKit2.33doc.pdf
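The step from the live video image (a) to the binary image (b) can be illustrated with a minimal sketch. This is a hedged reconstruction of the general vision pipeline shown in the figure, not ARToolKit’s actual C API; a real tracker would go on to fit a square to the region, match its interior pattern, and estimate the camera pose for the overlay (c):

```python
def binarize(frame, threshold=100):
    """Step (a) -> (b): turn a grayscale frame (2D list of 0-255 ints)
    into a binary image, 1 where the pixel is dark enough to be part of
    a black fiducial marker."""
    return [[1 if px < threshold else 0 for px in row] for row in frame]

def marker_bbox(binary):
    """Locate the bounding box of the dark region as (top, left,
    bottom, right); returns None when no marker pixels are visible,
    which is why the blocks must stay in the webcam's field of view."""
    coords = [(r, c) for r, row in enumerate(binary)
                     for c, v in enumerate(row) if v]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return (min(rows), min(cols), max(rows), max(cols))

# A 5x5 frame, all white except a dark 2x2 patch (the marker).
frame = [[255] * 5 for _ in range(5)]
for r in (1, 2):
    for c in (2, 3):
        frame[r][c] = 10

print(marker_bbox(binarize(frame)))   # bounding box of the marker
```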
ArchiCAD enables the designer to create a ‘virtual building’ with 3D structural
elements like walls, doors and furniture, and provides pre-designed objects in a library.
ArchiCAD can create both 2D and 3D drawings, but the designers were required to
manipulate furniture only in a 3D view, thus the ability to interact with the objects in
ArchiCAD was similar to that in ARToolKit. The 3D forms of virtual models of
ArchiCAD allowed designers to pick them up and drag them using a mouse, and thus
provided designers with a method of manipulation similar to that of the 3D blocks. The
same 30 virtual furniture models used for the 3D blocks were selected.
Figure 4. ArchiCAD; (a) 2D view (b) 3D view (c) Library from http://www.graphisoft.com
Design Tasks: Home office and Design office
An appropriate design problem for the experiment had to be devised carefully in order
to keep the protocols to a manageable size (Akin 1986; Cross et al. 1996). We wanted our
design problem to be easily understood by architecture students, and therefore chose a small-
scale space-planning problem using furniture because this framework seemed to be the
most relevant for both the 3D blocks in ARToolKit and ArchiCAD. Each 3D block
represents a piece of furniture, and pre-designed furniture can be imported from the
library in ArchiCAD using the mouse and keyboard.
The two design tasks were developed to be similar in complexity and type as shown
in Figure 5. In order to stimulate designers’ perceptual activities on 3D objects, their
relationships to each other, and their location within a 3D space, we made the design
tasks renovation tasks for redesigning existing studios. The goal of the home office
design task was to define four required areas, sleeping, kitchen & dining, working, and
living & meeting areas for renovating a residential studio into a home office for a
computer programmer. The goal of the design office task was to define four required
areas, that of the designer, the secretary, the reception and utility areas for renovating a
designer’s private studio into a commercial design office for the designer and a secretary.
Figure 5. (a) 3D Home office plan and (b) 3D Design office plan
Participants
This research explores how different HCI affordances may change designers’ spatial
cognition using protocol analysis, so the decision on the number of designers is different
from those in HCI research that generalizes basic human performance capability from a
large number of designers. Designing is a high level of cognitive activity, so many
empirical studies on designers’ cognition include a relatively small number of designers
to seek an understanding of specific cognitive processes (Akin and Moustapha 2003; Ball
2003; McNeill 1999). Each segment of the design protocols is a data item, so our
protocols contain a large number of data elements. We use fewer designers, but still have
a significant amount of data to validate a quantitative analysis. The designers are 2nd or
3rd year architecture students competent in ArchiCAD, so they have almost the same
range of experience and educational backgrounds.
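The argument that few designers can still yield a statistically useful dataset can be made concrete with a small tally sketch. The segment counts and category labels below are invented for illustration, not the study’s actual data:

```python
from collections import Counter

# Hypothetical coded protocol segments as (designer_id, category) pairs.
# Each think-aloud segment is one data item, so even a handful of
# designers produces a large pool of codable observations.
segments = (
    [("D1", "perceptual")] * 40 + [("D1", "functional")] * 25 +
    [("D2", "perceptual")] * 35 + [("D2", "set-up goal")] * 12
)

per_designer = Counter(d for d, _ in segments)   # data items per designer
per_category = Counter(c for _, c in segments)   # frequency per code

print(per_designer)   # two designers, but over a hundred data items
print(per_category)
```

Frequencies like these, aggregated per coding category, are what a quantitative protocol analysis compares across the TUI and GUI sessions.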
3.2. Experiment Set-ups
The two experiment set-ups simulating TUI and GUI design environments were
constructed in the same room. Each designer participated in a complete experiment,
consisting of a design task in a TUI session and a second design task in a GUI session. It
was anticipated that the comparison of the same designers in two different interface
environments would provide a better indication of the impact of the user interfaces than
using different designers and the same design task in each environment.
TUI session
The tabletop environment includes a horizontal table and a vertical screen to facilitate
multiple views of the 3D model. 3D blocks with tracking markers are placed on the
tabletop. Figure 6 shows the equipment set-up of the TUI session. A DVR system (digital
video recording) was set to record two different views on one monitor. A camera is used
to monitor a designer’s behavior and the other view is a video stream directly from the
LCD screen. This enabled the experimenter to simultaneously observe designers’
physical actions and the corresponding changes in the external representation. One
microphone is fed into the DVR system and the camera is located far enough from the
table to observe a designer’s gestures as well as the 3D modeling actions.
Figure 6. Experiment Set-up for the TUI session
GUI session
Figure 7 shows the set-up for the GUI session. Instead of the horizontal table, a
typical computer configuration with a vertical screen, keyboard and mouse is used. The
overall experimental set-up was similar to that of the TUI session. However, the GUI
setting reduced the camera’s view compared to the camera angle in the TUI session, and
made it hard to include the external representation and designers’ behaviors in one shot.
Figure 7. Experiment Set-up for the GUI session
3.3. Experiment Procedure
Two pilot studies of individual designers were carried out and then, with the lessons
learned from the pilot studies, nine more structured experiments were conducted. Two
experiments were later dropped from the study because of insufficient information from
the protocols, so the final results are based on seven designers. Each designer performed
one design session on one day, and returned for the other session on another day. This
was to reduce learning effects and designer fatigue, which might affect the results of
the second session. Each design session was completed in 30 minutes.
Training
In the training sessions designers were engaged in manipulating the input devices in
order to review their skills in using specific features of the applications. They did a
warm-up task involving thinking aloud to make sure of their capability for verbalizing
their thoughts during the design session. They were instructed that all the black
markers on the 3D blocks should remain in the field of view of the web camera to obtain the
visualization of digital models in the TUI session. For the GUI session, they were asked
to work in the 3D view in ArchiCAD and instructed on how to access the furniture
library constructed for the experiments.
Experiment
Designers were given time to read through the design briefs prior to the beginning of
the design sessions. They were asked to report as continuously as possible what they
were thinking as they carried out the design tasks for about 20 minutes. They did not need to
produce a final design by the end of the session because the focus of the experiments was
on the design process, not the final design output. An experimenter stayed in front of the
DVR system to observe the experiment process, not interfering with the designers.
However the experimenter reminded designers of verbalizing their thoughts when the
designers did not think aloud for over 1 minute, and answered their questions during the
design sessions. Table 1 shows the outline of the experiment sessions.
Table 1. Outline of the experiment sessions
                            TUI session                         GUI session
Interface/Application       3D blocks / ARToolKit               Mouse and keyboard / ArchiCAD
Hardware                    Tabletop and webcam / LCD screen    Desktop / LCD screen
Training / Design session   5-10 mins / 20 mins                 5-10 mins / 20 mins
Designer                    Individual 2nd- or 3rd-year architecture student
Design tasks                Home office or design office renovation
There were some concerns about the validity of the settings with regard to three
different conditions: application, work space and design task. First, in order to eliminate
effects of manipulability caused by the applications, we recruited designers competent in
ArchiCAD and restricted the ArchiCAD functions required for the tasks to simple ones.
It was subsequently observed that there was no significant difference in designers'
capabilities with the two applications. Secondly, in order to compensate for the different
work spaces of the two environments, we adopted an LCD screen for the TUI session
instead of an HMD, so as to provide the same visual modality for the designers. Thirdly,
the two design tasks were carefully developed to be similar in complexity and type and
to stimulate designers' spatial cognition for 3D design. Designers had to reason about 3D
objects and the spatial relationships between these objects even though they developed a
2D layout to work on the design tasks.
4. METHOD: PROTOCOL ANALYSIS
Protocol analysis is a widely accepted research method for making inferences about
the cognitive processes underlying task performance (Foreman and Gillett 1997; Gero
and McNeill 1997; Henwood 1996). We collected concurrent protocols using the think-
aloud method. No questionnaire was used because our focus was on capturing what
designers do, attend to, and say while designing, looking in particular for their perception
of new spatial information and for activities that introduce new functions into the design.
4.1. Protocol Analysis in Design Research
A protocol is the recorded behavior of the problem solver, usually represented in the
form of sketches, notes, video or audio recordings (Akin 1986). Recent design protocol
studies employ the analysis of actions, which provides a comprehensive picture of the
physical actions involved during design in addition to the verbal accounts given by
subjects (Brave et al. 1999; Cross et al. 1996). In design research, two kinds of
protocols are used: concurrent protocols and retrospective protocols. Generally,
concurrent protocols are collected during the task and utilized when focusing on the
process-oriented aspect of designing, being based on the information processing view
proposed by Simon (Simon 1992). The ‘think-aloud’ technique is typically used, in which
subjects are requested to verbalize their thoughts as they work on a given task (Ericsson
and Simon 1993; Lloyd et al. 1995). On the other hand, retrospective protocols are
collected after the task and utilized when focusing on the content-oriented or cognitive
aspects of design, being concerned with the notion of reflection-in-action proposed by
Schön (Dorst and Dijkhuis 1995; Foreman and Gillett 1997; Schön 1983). Subjects are
asked to recall and report their thoughts after the task, with a videotape of their sketching
activities provided to alleviate the selective retrieval caused by memory decay (Suwa and
Tversky 1997). A number of protocol studies have investigated designers'
cognitive activities (Goldschmidt 1991; Kavakli and Gero 2002; Suwa et al. 1998).
4.2. Coding Scheme
Our coding scheme comprises five categories at three levels of spatial cognition: 3D
modeling and gesture actions at the Action level, perceptual activities at the Perception
level, and set-up goal activities and co-evolution at the Process level. We selectively
borrowed sub-categories from the coding schemes of Suwa et al. (Suwa et al. 2000; Suwa et al. 1998).
The Action level represents motor activities produced in using the interface. 3D
modeling actions represent operations on external representation, which largely describe
the ‘movement’ of 3D objects and the ‘inspection’ of representations and design briefs.
Gesture actions represent designers’ movements other than 3D modeling actions. The
‘Design gesture’ code is applied to segments when designers develop their ideas via large
hand-movements over the plan, and the ‘General gesture’ code is applied to segments
when designers simply move their hands without a specific design intention. The 'Touch
gesture' code is applied to segments when designers touch 3D blocks with their hands, or
digital images with the mouse, without producing any change in the design.
The Perception level represents how designers perceive visuo-spatial features from
the external representation. Attention to an existing visuo-spatial feature, creation of and
attention to a new visuo-spatial feature, and unexpected discoveries of a new visuo-
spatial feature were investigated as a measure of designers’ perceptive abilities for spatial
knowledge. For example, if a designer says “there’s a desk near the entrance”, this can be
coded as attention to an existing spatial relationship, but if the designer says “I’m moving
this desk to go near the window”, this can be coded as creation of a new spatial
relationship. Designers sometimes discover a space unexpectedly. For example, “a little
bit, the layout is not...you end up with empty space..!” This example suggests that during
inspection the designer has noticed the unexpected appearance of an empty space.
The Process level represents ‘problem-finding’ behaviors associated with creative
design. Set-up goal activities refer to activities of introducing new design functions as
design requirements, which restructure the design problem. In terms of the semantic
mode, set-up goal activities basically belong to functional activities. If a designer
considers the view from a glass wall to outside, it is still a functional activity. However, if
s/he says “let’s put a display place in front of the glass wall for the view”, then this
becomes an instance of set-up goal activity. The co-evolution category refers to design
activity that explores cognitive movement between design problem and solution spaces.
Table 2. Spatial Cognition Coding Scheme
Action Level
3D modeling actions
PlaceNew Place a new object from the library
PlaceExisting Change the location of an initially given object for the first time
ReplaceExisting Change the location of an existing object
Rotate Change only the orientation of an existing object
Remove Delete/remove an existing object
Library Check library for objects through screen or virtual library
InspectBrief Inspect the design brief
InspectScreen Inspect layout on the screen
InspectTable Inspect layout on the table
Gesture actions
Design gesture Large hand movements above the 3D plan
General gesture General speech-accompanying hand gestures
Touch gesture Touch 3D blocks with the hands or digital images with the mouse
Modeling action No gesture because of the modeling actions
Perception Level
Perceptual activities
E-visual feature Attention to an existing visual feature of an element
E-relation Attention to an existing relation among elements or orientation of an element
E-space Attention to an existing location of a space
E-object Attention to an existing location of an object
N-relation Creation of a new relation among elements
N-space Creation of a new space among elements
D-visual feature Discovery of a visual feature of an element
D-relation Discovery of a relation among elements
D-space Discovery of an implicit space between elements
Process Level
Set-up goal activities
G-knowledge Goals to introduce new functions derived from explicit knowledge or experience
G-previous Goals to introduce new functions extended from a previous goal
G-implicit Goals to introduce new functions in a way that is implicit
G-brief Goals to introduce new functions based on the given list of initial requirements
G-repeat Repeated goals from a previous segment
Co-evolution
P-space The features and constraints that specify required aspects of a design solution
S-space The features and behaviours of a range of design solutions
Combined codes
We combined some codes of 3D modeling action, perceptual activity and set-up goal
activity into generic activity components in order to highlight observed patterns of design
behaviors in the two design environments as shown in Table 3.
Table 3. Combined Codes
Combined Code     Individual Codes                                   Coding Category
New               PlaceNew, PlaceExisting                            3D modeling actions
Revisited         ReplaceExisting, Rotate                            3D modeling actions
Inspection        InspectScreen, InspectTable                        3D modeling actions
Existing          E-visual feature, E-relation, E-space, E-object    Perceptual activities
Creating          N-relation, N-space                                Perceptual activities
Discovery         D-visual feature, D-relation, D-space              Perceptual activities
Object            E-visual feature, E-object, D-visual feature       Perceptual activities
Space             E-space, N-space, D-space                          Perceptual activities
Spatial relation  E-relation, N-relation, D-relation                 Perceptual activities
S-invention       G-knowledge, G-previous, G-implicit                Set-up goal activities
Others            G-brief, G-repeat                                  Set-up goal activities
New_Revisited_Inspection. ‘New’ activities refer to 3D modeling actions of
importing an object from the furniture library or changing the location of a given object
for the first time. When an object is re-arranged later, it is coded as ‘revisited’ activity.
‘Inspection’ activity refers to the actions of inspecting external representations.
Existing_Creating_Discovery. The perceptual activity codes are combined into three
generic activities: perceiving an existing visuo-spatial feature, creating a new visuo-
spatial feature, and discovering a new visuo-spatial feature unexpectedly. The 'Existing'
sub-category takes place in the problem space, while the 'Creating' and 'Discovery'
sub-categories belong to the solution space.
Object_Space_Spatial relation. The perceptual activity codes are also combined in
another way, according to the focus of designers' attention: perceiving individual objects,
perceiving spaces, and perceiving spatial relationships among 3D objects.
S-invention_Others. The codes 'G-knowledge', 'G-previous' and 'G-implicit' are
instances of S-invention, which refers to the emergence of new design issues for the first
time during the design process.
4.3. Protocol Coding
Segmentation
A protocol study involves protocol collection, segmentation, coding and analysis.
Segmentation divides the protocols into small units, which are then assigned to relevant
codes according to a coding scheme. The recorded data were transcribed and then
segmented along the lines of the designers' intentions or changes in their actions. Both
the contents of the verbal protocols and the video recordings of the 3D modeling activity
were examined to decide the start and end of each segment. Table 4 shows segmented
protocols excerpted from a TUI session, where a single segment sometimes comprises a
short paragraph and sometimes several sentences.
Table 4. Segmentation: Intention based technique
Segment     Time         Transcript                                                    3D modeling action
Segment 21  04:34-04:43  This thing is quite tall.                                     InspectScreen
Segment 22  04:43-04:50  May be it should be moved to the corner or something.         ReplaceExisting
Segment 23  04:50-04:53  This desk fits nicely with this.                              ReplaceExisting
Segment 24  04:53-05:00  Just looking at the alternative desk. This is a corner desk.  PlaceNew
Segment 25  05:00-05:06  So move it to here…..ok….                                     ReplaceExisting
Coding Process
Transcriptions were done by native English speakers, and the segmentation was then
done by one of the coders. The protocol coding was carried out concurrently by two
coders, and a final protocol coding was achieved through a process of arbitration. The
coders read the transcripts and watched the videos. Using INTERACT and FileMaker,
they coded each segment according to the coding scheme. Each segment has a single
code in the 3D modeling and gesture action categories, and multiple codes in the
perceptual, functional and set-up goal activity categories. After each coder finished
coding, the results were combined in a joint arbitration process in which the coders
consulted the transcript, referring to the video when necessary to clarify the subject's
actions. When there was a disagreement, each coder explained the reasons for their
coding and an arbitrated result was reached by consensus. Figure 8 shows the arbitrated
data of Designer 1.
Figure 8. Arbitrated data of Designer 1 in the TUI session
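A coded segment can be thought of as a small record carrying the timestamps, the transcript, one code per single-code category and a list per multi-code category. The following Python sketch is purely illustrative (the study used INTERACT and FileMaker); the field names are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Segment:
    """Illustrative record for one coded protocol segment."""
    index: int
    start: str                 # e.g. "04:43"
    end: str                   # e.g. "04:50"
    transcript: str
    modeling_action: Optional[str] = None  # single code, e.g. "ReplaceExisting"
    gesture: Optional[str] = None          # single code, e.g. "Touch gesture"
    perceptual: List[str] = field(default_factory=list)  # multiple codes allowed
    goals: List[str] = field(default_factory=list)       # multiple codes allowed
```

For example, Segment 22 of Table 4 would carry the single modeling-action code 'ReplaceExisting' alongside whatever perceptual codes the coders assigned to it.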
The reliability of the coding process was measured by calculating Kappa values
between the coders across the three coding phases (first coder run, second coder run,
and arbitration run). Table 5 shows the average Kappa values for each session. All
Kappa values exceed 0.75, indicating that the reliability of the coding is quite high. In
general, the reliability of coding at the Action level was higher than at the other levels,
because physical actions are coded by inspecting what happens on the screen.
Table 5. Kappa values for the three coding phases
              F & S   F & A   S & A                 F & S   F & A   S & A
TUI session   0.77    0.86    0.78    GUI session   0.79    0.85    0.81
F & S: first coder's coding and second coder's coding
F & A: first coder's coding and arbitrated coding
S & A: second coder's coding and arbitrated coding
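Cohen's kappa compares the observed agreement between two coders with the agreement expected by chance from their marginal code frequencies. A minimal standard-library sketch (our own illustration, not the computation pipeline used in the study):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Inter-coder agreement corrected for chance agreement."""
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    # Proportion of segments on which the two coders agree.
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement: product of the two coders' marginal code proportions.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(codes_a) | set(codes_b))
    return (observed - expected) / (1 - expected)
```

Values above 0.75 are conventionally read as excellent agreement, which is the threshold the table above clears in both sessions.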
5. ANALYSIS OF DESIGNERS’ SPATIAL COGNITION
Prior to the protocol analysis, we observed how designers cognitively interacted
with the external representations and investigated how often they changed their
intentions, in order to get a sense of the overall tendency of the results.
5.1. Overall Observations
There were differences between the two design sessions, which fall largely into
three aspects of designing: the initial approach to the tasks, the development of design
ideas, and gestures. First, designers in the TUI sessions initially characterized the
four required areas, considering design issues at a more abstract level, whereas
designers in the GUI sessions went directly into the library to search for furniture.
This finding suggests that designers in the TUI sessions started with the formulation of
the design problem whereas designers in the GUI sessions started in the solution space.
Second, designers in the TUI sessions randomly placed pieces of furniture on the
horizontal display, and then decided on their locations while moving the furniture
around. Designers in the TUI sessions seemed to develop design ideas using information
derived from perceptual activities stimulated by modeling actions. On the other hand,
designers in the GUI sessions seemed to develop design ideas based on the information
initially given in the design briefs. For example, they often said that "the designer might
need a big work desk" or "the programmer might need more seats".
Third, it was interesting to note that designers in the TUI sessions often kept touching
the 3D blocks, and designers in the GUI sessions showed similar touching actions using
the mouse while inspecting. We questioned the role of these 'touching' actions in
assisting cognition, because they did not produce any change in the design objects yet
seemed to be involved in the cognitive processes.
Table 6 shows the segment durations of the design sessions, which give an idea of
how frequently the designers' intentions changed over the timeline of the activity, given
the segmentation technique employed for this research. The average segment duration of
the TUI sessions (10.6 sec) is shorter than that of the GUI sessions (17.9 sec), which
suggests that designers in the TUI sessions started new actions more quickly and
generally performed more actions in the same amount of time. The total time of each
GUI session was cut at the time point at which the corresponding TUI session was
completed. Each designer engaged in the two sessions in varied order, and was given a
different design task in each in order to eliminate learning effects.
Table 6. Intention shifts in designers’ behaviors: duration of segments
Designer 1 Designer 2 Designer 3 Designer 4 Designer 5 Designer 6 Designer 7
Session TUI1 GUI2 TUI2 GUI1 TUI1 GUI2 TUI2 GUI1 TUI2 GUI1 TUI1 GUI2 TUI2 GUI1
Design task B A B A A B A B B A B A A B
Task completion Yes Yes Yes No Yes No Yes No Yes No Yes No Yes No
Total time 19 min 15 min 20 min 18 min 17 min 19 min 11 min
Segment no 133 80 89 66 120 81 93 55 83 39 99 61 62 31
Mean (sec) 8.54 14.22 10.21 13.14 9.88 14.59 11.39 18.78 12.01 24.52 11.38 18.41 11.00 21.93
Std. Deviation 5.52 9.87 9.20 8.37 7.25 16.43 8.16 11.62 8.44 14.60 7.30 15.09 6.33 11.81
Session: 1– first session; 2 – second session / Design task: A - Home office; B – Design office
5.2. Analysis of the Three Levels of Designers’ Spatial Cognition
Considering the findings of the observation and initial investigation, we analyzed the
coded protocols using both statistical and graphical approaches. For the statistical
analysis, we performed a Mann-Whitney U test on each category of encoded protocols to
examine for significant differences in the occurrence of, or time spent on, cognitive
activities. To further measure the differences produced by the interfaces, we explored the
structures of the designers' behavior visually through graphs. Similar patterns were found
for all designers, so Designer 1's behavior patterns are presented as an example. This
section presents the results of the protocol analysis at each level, and discusses the
implications for designers' spatial cognition.
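The Mann-Whitney U statistic counts, across the two groups being compared, how often a value from one group exceeds a value from the other; the reported Z values come from its normal approximation. A minimal sketch, assuming no tie correction (the study presumably used a standard statistics package):

```python
import math

def mann_whitney_u(x, y):
    """U statistic for sample x against y, plus its normal-approximation z-score."""
    # U counts the (x_i, y_j) pairs in which x_i exceeds y_j; ties count 0.5.
    u = sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
            for xi in x for yj in y)
    m, n = len(x), len(y)
    mean_u = m * n / 2
    sd_u = math.sqrt(m * n * (m + n + 1) / 12)  # no tie correction
    return u, (u - mean_u) / sd_u
```

The test is non-parametric, which suits the small per-condition sample (N=7 designers) where normality of the underlying measures cannot be assumed.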
Action Level
3D Modeling Actions
Table 7 shows the average occurrence of the combined codes of 3D modeling actions
in TUI versus GUI sessions. There are significant differences in the occurrence of ‘New’
(Z=-2.888, N=7, p<0.05) and ‘Revisited’ (Z=-1.857, N=7, p=0.063) 3D modeling actions
between two design environments. This result indicates that compared to using a mouse,
designers proposed more new ideas and elaborated the ideas while relocating 3D objects.
Table 7. Average occurrence of 3D modeling actions
3D modeling actions              TUI session             GUI session
'New' 3D modeling actions        Mean: 18, Std.D: 1.1    Mean: 11, Std.D: 1.5
'Revisited' 3D modeling actions  Mean: 17, Std.D: 4.7    Mean: 9, Std.D: 1.8
However, the average time spent per 3D modeling action is longer in the GUI session,
as shown in Table 8. Statistically, there are significant differences in both the
'New' (Z=-3.130, N=7, p<0.05) and 'Revisited' (Z=-2.878, N=7, p<0.05)
modeling actions between the two design sessions. This result indicates that designers'
cognitive load might have been reduced in the TUI session. It seems that rather than
'internalizing' the moves of the 3D objects, the designers off-loaded internal working
memory by performing 'Revisited' modeling actions.
Table 8. Average time spent per 3D modeling action
3D modeling actions              TUI session                  GUI session
'New' 3D modeling actions        Mean: 9.7 sec, Std.D: 2.0    Mean: 23.1 sec, Std.D: 6.7
'Revisited' 3D modeling actions  Mean: 6.8 sec, Std.D: 0.6    Mean: 18.9 sec, Std.D: 3.4
Figure 9 shows Designer 1's 3D modeling action pattern. The codes are plotted
along the timeline of the sessions, where the length of each horizontal bar indicates how
long the designer spent on each action. The horizontal bar of the TUI session has many
short discontinuities, which indicates that the operation of the 3D blocks most likely
occurred in a trial-and-error fashion with epistemic payoff. By contrast, the horizontal
bar of the GUI session has a smaller number of longer discontinuities, which suggests
that operation of the mouse occurred for more pragmatically advantageous actions.
Figure 9. 3D modeling actions (a) TUI session, (b) GUI session
Gesture Actions
Four experiments were analyzed for gestures; the other three were excluded because
of insufficient gestural information captured in the video recordings. The occurrence of
'Design' gestures in the TUI sessions was significantly higher than in the GUI sessions
(Z=-1.888, N=4, p=0.059). Figure 10 shows the relative average proportions of gestures
in each session. The grey marker cells represent codes that are significantly higher than
those in the other session. In the TUI session, designers produced more 'Design' and
'General' gestures and fewer 'Touch' gestures compared with the GUI session. By
employing a larger, more expressive range of gestures, designers exhibited whole-body
interaction with the representation in the TUI sessions, which implies designers'
immersion in designing. In contrast, designers exhibited small-scale finger movements
using the mouse in the GUI session.
Gesture           TUI (%)   GUI (%)
Design gesture    12.8      4.3
General gesture   10.3      8.5
Touch gesture     23.1      38.3
Modeling action   53.8      48.9
Figure 10. Percentage of gestures
We conjectured that this immersion in designing may play a role in structuring the
designers' spatial cognition. Above all, the touching of 3D digital images using the
mouse was of interest, because the 'Touch' actions appeared superfluous yet seemed to
improve designers' spatial cognition.
Perception Level
Perceptual Activities
In order to examine how the focus of perceptual activities changed with the
interaction mode, we investigated the proportions of the codes within each session, as
shown in Figure 11. Perceptual activities related to creating ‘new relations’, discovering
‘new relations’ and ‘space’ increase in the TUI session whereas perceptual activities
related to attending to an ‘existing visual feature’ and ‘space’ and creating ‘new space’
increase in the GUI session. That is, designers in the TUI session created and perceived
new ‘spatial relations’ among elements while designers in the GUI session focused more
on 'visual feature' and 'space'.

Perceptual activity   TUI (%)   GUI (%)
E-visual feature      4.9       10.5
E-relation            21.7      19.2
E-space               16.2      19.2
E-object              26.2      27.4
N-relation            20.9      15.0
N-space               3.4       6.3
D-visual feature      1.4       0.7
D-relation            3.7       1.4
D-space               1.6       0.3
Figure 11. Percentage of perceptual activities
In order to identify overall patterns of perceptual activities, we investigated the
average occurrences of the combined codes, as shown in Table 9. There are significant
differences in the codes 'Creating' (Z=-1.863, N=7, p=0.062) and 'Discovery' (Z=-2.716,
N=7, p<0.05). Designers created more new visuo-spatial features, and discovered more
new visuo-spatial features unexpectedly, while using the 3D blocks compared with the
mouse. We also investigated the focus of designers' perception, and found that designers
in the TUI sessions attended more to the 'Spatial relations' of the design components
than those in the GUI sessions (Z=-2.753, N=7, p<0.05). It was interesting to note that
all of the designers in the TUI sessions noticed that the relation of the sink to its
surroundings was inappropriate, whereas only two designers in the GUI sessions paid
attention to the sink.
Table 9. Average occurrence of combined codes of perceptual activities
Perceptual activity                          TUI session             GUI session
Combined code 1
  'Existing' perceptual activities           Mean: 49, Std.D: 7.1    Mean: 29, Std.D: 5.6
  'Creating' perceptual activities           Mean: 18, Std.D: 3.7    Mean: 9, Std.D: 1.9
  'Discovery' perceptual activities          Mean: 6, Std.D: 0.9     Mean: 1, Std.D: 0.4
Combined code 2
  'Object' perceptual activities             Mean: 24, Std.D: 3.5    Mean: 15, Std.D: 3.7
  'Space' perceptual activities              Mean: 15, Std.D: 2.4    Mean: 11, Std.D: 2.8
  'Spatial relation' perceptual activities   Mean: 34, Std.D: 5.3    Mean: 15, Std.D: 2.2
Design Process Level
Set-up goal activities
In this research we first coded set-up goal activities as functional activities, and
then coded them again in the set-up goal category. The reason is that designers often
introduced new functional issues and regarded them as requirements simultaneously
within a single segment. Furthermore, they often set up a goal and accomplished it
within a segment when the goal concerned the function of furniture. It might be easier
for designers to set up a goal and to perform 3D modeling actions for it at the same
time, compared with abstract sketching involving lines and symbols. Figure 12 shows
the proportions of the set-up goal activities within each session. In the TUI session
designers set up more goals to introduce new functions derived from their knowledge
(G-knowledge) and new implicit functions (G-implicit), whereas in the GUI session they
introduced more new functions based on the design requirements (G-brief) and extended
(G-previous) or repeated previous goals (G-repeat). These findings suggest that designers
in the TUI session constructed set-up goals on the fly in a situated way, whereas
designers in the GUI session retrieved set-up goals from the initially given information.
Set-up goal activity   TUI (%)   GUI (%)
G-knowledge            18.2      14.3
G-previous             9.1       14.3
G-implicit             50.0      35.7
G-brief                13.6      21.4
G-repeat               9.1       14.3
S-invention            77.3      64.3
Others                 22.7      35.7
Figure 12. Set-up goal activities: S-invention_Other goals
Co-evolution
The co-evolution category concerns the design process associated with creative
design. We examined the transitions between the 'problem' and 'solution' spaces using
the interactive graphs shown in Figure 13. There are more discontinuities in the TUI
sessions than in the GUI sessions, which indicates that designers refined both the
formulation of the problem and ideas for a solution more pervasively in the former.
This process can be regarded as a co-evolutionary process. Further, the amount of time
spent in the two notional design spaces reveals that designers in the TUI session spent
more time reasoning about the problems than in the GUI session (Z=-2.108,
N=7, p<0.05). Time spent in the problem space is also associated with creative design
processes (Christiaans 1992).
Figure 13. Problem-Solution spaces (a) TUI session, (b) GUI session
5.3. Correlation between Physical Actions and Perceptual Activities
We investigated, through correlation analysis, whether the short actions followed by
perceptual activities could be seen as epistemic. If such actions supported the designers'
perceptual capabilities by reducing their internal cognitive load, they could be regarded
as epistemic actions. Therefore the correlations between physical
actions, consisting of 3D modeling and gesture actions, and perceptual activities were
investigated visually in the graphs. The graph gives a clear representation of the segment
lengths and a comparison between the categories within the context of the whole design
process.
3D modeling Actions and Perceptual Activities
In the TUI session, 'Revisited' modeling actions and perceptual activities followed
each other quite frequently, or even overlapped, as shown in Figure 14 (a), whereas in
the GUI session perceptual activities did not accompany 'Revisited' modeling actions as
frequently, as shown in Figure 14 (b). This finding suggests that the majority of
perceptual activities in the TUI session were triggered by performing 'Revisited'
modeling actions, but that there was not much interaction between 'Revisited' modeling
actions and perceptual activities when using the GUI. This is consistent with our
observation that designers in the TUI sessions perceived visuo-spatial information while
moving the 3D blocks around. Further, it was interesting to note that 'Revisited'
modeling actions appeared in parallel with 'Creating' perceptual activities in the TUI
sessions, whereas in the GUI sessions there were few 'Creating' perceptual activities
during the 'Revisited' modeling. This suggests that 'Revisited' modeling actions using
TUIs supported designers' creation of new visuo-spatial features, possibly by reducing
the cognitive load of mental computation.
Figure 14. 3D Modeling actions and Perceptual activities in (a) TUI session, (b) GUI session
Gesture Actions and Perceptual Activities
Figure 15 shows gesture actions and perceptual activities. Regarding the gesture
actions, our focus was on whether 'Design' and 'Touch' gestures affected designers'
spatial cognition in a way similar to the 3D modeling actions. It was observed that
perceptual activities and 'Design' and 'Touch' gestures overlapped each other
throughout the TUI design session, as shown in Figure 15. This finding implies a
correlation between 'Design' and 'Touch' gestures and designers' perception of
'existing' visuo-spatial features. Contrary to our expectation, there seemed to be no
difference between hand and mouse 'Touch' gestures in their correlation with perceptual
activities, except in the frequency of the touch actions. Designers seemed to use 'Touch'
gestures, with the hands or the mouse, much as we use our fingers when counting coins.
This finding suggests that designers produced hand movements to assist cognition while
they were not performing 3D modeling actions, and that designers in the TUI session
produced these gestures at a high frequency.
Figure 15. Gestures and Perceptual activities in (a) TUI session, (b) GUI session
6. DISCUSSION AND CONCLUSION
6.1. Hypotheses Validation and Conclusion
The hypotheses validation was performed at the three levels of spatial cognition in
order to provide empirical evidence for verifying the anecdotal views on the effect of
TUIs on designers’ spatial cognition.
Action level
Hypothesis 1, that the use of TUIs can change designers' 3D modeling actions in
designing, with 3D modeling actions dominated by epistemic actions, was validated by
the results of the analyses of the 3D modeling actions and their correlation with
perceptual activities. Compared with the GUI session, designers using TUIs exhibited
the following patterns of behavior:
- frequent 3D modeling actions, possibly reducing the cognitive load imposed by the user interface
- more focus-shifts in design thinking through frequent 3D modeling actions
- frequent 'Revisited' 3D modeling actions, resulting in multiple representations
- perceptual ability for creating and perceiving new visuo-spatial features through frequent 'Revisited' 3D modeling actions
Hypothesis 2, that the use of TUIs can change designers' gesture actions in designing,
with more gesture actions serving as complements to 3D modeling actions in assisting
designers' cognition, was validated by the results of the analyses of gesture actions and
the correlation between gesture and perceptual activities. Compared with the GUI
session, designers using TUIs exhibited the following patterns of behavior:
more gestures, specifically more ‘Design’ and ‘General’ gestures leading to whole
body interaction with the external representation using hands and arms
perceptual ability for existing visuo-spatial features through ‘Design’ and ‘Touch’
gestures
Through the validation of hypotheses 1 and 2, we concluded that the TUIs produced
epistemic actions revealing information that is hard to compute mentally. Rather than
‘internalizing’ the moves of the 3D objects, the designers performed more 3D modeling
actions as epistemic actions, which may reflect a reduction of designers’ cognitive load.
Through the ‘Revisited’ 3D modeling actions, designers produced multiple
representations more frequently, resulting in revision of the design ideas. Consequently,
designers in the TUI session changed the external world through the 3D modeling
actions, allowing them to off-load their thoughts, thereby supporting further perceptual
activities.
Furthermore, they exhibited more immersive gestures using large hand movements,
which functioned as a complementary strategy to the 3D modeling actions in assisting
designers’ perception. The immersive interactions produced by the ‘Design’ gestures
might be associated with designers’ spatial cognition, since they support designers’
cognitive processes in designing. ‘Touch’ gestures played the role of ‘organizing activities’
that recruit external elements to reduce cognitive load. They did not produce direct
changes to the external representation, but stimulated designers’ perceptual activities.
Perception level
Hypothesis 3, that the use of TUIs can change certain types of designers’ perceptual
activities in designing (designers may perceive more spatial relationships between
elements, and create and attend to more new visuo-spatial features through the
production of multiple representations), was validated by the results of the analyses of the
perceptual activities. Compared to the GUI session, designers exhibited the following
patterns of behavior in the TUI session:
- more perceptual activities
- more new visuo-spatial features created, perceived, and discovered
- greater focus on ‘Spatial relations’ among elements
In testing Hypothesis 3, we found that designers’ perceptual ability for new visuo-
spatial information, especially spatial relationships, improved when using the 3D blocks.
These findings suggest that designers produced more new interpretations of the external
representation by creating and discovering new visuo-spatial features, and that they
explored more functional thoughts related to the spatial relationships. That is, designers
made more inferences from the visuo-spatial features, freeing them from ‘fixation’ on the
given requirements or information. Further, they produced more kinds of conceptual
interpretation of the spatial relationships by restructuring the perceived information.
Process level
Hypothesis 4, that the use of TUIs can change the design process (the changes in
designers’ spatial cognition may increase ‘problem-finding’ behaviors and the process of
‘re-representation’, which are associated with creative design processes), was validated by
the results of the analyses of the set-up goal activity and the co-evolution categories.
Compared to the GUI session, designers exhibited the following patterns of behavior in
the TUI session:
- more new functional issues introduced as design requirements, specifically in an implicit way or by retrieving explicit knowledge or experience
- more transitions produced between the ‘Problem’ and ‘Solution’ spaces
- more time spent reasoning about the design problem
The results reveal that designers using 3D blocks spent more time reformulating the
design problem by introducing new functional issues as design requirements or by
drawing on prior knowledge and memory. Furthermore, designers developed the design
problem and alternative solution ideas more extensively, exhibiting a co-evolutionary
process. Accordingly, their ‘problem-finding’ behaviors, which are associated with
creative design, clearly increased through the use of TUIs.
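The transition measure used here can be read as a simple count over the chronologically ordered segment codes of the protocol. A minimal sketch, assuming each segment is coded as belonging to either the ‘Problem’ or the ‘Solution’ space (the example sequences are invented, not the study’s data):

```python
def count_transitions(spaces):
    """Count shifts between the 'Problem' and 'Solution' design spaces in a
    chronologically ordered sequence of coded protocol segments."""
    return sum(1 for a, b in zip(spaces, spaces[1:]) if a != b)

# Invented sequences: more alternation between the two spaces suggests
# a more co-evolutionary design process.
tui_like = ["Problem", "Solution", "Problem", "Solution", "Problem"]
gui_like = ["Problem", "Solution", "Solution", "Solution", "Problem"]
print(count_transitions(tui_like), count_transitions(gui_like))  # 4 2
```

Comparing such counts across sessions is one way to operationalize the ‘more transitions’ finding above.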
In addition, ‘unexpected discoveries’ via designers’ perceptual and ‘Revisited’ 3D
modeling actions were considered in examining the process of re-representation. The
high incidence of the combined code ‘Discovery’ suggests that the ‘Revisited’ 3D
modeling actions resulted in the production of multiple representations, which enabled
designers to discover new visuo-spatial features, and afforded them more opportunities to
gain the sudden ‘insight’ needed to find key concepts for a creative design.
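A combined code such as ‘Discovery’ pairs a ‘Revisited’ modeling action with the discovery of a new visuo-spatial feature in the same protocol segment. The extraction of such combined codes from two aligned code streams can be sketched as follows (the stream contents and label strings are illustrative assumptions, not the paper’s coding scheme verbatim):

```python
def discovery_segments(actions, perceptions):
    """Return indices of segments where a 'Revisited' 3D modeling action
    co-occurs with the discovery of a new visuo-spatial feature."""
    return [i for i, (a, p) in enumerate(zip(actions, perceptions))
            if a == "Revisited" and p == "Discover new feature"]

# Invented aligned code streams, one code per protocol segment.
actions = ["Create", "Revisited", "Modify", "Revisited", "Revisited"]
percepts = ["New feature", "Discover new feature", "Spatial relation",
            "Discover new feature", "Existing feature"]
print(discovery_segments(actions, percepts))  # [1, 3]
```

Counting such co-occurrences per session gives the kind of ‘Discovery’ frequencies discussed above.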
This research analysed empirical results on the effects of TUIs on designers’ spatial
cognition using protocol analysis in a comparative study of TUI vs. GUI, where
designing with a GUI was taken as the baseline against which to observe the changes
caused by using TUIs. Through the validation of the hypotheses, the final conclusion of
this research is drawn as follows: TUIs change designers’ spatial cognition, and these
changes are associated with creative design processes.
6.2. Implications and Future Direction
The research raises two cognitive issues as major factors to be considered in the
development of user interfaces: ‘off-loading cognition’ and ‘immersion’ in designing. The
3D blocks brought about a much richer sensory experience of digital information, which
off-loads designers’ cognition, thereby promoting visuo-spatial discoveries and inferences.
The correlation between perceptual and immersive gesture actions suggests that the sense
of immersion promoted designers’ perceptual ability, thereby facilitating novel
interpretations and design ideas. Thus future research on novel design systems should
proceed in two directions. One would focus on multi-modal user interfaces, so that
designers could better interact with the representation in cognitively natural ways. The
other would develop spatial systems with AR, which could promote designers’
immersion in designing, thus assisting their spatial cognition.
The natural physical affordances of TUIs show the potential of multi-modal user
interfaces to support designers’ cognitive activities in designing. A wide range of
sensorial dimensions, such as 3D surround sound, language and gesture, would certainly
open new perspectives on the development of user interfaces for design systems. As a
cognitive tool, visual information facilitates spatial reasoning; however, space is multi-
modal, so designers’ multiple senses must be utilized to support their spatial cognition in
designing (Tversky 2005). Multi-modal user interfaces are grounded in users’ real-life
sensorial experiences, recasting them as new affordances for digital information. The
sensory richness of the user interface plays a critical role in supporting ‘constructive
perception’ of the external representations. Thus user interface solutions for design
applications should act as externalization tools to off-load cognition, which can be
achieved if they are multi-modal.
The tabletop system, which goes well beyond the flat monitor, serves as an immersive
medium for the generation of new design ideas. Like users in CAVE environments or
users of avatars in virtual environments, designers using the tabletop appeared to interact
with the external representation through larger body movements. The immersive working
environment can be a powerful setting for designing (Mäkelä et al. 2004; Schkolne and
Koenig 1999). We consider spatial systems with AR to be a promising future direction
for design systems, since they can be strong cognitive design tools in terms of the sense
of immersion in designing. The tangible interactions on the tabletop system can be
extended into the development of various spatial systems that embrace the richness of
human senses and skills, thereby facilitating life-like interactions with the physical world.
We do not assume that the results of the experiment can be generalised to all
designers. More general results could be obtained from further studies using more
subjects, or different subjects with greater design expertise. However, this research is the
first study to reveal the differences in cognitive behaviors between TUIs and GUIs using
protocol analysis in a systematic way. Thus this research can be applied to other domains
or studies, in which the method and the coding scheme developed here can be a source of
inspiration for future work. We do not provide ‘practical guidelines’, but the knowledge
derived from this research can form the basis for guidelines on novel design systems.
NOTES
Background. Parts of this paper were presented at the first IEEE International
Workshop on Horizontal Interactive Human-Computer Systems, in Adelaide, Australia,
in 2006 (http://www.tinmith.net/tabletop2006/). Paper link:
http://www.arch.usyd.edu.au/~mkim9133/MMaher_spatial.pdf
Acknowledgments. We would like to acknowledge and thank Yohann Daruwala for
his assistance, and the Human Research Ethics Committee for the ethical approval.
Support. This research is supported by an International Postgraduate Research
Scholarship, University of Sydney.
Authors’ Present Addresses. Mi Jeong Kim, Key Centre of Design Computing and
Cognition, University of Sydney, Wilkinson Building G04, Sydney, NSW, 2006,
Australia, Email: mkim9133@arch.usyd.edu.au. Mary Lou Maher, Key Centre of Design
Computing and Cognition, University of Sydney, Wilkinson Building G04, Sydney,
NSW, 2006, Australia, Email: mary@arch.usyd.edu.au
HCI Editorial Record. (supplied by Editor)
REFERENCES
Akin, O (1986). Psychology of Architectural Design. Pion, London.
Akin, Ö and Moustapha, H (2003). Strategic Use of Representation in Architectural
Massing. Design Studies, 25(1), 31-50.
Ball, LJ, Ormerod, TC and Morley, NJ (2003). Spontaneous analogising in engineering
design: A comparative analysis of experts and novices. In N. Cross and E.
Edmonds (Eds.) Expertise in Design: Design Thinking Research Symposium 6,
Creativity & Cognition Studios Press.
Billinghurst, M, Belcher, D, Gupta, A and Kiyokawa, K (2003). Communication
behaviors in co-located collaborative AR interfaces. International Journal of
Human-Computer Interaction, 16(3), 395-423.
Brave, S, Ishii, H and Dahley, A (1999). Tangible Interface for Remote Collaboration and
Communication. Proceedings of the CHI'99 Conference on Human Factors in
Computing Systems, 394-401.
Bruner, JS (1973). Beyond the information given: studies in the psychology of knowing.
Oxford, UK: W.W.Norton.
Carroll, JM, Thomas, JC and Malhotra, A (1980). Presentation and Representation in
Design Problem Solving. British Journal of Psychology, 71, 143-153.
Christiaans, H (1992). Creativity in Design. PhD Thesis, Delft University of Technology.
Coquillart, S and Wesche, G (1999). The Virtual Palette and the Virtual Remote Control
Panel: a Device and Interaction Paradigm for the Responsive Workbench.
Proceedings of Virtual Reality, IEEE Computer Society, 213-218.
Cross, N, Christiaans, H and Dorst, K (1996). Introduction: the Delft Protocols
Workshops. In N. Cross, H. Christiaans and K Dorst (Eds.), Analyzing Design
Activity, 1-16.
Cross, N and Dorst, K (1999). Co-evolution of Problem and Solution Space in Creative
Design. In J. S. Gero and M.L. Maher (Eds.) Computational Models of Creative
Design IV, Key Centre of Design Computing, University of Sydney, 243-262.
Daruwala, Y (2004). 3DT: Tangible Input Techniques used for 3D Design &
Visualization. University of Sydney.
Dorst, K and Dijkhuis, J (1995). Comparing Paradigms for Describing Design Activity.
Design Studies, 16(2), 261-275.
Ericsson, KA and Simon, HA (1993). Protocol Analysis: Verbal Reports as Data. MIT
Press, Cambridge, MA.
Fitzmaurice, G (1996). Graspable User Interfaces. PhD Thesis, University of Toronto.
Fitzmaurice, GW, Ishii, H and Buxton, W (1995). Bricks: Laying the Foundations for
Graspable User Interfaces. In I. Katz R. Mack, L. Marks (Ed.) Proceedings of the
CHI'95 Conference on Human Factors in Computing Systems, ACM Press, New
York, 442-449.
Fjeld, M, Bichsel, M and Rauterberg, M (1998). BUILD-IT: an Intuitive Design Tool
based on Direct Object Manipulation. In Wachsmut and Fröhlich (Ed.)
Proceedings of Gesture and Sign Language in Human-Computer Interaction,
Springer-Verlag Berlin, 297-308.
Fjeld, M, Ironmonger, N, Guttormsen Schar, S and Krueger, H (2001). Design and
Evaluation of four AR Navigation Tools using Scene and Viewpoint Handling.
Proceedings of INTERACT 2001, 167--174.
Foreman, N and Gillett, R (1997). Handbook of Spatial Research Paradigms and
Methodologies. Psychology Press, Hove, UK.
Gero, J and McNeill, T (1997). An Approach to the Analysis of Design Protocols.
Design Studies, 19(1), 21-61.
Gero, JS (1992). Creativity, Emergence and Evolution in Design. Proceedings of Second
International Round-Table Conference on Computational Models of Creative
Design, 1-28.
Gero, JS and Damski, J (1997). A Symbolic Model for Shape Emergence. Environment
and Planning B: Planning and Design, 24, 509-526.
Gero, JS and Kelly, N (2005). How to Make CAD Tools more Useful to Designers.
Proceedings of ANZAScA 14.
Gero, JS and Yan, M (1993). Discovering Emergent Shapes using a Data-driven
Symbolic Model. In U. Flemming and S. Van Wyk (Eds.) Proceedings of CAAD
Futures'93, 3-17.
Gibson, JJ (1962). Observations on active touch. Psychological Review, 69, 477-491.
Goldin-Meadow, S (2003). Hearing gesture: How our hands help us think. Harvard
University Press, Cambridge, MA.
Goldschmidt, G (1991). The Dialectics of Sketching. Creativity Research Journal, 4(2),
123-143.
Goldschmidt, G and Porter, WL (2004). Design Representation. Springer, New York.
Gray, WD and Boehm-Davis, DA (2000). Milliseconds Matter: An introduction to
microstrategies and to their use in describing and predicting interactive behavior.
Journal of Experimental Psychology: Applied, 6(4), 322-335.
Halverson, CA (1994). Distributed Cognition as a theoretical framework for HCI. Tech
Report No. 94-03, San Diego.
Henwood, KL (1996). Qualitative Inquiry: Perspectives, Methods and Psychology. In
John T.E. Richardson (Ed.) Handbook of Qualitative Research Methods for
Psychology and the Social Sciences, BPS Books, Leicester, England.
Jun, H and Gero, JS (1997). Representation, Re-representation and Emergence in
Collaborative Computer-Aided Design. In M.L. Maher, J.S. Gero and F.
Sudweeks (Eds.) Preprints Formal Aspects of Collaborative Computer-Aided
Design, University of Sydney, 303-320.
Kavakli, M and Gero, JS (2002). The Structure of Concurrent Cognitive Actions: a Case
Study of Novice and Expert Designers. Design Studies, 23(1), 25-40.
Kirsh, D (1995). The intelligent use of space. Artificial Intelligence, 73, 31-68.
Kirsh, D and Maglio, P (1994). On distinguishing epistemic from pragmatic action.
Cognitive Science, 18, 513-549.
Knauff, M, Schlieder, C and Freksa, C (2002). Spatial cognition: From Rat-research to
Multifunctional Spatial Assistance Systems. KI, 16(4), 5-9.
Lavergne, J and Kimura, D (1987). Hand movement asymmetry during speech: No effect
of speaking topic. Neuropsychologia, 25, 689-693.
Lee, C, Ma, Y and Jeng, T (2003). A Spatially-Aware Tangible User Interface for
Computer-Aided Design. Proceedings of the Conference CHI'03 Human Factors
in Computing Systems, 960-961.
Lloyd, P, Lawson, B and Scott, P (1995). Can Concurrent Verbalization Reveal Design
Cognition? Design Studies, 16, 237-259.
Ma, YP, Lee, CH and Jeng, T (2003). iNavigator: A Spatially-Aware Tangible Interface
for Interactive 3D Visualization. Proceedings of Computer Aided Architectural
Design Research in Asia (CAADRIA2003), 963-973.
Maher, ML, Daruwala, Y and Chen, E (2004). A Design Workbench with Tangible
Interfaces for 3D Design. In E. Edmonds and R. Gibson (Ed.) Proceedings of the
Interaction Symposium, UTS Printing Services, Sydney, 491-522.
Maher, ML and Kim, MJ (2005). Do tangible user interfaces impact spatial cognition in
collaborative design? In Yuhua Luo (Ed.) Cooperative Design, Visualization and
Engineering, Springer-Verlag Berlin, 30-41.
Maher, ML, Poon, J and Boulanger, S (1996). Formalising Design Exploration as Co-
evolution: A Combined Gene Approach. In J.S. Gero and F. Sudweeks (Eds.)
Advances in Formal Design Methods for CAD, 1-28.
Mäkelä, W, Reunanen, M and Takala, T (2004). Possibilities and Limitations of
Immersive Free-Hand Expression: A Case Study with Professional Artists.
Proceedings of the 12th Annual ACM International Conference on Multimedia,
ACM Press, New York, NY, USA, 504-507.
McCarthy, J and Monk, A (1994). Measuring the Quality of Computer-Mediated
Communication. Behaviour & Information Technology, 13, 311-319.
McNeill, T (1999). The Anatomy of Conceptual Electronic Design. PhD Thesis,
Information Technology, University of South Australia.
Norman, DA (1993). Things That Make Us Smart. Addison-Wesley, New York.
Obeysekare, U, Williams, C, Durbin, J, Rosenberg, R, Grinstein, F, Ramamurti, R,
Landsberg, A and Sandberg, W (1996). Virtual Workbench: A Non-Immersive
Virtual Environment for Visualising and Interacting with 3D Objects for
Scientific Visualisation. Proceedings of Visualization '96, IEEE Computer Society
Press, 345-349.
Purcell, T and Gero, J (1998). Drawings and the Design Process: A Review of Protocol
Studies in Design and Other Disciplines and Related Research in Cognitive
Psychology. Design Studies, 19(4), 389-430.
Regenbrecht, HT, Wagner, MT and Baratoff, G (2002). MagicMeeting: A Collaborative
Tangible Augmented Reality System. In Bowen Loftin, Jim X. Chen, Skip Rizzo,
Martin Goebel and Michitaka Hirose (Eds.) Proceedings of IEEE Virtual Reality
2002, IEEE Computer Society Press, 151-166.
Rogers, Y and Ellis, J (1994). Distributed cognition: an alternative framework for
analysing and exploring collaborative working. Journal of Information
Technology, 9(2), 119-128.
Schkolne, S and Koenig, C (1999). Surface Drawing. Proceedings of ACM
SIGGRAPH99 Conference Abstracts and Applications, ACM Press, 166.
Schön, DA (1983). The Reflective Practitioner: How Professionals Think in Action. Basic
Book, New York.
Schön, DA (1992). Designing as Reflective Conversation with the Materials of a Design
Situation. Knowledge-Based Systems, 5(1), 3-14.
Simon, H (1992). The Sciences of the Artificial. The MIT Press, Cambridge, MA.
Suwa, M, Gero, J and Purcell, T (2000). Unexpected Discoveries and S-inventions of
Design Requirements: Important Vehicles for A Design Process. Design Studies,
21(6), 539-567.
Suwa, M, Purcell, T and Gero, J (1998). Macroscopic Analysis of Design Processes
based on A Scheme for Coding Designers' Cognitive Actions. Design Studies,
19(4), 455-483.
Suwa, M and Tversky, B (1997). What Do Architects and Students Perceive in Their
Design Sketches? Design Studies, 18(4), 385-403.
Suwa, M and Tversky, B (2001). Constructive Perception in Design. In J.S. Gero and
M.L. Maher (Eds.) Computational and Cognitive Models of Creative Design V,
Key centre of design computing and cognition, University of Sydney, 227-239.
Suwa, M and Tversky, B (2002). External Representations Contribute to the Dynamic
Construction of Ideas. In Hegarty M., Meyer B. and Narayanan N.H. (Eds.)
Diagrammatic Representation and Inference, Springer, 341-343.
Turk, M (1998). Perceptual User Interfaces. In Matthew Turk (Ed.) Workshop on
Perceptual User Interfaces, research.microsoft.com.
Tversky, B (2005). Functional Significance of Visuospatial Representations. In P. Shah
and A. Miyake (Eds.), Handbook of Higher-Level Visuospatial Thinking,
Cambridge: Cambridge University Press, 1-34.
Ullmer, B and Ishii, H (1997). Emerging Frameworks for Tangible User Interfaces. IBM
Systems Journal, 39, 915-931.
Underkoffler, J and Ishii, H (1999). Urp: A Luminous-Tangible Workbench for Urban
Planning and Design. Proceedings of the SIGCHI CHI'99 Conference on Human
Factors in Computing Systems, ACM Press, 386-393.
Vega, MD, Marschark, M, Intons-Peterson, MJ, Johnson-Laird, PN and Denis, M (1996).
Representations of visuospatial cognition: a discussion. In M. De Vega, M.
Marschark, M.J. Intons-Peterson, P.N. Johnson-Laird and M. Denis (Eds.)
Models of Visuospatial Cognition, Oxford University Press, New York, 198-226.
Visser, W (2004). Dynamic Aspects of Design Cognition: Elements for a Cognitive
Model of Design. Theme 3A-Databases, Knowledge Bases and Cognitive
Systems, France.
... Modeling software may not provide a competent context to educate the fundamentals, specifically due to the extraneous cognitive load that GUIs (Graphical User Interfaces) impose on learners [7]. On the other hand, physical model interaction has shown significant impact on spatial visualization and reduction of extraneous cognitive load [8] [9]. Hence, due to the close relationship between spatial reasoning and mathematics, physical interaction could affect learning math concepts positively. ...
... In various areas of STEAM (Science, Technology, Engineering, Arts/Architecture, and Mathematics), the effect of physical models on cognitive learning and spatial abilities has been explored. The studies suggest that physical activities increase the creativity of students in design ideation [8] [11], reduce the extraneous cognitive load in the creative design process [12], promote students' spatial skill in understanding scale relations between geometries [13], and improve interaction and communication in a collaborative working environment [14]. Physical interaction supports embodied spatial awareness and helps students in mental visualization skills [15]. ...
... Multiple research projects have studied the advantage that physical models have over textbooks or computer-based 3D models [9][8], revealing the impact of physical model/interaction on spatial cognition and understanding of complex spatial relations. The results of a study exploring the impact of Tangible User Interface (TUI) versus Graphical User Interface (GUI) found that TUI provides more epistemic actions for designers, promotes spatial cognition, and stimulates design creativity, while GUI restricts designers to only following the design briefs and causes less exploration and discovery between design and solution spaces [8]. ...
Preprint
Full-text available
Despite the remarkable development of parametric modeling methods for architectural design, a significant problem still exists, which is the lack of knowledge and skill regarding the professional implementation of parametric design in architectural modeling. Considering the numerous advantages of digital/parametric modeling in rapid prototyping and simulation most instructors encourage students to use digital modeling even from the early stages of design; however, an appropriate context to learn the basics of digital design thinking is rarely provided in architectural pedagogy. This paper presents an educational tool, specifically an Augmented Reality (AR) intervention, to help students understand the fundamental concepts of para-metric modeling before diving into complex parametric modeling platforms. The goal of the AR intervention is to illustrate geometric transformation and the associated math functions so that students learn the mathematical logic behind the algorithmic thinking of parametric modeling. We have developed BRICKxAR_T, an educational AR prototype, that intends to help students learn geometric transformations in an immersive spatial AR environment. A LEGO set is used within the AR intervention as a physical manipulative to support physical interaction and im-prove spatial skill through body gesture.
... Modeling software may not provide a competent context to educate the fundamentals, specifically due to the extraneous cognitive load that GUIs (Graphical User Interfaces) impose on learners [7]. On the other hand, physical model interaction has shown significant impact on spatial visualization and reduction of extraneous cognitive load [8] [9]. Hence, due to the close relationship between spatial reasoning and mathematics, physical interaction could affect learning math concepts positively. ...
... In various areas of STEAM (Science, Technology, Engineering, Arts/Architecture, and Mathematics), the effect of physical models on cognitive learning and spatial abilities has been explored. The studies suggest that physical activities increase the creativity of students in design ideation [8] [11], reduce the extraneous cognitive load in the creative design process [12], promote students' spatial skill in understanding scale relations between geometries [13], and improve interaction and communication in a collaborative working environment [14]. Physical interaction supports embodied spatial awareness and helps students in mental visualization skills [15]. ...
... Multiple research projects have studied the advantage that physical models have over textbooks or computer-based 3D models [9][8], revealing the impact of physical model/interaction on spatial cognition and understanding of complex spatial relations. The results of a study exploring the impact of Tangible User Interface (TUI) versus Graphical User Interface (GUI) found that TUI provides more epistemic actions for designers, promotes spatial cognition, and stimulates design creativity, while GUI restricts designers to only following the design briefs and causes less exploration and discovery between design and solution spaces [8]. ...
Conference Paper
Full-text available
Despite the remarkable development of parametric modeling methods for architectural design, a significant problem still exists, which is the lack of knowledge and skill regarding the professional implementation of parametric design in architectural modeling. Considering the numerous advantages of digital/parametric modeling in rapid prototyping and simulation most instructors encourage students to use digital modeling even from the early stages of design; however, an appropriate context to learn the basics of digital design thinking is rarely provided in architectural pedagogy. This paper presents an educational tool, specifically an Augmented Reality (AR) intervention, to help students understand the fundamental concepts of parametric modeling before diving into complex parametric modeling platforms. The goal of the AR intervention is to illustrate geometric transformation and the associated math functions so that students learn the mathematical logic behind the algorithmic thinking of parametric modeling. We have developed BRICKxAR_T, an educational AR prototype, that intends to help students learn geometric transformations in an immersive spatial AR environment. A LEGO set is used within the AR intervention as a physical manipulative to support physical interaction and improve spatial skill through body gesture.
... The majority of the articles appeared in academic conferences (n = 25), with the remainder disseminated as peer-reviewed journal publications (n = 11). The studies included in this review span multiple research domains, including but not limited to HCI (Järvinen et al., 2011, Kasahara et al., 2017, VR (Peck et al., 2011, Wang & Lindeman, 2012, robotics (Keren et al., 2012), serious games (Chiu et al., 2018, Freina et al., 2016, educational technologies (Abrahamson & Trninic, 2011, Chiu et al., 2018, Leduc-Mills & Eisenberg, 2011, Lindgren et al., 2016, Zander et al., 2016, user experience, tangibles (Antle & Wang, 2013), psychology (Larrue et al., 2014) and architectural design (Kim & Maher, 2008). Concerning publication venue (see Table 5), the 36 studies were chosen from 23 different journals and conferences. ...
... This is demonstrated in our literature review, since the various capacities of technologies were found beneficial for certain SS. For example, designing through 3D software allows users to think visually in three dimensions, which improves their spatial visualization (Kim & Maher, 2008, Wang & Lindeman, 2012. Other technologies such as AR/VR and GPS navigation were able to enhance SS. ...
... An example of this is Ries et al. (2009)'s investigation into the effects of geometric and motion fidelity of a user's avatar on self-perception, specifically concerning distance estimations, in VR environments. Next, researchers developed tools to help train future employees and support work-related tasks (Kim & Maher, 2008, Lakatos et al., 2014, Quarles et al., 2008, Zaman et al., 2015. There is also an interesting body of research focusing on spatial learning, with children as end-users. ...
Article
Full-text available
Embodied interaction describes the interplay between the brain and the body and its influence on the sharing, creation and manipulation of meaningful interactions with technology. Spatial skills entail the acquisition, organization, utilization and revision of knowledge about spatial environments. Embodied interaction is a rapidly growing topic in human-computer interaction with the potential to amplify human interaction and communication capacities, while spatial skills are regarded as key enablers for the successful management of cognitive tasks. This work provides a systematic review of empirical studies focused on embodied interaction and spatial skills. Thirty-six peer-reviewed articles were systematically collected and analysed according to their main elements. The results summarize and distil the developments concerning embodied interaction and spatial skills over the past decade. We identify embodied interaction capacities found in the literature review that help us to enhance and develop spatial skills. Lastly, we discuss implications for research and practice and highlight directions for future work. RESEARCH HIGHLIGHTS • Systematically reviewed 36 studies to identify aspects of embodied interaction and spatial skills convergence that have been the focus of publications between 2008 and 2018. • Assessed embodied interaction-based spatial skills interventions, paying specific attention to the employed technologies, targeted spatial skills and main thematic categorizations of the research questions. • Discuss three capacities of embodied interaction that might catalyse the development and enhancement of spatial skills engagement: namely, enrichment, transferability and convergent smart physicality.
... La souplesse de l'interaction [Bilda et Damakian 2002], l'expressivité du croquis [Rogers et al. 2 0 0 0 ;MacCall et al. 2001] sont des atouts indéniables de ce mode d'expression. D'autres auteurs ont montré l'intérêt des interfaces tangibles, dont le stylo est une forme rudimentaire, comme vecteur d'un plus grand investissement corporel dans la recherche de solutions, ce qui est reconnu comme facilitant la cognition spatiale [Kim & Maher, 2008]. Néanmoins, peu d'études se sont penchées sur l'influence de la numérisation du croquis, c'est-à-dire la comparaison entre le croquis papier et le croquis numérique. ...
Article
Full-text available
Our research is embedded in the framework of the development of free-hand drawing-based computer assisted architectural design environment. In this paper, we study the activity of drawings duplication, observed during the paper changes in the phase of preliminary sketching. We aim at identifying the impacts of two digital sketches environments, with or without drawings interpretation, on this activity and on the graphical productions. We observe six activities, two with paper-pencil and four distributed on our two prototypes, and draw some operational conclusions for the development of software designed to support the architectural sketches in the preliminary design stage.
... Many argue that prototyping, or the development of a physical product during the design process, is an essential step to allow students to identify design flaws that would have otherwise gone unnoticed and ultimately results in better final project outcomes (Forest et al. 2014;Wilczynski et al. 2016). Similarly, Kim and Maher (2008) argue that prototyping processes activate students' ability to connect education to its applications in the real world and in industry. Makerspaces remove the obstacles to accessing these benefits, through advanced design and manufacturing equipment that does not require extensive training (Wong and Partridge 2016). ...
Article
Full-text available
Background
In recent years, makerspaces have become increasingly common venues of STEM education and are rapidly being incorporated into undergraduate programs. These spaces give students and instructors access to advanced design technology and facilitate the incorporation of a wide variety of projects into the curriculum; however, their impacts on students are not yet fully understood. Using matched survey responses (i.e., repeated measures) from undergraduate students enrolled in engineering courses that assigned a makerspace-based project, we evaluate how the use of a university makerspace impacts students’ attitudes towards design, engineering, and technology. Further, we examine whether there are differences based on students’ year in program, gender, and race.
Results
Paired t-tests were used to analyze whether and how nine factors changed within individual students over one semester. Analyses revealed that students who visited the facility showed significant gains in measures of innovation orientation, design self-efficacy, innovation self-efficacy, technology self-efficacy, belonging to the makerspace, and belonging to the engineering community. Subsequently, repeated measures analyses of variance (RMANOVAs) on the students who visited the makerspace revealed significant main effects of students’ year in program, gender, and race, as well as interactional effects of both year in program and race with time.
Conclusions
These results affirm the value of incorporating makerspace-based projects into STEM curricula, especially during early coursework. However, our analyses revealed consistent gender gaps in measures of self-efficacy before and after using the makerspace. Similarly, gains in belonging to the makerspace were not equal across racial groups. We conclude that while makerspaces are fulfilling some of their promise for educating innovative problem solvers, more attention needs to be paid to avoid reproducing disparities in STEM education that are already experienced by female students and racial minorities.
Chapter
Despite the remarkable development of parametric modeling methods for architectural design, a significant problem still exists: a lack of knowledge and skill regarding the professional implementation of parametric design in architectural modeling. Considering the numerous advantages of digital/parametric modeling in rapid prototyping and simulation, most instructors encourage students to use digital modeling even from the early stages of design; however, an appropriate context in which to learn the basics of digital design thinking is rarely provided in architectural pedagogy. This paper presents an educational tool, specifically an Augmented Reality (AR) intervention, to help students understand the fundamental concepts of parametric modeling before diving into complex parametric modeling platforms. The goal of the AR intervention is to illustrate geometric transformations and the associated math functions so that students learn the mathematical logic behind the algorithmic thinking of parametric modeling. We have developed BRICKxAR/T, an educational AR prototype intended to help students learn geometric transformations in an immersive spatial AR environment. A LEGO set is used within the AR intervention as a physical manipulative to support physical interaction and improve spatial skills through body gestures.
Chapter
Full-text available
In this paper, the fundamentals of a 3D nested construction method for 3D-printing stackable tower-like structures are explained, taking into consideration the transportation, storage, assembly, and even disassembly of building components. The proposed method is called “PRINT in PRINT.” This paper also documents the authors’ experience of and findings from designing and printing a column erected out of a series of 3D printed components in a short stack. Employing the design principles of 3D printing in a nested fashion, the authors showcase the main parameters involved in dividing the column’s global geometry into stackable components. By converting formal, technical, and material restrictions of a robotic-assisted 3D printing process into geometric constraints, the paper describes how the column components are divided, namely that one component shapes the adjacent one.
Chapter
Within the industry of architecture, interior design and construction, stakeholders and clients can differ significantly in their level of spatial understanding. Traditional media and new media, such as Virtual Reality (VR), are used to visualize spaces to bridge the gap between professionals and non-professionals in the understanding of space. However, it remains unclear which medium increases spatial understanding for non-professionals more effectively. In this study we compared spatial understanding among non-professionals of a real space, an apartment, under three conditions: (a) being in the real space, (b) being in VR and (c) viewing it on a traditional desktop screen. Forty-five participants estimated spatial measures such as the height, length and depth of a room and its furniture (objective spatial understanding). The results revealed that objective spatial understanding did not differ significantly between the three conditions. However, non-professionals reported that VR made it easier to estimate the measurements of complex and less familiar objects and made them feel more confident about the accuracy of the estimated measures. The feeling of engagement was found to be a possible predictor of this effect. In addition, the possibility of using one’s own body as a reference point in VR also increased confidence. The results indicate that VR may improve communication between clients and architects and interior designers, but only when it concerns complicated spaces and unfamiliar objects.
Chapter
This paper presents an intuitive and natural gesture-based methodology for solid modelling in the Augmented Reality (AR) environment. The Client/Server (C/S) framework is adopted to design the AR-based computer-aided design (CAD) system. A method for creating random or constraint-based points using gesture recognition is developed to support modelling. In addition, a prototype system for 3D solid product modelling has been successfully developed, and we have compared it with traditional CAD systems on several basic design modelling tasks. Finally, analysis of the questionnaire feedback shows the intuitiveness and effectiveness of the system, and user studies demonstrate its advantages in supporting early product design and in creating and manipulating 3D models in the AR environment.
Article
Reconfigurable modular robots (RMRobots) can change their shape and functionality (e.g., locomotion styles) to fit different environments, and have been widely investigated in applications such as exploration and inspection. In this paper, we present a new application of RMRobots for improving human spatial ability, which plays a significant role in an individual's performance and achievement in science, technology, engineering, and mathematics (STEM). Two user studies were conducted, and the results show that: 1) the task performance of interacting with RMRobots has a significant positive relationship with mental rotation, a widely used measure of spatial ability; and 2) interacting with RMRobots can effectively improve performance on a task related to spatial reasoning skills, according to behavioral data and electroencephalography (EEG) indices. Our study broadens RMRobot research in the area of human-robot interaction.
Article
Full-text available
Interaction with and development of 3D designs can be enhanced with a variety of interaction devices. Various developments in tangible interfaces for design applications show that customised HCI for 3D design can address the specific needs of this task domain. These developments provide the inspiration for our design workbench. We propose two principles for improving on the traditional interface of a screen, keyboard, and mouse on a desktop computer: the use of a horizontal projection surface with tangible input devices for interacting with and modifying a 3D design; and a tangible vertical screen for interacting with a changing perspective visualisation of the 3D design.
Article
Full-text available
This paper describes research that explores computational approaches to automatically developing multiple representations, with the aim of overcoming the limitation of fixed representations. The claim is that if CAD tools used a more fluid system of representation then they would be more useful to designers, by preventing fixation and by providing a means of resolving the ambiguity in internal representations. A method for producing multiple representations has been developed and implemented to demonstrate that simple sketches can be re-represented by a computer. The method uses a neural network to find features in an arbitrary canonical representation and to restructure a sketch based upon groupings of similar features. The results demonstrate the feasibility of automated computer-based re-representation. This research provides the foundations for support for multiple representations in CAD tools that would make CAD tools more useful to designers.
Article
This paper examines the theoretical and practical problems that arise from attempts to develop formal characterizations and explanations of many work activities, in particular, collaborative activities. We argue that even seemingly discrete individual activities occur in, and frequently draw upon a complex network of factors: individual, social and organizational. Similarly, organizational and social constraints and practices impact upon individual cognitive processes and the realization of these in specific tasks. Any adequate characterization of work activities therefore requires the analysis and synthesis of information from these traditionally separate sources. We argue that existing frameworks, emanating separately from the respective disciplines (cognitive, social and organizational) do not present an adequate means of studying the dynamics of collaborative activity in situ. An alternative framework, advocated in this paper, is distributed cognition. Its theoretical basis is outlined together with examples of applied studies of computer-mediated work activities in different organizational settings.
Article
We advance a view that design is a combination of many types of thinking, maintaining that concurrent verbal reports are best at revealing particular types of thinking (specifically the short-term focus of the designer). There are two issues in using concurrent verbal reports to elicit types of design thinking: (1) do the words ‘thought aloud’ accurately reflect the design thinking? (2) does a concurrent verbal methodology actually affect the designing it seeks to reveal? In our analysis of Dan's protocol we show how the interaction between design problem and design solution is effectively handled by Dan, and revealed by concurrent verbalization, but also how aspects of design thinking, such as perception and insight, are not elicited by concurrent verbalization. We go on to show how the design task changes as a result of the designer having to continually think aloud, with ‘normal’ activities like negotiation and displacement being impaired. We conclude by identifying a need for protocol analysis in design to be specific about the questions it addresses.
Article
One of the most important criteria for performance quality, in both art and design, seems to be the creativity of the product. Being original and innovative is, by definition, a feature of both areas. The primary objective of this study was to determine whether human judgment of creativity is a reliable and valid method in design evaluation and selection. In a first experiment, the judgments of experts, nonexperts, and people with an intermediate level of expertise were compared. They rated 44 first-year designs on creativity, prototypical value, attractiveness, interest, technical quality, expressiveness, and integrating capacity. Pearson product-moment correlations for creativity were relatively low, ranging from .23 to .29. There was little difference between experts and nonexperts. The results confirmed the research in artwork assessment. In Experiment 2 the results were replicated with senior design students as judges, a group with an intermediate level of expertise. Ratings were given for 3 different designs. Correlations were much higher, ranging from .48 to .57. This could be a consequence of the homogeneity of the group of judges. The prototypicality of a design, the distance between the design and the observers' internal representation, appeared to discriminate between creativity and other aesthetic criteria. A pair-comparisons analysis also contributed to the definition of creativity in both general and domain-specific terms.