Side-by-side Human-Computer Design using a Tangible
User Interface
Authors Anonymized For Review
We present a digital-physical system to support side-by-side collaborative human-computer design exploration. The system consists of a sensor-instrumented “sand table” functioning as a digital-tangible space for exploring early-stage design decisions. Using our system, the human designer generates physical representations of design solutions while monitoring a visualization of the solutions’ objective space. At the same time, the AI system uses the vicinity of the human’s exploration point to continuously seed its search algorithms and suggest design alternatives. We present an experimental study comparing this side-by-side design space exploration to human-only design exploration and to AI-only optimization. We find that side-by-side collaboration of a human and a computer significantly improves design outcomes and offers benefits in terms of the user experience. However, side-by-side human-computer design also leads to narrower design space exploration and less diverse solutions than both human-only and computer-only search. This has important implications for future human-computer collaborative design systems.
Introduction
A useful formulation of early-stage design is as an exploration of the de-
sign space [5]. This can be formalized as a search through a solution space,
proposing and evaluating solutions in pursuit of some possible world [42].
Given that search is also a core capacity of Artificial Intelligence (AI) [32, 46], researchers have over the last few decades developed intelligent tools to aid in design problems through a variety of search methods [43, 30]. In most cases of design-as-search, whether a human designer or a computer design tool is employed, the process is modeled as that of an individual designer [17]. Some researchers, however, have suggested that exploring
Fig. 1 Conceptual schematic of side-by-side human-computer design using a tangible interface. The human designer and the AI search algorithm explore different designs simultaneously and affect each other's position in the solution space. The human generates tangible physical representations of design solutions while monitoring a visualization of the solutions’ objective space. The AI search uses the human’s exploration to continuously seed the search algorithms and suggest design alternatives.
a design space can be more powerful when designers work with others. Fis-
cher calls design social by nature [15]. Indeed, collaborative design can tran-
scend the capacity of the individual, leveraging specialized expertise across
“symmetries of ignorance”. This can enable designs that address more com-
plex problems and spaces [2]. The usefulness of collaboration in design has
engendered a strong interest in systems and tools that support collaborative
design, precipitating the field of computer-supported collaborative design
(CSCD) [40].
Beyond CSCD, the potential of design as a collaborative activity also sug-
gests the notion of human-computer collaborative design, which is the focus
of this paper. While many approaches to human-computer collaborative de-
sign either pose agents as support tools for humans [30, 36, 14] or position
humans as inputs to a computational process [13, 7, 25, 4], research in
human-computer teamwork suggests merit in a more balanced partnership
between human and computer designers, modeling the interaction as a true
collaboration [16].
In this paper, we present a system to support side-by-side human-computer
collaborative design using a digital-tangible sand table interface in combi-
nation with a visualization of the design problem’s objective space (Fig. 1).
The user searches the design space using a physical one-to-one mapping of
the solution space, while the AI search algorithm uses the user’s designs as
seeds to search the design space alongside the human designer, and subse-
quently presents the human with a visualization of the search process.
Our motivation to use a tangible user interface (TUI) stems from the fact
that tangible and tabletop interfaces have been found particularly suited for
collaborative exploration of design spaces. On its own, a TUI affords design-
ers the ability to employ senses and manipulations familiar in the physical
world to interact with virtual models [20]. TUIs have been found to promote
learning [6, 45], and interaction with physical media to drive innovative ex-
ploration in design spaces [27, 44, 31]. Tangible interfaces can also impact
the nature of collaborative design processes and hence outcomes, e.g. the ef-
fect of a TUI on spatial cognition in groups can increase “problem-finding”,
leading to higher creativity [27]. TUI’s have been extensively evaluated vis-
a-vis graphical or screen-based interfaces [50, 47, 34], including with re-
spect to design tasks [26], so this is not the focus of this work. We instead
set out to use the TUI as a collaborative platform for evaluating side-by-side
exploration with an agent in a design space.
In this vein, we present an experimental study that compares side-by-side
human-computer collaborative design with two baseline conditions: human-
only design search, and human observation of computer-only search. De-
pendent variables include quality of the generated designs and user enjoy-
ment. The design problem we use to illustrate our approach is the EOSS
Sensor-Orbit Design Problem, a real-world space mission design problem
with multiple competing objectives.
The core contributions of this work are (a) a digital-physical system that
supports side-by-side human-computer collaborative exploration of a design
space; (b) support for our hypothesis that this system results in better de-
signs than either the human or the computer working alone; (c) insights into
the user-experience benefits of side-by-side human-computer collaborative
design; and (d) limitations and design implications related to the effects of
side-by-side exploration on the coverage and diversity of the design solu-
tions explored.
The EOSS Sensor-Orbit Design Problem
Designing sensor configurations for earth-observing satellite systems (EOSS)
is a real-world multi-objective design problem in Aerospace Engineering.
The design of such systems has become increasingly difficult and important to space organizations planning satellite missions, as mission requirements grow more stringent without the budget increases needed to meet them [39].
Specifically, we engage the problem of deploying sensors on a climate-
monitoring satellite constellation to optimally satisfy 371 measurement re-
quirements (e.g. air temperature, cloud cover, atmospheric chemistry) de-
fined by the World Meteorological Organization (www.wmo-sat.info/oscar)
at minimal cost [18]. A design in this space consists of assigning up to 12
different kinds of sensors to satellites in five different orbits around the earth.
Each sensor has different capabilities that can address different measurement
requirements to varying degrees, dependent on the orbit in which it is de-
ployed. The cost of deploying various sensors is also highly orbit-dependent, insofar as it affects the choice of launch vehicle and supporting subsystems, among other considerations. The cost and scientific benefit of a specific sensor configuration are further complicated by the complementary or deleterious effects that sensors deployed together can exert on each other.
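To make the structure of this space concrete, the following minimal Python sketch shows one possible encoding of a sensor-orbit design together with a stand-in for the evaluation just described. The sensor and orbit names, the scoring tables, and the evaluate function are illustrative assumptions, not the authors' actual simulation engine.

    SENSORS = [chr(c) for c in range(ord("A"), ord("A") + 12)]  # 12 sensor kinds
    ORBITS = [f"Orbit {i}" for i in range(1, 6)]                # 5 orbits

    # A design assigns a set of sensors to each orbit, e.g.
    # {"Orbit 1": {"A", "C"}, "Orbit 2": set(), ...}

    def evaluate(design, benefit_table, cost_table, synergy):
        """Hypothetical stand-in for the custom simulation engine: benefit
        and cost are orbit-dependent, and co-deployed sensors modify each
        other's benefit through a pairwise synergy factor."""
        benefit, cost = 0.0, 0.0
        for orbit, sensors in design.items():
            for s in sensors:
                cost += cost_table[(s, orbit)]                # orbit-dependent cost
                b = benefit_table[(s, orbit)]
                for t in sensors - {s}:                       # complementary (>1)
                    b *= synergy.get(frozenset((s, t)), 1.0)  # or deleterious (<1)
                benefit += b
        return benefit, cost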
The Collaborative Design Sand-Table Tangible User Interface
Overview
Inspired by the affordances of TUIs for design and collaboration, we devel-
oped a tangible sand table interface to study collaborative design behaviors
(Fig. 2).
Our mixed-reality system consists of an interactive tabletop, a visualiza-
tion, and a set of blocks. Each block, which represents a particular instru-
ment from the aforementioned design problem, can be placed in regions
designated as different orbits on the tabletop. The science benefit and cost
associated with a particular block configuration are calculated using a custom simulation engine and plotted on a visualization above the table. The plotted points are color-coded to indicate recency and whether they are user- or agent-generated.
The most recent point is plotted in red, the second most recent in pink, and all other points in various shades of purple, with darker shades indicating more recently generated points. All points on the plot are user-selectable; the
configuration used to generate any selected point is overlaid on the orbits in
the tabletop workspace (Fig. 3).
Independent and Collaborative Design Agents
We developed two computational design agents to explore the sensor-orbit
configuration design space, one that operates independently without user
input, and one that explores the design space collaboratively with a human
user.
Fig. 2 A user working collaboratively with an AI design agent using the presented digital-tangible sand table interface.

The “independent” design agent employs a Non-dominated Sorting Genetic Algorithm (NSGA-II) [9] to explore the design space. Evolutionary and genetic algorithms have long been associated with design exploration, and NSGA-II is a conventional approach to exploring both design and multi-objective optimization spaces [37, 24, 8, 21, 10, 29].
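The NSGA-II agent itself was built on an existing optimization library (see Technical Specifications below); what the algorithm and our later analyses share is the Pareto-dominance relation, sketched here in Python under the assumption that objective vectors are stored so that both objectives are minimized (e.g. as (-benefit, cost)).

    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (both minimized):
        a is at least as good in every objective and strictly better in one."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def non_dominated(points):
        """Filter a list of objective vectors down to its Pareto front."""
        return [p for p in points
                if not any(dominates(q, p) for q in points if q != p)]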
Inspired by recent work demonstrating the effect of simple local behavior
on global outcomes in collaboration [41], the second, “collaborative”, de-
sign agent employs a version of local search modified to continuously orient
its search space around the sensor-orbit configurations being explored by
the human user. This allowed the human and design agent to monitor one
another while searching the space in parallel, with the user choosing when
to interact and cross search paths (Fig. 1).
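As an illustration of this behavior, the sketch below shows how such a continuously re-seeded local search could be structured. get_human_config, evaluate, and post_to_plot are hypothetical callbacks standing in for the tabletop tracking, the simulation engine, and the scatterplot; SENSORS is the list from the earlier encoding sketch. This is our reading of the described behavior, not the authors' code.

    import random

    def collaborative_step(human_config, evaluate, rng=random):
        """Apply one random single-instrument edit to the human's current
        configuration and return the perturbed design with its objectives."""
        candidate = {orbit: set(s) for orbit, s in human_config.items()}
        orbit = rng.choice(list(candidate))
        if candidate[orbit] and rng.random() < 0.5:
            candidate[orbit].discard(rng.choice(sorted(candidate[orbit])))  # remove
        else:
            candidate[orbit].add(rng.choice(SENSORS))                       # add
        return candidate, evaluate(candidate)

    def run_collaborative_agent(get_human_config, evaluate, post_to_plot):
        """Continuously re-orient the search around the human's latest design."""
        while True:
            seed = get_human_config()                 # latest tabletop state
            candidate, objectives = collaborative_step(seed, evaluate)
            post_to_plot(candidate, objectives)       # plotted in gray for the user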
Technical Specifications
Our tabletop TUI (Fig. 4) is designed in the tradition of the reacTable [22].
The table frame is constructed from 80/20 T-Slotted aluminum beams fit-
ted with a 36” by 30” frosted acrylic sheet as a tabletop. The frame inter-
nally houses a projector which displays images on the tabletop as well as
an infrared camera to detect objects placed on the surface. Blocks repre-
senting different sensors are fitted with unique fiducial markers and tracked
across the table surface using the camera and reacTIVision [23]. The NSGA-
II agent was implemented via the jMetal optimization library [12].
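As a rough sketch of the sensing pipeline, the snippet below rebuilds the current design from the marker positions that reacTIVision reports as normalized tabletop coordinates. The actual projected orbit layout is not specified here, so the vertical-band geometry and the marker_to_sensor mapping are purely illustrative.

    def orbit_for_marker(x_norm, y_norm, n_orbits=5):
        """Map a marker's normalized position (0..1) to an orbit region,
        assuming (for illustration only) that orbits are vertical bands."""
        band = min(int(x_norm * n_orbits), n_orbits - 1)
        return f"Orbit {band + 1}"

    def table_state(markers, marker_to_sensor):
        """Rebuild the current design from tracked markers, given an
        iterable of (fiducial_id, x_norm, y_norm) tuples."""
        design = {f"Orbit {i}": set() for i in range(1, 6)}
        for fid, x, y in markers:
            design[orbit_for_marker(x, y)].add(marker_to_sensor[fid])
        return design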
Fig. 3 This figure illustrates both the tabletop and the visualization interfaces
through which users explore the design space. As the user arranges blocks (sen-
sors) into orbits on the table, the system evaluates and plots the corresponding cost
on the scatterplot. The current configuration (in this case instrument H in Orbit 1,
etc.) is always plotted in red, the next most recent in pink. All other user-generated
cost points are plotted in a purple that fades over time. Whenever a design agent
generates a configuration, the system plots the corresponding cost in gray. Finally, any point on the scatter plot, if selected, will be projected on the table (in this case, instruments B, E, F in Orbit 1, etc.).
Research Questions
The described system and study are elements of an ongoing project to both
understand and realize novel forms of human-computer collaboration in
physical design spaces. In this particular work, we explore the following
research questions:
• RQ1: How do design solutions produced by a human and a design agent working side-by-side compare to either human-only or algorithm-only generated solutions?
• RQ2: How does collaborating side-by-side with an intelligent agent affect user enjoyment while exploring a design space?
Fig. 4 The sand table interface projects a workspace onto a surface where a camera tracks blocks identified by fiducial markers. As the blocks move between regions on the surface, a simulation engine evaluates the associated configurations and plots them on a screen. All plotted points in the objective space can be selected and projected back onto the tabletop surface.
Experimental Setup
We compare our side-by-side approach with two baseline methods, which
are effectively “subsets” of the proposed approach. Specifically, we ran a
three-condition, within-user study which asked participants to explore the
sensor-orbit configuration design problem on their own, by passively observ-
ing the NSGA-II agent, and side-by-side with the collaborative local-search
design agent (Fig. 5).
Each study lasted roughly an hour, and involved three treatment sessions.
During each session, participants were asked to explore the design space
through our interface, after which they were given thirty seconds to build
the best design they could think of from scratch and, finally, completed a
questionnaire assessing affect and user experience for that round. Following
the study, users completed a post-survey ranking the conditions and reflect-
ing on their choices.
In the following, we describe the three conditions in detail.

Fig. 5 The three design-space search interactions studied: human-only search, human observation of agent search via NSGA-II, and side-by-side collaborative search.
1. HUMAN-ONLY (HUM): In the HUMAN-ONLY condition, participants
were asked to engage with the design space through the sand table in-
terface on their own. They were given a set of 24 blocks (two of each
instrument) and allowed to explore using the tabletop and visualization
without any assistance from or interaction with a design agent. As de-
scribed in the system description, participants could click on previously
generated designs of their own to reflect on their design exploration at any
time.
2. OBSERVE-AGENT (OBS): In the OBSERVE-AGENT condition, partic-
ipants followed along as the NSGA-II design agent explored the space
through the interface in real-time, with all evaluated configurations plot-
ted on the screen. Again, participants were able to select cost points as
they were explored to see the corresponding configurations on the table-
top, and we allowed them to move around blocks on the table as well,
although the system did not evaluate any block configurations.
3. SIDE-BY-SIDE (SBS): In the SIDE-BY-SIDE condition, the participants
worked alongside the local-search design agent. As in the HUM condi-
tion, the system would evaluate and plot evaluations for the block con-
figurations that users placed on the table. The local search agent would
continuously explore minor variations of the current block configuration,
which the system would evaluate and visualize for the user as well. For
simplicity, we defined the local search neighborhood as any configuration at an edit distance of one from the current configuration (i.e. adding, removing, substituting, or moving one instrument in any orbit; see the sketch following this list). Users were free
in this condition to monitor the agent’s search path and adjust their own
if desired.
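For reference, the full edit-distance-one neighborhood can be enumerated as follows. This is a sketch consistent with the definition above; constraints such as the limited number of physical blocks are ignored.

    def edit_distance_one(design, sensors):
        """Enumerate every configuration one edit away from `design`:
        add, remove, substitute, or move a single instrument."""
        def edited(remove=None, add=None):
            new = {o: set(s) for o, s in design.items()}
            if remove:
                new[remove[0]].discard(remove[1])
            if add:
                new[add[0]].add(add[1])
            return new

        neighbors, orbits = [], list(design)
        for o in orbits:
            for s in sensors:
                if s not in design[o]:
                    neighbors.append(edited(add=(o, s)))                    # add
            for s in design[o]:
                neighbors.append(edited(remove=(o, s)))                     # remove
                for t in sensors:
                    if t != s and t not in design[o]:                       # substitute
                        neighbors.append(edited(remove=(o, s), add=(o, t)))
                for o2 in orbits:
                    if o2 != o and s not in design[o2]:                     # move
                        neighbors.append(edited(remove=(o, s), add=(o2, s)))
        return neighbors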
The instruments and orbits were randomly remapped between conditions,
and users were reminded of this, in order to prevent carryover of knowledge
from one condition to the next. The conditions were also randomly and uni-
formly counterbalanced against ordering effects due to fatigue or increased
familiarity with the interface or task.
Hypotheses
Through our study, we examined the following hypotheses:
• H1 - Design Quality: The user-agent collaboration (SBS) will generate better designs than either the user (HUM) or the computer (OBS) alone. While “better” is often difficult to quantify in a design problem, in this case we evaluate designs relative to a baseline Pareto front generated by a conventional genetic algorithm used in this domain (NSGA-II).
• H2 - Enjoyment: Users enjoy collaboratively exploring with an agent (SBS) more than exploring on their own (HUM) or following the agent as it explores (OBS).
Results
We ran our study with 31 subjects (13 female) between the ages of 18 and
37. To attain a more diverse population sample, we recruited participants
from a large city both through mailing lists and flyers at local universities
and via ads on related social media groups and online bulletin boards. The
resulting participant set came from varied educational backgrounds: six had completed high school or a GED, 18 had a bachelor's degree, and seven had a master's degree, advanced graduate work, or a PhD.
In the following, we describe our findings with regard to our hypotheses: Design Quality and Enjoyment.
Design Quality
Given the multi-objective nature of the sensor-orbit configuration problem,
there is no clear single metric by which designs can objectively be compared,
a matter complicated by the unknown nature of the true Pareto frontier in this
real-world problem.
For each participant and condition, we had a single design solution, produced from a blank slate at the conclusion of the condition, to compare within-user. Following [19, 48, 35], we calculated the generational distance for each of the designs using their normalized Euclidean distance from a reference, empirically derived, non-dominated Pareto frontier. We constructed this Pareto frontier by interpolating the Pareto-dominant subset of over 16,000 configurations generated by running NSGA-II (Fig. 7). User designs were then compared by their distance from this reference frontier.¹
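A sketch of this quality measure, reusing the dominates helper from the earlier sketch and assuming objectives pre-normalized to comparable scales:

    import math

    def distance_to_front(design_point, reference_front):
        """Normalized Euclidean distance from a design's objective vector
        to the nearest point on the reference Pareto front, negated when
        the design dominates a reference point (see footnote 1)."""
        d = min(math.dist(design_point, r) for r in reference_front)
        if any(dominates(design_point, r) for r in reference_front):
            return -d
        return d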
One-tailed paired-sample t-tests were conducted to evaluate the differ-
ence in quality of designs produced in the SBS condition, compared to each
of the baseline conditions, HUM and OBS. The SBS condition produced designs significantly closer to the reference frontier (M=0.114, SD=0.086) than both HUM (M=0.167, SD=0.138, t=-1.920, p=0.032) and OBS (M=0.155, SD=0.124, t=-1.827, p=0.039); see Fig. 6(a). These results suggest that participants
tended to produce better designs after exploring the space with the collabo-
rative agent, relative to the reference Pareto-optimal front, supporting H1.
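For readers who want to reproduce this style of analysis, a minimal sketch with SciPy follows; the arrays here are random stand-in data generated from the reported means and standard deviations, not the study's measurements.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    sbs = rng.normal(0.114, 0.086, size=31)   # stand-in per-participant distances
    hum = rng.normal(0.167, 0.138, size=31)

    # Paired, one-tailed test of H1: SBS distances are smaller than HUM's.
    # (The `alternative` keyword requires SciPy >= 1.6.)
    t, p = stats.ttest_rel(sbs, hum, alternative="less")
    print(f"t = {t:.3f}, p = {p:.3f}")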
Enjoyment
Participants’ enjoyment was measured using the Positive and Negative Af-
fect Schedule (PANAS) [49], and user experience via the User Experience
Questionnaire (UEQ) [28]. Following the study, participants also ranked the
treatments in order of helpfulness and enjoyment, and provided qualitative
comparisons of the treatments in terms of helpfulness and enjoyment.
Participants displayed stronger positive affect in the SBS condition (M=32.85,
SD=8.964) compared to HUM (M=30.97, SD=8.677, t=1.455, p=0.078),
and compared to OBS (M=29.56, SD=8.677, t=3.117, p=0.002). One-tailed
paired-sample t-tests indicate that only the latter difference is significant,
thus only partially supporting H2. Likewise, participants displayed lower
negative affect after the SBS condition (M=13.13, SD=3.784) compared to
either HUM (M=13.26, SD=3.838, t=-0.295, p=0.385) or OBS (M=13.71,
SD=5.172, t=-0.668, p=0.255), but the differences were not significant (Fig. 6(b)).
Participants scored the system more positively via aggregate UEQ scores after SBS design (M=4.664, SD=4.117) than after either HUM (M=3.858, SD=4.553, t=1.301, p=0.102) or OBS (M=3.339, SD=4.929, t=1.717, p=0.048), although one-tailed paired-sample t-tests indicate that only the latter difference was significant, and effect sizes were small (Fig. 6(c)). We employed a subset of the full UEQ scale, including the complete scales for attractiveness, efficiency, stimulation, and novelty.
¹ User-generated designs that dominated any configurations on the reference frontier were assigned the negation of this distance.
Fig. 6 Mean design quality and user experience scores across the three study conditions. (a) Mean distance of user-produced designs to the reference Pareto front from each treatment; the reference Pareto front was generated by running NSGA-II. (b) Mean positive and negative affect scores via PANAS, measured immediately after each treatment. (c) Mean User Experience Questionnaire scores, measured immediately after each treatment. (d) The UEQ scale used was a subset composed of metrics for attractiveness, efficiency, stimulation, and novelty.
Interestingly, users rated OBS higher than HUM or SBS in terms of efficiency, but lower than the others in terms of attractiveness and the hedonic scales of stimulation and novelty (Fig. 6(d)).
Finally, users overall ranked the treatments as (1. SBS, 2. OBS, 3. HUM)
in terms of helpfulness and (1. SBS, 2. HUM, and 3. OBS) in terms of en-
joyment (Table 1). The rankings were aggregated using an extended Borda
system [3] whereby each user’s ranking was scored with three points for
their first choice, two for their second, and one for their third choice.
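A minimal sketch of this aggregation rule:

    from collections import Counter

    def borda_scores(rankings, points=(3, 2, 1)):
        """Score per-user treatment rankings with the extended Borda rule:
        3 points for a first choice, 2 for a second, 1 for a third."""
        scores = Counter()
        for ranking in rankings:                 # e.g. ["SBS", "OBS", "HUM"]
            for place, treatment in enumerate(ranking):
                scores[treatment] += points[place]
        return scores

    # borda_scores([["SBS", "OBS", "HUM"], ["SBS", "HUM", "OBS"]])
    #   -> Counter({'SBS': 6, 'OBS': 3, 'HUM': 3})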
Table 1 Participants ranked and reflected on the three treatments at the conclusion of the study in terms of helpfulness and enjoyment. The rankings were aggregated using an extended Borda system (scores listed next to rank in parentheses). Four users did not respond to the enjoyment ranking.

Treatment   Helpfulness Rank (Score), n=31   Enjoyment Rank (Score), n=27
SBS         1 (81)                           1 (70)
OBS         2 (55)                           3 (43)
HUM         3 (50)                           2 (49)

Representative comments:
SBS - Positive: “I liked the fact that I was being assisted along [...] It felt like as if two brains were working simultaneously.” Negative: “It was distracting to see the agent coming up with points around me that weren’t always improvements, and this made me feel less productive.”
OBS - Positive: “It felt like watching the agent exploring by itself allowed me to see different trends without having to move the blocks myself [...] I was arriving at a better solution more quickly.” Negative: “Observing the agent exploring was dreadful. Way too much information, and I couldn’t control the variances in sequences to help myself understand the impacts of various instruments.”
HUM - Positive: “Exploring alone makes it easier and enjoyable because it allows me to follow my own logic of exploration.” Negative: “Exploring with blocks is too inefficient and make me feel frustrated. I felt lost without help from the computer.”
Fig. 7 This figure shows an example of all the evaluated configurations explored
by a single user during the exploration phase in each of the three conditions of the
study. The outputs used to generate the reference Pareto front are plotted in the
background in gray.
Discussion
To summarize, we found that participants produced better designs after ex-
ploring the design space side-by-side with the collaborative design agent
than after exploring on their own or observing and querying the NSGA-II
algorithm visualization. Participants exhibited marginally higher affect and
user experience when working side-by-side than either of the other modes.
They also overwhelmingly rated this design method higher than the other
two. In the following, we discuss implications of our findings, qualitative
insights from user comments, and possible explanations that could lead to
tradeoffs when constructing collaborative design agents.
Working Side-by-side
Participants’ post-study reflections provide some insight on why so many
preferred exploring with the collaborative design agent (abbreviated as DA
below) and how they perceived the DA. Several users pointed out comple-
mentary advantages they inferred in the DA, from speed to the ability to
explore with more blocks at the same time. Others simply appreciated the
experience of working together: “Exploring with the DA felt more like a col-
laborative effort, rather than working alone or watching someone else work
on something” or saw the back-and-forth with the design agent as a way to
reduce the randomness of their search. Some participants derived confidence
from working with the design agent: “It felt like as if two brains were work-
ing simultaneously and there was a hope to achieve optimal configuration”.
On the other hand, some expressed annoyance with the agent: “It would
have been better if the computer gave better suggestions alongside working
with me...”. At least one user saw the design agent as a playful antagonist:
“I enjoyed exploring with the DA at the same time because I almost felt like
I was competing against the DA.”. Several developed ad-hoc strategies for
collaboration, e.g. splitting up the objectives: “After DA determined points
from my selection, I rearranged the blocks to the DA point with the highest
benefit. Then, I switched blocks to determine the lower cost”.
The experiences described by participants in the side-by-side condition,
whether positive or negative, suggest that users are capable of seeing such
agents as collaborators and not just tools. In particular, the variety of implicit choices and ad-hoc strategies users made in interacting with the design assistant while exploring the design space mirrors observations prior work has made about human-to-human collaboration using TUIs, e.g. [50], including turn-taking, dominant-submissive pairs, and independent, parallel exploration. This supports the potential of intelligent agents acting as true collaborators in the design search process.
Does Working Side-by-Side Lead to Broader Search?
In order to gain intuition about why users generated better designs after SBS
exploration, we examined the solutions they encountered during search un-
der the different conditions. Using one conventional way to compare sets
of solutions, we found that the set of configurations considered by partici-
pants in the SBS condition tended to dominate more of the objective space,
in terms of hypervolume [51] (M=0.626, SD=0.134), than those explored in
HUM (M=0.561, SD=0.145), (Fig. 8(a)). This difference was significant via
a one-tailed paired t-test (t=2.45, p=0.010). Designs explored by participants
in the OBS condition also tended to dominate less hypervolume than in SBS
(M=0.603, SD=0.091), although this difference was not significant (t=1.07,
p=0.146).
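For two objectives, the hypervolume indicator [51] reduces to a simple sweep over the sorted non-dominated set. A sketch follows, reusing non_dominated from the earlier sketch, with both objectives minimized and normalized so the reference point is (1, 1); this is the standard 2-D computation, not necessarily the paper's exact implementation.

    def hypervolume_2d(points, ref=(1.0, 1.0)):
        """Area of objective space dominated by `points` up to `ref`."""
        front = sorted(non_dominated(points))   # ascending f1 implies descending f2
        hv = 0.0
        for i, (x, y) in enumerate(front):
            next_x = front[i + 1][0] if i + 1 < len(front) else ref[0]
            hv += (next_x - x) * (ref[1] - y)   # one rectangular slab per point
        return hv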
To our surprise, however, users appeared to explore less broadly in the SBS condition than in either the HUM or OBS conditions. To quantify this, we define the coverage of the exploration as the number of possible sensor-orbit pairings that appeared in at least one evaluated configuration during the exploration. Similarly to [33], we also use the information entropy of the explored configurations as a measure of diversity.
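Both measures are straightforward to compute from the log of evaluated configurations; the entropy formulation below (over the distribution of sensor-orbit pairings across all explored configurations) is one plausible reading of the measure adapted from [33], not necessarily the exact one used.

    import math
    from collections import Counter

    def coverage(explored):
        """Count distinct sensor-orbit pairings that appear in at least
        one evaluated configuration."""
        return len({(o, s) for design in explored
                           for o, sensors in design.items() for s in sensors})

    def pairing_entropy(explored):
        """Shannon entropy (bits) of the sensor-orbit pairing distribution."""
        counts = Counter((o, s) for design in explored
                                for o, sensors in design.items() for s in sensors)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())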
We find that participants tended to cover more of the orbit-sensor pairings when searching the solution space in the HUM condition (M=44.93, SD=13.03) and the OBS condition (M=39.89, SD=12.79) than in the SBS condition (M=32.97, SD=10.53). Both of these differences were significant via paired one-tail t-tests (t=5.357, p<0.001 and t=2.428, p=0.011 for HUM and OBS respectively). We also find that the human's search tended to be more disordered, in terms of entropy, when either exploring alone (M=1.482, SD=0.502) or passively observing (M=2.326, SD=0.672) than when exploring side-by-side, again both significant via paired one-tail t-tests (t=3.093, p=0.002 and t=8.414, p<0.001 respectively).
Fig. 8 In the SBS condition, participants tended to consider more Pareto-optimal designs, as measured by the overall hypervolume dominated by the non-dominated Pareto frontiers in each condition (a). Nonetheless, the search spaces explored by human participants when collaboratively exploring in the SBS condition tended to cover fewer possible sensor-orbit pairings (b) and exhibit lower information entropy (c) than in the other two conditions.
Indeed, participants’ post-study reflections suggest that working with the
design agent encouraged them to converge more confidently and quickly
to a more focused region of the configuration space. For example, “I could
immediately see some sort of direction to move in instead of randomly guess-
ing”, and “when we both (computer and I) are exploring together, less time
is wasted, and productive results are easier to discover”. As one user put it, “I felt lost without help from the computer”.
However, as others observed, collaboration “might have led to a bias in
what order to use and I resulted in a lower science benefit than I had on my
own” and “exploring on my own gave me more freedom to try something
completely different, and potentially get a more helpful combination”. Par-
ticipants appreciated the freedom of exploring alone: “it was really useful learning through trial and error”, and “exploring alone makes it easier and enjoyable because it allows me to follow my own logic of exploration”.
This raises an important conundrum for the design of collaborative agents,
insofar as the processes for achieving better designs through collaboration
may not coincide with those that best encourage broader exploration of the
design space or generate more creative designs. Some work with TUIs found
similarly that rapid design exploration enabled by physical interfaces could
actually reduce the degree to which users reflect in the design process [11].
This result also evokes prior work suggesting conversely that leveraging hu-
mans as a search heuristic can reduce the diversity of algorithmically gen-
erated solutions [38]. Insofar as a key benefit of collaborative design is its
potential to foster broader exploration and emergence, future research should
explore how interactions with collaborative design agents might expand,
rather than contract, human designers’ exploration.
Limitations and Future Work
Our findings are somewhat constrained by the complexity and domain-specific nature of the design problem we chose, relative to the sophistication and domain expertise of our users. Given the diversity of our participant pool and their experience, much of the richness of the design problem
may have been lost. The resultant abstractness of the problem made it very
demanding for our users, and could have added to the variance in our results,
although we attempted to account for this with a within-user design.
TUIs are especially useful for co-present collaboration in a shared phys-
ical workspace. Although our agent interacted with the user through the
tabletop interface and display, it did not do so physically. This study is
part of an ongoing project in which we plan to study collaborative explo-
ration between a human and a physically embodied design agent in a shared
workspace. Observing interactions between a virtual agent and a human
through our TUI sand table is a first step towards this end.
This work also does not empirically compare the human-agent collabora-
tive exploration to collaboration between humans. While some participants
reported interacting with the agent in ways similar to those described in the literature on co-present human-human collaboration, future work should
directly examine these similarities in order to lay the groundwork for de-
signing better collaborative agents in this vein.
Conclusion
When searching a design space, humans and algorithms have different sets
of strengths and limitations. Algorithms are able to quickly explore a large
space and precisely compare solutions, while humans are adept at fast pat-
tern recognition, generalization, and context integration. Egan and Cagan
argue that design problems can require both human intuition to handle
difficult-to-translate qualitative processes and the objectivity and consis-
tency of computation at scale [13]. This suggests benefits to be reaped by
systems that model the human-machine interaction as a collaborative activ-
ity, building on the complementary skills of each agent. Systems with mixed initiative or adjustable autonomy can thus enable more flexible and conversational collaborations that better utilize the complementary strengths of humans and agents in different contexts [1].
In this paper, we described a new tabletop tangible sandbox interface in
order to study simultaneous real-time collaboration between human users
and design search algorithms. Such side-by-side human-computer collabo-
rative exploration of a design space using a physical one-to-one mapping of
the solution space has not been studied before, despite the potential it offers
designers to capitalize on the benefits of both collaborative and AI-supported
design.
In our experimental study we find that the proposed model of side-by-side design collaboration can lead a human designer to generate better designs, both in terms of the distance of the user-selected final design from the Pareto front and in terms of the hypervolume dominated by all explored designs. We also find marginal benefits to users' positive affect and user experience. In particular, side-by-side design overcomes some of the trade-off between efficiency and stimulation that exists between human-only and computer-only design. This suggests the feasibility of treating design agents not just as tools, but as peer collaborators in design exploration.
We also discuss how, on the other hand, a collaboration of this sort might
lead to lower solution space coverage and less diversity in the solutions ex-
plored. As we do not want human-machine collaborative design to reduce
the creativity and open-ended exploration that early-stage design requires,
these concerns should be taken into account in the development of such
agents and future research.
References
1. J. F. Allen, C. I. Guinn, and E. Horvitz. Mixed-initiative interaction. IEEE Intelligent Systems and their Applications, 14(5):14–23, 1999.
2. E. Arias, H. Eden, G. Fischer, A. Gorman, and E. Scharff. Transcending the individual human mind: creating shared understanding through collaborative design. ACM Transactions on Computer-Human Interaction (TOCHI), 7(1):84–113, 2000.
3. K. J. Arrow. Social choice and individual values, volume 12. Yale university
press, 2012.
4. M. Babbar-Sebens and B. S. Minsker. Interactive Genetic Algorithm with
Mixed Initiative Interaction for multi-criteria ground water monitoring design.
Applied Soft Computing Journal, 12(1):182–195, 2012.
5. R. Balling. Design by shopping: A new paradigm? In Proceedings of the Third
World Congress of structural and multidisciplinary optimization (WCSMO-3),
volume 1, pages 295–297, 1999.
6. R. Chen and X. Wang. An Empirical Study on Tangible Augmented Reality
Learning Space for Design Skill Transfer. Tsinghua Science & Technology, 13,
Supple(October):13–18, 2008.
7. S.-B. Cho. Towards creative evolutionary systems with interactive genetic al-
gorithm. Applied Intelligence, 16(2):129–138, 2002.
8. K. Deb, S. Karthik, et al. Dynamic multi-objective optimization and decision-
making using modified nsga-ii: a case study on hydro-thermal power schedul-
ing. In International conference on evolutionary multi-criterion optimization,
pages 803–817. Springer, 2007.
9. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. A fast and elitist multiob-
jective genetic algorithm: Nsga-ii. IEEE transactions on evolutionary compu-
tation, 6(2):182–197, 2002.
10. S. Dhanalakshmi, S. Kannan, K. Mahadevan, and S. Baskar. Application of
modified nsga-ii algorithm to combined economic and emission dispatch prob-
lem. International Journal of Electrical Power & Energy Systems, 33(4):992–
1002, 2011.
11. S. Do-Lenh, P. Jermann, S. Cuendet, G. Zufferey, and P. Dillenbourg. Task
performance vs. learning outcomes: a study of a tangible user interface in the
classroom. In European Conference on Technology Enhanced Learning, pages
78–92. Springer, 2010.
12. J. J. Durillo and A. J. Nebro. jmetal: A java framework for multi-objective
optimization. Advances in Engineering Software, 42(10):760–771, 2011.
13. P. Egan and J. Cagan. Human and computational approaches for design
problem-solving. In Experimental Design Research, pages 187–205. Springer,
2016.
14. G. Ferguson, J. F. Allen, et al. Trips: An integrated intelligent problem-solving
assistant. In AAAI/IAAI, pages 567–572, 1998.
15. G. Fischer. Social creativity: turning barriers into opportunities for collaborative
design. In Proceedings of the eighth conference on Participatory design: Artful
integration: interweaving media, materials and practices-Volume 1, pages 152–
161. ACM, 2004.
16. B. J. Grosz. Collaborative systems (aaai-94 presidential address). AI magazine,
17(2):67, 1996.
17. L. Hay, A. H. B. Duffy, C. McTeague, L. M. Pidgeon, T. Vuletic, and M. Grealy.
A systematic review of protocol studies on conceptual design cognition: Design
as search and exploration. Design Science, 3:e10, 2017.
18. N. Hitomi, H. Bang, and D. Selva. Extracting and applying knowledge with
adaptive knowledge-driven optimization to architect an earth observing satellite
system. AIAA Information Systems-AIAA Infotech@ Aerospace, page 0794,
2017.
19. H. Ishibuchi, H. Masuda, Y. Tanigaki, and Y. Nojima. Modified distance calcu-
lation in generational distance and inverted generational distance. In EMO (2),
pages 110–125, 2015.
20. H. Ishii, C. Ratti, B. Piper, Y. Wang, A. Biderman, and E. Ben-Joseph. Bringing
Clay and Sand into Digital Design Continuous Tangible user Interfaces. BT
Technology Journal, 22(4):287–299, 2004.
21. S. Jeyadevi, S. Baskar, C. Babulal, and M. W. Iruthayarajan. Solving multiob-
jective optimal reactive power dispatch using modified nsga-ii. International
Journal of Electrical Power & Energy Systems, 33(2):219–228, 2011.
22. S. Jordà, G. Geiger, M. Alonso, and M. Kaltenbrunner. The reactable: exploring
the synergy between live music performance and tabletop tangible interfaces.
In Proceedings of the 1st international conference on Tangible and embedded
interaction, pages 139–146. ACM, 2007.
23. M. Kaltenbrunner. Reactivision and tuio: a tangible tabletop toolkit. In Pro-
ceedings of the ACM international Conference on interactive Tabletops and
Surfaces, pages 9–16. ACM, 2009.
24. R. Kicinger, T. Arciszewski, and K. De Jong. Evolutionary computation and
structural design: A survey of the state-of-the-art. Computers & Structures,
83(23):1943–1978, 2005.
25. H. S. Kim and S. B. Cho. Application of interactive genetic algorithm to fash-
ion design. Engineering Applications of Artificial Intelligence, 13(6):635–644,
2000.
26. M. Kim and M. Maher. Comparison of designers using a tangible user inter-
face & graphical user interface and impact on spatial cognition. Proc. Human
Behaviour in Design, 5, 2005.
27. M. J. Kim and M. L. Maher. The impact of tangible user interfaces on spatial
cognition during collaborative design. Design Studies, 29(3):222–253, 2008.
28. B. Laugwitz, T. Held, and M. Schrepp. Construction and evaluation of a user
experience questionnaire. In Symposium of the Austrian HCI and Usability
Engineering Group, pages 63–76. Springer, 2008.
29. M. Laumanns, L. Thiele, K. Deb, and E. Zitzler. Combining convergence and
diversity in evolutionary multiobjective optimization. Evolutionary computa-
tion, 10(3):263–282, 2002.
30. H. Liu and M. Tang. Evolutionary design in a multi-agent design environment.
Applied Soft Computing Journal, 6(2):207–220, 2006.
31. M. L. Maher and L. Lee. Designing for gesture and tangible interaction. Syn-
thesis Lectures on Human-Centered Interaction, 10(2):i–111, 2017.
32. J. McCarthy. What is artificial intelligence? URL: http://www-formal.stanford.edu/jmc/whatisai.html, 2007.
33. A. Ozgur, W. Johal, F. Mondada, and P. Dillenbourg. Windfield: Learning wind
meteorology with handheld haptic robots. In HRI17: ACM/IEEE International
Conference on Human-Robot Interaction Proceedings, number EPFL-CONF-
224130, pages 156–165. ACM, 2017.
34. J. Patten and H. Ishii. A comparison of spatial organization strategies in graph-
ical and tangible user interfaces. In Proceedings of DARE 2000 on Designing
augmented reality environments, pages 41–50. ACM, 2000.
35. K. Petersson, A. Kyroudi, J. Bourhis, C. Ceberg, T. Knöös, F. Bochud, and
R. Moeckli. A clinical distance measure for evaluating treatment plan quality
difference with pareto fronts in radiotherapy. Physics and Imaging in Radiation
Oncology, 3:53–56, 2017.
36. S. D. Ramchurn, F. Wu, W. Jiang, J. E. Fischer, S. Reece, S. Roberts, T. Rodden,
C. Greenhalgh, and N. R. Jennings. Human-agent collaboration for disaster
response. Autonomous Agents and Multi-Agent Systems, 30(1):82–111, 2016.
37. P. Reed, B. S. Minsker, and D. E. Goldberg. Simplifying multiobjective opti-
mization: An automated design methodology for the nondominated sorted ge-
netic algorithm-ii. Water Resources Research, 39(7), 2003.
38. D. Selva. Experiments in knowledge-intensive system architecting: Interactive
architecture optimization. In Aerospace Conference, 2014 IEEE, pages 1–12.
IEEE, 2014.
39. D. Selva. Knowledge-intensive global optimization of earth observing sys-
tem architectures: a climate-centric case study. In Sensors, Systems, and Next-
Generation Satellites XVIII, volume 9241, page 92411S. International Society
for Optics and Photonics, 2014.
40. W. Shen, Q. Hao, and W. Li. Computer supported collaborative design: Retro-
spective and perspective. Computers in Industry, 59(9):855–862, 2008.
41. H. Shirado and N. A. Christakis. Locally noisy autonomous agents improve
global human coordination in network experiments. Nature, 545(7654):370–
374, 2017.
42. H. A. Simon. The sciences of the artificial. MIT press, 1996.
43. T. Smithers, A. Conkie, J. Doheny, B. Logan, and K. Millington. Design as intelligent behavior: an AI in design research program. In J. S. Gero (ed.), Artificial Intelligence in Design, 1989.
44. D. Smithwick, D. Kirsh, and L. Sass. Designerly pick and place: Coding physi-
cal model making to inform material-based robotic interaction. In Design Com-
puting and Cognition’16, pages 419–436. Springer, 2017.
45. A. I. Starcic and M. Zajc. An interactive tangible user interface application for learning addition concepts. British Journal of Educational Technology, 42(6):E131–E135, 2011.
46. C. Thornton and B. Du Boulay. Artificial intelligence through search. Springer
Science & Business Media, 2012.
47. B. Ullmer and H. Ishii. The metadesk: models and prototypes for tangible
user interfaces. In Proceedings of the 10th annual ACM symposium on User
interface software and technology, pages 223–232. ACM, 1997.
48. D. A. Van Veldhuizen and G. B. Lamont. Evolutionary computation and con-
vergence to a pareto front. In Late breaking papers at the genetic programming
1998 conference, pages 221–228, 1998.
49. D. Watson, L. A. Clark, and A. Tellegen. Development and validation of brief
measures of positive and negative affect: the panas scales. Journal of personal-
ity and social psychology, 54(6):1063, 1988.
50. L. Xie, A. N. Antle, and N. Motamedi. Are tangibles more fun?: comparing
children’s enjoyment and engagement using physical, graphical and tangible
user interfaces. In Proceedings of the 2nd international conference on Tangible
and embedded interaction, pages 191–198. ACM, 2008.
51. E. Zitzler, D. Brockhoff, and L. Thiele. The hypervolume indicator revisited:
On the design of pareto-compliant indicators via weighted integration. In Evo-
lutionary multi-criterion optimization, pages 862–876. Springer, 2007.