VoxBox: a Tangible Machine that Gathers Opinions
from the Public at Events
Connie Golsteijn1, Sarah Gallacher1, Lisa Koeman1, Lorna Wall1,
Sami Andberg2, Yvonne Rogers1, Licia Capra1
1 ICRI Cities, University College London, UK
{c.golsteijn; s.gallacher; lisa.koeman.12; l.wall;
y.rogers; l.capra}@ucl.ac.uk
2 University of Helsinki, P.O. Box 28
FI-00014 University of Helsinki, Finland
sami.andberg@helsinki.fi
ABSTRACT
Gathering public opinions at events, for example through
surveys, typically requires approaching people in situ, but this can
disrupt the positive experience they are having and can
result in very low response rates. As an alternative
approach, we present the design and implementation of
VoxBox, a tangible system for gathering opinions on a
range of topics in situ at an event through playful and
engaging interaction. We discuss the design principles we
employed in the creation of VoxBox and show how they
encouraged wider participation, by grouping similar
questions, encouraging completion, gathering answers to
open and closed questions, and connecting answers and
results. We evaluate these principles through observations
from an initial deployment and discuss how successfully
these were implemented in the design of VoxBox.
Author Keywords
Public opinion; gathering opinions; crowd engagement;
playful; tangible interaction; design research
ACM Classification Keywords
H.5. Information interfaces and presentation (e.g., HCI):
H.5.2. User Interfaces; H.5.m. Miscellaneous
INTRODUCTION
Traditional ways of obtaining public opinions have largely
been through marketing people approaching the general
public at events or in the street with a clipboard, cold
calling over the phone, or sending a text or email with a
link to a webpage for people to register and then fill in a
survey. More recently, tablet computers have been used to
replace the clipboard. However, all of these approaches
have their limitations and are susceptible to bias. The
reasons include the general public being wary of people
approaching them, and an increasing tendency to simply
ignore unsolicited messages. Many will avert their gaze, put
the phone down or delete the message. Those who do
respond are often only a small fraction of the population,
and it is therefore unclear how representative they are of the
population at large [8]. An alternative approach is
to design systems that gather opinions from the crowd in
situ without inappropriately interrupting people or
negatively influencing their positive experiences. While
previous studies have introduced large screens, social media
plug-ins, or simple voting systems, we aimed to design a
more playful experience that gathers detailed feedback from
the crowd at events such as festivals or fairs, by providing
an engaging and playful tangible system that invites people
to use it through its affordances. In this paper we present
the design, implementation and initial deployment of a
novel system, called VoxBox (Figure 1), which used a
range of physical input and output devices, based on a set of
core tangible design principles. We present and discuss the
value of our design approach for creating such a public
tangible opinion system.
Figure 1. VoxBox: a system to gather opinions from crowds.
BACKGROUND
A variety of technologies for eliciting public opinions or
feedback have been developed that try to be more inclusive
and approachable when placed in situ in public spaces.
These include the use of large screens, mobile phones, and
voting boxes. Texting or tweeting is often used as the
medium. For example, Schroeter et al. [14] developed an
application for public displays to elicit opinions via text or
tweet from citizens who otherwise would not have their say.
Others have used more traditional input devices, such as
keyboards and public telephone handsets to get the public
to voice their opinions or concerns. The Opinionizer [2]
comprised a large projected display that people added their
opinions to via typing at a keyboard. The VoiceYourView
system [17] provided an old-fashioned telephone in a
library to obtain people’s views about a recent
refurbishment, which were represented as colorful visual
bubbles on public screens. While many people freely gave
their opinions in both settings, some felt uncomfortable and
self-conscious doing so. This suggests that the method by
which people are asked to give their views and the setting
in which they do so impact the extent to which they will
voice their opinions or take part. Taylor et al. [15] found
that users did not like using mobile phones to interact with
public displays, and preferred to press buttons on the device
directly. Müller et al. [9] found that mobile phone
interaction with public displays did not receive as high
uptake as expected. More recently, MyPosition asked
people to vote on local issues through gesturing in front of a
public display [16]. While many people stopped to look,
only one in four chose to submit an opinion.
While this new generation of opinion-based technologies
can be attractive and encourage more people to participate,
there is still the problem that others shy away. It is not
always clear how to interact with a public display,
especially one that people have never seen before.
Moreover, people may not notice such displays in the first place. Such
display and interaction blindness has been found to exist for
a number of public displays and billboards [7, 9]: people
expect them to be advertising material they do not want to
look at, or the displays simply do not grab their attention. We would
argue that the opposite is true for physical tangible objects,
which do have the affordances to draw people’s attention.
People are drawn to something that is novel, unusual and at
odds with the environment. For example, the Periscope was
designed as an unusual technological device for viewing
videos about the surrounding area. Situated in a woodland,
it provoked children to stop, wonder and interact [13].
Houben and Weichel [4] have also found that the
introduction of a curious physical object linked to a public
display attracted attention and significantly increased the
numbers of people interacting with the display. The
physicality and tangibility of components with clear and
familiar affordances, such as pressing buttons, moving
sliders, and turning knobs and handles, clearly indicate that
they are there to be interacted with, and make it obvious
how to do so. Both curiosity and clear affordances
are important: firstly, to attract the attention of passers-by
and, secondly, to help them move through the threshold of
participation [2].
In this light, researchers have designed very simple physical
button-based voting boxes for gathering opinions [1, 3, 5,
15]. A benefit of using such simple input devices is that
they are cheap to make and can be situated in a range of
public places. However, they are limited in how far they
can probe people’s views and opinions. The question this
raises is how best to design a range of tangible input
devices that people are drawn to, will find compelling, will
know intuitively how to interact with, and will also not feel
self-conscious when doing so, or feel that it is too childlike
or too technical for them to use. Our approach was to
design a large tangible interactive machine that could stand
out, was obvious to interact with, was playful and would
engage people to gather a diversity of responses and views.
We also wanted to maintain the interest of passers-by and
provoke further discussion amongst those nearby by
showing the collected data in aggregated form as a real-
time visualization.
DESIGN PRINCIPLES
The design of VoxBox focused on recreational events, such
as festivals or fairs, and aimed to gather opinions on the
‘feel-good factor’ of such events, e.g. do people enjoy the
event, do they feel connected to the people around them,
and what are the elements that are most memorable? We
considered characteristics of online or paper questionnaires
and also key issues that were observed with these, and
employed the following design principles.
Encouraging Participation
To prevent situations that are uncomfortable for both
researcher and participant, such as hassling people with a
clipboard, our aim was to design a system that invited
people to participate without forcing them or interrupting
their event experience. At the same time, it was important
to design VoxBox to be able to stand out and draw attention
from competing stalls that are also often part of an event.
We thus chose to create a large physical system with
physical input mechanisms through which people could
give their opinions, instead of using, for example, text
messages or social media input. VoxBox was designed as a
modular system built around a physical shelving unit that
lets users move through groups of questions, module by
module (Figure 1). Each module used a different input
mechanism that people were familiar with and knew how to
use, such as sliders, buttons, knobs, and spinners. The first
module asked closed questions about demographics, the
second about their current mood, the third about the crowd,
the fourth about the event, and the fifth and final asked an
open question. In addition, the system included a transparent
tube at the side that dropped a ball step by step as the
question modules were completed, serving as both an incentive
for completion and a progress indicator. Finally, the reverse side
of the system showed three real-time visualizations of the
collected data on small screens embedded in portholes. The
aim of our research was to make VoxBox mostly self-
explanatory so that it was clear what it was and why
someone would want to interact with it [7]. We further
designed interactions to require no technological knowledge
or skills [3], and made the system, in most cases, usable
without instructions.
Grouping Similar Questions
In conventional questionnaires, related questions or
questions that require the same way of answering are often
visually grouped, for example by putting them on the same
page, or separating them with whitespace. We employed a
tangible approach to this by designing VoxBox to consist of
a number of separate question modules. Each module
contained groups of questions that were related, and that
used the same input mechanism. In this way we created a
questionnaire with a logical flow of questions and avoided
a visually intimidating layout, as grouping the questions
emphasized that the questionnaire was not long.
Encouraging Completion and Showing Progress
One issue with questionnaires is people dropping out during
completion, which is often caused by a lack of clarity about
the questionnaire's length or the respondent's progress, along
with a lack of incentive to finish. In the VoxBox design, the entire
questionnaire was visible all the time so that users knew
how many questions they needed to respond to and how
long it may take. Further, a tangible reward (a stress ball
featuring the URL of the website with the results) was
given to the users to encourage completion; the ball could
only be obtained when the questionnaire was completed. By
designing a transparent tube that dropped the ball in stages
after each part of the questionnaire was completed, the ball
also served as a progress indicator. Progress was also
shown by lighting up the active panels one by one as the
user went through the questionnaire. This light feedback, in
addition to lights next to buttons and scales for each
corresponding option, provided immediate feedback from
the system to show that it was interactive and that it was
working, in order to encourage further use [7].
Gathering Answers to Closed and Open Questions
One problem with questionnaires is a lack, or brevity, of
responses to open questions. Rogers et al. [12] found that
engaging participants in playful activities resulted in a
greater willingness to talk, and that it triggered free
thinking. Although most of the questions in VoxBox are
closed questions, we specifically designed a playful input
mechanism for the open question: a phone handset that rang
when a user reached its panel and asked them a question when they picked up.
The user could then speak their answer into the handset and
hang up the phone. We hoped that through this playfulness
and engagement our questionnaire would result in more
willingness to answer the open questions asked.
Connecting Answers and Results
In traditional surveys there is often a divide between a
respondent answering questions and the researcher
gathering data and presenting these in reports or papers.
Respondents often do not have access to the results of the
survey or are not informed where these results can be
found. To make VoxBox more enticing to use and to trigger
discussions among bystanders, we decided to make the
collected results visible to the users [3]. Real-time results
were shown in two different ways: on the website (for
which the URL was printed on the incentive balls), and on a
set of visualizations on the reverse side of the system. By
printing the URL on the balls that were obtained after
answering questions, we physically linked the users’
answers to the results website, symbolizing that the
results quite literally rolled out of the
system. The data visualizations on the system
offered an immediate insight into the results. We tried to
encourage users to look at these through the physical design
by making them walk around the side of VoxBox to collect
their ball. The box where the ball dropped was angled
backward to encourage users to walk further around the
back to see the visualizations.
DESIGN AND IMPLEMENTATION OF VOXBOX
Inspiration for the design of VoxBox came from a number
of sources, including the classic computer game ‘The
Incredible Machine’ (in which a user solves puzzles by
arranging physical objects, e.g. levers, ropes, and conveyor
belts), marble tracks (in which marbles are guided through
sometimes complex tracks), and mechanical devices and
interactive exhibitions as seen in science museums.
We decided on a final set of questions we wanted to ask
based on our own interpretations of what may influence the
feel-good factor, and inspired by reading through evaluation
reports on several organized events [e.g. 6]. As mentioned,
these questions were divided into five categories, which
were shown on five separate question modules in the
system. An overview of the questions that were asked in
each module can be seen in Table 1. While the
demographics were mainly entered through simple push
buttons, for the mood, crowd, and event questions we
decided on different variations of input scales, so that
people could rate their agreement. Although we could have
used similar interactions for each of these groups, we felt it
was important to include a variety of interactions to avoid
the tedium of having to answer many questions in the same
way, and keep the system engaging throughout the whole
interaction. For the mood questions we decided to use linear
sliders with LED feedback that represented semantic
differential scales [10] on which people rate their response
between two opposite answers on a scale; these scales were
continuous (Figure 2a). For the crowd questions we used
rotary knobs with LED feedback to show the answer along
the scale. These questions were rated between disagreement
and agreement and the interaction provided a discrete scale
with 16 increments (Figure 2b). The event questions were
answered through physical spinners with five options
between disagreement and agreement similar to a Likert
scale (Figure 2c). Finally, for the open questions, we
designed a phone handset to employ a familiar metaphor for
dialog in an unfamiliar setting, which we hoped would
result in surprise and excitement (Figure 2d).
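
For illustration, the following Arduino-style sketch shows how these three scale types might be read and quantized into answer values: a continuous fraction for the sliders, 16 discrete steps for the knobs, and five options for the spinners. The pin assignments and helper names are our own assumptions for this sketch, not the deployed VoxBox firmware.

// Hypothetical mapping from the three physical controls to answer values;
// pin assignments and helper names are illustrative, not the deployed firmware.

const int SLIDER_PIN = A0;    // linear slider: continuous semantic differential
const int KNOB_PIN = A1;      // rotary knob: discrete scale, 16 increments
const int SPINNER_PIN = A2;   // spinner: five Likert-style options

// Continuous slider: report the position as a 0.0-1.0 fraction between
// the two opposite answers.
float readSlider() {
  return analogRead(SLIDER_PIN) / 1023.0;
}

// Rotary knob: quantize the reading into 16 steps (0-15) between
// disagreement and agreement.
int readKnob() {
  return map(analogRead(KNOB_PIN), 0, 1023, 0, 15);
}

// Spinner: five physical positions map to a 5-point answer (0-4).
int readSpinner() {
  return map(analogRead(SPINNER_PIN), 0, 1023, 0, 4);
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  // Example: print the three current answer values.
  Serial.print(readSlider());
  Serial.print(' ');
  Serial.print(readKnob());
  Serial.print(' ');
  Serial.println(readSpinner());
  delay(200);
}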
We developed VoxBox as a modular system with separate
question modules for the different groups of questions, and
incorporated mechanisms for the incentive ball to run
through the system (Figure 3a). Early variations of the
design imagined the ball completing a track through the
physical device in which obstacles had to be removed, or
the track had to be completed, by answering questions.
Different questions would have different physical
mechanisms behind them that would allow the ball to move
forward, for example a ‘yes’ or ‘no’ question would tip a
slope in a certain direction, while a Likert scale may move
an obstacle out of the way. Ideas also included mechanisms
for encouraging longer answers to open questions, such as
gradually moving obstacles away or only running a
conveyor belt while the user was still recording an answer.
For feasibility reasons within the time constraints of the
project, the ball track was simplified to run through the
device and be controlled through physical levers after each
question panel (see Figure 3b), and was ultimately replaced by
an external tube that dropped the ball after each stage.
Implementation
VoxBox was implemented using three off-the-shelf
shelving units to make sure it was sturdy enough to
withstand many interactions and unanticipated user
behavior. To allow for a flexible and modular system, we
designed each question module as a drawer that was slotted
into the shelving unit. In this way, question modules could
be moved around and the sequence of the questions could
easily be changed. Question modules were created from
plywood using a laser cutter to give VoxBox an appearance
that called up associations of ‘a time machine’ and ‘a mix
of Willy Wonka, the controls of the Tardis and those ornate
fairground automata’, according to initial responses.
Each question module contained a front panel for user
interactions, which contained the sliders, buttons, knobs,
spinners, or handset. A question module further contained
an LED strip around the edge of the front panel that was lit
up in green when a panel was active (Figure 4a), and a
green submit button that was used to submit the user’s
answers. This button was necessary to determine when a
user had made a final decision on the answers. Along with a
large green start button, elements in this color were thus
deliberately used to guide users through the system.

Table 1. Overview of the questions and interaction mechanisms in the different question modules.

Figure 2. The input mechanisms for the question modules.

Figure 3. Early sketches of VoxBox: a. design of a modular system; b. design of the internal ball tube.
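
To make this behavior concrete, the sketch below outlines how a question module might act as an I2C slave: activation lights its LED strip, and the submit button marks its answers as final. The address, command bytes, and pins are our own assumptions (a matching Master-side sketch follows the architecture description below), not the deployed firmware.

#include <Wire.h>

const byte MY_ADDRESS = 0x10;   // each module has its own bus address
const int LED_STRIP_PIN = 13;   // green LED strip around the panel edge
const int SUBMIT_PIN = 2;       // green submit button

const byte CMD_ACTIVATE = 1;    // hypothetical command bytes
const byte CMD_DEACTIVATE = 2;

volatile bool active = false;
volatile bool submitted = false;
volatile byte answer = 0;

// The Master activates or deactivates this panel; the LED strip mirrors the state.
void onReceive(int count) {
  while (Wire.available()) {
    byte cmd = Wire.read();
    active = (cmd == CMD_ACTIVATE);
    if (!active) submitted = false;
  }
  digitalWrite(LED_STRIP_PIN, active ? HIGH : LOW);
}

// The Master polls for data: a status byte (1 once submitted) plus the answer.
void onRequest() {
  Wire.write(submitted ? 1 : 0);
  Wire.write(answer);
}

void setup() {
  pinMode(LED_STRIP_PIN, OUTPUT);
  pinMode(SUBMIT_PIN, INPUT_PULLUP);
  Wire.begin(MY_ADDRESS);       // join the I2C bus as a slave
  Wire.onReceive(onReceive);
  Wire.onRequest(onRequest);
}

void loop() {
  if (active) {
    answer = map(analogRead(A0), 0, 1023, 0, 15);  // e.g. one rotary knob
    if (digitalRead(SUBMIT_PIN) == LOW) submitted = true;
  }
}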
Although buttons and sliders were fixed in the panels,
questions and answers were cut from separate labels that
were screwed on (Figure 4b). This allowed for questions to
be easily changed (within the constraints of number and
type of question in each panel) for different events where
different questions may be desired. Most question panels
used off-the-shelf components, for example the sliders,
knobs, and buttons. We created a tailored rotary dial for age
input and spinners for the event questions (Figure 5).
Similar to the easily changeable question labels, the paper
inlays of these spinners could also be replaced to show
different answers.
VoxBox was controlled by open source Arduino
technologies. To enable a modular design each question
module contained its own Arduino board that controlled the
I/O for that module. In addition there was a 'Master'
Arduino and one to control the ball tube. The Master had
overall control of the VoxBox operation and a WiFi
connection to a backend server and database. On startup the
Master downloaded the ordered list of currently attached
question modules. It then proceeded to go through the list in
sequence (Table 1), activating the next question module in
the list, waiting for it to send back its data and then
deactivating it again. All communication between Arduinos
within the VoxBox was via I2C. Once the Master reached
the end of the list it collated all the data it had collected
from the question boxes and uploaded this to the backend
server and database via its WiFi link. This architecture
allowed VoxBox to be easily adapted, as question boxes
could be added, removed or swapped around without
needing to make any changes to their code or the code
inside the Master. Even extra connectors for possible
additional data cables between modules were already
implemented in the system. The only change required was
an alteration to the ordered list of currently attached
question modules in the backend server.
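
A minimal sketch of this Master sequencing logic, under the same assumed command bytes and message format as the module sketch above, might look as follows; the module addresses and polling details are illustrative, and the WiFi upload is elided.

#include <Wire.h>

const int NUM_MODULES = 5;   // demographics, mood, crowd, event, open question
const int MSG_LEN = 2;       // status byte plus answer payload (one byte here)

// Ordered list of attached question modules; in VoxBox this list was
// downloaded from the backend server on startup. Addresses are illustrative.
byte moduleAddr[NUM_MODULES] = {0x10, 0x11, 0x12, 0x13, 0x14};

const byte CMD_ACTIVATE = 1;    // hypothetical command bytes
const byte CMD_DEACTIVATE = 2;

void sendCommand(byte addr, byte cmd) {
  Wire.beginTransmission(addr);
  Wire.write(cmd);
  Wire.endTransmission();
}

// Poll a module until its status byte reports that the submit button
// was pressed, then keep the answer payload.
void collectAnswers(byte addr, byte *buf) {
  while (true) {
    delay(100);
    Wire.requestFrom((int)addr, MSG_LEN);
    int n = 0;
    while (Wire.available() && n < MSG_LEN) buf[n++] = Wire.read();
    if (n == MSG_LEN && buf[0] == 1) return;   // status 1 = submitted
  }
}

void setup() {
  Wire.begin();   // join the I2C bus as Master
}

void loop() {
  byte answers[NUM_MODULES][MSG_LEN];
  // One pass through the ordered module list: activate each panel,
  // wait for its data, then deactivate it again.
  for (int i = 0; i < NUM_MODULES; i++) {
    sendCommand(moduleAddr[i], CMD_ACTIVATE);
    collectAnswers(moduleAddr[i], answers[i]);
    sendCommand(moduleAddr[i], CMD_DEACTIVATE);
  }
  // The collated answers would then be uploaded to the backend server
  // and database over the WiFi link (not shown).
}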
The ball tube was implemented by creating a tailored
construction from plywood and a transparent tube (Figure
6). The tube was divided into six parts and a servo motor
with a long arm was mounted in each part to stop the ball
from moving through. After the start button and each of the
submit buttons were pressed, the servos rotated in sequence to
drop the ball step by step. The ball tube was connected to a
ball compartment within the VoxBox unit and although
balls were fed into the tube manually in this
implementation, an automatic feed was imagined for
potential redesigns. The ball tube thus functioned as an
incentive to complete the survey and as a physical progress
bar. Because the tube consisted of separate parts that
corresponded to each question module, this element of the
system could also easily be adapted to account for more or
fewer attached question modules.
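
The servo gating could be sketched as follows; the pin numbers, arm angles, and timing are illustrative assumptions rather than the deployed implementation.

#include <Servo.h>

const int NUM_GATES = 6;                          // one segment per stage
const int GATE_PINS[NUM_GATES] = {3, 5, 6, 9, 10, 11};
const int CLOSED_ANGLE = 0;                       // arm blocks the tube
const int OPEN_ANGLE = 90;                        // arm swings aside

Servo gates[NUM_GATES];
int currentGate = 0;

void setup() {
  for (int i = 0; i < NUM_GATES; i++) {
    gates[i].attach(GATE_PINS[i]);
    gates[i].write(CLOSED_ANGLE);                 // ball held at the top
  }
}

// Called once for the start button and once for each submit button,
// releasing the ball one segment further down the tube.
void advanceBall() {
  if (currentGate < NUM_GATES) {
    gates[currentGate].write(OPEN_ANGLE);         // let the ball drop
    delay(500);                                   // time to fall past the arm
    gates[currentGate].write(CLOSED_ANGLE);       // reset for the next ball
    currentGate++;
  }
}

void loop() {
  // advanceBall() would be triggered by messages from the Master
  // Arduino after each button press (not shown).
}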
Data that was sent from the Master Arduino to the server
was used to create visualizations that were shown on the
website and on the system itself. VoxBox was designed to
not only allow people to share data on their demographics
and views, but to also give them the opportunity to learn
more about the opinions held by others. Similar public
visualizations of people’s perceptions have served as a
talking point [e.g. 5, 16]. To enable passers-by to view and
discuss the data gathered at the front side of the VoxBox,
eye-catching and simple visual representations were shown
on the reverse side (Figure 7a). To ensure that the aesthetics of
these representations would match the look and feel of the
input technology, inspiration was sought from retro display
technology: flip-disc displays, the electromechanical dot
matrix displays traditionally used for destination signs
on buses. While the original signs are of ultra-low
resolution, recreating flip-disc displays digitally on screens
allowed for higher-resolution, infographic-like
visualizations. By flipping the discs row by row, the display
scrolled through real-time visual summaries of the data. By
creating side panels around these digital screens, we created
the illusion of a porthole via which people could look into
the VoxBox (Figure 7b). Apart from protecting the
screens from direct sunlight, the portholes were also meant
to spark curiosity and lure people to the screens, thereby
overcoming common display blindness [9].

Figure 4a. Green LED strips showed that a panel was active; b. separate question and answer labels were screwed on for easy changes.

Figure 5. Tailor-made spinners; paper inlays could be changed to show different answers.

Figure 6. The ball tube at the side of the system functioned as an incentive for completion and progress indicator.
INITIAL DEPLOYMENT
In addition to numerous people in our research institute
coming by our lab to try out VoxBox, we ran an initial
deployment at a one-day conference on technology
concerned with the relationship between the government,
digital democracy and the public (Figure 8). At this event,
over 50 attendees were present, including academic
researchers and people from industry and government
organizations, who were interested in novel
technologies. VoxBox was set up in the area where
coffee and lunch breaks took place, and over lunch there
was a dedicated slot for interactive demos. As such,
VoxBox was available for the attendees to use for a total of
1.5 hours. Around 30 people used the system; all of them
completed the whole survey, taking an average of three
minutes to do so. Below, we describe our observations
on how VoxBox was used at this event. Based on these we
discuss how our design principles played out in this context.
We end by describing possible improvements to the design.
Overall, VoxBox was well received and gained a lot of
interest. In the first break, we witnessed one person walking
briskly towards our system as soon as he spotted
it, immediately starting to interact, eager to be
the first one to engage. On several
occasions a queue formed as people waited for their turn.
Others deliberately chose to watch someone else interact first
and took their turn afterwards. Many attendees were interested
in the thoughts behind the system and how it was built, and
reacted enthusiastically to its visual appearance. Small
groups of attendees who knew each other often came up
together and each had their turn. One person thought out
loud: ‘With whom did you come to this event?’ ‘Are you
guys my friends?’ which resulted in laughter from the
group. The phone handset, which rang shortly after the
users had submitted the answers on the previous panel,
caused surprise, and many users could be seen grinning
while picking up the phone. Most users answered the open
question through the phone, and several gave quite
elaborate answers, e.g.: ‘If there was an entry fee for this
event, how much would you be willing to pay?’ ‘I'd sell my
children. And possibly my mother. But I get less money for
my children – aye.’ Another example of an answer was:
‘What will you remember most from this event?’ to which
they replied, ‘I'll remember the VoxBox most.’
Among many utterances of ‘Wonderful, fantastic. Thank
you.’ and ‘that was fun!’ there was one attendee who
questioned whether the data shown on the system was the
data we were actually collecting there and then. He
wondered if he was the only one to question whether the
data representations were manipulated by the organizers of
the event to show favorable results. He was the only one at
this event to raise this concern, but it would be worth
exploring further to what extent people trust the accuracy of
the data visualizations. Among those that did ‘believe’ the
data, there was substantial interest and several people
remained watching the visualizations scroll through
different results. One speaker teased another by
commenting: ‘23% feel bored, that was your talk!’ Users
did not always immediately notice the ball dropping down
the side of the system – this happened mostly in early
interactions where people had not seen others use it yet, and
had not yet had a chance to walk around the device. They
sometimes seemed surprised that they could keep the ball
but were always pleased when we informed them. One or
two people opted to give their ball back to ‘save us money.’
Figure 8. User interacting with VoxBox during the initial
deployment at a one-day conference.
Figure 7a. The reverse side of VoxBox showed real-time
visualizations of the data; b. visualization screens were
embedded in portholes.
Finally, we noticed that some users did not realize that the
start button needed to be pressed before any other
interaction could take place. They usually figured this out
quickly, or had it pointed out to them by other attendees.
DISCUSSION
Our observations based on the initial deployment confirmed
that VoxBox is a novel and engaging system that succeeds
in gathering opinions from crowds at events. We were
interested in how our observations validated the
choice of our design principles for creating interactive
features that drew people to answer all the
questions thoughtfully. From these principles we consider
more generally which tangible features are effective and
how to combine them to make a compelling and enjoyable
experience for answering questions at other kinds of events.
Considering our first aim was to encourage participation,
we saw that the appearance of the system was very
attractive, drawing many people to it like a honey pot [2].
Although the deployment took place at an event where
most attendees were excited about
technology, there were also a number of attendees from
industry or government organizations who had less affinity
with technology but were still very enticed by VoxBox. As
researchers, we deliberately took a stand-back approach:
instead of inviting people to have a go, we let them
approach it by themselves. Many people took initiative and
used it from start to finish. The ball tube and ball
compartment proved to be an unanticipated attention catcher, as
people were intrigued by the function of the colorful balls
and by the appearance of the ball tube. The system
appeared to be mostly self-explanatory although a few
usability issues were observed. Users did not always notice
the start button, without which none of the panels were
activated. We had noticed this before during informal trials
in the lab and had created a large arrow to point out the start
of the interaction sequence but this was insufficient to fully
solve this issue. We further noticed that some users were
surprised at first about the sequence of the panels, although
the green light navigation helped to make this clear. Apart
from these small issues, VoxBox was very effective in
encouraging people to give their opinions.
As mentioned, VoxBox grouped similar questions, by
separating them on several question panels. Although this
did work well in giving the appearance of a short survey,
some people got a bit confused at first about having to go
through the panels in a fixed sequence. This fixed sequence
was introduced in part by technology constraints, and in
part by this being a common approach in traditional
questionnaires. It was thus unanticipated that users would
be confused by having to follow a sequence. It seems that
by transposing characteristics from paper or online
questionnaires to a physical device, we had created new
affordances that invited different behaviors, e.g. all the
questions were visible at the same time and some
interaction mechanisms may have looked more enticing
than others. We realize that VoxBox does not need to
incorporate a fixed sequence of interaction and we can
consider other ways in which the affordances of a physical
system can be exploited to create a more appropriate, less
constrained form of interaction. Similarly, in traditional
questionnaires there are often options to activate different
flows of questions based on previous answers. We could
think of ways in which such more sophisticated functions
could be integrated in the physical design of VoxBox.
We aimed to encourage completion and show progress,
mainly through the ball tube that provided the ball as an
incentive and showed the progress in the questionnaire. In
our initial observations we saw that this did not work as
well as planned. Because of the location of the ball tube at
the side of the system, users did not always notice
straightaway that something was happening. Many users
had to be notified afterwards that they had now earned their
ball. We saw that once people noticed that the ball dropped
after each panel they were enthusiastic about this and often
stepped aside after each panel to check their progress. This
issue can easily be solved by moving the ball tube forward
along the side so that it is more visible while standing in
front of VoxBox. Furthermore, although most users were
pleased when informed that they could keep their ball, it did
not seem as strong an incentive as the joy of interacting
with VoxBox. Nevertheless, the ball functioned as a link to
the survey results and showed the URL of our website.
A further aim was to gather answers to open questions by
enticing people to speak their answers into a phone. This
method proved to be effective as shown by the number of
people who listened intently to the question and then
spontaneously gave a, sometimes elaborate, verbal response
after being pleasantly surprised by the phone ringing.
In showing the results of the data collection on the system,
we also wanted to connect answers and results. Because the
position of the ball tube was not ideal, the ball rolling
towards the back did not encourage users to walk towards the
visualizations as strongly as hoped. Although
plenty of users did see the visualizations (albeit sometimes
prompted) and enjoyed seeing the results, it is important to
consider other ways to link the data input and visualizations
more strongly, for example, by not placing them at the
reverse side of the system but bringing them closer to the
location of the input so that users do not have to divide their
attention as strongly [11]. We further considered ways in
which to link data from the user more explicitly to that of
the crowd so comparisons are possible between personal
opinions and those of the crowd, e.g. by showing current
and aggregated data on different screens at the same time.
Such additions and improvements could connect answers
and results more strongly than was currently the case.
Finally, privacy is an important concern when asking
people to give personal information, such as their age or
views, in a public place. We considered placing the
VoxBox in a booth with a curtain that could be drawn by
the users to prevent people looking over their shoulders.
However, this would mean it would lose its attractive
visibility that was central to how we envisioned it drawing
people to it. We found that no-one was worried about their
privacy in this context and that those using it were given a
wide berth by onlookers, akin to how people stand back
when waiting to use an ATM.
CONCLUSIONS
In this paper we have presented the design, implementation,
and deployment of VoxBox, a tangible system to gather
opinions from crowds at events. We have shown through an
initial deployment how appealing and engaging VoxBox
was considered to be, and how successful it was in drawing
people in and gathering opinions in a novel way. We have
extensively discussed our rationale behind designing this
system and have reflected on the extent to which we have
successfully implemented our design principles based on
observations with an initial deployment. VoxBox opens up
discussions around the design of novel systems that can
encourage the sharing of opinions by engaging users in
playful interactions. Our findings have shown this is an
important area for researchers to explore because gauging
opinions and knowing what people think is considered an
increasingly important part of community engagement. Our
future plans include deploying and adapting VoxBox for a
variety of other events in different contexts and settings.
Finally, we argue that our tangible questionnaire approach –
asking people to walk up to a playful and attractive life-size
machine and provide answers to a set of questions about
how they feel – shows much promise at getting people from
all walks of life to voice their opinions.
ACKNOWLEDGMENTS
This research was funded by ICRI Cities. We further thank
everyone who tried out VoxBox for their valuable insights,
and our colleagues in ICRI and UCL Interaction Centre for
their feedback on our ideas.
REFERENCES
1. Braun, L., et al. SkyWords: an engagement machine at
Chicago City Hall. In Proc. CHI '13 Ext. Abstr., ACM
Press (2013), 2839-2840.
2. Brignull, H. and Rogers, Y. Enticing people to interact
with large public displays in public spaces. In Proc.
Interact 2003, Rauterberg, M., Menozzi, M., and
Wesson, J., (eds). IOS Press, 2003, 17-24.
3. Dade-Robertson, M., Taylor, N., Marshall, J., and
Olivier, P. The political sensorium. In Proc. MAB 2012,
ACM Press (2012), 47-50.
4. Houben, S. and Weichel, C. Overcoming interaction
blindness through curiosity objects. In Proc. CHI 2013
Ext. Abstr., ACM Press (2013), 1539-1544.
5. Koeman, L., Kalnikaite, V., Rogers, Y., and Bird, J.
What chalk and tape can tell us: lessons learnt for next
generation urban displays. In Proc. PerDis ’14 (2014),
130-136.
6. Maennig, W. and Porsche, M. The Feel-good Effect at
Mega Sports Events. Recommendations for Public and
Private Administration Informed by the Experience of
the FIFA World Cup 2006. IASE/NAASE Working
Paper Series 8, 17 (2008), 1-28.
7. Marshall, P., Morris, R., Rogers, Y., Kreitmayer, S., and
Davies, M. Rethinking 'multi-user': an in-the-wild study
of how groups approach a walk-up-and-use tabletop
interface. In Proc. CHI 2011, ACM Press (2011), 3033-
3042.
8. Miller, K.W., Wilder, L.B., Stillman, F.A., and Becker,
D.M. The Feasibility of a Street-Intercept Survey
Method in an African-American Community. American
Journal of Public Health 87, 4 (1997), 655-658.
9. Müller, J., et al. Display Blindness: The Effect of
Expectations on Attention towards Digital Signage. In
Pervasive Computing, Tokuda, H., et al., (eds). Springer
Berlin Heidelberg, 2009, 1-8.
10. Osgood, C.E., Suci, G.J., and Tannenbaum, P.H. The
Measurement of Meaning. University of Illinois Press,
Urbana, 1957.
11. Price, S. A representation approach to conceptualizing
tangible learning environments. In Proc. TEI 2008,
ACM Press (2008), 151-158.
12. Rogers, Y., et al. Never too old: engaging retired people
inventing the future with MaKey MaKey. In Proc. CHI
2014, ACM Press (2014), 3913-3922.
13. Rogers, Y., et al. Ambient wood: designing new forms
of digital augmentation for learning outdoors. In Proc.
IDC 2004, ACM Press (2004), 3-10.
14. Schroeter, R., Foth, M., and Satchell, C. People, content,
location: sweet spotting urban screens for situated
engagement. In Proc. DIS 2012, ACM Press (2012),
146-155.
15. Taylor, N., et al. Viewpoint: empowering communities
with situated voting devices. In Proc. CHI 2012, ACM
Press (2012), 1361-1370.
16. Valkanova, N., Walter, R., Moere, A.V., and Müller, J.
MyPosition: sparking civic discourse by a public
interactive poll visualization. In Proc. CSCW 2014,
ACM Press (2014), 1323-1332.
17. Whittle, J., et al. VoiceYourView: collecting real-time
feedback on the design of public spaces. In Proc.
Ubicomp 2010, ACM Press (2010), 41-50.