VoxBox: a Tangible Machine that Gathers Opinions
from the Public at Events
Connie Golsteijn1, Sarah Gallacher1, Lisa Koeman1, Lorna Wall1,
Sami Andberg2, Yvonne Rogers1, Licia Capra1
1 ICRI Cities, University College London, UK
{c.golsteijn; s.gallacher; lisa.koeman.12; l.wall;
y.rogers; l.capra}@ucl.ac.uk
2 University of Helsinki, P.O. Box 28
FI-00014 University of Helsinki, Finland
sami.andberg@helsinki.fi
ABSTRACT
Gathering public opinions, such as surveys, at events
typically requires approaching people in situ, but this can
disrupt the positive experience they are having and can
result in very low response rates. As an alternative
approach, we present the design and implementation of
VoxBox, a tangible system for gathering opinions on a
range of topics in situ at an event through playful and
engaging interaction. We discuss the design principles we
employed in the creation of VoxBox and show how they
encouraged wider participation, by grouping similar
questions, encouraging completion, gathering answers to
open and closed questions, and connecting answers and
results. We evaluate these principles through observations
from an initial deployment and discuss how successfully
these were implemented in the design of VoxBox.
Author Keywords
Public opinion; gathering opinions; crowd engagement;
playful; tangible interaction; design research
ACM Classification Keywords
H.5: Information interfaces and presentation (e.g., HCI):
H.5.2. User Interfaces; H.5.m. Miscellaneous
INTRODUCTION
Traditional ways of obtaining public opinions have largely
been through marketing people approaching the general
public at events or in the street with a clipboard, cold
calling over the phone, or sending a text or email with a
link to a webpage for people to register and then fill in a
survey. More recently, tablet computers have been used to
replace the clipboard. However, all of these approaches
have their limitations and are susceptible to bias. The
reasons include the general public being wary of people
approaching them, and an increasing tendency to simply
ignore unsolicited messages. Many will avert their gaze, put
the phone down or delete the message. Those who do
respond are often only a small number of the population
and it is therefore unclear how representative they are of the
general population at large [8]. An alternative approach is
to design systems that gather opinions from the crowd in
situ without inappropriately interrupting people or
negatively influencing their positive experiences. While
previous studies have introduced large screens, social media
plug-ins, or simple voting systems, we aimed to design a
more playful experience that gathers detailed feedback from
the crowd at events such as festivals or fairs, by providing
an engaging and playful tangible system that invites people
to use it through its affordances. In this paper we present
the design, implementation and initial deployment of a
novel system, called VoxBox (Figure 1), which used a
range of physical input and output devices, based on a set of
core tangible design principles. We present and discuss the
value of our design approach for creating such a public
tangible opinion system.
Figure 1. VoxBox: a system to gather opinions from crowds.
Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. Copyrights for
components of this work owned by others than ACM must be honored.
Abstracting with credit is permitted. To copy otherwise, or republish, to
post on servers or to redistribute to lists, requires prior specific permission
and/or a fee. Request permissions from Permissions@acm.org.
TEI '15, January 16–19, 2015, Stanford, CA, USA
Copyright is held by the owner/author(s). Publication rights licensed to
ACM. ACM 978-1-4503-3305-4/15/01…$15.00
http://dx.doi.org/10.1145/2677199.2680588
BACKGROUND
A variety of technologies for eliciting public opinions or
feedback have been developed that try to be more inclusive
and approachable when placed in situ in public spaces.
These include the use of large screens, mobile phones, and
voting boxes. Texting or Tweeting are often used as the
medium. For example, Schroeter et al. [14] developed an
application for public displays to elicit opinions via text or
tweet from citizens who otherwise would not have their say.
Others have used more traditional input devices, such as
keyboards and public telephone handsets to get the public
to voice their opinions or concerns. The Opinionizer [2]
comprised a large projected display that people added their
opinions to via typing at a keyboard. The VoiceYourView
system [17] provided an old-fashioned telephone in a
library to obtain people's views about a recent
refurbishment, which were represented as colorful visual
bubbles on public screens. While many people freely gave
their opinions in both settings, some felt uncomfortable and
self-conscious doing so. This suggests that the method by
which people are asked to give their views and the setting
in which they do so impacts the extent to which they will
voice their opinions or take part. Taylor et al. [15] found
that users did not like using mobile phones to interact with
public displays, and preferred to press buttons on the device
directly. Müller et al. [9] found that mobile phone
interaction with public displays did not receive as high
uptake as expected. More recently, MyPosition asked
people to vote on local issues through gesturing in front of a
public display [16]. While many people stopped to look,
only one in four chose to submit an opinion.
While this new generation of opinion-based technologies
can be attractive and encourage more people to participate,
there is still the problem that others shy away. It is not
always clear how to interact with a public display that
people have never seen before, especially if it is novel.
Moreover, people may not see them in the first place. Such
display and interaction blindness has been found to exist for
a number of public displays and billboards [7, 9]. People
expect them to be advertising material they don't want to
look at, or the displays simply fail to grab their attention. We would
argue that the opposite is true for physical tangible objects,
which do have the affordances to draw people’s attention.
People are drawn to something that is novel, unusual and at
odds with the environment. For example, the Periscope was
designed as an unusual technological device for viewing
videos about the surrounding area. Situated in a woodland,
it provoked children to stop, wonder and interact [13].
Houben and Weichel [4] have also found that the
introduction of a curious physical object linked to a public
display attracted attention and significantly increased the
numbers of people interacting with the display. The
physicality and tangibility of components with clear and
familiar affordances, such as pressing buttons, moving
sliders, and turning knobs and handles, clearly indicate that
they are there to be interacted with and also make it
obvious how to do so. Both curiosity and clear affordances
are important: firstly, to attract the attention of passers-by and
secondly, to help them move through the threshold of
participation [2].
In this light, researchers have designed very simple physical
button-based voting boxes for gathering opinions [1, 3, 5,
15]. A benefit of using such simple input devices is that
they are cheap to make and can be situated in a range of
public places. However, they are limited in how far they
can probe people’s views and opinions. The question this
raises is how best to design a range of tangible input
devices that people are drawn to, will find compelling, will
know intuitively how to interact with, and will also not feel
self-conscious when doing so, or feel that it is too childlike
or too technical for them to use. Our approach was to
design a large tangible interactive machine that could stand
out, was obvious to interact with, was playful and would
engage people to gather a diversity of responses and views.
We also wanted to maintain the interest of passers-by and
provoke further discussion amongst those nearby by
showing the collected data in aggregated form as a real-
time visualization.
DESIGN PRINCIPLES
The design of VoxBox focused on recreational events, such
as festivals or fairs, and aimed to gather opinions on the
‘feel good factor’ of such events, e.g. do people enjoy the
event, do they feel connected to the people around them,
and what are the elements that are most memorable? We
considered characteristics of online or paper questionnaires
and also key issues that were observed with these, and
employed the following design principles.
Encouraging Participation
To prevent situations that are uncomfortable for both
researcher and participant, such as hassling people with a
clipboard, our aim was to design a system that invited
people to participate without forcing them or interrupting
their event experience. At the same time, it was important
to design VoxBox to be able to stand out and draw attention
from competing stalls that are also often part of an event.
We thus chose to create a large physical system with
physical input mechanisms through which people could
give their opinions, instead of using, for example, text
messages or social media input. VoxBox was designed as a
modular system built around a physical shelving unit that
lets users move through groups of questions, module by
module (Figure 1). Each module used a different input
mechanism that people were familiar with and knew how to
use, such as sliders, buttons, knobs, and spinners. The first
module asked closed questions about demographics, the
second about their current mood, the third about the crowd,
the fourth about the event, and the fifth and final asked an
open question. In addition, the system included a transparent
tube at the side that dropped a ball step by step as the
question modules were completed, serving both as an incentive
for completion and as a progress indicator. Finally, the reverse side
of the system showed three real-time visualizations of the
collected data on small screens embedded in portholes. The
aim of our research was to make VoxBox mostly self-
explanatory so that it was clear what it was and why
someone would want to interact with it [7]. We further
designed interactions to require no technological knowledge
or skills [3], and made the system, in most cases, usable
without instructions.
Grouping Similar Questions
In conventional questionnaires, related questions or
questions that require the same way of answering are often
visually grouped, for example by putting them on the same
page, or separating them with whitespace. We employed a
tangible approach to this by designing VoxBox to consist of
a number of separate question modules. Each module
contained groups of questions that were related, and that
used the same input mechanism. In this way we created a
questionnaire with a logical flow of questions, and chose to
make it not visually intimidating, as grouped questions
emphasized that the questionnaire was not long.
Encouraging Completion and Showing Progress
One issue with questionnaires is people dropping out during
completion, which is often caused by a lack of clarity about
the length of the questionnaire or their progress, along with a
lack of incentive to finish. In the VoxBox design the entire
questionnaire was visible all the time so that users knew
how many questions they needed to respond to and how
long it may take. Further, a tangible reward (a stress ball
featuring the URL of the website with the results) was
given to the users to encourage completion; the ball could
only be obtained when the questionnaire was completed. By
designing a transparent tube that dropped the ball in stages
after each part of the questionnaire was completed, the ball
also served as a progress indicator. Progress was also
shown by lighting up the active panels one by one as the
user went through the questionnaire. This light feedback, in
addition to lights next to buttons and scales for each
corresponding option, provided immediate feedback from
the system to show that it was interactive and that it was
working, in order to encourage further use [7].
Gathering Answers to Closed and Open Questions
One problem with questionnaires is a lack, or brevity, of
responses to open questions. Rogers et al. [12] found that
engaging participants in playful activities resulted in a
greater willingness to talk, and that it triggered free
thinking. Although most of the questions in VoxBox are
closed questions, we specifically designed a playful input
mechanism, a phone handset that rang when a user reached
this panel and asked them a question when they picked up.
The user could then speak their answer into the handset and
hang up the phone. We hoped that through this playfulness
and engagement our questionnaire would result in more
willingness to answer the open questions asked.
Connecting Answers and Results
In traditional surveys there is often a divide between a
respondent answering questions and the researcher
gathering data and presenting these in reports or papers.
Respondents often do not have access to the results of the
survey or are not informed where these results can be
found. To make VoxBox more enticing to use and to trigger
discussions among bystanders, we decided to make the
collected results visible to the users [3]. Real-time results
were shown in two different ways: on the website (for
which the URL was printed on the incentive balls), and on a
set of visualizations on the reverse side of the system. By
printing the URL on the balls that were obtained after
answering questions, we physically linked the users’
answers to the results website by symbolizing that the
results quite literally rolled out of the system after
answering questions. The data visualizations on the system
offered an immediate insight into the results. We tried to
encourage users to look at these through the physical design
by making them walk around the side of VoxBox to collect
their ball. The box where the ball dropped was angled
backward to encourage users to walk further around the
back to see the visualizations.
DESIGN AND IMPLEMENTATION OF VOXBOX
Inspiration for the design of VoxBox came from a number
of sources including the classic computer game ‘The
Incredible Machine’ (in which a user solves puzzles by
arranging physical objects, e.g. levers, ropes, and conveyor
belts), marble tracks (in which marbles are guided through
sometimes complex tracks), and mechanical devices and
interactive exhibitions as seen in science museums.
We decided on a final set of questions we wanted to ask
based on our own interpretations of what may influence the
feel good factor, and inspired by reading through evaluation
reports on several organized events [e.g. 6]. As mentioned,
these questions were divided into five categories, which
were shown on five separate question modules in the
system. An overview of the questions that were asked in
each module can be seen in Table 1. While the
demographics were mainly entered through simple push
buttons, for the mood, crowd, and event questions we
decided on different variations of input scales, so that
people could rate their agreement. Although we could have
used similar interactions for each of these groups, we felt it
was important to include a variety of interactions to avoid
the tedium of having to answer many questions in the same
way, and keep the system engaging throughout the whole
interaction. For the mood questions we decided to use linear
sliders with LED feedback that represented semantic
differential scales [10] on which people rate their response
between two opposite answers on a scale; these scales were
continuous (Figure 2a). For the crowd questions we used
rotary knobs with LED feedback to show the answer along
the scale. These questions were rated between disagreement
and agreement and the interaction provided a discrete scale
with 16 increments (Figure 2b). The event questions were
answered through physical spinners with five options
between disagreement and agreement similar to a Likert
scale (Figure 2c). Finally, for the open questions, we
designed a phone handset to employ a familiar metaphor for
dialog in an unfamiliar setting, which we hoped would
result in surprise and excitement (Figure 2d).
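The three closed-question mechanisms amount to simple mappings from raw hardware readings to response values. The following sketch illustrates this in Python rather than the Arduino C++ actually used; the function names and input ranges (e.g. a 10-bit ADC for sliders and knobs) are our assumptions, not the authors' code:

```python
# Hypothetical mappings from raw readings to response values; the 10-bit
# ADC range (0..1023) is an assumption for illustration only.

def slider_response(adc_value, adc_max=1023):
    """Continuous semantic-differential scale: map a raw slider reading
    to a value in [0.0, 1.0] between the two opposite anchors."""
    return adc_value / adc_max

def knob_response(adc_value, adc_max=1023, increments=16):
    """Discrete agreement scale: quantize a rotary-knob reading into
    one of 16 increments (returned as 1..16)."""
    step = min(int(adc_value / (adc_max + 1) * increments), increments - 1)
    return step + 1

def spinner_response(position):
    """Five-option Likert-like spinner: detent position 0..4 maps to
    a label between disagreement and agreement."""
    labels = ["strongly disagree", "disagree", "neutral",
              "agree", "strongly agree"]
    return labels[position]
```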
We developed VoxBox as a modular system with separate
question modules for the different groups of questions, and
incorporated mechanisms for the incentive ball to run
through the system (Figure 3a). Early variations of the
design imagined the ball completing a track through the
physical device in which obstacles had to be removed, or
the track had to be completed, by answering questions.
Different questions would have different physical
mechanisms behind them that would allow the ball to move
forward, for example a ‘yes’ or ‘no’ question would tip a
slope in a certain direction, while a Likert scale may move
an obstacle out of the way. Ideas also included mechanisms
for encouraging longer answers to open questions, such as
gradually moving obstacles away or only running a
conveyor belt while the user was still recording an answer.
Due to feasibility reasons within the time constraints of the
project, the ball track was simplified to run through the
device and be controlled through physical levers after each
question panel (see Figure 3b) and ultimately, replaced by
an external tube that dropped the ball after each stage.
Implementation
VoxBox was implemented using three off-the-shelf
shelving units to make sure it was sturdy enough to
withstand many interactions and unanticipated user
behavior. To allow for a flexible and modular system, we
designed each question module as a drawer that was slotted
into the shelving unit. In this way, question modules could
be moved around and the sequence of the questions could
easily be changed. Question modules were created from
plywood using a laser cutter to give VoxBox an appearance
that called up associations of ‘a time machine’ and ‘a mix
of Willy Wonka, the controls of the Tardis and those ornate
fairground automata’, according to initial responses.
Each question module contained a front panel for user
interactions, which contained the sliders, buttons, knobs,
spinners, or handset. A question module further contained
an LED strip around the edge of the front panel that was lit
up in green when a panel was active (Figure 4a), and a
green submit button that was used to submit the user’s
answers. This button was necessary to determine when a
user had made a final decision on the answers. Along with a
large green start button, elements in this color were thus
deliberately used to navigate the users through the system.
Table 1. Overview of the questions and interaction mechanisms in the different question modules.
Figure 2. The input mechanisms for the question modules.
Figure 3. Early sketches of VoxBox: a. design of a modular
system; b. design of the internal ball tube.
Although buttons and sliders were fixed in the panels,
questions and answers were cut from separate labels that
were screwed on (Figure 4b). This allowed for questions to
be easily changed (within the constraints of number and
type of question in each panel) for different events where
different questions may be desired. Most question panels
used off-the-shelf components, for example the sliders,
knobs, and buttons. We created a tailored rotary dial for age
input and spinners for the event questions (Figure 5).
Similar to the easily changeable question labels, the paper
inlays of these spinners could also be replaced to show
different answers.
VoxBox was controlled by open source Arduino
technologies. To enable a modular design each question
module contained its own Arduino board that controlled the
I/O for that module. In addition there was a 'Master'
Arduino and one to control the ball tube. The Master had
overall control of the VoxBox operation and a WiFi
connection to a backend server and database. On startup the
Master downloaded the ordered list of currently attached
question modules. It then proceeded to go through the list in
sequence (Table 1), activating the next question module in
the list, waiting for it to send back its data and then
deactivating it again. All communication between Arduinos
within the VoxBox was via I2C. Once the Master reached
the end of the list it collated all the data it had collected
from the question boxes and uploaded this to the backend
server and database via its WiFi link. This architecture
allowed VoxBox to be easily adapted, as question boxes
could be added, removed or swapped around without
needing to make any changes to their code or the code
inside the Master. Even extra connectors for possible
additional data cables between modules were already
implemented in the system. The only change required was
an alteration to the ordered list of currently attached
question modules in the backend server.
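The Master's control flow — fetch the ordered module list, activate each module in turn, wait for its data, then collate and upload everything — can be sketched as follows. This is an illustrative Python simulation, not the authors' Arduino code: the module objects stand in for the real I2C peers and the returned record stands in for the WiFi upload.

```python
class QuestionModule:
    """Stand-in for one question-module Arduino reachable over I2C."""
    def __init__(self, name, answers):
        self.name = name
        self._answers = answers  # canned answers for the simulation

    def activate(self):
        pass  # the real module would light its green LED strip here

    def wait_for_answers(self):
        return self._answers  # the real module replies after 'submit'

    def deactivate(self):
        pass  # the real module would turn its LED strip off

def run_survey(modules):
    """Mimic the Master: step through the ordered module list, gather
    each module's data, and return the collated record that would be
    uploaded to the backend server over WiFi."""
    record = {}
    for module in modules:  # order downloaded from the backend on startup
        module.activate()
        record[module.name] = module.wait_for_answers()
        module.deactivate()
    return record

modules = [
    QuestionModule("demographics", {"age": 34}),
    QuestionModule("mood", {"happy_sad": 0.8}),
    QuestionModule("crowd", {"connected": 12}),
]
survey = run_survey(modules)
```

Because the Master only iterates over whatever list the backend reports, swapping or removing a module changes nothing in the device code, mirroring the adaptability described above.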
The ball tube was implemented by creating a tailored
construction from plywood and a transparent tube (Figure
6). The tube was divided into six parts and a servo motor
with a long arm was mounted in each part to stop the ball
from moving through. After pressing the start button, and
each of the submit buttons the servos rotated in sequence to
drop the ball step by step. The ball tube was connected to a
ball compartment within the VoxBox unit and although
balls were fed into the tube manually in this
implementation, an automatic feed was imagined for
potential redesigns. The ball tube thus functioned as an
incentive to complete the survey and as a physical progress
bar. Because the tube consisted of separate parts that
corresponded to each question module, this element of the
system could also easily be adapted to account for more or
fewer attached question modules.
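The gating logic of the tube reduces to a counter over its six segments: the start button and each submit button release the next servo in sequence, and the ball is dispensed once all segments have opened. A minimal Python sketch of this logic (the segment count comes from the paper; the servo interface is invented for illustration):

```python
class BallTube:
    """Six gated tube segments; each completed stage drops the ball
    one step, so the ball doubles as a physical progress bar."""
    SEGMENTS = 6

    def __init__(self):
        self.position = 0  # 0 = top of tube, SEGMENTS = dispensed

    def stage_completed(self):
        """Called once after the start button and once after each
        submit button. Returns True when the ball is dispensed."""
        if self.position < self.SEGMENTS:
            self._rotate_servo(self.position)
            self.position += 1
        return self.position == self.SEGMENTS

    def _rotate_servo(self, index):
        # A real implementation would pulse servo `index` so its long
        # arm swings out of the ball's path; omitted in this sketch.
        pass
```

Adapting to more or fewer question modules would then be a matter of changing `SEGMENTS` to match the tube's physical segment count.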
Data that was sent from the Master Arduino to the server
was used to create visualizations that were shown on the
website and on the system itself. VoxBox was designed to
not only allow people to share data on their demographics
and views, but to also give them the opportunity to learn
more about the opinions held by others. Similar public
visualizations of people’s perceptions have served as a
talking point [e.g. 5, 16]. To enable passers-by to view and
discuss the data gathered at the front side of the VoxBox,
eye-catching and simple visual representations were shown
on the reverse side (Figure 7a).
Figure 4a. Green LED strips showed that a panel was
active; b. Separate question and answer labels were
screwed on for easy changes.
Figure 5. Tailor-made spinners; paper inlays could be
changed to show different answers.
Figure 6. The ball tube at the side of the system functioned
as an incentive for completion and progress indicator.
To ensure the aesthetics of
these representations would match the look and feel of the
input technology, inspiration was sought from retro display
technology: flip-disc displays, the electromechanical dot
matrix displays traditionally used for destination signs
on buses. While these signs are originally of ultra-low
resolution, recreating digital screen-based flip-disc displays
allowed for the display of higher resolution infographic-like
visualizations. By flipping the discs row by row, the display
scrolled through real-time visual summaries of the data. By
creating side panels around these digital screens, we created
the illusion of a porthole via which people could look into
the VoxBox (Figure 7b). Apart from protecting the
screens from direct sunlight, the portholes were also meant
to spark curiosity and lure people to the screens thereby
overcoming common display blindness [9].
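The row-by-row flip effect amounts to progressively replacing rows of the current frame with rows of the next visualization. A small illustrative sketch of that transition, purely our reconstruction of the visual effect with frames modeled as lists of text rows:

```python
def flip_transition(current, nxt):
    """Yield intermediate frames in which the top k rows have already
    'flipped' to the next visualization, emulating the row-by-row
    update of an electromechanical flip-disc display."""
    assert len(current) == len(nxt)
    for k in range(len(current) + 1):
        yield nxt[:k] + current[k:]

frame_a = ["AAAA", "AAAA", "AAAA"]
frame_b = ["BBBB", "BBBB", "BBBB"]
frames = list(flip_transition(frame_a, frame_b))
```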
INITIAL DEPLOYMENT
In addition to numerous people in our research institute
coming by our lab to try out VoxBox, we ran an initial
deployment at a one-day conference on technology
concerned with the relationship between the government,
digital democracy and the public (Figure 8). At this event,
over 50 academic researchers, people from industry, and
government organizations were present who were interested
in novel technologies. VoxBox was set up in the area where
coffee and lunch breaks took place, and over lunch there
was a dedicated slot for interactive demos. As such,
VoxBox was available for the attendees to use for a total of
1.5 hours. Around 30 people used the system, all of whom
completed the whole survey, taking an average of three
minutes to do so. Below, we describe our observations
on how VoxBox was used at this event. Based on these we
discuss how our design principles played out in this context.
We end by describing possible improvements to the design.
Overall, VoxBox was well received and gained a lot of
interest. In the first break, we witnessed one person walking
with a brisk pace towards our system as soon as he spotted
it and immediately started interacting with it, eager to be
the first one to engage with the system. On several
occasions a queue formed as people waited for their turn.
Others deliberately chose to watch others interact first while
taking their turn afterwards. Many attendees were interested
in the thoughts behind the system and how it was built, and
reacted enthusiastically to its visual appearance. Small
groups of attendees who knew each other often came up
together and each had their turn. One person thought out
loud: ‘With whom did you come to this event? Are you
guys my friends?’ which resulted in laughter from the
group. The phone handset, which rang shortly after the
users had submitted the answers on the previous panel,
caused surprise, and many users could be seen grinning
while picking up the phone. Most users answered the open
question through the phone, and several gave quite
elaborate answers, e.g. to ‘If there was an entry fee for this
event, how much would you be willing to pay?’: ‘I'd sell my
children. And possibly my mother. But I get less money for
my children aye.’ Another example was the answer to
‘What will you remember most from this event?’, to which
they replied, ‘I'll remember the VoxBox most.’
Among many utterances of ‘Wonderful, fantastic. Thank
you.’ and ‘that was fun!’ there was one attendee who
questioned whether the data shown on the system was the
data we were actually collecting there and then. He
wondered if he was the only one who would question if the
data representations were manipulated by the organizers of
the event to show favorable results. He was the only one at
this event to raise this concern, but it would be worth
exploring further to what extent people trust the accuracy of
the data visualizations. Among those that did ‘believe’ the
data, there was substantial interest and several people
remained watching the visualizations scroll through
different results. One speaker teased another by
commenting: ‘23% feel bored, that was your talk!’ Users
did not always immediately notice the ball dropping down
the side of the system; this happened mostly in early
interactions where people had not seen others use it yet,
and had not yet had a chance to walk around the device. They
sometimes seemed surprised that they could keep the ball
but were always pleased when we informed them. One or
two people opted to give their ball back to ‘save us money.’
Figure 8. User interacting with VoxBox during the initial
deployment at a one-day conference.
Figure 7a. The reverse side of VoxBox showed real-time
visualizations of the data; b. visualization screens were
embedded in portholes.
Finally, we noticed that some users did not realize that the
start button needed to be pressed before any other
interaction could take place. They usually figured this out
quickly, or had it pointed out to them by other attendees.
DISCUSSION
Our observations based on the initial deployment confirmed
that VoxBox is a novel and engaging system that succeeds
in gathering opinions from crowds at events. We were
interested in how our observations were able to validate the
choice of our design principles for creating interactive
features that were able to draw people to answer all the
questions thoughtfully. From these principles we consider
more generally which tangible features are effective and
how to combine them to make a compelling and enjoyable
experience for answering questions at other kinds of events.
Considering our first aim was to encourage participation,
we saw that the appearance of the system was very
attractive, drawing many people to it like a honey pot [2].
Although the deployment took place at an event attended
predominantly by people who were excited about
technology, there were also a number of attendees from
industry or government organizations who had less affinity
with technology but were still very enticed by VoxBox. As
researchers, we deliberately took a stand-back approach:
instead of inviting people to have a go, we let them
approach it by themselves. Many people took initiative and
used it from start to finish. The ball tube and ball
compartment appeared to be an unanticipated attention catcher as
people were intrigued by the function of the colorful balls
and by the appearance of the ball tube. The system
appeared to be mostly self-explanatory although a few
usability issues were observed. Users did not always notice
the start button without which none of the panels were
activated. We had noticed this before during informal trials
in the lab and had created a large arrow to point out the start
of the interaction sequence but this was insufficient to fully
solve this issue. We further noticed that some users were
surprised at first about the sequence of the panels, although
the green light navigation helped to make this clear. Apart
from these small issues, VoxBox was very effective in
encouraging people to give their opinions.
As mentioned, VoxBox grouped similar questions, by
separating them on several question panels. Although this
did work well in giving the appearance of a short survey,
some people got a bit confused at first about having to go
through the panels in a fixed sequence. This fixed sequence
was introduced in part by technology constraints, and in
part by this being a common approach in traditional
questionnaires. It was thus unanticipated that users would
be confused by having to follow a sequence. It seems that
by transposing characteristics from paper or online
questionnaires to a physical device, we had created new
affordances that invited different behaviors, e.g. all the
questions were visible at the same time and some
interaction mechanisms may have looked more enticing
than others. We realize that VoxBox does not need to
incorporate a fixed sequence of interaction and we can
consider other ways in which the affordances of a physical
system are exploited to create a more appropriate, less
constrained form of interaction. Similarly, traditional
questionnaires often route respondents through different
flows of questions based on their previous answers; we could
explore how such more sophisticated branching could be
integrated into the physical design of VoxBox.
We aimed to encourage completion and show progress,
mainly through the ball tube that provided the ball as an
incentive and showed the progress in the questionnaire. In
our initial observations we saw that this did not work as
well as planned. Because of the location of the ball tube at
the side of the system, users did not always notice
straightaway that something was happening, and many had to be
told afterwards that they had earned their ball. Once people
noticed that a ball dropped after each panel, however, they
were enthusiastic and often stepped aside to check their
progress. This
issue can easily be solved by moving the ball tube forward
along the side so that it is more visible while standing in
front of VoxBox. Furthermore, although most users were
pleased when informed that they could keep their ball, it did
not seem as strong an incentive as the joy of interacting
with VoxBox. Nevertheless, the ball functioned as a link to
the survey results and showed the URL of our website.
A further aim was to gather answers to open questions by
enticing people to speak their answers into a phone. This
method proved effective, as shown by the number of people
who, pleasantly surprised by the phone ringing, listened
intently to the question and then spontaneously gave a verbal
response, sometimes an elaborate one.
In showing the results of the data collection on the system,
we also wanted to connect answers and results. Because the
ball tube was not ideally positioned, the ball rolling
towards the back did not draw users towards the
visualizations as strongly as we had hoped. Although
plenty of users did see the visualizations (albeit sometimes
prompted) and enjoyed seeing the results, it is important to
consider other ways to link the data input and visualizations
more strongly, for example, by not placing them at the
reverse side of the system but bringing them closer to the
location of the input so that users do not have to divide their
attention as strongly [11]. We further considered ways to
link an individual user's data more explicitly to that of the
crowd, so that personal opinions can be compared with
aggregate ones, e.g. by showing current and aggregated data
on different screens at the same time. Such additions and
improvements could connect answers and results more strongly
than was the case in this deployment.
Finally, privacy is an important concern when asking
people to give personal information, such as their age or
views, in a public place. We considered placing the
VoxBox in a booth with a curtain that could be drawn by
the users to prevent people looking over their shoulders.
However, this would mean losing the attractive visibility
that was central to how we envisioned VoxBox drawing people
in. In practice, no-one appeared worried about their privacy
in this context, and onlookers gave users a wide berth, akin
to how people stand back when waiting to use an ATM.
CONCLUSIONS
In this paper we have presented the design, implementation,
and deployment of VoxBox, a tangible system to gather
opinions from crowds at events. We have shown through an
initial deployment how appealing and engaging VoxBox
was considered to be, and how successful it was in drawing
people in and gathering opinions in a novel way. We have
extensively discussed our rationale behind designing this
system and have reflected on the extent to which we have
successfully implemented our design principles based on
observations from an initial deployment. VoxBox opens up
discussions around the design of novel systems that can
encourage the sharing of opinions by engaging users in
playful interactions. Our findings have shown this is an
important area for researchers to explore because gauging
opinions and knowing what people think is considered an
increasingly important part of community engagement. Our
future plans include deploying and adapting VoxBox for a
variety of other events in different contexts and settings.
Finally, we argue that our tangible questionnaire approach,
in which people walk up to a playful and attractive life-size
machine and answer a set of questions about how they feel,
shows much promise in getting people from all walks of life
to voice their opinions.
ACKNOWLEDGMENTS
This research was funded by ICRI Cities. We further thank
everyone who tried out VoxBox for their valuable insights,
and our colleagues in ICRI and UCL Interaction Centre for
their feedback on our ideas.
REFERENCES
1. Braun, L., et al. SkyWords: an engagement machine at
Chicago City Hall. In Proc. CHI '13 Ext. Abstr., ACM
Press (2013), 2839-2840.
2. Brignull, H. and Rogers, Y. Enticing people to interact
with large public displays in public spaces. In Proc.
Interact 2003, Rauterberg, M., Menozzi, M., and
Wesson, J., (eds). IOS Press, 2003, 17-24.
3. Dade-Robertson, M., Taylor, N., Marshall, J., and
Olivier, P. The political sensorium. In Proc. MAB 2012,
ACM Press (2012), 47-50.
4. Houben, S. and Weichel, C. Overcoming interaction
blindness through curiosity objects. In Proc. CHI 2013
Ext. Abstr., ACM Press (2013), 1539-1544.
5. Koeman, L., Kalnikaite, V., Rogers, Y., and Bird, J.
What chalk and tape can tell us: lessons learnt for next
generation urban displays. In Proc. PerDis 2014, ACM
Press (2014), 130-136.
6. Maennig, W. and Porsche, M. The Feel-good Effect at
Mega Sports Events. Recommendations for Public and
Private Administration Informed by the Experience of
the FIFA World Cup 2006. IASE/NAASE Working
Paper Series 8, 17 (2008), 1-28.
7. Marshall, P., Morris, R., Rogers, Y., Kreitmayer, S., and
Davies, M. Rethinking 'multi-user': an in-the-wild study
of how groups approach a walk-up-and-use tabletop
interface. In Proc. CHI 2011, ACM Press (2011), 3033-
3042.
8. Miller, K.W., Wilder, L.B., Stillman, F.A., and Becker,
D.M. The Feasibility of a Street-Intercept Survey
Method in an African-American Community. American
Journal of Public Health 87, 4 (1997), 655-658.
9. Müller, J., et al. Display Blindness: The Effect of
Expectations on Attention towards Digital Signage. In
Pervasive Computing, Tokuda, H., et al., (eds). Springer
Berlin Heidelberg, 2009, 1-8.
10. Osgood, C.E., Suci, G.J., and Tannenbaum, P.H. The
Measurement of Meaning. University of Illinois Press,
Urbana, 1957.
11. Price, S. A representation approach to conceptualizing
tangible learning environments. In Proc. TEI 2008,
ACM Press (2008), 151-158.
12. Rogers, Y., et al. Never too old: engaging retired people
inventing the future with MaKey MaKey. In Proc. CHI
2014, ACM Press (2014), 3913-3922.
13. Rogers, Y., et al. Ambient wood: designing new forms
of digital augmentation for learning outdoors. In Proc.
IDC 2004, ACM Press (2004), 3-10.
14. Schroeter, R., Foth, M., and Satchell, C. People, content,
location: sweet spotting urban screens for situated
engagement. In Proc. DIS 2012, ACM Press (2012),
146-155.
15. Taylor, N., et al. Viewpoint: empowering communities
with situated voting devices. In Proc. CHI 2012, ACM
Press (2012), 1361-1370.
16. Valkanova, N., Walter, R., Moere, A.V., and Müller, J.
MyPosition: sparking civic discourse by a public
interactive poll visualization. In Proc. CSCW 2014,
ACM Press (2014), 1323-1332.
17. Whittle, J., et al. VoiceYourView: collecting real-time
feedback on the design of public spaces. In Proc.
Ubicomp 2010, ACM Press (2010), 41-50.