Engaging Augmented Reality in Public Places
Stuart Reeves1, Mike Fraser2, Holger Schnädelbach1, Claire O’Malley1, Steve Benford1
1The Mixed Reality Laboratory &
Learning Sciences Research Institute
The University of Nottingham
Computer Science Building
Wollaton Road, Nottingham
NG8 1BB, UK
{str,sdb,hms}@cs.nott.ac.uk, com@psyc.nott.ac.uk
2Department of Computer Science
The University of Bristol
Merchant Venturers Building
Woodland Road, Bristol
BS8 1UB, UK
fraser@cs.bris.ac.uk
ABSTRACT
Augmented Reality (AR) systems are moving beyond the
laboratory and into the public domain. Such a shift presents
new challenges for AR design. In this paper, we study a
public artistic exhibition which includes a bespoke AR
system. Our design reflects social and physical constraints
of the public space in which the device is placed. We
investigate the effect of AR on the engagement of visitors
with the exhibition. Through our analysis, we provide
evidence to illustrate the differing ‘augmented’ and
‘disaugmented’ levels of engagement users experience with
the AR device in addition to typical engagement observed
in social scientific studies of the exhibit face. We discuss
the importance of separating target and display, and how
levels of engagement with public AR can be explicitly
supported.
Author Keywords
Augmented reality, public exhibitions, engagement.
ACM Classification Keywords
H5.m. Information interfaces and presentation:
Miscellaneous; J.5 Computer Applications: Arts and
Humanities
INTRODUCTION
In this paper, we explore the challenge of publicly
deploying an Augmented Reality (AR) system. AR has, in
recent times, become a prominent strategy for overlaying
digital content on physical environments. The development
of AR across research laboratories has provided increasing
speeds [12], improved registration [16], better quality
graphics [3,7,23], devices to support multiple users [6,1]
and broadening domains of projected use [5,25,28].
Numerous frameworks exist which place AR into a
particular relationship with other mixed reality systems
[10,20,22] and tangible interfaces [18]. Nevertheless, there
have been few attempts to date at placing AR systems in
everyday settings, and none that systematically reflect on
the constraints that are imposed on development strategies.
As HCI studies move from traditional laboratories to
investigate users’ everyday experiences, we find new
challenges in making technologies work in the real world.
Novel technologies that emerge from early research have
often co-existed uneasily when faced with the practical
settings of public places, and numerous studies describe
challenges in deploying systems as varied as kiosks in
shops and bars [9], interactive displays in museums [13],
and tourist touch-screens around city streets [8].
In exposing such systems to everyday use, public
exhibitions have become increasingly important settings for
studies of human-computer interaction [15, 14]. The
exhibition presents an ideal domain in which to study AR
systems in a public setting. Curators are often seeking new
ways to engage the public, recognising that collections
which are problematic to exhibit often include absences and
assumptions, such as fragments of complete artefacts or
incomplete collections of these artefacts. In these cases
demonstration through, for example, digitally recreating
artefacts [21], or augmenting existing objects [26], could
improve access to the material. It has previously been noted
that there is a close correspondence between such
exhibition goals and the goals of AR [13]. Furthermore, the
technical requirements associated with inserting digital
content to augment exhibition spaces closely compare to the
aims of AR research on registration of environments
[2,3,4,12]. In various ways, then, AR systems have begun
to display reconstructions of events and objects in context
with the physical world [24,14,11]. Nonetheless, AR systems development remains an enterprise driven primarily by the improvement and demonstration of technological achievements, rather than being balanced with detailed reflection on such developments.
Several key issues for the design of public experiences emerge from social scientific studies of exhibition settings. For example, studies have
shown that both companions and passers-by can often shape
each other’s experiences [19]. Furthermore, visitors often
draw on the activities of others to learn how to use and
appreciate interactive exhibits [15]. In line with this corpus
of research and other discussions of sensing visitor types (such as visitors characterised as ‘busy,’ ‘greedy’ or ‘selective’) [26], we expected visitors to occupy particular levels of engagement in the exhibition space: those directly
engaged with the exhibit; co-visitors that form a local
(collaborative) grouping with interactors; and co-visitors
that are bystanders, often being implicated in the
proceedings. We therefore begin from the standpoint that
AR design should not just refer to one or more people using
a device, but rather adapt to the many ways in which the
public engage with each other with and around displays.
ONE ROCK
One Rock was a two-month public installation developed
by Welfare State International, an arts company located in
Ulverston, Cumbria. The focus of the exhibition was a large
rock in Morecambe Bay, on the north-west coast of
England. The aim was to use the various geological,
microbiological, historical and social aspects of the rock to
engender and renew fascination with the surrounding
locality and its features. The installation was created inside
an exhibition space a short distance away.
An Overview of the Exhibition
The exhibition attracted varying numbers of people, both in
terms of group sizes and daily throughput (from individuals
through to groups of forty). Automated progression
between stages of the installation precluded any latecomers
(who would be asked to wait until the next run), so once a
performance had started, the group inside stayed until the
end without any additional visitors entering the space. A
single performance lasted for twenty minutes in total. A
docent was usually on hand during the performance,
providing different levels of intervention for the visitors.
When visitors entered the space, for example, some docents
briefly described the experience, whereas others said nothing.
Figure 1. Exhibition space floor plan
The exhibition itself was structured specifically around
three ways of viewing the rock: macro, micro and mythic;
the space was divided up into three parts to reflect these
three aspects. All the sections involved dramatic changes in
lighting and a loud accompanying soundtrack.
The first section was deliberately passive and meditative
with coordinated visuals and sound showing the rock and
its surroundings (Figure 1, left). The entrance area
contained a model matching the dimensions of the real rock, which is approximately the size
of a small car. The macro section provided a physical
representation of the rock to give visitors a sense of its
place within the local ecology of the Bay.
Once the initial sequence on the large screen ended, lights
underneath the gratings in the floor directed visitors
towards the second, more interactive section (Figure 1,
centre). This area of the exhibition contained the bespoke
AR device called the Telescope, which was placed
approximately two metres from a feature of the exhibition
called the Incubator. The Incubator was a metal structure lit
from below that supported hundreds of bottles containing
microbes, sea life and other residue collected from around
the rock. It also held concealed speakers for associated
sounds. During this part of the exhibition, views of
microscopic sea-life were projected onto the opposing wall.
The micro section allowed visitors to experience the
‘unseen’ world of the rock, studying its microscopic life
and substance.
The final section adjacent to the area where the Telescope
and Incubator were located was primarily sculptural, using
traditional materials including those collected from the Bay.
These forms illustrated various social and historical legends
that the rock might tell if it could speak (Figure 1, right).
Telescope Design and Constraints
The Telescope (Figure 2) could be rotated to examine the
bottles in detail. The device provided visitors with a way of
conjuring video sequences out of the bottles on the
Incubator. We hoped to create the illusion that prefabricated
microscopic images and videos from the rock could emerge
from the glass bottles by ‘zooming’ into them. We wanted
to register which bottles the Telescope was pointing at to
create connections between them and the digital content.
Figure 3 illustrates the view that visitors would see when
using the Telescope. In the centre, we can see video content
emerging from a bottle that is behind it. This bottle is
enclosed by the green region polygon. Just to the edge of
the display, we can see another region, indicating another
video associated with the bottle over which that region sits.
Figure 2. The Telescope and Incubator (left) in use (right)
Figure 3. View experienced looking through the Telescope
As an element of the overall installation, the Telescope
needed to fit within the artistic thematic of the piece.
Indeed, the ‘telescope’ metaphor emerged through
discussions on the various ways in which the public could
currently view the physical Bay at a distance. Real pay-per-
view telescopes are available in waterfront towns around
the Bay, and provide ways of inspecting it in more detail. In
addition, the Telescope metaphor was relevant to conveying
some sense of the dangers of viewing the Bay too closely.
The display inside the Telescope was also informed by this
metaphor, and was intended to emulate the sense of
distance experienced when using a real telescope.
Challenging some of these aesthetics, however, were more
practical considerations. For example, the Telescope needed
to be robust enough to last through the two-month
exhibition, yet not supersede the impact of the digital
content (the microscopic images) and physical target (the
Incubator). The sturdy casing of the Telescope was
therefore covered in black paint and cloth so as to reduce its
physical impact.
We were faced with the issue of registration. AR devices
often rely on a registration scheme embedded in the
environment. The significant and constant changes in
lighting would have seriously challenged an image
processing algorithm. More importantly, however, the
aesthetics of the Incubator, and indeed the surrounding
space, meant that we could not make concessions over the
inclusion of fiducial markers (for example, placed on the
bottles to be examined). We therefore had to depend only
on sensor data obtained from the compass.
We also had to calibrate the device for the exhibition space
in ways that impacted both physical and digital
components. For example, increasing the Telescope’s
distance from the Incubator obtained a more realistic
telescope effect but led to a poorer display resolution.
We had to decide whether to explicitly display media-
tagged regions. During initial testing, the update speed of the electronic compass readings made it hard to find tagged regions simply by exploring. Furthermore, compass readings were
often subject to unexplained magnetic disturbances, despite
internal smoothing and thresholding in the software. Weighing this against the impact on the aesthetic of the view, we agreed to plot tagged regions as polygons, given the brief time in which visitors would be able to learn how to use the Telescope.
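As a rough illustration of such smoothing and thresholding (the exact scheme used in the Telescope software is not detailed here), the sketch below shows one common approach: an exponential moving average over the heading combined with a jitter threshold. All names and constants are illustrative assumptions rather than the deployed code.

```java
// Illustrative sketch only: the actual filter in the Telescope software is not
// documented. Constants are assumed values.
public class CompassFilter {
    private double smoothed;                       // last smoothed heading, degrees
    private boolean initialised = false;
    private static final double ALPHA = 0.3;       // smoothing factor (assumed)
    private static final double JITTER_DEG = 1.5;  // ignore changes below this (assumed)

    /** Returns a smoothed heading, suppressing small jitters in the raw reading. */
    public double update(double rawHeading) {
        if (!initialised) {
            smoothed = rawHeading;
            initialised = true;
            return smoothed;
        }
        // Shortest angular difference, so a step from 359 to 1 degrees is +2, not -358.
        double delta = ((rawHeading - smoothed + 540.0) % 360.0) - 180.0;
        if (Math.abs(delta) < JITTER_DEG) {
            return smoothed;                       // threshold: drop small noise
        }
        smoothed = (smoothed + ALPHA * delta + 360.0) % 360.0; // exponential smoothing
        return smoothed;
    }
}
```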
Finally, the lighting changes that featured in the exhibition
meant that the Incubator was not illuminated at all times.
Therefore, visitors using the Telescope outside of the
‘micro’ section were unable to see the Incubator well. For
this reason a halogen lamp was placed inside the Telescope
in order to provide temporary illumination of the jars. The
light switch was intended for users studying the Incubator
when the lights were low. The background light level was
accounted for when drawing both the regions and video file
overlay; in low light, neither regions nor media were
visible, whereas when the Incubator’s internal lights were
on, the regions and media appeared (as seen in Figure 3).
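How the background light level was sensed is not specified above; one plausible approach, sketched here under that assumption, is to estimate brightness from the webcam frame itself and gate the overlay drawing on it. The class name and threshold are hypothetical.

```java
import java.awt.image.BufferedImage;

// Sketch under an assumption: brightness is estimated from the webcam frame
// and used to decide whether regions and video overlays should be drawn.
public class OverlayGate {
    private static final double LIT_THRESHOLD = 60.0; // mean luma, 0-255 (assumed)

    /** True if the Incubator appears lit enough to draw regions and video. */
    public static boolean shouldDrawOverlay(BufferedImage frame) {
        long sum = 0;
        int w = frame.getWidth(), h = frame.getHeight();
        for (int y = 0; y < h; y += 4) {              // subsample for speed
            for (int x = 0; x < w; x += 4) {
                int rgb = frame.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                sum += (r * 299 + g * 587 + b * 114) / 1000; // Rec. 601 luma
            }
        }
        long samples = (long) ((w + 3) / 4) * ((h + 3) / 4);
        return (double) sum / samples > LIT_THRESHOLD;
    }
}
```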
Telescope Hardware
The Telescope construction is shown in Figure 4. Looking
into the viewing tube (1) reveals the contents of the screen
(2), which displays a processed video feed from a webcam
located at the front of the body (3). The Telescope can be
moved using the handles (4) which rotate the entire body
section about the pivot of the tripod (5). The light switch on
the right handle triggers a halogen lamp attached next to the
webcam. A digital compass¹ (6) is attached to the underside
of the viewing tube, and detects changes in the heading and
pitch of the Telescope’s upper section. Rotation of the tube
is calculated from the roll of the compass as it is rotated by
the viewing tube. The compass heading readings allowed a
360-degree range, whereas both pitch and roll were limited
to ±40 degrees.
Figure 4. The Telescope
Telescope Software
The software combined access to the electronic compass (through the Java Communications API) with a video handling and display service built on the Java Media Framework API and the OpenPTC graphics library. The Z-axis roll of the compass
was mapped to increment or decrement the level of zoom.
Video data from the webcam was enlarged in proportion to
this level of zoom so that rotating the viewing tube showed
the video content emerging from the Incubator bottles.
¹ A Honeywell HMR3000
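A minimal sketch of this roll-to-zoom mapping follows; the gain, limits and treatment of roll as a rate control on zoom are assumptions, since the exact values used are not reported above.

```java
// Hypothetical sketch of mapping compass roll to zoom level; constants assumed.
public class ZoomControl {
    private double zoom = 1.0;                    // 1.0 = raw webcam resolution
    private static final double MIN_ZOOM = 1.0;
    private static final double MAX_ZOOM = 4.0;   // assumed upper limit
    private static final double GAIN = 0.05;      // zoom change per degree of roll (assumed)

    /** Roll of the viewing tube (within ±40°) increments or decrements the zoom. */
    public double update(double rollDegrees) {
        zoom += GAIN * rollDegrees;               // e.g. clockwise roll zooms in
        zoom = Math.max(MIN_ZOOM, Math.min(MAX_ZOOM, zoom));
        return zoom;
    }
}
```

Under this mapping, holding the tube rotated steadily zooms in or out, and returning it level holds the current zoom.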
Our software was devised so that multiple arbitrarily shaped
regions could be defined. Each region could be associated
with a video file, which would then be played when the
Telescope was pointed within the bounds of a region. The
success of the Telescope’s augmentation relied on keeping two distinct spaces in correspondence –
‘compass space’ and ‘video space.’ Compass space, shown
in Figure 5, is the 2D, cylindrical compass view of the
environment. Video space, on the other hand, is the 2D
image of the 3D world received from the webcam that
moved in accordance with the motions of the Telescope
(Figure 5 shows real world objects as cylinders). Once the
video feed from the webcam was captured, the software
superimposed the compass space view – i.e., of regions and
video files – on top of the video space view to produce what
is seen in Figure 3.
Figure 5. Mapping flat compass space to 3D video space
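The sketch below illustrates this scheme: arbitrarily shaped polygons defined in compass space, each associated with a video file, looked up every frame before the compass space view is superimposed on the webcam image. The coordinate scaling and all names are illustrative assumptions.

```java
import java.awt.Polygon;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of 'compass space' regions tied to video files.
public class RegionMap {
    static class Region {
        final Polygon shape;      // polygon in compass space (heading x10, pitch x10)
        final String videoFile;   // media played when the Telescope points inside
        Region(Polygon shape, String videoFile) {
            this.shape = shape;
            this.videoFile = videoFile;
        }
    }

    private final List<Region> regions = new ArrayList<>();

    public void add(Polygon shape, String videoFile) {
        regions.add(new Region(shape, videoFile));
    }

    /** Returns the video file tagged at this heading/pitch, or null if none. */
    public String lookup(double heading, double pitch) {
        int x = (int) Math.round(heading * 10); // tenth-of-degree resolution (assumed)
        int y = (int) Math.round(pitch * 10);
        for (Region r : regions) {
            if (r.shape.contains(x, y)) {
                return r.videoFile;
            }
        }
        return null;
    }
}
```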
The size of the video image was tied to the size of the
region, which in turn was linked directly to the current level
of zoom. The size of both the video file and the webcam
image grew in proportion to the amount of zoom applied.
Zoom varied from the standard resolution of the camera to
an enlarged portion of an overlaid video. In this way, the
video image could be increased in resolution as the webcam
feed was gradually occluded, giving the impression that the
media emerged from inside the bottle.
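As a worked sketch of this proportionality, the hypothetical helper below computes both sizes from the current zoom: the webcam crop shrinks (so the feed appears magnified when scaled up to the display) while the overlaid video grows, letting the video progressively occlude the feed.

```java
// Hypothetical helper showing the proportional scaling; dimensions are assumed.
public class ZoomScaling {
    /** Width of the webcam crop shown on screen: shrinks as zoom increases,
        so the cropped region is blown up to fill the display. */
    static int cropWidth(int camWidth, double zoom) {
        return (int) (camWidth / zoom);
    }

    /** On-screen width of the overlaid video, growing in proportion to zoom. */
    static int videoWidth(int regionBaseWidth, double zoom) {
        return (int) (regionBaseWidth * zoom);
    }

    public static void main(String[] args) {
        // At zoom 1 the video is a small patch over the bottle; at zoom 4 it
        // dominates the display, giving the 'emerging from the bottle' effect.
        System.out.println(cropWidth(320, 1.0) + " " + videoWidth(40, 1.0)); // 320 40
        System.out.println(cropWidth(320, 4.0) + " " + videoWidth(40, 4.0)); // 80 160
    }
}
```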
ANALYSIS
The reaction to the exhibition was positive. Comments
recorded in the visitors’ book continually made reference to
the beauty and audiovisual impact of the “shifting light
images” and sound effects. One visitor echoed many comments in stating “I’ll see the bay in a different
way.” Our interest, however, was directed particularly
towards the details of the Telescope in use, and the
interaction taking place around it.
Over the course of the two-month exhibition, we collected
data at various intervals to study the Telescope in use. Our
analytic data corpus consisted of many hours of video data
shot from two positions in the exhibition space (marked in
Figure 1), and log files of the electronic compass sensor
readings taken at corresponding times. Video cameras were
placed to give an overview of the exhibition space and a
close-up of the Telescope. The camera recording the
Telescope obtained audio from a plate microphone attached
to the front of the device, allowing conversations to be
heard above the ambient music and sounds of the
exhibition.
Alongside the video data, we developed a tool to
reconstruct the Telescope’s movement from the sensor logs
and provide a view of what visitors would have seen
(simulated view shown in Figure 6). The reconstruction of
this view was a necessary feature of our analysis, as there
were a significant number of cases in which visitors reacted
to or commented on what could be seen on the display
inside the Telescope. Due to time constraints, we were unable to link the tool with existing video players, and so a reconstructed 3D graphical simulation of the Telescope and its movement (3D model shown in Figure 6) was implemented, allowing us to manually synchronise the reconstructed view with the video data by
visually comparing our video recording of the Telescope in
use and the motions of the 3D simulation. The bottom
window in Figure 6 shows one camera view, but note that
typically we viewed the recordings from both cameras
concurrently. Video segments needed to be repeatedly
viewed in tandem with the simulated view in order to better
understand the often subtle interactions we found occurring.
We were able to perform such repeated viewings by
skipping to certain points in the log data using the controls
shown in the centre of Figure 6.
Figure 6. The analysis tool
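A small sketch of such log-driven replay follows; the comma-separated record format (timestamp, heading, pitch, roll) is an assumption, as the actual log format is not specified above.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of loading timestamped sensor logs and seeking within them.
public class LogReplay {
    static class Sample {
        final long timeMs; final double heading, pitch, roll;
        Sample(long t, double h, double p, double r) {
            timeMs = t; heading = h; pitch = p; roll = r;
        }
    }

    private final List<Sample> samples = new ArrayList<>();

    /** Assumed format: one "timestampMs,heading,pitch,roll" record per line. */
    public void load(String path) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] f = line.split(",");
                samples.add(new Sample(Long.parseLong(f[0].trim()),
                        Double.parseDouble(f[1].trim()),
                        Double.parseDouble(f[2].trim()),
                        Double.parseDouble(f[3].trim())));
            }
        }
    }

    /** Binary search for the first sample at or after the requested time,
        supporting the 'skip to a point in the log' controls. Assumes load()
        has read at least one sample. */
    public Sample seek(long timeMs) {
        int lo = 0, hi = samples.size() - 1;
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (samples.get(mid).timeMs < timeMs) lo = mid + 1; else hi = mid;
        }
        return samples.get(lo);
    }
}
```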
In the following three sections we describe various facets of
the Telescope’s use by visitors. Examples shown are typical
of cases we have studied throughout the data. Given our
interest in making the Telescope fit appropriately within the
public exhibition, we were particularly interested in how
visitors engaged both with the Telescope and one another
around it. Our first examples illustrate how visitors
swapped over and collaborated around the device.
Lighting and Turn
We found that the use of the light provided on the
Telescope had an unexpected impact on co-visitors in the
space. In our first example an exhibition docent, Tom, is
using the Telescope. The exhibition space is relatively dark
at this point in the performance, and the Incubator in
particular is not illuminated. Tom presses the button to turn
the Telescope light on.
Figure 7.² The Telescope’s light illuminates the Incubator and
many visitors move or look towards it; Jenny (circled) moves
towards the Telescope
Figure 8. Movement of co-visitors
As Tom switches the Telescope light on, visitors situated
near the Incubator turn and move towards the newly lit
glass bottles (Figures 7 and 8). Such attention or distraction
caused by lighting effects is a well-known phenomenon
[27,17], and here highlights the ability of the Telescope to
provoke interest in the Incubator. It also shows how the use
of the Telescope can impact on co-visitors not using the
Telescope or otherwise locally engaged with a Telescope
user.
The Telescope’s light in this example not only affects the
behaviour of both the current user and the bystanders, but
also causes a visitor (Jenny) to move towards the
Telescope. As with others in the group, Jenny’s gaze is
intermittently cast on the Incubator, but her movement is
directed towards the Telescope. She stops a short distance
away from the device. Tom finishes using the Telescope,
lets go of the light switch and, at this very instant, Jenny
turns her head from the Incubator towards Tom. Tom
disengages from the Telescope, and arcs around Jenny,
creating space so she can use it.
The light cast upon the Incubator brings about Jenny’s
initial movement towards the Telescope. However, when
she adopts a position of proximity, it is the light turning off
which brings her gaze away from the Incubator and towards
the device. Tom’s use of the Telescope light on the
Incubator may at first appear to have a ‘moth effect’ on the
gathering visitors, but Jenny’s movements highlight a more
subtle point. There is a relationship between using the
device and the effect that has on the trajectory and
transitions of other visitors’ engagement.
² This and some subsequent images have been artificially
enhanced to improve visibility for the reader. The real
exhibition space is substantially darker than it appears here.
In our next example, the Incubator lights have just come on
and the same exhibition docent, Tom, is explaining the contents of the glass bottles to the group. A woman, Mary, leaves a crowd of visitors across the room and walks over to the Telescope. She grabs the handles and looks through the
eyepiece. She spends approximately six seconds in this
position as another woman, Pauline, approaches. Pauline
arrives at the Telescope. She makes a comment
(undecipherable) to Mary after which they share a laugh. A
few seconds later, they swap turns and Pauline peers into
the eyepiece. She spends some time examining the image
inside (about eight seconds), and then pulls away from the
Telescope, making a distasteful expression and saying
“wriggling” (Figure 9, left). She then moves away to
examine the Incubator. Her view prior to pulling away can
be seen in Figure 9 (right). A few seconds after Pauline’s
disengagement, a man, Freddy, begins to use the Telescope. Freddy approaches the device directly, following Pauline, but does not talk to her.
Figure 9. Pauline’s reaction (left) to what she sees (right)
We must attend to the relatively rapid ‘handover’ speed
with which several visitors are able to use the Telescope
and move on. Mary, Pauline and Freddy all move from
positions as bystanders to being engaged with the device
and accessing augmented content directly. Symmetrically,
Pauline, after her encounter, swiftly moves from this direct
engagement with the device to engaging with the target, the
Incubator. Amongst the handovers, Pauline and Mary
briefly take part in a humorous exchange even though Pauline is not engaged with the device, and has (at
this point in time) no direct experience with the
augmentation. Beyond these person-to-person encounters, however, visitors are able to share experiences of the Telescope itself by collaboratively varying their engagement with the device.
Later on, at the end of the same show, Pauline, David and
Eric are standing nearby. The house lights have come on
and they appear to be discussing the structure of the room.
Bob begins using the Telescope by placing his left eye to
the viewing tube and moving it around (Figure 10, left). He
then turns the Telescope light on using the button on the
handlebars. Previous examples have shown how bystanding
groups are attracted by the Telescope’s light cast on the
Incubator. In this example, however, Eric and Pauline turn
their gaze and subsequently bodies directly towards the
Telescope (Figure 10, centre). After a few seconds Eric
moves towards the Telescope, followed by Pauline. Bob
moves up from the Telescope eyepiece, his right hand
releasing the light switch (Figure 10, right) and then the
handlebar. As he pulls up, Bob looks in the approaching
visitors’ direction and then moves away, creating space as
Eric and Pauline move in.
Figure 10. Bob uses the Telescope with Eric, Pauline and
David standing nearby (left). He presses the light and Eric
orients towards the Telescope via the Incubator (centre). Bob
disengages from the Telescope as David, Eric and Pauline
approach (right)
Eric and Pauline take over, with Eric grabbing the
handlebars (Figure 11). Pauline here uses the same word,
“wriggling,” to describe the function of the Telescope to her
co-visitor. Bob also overhears, as shown in the following
dialogue:
P: There’s the ((mumble)) thing there
((Pauline points to the Telescope))
E: Mmm (.) does that help?
P: Hhh if you wanna see something
wriggling down there ((Bob laughs))
P: ((Pauline laughs and looks up at Bob))
E: ((looking through Telescope))
How weird (.) huh
Figure 11. Eric (right) looks through the Telescope whilst
Pauline stands next to him
The interaction between Bob, Pauline and Eric forms part
of handing over the Telescope between Bob and Eric, a
process where the physical features of the Telescope enable
the swift traversal of visitors between being part of the local
milieu and becoming engaged users of the device. Earlier,
we saw how a rapid succession of visitors came to view
augmented content in a matter of seconds. By ‘physical
features’ we mean the simple access to the augmentation
afforded by the eyepiece style that, when configured by a
docent or previous visitor, provides a relatively stable
experience that is less sensitive to ‘handover’ instabilities,
namely, jumps of alignment between users.
In cases such as these we see how both lighting and body movement inform and affect visitors’ engagement
with the device. The impact of the Telescope’s light appears
to be contingent upon the ambient lighting of the
surroundings and here we note how the aesthetic impact of
the Telescope changes with the lighting aesthetic of the
space. Bob’s movement away from the Telescope is
occasioned by Eric and Pauline approaching, which in turn
is occasioned by Bob’s use of the Telescope light. Here,
however, we are interested in how Pauline and Eric share
content across different levels of engagement with the
Telescope. They have both used the Telescope before, and
are able to interact during, and as part of Eric’s first
encounter with the augmented videos. Pauline draws on her
previous characterisation, “wriggling,” to frame Eric’s
direct engagement with the device. Similarly, Bob and
Pauline’s interchange of laughing in response to Pauline’s
description weaves a fabric of sense for Eric’s use of the
Telescope. Thus, we see co-visitors providing a context in
which direct users directly engage with the augmentation.
Interestingly, the reconstruction from our sensor log files
indicates that there is no augmented video being played out during Eric’s characterisation “how weird.” He can
only see a direct (pass-through) view of the Incubator and
never manages to locate any green regions, in contrast to
Pauline’s previous experience of seeing some video content
overlaid on this pass-through view (Figure 9, right).
Nevertheless, he orients to that content as “weird” to
Pauline before disengaging from the device. The view in
the Telescope is experienced both in the context of what
can be seen and in what ways co-visitors are collaborating
with the user. The exclusive access the Telescope provides
for an ‘augmented’ user can therefore create problematic
discrepancies in views between users and co-visitors.
Sharing and Stability
Our next example takes place when the Incubator lights are
on. Tom is talking to two women, Sally and Fay, about the
Telescope. He approaches the device with them, and begins
to adjust the viewpoint. Tom lines up the view through the
Telescope at the edge of a video, and provides a brief
description of its operation. Just as he disengages, however,
the Telescope moves slightly, shifting the focus to outside
the video region. Tom then makes space for Sally as she
grabs the Telescope with both hands and places her left eye
to the viewing tube (Figure 12). As she grabs the device,
the view through the Telescope jumps again, moving the
focus to between two regions. After approximately three
seconds, Sally looks over the Telescope, still holding the
handlebars, and says:
S: What am I looking at? (4.0) Can’t see what I’m
looking at
Just before Sally looks up (on her first “what”), the
Telescope focus moves inside a region and a video starts to
play. Sally hands over the Telescope to Tom, who then very
briefly checks the view. When Tom checks the view, he
sees that there is a video on screen, the same video Sally
unwittingly lined up just as she asked her question.
T: Right oh there you go you’ve got something y-
you’ve on screen ((points at eyepiece)) now
you’ve act- you’ve picked something up you’ve
picked a beastie up there you’ve picked a blob
a live microbe
Tom disengages from the Telescope at “got something” and
Sally then reengages. Unfortunately, an anomalous
movement (possibly due to magnetic field jitter) shifts the
focus of the Telescope again to the other side of the region
such that the focus is now too far to the left of the region
and again no video is playing.
S: Have I?
T: Yes can you see what it is?
S: Nooooh!
Tom laughs and moves in on Sally’s “Nooooh!” as Sally
backs away from the Telescope. He grasps the eyepiece and
places his right eye on it. The view he sees is the same as
Sally’s when she says “Have I?”
Figure 12. Sally requests help (top left, circled), Tom adjusts
(top right, circled), Sally still has problems (bottom left), Tom
adjusts again (bottom centre), Sally sees the augmentation
(bottom right)
T: Oh it’s gone now ummm
S: What that blue there was a
blue ((Tom pushes the Telescope’s view to the
right in order to get the focus inside the
region))
T: Try and line it up with the green squares ahh
there you go yeah yeah those are living ((Tom
hands over the Telescope to Sally)) living
microbes in the inside the jars
S: Oooh my lord
When Sally first uses the Telescope, Tom frames her use of it with a description of its operation. Sally’s grab of the
Telescope’s handlebars, however, disrupts the viewpoint
that Tom has configured. Due to the ‘single-user’ properties
of the Telescope (i.e. a private view), Tom is unable to
monitor the display during the handover. Thereafter follows
a further problem – albeit one not caused by accidental
movement of the Telescope’s body, but an anomalous jump
in viewpoint – which also follows a similar pattern: Sally
says “Have I?” and “Nooooh!” after which Tom moves in
to perform another correction. Tom, building on his
previous description, now provides a more detailed account
of how to locate the content, “Try and line it up with the
green squares.”
There are therefore three attempts at configuring the
viewpoint for a handover before any success. The interface
does not allow a shared perspective on the content and so
the docent is unable to reconstruct or correlate a user's
difficulties in using the interface without drawing from
second-hand information, namely accounts of the problems
occurring, or by taking over himself. For this reason, the
two causes of breakdown in this sequence, the accidental
bumping of the Telescope and the anomalous jump of
viewpoint, indicate that attempts at repairing the problem
may be required repeatedly until the set-up view just
happens to survive during the handover. As a result, the
docent is unable to craft the experience for the visitor.
The key issue here, then, is how those using the Telescope
and those standing alongside identify and repair disparities
in content during handovers. The problematic handovers
between Tom and Sally show how the Telescope’s design
limits co-visitors’ ability to see what others are seeing.
That the Telescope is a ‘private’ device means repair of
these discrepancies is problematic. Nonetheless, the amount
of time taken to perform several iterations of the configure-
handover-view cycle is a matter of seconds. Repair is
eventually possible, enabled by the rapidity with which
users and co-visitors can move between looking through the
eyepiece, holding the handlebars but talking to co-visitors,
and handing over to become a co-visitor themselves.
Viewing and Vicinity
In this example, Freddy approaches the Telescope for the
first time. The Incubator lights are on. After getting into a
comfortable position with the handlebars, he begins to
move the Telescope around. He zooms in to watch a video
emerging from a bottle. A few seconds later, Pauline walks
directly between the Telescope and the Incubator. Freddy
stops and briefly glances up and over the Telescope at
Pauline (Figure 13, left). Freddy’s movement is noticed by
Pauline who looks to her left, and then crouches down
(Figure 13, right). In response to this ducking movement,
Freddy jerks his head back to the Telescope slightly. He
then moves back up again and grins at Pauline while she
laughs. Finally, Freddy moves his head back down to look
through the Telescope, still smiling.
Figure 13. Freddy looks up (left) from his view through the
Telescope and Pauline ducks (right, circled)
This sequence illustrates how Freddy and Pauline
seamlessly traverse and collaborate across different levels
of engagement, both with the Telescope and with one
another. Freddy initially engages with the Telescope. He
notices a disruption of his view, and pulls up from the
Telescope in order to work out what is happening. He
maintains physical engagement with the device by holding
on to the handlebars, and checks the real world view against
what he has just experienced in the augmented view.
Pauline indicates an understanding of his movement by
belatedly making an attempt to avoid blocking his
augmented view, and Freddy is able to both recognise this
fact, and share a moment with Pauline that shows his
recognition. There are a series of resources that are drawn
upon to retain a view of the Incubator: the ability to
‘disaugment’ yet maintain engagement on Freddy’s part;
the ability to recognise and orient to such an activity on
Pauline’s part; and their ability to acknowledge and
complete such a process quickly (in this case,
approximately two seconds between Freddy moving his
view away from the Telescope and returning to it). In this
example, the Telescope’s physical form allowed Freddy to
assess a discrepancy and subsequently resolve it.
The Incubator is an interesting artefact in its own right
without using the Telescope to view it, so we encounter
frequent obstructions of the Telescope view by co-visitors
passing between the two. Pauline has used the Telescope
before and it is possible that she realises to some extent the
effect her movement may have on Freddy’s view.
In our next example, Tom has set up the Telescope’s
orientation for Jenny to see a video of diatoms in the centre
of the view. He begins to speak to Jenny as she approaches
the Telescope.
T: Press the light
J: Yes
T: And twiddle round until umm line it up with the
(.) line the image up in here with some of the
little green squares ((Jenny moves to place her
eye to the Telescope))
T: You can see some of the microbes inside ohh
look there's a big microbe there
As Tom starts to say “the microbes inside,” Jenny presses
the light switch. Alice happens to be walking in front of the
Incubator at this moment, and the light illuminates her
movement across the Telescope’s view. Tom points at
Alice and says “ohh look,” just as Jenny presses the light.
As she is illuminated, Alice glances towards the Telescope
and quickly moves past. However, Jenny does not
disengage from the Telescope despite Tom’s statement to
“look,” and she continues to view through the eyepiece.
Tom then goes on to account for Jenny’s potentially
occluded view of the Incubator. He has no direct access to
what Jenny is seeing on the Telescope display, but he
describes Alice’s movement in terms of Jenny’s potential
experience by saying “there’s a big microbe there”
(referring to Alice).
In the first section, we saw how Pauline provided an
account for Eric to show what he might see and similarly in
the second section, we saw how Tom’s accounts to Sally
assisted the identification of the discrepancies in their
views. In this case we see how Tom is able to share an account of how the Incubator may (dis)appear even though he is not using the Telescope. Here, Tom draws on his view as a co-visitor and his experience of having used the Telescope in order to provide some sense to
Jenny’s augmented view. Whether by a user looking up or
having a co-visitor explicate the situation, a ‘disaugmented’
viewpoint is important to understanding how the Incubator
appears.
DESIGN IMPLICATIONS
We now generalise from our observations, reflecting on the
specific strengths and weaknesses of the Telescope design
and drawing out broader issues for the design of public AR
experiences.
Telescope Design Features
The form of the Telescope fundamentally shapes the way in
which social interaction plays out in One Rock.
Despite our use of black paint and fabric, the sheer physical
size and visibility of the Telescope attracts attention. We
have derived unexpected benefits from this attention, most
prominently that its size requires large gestural usage, so
that bystanders and co-visitors are aware when and how it is
being used. Furthermore, the Telescope light amplifies its
visibility and also the visibility of the target, making a
connection between them; we see that some visitors’
attention is drawn to the target and then back to the display,
attracting them to become involved. The Telescope’s
projection of presence into the environment caused by this
light is, however, contingent upon the lighting aesthetics of
the environment; changes in lighting featured centrally in
the show and thus the Telescope’s light was most
meaningful in that aesthetic context.
Due to the Telescope’s handles and mounting being
separate from the viewfinder, it is easy to make room around
the device while still holding on to it, as a way of sharing,
handing over and inviting others to use it. It also facilitates
rapid handover to others and rapid disengagement/
reengagement by an individual, which is useful for
negotiating social interactions such as repairing breakdowns
in communication due to instability or interference.
The peephole-style display is especially interesting. The
privacy of the view causes problems for co-visiting,
especially when it comes to lining up and maintaining
views for others during handover and when sharing and
discussing content within a group. In contrast, however, the
physical form of the viewing tube permits swift handovers
since the action involved in engaging and disengaging with
the augmented view is simple and takes little time. The
design therefore does at times permit quick, seamless and
even humorous negotiations between the AR user and
others in the exhibition space. Nevertheless, there are some
important benefits to such a display, even in a social
situation. A concealed display can certainly engender both
immediate surprise and ongoing fascination with digital
content. Additionally, it is clear when someone is looking
through a peephole display. This can enable others to infer
both what that person is doing with the display and in what
directions they might be doing it.
Separation of Target and Display
AR interfaces are characteristically distinguished from other forms of interface by their combination of a physical artefact (a target) and a computer display (a device), which are very often separate from one another. Both the
target and the device in One Rock are legitimate objects of
interest for the visitor. While we expect visitors to use the
device, we should anticipate that others will attend directly
to the target in its own right as a painting, sculpture or a
part of a building, or similar.
This separation of device from target has an important
consequence for design in that we typically need to
consider a shared environment in which some participants
have an augmented view while others have an un-augmented, or ‘plain,’ view. This will be especially true in
public environments such as exhibitions, where there are
many visitors flowing through the experience and it is
infeasible to ensure they are all equipped with a display.
Target and device are also often separated in space; that is,
the device is some way from the target and has to be
pointed at it in order to view the target. This raises the
possibility of interference, as we saw when visitors
physically moved into the space between the Telescope and
the Incubator. In our case, this interference is distracting,
but in other cases, especially if tracking is used to identify
targets using video cameras on the device, it might also
affect the operation of marker tracking. Either way,
interference requires resolution, typically involving
collaboration between the people involved. In our case, this
involved the Telescope user temporarily and rapidly
disengaging, an action that was then noticed by the passing
visitor, enabling the pair to quickly and fluidly resolve the
problem without the need for explicit discussion.
In cases where the sensing technology is separate from both
the display and the target, for example where we are using
wall- or ceiling-mounted video cameras to track targets,
there are further possibilities. Visitors may cause visual
interference by passing between the device and target or
may interfere with the sensing system by passing between
the external sensor and the target and/or device depending
on which is being tracked. In situations in which multiple
displays, targets, visitors and even sensors can change
locations, designers need to be aware that the possibilities
of interference become far more complex. Fortunately in
One Rock only the visitors move.
Levels of Engagement and Transitions
Previous studies of interactive exhibits in museums and
galleries have introduced the idea of varying levels of
engagement. The subtle interplay between various
movements made by bystanders, co-visitors and users
around the Telescope might be compared to the
observations reported by vom Lehn et al. [19] who describe
the coordinated conduct of groups and strangers around
museum exhibits. Our observations confirm these findings, in that we see substantial coordination of
conduct between both strangers and friends, groups and
individuals, roles and responsibilities. However, we also
suggest some refinements. We propose that the use of the
Telescope in One Rock, especially the separation of plain
from augmented views and the use of a peephole-style
display, results in several distinct levels of engagement:
Augmented User. Visitors who are looking through the
peephole.
Disaugmented User. Visitors who are controlling (holding) the Telescope but not looking through it.
Co-Visitor. Visitors who are part of the local group around the Telescope.
Observer. Other visitors who are grouped around (or in
the way of) the Incubator.
Bystander. Those currently not engaged with the
device or target.
As we have seen, collaboration across and transitions
between these levels are an important part of the
experience. We have seen collaboration across augmented
and disaugmented perspectives (such as the humorous
exchange between Tom and Jenny), and, specifically, how a
‘disaugmented’ perspective might inform an augmented
one. We note that such collaboration might be problematic
in more permanently worn displays, such as HMDs (Head
Mounted Displays). We have also observed a variety of
transitions, such as Jenny moving from bystander to co-
visitor to augmented user, Sally and Tom swapping
between co-visitor and user, and Freddy moving from
augmented user to disaugmented user. These transitions
relied on a variety of collaborative activities, such as:
drawing attention to the target and/or device;
communication (verbal, gestural) between engaged visitors
and those nearby; engaging/disengaging from the display;
inviting and making room for others; and, as we shall now
discuss in closing, handing over the display to others.
Handovers are particularly important moments, with the
current visitor going to considerable lengths to set up the
experience for the next visitor, both in terms of verbally
framing their experience but also in carefully positioning
the display to provide them with an appropriate view when
they engage. The need to position the display for others is
clearly important, but is also difficult, and handovers are
dangerous moments for social interaction. We have seen
that a combination of physical instability, sensor instability
and an inability to see the other’s view when disengaged
from the display can cause problems here. Fortunately, in
the case of the Telescope these issues can often be resolved
by quickly disengaging and reengaging with the device.
In contrast, aligning an HMD’s viewpoint for a handover to
a subsequent user is almost impossible, whereas due to the
Telescope’s construction, handovers become less
problematic when crafting an experience for others.
Tracked opera-glass or handheld displays will be faster to hand over, but by default will provide a different perspective, making it difficult to set up a particular view for a co-visitor. Such transitions will therefore be more or less rapid
and seamless depending on the design and type of AR
display used, which will in turn fundamentally shape the
ways users engage with, and collaborate around, the
augmentation.
ACKNOWLEDGEMENTS
The authors would like to thank Welfare State International,
all One Rock visitors who took part in the data collection,
and the UK’s EPSRC for funding through the Equator IRC
(GR/N15986/01).
REFERENCES
1. Ahlers, K. H., Kramer, A., Breen, D. E., Chevalier, P.,
Crampton, C., Rose, E., Tuceryan, M., Whitaker, R. T.
and Greer, D. Distributed augmented reality for
collaborative design applications. In Computer
Graphics Forum, 14(3), pp. 3-14, 1995.
2. Azuma, R. T. A survey of augmented reality. In
Presence: Teleoperators and Virtual Environments, 6,
pp. 355-385, August 1997.
3. Azuma, R., et al. Recent advances in augmented
reality. In Computers and Graphics, 21(6), pp. 34-37, November 2001.
4. Beier, D., Billert, R., Brüderlin, B., Kleinjohann, B.
and Stichling, D. Marker-less vision based tracking for
mobile augmented reality. In Proc. of Second
International Symposium on Mixed and Augmented
Reality (ISMAR’03), October 2003.
5. Bérard, F. The magic table: Computer-vision based
augmentation of a whiteboard for creative meetings. In
Proc. of PROCAM Workshop at the IEEE International
Conference in Computer Vision, 2003.
6. Billinghurst, M., and Kato, H. Collaborative
augmented reality, In CACM, 45(7), pp. 64-70, 2002.
7. Billinghurst, M., Kato, H. and Poupyrev, I. The
MagicBook – Moving Seamlessly between Reality and
Virtuality. In IEEE Computer Graphics and
Applications, 21 (3), pp. 6-8, IEEE, May 2001.
8. Brown, B. and Chalmers, M., Tourism and mobile
technology. In Proc. 8th European CSCW Conference,
Kluwer, 2003.
9. Christian, A. D. and Avery, B. L., Speak out and annoy
someone: experience with intelligent kiosks. In Proc.
CHI 2000, pp. 313-320, 2000, ACM Press.
10. Drascic, D. and Milgram, P. Perceptual Issues in
Augmented Reality. In Proc. of SPIE: Stereoscopic
Displays and Applications VII & Virtual Reality
Systems III, 2653, pp. 123-134, 1996.
11. Ferris, K., Bannon, L., Ciolfi, L., Gallagher, P., Hall, T.
and Lennon, M. Shaping Experiences in the Hunt
Museum. To appear in DIS 2004.
12. Foxlin, E. and Naimark, L. VIS-Tracker: A wearable
vision-inertial self-tracker. In Proc. of IEEE VR 2003,
March 2003, IEEE.
13. Fraser, M., et al. Re-tracing the Past: Mixing Realities
in Museum Settings. In Proc. of ACM ACE 2004,
ACM Press.
14. Fraser, M., et al. Assembling History: Achieving
Coherent Experiences with Diverse Technologies. In
Proc. of ECSCW 2003, pp.179-198, Kluwer.
15. Hindmarsh, J., Heath, C., vom Lehn, D. and Cleverly,
J. Creating assemblies: Aboard the ghost ship. In Proc.
of CSCW'02, pp. 156-165, ACM, 2002.
16. Hoff, B. and Azuma, R. Autocalibration of an
electronic compass in an outdoor augmented reality
system. In Proc. of IEEE/ACM International
Symposium on Augmented Reality, pp. 159-164, 2000.
17. Hopkinson, R. G. and Longmore, J. Attention and
distraction in the lighting of work-places. In
Ergonomics, 2, pp. 321-334, 1959.
18. Koleva, B., Benford, S., Ng, K. H. and Rodden, T. A
Framework for Tangible User Interfaces. In Proc. of
Physical Interaction (PI03) – Workshop on Real World
User Interfaces at Mobile HCI 2003, pp. 46-50,
September 2003.
19. vom Lehn, D., Heath, C. and Hindmarsh, J. Exhibiting
interaction: Conduct and collaboration in museums and
galleries. In Symbolic Interaction, 24(2), pp. 189-216, 2001.
20. Milgram, P. and Kishino, F. A Taxonomy of Mixed
Reality Visual Displays. In IEICE Transactions on
Information and Systems (Special Issue on Networked
Reality), E77-D(12), pp. 1321-1329, 1994.
21. Oppenheimer, P., Billinghurst, M., May, R. 2001,
Sichuan Virtual Dig,
http://www.hitl.washington.edu/research/sichuan
22. Rogers, Y., Scaife, M., Gabrielli, S., Smith, H. and
Harris, E. A conceptual framework for mixed reality
environments: Designing novel learning activities for
young children. In Presence: Teleoperators and Virtual
Environments, 11(6), pp. 677-686, 2002.
23. Schmalstieg, D. and Hesina, G. Distributed
applications for collaborative augmented reality. In
Proc. of IEEE Conference on Virtual Reality (VR
2002), pp. 59-66, IEEE Press, March 2002.
24. Schnädelbach, H., et al. The Augurscope: A mixed
reality interface for outdoors. In Proc. of ACM
Conference on Human Factors in Computing Systems
(CHI’02), pp. 9-16, ACM Press, 2002.
25. Schwald, B., Seibert, H. and Weller, T. A flexible
tracking concept applied to medical scenarios using an
AR window. In Proc. of International Symposium on
Mixed and Augmented Reality (ISMAR), pp. 261-263,
October 2002.
26. Sparacino, F. The Museum Wearable:
real-time sensor-driven understanding of visitors'
interests for personalized visually-augmented museum
experiences. In Proc. of Museums and the Web
(MW2002), April 2002.
27. Taylor, L. H. and Sucov, E. W. The movement of
people toward lights. In Journal of the Illuminating
Engineering Society, 3, pp. 237-241, April 1974.
28. Thomas, B., et al. ARQuake: An outdoor/indoor
augmented reality first person application. In 4th
International Symposium on Wearable Computers
(ISWC), pp. 139-146, 2000.
... However, little is known about factors that infuence the likelihood of a honeypot efect when other kinds of technologies are used in public space. Exploring this issue is important, particularly as we are confronted with new and emerging technologies for use in public space, such as head-mounted displays [33] and augmented reality (AR) [49]. These technologies raise questions about whether traditional approaches to encouraging human involvement with public installations-through the honeypot efect-are applicable to systems with new interactional opportunities found in AR. ...
... Previous work provides clues to suggest that AR technologies may be able to stimulate a honeypot efect. Reeves et al. [49] described the Telescope, an interactive AR experience designed to stimulate engagement in a heritage setting. Their work drew attention to the way in which users were drawn in by observing the interactions of people who were using the Telescope, though they did not characterise this as a honeypot efect. ...
... The Visibility of Game-related Infrastructure. Mobile AR applications are characterised by the presence of a target that triggers AR content and a display upon which this content appears [49]. In SLH, the targets were the physical markers distributed around the six sites. ...
Conference Paper
In HCI, the honeypot effect describes a form of audience en- gagement in which a person’s interaction with a technology stimulates passers-by to observe, approach and engage in an interaction themselves. In this paper we explore the potential for honeypot effects to arise in the use of mobile augmented reality (AR) applications in urban spaces. We present an ob- servational study of Santa’s Lil Helper, a mobile AR game that created a Christmas-themed treasure hunt in a metropolitan area. Our study supports a consideration of three factors that may impede the honeypot effect: the presence of people in relation to the game and its interactive components; the visi- bility of gameplay in urban space; and the extent to which the game permits a shared experience. We consider how these factors can inform the design of future AR experiences that are capable of stimulating honeypot effects in public space.
... Although studies have noted the success of AR applications in enhancing user experience [18], engagement [50] and learning [23,24,45], these studies have yet to question whether engagement with the AR technology comes at the cost of ignoring the user's physical environment. With the increasing availability of AR applications in the market, concerns related to user safety and situational awareness become critical. ...
... For example, Hornecker's research at the Natural History Museum in Berlin has noted how two distinct technologies, the jurascope (a telescope-like device) and a large screen, had impacted on the visitors' engagement with the exhibition [14]. Further research has explored the effects that overlaying digital content with physical displays has had on visitor engagement; such research includes the augmented reality experience at the "One Rock" exhibition [19] and the "Augurscope" [20]. ...
Article
Within cultural heritage, curators, exhibition designers and other professionals are increasingly involved in the design of exhibits that make use of interactive digital technologies to engage visitors in novel ways. While a body of work on the design and evaluation of interactive exhibitions exists in HCI and Interaction Design, little research has been conducted thus far on understanding how cultural heritage professionals engage in the design of interactive exhibitions in terms of their attitudes, process, expectations and understandings of technology. In this paper, we present the results from an interview study involving cultural heritage professionals and aimed at understanding their involvement in designing interactive exhibitions. Our findings could provide the HCI community with a better understanding of the strategies and aspirations of domain professionals regarding interactive exhibitions, and to identify new ways to engage with them - particularly as these professionals' knowledge and understanding of interactive digital technologies becomes more advanced.
Preprint
Full-text available
Laneways form substantial networks of underutilised spaces in cities—stigmatised as dangerous with a propensity for criminal behaviour. This study recommends Augmented Reality (AR) as an activation instrument for Prince Lane, Perth, Western Australia, a case study challenging prohibitive planning policies that preclude physical construction. Research methods include literature reviews, site analysis, mapping, and an iterative design processes. Concept ideation do not draw to a conclusive prototype, rather, are presented as a projected images within a vernacular context with localised spatial relationships and functions. This research aligns with sustainable place activation implementation planning and inclusivity policy without permanent structural intervention or permanent inhabitation—while retaining original functionality and being scalable beyond the scope and setting of this localised study.
Chapter
The overarching aim of this book has been to provide a step forward in our understanding of interaction with technology in public settings. This has been done firstly articulating in empirical and concrete instances the challenges posed, and secondly in providing detailed ways in which to address those challenges in interface design. In summarising how this has been done within this book, this chapter returns to the initial questions and key aims posed at the start of the book, and begins to reflect upon them in light of the studies and framework that have been presented. In closing the chapter discusses some practical matters regarding the use of this book, looks at recent developments of related work, and finally briefly covers some directions for where we go next in understanding interaction with technology in public settings.
Conference Paper
We present and compare two different approaches for touristic applications using smartphones. Our goal is to add value to the touristic experience in an appropriate way by provoking or improving social interaction between tourists. Because touristic actions are always intrinsically motivated, we decided to implement two game-based approaches, using smartphones in two completely different ways: in the first approach, as an input device for a large interactive display exhibited in public; in the second, to enable tourists to explore places all over the world in a long-term multiplayer game.
Conference Paper
Time Telescope is a site-specific digital art installation which allows viewers to explore an area of the city of NewcastleGateshead at various points in history. The installation formed part of a project in which a participatory interaction design process was used to engage young people with the heritage of their local area. The telescope itself and the project through which it was designed are discussed in relation to the goals of the project and its impact upon the young participants.
Conference Paper
Today New Delhi, with more than 10,000 people speaking an estimated 122 languages [1], has entered a new kind of multilingual anarchy. The conversational language has disintegrated into an array of jargons, idioms, acronyms, abbreviations, and symbols. ZOR SE BOLO presents 5 of the most commonly spoken jargons of the capital ("Thoda adjust karlo", "Bhai kuch jugaad laga", "Tu jaanta hai mera baap kaun hai?", "Tum toh bade log ho", "No Problem ji") in the form of interactive installations displayed at the Delhi International Airport (T3 Terminal), with a view to providing a linguistic tour of the capital. Two of the design concepts ("No Problem ji" and "Tu jaanta hai mera baap kaun hai?") were successfully prototyped and exhibited. The concept was realised in 3 main phases: literature research and analysis, design, and implementation. "Touch" and "Speech" were used as interaction mediums to design an experience in which people relate to individuals and their lingo through the audio they hear after triggering an installation. The body language and gestures of the mannequins in the installations become a metaphor for reaching out and establishing contact between people of different tongues, from different cultures.
Article
This preface to the Proceedings of Physicality 2006 describes some of the work at Lancaster and in the Equator project that was the initial inspiration for this workshop. We wish to understand physicality both because it is interesting in its own right and because that understanding can help us design novel digital and hybrid digital-physical artefacts. Our existing work is used to propose some initial properties and issues of physicality, including rules of 'natural' interaction, issues of 'it-ness' and continuity in time and space, the physicality and instrumentation of the human body, and issues of embodiment and spatiality.
Article
Mixed Reality (MR) visual displays, a particular subset of Virtual Reality (VR) related technologies, involve the merging of real and virtual worlds somewhere along the 'virtuality continuum' which connects completely real environments to completely virtual ones. Augmented Reality (AR), probably the best known of these, refers to all cases in which the display of an otherwise real environment is augmented by means of virtual (computer graphic) objects. The converse case on the virtuality continuum is therefore Augmented Virtuality (AV). Six classes of hybrid MR display environments are identified. However, quite different groupings are possible, and this demonstrates the need for an efficient taxonomy, or classification framework, according to which essential differences can be identified. An approximately three-dimensional taxonomy is proposed, comprising the following dimensions: extent of world knowledge, reproduction fidelity, and extent of presence metaphor.
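The three-dimensional taxonomy described above lends itself to a simple data-structure reading. The sketch below (Python) is purely illustrative: treating each dimension as a normalized 0-to-1 scale, and reducing a display's position on the virtuality continuum to a single 'fraction real' number, are our assumptions, not anything prescribed by the taxonomy itself.

    from dataclasses import dataclass

    @dataclass
    class MRDisplay:
        # Milgram and Kishino's three taxonomy dimensions, assumed here
        # to be normalized [0, 1] scales purely for illustration.
        extent_of_world_knowledge: float    # 0 = world unmodelled, 1 = fully modelled
        reproduction_fidelity: float        # 0 = crude wireframe, 1 = photorealistic
        extent_of_presence_metaphor: float  # 0 = monitor window, 1 = full immersion

    def continuum_label(real_fraction: float) -> str:
        """Coarse position on the virtuality continuum, where real_fraction
        is an (assumed) proportion of the displayed scene that is real."""
        if real_fraction >= 1.0:
            return "Real environment"
        if real_fraction > 0.5:
            return "Augmented Reality (mostly real, virtual overlays)"
        if real_fraction > 0.0:
            return "Augmented Virtuality (mostly virtual, real inserts)"
        return "Virtual environment"

    tabletop_ar = MRDisplay(extent_of_world_knowledge=0.4,
                            reproduction_fidelity=0.6,
                            extent_of_presence_metaphor=0.2)
    print(tabletop_ar)
    print(continuum_label(real_fraction=0.8))  # -> Augmented Reality ...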
Article
This article explores how individuals, both alone and together, examine exhibits in museums and galleries. Drawing on ethnomethodology and conversation analysis, it focuses on the ways in which visitors encounter and experience exhibits and how their activities are organized, at least in part, with intimate regard to the actions of others in the domain, both companions and "strangers." This study contributes to the long-standing concerns of symbolic interactionism with (mutual) attention and involvement, materiality and social relations, and interpersonal communication. The data consist of video recordings of naturally occurring action and interaction in various museums and galleries.
Conference Paper
This paper describes an activity designed for a site of special interest in which clues to its history are gathered as visitors explore the site before interacting with two displays which reveal details of key past events. We investigate a design approach in which electronically tagged paper is used both to weave the visit together and to configure the interactive displays so as to provide variable access to a common information space. An analysis of visitors' interactions throughout a week's public exhibition shows how features of our approach can support people in making connections between displays, locations, and historical events. In addition to situating our work in relation to CSCW's emerging concern for technologies and collaboration in museums and allied public settings, we examine general questions of how to design activities to establish coherence of experience across diverse interfaces. This is a timely issue as interactive technologies proliferate and take on ever more variable physical forms.
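As a concrete (and entirely hypothetical) reading of how electronically tagged paper might configure a display's view onto a common information space, consider the sketch below in Python. The tag IDs, clue names and event records are invented for illustration; the abstract does not specify the actual data model.

    # Hypothetical mapping from a paper clue's electronic tag to the clue it carries.
    TAG_TO_CLUE = {
        "tag-017": "mill wheel",
        "tag-042": "flood of 1872",
        "tag-063": "railway siding",
    }

    # A common information space: key past events indexed by related clues.
    EVENTS = [
        {"title": "The mill fire", "clues": {"mill wheel"}},
        {"title": "The great flood", "clues": {"flood of 1872", "mill wheel"}},
        {"title": "Arrival of the railway", "clues": {"railway siding"}},
    ]

    def configure_display(scanned_tags):
        """Return the subset of events a display should reveal, given the
        tagged paper clues a visitor presents at it."""
        clues = {TAG_TO_CLUE[t] for t in scanned_tags if t in TAG_TO_CLUE}
        return [e["title"] for e in EVENTS if e["clues"] & clues]

    print(configure_display(["tag-017", "tag-042"]))
    # -> ['The mill fire', 'The great flood']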
Conference Paper
The augurscope is a portable mixed reality interface for outdoors. A tripod-mounted display is wheeled to different locations and rotated and tilted to view a virtual environment that is aligned with the physical background. Video from an onboard camera is embedded into this virtual environment. Our design encompasses physical form, interaction and the combination of a GPS receiver, electronic compass, accelerometer and rotary encoder for tracking. An initial application involves the public exploring a medieval castle from the site of its modern replacement. Analysis of use reveals problems with lighting, movement and relating virtual and physical viewpoints, and shows how environmental factors and physical form affect interaction. We suggest that these problems might be accommodated by carefully constructing virtual and physical content.
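To make the tracking combination in the abstract concrete, here is a minimal, hedged sketch of how the four sensors might be fused into a single virtual viewpoint. The division of labour (GPS for position, rotary encoder for fine rotation about the tripod axis, compass for absolute heading, accelerometer for tilt) and all constants are our assumptions for illustration, not the authors' implementation.

    TICKS_PER_REVOLUTION = 1024  # hypothetical encoder resolution

    def fuse_viewpoint(gps_lat, gps_lon, compass_heading_deg,
                       accel_tilt_deg, encoder_ticks):
        """Combine GPS, electronic compass, accelerometer and rotary
        encoder readings into one virtual-camera pose aligned with the
        physical background."""
        encoder_deg = 360.0 * encoder_ticks / TICKS_PER_REVOLUTION
        yaw = (compass_heading_deg + encoder_deg) % 360.0
        return {
            "position": (gps_lat, gps_lon),  # where the tripod stands
            "yaw_deg": yaw,                  # which way the display faces
            "pitch_deg": accel_tilt_deg,     # how far it is tilted
        }

    # Example reading: facing roughly west with a slight downward tilt
    print(fuse_viewpoint(52.949, -1.155, 270.0, -5.0, 256))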
Conference Paper
Re-Tracing the Past: exploring objects, stories, mysteries was an exhibition held at the Hunt Museum in Limerick, Ireland, from 9th to 19th June 2003. We attempted to create an exhibition that would be an engaging experience for visitors, that would open avenues for exploration, allow for the collection of visitor opinions, and add to the understanding of material already in the Museum, rather than focus on "gee-whiz" technology. Thus our augmented environment completely hid the technology from view. A key objective was to be faithful to the ethos of the Museum, and to produce an exhibition that would stand up to scrutiny by Museum professionals. This design study paper gives a flavour of the exhibition by taking the reader on a tour of the whole design and development cycle: through site pictures, drawings, scenarios, pictures of the exhibition spaces, the interactive components, and visitor comments.
Article
An experiment was designed to study the effects of path illumination on individuals encountering a left-right decision point for the first time. A set of four hypotheses varying this theme are tested by the author. There were 111 volunteer subjects, of whom 4 were left-handed. The experiment was contained within another experiment about which the subjects had been told; at no time were they told about this experiment. The subjects entered a room, were interviewed, read a message and acted accordingly. These "instructions" served to remove any prior tendency to walk to either the right or left. The subjects then had to circumvent a room divider to approach another experimenter, who noted their choice of direction. The side walls were lit at a controlled intensity. The cover experiment was then conducted with the side lights off. At its conclusion, with the lights on, the experimenter again recorded the exit direction chosen by the subject. When equivalent left-right paths are presented, two-thirds of people will take the right path unless the other is more brightly lit. These findings could be used to increase traffic to displays such as those often encountered in museums and retail stores, and could also aid in controlling exiting traffic on highways.
Article
This paper describes the museum wearable: a wearable computer which orchestrates an audiovisual narration as a function of the visitor's interests, gathered from his/her physical path in the museum and the length of stops. The wearable consists of a small, lightweight computer that people carry inside a shoulder pack. It offers an audiovisual augmentation of the surrounding environment using a small, lightweight eye-piece display (often called a private-eye) attached to conventional headphones. Using custom-built infrared location sensors distributed in the museum space, and statistical mathematical modeling, the museum wearable builds a progressively refined user model and uses it to deliver a personalized audiovisual narration to the visitor. This device will enrich and personalize the museum visit as a visual and auditory storyteller that is able to adapt its story to the audience's interests and guide the public through the path of the exhibit.
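One way to read the "progressively refined user model" is as a Bayesian update over visitor types, driven by how long the visitor stops at each location. The sketch below (Python) is a guess at that mechanism; the visitor-type labels and likelihoods are invented for illustration and are not taken from the paper.

    # Assumed P(long stop | visitor type); purely illustrative numbers.
    P_LONG_STOP = {"busy": 0.1, "selective": 0.5, "greedy": 0.9}

    def update(posterior, stop_was_long):
        """One Bayesian refinement step of the visitor-type posterior,
        given whether the latest stop was long."""
        unnormalized = {
            t: p * (P_LONG_STOP[t] if stop_was_long else 1 - P_LONG_STOP[t])
            for t, p in posterior.items()
        }
        total = sum(unnormalized.values())
        return {t: p / total for t, p in unnormalized.items()}

    # Start with a uniform prior and refine it stop by stop.
    posterior = {"busy": 1 / 3, "selective": 1 / 3, "greedy": 1 / 3}
    for stop_was_long in [True, True, False]:
        posterior = update(posterior, stop_was_long)
    print(posterior)  # narration could then follow the dominant type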
Article
General lighting with a uniform level of illumination over the working plane was introduced and adopted in the inter-war period. Before this, local lighting over the work itself had been customary. It was found, however, that general lighting was not always satisfactory for work which demanded high degrees of visual skill and attention, and in many cases a return to local lighting, together with some general lighting, was favoured. It is generally accepted that the attention is held by objects which contrast strongly with their environment, either by their brightness, colour, texture or form. Equally, the attention can be distracted by a bright or highly coloured object in the field of view a little away from the object of regard. Experiments have been made in 'human phototropism', employing apparatus which enabled a simultaneous cine-photographic record to be made of the visual scene together with the eye movements of an unsuspecting observer viewing the scene. A count of the number and duration of these eye movements revealed that sharp, intensely bright points of light distracted the attention in a series of jerky eye movements, whereas less bright but larger areas caused more eye movements of longer duration. Different behaviour patterns of different observers were noticed, two rather distinct groups being recognized which bear many striking similarities to the 'postural-clue' and 'visual-clue' personality groups recognized by Witkin (1950). Some applications of the results to the lighting of work-places are suggested. The results argue in favour of preferential lighting of the work, possibly by local lighting.
Article
This paper presents the "Magic Table": an augmented whiteboard surface for supporting creative meetings. The Magic Table uses computer vision for scanning and spatially organizing texts and drawings on the surface. Digitization of the physical ink is done by a posteriori capture of the strokes. The digital information is organized through the manipulation of tokens (small plastic disks). The interaction consists of fast, easy-to-learn gestures that support multiple simultaneous users and two-handed control. After motivating our approach with respect to the more common pen-trajectory capture approach, we detail the interaction offered by the Magic Table and report on our initial observations of users interacting with the Table. Finally, we present the implementation of the two main components of the system: the color-model-based token tracker and the scanner based on mosaicking techniques.
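For readers unfamiliar with color-model token tracking, the following hedged sketch shows the general shape of such a tracker in Python with OpenCV. The HSV range, the area threshold and the single-colour assumption are ours for illustration; the abstract does not describe the actual color model.

    import cv2
    import numpy as np

    # Hypothetical HSV color model for one token colour.
    LOWER = np.array([100, 120, 60])
    UPPER = np.array([130, 255, 255])

    def find_tokens(frame):
        """Return (x, y) centroids of token-coloured blobs in a BGR frame."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER, UPPER)
        # OpenCV 4.x signature: returns (contours, hierarchy)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        centres = []
        for c in contours:
            if cv2.contourArea(c) < 200:  # ignore specks; threshold assumed
                continue
            m = cv2.moments(c)
            centres.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return centres

    # Usage with an overhead camera (index 0 assumed):
    # ok, frame = cv2.VideoCapture(0).read()
    # if ok:
    #     print(find_tokens(frame))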
Conference Paper
An intelligent kiosk is a public information kiosk that senses the presence of humans and communicates in a natural way. To examine issues of human-kiosk interaction, we have built and deployed two versions of intelligent kiosks. The first kiosk design combines machine vision, to locate and track people in the vicinity, with an animated talking head that focuses on clients and talks to them. The second kiosk design uses infrared and sonar sensors to sense clients, and multiple interacting agents to communicate with the client. The foremost lessons learned from public trials include: (1) people are attracted to an animated face that watches them; (2) small mobile agents interact better with kiosk content than a single fixed face; (3) speaker-independent speech recognition is only useful in targeted applications; and (4) the quality of the content on the kiosk strongly influences the client's evaluation of the quality of the technology.
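The presence-driven behaviour the abstract describes can be pictured as a small state machine: the kiosk idles in an attract loop until a sensor reports someone within range, engages them, and disengages when they leave. The sketch below (Python) simulates this with a list of distance readings; the engagement threshold and the sensor interface are hypothetical.

    def kiosk_loop(proximity_readings_m, engage_distance_m=1.5):
        """IDLE -> ENGAGED when a client comes within range; back to IDLE
        when they leave. A real kiosk would poll its IR/sonar drivers
        instead of iterating over simulated readings."""
        state = "IDLE"
        for distance in proximity_readings_m:
            if state == "IDLE" and distance <= engage_distance_m:
                state = "ENGAGED"
                print("face turns toward client; greeting plays")
            elif state == "ENGAGED" and distance > engage_distance_m:
                state = "IDLE"
                print("client left; resume attract loop")
        return state

    kiosk_loop([3.0, 2.0, 1.2, 0.9, 2.4, 4.0])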