The Madeira Touch: Encouraging
Visual-Spatial Exploration using a
Tactile Interactive Display
The current information marketplace for tourists is
dominated by for-profit purveyors of information.
Potential visitors must rely on experts-for-hire or
search engine results in order to learn about a desired
destination. In this paper, we introduce The Madeira
Touch, a multimodal display installation rooted in the
unique characteristics of Madeira, which allows users to
explore the island by selecting a type of scenery and
showing the user-generated photos of that type of
scenery in a map-based interface. To make this
pervasive display more engaging, we designed an
exploratory tactile-input mode of interaction: users will
be able to touch a physical object, representing a type
of scenery (a rock for mountains, a seashell for the sea,
etc.), which will then bring up suitable photos of that
type of scenery overlaid on a map of the island. The
display will help users to form their mental image of the
island and to plan trips that best suit their interests.
Author Keywords
Pervasive display, tactile interaction, multimodal
interaction, user-generated content, digital signage
ACM Classification Keywords
H.5.m. Information interfaces and presentation (e.g.,
HCI): Miscellaneous.
Copyright is held by the author/owner(s).
CHItaly ’17, September 18-20, 2017, Cagliari, Italy.
Catia Prandi
Funchal 9020-105, Portugal
Catherine Chiodo
Ricjeareu Villaflor
Carnegie Mellon University
Pittsburgh PA 15213, USA
Nicolas Autzen
Johannes Schöning
University of Bremen
Bremen 28359, Germany
Introduction & Motivation
In an increasingly connected world, travelers seeking
local experiences may encounter a paradox. While it is
easy to find information about destinations and places
that they may wish to visit, it is also increasingly
difficult to differentiate between the many options
available. At the same time, travelers who have
witnessed the homogenizing effects of globalization
may place a particular premium on unique experiences
that can only be found in certain places.
In this project, we took the island of Madeira as a
representative case study of a well-known touristic
destination that could be made more discoverable to
the island’s visitors. However, while tourists may
choose to visit Madeira for its beautiful scenery and
outdoor activities, often related to the island’s
levadas, remote canals that serve as walking paths,
they may not know which destinations on the island are
best suited to the kinds of scenery they hope to see.
Currently, travelers who hope to experience natural
beauty can plan their trips by either starting with a
possible location and attempting to find correlating
photos, or by beginning with photos of destinations
they would like to visit and then attempting to find
location information. Both approaches break down
when photos are not tagged with commonly used
location names.
Within the field of pervasive display systems [4], we
designed our solution to investigate how an
exploratory tactile mode of interaction, providing
georeferenced visual information in the form of user-
generated photos of points of interest (POIs), can
help visitors discover locations at a touristic
destination, enhancing their experience. The result is
The Madeira Touch, a pervasive display that allows
users to correlate photos with locations through a
multimodal interaction that lets users select either
a type of scenery (by touching a physical object) or a
location (using the map-based visualization). Madeira
represents an ideal place to develop a new way for
visitors to explore the island; however, we intend for
this system to be adaptable to other tourist
destinations, leveraging each location’s unique
offerings and characteristics.
Figure 1: The Madeira Touch in context
The Madeira Touch
Our solution is to provide Madeira’s main tourism
office with our pervasive display. There, visitors to
the island will be able to explore a map of Madeira
using two modes of interaction: 1. traditional
map-based touchscreen interaction (Figure 2), or 2.
exploratory tactile-input interaction (Figure 3). With
the first type of interaction, visitors will be able
to touch the digital map and see user-generated photos
of that location. With the second, users will be able
to touch a physical object that corresponds to a type
of scenery on the island, which will then bring up
user-generated photos of that type of scenery
(Figure 1). By enabling both forms of interaction, we
intend to encourage exploratory scenery discovery as
well as practical trip planning in contexts in which
multiple users, such as family groups, can
co-experience the display [5].

Scenarios of Use

Scenario 1: While waiting at the tourism office, a
visitor notices a display in front of the entrance,
surrounded by boxes containing different objects. She
touches one object and the display shows images from a
levada walk in Madeira. She removes her hand from the
object and the display returns to the map of the
island. She continues to touch the objects and see the
images associated with them. She leaves the display
with a better understanding of the opportunities on
the island.

Scenario 2: A local Madeiran visits the tourism office
to see the new interactive display. While interacting
with it, he notices that one of the photos is an image
he posted on Instagram. He feels gratified that one of
his images is contributing to the experience available
on the display and to the information provided to
tourists.
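The two interaction modes above can be sketched as a small event dispatcher. This is a minimal illustration in Python, not the paper's implementation: the event types, the `rock`/`seashell`/`branch` object identifiers, and the query callbacks are all assumed names.

```python
from dataclasses import dataclass

# Hypothetical event types for the two input modes; the
# fields and names are illustrative, not from the paper.
@dataclass
class MapTouch:
    lat: float
    lon: float

@dataclass
class ObjectTouch:
    object_id: str  # e.g. "rock", "seashell", "branch"

# Assumed mapping from physical objects to scenery types.
OBJECT_TO_SCENERY = {"rock": "mountains", "seashell": "sea", "branch": "forest"}

def handle_event(event, photos_by_scenery, photos_by_location):
    """Dispatch a touch event to the matching photo query.

    `photos_by_scenery` and `photos_by_location` stand in for the
    lookups the Visualization Module would perform.
    """
    if isinstance(event, ObjectTouch):
        scenery = OBJECT_TO_SCENERY.get(event.object_id)
        return photos_by_scenery(scenery) if scenery else []
    if isinstance(event, MapTouch):
        return photos_by_location(event.lat, event.lon)
    return []
```

Either input path thus ends in the same map-based photo view, which is what lets the two modes coexist on one display.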
Design Concepts
Traditional tourism information relies on professionally-
produced content which is limited both in quantity and
in coverage. We intend for this system to serve as an
exploration of how UGC from social media can be
curated to provide dynamic, updated and custom sets
of information for specific audiences (i.e., visitors).
While interactive displays have a high potential to
engage passersby, they frequently go unnoticed and
unused [7, 11], confirming the so-called ‘display
blindness’ effect [8]. By situating our display in a
strategic location (i.e., in front of the entrance) of a
tourism office where visitors often wait to speak to
someone, we intend to mitigate this issue, exposing
visitors to the display at a time when they will be
inclined to investigate. To further increase visitor
engagement, we have incorporated a novel input
modality: physical objects that a user may touch to
experience certain kinds of scenery. In fact,
studies of initial engagement with interactive displays
have found that physical interactions prompt greater
rates of engagement among passersby [6]. Moreover,
our solution aims to overcome the ‘interaction
blindness’ [10] that often plagues public displays by
providing users with novel and suggestive physical
objects that encourage non-linear exploration. This
paradigm encourages a very different kind of
interaction with the data, engendering an experience
that is less goal-oriented and more exploratory. On
the other hand, the use of tactile physical objects
can raise the ‘affordance blindness’ issue, defined as
the inability to understand the interaction modalities
of a public display [3]. Our solution aims to mitigate
this problem by providing visual hints that attract
the user’s attention (as described in Scenario 3).
Figure 2: Users can touch the thumbnails on the screen.
Figure 3: Users can directly touch the physical objects.
The Madeira Touch software architecture is composed
of three main modules (as shown in Figure 4). The
Geotagged Photos Retrieval Module collects geotagged
photos and paths related to the main touristic
georeferenced pedestrian walks that characterize the
island. At this stage, we have decided to use
OpenStreetMap (OSM), an open-source system that
allows users to voluntarily collect and share GPS
tracks and georeferenced data (i.e., points of
interest). In Madeira, an island with an area of
802 km², the OSM dataset includes 16,000 points and
24,000 lines, along with walking paths (levadas).
There are also private datasets gathered by companies
that have collected GPS tracks specifically related to
walking paths, such as Walk Me Madeira.

Scenario 3: While his parents are in the queue, a
15-year-old notices a monitor in front of the
entrance, surrounded by different natural objects in
display boxes. He tries to touch a location on the
monitor, expecting a touch-based interaction with the
system. In response to his touch, the display shows a
photo of that area. At the same time, a box lights up,
grabbing his attention. He notices that the box holds
a tree branch and that the photo on the screen is full
of trees. Intrigued, he decides to touch the tree
branch, and the monitor begins to display photos of
forests in different areas of the island. He touches
another object, then another. When his parents are
done, he brings them over to the display and together
they explore the island.
Based on this dataset of GPS tracks and routes, the
module retrieves public geotagged photos from
different social media platforms and photo blogs to
continually integrate UGC (i.e. photos) to enrich the
user experience of our system [12]. At this stage, the
system includes the Instagram and Flickr platforms,
both of which provide developers with APIs for
retrieving public photos based on locations and/or tags.
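One way the Retrieval Module could associate UGC photos with the OSM tracks is a simple proximity filter: keep only photos whose geotag falls within some radius of a track point. The sketch below makes this concrete, assuming photos have already been fetched (e.g. from the Flickr or Instagram APIs) and parsed into dicts with `lat`/`lon` keys; the 200 m radius is an illustrative parameter, not a value from the paper.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def photos_near_track(photos, track, radius_m=200.0):
    """Keep photos whose geotag lies within radius_m of any track point.

    `photos`: list of dicts with 'lat'/'lon' keys (assumed shape of
    parsed API results); `track`: list of (lat, lon) tuples, e.g. an
    OSM GPS trace of a levada.
    """
    return [
        p for p in photos
        if any(haversine_m(p["lat"], p["lon"], tl, tn) <= radius_m
               for tl, tn in track)
    ]
```

A production version would use a spatial index rather than the quadratic scan shown here, but the filtering principle is the same.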
The Classification Module is the core of our system,
because it enables (i) the elimination of photos that
show faces or other non-nature content, and
(ii) the categorization of the collected pictures based on
the main objects (rocks, sand, etc.) and scenery (cliff,
forest, etc.) in each image. Each of these categories is
correlated with a physical object with which the users
can interact. Different kinds of machine learning
algorithms have been developed for the recognition and
classification of faces/landscapes/nature elements
represented in photos from social media platforms and
photo blogs [1, 2, 9, 13]. However, this system could
also use a crowdsourcing approach to let participating
users manually check the photos. A third approach
would be a reCAPTCHA-style system, which
asks users to solve a puzzle by selecting all the images
that represent a specific element.
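Whichever labelling back end is used (a classifier, crowdsourcing, or reCAPTCHA-style checks), the module's two steps reduce to the same logic: discard photos with non-nature labels, then assign the rest to the categories tied to the physical objects. The sketch below assumes a hypothetical label vocabulary and category mapping; the real system would derive both from its chosen classifier.

```python
# Assumed label sets; a deployed system would obtain per-photo
# labels from an image classifier or crowdsourced checks.
NON_NATURE = {"face", "person", "selfie", "indoor"}
SCENERY_CATEGORIES = {
    "mountains": {"rock", "cliff", "peak"},
    "sea": {"sea", "beach", "sand", "wave"},
    "forest": {"tree", "forest", "levada"},
}

def categorise(photo_labels):
    """Return the scenery categories for one photo, or None to discard.

    Step (i): drop photos carrying any non-nature label.
    Step (ii): assign every matching scenery category; each category
    corresponds to one of the physical objects users can touch.
    """
    labels = set(photo_labels)
    if labels & NON_NATURE:
        return None
    matched = {cat for cat, keys in SCENERY_CATEGORIES.items()
               if labels & keys}
    return matched or None
```

Allowing a photo to belong to several categories matters here: a cliff over the sea should light up for both the rock and the seashell.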
The Visualization Module is the final step in this system
and connects the categorized geotagged photos with
the object the user touches, showing the information in
a Google Maps-based interface. Regarding the hardware
requirements, The Madeira Touch utilizes a touch
screen monitor and sensors to indicate when an object
has been touched. Considering the design of our
system, simple motion sensors should work well. The
current design also uses LED lights in each box to
emphasize the way in which each object corresponds to
a certain type of photo. This way, even when the user
is interacting directly with the touchscreen, the relevant
object will light up, indicating the relationship between
object and image.
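The LED coupling described above amounts to lighting exactly one box, the one whose object matches the photo currently on screen. A minimal sketch, in which the `set_led` callback and the box numbering stand in for whatever GPIO/LED driver the installation would use:

```python
# Illustrative glue between the touchscreen and the LED boxes;
# CATEGORY_TO_BOX and set_led are assumptions, not the paper's API.
CATEGORY_TO_BOX = {"mountains": 0, "sea": 1, "forest": 2}

def highlight_for_photo(photo_category, set_led, n_boxes=3):
    """Light only the box whose physical object matches the photo.

    `set_led(box, on)` is a hypothetical driver callback; every other
    box is switched off so the correspondence stays unambiguous.
    """
    active = CATEGORY_TO_BOX.get(photo_category)
    for box in range(n_boxes):
        set_led(box, box == active)
    return active
```

Running this on every photo change keeps the object-to-image relationship visible even during pure touchscreen interaction, which is what Scenario 3 relies on.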
Conclusion and Future Work
Our concept combines three characteristics in a unique
way to make an engaging pervasive display for visitors
to the island of Madeira. By using images from social
media, the system ensures that the content remains
dynamic and accurate. By allowing users to explore the
data, either by location or by scenery type, the system
allows for scenery exploration and practical trip
planning. Finally, by creating a tactile form of
interaction, the system encourages users to consider
the natural materials represented, allowing them to
form a more sensory and complete mental picture of
the destination. While all three of these characteristics
are transferable and could be applied to other
destinations, it is the authors’ belief that, for future
installations of this solution, the appeal and utility of
such a system relies on its ability to accurately reflect
the unique character of the local environment. To
evaluate the effectiveness of our pervasive display in
engaging users and enhancing their visiting experience,
overcoming the display/interaction/affordance blindness
issues, we plan to install the system in Madeira’s
main tourism office.
Figure 4: System architecture
References

1. Nuttapoom Amornpashara, Yutaka Arakawa,
Morihiko Tamai, and Keiichi Yasumoto. 2015.
Landscape photo classification mechanism for
context-aware photography support system. In
Proceedings of the Conference on Consumer Electronics
(ICCE 2015), 663-666.
2. Pu Cheng and Jie Zhou. 2011. Automatic Season
Classification of Outdoor Photos, In Proceedings of
the Conference on Intelligent Human-Machine
Systems and Cybernetics, 46-49.
3. Jorgos Coenen, Sandy Claes, and Andrew Vande
Moere. 2017. The concurrent use of touch and mid-
air gestures or floor mat interaction on a public
display. In Proceedings of the Symposium on
Pervasive Displays (PerDis '17). Article 9, 9 pages.
4. Nigel Davies, Sarah Clinch, and Florian Alt. 2014.
Pervasive displays: understanding the future of
digital signage. Synthesis Lectures on Mobile and
Pervasive Computing 8.1 (2014): 1-128.
5. Jodi Forlizzi and Katja Battarbee. 2004.
Understanding experience in interactive systems.
In Proceedings of the conference on Designing
interactive systems: processes, practices, methods,
and techniques (DIS '04), 261-268.
6. Wendy Ju and David Sirkin. 2010. Animate objects:
how physical motion encourages public interaction.
In Proceedings of the conference on Persuasive
Technology (PERSUASIVE'10), 40-51.
7. Kazjon Grace, Rainer Wasinger, Christopher Ackad,
Anthony Collins, Oliver Dawson, Richard Gluga,
Judy Kay, and Martin Tomitsch. 2013. Conveying
interactivity at an interactive public information
display. In Proceedings of the Symposium on
Pervasive Displays (PerDis '13), 19-24.
8. Jörg Müller, Dennis Wilmsmann, Juliane Exeler,
Markus Buzeck, Albrecht Schmidt, Tim Jay, and
Antonio Krüger. 2009. Display blindness: The effect
of expectations on attention towards digital
signage. Pervasive Computing (2009): 1-8.
9. Mor Naaman, Susumu Harada, QianYing Wang,
Hector Garcia-Molina, and Andreas Paepcke. 2004.
Context data in geo-referenced digital photo
collections. In Proceedings of the conference on
Multimedia (MULTIMEDIA '04), 196-203.
10. Gonzalo Parra, Joris Klerkx, and Erik Duval. 2014.
Understanding Engagement with Interactive Public
Displays: an Awareness Campaign in the Wild. In
Proceedings of the Symposium on Pervasive
Displays (PerDis '14), 180-186.
11. Peter Peltonen, Esko Kurvinen, Antti Salovaara,
Giulio Jacucci, Tommi Ilmonen, John Evans, Antti
Oulasvirta, Petri Saarikko. 2008. It’s mine, don't
touch!: interactions at a large multi-touch display
in a city centre. In Proceedings of the Conference
on Human Factors in Computing Systems (CHI
'08), 1285-1294.
12. Pavel Serdyukov, Vanessa Murdock, and Roelof van
Zwol. 2009. Placing flickr photos on a map.
In Proceedings of the conference on Research and
development in information retrieval (SIGIR '09).
13. Feng Tang, Daniel R. Tretter and Chris Willis. 2011.
Event classification for personal photo collections.
In Proceedings of Conference on Acoustics, Speech
and Signal Processing (ICASSP), 877-880.