Interaction of Mobile Camera Devices with Physical Maps
Johannes Schöning, Antonio Krüger, Hans Jörg Müller
{j.schoening, antonio.krueger, joerg.mueller}@uni-muenster.de
Abstract
Traditional paper-based maps are still superior to their digital counterparts used on mobile devices in several ways. They provide high-resolution, large-scale information with zero power consumption. Digital maps, on the other hand, provide personalized and dynamic information on request, but suffer from small outer scales and low resolutions. In this work we try to combine the advantages of both by using mobile camera devices (such as smartphones or PDAs) as a map-referenced magic lens that displays geo-referenced information on top of a physical map. We mainly focus on the interaction schemes that arise from using mobile camera devices with physical maps, and briefly explain how the device can be tracked over existing physical maps.
1 Introduction
In many mid- to large-sized cities, public maps are ubiquitous. They facilitate orientation and provide information to tourists, but also to locals who just want to look up an unfamiliar place while on the go. These maps are usually designed to address the most common questions of average users and therefore contain only the most essential information, such as street names and places of interest. More specific information, such as the locations of ATMs, pubs, shops and restaurants, would visually clutter the map and is therefore not included. Such requests can instead be answered with mobile devices, such as PDAs and smartphones with network connectivity, by querying an adequate web service, which returns a dynamic digital map with the desired content. These digital maps, however, suffer from a small outer scale (due to the small display size) and a rather small inner scale. It is often hard to identify locations and landmarks on them, rendering them rather useless. In this paper we combine the advantages of large-scale, paper-based but static maps with small, dynamic maps on mobile devices. We apply a magic lens approach [7] that makes use of mobile camera devices. The main idea is that the camera image of the physical map is augmented with dynamic content, for example the locations of ATMs on the map. By moving a tracked camera device over the physical map (see Figure 1), users can explore the requested digital content for the whole area of the map, using their mobile PDA or smartphone as a see-through device. For this purpose the mobile camera device has to be tracked over the physical map (see Section 3), and appropriate map interaction concepts are needed (Section 4). We will also provide some details on the implementation, and start with a brief review of related work.

Figure 1: Interaction of mobile camera devices with physical maps
2 Related Work
Our work builds upon existing work on mobile augmented reality. To track the device over the map, we apply the marker-based approach developed by Wagner and Schmalstieg [6] to the domain of physical maps. Our work is similar to that of Reilly et al. [2], where a physical map is equipped with RFID tags, which allow a mobile device equipped with an RFID reader to identify certain spots and display corresponding information. In our work, however, we follow a magic lens see-through approach and use a mobile camera device. We are inspired by the interaction concepts developed by Rohs and Roduner [4], but specifically look at the interaction requirements of the map domain.
3 Device Tracking and Marker Integration
To track the device with respect to the map, we are currently using ARToolkitPlus [1] markers. The marker-based approach is very robust, but in our case its main disadvantage is that it obscures part of the valuable map space. We have tried to address this problem in several ways: semi-transparent markers (up to 15% transparency), multiple but smaller markers (see Figure 1c), and markers carrying map content such as a north arrow, a parking symbol or even commercial information. When seen through the display, markers can be covered by an appropriate digital patch of the map (the effect can be seen in the system screenshots in Figure 2).
A special marker should be used to identify the type, the outer boundary
and the scale of the map.
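Such an identification marker can simply index a small registry of map metadata. The sketch below is a hypothetical illustration (the marker IDs, bounds and scale are invented for the example), not part of the described system:

```python
# Hypothetical registry resolving the special identification marker to
# map metadata: the map type, its outer boundary in geodetic
# coordinates, and its scale. All values here are illustrative.

MAP_REGISTRY = {
    42: {
        "type": "city map",
        "bounds": (7.56, 51.92, 7.69, 51.99),  # (min_lon, min_lat, max_lon, max_lat)
        "scale": 10_000,                       # denominator of a 1:10000 map
    },
}

def lookup_map(marker_id):
    """Return the metadata record for a detected identification marker,
    or None if the marker is unknown."""
    return MAP_REGISTRY.get(marker_id)
```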
Most physical maps are nowadays designed with the help of dedicated Geographic Information Systems, which can also be used to easily geo-reference the markers. For this purpose, the markers are inserted as additional map objects and stored along with the geodetic coordinates of the marker's center and the orientation of the marker's coordinate system. This approach makes it very easy to design maps with integrated markers and a correct geographical reference. Further improvements in tracking could be obtained by combining the marker-based approach with optical flow analysis [3]. Given that city maps are usually highly structured, we are also currently exploring the possibility of applying structural image analysis to the tracking problem.
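As a rough illustration of how a geo-referenced marker lets the device compute geodetic coordinates, the following sketch converts a point measured on the paper, relative to the marker's center, into latitude and longitude. It uses a small-offset spherical approximation, and the parameter conventions (scale denominator, clockwise rotation from north) are assumptions for the example, not the paper's actual implementation:

```python
import math

EARTH_RADIUS = 6_371_000.0  # mean Earth radius in metres

def marker_point_to_geo(marker_lat, marker_lon, rotation_deg,
                        map_scale, dx_paper, dy_paper):
    """Map a point (dx_paper, dy_paper), given in metres on the paper
    relative to the marker's center, to geodetic coordinates.

    map_scale is the map's scale denominator (e.g. 10_000 for 1:10000);
    rotation_deg is the clockwise angle between the marker's y-axis and
    geographic north.
    """
    # scale paper distances up to ground distances
    dx = dx_paper * map_scale
    dy = dy_paper * map_scale
    # rotate from the marker frame into an east/north frame
    theta = math.radians(rotation_deg)
    east = dx * math.cos(theta) + dy * math.sin(theta)
    north = -dx * math.sin(theta) + dy * math.cos(theta)
    # convert metric offsets to degree offsets (small-offset approximation)
    dlat = math.degrees(north / EARTH_RADIUS)
    dlon = math.degrees(east / (EARTH_RADIUS * math.cos(math.radians(marker_lat))))
    return marker_lat + dlat, marker_lon + dlon
```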
4 Interaction Concepts
Figure 2: Screenshots on the mobile device. The marker is masked by a map from a Web Mapping Service: a) ATMs in Münster; b) measuring distance
The basic interaction pattern is that of sweeping the camera device over the map (as described in [4] and shown in Figure 1). Moving the camera towards or away from the map shows a smaller or larger portion of the map on the display. In combination with keystrokes, dedicated geo-services can be triggered, e.g. a routing service that calculates a route from the current position to a designated location¹. For the selected area, specific geo-features can be requested from a Web Feature Service². The result of a request to display the available ATMs is shown in Figure 2a.
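A request of this kind can be assembled as a standard WFS GetFeature call for the bounding box currently visible through the lens. The endpoint URL and feature type name below are hypothetical placeholders; a real deployment would use the provider's values:

```python
from urllib.parse import urlencode

def build_wfs_request(base_url, type_name, bbox):
    """Build a WFS 1.0.0 GetFeature request URL.

    bbox is (min_lon, min_lat, max_lon, max_lat) of the map area
    currently visible through the magic lens.
    """
    params = {
        "SERVICE": "WFS",
        "VERSION": "1.0.0",
        "REQUEST": "GetFeature",
        "TYPENAME": type_name,
        "BBOX": ",".join(str(c) for c in bbox),
    }
    return base_url + "?" + urlencode(params)
```

For example, `build_wfs_request("http://example.org/wfs", "city:atm", (7.59, 51.95, 7.64, 51.98))` would ask a (hypothetical) server for all ATM features inside the visible extent.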
Another obvious interaction concept is that of map annotation. Allowing users to annotate physical maps with arbitrary kinds of information (e.g. the locations of good pubs or interesting shops) has the great advantage that this information is geo-referenced without the need for any external location technology (such as GPS).

¹ In the case of city maps the location of the user is known, and thus only the identification of the destination is needed.
² A Web Feature Service (WFS) is a highly interoperable and standardized protocol that allows requests for geographical features across the web.
Calculating the distance between two designated locations on the map by pointing is straightforward. As seen in Figure 2b, users just need to mark two designated points on the physical map.
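Once both marked points have been geo-referenced, the ground distance follows from a standard great-circle formula. The haversine sketch below is a generic textbook computation, not the paper's actual implementation:

```python
import math

EARTH_RADIUS = 6_371_000.0  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    # haversine of the central angle between the two points
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS * math.asin(math.sqrt(a))
```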
5 Summary and State of Implementation
This paper has discussed an approach to accessing digital geo-referenced content through a mobile camera device (such as a PDA or a smartphone). By applying a magic lens approach, we have shown that high-resolution, large-scale physical maps can be augmented with dynamic and personalized content without requiring great changes to the infrastructure.
The current implementation runs on a PDA with an SD camera. The content is retrieved over a wireless connection from a Geographic Information System. We are investigating the possibility of running the system on an MDA Pro (HTC Universal) from T-Mobile with a 1.3-megapixel camera running Windows Mobile 5.0.
References
[1] ARToolKit (2005). <http://www.hitl.washington.edu/artoolkit/>
[2] Reilly, D., Welsman-Dinelle, M., Bate, C., Inkpen, K.: Just Point and Click? Using Handhelds to Interact with Paper Maps. In: Proceedings of the 7th International Conference on Human Computer Interaction with Mobile Devices and Services (2005)
[3] Drab, S., Artner, N.: Motion Detection as Interaction Technique for Games & Applications on Mobile Devices. In: Proceedings of the Workshop PERMID (2005)
[4] Rohs, M., Roduner, C.: Camera Phones with Pen Input as Annotation Devices. In: Proceedings of the Workshop PERMID (2005)
[5] Wagner, D., Schmalstieg, D.: Towards Massively Multi-User Augmented Reality on Handheld Devices. In: Proceedings of the Third International Conference on Pervasive Computing (2005)
[6] Wagner, D., Schmalstieg, D.: First Steps Towards Handheld Augmented Reality. In: Proceedings of the International Symposium on Wearable Computers (2003)
[7] Bier, E. A., Stone, M. C., Pier, K., Buxton, W., DeRose, T. D.: Toolglass and Magic Lenses: The See-Through Interface. Computer Graphics, vol. 27 (Annual Conference Series), pp. 73–80 (1993)
6 Specific requirements
Figure 3: Schema
General: no specific requirements
Space: just a desk and a wall for the map
Power: notebook and PDA
Network: not needed
Time: about 20 minutes