Intelligent visual interface with the Internet of Things
Konstantinos Michalakis
University of the Aegean
Mytilene, Greece
kmichalak@aegean.gr
John Aliprantis
University of the Aegean
Mytilene, Greece
jalip@aegean.gr
George Caridakis
University of the Aegean
Mytilene, Greece
gcari@aegean.gr
ABSTRACT
Communication between users and physical objects and
sensors through the web, within the Internet of Things
framework, requires by definition the capability to
perceive the sensors and the underlying information and
services. Visualization of the Things in the IoT is thus a
requirement for natural interaction between users and IoT
instances in this upcoming but steadily establishing
computing paradigm. The immense quantity of sensors
and the variety of usable information introduce the need to
intelligently filter and adapt the respective information
sources and layers. The current work proposes an architecture
that supports intelligent interaction between users and the
IoT, addressing the intelligent perception requirement
described above. On the one hand, sensory visualization
is tackled via Augmented Reality layers of sensors and
information; on the other hand, context and location
awareness enhance the system by providing information
usable in the respective settings.
AUTHOR KEYWORDS
Internet of Things; Natural interaction; Augmented
reality; Context Awareness; Markerless tracking
ACM CLASSIFICATION KEYWORDS
H.5.2 Human-centered computing: Systems and tools
for interaction design, Human-centered computing:
Ubiquitous and mobile computing systems and tools,
Human-centered computing: Visualization techniques
INTRODUCTION
The emergence of the Internet of Things (IoT) promises a
wide range of new applications and services that will
shape our everyday life. The building blocks are smart
objects (SOs) around us that can interact with each other
and with the user, providing real-time personalization of
the system behavior according to the user’s preferences.
The IoT envisions interaction with billions of such SOs.
Valli [11] discusses the importance of "natural
interaction" and the need to design appropriate interfaces
and interaction models that allow users to communicate
with the machine in ways more natural to them than the
use of a mouse or a keyboard. An
augmented representation of smart objects can act as a
natural interface that provides a better understanding of
the building blocks of the IoT infrastructure of the area,
giving users the ability to monitor and process the
operation of SOs from their own device. A marker is the
most common and simple way to track a device in an
augmented reality application [4]. Even though it is a
widespread technique, markers have considerable
limitations in an IoT implementation, especially when
static (non-dynamic) markers are used.
Besides that, the precise localization provided by markers
is not essential for an IoT application, as knowledge of
the presence of sensors is enough for the perception of a
smart system. So, even if accurate tracking fails in many
cases, the user will still benefit from the procedure.
However, there are cases in which precise localization of
an SO is crucial, for example when the sensor's data is
meaningful only at its position (such as the temperature of
a specific object) or when an SO needs maintenance. But
this may expose smart devices to security risks, especially
in outdoor environments; to prevent that, access to precise
localization should be granted only with special privileges.
Presenting the smart objects of the IoT as augmented
artifacts is one way to provide natural interaction to the
users. Let us consider the following scenario:
Lane is visiting a city that boasts an extensive use of
IoT services. Her initial experience is with a smart
transport system that plans her trips. Soon she starts
using more smart services but she still feels ignorant
of the numerous sensors and smart objects that
function all around her. She would like to have more
information about all those sensors, what they
measure and where they are located.
Lane started using the AR system on her smartphone.
Among other things she can now identify sensors
located around her, understand their function, enable
or disable them. As a result, she can use the services
provided more efficiently.
Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that
copies bear this notice and the full citation on the first page. Copyrights
for components of this work owned by others than ACM must be
honored. Abstracting with credit is permitted. To copy otherwise, or
republish, to post on servers or to redistribute to lists, requires prior
specific permission and/or a fee. Request permissions from
Permissions@acm.org.
SmartObjects'17, March 13 2017, Limassol, Cyprus
© 2017 ACM. ISBN 978-1-4503-4902-4/17/03…$15.00
DOI: http://dx.doi.org/10.1145/3038450.3038452
We propose a system architecture that combines the
sensory layer, the middleware and the user's device,
adding context-aware methods and smart object
management; we also discuss markerless tracking
methods applicable to our system and focus on contextual
matters [6]. The rest of the paper is organized as follows:
Section 2 briefly reviews the related work. In Section 3
we introduce our proposed framework, analyzing issues
such as context awareness, security and markerless
tracking methods. Section 4 describes the interaction
methods in our system. Finally, in Section 5 we discuss
our future plans and directions.
RELATED WORK
Several papers explore the use of AR systems in smart
spaces. Garcia et al. [2] discuss the use of sentient
browsers for the Internet of Things, implementing a
prototype called UbiVisor. Similar efforts can be found in
[1] and [5]. All the efforts listed above introduce solutions
limited to a user device. Our system differs in that it can
be integrated into the IoT infrastructure, combining
sensors, middleware and AR components into a seamless
platform.
Marker-based techniques are used in the majority of
visualization methods that integrate Augmented Reality
into the IoT framework. QR codes printed on smart
devices or close to their position provide less complicated
detection methods, but the user's device must be within a
few centimeters of the marker for a successful
identification [9]. The same issues appear in larger
projects which use thousands of QR codes to implement
smart city strategies [3]. Bluetooth Low Energy is
amongst the most promising new hardware technologies
for tagging devices, opening new horizons for IoT
applications [12].
PROPOSED SYSTEM ARCHITECTURE
Perera et al. [6], reviewing the field, present a typical IoT
architecture that comprises the middleware and the
Sensor Network, allowing Users to interact through
Applications. We enhance this architecture as shown in
figure 1, in which the colored areas and arrows show our
additions to the authors' original figure (greyed areas).
The heterogeneity of smart objects deployed into a fully
realized Internet of Things needs to be addressed by the
use of middleware. The role of middleware is to provide
an abstraction layer between the sensory network and the
applications running in the environment. In our proposed
system, the middleware is enhanced with a Context
Aware Module and an SO Management component. Both
modules are necessary to provide a personalized list of the
SOs of the area to the AR System. Research on IoT
middleware has been very active recently, yielding many
solutions [8].
The original architecture scheme did not illustrate the
usage of the User Device, which receives an upgraded
role in our enhanced version. The User Device module
consists of the User Profile component which is
responsible for transmitting the user access and
preferences and the AR component that is responsible for
the augmented visualization of the sensors.
The data flow starts at the sensor layer which
communicates directly with the middleware. The
transmitted data consists of both the sensed context
measured by the sensors (e.g. noise detection) and the
descriptive metadata of the sensors, such as ID,
description, capabilities and location (for the AR tracking).
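As a minimal sketch, such a metadata record could be modeled as follows; the field names and the sample values are illustrative assumptions on our part, not a schema defined by the architecture:

```python
from dataclasses import dataclass, field

@dataclass
class SmartObjectMetadata:
    """Descriptive metadata a sensor transmits to the middleware (hypothetical schema)."""
    so_id: str                                              # unique identifier
    description: str                                        # human-readable description
    capabilities: list = field(default_factory=list)        # e.g. ["temperature", "noise"]
    location: tuple = (0.0, 0.0)                            # (lat, lon) for AR tracking

# A street noise sensor might announce itself as:
noise_sensor = SmartObjectMetadata(
    so_id="so-0042",
    description="Street noise sensor",
    capabilities=["noise"],
    location=(39.0876, 26.5550),
)
```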
We propose that the middleware should catalog and
manage the smart objects residing in its area of influence,
a daunting task that raises many challenges.
In particular, there is a need to define the area of control
attributed to each middleware. Although many spaces will
be constrained and easily attributed to a single
middleware (e.g. rooms in a Smart Home), some -mostly
public- spaces will be too large. A geospatial approach
could assign middleware to all SOs within a perimeter,
thus allowing overlapping areas, which may add
complexity but enhance the SO management. The
middleware system should be able to identify which smart
objects are in its area, their IDs and functions, their last
known location and other metadata. Also, as users move
around, the user device should be able to hop from
middleware to middleware, sometimes even merging lists
from different sources, eventually presenting to the user
the actual SOs of interest in the area.
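The geospatial assignment of SOs to a middleware, and the merging of lists coming from overlapping middleware, could be sketched as below; the haversine-based perimeter test and the dictionary layout are our illustrative assumptions:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def sos_in_area(sos, centre, radius_m):
    """Smart objects whose last known location falls inside a middleware's perimeter."""
    return [so for so in sos
            if haversine_m(*so["location"], *centre) <= radius_m]

def merge_so_lists(*lists):
    """Merge SO lists from overlapping middleware, deduplicating by ID."""
    merged = {}
    for lst in lists:
        for so in lst:
            merged.setdefault(so["id"], so)
    return list(merged.values())
```

Overlapping perimeters then cost only a deduplication pass, which is the complexity/manageability trade-off mentioned above.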
Responsible for the execution of the AR system, the user
device receives a list of smart objects, before visualizing
the SOs on the screen. In a typical scenario, a user enters
a smart space with a smart device (smartphone).

Fig. 1 Proposed IoT architecture

IoT systems residing in that space track the entrance of a new
smart object and initiate communication protocols. The
middleware system responsible for the interaction with
the sensory network interacts with it, exchanging profiles.
Eventually, a list of all available SOs is transmitted to the
device, making it available to the AR program installed
on it.
The proposed system also tackles multi-user issues. What
happens if two users compete for the same smart object?
How will the system respond to multiple conflicting
requests given in the same time frame? Part of the answer
to multi-user challenges lies in correctly attributing
profiles to users, as discussed in the next part of this
section. Eventually, though, equally privileged competing
users will need to be handled by an appropriate protocol.
Context Awareness
Garcia et al. [2] argue that candidate visors browsing the
area for smart objects should satisfy context-awareness
(CA) requirements if they are to be perceived as sentient.
By adding context-aware computing, the list of SOs
presented to the user is filtered based on their preferences,
privileges and the general area context. As a result, the
user is not confused by overexposure to irrelevant objects.
The Context Aware Module residing on the middleware
receives two context types:
a) sensed context
The sensor layer produces great amounts of sensed data,
all of which can be considered by the system as context.
Contextual variables like light and sound emissions or
traffic and people congestion have to be measured by the
sensors and transmitted to the CA reasoning engine of the
middleware. For example, based on sound filtering, the
engine may identify that privacy conditions are satisfied
and accordingly visualize SOs that the user may wish to
hide from public view.
b) user profile
The IoT infrastructure will cover many services and
procedures, some of which may have restricted access. A
public camera recording passengers for safety reasons
should not be available for all users but only those with
the appropriate privileges. User access data is thus
essential and will be transmitted from the user device to
the middleware when initiating their communication.
Apart from identity and privileges, the user profile can
provide user preferences that further personalize the
system behavior producing context information of higher
complexity.
Fusing the sensed data with the user profile, the CA
module produces meaningful, reasoned contextual
information and transmits it to the Smart Objects
Managing module, which may decide to hide the smart
objects that are irrelevant in the specific context, or focus
on those that are better suited. For example, a museum
visitor will probably not be interested in the exhibits
displayed in an open area when it is raining. Furthermore,
since the procedure is executed by the middleware, it will
be responsible for capturing changes in the environment,
thus providing an up-to-date perception of the situation.
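A sketch of such a filtering step is given below; the field names (`required_privilege`, `outdoor`, `interests`) and the three rules are our illustrative assumptions, not rules specified by the architecture:

```python
def filter_sos(sos, user, context):
    """Personalize the SO list by fusing user privileges/preferences with area context."""
    visible = []
    for so in sos:
        # privilege check: restricted SOs require a matching user privilege
        if so.get("required_privilege") and so["required_privilege"] not in user["privileges"]:
            continue
        # sensed-context check: e.g. hide outdoor SOs when it is raining
        if context.get("raining") and so.get("outdoor"):
            continue
        # preference check: keep only SO types the user cares about, if stated
        if user.get("interests") and so.get("type") not in user["interests"]:
            continue
        visible.append(so)
    return visible
```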
Security and Privacy issues
As in all IoT applications, there are many security issues
that need to be addressed. A new security risk added by
the proposed architecture is the danger of disclosing to
users the accurate position of public smart objects, which
may increase the risk of those objects being stolen. An
anti-theft technique could be applied: SOs can be divided
into those that are theft-critical and those that are not, and
a theft-critical SO may transmit only a vague value of its
exact location.
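One way to produce such a vague value is to snap the coordinates to a coarse grid; the sketch below does this by rounding, where the grid granularity (2 decimal places, roughly a 1 km cell) is our assumption, not a value from the paper:

```python
def reported_location(lat, lon, theft_critical, decimals=2):
    """Exact location for non-critical SOs, a coarsened one for theft-critical SOs.

    Rounding to 2 decimal places blurs the position to roughly a 1 km grid cell,
    enough to signal the SO's presence in the area without revealing its exact spot.
    """
    if not theft_critical:
        return lat, lon
    return round(lat, decimals), round(lon, decimals)
```

Privileged users (e.g. maintenance staff) would bypass this step and receive the exact coordinates, matching the access-control rule discussed earlier.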
At the same time, privacy and trust issues are crucial for
user acceptance of IoT applications. Confidentiality,
integrity and availability (known as the CIA triad) need to
be guaranteed, while authorization and authentication
techniques must be established in order to protect
sensitive data and personal information. Our proposed
system incorporates many of the above mechanisms so
that it can handle shared data in compliance with user
needs and privileges [10].
Markerless tracking techniques
As we already discussed, markers are not an ideal solution
for IoT applications, so it is necessary to find alternative
techniques. For our proposed architecture, we need to
address the following challenges:
- Devices/sensors may change their position at any time.
- Users can access data of all devices/sensors inside a
certain area without being close to them.
- Devices/sensors in the area are displayed on the user's
smartphone even if there is no clear line of sight.
We examine two technologies which can address these
challenges. Firstly, an active RFID tag can be attached to
physical objects and communicates with the responsible
RFID middleware through RF signals. As a result, the
real-time locations of all active RFID tags can be acquired
in a map provided by that middleware, which is loaded by
any tracking device entering its area. Then, users can
interact with these tags through their camera view by
pointing their device in a direction according to the map.
Due to technology restrictions, we only have the
representation of each object in two coordinates and
cannot calculate its distance from the ground, so we
assume that all SOs are at a fixed height (1.5 m).
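With only a 2D map and the device's compass heading, overlay placement reduces to a bearing computation; the sketch below shows this, where the 60° field of view and the screen-mapping formula are our illustrative assumptions:

```python
import math

FOV_DEG = 60.0  # assumed horizontal field of view of the phone camera

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from the device to a tag, in degrees [0, 360)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def screen_x(device_pos, device_heading_deg, tag_pos, screen_width_px):
    """Horizontal pixel position of a tag overlay, or None if outside the camera FOV."""
    rel = (bearing_deg(*device_pos, *tag_pos) - device_heading_deg + 540.0) % 360.0 - 180.0
    if abs(rel) > FOV_DEG / 2:
        return None
    return int(screen_width_px * (rel / FOV_DEG + 0.5))
```

The vertical screen coordinate would follow from the fixed 1.5 m height assumption and the tag's distance, estimated from the map.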
As a second technology we propose Bluetooth Low
Energy (BLE) beacons. In this case, we present a different
approach for our architecture: users no longer search
around for SOs to interact with; instead, the IoT devices
are the ones which scan their area and give any user who
approaches them the ability to use their device and access
their data. However, BLE beacons have many limitations
regarding IoT implementation, especially on the
interaction side, but eventually new architectures will be
introduced to tackle these issues, revealing BLE's
potential in this field [7].
INTERACTION
The interaction between humans and things is crucial for
the successful deployment of the Internet of Things. The
Augmented Sensors system described in this paper aims
at giving the user more control over the surrounding
smart objects.
The challenge lies in the wide heterogeneity of existing
smart objects and sensors. If standardization efforts fail,
multiple protocols administered by various manufacturers
will be used. In such a scenario, the user's device will
scan the area for smart objects and, upon identification of
the desired object, will request access. The target object
will respond by transmitting the interaction protocol it
understands. The user's device can then initiate
communication based on the selected protocol, which
may already be known or not (in which case it can be
downloaded from the Web).
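This negotiation step could be sketched as follows; the protocol names, the handler registry and the download fallback are purely hypothetical placeholders for whatever drivers a real device would ship with or fetch:

```python
# Handlers the user device already knows (hypothetical registry).
KNOWN_HANDLERS = {"mqtt": "local MQTT handler", "coap": "local CoAP handler"}

def fetch_handler_from_web(protocol):
    """Placeholder: a real system would download a protocol driver here."""
    return f"downloaded {protocol} handler"

def negotiate(so_protocols):
    """Pick the first protocol the SO offers that we handle locally;
    otherwise fall back to downloading a handler for its preferred one."""
    for proto in so_protocols:
        if proto in KNOWN_HANDLERS:
            return proto, KNOWN_HANDLERS[proto]
    proto = so_protocols[0]
    return proto, fetch_handler_from_web(proto)
```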
Visualizing the smart objects as augmented sensors on the
screen of the user device is our proposal for more natural
interaction. The abstraction layer offered by the AR
system can hide the technical details of smart objects
from users not interested in them and allow interaction
through the touch screen, which might otherwise not be
possible due to the size or positioning of those objects.
Finally, the wide range of capabilities of user devices can
also enable voice or gesture interfaces and other abstract
ways of communication, fulfilling the user's requirement
for natural interaction.
DISCUSSION
In this paper we introduced an architecture that improves
natural interaction between users and IoT environments
by visualizing the sensor layer through augmented reality.
It also embeds a context-awareness layer, which
personalizes the AR experience, while relying on
markerless tracking techniques, since precise localization
of smart objects only adds complexity to the system
without enhancing the perception of the IoT experience.
As a result, users are able to observe and control smart
objects through an augmented reality representation by
scanning the area around them with an appropriate
tracking device.
In the future, we plan to simulate our architecture in order
to review middleware and software prototypes that will be
able to connect and share data between SOs and tracking
devices. Furthermore, we intend to evaluate this new
method of interaction in terms of the user's natural
experience in a smart environment.
REFERENCES
1. Ajanki, A., Billinghurst, M., Gamper, H., Järvenpää,
T., Kandemir, M., Kaski, S., Koskela, M., Kurimo,
M., Laaksonen, J., Puolamäki, K., Ruokolainen, T.
(2011). An augmented reality interface to contextual
information. Virtual reality, 15(2-3), 161-173.
2. García-Macías, J., Alvarez-Lozano, J., & Estrada-
Martinez, P. (2011). Browsing the internet of things
with sentient visors. Computer, 44(5), 46-52.
3. Gutiérrez, V., Galache, J. A., Sánchez, L., Muñoz, L.,
Hernández-Muñoz, J. M., Fernandes, J., & Presser, M.
(2013). Smartsantander: Internet of things research
and innovation through citizen participation. In The
Future Internet Assembly (pp. 173-186). Springer
Berlin Heidelberg.
4. Mohan, A., Woo, G., Hiura, S., Smithwick, Q., &
Raskar, R. (2009). Bokode: imperceptible visual tags
for camera based interaction from a distance. In ACM
Transactions on Graphics (TOG) (Vol. 28, No. 3, p.
98). ACM.
5. Oh, S., & Woo, W. (2009). CAMAR: Context-aware
mobile augmented reality in smart space. In
Proceedings of International Workshop on Ubiquitous
Virtual Reality (pp. 15-18).
6. Perera, C., Zaslavsky, A., Christen, P., &
Georgakopoulos, D. (2014). Context aware computing
for the internet of things: A survey. IEEE
Communications Surveys & Tutorials, 16(1), 414-454.
7. Pointr, (2015 January 18), Beacons: Everything you
need to know, Retrieved from
http://www.pointrlabs.com/blog/beacons-everything-
you-need-to-know/
8. Razzaque, M. A., Milojevic-Jevric, M., Palade, A., &
Clarke, S. (2016). Middleware for internet of things: a
survey. IEEE Internet of Things Journal, 3(1), 70-95.
9. Sato, K., Sakamoto, N., & Shimada, H. (2015).
Visualization and Management Platform with
Augmented Reality for Wireless Sensor Networks.
Wireless Sensor Network, 7(01), 1.
10. Sicari, S., Rizzardi, A., Grieco, L. A., & Coen-
Porisini, A. (2015). Security, privacy and trust in
Internet of Things: The road ahead. Computer
Networks, 76, 146-164.
11. Valli, A. (2008). The design of natural interaction.
Multimedia Tools and Applications, 38(3), 295-305.
12. Want, R., Schilit, B. N., & Jenson, S. (2015). Enabling
the Internet of Things. IEEE Computer, 48(1), 28-35.