Article

A semantic-based federated cloud system for emergency response

Abstract

Cloud federation can be described through the concept of collaboration, where each organization has its own cloud(s) dealing with a different and independent domain but needs to work together with other organizations in order to fulfill a specific shared objective. According to this perspective, the federation is a collection of interacting clouds that collaborate with one another through the instantiation and management of shared subsets of resources (computation and storage resources as well as sensors and actuators). This idea could be profitably used in scenarios in which different organizations have to share several resources (e.g., emergency response or disaster management scenarios). On the other hand, when different independent organizations share their resources, several issues arise, one of which is interoperability. As a consequence, this work introduces a framework for ontology-based resource life cycle management and provisioning in a federated cloud infrastructure. The main contribution of this work is the redesign of a cloud infrastructure architecture from the ground up, leveraging Semantic Web and Semantic Web Service technologies and natively supporting federated provisioning of any kind of resource. This paper exploits a flood emergency response system as its motivating scenario. Copyright © 2014 John Wiley & Sons, Ltd.
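To make the abstract's central idea concrete, the sketch below shows how a shared resource might be described in an ontology-backed federation registry so that a partner cloud can discover it. This is a minimal illustration, not the authors' actual ontology: the fed: vocabulary, class names and URIs are all hypothetical, and rdflib stands in for the paper's knowledge base.

```python
# Minimal sketch (not the authors' actual ontology): describing a shared
# federation resource in RDF so that a partner cloud can discover it.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

FED = Namespace("http://example.org/federation#")  # hypothetical vocabulary

g = Graph()
g.bind("fed", FED)

sensor = URIRef("http://cloud-a.example.org/resources/river-gauge-17")
g.add((sensor, RDF.type, FED.SensorResource))
g.add((sensor, FED.ownedBy, URIRef("http://cloud-a.example.org/")))
g.add((sensor, FED.sharedWith, URIRef("http://cloud-b.example.org/")))
g.add((sensor, FED.observes, Literal("river water level")))
g.add((sensor, FED.lifecycleState, Literal("provisioned")))

# A federation member queries the shared knowledge base for usable sensors.
results = g.query("""
    PREFIX fed: <http://example.org/federation#>
    SELECT ?r WHERE {
        ?r a fed:SensorResource ;
           fed:sharedWith <http://cloud-b.example.org/> .
    }""")
for row in results:
    print(row.r)
```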

... Thus leading to serious challenges that hinder the widespread adoption of the cloud. These challenges are often related to Cloud resource and service description ([8], [9], [10], [2], [4], [3], [11]), Cloud security ([12], [13], [14], [15]), Cloud service discovery, selection and negotiation ([6], [7], [16], [17], [18], [19], [20], [21], [22], [23], [24]) and Cloud interoperability and portability ([25], [26], [27], [28], [29], [30], [31], [32], [33], [34], [35]). In fact, the lack of semantics represents a key factor that has strongly contributed to the emergence of such challenges. ...
... It is used to consolidate heterogeneous data sources, upon which the cloud management engine is developed. Manno et al. [31] introduced a framework for ontology-based resource life cycle management and provisioning in a federated cloud infrastructure to achieve interoperability among heterogeneous and autonomous clouds. The proposed framework is based on a distributed and temporal KB embodied in terms of three ontologies. ...
... As concerns RQ6, we remark that some approaches have developed tools with the aim of implementing their solutions, using both semantic programming APIs and programming languages. However, except for the mOSAIC project and [31], none of the approaches provide software applications or supporting tools already available for practice. As to RQ7, we observe that the validation of approaches in practice is, in most cases, either partially or totally missing. ...
Conference Paper
Full-text available
During the last years we have seen a dramatic increase in new Cloud providers, applications, services, management platforms, data, etc., reaching a level of complexity that implies the necessity of new solutions to deal with such vast, shared and heterogeneous services and resources. Consequently, challenges often related to interoperability, portability, security, discovery, selection, negotiation and description of cloud services and resources arise. In this sense, Semantic Web technologies, which hold great potential for cloud computing, have proven an effective means of relieving these challenges. This paper examines and explores the role of Semantic Web technologies in the cloud across a wide variety of literature. Various approaches, architectures and frameworks are screened and evaluated based on eight prime research questions. At the end of the review, research opportunities in the form of a roadmap are discussed.
... Ontology-based IoT service matching is widely studied in the existing literature [16,18]. These works often refer to and maintain compatibility with the Semantic Sensor Network (SSN) ontology [4]. ...
Preprint
A large number of smart devices (things) are being deployed with the swift development of Inter- net of Things (IOT). These devices, owned by different organizations, have a wide variety of services to offer over the web. During a natural disaster or emergency (i.e., a situation), for example, relevant IOT services can be found and put to use. However, appropriate service matching methods are required to find the relevant services. Organizations that manage situation responses and organizations that provide IOT services are likely to be independent of each other, and therefore it is difficult for them to adopt a common ontological model to facilitate the service matching. Moreover, there exists a large conceptual gap between the domain of discourse for situations and the domain of discourse for services, which cannot be adequately bridged by existing techniques. In this paper, we address these issues and propose a new method, WikiServe, to identify IOT services that are functionally relevant to a given situation. Using concepts (terms) from situation and service descriptions, WikiServe employs Wikipedia as a knowledge source to bridge the conceptual gap between situation and service descriptions and match functionally relevant IOT services for a situation. It uses situation terms to retrieve situation related articles from Wikipedia. Then it creates a ranked list of services for the situation using the weighted occurrences of service terms in weighted situation articles. WikiServe performs better than a commonly used baseline method in terms of Precision, Recall and F measure for service matching.
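The ranking WikiServe describes (weighted occurrences of service terms in weighted situation articles) can be illustrated in a few lines. This is a hedged sketch of that kind of scoring with invented weights and toy data; the paper's exact formula and term weighting may differ.

```python
# Illustrative sketch of WikiServe-style scoring: each service is ranked by
# the weighted occurrences of its terms in weighted situation articles.
# All weights and texts below are made up for demonstration.

def score_service(service_terms, weighted_articles):
    """weighted_articles: list of (article_text, article_weight) pairs."""
    score = 0.0
    for text, weight in weighted_articles:
        tokens = text.lower().split()
        for term in service_terms:
            score += weight * tokens.count(term.lower())
    return score

articles = [("flood water rescue boat evacuation", 0.9),
            ("river level sensor flood warning", 0.6)]
services = {"boat-dispatch": ["boat", "rescue"],
            "water-quality": ["water", "sensor"]}

ranked = sorted(services, key=lambda s: score_service(services[s], articles),
                reverse=True)
print(ranked)  # services ordered by relevance to the situation
```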
... Social media, emergency management, GIS, Twitter and emergency response appeared most often with other keywords, with total link counts of 35, 21, 19, 17 and 16, respectively. In 2014, attention was given to emergency response, interoperability, situation awareness and ontology, using technologies like remote sensing (Tralli and …). In 2015, new technologies like cloud computing (Manno et al. 2015), crowdsourcing models (Poblet et al. 2018) and data mining techniques (Li et al. 2017), in addition to wireless sensor networks (Devasena and Sowmya 2015), were spotlight topics in disaster management. In 2016, data from social media, especially Twitter, was used for disaster management, as the keyword has 75 occurrences in that year. ...
Article
Full-text available
Disasters are cataclysmic events that cause significant loss of natural, human and financial resources. Their repercussions are exacerbated by climate change, making disaster management a hot topic for academic research. The exponential growth in the utilization of ICT (Information and Communication Technology) with other domains has motivated the scientific community to integrate it with disaster management. The aim of this paper is to put forward a scientometric examination assessing the corpus of research performed on various types of disasters and the use of ICT over the last 10 years. Annual growth of publication output, related subject categories and productivity parameters were calculated for the evaluation of Scopus bibliographic data. Moreover, the paper presents insights into productive journals, cooperation among authors and collaboration among nations from all over the world. It further investigates prominent institutes and the co-occurrence of key research topics. This organized study will help upcoming authors conduct effective future research.
... Federated cloud has been proposed to join multiple external and internal clouds together to achieve extended resources and improved scalability. It offers a number of additional benefits, such as increased interoperability, improved economies of scale and the removal of cloud isolation [7,8]. ...
Article
Full-text available
Federated cloud enables data sharing across different organizations. This paper proposes a distributed search index (DS-index) solution to provide distributed search capability in federated cloud, with the aim of supporting multi-attribute range queries with secondary indexes. DS-index deploys a three-layer architecture with a multi-attribute index overlay, a tree-based P2P network layer, and a federated cloud layer. We present a distributed search algorithm for the three-layer architecture. In order to facilitate distributed search, we propose a dynamic mapping algorithm for DS-index to map between the different layers. In addition, a Markov chain-based cost model is defined to facilitate node selection and mapping. With the dynamic mapping algorithm and cost model, the proposed DS-index solution is more cost-effective than traditional P2P networks, as it can reduce the network cost in terms of index maintenance cost and node selection cost. The experiments demonstrate that our DS-index solution, along with its cost model, can save computation resources and reduce network bandwidth consumption by around 30% compared to the one without the cost model. It can also reduce the number of node splits/merges by around 20% compared to the state-of-the-art solution.
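As a rough illustration of what a Markov chain-based cost model contributes to node selection, the sketch below estimates a node's expected index-maintenance cost from its state-transition behaviour. The two states, transition probabilities and per-state costs are invented; DS-index's actual model is more elaborate.

```python
# Toy illustration of a Markov-chain cost estimate for node selection, in
# the spirit of DS-index's cost model (states, probabilities and costs here
# are invented for demonstration).
import numpy as np

# Two node states: 0 = stable, 1 = churning (splitting/merging often).
P = np.array([[0.9, 0.1],     # state transition probabilities
              [0.4, 0.6]])
cost = np.array([1.0, 8.0])   # per-step index-maintenance cost per state

# Stationary distribution pi solves pi P = pi with sum(pi) = 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

expected_cost = float(pi @ cost)
print(f"stationary distribution {pi}, expected cost {expected_cost:.2f}")
# A mapping algorithm would prefer nodes with lower expected cost.
```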
... Last but not least, the work by Manno et al. [152] relates to a similar scenario, smart cities, although in this case the focus is on emergency management. The authors propose a semantic-based federated cloud system, that is, a collection of clouds that collaborate with one another through the instantiation and management of shared subsets of resources: computation and storage resources as well as sensors and actuators. ...
Article
High-performance computing (HPC) is now present in many spheres and domains of modern society. This special issue of Concurrency and Computation contains research papers addressing the state of the art in HPC and simulation and reflects some of the trends reported earlier. P. Trunfio deals with a peer-to-peer file-sharing model that takes energy efficiency into account, achieved by means of a sleep-and-wake approach. The second manuscript in this category is the work by Yeh and colleagues, which prioritizes both memory management and disk scheduling processes. The approach, applied to the Linux OS, allows the authors to improve process performance. Alexandru and colleagues deal with parallel computing and computation time prediction as applied to computational electromagnetic modeling and simulation. Their prediction model allows for an analytical estimation of the required number of computing resources in order to optimize their use while delivering the performance sought. The work by Ortega and colleagues presents a novel, compact storage format for large sparse matrices coupled with a hybrid MPI-GPU parallelization. This combined approach allows systems to extend the dimension of the Helmholtz problem they are able to solve. Neves and Araujo also deal with large sparse matrices, so common in many scientific problems, focusing on binary matrix-vector multiplication in particular. Guo and Wang also propose a prediction model; in their case, it assists researchers in evaluating the performance of a GPU architecture when a matrix-vector multiplication is required.
Article
The advent of the Internet of Things (IoT) paradigm is having a great ripple effect in the field of disaster management, as well as many other areas. Victim detection has been studied through rescuer- or victim-oriented approaches that utilise advanced IoT technologies, but each has weaknesses depending on the disaster type. In this paper, we propose a Victim Detection Platform (VDP) architecture combining both approaches and using various advanced technologies such as IoT, drones and edge/cloud computing to offer higher quality services, which ultimately result in swifter, better responses and more lives saved. First, we review related literature regarding three crucial issues (i.e., multi-modal evaluation for reliable data, edge-based real-time response, and privacy-preserved Big Data analysis) that the service must satisfy. To achieve these important aspects, the VDP architecture is described along with the roles and relations of the technologies and concepts it leverages and the data sources it considers. To explore the feasibility of the proposed approach in real disasters, a validation scenario covering the critical issues and details of a victim detection task is illustrated, and then the techniques to be utilised in the proposed VDP, together with their justifications, are discussed. Finally, we debate how these techniques are harmonised in the VDP and what the challenges are.
Article
Purpose This paper presents a statistical analysis of research into emergency management using information systems for the period 2000-2016. Design/methodology/approach In this study, research trends in the area of emergency management using information systems are analysed using various parameters, including trends in publications and citations, disciplinary distribution, journals, research institutions and regional cooperation. Through a keyword co-occurrence analysis, this study identifies the evolution of the main keywords in this area and examines the changes and developments in the main focus of scholars in this period. The study also explores the main research orientations in the field by analysing and integrating the results of two cluster analyses conducted from keyword- and reference-based perspectives, respectively. Findings The area of emergency management using information systems has received increased attention and interest from researchers and practitioners. It is suggested that more cooperation between research institutions is required to help facilitate the further development of the area. Six main research orientations since the research area first became popular in 2006 are identified, namely Web 2.0-enabled research, geographic information technology, information technology-based research, the contextual use of information technology, crisis collaboration research and mass media communication research. Originality/value This study is the first to comprehensively map the landscape of emergency management by conducting a bibliometric analysis of the research using information systems. Our findings can help academics and emergency managers gain a comprehensive understanding of the research area and guide scholars towards producing more effective findings.
Article
Full-text available
Abundant sensor data are now available online from a wealth of sources, which greatly enhances research efforts on the Digital Earth. The combination of distributed sensor networks and expanding citizen-sensing capabilities provides a more synchronized image of Earth's social and physical landscapes. However, it remains difficult for researchers to use such heterogeneous Sensor Webs for scientific applications, since data are published following different standards and protocols and in arbitrary formats. In this paper, we investigate the core challenges faced when consuming multiple sources for environmental applications using the Linked Data approach. We design and implement a system to achieve better data interoperability and integration by republishing real-world data as linked geo-sensor data. Our contributions include presenting: (1) best practices for reusing and matching the W3C Semantic Sensor Network (SSN) ontology and other popular ontologies for heterogeneous data modeling in the water resources application domain, (2) a newly developed spatial analysis tool for creating links, and (3) a set of RESTful, OGC Sensor Observation Service (SOS)-like Linked Data APIs. Our results show how a Linked Sensor Web can be built and used within the integrated water resource decision support application domain.
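One of the listed contributions is a spatial analysis tool for creating links between datasets. The sketch below shows the general idea under simple assumptions: a link is asserted between a sensor and a nearby named feature when their haversine distance falls below a threshold. The URIs, coordinates, the nearFeature property and the 10 km threshold are all illustrative.

```python
# Sketch of spatial link creation: connect a sensor's location to a nearby
# named water feature so the two datasets become navigable as Linked Data.
# Coordinates, URIs and the threshold are invented for demonstration.
import math
from rdflib import Graph, URIRef

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

sensor = (URIRef("http://example.org/sensor/gauge-3"), 35.05, -106.60)
feature = (URIRef("http://sws.geonames.org/5475352/"), 35.08, -106.65)

g = Graph()
if haversine_km(sensor[1], sensor[2], feature[1], feature[2]) < 10.0:
    g.add((sensor[0], URIRef("http://example.org/vocab#nearFeature"), feature[0]))
print(g.serialize(format="nt"))
```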
Article
Full-text available
Flooding is one of the major disasters occurring in various parts of the world. A system for real-time monitoring of water conditions (water level, flow and precipitation level) was developed to be employed in monitoring floods in Nakhon Si Thammarat, a southern province in Thailand. The two main objectives of the developed system are to serve 1) as an information channel for flooding between the involved authorities and experts, to enhance their responsibilities and collaboration, and 2) as a web-based information source for the public, responding to their need for information on water conditions and flooding. The developed system is composed of three major components: a sensor network, a processing/transmission unit, and a database/application server. Real-time water-condition data can be monitored remotely by utilizing a wireless sensor network that uses mobile General Packet Radio Service (GPRS) communication to transmit measured data to the application server. We implemented VirtualCOM, a middleware that enables the application server to communicate with the remote sensors connected to a GPRS data unit (GDU). With VirtualCOM, a GDU behaves as if it were a cable directly connecting the remote sensors to the application server. The application server is a web-based system implemented using PHP and Java as the web application and MySQL as its relational database. Users can view real-time water conditions as well as water-condition forecasts directly from the web via a web browser or via WAP. The developed system has demonstrated the applicability of today's sensors in wirelessly monitoring real-time water conditions.
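A minimal sketch of the server-side ingestion path such a system needs is shown below: readings arrive from remote gauges (via GPRS and VirtualCOM in the real deployment, which used PHP/Java and MySQL) and are stored for the web front end. The record format, station names and the 4.5 m warning threshold are invented for illustration.

```python
# Minimal sketch of server-side ingestion for a flood-monitoring system:
# parse a gauge reading and store it; warn when a level looks dangerous.
# The wire format and threshold are assumptions, not from the paper.
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (station TEXT, level_m REAL, ts REAL)")

def ingest(raw: str):
    """Parse a 'station,level' line as a gauge might send it."""
    station, level = raw.split(",")
    conn.execute("INSERT INTO readings VALUES (?, ?, ?)",
                 (station, float(level), time.time()))
    if float(level) > 4.5:   # illustrative flood-warning threshold
        print(f"WARNING: {station} water level {level} m")

ingest("NST-01,3.2")
ingest("NST-02,4.8")
print(conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0], "readings stored")
```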
Conference Paper
Full-text available
The deployment and provisioning of intelligent systems and utility-based services will greatly benefit from a cloud-based intelligent middleware framework that can be deployed over multiple infrastructure providers (such as smart cities, hospitals, campuses and private enterprises, offices, etc.) in order to deliver on-demand access to smart services. This paper introduces the formulation of an open-source integrated intelligent platform as a solution for integrating global sensor networks, provides design principles for cloud-based intelligent environments, and discusses infrastructure functional modules and their implementation. The paper briefly reviews the technologies enabling the framework, emphasizing the on-demand establishment of smart city services based on the automated formulation of ubiquitous intelligence of Internet-connected objects. The framework builds on the GSN infrastructure and leverages the W3C SSN-XG formal language and the IETF CoAP protocol, providing support for enabling intelligent (sensor-object) services. The service requirements for a particular smart city scenario are introduced, and initial implementations and results from the simulations performed are studied and discussed.
Conference Paper
Full-text available
Cloud Computing is a paradigm that applies a service model to infrastructures, platforms and software. In the last few years, this new idea has been showing its potential and how, in the long run, it will affect Information Technology and the way we interface with computation and storage. This article introduces the FCFA project, a framework for ontology-based resource life-cycle management and provisioning in a federated Cloud Computing infrastructure. Federated Clouds are presumably the first step toward a Cloud 2.0 scenario where different providers will be able to share their assets in order to create a free and open Cloud Computing marketplace. The contribution of this article is a redesign of a Cloud Computing infrastructure architecture from the ground up, leveraging semantic web technologies and natively supporting federated resource provisioning.
Article
Full-text available
Managing virtualized services efficiently over the cloud is an open challenge. Traditional models of software development are not appropriate for the cloud computing domain, where software (and other) services are acquired on demand. In this paper, we describe a new integrated methodology for the life cycle of IT services delivered on the cloud and demonstrate how it can be used to represent and reason about services and service requirements and so automate service acquisition and consumption from the cloud. We have divided the IT service life cycle into five phases of requirements, discovery, negotiation, composition, and consumption. We detail each phase and describe the ontologies that we have developed to represent the concepts and relationships for each phase. To show how this life cycle can automate the usage of cloud services, we describe a cloud storage prototype that we have developed. This methodology complements previous work on ontologies for service descriptions in that it is focused on supporting negotiation for the particulars of a service and going beyond simple matchmaking.
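The five phases of the proposed life cycle can be pictured as a simple state machine, as in the sketch below. The phase names come from the abstract; the strictly linear transition rule is an assumption for illustration, and the paper attaches an ontology and reasoning to each phase rather than plain code.

```python
# Sketch of the five-phase IT service life cycle named in the abstract,
# modeled as a linear state machine (the linearity is an assumption; the
# paper's methodology attaches ontologies and reasoning to each phase).
from enum import Enum, auto

class Phase(Enum):
    REQUIREMENTS = auto()
    DISCOVERY = auto()
    NEGOTIATION = auto()
    COMPOSITION = auto()
    CONSUMPTION = auto()

NEXT = {Phase.REQUIREMENTS: Phase.DISCOVERY,
        Phase.DISCOVERY: Phase.NEGOTIATION,
        Phase.NEGOTIATION: Phase.COMPOSITION,
        Phase.COMPOSITION: Phase.CONSUMPTION}

def advance(phase: Phase) -> Phase:
    if phase not in NEXT:
        raise ValueError("service is already being consumed")
    return NEXT[phase]

p = Phase.REQUIREMENTS
while p is not Phase.CONSUMPTION:
    nxt = advance(p)
    print(p.name, "->", nxt.name)
    p = nxt
```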
Conference Paper
Full-text available
Cloud computing infrastructures support dynamic and flexible access to computational, network and storage resources. To date, several disjoint industrial and academic technologies provide infrastructure-level access to Clouds. Especially for industrial platforms, the evolution of de facto standards goes together with worries about user lock-in to a platform. The Contrail project [6] proposes a federated and integrated approach to Clouds. In this work we present and motivate the architecture of Contrail federations. Contrail's goal is to minimize the burden on the user and increase the efficiency of using Cloud platforms by performing both a vertical and a horizontal integration. To this end, Contrail federations play a key role, allowing users to exploit resources belonging to different cloud providers, regardless of the providers' technology, through a homogeneous, secure interface. Vertical integration is achieved by developing both the Infrastructure- and the Platform-as-a-Service levels within the project. A third key point is the adoption of a fully open-source approach toward technology and standards. Besides supporting user authentication and application deployment, Contrail federations aim at providing extended SLA management functionalities by integrating the SLA management approach of the SLA@SOI project in the federation architecture.
Article
Full-text available
In the current worldwide ICT scenario, a constantly growing number of ever more powerful devices (smartphones, sensors, household appliances, RFID devices, etc.) join the Internet, significantly impacting the global traffic volume (data sharing, voice, multimedia, etc.) and foreshadowing a world of (more or less) smart devices, or “things” in the Internet of Things (IoT) perspective. Heterogeneous resources can be aggregated and abstracted according to tailored thing-like semantics, thus enabling the Things as a Service paradigm, or better a “Cloud of Things”. In the Future Internet initiatives, sensor networks will assume an even more crucial role, especially in making cities smarter. Smarter sensors will be the peripheral elements of a complex future ICT world. However, due to differences in the “appliances” being sensed, smart sensors are very heterogeneous in terms of communication technologies, sensing features and elaboration capabilities. This article intends to contribute to the design of a pervasive infrastructure where new-generation services interact with the surrounding environment, thus creating new opportunities for contextualization and geo-awareness. The architecture proposal is based on Sensor Web Enablement standard specifications and makes use of the Contiki operating system to realize the IoT. Smart cities are assumed as the reference scenario.
Article
Full-text available
The W3C Semantic Sensor Network Incubator group (the SSN-XG) produced an OWL 2 ontology to describe sensors and observations: the SSN ontology, available at http://purl.oclc.org/NET/ssnx/ssn. The SSN ontology can describe sensors in terms of capabilities, measurement processes, observations and deployments. This article describes the SSN ontology. It further gives an example and describes the use of the ontology in recent research projects.
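A small example of using the ontology is sketched below: a sensor and the property it observes are described with rdflib, using the namespace URL given in the abstract. The flood-domain individuals are invented; ssn:Sensor, ssn:Property and ssn:observes follow common SSN usage.

```python
# Describing a sensor with the SSN ontology from the article. The EX
# individuals are hypothetical; the SSN terms follow common usage of the
# incubator-group ontology referenced in the abstract.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

SSN = Namespace("http://purl.oclc.org/NET/ssnx/ssn#")
EX = Namespace("http://example.org/flood#")

g = Graph()
g.bind("ssn", SSN)
g.bind("ex", EX)

g.add((EX["gauge-3"], RDF.type, SSN.Sensor))
g.add((EX["gauge-3"], SSN.observes, EX.RiverWaterLevel))
g.add((EX.RiverWaterLevel, RDF.type, SSN.Property))

print(g.serialize(format="turtle"))
```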
Article
Full-text available
The challenges the Internet of Things (IoT) is facing are directly inherited from today's Internet. However, they are amplified by the anticipated large-scale deployment of devices and services, information flows and direct user involvement in the IoT. The challenges are many, and we focus on addressing those related to scalability, the heterogeneity of IoT components, and the highly dynamic and unknown nature of the network topology. In this paper, we give an overview of a service-oriented middleware solution that addresses those challenges using semantic technologies to provide interoperability and flexibility. We especially focus on modeling a set of ontologies that describe devices and their functionalities and thoroughly model the domain of physics. The physics domain is indeed at the core of the IoT, as it allows the approximation and estimation of functionalities usually provided by things. Those functionalities will be deployed as services on appropriate devices through our middleware.
Conference Paper
Full-text available
This paper intends to contribute to the design of a pervasive infrastructure where new-generation services interact with the surrounding environment, collecting data and applying management strategies. From this perspective, computing, storage and sensing become complementary aspects to be coordinated. In this way, innovative and value-added services can be implemented by bridging Clouds with the Internet of Things. Heterogeneous resources can be aggregated and abstracted according to tailored thing-like semantics, thus enabling a Things as a Service paradigm, a way to build a "Cloud of Things" (CoT). This paper mainly focuses on the implementation of the underlying infrastructure at the basis of the CoT. An ad-hoc architecture and some preliminary background for this challenging view are provided and discussed, identifying guidelines and future directions.
Conference Paper
Full-text available
Semantic modeling for the Internet of Things has become fundamental to resolving the problem of interoperability, given the distributed and heterogeneous nature of the "things". Most current research has primarily focused on modeling devices and resources while paying less attention to the access and utilisation of the information generated by the things. The idea that things are able to expose standard service interfaces coincides with service-oriented computing and, more importantly, represents a scalable means for business services and applications that need context awareness and intelligence to access and consume physical-world information. We present the design of a comprehensive description ontology for knowledge representation in the domain of the Internet of Things and discuss how it can be used to support tasks such as service discovery, testing and dynamic composition.
Conference Paper
Full-text available
In this paper, we propose the SemSense architecture for collecting real-world data from a physical system of sensors and publishing it on the Web, thus contributing to the Web of Things. SemSense comprises four components: (1) the data collection component, (2) the storage component, (3) the semantic enrichment component and (4) the publishing component, which are described and implemented for an existing deployment of a sensor network. Through these components, real-world data is collected from the physical devices, processed, equipped with semantic information and published on the Web. The paper addresses the challenges of efficiently collecting data and metadata from sensors and publishing them following the Linked Data principles.
Article
Full-text available
We present a vision of the future of emergency management that better supports inclusion of activities and information from members of the public during disasters and mass emergency events. Such a vision relies on integration of multiple subfields of computer science, and a commitment to an understanding of the domain of application. It supports the hopes of a grid/cyberinfrastructure-enabled future that makes use of social software. However, in contrast to how emergency management is often understood, it aims to push beyond the idea of monitoring on-line activity, and instead focuses on an understudied but critical aspect of mass emergency response—the needs and roles of members of the public. By viewing the citizenry as a powerful, self-organizing, and collectively intelligent force, information and communication technology can play a transformational role in crisis. Critical topics for research and development include an understanding of the quantity and quality of information (and its continuous change) produced through computer-mediated communication during emergencies; mechanisms for ensuring trustworthiness and security of information; mechanisms for aligning informal and formal sources of information; and new applications of information extraction techniques.
Article
Full-text available
The Internet of Things is a hyped term and many definitions for it exist. Worse still, it comes with a lot of related terminology that is not used uniformly either, hindering scientific discourse. This paper tries to bring clarity by describing the most important terms like things, devices, entities of interest, resources, addressing, identity and, more importantly, the relationships between them.
Conference Paper
Full-text available
Cloud Computing is rising fast, with its data centres growing at an unprecedented rate. However, this has come with concerns of privacy, efficiency at the expense of resilience, and environmental sustainability, because of the dependence on Cloud vendors such as Google, Amazon, and Microsoft. Community Cloud Computing makes use of the principles of Digital Ecosystems to provide a paradigm for Clouds in the community, offering an alternative architecture for the use cases of Cloud Computing. It is more technically challenging to deal with issues of distributed computing, such as latency, differential resource management, and additional security requirements. However, these are not insurmountable challenges, and with the need to retain control over our digital lives and the potential environmental consequences, it is a challenge we must pursue.
Conference Paper
Full-text available
Cloud infrastructure providers may form Cloud federations to cope with peaks in resource demand and to make large-scale service management simpler for service providers. To realize Cloud federations, a number of technical and managerial difficulties need to be solved. We present ongoing work addressing three related key management topics, namely, specification, scheduling, and monitoring of services. Service providers need to be able to influence how their resources are placed in Cloud federations, as federations may cross national borders or include companies in direct competition with the service provider. Based on related work in the RESERVOIR project, we propose a way to define service structure and placement restrictions using hierarchical directed acyclic graphs. We define a model for scheduling in Cloud federations that abides by the specified placement constraints and minimizes the risk of violating Service-Level Agreements. We present a heuristic that helps the model determine which virtual machines (VMs) are suitable candidates for migration. To aid the scheduler, and to provide unified data to service providers, we also propose a monitoring data distribution architecture that introduces cross-site compatibility by means of semantic metadata annotations.
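The placement-restriction idea can be made concrete with a small check run before scheduling, as sketched below. The rule format and site attributes are invented; the paper itself expresses service structure and restrictions as hierarchical directed acyclic graphs rather than flat dictionaries.

```python
# Sketch of the kind of placement check the paper motivates: a service
# provider's restriction (e.g., "never place this VM in a competitor's
# data centre or outside approved countries") is evaluated before
# scheduling. Rule format and site attributes are assumptions.
SITES = {
    "site-a": {"country": "SE", "operator": "partner"},
    "site-b": {"country": "US", "operator": "competitor"},
}

def allowed(vm_rules: dict, site: str) -> bool:
    attrs = SITES[site]
    # Default for "countries" permits the site's own country (no restriction).
    if attrs["country"] not in vm_rules.get("countries", {attrs["country"]}):
        return False
    if attrs["operator"] in vm_rules.get("forbidden_operators", set()):
        return False
    return True

vm = {"countries": {"SE", "DE"}, "forbidden_operators": {"competitor"}}
print([s for s in SITES if allowed(vm, s)])  # -> ['site-a']
```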
Conference Paper
Full-text available
This paper focuses on the emerging problem of semantic interoperability between heterogeneous cooperating Cloud platforms. We try to pave the way towards a Reference Architecture for Semantically Interoperable Clouds (RASIC). To this end, three fundamental and complementary computing paradigms, namely Cloud computing, Service Oriented Architectures (SOA) and lightweight semantics are used as the main building blocks. The open, generic Reference Architecture for Semantically Interoperable Clouds introduces a scalable, reusable and transferable approach for facilitating the design, deployment and execution of resource intensive SOA services on top of semantically interlinked Clouds. In order to support the development of semantically interoperable Cloud systems based on RASIC, the model of a common Cloud API is also specified.
Conference Paper
Full-text available
Many efforts are centered around creating large-scale networks of “smart things” found in the physical world (e.g., wireless sensor and actuator networks, embedded devices, tagged objects). Rather than exposing real-world data and functionality through proprietary and tightly-coupled systems, we propose to make them an integral part of the Web. As a result, smart things become easier to build upon. Popular Web languages (e.g., HTML, Python, JavaScript, PHP) can be used to easily build applications involving smart things and users can leverage well-known Web mechanisms (e.g., browsing, searching, bookmarking, caching, linking) to interact and share these devices. In this paper, we begin by describing the Web of Things architecture and best-practices based on the RESTful principles that have already contributed to the popular success, scalability, and modularity of the traditional Web. We then discuss several prototypes designed in accordance with these principles to connect environmental sensor nodes and an energy monitoring system to the World Wide Web. We finally show how Web-enabled smart things can be used in lightweight ad-hoc applications called “physical mashups”.
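In the Web of Things style the paper advocates, a sensor is simply an HTTP resource returning a standard representation. The sketch below exposes one invented temperature sensor as JSON using only the Python standard library; a real deployment would run on the device or a gateway and add the linking, caching and mashup mechanisms discussed in the paper.

```python
# A minimal "Web of Things" style sketch: a (faked) sensor exposed as a
# RESTful HTTP resource returning JSON, so ordinary web tooling can browse
# and link to it. The path and payload shape are illustrative.
import json
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

class ThingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/sensors/temperature":
            body = json.dumps({"unit": "celsius",
                               "value": round(random.uniform(18, 25), 1)})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode())
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Serves until interrupted; try: curl localhost:8080/sensors/temperature
    HTTPServer(("localhost", 8080), ThingHandler).serve_forever()
```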
Conference Paper
Full-text available
In this paper, we propose an Internet of Things (IoT) virtualization framework to support sensor event processing and reasoning for connected objects by providing a semantic overlay of the underlying IoT cloud. The framework uses the sensor-as-a-service notion to expose the functional aspects of the IoT cloud's connected objects in the form of web services. It uses an adapter-oriented approach to address the issue of connectivity with various types of sensor nodes. We employ semantically enhanced access policies to ensure that only authorized parties can access the IoT framework services, which enhances the overall security of the proposed framework. Furthermore, the use of the event-driven service-oriented architecture (e-SOA) paradigm helps the framework leverage the monitoring process by dynamically sensing and responding to different connected-object sensor events. We present our design principles and implementations, and demonstrate the development of an IoT application with reasoning capability using a green school motorcycle (GSMC) case study. Our exploration shows that the amalgamation of e-SOA, semantic web technologies and virtualization paves the way to addressing the connectivity, security and monitoring issues of the IoT domain.
Article
Full-text available
This article discusses the areas in which semantic models can support cloud computing. Semantic models are helpful in three aspects of cloud computing. The first is functional and nonfunctional definitions. The ability to define application functionality and quality-of-service details in a platform-agnostic manner can immensely benefit the cloud community. The second aspect is data modeling. Semantic modeling of data to provide a platform-independent data representation would be a major advantage in the cloud space. The third aspect is service description enhancement.
Conference Paper
Full-text available
Service Clouds are a key emerging feature of the Future Internet which will provide a platform to execute virtualized services. To effectively operate a service cloud, there needs to be a monitoring system which provides data on the actual usage and changes in resources of the cloud and of the services running in the cloud. We present the main aspects of Lattice, a new monitoring framework, which has been specially designed for monitoring resources and services in virtualized environments. Finally, we discuss the issues related to the federation of service clouds, and how this affects monitoring in particular.
Conference Paper
Full-text available
Description Logics (DLs) are the formal foundations of the standard web ontology languages OWL-DL and OWL-Lite. In the Semantic Web and other domains, ontologies are increasingly seen also as a mechanism to access and query data repositories. This novel context poses an original combination of challenges that has not been addressed before: (i) sufficient expressive power of the DL to capture common data modeling constructs; (ii) well established and flexible query mechanisms such as Conjunctive Queries (CQs); (iii) optimization of inference techniques with respect to data size, which typically dominates the size of ontologies. This calls for investigating the data complexity of query answering in expressive DLs. While the complexity of DLs has been studied extensively, data complexity has been characterized only for answering atomic queries, and was still open for answering CQs in expressive DLs. We tackle this issue and prove a tight coNP upper bound for the problem in SHIQ, as long as no transitive roles occur in the query. We thus establish that for a whole range of DLs from AL to SHIQ, answering CQs with no transitive roles has coNP-complete data complexity. We obtain our result by a novel tableaux-based algorithm for checking query entailment, inspired by the one in [19], but which manages the technical challenges of simultaneous inverse roles and number restrictions (which leads to a DL lacking the finite model property).
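For readers less familiar with the query class involved, a conjunctive query over a DL knowledge base has the shape of the illustrative example below (not taken from the paper). The result above says that answering such queries, as long as no transitive roles occur in them, is coNP-complete in data complexity for DLs ranging from AL to SHIQ.

```latex
% Illustrative conjunctive query (no transitive roles): "cities affected by
% some flood". Flood and City are concept atoms; affects is a role atom.
q(x) \leftarrow \mathit{Flood}(y) \wedge \mathit{affects}(y, x) \wedge \mathit{City}(x)
```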
Chapter
Spatial and temporal data is plentiful on the Web, and Semantic Web technologies have the potential to make this data more accessible and more useful. Semantic Web researchers have consequently made progress towards better handling of spatial and temporal data. SPARQL, the W3C-recommended query language for RDF, does not adequately support complex spatial and temporal queries. In this work, we present the SPARQL-ST query language. SPARQL-ST is an extension of SPARQL for complex spatiotemporal queries. We present a formal syntax and semantics for SPARQL-ST. In addition, we describe a prototype implementation of SPARQL-ST and demonstrate the scalability of this implementation with a performance study using large real-world and synthetic RDF datasets.
Conference Paper
Emergency events like natural disasters and large-scale accidents pose a number of challenges. The requirement to coordinate a wide range of organizations and activities, public and private, to provide efficient help for the victims can often be complicated by the need to comply with requisite policies and procedures. Current process and service models that represent domains such as emergency planning do not provide sufficient artefacts with respect to compliance requirements. In this paper, we argue that techniques for compliance management in business processes can be applied to the emergency domain. We provide a high-level model for the representation of compliance requirements within business processes and services, and demonstrate the application of the model in the emergency planning domain. Finally, we present an analysis derived from the case study that identifies the current limitations and requirements for semantic extensions of process and service models in order to cater for compliance.
Conference Paper
The challenges of the sensor web have been well documented, and the use of appropriate semantic web technologies promises to offer potential solutions to some of these challenges (for example, how to represent sensor data, integrate it with other data sets, publish it, and reason with the data streams). To date, a large amount of work in this area has focused on sensor networks based on "traditional" hardware sensors. In recent years, citizen sensing has become a relatively well established approach for incorporating humans as sensors within a system. Often facilitated via some mobile platform, citizen sensing may incorporate observational data generated by hardware (e.g. a GPS device) or directly by the human observer. Such human observations can easily be imperfect (e.g. erroneous or fake), and sensor properties that would typically be used to detect and reason about such data, such as measurements of accuracy and sampling rate, do not exist. In this paper we discuss our work as part of the Informed Rural Passenger project, in which the passengers themselves are our main source of transport-related sensing (such as vehicle occupancy levels and available facilities). We discuss the challenges of incorporating and using such observational data in a real-world system, and describe how we are using semantic web technologies, combined with models of provenance, to address them.
Conference Paper
Cloud computing has already been adopted in a broad range of application domains. However, domains like the distributed development of embedded systems are still unable to benefit from its advancements. Besides general security concerns, a common obstacle is the incompatibility between such applications and the cloud. In particular, if applications need direct access to hardware elements, cloud computing cannot be used. In this paper we describe a novel cloud layer called Hardware as a Service (HaaS), which allows for the usage of distinct hardware components over the Internet, analogously to other cloud services. HaaS focuses on the transparent integration into an operating system of remote hardware that is distributed over multiple geographical locations. Furthermore, HaaS will not only enable the interconnection of physical systems, but also virtual hardware emulation; in this paper we therefore consider only the use of emulated hardware and the interconnection with hardware models. To demonstrate the improvement a HaaS cloud can bring, we explain its applicability to a distributed development process using an anti-lock braking system and an adaptive cruise control system from the automotive industry.
Book
Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, artificial intelligence. With Ontology Matching, researchers and practitioners will find a reference book which presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can equally be applied to database schema matching, catalog integration, XML schema matching and other related problems. The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a detailed account of matching techniques and matching systems in a systematic way from theoretical, practical and application perspectives.
Conference Paper
Since the late 1980s, the world has been working towards connectivity and convergence. In the last three decades, the convergence of information resources has happened. However, to achieve true convergence, information assets have to be shared, used and executed fruitfully by the various gadgets we use in our daily lives. The Internet of Things is a concept that leverages the power of networks to create ubiquitous sensor-actuator networks. With the advent of cloud technologies, the IoT concept can integrate even basic elements with limited computing power. This paper aims to evaluate the possibilities offered by integrating the two concepts of IoT and Cloud Computing.
Conference Paper
The exponential growth of connected devices, big data and cloud computing is driving a concrete transformation of the ICT world. This hyper-connected scenario is deeply affecting relationships between individuals, enterprises, citizens and Public Administrations, fostering innovative use cases in practically any environment and market, and introducing new opportunities and new challenges. The successful realization of this hyper-connected scenario depends on different elements of the ecosystem. In particular, it builds on the connectivity and functionalities allowed by converged Next Generation Networks (NGN) and their capacity to support and integrate with the Internet of Things, M2M and Cloud Computing. This paper aims at providing some hints about this scenario in order to contribute to analyzing its impacts on network issues and requirements, and to open the way to handling this explosion in data communication while meeting consumer expectations and turning data into meaningful social and economic benefits.
Article
Spatially distributed sensor nodes can be used to monitor systems and human conditions in a wide range of application domains. A network of body sensors in a community of people generates large amounts of contextual data that require a scalable approach for storage and processing. Cloud computing can provide a powerful, scalable storage and processing infrastructure to perform both online and offline analysis and mining of body-sensor data streams. This paper presents BodyCloud, a system architecture based on Cloud Computing for the management and monitoring of body-sensor data streams. It incorporates key concepts such as the scalability and flexibility of resources, sensor heterogeneity, and the dynamic deployment and management of user and community applications.
Article
The adoption of the Cloud computing concept and its market development are nowadays hindered by the problem of application, data and service portability between Clouds. Open application programming interfaces, standards and protocols, as well as their early integration in the software stack of new technological offers, are the key elements towards a widely accepted solution and the basic requirements for the further development of Cloud applications. An approach for a new set of APIs for Cloud application development is discussed in this paper from the point of view of portability. The first available proof-of-concept prototype implementation of the proposed API is integrated in a new open-source deployable Cloudware, namely mOSAIC, designed to deal with multiple Cloud usage scenarios and to provide further solutions for portability beyond the API.
Article
Model checking is a formal verification method widely accepted in the web service world because of its capability to reason about service behavior at the process level. It has been used as a basic tool in several scenarios, such as service selection, service validation and service composition. The importance of semantics is also widely recognized. Indeed, there are several solutions to the problem of providing semantics to web services, most of them relying on some form of Description Logic. This paper presents an integration of model checking and semantic reasoning technologies in an efficient way. This can be considered the first step toward the use of semantic model checking in problems of selection, validation, and composition. The approach relies on a representation of services at the process level based on semantically annotated state transition systems (ASTs) and a representation of specifications based on a semantically annotated version of computation tree logic (anCTL). This paper proves that the semantic model checking algorithm is sound and complete and can be accomplished in polynomial time. This approach has been evaluated with several experiments.
Article
The right information at the right time is a critical aspect of any emergency or disaster management activity. Decision making and efficiency are improved when based on complete information about the conditions of the affected area. Integrating sensors in a grid-based infrastructure, through a service-oriented architecture and via the World Wide Web, allows for real-time access to information. In this paper, we survey and classify the types of sensors useful in emergency management systems (EMS), present a strategy to organize the sensory infrastructure, and propose a multi-organizational service-based architecture that, by exploiting Web 2.0 technologies, will support emergency management activities. Lastly, we discuss a scenario where this system is employed.
Article
This paper proposes an approach for supporting the analysis of collected energy consumption data in combination with structured system models to reveal correlations between energy usage and related properties of products, operations and equipment. The described method serves as a starting point for the creation of tailored simulation models for energy consumption forecasts that can be used in the planning phase of manufacturing systems. Therefore, several energy-oriented simulation methods are introduced and discussed regarding their suitability for different use cases in manufacturing engineering.
Article
This paper outlines the ways in which information technologies (ITs) were used in the Haiti relief effort, especially with respect to web-based mapping services. Although there were numerous ways in which this took place, this paper focuses on four in particular: CrisisCamp Haiti, OpenStreetMap, Ushahidi, and GeoCommons. This analysis demonstrates that ITs were a key means through which individuals could make a tangible difference in the work of relief and aid agencies without actually being physically present in Haiti. While not without problems, this effort nevertheless represents a remarkable example of the power of crowdsourced online mapping and the potential for new avenues of interaction between physically distant places that vary tremendously.
Article
To support the sharing and reuse of formally represented knowledge among AI systems, it is useful to define the common vocabulary in which shared knowledge is represented. A specification of a representational vocabulary for a shared domain of discourse—definitions of classes, relations, functions, and other objects—is called an ontology. This paper describes a mechanism for defining ontologies that are portable over representation systems. Definitions written in a standard format for predicate calculus are translated by a system called Ontolingua into specialized representations, including frame-based systems as well as relational languages. This allows researchers to share and reuse ontologies, while retaining the computational benefits of specialized implementations. We discuss how the translation approach to portability addresses several technical problems. One problem is how to accommodate the stylistic and organizational differences among representations while preserving declarative content. Another is how to translate from a very expressive language into restricted languages, remaining system-independent while preserving the computational efficiency of implemented systems. We describe how these problems are addressed by basing Ontolingua itself on an ontology of domain-independent, representational idioms.
Article
This paper presents an overview of ongoing work to develop a generic ontology design pattern for observation-based data on the Semantic Web. The core classes and relationships forming the pattern are discussed in detail and are aligned to the DOLCE foundational ontology to improve semantic interoperability and clarify the underlying ontological commitments. The pattern also forms the top level of the Semantic Sensor Network ontology developed by the W3C Semantic Sensor Network Incubator Group. The integration of both ontologies is discussed and directions for further work are pointed out.
Conference Paper
Both OWL-DL and function-free Horn rules are decidable logics with interesting, yet orthogonal expressive power: from the rules perspective, OWL-DL is restricted to tree-like rules, but provides both existentially and universally quantified variables and full, monotonic negation. From the description logic perspective, rules are restricted to universal quantification, but allow for the interaction of variables in arbitrary ways. Clearly, a combination of OWL-DL and rules is desirable for building Semantic Web ontologies, and several such combinations have already been discussed. However, such a combination might easily lead to the undecidability of interesting reasoning problems. Here, we present a decidable such combination which is, to the best of our knowledge, more general than similar decidable combinations proposed so far. Decidability is obtained by restricting rules to so-called DL-safe ones, requiring each variable in a rule to occur in a non-DL-atom in the rule body. We show that query answering in such a combined logic is decidable, and we discuss its expressive power by means of a non-trivial example. Finally, we present an algorithm for query answering in SHIQ(D) extended with DL-safe rules, based on the reduction to disjunctive datalog.
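To make the DL-safety restriction concrete, the illustrative rule below (not from the paper) uses an auxiliary non-DL predicate O that holds exactly for the individuals explicitly named in the knowledge base. Because every variable also occurs in an O-atom, the rule applies only to known individuals, which is what restores decidability.

```latex
% A DL-safe rule: City, near and Flood are DL-atoms; O is a non-DL-atom
% holding exactly for named individuals, so every variable of the rule
% occurs in a non-DL-atom of the body.
\mathit{Endangered}(x) \leftarrow \mathit{City}(x) \wedge \mathit{near}(x,y)
    \wedge \mathit{Flood}(y) \wedge O(x) \wedge O(y)
```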
Article
In this article, I introduce the exciting paradigm of citizen sensing enabled by mobile sensors and human computing - that is, humans as citizens on the ubiquitous Web, acting as sensors and sharing their observations and views using mobile devices and Web 2.0 services. Access a copy at: http://knoesis.org/library/resource.php?id=00158
Article
People are on the verge of an era in which the human experience can be enriched in ways they couldn't have imagined two decades ago. Rather than depending on a single technology, we have progressed with several technologies whose semantics-empowered convergence and integration will enable us to capture, understand, and reapply human knowledge and intellect. Such capabilities will consequently elevate our technological ability to deal with the abstractions, concepts, and actions that characterize human experiences. This will herald computing for human experience (CHE). The CHE vision is built on a suite of technologies that serves, assists, and cooperates with humans to nondestructively and unobtrusively complement and enrich normal activities, with minimal explicit concern or effort on the humans' part. CHE will anticipate when to gather and apply relevant knowledge and intelligence. It will enable human experiences that are intertwined with the physical, conceptual, and experiential worlds (emotions, sentiments, and so on), rather than immersing humans in cyber worlds for a specific task. Instead of focusing on humans interacting with a technology or system, CHE will feature technology-rich human surroundings that often initiate interactions. Interaction will be more sophisticated and seamless compared to today's precursors, such as automotive accident-avoidance systems. Many components of and ideas associated with the CHE vision have been around for a while. Here, the author discusses some of the most important tipping points that he believes will make CHE a reality within a decade.
Conference Paper
Thanks to Moore's law, the incremental cost of adding networking to devices is falling rapidly. This creates opportunities for many new kinds of applications. This paper looks at the potential of Web technologies for reducing the complexity of developing such applications, allowing millions of developers to extend the Web out of the browser and into the real world. This is achieved through mechanisms that provide web runtimes with access to rich models of users, devices, services, and the environment in which they reside. Privacy and trust are key considerations.