Chapter

Fog Computing and Data as a Service: A Goal-Based Modeling Approach to Enable Effective Data Movements

... The usage of this delivery model in fog computing is growing. For example, Plebani et al. [26] proposed a DaaS-based solution to support data delivery in a fog computing environment. The solution permits efficient data transfer between data stores owned by data providers and data consumers. ...
Article
Full-text available
Demographic growth in urban areas means that modern cities face challenges in ensuring a steady supply of water and electricity, smart transport, livable space, better health services, and citizens’ safety. Advances in sensing, communication, and digital technologies promise to mitigate these challenges. Hence, many smart cities have taken a new step in moving away from internal information technology (IT) infrastructure to utility-supplied IT delivered over the Internet. The benefit of this move is to manage the vast amounts of data generated by the various city systems, including water and electricity systems, the waste management system, transportation system, public space management systems, health and education systems, and many more. Furthermore, many smart city applications are time-sensitive and need to quickly analyze data to react promptly to the various events occurring in a city. The new and emerging paradigms of edge and fog computing promise to address big data storage and analysis in the field of smart cities. Here, we review existing service delivery models in smart cities and present our perspective on adopting these two emerging paradigms. We specifically describe the design of a fog-based data pipeline to address the issues of latency and network bandwidth required by time-sensitive smart city applications.
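The pipeline itself is not detailed in the abstract; as a rough illustration of the pattern it implies (react to time-sensitive events at the fog layer, forward only compact summaries to the cloud to save bandwidth), consider the sketch below. Every name (FogNode, the threshold, the summary fields) is hypothetical.

```python
import statistics
import time

ALERT_THRESHOLD = 0.95  # hypothetical alert level for a sensor value

class FogNode:
    """Toy fog-layer pipeline stage: act locally, forward only aggregates."""

    def __init__(self, window_size=100):
        self.window_size = window_size
        self.buffer = []

    def ingest(self, reading):
        # Time-sensitive path: handle the event at the edge, no cloud hop.
        if reading > ALERT_THRESHOLD:
            self.raise_local_alert(reading)
        self.buffer.append(reading)
        # Bandwidth-saving path: ship one summary instead of N raw readings.
        if len(self.buffer) >= self.window_size:
            summary = {
                "t": time.time(),
                "mean": statistics.mean(self.buffer),
                "max": max(self.buffer),
                "n": len(self.buffer),
            }
            self.buffer.clear()
            return summary  # would be sent to the cloud tier
        return None

    def raise_local_alert(self, reading):
        print(f"edge alert: {reading:.3f}")
```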
... The authors in [16] do not use choreography modeling, but propose Data as a Service (DaaS) at the fog layer in [18]. Among the data movement actions it includes are moving or duplicating data between edge storages. ...
Article
Full-text available
This paper presents a solution to support service discovery for edge-choreography-based distributed embedded systems. The Internet of Things (IoT) edge architectural layer is composed of Raspberry Pi machines. Each machine hosts different services organized according to the choreography collaborative paradigm. The solution adds three message-passing models to the choreography middleware so as to be coherent and compatible with current IoT messaging protocols. It aims to support blind hot-plugging of new machines and to help with service load balancing. The discovery mechanism is implemented as a broker service and supports regular expressions (regex) in message scope to discern both the publishing patterns offered by data providers and the necessities of client services. Results compare Central Processing Unit (CPU) usage in request–response and data-centric configurations, and analyze regex-interpreter latency times against a traditional message structure, as well as the impact on CPU and memory consumption.
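The broker-side matching described above can be pictured as follows. This is a guess at the mechanism, not the middleware's actual code, and every identifier is invented.

```python
import re

class DiscoveryBroker:
    """Toy broker: match client needs (regexes) against published topics."""

    def __init__(self):
        self.offers = {}  # topic offered by a data provider -> its address

    def publish(self, topic, address):
        self.offers[topic] = address

    def discover(self, need_regex):
        # A client expresses its necessity as a regex over topic names.
        rx = re.compile(need_regex)
        return [addr for topic, addr in self.offers.items()
                if rx.fullmatch(topic)]

broker = DiscoveryBroker()
broker.publish("sensors/room1/temp", "tcp://pi-1:5555")
broker.publish("sensors/room2/temp", "tcp://pi-2:5555")
print(broker.discover(r"sensors/.*/temp"))  # both providers match
```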
... In fact, categories are useful in aggregating together actions that are likely to have similar impacts when applied in a specific context. As discussed in Plebani et al. (2018), when implemented in a specific scenario and according to the actual resources available, one or more instances of each category might be instantiated to represent all possible movements between all the possible resources. Considering the example of Figure 3, two instances of the classes M_CE and M_EC are created (since we have two edge devices connected with a cloud resource); similarly, two instances of M_EE are generated, and no tasks of type M_CC are available, since only one cloud location is present in the scenario. ...
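The instantiation rule in this excerpt is easy to make concrete: one movement task per ordered pair of distinct locations in the relevant tiers. The enumeration below reproduces exactly the counts in the example (the class names follow the excerpt; the code itself is only illustrative):

```python
from itertools import product, permutations

clouds = ["cloud1"]            # one cloud location, as in the example
edges = ["edge1", "edge2"]     # two edge devices connected to it

movements = {
    "M_CE": list(product(clouds, edges)),     # cloud -> edge
    "M_EC": list(product(edges, clouds)),     # edge -> cloud
    "M_EE": list(permutations(edges, 2)),     # edge -> edge, distinct devices
    "M_CC": list(permutations(clouds, 2)),    # cloud -> cloud, distinct clouds
}

for category, instances in movements.items():
    print(category, len(instances), instances)
# M_CE 2, M_EC 2, M_EE 2, M_CC 0 -- matching the example in the excerpt
```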
Article
Full-text available
Pervasive sensing is increasing our ability to monitor the status of patients not only when they are hospitalized but also during home recovery. As a result, lots of data are collected and are available for multiple purposes. If operations can take advantage of timely and detailed data, the huge amount of data collected can also be useful for analytics. However, these data may be unusable for two reasons: data quality and performance problems. First, if the quality of the collected values is low, the processing activities could produce insignificant results. Second, if the system does not guarantee adequate performance, the results may not be delivered at the right time. The goal of this document is to propose a data utility model that considers the impact of the quality of the data sources (e.g., collected data, biographical data, and clinical history) on the expected results and allows for improvement of the performance through utility-driven data management in a Fog environment. Regarding data quality, our approach aims to consider it as a context-dependent problem: a given dataset can be considered useful for one application and inadequate for another application. For this reason, we suggest a context-dependent quality assessment considering dimensions such as accuracy, completeness, consistency, and timeliness, and we argue that different applications have different quality requirements to consider. The management of data in Fog computing also requires particular attention to quality of service requirements. For this reason, we include QoS aspects in the data utility model, such as availability, response time, and latency. Based on the proposed data utility model, we present an approach based on a goal model capable of identifying when one or more dimensions of quality of service or data quality are violated and of suggesting which is the best action to be taken to address this violation. The proposed approach is evaluated with a real and appropriately anonymized dataset, obtained as part of the experimental procedure of a research project in which a device with a set of sensors (inertial, temperature, humidity, and light sensors) is used to collect motion and environmental data associated with the daily physical activities of healthy young volunteers.
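The utility model itself is only summarized above; the sketch below conveys the general idea of a context-dependent assessment, with per-application goals over data quality and QoS dimensions. The dimensions are taken from the abstract, while the scoring rule, thresholds, and all names are invented for illustration.

```python
# Hypothetical per-application requirements over the dimensions named in
# the abstract. Quality scores are minima in [0, 1]; latency is a maximum.
REQUIREMENTS = {
    "monitoring_dashboard": {"accuracy": 0.7, "timeliness": 0.9, "latency_ms": 200},
    "clinical_analytics":  {"accuracy": 0.95, "completeness": 0.9, "latency_ms": 5000},
}

def violated_dimensions(app, observed):
    """Return the dimensions on which `observed` misses the app's goals."""
    bad = []
    for dim, goal in REQUIREMENTS[app].items():
        value = observed[dim]
        # latency-like dimensions: lower is better; the rest: higher is better
        ok = value <= goal if dim.endswith("_ms") else value >= goal
        if not ok:
            bad.append(dim)
    return bad

observed = {"accuracy": 0.8, "completeness": 0.95,
            "timeliness": 0.85, "latency_ms": 150}
print(violated_dimensions("monitoring_dashboard", observed))  # ['timeliness']
print(violated_dimensions("clinical_analytics", observed))    # ['accuracy']
```

Note how the same dataset violates different goals for different applications, which is exactly the context dependence the paper argues for.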
Chapter
Fog computing offers integrated support for communications, data gathering, device management, service capabilities, storage, and analysis at the edge of the network. This allows the deployment of centrally managed infrastructure in an extremely distributed environment. The present work discusses the most significant applications of fog computing for smart city infrastructure. In a smart city environment running many IoT-based services, the computing infrastructure becomes the most important concern. Thousands of smart objects, vehicles, mobiles, and people interact with each other to provide innovative services; here, the fog computing infrastructure can be very useful from the perspective of data and communication. The chapter focuses on three main aspects: (a) deployment of data and software in fog nodes, (b) fog-based data management and analytics, and (c) 5G communication using the fog infrastructure. Working models for each aspect are presented to illustrate these fog computing applications. Use cases are drawn from successful implementations of smart city infrastructure. Further, challenges and opportunities are presented from the perspective of the growing interest in smart cities.
Article
Full-text available
Mapping out the challenges and strategies for the widespread adoption of service computing.
Conference Paper
Full-text available
Risks of software projects are often ignored, and risk analysis is left for later stages of the project life-cycle, resulting in serious financial losses. This paper proposes a goal-oriented risk analysis framework that includes inter-dependencies among treatments and risks in terms of likelihood, and generates optimal solutions with respect to multiple objectives such as goal rewards, treatment costs, or risk factors. The Loan Origination Process illustrates our approach, and a detailed analysis of the visual notation is provided.
Conference Paper
Full-text available
Business Intelligence (BI) offers great opportunities for strategic analysis of current and future business operations; however, existing BI tools typically provide data-oriented responses to queries, which are difficult to understand in terms of business objectives and strategies. To make BI data meaningful, we need a conceptual modeling language whose primitive concepts represent business objectives, processes, opportunities and threats. We have previously introduced such a language, the Business Intelligence Model (BIM). In this paper we consolidate and rationalize earlier work on BIM, giving a precise syntax, reducing the number of fundamental concepts by using meta-attributes, and introducing the novel notion of “pursuit”. Significantly, we also provide a formal semantics of BIM using a subset of the OWL Description Logic (DL). Using this semantics as a translation, DL reasoners can be exploited to (1) propagate evidence and goal pursuit in support of “what if?” reasoning, (2) allow extensions to the BIM language, (3) detect inconsistencies in specific BIM models, and (4) automatically classify defined concepts relative to existing concepts, organizing the model.
Conference Paper
Full-text available
Managing data produced in the Internet of Things according to the traditional data-center-based approach is no longer appropriate. Devices are improving in computational power, as the processors installed on them are increasingly powerful and diverse. Moreover, devices cannot guarantee a continuous connection due to their mobility and limited battery life. The goal of this paper is to tackle this issue by focusing on data movement, eliminating unnecessary storage, transfer, and processing of datasets by concentrating on only the data subsets that are relevant. A cross-layered framework is proposed to give both applications and developers the abstracted ability to choose which aspect to optimize, based on their goals and requirements, and to give data providers an environment that facilitates data provisioning according to users’ needs.
Article
Full-text available
In goal-oriented requirements engineering, goal models have been advocated to express stakeholder objectives and to capture and choose among system requirement candidates. A number of highly automated procedures have been proposed to analyze goal achievement and select alternative requirements using goal models. However, during the early stages of requirements exploration, these procedures are difficult to apply, as stakeholder goals are typically high-level, abstract, and hard-to-measure. Automated procedures often require formal representations and/or information not easily acquired in early stages (e.g., costs, temporal constraints). Consequently, early requirements engineering (RE) presents specific challenges for goal model analysis, including the need to encourage and support stakeholder involvement (through interactivity) and model improvement (through iterations). This work provides a consolidated and updated description of a framework for iterative, interactive, agent-goal model analysis for early RE. We use experiences in case studies and literature surveys to guide the design of agent-goal model analysis specific to early RE. We introduce analysis procedures for the i* goal-oriented framework, allowing users to ask “what if?” and “are certain goals achievable? how? or why not?” The i* language and our analysis procedures are formally defined. We describe framework implementation, including model visualization techniques and scalability tests. Industrial, group, and individual case studies are applied to test framework effectiveness. Contributions, including limitations and future work, are described.
Article
Full-text available
Fog Computing extends the Cloud Computing paradigm to the edge of the network, thus enabling a new breed of applications and services. Defining characteristics of the Fog are: a) Low latency and location awareness; b) Wide-spread geographical distribution; c) Mobility; d) Very large number of nodes, e) Predominant role of wireless access, f) Strong presence of streaming and real time applications, g) Heterogeneity. In this paper we argue that the above characteristics make the Fog the appropriate platform for a number of critical Internet of Things (IoT) services and applications, namely, Connected Vehicle, Smart Grid, Smart Cities, and, in general, Wireless Sensors and Actuators Networks (WSANs).
Article
Full-text available
Business intelligence (BI) offers tremendous potential for business organizations to gain insights into their day-to-day operations, as well as longer term opportunities and threats. However, most of today’s BI tools are based on models that are too much data-oriented from the point of view of business decision makers. We propose an enterprise modeling approach to bridge the business-level understanding of the enterprise with its representations in databases and data warehouses. The business intelligence model (BIM) offers concepts familiar to business decision making—such as goals, strategies, processes, situations, influences, and indicators. Unlike many enterprise models which are meant to be used to derive, manage, or align with IT system implementations, BIM aims to help business users organize and make sense of the vast amounts of data about the enterprise and its external environment. In this paper, we present core BIM concepts, focusing especially on reasoning about situations, influences, and indicators. Such reasoning supports strategic analysis of business objectives in light of current enterprise data, allowing analysts to explore scenarios and find alternative strategies. We describe how goal reasoning techniques from conceptual modeling and requirements engineering have been applied to BIM. Techniques are also provided to support reasoning with indicators linked to business metrics, including cases where specifications of indicators are incomplete. Evaluation of the proposed modeling and reasoning framework includes an on-going prototype implementation, as well as case studies.
Article
Full-text available
This Reference Model for Service Oriented Architecture is an abstract framework for understanding significant entities and relationships between them within a service-oriented environment, and for the development of consistent standards or specifications supporting that environment. It is based on unifying concepts of SOA and may be used by architects developing specific service oriented architectures or in training and explaining SOA. A reference model is not directly tied to any standards, technologies or other concrete implementation details. It does seek to provide a common semantics that can be used unambiguously across and between different implementations. The relationship between the Reference Model and particular architectures, technologies and other aspects of SOA is illustrated in Figure 1. While service-orientation may be a popular concept found in a broad variety of applications, this reference model focuses on the field of software architecture. The concepts and relationships described may apply to other "service" environments; however, this specification makes no attempt to completely account for use outside of the software domain.
Conference Paper
Full-text available
Exploring alternative options is at the heart of the requirements and design processes. Different alternatives contribute to different degrees of achievement of non-functional goals about system safety, security, performance, usability, and so forth. Such goals in general cannot be satisfied in an absolute, clear-cut sense. Various qualitative and quantitative frameworks have been proposed to support the assessment of alternatives for design decision making. In general they lead to limited conclusions due to the lack of accuracy and measurability of goal formulations and the lack of impact propagation rules along goal contribution links. The paper presents techniques for specifying partial degrees of goal satisfaction and for quantifying the impact of alternative system designs on the degree of goal satisfaction. The approach consists in enriching goal refinement models with a probabilistic layer for reasoning about partial satisfaction. Within such models, non-functional goals are specified in a precise, probabilistic way; their specification is interpreted in terms of application-specific measures; impact of alternative goal refinements is evaluated in terms of refinement equations over random variables involved in the system's functional goals. A systematic method is presented for guiding the elaboration of such models. The latter can then be used to assess the impact of alternative decisions on the degree of goal satisfaction or to derive quantitative, fine-grained requirements on the software to achieve the higher-level goals.
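As a toy instance of the idea (not KAOS's actual refinement equations): under an AND-refinement whose subgoals are probabilistically independent, the parent goal's degree of satisfaction is the product of the subgoals', so alternative designs can be compared by the numbers they plug in. All figures below are made up.

```python
from math import prod

def and_refinement(subgoal_probs):
    """P(parent) under AND-refinement, assuming independent subgoals."""
    return prod(subgoal_probs)

# Two alternative designs for the same parent goal, with invented numbers:
design_a = and_refinement([0.99, 0.90])   # fast link, flaky storage
design_b = and_refinement([0.95, 0.97])   # slower link, reliable storage
print(f"{design_a:.3f} vs {design_b:.3f}")  # 0.891 vs 0.922 -> prefer B
```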
Conference Paper
Full-text available
This paper presents several algorithms to solve code generation and optimization problems specific to machines with distributed address spaces. Given a description of how the computation is to be partitioned across the processors in a machine, our algorithms produce an SPMD (single program multiple data) program to be run on each processor. Our compiler generates the necessary receive and send instructions, optimizes the communication by eliminating redundant communication and aggregating small messages into large messages, allocates space locally on each processor, and translates global data addresses to local addresses. Our techniques are based on an exact data-flow analysis on individual array element accesses. Unlike data dependence analysis, this analysis determines if two dynamic instances refer to the same value, and not just to the same location. Using this information, our compiler can handle more flexible data decompositions and find more opportunities for communication optimization than systems based on data dependence analysis. Our technique is based on a uniform framework, where data decompositions, computation decompositions and the data flow information are all represented as systems of linear inequalities. We show that the problems of communication code generation, local memory management, message aggregation and redundant data communication elimination can all be solved by projecting polyhedra represented by sets of inequalities onto lower dimensional spaces.
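The unifying operation mentioned at the end of this abstract, projecting a polyhedron of inequalities onto fewer dimensions, is classically done by Fourier–Motzkin elimination. A bare-bones version, far simpler than what a production compiler would use:

```python
def fm_eliminate(ineqs, k):
    """Project {x : a.x <= b} onto the coordinates other than x_k.

    `ineqs` is a list of (a, b) pairs with integer coefficients.
    Fourier-Motzkin: pair up constraints in which x_k appears with
    opposite signs so that their combination cancels x_k.
    """
    pos = [(a, b) for a, b in ineqs if a[k] > 0]
    neg = [(a, b) for a, b in ineqs if a[k] < 0]
    zero = [(a, b) for a, b in ineqs if a[k] == 0]

    drop = lambda a: [c for i, c in enumerate(a) if i != k]
    out = [(drop(a), b) for a, b in zero]
    for ap, bp in pos:
        for an, bn in neg:
            cp, cn = ap[k], -an[k]  # both positive, so the sum cancels x_k
            out.append((drop([cn * ap[i] + cp * an[i] for i in range(len(ap))]),
                        cn * bp + cp * bn))
    return out

# Project { x + y <= 4, -y <= 0 } onto x: yields x <= 4.
print(fm_eliminate([([1, 1], 4), ([0, -1], 0)], k=1))  # [([1], 4)]
```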
Article
Full-text available
The User Requirements Notation (URN), standardized by the International Telecommunication Union in 2008, is used to model and analyze requirements with goals and scenarios. This paper describes the first ten years of development of URN, and discusses ongoing efforts targeting the next ten years. We did a study inspired by the systematic literature review approach, querying five major search engines and using the existing URN Virtual Library. Based on the 281 scientific publications related to URN we collected and analyzed, we observe a shift from a more conventional use of URN for telecommunications and reactive systems to business process management and aspect-oriented modeling, with relevant extensions to the language being proposed. URN also benefits from a global and active research community, although industrial contributions are still sparse. URN is now a leading language for goal-driven and scenario-oriented modeling with a promising future for many application domains.
Conference Paper
Full-text available
Goal models have been used in Computer Science in order to represent software requirements, business objectives and design qualities. In previous work we have presented a formal framework for reasoning with goal models, in a qualitative or quantitative way, and we have introduced an algorithm for forward propagating values through goal models. In this paper we focus on the qualitative framework and we propose a technique and an implemented tool for addressing two much more challenging problems: (1) find an initial assignment of labels to leaf goals which satisfies a desired final status of root goals by upward value propagation, while respecting some given constraints; and (2) find a minimum-cost assignment of labels to leaf goals which satisfies root goals. The paper also presents preliminary experimental results on the performance of the tool using the goal graph generated by a case study involving the Public Transportation Service of Trentino (Italy).
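For intuition, the second problem can be brute-forced on small graphs: enumerate leaf assignments, propagate upward, and keep the cheapest assignment that satisfies the root. The paper's tool is of course far smarter than this; the graph and costs below are invented.

```python
from itertools import product

# Tiny AND/OR goal graph: root is an OR of g1 and leaf_c; g1 ANDs two leaves.
GRAPH = {
    "root": ("or", ["g1", "leaf_c"]),
    "g1":   ("and", ["leaf_a", "leaf_b"]),
}
LEAVES = ["leaf_a", "leaf_b", "leaf_c"]
COST = {"leaf_a": 1, "leaf_b": 1, "leaf_c": 3}  # cost of satisfying each leaf

def satisfied(node, assignment):
    if node in assignment:                       # a leaf goal
        return assignment[node]
    op, children = GRAPH[node]
    results = [satisfied(c, assignment) for c in children]
    return all(results) if op == "and" else any(results)

best = None
for bits in product([False, True], repeat=len(LEAVES)):
    assignment = dict(zip(LEAVES, bits))
    if satisfied("root", assignment):
        cost = sum(COST[l] for l, on in assignment.items() if on)
        if best is None or cost < best[0]:
            best = (cost, assignment)
print(best)  # (2, {'leaf_a': True, 'leaf_b': True, 'leaf_c': False})
```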
Conference Paper
Data-intensive applications are usually developed based on Cloud resources whose service delivery model helps towards building reliable and scalable solutions. However, especially in the context of Internet of Things-based applications, Cloud Computing comes with some limitations as data, generated at the edge of the network, are processed at the core of the network producing security, privacy, and latency issues. On the other side, Fog Computing is emerging as an extension of Cloud Computing, where resources located at the edge of the network are used in combination with cloud services. The goal of this paper is to present the approach adopted in the recently started DITAS project: the design of a Cloud platform is proposed to optimize the development of data-intensive applications providing information logistics tools that are able to deliver information and computation resources at the right time, right place, with the right quality. Applications that will be developed with DITAS tools live in a Fog Computing environment, where data move from the cloud to the edge and vice versa to provide secure, reliable, and scalable solutions with excellent performance.
Conference Paper
Over the past decade, goal models have been used in Computer Science in order to represent software requirements, business objectives and design qualities. Such models extend traditional AI planning techniques for representing goals by allowing for partially defined and possibly inconsistent goals. This paper presents a formal framework for reasoning with such goal models. In particular, the paper proposes a qualitative and a numerical axiomatization for goal modeling primitives and introduces label propagation algorithms that are shown to be sound and complete with respect to their respective axiomatizations. In addition, the paper reports on experimental results on the propagation algorithms applied to a goal model for a US car manufacturer.
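A minimal reading of the qualitative side: labels from an ordered set propagate bottom-up, with AND-decompositions taking the weakest child label and OR-decompositions the strongest. The three-valued scale and the example graph below are simplifications of the paper's richer label set.

```python
# Ordered qualitative labels: None < Partial < Full satisfaction.
ORDER = {"N": 0, "P": 1, "F": 2}

def propagate(node, graph, leaf_labels):
    """Forward (bottom-up) label propagation over an AND/OR goal graph."""
    if node in leaf_labels:
        return leaf_labels[node]
    op, children = graph[node]
    labels = [propagate(c, graph, leaf_labels) for c in children]
    pick = min if op == "and" else max   # AND: weakest link; OR: best child
    return pick(labels, key=ORDER.get)

graph = {"root": ("and", ["quality", "delivery"]),
         "delivery": ("or", ["via_cloud", "via_edge"])}
print(propagate("root", graph,
                {"quality": "F", "via_cloud": "N", "via_edge": "P"}))
# -> 'P': delivery is only partially satisfied via the edge, capping the root
```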
Article
This article discusses the technologies for realizing highly efficient data migration and backup for big data applications in elastic optical inter-data-center (inter-DC) networks. We first describe the impacts of big data applications on underlying network infrastructure and introduce the concept of flexible-grid elastic optical inter-DC networks. Then we model the data migration in such networks as dynamic anycast and propose several efficient algorithms. Joint resource defragmentation is also discussed to further improve network performance. For efficient data backup, we leverage a mutual backup model and investigate how to avoid the prolonged negative impacts on DCs' normal operation by minimizing the DC backup window.
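In anycast terms, a migration request names a source and a set of candidate destination DCs, and the network picks whichever candidate is cheapest to reach at that moment. A schematic selection step follows; real algorithms in this setting also assign spectrum on each elastic-optical link, which is omitted here, and the topology is invented.

```python
import heapq

def dijkstra(graph, src):
    """Cheapest-path costs from src over a weighted adjacency dict."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def anycast_pick(graph, src, candidate_dcs):
    """Dynamic anycast: send the migration to the cheapest-reachable DC."""
    dist = dijkstra(graph, src)
    return min(candidate_dcs, key=lambda dc: dist.get(dc, float("inf")))

net = {"dc1": {"x": 2}, "x": {"dc1": 2, "dc2": 1, "dc3": 4},
       "dc2": {"x": 1}, "dc3": {"x": 4}}
print(anycast_pick(net, "dc1", ["dc2", "dc3"]))  # dc2 (cost 3 vs 6)
```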
Article
This work has been motivated by the growing energy demand of the IT sector. We propose a goal-oriented approach where the state of the system is assessed using a set of indicators. These indicators are evaluated against thresholds that serve as the goals of our system. We propose a self-adaptive, context-aware framework, in which we learn both the relations existing between the indicators and the effect of the available actions on the indicators' state. The system is also able to respond to changes in the environment, keeping these relations updated to the current situation. Results have shown that the proposed methodology is able to create a network of relations between indicators and to propose an effective set of repair actions to counteract suboptimal states of the data center. The proposed framework is an important tool for assisting system administrators in managing a data center oriented towards Energy Efficiency (EE), showing them the connections occurring between the sometimes contrasting goals of the system and suggesting the repair action(s) most likely to improve the system state, both in terms of EE and QoS.
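Schematically, the loop described here pairs indicator/threshold checks with a learned action-effect model. The sketch below hard-codes effects that the framework would instead learn and keep updated; all indicator names and numbers are made up.

```python
# Indicator thresholds (goals) and an action-effect model: the expected
# change each repair action induces on each indicator (here hard-coded;
# in the paper these relations are learned and updated online).
THRESHOLDS = {"pue": 1.6, "cpu_load": 0.85, "response_ms": 300}  # upper bounds
EFFECTS = {
    "consolidate_vms": {"pue": -0.15, "cpu_load": +0.10, "response_ms": +20},
    "scale_out":       {"pue": +0.05, "cpu_load": -0.20, "response_ms": -50},
}

def suggest_action(state):
    """Pick the action with the best expected reduction of violations."""
    violated = [k for k, v in state.items() if v > THRESHOLDS[k]]
    if not violated:
        return None
    def improvement(action):
        return -sum(EFFECTS[action][k] for k in violated)  # reward decreases
    return max(EFFECTS, key=improvement)

print(suggest_action({"pue": 1.8, "cpu_load": 0.60, "response_ms": 250}))
# -> 'consolidate_vms': only PUE is violated, and consolidation lowers it
```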
Conference Paper
Data outsourcing, or database as a service, is a new paradigm for data management in which a third-party service provider hosts a database as a service. The service provides data management for its customers and thus obviates the need for the service user to purchase expensive hardware and software, deal with software upgrades, and hire professionals for administrative and maintenance tasks. Since using an external database service promises reliable data storage at a low cost, it is very attractive for companies. Such a service would also provide universal access, through the Internet, to private data stored at reliable and secure sites. Clients would store their data without needing to carry it with them as they travel, and without needing to log in remotely to home machines, which may suffer from crashes and be unavailable. However, recent governmental legislation, competition among companies, and database thefts mandate that companies use secure and privacy-preserving data management techniques. The data provider, therefore, needs to guarantee that the data is secure, be able to execute queries on the data, and ensure that the results of the queries are also secure and not visible to the data provider. Current research has focused only on how to index and query encrypted data. However, querying encrypted data is computationally very expensive. Providing an efficient trust mechanism to push both database service providers and clients to behave honestly has emerged as one of the most important problems for data outsourcing to become a viable paradigm. In this paper, we describe scalable privacy-preserving algorithms for data outsourcing. Instead of encryption, which is computationally expensive, we use distribution across multiple data-provider sites and information-theoretically proven secret sharing algorithms as the basis for privacy-preserving outsourcing. The technical contribution of this paper is the establishment and development of a framework for efficient, fault-tolerant, scalable, and theoretically secure privacy-preserving data outsourcing that supports a diversity of database operations executed on different types of data, and can even leverage publicly available data sets.
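The abstract names information-theoretically secure secret sharing as the basis; the canonical example of such a scheme is Shamir's, sketched below. The authors' actual construction and parameters may differ: here any k of the n data-provider sites can reconstruct a value, while fewer than k learn nothing about it.

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime; the field in which shares live

def make_shares(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=42, k=3, n=5)
print(reconstruct(shares[:3]))  # 42, from any three of the five sites
```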
Article
This short paper is concerned with computational methods for solving optimization problems with a vector-valued index function (vector optimization). It uses vector optimization as a tool for analyzing static control problems with performance and parameter sensitivity indices. The first part of this short paper presents a new computational method, the goal attainment method, which overcomes some of the limitations and disadvantages of methods currently available. The second part presents an integrated, multiobjective treatment of performance and sensitivity optimization based on a vector index approach. A numerical example in electric power system control is included, with analysis and results demonstrating the use of the goal attainment method and application of the approach to performance and sensitivity optimization.
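The goal attainment method can be stated compactly: given objectives f_i, goal levels f_i^* and weights w_i encoding how far each objective may under- or over-attain its goal, solve

```latex
\min_{x \in \Omega,\; \gamma \in \mathbb{R}} \ \gamma
\quad \text{subject to} \quad
f_i(x) - w_i\,\gamma \ \le\ f_i^{*}, \qquad i = 1, \dots, m .
```

At the optimum, the scalar gamma measures the degree of over- or under-attainment of the goal vector, and varying the weights w_i trades the objectives (here performance against parameter sensitivity) off against one another.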
Leighton, F.T., Lewin, D.M.: Content delivery network using edge-of-network servers for providing content delivery to a set of participating content providers (Apr 22, 2003)
Bonomi, F., Milito, R., Zhu, J., Addepalli, S.: Fog computing and its role in the internet of things. In: Proc. of the MCC Workshop on Mobile Cloud Computing, pp. 13-16 (2012)