About
112 Publications · 13,751 Reads
4,579 Citations
Additional affiliations
November 2005 - present
October 1998 - September 2001
Publications (112)
Data Stream Processing (DSP) applications analyze data flows in near real-time by means of operators, which process and transform incoming data. Operators handle high data rates by running parallel replicas across multiple processors and hosts. To guarantee consistent performance without wasting resources in the face of variable workloads, auto-scaling te...
Data Stream Processing (DSP) has emerged over the years as the reference paradigm for the analysis of continuous and fast information flows, which often have to be processed with low-latency requirements to extract insights and knowledge from raw data. Dealing with unbounded data flows, DSP applications are typically long-running and, thus, likely...
Cloud-native applications increasingly adopt the microservices architecture, which favors elasticity to satisfy the application performance requirements in the face of variable workloads. To simplify the elasticity management, the trend is to create an auto-scaler instance per microservice, which controls its horizontal scalability by using the classic...
Data-intensive applications have attracted considerable attention in recent years. Business organizations are increasingly becoming data-driven and therefore look for novel ways to collect, analyze, and leverage the data at their disposal. The goal of this chapter is to overview some recurring performance management activities for data-intensive ap...
Emerging fog and edge computing environments enable the analysis of Big Data collected from devices (e.g., IoT sensors) with reduced latency compared to cloud-based solutions. In particular, many applications deal with continuous data flows in latency-sensitive domains (e.g., healthcare monitoring), where Data Stream Processing (DSP) systems repres...
The microservice architecture structures an application as a collection of loosely coupled and distributed services. Since application workloads usually change over time, the number of replicas per microservice should be scaled accordingly at run-time. The most widely adopted scaling policy relies on statically defined thresholds, expressed in term...
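The statically defined threshold policy this abstract refers to can be sketched as a simple rule. The utilization bounds and replica limits below are illustrative assumptions, not values taken from the paper.

```python
def scale_decision(cpu_utilization, replicas,
                   upper=0.80, lower=0.30,
                   min_replicas=1, max_replicas=10):
    """Classic static-threshold horizontal scaling rule: add a replica
    when average CPU utilization exceeds the upper threshold, remove
    one when it drops below the lower threshold (illustrative bounds)."""
    if cpu_utilization > upper and replicas < max_replicas:
        return replicas + 1
    if cpu_utilization < lower and replicas > min_replicas:
        return replicas - 1
    return replicas

# A load spike triggers scale-out, a quiet period triggers scale-in.
print(scale_decision(0.92, replicas=3))  # scale out -> 4
print(scale_decision(0.15, replicas=3))  # scale in  -> 2
print(scale_decision(0.55, replicas=3))  # steady    -> 3
```

The weakness discussed in the abstract is visible even here: the thresholds are fixed, so the same rule is applied regardless of how the workload actually evolves.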
The fast-increasing presence of Internet-of-Things and fog computing resources exposes new challenges due to the heterogeneity and non-negligible network delays among resources, as well as the dynamism of operating conditions. Such a variable computing environment leads applications to adopt an elastic and decentralized execution. To simplify the ap...
Software containers are changing the way applications are designed and executed. Moreover, in the last few years we have seen the increasing adoption of container orchestration tools, such as Kubernetes, to simplify the management of multi-container applications. Kubernetes includes simple deployment policies that spread containers on computing resource...
We consider several Software as a Service (SaaS) providers that offer services using the Cloud resources provided by an Infrastructure as a Service (IaaS) provider which adopts a pay-per-use scheme similar to the Amazon EC2 service, comprising flat, on demand, and spot virtual machine instances. For this scenario, we study the virtual machine provi...
Routing solutions for multi-hop underwater wireless sensor networks suffer significant performance degradation as they fail to adapt to the overwhelming dynamics of underwater environments. To respond to this challenge, we propose a new data forwarding scheme where relay selection swiftly adapts to the varying conditions of the underwater channel....
In the last few years, a large number of real-time analytics applications have relied on Data Stream Processing (DSP) to extract, in a timely manner, valuable information from distributed sources. Moreover, to efficiently handle the increasing amount of data, recent trends exploit the emerging presence of edge/Fog computing resources to decentra...
Data Stream Processing (DSP) has emerged as a key enabler to develop pervasive services that require processing data in a near real-time fashion. DSP applications keep up with the high volume of produced data by scaling their execution on multiple computing nodes, so as to process the incoming data flow in parallel. Workload variability requires t...
Software containers are ever more adopted to manage and execute distributed applications. Indeed, they make it possible to quickly scale the amount of computing resources by means of horizontal and vertical elasticity. Most of the existing works consider the deployment of containers in centralized data centers. However, to exploit the diffused presence of ed...
Data Stream Processing (DSP) applications should be capable of efficiently processing high-velocity continuous data streams by elastically scaling the parallelism degree of their operators to deal with high variability in the workload. Moreover, to efficiently use computing resources, modern DSP frameworks should seamlessly support infrastructure e...
The increasing use of virtualization (e.g., in Cloud Computing, Software Defined Networks), demands Infrastructure Providers (InPs) to optimize the placement of the virtual network requests (VNRs) into a substrate network. In addition to that, they need to cope with QoS, in particular for the rising number of time critical applications (e.g., healt...
By exploiting on-the-fly computation, Data Stream Processing (DSP) applications can process huge volumes of data in a near real-time fashion. Adapting the application parallelism at run-time is critical in order to guarantee a proper level of QoS in face of varying workloads. In this paper, we consider Reinforcement Learning based techniques in ord...
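As a hedged illustration of the kind of Reinforcement Learning technique this abstract mentions, the following sketches a tabular Q-learning update for a scaling agent whose actions add or remove operator replicas. The state encoding, reward, and parameters are illustrative assumptions, not the paper's actual formulation.

```python
import random

ACTIONS = (-1, 0, 1)  # remove a replica, keep the current parallelism, add a replica

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

def epsilon_greedy(Q, state, epsilon=0.1, rng=random.Random(0)):
    """Pick a scaling action: explore with probability epsilon, else exploit."""
    if rng.random() < epsilon:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

# A state could encode (parallelism degree, discretized input rate); the reward
# would typically penalize both QoS violations and the number of replicas used.
Q = {}
q_update(Q, state=(2, "high"), action=1, reward=-0.2, next_state=(3, "high"))
```

The appeal of this family of techniques, as the abstract suggests, is that the scaling policy is learned from observed performance at run-time rather than fixed in advance.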
The capability of efficiently processing the data streams emitted by today's ubiquitous sensing devices enables the development of new intelligent services. Data Stream Processing (DSP) applications allow for processing huge volumes of data in near real-time. To keep up with the high volume and velocity of data, these applications can elastically...
Traditional networks are being transformed to enable the full integration of heterogeneous hardware and software functions, which are configured at runtime, with minimal time to market, and are provided to their end users on an “as a service” basis. This opens up countless possibilities for further innovation and exploitation. Network Funct...
Data Stream Processing (DSP) applications are widely used to develop new pervasive services, which need to seamlessly process huge amounts of data in a near real-time fashion. To keep up with the high volume of daily produced data, these applications need to dynamically scale their execution on multiple computing nodes, to process the incomin...
Optimal interface selection is a key mobility management issue in heterogeneous wireless networks. Measuring the physical or link level performance on a given wireless access network does not provide a reliable indication of the IP connectivity, delay and loss on the (bidirectional) paths from the Mobile Host to the node that is handling the mobil...
In the Big Data era, Data Stream Processing (DSP) applications should be capable of seamlessly processing huge amounts of data. Hence, they need to dynamically scale their execution on multiple computing nodes to adjust to unpredictable data source rates. In this paper, we present a hierarchical and distributed architecture for the autonomous control...
By processing data in a timely manner, data stream processing (DSP) applications are attracting increasing interest for building new pervasive services. Due to the unpredictability of data sources, these applications often operate in dynamic environments; therefore, they require the ability to elastically scale in response to workload variations. In...
The ever-increasing use of virtualization in various emerging scenarios (e.g., Cloud Computing, Software Defined Networks, Data Stream Processing) asks Infrastructure Providers (InPs) to optimize the allocation of virtual network requests (VNRs) into a substrate network while satisfying QoS requirements. In this work, we propose MCRM, a two-s...
Exploiting on-the-fly computation, Data Stream Processing (DSP) applications are widely used to process unbounded streams of data and extract valuable information in a near real-time fashion. As such, they enable the development of new intelligent and pervasive services that can improve our everyday life. To keep up with the high volume of daily pr...
In the last few years, several processing approaches have emerged to deal with Big Data. Exploiting on-the-fly computation, Data Stream Processing (DSP) applications can process unbounded streams of data to extract valuable information in a near real-time fashion. To keep up with the high volume of daily produced data, the operators that compose a...
Optimal interface selection is a key mobility management issue in heterogeneous wireless networks. Measuring the physical or link level performance on a given wireless access network does not provide a reliable indication of the actually perceived level of service. It is therefore necessary to take measurements at the IP level, on the (bidirectional) pat...
Data Stream Processing (DSP) applications are widely used to timely extract information from distributed data sources, such as sensing devices, monitoring stations, and social networks. To successfully handle this ever increasing amount of data, recent trends investigate the possibility of exploiting decentralized computational resources (e.g., Fog...
We consider a three-tier architecture for mobile and pervasive computing scenarios, consisting of a local tier of mobile nodes, a middle tier (cloudlets) of nearby computing nodes, typically located at the mobile nodes' access points but characterized by a limited amount of resources, and a remote tier of distant cloud servers, which have practicall...
In this paper we propose the adoption of a self-adaptable, cross-layer, and modular Software Defined Communication Stack (SDCS) for Underwater Wireless Sensor Networks. The SDCS is a modular stack solution capable of running different protocols at each layer of the network stack; a new component, named policy engine, autonomously and adaptively,...
Storm is a distributed stream processing system that has recently gained increasing interest. We extend Storm to make it suitable to operate in a geographically distributed and highly variable environment such as that envisioned by the convergence of Fog computing, Cloud computing, and Internet of Things.
Fog computing is rapidly changing the distributed computing landscape by extending the Cloud computing paradigm to include wide-spread resources located at the network edges. This diffused infrastructure is well suited for the implementation of data stream processing (DSP) applications, by possibly exploiting local computing resources. Storm is an...
In this paper, we consider an application provider that simultaneously executes periodic long-running jobs and needs to ensure a minimum throughput to guarantee QoS to its users; the application provider uses virtual machine (VM) resources offered by an IaaS provider. The aim of the periodic jobs is to compute measures on data collected over a specific...
In this paper we introduce the Generalized Virtual Networking (GVN) concept. GVN provides a framework to influence the routing of packets based on service level information that is carried in the packets. It is based on a protocol header inserted between the Network and Transport layers, therefore it can be seen as a layer 3.5 solution. Technically...
Offloading (part of) the workload generated by applications running on mobile nodes to external surrogate machines has been suggested as a way to improve the mobile user experience. In this paper, we consider a set of mobile users that can offload their computation on Virtual Machines (VMs) instantiated in a cloud infrastructure implemented over a...
In this paper we consider a set of Software as a Service (SaaS) providers that offer a set of Web services using the Cloud facilities provided by an Infrastructure as a Service (IaaS) provider. We assume that the IaaS provider offers a pay-only-what-you-use scheme similar to the Amazon EC2 service, comprising flat, on demand, and spot virtual mach...
We present a distributed, integrated medium access control, scheduling, routing and congestion/rate control protocol stack for Cognitive Radio Ad Hoc Networks (CRAHNs) that dynamically exploits the available spectrum resources left unused by primary licensed users, maximizing the throughput of a set of multi-hop flows between peer nodes. Using a Ne...
In this paper we consider several Software as a Service (SaaS) providers that offer a set of applications using the Cloud facilities provided by an Infrastructure as a Service (IaaS) provider. We assume that the IaaS provider offers a pay-only-what-you-use scheme similar to the Amazon EC2 service, comprising flat, on demand, and spot virtual machi...
Service selection has been widely investigated by the SOA research community as an effective adaptation mechanism that allows a service broker, offering a composite service, to bind at runtime each task of the composite service to a corresponding concrete implementation, selecting it from a set of candidates which differ from one another in terms o...
Underwater sensor networks have become an important area of research with many potential practical applications. Given impairments of optical and radio propagation, acoustic communication is used for underwater networking, which translates into variable and long propagation delays, low data rates, long interference ranges and significant fluctuatio...
Architecting software systems according to the service-oriented paradigm and designing runtime self-adaptable systems are two relevant research areas in today's software engineering. In this paper, we address issues that lie at the intersection of these two important fields. First, we present a characterization of the problem space of self-adaptati...
Service Oriented Systems (SOSs) based on the SOA paradigm are becoming popular thanks to a widely deployed internetworking infrastructure. They are composed of a possibly large number of heterogeneous third-party subsystems and usually operate in a highly varying execution environment, which makes it challenging to provide applications with Quality...
Modern Internet-based systems typically involve a large number of servers and applications and require real-time management strategies for cloning and migrating virtual machines, as well as re-distributing or re-mapping the underlying hardware. At the basis of most real-time management strategies there is the need to continuously evaluate system st...
In the service computing paradigm, a service broker can build new applications by composing network-accessible services offered by loosely coupled independent providers. In this paper, we address the problem of providing a service broker, which offers to prospective users a composite service with a range of different Quality of Service (QoS) classe...
The Service-Oriented Architecture (SOA) paradigm supports a collaborative business model, where business applications are built from independently developed services, and services and applications build up complex dependencies. Guaranteeing high dependability levels in such complex environment is a key factor for the success of this model. In this...
Service selection has been widely investigated by the SOA research community as an effective adaptation mechanism that allows a service broker, offering a composite service, to bind at runtime each task of the composite service to a corresponding concrete implementation, selecting it from a set of candidates which differ from one another in terms o...
In today's Internet of Services, one of the challenges of Application Service Providers (ASPs) is to fulfill the QoS requirements stated in the Service Level Agreements (SLAs) established with different consumers and to minimize the investment and management costs. Cloud computing is a promising solution for ASPs that increasingly demand for an...
In the service computing paradigm, a service broker can build new applications by composing network-accessible services offered by loosely coupled independent providers. In this paper, we address the admission control problem for a service broker which offers to prospective users a composite service with a range of different Quality of Service (Q...
In this paper, we address the problem of providing a service broker, which offers to prospective users a composite service with a range of different Quality of Service (QoS) classes, with a forward-looking admission control policy based on Markov Decision Processes (MDPs).
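For intuition, an MDP of this kind can be solved with standard value iteration. The toy instance below (session capacity, rewards, departure probability) is a made-up illustration of the approach, not the broker model from the paper.

```python
def value_iteration(n_states, reward, transition, gamma=0.95, tol=1e-6):
    """Standard value iteration:
    V(s) = max_a [ r(s,a) + gamma * sum_{s'} P(s'|s,a) * V(s') ]."""
    V = [0.0] * n_states
    while True:
        delta = 0.0
        for s in range(n_states):
            best = max(
                reward(s, a) + gamma * sum(p * V[s2] for s2, p in transition(s, a))
                for a in (0, 1)  # 0 = reject the request, 1 = admit it
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy broker: at most C concurrent sessions; admitting earns revenue,
# a saturated system pays an SLA penalty; a session departs w.p. 0.5.
C = 3

def reward(s, a):
    return (1.0 if a == 1 and s < C else 0.0) - (0.5 if s == C else 0.0)

def transition(s, a):
    s_next = min(s + 1, C) if a == 1 else s
    return [(max(s_next - 1, 0), 0.5), (s_next, 0.5)]

V = value_iteration(C + 1, reward, transition)  # V[s]: value with s active sessions
```

The "forward-looking" aspect the abstract names shows up in the max over actions: the broker may reject an admissible request now if the expected discounted future value of keeping capacity free is higher.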
Service selection has been widely investigated as an effective adaptation mechanism that allows a service broker, offering a composite service, to bind each task of the abstract composition to a corresponding implementation, selecting it from a set of candidates. The selection aims typically to fulfill the Quality of Service (QoS) requirements of t...
We present a brokering service for the adaptive management of composite services. The goal of this broker is to dynamically adapt at runtime the composite service configuration, to fulfill the Service Level Agreements (SLAs) negotiated with different classes of requestors, despite variations of the operating environment. Differently from most of th...
Data centers providing modern interactive applications are enriched by autonomous management decision systems that are able to clone and migrate virtual machines, to re-distribute resources, or to re-map services in real-time. At the basis of all these decisions there is the need for a continuous evaluation of the state of system resources and of de...
Cognitive radio (CR) networks have been proposed as a viable solution to spectrum scarcity problems. In CR networks, CR nodes exploit spectrum holes in space, time and/or frequency to transmit on licensed frequency bands without affecting primary users. In such a dynamic and unpredictable environment, CR networks require the ability to gather infor...
In this paper we consider a provider that offers a SOA application implemented as a composite service to several users with different QoS requirements. For such a system, we present a scalable framework for QoS-aware self-adaptation based on a two-layer reference architecture. The first layer addresses the adaptation at the provisioning level: o...
All runtime management decisions in computer and information systems require immediate detection of relevant changes in the state of their resources. This is accomplished by continuously monitoring the performance/utilization of key system resources and by using appropriate statistical tests to detect the occurrence of significant state changes. Unf...
Runtime adaptation is recognized as a viable way for a service-oriented system to meet QoS requirements in its volatile operating environment. In this paper we propose a methodology to drive the adaptation of such a system, that integrates within a unified framework different adaptation mechanisms, to achieve a greater flexibility in facing differe...
We propose a general model for resources allocation of virtual machines in multi-tier distributed environments. Our model describes each virtual machine and each physical host by a multi-dimensional resource vector, allowing the coexistence of both quantitative and qualitative resources, also handling different SLAs. As this model is a generalizati...
Increasingly complex information systems operating in dynamic environments ask for management policies able to deal intelligently and autonomously with problems and tasks. An attempt to deal with these aspects can be found in the Service-Oriented Architecture (SOA) paradigm that foresees the creation of business applications from independently deve...
This paper jointly addresses dynamic replica placement and traffic redirection to the best replica in Content Delivery Networks (CDNs). Our solution is fully distributed and localized, and trades off the costs paid by the CDN provider (e.g., the number of allocated replicas, frequency of replica additions and removals) with the quality of the con...
The content delivery networks (CDN) paradigm is based on the idea to transparently move third-party content closer to the users. More specifically, content is replicated on CDN servers which are located close to the final users, and user requests are redirected to the "best" replica (e.g., the closest) in a transparent way, so that users perceive a...
In this paper, we consider a provider that offers an application implemented as a composite service to several users with (possibly) different Quality of Service (QoS) requirements. To this end, the provider negotiates with both the clients and the service providers Service Level Agreements (SLAs), which define the respective QoS-related obligation...
In the service oriented paradigm applications are created as a composition of independently developed Web services. Since the same service may be offered by different providers with different non-functional Quality of Service (QoS) attributes, a selection process is needed to identify the constituent services for a given composite service that best...
A composite Web service can be constructed and deployed by combining independently developed component services, each of which may be offered by different providers with different non-functional Quality of Service (QoS) attributes. Therefore, a selection process is needed to identify which constituent services are to be used to construct a composite s...
Network performance tomography involves correlating end-to-end performance measures over different network paths to infer the performance characteristics on their intersection. Multicast based inference of link-loss rates is the first paradigm for the approach. Existing algorithms generally require numerical solution of polynomial equations for a m...
In this paper, we explore the use of end-to-end unicast traffic as measurement probes to infer link-level loss rates. We build on earlier work that produced efficient estimates of link-level loss rates based on end-to-end multicast traffic measurements. We design experiments based on the notion of transmitting stripes of packets (with no de...
The content delivery networks (CDN) paradigm is based on the idea to transparently move third-party content closer to the users. More specifically, content is replicated on CDN servers which are located close to the final users, and user requests are redirected to the "best" replica (e.g. the closest) in a transparent way, so that users perceive a...
In this paper we present a model for the joint congestion control, routing and MAC link access for ad hoc wireless networks. We formulate the problem as a utility maximization problem with routing and link access constraints. For the solution we exploit the separable structure of the problem via dual decomposition and the sub-gradient algorithm. Th...
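The dual-decomposition approach described in this abstract can be illustrated on a tiny network utility maximization instance. The sketch below assumes logarithmic (proportionally fair) flow utilities and a synchronous sub-gradient price update; the topology, step size, and iteration count are illustrative.

```python
def num_subgradient(routes, capacity, iters=5000, step=0.01):
    """Dual sub-gradient method for  max sum_i log(x_i)  subject to link
    capacity constraints.  routes[i] is the set of links used by flow i;
    capacity[l] is the capacity of link l.  Each iteration, every flow
    sets x_i = 1 / (sum of prices on its path) -- the maximizer of
    log(x_i) - x_i * price -- and every link nudges its price toward
    its congestion level (projected to stay non-negative)."""
    prices = {l: 1.0 for l in capacity}
    x = [0.0] * len(routes)
    for _ in range(iters):
        for i, path in enumerate(routes):
            x[i] = 1.0 / max(sum(prices[l] for l in path), 1e-9)
        for l in capacity:
            load = sum(x[i] for i, path in enumerate(routes) if l in path)
            prices[l] = max(0.0, prices[l] + step * (load - capacity[l]))
    return x

# Two flows sharing a single unit-capacity link: the proportionally
# fair allocation splits the capacity evenly between them.
x = num_subgradient(routes=[{"l1"}, {"l1"}], capacity={"l1": 1.0})
```

The separable structure the abstract exploits is visible here: given the link prices, each flow's subproblem is solved independently, and each link updates its own price from purely local load information.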
The content delivery networks (CDN) paradigm is based on the idea to move third-party content closer to the users transparently. More specifically, content is replicated on servers closer to the users, and users requests are redirected to the best replica in a transparent way, so that the user perceives better content access service. In this paper...
An ever broader range of ICT devices and network-based services is made available to users. Today, to access them, users have to use heterogeneous hardware and software technologies, have to configure them properly, and must be authenticated and charged in different ways. The result is a growing complexity which can become a serious obstacle...
End-to-end measurement is a common tool for network performance diagnosis, primarily because it can reflect user experience and typically requires minimal support from intervening network elements. However, pinpointing the site of performance degradation from end-to-end measurements is a challenging problem. We show how end-to-end delay measurement...
In this article we present a scalable model of a network of Active Queue Management (AQM) routers serving a large population of Transport Control Protocol (TCP) flows. We present efficient solution techniques that allow one to obtain the transient behavior of the average queue lengths and packet loss/mark probabilities of AQM routers, and average e...
In this paper we present a scalable model of a network of Active Queue Management (AQM) routers serving a large population of TCP flows. We present efficient solution techniques that allow one to obtain the transient behavior of the average queue lengths and packet loss/mark probabilities of AQM routers, and average end-to-end throughput and latencies o...
End-to-end measurement is a common tool for network performance diagnosis, primarily because it can reflect user experience and typically requires minimal support from intervening network elements. However, pinpointing the site of performance degradation from end-to-end measurements is a challenging problem. In this paper, we show how end-to-end de...
In this paper we present a scalable model of a network of Active Queue Management (AQM) routers serving a large population of TCP flows. We present efficient solution techniques that allow one to obtain the transient behavior of the average queue lengths, packet loss probabilities, and average end-to-end latencies. We model different versions of TC...
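The kind of fluid model these abstracts refer to can be sketched by numerically integrating the well-known TCP/RED differential equations (in the spirit of Misra, Gong, and Towsley). The code below uses instantaneous queue feedback instead of a delayed drop probability, and all parameters are illustrative, so it is a rough sketch rather than the paper's model.

```python
def tcp_red_fluid(n_flows=50, link_capacity=1250.0, rtt_prop=0.1,
                  t_min=50.0, t_max=200.0, p_max=0.1,
                  dt=0.001, t_end=30.0):
    """Euler integration of the classic TCP/RED fluid equations:
        dW/dt = 1/R - (W^2 / (2R)) * p(q)   (AIMD window dynamics)
        dq/dt = N * W / R - C               (bottleneck queue, q >= 0)
    with round-trip time R = rtt_prop + q/C and a linear RED drop
    profile p(q) between the two queue thresholds."""
    W, q = 1.0, 0.0  # average congestion window (pkts), queue length (pkts)
    for _ in range(int(t_end / dt)):
        R = rtt_prop + q / link_capacity
        if q <= t_min:
            p = 0.0
        elif q >= t_max:
            p = p_max
        else:
            p = p_max * (q - t_min) / (t_max - t_min)
        dW = 1.0 / R - (W * W / (2.0 * R)) * p
        dq = n_flows * W / R - link_capacity
        W = max(W + dW * dt, 1.0)
        q = max(q + dq * dt, 0.0)
    return W, q

W, q = tcp_red_fluid()  # average window and queue length at t_end
```

This is exactly the appeal of the fluid approach the abstracts describe: a handful of differential equations stand in for thousands of individual flows, so transient behavior can be computed in a fraction of a second.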
In this paper we review some of the most relevant aspects concerning quality of service in wireless networks, providing, alongside the research issues we are currently pursuing, both the state of the art and our recent achievements. More specifically, we first focus on network survivability, that is, the ability of the network to maintain...
Packet delay greatly influences the overall performance of network applications. It is therefore important to identify causes and locations of delay performance degradation within a network. Existing techniques, largely based on end-to-end delay measurements of unicast traffic, are well suited to monitor and characterize the behavior of particular...
Content delivery network (CDN) design entails the placement of server replicas to bring the content close to the users, together with efficient and content-aware request routing. In this paper we address the problem of dynamic replica placement to account for users' demand variability while optimizing the costs paid by a CDN provider and the ove...
In this paper we consider the problem of inferring link-level loss rates from end-to-end multicast measurements taken from a collection of trees. We give conditions under which loss rates are identifiable on a specified set of links. Two algorithms are presented to perform the link-level inferences for those links on which losses can be identified....
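To make the identifiability idea concrete, the following Monte Carlo sketch checks the classic estimator for the simplest multicast tree: a shared link feeding two receivers. Writing p1, p2 for the receiver marginals and p12 for the joint reception probability, the shared link's pass rate is identifiable as p1*p2/p12, since p1 = a*b1, p2 = a*b2, and p12 = a*b1*b2. The topology and loss rates below are illustrative assumptions, not data from the paper.

```python
import random

def estimate_shared_loss(n_probes=200_000, a=0.95, b1=0.90, b2=0.85, seed=7):
    """Simulate multicast probes over a two-leaf tree (shared link with
    pass rate `a`, receiver links with pass rates `b1`, `b2`) and infer
    the shared-link pass rate from end-to-end outcomes alone."""
    rng = random.Random(seed)
    n1 = n2 = n12 = 0
    for _ in range(n_probes):
        shared = rng.random() < a          # probe survives the shared link
        r1 = shared and rng.random() < b1  # ...and receiver 1's link
        r2 = shared and rng.random() < b2  # ...and receiver 2's link
        n1 += r1
        n2 += r2
        n12 += r1 and r2
    p1, p2, p12 = n1 / n_probes, n2 / n_probes, n12 / n_probes
    return p1 * p2 / p12  # estimate of `a`; no per-link probing needed

a_hat = estimate_shared_loss()  # close to the true pass rate 0.95
```

Collections of trees, as in the abstract, generalize this idea: a link's loss rate is identifiable whenever some tree in the collection branches below it, giving the coincidence observations the estimator needs.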