Riccardo Pinciroli, Ph.D.
Gran Sasso Science Institute (GSSI) · Computer Science
About
35 Publications · 2,140 Reads
353 Citations
Publications (35)
Autoscaling systems provide means to automatically change the resources allocated to a software system according to the incoming workload and its actual needs. Public cloud providers offer a variety of autoscaling solutions, ranging from those based on user-written rules to more sophisticated ones. Originally, these solutions were conceived to mana...
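To make the idea of user-written autoscaling rules concrete, here is a minimal, hypothetical sketch of a threshold-based scaling rule; the function name, thresholds, and replica bounds are illustrative assumptions, not the policies evaluated in this work:

```python
# Hypothetical threshold-based autoscaling rule (generic sketch, not the paper's policy).

def scale_decision(cpu_utilization: float, replicas: int,
                   upper: float = 0.8, lower: float = 0.3,
                   min_replicas: int = 1, max_replicas: int = 10) -> int:
    """Return the new replica count for the observed average CPU utilization."""
    if cpu_utilization > upper and replicas < max_replicas:
        return replicas + 1          # scale out under heavy load
    if cpu_utilization < lower and replicas > min_replicas:
        return replicas - 1          # scale in when resources are idle
    return replicas                  # otherwise keep the current allocation

# Example: 90% utilization with 3 replicas triggers a scale-out to 4.
print(scale_decision(0.9, 3))
```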
The design of cyber-physical systems (CPS) is challenging due to the heterogeneity of software and hardware components that operate in uncertain environments (e.g., fluctuating workloads), hence they are prone to performance issues. Software performance antipatterns could be a key means to tackle this challenge since they recognize design problems...
Non-functional properties of collective adaptive systems (CAS) are of paramount relevance in practically any application. This paper compares two recently proposed approaches to quantitative modelling that exploit different system abstractions: the first is based on generalised stochastic Petri nets, and the second is based on queueing networks. Th...
We present an individual-centric model for COVID-19 spread in an urban setting. We first analyze patient and route data of infected patients from January 20, 2020, to May 31, 2020, collected by the Korean Center for Disease Control & Prevention (KCDC) and discover how infection clusters develop as a function of time. This analysis offers a statisti...
Performance evaluation of multi-agent systems (MAS) faces several challenges due to uncertain operational environments, such as software/hardware failures and unfaithful communications that facilitate the spread of deceptive messages. One way to mitigate the impact of potential reliability and security issues in MAS is to enforce different coordin...
The detection of performance issues in Java-based applications is not trivial, since many factors contribute to poor performance and software engineers are not sufficiently supported for this task. The goal of this manuscript is the automated detection of performance problems in running systems, to guarantee that no quality-related issues prevent their s...
This paper fosters the analysis of performance properties of collective adaptive systems (CAS), since such properties are of paramount relevance in practically any application. We compare two recently proposed approaches: the first is based on generalised stochastic Petri nets derived from the system specification; the second is based on queueing ne...
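As a reminder of the kind of queueing-network abstraction compared in this line of work, the sketch below evaluates the textbook mean response time of an M/M/1 station, R = 1 / (mu - lambda); it is a generic building block with assumed example rates, not the model derived in the paper:

```python
# Mean response time of an M/M/1 queue (generic textbook formula, illustrative only).

def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean response time of an M/M/1 queue; requires arrival_rate < service_rate."""
    if arrival_rate >= service_rate:
        raise ValueError("Unstable queue: arrival rate must be below service rate.")
    return 1.0 / (service_rate - arrival_rate)

# Example: 8 req/s arriving at a station serving 10 req/s -> 0.5 s mean response time.
print(mm1_response_time(8.0, 10.0))
```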
Serverless computing is gaining popularity for machine learning (ML) serving workloads due to its autonomous resource scaling, ease of use, and pay-per-use cost model. Existing serverless platforms work well for image-based ML inference, where requests are homogeneous in service demands. That said, recent advances in natural language processing could...
Data center downtime typically centers around IT equipment failure. Storage devices are the most frequently failing components in data centers. We present a comparative study of hard disk drives (HDDs) and solid state drives (SSDs) that constitute the typical storage in data centers. Using six-year field data of 100,000 HDDs of different models fro...
As the COVID-19 outbreak evolves around the world, the World Health Organization (WHO) and its Member States have been heavily relying on staying at home and lock down measures to control the spread of the virus. In the last months, various signs showed that the COVID-19 curve was flattening, but even the partial lifting of some containment measure...
Data center downtime typically centers around IT equipment failure. Storage devices are the most frequently failing components in data centers. We present a comparative study of hard disk drives (HDDs) and solid state drives (SSDs) that constitute the typical storage in data centers. Using six-year field data of 100,000 HDDs of different models f...
Nearly all major cloud providers now include burstable instances in their offerings. The main attraction of this type of instance is that it can boost its performance for a limited time to cope with workload variations. Although burstable instances are widely adopted, it is not clear how to manage them efficiently to avoid wasting resources. I...
During the past few years, all leading cloud providers introduced burstable instances that can sprint their performance for a limited period to address sudden workload variations. Despite the availability of burstable instances, there is no clear understanding of how to minimize the waste of resources by regulating their burst capacity to the workl...
The extent of epistemic uncertainty in the modeling and analysis of complex systems is ever-growing, mainly due to the increasing openness, heterogeneity, and versatility of cloud-based applications adopted in critical sectors such as banking and finance. State-of-the-art approaches for model-based performance assessment do not em...
The evolution of digital technologies and software applications has introduced a new computational paradigm that involves the concurrent processing of jobs taken from a large pool in systems with limited computational capacity. Pool Depletion Systems is a framework proposed to analyze this paradigm where an optimal admission policy for jobs allocat...
Fog Computing (FC) systems represent a novel and promising generation of computing systems that aims to move storage and computation close to end devices so as to reduce latency and bandwidth usage and improve energy efficiency. Despite their growing importance, the literature on capacity planning for FC systems is very limited, considering only very simpl...
Cloud service providers offer “burstable performance instances” that can temporarily ramp up their performance to handle bursty workloads by utilizing spare resources. The state of the practice in using the available burst capacity is independent of the workload, which results in squandering spare resources. In this work, we quantify and o...
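For readers unfamiliar with burst capacity, the following hypothetical token-bucket sketch mimics how a burstable instance might accrue and spend credits; the class name, baseline rate, and credit limit are assumptions chosen for illustration, not the providers' actual mechanism or the model used in this work:

```python
# Hypothetical token-bucket view of a burstable instance's credit mechanism (illustrative only).

class BurstBudget:
    def __init__(self, baseline: float = 0.2, max_credits: float = 100.0):
        self.baseline = baseline        # fraction of a core earned per time unit (assumed)
        self.max_credits = max_credits  # cap on accumulated credits (assumed)
        self.credits = max_credits      # start with a full credit balance

    def step(self, demand: float, dt: float = 1.0) -> float:
        """Accrue credits at the baseline rate, then spend them to serve `demand`."""
        self.credits = min(self.max_credits, self.credits + self.baseline * dt)
        if demand <= self.baseline:
            granted = demand            # below baseline: no credits consumed
        else:
            granted = min(demand, self.baseline + self.credits / dt)
            self.credits -= (granted - self.baseline) * dt
        return granted                  # CPU fraction actually delivered

budget = BurstBudget()
for load in [0.1, 0.9, 0.9, 0.9]:
    print(round(budget.step(load), 2), round(budget.credits, 1))
```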
Data centers have recently experienced rapid growth in energy demand, mainly due to cloud computing, a paradigm that lets users access shared computing resources (e.g., servers, storage). Several techniques have been proposed to alleviate this problem, and numerous power models have been adopted to predict the servers' power con...
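A common example of the simple power models referred to here is the linear utilization-based model P(u) = P_idle + (P_max - P_idle) * u; the sketch below evaluates it with assumed wattages and is not necessarily the model adopted in the paper:

```python
# Linear utilization-based server power model (a sketch with assumed wattages).

def server_power(utilization: float, p_idle: float = 100.0, p_max: float = 250.0) -> float:
    """Estimate power draw (watts) from CPU utilization u in [0, 1]."""
    return p_idle + (p_max - p_idle) * utilization

print(server_power(0.5))  # 175.0 W at 50% utilization with the assumed wattages
```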
The volume of data, one of the five “V” characteristics of Big Data, grows at a rate much higher than the ability of existing systems to manage it within an acceptable time. Several technologies have been developed to address this scalability issue. For instance, MapReduce was introduced to cope with the problem of pro...
Mobile Crowdsensing (MCS) is a contribution-based paradigm that involves mobile devices in pervasive application deployment and operation, pushed by the ever-growing and widespread dissemination of personal devices. Nevertheless, MCS still lacks some key features needed to become a disruptive paradigm. Among others, control over performance and reliability, mai...
The evolution of digital technologies and software applications has introduced a new computational paradigm that initially involves the creation of a large pool of jobs, followed by a phase in which all the jobs are executed in systems with limited capacity. For example, a number of libraries have started digitizing their old books, or video conte...
The rapid growth of energy requirements in large data centers has motivated several research projects focusing on the reduction of power consumption. Several techniques have been studied to tackle this problem, and most of them require simple power models to estimate the energy consumption starting from known system parameters. It has been proven th...
The increase in energy consumption and the related costs in large data centers has stimulated new research on techniques to optimize the power consumption of servers. In this paper we focus on systems that must process a peak workload consisting of different classes of applications. The objective is to implement a load-control policy whi...