Toward an architecture for monitoring private clouds
Pages: 130 - 137
Date of Current Version: 05 December 2011
Issue Date: December 2011
Sponsored by: IEEE Communications Society
Cloud computing is rapidly emerging as a new model for service delivery, including for telecommunications services (cloud telephony). Although many solutions are now available, cloud management and monitoring technology has not kept pace, partially because of the lack of open source solutions. To address this limitation, this article describes our experience with a private cloud and discusses the design and implementation of a private cloud monitoring system (PCMONS) and its application via a case study for the proposed architecture. An important finding of this article is that it is possible to deploy a private cloud within an organization using only open source solutions, integrating with traditional tools like Nagios. However, there is significant development work to be done when integrating these tools. With PCMONS we took the first steps toward this goal, opening paths for new development opportunities as well as making PCMONS itself an open-source tool.
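The abstract's point about integrating with a traditional tool like Nagios can be illustrated with a small sketch. Nagios accepts passive check results as external commands of the form `[timestamp] PROCESS_SERVICE_CHECK_RESULT;host;service;code;output`; a cloud-side collector could format VM states that way. This is not PCMONS code: the VM names, host name, and status texts below are hypothetical.

```python
import time

# Hypothetical sample of VM states gathered from a private cloud;
# Nagios return codes: 0 = OK, 1 = WARNING, 2 = CRITICAL.
vm_states = {
    "vm-web-01": 0,
    "vm-db-01": 2,
}

def nagios_passive_results(states, host="cloud-node-01"):
    """Format VM states as Nagios external commands
    (PROCESS_SERVICE_CHECK_RESULT lines)."""
    now = int(time.time())
    lines = []
    for vm, code in states.items():
        output = "VM running" if code == 0 else "VM unreachable"
        lines.append(
            f"[{now}] PROCESS_SERVICE_CHECK_RESULT;{host};{vm};{code};{output}"
        )
    return lines

for line in nagios_passive_results(vm_states):
    print(line)
```

In a real deployment these lines would be appended to the Nagios external command file so the existing monitoring view picks up cloud resources without new tooling.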
Published in:
Communications Magazine, IEEE (Volume: 49, Issue: 12)
Date of Publication: December 2011
De Chaves, S. A.; Uriarte, R. B.; Westphall, C. B. (Post-Graduation Program in Computer Science, Federal University of Santa Catarina, Florianópolis, Brazil)
... The monitoring of a private cloud is a problem, since most commercial cloud solutions are extremely expensive [42]. An open-source architecture for cloud monitoring has been proposed in [59] to tackle this issue. The proposed architecture is split into three layers: (i) infrastructure, (ii) integration, and (iii) view. ...
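The three-layer split mentioned in the excerpt can be sketched as a small pipeline. The class and method names here are illustrative assumptions, not the paper's actual API: the infrastructure layer gathers raw metrics, the integration layer normalizes them, and the view layer presents them.

```python
class InfrastructureLayer:
    """Gathers raw metrics from hosts and VMs."""
    def collect(self):
        # A real deployment would query hypervisors or node agents;
        # this returns a fixed sample for illustration.
        return {"host-01": {"cpu": 0.42, "vms": 3}}

class IntegrationLayer:
    """Normalizes gathered data for whatever view tool is in use."""
    def translate(self, raw):
        return [
            {"resource": name, "cpu_pct": round(m["cpu"] * 100), "vms": m["vms"]}
            for name, m in raw.items()
        ]

class ViewLayer:
    """Presents the data, e.g. by feeding a tool such as Nagios."""
    def render(self, rows):
        return "\n".join(
            f"{r['resource']}: cpu={r['cpu_pct']}% vms={r['vms']}" for r in rows
        )

raw = InfrastructureLayer().collect()
print(ViewLayer().render(IntegrationLayer().translate(raw)))
```

The value of the split is that the view layer can be swapped (Nagios, a web dashboard) without touching the collection code.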
Resource management in cloud infrastructure is one of the key elements of quality of services provided by the cloud service providers. Resource management has its taxonomy, which includes discovery of resources, selection of resources, allocation of resources, pricing of resources, disaster management, and monitoring of resources. Specifically, monitoring provides the means of knowing the status and availability of the physical resources and services within the cloud infrastructure. This results in making “monitoring of resources” one of the key aspects of the cloud resource management taxonomy. However, managing the resources in a secure and scalable manner is not easy, particularly when considering a federated cloud environment. A federated cloud is used and shared by many multi-cloud tenants and at various cloud software stack levels. As a result, there is a need to reconcile all the tenants’ diverse monitoring requirements. To cover all aspects relating to the monitoring of resources in a federated cloud environment, we present the FEDerated Architecture for Resource manaGement and mOnitoring in cloudS Version 1.0 (FEDARGOS-V1), a cloud resource monitoring architecture for federated cloud infrastructures. The architecture focuses mainly on the ability to access information while monitoring services for early identification of resource constraints within short time intervals in federated cloud platforms. The monitoring architecture was deployed in a real-time OpenStack-based FEDerated GENomic (FEDGEN) cloud testbed. We present experimental results in order to evaluate our design and compare it both qualitatively and quantitatively to a number of existing Cloud monitoring systems that are similar to ours. The architecture provided here can be deployed in private or public federated cloud infrastructures for faster and more scalable resource monitoring.
... To make management as easy as possible for users, they are provided with functionality for monitoring instantiated virtual machines. This module also serves to track the availability and utilization of cloud resources [6], as well as to automatically open tickets and logs if an error occurs. Data from the monitoring system can also be used to calculate the availability of a given cloud platform and, if needed, to further evaluate and rank cloud platforms based on those results. ...
This paper presents the architecture of a cloud infrastructure management system that uses REST principles and a REST API for communication between distributed system modules. The architecture grew out of a personal challenge to overcome conceptual mistakes encountered in the service offering during and after two years of developing a commercial cloud management platform that the first author worked on. The paper explores the advantages and disadvantages of the system and its communication with the cloud platform, using CloudStack as an example. Real-world application examples are given.
... Due to the relevant differences between cloud and fog computing, there was no guarantee that a monitoring solution developed for the cloud would function properly in a fog computing environment [66]. To confirm this, some recent works analyzed and tested open source and commercial cloud monitoring solutions, e.g., Nagios [67], Zabbix [68], DARGOS [69], PCMONS [70] and JCatascopia [71]. These solutions were confronted with fog computing requirements and challenges, and the result was that none of them is suitable for fog environments [7][8][9][10]. ...
Fog computing is a distributed paradigm that provides computational resources in the users' vicinity. Fog orchestration is a set of functionalities that coordinate the dynamic infrastructure and manage the services to guarantee the Service Level Agreements. Monitoring is an orchestration functionality of prime importance: it is the basis for resource management actions, collecting resource and service status and delivering updated data to the orchestrator. There are several cloud monitoring solutions and tools, but none of them complies with fog characteristics and challenges. Fog monitoring solutions are scarce, and they may not be prepared to compose an orchestration service. This paper updates the knowledge base on fog monitoring, assessing recent subjects in this context such as observability, data standardization, and instrumentation domains. We propose a novel taxonomy of fog monitoring solutions, supported by a systematic review of the literature. Fog monitoring proposals are analyzed and categorized by this new taxonomy, offering researchers a comprehensive overview. This work also highlights the main challenges and open research questions.
... This is due to tasks being killed when their memory request exceeded its limit. This constraint is omitted for CPU workloads, however, where tasks can use much more CPU than they requested [82]. ...
Dynamic resource allocation and auto-scaling represent effective solutions for many cloud challenges, such as over-provisioning (i.e., energy waste and Service Level Agreement (SLA) violation) and under-provisioning (i.e., Quality of Service (QoS) degradation) of resources. Early workload prediction techniques play an important role in the success of these solutions. Unfortunately, no prediction technique is perfect and suitable enough for most workloads, particularly in cloud environments. Statistical and machine learning techniques may not be appropriate for predicting workloads, due to the instability and dependency of cloud resources' workloads. Although the Recurrent Neural Network (RNN) deep learning technique addresses these shortcomings, it provides poor results for long-term prediction. On the other hand, the Sequence-to-Sequence neural machine translation technique (Seq2Seq) is used effectively for translating long texts. In this paper, workload sequence prediction is treated as a translation problem. Therefore, an attention-based Seq2Seq technique is proposed for predicting cloud resources' workloads. To validate the proposed technique, a real-world dataset collected from a Google cluster of 11k machines is used. To improve the performance of the proposed technique, a novel procedure called cumulative-validation is proposed as an alternative to cross-validation. Results show the effectiveness of the proposed technique for predicting cloud resource workloads, with 98.1% accuracy compared to 91% and 85% for other sequence-based techniques, i.e., Continuous Time Markov Chain based models and Long Short-Term Memory based models, respectively. The proposed cumulative-validation procedure also reduces computation time by 57% compared to cross-validation, with only a slight variation of 0.006 in prediction accuracy.
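The abstract names a "cumulative-validation" procedure but does not define it in this excerpt. One plausible reading for time-ordered workload data is a walk-forward scheme: each fold trains on all data seen so far and validates on the next contiguous chunk, so no future samples leak into training. The function below is a sketch of that interpretation, not the paper's actual procedure.

```python
def cumulative_validation_splits(n_samples, n_folds):
    """Walk-forward splits over a time-ordered sequence: fold k trains
    on the first k chunks and validates on chunk k+1. This is only one
    plausible reading of the paper's 'cumulative-validation'."""
    chunk = n_samples // (n_folds + 1)
    splits = []
    for k in range(1, n_folds + 1):
        train_idx = list(range(0, k * chunk))          # growing prefix
        val_idx = list(range(k * chunk, (k + 1) * chunk))  # next chunk
        splits.append((train_idx, val_idx))
    return splits

for train, val in cumulative_validation_splits(100, 4):
    print(len(train), len(val))  # -> 20 20, 40 20, 60 20, 80 20
```

Unlike shuffled cross-validation, later folds reuse the earlier training work, which is consistent with the reported reduction in computation time.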
... The authors of [27] proposed a private cloud monitoring and management scheme called PCMONS. Despite the many differences between traditional technologies and cloud environments, they argue that these resources (inherited network and distributed management methods, etc.) have potential for reuse in the private cloud. ...
Reactive programming is a popular paradigm that we have used as a new solution in our proposed model for cloud security. In this context, we were able to reduce execution time compared to our previous work on the proposed cloud security model, where security control depends on the ITSS (IT security specialist) of a given organization selecting options. Some of the difficulties we encountered in our previous work with traditional programming were the coordination of parallel processes and the modification of real-time data. This study provides results for two programming approaches to the proposed cloud security model: the first using traditional programming, and the second using reactive programming, the most suitable solution in our case. The measurements in this paper use the same algorithms, and we present comparative results between the two approaches. The results, presented in tables and graphs, show that reactive programming in the proposed cloud security model offers better results than traditional programming.
... Private cloud monitoring is a concern, as most commercial cloud solutions are highly costly. To solve this issue, an open-source architecture for cloud monitoring has been proposed in [75]. The proposed architecture is split into three layers: i) infrastructure, ii) integration, and iii) view. ...
The number of cloud users has recently expanded dramatically. Cloud service providers (CSPs) have also multiplied and have therefore made their infrastructure more complex. This complex infrastructure needs to be distributed appropriately to various users. The advances in cloud computing have also led to the development of interconnected cloud computing environments (ICCEs). For instance, ICCEs include the hybrid cloud, intercloud, multi-cloud, and federated clouds. However, the sharing of resources is not facilitated by the specific proprietary technologies and access interfaces used by CSPs. Several CSPs provide similar services but have different access patterns. Data from various CSPs must be obtained and processed by cloud users. To ensure that all ICCE tenants (users and CSPs) benefit from the best CSPs, efficient resource management has been suggested. It is also pertinent that cloud resources be monitored regularly. Cloud monitoring is a service that works as a third-party entity between customers and CSPs. This paper presents a complete survey of cloud monitoring in ICCEs, focusing on cloud monitoring and its significance. Several current open-source monitoring solutions are discussed. A taxonomy for cloud resource management is presented and analyzed, covering resource pricing, assignment of resources, exploration of resources, collection of resources, and disaster management.
... The association between the energy cost of migration, network bandwidth, and VM size shows linearity. The structure represents virtual machine size as S, network bandwidth as NB, and X, Y, and Z as constant values [20]. ...
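The excerpt names the constants X, Y, and Z but does not give the exact equation, so the sketch below assumes one plausible linear form, E_mig = X*S + Y*NB + Z, where S is the VM size and NB the network bandwidth. Both the form and the coefficient values are placeholders, not figures from [20].

```python
def migration_energy(vm_size_mb, bandwidth_mbps, X=0.5, Y=0.2, Z=20.0):
    """Hypothetical linear model of VM migration energy cost.
    X, Y, Z are illustrative constants, not values from the cited work."""
    return X * vm_size_mb + Y * bandwidth_mbps + Z

# Energy grows linearly with VM size when bandwidth is held fixed.
print(migration_energy(1024, 100))  # -> 0.5*1024 + 0.2*100 + 20 = 552.0
print(migration_energy(2048, 100))  # -> 1064.0 (doubling S adds X*1024)
```

Linearity matters in practice because it lets a scheduler estimate migration cost from S and NB alone, without profiling each migration.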
The level of difficulty that can be envisioned in a cloud data center will not grow with convention. As a result, all hosts should have a standard and pervasive collection of memory and communication characteristics in order to lower ownership costs and operate virtual machine instances. This solution includes fundamental foundations and integrated component basics that will allow an IT or federal agency to embrace cloud computing domestically via private virtual cloud data centers. These private cloud data centers would later be developed to purchase and develop IT services on the outside. They are well aware of the obstacles to cloud computing’s acceptance, including concerns about credibility, privacy, interoperability, and marketplaces. In addition, this procedure describes critical standards and collaborations to address these issues. Ultimately, it offers a coherent response to deploying safe data centers using cloud computing services from both a technological and an IT strategic standpoint. To foster creativity, invention, learning, and enterprise, a private data center and cloud computing must be established to combine the activities of different research teams. In the framework of energy-efficient distribution of resources in private cloud data center architecture, we focus on system structure investigations. On the other hand, we want to equip private cloud providers with the current design and performance analysis for energy-efficient resource allocation. The methodology should be adaptable enough to support a wide range of computing systems, as well as on-demand and extensive resource providing approaches, cloud environment scheduling, and bridging the gap between private cloud users and a complete image of offers.
The McEliece public-key cryptography (PKC) has fewer encryption/decryption operations compared to other PKC schemes such as RSA, ECC, and ElGamal. The use of Goppa codes in its implementation ensures the hardness of the decoding problem. Conversely, the original McEliece PKC has a low encryption rate and large key size. In this paper, a new variant of the McEliece cryptosystem is presented based on non-linear convolutional codes. Cascaded convolutional codes are used to be part of the public key with each stage of the cascade separated by a product cipher to increase the security level. Convolutional codes are used as an alternative to Goppa codes since the Viterbi decoding algorithm is suitable for high data-rate applications by providing maximum-likelihood solutions. The convolutional code used in the implementation increases both security and throughput due to its high error-correcting capacity. It is shown that the new variant has small key sizes with enhanced security-complexity trade-off. Cryptanalysis of the new version of the McEliece cryptosystem is performed using existing attacks of the classical cryptosystem to demonstrate the difficulties in breaking the new cryptosystem. Also, it is shown that security levels comparable to the original McEliece cryptosystem could be obtained by using smaller public key sizes of the new version if multiple stages of the generator matrix are employed. This aspect makes the new version of the McEliece cryptosystem attractive in mobile wireless networks since it could be ported onto a single Field Programmable Gate Array (FPGA).
Cloud computing is becoming increasingly prevalent for outsourcing IT functions. The basic feature of offering virtual data center slices to customers has been in use for some time now. So far, customers only get the raw resources, with little insight into and control over their resources. But to let customers build reliable services on top of the rented infrastructure, they need adequate monitoring and control capabilities. In the future, we expect operators to offer such functions to their customers. In this paper, we introduce our approach toward offering a holistic monitoring system to data center customers. It offers generic monitoring information propagation and storage covering various types of resources (network, servers, and applications), all kinds of monitoring information, and all tenants. As virtualized data centers are usually large and multi-tenant, our solution is built with these properties in mind.
Cloud computing and its related paradigms, such as grid computing, utility computing, and voluntary computing, were examined, along with a proposed cloud business model ontology that offers a clear framework to delineate and classify cloud offerings. A number of cloud applications accessed through browsers but with the look and feel of desktop programs were also described. The ontology model basically consists of three layers analogous to the technical layers in most cloud realizations: infrastructure, platform as a service, and application. A number of prominent providers, such as Amazon, Google, Sun, IBM, and Oracle, have also extended their computing infrastructures and platforms to provide top-level services for computation, storage, and databases. The model also enables cloud users and providers to map products, identify customers and suppliers, and set pricing schemes.
This document is an informal description of Use Cases and requirements for the OCCI Cloud API, created by the Open Cloud Computing Interface working group. It records the needs of IaaS Cloud computing managers and administrators in the form of Use Cases, which serve as the primary guide for the development of API requirements. The document is the first deliverable to demonstrate and validate the features of the Open Cloud Computing Interface.
One of the many definitions of "cloud" is that of an infrastructure-as-a-service (IaaS) system, in which IT infrastructure is deployed in a provider's data center as virtual machines. With IaaS clouds' growing popularity, tools and technologies are emerging that can transform an organization's existing infrastructure into a private or hybrid cloud. OpenNebula is an open source, virtual infrastructure manager that deploys virtualized services on both a local pool of resources and external IaaS clouds. Haizea, a resource lease manager, can act as a scheduling back end for OpenNebula, providing features not found in other cloud software or virtualization-based data center management software.
Cloud computing is emerging as a new computing paradigm that aims to provide reliable, customized, and QoS-guaranteed dynamic computing environments for end users. This paper reviews recent advances in Cloud computing, identifies the concepts and characteristics of scientific Clouds, and finally presents an example of a scientific Cloud for data centers.
When it comes to grid and cloud computing, there is a lot of debate over their relation to each other. A common feature is that grids and clouds are attempts at utility computing; how they realize utility computing, however, differs. The purpose of this paper is to characterize grid and cloud computing, present a side-by-side comparison of the two, and outline what open areas of research exist.
Grid computing is a technology for distributed computing. To manage large-scale Grid resources with dynamic access, resource management is a key component. In this paper, a Grid Resource Information Monitoring (GRIM) prototype is introduced. To support constantly changing resource states in the GRIM prototype, a push-based data delivery protocol named Grid Resource Information Retrieving (GRIR) is provided. There is a trade-off between information fidelity and update transmission cost: the more frequent the reporting, the more precise the information, but the higher the overhead. The offset-sensitive mechanism, the time-sensitive mechanism, and the hybrid mechanism in GRIR are used to achieve a high degree of data accuracy while decreasing the cost of update messages. Experimental results show that the proposal reduces both the update transmission cost and the loss of data accuracy compared to prior methods.
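The hybrid mechanism described above can be sketched as a simple decision rule: push an update when the metric drifts past an offset threshold (offset-sensitive) or when too much time has passed since the last report (time-sensitive). The class and parameter names below are illustrative, not GRIR's actual interface.

```python
import time

class HybridReporter:
    """Sketch of a hybrid push policy: send on value drift OR staleness.
    Thresholds and names are assumptions, not GRIR's real parameters."""

    def __init__(self, offset=0.1, interval=30.0):
        self.offset = offset      # offset-sensitive threshold
        self.interval = interval  # time-sensitive deadline (seconds)
        self.last_value = None
        self.last_time = None

    def should_send(self, value, now=None):
        now = time.monotonic() if now is None else now
        if self.last_value is None:
            send = True  # always report the first observation
        else:
            drifted = abs(value - self.last_value) > self.offset
            stale = (now - self.last_time) > self.interval
            send = drifted or stale
        if send:
            self.last_value, self.last_time = value, now
        return send

r = HybridReporter(offset=0.1, interval=30.0)
print(r.should_send(0.50, now=0.0))   # True  (first report)
print(r.should_send(0.55, now=5.0))   # False (small drift, not stale)
print(r.should_send(0.80, now=6.0))   # True  (offset exceeded)
print(r.should_send(0.81, now=40.0))  # True  (interval exceeded)
```

This captures the stated trade-off: a small offset or interval improves fidelity at the price of more update messages.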
Cloud platforms host several independent applications on a shared resource pool, with the ability to allocate computing power to applications on a per-demand basis. The use of server virtualization techniques for such platforms provides great flexibility: the ability to consolidate several virtual machines on the same physical server, to resize a virtual machine's capacity, and to migrate virtual machines across physical servers. A key challenge for cloud providers is to automate the management of virtual servers while taking into account both the high-level QoS requirements of hosted applications and resource management costs. This paper proposes an autonomic resource manager to control the virtualized environment, which decouples the provisioning of resources from the dynamic placement of virtual machines. This manager aims to optimize a global utility function that integrates both the degree of SLA fulfillment and the operating costs. We resort to a Constraint Programming approach to formulate and solve the optimization problem. Results obtained through simulations validate our approach.
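A global utility that integrates SLA fulfillment and operating cost, as the abstract describes, can be sketched as a weighted difference. The weighting scheme and the cost model (a flat cost per active server) are illustrative assumptions, not the paper's actual formulation.

```python
def global_utility(sla_fulfillment, active_servers,
                   cost_per_server=1.0, alpha=0.7):
    """Toy global utility: reward SLA fulfillment (fraction in [0, 1]),
    penalize operating cost. alpha and cost_per_server are assumed
    weights, not values from the paper."""
    operating_cost = cost_per_server * active_servers
    return alpha * sla_fulfillment - (1 - alpha) * operating_cost

# Consolidating VMs onto fewer servers raises utility if SLAs still hold,
# which is the trade-off the autonomic manager would optimize.
print(global_utility(0.95, active_servers=4))
print(global_utility(0.95, active_servers=2))
```

The paper solves the resulting optimization with Constraint Programming; this sketch only shows the shape of the objective being maximized.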