I.T.’s Getting Cloudy. A Picture of the Current
and Future Cloud Computing Scenarios
Giuliano Manno
OculusAI Technologies
Stockholm, Sweden
Abstract: Cloud Computing is a relatively young paradigm. Although the
technologies it builds on have long been widely used in industry, Cloud
Computing reflects a new vision based on the business idea of applying a
successful outsourcing model to a computational infrastructure, platform
or software. The aim of this article is to point out the drivers that are
currently pushing the adoption of this paradigm, along with the trends in
the enterprise market segment and the threats that can interfere with a
natural development on the road to an open, standardized Cloud ecosystem.
Furthermore, an evolutionary model is provided to visualize the directions
taken by the major IT breakthroughs and to locate, on the time axis, the
development of Federated Cloud technologies toward a Cloud 2.0 evolution.
Index Terms: cloud-computing, SOA, federated-cloud-computing
1. Introduction

Every revolution in Information Technology is the result of previous
disruptive events that have changed the perception of the role of IT on a
global scale. When a new revolution becomes visible on the horizon, it
systemically inherits the de-facto roles of Information Technology
established by the preceding breakthroughs. Specifically, the advent of
the Personal Computer and the Internet laid the foundations of the new
revolutions and of their direct consequences, not only on a macro scale
but also on fine-grained end-user experiences.
Cloud Computing is the next wave of disruption in the IT industry, one
that will strongly affect human-to-machine and, to some extent,
human-to-human relationships. Cloud Computing is a paradigm that embraces
a mix of existing technologies whose adoption permits the deployment of
complex Internet-based services, leveraging shared and scalable resources
[1] to achieve a shorter innovation response time. This paradigm is bound
to expand, in the short term, from limited solutions for SMEs to large
customer-oriented services that, in the long term, will make Cloud
Computing the next utility [2].
The recognition of Cloud Computing as the new utility stems from the
analysis of the previous evolutions of Information Technology and of the
demands in computing that are driving the current events. Furthermore,
considering the actors changing their IT strategies, Cloud Computing has
been adopted by the “Early Innovators” since 2009 and in 2011 is moving
toward the “Early Majority” [3].
1.1. Cloud Adoption Drivers
The deployment of services upon Cloud infrastructures has
several benefits that will be discussed in this paragraph, in
order to outline the drivers that are strongly pushing the
adoption of the Cloud Computing paradigm.
1.1.1. Cloud Infrastructure: A Technological Driver
In the last decade, the rise in complexity and in the amount of exchanged
data after the advent of Web 2.0 revealed the structural limits of
data-centres in sustaining high and unpredictable workloads. The big
actors in the IT industry face the problem of scalability on a daily
basis. In the past, the problem of scalability was solved vertically,
upgrading the servers in the data-centres with more powerful processors,
more central memory and more storage. An alternative to this monolithic
solution is horizontal scaling, which improves performance by adding
multiple machine instances with the same amount of resources, creating a
computing cluster [4].
The technology used in the Cloud infrastructure to achieve horizontal
scalability is hardware virtualization, a technique that permits multiple
operating systems to run concurrently on the same physical machine. Today
these technologies answer the need for shared resources that can be
consolidated for cost reduction. Since virtual machines are isolated
computing environments, they can be migrated from one physical server to
another. Virtualization is the best way to exploit horizontal scalability,
from a single consolidated server to an entire clustered data-centre, and
it enables an optimal management of resources, which can be reallocated to
other virtual machines in case of under-utilization [5].
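As a rough illustration of the consolidation idea, the sketch below packs
virtual machines onto as few hosts as possible with a first-fit decreasing
heuristic. The VM names and capacity figures are hypothetical; real
schedulers weigh memory, I/O and affinity constraints as well.

```python
# First-fit decreasing consolidation: place VMs (sorted by CPU demand)
# onto as few hosts as possible, so idle hosts can be powered down.
def consolidate(vm_demands, host_capacity):
    hosts = []       # remaining free capacity of each host in use
    placement = {}   # vm name -> host index
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(hosts):
            if demand <= free:       # first host with enough room
                hosts[i] -= demand
                placement[vm] = i
                break
        else:                        # no existing host fits: open a new one
            hosts.append(host_capacity - demand)
            placement[vm] = len(hosts) - 1
    return placement, len(hosts)

placement, n_hosts = consolidate(
    {"vm-a": 4, "vm-b": 2, "vm-c": 3, "vm-d": 1}, host_capacity=8)
```

Here four VMs with a total demand of 10 CPU units fit on two 8-unit hosts,
instead of the four hosts a one-VM-per-server deployment would occupy.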
Furthermore, considering the workload, the normal behaviour of
client-server communication involves highly irregular network traffic,
with high peaks of service demand during specific hours. Outside peak
hours the servers receive a low rate of requests per second while still
using the same amount of electric power. In a virtualized data-centre the
varying nature of the service workload can be exploited in favour of a
deterministic resource allocation, scaling up and down elastically in
order to adapt capacity to the fluctuating number of requests. Without
elasticity, the performance of a data-centre would have to be statically
shaped according to the maximum predicted workload, losing
cost-efficiency.
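The elasticity argument can be made concrete with a small numeric sketch.
The hourly request rates and the per-instance capacity below are purely
illustrative assumptions, not measured figures:

```python
# Elastic vs. static provisioning: with elasticity, capacity follows the
# fluctuating request rate; statically, it must match the predicted peak.
import math

REQS_PER_INSTANCE = 100  # assumed requests/sec one instance can serve

def instances_needed(req_rate):
    return max(1, math.ceil(req_rate / REQS_PER_INSTANCE))

hourly_load = [40, 30, 25, 380, 900, 870, 410, 120]  # hypothetical day slice
elastic_hours = sum(instances_needed(r) for r in hourly_load)
static_hours = instances_needed(max(hourly_load)) * len(hourly_load)
savings = 1 - elastic_hours / static_hours   # fraction of instance-hours saved
```

In this toy trace the elastic allocation consumes 32 instance-hours
against 72 for peak-sized static provisioning, i.e. the data-centre avoids
paying for idle capacity outside peak hours.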
1.1.2. Economic Driver
Economy is probably the most important driver for the adoption of Cloud
Computing. The possibility of applying a demand-driven service model to a
complete computing environment, from the hardware to the software, can
improve agility, especially for companies that invest in innovation and
need a short time-to-market. Furthermore, horizontal scalability gives the
opportunity to elastically enlarge the capacity to provide services in
case of business growth, with fast responsiveness and no up-front
expenses. Another important aspect regards the costs that companies need
to cover and their impact on the budget. The most important indicator is
the Total Cost of Ownership (TCO), calculated as the sum of the
expenditures on infrastructure, implementation and services [6]:
- Capital expenditure (Cap-Ex): the infrastructure cost needed to acquire
physical assets. In order to create a Cloud environment, the Cap-Ex will
be used to buy the needed equipment, such as servers, SAN drives and
network appliances;
- Implementation expenditure (Imp-Ex): the cost that covers all the
activities from the installation of hardware and software to the final
rollout phase. In between, the provider could need to migrate from old
systems to new ones or to buy custom solutions for specific needs;
- Operating expenditure (Op-Ex): includes all the operating and
maintenance costs. It is usually the biggest cost area in data-centres,
since it includes electricity, downtimes, failures, security, audits,
disaster recovery, training, consultancies, insurance and personnel.
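The TCO decomposition above, and the Cap-Ex to Op-Ex conversion discussed
next, can be sketched as simple arithmetic. All monetary figures are
invented for illustration:

```python
# Total Cost of Ownership as the sum of the three expenditure areas.
# In a Public Cloud adoption the Cap-Ex item disappears and the leased,
# pay-per-use resources show up as a larger Op-Ex instead.
def tco(cap_ex, imp_ex, op_ex):
    return cap_ex + imp_ex + op_ex

on_premise = tco(cap_ex=500_000, imp_ex=120_000, op_ex=300_000)
public_cloud = tco(cap_ex=0, imp_ex=80_000, op_ex=450_000)  # pay-per-use leases
```

Under these hypothetical numbers the outsourced deployment has a lower
TCO, but the comparison obviously depends on workload size and the
provider's pricing, as the text notes.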
When the Cloud services are not developed and deployed on-premise (Private
Cloud), the company that uses an outsourced Cloud solution (Public Cloud)
will have no capital expenditures for the data-centre. In this situation,
the Cap-Ex is converted into Op-Ex, since the resources are leased with a
pay-per-use price model. The Cap-Ex/Op-Ex conversion can bring several
benefits in cost reduction; in fact, on-premise solutions may be subject
to inconvenient economic constraints, such as taxes and utility costs,
which is why data-centres are usually built where electricity is less
expensive [7]. However, it is important to emphasize that at this moment,
in which the Cloud offerings are neither numerous nor mature, the benefits
of a Cloud adoption depend on the size of the companies and on the
deployment models. For economic and technical reasons, SMEs are able to
take advantage of Public Cloud solutions; large enterprises, instead, will
prefer private or hybrid Cloud deployments [8].
1.1.3. Environmental Driver
The power consumption of the Internet infrastructure (considering server
power, cooling and auxiliaries) was estimated in 2005 at 123 TWh, growing
at a rate of 16% every year [9]. Data-centres that provide infrastructure
leasing as a service can optimize power consumption by leveraging
consolidation and by buying specific hardware solutions that can, for
example, eliminate redundant power supplies. Furthermore, an efficient
cooling design of rooms and server enclosures can reduce the carbon
footprint.
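To give a feel for the 16% growth rate cited above, the following
one-liner projects the 2005 estimate forward under the (strong) assumption
that the rate stays constant:

```python
# Compound-growth projection of the 2005 estimate of 123 TWh at 16%/year [9].
def projected_twh(base_twh=123.0, rate=0.16, years=0):
    return base_twh * (1 + rate) ** years

five_year = projected_twh(years=5)  # hypothetical 2010 figure, ~258 TWh
```

Such a trajectory more than doubles consumption in five years, which is
why the consolidation and cooling measures described above matter.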
1.2. The evolution of service models
Cloud Computing is a composition of three essential IT
elements subject to a service model: the Infrastructure, the
Platform and the Software, organized as shown in Figure 1.
Infrastructure is the bottom layer where the physical resources
of the system are shared, configured and managed. In the
Platform, all the libraries and common programming
components are provided to deploy the Software, which is in
the top layer accessible by the end users over the Internet or
the Local Area Network, depending on the deployment model
[10]. When a service model is applied to the Infrastructure, Platform or
Software, a new abstraction layer is created and hardware/software
resources can be leased according to the terms of a formal contract. This
general model is called the Cloud Stack; for each layer the service is
enabled by a Control Service, which is a generic interface to the
resources and their management.
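The Control Service idea can be sketched as a minimal lease/release
interface sitting in front of each layer's resources. The class, the
resource names and the catalogue entries are all illustrative assumptions,
not part of any real Cloud API:

```python
# A toy Control Service: each Cloud Stack layer exposes its resources
# only through this generic interface, never directly.
class ControlService:
    def __init__(self, layer, resources):
        self.layer = layer
        self.free = dict(resources)   # catalogue of leasable resources
        self.leased = {}

    def lease(self, name):
        if name not in self.free:
            raise ValueError(f"{name} unavailable at {self.layer} layer")
        self.leased[name] = self.free.pop(name)
        return self.leased[name]

iaas = ControlService("Infrastructure", {"vm-small": "1 vCPU / 2 GB"})
paas = ControlService("Platform", {"app-runtime": "Python 3 runtime"})
vm = iaas.lease("vm-small")           # resource now bound by the contract
rt = paas.lease("app-runtime")
```

The point of the sketch is the indirection: the consumer never touches the
physical asset, only the lease handed out by the layer's Control Service.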
Considering the time variable, it is interesting to point out that the
adoption of the three “a.a.S.” models did not occur in a bottom-up
direction. Instead, the implementation of the related technologies has
taken place in the opposite, top-down direction.
In 2001, the expression “Software as a Service” (SaaS) described a new way
of thinking about software: the idea of considering it no longer a
tangible product but an on-demand service, to which a business model based
on usage, rather than on commercial ownership, can be applied. This model
took advantage of the technologies used for web development to spread over
the Internet. During the last decade, the “X as a Service” model has been
extended to other technologies.
The Software-as-a-Service experience opened the way to the idea of
IT-as-a-Service and pragmatically converted the need for lower-level
operations into a concrete service model. Infrastructure as a Service is
the last model to have been introduced, since it required particular
interventions on operating systems and hardware components, such as kernel
modules or CPU extensions.
Figure 1 : Cloud Stack
2. Trends and Threats

This section points out some trends and possible threats in the evolution
of Cloud systems. The trends are the directions in which the development
of new solutions will concentrate its forces in the next years. The
threats, on the other side, can be considered the obstacles that, in the
long term, could affect a fluent evolution toward open standards and
processes.
2.1. Trend: SOA and Cloud Computing
The first years of Cloud Computing have been
characterized by pioneers such as Google [11] and Amazon
[12]. In the subsequent years multiple Cloud companies
started to propose their own technical solutions and important
consulting groups developed their new Cloud strategies.
According to the previous arguments, the deployment model of Cloud
services is still connected to the size of the clients (SMEs or Large
Enterprises), because it is directly related to complexity issues. Another
aspect is that small or medium companies require less time for a
Traditional-to-Cloud conversion; Large Enterprises, on the other hand,
will need a more strategic and systematic approach for an effective
transformation. Enterprise-level information systems are complex entities
that need to operate along the organization and its processes. For this
reason, an answer for this kind of company needs a specific set of design
principles that can help architects to provide an effective solution,
capable of lasting for years and at the same time able to change flexibly
and dynamically.
As defined by the OASIS group, the “Service Oriented
Architecture (SOA) is a paradigm for organizing and utilizing
distributed capabilities that may be under the control of
different ownership domains” [13]. Contrary to common belief, SOA is not
an architecture but a paradigm that can assist the design of large-scale
information systems, helping to avoid several design issues. When Cloud
Computing started to become feasible for Large Enterprises, the
discrepancies and similarities with the Service Oriented Architecture
emerged, and some blurred borders appeared between the two paradigms. It
is important to emphasize that the main difference between SOA and Cloud
Computing lies in their roles in the design process and in the
infrastructure implementation. The shared use of web-service technologies
like SOAP [14] or REST [15] in both Cloud and SOA solutions has created
confusion about how these two paradigms can coexist in large enterprise
systems. Cloud Computing and SOA can be adopted together, to take
advantage of the benefits that they respectively generate, but they can
also be used independently. The adoption of a Cloud infrastructure can be
useful to achieve the strategic goal of a SOA design, exploiting
infrastructure scalability, multi-tenancy and elasticity. At the same
time, SOA can provide a set of effective principles to design a Cloud
solution with a specific strategy, achieving the right agility and
responding to the enterprise's needs. Furthermore, the use of SOA
principles may help the architect design the Cloud solution without
targeting a specific Cloud vendor or provider, remaining free to diversify
the implementation. Vendor lock-in is an issue that should be taken into
account by every Cloud designer, as described in the next paragraph.
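The SOA advice above, designing against a service contract instead of a
concrete vendor API, can be sketched as follows. The interface, the
in-memory backend and the function names are hypothetical stand-ins, not
any real provider's SDK:

```python
# Business logic depends on an abstract service contract; the concrete
# Cloud backend can be swapped without touching it (mitigating lock-in).
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key, data): ...
    @abstractmethod
    def get(self, key): ...

class InMemoryStore(ObjectStore):
    """Stands in for any vendor-specific storage backend."""
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

def archive(report, store: ObjectStore):
    # Business logic sees only the contract, never the vendor API.
    store.put("report", report)

store = InMemoryStore()
archive(b"q1-results", store)
```

Replacing `InMemoryStore` with an adapter for another provider requires no
change to `archive`, which is exactly the diversification freedom the SOA
principles aim at.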
2.2. Threat: The Issue of Vendor Lock-In
Cloud Computing is a mixture of pre-existing technologies, built up by the
business world to form a complex solution that provides IT as a service.
Academia started to focus on this subject to solve particular issues, not
only technical but also inherent to the principles that constitute the
foundation of modern IT. The Cloud solutions provisioned by the big
players are vendor locked-in by design; their existence is subject to a
limited possibility of interoperating with other Cloud systems. Even if
the pay-as-you-go pricing model does not contemplate any up-front cost,
the act of moving to the Cloud should take into account the costs of a
possible migration to other providers or vendors, for both public and
private deployments. A normative scenario seems anything but trivial to
obtain, since there are several evolutions that can lead to such a
conflicting situation. When using external Cloud providers, the knowledge
of the implementation is rarely available to the public, since the
majority of them use exclusive “closed” technologies and configurations.
This lack of transparency is one of the first issues from which a set of
further doubts can be raised, mainly concerning the impossibility of
knowing how data is managed and whether the provider is actually
respecting the lease contract.
2.3. Threat: The Issue of Centralization
In the current representation offered by the media, the concept of the
Cloud itself reflects the idea of a complex entity whose components are
concentrated in a single spot with clear borders, somehow moving away from
the concept of the Internet cloud, whose similarity was clear and
intentional in the first place.

The Cloud Computing paradigm is strongly shifting computational load and
storage from the end points of the global network to powerful centralized
data-centres, and it will give thin clients such as smartphones or tablets
the possibility to remotely carry out complex operations, exploiting the
pay-as-you-go pricing model. Centralization is the initial stage of the
development of a solution, especially in the Cloud Computing paradigm, in
which consolidation is a foundational aspect with several benefits but
also threats, such as critical single points of failure. Another
fundamental aspect of Cloud systems is elasticity, which is considered a
key value but implies an infinite resource growth and usage. This idea is
unrealistic on a global scale and, obviously, on localized ones.
2.3.1. The End-to-End Principle
The mesh topology of the Internet, designed during the Cold War without a
centralized control (to keep networks operative in case of a nuclear
attack), is the most important factor responsible for the freedom of the
Internet. With this topology the communication is established between two
endpoints (End-to-End) of the Internet cloud, decentralizing the
computation at the edge and keeping the operations within the network
relatively simple and oriented toward optimizing data transfers. So far,
the complexity and intelligence needed to provide applications have been
pushed away from the network itself, located in or close to the devices
attached at the edges [16]. A plausible but unwished-for long-term Cloud
scenario could change the fundamentals of internetworking, moving
computation and storage away from the edges of the network and opening the
possibility of a new type of control over private identities and sensitive
data [17].
3. An Evolutionary Model for IT Revolutions

In this section, a general evolutionary model is provided to explain the
dynamics that have moved Information Technology from one revolution to
another, considering their cyclic centralized/decentralized nature
[18,19]. This model shows how every paradigm shift in Information
Technology has been carried out through a successful cyclic path along
four dimensions: Centralization, Popularity, Decentralization and
Standards. A graphic representation of this model is shown in Figure 2.
3.1. Centralization
The idea of a new technology is the result of a new business or research
idea, built on the usage of pre-existing mature (standard) technologies.
Usually, the first stage of development is made by analysing the problems
with a centralized divide-and-conquer paradigm. Mainframes, local
networking and Cloud technologies have been developed with this approach.
When a technology is developed outside the academic world, the research in
that direction generates projects that must evolve without an open
knowledge sharing with other entities, generating vendor lock-in-oriented
solutions.

Figure 2 : An evolutionary model for IT revolutions
3.2. Popularity (Heterogeneity)
The technology is considered valuable from an economic or academic point
of view and starts to spread. Usually the implementations of the initial
ideas differ. In academia, every research initiative starts from limited
groups of people and is eventually followed by other groups. On the
business side, instead, every research project is classified and the
technologies behind the resulting products are proprietary.
3.3. Decentralization
This point has been reached by various technologies, especially when
different systems, after a diffusion phase, need to communicate; since
they have been developed by independent organizations, there are several
integration and interconnection issues that can be solved only with the
definition of a standard communication model that abstracts from the
different implementations.
3.4. Standardization
This phase usually implies the definition of standard technologies, models
and architectures that enable the new paradigm and elevate it to a mature
level. The initial standardization processes relevant to this discussion
regarded the network models and protocols, like ISO/OSI and TCP/IP.
Without these standards, further development would have been impossible.
Other technologies have been subject to a similar evolution, becoming
de-facto standards, but this kind of process is usually related to
companies whose work has become a worldwide reference, usually after a
long struggle in the marketplace involving violent skimming and
acquisition stages. These activities often create a new context that is
sometimes described with the “2.0” version number.
4. Toward Cloud 2.0

The evolutionary model presented in the previous section outlines the
possible scenarios for the next developments of Cloud Computing. A Cloud
2.0 scenario is conceivable after a standardization process that will
unify the whole set of operations and processes needed to accomplish
service selection and contract establishment on the same unified layer.
4.1. Federated Cloud Computing
Considering a generic enterprise viewpoint, Cloud
federation can be described through the concept of community,
where each Cloud is a different and independent domain. Each
community is distinguished by a common relationship that
defines an objective, which is the set of resources that are
shareable with other domains in the federation. The
community is a collection of interacting Clouds whose
purpose is to fulfil an objective, which is defined by a contract
that describes the economic and technical nature of the
commitment, along with the policies that define the
constraints of this interaction.
Each domain that forms the federation remains autonomous with respect to
entering or leaving the federation and to sharing its resources. Each
domain is part of a chain of responsibility, where each scope is
hierarchically limited to the assets directly controlled by that domain.
Beyond that, each domain is independent. When domain A and domain B enter
a federation, each one designates its shareable assets according to the
concepts and rules of the federation; each domain in the federation
retains autonomy over its shareable resources.
Technically, a domain is a set of entities bound by a common relationship
to a controlling service that may or may not be part of the domain. The
controlling service consists of an architecture that involves several
components of the federation to manage domain naming, identification,
authorization and trust, and access control, and that enables search and
retrieval services.
A contract establishes the agreement between the parties and carries with
it the obligations of each domain, the duration of the commitment and the
strategy to resolve failures or other types of exceptions.
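The federation concepts described above (autonomous domains, designated
shareable assets, a contract binding the members) can be sketched as a
minimal data model. The class and field names are illustrative, not a
proposed standard:

```python
# Toy data model of a Cloud federation: each autonomous domain shares
# only a designated subset of its assets under a common contract.
from dataclasses import dataclass, field

@dataclass
class Domain:
    name: str
    assets: set = field(default_factory=set)
    shared: set = field(default_factory=set)  # subset offered to the federation

    def share(self, asset):
        assert asset in self.assets  # only directly controlled assets
        self.shared.add(asset)

@dataclass
class Federation:
    objective: str
    contract: dict                   # duration, obligations, failure policy
    members: list = field(default_factory=list)

    def shareable(self):
        # Union of what every member domain has chosen to expose.
        return {a for d in self.members for a in d.shared}

a = Domain("A", assets={"storage-pool"}); a.share("storage-pool")
b = Domain("B", assets={"gpu-cluster"});  b.share("gpu-cluster")
fed = Federation("burst capacity", {"duration": "12 months"}, [a, b])
```

Note that each `Domain` keeps its full `assets` set private; the
federation only ever sees the `shared` subset, mirroring the autonomy
principle in the text.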
In a federated Cloud environment, the real scenario presents highly
heterogeneous systems that can differ at every level: from the
technologies involved in the computation model, such as the hypervisor and
the whole set of tools built upon it, to the configurations, internal
security policies, network topologies, externally exposed services and
other details. Considering the Cloud Computing marketplace, providers tend
to reproduce the same tested environment in every physical data-centre,
distributed across different geographical locations.
So far, the research on federated Clouds has put heterogeneity as the
first issue to be resolved in order to make different environments
communicate. Like many other IT compatibility problems, especially those
stemming from the lack of standardization, the heterogeneity issue has a
solution that involves the design of further abstraction layers, as seen
in the Reservoir project [20].
In the Cloud, the act of enlarging the computational domain follows a
horizontal direction, but when the domain needs to cross the physical
borders of the data-centre, another notion of scalability must be
introduced in order to explain the various possibilities. This
cross-domain scalability is often referred to as scaling out.
Figure 3 describes two different scaling-out directions, vertical and
horizontal. Scaling out vertically can represent the real-world scenario
of an organization that uses the same basic physical infrastructure,
managed by the same policies.

Figure 3 : Cloud infrastructure and scalability in a federated scenario

On the contrary, a horizontal scale-out represents the
creation of a multi-Cloud domain that needs to manage heterogeneous
resources. This second example designates a scenario where a federation is
created between different independent entities, each with its own
organization. The federation between Clouds seems an intermediate step
toward a future Cloud networking without any centralized control, in which
Cloud providers will be able to share assets through standardized
processes, automatically selecting services in an open and free Cloud
ecosystem.
5. Conclusions

This article has pointed out the main drivers that are leading the
adoption of the Cloud Computing paradigm, along with current research
directions such as Federated Cloud Computing. It is conceivable that in
the near future the IT industry will be subject to a skimming phase, after
which the winning Cloud solutions will constitute the foundation for the
future best practices in development, management, security and quality of
service. Regardless of the next events in the business, the Federated
Cloud paradigm will surely be a leading research area, responsible for the
design of the architecture of the future Cloud ecosystems.
Future articles will provide a comprehensive description of a Federated
Cloud Computing architecture and of the semantic description of the
resources belonging to the infrastructure. A federated
Infrastructure-as-a-Service is the next step in IaaS research, where
different data-centres can join a community to share resources. Once the
management of a federation is mature, research and business will be able
to point in the right direction to connect different federations and to
standardize operations and processes, in order to concretely form a cloud
of clouds, just as the Internet is a network of networks.
References

[1] NIST SP 800-145, “The NIST Definition of Cloud Computing.”
[2] Leonard Kleinrock, “A vision for the Internet,” ST Journal of Research 2 (2005), no. 1, pp. 45.
[3] E.M. Rogers, Diffusion of Innovations, 5th edition, Free Press, 2003.
[4] Lijun Mei, W. K. Chan, and T. H. Tse, “A tale of Clouds: Paradigm comparisons and some thoughts on research issues,” Proceedings of the 2008 IEEE Asia-Pacific Services Computing Conference (APSCC), IEEE Computer Society Press, 2008, pp. 464-469.
[5] Ravi Iyer, Ramesh Illikkal, Omesh Tickoo, Li Zhao, Padma Apparao, and Don Newell, “VM3: Measuring, modeling and managing VM shared resources,” Computer Networks: The International Journal of Computer and Telecommunications Networking 53 (2009), no. 17.
[6] Xinhui Li, Ying Li, Tiancheng Liu, Jie Qiu, and Fengchun Wang, “The method and tool of cost analysis for Cloud computing,” 2009 IEEE International Conference on Cloud Computing, Sep 2009, pp. 93-100.
[7] Michael Armbrust, Armando Fox, Rean Griffith, Anthony D. Joseph, Randy H. Katz, Andrew Konwinski, Gunho Lee, David A. Patterson, Ariel Rabkin, and Matei Zaharia, “Above the Clouds: A Berkeley View of Cloud Computing,” Tech. report, UC Berkeley Reliable Adaptive Distributed Systems Laboratory, 2009.
[8] McKinsey, “Clearing the Air on Cloud Computing,” 2009.
[9] J. G. Koomey, “Estimating Total Power Consumption by Servers in the U.S. and the World,” Lawrence Berkeley National Laboratory, Stanford University, CA, 2007.
[10] Fang Liu, Jin Tong, Jian Mao, Robert Bohn, John Messina, Lee Badger, and Dawn Leaf, “NIST Cloud Computing Reference Architecture.”
[11] Google, Google App Engine.
[12] Amazon, Amazon Elastic Compute Cloud (EC2).
[13] OASIS, Reference Model for Service Oriented Architecture 1.0, 2006.
[14] W3C, Simple Object Access Protocol (SOAP), 2007.
[15] R. T. Fielding, “Representational State Transfer (REST),” Chapter 5 in Architectural Styles and the Design of Network-based Software Architectures, Ph.D. thesis, 2000.
[16] D. D. Clark and M. S. Blumenthal, “Rethinking the Design of the Internet: The End-to-End Arguments vs. the Brave New World,” 28th Research Conference on Communication, Information and Internet Policy (TPRC), Alexandria, VA, USA, September 23-25, 2000.
[17] Richard Stallman, “Cloud Computing is a Trap.”
[18] Jeffrey Voas and Jia Zhang, “Cloud Computing: New Wine or Just a New Bottle?,” IEEE IT Professional, 2009.
[19] D. A. Peak and M. H. Azadmanesh, “Centralization-decentralization cycles in computing: Market evidence,” Information & Management 31 (1997), no. 6, pp. 303-317.
[20] Benny Rochwerger, David Breitgand, Eliezer Levy, Alex Galis, Kenneth Nagin, Ignacio M. Llorente, Ruben Montero, Yaron Wolfsthal, Erik Elmroth, Juan Caceres, Muli Ben-Yehuda, Wolfgang Emmerich, and Fermin Galan, “The Reservoir model and architecture for open federated Cloud computing,” IBM Journal of Research and Development 53 (2009), no. 4.
... Taking into account knowledge engineering methodologies, it is possible to define a general model that is able to capture the features of Cloud resources, not only on the infrastructure level, but also on the platform and software ones. The description of the resources can be created using ontologies placed in every Control Service Layer of the Cloud-stack [23] Figure 4. The ontologies have been implemented using the Web Ontology Language (OWL). ...
Conference Paper
Full-text available
Cloud Computing is a paradigm that applies a service model on infrastructures, platforms and software. In the last few years, this new idea has been showing its potentials and how, in the long run, it will affect Information Technology and the act of interfacing to computation and storage. This article introduces the FCFA project, a framework for an ontology-based resource life-cycle management and provisioning in a federated Cloud Computing infrastructure. Federated Clouds are presumably the first step toward a Cloud 2.0 scenario where different providers will be able to share their assets in order to create a free and open Cloud Computing marketplace. The contribution of this article is a redesign of a Cloud Computing infrastructure architecture from the ground-up, leveraging semantic web technologies and natively supporting a federated resource provisioning.
Cloud federation can be described through the concept of collaboration, where each organization has its own cloud(s) that deals with a different and independent domain but needs to work together with other organizations in order to fulfill a specific shared objective. According to this perspective, the federation is a collection of interacting clouds that collaborate with one another through the instantiation and management of shared subsets of resources (computation and storage resources as well as sensors and actuators). This idea could be profitably used in those scenarios in which different organizations have to share several resources (e.g., emergency response or disaster management scenario). On the other hand, when different independent organizations share their resources, several issues arise. One of them is related to interoperability problems. As a consequence, this work also introduces a framework for an ontology-based resource life cycle management and provisioning in a federated cloud infrastructure. Therefore, The main contributions of this work consists of redesigning a cloud infrastructure architecture from the ground up, leveraging Semantic Web and Semantic Web Service Technologies, and natively supporting a federated provisioning of any kind of resource. This paper exploits, as a motivating scenario, a flood emergency response system. Copyright © 2014 John Wiley & Sons, Ltd.
Conference Paper
Full-text available
Cloud computing is an emerging computing paradigm. It aims to share data, calculations, and services transpar-ently among users of a massive grid. Although the industry has started selling cloud-computing products, research challenges in various areas, such as UI design, task decomposition, task distribution, and task coordinat-ion, are still unclear. Therefore, we study the methods to reason and model cloud computing as a step toward identifying fundamental research questions in this para-digm. In this paper, we compare cloud computing with service computing and pervasive computing. Both the industry and research community have actively examined these three computing paradigms. We draw a qualitative comparison among them based on the classic model of computer architecture. We finally evaluate the compar-ison results and draw up a series of research questions in cloud computing for future exploration.
The emerging cloud-computing paradigm is rapidly gaining momentum as an alternative to traditional IT (information technology). However, contemporary cloud-computing offerings are primarily targeted for Web 2.0-style applications. Only recently have they begun to address the requirements of enterprise solutions, such as support for infrastructure service-level agreements. To address the challenges and deficiencies in the current state of the art, we propose a modular, extensible cloud architecture with intrinsic support for business service management and the federation of clouds. The goal is to facilitate an open, service-based online economy in which resources and services are transparently provisioned and managed across clouds on an on-demand basis at competitive costs with high-quality service. The Reservoir project is motivated by the vision of implementing an architecture that would enable providers of cloud infrastructure to dynamically partner with each other to create a seemingly infinite pool of IT resources while fully preserving their individual autonomy in making technological and business management decisions. To this end, Reservoir could leverage and extend the advantages of virtualization and embed autonomous management in the infrastructure. At the same time, the Reservoir approach aims to achieve a very ambitious goal: creating a foundation for next-generation enterprise-grade cloud computing.
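The dynamic-partnering vision sketched in the abstract, where providers federate to present a seemingly infinite resource pool, can be caricatured with a toy placement routine. This is a hedged illustration in the spirit of cloud bursting, not the Reservoir architecture's actual mechanism; all class and provider names are invented:

```python
# Toy sketch of federated placement: a provider serves requests from its
# own capacity and transparently delegates overflow to partner clouds,
# while each partner retains control over its own capacity.
class Provider:
    def __init__(self, name, capacity, partners=()):
        self.name = name
        self.free = capacity          # free capacity in CPU cores
        self.partners = list(partners)

    def place(self, vm_cores):
        """Return the name of the provider hosting the VM, or None."""
        if self.free >= vm_cores:
            self.free -= vm_cores
            return self.name
        for p in self.partners:       # burst to a federated partner
            host = p.place(vm_cores)
            if host is not None:
                return host
        return None                   # federation-wide capacity exhausted

partner = Provider("cloud-B", capacity=4)
home = Provider("cloud-A", capacity=2, partners=[partner])
print([home.place(2) for _ in range(3)])
# → ['cloud-A', 'cloud-B', 'cloud-B']
```

From the customer's point of view the placement is transparent: the request is simply served, whether locally or by a partner, which is what makes the aggregate pool appear "seemingly infinite" until every member is full.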
Cloud computing has evolved from previous computing paradigms going back as far as the days of mainframes, but is it really different? Do the explosive new capabilities of cloud computing solve any of the problems left unsolved three decades ago? The authors in this issue discuss their views on what cloud computing is, leaving the reader to decide for themselves.
Strategies concerning centralized and decentralized commercial computing have been major issues for more than two decades. Using longitudinal sales data consolidated into three major computer categories (mainframes, minicomputers, and microcomputers), we investigate whether historical market data show evidence of centralization and decentralization. Our findings lead us to conclude that computing sales data exhibit broadly cyclic characteristics. We suggest that computing strategies oscillate unevenly between the domination of centralization and that of decentralization, and that commercial computing has already experienced two centralization/decentralization cycles. Currently, computing is nearing the end of the second cycle's decentralization period and is at the threshold of centralization in a third cycle.
With cloud and utility computing models gaining significant momentum, data centers are increasingly employing virtualization and consolidation as a means to support a large number of disparate applications running simultaneously on a chip-multiprocessor (CMP) server. In such environments, contention for shared platform resources (CPU cores, shared cache space, shared memory bandwidth, etc.) can have a significant effect on each virtual machine's performance. In this paper, we investigate the shared resource contention problem for virtual machines by: (a) measuring the effects of shared platform resources on virtual machine performance, (b) proposing a model for estimating shared resource contention effects, and (c) proposing a transition from a virtual machine (VM) to a virtual platform architecture (VPA) that enables transparent shared resource management through architectural mechanisms for monitoring and enforcement. Our measurement and modeling experiments are based on a consolidation benchmark (vConsolidate) running on a state-of-the-art CMP server. Our virtual platform architecture experiments are based on detailed simulations of consolidation scenarios. Through detailed measurements and simulations, we show that shared resource contention affects virtual machine performance significantly and emphasize that a virtual platform architecture is a must for future virtualized datacenters.
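The contention-estimation idea can be caricatured with a toy linear model. This is an assumed form chosen purely for illustration, not the estimator actually proposed in the paper or measured with vConsolidate; the function name, the sensitivity parameter, and the 0-1 pressure scale are all invented:

```python
# Hypothetical toy model: a VM's slowdown grows with the aggregate pressure
# its co-located VMs place on shared resources (cache space, memory
# bandwidth), scaled by how sensitive the VM itself is to that contention.
def estimated_slowdown(vm_sensitivity, co_runner_pressures):
    """Slowdown factor: 1.0 is native speed, 1.5 means 50% longer runtime."""
    return 1.0 + vm_sensitivity * sum(co_runner_pressures)

solo = estimated_slowdown(0.5, [])            # dedicated platform: 1.0
shared = estimated_slowdown(0.5, [0.6, 0.8])  # two noisy neighbors: 1.7
print(solo, shared)
```

Even this crude sketch conveys the paper's point: consolidation is never free, and without architectural monitoring and enforcement (the VPA proposal), a VM's performance is hostage to whoever it happens to be co-scheduled with.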