Conference Paper

NaaS: network-as-a-service in the cloud

Authors: Paolo Costa, Matteo Migliavacca, Peter Pietzuch, Alexander L. Wolf

Abstract

Cloud computing realises the vision of utility computing. Tenants can benefit from on-demand provisioning of computational resources according to a pay-per-use model and can outsource hardware purchases and maintenance. Tenants, however, have only limited visibility and control over network resources. Even for simple tasks, tenants must resort to inefficient overlay networks. To address these shortcomings, we propose Network-as-a-Service (NaaS), a framework that integrates current cloud computing offerings with direct, yet secure, tenant access to the network infrastructure. Using NaaS, tenants can easily deploy custom routing and multicast protocols. Further, by modifying the content of packets on-path, they can efficiently implement advanced network services, such as in-network data aggregation, redundancy elimination and smart caching. We discuss applications that can benefit from NaaS, motivate the functionality required by NaaS, and sketch a possible implementation and programming model that can be supported by current technology. Our initial simulation study suggests that, even with limited processing capability at network switches, NaaS can significantly increase application throughput and reduce network traffic.
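The traffic reduction from in-network aggregation mentioned in the abstract can be illustrated with a toy model. This is a minimal sketch, not the paper's simulation: the tree fan-out, payload size, and function names are assumptions, and it only counts bytes crossing each tree level when switches combine child payloads into one.

```python
# Hypothetical sketch: compare shuffle traffic with and without in-network
# aggregation on a k-ary aggregation tree (topology and sizes are assumptions).

def baseline_bytes(n_senders: int, payload: int) -> int:
    """All-to-one shuffle: every sender's payload crosses the network whole."""
    return n_senders * payload

def naas_bytes(n_senders: int, payload: int, fanout: int) -> int:
    """Switches aggregate flows level by level, so each level forwards one
    combined payload per switch instead of one payload per child."""
    total, nodes = 0, n_senders
    while nodes > 1:
        parents = -(-nodes // fanout)      # ceiling division
        total += parents * payload         # one aggregated packet per parent
        nodes = parents
    return total

print(baseline_bytes(64, 1500))            # 96000 bytes without aggregation
print(naas_bytes(64, 1500, fanout=4))      # 31500 bytes with aggregation
```

Even this crude model shows why modest switch processing can cut aggregate traffic by a large factor for reduction-style workloads.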


... Cloud computing is also used in edge [3] and fog [4] computing to enable such services [5], which require advanced system management for resources (e.g., IaaS). These various uses of network resources in cloud centers are feasible with network virtualization, and an increasing number of tenants demand their own virtual network management functionalities (e.g., arbitrary virtual network composition, custom forwarding, and performance diagnosis for their virtual networks) [6]. ...
... However, most cloud solutions provide IaaS for host nodes only, and these solutions rely on host-based network virtualization (H-NV) [7], [8], which uses overlay networking between virtual machine monitors. With H-NV, composing an arbitrary virtual network, which consists of virtual switches and links mapped to the physical topology, as well as virtual network management and optimization are restricted for each tenant [6]. In contrast, programmable network virtualization (P-NV) [9]- [11], a structure in which a network hypervisor virtualizes the entire network and switches, provides fully programmable virtual networks (VNs) and enables flexible network management for each tenant. ...
... In cloud networking, a key requirement is scalability [6], [12]; thus, we first investigate the amount of resources H-NV and P-NV consume when end hosts perform the same networking operations. This scalability also determines the number of tenants that can be supported in a cloud. ...
Conference Paper
We propose a new concept called "flow rule virtualization" (FlowVirt) for programmable network virtualization (P-NV). In P-NV, the network hypervisor is a key component: it creates and manages virtual networks. This paper first reports a critical limitation of the network hypervisor, a scalability problem, which results in high consumption of switch memory, control channel bandwidth, and CPU cycles: 3.9, 4.7, and 1.7 times higher than host-based network virtualization, respectively. This scalability problem arises because all the flow rules from the virtual network controllers are directly installed into switches. To resolve it, FlowVirt introduces a flow rule abstraction: virtual and physical flow rules. By separating virtual and physical flow rules, the abstraction virtualizes flow rules so that FlowVirt can merge virtual flow rules into a smaller number of physical flow rules to be installed in switches. The evaluation results show the enhanced scalability of FlowVirt. The number of flow rules to be installed in switches decreases by up to 10 times compared to previous P-NV. The control channel bandwidth and CPU cycles are also reduced by up to 14 and 3 times, respectively.
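The core merging idea above can be sketched in a few lines. This is an illustrative toy, not FlowVirt's actual algorithm: the rule representation and the assumption that mergeable rules share an identical physical match/action are simplifications.

```python
# Toy sketch of flow-rule merging (data model is an assumption): virtual rules
# from different tenant controllers that translate to the same physical
# match/action are collapsed into one shared physical rule.

from collections import defaultdict

def merge_rules(virtual_rules):
    """virtual_rules: list of (tenant, match, action) tuples, after the
    hypervisor has translated tenant matches into physical header space."""
    physical = defaultdict(list)
    for tenant, match, action in virtual_rules:
        physical[(match, action)].append(tenant)   # tenants share one entry
    return physical

vrules = [
    ("t1", "dst=10.0.0.0/24", "out:2"),
    ("t2", "dst=10.0.0.0/24", "out:2"),   # identical physical behaviour
    ("t3", "dst=10.0.1.0/24", "out:3"),
]
merged = merge_rules(vrules)
print(len(vrules), "virtual ->", len(merged), "physical")  # 3 virtual -> 2 physical
```

The reference count per physical rule also indicates when an entry can be evicted: only once every tenant rule mapped onto it is deleted.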
... Network as a service (NaaS) is one of the latest cloud services. NaaS was proposed in 2012 (Costa, Migliavacca, Pietzuch, & Wolf, 2012). The concept of NaaS is to outsource networking to cloud service providers in order to limit the cost of data communications for cloud consumers, as well as to improve network flexibility (Costa, et al., 2012). ...
... NaaS was proposed in 2012 (Costa, Migliavacca, Pietzuch, & Wolf, 2012). The concept of NaaS is to outsource networking to cloud service providers in order to limit the cost of data communications for cloud consumers, as well as to improve network flexibility (Costa, et al., 2012). NaaS enables cloud consumers to use network connectivity services and/or inter-cloud network connectivity services. ...
... VPN as a Service is a subset of Network as a Service (NaaS). NaaS is one of the latest cloud services (Costa, et al., 2012). NaaS is highly attractive to enterprises and industries nowadays because it limits the cost of data communications for cloud consumers and improves network flexibility. It is important to thoroughly investigate the benefits and shortcomings of NaaS, taking into account its influence and impact on enterprise networks as well as the security issues. ...
Chapter
Full-text available
Securing a cloud network is an important challenge for delivering cloud services to enterprise clouds. A number of secure network protocols, such as VPN protocols, are currently available to provide different secure network solutions for enterprise clouds. For example, PPTP, IPSec, and SSL/TLS are the most widely used VPN protocols in today's secure networking solutions. However, there are some significant challenges at the implementation stage. For example, which VPN solution is easiest to deploy in delivering cloud services? Which VPN solution is most user-friendly in enterprise clouds? This chapter explores these issues by implementing different VPNs in a virtual cloud network environment using open source software and tools. This chapter also reviews cloud computing and cloud services and looks at their relationships. The results not only provide experimental evidence but also assist network implementers in deploying secure network solutions for enterprise cloud services.
... Hence, network slicing leverages the benefits of a virtualized resource sharing environment enabled by Software-Defined Networking (SDN) and Network Function Virtualization (NFV) [2]- [4]. Based on softwarization and virtualization, it is capable of enabling Network-as-a-Service (NaaS) [5] and allows the coexistence of multiple networks on the same physical infrastructure. An E2E network slice is composed of the Radio Access Network (RAN), transport and Core Network (CN) sub-network slices in between the end (user) devices [6]. ...
... Input: i ∈ I, t ∈ T, b ∈ B, k ∈ K, j ∈ J, h_jkb(t), P_max. Output: solution to Eq. (8a), providing fairness between eMBB users. Relax x to x̃ and decompose P0 into P̃1, P2, and P3; set R_min; for l ← 0 to L: find x̃(l), p(l), ρ(l) from the feasible convex set as solutions of P̃1, P2, and P3 respectively, then find x(l) via Eq. ...
Article
Full-text available
Network slicing has been a significant technological advance in the 5G mobile network allowing delivery of diverse and demanding requirements. The slicing grants the ability to create customized virtual networks from the underlying physical network, while each virtual network can serve a different purpose. One of the main challenges yet is the allocation of resources to different slices, both to best serve different services and to use the resources in the most optimal way. In this paper, we study the radio resource slicing problem for Ultra-Reliable Low Latency Communications (URLLC) and enhanced Mobile Broadband (eMBB) as two prominent use cases. The URLLC and eMBB traffic is multiplexed over multiple numerologies in 5G New Radio, depending on their distinct service requirements. Therein, we present our optimization algorithm, Mixed-numerology Mini-slot based Resource Allocation (MiMRA), to minimize the impact on eMBB data rate due to puncturing by different URLLC traffic classes. Our strategy controls such impact by introducing a puncturing rate threshold. Further, we propose a scheduling mechanism that maximizes the sum rate of all eMBB users while maintaining the minimum data rate requirement of each eMBB user. The results obtained by simulation confirm the applicability of our proposed resource allocation algorithm.
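The puncturing-rate threshold described above can be illustrated with a deliberately simple model. This is not MiMRA itself: the linear rate-loss model and the hard cap are assumptions made purely to show how a threshold bounds the eMBB impact.

```python
# Toy model (assumptions, not the paper's formulation): URLLC traffic punctures
# a fraction of eMBB resources, and a threshold caps how much puncturing the
# scheduler will admit, bounding the eMBB rate loss.

def embb_rate(nominal_rate: float, urllc_load: float, threshold: float) -> float:
    """Effective eMBB data rate when URLLC punctures up to `threshold`
    of the eMBB resources (all quantities are fractions/rates, toy units)."""
    punctured = min(urllc_load, threshold)   # cap the admitted puncturing
    return nominal_rate * (1.0 - punctured)

print(embb_rate(100.0, urllc_load=0.3, threshold=0.2))  # capped at 20% loss
print(embb_rate(100.0, urllc_load=0.1, threshold=0.2))  # below the cap
```

URLLC demand beyond the cap would have to be served from other resources (e.g., another numerology), which is the kind of trade-off the paper's optimization resolves.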
... Network-as-a-Service (NaaS) [71] is proposed as a framework to integrate cloud environment with direct and secure access to the network infrastructure. NaaS allows tenants to customize forwarding decisions and network management based on application needs. ...
... Our idea is similar to NaaS [71] in that it optimizes resource allocation by considering network and computing resources as a unified whole. By default, on a multicore operating system, processes are distributed among multiple cores dynamically and randomly. ...
Thesis
Due to the growing trend of "softwarization", virtualization is becoming the dominant technology in data center and cloud environments. Software-Defined Networking (SDN) and Network Function Virtualization (NFV) are different expressions of "network softwarization". The software switch is a particularly suitable and powerful tool for supporting network softwarization, and it is indispensable to the success of network virtualization. Given the challenges and opportunities in network softwarization, this thesis investigates the deployment of software switches in an SDN-enabled network virtualization environment.
... Previous research on this topic has shown the significance of NaaS in business [5] and the importance of NaaS in the cloud [1]. Other research has addressed related technologies such as radio access network as a service [6] and virtual private network as a service [7]. ...
... Developing ordinary applications for NaaS is complex, since the full complexity of the data center network topology is exposed [1]. This layer is hard for typical developers to reason about, because they do not need to deal with it daily. ...
Article
Full-text available
This paper introduces Network-as-a-Service (NaaS) in the cloud and the impacts that this technology has had, and will have, on the business world and other sectors. Although there has been a lot of research on this new technology, there remains a lack of research on the influencing factors of NaaS within cloud computing and on the influence this technology has on tenants. In this research, a SWOT analysis is used to develop a SWOT matrix for NaaS in the cloud, and the categories and parameters of the SWOT analysis are determined and discussed. To make this research as complete as possible, the literature sources and topics are drawn from different areas, such as the enterprise world, purely technical development, and private blogs. This paper is useful for tenants that want an overview of NaaS in the cloud and whether it is worth implementing, and it serves as a basis for future research.
... The world's SaaS market was valued at $134.44 billion in 2018 and is expected to grow to as much as $220.21 billion by 2022. Similarly, NaaS providers have network infrastructure that is virtually offered to third parties in the form of bandwidth capacities using an on-demand provisioning model [14]. NaaS enables IaaS companies to use their network with high dynamism, scalability and flexibility, adapting to SaaS requirements as they emerge. ...
... Moreover, in the future, we will revisit the aforementioned problem for further investigation (as a co-operative game), together with a mathematical proof of the existence of a Nash equilibrium [21]. The assumption that the network is limited by the speed of the network interface at each device is tolerable, albeit bottlenecks can often exist elsewhere [14], [15]. More specifically, for NaaS providers the topology of the network may be very important as well, as building virtual networks and virtual network functions (VNFs) and running the SDN (software-defined network) controls are complex tasks. ...
Article
The Internet of Things (IoT) is producing an extraordinary volume of data daily, and the data may become useless on its way to the cloud due to long distances. Fog/edge computing is a new model for analysing and acting on time-sensitive data adjacent to where it is produced. Further, cloud services provided by large companies such as Google can also be localised to improve response time and service agility. This is accomplished by deploying small-scale datacentres in various locations in proximity to users, where needed, connected to a centralised cloud, which establishes multi-access edge computing (MEC). The MEC setup involves three parties, i.e., service providers (IaaS), application providers (SaaS), and network providers (NaaS), which might have different goals, making resource management difficult. Unlike existing literature, we consider resource management with respect to all parties, and suggest game-theoretic resource management techniques to minimise infrastructure energy consumption and costs while ensuring applications' performance. Our empirical evaluation, using Google's workload traces, suggests that our approach could reduce energy consumption by up to 11.95 percent and user costs by ~17.86 percent with negligible loss in performance. Moreover, IaaS can reduce energy bills by up to 20.27 percent, and NaaS can increase their cost savings by up to 18.52 percent, compared to other methods.
... The public cloud provides tenants with on-demand provisioning of computing and storage resources, which considerably reduces capital and operating expenses [4]. Thus, many tenants migrate their networks into the public cloud, and it is critical to protect these networks from attack [18]. ...
... The user policy is that HTTP packets from A to B go through the PFW (No. 1). ...
Conference Paper
In the public cloud, the software security functions that multiple tenants deploy in their virtual networks have limited performance. SmartNICs overcome these limitations by implementing these security functions with hardware acceleration. However, the shared SmartNIC resources are not open to external users, for security reasons. Since the security requirements of tenants are diverse, it is tedious for network operators to develop these functions from scratch with low-level APIs. This paper presents UniSec, a unified programming framework for fast development of security functions while improving performance with SmartNIC acceleration. UniSec provides modular abstraction for a single function and module sharing among multiple security functions. With the well-defined APIs of UniSec, developers only need to focus on the core logic instead of complex underlying operations, including resource management, matching algorithms, etc. Experimental results show that with UniSec the code is reduced by 65% on average for each security function. UniSec also improves processing performance by up to 76% compared with a software-only implementation.
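The module-sharing idea behind UniSec can be sketched in software terms. This is an illustrative analogy, not UniSec's API: the module names and packet format below are invented, and the point is only that two security functions reuse one parsing module while supplying different core logic.

```python
# Illustrative sketch (module names and packet format are assumptions):
# security functions built from a shared, reusable parsing module, each
# supplying only its core logic.

def parse(pkt: str) -> dict:
    """Shared parsing module: 'proto|src|dst' -> header fields."""
    proto, src, dst = pkt.split("|")
    return {"proto": proto, "src": src, "dst": dst}

def make_firewall(blocked_srcs):
    """Core logic only: drop packets from blocked sources."""
    return lambda pkt: "DROP" if parse(pkt)["src"] in blocked_srcs else "PASS"

def make_ids(signatures):
    """Reuses the same parser; alerts on matching protocols."""
    return lambda pkt: "ALERT" if parse(pkt)["proto"] in signatures else "OK"

fw = make_firewall({"10.0.0.9"})
ids = make_ids({"telnet"})
print(fw("tcp|10.0.0.9|10.0.0.1"))      # DROP
print(ids("telnet|10.0.0.5|10.0.0.1"))  # ALERT
```

On a SmartNIC, the shared module would correspond to a hardware block instantiated once and wired into several function pipelines, which is where the code-size and resource savings come from.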
... SDN has achieved the separation of the data plane and control plane, and the SDN hypervisor provides essential virtualization for logical separation from physical resources, as in FlowVisor [1] and OpenVirteX [2], which can effectively control network flows to divide network resources into slices. Therefore, VN and SDN can be effectively combined into a new method that is beneficial for implementing different network types and delivering NaaS [3] to satisfy the requirements of multiple tenants by sharing physical resources. Mapping a VN onto a cloud datacenter network usually includes virtual controller (vcontroller) mapping, virtual switch (vswitch) mapping and virtual link mapping, which is similar to traditional virtual network embedding and also NP-hard [4]-[7]. ...
... The major contributions of this paper are summarized as follows: (1) we introduce a topology-aware survivable virtual network embedding approach that shares the cloud datacenter network, which enhances the survivability of the virtual network; (2) we apply an SVNE strategy with network delay and disjoint paths between vcontroller and vswitch to minimize the expected percentage of control path loss through the shared physical SDN, which significantly enhances survivability; (3) an optimal controller location selection factor is used to improve the survivability of the virtual network with little impact on acceptance ratio and revenue/cost ratio, because network centrality is used to optimize the embedding; (4) extensive experiments have been conducted, and the proposed algorithm was compared with other similar algorithms. ...
Article
Full-text available
Software-defined networking (SDN) has emerged as a promising approach to enable network innovation, providing network virtualization through a hypervisor plane so that multiple virtual networks can share the same cloud datacenter network. However, this attractive approach may introduce new problems that make the network more susceptible to the failure of network components, because of the separated control and forwarding planes. The centralized control and the virtual networks sharing the same physical network become fragile and prone to failure if the topology of the virtual network and the control path are not properly designed. Thus, how to map a virtual network onto the physical datacenter network in virtualized SDN while guaranteeing survivability against the failure of physical components is extremely important, and the many factors that influence the survivability of the virtual network should be fully considered. In this paper, combining VN with SDN, a topology-aware survivable virtual network embedding approach is proposed to improve the survivability of the virtual network through an enhanced virtual controller embedding strategy that optimizes the placement of the virtual network without using any backup resources. The strategy explicitly takes account of network delay and the number of disjoint paths between virtual controller and virtual switch to minimize the expected percentage of control path loss with a survivable factor. Extensive experimental evaluations have been conducted, and the results verify that the proposed technique improves survivability and network delay while keeping the other metrics within reasonable bounds. © 2018 The Institute of Electronics, Information and Communication Engineers.
... Despite the enormous potential of SDN and NFV, directly deploying a network measurement service in third-party public cloud infrastructures [19], [20] incurs privacy and security issues [17], [21]-[24]. To provide different application-level metrics of interest, the measurement service hosts accurate and timely statistics on the global flows. ...
Article
Full-text available
Network measurements are the foundation for network applications. The metrics generated by those measurements help applications improve the performance of the monitored network and harden its security. As severe network attacks using information leaked from a public cloud exist, deploying network measurement services directly in a third-party public cloud infrastructure raises privacy and security concerns. Recent studies, most notably OblivSketch, demonstrated the feasibility of alleviating those concerns by using trusted hardware and Oblivious RAM (ORAM). However, their performance is not good enough and they have certain limitations, so they are not suitable for broad deployment. In this paper, we propose FO-Sketch, a more efficient and general network measurement service that meets the most stringent security requirements, especially for a large-scale network with heavy traffic volume and burst traffic. A mergeable sketch updates the local flow statistics in each local switch; FO-Sketch then merges these sketches obliviously (in an Intel SGX-created enclave) to form a global "one big sketch" in the cloud. With the help of Oblivious Shuffle, Divide and Conquer, and SIMD speedup, we optimize all of the critical routines in FO-Sketch to make it 17.3x faster than a trivial oblivious solution. While keeping the same level of accuracy and packet-processing throughput as non-oblivious Elastic Sketch, FO-Sketch needs only ~4.5 MB of enclave memory in total to record metrics and for PORAM to store the global sketch in the cloud. Extensive experiments demonstrate that, for the recommended setting, it takes only ~0.6 s in total to rebuild this data during each measurement interval.
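The "mergeable sketch" property the abstract relies on is easy to demonstrate with a count-min sketch: adding two sketches cell by cell yields the sketch of the combined traffic, which is what allows per-switch local sketches to be collapsed into one global sketch. The sizes and hash construction below are toy choices, not FO-Sketch's parameters.

```python
# Count-min sketch merge demo (toy width/depth; hashing via sha256 is an
# implementation convenience, not taken from the paper).

import hashlib

W, D = 64, 3  # sketch width and depth

def h(i: int, key: str) -> int:
    return int(hashlib.sha256(f"{i}:{key}".encode()).hexdigest(), 16) % W

def update(sk, key, count=1):
    for i in range(D):
        sk[i][h(i, key)] += count

def query(sk, key):
    return min(sk[i][h(i, key)] for i in range(D))  # never undercounts

def merge(a, b):
    """Cell-wise addition: sketch(A) + sketch(B) == sketch(A union B)."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

s1 = [[0] * W for _ in range(D)]   # local sketch at switch 1
s2 = [[0] * W for _ in range(D)]   # local sketch at switch 2
update(s1, "flowA", 5)
update(s2, "flowA", 7)
print(query(merge(s1, s2), "flowA"))  # 12
```

FO-Sketch's contribution is doing this merge *obliviously* inside an enclave so that memory access patterns leak nothing about the flows; the arithmetic itself is exactly this cell-wise addition.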
... Paolo Costa et al. [10] proposed Network-as-a-Service (NaaS), a framework that integrates current cloud computing offerings with direct, yet secure, tenant access to the network infrastructure. Using NaaS, tenants can easily deploy custom routing and multicast protocols. ...
Article
A data center (DC) refers to any large dedicated cluster of computers that is owned and operated by an organization. Data centers of various sizes are being built and leased today for a diverse set of purposes. On the one hand, large universities and private enterprises are gradually consolidating their IT services within on-site data centers comprising hundreds to thousands of servers. On the other hand, large online service providers such as Google, Microsoft, and Amazon are rapidly constructing geographically diverse cloud data centers, often with more than 10K servers, to offer a variety of cloud-based services such as e-mail, Web serving, gaming, storage, and instant messaging. Although there is great interest in designing improved networks for data centers, very little is known about the network-level traffic characteristics of current data centers. In this paper, we focus on a study of network traffic in data centers and define an anomaly detection system for a secure cloud computing environment.
... paths between entities. With such benefits, a tenant can build a custom VN topology of network switches and monitor their performance bottlenecks [11]. Additionally, within its VN, the tenant can process network connections with priorities for the purpose of quality-of-experience enhancements [8]. ...
... Network Function Virtualisation (NFV) decouples network functions from hardware appliances and pushes forward the deployment of software network functions in the cloud [6], [23], [66]. Enterprises can subscribe to versatile network functions from cloud service providers with high scalability and reduced cost. ...
... Application offloading is the simplest method, in which an entire task or application is assigned to a cloudlet for processing [77]. In the component offloading technique, a part or thread of an application is offloaded to a cloudlet for processing [78]. ...
Article
Full-text available
A cloudlet is an emerging computing paradigm that is designed to meet the requirements and expectations of the Internet of things (IoT) and tackle the conventional limitations of a cloud (e.g., high latency). The idea is to bring computing resources (i.e., storage and processing) to the edge of a network. This article presents a taxonomy of cloudlet applications, outlines cloudlet utilities, and describes recent advances, challenges, and future research directions. Based on the literature, a unique taxonomy of cloudlet applications is designed. Moreover, a cloudlet computation offloading application for augmenting resource-constrained IoT devices, handling compute-intensive tasks, and minimizing the energy consumption of related devices is explored. This study also highlights the viability of cloudlets to support smart systems and applications, such as augmented reality, virtual reality, and applications that require high-quality service. Finally, the role of cloudlets in emergency situations, hostile conditions, and in the technological integration of future applications and services is elaborated in detail.
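The offloading decision discussed in this survey can be reduced to a simple energy comparison. This is a deliberately minimal sketch under stated assumptions: the energy model (energy per CPU cycle vs. energy per transmitted bit) and all parameter values are illustrative, ignoring latency, cloudlet load, and radio conditions.

```python
# Toy offloading decision (energy model and parameters are assumptions):
# offload a task to a cloudlet when transmitting its input costs less energy
# than computing it locally on the device.

def should_offload(cycles: float, input_bits: float,
                   e_per_cycle: float, e_per_bit: float) -> bool:
    local_energy = cycles * e_per_cycle   # compute on the device
    tx_energy = input_bits * e_per_bit    # ship the input to the cloudlet
    return tx_energy < local_energy

# Heavy compute, small input: offloading pays off.
print(should_offload(5e9, 1e6, e_per_cycle=1e-9, e_per_bit=1e-7))
# Light compute, huge input: compute locally.
print(should_offload(1e6, 1e9, e_per_cycle=1e-9, e_per_bit=1e-7))
```

Real schedulers extend this comparison with deadlines and cloudlet queueing, but the compute-to-data ratio remains the dominant factor in whether offloading saves energy.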
... Other examples of the "as-a-service" collection include backend-as-a-service (BaaS), storage-as-a-service (STaaS), cooperation-as-a-service (CaaS), traffic-information-as-a-service (TIaaS), vehicle-witnesses-as-a-service (VWaaS), mobile-backend-as-a-service (MBaaS) [50], database-as-a-service (DBaaS) (i.e., Relational Cloud) [51], network-as-a-service (NaaS) [52], and function-as-a-service (FaaS) [53]. ...
Article
Vehicular network environments can be structured in various ways. Here, we highlight vehicle networks' evolution from vehicular ad-hoc networks (VANETs) to the internet of vehicles (IoV), listing their benefits and limitations. We also highlight the reasons for adopting wireless technologies, in particular IEEE 802.11p and 5G vehicle-to-everything, as well as the use of paradigms able to store and analyze vast amounts of data to produce intelligence, and their applications in vehicular environments. We also correlate the use of each of these paradigms with the desire to meet existing intelligent transportation systems' requirements. Each paradigm is presented from a historical and logical standpoint. In particular, vehicular fog computing improves on the deficiencies of vehicular cloud computing, so the two are not mutually exclusive from the application point of view. We also emphasize some security issues linked to the characteristics of these paradigms and vehicular networks, showing that they complement each other and share problems and limitations. As these networks still have many opportunities to grow in both concept and application, we finally discuss concepts and technologies that we believe are beneficial. Throughout this work, we emphasize the crucial role of these concepts for the well-being of humanity.
... Aside from the aforementioned cloud services (IaaS, PaaS, and SaaS), Network as a Service (NaaS) and Storage as a Service (StaaS) are examples of the latest cloud services. NaaS aims to outsource networking to cloud service providers in order to limit the cost of data communications for cloud consumers, as well as to improve network flexibility (Costa, Migliavacca, Pietzuch, & Wolf, 2012). NaaS allows cloud consumers to use network connectivity services and/or inter-cloud network connectivity services. ...
Chapter
Full-text available
Web services are playing a pivotal role in business, management, governance, and society with the dramatic development of the Internet and the Web. However, many fundamental issues are still ignored to some extent. For example, what is the unified perspective to the state-of-the-art of Web services? What is the foundation of Demand-Driven Web Services (DDWS)? This chapter addresses these fundamental issues by examining the state-of-the-art of Web services and proposing a theoretical and technological foundation for demand-driven Web services with applications. This chapter also presents an extended Service-Oriented Architecture (SOA), eSMACS SOA, and examines main players in this architecture. This chapter then classifies DDWS as government DDWS, organizational DDWS, enterprise DDWS, customer DDWS, and citizen DDWS, and looks at the corresponding Web services. Finally, this chapter examines the theoretical, technical foundations for DDWS with applications. The proposed approaches will facilitate research and development of Web services, mobile services, cloud services, and social services.
... NaaS from Cisco [12] focuses on network management efficiency, which has a different scope than Libera. Hardware-based network programmability has been proposed in NaaS by Costa et al. [13]. It installs field-programmable gate array (FPGA)-based specialized hardware (called a NaaS box) in switches and permits direct control by tenants. ...
Article
Current network virtualization allows tenants to have their own virtual networks. However, new demands to "program" virtual networks at a finer granularity have arisen as tenants want the ability to provision and control switches and links in their virtual networks. This study proposes a new concept called the p-NIaaS model. The p-NIaaS model enables tenants to program their own packet processing logic and monitor network status from any virtual network infrastructure, which is not possible with current network virtualization. This article presents the Libera network hypervisor that implements the p-NIaaS model. Libera overcomes the shortcomings of existing network hypervisors such as scalability, VM migration support, and flexibility. The evaluation shows that the Libera hypervisor is highly scalable and effectively supports VM migration. We also present the overheads of Libera. Libera incurs up to 11 percent overhead in comparison with a non-virtualized network, which we believe is promising in the first prototype of the p-NIaaS model.
... First, datacenter operators need to implement NFs to enforce tenant isolation while guaranteeing Service Level Agreements (SLAs) [1,2]. Second, with the trend of network as a service (NaaS) in the cloud [3][4][5], tenants (especially enterprises) have moved line-of-business applications to the cloud [4]. For instance, Walmart has focused on migrating its thousands of internal business applications to Microsoft Azure to decrease operational costs associated with legacy architecture [6]. ...
Article
Full-text available
In the public cloud, FPGA-based SmartNICs are widely deployed to accelerate network functions (NFs) for datacenter operators. We argue that, with the trend of network as a service (NaaS) in the cloud, it is also meaningful to accelerate tenant NFs to meet performance requirements. However, in pursuit of high performance, existing work such as AccelNet is carefully designed to accelerate specific NFs for datacenter providers, which sacrifices the flexibility of rapidly deploying new NFs. For most tenants with limited hardware design ability, it is time-consuming to develop NFs from scratch due to the lack of a rapidly reconfigurable framework. In this paper, we present a reconfigurable network processing pipeline, DrawerPipe, which abstracts packet processing into multiple "drawers" connected by the same interface. NF developers can easily share existing modules with other NFs and simply load core application logic into the appropriate "drawer" to implement new NFs. Furthermore, we propose a programmable module indexing mechanism, PMI, which can connect "drawers" in any logical order, to perform distinct NFs for different tenants or flows. Finally, we implemented several highly reusable modules for low-level packet processing, and extended four example NFs (firewall, stateful firewall, load balancer, IDS) based on DrawerPipe. Our evaluation shows that DrawerPipe can easily offload customized packet processing to FPGA with high performance, up to 100 Mpps, and ultra-low latency (<10 µs). Moreover, DrawerPipe enables modular development of NFs, which is suitable for rapid deployment. Compared with individual NF development, DrawerPipe reduces the lines of code (LoC) of the four NFs above by 68%.
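The "drawers connected by the same interface" idea above can be sketched in software. This is an analogy only: the module names, packet representation, and per-tenant index table are assumptions standing in for the FPGA pipeline and PMI, and the point is that a uniform interface plus an index table lets different tenants run different module chains on one pipeline.

```python
# Sketch of the "drawer" abstraction (all names are assumptions): every module
# has the same signature (packet in, packet-or-None out), and a per-tenant
# index table -- playing the role of PMI -- picks which drawers run, in order.

def firewall(pkt):
    """Drop packets marked as blocked."""
    return None if pkt.get("blocked") else pkt

def counter(pkt):
    """Count packets that reach this drawer."""
    pkt["hits"] = pkt.get("hits", 0) + 1
    return pkt

DRAWERS = {"fw": firewall, "ctr": counter}
TENANT_INDEX = {"tenantA": ["fw", "ctr"],   # tenantA: firewall then counter
                "tenantB": ["ctr"]}         # tenantB: counter only

def process(tenant, pkt):
    for name in TENANT_INDEX[tenant]:
        pkt = DRAWERS[name](pkt)
        if pkt is None:        # a drawer dropped the packet
            return None
    return pkt

print(process("tenantA", {"blocked": True}))   # None: firewall drops it
print(process("tenantB", {"blocked": True}))   # passes: tenantB has no firewall
```

Adding a new NF then amounts to writing one new drawer and one new index entry, which mirrors the paper's claim about rapid deployment and code reuse.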
... This on-demand network connectivity service helps in managing the network infrastructure in an optimal way. These services could grant the tenants an access to the network elements and manage the network in a flexible way according to the customers' needs and their payments [14], [15]. Virtual Private Network (VPN) with specific network requirements is an example for the NaaS. ...
... Network as a service (NaaS) [13] is the provision of network services by third parties to customers that do not want to build their own networking infrastructure. These services, which can be contracted over a period of time and paid for on demand, include configuring and operating routers and protocols, wide area networks (WANs), firewalls, software-defined networks (SDN), bandwidth on demand (BoD), extended virtual private networks (VPNs), and so on. ...
Chapter
Full-text available
Learning Outcomes: After reading this chapter, students will be able to: • understand the usage of services in the cloud • understand the importance of Everything-as-a-Service • identify the different types of as-a-service offerings in the cloud • understand the different protocols used in cloud communication mechanisms • identify, understand and classify different APIs and their working mechanisms across the various as-a-service offerings in the cloud
... Table 1 gives a description of this model (DaaS). Table 2 shows a proposed network service description model based on the IaaS description model mentioned in the paper by Saouli et al. (2015), which in turn was collected from the literature from Duan (2011), Zheng et al. (2016), Costa et al. (2012) and Dohare and Lobiyal (2017). Thus, Table 2 lists the network characteristics. ...
Article
Nowadays, cloud computing is regarded as a new paradigm in the field of information technology. Since the emergence of this paradigm, most corporations have started using it by deploying or consuming its services. Service deployment in the cloud constitutes its core function, and ensuring good deployment requires combining a set of services. Finding a limited number of candidate services is difficult, however, because of the absence of a unified service description. In this paper, we propose a solution to the deployment problem by treating it as a service composition that takes quality of service into account. Since many atomic services are available, the paper transforms this composition problem into an optimisation problem. Moreover, to reduce the number of service candidates, we propose a new service description language to assist the discovery process. An implementation of this model is provided in order to evaluate our system, and the obtained results demonstrate its effectiveness.
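The QoS-aware composition-as-optimisation idea can be illustrated with a toy brute-force search; the task names, candidate services, and weights below are hypothetical, and a real system would use heuristics rather than exhaustive enumeration.

```python
from itertools import product

# Illustrative sketch: each abstract task has several candidate services,
# each with QoS attributes. The composition is chosen by scoring every
# combination; all names and numbers are invented for this example.
CANDIDATES = {
    "storage": [{"name": "s1", "cost": 4, "latency": 20},
                {"name": "s2", "cost": 6, "latency": 10}],
    "compute": [{"name": "c1", "cost": 8, "latency": 15},
                {"name": "c2", "cost": 5, "latency": 30}],
}

def score(combo, w_cost=1.0, w_lat=0.5):
    # Lower is better: weighted sum of cost and latency over the composition.
    return sum(w_cost * s["cost"] + w_lat * s["latency"] for s in combo)

def best_composition():
    # One candidate per task; every cross-product combination is a composition.
    return min(product(*CANDIDATES.values()), key=score)

best = best_composition()
```

Reducing the candidate lists beforehand (as the proposed description language aims to do) shrinks the cross product, which is what makes such a search tractable.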
... This paper focuses on the design, implementation and performance of a single FLICK middlebox. However, the wider vision is of a number of such boxes within a data centre [10]. ...
... Several vendors offer cloud services, including Amazon EC2, Microsoft Azure and Google Cloud. Sefraoui et al. [34] conducted a comparative study of different IaaS providers and showed how these cloud service providers achieve better resource utilization than legacy technologies. Network-as-a-Service (NaaS) [4] is a relatively new cloud service model that helps users create network topologies, use custom routing protocols, and emulate network devices such as switches and routers. ...
Article
Full-text available
We analyzed the performance of a multi-node OpenStack cloud amid different types of controlled and self-induced network errors between controller and compute-nodes on the control plane network. These errors included limited bandwidth, delays and packet losses of varying severity. This study compares the effects of network errors on spawning times of batches of instances created using three different virtualization technologies supported by OpenStack, i.e., Docker containers, Linux containers and KVM virtual machines. We identified minimum / maximum thresholds for bandwidth, delay and packet-loss rates below / beyond which instances fail to launch. To the authors’ best knowledge, this is the first comparative measurement study of its kind on OpenStack. The results will be of particular interest to designers and administrators of distributed OpenStack deployments.
... In addition, network virtualization offers on-demand virtual networks customized for particular service and user requirements [13]. Might the current trends of Network-as-a-Service (NaaS) [14] and Network Function Virtualization (NFV) [15] change how the IPv6 transition is managed? However, network virtualization and IPv6 have not been addressed together within a target architecture. ...
... Network as a Service: NaaS provides cloud users with the ability to use transport connectivity services and/or inter-cloud network connectivity services such as Virtual Private Networks (VPNs), mobile network virtualisation, bandwidth on demand, etc. (Costa et al., 2012). ...
Article
Full-text available
Cloud computing is one of today's trendiest terms. Cloud providers aim to satisfy clients' requirements for computing resources such as services, applications, networks, storage and servers, offering the possibility of leasing these resources rather than buying them. Many popular companies, such as Amazon, Google and Microsoft, have enhanced their services with cloud computing technology to provide cloud environments for their customers. Although a cloud-based system has many advantages, some issues must be handled before organisations and individuals can trust deploying their systems in cloud computing. Security, privacy, power efficiency, compliance and integrity are among those important issues. In this paper, we focus on cloud computing along with its deployment and delivery models. A comparison of cloud computing with other computing models is presented, in addition to a survey of the major security issues, challenges and risks that currently pose threats to the cloud industry. Moreover, we discuss cloud security requirements and their importance for deployment and delivery models. Finally, we present future trends and research openings in cloud computing security.
... In addition to these three main models, many others have been proposed such as Hardware as a Service [6], Communication as a Service [22], Network as a Service [23], Data as a Service [22], Workplace as a Service [24], Security as a service [25], Business Process as a service [26], Identity and Policy Management as a Service [27], STorage as a Service [28], Cluster as a Service [29], etc. ...
Article
Full-text available
Cloud computing is a business-model revolution more than a technological one. It capitalizes on various proven technologies and has reshaped the use of computers, replacing local use with a centralized model in which shared resources are stored and managed by a third party in a way transparent to end-users. With this new use came new needs, one of which is the need to search through cloud services and select the ones that meet certain requirements. To address this need, we developed, in previous work, the Cloud Service Research and Selection System (CSRSS), which allows cloud users to search through the cloud services in a database and find the ones that match their requirements; it is based on the Skyline and ELECTRE IS methods. In this paper, we improve the system by introducing 7 new dimensions related to QoS constraints. Our work's main contribution is an agent that uses both the Skyline and an outranking method, called ELECTREIsSkyline, to determine which cloud services better meet users' requirements while respecting QoS properties. We programmed and tested this method for a total of 10 dimensions and 50,000 cloud services. The first results are very promising and show the effectiveness of our approach.
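The Skyline step that CSRSS builds on can be sketched in a few lines; the services and QoS dimensions below are invented for illustration. A service survives the skyline if no other service is at least as good in every dimension and strictly better in at least one.

```python
# Minimal skyline (Pareto-dominance) filter over QoS dimensions.
# Here lower is better in every dimension; values are illustrative.
def dominates(a, b):
    # a dominates b: no worse everywhere, strictly better somewhere.
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def skyline(services):
    return [s for s in services
            if not any(dominates(t["qos"], s["qos"]) for t in services)]

services = [
    {"name": "A", "qos": (3, 100)},   # (cost, response time in ms)
    {"name": "B", "qos": (5, 80)},
    {"name": "C", "qos": (6, 120)},   # dominated by B in both dimensions
]
best = skyline(services)
```

The skyline prunes dominated services before any outranking method (such as ELECTRE IS) ranks the survivors, which is why combining the two scales to large service databases.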
... Moreover, using SDN, traffic control and management functions, such as routing, can be moved out of the network nodes and placed on centralized controller software. Coupling the two approaches, a centralized entity, called the Orchestrator, has a complete view of the network, allowing it to manage the control plane and deploy Virtual Network Functions (VNFs) [12] [13]. This is an enabling factor for the rapid evolution of dynamic service-chain provisioning. ...
Conference Paper
Full-text available
In the last few years, Software Defined Networks (SDN) and Network Functions Virtualization (NFV) have been introduced into telecommunications networks as a new way to design, deploy and manage networking services. Working together, they consolidate and deliver networking components using standard IT virtualization technologies, thereby making telco infrastructures more flexible and adaptive to the needs of both end users and service providers. In this context, this paper presents a prototype of a softwarized live-streaming platform that allows carriers to simplify function/service management and deployment. The proposed framework enables small, medium and non-traditional content providers to share events with their followers without adopting a dedicated and expensive data-delivery infrastructure.
... It enables the design of efficient in-network services. Network as a Service is not a new concept, but rather a new business model for delivering efficient in-network services [5]. ...
Article
Full-text available
Cloud Computing (CC) is a technology that brings real innovation to today's business world, and more and more companies around the world are incorporating it into their businesses. From a technical as well as an organizational point of view, transferring enterprise IT to the cloud is a complex task. Various factors have to be taken into consideration in order to make the right choice when moving IT services to the cloud. The goal of this paper is to identify and discuss in detail all factors that influence an organization's decision to adopt the cloud. The general model for cloud adoption, introduced in Pantelic et al. [13], consists of the key factors driving the organizational benefits of moving to the cloud; its purpose is to support decision makers in evaluating the benefits, risks and costs of using cloud computing. In this paper the general model is extended with two new aggregation methods for harmonizing rankings of alternatives in a group decision process. We present the results of the two new methods, which take the rankings produced in previous research [13] as inputs and aggregate them into a group preference. The idea is to find a consensus ranking that minimizes disagreement among the previous methods' results. There were no strong differences between the results of the performed methods: the Software-as-a-Service and Storage-as-a-Service models dominated according not just to the arithmetic-mean method but also to the geometric-mean method.
Article
This paper presents V-Sight, a network monitoring framework for programmable virtual networks in clouds. Network virtualization based on software-defined networking (SDN-NV) in clouds makes it possible to realize programmable virtual networks; consequently, this technology offers many benefits to cloud services for tenants. However, to the best of our knowledge, network monitoring, which is a prerequisite for managing and optimizing virtual networks, has not been investigated in the context of SDN-NV systems. As the first framework for network monitoring in SDN-NV, we identify three challenges: non-isolated and inaccurate statistics, high monitoring delay, and excessive control channel consumption for gathering statistics. To address these challenges, V-Sight introduces three key mechanisms: 1) statistics virtualization for isolated statistics, 2) transmission disaggregation for reduced transmission delay, and 3) pCollector aggregation for efficient control channel consumption. The evaluation results reveal that V-Sight successfully provides accurate and isolated statistics while reducing monitoring delay and control channel consumption by orders of magnitude. We also show that V-Sight can achieve a data plane throughput close to that of non-virtualized SDN.
Article
Full-text available
With the rapid advance of mobile computing technology and wireless networking, there is a significant increase of mobile subscriptions. This drives a strong demand for mobile cloud applications and services for mobile device users. This brings out a great business and research opportunity in mobile cloud computing (MCC). This paper first discusses the market trend and related business driving forces and opportunities. Then it presents an overview of MCC in terms of its concepts, distinct features, research scope and motivations, as well as advantages and benefits. Moreover, it discusses its opportunities, issues and challenges. Furthermore, the paper highlights a research roadmap for MCC.
Article
Full-text available
In heterogeneous wireless networks (HWNs), service providers typically employ multiple radio access technologies to satisfy quality-of-service (QoS) requirements and improve system performance. However, many challenges remain when using modern cellular radio access technologies (e.g., wireless local area network, long-term evolution, and fifth-generation), such as inefficient allocation and management of wireless network resources. This problem is caused by the sharing of available resources among several users, the random distribution of wireless channels, the scarcity of wireless spectral resources, and the dynamic behavior of generated traffic. Resource allocation schemes have previously been proposed for HWNs, but they focus on resource allocation and management without considering traffic class. Hence, these existing schemes significantly increase end-to-end delay and packet loss, resulting in poor user QoS and network throughput. This study addresses the problem by designing an enhanced resource allocation (ERA) algorithm that balances efficient allocation of the available resources against QoS requirements. A computer simulation evaluated the proposed ERA algorithm against a joint power bandwidth allocation algorithm and a dynamic bandwidth allocation algorithm. On average, the proposed ERA algorithm demonstrates 98.2% bandwidth allocation, 0.75 s end-to-end delay, 1.1% packet loss, and 98.9% improved throughput at a time interval of 100 s.
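Although the cited ERA algorithm is not specified here, the general idea of class-aware allocation can be sketched with a strict-priority toy model; the class names, priorities and capacity below are assumptions for illustration, not the ERA scheme itself.

```python
# Toy class-aware bandwidth allocation: classes are served in priority
# order (lower priority number = more urgent), and each class receives
# at most what remains of the link capacity.
def allocate(capacity, demands):
    # demands: list of (class_name, priority, requested_bw)
    alloc = {}
    for name, _, req in sorted(demands, key=lambda d: d[1]):
        grant = min(req, capacity)   # never grant more than is left
        alloc[name] = grant
        capacity -= grant
    return alloc

# Voice (priority 0) is fully served, then video, and best-effort
# traffic absorbs whatever shortfall remains.
result = allocate(100, [("video", 1, 60),
                        ("voice", 0, 20),
                        ("best_effort", 2, 50)])
```

Schemes that ignore traffic class would instead split the shortfall across all classes, which is precisely how delay-sensitive traffic ends up violating its QoS targets.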
Chapter
Cloud computing is the newest paradigm in computing, one that has turned resource provisioning and payment upside down. It improves resource utilization through virtualization, enabling both customers and service providers to reap the benefits; virtualization is thus the enabling technology that made this revolution possible. Through virtualization, it is possible to host multiple independent systems on a single piece of hardware without their interfering with each other. Server virtualization techniques can be grouped based on how the underlying hardware and operating systems are presented to users. In this chapter, the author takes an in-depth look at how different virtualization techniques have been implemented, along with their security and quality-of-service issues.
Conference Paper
Programmable network hardware can run services traditionally deployed on servers, resulting in orders-of-magnitude improvements in performance. Yet, despite these performance improvements, network operators remain skeptical of in-network computing. The conventional wisdom is that the operational costs from increased power consumption outweigh any performance benefits. Unless in-network computing can justify its costs, it will be disregarded as yet another academic exercise.
Research Proposal
Full-text available
Cloud computing... energy performance and cost aware scheduling for cloud data centers.
Thesis
Nowadays, service composition is one of the major problems in the cloud owing to the exceptional growth in the number of services deployed by providers. Atomic services alone can no longer satisfy all client requirements, and traditional service composition gives clients a composite service without non-functional parameters; satisfying both functional and non-functional parameters requires a richer composition. Since web services cannot communicate with each other or participate dynamically in composition, we use a dynamic entity represented by an agent. This work proposes an agent-based architecture with a new cooperation protocol that offers automatic, adaptable service composition, producing a composite service that maximizes quality of service (QoS). As providers use the cloud to deploy their services, deployment must be handled well so that services operate under the best conditions according to QoS values. Deploying a service in the cloud requires some combination of services, and service composition is the solution to this combination problem; moreover, each provider looks for the deployment that gives its service the best conditions in terms of QoS. The present thesis transforms the service deployment problem into an optimization problem. We also propose a service description model that assists the service discovery process.
Article
Fair bandwidth guarantees have always been a problem in cloud center networks. Tenants are typically selfish, and this selfishness is not well controlled by existing network resource allocation methods, which usually results in link congestion and unfair allocation. Thus, it is critical for cloud providers to balance and at the same time maximize the network resource utilization rate to fairly provide bandwidth to tenants on demand. Inspired by this, this paper proposes a network resource preallocation model that can be applied in cloud centers using software-defined networking. Based on the network model and tenants' bandwidth demands, cloud providers can generate numerous resource preallocation strategies and choose the optimal one. To limit computation time, we further use a genetic algorithm in the optimal-strategy selection procedure. The experimental results show that our method reduces tenants' unsatisfied bandwidth demands by up to 15% compared with the common equal-cost multipath method.
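As a rough illustration of using a genetic algorithm to pick a preallocation strategy, the toy sketch below encodes each tenant demand's path choice as a gene and penalises link overload; the topology, demands and GA parameters are invented and far simpler than the paper's model.

```python
import random

random.seed(0)  # deterministic toy run

LINK_CAP = 10
# Each demand: (bandwidth, [candidate paths, each a list of link ids]).
DEMANDS = [(6, [["l1"], ["l2"]]),
           (6, [["l1"], ["l2"]]),
           (5, [["l1", "l2"], ["l3"]])]

def fitness(genome):
    # genome[i] picks a candidate path for demand i; penalty is the
    # total bandwidth exceeding link capacity (0 means no overload).
    load = {}
    for (bw, paths), choice in zip(DEMANDS, genome):
        for link in paths[choice]:
            load[link] = load.get(link, 0) + bw
    return -sum(max(0, l - LINK_CAP) for l in load.values())

def evolve(pop_size=20, gens=30):
    pop = [[random.randrange(len(p)) for _, p in DEMANDS]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]       # elitist selection
        children = []
        for parent in survivors:
            child = parent[:]
            i = random.randrange(len(child))  # point mutation
            child[i] = random.randrange(len(DEMANDS[i][1]))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

The same fitness-over-strategies structure is what makes a GA attractive when the strategy space is too large to enumerate, as the paper observes for real cloud topologies.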
Article
Backing up data centers through a cloud service is a significant operation that protects data from loss, reduces costs and optimizes hardware provisioning. Software-defined networking (SDN) is a growing network paradigm that separates the monolithic control and data planes found in the classical network paradigm, improving and simplifying network management. This research addresses a specific problem in SDNs: achieving Quality of Service (QoS) targets in backup operations between data centers (inter-site), a fundamental activity with special QoS requirements. The aim of this article is to provide a new cost-efficient inter-site backup model for cloud networks using SDN. It helps in choosing a path for the backup in the current network according to the customer's and provider's requirements, while simultaneously taking the QoS parameters into consideration, in order to achieve a cost-efficient solution.
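One simple way to realise such QoS-aware path selection, sketched below under assumed link attributes, is to prune links that cannot meet the required backup bandwidth and then run Dijkstra on monetary cost; this is an illustration, not the cited model.

```python
import heapq

# Illustrative topology: adjacency list of (next hop, cost, bandwidth Mbps).
GRAPH = {
    "siteA": [("r1", 2, 100), ("r2", 1, 40)],
    "r1":    [("siteB", 2, 100)],
    "r2":    [("siteB", 1, 40)],
    "siteB": [],
}

def cheapest_path(src, dst, min_bw):
    # Dijkstra over cost, skipping links below the QoS bandwidth floor.
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c, bw in GRAPH[node]:
            if bw >= min_bw and nxt not in seen:
                heapq.heappush(heap, (cost + c, nxt, path + [nxt]))
    return None  # no path satisfies the bandwidth constraint
```

A large backup (high `min_bw`) is forced onto the expensive high-capacity route, while a small one takes the cheap route; that trade-off is exactly what a cost-efficient QoS model has to arbitrate.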
Article
Full-text available
The cloud computing paradigm entails great opportunities for cost saving, tailored business solutions, state-of-the-art technical capabilities, and anytime worldwide access for organisations over the Internet. Hence, more and more clients shift their applications, business processes, services and data to the cloud. As a consequence, customers hand over control and authority to third-party cloud service providers. These circumstances raise various security concerns, aggravated by multi-tenancy and resource pooling, that hamper cloud adoption. Most academic researchers have examined cloud-computing security concerns with a focus on specific technical issues (e.g. virtualisation), service models or legal issues. Beginning by introducing and defining key terms in cloud environments, especially cloud security, this survey systematically reviews technical and non-technical cloud computing security challenges in recent academic literature. The authors provide a general and consistent overview accompanied by current approaches to defining comprehensive industry standards. Additionally, gaps in previous related works are revealed and future research implications pointed out. This paper fosters the understanding of current security issues in cloud computing and their relations, in part by linking academic and industry perspectives where suitable.
Chapter
Network-as-a-Service (NaaS) is a new and promising cloud service model through which the network infrastructure is offered to cloud customers as a service. Quality of Service (QoS) criteria play a significant role in the Service Level Agreement (SLA) for pricing and evaluation. This chapter describes the NaaS service model and focuses on QoS criteria and issues in NaaS and in networking in general. It also describes Software-Defined Networking (SDN), a promising programmable network design that decouples the control and data planes to make networks easier to manage. Finally, an in-depth study of applying QoS policies in cloud and conventional networks using SDN identifies the main categories for achieving the required QoS level. Their advantages and disadvantages are discussed, which helps in choosing the best model according to multiple factors: the current network hardware, the traffic data type, the budget, etc.
Article
Full-text available
The importance of evaluating the performance of cloud systems has been increasing with the rapidly growing market demand for cloud computing. However, performance testers often have to go through tedious manual operations when interacting with the cloud. We design a cloud performance evaluation framework for both broad cloud support and good workload extensibility, providing an automatic interface to monitor the capability and scalability of Infrastructure-as-a-Service cloud systems. Cloud API modules are implemented for the Amazon EC2 service and OpenStack. The framework achieves flexible control workflows across different workloads and allows user customization of test scenarios. With several built-in workloads and metric aggregation methods, a series of tests is performed on our private clouds to compare performance and scalability from multiple aspects. We also propose a methodology for building a cost-performance model to better understand and analyze the efficiency of different types of cloud systems. Based on the experimental results, the model indicates a polynomial relation between per-instance performance and overall cost.
Article
Full-text available
In recent times, vehicular network research has attracted the attention of both researchers and industry, partly due to its potential applications in efficient traffic management, road safety, entertainment, etc. Resources such as communication, on-board units, storage and computing units, and batteries are generally installed in the vehicles participating in intelligent transportation systems. The need to maximize the utilization of these resources has resulted in interest in cloud-based vehicular networks (CVNs), an area of active research. This paper surveys the CVN literature published between 2010 and 2016. In addition, a taxonomy based on three main CVN categories, namely vehicular cloud computing (VCC), vehicles using the cloud (VuC) and hybrid clouds (HC), is presented; in the taxonomy, we focus on related systems, architectures, applications and services. Although VCC has been extensively discussed in the literature, a comprehensive survey of the other two categories is lacking, which motivates our research. Through an extensive comparison of the common characteristics of cloud computing, mobile cloud computing, VCC, VuC and HC, and an overview of existing architectures, we present a conceptual HC architecture. Finally, we conclude the paper with open issues and challenges.
Article
Cloud computing has emerged in recent years as a promising paradigm that facilitates new service models such as Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). As the number of cloud service providers increases, there is a demand to dynamically provision virtual data centers (VDCs) on top of an infrastructure provider's physical data centers. This research addresses energy- and resource-efficient embedding of virtual data centers inside physical data centers under dynamic resource allocation conditions, in which VDCs continuously join and leave the system. Dynamic VDC embedding is challenging, as it is an NP-hard problem that must meet multiple objectives. In this article, we propose heuristic joint VDC-embedding and server-consolidation approaches as a solution to that problem. Evaluation results show that the joint approach outperforms existing ones in terms of resource and energy efficiency while keeping system complexity acceptable.
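The server-consolidation half of such a joint approach is often approximated with bin-packing heuristics. The sketch below uses first-fit decreasing on a single resource dimension; capacities and demands are illustrative, and real VDC embedding also handles links and multiple resource types.

```python
# First-fit-decreasing consolidation: pack VMs onto as few powered-on
# servers as possible so idle servers can be shut down to save energy.
def consolidate(vm_demands, server_capacity):
    servers = []     # remaining capacity on each powered-on server
    placement = {}   # vm -> server index
    # Place the largest VMs first (the "decreasing" in FFD).
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(servers):
            if free >= demand:            # first server that fits
                servers[i] -= demand
                placement[vm] = i
                break
        else:
            servers.append(server_capacity - demand)  # power on a new server
            placement[vm] = len(servers) - 1
    return placement, len(servers)

placement, used = consolidate({"vm1": 6, "vm2": 5, "vm3": 4, "vm4": 3}, 10)
```

Four VMs totalling 18 units fit on two 10-unit servers here; a naive one-VM-per-server placement would power four, which is the energy gap a joint embedding-consolidation heuristic exploits.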
Conference Paper
Due to the emergence of a hyper-connectivity communication paradigm, the "softwarisation" of the Internet infrastructure and of its network management framework is gaining increasing popularity. Cloud computing supports this evolution, together with the emergence of Software Defined Networking (SDN) and Network Functions Virtualization (NFV). This technological paradigm allows services to be moved closer to users, ensuring lower latency in service fruition, and supports personalization of services through their migration towards edge nodes (in the so-called "fog computing" fashion). In practice, the development of cloud services based on the SDN-NFV paradigm has to cope with the limited availability of full SDN testbeds to researchers and network enthusiasts. This paper presents an emulation framework that simplifies and reduces the cost of developing, testing and deploying network and application functions. The proposed architecture thereby aims to support future Internet personal cloud services in a more scalable and sustainable way. The authors also present a proof-of-concept of the described architecture: a live video-broadcasting service enabling small, medium and non-traditional content providers to share events with a restricted number of interested users without adopting a dedicated and expensive data-delivery infrastructure and/or subscribing to expensive contracts with a telco.
Article
Full-text available
As one of the fundamental infrastructures for cloud computing, data center networks (DCN) have recently been studied extensively. We currently use pure software-based systems, FPGA-based platforms (e.g., NetFPGA), or OpenFlow switches to implement and evaluate various DCN designs, including topology design, control plane and routing, and congestion control. However, software-based approaches suffer from high CPU overhead and processing latency; FPGA-based platforms are difficult to program and incur high cost; and OpenFlow at present focuses on control-plane functions. In this paper, we design ServerSwitch to address these problems. ServerSwitch is motivated by the observation that commodity Ethernet switching chips are becoming programmable and that the PCI-E interface provides high throughput and low latency between the server CPU and the I/O subsystem. ServerSwitch uses a commodity switching chip for various customized packet-forwarding schemes and leverages the server CPU for control-plane and data-plane packet processing, given the low latency and high throughput between the switching chip and the server CPU. We have built ServerSwitch at low cost. Our experiments demonstrate that ServerSwitch is fully programmable and achieves high performance. Specifically, we have implemented various forwarding schemes, including source routing, in hardware. Our in-network caching experiment showed high throughput and flexible data processing. Our QCN (Quantized Congestion Notification) implementation further demonstrated that ServerSwitch can react to network congestion within 23 µs.
Conference Paper
Full-text available
Data centers avoid IP Multicast (IPMC) because of a series of problems with the technology. We introduce Dr. Multicast (MCMD), a system that maps IPMC operations to a combination of point-to-point unicast and traditional IPMC transmissions. MCMD optimizes the use of IPMC addresses within a data center, while simultaneously respecting an administrator-specified acceptable-use policy. We argue that with the resulting range of options, IPMC no longer represents a threat and can therefore be used much more widely.
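The core mapping idea, translating a logical group send into either one hardware IPMC transmission or per-member unicasts under a scarce address budget, can be sketched as follows. The greedy largest-groups-first policy and the one-address budget are assumptions for illustration, not MCMD's actual policy engine.

```python
IPMC_BUDGET = 1  # scarce hardware multicast addresses (illustrative)

class GroupMapper:
    """Maps logical group sends to IPMC or unicast, Dr. Multicast-style."""
    def __init__(self):
        self.groups = {}         # group -> list of members
        self.ipmc_groups = set() # groups granted a hardware IPMC address

    def join(self, group, member):
        self.groups.setdefault(group, []).append(member)
        # Greedy policy: grant the scarce IPMC addresses to the largest
        # groups, where hardware multicast saves the most bandwidth.
        ranked = sorted(self.groups, key=lambda g: -len(self.groups[g]))
        self.ipmc_groups = set(ranked[:IPMC_BUDGET])

    def send(self, group):
        if group in self.ipmc_groups:
            return [("ipmc", group)]                    # one hardware multicast
        return [("unicast", m) for m in self.groups[group]]  # point-to-point fallback
```

Applications keep the IPMC API while the mapper transparently decides the transport, which is what lets the administrator's acceptable-use policy cap hardware multicast state.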
Conference Paper
Full-text available
Dryad is a general-purpose distributed execution engine for coarse-grain data-parallel applications. A Dryad application combines computational "vertices" with communication "channels" to form a dataflow graph. Dryad runs the application by executing the vertices of this graph on a set of available computers, communicating as appropriate through files, TCP pipes, and shared-memory FIFOs. The vertices provided by the application developer are quite simple and are usually written as sequential programs with no thread creation or locking. Concurrency arises from Dryad scheduling vertices to run simultaneously on multiple computers, or on multiple CPU cores within a computer. The application can discover the size and placement of data at run time, and modify the graph as the computation progresses to make efficient use of the available resources. Dryad is designed to scale from powerful multi-core single computers, through small clusters of computers, to data centers with thousands of computers.
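A Dryad-style dataflow can be miniaturised as a DAG of vertex functions connected by channels, executed once all of a vertex's inputs are ready; the graph and vertex functions below are illustrative, and Python's standard `graphlib` (3.9+) stands in for Dryad's scheduler.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def run_graph(vertices, edges, source_inputs):
    # edges: {vertex: [upstream vertices]}; a vertex's output flows along
    # its outgoing channels to every downstream vertex.
    outputs = dict(source_inputs)
    for v in TopologicalSorter(edges).static_order():
        if v in outputs:
            continue  # source vertex, input supplied externally
        ins = [outputs[u] for u in edges.get(v, [])]
        outputs[v] = vertices[v](*ins)
    return outputs

vertices = {
    "map1": None, "map2": None,     # sources: data supplied at run time
    "merge": lambda a, b: a + b,    # combines two upstream channels
    "count": lambda xs: len(xs),
}
edges = {"merge": ["map1", "map2"], "count": ["merge"]}
result = run_graph(vertices, edges, {"map1": [1, 2], "map2": [3]})
```

Each vertex is an ordinary sequential function with no locking, as in Dryad; concurrency would come from running independent vertices of the same topological level in parallel.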
Conference Paper
Full-text available
Facebook recently deployed Facebook Messages, its first ever user-facing application built on the Apache Hadoop platform. Apache HBase is a database-like layer built on Hadoop designed to support billions of messages per day. This paper describes the reasons why Facebook chose Hadoop and HBase over other systems such as Apache Cassandra and Voldemort and discusses the application's requirements for consistency, availability, partition tolerance, data model and scalability. We explore the enhancements made to Hadoop to make it a more effective realtime system, the tradeoffs we made while configuring the system, and how this solution has significant advantages over the sharded MySQL database scheme used in other applications at Facebook and many other web-scale companies. We discuss the motivations behind our design choices, the challenges that we face in day-to-day operations, and future capabilities and improvements still under development. We offer these observations on the deployment as a model for other companies who are contemplating a Hadoop-based solution over traditional sharded RDBMS deployments.
Conference Paper
Full-text available
OpenFlow is a great concept, but its original design imposes excessive overheads. It can simplify network and traffic management in enterprise and data center environments, because it enables flow-level control over Ethernet switching and provides global visibility of the flows in the network. However, such fine-grained control and visibility comes with costs: the switch-implementation costs of involving the switch's control-plane too often and the distributed-system costs of involving the OpenFlow controller too frequently, both on flow setups and especially for statistics-gathering. In this paper, we analyze these overheads, and show that OpenFlow's current design cannot meet the needs of high-performance networks. We design and evaluate DevoFlow, a modification of the OpenFlow model which gently breaks the coupling between control and global visibility, in a way that maintains a useful amount of visibility without imposing unnecessary costs. We evaluate DevoFlow through simulations, and find that it can load-balance data center traffic as well as fine-grained solutions, without as much overhead: DevoFlow uses 10--53 times fewer flow table entries at an average switch, and uses 10--42 times fewer control messages.
Conference Paper
Full-text available
Cluster computing applications like MapReduce and Dryad transfer massive amounts of data between their computation stages. These transfers can have a significant impact on job performance, accounting for more than 50% of job completion times. Despite this impact, there has been relatively little work on optimizing the performance of these data transfers, with networking researchers traditionally focusing on per-flow traffic management. We address this limitation by proposing a global management architecture and a set of algorithms that (1) improve the transfer times of common communication patterns, such as broadcast and shuffle, and (2) allow scheduling policies at the transfer level, such as prioritizing a transfer over other transfers. Using a prototype implementation, we show that our solution improves broadcast completion times by up to 4.5X compared to the status quo in Hadoop. We also show that transfer-level scheduling can reduce the completion time of high-priority transfers by 1.7X.
Conference Paper
Full-text available
Cloud data centers host diverse applications, mixing workloads that require small predictable latency with others requiring large sustained throughput. In this environment, today's state-of-the-art TCP protocol falls short. We present measurements of a 6000 server production cluster and reveal impairments that lead to high application latencies, rooted in TCP's demands on the limited buffer space available in data center switches. For example, bandwidth hungry "background" flows build up queues at the switches, and thus impact the performance of latency sensitive "foreground" traffic. To address these problems, we propose DCTCP, a TCP-like protocol for data center networks. DCTCP leverages Explicit Congestion Notification (ECN) in the network to provide multi-bit feedback to the end hosts. We evaluate DCTCP at 1 and 10Gbps speeds using commodity, shallow buffered switches. We find DCTCP delivers the same or better throughput than TCP, while using 90% less buffer space. Unlike TCP, DCTCP also provides high burst tolerance and low latency for short flows. In handling workloads derived from operational measurements, we found DCTCP enables the applications to handle 10X the current background traffic, without impacting foreground traffic. Further, a 10X increase in foreground traffic does not cause any timeouts, thus largely eliminating incast problems.
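The multi-bit feedback the abstract mentions reduces, at the sender, to a simple control law described in the paper: keep an EWMA of the fraction of ECN-marked packets, and cut the window in proportion to it. A minimal sketch, where the gain g = 1/16 and the per-window accounting are assumptions taken from the paper's description:

```python
def dctcp_update(alpha, cwnd, marked, total, g=1.0 / 16):
    """One window's worth of DCTCP sender state.

    alpha  -- running estimate of the fraction of marked packets
    cwnd   -- congestion window (segments)
    marked -- ECN-marked ACKs seen in this window
    total  -- ACKs seen in this window
    g      -- EWMA gain (the paper suggests 1/16)
    """
    f = marked / total                       # fraction marked this window
    alpha = (1 - g) * alpha + g * f          # EWMA of congestion extent
    if marked:
        cwnd = max(1.0, cwnd * (1 - alpha / 2))  # cut proportionally to alpha
    else:
        cwnd += 1                            # standard additive increase
    return alpha, cwnd
```

Because the cut is proportional to how much congestion was signalled (rather than TCP's fixed halving), queues stay short without sacrificing throughput.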
Conference Paper
Full-text available
Application-independent Redundancy Elimination (RE), or identifying and removing repeated content from network transfers, has been used with great success for improving network performance on enterprise access links. Recently, there is growing interest for supporting RE as a network-wide service. Such a network-wide RE service benefits ISPs by reducing link loads and increasing the effective network capacity to better accommodate the increasing number of bandwidth-intensive applications. Further, a network-wide RE service democratizes the benefits of RE to all end-to-end traffic and improves application performance by increasing throughput and reducing latencies. While the vision of a network-wide RE service is appealing, realizing it in practice is challenging. In particular, extending single vantage-point RE solutions designed for enterprise access links to the network-wide case is inefficient and/or requires modifying routing policies. We present SmartRE, a practical and efficient architecture for network-wide RE. We show that SmartRE can enable more effective utilization of the available resources at network devices, and thus can magnify the overall benefits of network-wide RE. We prototype our algorithms using Click and test our framework extensively using several real and synthetic traces.
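A toy illustration of single vantage-point RE, the baseline the abstract contrasts with: chunks already seen by the downstream cache are replaced with short fingerprints. The fixed chunk size and SHA-1 fingerprints are arbitrary choices for the sketch; SmartRE's contribution, coordinating such caches across an entire network, is omitted here.

```python
import hashlib

class RECache:
    """Toy payload redundancy eliminator for one encoder/decoder pair."""

    CHUNK = 32  # bytes per chunk (illustrative, not from the paper)

    def __init__(self):
        self.seen = {}  # fingerprint -> chunk

    def encode(self, payload: bytes):
        out = []
        for i in range(0, len(payload), self.CHUNK):
            chunk = payload[i:i + self.CHUNK]
            fp = hashlib.sha1(chunk).digest()[:8]
            if fp in self.seen:
                out.append(("ref", fp))      # shrink repeated content
            else:
                self.seen[fp] = chunk
                out.append(("raw", chunk))
        return out

    def decode(self, encoded):
        # A real decoder keeps its own cache, synchronized with the
        # encoder's; this sketch reuses one object for both ends.
        return b"".join(t if k == "raw" else self.seen[t] for k, t in encoded)
```

Repeated transfers of the same content then cost one small fingerprint per chunk instead of the full payload.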
Conference Paper
Full-text available
The latest large-scale data centers offer higher aggregate bandwidth and robustness by creating multiple paths in the core of the network. To utilize this bandwidth requires that different flows take different paths, which poses a challenge. In short, a single-path transport seems ill-suited to such networks. We propose using Multipath TCP as a replacement for TCP in such data centers, as it can effectively and seamlessly use available bandwidth, giving improved throughput and better fairness on many topologies. We investigate what causes these benefits, teasing apart the contribution of each of the mechanisms used by MPTCP. Using MPTCP lets us rethink data center networks, with a different mindset as to the relationship between transport protocols, routing and topology. MPTCP enables topologies that single-path TCP cannot utilize. As a proof-of-concept, we present a dual-homed variant of the FatTree topology. With MPTCP, this outperforms FatTree for a wide range of workloads, but costs the same. In existing data centers, MPTCP is readily deployable, leveraging widely deployed technologies such as ECMP. We have run MPTCP on Amazon EC2 and found that it outperforms TCP by a factor of three when there is path diversity. But the biggest benefits will come when data centers are designed for multipath transports.
Conference Paper
Full-text available
Data-intensive applications are increasingly designed to execute on large computing clusters. Grouped aggregation is a core primitive of many distributed programming models, and it is often the most efficient available mechanism for computations such as matrix multiplication and graph traversal. Such algorithms typically require non-standard aggregations that are more sophisticated than traditional built-in database functions such as Sum and Max. As a result, the ease of programming user-defined aggregations, and the efficiency of their implementation, is of great current interest. This paper evaluates the interfaces and implementations for user-defined aggregation in several state of the art distributed computing systems: Hadoop, databases such as Oracle Parallel Server, and DryadLINQ. We show that: the degree of language integration between user-defined functions and the high-level query language has an impact on code legibility and simplicity; the choice of programming interface has a material effect on the performance of computations; some execution plans perform better than others on average; and that in order to get good performance on a variety of workloads a system must be able to select between execution plans depending on the computation. The interface and execution plan described in the MapReduce paper, and implemented by Hadoop, are found to be among the worst-performing choices.
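The interface differences the paper evaluates revolve around mergeable partial aggregates: if partial states can be combined associatively, the system may fold data near the mappers before the final reduce. A hedged sketch of the initial/intermediate/final style in Python (the names and the in-memory group-by are illustrative, not any of the compared systems' APIs):

```python
class Average:
    """A user-defined aggregation with mergeable partial state:
    (sum, count) pairs can be combined in any order."""

    def initial(self, value):
        return (value, 1)                 # partial state for one record

    def combine(self, a, b):              # associative merge of partials
        return (a[0] + b[0], a[1] + b[1])

    def final(self, state):
        s, n = state
        return s / n

def grouped_aggregate(pairs, agg):
    """Group-by-key aggregation built only from the three-part interface."""
    groups = {}
    for key, value in pairs:
        part = agg.initial(value)
        groups[key] = agg.combine(groups[key], part) if key in groups else part
    return {k: agg.final(v) for k, v in groups.items()}
```

An aggregation like `Average` cannot be expressed as a plain reduce over raw values, which is why the decomposed interface matters for both expressiveness and early aggregation.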
Article
Full-text available
Network use has evolved to be dominated by content distribution and retrieval, while networking technology still can only speak of connections between hosts. Accessing content and services requires mapping from the what that users care about to the network's where. We present Content-Centric Networking (CCN) which takes content as a primitive - decoupling location from identity, security and access, and retrieving content by name. Using new approaches to routing named content, derived heavily from IP, we can simultaneously achieve scalability, security and performance. We have implemented the basic features of our architecture and demonstrate resilience and performance with secure file downloads and VoIP calls.
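A minimal sketch of the name-driven forwarding idea: requests carry names, not host addresses; a node answers from its content store if it can, and otherwise forwards by longest-prefix match on name components. The '/'-separated names, the dict-based FIB, and the three-way return value are simplifications of CCN's actual Interest/Data machinery.

```python
class CCNNode:
    """Toy CCN forwarder: consumers ask for names, not hosts."""

    def __init__(self, fib):
        self.fib = fib          # name prefix -> next-hop face
        self.store = {}         # cached content, keyed by full name

    def publish(self, name, data):
        self.store[name] = data

    def interest(self, name):
        if name in self.store:                  # cache hit: answer locally
            return ("data", self.store[name])
        parts = name.split("/")
        for i in range(len(parts), 0, -1):      # longest matching name prefix
            prefix = "/".join(parts[:i])
            if prefix in self.fib:
                return ("forward", self.fib[prefix])
        return ("nack", None)
```

Because content is addressed by name, any node along the path may satisfy the request, which is what decouples location from identity.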
Article
Full-text available
We revisit the problem of scaling software routers, motivated by recent advances in server technology that enable high-speed parallel processing---a feature router workloads appear ideally suited to exploit. We propose a software router architecture that parallelizes router functionality both across multiple servers and across multiple cores within a single server. By carefully exploiting parallelism at every opportunity, we demonstrate a 35Gbps parallel router prototype; this router capacity can be linearly scaled through the use of additional servers. Our prototype router is fully programmable using the familiar Click/Linux environment and is built entirely from off-the-shelf, general-purpose server hardware.
Conference Paper
To become a credible alternative to specialized hardware, general-purpose networking needs to offer not only flexibility, but also predictable performance. Recent projects have demonstrated that general-purpose multicore hardware is capable of high-performance packet processing, but under a crucial simplifying assumption of uniformity: all processing cores see the same type/amount of traffic and run identical code, while all packets incur the same type of conventional processing (e.g., IP forwarding). Instead, we present a general-purpose packet-processing system that combines ease of programmability with predictable performance, while running a diverse set of applications and serving multiple clients with different needs. Offering predictability in this context is considered a hard problem because software processes contend for shared hardware resources--caches, memory controllers, buses--in unpredictable ways. Still, we show that, in our system, (a) the way in which resource contention affects performance is predictable and (b) the overall performance depends little on how different processes are scheduled on different cores. To the best of our knowledge, our results constitute the first evidence that, when designing software network equipment, flexibility and predictability are not mutually exclusive goals.
Article
Software packet forwarding has been used for a long time in general purpose operating systems. While interesting for prototyping or on slow links, it is not considered a viable solution at very high packet rates, where various sources of overhead (particularly, the packet I/O mechanisms) get in the way of achieving good performance. Having recently developed a novel framework (called netmap) for packet I/O on general purpose operating systems, we have investigated how our work can improve the performance of software packet processing. The problem is of interest because software switches/routers are widely used, and they are becoming inadequate with the increasing use of 1..10 Gbit/s links. The two case studies (OpenvSwitch and Click userspace) that we report in this paper give very interesting answers and insights. First of all, accelerating the I/O layer has the potential for huge benefits: we improved the performance of OpenvSwitch from 780 Kpps to almost 3 Mpps, and that of Click userspace from 490 Kpps to 3.95 Mpps, by simply replacing the I/O library (libpcap) with our accelerated version. On the other hand, reaching these speedups was not purely mechanical. The original versions of the two systems had other limitations, partly hidden by the slow packet I/O library, which prevented or limited the exploitation of these speed gains. In the paper we make the following contributions: i) present an accelerated version of libpcap which gives significant speedups for many existing packet processing applications; ii) show how we modified two representative applications (in particular, Click userspace), achieving huge performance improvements; iii) prove that existing software packet processing systems can be made adequate for high speed links, provided we are careful in removing other bottlenecks not related to packet I/O.
Article
Although active networks have generated much debate in the research community, on the whole there has been little hard evidence to inform this debate. This paper aims to redress the situation by reporting what we have learned by designing, implementing and using the ANTS active network toolkit over the past two years. At this early stage, active networks remain an open research area. However, we believe that we have made substantial progress towards providing a more flexible network layer while at the same time addressing the performance and security concerns raised by the presence of mobile code in the network. In this paper, we argue our progress towards the original vision and the difficulties that we have not yet resolved in three areas that characterize a "pure" active network: the capsule model of programmability; the accessibility of that model to all users; and the applications that can be constructed in practice.
Article
In finance and healthcare, event processing systems handle sensitive data on behalf of many clients. Guaranteeing information security in such systems is challenging because of their strict performance requirements in terms of high event throughput and low processing latency. We describe DEFCON, an event processing system that enforces constraints on event flows between event processing units. DEFCON uses a combination of static and runtime techniques for achieving light-weight isolation of event flows, while supporting efficient sharing of events. Our experimental evaluation in a financial data processing scenario shows that DEFCON can provide information security with significantly lower processing latency compared to a traditional approach.
Conference Paper
Although there is tremendous interest in designing improved networks for data centers, very little is known about the network-level traffic characteristics of data centers today. In this paper, we conduct an empirical study of the network traffic in 10 data centers belonging to three different categories, including university, enterprise campus, and cloud data centers. Our definition of cloud data centers includes not only data centers employed by large online service providers offering Internet-facing applications but also data centers used to host data-intensive (MapReduce style) applications. We collect and analyze SNMP statistics, topology and packet-level traces. We examine the range of applications deployed in these data centers and their placement, the flow-level and packet-level transmission properties of these applications, and their impact on network and link utilizations, congestion and packet drops. We describe the implications of the observed traffic patterns for data center internal traffic engineering as well as for recently proposed architectures for data center networks.
Conference Paper
Most implementations of critical Internet protocols are written in type-unsafe languages such as C or C++ and are regularly vulnerable to serious security and reliability problems. Type-safe languages eliminate many errors but are not used due to the perceived performance overheads. We combine two techniques to eliminate this performance penalty in a practical fashion: strong static typing and generative meta-programming. Static typing eliminates run-time type information by checking safety at compile-time and minimises dynamic checks. Meta-programming uses a single specification to abstract the low-level code required to transmit and receive packets. Our domain-specific language, MPL, describes Internet packet protocols and compiles into fast, zero-copy code for both parsing and creating these packets. MPL is designed for implementing quirky Internet protocols ranging from the low-level: Ethernet, IPv4, ICMP and TCP; to the complex application-level: SSH, DNS and BGP; and even file-system protocols such as 9P. We report on fully-featured SSH and DNS servers constructed using MPL and our OCaml framework Melange, and measure greater throughput, lower latency, better flexibility and more succinct source code than their C equivalents OpenSSH and BIND. Our quantitative analysis shows that the benefits of MPL-generated code overcome the additional overheads of automatic garbage collection and dynamic bounds checking. Qualitatively, the flexibility of our approach shows that dramatic optimisations are easily possible.
Conference Paper
We have developed Gigascope, a stream database for network applications including traffic analysis, intrusion detection, router configuration analysis, network research, network monitoring, and performance monitoring and debugging. Gigascope is undergoing installation at many sites within the AT&T network, including at OC48 routers, for detailed monitoring. In this paper we describe our motivation for and constraints in developing Gigascope, the Gigascope architecture and query language, and performance issues. We conclude with a discussion of stream database research problems we have found in our application.
Conference Paper
Reconfigurable computing in the cloud helps to solve many practical problems relating to scaling out datacenters where computation is limited by energy consumption or latency. However, for reconfigurable computing in the cloud to become practical, several research challenges have to be addressed. This paper identifies some of the prerequisites for reconfigurable computing systems in the cloud and picks out several scenarios made possible with immense cloud-based computing capability.
Conference Paper
The shared nature of the network in today's multi-tenant datacenters implies that network performance for tenants can vary significantly. This applies to both production datacenters and cloud environments. Network performance variability hurts application performance, which makes tenant costs unpredictable and causes provider revenue loss. Motivated by these factors, this paper makes the case for extending the tenant-provider interface to explicitly account for the network. We argue this can be achieved by providing tenants with a virtual network connecting their compute instances. To this effect, the key contribution of this paper is the design of virtual network abstractions that capture the trade-off between the performance guarantees offered to tenants, their costs and the provider revenue. To illustrate the feasibility of virtual networks, we develop Oktopus, a system that implements the proposed abstractions. Using realistic, large-scale simulations and an Oktopus deployment on a 25-node two-tier testbed, we demonstrate that the use of virtual networks yields significantly better and more predictable tenant performance. Further, using a simple pricing model, we find that our abstractions can reduce tenant costs by up to 74% while maintaining provider revenue neutrality.
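One property of a virtual cluster abstraction <N, B> (N VMs, each with bandwidth B to a virtual switch) is that the traffic a tenant can push across any physical link is bounded by the smaller side of the cut that link induces. A sketch of a per-link capacity check under that observation (the function names and the flat check are illustrative assumptions, not Oktopus's actual allocation algorithm):

```python
def vc_link_demand(n_total, n_inside, bandwidth):
    """Bandwidth a virtual cluster <N, B> can need on a physical link
    that separates n_inside of its VMs from the remaining N - n_inside:
    cross-link traffic is capped by the smaller side of the cut."""
    return min(n_inside, n_total - n_inside) * bandwidth

def admissible(clusters, link_capacity):
    """Can one link carry all tenants' virtual clusters at once?
    clusters is a list of (N, n_inside, B) tuples for this link."""
    return sum(vc_link_demand(n, m, b) for n, m, b in clusters) <= link_capacity
```

Placing a tenant's VMs close together (small `n_inside` remainders) therefore reduces the bandwidth that must be reserved in the network core.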
Conference Paper
We present PacketShader, a high-performance software router framework for general packet processing with Graphics Processing Unit (GPU) acceleration. PacketShader exploits the massively-parallel processing power of GPU to address the CPU bottleneck in current software routers. Combined with our high-performance packet I/O engine, PacketShader outperforms existing software routers by more than a factor of four, forwarding 64B IPv4 packets at 39 Gbps on a single commodity PC. We have implemented IPv4 and IPv6 forwarding, OpenFlow switching, and IPsec tunneling to demonstrate the flexibility and performance advantage of PacketShader. The evaluation results show that GPU brings significantly higher throughput over the CPU-only implementation, confirming the effectiveness of GPU for computation and memory-intensive operations in packet processing.
Conference Paper
Today's data centers may contain tens of thousands of computers with significant aggregate bandwidth requirements. The network architecture typically consists of a tree of routing and switching elements with progressively more specialized and expensive equipment moving up the network hierarchy. Unfortunately, even when deploying the highest-end IP switches/routers, resulting topologies may only support 50% of the aggregate bandwidth available at the edge of the network, while still incurring tremendous cost. Non-uniform bandwidth among data center nodes complicates application design and limits overall system performance. In this paper, we show how to leverage largely commodity Ethernet switches to support the full aggregate bandwidth of clusters consisting of tens of thousands of elements. Similar to how clusters of commodity computers have largely replaced more specialized SMPs and MPPs, we argue that appropriately architected and interconnected commodity switches may deliver more performance at less cost than available from today's higher-end solutions. Our approach requires no modifications to the end host network interface, operating system, or applications; critically, it is fully backward compatible with Ethernet, IP, and TCP.
Conference Paper
Some end-to-end network services benefit greatly from network support in terms of utility and scalability. However, when such support is provided through service-specific mechanisms, the proliferation of one-off solutions tend to decrease the robustness of the network over time. Programmable routers, on the other hand, offer generic support for a variety of end-to-end services, but face a different set of challenges with respect to performance, scalability, security, and robustness. Ideally, router-based support for end-to-end services should exhibit the kind of generality, simplicity, scalability, and performance that made the Internet Protocol (IP) so successful. In this paper we present a router-based building block called ephemeral state processing (ESP), which is designed to have IP-like characteristics. ESP allows packets to create and manipulate small amounts of temporary state at routers via short, predefined computations. We discuss the issues involved in the design of such a service and describe three broad classes of problems for which ESP enables robust solutions. We also present performance measurements from a network-processor-based implementation.
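A toy version of ephemeral state: tagged values that packets create at a router and that expire after a fixed lifetime (ESP fixes this at 10 seconds; the store below parameterises it, and the `count` method is a stand-in for one of ESP's short predefined computations, not its actual instruction set).

```python
import time

class EphemeralStore:
    """Sketch of ESP-style ephemeral state at a router."""

    def __init__(self, lifetime=10.0, clock=time.monotonic):
        self.lifetime = lifetime
        self.clock = clock
        self.state = {}           # tag -> (value, expiry time)

    def put(self, tag, value):
        self.state[tag] = (value, self.clock() + self.lifetime)

    def get(self, tag):
        entry = self.state.get(tag)
        if entry is None or entry[1] < self.clock():
            self.state.pop(tag, None)      # lazily drop expired state
            return None
        return entry[0]

    def count(self, tag):
        """A 'count'-like instruction: packets increment a shared,
        short-lived counter identified by a tag they carry."""
        self.put(tag, (self.get(tag) or 0) + 1)
        return self.get(tag)
```

Because state is small, bounded, and self-expiring, the router never needs per-service cleanup logic, which is what gives ESP its IP-like robustness.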
Conference Paper
Our goal is to enable fast prototyping of networking hard- ware (e.g. modified Ethernet switches and IP routers) for teaching and research. To this end, we built and made avail- able the NetFPGA platform. Starting from open-source ref- erence designs, students and researchers create their designs in Verilog, and then download them to the NetFPGA board where they can process packets at line-rate for 4-ports of 1GE. The board is becoming widely used for teaching and research, and so it has become important to make it easy to re-use modules and designs. We have created a standard interface between modules, making it easier to plug modules together in pipelines, and to create new re-usable designs. In this paper we describe our modular design, and how we have used it to build several systems, including our IP router reference design and some extensions to it.
Conference Paper
This paper examines an extreme point in the design space of programmable switches and network policy enforcement. Rather than relying on extensive changes to switches to provide more programmability, SideCar distributes custom processing code between shims running on every end host and general purpose sidecar processors, such as server blades, connected to each switch via commonly available redirection mechanisms. This provides applications with pervasive network instrumentation and programmability on the forwarding plane. While not a perfect replacement for programmable switches, this solves several pressing problems while requiring little or no change to existing switches. In particular, in the context of public cloud data centers with 1000s of tenants, we present novel solutions for multicast, controllable network bandwidth allocation (e.g., use-what-you-pay-for), and reachability isolation (e.g., a tenant's VM only sees other VMs of the tenant and shared services).
Conference Paper
MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.
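The model reduces to two user functions plus a runtime that groups map output by key. A single-process sketch with the paper's canonical word-count example (no parallelism, partitioning, or fault tolerance, which are the real system's contributions):

```python
from collections import defaultdict

def run_mapreduce(inputs, map_fn, reduce_fn):
    """Minimal MapReduce runtime: map, shuffle by key, reduce per key."""
    shuffle = defaultdict(list)
    for record in inputs:
        for key, value in map_fn(record):    # map phase
            shuffle[key].append(value)       # shuffle/group by key
    return {k: reduce_fn(k, vs) for k, vs in shuffle.items()}

def wc_map(line):
    return [(word, 1) for word in line.split()]

def wc_reduce(word, counts):
    return sum(counts)
```

Everything between `map_fn` and `reduce_fn` (partitioning, scheduling, re-execution on failure) is the runtime's job, which is why users can write the two functions as simple sequential code.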
Article
Overlay networks are used today in a variety of distributed systems ranging from file-sharing and storage systems to communication infrastructures. However, designing, building and adapting these overlays to the intended application and the target environment is a difficult and time consuming process. To ease the development and the deployment of such overlay networks we have implemented P2, a system that uses a declarative logic language to express overlay networks in a highly compact and reusable form. P2 can express a Narada-style mesh network in 16 rules, and the Chord structured overlay in only 47 rules. P2 directly parses and executes such specifications using a dataflow architecture to construct and maintain overlay networks. We describe the P2 approach, how our implementation works, and show by experiment its promising trade-off point between specification complexity and performance.
Article
This paper presents an algorithm for content-based forwarding, an essential function in content-based networking. Unlike in traditional address-based unicast or multicast networks, where messages are given explicit destination addresses, the movement of messages through a content-based network is driven by predicates applied to the content of the messages. Forwarding in such a network amounts to evaluating the predicates stored in a router's forwarding table in order to decide to which neighbor routers the message should be sent. We are interested in finding a forwarding algorithm that can make this decision as quickly as possible in situations where there are numerous, complex predicates and high volumes of messages. We present such an algorithm and give the results of studies evaluating its performance.
Article
This paper presents an algorithm for content-based forwarding, an essential function in content-based networking. Unlike in traditional address-based unicast or multicast networks, where messages are given explicit destination addresses, the movement of messages through a content-based network is driven by predicates applied to the content of the messages. Forwarding in such a network amounts to evaluating the predicates stored in a router's forwarding table in order to decide to which neighbor routers the message should be sent. We are interested in finding a forwarding algorithm that can make this decision as quickly as possible in situations where there are large numbers of predicates and high volumes of messages. We present such an algorithm and give the results of studies evaluating its performance.
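The forwarding decision described above can be sketched as predicate evaluation over message content. The linear scan below is the naive baseline that the paper's algorithm is designed to beat; representing predicates as plain Python callables is an illustrative stand-in for the structured attribute constraints a real content-based router evaluates.

```python
class ContentRouter:
    """Toy content-based forwarder: the forwarding table maps a
    predicate over message content to the neighbour that asked
    for matching messages."""

    def __init__(self):
        self.table = []                    # (predicate, neighbour) pairs

    def subscribe(self, neighbour, predicate):
        self.table.append((predicate, neighbour))

    def forward(self, message):
        # A message carries no destination address: it goes to every
        # neighbour whose predicate its content satisfies.
        return {n for p, n in self.table if p(message)}
```

With many complex predicates, this per-message scan is the bottleneck, which motivates the paper's optimized matching structure.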
Article
Click is a new software architecture for building flexible and configurable routers. A Click router is assembled from packet processing modules called elements. Individual elements implement simple router functions like packet classification, queueing, scheduling, and interfacing with network devices. Complete configurations are built by connecting elements into a graph; packets flow along the graph's edges. Several features make individual elements more powerful and complex configurations easier to write, including pull processing, which models packet flow driven by transmitting interfaces, and flow-based router context, which helps an element locate other interesting elements. We demonstrate several working configurations, including an IP router and an Ethernet bridge. These configurations are modular---the IP router has 16 elements on the forwarding path---and easy to extend by adding additional elements, which we demonstrate with augmented configurations. On commodity PC hardware ...
Article
This paper presents Click, a flexible, modular software architecture for creating routers. Click routers are built from fine-grained components; this supports fine-grained extensions throughout the forwarding path. The components are packet processing modules called elements. The basic element interface is narrow, consisting mostly of functions for initialization and packet handoff, but elements can extend it to support other functions (such as reporting queue lengths). To build a router configuration, the user chooses a collection of elements and connects them into a directed graph. The graph's edges, which are called connections, represent possible paths for packet handoff. To extend a configuration, the user can write new elements or compose existing elements in new ways, much as UNIX allows one to build complex applications directly or by composing simpler ones using pipes.
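A minimal sketch of the element/connection model, assuming push processing only and dict-based packets (real Click elements are C++ classes with richer port semantics, and pull processing is omitted here entirely):

```python
class Element:
    """Click-style element: process a packet, then hand it to the
    elements connected downstream (push processing only)."""

    def __init__(self, fn):
        self.fn = fn                    # this element's packet function
        self.downstream = []

    def connect(self, other):
        self.downstream.append(other)   # add a graph edge (a 'connection')
        return other

    def push(self, packet):
        packet = self.fn(packet)
        if packet is not None:          # None models a dropped packet
            for nxt in self.downstream:
                nxt.push(packet)

# A tiny two-element configuration: classify, then record what survives.
received = []
classifier = Element(lambda p: p if p.get("proto") == "ip" else None)
counter = Element(lambda p: (received.append(p), p)[1])
classifier.connect(counter)
```

Configurations grow by composing more elements into the graph, much as the abstract's UNIX-pipe analogy suggests.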
DOBRESCU, M., EGI, N., ARGYRAKI, K., CHUN, B.-G., FALL, K., IANNACCONE, G., KNIES, A., MANESH, M., AND RATNASAMY, S. RouteBricks: Exploiting Parallelism To Scale Software Routers. In SOSP (2009).
Dr. Multicast: Rx for Datacenter Communication Scalability. In EuroSys (2010).
ALIZADEH, M., GREENBERG, A., MALTZ, D. A., PADHYE, J., PATEL, P., PRABHAKAR, B., SENGUPTA, S., AND SRIDHARAN, M. Data Center TCP (DCTCP). In SIGCOMM (2010).
MADHAVAPEDDY, A., HO, A., DEEGAN, T., SCOTT, D., AND SOHAN, R. Melange: Towards a "functional" Internet. In EuroSys (2007).
AL-FARES, M., LOUKISSAS, A., AND VAHDAT, A. A Scalable, Commodity Data Center Network Architecture. In SIGCOMM (2008).
CRANOR, C., JOHNSON, T., SPATASCHEK, O., AND SHKAPENYUK, V. Gigascope: A Stream Database For Network Applications. In SIGMOD (2003).
NAOUS, J., GIBB, G., BOLOUKI, S., AND MCKEOWN, N. NetFPGA: Reusable Router Architecture for Experimental Research. In PRESTO (2008).
CURTIS, A. R., MOGUL, J. C., TOURRILHES, J., YALAGANDULA, P., SHARMA, P., AND BANERJEE, S. DevoFlow: Scaling Flow Management for High-Performance Networks. In SIGCOMM (2011).