Article · PDF available

How to Operate Container Clusters more Efficiently? Some Insights Concerning Containers, Software-Defined-Networks, and their sometimes Counterintuitive Impact on Network Performance


Abstract

In previous work, we concluded that container technologies and overlay networks typically have negative performance impacts, mainly due to the additional networking layer they introduce. This is what everyone would expect; only the degree of the impact might be in question. These negative performance impacts can be accepted (as long as they stay moderate) in exchange for the better flexibility and manageability of the resulting systems. However, we drew our conclusions only from data covering small-core machine types. This extended work additionally analyzed the impact of various (high-core) machine types of different public cloud service providers (Amazon Web Services, AWS, and Google Compute Engine, GCE) and comes to a more differentiated view, with some astonishing results for high-core machine types. Our findings suggest that a simple and cost-effective strategy is to operate container clusters on highly similar high-core machine types (even across different cloud service providers). This strategy should appropriately cover the major, complex effects that reduce data transfer rates in containers, container clusters and software-defined networks.
... As an internal part of C4S, it is important to know whether the network performance impact of using these technologies is acceptable or not. Therefore, we developed ppbench [26], a tool for benchmarking network performance depending on the use of container technology, the use of a cluster/SDN, the message size, the machine type and the programming language. It turned out that the performance impact depends on all of these factors. ...
... The biggest insights from the benchmark tests are: the selection of the programming language has a big effect on performance, and the impact of an SDN depends on the machine type: using high-core machines can make the impact of the SDN negligible. All our findings about the performance impacts of using containers and container clusters are published in [26], [27], [28], [29]. ...
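The kind of measurement such a benchmark performs can be sketched as a minimal TCP round-trip test that varies the message size. This is an illustrative sketch only, not actual ppbench code; the helper names are made up:

```python
import socket
import threading
import time

def echo_forever(server_sock):
    """Accept connections sequentially and echo all received bytes back."""
    while True:
        conn, _ = server_sock.accept()
        with conn:
            while data := conn.recv(65536):
                conn.sendall(data)

def round_trip(host, port, size, rounds=20):
    """Mean seconds per round trip for messages of `size` bytes."""
    payload = b"x" * size
    with socket.create_connection((host, port)) as c:
        start = time.perf_counter()
        for _ in range(rounds):
            c.sendall(payload)
            received = 0
            while received < size:
                received += len(c.recv(65536))
        return (time.perf_counter() - start) / rounds

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_forever, args=(server,), daemon=True).start()

# Sweep the message size, as ppbench does, to see how latency scales.
results = {size: round_trip("127.0.0.1", port, size)
           for size in (64, 4096, 65536)}
for size, rtt in results.items():
    print(f"{size:6d} B: {rtt * 1e6:9.1f} us/round-trip")
```

Repeating the same sweep inside a container, across an SDN, or on a different machine type (and porting the client to other languages) isolates exactly the factors the cited benchmarks vary.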
... Particularly with regard to performance, we generally recommend the machine pairs with the highest number of cores. We have published more details about EasyCompare and the benchmark results in [30], [26]. ...
Article
Full-text available
Cloud computing enables companies to obtain computational and storage resources on demand. Especially when using features like elasticity and scaling, cloud computing can be a very powerful technology to run, e.g., a web service without worrying about failure through overload or wasting money on paid but unneeded resources. To use these features, developers can use or implement cloud-native applications (CNA): containerized software running on an elastic platform. Nevertheless, a CNA can be complex to plan, install, configure, maintain and troubleshoot. Small and medium-sized enterprises (SMEs) are mostly limited by their personnel and financial restrictions, so using offered cloud services can facilitate a very fast realization of a software project. However, when using these (proprietary) services it is often difficult to migrate between cloud vendors. This paper introduces C4S, an open source system for SMEs to deploy and operate their container applications with features like elasticity, auto-scaling and load balancing. The system also supports transferability features for migrating containers between different Infrastructure as a Service (IaaS) platforms. Thus, C4S is a solution for SMEs to use the benefits of cloud computing with IaaS migration features to reduce vendor lock-in.
... They can use available comparison matrices like the one shown in Table 2. The reader might want to check our contributions covering these aspects [18,44]. ...
... Astonishingly, it is quite complex to figure out these impacts in advance of a microservice design process, due to a lack of specialized benchmarks. To clarify these performance implications, we developed a benchmark intentionally designed for the microservice domain [39,[44][45][46]. It seems more useful to reflect on fundamental design decisions and their performance impacts up front, during microservice architecture development, rather than in the aftermath. ...
... Figure 8 and Figure 9 show some measurements from accompanying performance studies which were done for this project. Based on these studies [18,39,[44][45][46], we identified at least two general takeaways: the impact of programming languages on REST performance should not be underestimated (see Figure 8) and is a good indicator for comparing performance impacts. Although Go is meant to be one of the most suited languages for network I/O, it turned out that Go is only the best choice for messages smaller than half of a standard TCP receive window. ...
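The "half of a standard TCP receive window" threshold mentioned above can be inspected on a given host. As a rough proxy (an assumption of this sketch, since the advertised window is derived from the buffer and tuned by the kernel), the default socket receive buffer is queried:

```python
import socket

# The default SO_RCVBUF is used here as a rough stand-in for the
# "standard receive window"; halving it gives the message-size
# threshold referred to in the cited studies.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
s.close()
print(f"default receive buffer: {rcvbuf} bytes")
print(f"approximate message-size threshold: ~{rcvbuf // 2} bytes")
```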
Chapter
Full-text available
Cloud computing enables the delivery of innovative services in an agile way and at world scale. Even very small companies with access to first-class cloud engineering knowledge can create exponential business growth; companies like Instagram or Netflix have proved that impressively. But operating cloud-native applications is extremely challenging and overburdens the capabilities of most SME IT departments. Sadly, most current research proposals do not intentionally focus on these micro or small companies. Instead, this contribution presents a concept for an elastic cloud runtime platform that is intentionally designed for such small companies. This elastic platform shall provide a transferable, vendor-lock-in-avoiding, elastic, manageable and easy-to-adapt cloud runtime environment (not only) for SMEs, small research groups or non-profit organizations.
... This section summarizes the main points of our performance considerations. Our studies investigated performance aspects on the service level [31,32,33] and on the elastic container platform level [34,35]. We developed the research benchmarking prototypes EASYCOMPARE for the platform level and PPBENCH for the service level (see Table 18) to do these pre-solution performance analytics. ...
... [91] x x x Investigation of Impacts on Network Performance in the Advance of a Microservice Design (2017) [33] x x Project Cloud TRANSIT -Or to Simplify Cloud-native Application Provisioning for SMEs (2016) [54] x Journal Papers A Brief History of Cloud Application Architectures (2018) [30] x x Understanding Cloud-native Applications after 10 Years of Cloud Computing (2017) [3] x x Taming the Complexity of Elasticity, Scalability and Transferability in Cloud Computing (2016) [53] x x How to Operate Container Clusters more Efficiently? (2015) [34] x x x Automatic Benchmarking of IaaS Cloud Service Providers for a World of Container Clusters (2015) [35] x x x CloudTRANSIT -Transferierbare IT-Services mittels einer generischen Cloud Service Description Language (2014) [92] x Lightweight Virtualization Cluster -Howto overcome Cloud Vendor Lock-in (2014) [16] x Conference Papers Towards Distributed Clouds (2018) [2] x Towards a Lightweight Multi-Cloud DSL for Elastic and Transferable Cloud-native Applications (2018) [61] x x About being the Tortoise or the Hare? Making Cloud Applications too Fast and Furious for Attackers (2018) [82] x About an Immune System Understanding for Cloud-native Applications (2018) [81] x Smuggling Multi-Cloud Support into Cloud-native Applications using Elastic Container Platforms (2017) [51] x x x ClouNS -a Cloud-Native Application Reference Model for Enterprise Architects (2016) [1] x ppbench -A Visualizing Network Benchmark for Microservices (2016) [32] x x Overcome Vendor Lock-In by Integrating Already Available Container Technologies (2016) [52] x x About Microservices, Containers and their Underestimated Impact on Network Performance (2015) [31] x ...
Technical Report
Full-text available
The project CloudTRANSIT dealt with the question of how to transfer cloud applications and services at runtime, without downtime, across cloud infrastructures from different public and private cloud service providers. This technical report summarizes the outcomes of approximately 20 research papers that have been published throughout the project. It intends to provide an integrated bird's-eye view on these, so far isolated, papers and references the original papers wherever possible. The project also systematically investigated practitioner-initiated cloud application engineering trends of the last three years that provide several promising technical opportunities to avoid cloud vendor lock-in pragmatically. Especially European cloud service providers should track this kind of research because of the technical opportunities to bring cloud application workloads back home to Europe. Such workloads are currently often deployed and inherently bound to U.S. providers. Intensified EU General Data Protection Regulation (GDPR) policies, European Cloud Initiatives, or "America First" policies might even make this imperative. So, technical solutions are needed for these scenarios that are manageable not only by large but also by small and medium-sized enterprises. Therefore, this project systematically analyzed commonalities of cloud infrastructures and cloud applications. The latest evolutions of cloud standards and cloud engineering trends (like containerization) were used to derive a cloud-native reference model (ClouNS) that guided the development of a pragmatic cloud-transferability solution. This solution intentionally separates the infrastructure-agnostic operation of elastic container platforms (like Swarm, Kubernetes, Mesos/Marathon, etc.) via a multi-cloud scaler from the platform-agnostic definition of cloud-native applications and services via a unified cloud application modeling language.
Both components are independent but complementary. Because of their independence, they can even contribute (although not intended) to other fields like moving-target-based cloud security; distributed ledger technologies (blockchains) may provide options here as well. The report summarizes the main outcomes and insights of a proof-of-concept solution that realizes transferability for cloud applications and services at runtime without downtime.
... All CO systems also support an internal DNS service that enables lookup of the cluster IP address of a service based on the name specified in its Service configuration. 6. Host ports: Finally, it is also possible to attach HostPorts to containers so that clients can directly send network requests to a container without having to pass through service proxies and the virtual network. This allows clients to use their own load-balancer solution and possibly avoid the known performance overheads of the virtual network layer [22], [21]. Note that in the case of Kubernetes and Mesos, which adopt the Container Network Interface specification, a separate portmap plugin [6] must be installed on each node of the cluster in order to enable communication to and from host ports. ...
... The 4th and 5th features (i.e., virtual network software and service proxies for enabling TCP/IP location transparency) may, however, introduce a non-negligible performance overhead, as already indicated by prior research [22], [21]. We expect, however, that some of this overhead may be avoided by exposing database containers via the combination of the sixth feature (i.e., host ports) and a stable floating IP address or stable VM hostname, which can be provisioned by the underlying cloud provider. ...
Conference Paper
Full-text available
Container orchestration systems, such as Docker Swarm, Kubernetes and Mesos, provide automated support for deployment and management of distributed applications as sets of containers. While these systems were initially designed for running load-balanced stateless services, they have also been used for running database clusters because of improved resilience attributes such as fast auto-recovery of failed database nodes, and location transparency at the level of TCP/IP connections between database instances. In this paper we evaluate the performance overhead of Docker Swarm and Kubernetes for deploying and managing NoSQL database clusters, with MongoDB as database case study. As the baseline for comparison, we use an OpenStack IaaS cloud that also allows attaining these improved resilience attributes although in a less automated manner.
... To the best of our knowledge, a performance analysis of this abstraction has not been conducted, but the analysis of nested containers is the closest research field. In [15,17], network performance degradation was observed in some configurations because of a deployment based on fully nested containers. This degradation is caused by the usage of network virtualization technologies (Linux Bridge or Open vSwitch) twice, or by the usage of Software-Defined Networks and encryption. However, the Kubernetes pod abstraction gives a common port space to all containers (and therefore the same IP address to all services), so the performance degradation may be different. Table 8 of the citing work summarizes related work by the kind of model proposed, the assumptions of the model, the virtualisation infrastructure used and the experimental framework: [9] experimental approach, VMs (Hyper-V, KVM, vSphere, Xen); [10] experimental approach, VMs (KVM, Xen, vBox); [11,12] experimental approach, containers and VMs (Docker, KVM, Xen, LXC); [13] experimental approach, containers and VMs (LXC, OpenVZ, VServer, Xen); [14] experimental approach, containers (LXC); [15,16] experimental approach, containers (Docker, KVM); [17] experimental approach, containers (Docker, Weave); [18,19] continuous Markov chains (exponential PDF), containers over VMs (Docker Swarm over Amazon EC2); [20] Nets-within-Nets (any PDF), VMs (simulations); their own work: Nets-within-Nets (any PDF) with an experimental approach, containers (Kubernetes over bare metal). ...
Article
A key challenge for supporting elastic behaviour in cloud systems is to achieve good performance in automated (de-)provisioning and scheduling of computing resources. One significant aspect is the overhead associated with deploying, terminating and maintaining resources. Due to their lower start-up and termination overheads, containers are therefore rapidly replacing virtual machines (VMs) as the computation instance of choice in many cloud deployments. In this paper, we analyse the performance of Kubernetes through a Petri-net-based performance model. Kubernetes is a container management system for a distributed cluster environment. Our model can be characterised using data from a Kubernetes deployment, and can be exploited to support capacity planning and the design of Kubernetes-based elastic applications.
... This section summarizes the main points of our performance considerations. Our studies investigated performance aspects on the service level [29,30,31] and on the elastic container platform level [32,33]. We developed the research benchmarking prototypes EASYCOMPARE for the platform level and PPBENCH for the service level (see Table 18) to do these pre-solution performance analytics. ...
... [89] x x x Investigation of Impacts on Network Performance in the Advance of a Microservice Design (2017) [31] x x Project Cloud TRANSIT -Or to Simplify Cloud-native Application Provisioning for SMEs (2016) [52] x Journal Papers Understanding Cloud-native Applications after 10 Years of Cloud Computing (2017) [2] x x Taming the Complexity of Elasticity, Scalability and Transferability in Cloud Computing (2016) [51] x x How to Operate Container Clusters more Efficiently? (2015) [32] x x x Automatic Benchmarking of IaaS Cloud Service Providers for a World of Container Clusters (2015) [33] x x x CloudTRANSIT -Transferierbare IT-Services mittels einer generischen Cloud Service Description Language (2014) [90] x Lightweight Virtualization Cluster -Howto overcome Cloud Vendor Lock-in (2014) [15] x A Lightweight Virtualization Cluster Reference Architecture Derived from Open Source PaaS Platforms (2014) [16] x Conference Papers Towards a Lightweight Multi-Cloud DSL for Elastic and Transferable Cloud-native Applications (2018) [59] x x About being the Tortoise or the Hare? Making Cloud Applications too Fast and Furious for Attackers (2018) [80] x About an Immune System Understanding for Cloud-native Applications (2018) [79] x Smuggling Multi-Cloud Support into Cloud-native Applications using Elastic Container Platforms (2017) [49] x x x Towards a Description of Elastic Cloud-native Applications for Transferable Multi-Cloud-Deployments (2017) [60] x ClouNS -a Cloud-Native Application Reference Model for Enterprise Architects (2016) [1] x ppbench -A Visualizing Network Benchmark for Microservices (2016) [30] x x Overcome Vendor Lock-In by Integrating Already Available Container Technologies (2016) [50] x x About Microservices, Containers and their Underestimated Impact on Network Performance (2015) [29] x ...
Preprint
Full-text available
The project CloudTRANSIT dealt with the question of how to transfer cloud applications and services at runtime, without downtime, across cloud infrastructures from different public and private cloud service providers. This technical report summarizes the outcomes of more than 20 research papers and reports that have been published throughout the course of the project. The intent of this report is to provide an integrated bird's-eye view on these, so far isolated, papers; the report references the original papers wherever possible. The project also systematically investigated practitioner-initiated cloud application engineering trends of the last three years that provide several promising technical opportunities to avoid cloud vendor lock-in pragmatically. Especially European cloud service providers should track this kind of research because of the technical opportunities to bring cloud application workloads back home to Europe; such workloads are currently often deployed and inherently bound to U.S. providers. Intensified EU General Data Protection Regulation (GDPR) policies, European Cloud Initiatives, or "America First" policies might even make this imperative. Technical solutions are needed for these scenarios that are manageable not only by large but also by small and medium-sized enterprises. Therefore, this project systematically analyzed commonalities of cloud infrastructures and of cloud applications. The latest evolutions of cloud standards and cloud engineering trends (like containerization) were used to derive a cloud-native reference model (ClouNS) that guided the development of a pragmatic cloud-transferability solution. This solution intentionally separates the infrastructure-agnostic operation of elastic container platforms (like Swarm, Kubernetes, Mesos/Marathon, etc.) via a multi-cloud scaler from the platform-agnostic definition of cloud-native applications and services via a unified cloud application modeling language.
Both components are independent but complementary. Because of their independence, they can even contribute (although not intended) to other fields like moving-target-based cloud security. The report summarizes the main outcomes and insights of a proof-of-concept solution that realizes transferability for cloud applications and services at runtime without downtime.
... For instance, Docker's Swarm Mode (since version 1.12) provides an encrypted data and control plane and Kubernetes can be configured to use encryptable overlay network plugins like Weave. The often feared network performance impacts can be contained [20,21]. ...
Preprint
Full-text available
Cloud-native applications are often designed for only one specific cloud infrastructure or platform. The effort to port such applications into a different cloud is usually a laborious one-time exercise. Modern cloud-native application architecture approaches make use of popular elastic container platforms (Apache Mesos, Kubernetes, Docker Swarm), which address a lot of existing cloud engineering requirements. Given this, it is astonishing that these platforms (which already exist and are available as open source) are not considered more consistently for multi-cloud solutions. These platforms provide inherent multi-cloud support, but this is often overlooked. This paper presents a software prototype and shows how Kubernetes and Docker Swarm clusters can be successfully transferred at runtime across public cloud infrastructures of Google (Google Compute Engine), Microsoft (Azure) and Amazon (EC2), and further cloud infrastructures like OpenStack. Additionally, software engineering lessons learned are derived, and some astonishing performance data of the mentioned cloud infrastructures is presented that could be used for further optimizations of IaaS transfers of cloud-native applications.
... For instance, Docker's Swarm Mode (since version 1.12) provides an encrypted data and control plane and Kubernetes can be configured to use encryptable overlay network plugins like Weave. The often feared network performance impacts can be contained [20,21]. ...
Chapter
Full-text available
Cloud-native applications are often designed for only one specific cloud infrastructure or platform. The effort to port such applications into a different cloud is usually a laborious one-time exercise. Modern cloud-native application architecture approaches make use of popular elastic container platforms (Apache Mesos, Kubernetes, Docker Swarm), which address a lot of existing cloud engineering requirements. Given this, it is astonishing that these platforms (which already exist and are available as open source) are not considered more consistently for multi-cloud solutions. These platforms provide inherent multi-cloud support, but this is often overlooked. This paper presents a software prototype and shows how Kubernetes and Docker Swarm clusters can be successfully transferred at runtime across public cloud infrastructures of Google (Google Compute Engine), Microsoft (Azure) and Amazon (EC2), and further cloud infrastructures like OpenStack. Additionally, software engineering lessons learned are derived, and some astonishing performance data of the mentioned cloud infrastructures is presented that could be used for further optimizations of IaaS transfers of cloud-native applications.
... This is especially true for small core machine types. However, due to attenuation effects, SDN can even show positive effects in case of non-continuous network behavior [12]. Machine types with more cores decrease the performance impact of SDN because CPU contention effects are reduced. ...
Conference Paper
Full-text available
Due to REST-based protocols, microservice architectures are inherently horizontally scalable. That might be why the microservice architectural style is getting more and more attention in cloud-native application engineering. Corresponding microservice architectures often rely on a complex technology stack which includes containers, elastic platforms and software-defined networks. Astonishingly, there are almost no specialized tools to figure out the performance impacts that come along with this microservice architectural style in advance of a microservice design. Therefore, we propose a benchmarking solution intentionally designed for this upfront design phase. Furthermore, we evaluate our benchmark and present some performance data to reflect on some often-heard cloud-native application performance rules (or myths).
Conference Paper
Full-text available
Elastic container platforms (like Kubernetes, Docker Swarm, Apache Mesos) fit very well with existing cloud-native application architecture approaches. It is therefore more than astonishing that these already existing, open-source elastic platforms are not considered more consistently in multi-cloud research. Elastic container platforms provide inherent multi-cloud support that can easily be accessed. We present a solution proposal for a control process that is able to scale (and, as a side effect, migrate) elastic container platforms across different public and private cloud service providers. This control loop can be used in the execution phase of self-adaptive auto-scaling MAPE loops (monitoring, analysis, planning, execution). Additionally, we present several lessons learned from our prototype implementation which might be of general interest to researchers and practitioners. For instance, describing only the intended state of an elastic platform and letting a single control process take care of reaching this intended state is far less complex than defining plenty of specific multi-cloud-aware workflows to deploy, migrate, terminate, scale up and scale down elastic platforms or applications.
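The lesson about describing only the intended state can be sketched as one reconciliation step of a MAPE execution phase. This is an illustrative sketch; the provider names and the action vocabulary are invented, and a real control process would of course have to call the cloud providers' APIs:

```python
def reconcile(intended, current):
    """Derive scaling actions from the difference between the intended
    and the current node count per provider, instead of hand-writing
    one workflow per deploy/migrate/terminate/scale case."""
    actions = []
    for provider in sorted(set(intended) | set(current)):
        want = intended.get(provider, 0)
        have = current.get(provider, 0)
        if want > have:
            actions.append(("scale_up", provider, want - have))
        elif want < have:
            actions.append(("scale_down", provider, have - want))
    return actions

# Intended state: 3 nodes on AWS, 2 on GCE. The current state has drifted.
intended = {"aws": 3, "gce": 2}
current = {"aws": 1, "gce": 2, "azure": 1}
print(reconcile(intended, current))
# [('scale_up', 'aws', 2), ('scale_down', 'azure', 1)]
```

Note how a migration falls out as a side effect: changing the intended state from one provider to another yields a scale-up on the target and a scale-down on the source, with no dedicated migration workflow.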
Article
Full-text available
Selection of qualified personnel is a key success factor for an organization. The complexity and importance of the problem call for analytical methods rather than intuitive decisions. The literature offers various methods for personnel selection. This paper considers a real personnel selection application that uses expert opinion within a decision-making model called the SAW method. Seven qualitative, positive criteria are applied to select the best among five candidates and to rank them. Finally, the introduced method is demonstrated in a case study.
Article
Full-text available
Cloud service selection can be a complex and challenging task for a cloud engineer. Most current approaches try to identify the best cloud service provider by evaluating several relevant criteria like price, processing, memory, disk and network performance, quality of service and so on. Nevertheless, the decision-making problem involves so many variables that it is hard to model it appropriately. We present an approach that is not about selecting the best cloud service provider, but about selecting the most similar resources provided by different cloud service providers. This fits the practical needs of cloud service engineers much better, especially if container clusters are involved. EasyCompare, an automated benchmarking tool suite to compare cloud service providers, is able to benchmark and compare virtual machine types of different cloud service providers using a Euclidean distance measure. It turned out that only 1% of the theoretically possible machine pairs have to be considered in practice. These relevant machine types can be identified by systematic benchmark runs in less than three hours. We present some expectable but also some astonishing evaluation results of EasyCompare, used to evaluate two major and representative public cloud service providers: Amazon Web Services and Google Compute Engine.
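The similarity measure can be sketched as follows. The benchmark vectors and their scores below are invented for illustration (EasyCompare derives real ones from benchmark runs); only the Euclidean-distance ranking idea is taken from the abstract:

```python
import math

# Hypothetical normalized benchmark vectors per machine type:
# (cpu, memory, disk, network) scores, each scaled to [0, 1].
aws = {"m3.large": (0.42, 0.55, 0.38, 0.51),
       "m3.xlarge": (0.78, 0.74, 0.60, 0.66)}
gce = {"n1-standard-2": (0.40, 0.57, 0.35, 0.49),
       "n1-standard-4": (0.75, 0.70, 0.58, 0.68)}

def distance(a, b):
    """Euclidean distance between two benchmark feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Rank cross-provider machine pairs by similarity (smallest distance
# first); only the top-ranked pairs need to be considered in practice.
pairs = sorted((distance(va, vg), ma, mg)
               for ma, va in aws.items()
               for mg, vg in gce.items())
for d, ma, mg in pairs:
    print(f"{ma:10s} <-> {mg:14s} distance = {d:.3f}")
```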
Conference Paper
Full-text available
Microservices are used to build complex applications composed of small, independent and highly decoupled processes. Recently, microservices have often been mentioned in one breath with container technologies like Docker, which is why operating system virtualization is experiencing a renaissance in cloud computing. These approaches shall provide horizontally scalable, easily deployable systems and a high-performance alternative to hypervisors. Nevertheless, the performance impacts of containers on top of hypervisors are hardly investigated. Furthermore, microservice frameworks often come along with software-defined networks. This contribution presents benchmark results that quantify the impacts of containers, software-defined networking and encryption on network performance. Even containers, although postulated to be lightweight, show a noteworthy impact on network performance. These impacts can be minimized on several system layers. Some design recommendations are derived for cloud-deployed systems following the microservice architecture pattern.
Article
Full-text available
Cloud platforms encompass a large number of storage services that can be used to manage the needs of customers. Each of these services, offered by a different provider, is characterized by specific features, limitations and prices. In the presence of multiple options, it is crucial to select the best solution fitting the requirements of the customer in terms of quality of service and costs. Most of the available approaches are not able to handle uncertainty in the expression of subjective preferences from customers, and can result in wrong (or sub-optimal) service selections in the presence of rational/selfish providers exposing untrustworthy indications of the quality-of-service levels and prices associated with their offers. In addition, due to its multi-objective nature, optimal service selection is a very complex task that should be managed, when possible, in a distributed way, for well-known scalability reasons. In this work, we face these challenges with three novel contributions. Fuzzy set theory is used to express vagueness in the subjective preferences of the customers. The service selection is resolved through the distributed application of fuzzy inference or the Dempster-Shafer theory of evidence. The selection strategy is also complemented by a game-theoretic approach for promoting truth-telling among service providers. We present empirical evidence of the effectiveness of the proposed solution through properly crafted simulation experiments.
Article
Full-text available
To overcome vendor lock-in obstacles in public cloud computing, the capability to define transferable cloud-based services is crucial, but it is not yet solved satisfactorily. This is especially true for small and medium-sized enterprises, which are typically not able to employ a large staff of cloud service and IT experts. The current state of the art in cloud service design does not deal systematically with how to define, deploy and operate cross-platform capable cloud services. This is mainly due to the inherent complexity of the field and to differences in the details of the many existing public and private cloud infrastructures. One way to handle this complexity is to restrict cloud service design to a common subset of commodity features provided by existing public and private cloud infrastructures. Nevertheless, these restrictions raise new service design questions that have to be answered by ongoing research in a pragmatic manner, respecting the limited IT operation capabilities of small and medium-sized enterprises. By simplifying and harmonizing the use of cloud infrastructures through lightweight virtualization approaches, the transfer of cloud deployments between a variety of cloud service providers becomes possible. This article discusses several aspects, like high availability, secure communication, elastic service design, transferability of services and formal descriptions of service deployments, which have to be addressed and are investigated by our ongoing research.
Article
Full-text available
The current state of the art in cloud service design does not deal systematically with how to define, deploy and operate cross-platform capable cloud services. By simplifying and harmonizing the use of IaaS cloud infrastructures through lightweight virtualization approaches, the transfer of cloud deployments between a variety of cloud service providers becomes more frictionless. This article proposes operating system virtualization as an appropriate and already existing abstraction layer on top of public and private IaaS infrastructures, and derives a reference architecture for lightweight virtualization clusters. This reference architecture is reflected in and derived from several existing (open source) projects for container-based virtualization, like Docker, Jails, Zones and Workload Partitions of various Unix operating systems, and from open source PaaS platforms like CoreOS, Apache Mesos, OpenShift, CloudFoundry and Kubernetes.
Article
Cloud computing aims at providing elastic applications that can be scaled at runtime. In practice, traditional service composition methods are not flexible enough to adapt to changes in cross-cloud environments. In view of this challenge, an adaptive service selection method for cross-cloud service composition is proposed in this paper, which dynamically selects services with near-optimal performance to adapt to changes in time. First, the service selection and execution are modeled as a Markov decision process to ensure flexibility. Second, the notion of a service pair is defined, and a way to build the set of service pairs is proposed to predict the performance of candidate services. Third, an adaptive service selection algorithm is designed to select proper cloud services in a changing cross-cloud environment. Finally, a case study of cross-cloud service composition and experiments are presented to validate the feasibility of our proposal. Concurrency and Computation: Practice and Experience, 2013. © 2013 Wiley Periodicals, Inc.
Article
Emerging mega-trends (e.g., mobile, social, cloud, and big data) in information and communication technologies (ICT) pose new challenges to the future Internet, for which ubiquitous accessibility, high bandwidth, and dynamic management are crucial. However, traditional approaches based on the manual configuration of proprietary devices are cumbersome and error-prone, and they cannot fully utilize the capability of the physical network infrastructure. Recently, software-defined networking (SDN) has been touted as one of the most promising solutions for the future Internet. SDN is characterized by two distinguishing features: decoupling the control plane from the data plane, and providing programmability for network application development. As a result, SDN is positioned to provide more efficient configuration, better performance, and higher flexibility to accommodate innovative network designs. This paper surveys the latest developments in this active research area. We first present a generally accepted definition of SDN with the aforementioned two characteristic features, along with the potential benefits of SDN. We then dwell on its three-layer architecture, comprising an infrastructure layer, a control layer, and an application layer, and substantiate each layer with existing research efforts and related research areas. We follow that with an overview of the de facto SDN implementation (i.e., OpenFlow). Finally, we conclude this survey with some suggested open research challenges.