Fig 2 - uploaded by Rastin Pries
Three-tier data center architecture.


Source publication
Conference Paper
Full-text available
The high power consumption of data centers confronts providers with major challenges. However, not only do the servers and the cooling consume a huge amount of energy; the data center network architecture also makes an important contribution. In this paper, we introduce different data center architectures and compare them regarding their power...

Contexts in source publication

Context 1
... three-tier data center architecture is currently the most common architecture. It consists of three different layers, the access layer, the aggregation layer, and the core layer as shown in Figure 2. The aggregation layer facilitates the increase in the number of server nodes (more than 10,000 servers) while keeping inexpensive layer-2 switches in the access network for providing a loop-free topology. ...
Context 2
... three-tier architecture also normally uses ECMP for load balancing and as the maximum number of allowed ECMP paths is eight, a typical three-tier architecture consists of eight core switches. Figure 2 only shows two core switches. The current connection between the layers is similar to the two-tier architecture. ...
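The eight-way ECMP constraint quoted above directly fixes the core-switch count; a minimal sketch of the resulting scale (the aggregation, access and per-rack host counts below are illustrative assumptions, not figures from the source):

```python
# Sketch of three-tier data center sizing under an 8-way ECMP limit.
# Port/switch counts in the example call are assumptions, not from the source.
MAX_ECMP_PATHS = 8               # typical ECMP limit cited in the context
core_switches = MAX_ECMP_PATHS   # hence the eight core switches

def three_tier_hosts(agg_switches, access_per_agg, hosts_per_access):
    """Total hosts reachable in a three-tier (access/aggregation/core) fabric."""
    return agg_switches * access_per_agg * hosts_per_access

# e.g. 16 aggregation switches, 16 access switches each, 48 hosts per rack
hosts = three_tier_hosts(16, 16, 48)
print(hosts)  # 12288 -> comfortably "more than 10,000 servers"
```

With these assumed counts the aggregation layer is indeed what pushes the topology past the 10,000-server mark the context mentions.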

Similar publications

Article
Full-text available
Self-similar traffic distributions have been observed in a wide range of networking applications and models such as LANs, WANs, telnet, FTP, WWW, ISDN, SS7 and VBR traffic over ATM. Therefore, it has been suggested that many other theoretical protocols and systems need to be reevaluated under this different type of traffic before practical implemen...
Article
Full-text available
Future advances in networking and storage will make it feasible to build multimedia on-demand servers that provide services similar to those of a neighborhood videotape rental store over metropolitan area networks such as B-ISDN. Such multimedia servers can support real-time retrieval of multimedia objects by users onto their ISDN videophones and...
Article
Full-text available
Cloud computing can be defined as applications and services that run on a distributed network using virtualized resources and are accessed through internet protocols and networking. Cloud computing resources are virtual and limitless, and details of the physical systems on which the software runs are abstracted from the user. Cloud Computing is a sty...
Article
Full-text available
The signal-to-interference-and-noise ratio (SINR) is of key importance for the analysis and design of wireless networks. For addressing new requirements imposed on wireless communication, in particular high availability, a highly accurate modeling of the SINR is needed. We propose a stochastic model of the SINR distribution where shadow fading is c...
Article
Full-text available
Nowadays, information-centric networking has tremendous importance in accessing internet-based applications. The increasing rate of Internet traffic has encouraged the adoption of content-centric architectures to better serve content-provider needs and user demand for internet-based applications. These architectures have built-in network Caches and these p...

Citations

... One such example to support this is the Lexicon star rating system, where VMs are being used over the Cloud [18]. Reports [6,23] estimate data-center electricity consumption. ...
Article
Full-text available
In recent days, cloud computing data centres perform a considerable share of computing operations. They account for enormous energy consumption, which increases with computing capacity. Reducing operating costs and energy consumption is therefore beneficial both economically and environmentally. Previous works in data-centre energy optimization only involved scheduling jobs between servers based on thermal profiles or workload parameters. Dynamic power management, shutting down idle data-centre equipment, was also considered in many models to reduce energy consumption. Further work focused on the energy consumption of the communication fabric. The proposed work focuses on minimizing energy consumption at both the computing servers and the communicating devices. A parameter named config is defined to initialize the configuration of a system in its current state. This parameter assists the existing Dynamic Voltage Frequency Scheduling (DVFS) scheme in assigning tasks to a virtual machine so as to minimize energy consumption at the computing servers. Moreover, the work extends Data-centre Energy-efficient Network-aware Scheduling (DENS) with a peer-to-peer load balancer to reduce the energy consumption of networking components. The proposed system uses a scheduling algorithm for the cloud data centre that reduces energy consumption at both the server and the communication-fabric level. Based on the number of samples for the energy consumption, a 95% confidence level is achieved. The energy consumed by the proposed P2BED-C model is 1610.22 W·h, while the existing FCFS and Round Robin approaches consumed 1684.32 W·h and 1678.35 W·h, respectively. The results show considerable improvement in the power utilization of the server, resulting in more power savings.
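The energy figures quoted in the abstract above can be turned into relative savings with a quick check (the numbers are taken verbatim from the abstract; the helper name is ours):

```python
# Relative energy savings of the cited P2BED-C model versus FCFS and
# Round Robin, from the W·h figures quoted in the abstract.
p2bed_c = 1610.22
fcfs = 1684.32
round_robin = 1678.35

def savings_pct(baseline, proposed):
    """Percentage reduction of `proposed` relative to `baseline`."""
    return (baseline - proposed) / baseline * 100

vs_fcfs = savings_pct(fcfs, p2bed_c)
vs_rr = savings_pct(round_robin, p2bed_c)
print(f"vs FCFS: {vs_fcfs:.2f}%, vs Round Robin: {vs_rr:.2f}%")
```

Both baselines come out roughly 4% above P2BED-C, which is what the abstract's "considerable improvement" claim amounts to numerically.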
... Compared with other complex modulation schemes such as quadrature phase shift keying (QPSK) and quadrature amplitude modulation (QAM), PAM4 is particularly suitable for data center interfaces owing to its simplicity, low power consumption, and cost effectiveness [4][5][6][7][8]. Most recently, data centers have faced a practical problem related to wiring complexity and power consumption because of the increase in the scale of facilities [9][10]. As a promising solution to overcome this issue, wireless interconnection has been proposed [11]. ...
Article
In modern optical communications, pulse amplitude modulation 4 (PAM4) is employed to achieve higher data rates than the conventional non-return-to-zero format. Meanwhile, there is increasing interest in converting wired connections to wireless ones in data centers using high-speed millimeter-wave and terahertz (THz) links. Here, we introduce PAM4 modulation for THz wireless communications using a resonant tunneling diode (RTD) receiver. Compared with a Schottky-barrier diode receiver, the RTD receiver has higher sensitivity and stronger nonlinearity at low input power when operated with an amplified detection scheme. We achieved 24-Gbaud (48-Gbit/s) transmission in the 300-GHz band with quasi-real-time digital signal processing (DSP), which is, to the best of our knowledge, the fastest PAM4 wireless communication without an offline DSP.
... The core switches handle huge amounts of data traffic across the entire DC, therefore consuming more energy than aggregate switches, which consume more energy than edge switches [9]. Core links usually have higher bandwidth than aggregation links, which have higher bandwidth than access links [52]. Therefore, the operational cost of the core links that the cloud provider handles is higher than the cost of the aggregate links, which is higher than the cost of the access links. ...
Article
Full-text available
Task scheduling and data replication are highly coupled resource management techniques that are widely used by cloud providers to improve the overall system performance and ensure service level agreement (SLA) compliance while preserving their own economic profit. However, balancing the trade-off between system performance and provider profit is very challenging. In this paper, we propose a novel scheduling algorithm called Bottleneck and Cost Value Scheduling (BCVS) coupled with a novel dynamic data replication strategy called Correlation and Economic Model-based Replication (CEMR). The main goal is to improve data access effectiveness in order to meet service level objectives in terms of response time (SLO_RT) and minimum availability (SLO_MA), while preserving the provider profit. The BCVS algorithm focuses on reducing system bottleneck situations caused by data transfer, while the CEMR focuses on preventing future SLA violations and guaranteeing a minimum availability. An economic model is also proposed to estimate the cloud provider profit. Simulation results indicate that the proposed combination of scheduling and replication algorithms offers up to 30% higher monetary profit for the cloud provider compared to existing strategies. Moreover, it allows better performance.
... MWh of energy is consumed by the Internet data. On the other hand, to compute the energy consumed when using vehicles for the data transfer, we have considered an average server power consumption of 145 watts, following the research work of Pries et al. [31]. Using USB 3.2, the 1 TB data transfer between the vehicle and the infrastructure takes 7 minutes; hence, the energy consumed by a server over this 7-minute duration is approximately 17 Wh. ...
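The "around 17" figure in that context checks out when read as watt-hours; a quick sanity check, assuming as the context does a 145 W average server draw over a 7-minute transfer window:

```python
# Back-of-the-envelope check of the energy figure quoted in the context:
# a server drawing 145 W for the 7-minute USB 3.2 transfer of 1 TB.
server_power_w = 145    # average server power draw (watts), per Pries et al.
transfer_minutes = 7    # quoted 1 TB transfer duration

energy_wh = server_power_w * transfer_minutes / 60  # watt-hours
print(round(energy_wh, 2))  # ~16.92 Wh, i.e. "approximately 17 Wh"
```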
Article
Full-text available
Smart cities are envisioned to facilitate the well-being of society through efficient management of Internet of Things resources and the data produced by these resources. However, the enormous number of such devices would result in unprecedented growth in data, creating capacity issues related to acquisition, transfer from one location to another, storage and finally analysis. Traditional networks are not sufficient to support the transfer of this huge amount of data, proving to be costly both in terms of delay and energy consumption. Alternative means of data transfer are thus required to support this big data produced by smart cities. In this paper, we have proposed an efficient data-transfer framework based on volunteer vehicles, whereby we employ vehicles to carry data in the direction of the destination. The framework promotes self-belonging, social awareness and energy conservation through urban computing, encouraging participation by citizens. The proposed framework can also help the research community to benchmark their own route selection algorithms easily. Further, we performed an extensive evaluation of the proposed framework based on realistic models of vehicles, routes, data-spots, data chunks to be transmitted and the energy consumed. Our results show the efficacy of the proposed data transfer framework, as the energy consumed through vehicles is significantly less than that consumed by transmission over the Internet, thereby reducing the carbon footprint. The results also offer several insights into the optimal configuration of a vehicular data transfer network based on analysis of delay, energy consumption, and data-spot utilization.
... Table 5 shows the number of servers per server type per PoD, which is obtained by multiplying each entry of Table 4 by 12. ii) Communication Network Parameters: Table 6 presents the performance characteristics of the chosen switches for the communications network. The power consumption parameter values of the switches, PD_{ℓ,e} and PS_{ℓ,e}, are the same as given in [36][37][38]. We also assume that the dynamic power consumption of a NIC is PW_NIC = 0.6 µW. ...
Article
Full-text available
Cloud computing datacenters consume huge amounts of energy, which has high cost and large environmental impact. There has been a significant amount of research on dynamic power management, which shuts down unutilized equipment in a datacenter to reduce energy consumption. The main consumers of power in a datacenter are the servers, the communications network and the cooling system. Optimization of power in a datacenter is a difficult problem because of server resource constraints, network topology and bandwidth constraints, the cost of VM migration, and the heterogeneity of workloads and servers. The arrival of new jobs and the departure of completed jobs also create workload heterogeneity in time. As a result, most of the previous research has concentrated on partial optimization of power consumption, optimizing either server and/or network power consumption through placement of VMs. Temporal load-aware optimization, i.e., minimization of power consumption as a function of time, has been widely studied. When optimization also included migration, the solution was divided into two steps: in the first step, server and/or network power consumption is optimized, and in the second step, migration of VMs is handled, which is not an optimal solution. In this work, we develop joint optimization of the power consumption of servers, network communications and the cost of migration under workload and server heterogeneity, subject to resource and bandwidth constraints, through VM placement. The optimization results in an integer quadratic program (IQP) with linear/quadratic constraints on the number of VMs assigned to a job on a server. The IQP can only be solved for very small systems; however, we have been able to decompose it into master and pricing sub-problems, which may be solved through the column generation technique for larger systems. We have then extended the optimization to manage temporal heterogeneity of the workload.
It is assumed that the time axis is slotted; at the end of each slot, jobs probabilistically release, completely or partially, the VMs they are holding, and new jobs arrive according to a Poisson process. The system re-optimizes power consumption at the end of each slot, taking into account the cost of VM migration. In the re-optimization, VMs of unfinished jobs may be migrated while new jobs are assigned VMs. We have obtained numerical results for the optimal power consumption of the system as well as its power consumption under two heuristic VM assignment algorithms. The results show that optimization achieves significant power savings compared to the heuristic algorithms. We believe that our work advances the state of the art in dynamic power management of datacenters, and the results will be helpful to cloud service providers in achieving energy savings.
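The slotted release/arrival dynamics described above can be mocked up in a few lines (all parameter values are hypothetical; this only illustrates the workload model, not the IQP optimization itself):

```python
import math
import random

def poisson(lam, rng=random):
    # Knuth's algorithm for sampling a Poisson(lam) arrival count.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= limit:
            return k - 1

def step_slot(active_vms, p_release=0.3, lam=2.0, rng=random):
    """One slot: each held VM is released with probability p_release,
    then new jobs arrive as a Poisson(lam) count (hypothetical values)."""
    survivors = sum(1 for _ in range(active_vms) if rng.random() > p_release)
    return survivors + poisson(lam, rng)

# After each slot, the system described above would re-optimize VM
# placement (paying any migration cost) for the surviving + new VMs.
random.seed(1)
vms = 50
for _ in range(10):
    vms = step_slot(vms)
```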
... Thus, despite the considerable effort required to meet the desired requirements in the DCN, the DCN environment still faces many challenges, and various efforts are in motion to adequately address these limitations. Key challenges include overall network performance, resilience, agility, scalability, backward compatibility, automated address allocation and energy efficiency, among others [1], [6]. Several approaches have been proposed to improve the performance of data centers, wherein lies the major goal of this research write-up. ...
... Furthermore, the challenges of server-rack failures, link outages and the need for a more scalable architecture informed the design of a DCell architecture [10], as shown in Figure 1(a) (Fig. 1: Architecture of (a) DCell and (b) Fat-tree data center [6]). ...
... Architecture of Elastic-tree data center [6]. ...
Conference Paper
Full-text available
Data Center Networks (DCNs) face diverse challenges as a consequence of the rapid increase in the number of hosted servers and switches, which accommodate the everyday increase in hosted applications and data storage demands. Some of the challenges include low bisectional bandwidth, fault tolerance and the high cost of maintenance, among others. Several research articles have addressed some of these challenges, and most of the solutions proposed new DCN architectures to handle the various challenges that occur in contemporary DCNs. However, little research has been conducted to compare the performance of those new architectures in the light of the various challenges experienced. In this write-up, we highlight the various DCN architectures that are available and perform experiments on selected, highly rated architectures, based on literature reports, to determine their performance in relation to the peculiar challenges in DCNs. This report implemented the DCell, Fat-tree, Elastic-tree and Optical switch architectures in the OMNeT++ simulator and compared their performance using both URTD and ERTD. DCell, Fat-tree and Elastic-tree performed well when the number of servers was small but degraded with increasing servers. The Optical switch approach, however, showed better stability and consistency with respect to mean packet delay and throughput under increasing packet transmission.
... So far it has been an interesting research direction to reduce the server farm energy requirements and to optimize the power efficiency which may be viewed as a ratio of performance improvement to power consumption reduction. For energy-efficient management of data centers, some authors have dealt with several key interesting issues, for example, data center network architecture by Al-Fares et al. [1], Guo et al. [20] and Pries et al. [35]; green networks and cloud by Kliazovich et al. [25], Mazzucco et al. [32], Gruber and Keller [18], Goiri et al. [13,14] with solar energy, Li et al. [27] with wind energy, Wu et al. [41] and Zhang et al. [43]; networks of data centers by Greenberg et al. [16,17], Gunaratne et al. [19], Shang et al. [38], Kliazovich et al. [26] and Wang et al. [39]; resiliency of data centers by Heller et al. [23] and Baldoni et al. [3]; revenues, cost and performance by Elnozahy et al. [8], Chen et al. [6], Benson et al. [4], Dyachuk and Mazzucco [7], Mazzucco et al. [31] and Aroca et al. [2]; analyzing key factors by Greenawalt [15] for hard disks, Chase et al. [5] for hosting centers (i.e., the previous one of data center), Guo et al. [21] for base station sleeping control, Guo et al. [22] for edge cloud systems, Horvath and Skadron [24] for multi-tier web server clusters, Lim et al. [30] for multi-tier data centers, Rivoire et al. [34] for a full-system power model, Sharma et al. [37] for QoS, Wierman et al. [40] for processor sharing, and Xu and Li [42] for part execution. ...
Conference Paper
Full-text available
By analyzing energy-efficient management of data centers, this paper proposes and develops a class of interesting Group-Server Queues, and establishes two representative group-server queues through loss networks and impatient customers, respectively. These two group-server queues are given model descriptions and the necessary interpretation. A simple mathematical discussion is provided, and simulations are made to study the expected queue lengths, the expected sojourn times and the expected virtual service times. In addition, this paper shows that this class of group-server queues is often encountered in many other practical areas, including communication networks, manufacturing systems, transportation networks, financial networks and healthcare systems. Note that group-server queues can be used to design effective dynamic control mechanisms by regrouping and recombining the many servers of a large-scale service system, by means of, for example, bilateral threshold control and transfer of customers to buffer or server groups. This divides the large-scale service system into several adaptive and self-organizing subsystems through scheduling of batch customers and regrouping of service resources, which makes the middle layer of the service system more effectively managed and strengthened under a dynamic, real-time and even reward-optimal framework. On this basis, the performance of such a large-scale service system may be greatly improved by introducing and analyzing such group-server queues. Therefore, not only is the analysis of group-server queues regarded as an interesting new research direction, but there also exist many theoretical challenges, basic difficulties and open problems in the area of queueing networks.
... So far it has been an interesting research direction to reduce the server farm energy requirements and to optimize the power efficiency which may be viewed as a ratio of performance improvement to power consumption reduction. For energy-efficient management of data centers, some authors have dealt with several key interesting issues, for example, data center network architecture by Al-Fares et al. [1], Guo et al. [20] and Pries et al. [35]; green networks and cloud by Kliazovich et al. [25], Mazzucco et al. [32], Gruber and Keller [18], Goiri et al. [13,14] with solar energy, Li et al. [27] with wind energy, Wu et al. [41] and Zhang et al. [43]; networks of data centers by Greenberg et al. [16,17], Gunaratne et al. [19], Shang et al. [38], Kliazovich et al. [26] and Wang et al. [39]; resiliency of data centers by Heller et al. [23] and Baldoni et al. [3]; revenues, cost and performance by Elnozahy et al. [8], Chen et al. [6], Benson et al. [4], Dyachuk and Mazzucco [7], Mazzucco et al. [31] and Aroca et al. [2]; analyzing key factors by Greenawalt [15] for hard disks, Chase et al. [5] for hosting centers (i.e., the previous one of data center), Guo et al. [21] for base station sleeping control, Guo et al. [22] for edge cloud systems, Horvath and Skadron [24] for multi-tier web server clusters, Lim et al. [30] for multi-tier data centers, Rivoire et al. [34] for full-system power model, Sharma et al. [37] for QoS, Wierman et al. [40] for processor sharing, and Xu and Li [42] for part execution. ...
Article
Full-text available
From analyzing energy-efficient management of data centers, this paper proposes and develops a class of interesting Group-Server Queues, and establishes two representative group-server queues through loss networks and impatient customers, respectively. These two group-server queues are given model descriptions and the necessary interpretation. A simple mathematical discussion is provided, and simulations are made to study the expected queue lengths, the expected sojourn times and the expected virtual service times. In addition, this paper shows that this class of group-server queues is often encountered in other practical areas, including communication networks, manufacturing systems, transportation networks, financial networks and healthcare systems. Note that group-server queues can be used to design effective dynamic control mechanisms by regrouping and recombining the many servers of a large-scale service system, by means of, for example, bilateral threshold control and transfer of customers to buffer or server groups. This divides the large-scale service system into several adaptive and self-organizing subsystems through scheduling of batch customers and various service resources, which allows the middle layer of the service system to be more effectively managed and strengthened under a dynamic, real-time and even reward-optimal framework. On this basis, the performance of such a large-scale service system may be greatly improved by introducing and analyzing such group-server queues. Therefore, not only is the analysis of group-server queues regarded as an interesting new research direction, but there also exist many theoretical challenges, basic difficulties and open problems in the area of queueing networks.
... It should be noted that the power consumed by the switches at the edge tier (i.e., at the top of the rack, ToR), which interconnect the servers located in the same rack, is dominant because of the huge number of ToR switches: it can reach up to 90 percent of the total power consumed by all types of switches in DCNs [8]. Therefore, improving energy efficiency at the ToR level should be addressed first in order to decrease the overall DCN power consumption. ...
Article
Full-text available
The growing popularity of cloud and multimedia services is dramatically increasing the traffic volume that each data center needs to handle. This is driving the demand for highly scalable, flexible, and energy-efficient networks inside data centers, in particular for the edge tier, which requires a large number of interconnects and consumes the dominant part of the overall power. Optical fiber communication is widely recognized as the most energy- and cost-efficient technique to offer ultra-large capacity for telecommunication networks. It has also been considered as a promising transmission technology for future data center applications. Taking into account the characteristics of the traffic generated by the servers, such as locality, multicast, dynamicity, and burstiness, the emphasis of research on data center networks has to be put on architectures that leverage optical transport to the greatest possible extent. However, no feasible solution based on optical switching is available so far for handling the data center traffic at the edge tier. Therefore, apart from conventional optical switching, we investigate a completely different paradigm, passive optical interconnects, and aim to explore the possibility of optical interconnects at the top of the rack. In this article, we present three major types of passive optical interconnects and carry out a performance assessment with respect to the ability to host data center traffic, scalability, optical power budget, complexity of the required interface, cost, and energy consumption. Our results have verified that the investigated passive optical interconnects can achieve a significant reduction of power consumption and maintain cost at a similar level compared to their electronic counterparts. Furthermore, several research directions on passive optical interconnects have been pointed out for future green data centers.
Article
The continuously emerging cloud services bring unprecedented traffic growth to large-scale data centers (DCs) globally. In this paper, we introduce an optical DC network (DCN) architecture that organizes the servers into computing clusters. Since a high percentage of the total DCN traffic is served within a cluster, we assume two distinct networks: the intra-cluster passive optical network, which handles the traffic destined to any server of the same cluster, and the inter-cluster one, which routes the traffic to any other cluster. The interconnection of servers within the passive optical intra-cluster network results in low power consumption, while the Top-of-Cluster (ToC) switch requires fewer ports than a corresponding Top-of-Rack (ToR) switch to interconnect the same number of servers within the intra network, further reducing the total power consumption. In the data plane, the intra- and inter-cluster networks use separate wavelengths. In the control plane, the software-defined networking (SDN) paradigm is followed. In particular, in each cluster we adopt a cluster controller to coordinate the medium access control (MAC) in both the intra- and inter-cluster networks. Unlike other studies that assume electrical connectivity with the controller, we consider that it is performed in the optical domain to guarantee the effective synchronized operation of the control and data planes. In our work, we focus on the intra-cluster network. We propose a synchronous-transmission software-defined bandwidth allocation (SD-BA) MAC protocol to fairly coordinate the collision-free transmission of different quality-of-service traffic categories in the intra-cluster network, based on wavelength and time division multiplexing (W&TDM) techniques. The proposed DCN architecture along with the SD-BA MAC protocol provides scalability and efficiency.
Simulation results show that the proposed SD-BA MAC protocol achieves almost 100% bandwidth utilization, while at high loads it reaches 145% higher throughput, 573% lower delay and 233% fewer dropped packets compared to the relative DMAC network architecture (Zheng and Sun, Apr. 2020) [24]. The proposed intra-cluster DCN architecture is also compared with other currently leading architectures in terms of throughput and power consumption, and it is proven to be a performance- and energy-efficient DCN solution.