New perspectives on internet electricity use in 2030
Anders S.G. Andrae
Huawei Technologies Sweden AB, Kista, Sweden.
Received: 24 April 2020; Accepted: 18 June 2020; Published: 30 June 2020.
Abstract: Several existing Information and Communication Technology (ICT) power footprint investigations share
the following main problems: system boundaries that are too limited geographically and temporally, overestimation
of the power saving potential in the next decade, the assumption that historical power use can predict future
global power use in the next decade despite unprecedented data traffic growth, and the assumptions that Moore's
law can keep governing digital circuitry "forever" and that no problems with extra cooling power will occur for
several decades. The highly variable outlooks for future power consumption depend on starting values, disruptions,
regional differences and assumed rates of electricity intensity reduction and data traffic increase. A hugely
optimistic scenario is presented which assumes a 20% annual improvement of the J/bit in data centers and networks
until 2030. However, the electric power consumption of the present ICT scope will be significant unless great
efforts are put into power saving features enabling such improvements of J/bit. Despite evident risks, it seems
that planned power saving measures and innovation will be able to keep the electricity consumption of ICT and the
world under some kind of control. The major conclusion, based on several simulations in the present study, is that
future consumer ICT infrastructure cannot slow its overall electricity use until 2030 and will use more electricity
than today. Data traffic may not be the best proxy metric for estimating computing electricity; operations and
J/operation seem more promising for forecasting and scaling of bottom-up models.
Keywords: Communication, computing, data center, data traffic, devices, electricity use, electricity intensity,
5G, forecast, information, instructions, networks, operations, video streaming.
1. Introduction
In recent years some controversy has emerged concerning the potential electric power use of Information
and Communication Technology (ICT) going forward in the present decade. The electricity
consumption is important as there are more or less sustainable ways of producing electricity. Most schools of
thought agree that with the current moderate data traffic the power consumption of ICT has - so far - been kept
more or less under control. There are conflicting messages regarding the path to a power consumption under
control. Depending on scope, in 2020 ICT stands for up to 7% of the total global electricity use. Researchers
have measured in different ways, modelled in different ways, and also used different kinds of statistics.
The rise of ICT electric power use is far from a "phantom" problem. A recent review [1] confirmed that ICT
systems - despite a large number of energy saving technologies at hand - are at a critical point regarding current
and future energy consumption of telecommunication networks, data centers and user-related devices. Most
evidence speaks against flattening or reducing ICT power. For example, Weldon estimated that the electricity
use of all connected devices - including all consumer devices with network connections - would rise from
200 TWh in 2011 to 1100 TWh in 2019 and 1400 TWh in 2025 [2]. Hintemann argued credibly against too
pessimistic (e.g. expected and worst case in [3]) and optimistic scenarios for global data center power by
listing indisputable global trends such as cryptocurrency mining, relentless speed of data center construction
and cloud to hybrid cloud [4]. Moreover, for 2018 Hintemann estimated as much as 400 TWh for global data
center electricity use [4]. It has also been argued that the efficiency gains will continue unhindered between
2022 and 2030 thanks to Artificial Intelligence (AI) [5]. Nevertheless, on the computing level Khokhriakov et
al. found that multicore processor computing is not energy proportional as the optimization for performance
alone results in increase in dynamic energy consumption by up to 89% and optimization for dynamic energy
Eng. Appl. Sci. Lett. 2020, 3(2), 19-31; doi:10.30538/psrp-easl2020.0038
alone results in performance degradation by up to 49% [6]. Actual electricity measurements from the Leibniz
Supercomputing Centre in Germany showed that between 2000 and 2018 - despite higher power efficiency -
the increase in system density and overall performance led to an increase in electricity consumption [7]. The
electricity generated by renewable energy is increasing. In 2015 the share of hydro, wind, solar and biomass
power was 25% on average in China [8], which is of importance as the growth of ICT construction will be of
huge significance there compared to more developed nations.
Truthfully it is challenging to make accurate predictions of global ICT electric power use as it is
problematic to account for unknown unknowns. Most researchers agree that the data traffic - no matter how it
is defined - will increase exponentially for several years as it has been doing the last decade. The disagreement
concerns how fast and how large the ICT related power use will become in around 2030. Probably there is a
parallel to linear or exponential thinking of how fast some entity will increase. Further discussions concern
whether the anticipated extra electricity use by ICT really is a concern if the additional power can drive the
corresponding share of sustainable electric power in specific grids used by the ICT infrastructure. The cost
of electricity has to date been rather small for ICT Service providers compared to other expenditures [9], but
this could change if the electricity prices and electricity use increase. There is not much expectation that future
consumer ICT infrastructure can actually slow its overall electricity use until 2030. With the current knowledge,
there are more circumstances pointing towards a rising (1-2 PWh) power consumption of ICT than towards a slowing
or flattening one.
2030 is rather far away, and unprecedented changes in economic activity are hard to predict, as the first
quarters of 2020 have shown. Here it is assumed that the trend of more ICT and data will not be affected
dramatically until 2030 as a result of the slow-down Q1-2 2020. Therefore trends are more important than
"exact" use patterns and numbers, as we do not exactly know how and which devices will be used in the
future. Blockchain, artificial intelligence (AI), virtual reality (VR), and augmented reality (AR) might be
the biggest trends for ICT power use. Anyway, a proper power analysis of the ICT Sector should include
production of hardware including embedded chips, use of data centers, use of networks, and use of consumer
communication devices.
Production today accounts for around 20% of ICT's footprint, but there is room for improving the precision. The
digital revolution may possibly in itself help optimize the power use of production. However the total emission
of ICT production - and thereby the power use - may well be heavily underestimated [10]. Use stage power of
data centers is now around 15%, but is expected to become one of the most important drivers for ICT electricity
use. Use stage power of Networks (wireless and core) is now at around 15% of ICT, but its share is expected to
increase. There is however considerable uncertainty about 5G’s power use depending on point of introduction,
learning curve and regional differences.
Use stage power of consumer devices (including Wi-Fi modems) is now at some 50% of ICT total power
use but is ideally expected to decrease thanks to advanced power saving features. The current downward trend is
expected to continue if no "dramatic processing power saving problems related to Moore's law" happen around
2022. The speed of electricity intensity reduction vs. the speed of data traffic increase is the determinant of ICT
power. As hypothesized in Section 5, other more fundamental determinants are possible.
1.1. Objectives
The objective of this prediction study is to estimate the global electric power use in 2030 associated
with computing and communication - the Information and Communication Technology (ICT) infrastructure -
consisting of the use stage of end-user consumer devices, network infrastructure and data centers as well as
the production of hardware for all. The specific purpose is to update previous predictions [3] and understand
if the power consumption is still likely to develop as previously understood.
1.2. Hypothesis
The hypothesis is that the electric power consumption of the ICT Sector will increase along something
in between the best and expected scenario as outlined by Andrae and Edler in 2015 [3] when adding new
assumptions of data traffic and electricity intensity improvements.
Table 1. Differences between [3] expected case and the present prediction for data centers.
        Global Data Center IP Traffic (ZettaBytes/year)    Electricity use (TWh)
Year    [3]      Present                                   [3]      Present
2020    13       19                                        660      299
2021    16       25                                        731      311
2022    20       33                                        854      328
2023    25       43                                        998      320
2024    30       56                                        1166     377
2025    37       72                                        1362     412
2026    46       94                                        1592     471
2027    56       122                                       1860     551
2028    69       159                                       2173     652
2029    85       206                                       2539     788
2030    105      268                                       2967
2. Materials and methods
The approach follows the one outlined in [3], however with several new assumptions for parameters such
as electricity intensity improvements and data traffic growth. The expected case scenario in [3] constitutes the
baseline for the present research; however, the best case scenario is also shown occasionally for entities of the
ICT Sector. The baseline year is 2020 and only one trend curve - for ICT total - will be proposed toward 2030.
All assumptions made are available in the Supplementary Information.
2.1. Alternate assumptions for data centers use stage
Compared to the expected case scenario in [3], the following assumptions have been made:
The annual electricity intensity improvement taking place from 2010 to 2022 has been increased to 20%
instead of 10%. This implies a lower starting point in 2020 than in [3].
A much higher amount of data will be processed in the data centers (see Table 1).
Data traffic is a crude proxy for power use, but the numbers are reported frequently [10]. Operations/s [11]
may be a better proxy, as will be discussed in Section 5.
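The effect of the changed intensity assumption on the 2020 starting point can be illustrated with simple compounding (a sketch; it assumes the improvement compounds uniformly each year from 2010 to 2020):

```python
# Remaining electricity intensity after compounding an annual improvement.
def accumulated_factor(annual_improvement, years):
    return (1.0 - annual_improvement) ** years

f10 = accumulated_factor(0.10, 10)  # 10%/year as in [3]
f20 = accumulated_factor(0.20, 10)  # 20%/year as assumed here
print(f"10%/yr leaves {f10:.3f} of the 2010 intensity; 20%/yr leaves {f20:.3f}")
print(f"The 2020 starting point is ~{f10 / f20:.1f}x lower with 20%/yr")
```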
3. Alternate assumptions for Networks
3.1. Wireless access
Compared to the expected case scenario in [3], the following assumptions have been made:
The factor of historical improvement of the TWh/EB between 2010 and 2020, as assumed in [3], has
been corrected.
Andrae and Edler [3] arrived at an accumulated improvement factor of 0.083 in 2030 for 5G by assuming
22% improvement between 2010 and 2022 and 5% improvement from 2022 to 2030. However, it is wrong to
assume an improvement for 5G from 2010 to 2020 as 5G did not (more or less) exist then. Due to gradually
introduced Moore’s law problems, the accumulated improvement factor is assumed to be 0.229 in 2030. On
top of this, a gradually waning Moore’s law is introduced for all mobile technology Gs from 2022 so that the
improvement factors run from 19% in 2022 to 5% in 2030, instead of 5% from 2022 to 2030. This leads to more
than 4 times more TWh from 5G in the latest understanding mentioned in [12] than in [3]. Tables 2 and 3 show
some of the new assumptions.
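A "waning Moore's law" schedule of this kind can be sketched as follows; linear interpolation between the 19% and 5% endpoints is an assumption made here, so the result need not reproduce the paper's accumulated factors exactly:

```python
# Accumulated improvement factor under a gradually waning Moore's law:
# the annual improvement falls from 19% in 2022 to 5% in 2030.
# Linear interpolation between the endpoints is an assumption made here.
def waning_factor(start_rate, end_rate, years):
    rates = [start_rate + (end_rate - start_rate) * i / (years - 1)
             for i in range(years)]
    factor = 1.0
    for r in rates:
        factor *= (1.0 - r)
    return factor

f = waning_factor(0.19, 0.05, 9)  # 2022..2030 inclusive
print(f"Accumulated 2022-2030 improvement factor: {f:.3f}")
```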
According to [13], in 2020 4G networks deliver 20 kbit/J while [3] predicted (better) 40 kbit/J in 2020. For
5G [13] predicted 10 Mbit/J while [3] predicted (worse) 0.8-2.8 Mbit/J for 2030. The starting point in 2020 for
5G in [3] is 0.05 Mbit/J. As shown in Table 2, the energy efficiency prediction for 5G has decreased - compared
to [3] - to 0.18-0.22 Mbit/J [12].
Table 2. Differences between [3] and the present prediction for 5G mobile networks.
                                     2021  2022  2023  2024  2025  2026  2027   2028   2029   2030
[3] best case         Traffic (EB)     41   164   324   677  1248  2656  4316   6685   9928  14403
                      TWh               0     0     0     1     1     3     4      7      9     13
[3] expected case     Traffic (EB)     44   189   399   892  1762  3881  6996  11609  18473  28714
                      TWh               0     1     2     4     7    15    26     41     61     91
Present best case     Traffic (EB)     41   164   324   677  1248  2565  4316   6685   9928  14403
                      TWh               0     5     8    15    23    41    62     87    120    166
Present expected case Traffic (EB)     44   189   399   892  1762  3881  6996  11609  18473  28714
                      TWh               2     7    12    23    38    74   120    181    268    396
Table 3. Differences between [3] expected case and the present prediction for mobile networks.
Electricity use (TWh)
Year    [3]     Present
2020    98      98
2021    92      94
2022    100     92
2023    114     95
2024    127     102
2025    144     116
2026    145     142
2027    149     181
2028    157     237
2029    172     320
2030    196     446
3.2. Fixed access wired
One of the major weaknesses of the predictions done in [3] is likely the overestimation of fixed wired
(core) networks. To improve this, a faster improvement of the TWh/EB is assumed between 2010 and 2022,
20% per year is used instead of 10%. However, a gradually waning Moore’s law is introduced from 2022 so
that the improvement factors run from 19% in 2022 to 5% in 2030, instead of 5% from 2022 to 2030. Overall
however, this results in a dramatically lower electricity use of these networks in 2030 compared to [3] (Table 4).
Table 4. Differences between [3] expected case and the present prediction for fixed access wired networks.
Electricity use (TWh)
Year    [3]     Present best case   Present expected case
2020    439     134                 171
2021    494     129                 171
2022    588     126                 174
2023    703     125                 179
2024    843     126                 188
2025    1014    129                 200
2026    1222    138                 223
2027    1477    152                 255
2028    1789    169                 296
2029    2171    192                 352
2030    2641    224                 428
3.3. Alternate assumptions for Devices power use including Wi-Fi modems
From 2020 the improvement of kWh/unit/year for devices is assumed to be 3%, as in [10]. The difference is
that Wi-Fi is added to the consumer devices section. Wi-Fi is overestimated in [3], as the Wi-Fi modems' electric
power use is actually rather independent of handled traffic. The action taken is to increase the electricity
intensity improvement from 10% to 20% per year from 2010 to 2022 for the expected case scenario. The
resulting electricity use is shown in Table 5. As a sensitivity check, 2 billion homes globally - each with one
3 Watt Wi-Fi modem - would use on average around 52 TWh per year. This shows that the new assumption
is more reasonable than the previous one [3]. Table 5 shows that adding Wi-Fi (moving Wi-Fi from the Networks)
to consumer devices suggests an increasing TWh trend in [3] and a flattening one here.
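The Wi-Fi modem sanity check above is easy to reproduce:

```python
# Sanity check from the text: 2 billion homes, each with one Wi-Fi modem
# drawing a constant 3 W around the clock.
homes = 2e9
watts_per_modem = 3.0
hours_per_year = 8760

twh_per_year = homes * watts_per_modem * hours_per_year / 1e12  # Wh -> TWh
print(f"Global Wi-Fi modem electricity use: ~{twh_per_year:.0f} TWh/year")
```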
Table 5. Differences between [3] expected case and the present prediction for devices use stage electric power.
Electricity use (TWh)
Year    [3] Consumer devices    Present consumer devices    Wi-Fi modems
        + Wi-Fi modems          + Wi-Fi modems
2020    1132                    1039                        72
2021    1153                    1051                        75
2022    1171                    1054                        79
2023    1186                    1049                        84
2024    1200                    1037                        91
2025    1217                    1017                        99
2026    1250                    1008                        113
2027    1298                    1008                        133
2028    1365                    1017                        157
2029    1451                    1038                        190
2030    1559                    1073                        234
This prediction is to be considered highly uncertain, as the devices will of course also be affected by the
power issues related to the slow-down of Moore's law. This slow-down is included for Wi-Fi devices. Anyway,
the order of magnitude for the TWh is most likely correct. Still, a reduction of consumer devices power use
seems quite optimistic. It can happen though, thanks to a firm focus on power saving and updated energy
labeling requirements for end-user devices.
In the future the use stage electric power of USB dongles, smart home devices, wearables, AR & VR
devices, and Wi-Fi modems should be added systematically. Moreover, due to a strong push for longer
lifetimes for consumer devices, lifetimes may increase compared to [3].
3.4. Alternate assumptions for Production of ICT hardware
Andrae and Edler [3] overestimated the electric power used to produce ICT goods used in Networks
and Data Centers. This is improved in the present prediction by setting the so-called life cycle ratio for
Networks and Data Center production to 0.02 instead of 0.15. This assumption brings down the production
TWh significantly. Assuming that 2 million base stations at roughly 3 MWh/unit [14] - used in wireless access
networks - and 60 million servers at 1 MWh/unit [15] - used in data centers - will be produced in 2030, the
electric power needed would be around 66 TWh. The present study predicts 38 TWh in 2030 - of 289 TWh in
total - for all network and data center equipment. This suggests that a 0.02 life cycle ratio for production is
reasonable for traffic dependent calculations of data centers and networks. Table 6 shows that the production
estimates are much lower in the present study than in [3].
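The 66 TWh order-of-magnitude check above can be reproduced directly:

```python
# Order-of-magnitude check for 2030 production electricity (Section 3.4):
# ~2 million base stations at ~3 MWh each plus ~60 million servers at ~1 MWh each.
base_stations, mwh_per_base_station = 2e6, 3.0
servers, mwh_per_server = 60e6, 1.0

total_twh = (base_stations * mwh_per_base_station
             + servers * mwh_per_server) / 1e6  # MWh -> TWh
print(f"Production electricity for network and data center hardware: ~{total_twh:.0f} TWh")
```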
Table 6. Differences between [3] expected case and the present prediction for production of ICT hardware.
Electricity use (TWh)
Year    [3]     Present
2020 549 381
2021 540 358
2022 547 339
2023 562 324
2024 584 311
2025 614 302
2026 650 295
2027 696 291
2028 752 290
2029 821 292
2030 903 298
With the current twists and turns in the global economy it is almost impossible to predict parameters for
production of ICT. Still, the latest understanding [10] is that production of ICT is underestimated.
4. Results
The stability of the Andrae and Edler [3] trend analysis - of how much electric power the ICT Sector might
use in 2030 - is remarkable considering the number of changed (improved) assumptions made in the present
update and elsewhere [10-12]. In summary, in 2030 all entities are predicted to use much less electricity than in
the expected scenario in [10], except wireless access networks. The total TWh - for the currently studied Internet
scope - is very close to the best case scenario in [3].
4.1. Data centers power use
Figure 1 shows some trends for data centers 2020 to 2030.
Figure 1. Trends for data centers 2020 to 2030.
Although the electricity intensity improvements are assumed to be higher than in [3], the consequences of the
data traffic increase compensate, and the electricity use might still rise. The 366 TWh in 2030 - for the best case -
is due to a very moderate data traffic growth.
4.2. Networks power use
Figures 2 and 3 show some trends for Networks 2020 to 2030.
Figure 2. Trends for wireless access networks 2020 to 2030.
Figure 3. Trends for fixed access wired networks 2020 to 2030.
4.3. Devices power use
Figure 4 shows some trends for end-user consumer ICT goods use stage 2020 to 2030.
Figure 4. Trends for consumer ICT goods use stage 2020 to 2030.
5. Summary
Figures 5 and 6 show some trends for the synthesis per contributing category in 2020 and 2030, separately.
Andrae and Edler [3] is compared to the present update.
Figure 5. Trends for ICT electric power overall 2020.
Figure 6. Trends for ICT electric power overall 2030.
Generally the values for 2020 are lower for most entities, and for 2030 too, except for wireless access networks,
which will use more electricity. In total, the electric power predictions for the ICT Sector have been reduced
by 31% for 2020 and 61% for 2030 in the present study compared to the expected case scenario in [3]. Potential further
reductions are discussed in Section 8.1.
6. Discussion
The ideal framework for ICT electric power footprint would be based on annual shipments of each ICT
good, each lifetime and each measured annual and lifetime electric power consumption. However, it may
not be practicable to make that journey yet. Connectivity and smart metering are probably the road ahead for
collecting power data. Still, the electricity predictions need to be checked against bottom-up and national
top-down assessments too. It is crucial to find out how such national assessments are done and to which
degree ICT electric power consumption estimates are included.
The implications for researchers regarding the path to sustainable computing practices are at least four:
1. Produce research results which help reduce the electricity use and environmental impact of computing
2. Sourcing of the power
3. Power saving strategies
4. Recycling strategies for the used computers, screens, etc.
Knowing the high degree of variability, here follow some suggestions for future research approaches to this
topic. Nissen et al. [16] suggested that process flow modelling would be the best way to improve the precision
of wireless access network energy use modeling. As for future forecasting of ICT's electricity use, Artificial
Neural Networks seem a very useful modeling tool [17].
7. Bottom-up considerations for research
The electricity cost of individual computing in particular might be difficult to isolate. Still, there are
ways in which green computing can be implemented. For example, somehow mimicking the green software
coding idea "Proof of Stake" - by which the cryptocurrency Ethereum plans to slash its power use [18] - seems
like a good idea. Nevertheless, it does not seem useful for individuals to calculate their personal ICT electricity
consumption, although some measures probably can be taken. One easy measure is to turn off the video image in
communication when voice+video is possible but visual communication is not really required. Still, in Sections
8.4 and 8.5 the overall individual and global electricity cost of video streaming is estimated.
8. Testing of the order of magnitude of worldwide ICT and data center electric power use
8.1. What if the 20% per year electricity intensity improvements continue after 2022?
Figure 7 shows the summary of the present predictions. At the moment, Wi-Fi based - or fixed optic fibre
broadband based - computing is preferable to wireless 4G based computing from an overall electricity
consumption point of view.
Figure 7. Trends for ICT electric power use 2020 to 2030.
The "extreme positive" scenario assumes that no slowdown of electricity intensity improvements happens
after 2022 - i.e., no gradually waning annual improvements from 2022 to 2030 as in the present baseline (expected
case scenario) - and that 20% improvements still happen in Networks and Data Centers until 2030. In that case
ICT power will more or less stay flat while the total data traffic grows 14 times between 2020 and 2030. The
electricity use of networks and data centers will be 54% less in such an "extreme positive" scenario than in the
present study.
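The arithmetic behind the "more or less flat" claim can be sketched as follows; the 14x traffic growth is taken from the text, and under these simple assumptions power still grows by roughly half rather than staying exactly flat:

```python
# "Extreme positive" scenario sketch: 20% annual J/bit improvement in networks
# and data centers until 2030, against ~14x total data traffic growth 2020-2030.
traffic_growth = 14.0
intensity_factor = 0.80 ** 10  # 20% improvement per year, 2020-2030

net_power_factor = traffic_growth * intensity_factor
print(f"Relative network/data center power, 2030 vs 2020: {net_power_factor:.2f}x")
```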
8.2. Blockchain and cryptocurrencies
Blockchains are built on databases that are not consolidated in one server but spread over a global network
of computers. The information is permanently registered, in sequential order, and in all parts of the computer
network. The computing power allocated to the specific blockchain application Bitcoin is likely very high [19].
The reason is that with bitcoin every new piece of information added to the chain requires that someone
uses computer power to solve an advanced cryptographic problem via Proof of Work. The sooner this
cryptographic problem gets resolved, the greater the likelihood that the person who is in charge of the mining
of bitcoin cryptocurrencies will be paid in bitcoin cryptocurrency. The demand for bitcoins - as long as it
lasts - will therefore increase the demand for electric power. Mora et al., [19] pointed out that any further
development of cryptocurrencies should critically aim to reduce electricity demand. Reducing the power use
of cryptocurrencies might have a solution in the form of Proof of Stake instead of Proof of Work [18].
Table 7. 2020 and 2030 key electricity intensity indicators relevant for video streaming.
Entity used in video          2020 (kWh/GB)           2030 (kWh/GB)           Share of total global access (internet) traffic, 2020 / 2030
Wireless access networks      98 TWh/549 ExaByte      446/30899               15% / 85%
Fixed access wired networks                           428/25901               85% / 15%
Data center                   0.015                   974/274599
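The kWh/GB entries in Table 7 follow from the TWh totals and traffic volumes, since 1 TWh per ExaByte equals 1 kWh per GigaByte:

```python
# Deriving the Table 7 electricity intensities: TWh/EB = 1e9 kWh / 1e9 GB = kWh/GB.
def kwh_per_gb(twh, exabytes):
    return twh / exabytes

wireless_2020 = kwh_per_gb(98, 549)
wireless_2030 = kwh_per_gb(446, 30899)
datacenter_2030 = kwh_per_gb(974, 274599)
print(f"Wireless access 2020: {wireless_2020:.3f} kWh/GB")
print(f"Wireless access 2030: {wireless_2030:.4f} kWh/GB")
print(f"Data center 2030: {datacenter_2030:.4f} kWh/GB")
```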
8.3. Renewable electric power and ICT
There are ongoing discussions about the possibility that the ICT infrastructure could be run entirely on
renewable power. One of many challenges is that the renewable power should be located in the vicinity of
the ICT infrastructure.
Using renewable energy to power data centers and networks can reduce the environmental impacts.
However, the uneven geospatial distribution of renewable energy resources and of regions with high ICT use
might create uncertainty of supply [20,21]. The relation between renewable energy resources and the associated
environmental impacts - of data centers and networks driven by renewable energy at a global scale - should be
investigated thoroughly [10].
Overall, the present predictions suggest a trajectory in between the best and expected case scenarios in
[3]: 1990 TWh in 2020 and 3200 TWh in 2030 (Figure 6). The ICT Sector has, and will continue to have, a
considerable share of the global electricity footprint.
8.4. Bottom-up calculation of the electricity use associated with video streaming
It is relevant to estimate how much data is generated - and associated electric power used - by normal
behavior like video streaming several hours every day. For the present estimations the following key indicators
are used (Table 7). The electricity intensities are set to decrease massively, especially for wireless access
networks. However, those networks are perhaps used much less extensively for video streaming in 2020 than
optic fixed access. Table 7 suggests that the electric power use of video streaming is strongly correlated to the
way in which the video streaming is obtained. Streaming via a 4G router directly or with Wi-Fi is less efficient
at the moment than optical broadband via a mobile phone/tablet using Wi-Fi.
Typically, standard definition video uses 1 GB per hour and high definition (HD) video uses 3 GB per hour.
Other video formats with higher resolution (e.g. 8K 3D) might use even higher amounts. 20 GB per hour is
assumed for the most typical video technology used in 2030.
With this information it is possible to predict the current and future data generation and electricity
consumption associated with video streaming and relate it to the total for ICT.
8.5. Data amounts and TWh from global video streaming
For 2020 it is assumed that one person watches HD video streaming 2 hours/day on weekdays and 4
hours/day on weekends, i.e., 18 hours per week and 936 hours per year.
To provide these hours, 2808 GB per person is generated in 2020. If all entities in Table 7 are used to deliver
the stream, 285 kWh per year per person is required. Assuming that 2 billion persons have this behavior, 570
TWh is needed for 5230 ExaBytes. This suggests that video streaming is a noticeable driver for ICT electric
power use in 2020. For 2030 it is likewise assumed that one person watches HD video streaming 2 hours/day
on weekdays and 4 hours/day on weekends, i.e., 18 hours per week and 936 hours per year.
However, due to the higher GB/hour, 18720 GB per person is generated in 2030. If all entities in Table 7
are used to deliver the stream, 352 kWh per year per person is needed. Assuming that 7 billion persons will
have this behavior, 2464 TWh is required for 122040 ExaBytes. These simple hypotheses show that an increasing
electricity use of the ICT Sector is unquestionably in the cards.
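The streaming arithmetic of Sections 8.4 and 8.5 can be reproduced; the per-person 285 and 352 kWh figures are taken from the text as given:

```python
# Reproducing the video streaming arithmetic.
hours_per_year = (2 * 5 + 4 * 2) * 52   # 18 h/week -> 936 h/year
gb_2020 = hours_per_year * 3            # HD video at 3 GB/hour
gb_2030 = hours_per_year * 20           # assumed 20 GB/hour in 2030

twh_2020 = 2e9 * 285 / 1e9              # 2 billion people at 285 kWh each
twh_2030 = 7e9 * 352 / 1e9              # 7 billion people at 352 kWh each
print(hours_per_year, gb_2020, gb_2030) # 936 2808 18720
print(twh_2020, twh_2030)               # 570.0 2464.0
```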
9. Conclusions
It is very difficult to fathom circumstances under which the electric power use of communication and
computing (the ICT infrastructure) would not rise considerably until 2030. The total TWh will develop along an
average of the best and expected scenarios in [3], with a strong leaning towards the best case.
10. Next steps
New advances in large-scale fiber-optic communication systems [22-25] should be translated to J/bit
and used for predictions of the fixed core network. Advances in heat recovery and lowering temperature
of microchips [26] may have big implications for the global ICT power use. The reason is that the energy
consumption per transistor is strongly correlated to the temperature at which the transistor is working [11].
The overall effect of solving the high computation, memory and power dilemma of the Internet of Things and
edge devices is not well understood [27]. Moreover, it is plausible that the ICT infrastructure can help save
electric power in society as a whole; Ono et al. suggested savings of 1300 TWh in 2030 [28]. These assumptions
should be further explored. These areas are also next steps:
Find out best way to define an operation or instruction in computing
Forecast the number of different operations and instructions
Measure different J/operation or J/instruction.
Andrae [9] put forward these hypotheses for 2015: (i) the "traffic" (instructions/s) was around 1
Zettainstructions/s in total and (ii) the energy efficiency was overall around 7 Gigainstructions/J.
Falsifying in detail the above hypotheses would enable reliable forecasting of the power consumption
of computing involving new technologies. Equation (1) may be the way forward if the required data could be
collected:

ICT_t = 8760 × Σ_j Σ_i (Ins_ij / EE_ij)    (1)

ICT_t = ICT Sector total global average electricity use in Wh related to processing and computing.
j = computing type: special purpose, general purpose, machine learning, dark calculations etc.
i = ICT good type.
Ins_ij = computing instructions per second.
EE_ij = energy efficiency in instructions per Joule.
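Plugging the 2015 hypotheses from [9] (about 1 Zettainstruction/s globally at about 7 Gigainstructions/J) into an energy balance of this form gives an order-of-magnitude check; collapsing the sums over i and j into single global totals is a simplification made here:

```python
# Order-of-magnitude check of the 2015 hypotheses in [9]:
# ~1 Zettainstruction/s globally, at ~7 Gigainstructions/J overall.
ins_per_second = 1e21          # instructions/s
instructions_per_joule = 7e9   # instructions/J

power_watts = ins_per_second / instructions_per_joule   # J/s = W
twh_per_year = power_watts * 8760 / 1e12                # W * h/year -> TWh
print(f"Implied global computing power draw: {power_watts / 1e9:.0f} GW")
print(f"Implied annual electricity: ~{twh_per_year:.0f} TWh")
```

The result lands in the same 1-2 PWh range discussed elsewhere in this study, which is what makes the instruction-based framing promising.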
Has data traffic reached the end of the line as proxy for ICT power forecasting? Machine learning
training done in a data center may send only a few bits of data to the data center, presumably creating a
relatively small amount of IP traffic. That is, on one hand the training process may imply many calculations
without necessarily generating a lot of IP traffic. On the other hand the training may use more energy due to
the required operations and J/operation [12]. Deep learning may use enormous amounts of electricity [29],
although it is unclear how many Joules per instruction. Research [30] has since shown that this electricity use
may be reduced 1000 times. These frameworks and speculations need more analysis and need to be put into a
global perspective and into Equation (1). Another angle to be analyzed further is that as web page sizes increase,
the metrics Page Load Time and Page Render Time have a larger impact on energy usage on the client side [31].
Conflicts of Interest: The author declares no conflict of interest.
References
[1] Lorincz, J., Capone, A., & Wu, J. (2019). Greener, energy-efficient and sustainable networks: state-of-the-art and
new trends. Sensors, 19(22), 4864.
[2] Weldon, M. K. (2016). The future X network: a Bell Labs perspective. CRC Press.
[3] Andrae, A. S. G., & Edler, T. (2015). On global electricity usage of communication technology: trends to 2030.
Challenges, 6, 117-157.
Eng. Appl. Sci. Lett. 2020,3(2), 19-31 30
[4] Hintemann, R. (2020). Efficiency gains are not enough: Data center energy consumption continues to rise
significantly. [cited 18 June 2020]: Available from:
[5] ING Economics Department. (2019). Further efficiency gains vital to limit electricity use of data. [cited 12 June 2020]:
Available from:
[6] Khokhriakov, S., Manumachu, R. R., & Lastovetsky, A. (2020). Multicore processor computing is not energy
proportional: An opportunity for bi-objective optimization for energy and performance. Applied Energy, 268, 114957.
[7] Shoukourian, H., & Kranzlmüller, D. (2020). Forecasting power-efficiency related key performance indicators for
modern data centers using LSTMs. Future Generation Computer Systems, 112, 362-382.
[8] Liang, Y., Yu, B., & Wang, L. (2019). Costs and benefits of renewable energy development in China’s power industry.
Renewable Energy, 131, 700-712.
[9] Andrae, A. S. G., Hu, L., Liu, L., Spear, J., & Rubel, K. (2017). Delivering tangible carbon emission and cost reduction
through the ICT supply chain. International Journal of Green Technology, 3, 1-10.
[10] Andrae, A. S. G. (2020). Hypotheses for Primary Energy Use, Electricity Use and CO2 Emissions of Global Computing
and Its Shares of the Total Between 2020 and 2030. WSEAS Transactions on Power Systems, 15, 50-59.
[11] Andrae, A. S. G. (2019). Prediction Studies of Electricity Use of Global Computing in 2030. International Journal of Science
and Engineering Investigations, 8, 27-33.
[12] Andrae, A. S. (2019). Comparison of several simplistic high-level approaches for estimating the global energy and
electricity use of ICT networks and data centers. International Journal of Green Technology, 5, 51.
[13] Fulpagare, Y., Bhargav, A., & Joshi, Y. (2020). Predictive model development and validation for raised floor plenum
data center. Journal of Electronic Packaging, 142(2).
[14] Goldey, C. L., Kuester, E. U., Mummert, R., Okrasinski, T. A., Olson, D., & Schaeffer, W. J. (2010, May). Lifecycle
assessment of the environmental benefits of remanufactured telecommunications product within a "green" supply
chain. In Proceedings of the 2010 IEEE International Symposium on Sustainable Systems and Technology (pp. 1-6). IEEE.
[15] Bashroush, R. (2018). A comprehensive reasoning framework for hardware refresh in data centers. IEEE Transactions
on Sustainable Computing, 3(4), 209-220.
[16] Nissen, N.F., Stobbe, L., Richter, N., Zedel, H., & Lang, K.D. (2019). Between the User and the Cloud: Assessing the
Energy Footprint of the Access Network Devices. In Technologies and Eco-innovation towards Sustainability I; 49-64.
Springer, Singapore.
[17] Soni, U., Roy, A., Verma, A., & Jain, V. (2019). Forecasting municipal solid waste generation using artificial intelligence
models-a case study in India. SN Applied Sciences, 1, 162.
[18] Fairley, P. (2018). Ethereum will cut back its absurd energy use. IEEE Spectrum, 56, 29-32.
[19] Mora, C., Rollins, R. L., Taladay, K., Kantar, M. B., Chock, M. K., Shimada, M., & Franklin, E. C. (2018). Bitcoin emissions
alone could push global warming above 2 °C. Nature Climate Change, 8, 931-933.
[20] Mills, M.P. (2020). Our love of the cloud is making a green energy
future impossible. TechCrunch, [cited 18 June 2020]. Available from:
[21] Mills, M.P. (2020). Digital Cathedrals. [cited 18 June 2020]. Available from:
[22] Li, L., Patki, P. G., Kwon, Y. B., Stelmakh, V., Campbell, B. D., Annamalai, M., & Vasilyev, M. (2017). All-optical
regenerator of multi-channel signals. Nature Communications, 8(1), 1-11.
[23] Zhao, X., Yu, Z., Liu, B., Li, Y., Chen, H., & Chen, M. (2018, October). An integrated optical neural network chip based
on Mach-Zehnder interferometers. In 2018 Asia Communications and Photonics Conference (ACP) (pp. 1-3). IEEE.
[24] Patri, S. K., Autenrieth, A., Rafique, D., Elbers, J. P., & Machuca, C. M. (2020, March). HeCSON: Heuristic for
Configuration Selection in Optical Network Planning. In Optical Fiber Communication Conference (pp. Th2A-32). Optical
Society of America.
[25] Hennessy, J. L., & Patterson, D. A. (2019). A new golden age for computer architecture. Communications of the ACM,
62(2), 48-60.
[26] Pu, S., Liao, Y., Chen, K., Fu, J., Zhang, S., Ge, L., & Liu, K. (2020). Thermogalvanic Hydrogel for Synchronous
Evaporative Cooling and Low-Grade Heat Energy Harvesting. Nano Letters, 20(5), 3791-3797.
[27] Garofalo, A., Rusci, M., Conti, F., Rossi, D., & Benini, L. (2020). PULP-NN: accelerating quantized neural networks
on parallel ultra-low-power RISC-V processors. Philosophical Transactions of the Royal Society A, 378(2164), 20190155.
[28] Ono, T., Iida, K., & Yamazaki, S. (2017). Achieving sustainable development goals (SDGs) through ICT services.
FUJITSU Scientific & Technical Journal, 53(6), 17-22.
[29] Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. arXiv
preprint arXiv:1906.02243.
[30] Cai, H., Gan, C., & Han, S. (2019). Once for all: Train one network and specialize it for efficient deployment. arXiv
preprint arXiv:1908.09791.
[31] Persson, M. (2020). JavaScript DOM Manipulation Performance: Comparing Vanilla JavaScript and Leading
JavaScript Front-end Frameworks.
© 2020 by the authors; licensee PSRP, Lahore, Pakistan. This article is an open access article
distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license.
... Besides the amount of data processed (see software sufficiency below), decisive factors for the energy demand of data centers are waste heat recovery, cooling technology, and server utilization [102]. Data center infrastructure and associated environmental impacts are expected to increase significantly in the future [103,104]. ...
... The largest share of IP data traffic takes place within data centers [108], which, together with the networks, account for about half of the sector's operational electricity demand [76]. The networks' contribution to that demand is determined by the type of access network (mobile vs. fixed, optical fiber vs. ADSL), bandwidth, utilization factor of network components, and the kind of access device used [103,[109][110][111]. ...
... Moreover, suggestions such as an advertising ban on selected Internet areas (e.g., on search engines, social media platforms) could face significant political opposition from IT companies or from the entire marketing industry. However, given that the share of energy use from applying ICT throughout economy and society is higher than the energy consumed in producing hardware [76,103] Civil society Activism and political participation that demand sufficiency-oriented production, consumption, data handling, and legislation; grassroot movements, associations, voluntary work (e.g., repair café) software sufficiency would also contribute more to reducing the overall burden from the sector. Challenges-but also effectiveness-increase further when looking at user sufficiency. ...
Full-text available
ICT hold significant potential to increase resource and energy efficiencies and contribute to a circular economy. Yet unresolved is whether the aggregated net effect of ICT overall mitigates or aggravates environmental burdens. While the savings potentials have been explored, drivers that prevent these and possible counter measures have not been researched thoroughly. The concept digital sufficiency constitutes a basis to understand how ICT can become part of the essential environmental transformation. Digital sufficiency consists of four dimensions, each suggesting a set of strategies and policy proposals: (a) hardware sufficiency, which aims for fewer devices needing to be produced and their absolute energy demand being kept to the lowest level possible to perform the desired tasks; (b) software sufficiency, which covers ensuring that data traffic and hardware utilization during application are kept as low as possible; (c) user sufficiency, which strives for users applying digital devices frugally and using ICT in a way that promotes sustainable lifestyles; and (d) economic sufficiency, which aspires to digitalization supporting a transition to an economy characterized not by economic growth as the primary goal but by sufficient production and consumption within planetary boundaries. The policies for hardware and software sufficiency are relatively easily conceivable and executable. Policies for user and economic sufficiency are politically more difficult to implement and relate strongly to policies for environmental transformation in general. This article argues for comprehensive policies for digital sufficiency, which are indispensible if ICT are to play a beneficial role in overall environmental transformation.
... In a recent study using panel data from OECD countries, Köppl-Turyna et al. (2021) find a corresponding elasticity 9 estimate of 0.82 (i.e. a 1% increase in electricity demand leads to a 0.82% increase in CO 2 emissions). Most research predicts an exponential surge of data traffic for the coming years (Andrae, 2020a). To name just a few, Weldon (2016) projected that electricity consumption of all connected devices would increase from 200 TWh in 2011 to 1100 TWh and 1400 TWh by and 2025, respectively. ...
... Shoukourian and Kranzlmüller (2020) show that despite higher electricity efficiency, the increase in density and performance of the Leibniz supercomputing center in Garching (Bavaria, Germany) is increasing the electricity consumption of the center. Andrae (2020a) does not expect the total electricity consumption of the ICT sector to decrease or flatten out in the near future. ...
... On the one hand, although Koomey's Law has slowed down since 2012, many scientists, including Masanet et al. (2020), predict further efficiency improvements in data centers. On the other hand, other studies, such as Koot and Wijnhoven (2021), Andrae (2020a), or Belkhir and Elmeligi (2018), conclude that energy efficiency improvements might reach their limits. The comparison of Fig. 5a and b illustrates that electricity consumption is approximately 4 TWh lower when efficiency improvements are included, compared to the situation where no efficiency improvement is assumed. ...
Massive increases in Internet data traffic over the last years have led to rapidly rising electricity demand and CO2 emissions, giving rise to environmental externalities and network congestion costs. One particular concern is the rise in data traffic generated by video-streaming services. We analyze the electricity-saving potential related to video streaming in Europe from 2020 to 2030. To this end, three trend scenarios (Business-as-usual, Gray, and Green) are considered and modeled bottom-up, taking specific electricity consumption (and trends) of data transmission networks, end-use devices, and data centers into account. Using these scenarios, we examine in more detail the approximate electricity-saving impact that regulatory interventions and technical standards can have on the electricity consumption of end-users, network operators, and data centers. The model results reveal that regulatory intervention can have a significant impact on electricity consumption and CO2 emissions of the residential houshold sector.
... Estimates vary widely and require constant updating due to changes in demands and technological capacities [15]. Yet, most of them come out quite differently from the wishful thinking of the GeSI-report, and all require both political and industrial efforts for emission reductions to take place, see [16][17][18]. ...
... [95] (pp. [16][17]. An example from one of my research projects might explain how some of the challenges can be unfolded empirically. ...
Full-text available
How can an anthropology of digital technology contribute to our understanding of climate mitigating initiatives? Governments and private sector industries argue that climate mitigation must focus on “decoupling” economic growth from carbon emissions if we are to reduce climate impact while still maintaining a healthy economy. Most proponents of decoupling envisage that digitalization will play a central role in this operation. Critics, however, argue that IT has a large and often unacknowledged climate impact, while IT solutions also frequently bring new and unforeseen problems, particular or systemic. The challenge of decoupling is thus broader than the management of the relationship between the economy and the climate. As much as decoupling is about how we imagine that the climate crisis can be solved with technologies, trusting that they can create the changes we need, it is also about the cultural value of lifestyles that we do not want to change. Seeing the climate crisis from this perspective opens the door for an anthropology of digital technology, which allows us to approach decoupling as a matter of how sociocultural change is imagined in the spaces between IT, climate change and society. The article thus contributes to the qualitative social scientific literature on perceptions of change by focusing on some of the ways that implicit ideas of change are embedded in the promotion of digital technologies as solutions to climate change. In addition, it presents to a wider scientific audience the perspectives that an anthropologically inspired analytic may provide on this topic.
... Numerous research and estimations highlighted the urgent situation that needs to be truly considered. In fact, future ICT infrastructures can hardly slow their overall electricity use until 2030, and they will keep on using more energy than today, despite the decrease of the energy cost per operation [8,9]. Moreover, data centers are estimated to account for around 1% of worldwide electricity use [95]. ...
... About 222 changes were submitted to the original projects repositories, resulting in a total of 59 Pull requests for 140 tested applications issued from F-droid. 9 At least 16 Pull requests have been successfully merged. ...
Energy consumption is an emerging concern in multiple domains and fields, including ICT and data centers. In fact, the energy consumption of data centers has drastically increased in the last decade for both hardware and software entities, especially due to the democratization of cloud services and the huge amount of transiting data. Formerly, The energy consumption was mainly related to the used hardware, and its capacity to maintain a low power consumption while achieving tasks. However, the running software is in fact as important as hardware, and is as responsible for very substantial gains or drawbacks in energy consumption.The ultimate goal of this thesis is to help developers and practitioners understand and actively think about green software design in their work, in order to reduce the energy consumption of their software and deliver energy efficient products. We thus contribute to supplement green software design knowledge.. To achieve this, we start with conducting a qualitative study with developers, to discuss the multiple hurdles they are facing and their requirements to promote green software design within companies.To reduce software energy consumption, practitioners have to measure it and track its evolution first. In our second contribution we investigate the problem of energy consumption variations. We provide guidelines on controllable factors that one could easily tune to reduce this variation and conduct steady and reproducible energy measurements.Once practitioners are able to measure the energy consumption of their software, they can work on reducing it and produce energy efficient software. Thus, this thesis delivers 3 more contributions, focusing on the Java language. The first contribution aims at helping developers choose and configure their execution environment. We identified substantial differences in energy consumption using multiple JVM platforms with different JIT and GC configurations for different use cases. 
The second and third contributions study the impact on energy consumption of small changes that developers often apply on their source code (code refactoring and API/methods substitutions respectively). We show through these studies that structure oriented code refactorings do not substantially alter software energy consumption. On the other hand, Java I/O methods substitution drastically changed the energy consumption depending on the use case.This thesis contributes to enrich the knowledge on green software design and provides insights and approaches to enhance the energy efficiency at multiple levels of software development.
... The number of devices will rise by 50% and the digital footprint will double or triple in 2025 [23]. Another study [24] showed that the carbon footprint of ICT in 2015 was similar to 2010 due to energy-efficient devices and smartphone usage. ...
Full-text available
The advent of easily accessible technology, e-commerce, online streaming, and social networking platforms has led to massive amounts of data being stored and processed every second. The IT infrastructures needed to support this digital age consume a large amount of energy and have a negative impact on the environment. There have been several different efforts to estimate the carbon footprint of the internet, but there is no proven exact method for it. Therefore, the goals of this paper are, first—to critically review the carbon emission calculation methods and compare the results, and second—to publicize the environmental impact of our daily simple habit of internet usage. We calculated the carbon footprint of the most popular four online services (TikTok, Facebook, Netflix, and YouTube) by using top-cited methods such those from Obringer, the Shift Project, Andrae, and Hintemann and Hinterholze. When comparing the emitted carbon dioxide, the weighted average of online video streaming usage per day is 51 times more than 14 h of an airplane ride. Netflix generates the highest CO2 emissions among the four applications due to its high-resolution video delivery and its number of users.
... Data centres themselves have a substantial carbon footprint of around 100 megatons of CO 2 e (comparable to American commercial aviation) emitted just from the yearly generation of 200 TWh of electricity [2]. Projections estimate that this footprint will increase by 2-to 9-fold in the next decade [3], with an electricity usage potentially as high as 974 TWh in 2030 [4]. This doesn't even include the environmental impact of producing and disposing of the hardware needed for computation. ...
... According to the study [1], the energy consumption of Information and Communications Technology (ICT) is 7% of the global electricity usage in 2020 and is forecast to be around the average of the best-case and expected scenarios (7% and 21%) by 2030. ...
Full-text available
The energy efficiency in ICT is becoming a grand technological challenge and is now a first-class design constraint in all computing settings. Energy predictive modelling based on performance monitoring counters (PMCs) is the leading method for application-level energy optimization. However, a sound theoretical framework to understand the fundamental significance of the PMCs to the energy consumption and the causes of the inaccuracy of the models is lacking. In this work, we propose a small but insightful theory of energy predictive models of computing, which formalizes both the assumptions behind the existing PMC-based energy predictive models and properties, heretofore unconsidered, that are basic implications of the universal energy conservation law. The theory’s basic practical implications include selection criteria for model variables, model intercept, and model coefficients. The experiments on two modern Intel multicore servers show that applying the proposed selection criteria improves the prediction accuracy of state-of-the-art linear regression models from 31.2% to 18%. Finally, we demonstrate that employing energy models constructed using the proposed theory for energy optimization can save a significant amount of energy (up to 80% for applications used in experiments) compared to state-of-the-art energy measurement tools.
Digital technology and entertainment is a significant driver of electricity use globally, resulting in increased GHG emissions. Research has been conducted on electricity use associated with adigital services, but to date no complete study of television distribution has been conducted. Here we present the first assessment of electricity used for distribution and viewing of television over different distribution platforms terrestrial, satellite, cable and online streaming. We use a novel methodology that combines life cycle assessment techniques with models of the diversity of actual user behaviour, derived from detailed audience monitoring and online behaviour analytics data. This can be applied to assess overall electricity usage for a given media company's services and allows comparison of the electricity demanded per viewerhour of each distribution platform. We apply this to a representative national TV provider - the British Broadcasting Corporation – and show the mean estimate for BBC distribution/viewing electricity use in 2016 is 2171 GWh, resulting in emissions of 1.12 MtCO2e. We show that viewing over streaming, cable and satellite platforms used a mean of 0.17–0.18 KWh per device-hour (88–93 gCO2e) while terrestrial broadcast used a mean of 0.07 kWh (36 gCO2e). We identify home networking equipment and set-top boxes as key hotspots in the system, and show that though streaming is similar in impact to cable and satellite, this is because people use smaller devices to view – meaning the networking equipment in and beyond the home has a higher impact while the end device has a lower one.
Technical Report
Full-text available
In 2018, the power requirements of data centers in Germany rose significantly again. Compared to the previous year, the demand for electrical energy by servers and data centers increased by 6% to 14 billion kWh. This growth is primarily due to the strong expansion of cloud computing capacity in Germany. Substantial new data center capacities were built up, particularly in the greater Frankfurt area, but also at other locations. This development is expected to continue in the future. Trends such as edge computing and artificial intelligence are expected to lead to a significant expansion of data center infrastructures in Germany, Europe and worldwide. If the existing efficiency potentials are not realized , the energy consumption of data centers will continue to rise significantly. These are the results of a recent study by the Borderstep Institute on the development of the energy consumption of data centers in Germany.
Full-text available
Energy proportionality is the key design goal followed by architects of multicore processors. One of its implications is that optimization of an application for performance will also optimize it for energy. In this work, we show that energy proportionality does not hold true for multicore processors. This finding creates the opportunity for bi-objective optimization of applications for energy and performance. We propose and study a novel application-level bi-objective optimization method for energy and performance for multithreaded dataparallel applications. The method uses two decision variables, the number of identical multithreaded kernels (threadgroups) executing the application and the number of threads per threadgroup, with a given workload partitioned equally between the threadgroups. We experimentally demonstrate the efficiency of the method using four popular and highly optimized multithreaded data-parallel applications, two employing two-dimensional fast Fourier transform and the other two, dense matrix multiplication. The experiments performed on four modern multicore processors show that the optimization for performance alone results in increase in dynamic energy consumption by up to 89% and optimization for dynamic energy alone results in performance degradation by up to 49%. By solving the bi-objective optimization problem, the method determines up to 11 Pareto-optimal solutions. Finally, we propose a qualitative dynamic energy model employing performance events as variables to explain the discovered energy nonproportionality. The model shows that the energy nonproportionality on our experimental platforms for the two data-parallel applications is due to disproportionately energy expensive activity of the data translation lookaside buffer.
Full-text available
There is no doubt that the economic and computing activity related to the digital sector will ramp up faster in the present decade than in the last. Moreover, computing infrastructure is one of three major drivers of new electricity use alongsidefuture and current hydrogen production and battery electric vehicles charging. Here is proposed a trajectory in this decade for CO2 emissions associated with this digitalization and its share of electricity and energy generation as a whole. The roadmap for major sources of primary energy and electricity and associated CO2 emissions areprojected and connected to the probable power use of the digital industry. The truncation error for manufacturing related CO2 emissions may be 0.8 Gt or more indicating a larger share of manufacturing and absolute digital CO2 emissions.While remaining at a moderate share of global CO2 emissions (4-5%), the resulting digital CO2 emissions will likely rise from 2020 to 2030. The opposite may only happen if the electricity used to run especially data centers and production plants is produced locally (next to the data centers and plants) from renewable sources and data intensity metrics grow slower than expected.
Conference Paper
Full-text available
We present a transceiver configuration selection heuristic combining Enhanced Gaussian Noise (EGN) models, which shows a 40% increase in throughput and 87% decrease in execution time, compared to only approximate EGN and Full-Form EGN respectively.
Full-text available
Although information and communications technologies (ICTs) have the potential of enabling powerful social, economic and environmental benefits, ICT systems give a non-negligible contribution to world electricity consumption and carbon dioxide (CO2) footprint. This contribution will sustain since the increased demand for user′s connectivity and an explosion of traffic volumes necessitate continuous expansion of current ICTs services and deployment of new infrastructures and technologies which must ensure the expected user experiences and performance. In this paper, analyses of costs for the global annual energy consumption of telecommunication networks, estimation of ICT sector CO2 footprint contribution and predictions of energy consumption of all connected user-related devices and equipment in the period 2011–2030 are presented. Since presented estimations of network energy consumption trends for main communication sectors by 2030 shows that highest contribution to global energy consumption will come from wireless access networks and data centres (DCs), the rest of the paper analyses technologies and concepts which can contribute to the energy-efficiency improvements of these two sectors. More specifically, different paradigms for wireless access networks such as millimetre-wave communications, Long-Term Evolution in unlicensed spectrum, ultra-dense heterogeneous networks, device-to-device communications and massive multiple-input multiple-output communications have been analysed as possible technologies for improvement of wireless networks energy efficiency. Additionally, approaches related to the DC resource management, DCs power management, green DC monitoring and thermal management in DCs have been discussed as promising approaches to improvement of DC power usage efficiency. For each of analysed technologies, future research challenges and open issues have been summarised and discussed. 
Lastly, an overview of the accepted papers in the Special Issue dedicated to the green, energy-efficient and sustainable networks is presented.
Full-text available
International Journal of Green Technology 2019; 5(1): 50-63. Currently the global energy and electricity use of ICT networks and data centers are estimated and predicted by several different top-down approaches. It has not been investigated which prediction approach best answers to the 5G, Artificial Intelligence and Internet of Things megatrends which are expected to emerge until 2030 and beyond. The analysis of the potential correlation between storage volume, communication volume and computations (instructions, operations, bits) is also lacking. The present research shows that several different activity metrics (AM)-e.g. data traffic, subscribers, capita, operations-have and can be been used. First the global baseline electricity evolution (TWh) for 2010, 2015 and 2020 for networks of fixed, mobile and data centers is set based on literature. Then the respective AM-e.g. data traffic-associated with each network are identified. Then the following are proposed: Compound Aggregated Growth Rate (CAGR) for each AM, CAGR for TWh/AM and the resulting TWh values for 2025 and 2030. The results show that AMs based on data traffic are best suited for predicting future TWh usage of networks. Data traffic is a more robust (scientific) AM to be used for prediction than subscribers as the latter is a more variable and less definable concept. Nevertheless, subscriber based AM are more uncertain than data traffic AM as the subscriber is neither a well-defined unit, nor related to the network equipment which handle the data. Despite large non-chaotic uncertainties, data traffic is a better AM than subscribers for expressing the energy evolution of ICT Networks and Data Centers. Top-down/high-level models based on data traffic are sensitive to the amount of traffic however also to the development of future electricity intensity. For the first time the primary energy use of computing, resulting from total global instructions and energy per instruction, is estimated. 
Combining all networks and data centers and using one AM for all does not reflect the evolution improvement of individual network types. Very simplistic high-level estimation models tend to both overestimate and underestimate the TWh. However, looking at networks and data centers as one big entity better reflects the future converging paradigm of telecom, ICT and computing. The next step is to make the prediction models more sophisticated by using equipment standards instead of top-down metrics. The links between individual equipment roadmaps (e.g. W/(bits per second)) and sector-level roadmaps need further study.
Modern HPC and cloud data centers, being mission critical facilities, continue to grow in size and number as the amount of Internet and cloud-based services increases. Moreover, the recent hype in the usage of machine learning based technologies, requiring appropriate infrastructures for data storage and computational power, further pushes the need for modern data centers. This expenditure not only increases the operational costs, but also affects the environment by generating carbon footprints. Due to increased load on supporting power grids, some governmental organizations start to reconsider data center deployment procedures with an increased demand of renewable energy utilization and waste heat recovery. One of the challenges when powering data centers using renewable energy is the intermittent availability of the power. The dynamics of available power must be taken into account to maximize usage of accessible renewable energy while adhering to existing service level agreements as well as to thermal and power requirements. This requires a framework that can continuously match the actual behavior of the target computing infrastructure to the current availability of power and cooling. This paper introduces machine learning based approaches aiming to model various data center energy/power consumption related Key Performance Indicators (KPIs). A validation is performed using the multi-year operational data obtained at Leibniz Supercomputing Centre (LRZ). This framework is used as a building block for achieving a data center infrastructure-aware resource management and scheduling.
Efficient heat removal and heat recovery are two conflicting processes that are difficult to achieve simultaneously. In this work, we pave a new way to achieve both through a smart thermogalvanic hydrogel film, in which the ions and the water undergo two separate thermodynamic cycles: a thermogalvanic reaction and a water-to-vapor phase transition. When the hydrogel is attached to a heat source, it achieves efficient evaporative cooling while simultaneously converting a portion of the waste heat into electricity. Moreover, the hydrogel can later regenerate its water content by absorbing water from the surrounding air, and this reversibility can be tuned by design. As an applicative demonstration, a hydrogel film with a thickness of 2 mm was attached to an operating cell-phone battery: it decreased the temperature of the battery by 20 °C and recovered 5 μW of electricity at a discharge rate of 2.2 C.
With the explosion in digital traffic, the number of data centers, as well as the demands on each data center, continues to increase. Concomitantly, the cost (and environmental impact) of the energy expended on the thermal management of these data centers is of concern to operators in particular, and to society in general. In the absence of physics-based control algorithms, computer room air conditioning (CRAC) units are typically operated with conservatively pre-determined set points, resulting in sub-optimal energy consumption. A more optimal control algorithm requires predictive capability. In this paper, we develop a data-informed, experimentally validated and computationally inexpensive system-level predictive tool that can forecast data center behavior over a broad range of operating conditions. We have tested this model against experiments as well as against experimentally validated transient CFD simulations for two different data center design configurations. The validated model can accurately forecast temperatures and air flows in a data center (including rack air temperatures) ten to fifteen minutes into the future. Once integrated with control aspects, we expect this model to form an important building block in future intelligent, increasingly automated data center environment management systems.
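As a rough illustration of the kind of forecast such a tool produces, the sketch below steps a first-order lumped RC thermal model of a single rack forward in time. The parameters `r_th`, `c_th` and the load value are hypothetical, and this scalar model is deliberately far simpler than the paper's data-informed, CFD-validated system-level tool:

```python
def forecast_rack_temp(t0, t_supply, power_w, r_th=0.05, c_th=4000.0,
                       dt=1.0, steps=600):
    """Euler-step a lumped RC thermal model of a rack.
    t0, t_supply in degC; power_w in W; r_th in K/W; c_th in J/K.
    All parameter values are illustrative, not measured."""
    temps = [t0]
    t = t0
    for _ in range(steps):
        # heat in from the IT load, heat out to the CRAC supply air
        dT = (power_w - (t - t_supply) / r_th) * dt / c_th
        t += dT
        temps.append(t)
    return temps

# 400 W load, 18 degC supply air: rack air relaxes toward
# t_supply + power_w * r_th = 38 degC over ~10 minutes.
ten_min = forecast_rack_temp(18.0, 18.0, 400.0)
```

The steady-state rise `power_w * r_th` and the time constant `r_th * c_th` (200 s here) are what a predictive controller exploits: with a ten-to-fifteen-minute forecast horizon, set points can be relaxed ahead of load changes instead of being held conservatively low.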
We present PULP-NN, an optimized computing library for a parallel ultra-low-power tightly coupled cluster of RISC-V processors. The key innovation in PULP-NN is a set of kernels for quantized neural network inference, targeting byte and sub-byte data types down to INT-1, tuned for the recent trend toward aggressive quantization in deep neural network inference. The proposed library exploits both the digital signal processing extensions available in the PULP RISC-V processors and the cluster’s parallelism, achieving up to 15.5 MACs/cycle on INT-8 and improving performance by up to 63× with respect to a sequential implementation on a single RISC-V core implementing the baseline RV32IMC ISA. Using PULP-NN, a CIFAR-10 network on an octa-core cluster runs in 30× and 19.6× fewer clock cycles than the current state-of-the-art ARM CMSIS-NN library running on STM32L4 and STM32H7 MCUs, respectively. When running on a GAP-8 processor at maximum frequency, the proposed library outperforms execution on energy-efficient MCUs such as the STM32L4 by 36.8× and on high-end MCUs such as the STM32H7 by 7.45×. At the maximum-efficiency operating point, the energy efficiency on GAP-8 is 14.1× higher than on the STM32L4 and 39.5× higher than on the STM32H7. This article is part of the theme issue ‘Harmonizing energy-autonomous computing and intelligence’.
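The MAC-with-wide-accumulator pattern that such INT-8 kernels vectorize can be mimicked in scalar form. The sketch below is an illustrative model of quantized inference arithmetic, not PULP-NN's API; the real kernels operate on packed SIMD registers via the PULP RISC-V DSP extensions, and the requantization multiplier/shift values here are hypothetical:

```python
def int8_dot(a, b):
    """Dot product of two int8 vectors into a wide accumulator,
    mirroring (in scalar Python) the INT-8 MAC pattern the
    hardware kernels perform 4-at-a-time per register."""
    assert all(-128 <= v <= 127 for v in a + b), "inputs must be int8-range"
    acc = 0  # stands in for the 32-bit accumulator of the real kernel
    for x, y in zip(a, b):
        acc += x * y
    return acc

def requantize(acc, mult, shift):
    """Scale the wide accumulator back to int8 with saturation,
    as quantized inference pipelines do between layers."""
    v = (acc * mult) >> shift
    return max(-128, min(127, v))
```

Keeping the accumulator wide avoids overflow across long dot products, and deferring the saturating rescale to a single requantize step per output is what lets sub-byte kernels stay exact in integer arithmetic.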