... As AI is a relatively new technology with various application domains and rapid advancements, its influence on the twin transition is difficult to assess. Research from the field of sustainable AI indicates that it could play a special role either as a driver, for instance because it strengthens the linkage between digitalisation and sustainability and makes them proceed hand in hand (e.g., Behera et al., 2023; Vinuesa et al., 2020; Willenbacher et al., 2021), or as a technology that overshadows and counteracts sustainability activities due to its high saliency in public debates and its high energy consumption (e.g., Heilinger et al., 2023; Vries, 2023). In terms of scientific evidence on the role of AI, a survey of scientific articles in 2020 found no publications addressing this question (Wang et al., 2020). ...
... In contrast, studies from fields other than twin transition have revealed a major environmental impact of AI technologies (van Wynsberghe, 2021). For example, research on sustainable AI discusses the high energy consumption and greenhouse gas emissions caused by the training and use of AI systems (Schwartz et al., 2019; Strubell et al., 2019; Vries, 2023). For example, training the language model GPT-3, which served as the foundation for OpenAI's AI-based chatbot ChatGPT, is estimated to have consumed around 1,300 MWh of energy and generated approximately 550 tons of CO2e emissions (Patterson et al., 2021). ...
... Furthermore, operating ChatGPT reportedly requires up to 2.9 Wh per query, several times more than a simple Google search, which consumes around 0.3 Wh (Vries, 2023). Given the rapidly increasing proliferation of AI models and the growing focus on achieving higher accuracy as the primary evaluation criterion, the field of green AI calls for new evaluation metrics for AI models that consider their energy efficiency (Yarally et al., 2023). ...
Sustainability and digitalisation can be considered two megatrends that force companies to innovate in order to remain competitive. The twin transition describes the parallel pursuit of these megatrends, with the aim of achieving a synergistic combination. However, to date, there is a lack of scientific evidence on the relationship between sustainability and digitalisation and on the role of the twin transition for companies. Therefore, this article presents a systematic literature review following the PRISMA methodology to analyse how the concept of twin transition is currently understood, how the two transitions are related, and which organisational, external, and technological contextual factors influence the twin transition in companies. On the technological level, we put special emphasis on the impact of artificial intelligence (AI) on the twin transition, since it may significantly influence the interplay between digitalisation and sustainability. Our findings, derived from 70 scientific publications, reveal that (i) the social dimension of sustainability is mostly underrepresented in the discussion, (ii) the positive impact of the digital transition on the sustainability transition is much better evidenced than the reverse, (iii) establishing the right capabilities, methods, and competences is essential to drive the twin transition in companies, and (iv) indirect positive ecological aspects of AI, as well as direct negative ones, are predominantly discussed, but without addressing important social aspects. Finally, based on the literature review, we propose a new definition of the term twin transition. Future research is needed to reduce ambivalence and to provide a more profound and evidence-based picture of the assumed relations.
... Strubell et al. quantified the economic and environmental costs of training neural network models in natural language processing and made recommendations to reduce costs and improve equity in natural language processing research and practice [12]. De Vries estimated the carbon footprint of AI models [13]. ...
... The International Electrotechnical Commission (IEC) predicts that the AI industry's electricity demand will increase more than tenfold by 2026 compared to 2023 [13]. Furthermore, the International Energy Agency (IEA) points out that if AI is fully implemented in search engines like Google, power demand could increase more than tenfold [1, p.34]. ...
... Such an increase from current estimates could potentially elevate the global electricity demand from AI data centers alone to hundreds of TWh annually, approaching the current consumption of nations like Spain or Australia [14]. Specifically, comparing the electricity consumption of a conventional Google search (0.3 Wh each) with that of ChatGPT (2.9 Wh each), the IEA estimates that 10^13 Wh/year of additional electricity will be required, assuming that 9 billion searches (inferences) are performed daily [1, p.35]. According to a study by SemiAnalysis, electricity consumption could reach 29.2 TWh/year if LLMs were implemented for all Google searches [40]. ...
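The arithmetic behind the quoted 10^13 Wh/year figure is easy to reproduce; the following back-of-the-envelope sketch in Python uses only the per-query figures and the 9-billion-searches-per-day assumption cited above, and introduces no new data.

```python
# Back-of-the-envelope check of the IEA estimate quoted above.
# Inputs are the figures cited in the text, not new measurements:
# 2.9 Wh per ChatGPT query, 0.3 Wh per conventional Google search,
# and 9 billion searches (inferences) per day.
LLM_WH = 2.9
SEARCH_WH = 0.3
QUERIES_PER_DAY = 9e9

extra_wh_year = (LLM_WH - SEARCH_WH) * QUERIES_PER_DAY * 365
print(f"additional demand ~ {extra_wh_year:.2e} Wh/year")  # ~8.5e12 Wh/year
```

(2.9 - 0.3) Wh x 9 x 10^9 searches/day x 365 days gives roughly 8.5 x 10^12 Wh/year, i.e. the order of magnitude of the 10^13 Wh/year cited from the IEA.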
The rapid adoption of blockchain technology and generative AI contributes significantly to global electricity consumption, raising concerns about environmental sustainability. The first step in saving energy is to identify current consumption. However, since blockchain and generative AI are cloud-based services, it is difficult to understand electricity consumption outside one’s facilities. This creates a barrier for user companies and organizations seeking to increase the accuracy of calculating Scope 3 emissions. This study quantifies the electricity consumption of these technologies at a system-wide and per-use level. It compares them to traditional services such as payment networks and web search engines. Bitcoin, a Proof of Work (PoW) blockchain, consumes approximately 121 TWh, equivalent to 0.43% of global electricity consumption, and its energy demand per transaction is 720,000 times higher than that of the Visa payment system. Ethereum’s move to Proof of Stake (PoS) in 2022 reduces energy consumption by 99.988%, demonstrating the potential for efficiency gains. Generative AI models also have significant energy requirements, especially during the training and inference phases. For example, training GPT-4 required approximately 9450 MWh, and daily inference work exceeded 500 MWh. The results show that inference, driven by frequent user interaction, often exceeds the energy consumption of training. The study underscores the urgency of addressing these technologies’ environmental impact through strategies such as adopting energy-efficient consensus mechanisms and optimizing AI’s lifecycle. These findings are intended to guide organizations in refining their Scope 3 emissions calculations and adopting sustainable technology practices.
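The abstract's claim that inference often exceeds training energy follows directly from its own figures; a minimal sketch, treating the stated ~9450 MWh training and ~500 MWh/day inference numbers as given rather than re-measured:

```python
# Days until cumulative inference energy overtakes training energy,
# using the figures stated in the abstract above (assumed, not measured).
TRAIN_MWH = 9450          # GPT-4 training estimate from the abstract
INFER_MWH_PER_DAY = 500   # daily inference estimate from the abstract

days = TRAIN_MWH / INFER_MWH_PER_DAY
print(f"inference matches training after ~{days:.0f} days")                 # ~19 days
print(f"one year of inference ~ {INFER_MWH_PER_DAY * 365 / 1000:.0f} GWh")  # ~183 GWh
```

At these rates, a single year of inference consumes roughly nineteen times the training energy, which is why the abstract emphasizes the inference phase.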
... Though the focus of our paper is the cognitive "cost" of using an LLM or search engine in a specific task, and more specifically the cognitive debt one might start to accumulate when using an LLM, we argue that the cognitive cost is not the only concern: the material and environmental costs are just as high. According to a 2023 study [120], an LLM query consumes around 10 times more energy than a search query. It is important to note that this energy does not come free, and it is likely that the average consumer will be indirectly paying for it very soon [121,122]. ...
... [Table residue: energy estimates from [120], together with the authors' very approximate estimates of the total energy impact of the LLM group and the Search Engine group.] ...
This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to the Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to the LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing Session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In Session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.
... As a result, there is an urgent need for novel computing paradigms and optimization methods that minimize power usage, latency, and size. The rapid, often unsustainable growth of AI is also driving up carbon emissions; for instance, training a single large model can emit over 284 tons of CO2 [3][4], and AI could account for 0.5% of global electricity use by 2027 [5]. Moreover, applications such as edge computing and IoT (Internet of Things), where low latency and small form factors are critical, further underscore the need for more power-efficient alternatives [6]. ...
... They introduce a stochastic refractory period, preventing neurons from repeatedly flipping variables in successive steps. Specifically, after a variable neuron changes its state, from 0 to 1 or vice versa, further changes are inhibited for a random number of iterations. In [142], the authors show that a Loihi 2 implementation of simulated annealing (SA) outperforms a standard linear-schedule anneal on a D-Wave quantum annealer for the QUBO problem. ...
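For intuition, the refractory mechanism described in this snippet can be mimicked in plain Python; this is only an illustrative CPU analogue of simulated annealing on a QUBO (the cited work runs on neuromorphic hardware), with all parameter values chosen arbitrarily.

```python
import numpy as np

def sa_qubo_refractory(Q, steps=20000, T0=2.0, alpha=0.999, max_ref=10, seed=0):
    """Simulated annealing for min x^T Q x (x binary, Q symmetric) with a
    stochastic refractory period: a freshly flipped bit is frozen for a
    random number of iterations, as described in the snippet above."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    x = rng.integers(0, 2, n)
    refractory = np.zeros(n, dtype=int)   # remaining frozen iterations per bit
    T = T0
    for _ in range(steps):
        free = np.flatnonzero(refractory == 0)
        if free.size:
            i = rng.choice(free)
            # Energy change of flipping bit i (symmetric Q):
            # dE = (1 - 2*x_i) * (Q_ii + 2 * sum_{j != i} Q_ij x_j)
            dE = (1 - 2 * x[i]) * (Q[i, i] + 2 * (Q[i] @ x - Q[i, i] * x[i]))
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                x[i] ^= 1
                refractory[i] = rng.integers(1, max_ref + 1)  # freeze the bit
        refractory[refractory > 0] -= 1
        T *= alpha
    return x, x @ Q @ x

Q = np.array([[-2.0, 1.5], [1.5, -3.0]])   # toy symmetric QUBO instance
print(sa_qubo_refractory(Q))               # expect x = (0, 1), energy = -3.0
```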
Neuromorphic computing (NC) introduces a novel algorithmic paradigm representing a major shift from traditional digital computing of Von Neumann architectures. NC emulates or simulates the neural dynamics of brains in the form of Spiking Neural Networks (SNNs). Much of the research in NC has concentrated on machine learning applications and neuroscience simulations. This paper investigates the modelling and implementation of optimization algorithms and particularly metaheuristics using the NC paradigm as an alternative to Von Neumann architectures, leading to breakthroughs in solving optimization problems. Neuromorphic-based metaheuristics (Nheuristics) are supposed to be characterized by low power, low latency and small footprint. Since NC systems are fundamentally different from conventional Von Neumann computers, several challenges are posed to the design and implementation of Nheuristics. A guideline based on a classification and critical analysis is conducted on the different families of metaheuristics and optimization problems they address. We also discuss future directions that need to be addressed to expand both the development and application of Nheuristics.
... Despite advances in hardware and software efficiency, however, large-scale AI developments have contributed to a sharp rise in overall energy consumption [5,6,11]. The rise in energy consumption is particularly notable in the operation of large foundational models such as ChatGPT, which, despite being optimized for efficiency, still require immense computational resources that are supported by energy-intensive DC operations [12]. For instance, Google has revealed that the increased electricity demand driven by AI and its expanding DC infrastructure has resulted in a 48% surge in greenhouse gas (GHG) emissions above the company's 2019 baseline [13,14]. ...
... Based on several studies showing the rapidly growing energy consumption and environmental impact of AI development [12], [16][17][18], high-profile media outlets have increasingly covered the environmental costs associated with AI technologies [13,14,19]. The energy consumption of ... [Footnote 1: Currently, the fastest-growing advanced-AI use case is Generative AI (Gen AI), which will account for approximately 40 percent of the total [9].] [Footnote 2: As systems become more efficient, their reduced cost or increased utility may lead to greater use, ultimately increasing overall resource consumption.] ...
... In practice, many estimates of AI training power consumption multiply the manufacturer-rated maximum power draw (also referred to as thermal design power, or TDP) of AI chips [33] or servers [34] by the reported training time (typically in GPU-hours) of a given model [35], [36]. This reflects data availability, as real-time power measurements are not typically retrievable months or years after training has occurred [28], and multiplying GPU-hours by chip TDP is a straightforward modeling approach. ...
As AI's energy demand continues to grow, it is critical to enhance the understanding of characteristics of this demand, to improve grid infrastructure planning and environmental assessment. By combining empirical measurements from Brookhaven National Laboratory during AI training on 8-GPU H100 systems with open-source benchmarking data, we develop statistical models relating computational intensity to node-level power consumption. We measure the gap between manufacturer-rated thermal design power (TDP) and actual power demand during AI training. Our analysis reveals that even computationally intensive workloads operate at only 76% of the 10.2 kW TDP rating. Our architecture-specific model, calibrated to floating-point operations, predicts energy consumption with 11.4% mean absolute percentage error, significantly outperforming TDP-based approaches (27-37% error). We identified distinct power signatures between transformer and CNN architectures, with transformers showing characteristic fluctuations that may impact grid stability.
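To make the measured gap concrete, here is a toy calculation using two numbers from the abstract (the 10.2 kW node TDP and the 76% average draw); the 30-day training duration is a hypothetical input added for illustration.

```python
# TDP-based vs utilization-corrected energy estimate for a single
# 8-GPU H100 node. 10.2 kW and 76% are taken from the abstract above;
# the 30-day run length is a hypothetical assumption.
NODE_TDP_KW = 10.2
MEASURED_FRACTION = 0.76
HOURS = 30 * 24

tdp_mwh = NODE_TDP_KW * HOURS / 1000            # naive TDP x time estimate
corrected_mwh = tdp_mwh * MEASURED_FRACTION     # utilization-corrected
print(f"TDP-based: {tdp_mwh:.2f} MWh, corrected: {corrected_mwh:.2f} MWh")
# TDP-based: 7.34 MWh, corrected: 5.58 MWh -> TDP overestimates by ~1/4
```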
... Deploying these reasoning-enhanced LLMs, however, comes at an immense computational cost. Even in current static reasoning models which follow fixed input-output mappings without external tool interaction (Figure 1(a,b)), LLMs run on thousands of GPUs, whose power, cooling, and capital costs drive monthly expenses into the tens of millions of dollars [44]. A single ChatGPT query is estimated to consume about ten times the electricity of a typical web search [10] and requires a substantial amount of cooling water [50]. As a result, hyperscalers are investing at an unprecedented scale. ...
Large-language-model (LLM)-based AI agents have recently showcased impressive versatility by employing dynamic reasoning, an adaptive, multi-step process that coordinates with external tools. This shift from static, single-turn inference to agentic, multi-turn workflows broadens task generalization and behavioral flexibility, but it also introduces serious concerns about system-level cost, efficiency, and sustainability. This paper presents the first comprehensive system-level analysis of AI agents, quantifying their resource usage, latency behavior, energy consumption, and datacenter-wide power consumption demands across diverse agent designs and test-time scaling strategies. We further characterize how AI agent design choices, such as few-shot prompting, reflection depth, and parallel reasoning, impact accuracy-cost tradeoffs. Our findings reveal that while agents improve accuracy with increased compute, they suffer from rapidly diminishing returns, widening latency variance, and unsustainable infrastructure costs. Through detailed evaluation of representative agents, we highlight the profound computational demands introduced by AI agent workflows, uncovering a looming sustainability crisis. These results call for a paradigm shift in agent design toward compute-efficient reasoning, balancing performance with deployability under real-world constraints.
... The unknowns include whether certain positive effects will outweigh the opposing negative effects or vice versa, and what indirect risks will arise. For example, adding to the environmental cost of the energy-intensive training and use of AI models [51][52][53], the potential acceleration of innovation may result in shorter product iterations, which may lead to increased consumption and consequent environmental damage. Conversely, the systems may lead to more environmentally compatible innovations, resulting in an overall positive effect. ...
Humanity is progressing towards automated product development, a trend that promises faster creation of better products and thus the acceleration of technological progress. However, increasing reliance on non-human agents for this process introduces many risks. This perspective aims to initiate a discussion on these risks and appropriate mitigation strategies. To this end, we outline a set of principles for safer AI-driven product development which emphasize human oversight, accountability, and explainable design, among others. The risk assessment covers both technical risks which affect product quality and safety, and sociotechnical risks which affect society. While AI-driven product development is still in its early stages, this discussion will help balance its opportunities and risks without delaying essential progress in understanding, norm-setting, and regulation.
... Proponents argue that advancements in efficient AI design and hardware could mitigate these effects. However, questions persist regarding whether such innovations can keep pace with the growing computational demands of AI, especially given projections of a steep increase in data centers' share of global electricity consumption [77]. • Performance Trade-offs (TF = 224; 42 cases; 67%): Achieving high levels of accuracy in AI systems frequently results in increased computational complexity, leading to a trade-off between performance and energy consumption. ...
Recent advances in Artificial Intelligence (AI) have generated both excitement and concern within the power sector. While AI holds significant promise, enabling improved forecasting of renewable energy generation, enhanced grid resilience, and better supply-demand balancing, it also raises critical issues around transparency, data privacy, accountability, and fairness in power distribution. Despite the growing body of research on AI applications in power systems, there is a lack of structured understanding of the key socio-technical matters of concern (MCs) surrounding its integration. This paper addresses this gap by conducting a systematic literature review combined with qualitative text analysis to identify and synthesize the most prominent socio-technical concerns in the academic discourse. We analyzed a curated sample of peer-reviewed papers published between 1987 and 2024, focusing on high-impact journals in the field. Our analysis reveals four major categories of concern: (1) Operational Concerns-relating to AI’s reliability, efficiency, and integration with existing grid systems; (2) Sustainability Concerns-centered on energy consumption, environmental impact, and AI’s role in the energy transition; (3) Trust Concerns-including transparency, explainability, cybersecurity, and ethics; and (4) Regulatory and Economic Concerns-covering issues of accountability, regulatory compliance, and cost-effectiveness. By mapping these concerns into a cohesive analytical framework, this study contributes to the literature by offering a clearer understanding of AI’s sociotechnical challenges in the power sector. The framework also informs future research and policymaking efforts aimed at the responsible and sustainable deployment of AI in power systems.
... The rapid rise in global electricity demand is fueled by significant growth across various sectors. For example, the increasing heating and cooling needs due to climate change (Zhang et al., 2022), the increasing reliance on artificial intelligence and data centers (de Vries, 2023), and the substantial shift in the transportation sector driven by the widespread adoption of electric vehicles (EVs) are major contributors (Blumberg et al., 2022). Among these, EVs are expected to grow at an exponential rate, placing considerable pressure on electricity infrastructure, potentially causing supply imbalances during peak times. ...
The rising demand for electricity presents significant challenges to grid stability. Demand response programs address this issue by incentivizing consumers to adjust consumption during peak periods. Load aggregators facilitate these programs by coordinating load reductions across participants; however, they face challenges in maintaining profitability and minimizing operational costs, particularly in nascent demand response markets. In this study, we evaluate three participant selection strategies: duration based, price based, and forecast based, within the context of Thailand’s pilot demand response programs. We propose a dual forecasting methodology that combines short-term load profile forecasting using XGBoost and long-term load duration curve predictions using SARIMAX. This integrated approach improves forecast accuracy and enables more strategic participant selection. Simulation results demonstrate that the dual forecasting strategy consistently minimizes operational costs and reduces the number of participant calls, outperforming conventional strategies even under resource-constrained, high-risk scenarios. These findings suggest that the dual forecasting strategy offers a cost-effective and reliable solution for demand response management, particularly in environments with limited participant availability, making it well-suited for deployment in emerging markets.
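As a rough illustration of the dual forecasting idea summarized above (a gradient-boosted model for the short-term load profile plus SARIMAX for the longer horizon), the sketch below uses synthetic data; the features, model orders, and horizons are placeholders, not the study's configuration.

```python
# Illustrative dual-forecasting sketch: XGBoost for the short-term hourly
# profile, SARIMAX for a longer-horizon daily-peak series. All data and
# hyperparameters are synthetic placeholders, not the study's setup.
import numpy as np
import pandas as pd
from xgboost import XGBRegressor
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(42)
hours = pd.date_range("2024-01-01", periods=24 * 90, freq="h")
load = 100 + 20 * np.sin(2 * np.pi * hours.hour / 24) + rng.normal(0, 5, len(hours))

# Short-term: next-day hourly profile from calendar features.
X = pd.DataFrame({"hour": hours.hour, "dow": hours.dayofweek})
short_term = XGBRegressor(n_estimators=200, max_depth=4)
short_term.fit(X[:-24], load[:-24])
profile_forecast = short_term.predict(X[-24:])

# Long-term: daily peak load modelled with SARIMAX (toy order).
daily_peak = pd.Series(load, index=hours).resample("D").max()
long_term = SARIMAX(daily_peak, order=(1, 1, 1)).fit(disp=False)
peak_forecast = long_term.forecast(steps=14)   # two-week horizon

print(profile_forecast[:3], float(peak_forecast.iloc[0]))
```

In the paper's setting, the short-term forecast would drive which participants to call on a given event day, while the long-horizon series informs how often calls will be needed.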
... The AI revolution is eliciting a further intensification of our future energy regime (de Vries, 2023). According to a Morgan Stanley Research report on energy demands of generative AI, AI's power demands will skyrocket 70% annually (Maslej et al., 2024). ...
We review evidence that humans are undergoing a major evolutionary transition (MET). We show that the modern period satisfies the diagnostic criteria for a MET and then describe the major changes in ideological and cognitive forms that culturally facilitate the current transition. The current MET appears to be moving toward a panhuman, planet-wide superorganism characterized by new forms of social cooperation and a new form of cognition we designate as techno-biotic cognition. We show how forms of 21st century technology such as artificial intelligence are shaping and being shaped by the MET and are in turn influencing human evolution and culture. We suggest that accelerated development in areas of the brain unique to humans might be coevolving with new forms of human cognition that characterize the current MET.
... On the physical level it is important to focus on the ecological implications. While there are many possible sustainability-related use cases of AI, the concrete technologies residing under this umbrella term have an astronomically high consumption of material resources like water, cobalt, lithium, or energy (e.g., Li et al., 2023; de Vries, 2023), whose production and disposal have major ecological impacts, especially in the global south. Overall, the use of AI skyrockets all indicators of current and future estimated digital ecological impact (Taddeo et al., 2021). ...
Artificial intelligence (AI) is currently considered a sustainability "game-changer" within and outside of academia. I argue that while there are indeed many sustainability-related use cases for AI, they are likely to have more overall drawbacks than benefits. To substantiate this claim, I differentiate three 'AI materialities' of the AI supply chain: first, the literal materiality (e.g. water, cobalt, lithium, energy consumption etc.); second, the informational materiality (e.g. lots of data and centralised control necessary); and third, the social materiality (e.g. exploitative data work, communities harmed by waste and pollution). In all materialities, effects are especially devastating for the global south while benefiting the global north. A second strong claim regarding sustainable AI circles around so-called apolitical optimisation (e.g. regarding city traffic); however, the optimisation criteria (e.g. cars, bikes, emissions, commute time, health) are purely political and have to be collectively negotiated before applying AI optimisation. Hence, sustainable AI, in principle, cannot break the glass ceiling of transformation and might even distract from necessary societal change. To address that, I propose to stop 'unformation gathering' and to apply the 'small is beautiful' principle. This aims to contribute to an informed academic and collective negotiation on how to (not) integrate AI into the sustainability project, while avoiding reproducing the status quo by serving hegemonic interests, between useful AI use cases, techno-utopian salvation narratives, technology-centred efficiency paradigms, the exploitative and extractivist character of AI, and concepts of digital degrowth. In order to discuss sustainable AI, this article draws on insights from critical data and algorithm studies, STS, transformative sustainability science, critical computer science, and public interest theory.
... Schneider Electric estimates that AI's power demand will grow from 4 GW (35 TWh annually) in 2023 to 15 GW (131 TWh annually) by 2028 [2]. Other predictions indicate that AI workloads might consume between 85 TWh and 134 TWh by 2027, potentially increasing total energy demand of data centers by 30-50% [3]. An annual consumption of 5.8 TWh (resp. ...
Power management has become a crucial focus in the modern computing landscape, considering that energy is increasingly recognized as a critical resource. This has increased the importance of all topics related to energy-aware computing. This paper presents an experimental study of three prevalent power management techniques: power limitation, frequency limitation, and ACPI/P-State governor modes (OS states related to power consumption). Through a benchmark approach with a set of six computing kernels, we investigate the power/performance trade-off with various hardware units and software frameworks (mainly TensorFlow and JAX). Our experimental results show that frequency limitation is the most effective technique to improve the Energy-Delay Product (EDP), which is the product of energy and running time. We also observe that running at the highest frequency, compared to a reduced one, can change the EDP by a significant factor. Another noticeable fact is that frequency management shows consistent behavior across different CPUs, whereas opposite effects sometimes occur between TensorFlow (TF) and JAX with the same power management settings.
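For readers unfamiliar with the metric, EDP is simply energy multiplied by runtime, so it penalizes both slow and power-hungry configurations; a toy comparison with invented measurements:

```python
# Energy-Delay Product (EDP = energy * runtime) for two hypothetical
# frequency settings of the same kernel; lower EDP is better.
# Power/runtime numbers are invented for illustration.
runs = {
    "max_frequency":     (250.0, 100.0),   # (avg power W, runtime s)
    "reduced_frequency": (170.0, 130.0),
}
for name, (power_w, time_s) in runs.items():
    energy_j = power_w * time_s    # E = P * t
    edp = energy_j * time_s        # EDP = E * t
    print(f"{name:18s} E = {energy_j:7.0f} J   EDP = {edp:10.0f} J*s")
```

In this invented example the reduced frequency saves energy (22.1 kJ vs 25 kJ) but loses on EDP, illustrating why the paper treats the power/performance trade-off as workload- and setting-dependent.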
... Power consumption is among the largest expenses in data centers and is estimated at 1-1.5% of global electricity use [1]. The recent surge in training Large Language Models (LLMs), consuming around 29.3 terawatt-hours per year, equivalent to Ireland's energy consumption [2], has prompted companies like Amazon, Google, and Microsoft to invest billions in nuclear energy [3] to meet this demand. The Frontier supercomputer, the world's first exascale supercomputer, consumes 22.7 MW continuously [4]. ...
Power consumption is a major concern in data centers and HPC applications, with GPUs typically accounting for more than half of system power usage. While accurate power measurement tools are crucial for optimizing the energy efficiency of (GPU) applications, both built-in power sensors as well as state-of-the-art power meters often lack the accuracy and temporal granularity needed, or are impractical to use. Released as open hardware, firmware, and software, PowerSensor3 provides a cost-effective solution for evaluating energy efficiency, enabling advancements in sustainable computing. The toolkit consists of a baseboard with a variety of sensor modules accompanied by host libraries with C++ and Python bindings. PowerSensor3 enables real-time power measurements of SoC boards and PCIe cards, including GPUs, FPGAs, NICs, SSDs, and domain-specific AI and ML accelerators. Additionally, it provides significant improvements over previous tools, such as a robust and modular design, current sensors resistant to external interference, simplified calibration, and a sampling rate up to 20 kHz, which is essential to identify GPU behavior at high temporal granularity. This work describes the toolkit design, evaluates its performance characteristics, and shows several use cases (GPUs, NVIDIA Jetson AGX Orin, and SSD), demonstrating PowerSensor3's potential to significantly enhance energy efficiency in modern computing environments.
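Independent of the specific toolkit, turning timestamped power samples into energy is numerical integration; a generic sketch (explicitly not the PowerSensor3 API) with invented samples:

```python
# Generic conversion of timestamped power samples into energy via
# trapezoidal integration; this is not the PowerSensor3 API, merely
# the arithmetic any power meter's host library performs.
import numpy as np

t = np.array([0.00, 0.05, 0.10, 0.15, 0.20])        # seconds (toy 20 Hz trace)
p = np.array([180.0, 220.0, 240.0, 210.0, 190.0])   # watts, invented values

energy_j = float(np.sum((p[:-1] + p[1:]) / 2 * np.diff(t)))  # trapezoid rule
print(f"energy = {energy_j:.2f} J over {t[-1] - t[0]:.2f} s")
```

The paper's point about temporal granularity follows directly: the coarser the sampling, the more short power spikes are averaged away by this integration.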
... The amount of data worldwide has increased exponentially and will reach nearly 181 zettabytes by 2025, roughly three times the 2020 level. The rising use of AI and other new digital technologies will further increase their footprints [69]. The demand for electricity in data centers across the globe will reach 800-1200 TWh by 2026 [70]. ...
Modern financial institutions now manage increasingly advanced data-related activities and place a growing emphasis on environmental and energy impacts. In financial modeling, relational databases, big data systems, and the cloud are integrated, taking into consideration resource optimization and sustainable computing. We suggest a four-layer architecture to address financial data processing issues. The layers of our design are for data sources, data integration, processing, and storage. Data ingestion processes market feeds, transaction records, and customer data. Real-time data are captured by Kafka and transformed by Extract-Transform-Load (ETL) pipelines. The processing layer is composed of Apache Spark for real-time data analysis, Hadoop for batch processing, and a Machine Learning (ML) infrastructure that supports predictive modeling. In order to optimize access patterns, the storage layer includes various data layer components. The test results indicate that the processing of market data in real time, compliance reporting, risk evaluations, and customer analyses can be conducted in fulfillment of environmental sustainability goals. The metrics from the test deployment support the implementation strategies and technical specifications of the architectural components. We also looked at integration models and data flow improvements, with applications in finance. This study aims to enhance enterprise data architecture in the financial context and includes guidance on modernizing data infrastructure.
... Therefore, AI has promoted the reorganization of the energy framework and industrial upgrading. However, some scholars have pointed out that, due to the high power consumption of AI computing [13,14], it may intensify energy consumption. Although Google, OpenAI, Microsoft, and other companies have set ambitious carbon reduction targets, achieving carbon neutrality by 2030 still faces challenges [15,16]. ...
Highlights
What are the main findings?
The CET policy can reduce the total energy consumption and promote the renewable energy consumption locally, with no significant influence on total energy consumption in surrounding areas. However, it causes a decrease in the renewable energy consumption ratio in neighboring regions.
AI significantly reduces energy consumption and promotes renewable energy consumption in surrounding areas. Benefiting from AI-enabled smart city construction, the local region achieves a notable 8.55% reduction in total energy consumption, which exceeds the effect of implementing CET policy alone.
What is the implication of the main finding?
The CET policy of cities exerts a catalytic effect, increasing the costs of energy consumption and carbon emissions in local regions and promoting energy structure transformation, while avoiding the relocation of high-energy-consuming enterprises to surrounding areas. However, due to the "siphoning effect", the policy absorbs renewable resources from neighboring regions, necessitating enhanced coordination with adjacent areas.
AI can break down regional barriers through spatial effects, fostering cross-regional spillovers of green concepts and the application of green technological innovations, thereby counterbalancing the “siphoning effect” and facilitating the formation of a green smart city cluster. Smart city development enables the compatibility of “green resilience” and “smart functionality”.
Abstract
Amidst climate change and the energy crisis worldwide, the synergy between smart city and environmental policies has become a key path to improving the green resilience of cities. This study examines the spatial effects of carbon emission trading (CET) policy on urban energy performance under the context of artificial intelligence (AI)-empowered smart cities. Using the spatial Durbin model (SDM) and analyzing data from 262 Chinese cities covering the period 2013–2021, the results reveal that: (1) smart cities significantly benefit from the institutional support of the local CET policy, resulting in an 8.55% reduction in energy consumption in the pilot city; (2) AI advancement contributes directly to reducing energy consumption in surrounding areas by 21.84% through spatial effects, and compensates for the imbalance of regional renewable energy caused by the “siphon effect” of CET policy. This study provides empirical evidence for developing countries to build green and resilient cities. This paper proposes the need to build a national CET market, strengthen government supervision, and make reasonable use of AI technology, transforming the green and resilient model of smart cities from Chinese experience to global practice.
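For reference, a spatial Durbin model of the kind named in the abstract is conventionally written as follows (textbook notation; the paper's exact specification, weight matrix, and controls may differ):

```latex
% Standard spatial Durbin model (SDM), textbook form:
%   y      : urban energy performance
%   W      : spatial weight matrix; \rho captures spatial spillovers
%   X      : covariates (e.g., CET policy, AI level); \theta their spatial lags
y = \rho W y + X\beta + W X \theta + \varepsilon,
\qquad \varepsilon \sim N(0, \sigma^{2} I_n)
```

The WX term is what lets the study attribute effects in surrounding areas (such as the reported 21.84% reduction) to a city's own AI advancement.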
... As compute demand increases faster than performance gains, this scaling approach results in an outsized environmental impact [45]. The large environmental footprint of generative AI (GenAI) has led researchers to call for more computationally efficient algorithms and hardware [14], yet few studies have explored 'green AI', that is, AI systems where efficiency is treated as a primary evaluation criterion alongside accuracy [40], from an HCI perspective. ...
Creativity is a valuable human skill that has long been augmented through both analog and digital tools. Recent progress in generative AI, such as image generation, provides a disruptive technological solution for supporting human creativity further and helping humans generate solutions faster. While AI image generators can help to rapidly visualize ideas based on user prompts, the use of such AI systems has also been critiqued due to their considerable energy usage. In this paper, we report on a user study (N = 24) to understand whether energy consumption can be reduced without impeding the tool's perceived creativity support. Our results show, for example, a main effect of (image generation) condition on energy consumption and on the creativity support index per prompt, but not per task, which seems mainly attributable to the number of images per prompt. We provide details of our analysis of the relation between energy usage, creativity support, and prompting behavior, including attitudes towards designing with AI and its environmental impact.
... A recent study found that if generative AI were to be incorporated into every Google search query, Google's electricity consumption associated with AI would be 29.3 TWh [12]. As a comparison, Google's total electricity consumption was 18.3 TWh in 2021, of which 10-15% was due to AI. ...
Due to increased computing use, data centers consume and emit a lot of energy and carbon. These contributions are expected to rise as big data analytics, digitization, and large AI models grow and become major components of daily working routines. To reduce the environmental impact of software development, green (sustainable) coding and claims that AI models can improve energy efficiency have grown in popularity. Furthermore, in the automotive industry, where software increasingly governs vehicle performance, safety, and user experience, the principles of green coding and AI-driven efficiency could significantly contribute to reducing the sector's environmental footprint. We present an overview of green coding and metrics to measure AI model sustainability awareness. This study introduces LLM as a service and uses a generative commercial AI language model, GitHub Copilot, to auto-generate code. Using sustainability metrics to quantify these AI models' sustainability awareness, we define the code's embodied and operational carbon.
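One way to quantify the "embodied and operational carbon" the paper refers to is the Green Software Foundation's Software Carbon Intensity formulation, SCI = (E x I + M) / R; the sketch below applies it with entirely invented inputs.

```python
# Sketch of an operational + embodied carbon estimate in the spirit of
# the Green Software Foundation's SCI formula: SCI = (E * I + M) / R.
# Every numeric input below is an invented assumption.
E = 0.012   # kWh consumed serving the functional unit (assumed)
I = 400.0   # g CO2e per kWh, grid carbon intensity (assumed region)
M = 1.5     # g CO2e embodied hardware carbon amortized to the unit (assumed)
R = 1000    # functional unit: 1000 generated code suggestions

operational_g = E * I                 # operational emissions in g CO2e
sci = (operational_g + M) / R         # g CO2e per suggestion
print(f"operational = {operational_g:.1f} gCO2e; SCI = {sci * 1000:.1f} mgCO2e/suggestion")
```

The split mirrors the paper's framing: the E x I term captures operational carbon, while M carries the embodied share from hardware manufacturing.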
Generative AI's (GenAI) rapid growth raises environmental concerns due to high energy consumption. Despite accelerating technological advancements, understanding how different stakeholders in the GenAI ecosystem can contribute to environmental sustainability remains limited. We address this gap with a taxonomy of actions for environmentally sustainable GenAI ecosystems. Our taxonomy, developed through a design science approach combining literature review and case analysis, categorizes environmental sustainability interventions across resources, models, and usage. We identify key stakeholders (hardware manufacturers, cloud providers, model developers, application providers) and map their roles in implementing these actions. The taxonomy reveals trade-offs between performance, cost, and environmental sustainability, highlighting the need for context-specific strategies. Through an illustrative vignette, we demonstrate how GenAI application providers can systematically implement sustainability measures. We provide a framework for researchers and practitioners to develop environmentally responsible GenAI solutions, fostering coordinated action to ensure GenAI benefits without compromising environmental well-being.
Artificial intelligence (AI) is revolutionising neuroimaging by enabling automated analysis, predictive analytics, and the discovery of biomarkers for neurological disorders. However, traditional artificial neural networks (ANNs) face challenges in processing spatiotemporal neuroimaging data due to their limited temporal memory and high computational demands. Spiking neural networks (SNNs), inspired by the brain’s biological processes, offer a promising alternative. SNNs use discrete spikes for event-driven communication, making them energy-efficient and well suited for the real-time processing of dynamic brain data. Among SNN architectures, NeuCube stands out as a powerful framework for analysing spatiotemporal neuroimaging data. It employs a 3D brain-like structure to model neural activity, enabling personalised modelling, disease classification, and biomarker discovery. This paper explores the advantages of SNNs and NeuCube for multimodal neuroimaging analysis, including their ability to handle complex spatiotemporal patterns, adapt to evolving data, and provide interpretable insights. We discuss applications in disease diagnosis, brain–computer interfaces, and predictive modelling, as well as challenges such as training complexity, data encoding, and hardware limitations. Finally, we highlight future directions, including hybrid ANN-SNN models, neuromorphic hardware, and personalised medicine. Our contributions in this work are as follows: (i) we give a comprehensive review of an SNN applied to neuroimaging analysis; (ii) we present current software and hardware platforms, which have been studied in neuroscience; (iii) we provide a detailed comparison of performance and timing of SNN software simulators with a curated ADNI and other datasets; (iv) we provide a roadmap to select a hardware/software platform based on specific cases; and (v) finally, we highlight a project where NeuCube has been successfully used in neuroscience. The paper concludes with discussions of challenges and future perspectives.
The rapid growth of information technology (IT) has ushered in a new global, digital-driven, hyperconnected world. Digital and networked technologies are ubiquitous and ingrained in our lives. This chapter discusses the interrelationship between IT and sustainability based on a conceptual framework which focuses on IT for sustainability (IT4S). It discusses selected resource- and well-being-oriented aspects of sustainable design, deployment, and usage of IT, and their adoption in the economy and society. It highlights the critical role that IT is increasingly playing in achieving the UN Sustainable Development Goals (UN SDGs).
The availability of real-world object stimuli that meet researchers’ requirements, for example regarding colour, orientation and resolution, is an ongoing challenge in visual cognition research. Traditionally, there has been a reliance on manually curated object stimuli, which are inefficient to create and may afford researchers limited control over stimulus characteristics. However, recent advances in artificial intelligence (AI) can facilitate the generation of custom-made, highly realistic visual stimuli. We report a generative AI method we used to efficiently generate 200 images of everyday objects for use in research. We also report the results from a subsequent validation study in which we assessed the nameability, perceived realism and familiarity of the stimuli in a sample of 45 younger (18-35) and 45 older (65-85) adults. As anticipated, the majority of the stimuli were rated highly across all three measures, and no significant age differences were observed. The results thus generally validated most of the stimuli for future research requiring nameable, realistic everyday object images. The stimuli, each in seven colours, and the corresponding validation scores are openly available for future use by others. Our research highlights the broader utility of an AI-based approach for generating realistic object stimuli. Our method is reproducible, flexible, and efficient, providing a valuable reference for researchers seeking to custom-create their own object stimuli.
Artificial Intelligence (AI) has expanded significantly in recent years, permeating various sectors of the economy and daily lives. However, this rapid adoption requires an analysis of the underlying trade-offs associated with its operation, which are often unknown to the public. Based on an analysis of data extracted from academic articles, technical reports, data repositories, and government documents, this article explores the physical, energy, and geopolitical dimensions underpinning AI. Despite often being perceived as immaterial, AI relies on a vast and complex physical infrastructure, supported by data centers that house thousands of pieces of equipment manufactured from a wide range of minerals and metals, many of which are classified as critical. Currently, approximately 12,000 data centers are in operation worldwide, including 992 hyperscale facilities that cover areas of thousands of square meters. The short life cycle of data center equipment, combined with inadequate disposal, removes valuable metals from the supply chain, intensifying mineral extraction and exacerbating socio-environmental impacts. Meanwhile, the competition between the United States and China for control over critical minerals and leadership in AI technologies has heightened geopolitical tensions, with mutual restrictions on the export of advanced technologies and essential minerals. Another key aspect is the high energy consumption of AI applications: in the United States, data centers already account for about 4% of national electricity consumption, with projections reaching 9.1% by 2030. Although major technology companies invest in renewable energy sources, such as solar and wind, to meet this growing demand, these sources also require significant volumes of critical minerals. This set of factors highlights the complex interconnection between Artificial Intelligence, Data Centers, Critical Minerals, Energy, and Geopolitics.
Life in society is a function of the tension between the tangible and the intangible, although human beings have a natural tendency to give more attention and meaning to what their senses directly perceive. If there is smoke, we believe there is fire. AI, understood in a broad sense, is becoming the new electricity or even the new oxygen: a technology deified in such a disruptive, omnipotent, and omnipresent way that it promises to revolutionize all dimensions of society, from work, mobility, teaching, health, and business to the very nature of life. However, the deep and structurally unsustainable material dimension of this technology has received less attention; without smoke, no one looks for the origin of the fire, and it spreads at a speed never seen before. The objective of this article is to identify the main layers of AI's materiality, questioning its apparently benevolent relationship with management.
Background
In recent years, Large Models (LMs) have been rapidly developed, including large language models, visual foundation models, and multimodal LMs. They are updated and iterated at a very fast pace. These LMs can accomplish many tasks, e.g., daily work assistance, intelligent customer service, and intelligent factory scheduling. Their development has contributed to various industries in human society.
Aims
The architectural flaws of LMs lead to several problems, including hallucinations and difficulty in locating errors, limiting their performance. Solving these problems properly can facilitate their further development.
Methods
This work first introduces the development of LMs and identifies their current problems, including data and energy consumption, catastrophic forgetting, limited reasoning ability, fault localization, and ethical problems. Then, potential solutions to these problems are provided, including increasing data and computation capability, neural-symbolic synergy, and orienting data toward human patterns.
Discussion
This work discusses developing vertical-domain LMs on top of base LMs. In addition, this work introduces three typical real-world applications of LMs: autonomous driving, smart industrial production, and intelligent medical assistance.
Conclusion
By embracing the advantages of LMs and solving their fundamental problems, many industries are expected to achieve promising prospects in the future.
This paper introduces an analog spiking neuron that utilizes time-domain information, i.e. a time interval of two signal transitions and a pulse width, to construct a spiking neural network (SNN) for a hardware-friendly physical reservoir computing (RC) on a complementary metal-oxide-semiconductor platform. A neuron with leaky integrate-and-fire is realized by employing two voltage-controlled oscillators with opposite sensitivities to the internal control voltage, and the neuron connection structure is restricted by the use of only 4 neighboring neurons on a 2-dimensional plane to feasibly construct a regular network topology. Such a system enables us to compose an SNN with a counter-based readout circuit, which simplifies the hardware implementation of the SNN. Moreover, another technical advantage thanks to the bottom–up integration is the capability of dynamically capturing every neuron state in the network, which can significantly contribute to finding guidelines on how to enhance the performance for various computational tasks in temporal information processing. Diverse nonlinear physical dynamics needed for RC can be realized by collective behavior through dynamic interaction between neurons, like coupled oscillators, despite the simple network structure. With behavioral system-level simulations, we demonstrate physical RC through short-term memory and exclusive OR tasks, and the spoken digit recognition task with an accuracy of 97.7% as well. Our system is considerably feasible for practical applications and can also be a useful platform for studying the mechanism of physical RC.
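As a software point of reference for the leaky integrate-and-fire behaviour the analog circuit implements, here is a minimal discrete-time LIF simulation; the parameters are arbitrary and unrelated to the CMOS design.

```python
# Minimal discrete-time leaky integrate-and-fire (LIF) neuron, the
# dynamics that the analog time-domain circuit above realizes in
# hardware. Parameters are arbitrary illustrative values.
import numpy as np

def lif(input_current, dt=1e-3, tau=20e-3, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Return the membrane trace and spike times for an input current array."""
    v = v_rest
    trace, spikes = [], []
    for step, i_in in enumerate(input_current):
        v += dt / tau * (-(v - v_rest) + i_in)   # leaky integration
        if v >= v_th:                            # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset                          # reset after firing
        trace.append(v)
    return np.array(trace), spikes

_, spike_times = lif(np.full(200, 1.5))   # constant 1.5 drive for 200 ms
print(f"{len(spike_times)} spikes, first at {spike_times[0] * 1e3:.0f} ms")
```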
Embodied Artificial Intelligence (Embodied AI) is gaining momentum in the machine learning communities with the goal of leveraging current progress in AI (deep learning, transformers, large language and visual-language models) to empower robots. In this chapter we put this work in the context of "Good Old-Fashioned Artificial Intelligence" (GOFAI) (Haugeland, 1989) and the behavior-based or embodied alternatives (R. A. Brooks 1991; Pfeifer and Scheier 2001). We claim that the AI-powered robots are only weakly embodied and inherit some of the problems of GOFAI. Moreover, we review and critically discuss the possibility of cross-embodiment learning (Padalkar et al. 2024). We identify fundamental roadblocks and propose directions on how to make progress.
The rise of automation, artificial intelligence (AI), and autonomous systems raises important questions about the future role of humans and the field of human factors/ergonomics in workplaces. This paper builds on Dr. Peter Hancock's 2023 'Are Humans Still Necessary?' article published in the Ergonomics journal. Using a multi-method approach that included a debate, opinion polling, roundtable discussions, and AI queries, the current effort examined the necessity of human involvement in future work environments. Debate team members presented arguments for and against the need for human workers, considering human factors, technology, and socioeconomic factors. Observations indicate that while AI may handle routine tasks, humans will likely remain essential for complex decision making, creativity, and ethical considerations. The paper advocates for viewing workplace dynamics as collaborative human-AI partnerships rather than competition, highlighting the need for a transdisciplinary approach in which human factors/ergonomics professionals play a vital role in enhancing these relationships.
This article examines the feasibility of running machine-learning models on mobile devices to forecast electricity consumption in microgrids. Such devices can provide additional protection when processing private data, but they are subject to a substantial list of constraints, which requires an additional assessment of both how the models are prepared and how they are executed. In this work, we evaluate the effectiveness of LSTM-based electricity-consumption forecasting models after their conversion into the mobile formats CoreML and TensorFlow Lite, for subsequent use as part of a forecasting subsystem on peripheral (edge) mobile devices. Datasets from two categories of consumers were used to train and evaluate the models: a closed-type industrial enterprise and a small civil-infrastructure facility. Test prototypes of the models were developed with the TensorFlow package and then converted into formats that allow them to run on Apple mobile devices. The converted models were assessed by size, forecasting accuracy, inference speed, RAM consumption, impact on device heating, and CPU load. The evaluation concluded that the loss of forecasting accuracy after conversion is negligible, and the performance of the converted models allows real-time forecasting with an acceptable level of use of the device's computing resources. This result confirms that mobile devices can serve as edge computing resources in an energy-consumption forecasting subsystem.
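The conversion step evaluated in the study follows the standard TensorFlow Lite export path; a minimal sketch with a toy LSTM forecaster (the architecture, data, and file name are placeholders, and the CoreML route via the coremltools package is analogous).

```python
# Minimal sketch: build a toy LSTM forecaster in Keras and convert it
# to TensorFlow Lite for on-device inference. The model shape, training
# data, and file name are placeholders, not the study's configuration.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(24, 1)),   # 24 past hourly readings
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),        # next-hour consumption
])
model.compile(optimizer="adam", loss="mse")
X = np.random.rand(256, 24, 1).astype("float32")   # placeholder data
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=1, verbose=0)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("forecaster.tflite", "wb").write(tflite_model)
```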
This paper explores the interplay of AI language technologies, sign language interpreting, and linguistic access, highlighting the complex interdependencies shaping access frameworks and the tradeoffs these technologies bring. While AI tools promise innovation, they also perpetuate biases, reinforce technoableism, and deepen inequalities through systemic and design flaws. The historical and contemporary privileging of sign language interpreting as the dominant access model, and the broader inclusion ideologies it reflects, shape AI's development and deployment, often sidelining deaf languaging practices and introducing new forms of linguistic subordination to technology. Drawing on Deaf Studies, Sign Language Interpreting Studies, and crip technoscience, this paper critiques the framing of AI as a substitute for interpreters and examines its implications for access hierarchies. It calls for deaf-led approaches to foster AI systems that remain equitable, inclusive, and trustworthy, supporting rather than undermining linguistic autonomy and contributing to deaf-aligned futures.
AI in Society provides an interdisciplinary corpus for understanding artificial intelligence (AI) as a global phenomenon that transcends geographical and disciplinary boundaries. Edited by a consortium of experts hailing from diverse academic traditions and regions, the 11 edited and curated sections provide a holistic view of AI’s societal impact. Critically, the work goes beyond the often Eurocentric or U.S.-centric perspectives that dominate the discourse, offering nuanced analyses that encompass the implications of AI for a range of regions of the world. Taken together, the sections of this work seek to move beyond the state of the art in three specific respects. First, they venture decisively beyond existing research efforts to develop a comprehensive account and framework for the rapidly growing importance of AI in virtually all sectors of society. Going beyond a mere mapping exercise, the curated sections assess opportunities, critically discuss risks, and offer solutions to the manifold challenges AI harbors in various societal contexts, from individual labor to global business, law and governance, and interpersonal relationships. Second, the work tackles specific societal and regulatory challenges triggered by the advent of AI and, more specifically, large generative AI models and foundation models, such as ChatGPT or GPT-4, which have so far received limited attention in the literature, particularly in monographs or edited volumes. Third, the novelty of the project is underscored by its decidedly interdisciplinary perspective: each section, whether covering Conflict; Culture, Art, and Knowledge Work; Relationships; or Personhood—among others—will draw on various strands of knowledge and research, crossing disciplinary boundaries and uniting perspectives most appropriate for the context at hand.
3D photogrammetry became an established part of Finnish field archaeology in the 2010s and 2020s. It enables the production of photorealistic, dimensionally accurate models with light and inexpensive equipment. This review compares new smartphone-based photogrammetry applications that perform model computation in their own cloud services. Based on test measurements at the Bronze Age cairn of Kulosaari, all three applications produce visually convincing models relatively quickly and with little effort, but the geometry of the models differs between them by several centimetres. The review also trials an application that combines a neural radiance field model (Neural Radiance Fields) with photogrammetry, with moderately promising results.
The rapid expansion of artificial intelligence (AI) and cloud computing is creating a significant but often overlooked impact on global water resources. This paper presents a global assessment of water consumption in AI-driven data centres, distinguishing between operational water use at the facility and at the electricity generation stage, and embodied water associated with hardware manufacturing and supply chain. To anticipate future demand, a scenario-based probabilistic forecasting framework inspired by Bayesian methods is developed, combining sparse empirical data with expert-informed assumptions and policy-relevant growth trajectories for the years 2030 and 2050. Results suggest that, without mitigation, global water use associated with data centres could increase more than seven times by mid-century, with cooling-related operational use accounting for the majority of demand. Several mitigation pathways are identified, including improvements in cooling efficiency, adoption of alternative technologies, and infrastructure planning that takes into account regional water availability. A sensitivity analysis highlights the strong influence of compute growth and efficiency trends on future outcomes. The findings offer a transparent and adaptable basis for aligning AI infrastructure development with long-term water sustainability goals.
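The scenario-based probabilistic idea behind such a forecast can be illustrated with a minimal Monte Carlo sketch; the growth and efficiency distributions below are invented stand-ins for exposition, not the paper's calibrated assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
years = 2050 - 2025

# Illustrative priors (assumptions for this sketch): uncertain annual
# compute-driven water-demand growth and annual efficiency improvement.
growth = rng.lognormal(mean=np.log(1.12), sigma=0.05, size=n)  # per year
efficiency = rng.uniform(0.01, 0.05, size=n)                   # per year

# Water use relative to today after compounding both effects.
projected = (growth * (1.0 - efficiency)) ** years
lo, med, hi = np.percentile(projected, [5, 50, 95])
print(f"2050 vs today: median x{med:.1f}, 90% interval x{lo:.1f}-x{hi:.1f}")
```

The point of such a framework is that sparse data enter as distributions rather than point estimates, so the output is an uncertainty band that policy scenarios (e.g., faster efficiency gains) can shift.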
Artificial intelligence (AI) is increasingly entangled with the polycrisis: persistent, interconnected disruptions shaping the Anthropocene. Using the Anthropocene Traps framework, we analyze 14 structural, self-reinforcing dynamics, revealing how AI both reinforces and potentially counteracts the polycrisis. While AI may enhance information gathering, efficiency, and ecological research, it also intensifies growth-for-growth logics, infrastructure lock-ins, technological arms races, and biosphere disconnect. This ambivalence reflects AI's deep embedding in the institutional, economic, and normative structures that drive the polycrisis. Addressing these dynamics requires political reform and resilience strategies explicitly informed by the Anthropocene Traps framework.
In this paper, we critique a pervasive discourse about the environmental implications of artificial intelligence as witnessed in news media, public policy analysis and computer science literature. In this discourse, AI is seen through a paradoxical lens: as essential to reducing the damaging effects of the climate crisis and, at the same time, a looming threat to both the climate and broader ecological crises. This seemingly contradictory framing of AI as both 'remedy' and 'poison' resonates with the concept of pharmakon, a heuristic device used extensively in the philosophy of technology. In this paper we show how the policy discourses of leading actors such as the OECD, Green Software Foundation and Microsoft's data scientists resolve the pharmacological nature of AI's environmental impact by narrowing the scope of its toxic properties and hence the solutions required to enable the technology's continued use and expansion. We argue that these discourses reduce and oversimplify the problem at stake to a simple proposition: we need more AI for climate tech applications but less energy-thirsty AI. We show how this framing of the problem arose from a particular recent political history of the 'techlash', which in turn prompted considerable efforts to quantify AI's carbon footprint. We suggest a different problematisation inspired by science and technology studies scholar Andrew Barry's methodological approach, one that can re-open the problem-space of AI's environmental impact. This approach is sketched through four methodological starting points: unpacking the material entanglements between AI and ecologies; being sensitive to geohistory, that is, the specific, locally situated nature of the data centres and energy grids sustaining AI training, tuning and deployment; envisioning the multiplicity of solutions to the climate crisis (beyond carbon accounting of the AI footprint); and finally, rereading AI (by acknowledging the heterogeneity of actors and interests along AI supply chains).
In current radiology practice, radiologists identify a finding in the current imaging exam, manually match it against the description from the prior exam report and assess interval changes. Large Language Models (LLMs) can identify report findings, but their ability to track interval changes has not been tested. The goal of this study was to determine the utility of a privacy-preserving LLM for matching findings between two reports (prior and follow-up) and tracking interval changes in size. In this retrospective study, body MRI reports from NIH (internal) were collected. A two-stage framework was employed for matching findings and tracking interval changes. In Stage 1, the LLM took a sentence from the follow-up report and identified the matching finding in the prior report. In Stage 2, the LLM predicted the interval change status (increase, decrease, or stable) of the matched findings. Seven LLMs were evaluated locally and the best LLM was validated on an external non-contrast chest CT dataset. Agreement with the reference (radiologist) was measured using Cohen's Kappa (κ). The internal body MRI dataset had 240 studies (120 patients, mean age, 47 ± 16 years; 65 men) and the external non-contrast chest CT dataset contained 134 studies (67 patients, mean age, 58 ± 18 years; 44 men). On the internal dataset, the TenyxChat-7B LLM performed best at matching findings, with an F1-score of 85.4% (95% CI: 80.8, 89.9), significantly outperforming the other LLMs (p < 0.05). For interval change detection, the same LLM achieved a 62.7% F1-score and showed moderate agreement (κ = 0.46, 95% CI: 0.37, 0.55). On the external dataset, the same LLM attained F1-scores of 81.8% (95% CI: 74.4, 89.1) for matching findings and 77.4% for interval change detection, with substantial agreement (κ = 0.64, 95% CI: 0.49, 0.80). The TenyxChat-7B LLM used for matching longitudinal report findings and tracking interval changes showed moderate to substantial agreement with the reference standard. For structured reporting, the LLM can pre-fill the "Findings" section of the next follow-up exam report with a summary of longitudinal changes to important findings. It can also enhance communication between the referring physician and radiologist.
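A minimal sketch of the two-stage framework described above, assuming a locally hosted, privacy-preserving model behind a hypothetical run_llm helper; the prompts are illustrative placeholders, not the study's actual prompts.

```python
def run_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a locally hosted LLM."""
    raise NotImplementedError("wire up a local model, e.g. TenyxChat-7B")

def match_finding(followup_sentence: str, prior_report: str) -> str:
    # Stage 1: locate the corresponding finding in the prior report.
    return run_llm(
        f"Prior report:\n{prior_report}\n\n"
        "Quote the prior-report sentence describing the same finding as:\n"
        f"{followup_sentence}\nIf there is no match, answer NONE."
    )

def interval_change(prior_finding: str, followup_sentence: str) -> str:
    # Stage 2: classify the size change of the matched finding.
    return run_llm(
        f"Prior finding: {prior_finding}\n"
        f"Follow-up finding: {followup_sentence}\n"
        "Answer with exactly one word: increase, decrease, or stable."
    )
```

Keeping the two stages separate mirrors the study's design: matching and change classification can then be scored independently against the radiologist reference.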
The urgent global imperative to mitigate climate change has brought carbon-neutral supply chains to the forefront of sustainability and operations management discourse. As organizations strive to meet net-zero emission targets, technologies such as Artificial Intelligence (AI) and Quantum Computing (QC) have emerged as powerful enablers of this transformation. This systematic literature review investigates the roles of AI and QC in achieving carbon-neutral supply chains, examining how these technologies optimize forecasting, logistics, procurement, emissions monitoring, and real-time decision-making across diverse industrial contexts. By following the PRISMA 2020 methodology, a total of 87 peer-reviewed articles published between 2015 and 2025 were identified, screened, and synthesized from databases including Scopus, Web of Science, IEEE Xplore, ScienceDirect, and Google Scholar. The review reveals that AI significantly enhances operational sustainability through intelligent demand forecasting, inventory optimization, carbon footprint assessment, and green procurement decision-making. Quantum computing, while still in its early stages of maturity, offers high-potential applications in solving complex optimization problems such as vehicle routing, energy grid balancing, and low-emission manufacturing simulation. The integration of AI and QC, especially when combined with technologies like digital twins and blockchain, was found to support advanced sustainability modeling, emissions traceability, and secure carbon data verification. These integrated systems enable supply chains to become not only more efficient but also more transparent and accountable in their environmental impact. However, the review also highlights substantial challenges to implementation, including quantum hardware limitations, high energy demands, cost barriers, and the lack of integration with existing enterprise systems. This study contributes to the growing field of sustainable digital transformation by offering a comprehensive understanding of how AI and quantum technologies can jointly support carbon neutrality objectives in global supply chain ecosystems.
Addressing global environmental conservation problems requires rapidly translating natural and conservation social science evidence to policy‐relevant information. Yet, exponential increases in scientific production combined with disciplinary differences in reporting research make interdisciplinary evidence syntheses especially challenging. Ongoing developments in natural language processing (NLP), such as large language models, machine learning (ML), and data mining, hold the promise of accelerating cross‐disciplinary evidence syntheses and primary research. The evolution of ML, NLP, and artificial intelligence (AI) systems in computational science research provides new approaches to accelerate all stages of evidence synthesis in conservation social science. To show how ML, language processing, and AI can help automate and scale evidence syntheses in conservation social science, we describe methods that can automate querying the literature, process large and unstructured bodies of textual evidence, and extract parameters of interest from scientific studies. Automation can translate to other research agendas in conservation social science by categorizing and labeling data at scale, yet there are major unanswered questions about how to use hybrid AI‐expert systems ethically and effectively in conservation.
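As a toy illustration of "categorizing and labeling data at scale", the sketch below trains a relevance screener on a handful of invented abstracts; a real synthesis pipeline would use thousands of expert-labeled examples and validate against held-out data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy data: 1 = conservation social science, 0 = not relevant.
abstracts = [
    "Community-based forest management improved household incomes.",
    "We report a new deep-sea fish species from the Atlantic.",
    "Protected-area governance and local participation in Kenya.",
    "Genome assembly of a model organism using long reads.",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus a linear classifier: a common first-pass screener
# before more expensive LLM-based extraction of study parameters.
screener = make_pipeline(TfidfVectorizer(), LogisticRegression())
screener.fit(abstracts, labels)
print(screener.predict(["Stakeholder attitudes toward marine protected areas"]))
```

The hybrid AI-expert question raised above then becomes concrete: which of the machine's labels a human must audit, and at what sampling rate.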
With the ever-growing adoption of artificial intelligence (AI)-based systems, the carbon footprint of AI is no longer negligible. AI researchers and practitioners are therefore urged to hold themselves accountable for the carbon emissions of the AI models they design and use. In recent years this has led to research tackling the environmental sustainability of AI, a field referred to as Green AI. Despite the rapid growth of interest in the topic, a comprehensive overview of Green AI research is to date still missing. To address this gap, in this article, we present a systematic review of the Green AI literature. From the analysis of 98 primary studies, different patterns emerge. The topic experienced considerable growth from 2020 onward. Most studies consider monitoring AI model footprint, tuning hyperparameters to improve model sustainability, or benchmarking models. The literature comprises a mix of position papers, observational studies, and solution papers. Most papers focus on the training phase, are algorithm-agnostic or study neural networks, and use image data. Laboratory experiments are the most common research strategy. Reported Green AI energy savings go up to 115%, with savings over 50% being rather common. Industrial parties are involved in Green AI studies, albeit most target academic readers. Green AI tool provisioning is scarce. In conclusion, the Green AI research field appears to have reached a considerable level of maturity. This review therefore suggests that the time is ripe to adopt other Green AI research strategies and to port the numerous promising academic results to industrial practice.
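The "monitoring AI model footprint" strand of this literature can be made concrete with the open-source codecarbon package; below is a minimal sketch, assuming codecarbon is installed (pip install codecarbon) and wrapping an arbitrary training function.

```python
from codecarbon import EmissionsTracker

def train_model() -> None:
    # Stand-in for any training loop; burn a little CPU so the
    # tracker has something to measure.
    total = 0
    for i in range(10_000_000):
        total += i * i

# The tracker samples hardware power draw and applies a regional grid
# carbon-intensity factor to convert energy into CO2-equivalent.
tracker = EmissionsTracker(project_name="green-ai-demo")
tracker.start()
train_model()
emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent
print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```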
Amid the current climate emergency and global energy crisis, regulators have started to consider their options to limit the power demand of cryptocurrency networks. One specific way crypto-asset communities can limit their environmental impact is by avoiding or replacing the energy-intensive proof-of-work (PoW) mining mechanism. Ethereum, the second largest crypto-asset by market capitalization, had its PoW replaced with an alternative known as proof-of-stake during an event called The Merge on September 15, 2022. In this perspective, the likely range of electricity saved due to this change is estimated, while the limitations in assessing these figures are highlighted. Lastly, the challenges and opportunities in replicating The Merge on other cryptocurrencies such as Bitcoin are discussed.
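For intuition, the scale of such a saving follows from simple power-times-time arithmetic; the figures below are illustrative round numbers chosen for this sketch, not the perspective's own estimates.

```python
HOURS_PER_YEAR = 8760

pow_draw_mw = 8000.0   # assumed pre-Merge proof-of-work network draw, MW
pos_draw_mw = 2.0      # assumed post-Merge proof-of-stake draw, MW

# Annualised electricity saving and relative reduction.
saved_twh = (pow_draw_mw - pos_draw_mw) * HOURS_PER_YEAR / 1e6
reduction = 1.0 - pos_draw_mw / pow_draw_mw
print(f"~{saved_twh:.0f} TWh/yr saved ({reduction:.3%} reduction)")
```

The hard part, as the perspective stresses, is not this arithmetic but estimating the pre-Merge draw itself, which depends on unobservable miner hardware mixes.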
Progress in machine learning (ML) comes with a cost to the environment, given that training ML models requires significant computational resources, energy and materials. In the present article, we aim to quantify the carbon footprint of BLOOM, a 176-billion parameter language model, across its life cycle. We estimate that BLOOM's final training emitted approximately 24.7 tonnes of CO2eq if we consider only the dynamic power consumption, and 50.5 tonnes if we account for all processes ranging from equipment manufacturing to energy-based operational consumption. We also study the energy requirements and carbon emissions of its deployment for inference via an API endpoint receiving user queries in real time. We conclude with a discussion regarding the difficulty of precisely estimating the carbon footprint of ML models and future research directions that can contribute towards improving carbon emissions reporting.
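The dynamic-consumption figure follows directly from energy times grid carbon intensity. A quick sanity check, assuming roughly 433 MWh of training energy and a French grid intensity of about 57 gCO2eq/kWh (approximate values as reported for this study, stated here as assumptions):

```python
training_energy_kwh = 433_000      # ~433 MWh consumed during final training
grid_intensity_g_per_kwh = 57      # approximate French grid, gCO2eq/kWh

# grams -> tonnes via 1e6
emissions_tonnes = training_energy_kwh * grid_intensity_g_per_kwh / 1e6
print(f"~{emissions_tonnes:.1f} t CO2eq")  # ~24.7 t, the dynamic-power figure
```

The gap between 24.7 and 50.5 tonnes is exactly the life-cycle remainder: equipment manufacturing and the static share of operational consumption.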
In this essay, I begin by identifying the reasons that automation has not wiped out a majority of jobs over the decades and centuries. Automation does indeed substitute for labor—as it is typically intended to do. However, automation also complements labor, raises output in ways that leads to higher demand for labor, and interacts with adjustments in labor supply. Journalists and even expert commentators tend to overstate the extent of machine substitution for human labor and ignore the strong complementarities between automation and labor that increase productivity, raise earnings, and augment demand for labor. Changes in technology do alter the types of jobs available and what those jobs pay. In the last few decades, one noticeable change has been a "polarization" of the labor market, in which wage gains went disproportionately to those at the top and at the bottom of the income and skill distribution, not to those in the middle; however, I also argue, this polarization is unlikely to continue very far into future. The final section of this paper reflects on how recent and future advances in artificial intelligence and robotics should shape our thinking about the likely trajectory of occupational change and employment growth. I argue that the interplay between machine and human comparative advantage allows computers to substitute for workers in performing routine, codifiable tasks while amplifying the comparative advantage of workers in supplying problem-solving skills, adaptability, and creativity.
Machine learning (ML) workloads have rapidly grown, raising concerns about their carbon footprint. We show four best practices to reduce ML training energy and carbon dioxide emissions. If the whole ML field adopts best practices, we predict that by 2030, total carbon emissions from training will decline.