Should AI be Designed to Save Us from Ourselves? Artificial Intelligence for Sustainability

Myanna Lahsen

IEEE Technology and Society Magazine, June 2020. Digital Object Identifier 10.1109/MTS.2020.2991502. Date of current version: 18 June 2020.
Reconciling Environmental Sustainability with Human Needs
The environment is being destroyed at alarming rates, undermining prospects of reconciling human wellbeing and development with respect for planetary boundaries [1]. Even a thirty percent increase in global action towards goals already enshrined in international agreements on environment, trade and development would not achieve widely shared sustainability goals without transgressing planetary boundaries [1]. In short, existing institutions are not steering us safely towards environmental sustainability and the common good.
The challenge of fostering positive social and environmental policy based on scientific recommendations is growing steeper, not easier, in the age of artificial intelligence (AI) and social media. Political uses of AI in the form of data flows, machine learning, and large-scale data analytics and algorithms are threatening to make societies more rather than less opaque, creating a "black box society" [2]. These technologies carry profound threats [3]–[5], and their current governance is weakening democratic ideals and their potential to serve the common good [6]. Yet information and communications technologies (ICT) — both traditional media and new, artificial intelligence-steeped social media — are also potent means of nurturing the needed understanding, values, and mobilization [7]. They must be governed to arouse the necessary levels of attention, understanding, care, and mass action.
This need and possibility are not widely discussed in mainstream academic and societal arenas, not even in prominent, quite detailed prescriptions for "transformative policies" [1], [8] by which to reconcile the sustainable development goals with respect for planetary boundaries within a fast-narrowing time frame. These proposals, like current science and scholarship more generally, invest a striking level of hope in civil society as the needed impetus for change, without acknowledging that current information environments are inadequate for stimulating the needed mass public mobilization and transformations. Existing recommendations are incomplete — and will not be sufficiently heeded — if unaccompanied by clear recommendations and action plans to manage and reform traditional mass media and to harness new digital media and AI to stimulate changes in values, understanding, and social engagements that in turn can transform legislation and economic policies.
Recognition of this — and, thus, action to harness ICTs to the common good and to sustainability goals — may hinge on overcoming dominant misunderstandings of both traditional and new media as neutral transmitters of information rather than constitutive of reality through the shaping of subjectivities and of meaning [9], along with other common, historically conditioned, and erroneous assumptions about information and communications technologies that sustain the status quo [10].
Insufficiently attentive to obstacles to the hoped-for societal mobilization, or perhaps recoiling from fear of social engineering, few are asking whether and how AI can be used to help foster the vitally needed changes in institutions, values, and worldviews in ways that respect human rights and principles of democracy, equity, and fairness. They are either not hearing or not heeding experts on AI, who stress that decision making about AI and its impacts is inescapable, and that its wise, democratic governance is urgently needed and holds historically unprecedented potential to reshape society for the common good.
Illustrating the Omission
With few exceptions, leading academic research on global environmental change and earth systems science is vague when it comes to the how of social change. At most, academic analyses tout multi-stakeholder dialogue and greater public participation, measures that thus far have not been visibly achieved, much less brought about the needed levels of institutional and structural change [9]. The topics of communications media reform and AI governance are rarely raised outside a subset of social-science media studies, even when (now AI-infused) communications media are identified as a problem.
Randers et al. [1] identify five transformative policies to reconcile Agenda 2030 development goals with resource- and environmental-quality concerns ("planetary boundaries" [11]), calling for rapid action to green energy matrixes and development models, increase access to education, and reduce global inequality. Sachs et al. [8] augment to six the number of transformative policies needed, adding more attention to social movement dimensions of the challenge and to both positive and negative potentials of "Artificial intelligence and other digital technologies — sometimes referred to as the Fourth Industrial Revolution — [which] are disrupting nearly every sector of the economy," including, specifically, the problem of manipulation by social media [8, p. 810].
One of the four major governance mechanisms that Sachs et al. propose to operationalize the Transformations is "Social activism to change norms and behaviors."¹ They discuss how technologies of the Fourth Industrial Revolution might be harnessed to achieve positive change in a myriad of areas, but not their potential to stimulate the needed activism and "changes in the hearts and minds of the people," which they note often precede — and drive — changes in legislation and economic policies [8, p. 812]. If changes in people's "hearts and minds" are a crucial conditioner of action, it would seem logical to start with, or at least dedicate more attention to, the problem of how to foster such changes in favor of the equity, development, and environmental concerns encapsulated in the SDGs. Sachs et al.'s analysis gives no cues as to how this crucial first step will be reached, just as it is silent on how their otherwise impressively comprehensive framework listing myriad policies conducive to achieving the SDGs actually might win public favor and be adopted.

¹The others are goal-based design and technology missions, goal-based organization of government and financing, and diplomacy and international cooperation for peace, finance, and partnerships.
A legacy of the linear model of the science-policy interface, this silence is endemic in academia. The same imprecision characterizes earlier scholarship, when traditional mass media (television, radio, newspapers, magazines, etc.), not AI-saturated, new interactive "social" media, were relatively more dominant. Raskin et al.'s 2002 book Great Transition [12, p. 47] discusses social dimensions of sustainability challenges. The authors — who include the late Robert Kates, social scientist and leading sustainability scientist — envision a "values-led shift toward an alternative global vision," including "lifestyle changes and greater social solidarity" [12, p. 47].
Similar to Sachs et al. and innumerable other analysts [13]–[18], they place their hope in civil society, saying that it needs to overcome fragmentation and unite to push for change. They mention knowledge and value change as key levers for transformation but are silent on how people will absorb the science, perceive an imperative to act, and successfully press public officials to also act. Raskin et al. mention the media only a few fleeting times, and exclusively as a source of problematic consumerism and corporate power, rather than an essential part of the solution: "The values of consumerism, materialism and possessive individualism spread rapidly, reinforced by communications media," they write, or "The market jubilation emanating from the media and the public relations machines of multinational corporations" (pp. 77–78).
Updating the framework twelve years later in a 2014 single-authored publication, Raskin does not mention communications technologies at all, new or old, despite the rapid rise of the newer technologies since his co-authored book. Recognizing that "[t]he principal characters now on the global stage — intergovernmental institutions, transnational corporations, and big civil society organizations — are unlikely candidates" for the role of change agents, he hints at a "global citizens movement ... stirring in the wings ... the missing actor in the drama of transition [who] could move toward center stage as crises intensify and consciousness shifts" [17, p. 6]. Raskin is silent on how that consciousness shift is to be achieved, just as the 2002 book was unspecific about the conditions that would engender an empowered and informed public.
If such mass mobilization has not been sufficiently forceful for decades (if ever), and if mass media are offering countervailing messages, how are large-scale shifts in values and lifestyles to come about? On what is their hope based? Why do they so conspicuously end their analyses where they do? Their silence reflects a complete silo between earth system science and critical communications scholars, who have long stressed the crucial importance of reforming mass media to achieve progressive change [19], because the profound, constitutive effects of current, corporate media landscapes obstruct such change [9] and are strengthened with the adoption of AI [20].
Misinformation in Turbo-Mode: A Growing Communication and Education Challenge
Misinformation is a growing problem for environmental action, and mass communications media and, more recently, artificial intelligence are its tools. AI-boosted social media platforms spread climate change disinformation at scale [21]. Whereas "snail mail" and "robo-call" telemarketing were used to spread climate denialism in the 1990s [22], [23], today's social media micro-targeting reaches many more people [22], [23]. Techniques such as "dark posts" are even less transparent and accountable [24]. The impact of these new technologies inspires Lewandowsky et al.'s provocative suggestion that we are seeing the emergence of "an alternative epistemology that does not conform to conventional standards of evidentiary support" [25, p. 356]. They plausibly suggest that we are at risk of moving towards a socio-political order in which expertise matters little ("experts are derided as untrustworthy or elitist whenever their reported facts threaten the rule of the well-financed or the prejudices of the uninformed") [25, p. 356] because "power lies with those most vocal and influential on social media: from celebrities and big corporations to botnet puppeteers who can mobilize millions of tweet bots or sock puppets — that is, fake online personas through which a small group of operatives can create an illusion of a widespread opinion" [25, p. 356]. Optimism that Covid-19 may have strengthened public acceptance of science and expertise is tempered by recognition of the deluge of misinformation circulating about it online [26].
This new, dizzying information environment has helped climate-change deniers (including political leaders) utilize social media to manipulate voters and election processes by mining personal microdata from platforms such as Facebook and Twitter.
Besides legitimate, human users, online social media platforms host automated agents called social bots — computer algorithms that automatically produce content and interact with humans on social media. Bots can be entertaining and helpful, but they can also serve sinister motives. They can imitate humans to manipulate discussions, alter the apparent popularity of users, post incendiary messages that obstruct constructive debate, and spread misinformation. Such nefarious uses of social bots abound. By harvesting our own data from platforms such as Facebook, bots are increasingly lifelike, making their artifice hard to detect. Multiplication of bot accounts in coordinated fashion can amplify the power of individuals and small groups by means of orchestrated "astroturf" campaigns that simulate grassroots support for political agendas [27].
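To make the amplification mechanism concrete, the toy simulation below is a minimal sketch of the dynamic described above, not a description of any real platform or campaign; the account counts, posting rates, and the 70% support figure are illustrative assumptions.

```python
import random

def simulate_feed(n_humans=1000, n_bots=30, bot_posts_per_day=50,
                  human_posts_per_day=1, support_among_humans=0.7,
                  seed=42):
    """Toy model of 'astroturf' amplification.

    Genuine users mostly support a (hypothetical) climate policy, but a
    small set of coordinated bot accounts posts against it at high volume.
    Returns the share of *posts* opposing the policy, i.e., the impression
    a casual reader scrolling the feed would form.
    """
    rng = random.Random(seed)

    posts = []
    for _ in range(n_humans):
        stance = "support" if rng.random() < support_among_humans else "oppose"
        posts.extend([stance] * human_posts_per_day)

    # Coordinated bot accounts all push the same opposing message.
    posts.extend(["oppose"] * (n_bots * bot_posts_per_day))

    return posts.count("oppose") / len(posts)

if __name__ == "__main__":
    share = simulate_feed()
    # Roughly 70% of humans support the policy, yet most posts oppose it.
    print(f"Share of posts opposing the policy: {share:.0%}")
```

Even in this crude model, a few dozen coordinated accounts posting at high frequency invert the apparent balance of opinion in the feed relative to the actual distribution among human users, which is precisely the illusion of widespread opinion the passage describes.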
As narrow-AI bots grow ever more intelligent and lifelike, their influence grows as they harness data about our own proclivities to better tailor their messages. Bots can produce false appearances of authenticity, accuracy, popularity, and consensus [28], using these amplified powers of persuasion to prey on all-too-human tendencies to align our attitudes, beliefs, and factual interpretations with perceived social norms [29], [30].
Some bots specifically aim to achieve greater influence. Bots are capable of achieving influence in real-life networks by aggregating followers and expanding their social circles. They do this by identifying and attracting the attention of influential people in the networks, inserting themselves into popular discussions, and calling attention to themselves by sending inquiries to the most popular and influential people. Using keyword searches online, they can generate topically appropriate and even possibly interesting content [28, p. 99].
The so-called Oregon petition campaign against climate policy that reached tens of thousands in the 1990s is a clumsy prelude to how new AI-supported technologies increase the power to shape public factual understandings and opinions. "Conjuring science" [31] by falsely appearing sanctioned by peer-reviewed science, the discredited 1998 petition still has life on social media, where it was the most circulated climate story in 2016 [32]. And such misinformation has great persuasive power, when some three-quarters of adult Americans are prone to believe fake news [24, p. 9]. Uses of bots in such influence campaigns are growing cheaper and therefore much more accessible and plentiful. Using misappropriated data from millions of Facebook profiles, narrow AI methods have helped Donald Trump, Jair Bolsonaro, and other authoritarian political leaders win power in countries holding the key to conservation of global biodiversity and carbon stocks. (Instead, these leaders are dismantling already insufficient environmental controls and other safeguards for public and environmental safety and health.) As The Guardian noted in the wake of news of spikes in deforestation and withdrawal of support for climate policy, "the planet can't take many more Bolsonaros" [33]. Sowing confusion and disorientation is a deliberate strategy witnessed among populist leaders. According to one of Donald Trump's former campaign executives, Trump's deliberate strategy was to "keep things moving so fast, to talk so loudly — literally and metaphorically — that the media, and the people, can't keep up" [24, p. 6].
Narrow AI has thus already proved to be astonishingly dangerous for democracy, human well-being, and sustainability prospects [28]. Already driving major decisions of corporations in Silicon Valley and in Wall Street firms [2], narrow AI is developing with such speed and consequences that some foresee that the significance of its impact will be analogous to the Cambrian explosion in the evolution of life, when organisms changed from single-cell to multiple-cell organisms over 500 million years ago [34]. This suite of persuasive technologies is likely to increase socio-economic inequality — and spread doubt about climate change — unless deliberately and intentionally governed now, democratically, wisely, and urgently [20].
Current Media Structures and AI Shape Public Values, Society, and Our Common Future
Scholars acknowledge the need for misinformation campaigns to be blocked, proposing political and technical — including AI — means of doing so [25], [35]. Even so, a foundational assumption in mainstream environmental research is that, barring such campaigns, current information environments are neutral, or at least adequate for necessary changes to take place in coming years. Scientific ideals of staying above the fray of politics, as well as cultural preconceptions, may also feed positions to this effect. Depoliticized understandings of communication systems as neutral, and of "the communication challenge" as a mere problem of choosing the right words and framings, continue to prevail among climate scientists and other public-policy scholars [9], as do suggestions that media don't really matter because of people's cognitive resistance, ignoring media's role in the deeper conditioning of subjectivities [36].
Current, predominantly corporate commercial media systems are largely accepted as benevolent or at least natural, especially in the United States [37, p. 493]. Historian Fred Turner traces deeper historical roots of this depoliticized conception of mass media, and of a pervasive assumption that they should be governed and structured without central state control. He documents a general distrust of and aversion to the use of mass media to shape public values, partly a response to fascism in 1930s and 1940s Germany. An ambivalence about using mass media was "resolved" by opting for a fragmented, largely private, decentralized communications system as the best way to improve public life while avoiding the danger of despotism [10].
Ironically, as he observes, this left Americans blind to the dangers of social media. A fateful decision was taken in the 1940s to substitute political decision making for engineering, as if the social good was best served by embracing a laissez-faire media system, on the assumption — proven wrong today — that heterogeneity also preserved diversity of viewpoints and advanced democracy.
A non-dominant strand of scholarship has long called attention to the ideological functions of mainstream, corporate media [37]–[42]. Against dominant conceptions, these scholars stress that communication structures are not neutral facilitators of information transmission, even in the absence of misinformation, because, through the framing of issues and events and the repetition of messages, media constitute reality [9] by shaping subjectivities and dominant meanings, including perceptions of what is possible, good, and necessary. Our dispositions — including understandings of and attitudes towards climate change [43], [44] — are shaped by social conditioning, in which media play a central, albeit not exclusive, role.
Cognitive scientists have shown the extent to which humans protect deeply engrained values and beliefs against new scientific information in the area of climate change [29]. They have also called attention to mass communications systems' role in shaping public assumptions and values through "neuropolitics" [45]: a process of molding the neurological networks in our brains — and, with them, our values, assumptions, goals, and ideals — through persistent and pervasive repetition of particular messages and frames. Cultural and normative understandings are resistant to change, but, as social psychology's insights into common patterns of human understanding and behavior show, both are malleable. Cultural and normative understandings are not dictated by our genes, nor do they "fall from the sky"; they grow in response to information and influences that become cognitive habits. They come to "stick" at the level of habits and emotions, dominating our "fast thinking" — the immediate, intuitive reactions to information that, by some accounts, dominate up to 98% of our cognition [46]. The social world is malleable. It is reproduced through the repetition of interpretations: "its so-called laws are actually norms re-instituted time and again, dramatized every moment of every day. The 'realism' of society ... [is] achieved and performed" [47, p. 44]. It is through such repetition of messages that elites make their highly unequal access to, pollution of, and benefit from environmental resources seem natural despite being detrimental to the public interest [48]. This widespread acceptance of the status quo is further ensured through representations of human nature as selfish and predatory rather than equally capable of altruism and inclined towards cooperation [49], [50].
Once mass media are understood as tools of power, it becomes clear why it matters profoundly who owns and controls them. Mass media are overwhelmingly controlled by elites throughout the world. Media ownership concentration and consequent control over content is high — and increasing — around the world [51, p. 9], allowing small groups of elites to wield vastly disproportionate influence on public understandings of reality, manifestly including climate change [52]–[54]. In Latin America as elsewhere, elites use "media's definitional power to further, consciously or unconsciously, a set of class- and family-based interests and ideologies that have helped maintain a status quo of social inequality" [55]. In Brazil, this power has been used to deprive publics of proper understanding of the unequal benefit-sharing of the meat- and soy-centered national development model and economy, and of this key lever for reducing national emissions [44].

Today, these and other mass meaning-influencing elites are receiving a turbo-boost from AI.
Mystication about Human Control: Laissez-
faire AI is a Dangerous Political Choice
What world do we want? The question marks a major challenge — and opportunity — facing AI researchers. They must not be left alone in deciding that question. Communications technologies can be democratized and subjected to governance attuned to environmental and socio-economic justice; countries like Canada, whose CBC is a major media producer controlled by an independent public board, provide a model.
Leaving decisions about the world we want in the hands of AI engineers is to give undue play to the biases of overwhelmingly white, male engineers in the global North and of capital elites oriented towards private financial gain rather than considerations of the global commons and the greater good. Tech giants' algorithms are perpetuating discrimination [56] and further dividing our social worlds [10], which corporations map and mine for profit: "Like the extraction industries of previous centuries, they are highly motivated to expand their territories and bend local elites to their will. Without substantial pressure, they have little incentive to serve a public beyond their shareholders" [10, p. 32].
Public understanding of this use of big data- and computation-based "narrow" AI in contemporary societies is woefully limited. Narrow AI is already shaping our minds and practices every day, as machines extend human cognitive performance in strictly delimited tasks. It is built into our mobile phones, online search engines, AI "helpers" such as Alexa and Siri, self-driving cars, facial recognition, and the bots that redirect our inquiries on webpages, leading us to additional products for sale. AI algorithms thus guide what we learn and sway our thinking, dispositions, and behavior, including voting behavior. Our own micro-data, sold to the highest bidders in an emerging form of "surveillance capitalism" [6], is the key to narrow AI's growing effectiveness. Yet much journalistic and even academic coverage of AI heralds the technology as a great boon to public policy, especially in the environmental space. Corporate media's cautionary tales about AI's destructive potential focus more on general AI. The recent movie Terminator: Dark Fate is one of the latest Hollywood productions to do so, conveying fear and human helplessness in the face of artificial intelligence gone haywire.
General AI refers to machines that move beyond very narrowly defined tasks to acquire a human level of intelligence applicable to any problem. These are the AI phenomenon on which futurists most fixate our fears, in the form of terminators and other sentient machines boding a "dark fate" for humans. General AI is a relatively distant reality, at best [57]. It is narrow AI that is shaping contemporary societies.
To the extent that the power of narrow AI is discussed, it is often portrayed as obscure and inscrutable, and as such elusive to human control [58, p. 3]. Descriptions of humans as "lacking adequate intellectual tools" to engage the fast-changing, globally interactive actors, technologies, and processes that shape algorithmic systems [59, p. 25] risk spreading mystifying perceptions of algorithms as beyond human control. But AI is, by definition, controlled by humans. "Technologies can always be understood at a higher level, intentionally [sic] in terms of their designs and operational goals and extensionally in terms of their inputs, outputs and outcomes"; it is "[t]raditional power structures [that] can and do turn systems into opaque black boxes" [60, p. 1].
Current power structures also discourage full recognition of how technologies already are used to shape our minds, so as not to call attention to the fact that they currently do not sufficiently serve the common good. They limit recognition that narrow AI can help us better grasp — and therefore more consciously choose — the multi-scalar and diverse consequences of our choices in complex systems, and they limit our ability to seize on its potential. This is "an historical opportunity to reset [societal] relationships in order to distribute power and wealth more equitably" and to foster societal transformations towards sustainability in ways that are democratic, just, and that also reduce socio-political divisions and suffering [61, p. 140]. Algorithms generating search results via Internet browsers can be shifted away from current oppressive tendencies [56] and in the direction of knowledge and values conducive to these positive goals.
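What "shifting" such algorithms could mean in practice can be sketched in a few lines. The fragment below is a hypothetical illustration only: the Result fields, the audited source_quality signal, and the 0.3 weight are assumptions of mine, not features of any deployed search engine. The point is that the normative choice becomes an explicit, inspectable parameter rather than a hidden property of the ranking.

```python
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    relevance: float       # conventional query-document relevance, 0..1
    source_quality: float  # hypothetical independently audited signal
                           # (editorial standards, scientific accuracy), 0..1

def rank(results, quality_weight=0.3):
    """Re-rank results by an explicit blend of relevance and source quality.

    quality_weight is the normative knob: 0 reproduces pure relevance
    ranking; higher values trade some relevance for vetted information.
    """
    def score(r):
        return (1 - quality_weight) * r.relevance + quality_weight * r.source_quality
    return sorted(results, key=score, reverse=True)

# Illustrative use with made-up entries.
results = [
    Result("https://example.org/denial-blog", relevance=0.9, source_quality=0.1),
    Result("https://example.org/ipcc-summary", relevance=0.8, source_quality=0.95),
]
for r in rank(results):
    print(r.url)
```

With quality_weight set to zero the function reproduces pure relevance ranking; raising it trades some relevance for independently vetted information, and the trade-off is visible and contestable rather than buried in the system.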
Transparency is essential for seizing on this potential: policies must force formalization and disclosure of assumptions, choices, and adequacy determinations associated with current and emerging automated and intelligent systems [60].
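One concrete form such disclosure could take is a machine-readable record published alongside each deployed system. The sketch below is purely illustrative; the field names are my assumptions, loosely inspired by "model card" and algorithmic impact-assessment proposals, not a mandated or standardized format.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AlgorithmicDisclosure:
    """Hypothetical machine-readable transparency record for an AI system."""
    system_name: str
    operator: str
    stated_purpose: str
    optimization_target: str          # what the system is actually optimized for
    key_assumptions: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)
    adequacy_review: str = "pending"  # who judged it fit for purpose, and when

# Illustrative, made-up example of a published record.
disclosure = AlgorithmicDisclosure(
    system_name="news-feed-ranker-v7",
    operator="ExamplePlatform Inc.",
    stated_purpose="Surface posts users find informative",
    optimization_target="predicted engagement (clicks, dwell time)",
    key_assumptions=["engagement is a proxy for user value"],
    known_limitations=["amplifies emotionally charged and polarizing content"],
    adequacy_review="independent audit scheduled, none completed",
)

print(json.dumps(asdict(disclosure), indent=2))
```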
Dening Alternatives
Responsible information and communications technology governance requires, and inherently involves, normative and political decisions that shape the future we get, because these technologies exist to optimize whatever we tell them. It is therefore imperative to optimize them for the world we actually want [61]–[63]. This agenda co-exists uneasily with the cultural value scientists place on value neutrality. Valorization of political neutrality undermines the needed support for action-oriented research on how to nurture the needed institutions and transformations [64]. Besides being premised on an elusive — and misguided — ideal, value neutrality in research on institutions results in uncritical examination of existing societal arrangements [65] and integrates the questionable assumption [66] that value neutrality is necessary to preserve scientific credibility in socio-political arenas and that it is possible and appropriate in media and technology policy [63]. The (vitally needed) contribution of institutional critique and proposals for reform lies in their subversive nature; being subversive does not mean that they are frivolous or lack rigor. Indeed, distance from value-neutrality increases responsibility for precise, accurate, and well-grounded accounts [65].
Definition of what constitutes "good" alternative guiding principles and institutions requires assessment and deliberation. It is not necessarily easy, but there are sound starting points to be found. See, for example, [65], [67]–[70]; also, specifically for governance of AI, [5], [7], ...