Nordisk tidsskrift for pedagogikk og kritikk
Volume 10 | 2024 | pp. 3–14
© 2024 Neil Selwyn. This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/BY/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.
Citation: Selwyn, N. (2024). On the Limits of Artificial Intelligence (AI) in Education. Nordisk tidsskrift for pedagogikk og kritikk: Special Issue on Artificial Intelligence in Education, 10, 3–14. http://doi.org/10.23865/ntpk.v10.6062
Correspondence: Neil Selwyn, e-mail: neil.selwyn@monash.edu
Essay
On the Limits of Articial
Intelligence (AI) in Education
Neil Selwyn
Monash University, Australia
ABSTRACT
The recent hyperbole around artificial intelligence (AI) has impacted on our ability to properly consider the lasting educational implications of this technology. This paper outlines a number of critical issues and concerns that need to feature more prominently in future educational discussions around AI. These include: (i) the limited ways in which educational processes and practices can be statistically modelled and calculated; (ii) the ways in which AI technologies risk perpetuating social harms for minoritized students; (iii) the losses incurred through reorganising education to be more ‘machine readable’; and (iv) the ecological and environmental costs of data-intensive and device-intensive forms of AI. The paper concludes with a call for slowing down and recalibrating current discussions around AI and education – paying more attention to issues of power, resistance and the possibility of re-imagining educational AI along more equitable and educationally beneficial lines.
Keywords: artificial intelligence; automation; digital; education; harms
Received: October, 2023; Accepted: October, 2023; Published: January, 2024
Introduction
The past twelve months have seen artificial intelligence (AI) attract heightened levels of popular and political interest rarely seen before in the sixty-year history of the field. Much of this has been fuelled by financiers chasing quick profits, policymakers keen to appear supportive of national innovation, and Big Tech corporations scrambling to catch up with more agile specialist start-ups. One consequence of this furore is the difficulty of now engaging in balanced and reasoned discussions about the societal implications and challenges of AI. For example, we have reached a point where the majority of US adults are now prepared to accept that “the swift growth of artificial intelligence technology could put the future of humanity at risk” (Reuters, 2023). This special issue of the Nordic Journal of Pedagogy & Critique therefore comes at a moment when a lot is being said about AI, though little of it is likely to hold up to scrutiny a few years hence.
While not suffering the extreme peaks and troughs of general public discussions around AI, the education sector has also been experiencing its own version of AI-fever. This has perhaps been most obvious in educational reactions to ChatGPT and other ‘generative AI’ writing tools capable of producing pages of plausible-sounding text in response to short written prompts. At the beginning of 2023, initial publicity around this particular form of AI raised widespread concerns over the likelihood of students using such tools to fraudulently produce written assignments. This triggered a succession of university and school-wide ‘bans,’ hasty reformulations of assessment tasks, and the rapid marketing of new AI counter-measures claiming to be capable of detecting algorithmically-generated writing. Observing this from the outside, it seemed alarming how quickly the educational debate around ChatGPT spiralled out of control, with many otherwise sober commentators reaching extreme conclusions over the transformative implications of this technology.
This paper calls for more reasoned responses to the educational possibilities of AI.
While educators should not completely ignore recent developments around machine
learning, large language models and the like, there is certainly a need to resist the
more extreme hopes and fears that the idea of AI technology continues to provoke. At
the same time, there is also a need to better engage with complex issues and concerns
that have so far tended to remain sidelined in educational discussions around AI.
This requires sustained, ongoing and open dialogue that brings in perspectives not
usually given space in conversations around digital innovation and education futures.
In particular, this requires paying closer attention to the experiences and standpoints
of those groups likely to gain least (and likely to lose most) from the unfettered
implementation of AI technology in education. To that end, this brief paper sets out some pertinent starting points from which such discussions can progress in earnest.
AI and education – some basic points of definition
It is perhaps helpful to first set out the nature and form of the technology under discussion. While many teachers and students understandably might feel that they are yet to encounter this technology, tangible applications of AI in education are fast emerging. For example, government authorities and agencies are beginning to adopt various forms of ‘automated education governance’ where AI tools are used to process big data sets from entire school systems in order to model ‘business decisions’ ranging from future school building priorities through to teacher recruitment. Meanwhile, individual schools are now beginning to assign all manner of tasks to AI that would previously have been delegated to teachers. These include automated grading and online exam proctoring systems, chatbots that automate general interactions between teachers and students, and surveillance tools that judge the extent to which a class is diligently working or not. At the same time, AI tools and diagnostics are also regularly part of how students are supported in their studies. This includes the use of AI-driven search, natural language processing to provide automated writing support, and the
use of personalized learning systems to curate online learning content and activities
for different students on the basis of their prior performance.
Crucially, while these applications might seem incredibly sophisticated in comparison to the educational technologies of the 2000s and 2010s, such examples all constitute what is termed ‘narrow artificial intelligence.’ In other words, these AI systems are designed to address one specific task (such as grading essays or predicting student behaviours). These AI tools are refined using training data relating to this specific area of education, and then operate within pre-defined boundaries to recognise patterns in a limited range of input data. Thus, the forms of AI currently entering our schools and classrooms are far removed (if not totally distinct) from the speculative forms of AI that often feature in popular discussions of how ‘sentient’ forms of AI might soon replace teachers, render schools obsolete, and even do away with the need for humans to learn things for themselves. Hence, in contrast to the fears and hopes that have fast grown up around ideas of ‘general AI,’ ‘digital minds,’ ‘superintelligence’ and the so-called ‘singularity,’ the first step in establishing a healthy response to the arrival of AI technologies in schools is to foreground what Divya Siddarth and colleagues (2021) term ‘Actually Existing AI’ – i.e. the computational limitations of this technology alongside the IT firms and flows of funding that are promoting it.
In particular, the idea of actually existing AI pushes us to frame educational AI in
terms of maths, statistics and computation. As Hilary Mason (2018, n.p.) puts it, “AI
is not inscrutable magic – it is math and data and computer programming, made by
regular humans.” Indeed, some elements of the computer science community have
recently begun to deliberately distance themselves from the term ‘AI’ and revert to using labels that better describe the types of machine learning and algorithmic developments that underpin their work (see Jordan in Pretz, 2021). Elsewhere, policymakers and industry actors are also beginning to turn to alternative terms, such as ‘automated decision making’ and ‘algorithmic forecasting.’ Such linguistic turns reinforce Emily Tucker’s (2022, n.p.) assertion that “whatever the merit of the scientific aspirations originally encompassed by the term ‘artificial intelligence,’ it [has become] a phrase that now functions in the vernacular primarily to obfuscate, alienate and glamorize.”
Recognising AI as a sophisticated form of statistical processing quickly raises
questions over what can (and what cannot) actually be accomplished with these
technologies in education. For example, from this perspective, a seemingly sentient
AI tool such as ChatGPT is more accurately understood as assembling and
re-arranging pre-existing scraps of text taken from the internet in ways that are
statistically likely to resemble larger pieces of pre-existing text. Generative AI – as
with any AI tool – does not ‘know’ or ‘understand’ what it is doing any more than
any other non-human object. Even if it is producing apparently plausible reams of
text, a generative AI language tool has no ‘understanding’ or ‘knowledge’ of what its
output might mean. Instead, just as a parrot can mimic human speech without any
reference to meaning, so too will a large language model – albeit using sophisticated
probabilistic information about how text has previously been put together by human
authors (Bender et al., 2021). At best, then, these are statistical simulations, or more
accurately, replications of human-produced text with none of the human ingenuity,
imagination or insight that was used to produce the original source materials.
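To make the point concrete, the following sketch (written in Python, and deliberately crude) illustrates the basic logic of next-word prediction: counting how often words have followed one another in prior text, and then sampling new words in proportion to those counts. Actual large language models use neural networks trained on vast corpora rather than simple word-pair counts, and the tiny corpus and function names here are invented purely for illustration.

```python
# A deliberately simplified sketch of statistical text generation: a bigram
# model built from a tiny, made-up corpus. Large language models are far more
# sophisticated, but the underlying principle is the same - the next word is
# chosen according to how often it has followed the current word in prior text.
import random
from collections import Counter, defaultdict

corpus = (
    "the student reads the book . the teacher reads the essay . "
    "the student writes the essay ."
).split()

# Count how often each word has followed each other word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Emit words by sampling in proportion to observed word-pair counts."""
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        word = random.choices(choices, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the student reads the essay . the teacher reads"
```

Whatever such a procedure produces is shaped entirely by the statistical regularities of its source text; at no point does anything resembling understanding enter the loop.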
AI and education – some things to be concerned about
Understanding AI technology as a complex statistical procedure (based on enormous computational power and data processing) therefore pushes education debates on AI to reflect on some of the obvious limitations of this technology that are not usually acknowledged. For example, as with any computational process, AI technologies are reliant on the quality of the data they are working with. As with any computational process, AI technologies operate through iteration and optimisation, the use of approximations and correlations, and the production of errors and false matches. All this makes the application and outputs of any AI system incredibly context-specific and inherently limited. As the computer scientist Melanie Mitchell (2019, n.p.) puts it: “People have been trying to get machines to reason since the beginning of the field […] but they’re what people call ‘brittle’ – meaning you can easily make them make mistakes, and reason incorrectly.”
It is well worth thinking further about how this statistically-derived ‘brittleness’
might be evident in educational AI – in particular, taking time to consider how
the statistical limitations of AI might bump up against educational contexts and
educational ambitions. At its heart, the ontological premise of educational AI is that the social world of any student or classroom is broadly quantifiable and subject to statistical control. Key here is the idea that the social world can be reduced, represented and modelled in an abstract form. In other words, it is presumed that all of the key features of any social context can be represented, ordered and rendered calculable – what Wajcman (2019) describes as an ‘engineering’ mindset. From this perspective, a social system (such as a classroom) can be unproblematically modelled as a set of variables that can be manipulated in order to achieve optimal efficiency. In this sense, educational AI applications are dependent on the input of data relating to education phenomena. This might take the form of data generated from students’ uses of devices and software, data collected in classrooms through sensors, and/or pre-existing contextual data generated offline (such as assessment results, demographic details, and so on). Accordingly, most AI technologies currently being used in schools and universities are dependent on various ‘proxy’ variables – easily extractable data points that can substitute for direct measures of a particular aspect of education. For example, the time that a student spends watching an online instructional video might be used as a proxy for their levels of ‘engagement’ with the content of that video. If large sets of such data can be collated and analysed, then algorithmic models can be constructed to anticipate what might happen in similar future events. Key here
is the capacity of these systems to adjust and ‘learn’ from mismatches. Indeed, in simple terms, machine learning involves a computer autonomously developing a mathematical model and refining it each time an error occurs.
All told, the delegation of key educational decisions and actions to these statistical
logics certainly marks a radical shift in the provision, organisation and governance
of education. While many people seem willing to presume that the AI technologies
just described are capable of increased efficiency, precision, standardisation and consistency of outcomes when compared to traditional human-centred approaches, concerns are growing that this might not be the case. The following sections briefly outline four such areas of uncertainty and push-back.
Problems of representation and reduction
First is the extent to which education can be adequately represented, modelled and
manipulated in data form. A strong argument can be made that many of the basic
aspects of teaching and learning cannot be captured reliably in data form. This is
even more true for capturing and representing the complexities of a classroom or a
student’s social circumstances. While all data-driven processes are compromised by
issues of representativeness, reductiveness, and explainability, these constraints are
especially pertinent to uses of AI to model ‘real world’ issues that are embedded in
social contexts such as classrooms. To paraphrase Murray Goulden (2018), even the
most ‘technologically smart’ innovation is likely to be ‘socially stupid’ when deployed
in a real-life context such as a school. As Meredith Broussard (2019, p. 61) argues:
“Math works beautifully on well-defined problems in well-defined situations with well-defined parameters. School is the opposite of well-defined. School is one of the most gorgeously complex systems humankind has built.”
Thus, however sophisticated AI becomes, any efforts at statistically modelling the
contextual layers implicit in any educational episode or moment will continue to result
in blunt computational approximations of the real-life complexities purportedly being
captured. This phenomenon was illustrated in a Princeton University study which
provided teams of statisticians, data scientists, AI and machine learning researchers
with comprehensive data-sets covering over 4,000 families. Even with this wealth of
data, stretching back over 15 years and boasting nearly 13,000 data points per child,
all these expert teams failed to develop even moderately successful statistical models
for children’s life outcomes relating to school grades and competencies. As Karen
Hao (2020, n.p.) reported at the time: “AI can’t predict how a child’s life will turn
out even with a ton of data.”
The social harms of AI
Second, then, are the social consequences of these statistical frailties – the gaps, omissions, errors and false matches that arise from the conflation of complex social phenomena into numbers. Recently, there has been a trend to acknowledge such issues in the loosely-defined terms of ‘AI ethics’ and ‘AI safety.’ However, there is now growing recognition of the real-life harms and violence that occur as a result of AI technologies being deployed
in a social setting – what Shelby et al. (2022, p. 2) define as “adverse lived experiences
resulting from a system’s deployment and operation in the world.” In terms of the
ongoing educational application of AI, then, one set of concerns relates to what
Shelby refers to as ‘allocative harms’ – i.e. how AI systems are proving prone to
reaching decisions that result in the uneven – and sometimes unfair – distribution
of information, resources and/or opportunities. This is reflected in various recent
reports of ‘algorithmic discrimination’ in education – such as automated grading
systems awarding higher grades for privileged students who fit the profile of those who
historically have been awarded high grades, or voice recognition systems repeatedly
making false judgements of cheating on language tests against students with non-
native accents (NAO, 2019).
Also of concern are ‘quality-of-service harms’ – i.e. instances where AI systems systematically fail to perform to the same standard for people of different backgrounds or circumstances. This has already come to the fore in
instances where US schools have deployed facial recognition systems that regularly
fail to recognise students of colour (Feathers, 2020), or systems developed to detect
AI-generated writing that discriminate against non-native English speakers, whose
work is more likely to be written formulaically and use common words in predictable
ways (Sample, 2023). Of particular concern is the emergence of educational AI
systems that rely on processes unsuited to disabled and neuro-diverse students – for
example, eye-tracking technologies that take a steady gaze as a proxy for student
engagement (Shew, 2020).
Alongside these concerns are what Shelby terms ‘representational harms’ – i.e. the ways in which AI systems rely on statistical categorisations of social characteristics and social phenomena that often do not split into neatly bounded categories. This can lead to misrepresentations of who students are, their backgrounds and their behaviours in ways that can perpetuate unjust hierarchies and socially-constructed beliefs about social groups. Finally, there are concerns over AI technologies adversely impacting on social relations within education settings – what Shelby terms ‘interpersonal harms.’
These include AI-driven ‘student activity monitoring systems’ now being marketed
to allow teachers to surveil students’ laptop uses at home, or school authorities using
students’ online activities as the basis of algorithmically profiling students who might
be deemed ‘at risk’ of course non-completion.
Running throughout all these examples is the underpinning concern that even the most ‘benign’ uses of AI in a school or classroom setting are likely to exacerbate and entrench pre-existing institutional forms of control. Schools and AI technologies
are similarly built around processes of monitoring, categorising, standardising,
synchronising and sorting. All told, while such exclusionary glitches might not be
a deliberate design feature, AI technologies are proving prone to replicating and
reinforcing oppressions that minoritized students are likely to regularly encounter
during their educational careers. In this sense, one of the most important conversations
we should now be having around the coming-together of education and AI relates to
how AI is imbued with “a tendency to punch down: that is, the collateral damage that
comes from its statistical fragility ends up hurting the less privileged” (McQuillan,
2022, p. 35).
Fitting education around the needs of AI
Third is the concern that approaching students, teachers, classrooms and schools
primarily in terms of what can be captured in data implies a number of fundamental
rearrangements and reorganisations of education – what might be described as a
recursive standardisation, homogenisation and narrowing of education. This relates to
the question of what AI technologies expect of education (and, more pointedly, what
AI technologies expect of the people involved in education). As Tennant and Stilgoe
(2021, p. 846) remind us, “technological promises, if they succeed, end up making
demands on the world.” Here, then, we are already seeing an increased imperative to
arrange education settings in ‘machine readable’ ways that will produce data that can
be recognised and captured by AI technologies. This chimes with the phenomenon of
what Langdon Winner (1978) termed reverse adaptation – i.e. rather than expecting
technology to adapt to the social world, most people prove remarkably willing to
adapt their social worlds to technologies.
In this respect, one immediate concern is that teachers and students are now
beginning to be compelled to do different things because of AI technologies. For
example, we are seeing reports of students now having to act in ways that are machine-
readable – what might be described as ‘adapting to the algorithm’ (see Høvsgaard,
2019). This might involve a student having to write or speak in a manner that can be
easily recognised by the computer, or to act in ways to produce data that an AI system
can easily process. Similarly, teachers might have to develop ‘parseable pedagogies’ – i.e. easily codified ways of teaching that result in outcomes that can be inputted into the system. Perhaps less obvious is the concern that teachers and students end
up engaging in empty performative acts in order to trigger appropriate algorithmic
responses. For example, this is already being seen in reports of call centre workers
repeatedly saying ‘sorry’ during their interactions with callers in order to meet their
automated ‘empathy’ metrics – regardless of whether saying ‘sorry’ is appropriate or
not (Christl, 2023).
AI as environmental burden
Finally, there is the underpinning concern that the data-intensive and device-intensive forms of AI currently being taken up in education incur unsustainable ecological and environmental costs. For example, MIT Technology Review reported in 2019 that the carbon emissions associated with training one AI model had been estimated to exceed 626,000 pounds of carbon dioxide (equivalent to the emissions from driving 62 petrol-powered passenger vehicles for twelve months). Similarly, conducting a ‘conversation’ with ChatGPT of between 20 and 50 prompts is estimated to consume 500 ml of water (Li et al., 2023). Thus, in terms of natural resource consumption and energy drain alone, as Thompson et al. (2021, n.p.) understatedly put it, “the cost of [AI] improvement is becoming unsustainable.”
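For readers who want to sanity-check the vehicle equivalence quoted above, a rough back-of-the-envelope calculation is set out below. It assumes the commonly cited US EPA estimate of roughly 4.6 tonnes (about 10,100 pounds) of CO2 emitted by a typical petrol passenger vehicle per year – an external figure, not one given in the sources cited here.

```latex
% Rough check of the '62 petrol-powered passenger vehicles for twelve months'
% equivalence, assuming (as an external figure) roughly 10,100 lb of CO2 per
% petrol passenger vehicle per year.
\[
  \frac{626{,}000~\text{lb CO}_2}{\approx 10{,}100~\text{lb CO}_2\ \text{per vehicle-year}}
  \;\approx\; 62~\text{vehicle-years}
\]
```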
It is therefore beginning to be argued that educators need to temper any enthusiasms for the increased take-up of AI in light of the growing environmental and ecological harms associated with the production, consumption and disposal of digital technologies. In this sense, AI should not be seen as an immaterial, other-worldly technology – somehow weightless, ephemeral and wholly ‘in the cloud.’ In reality, AI is reliant on a chain of extractive processes that are resource-intensive and that have deleterious planetary consequences. In short, the growing use of AI technologies in
education comes at considerable environmental cost – implicated in the depletion
of scarce minerals and metals required to manufacture digital technologies, massive
amounts of energy and water required to support data processing and storage, and
fast-accumulating levels of toxic waste and pollution arising from the disposal of
digital technology (see Brevini, 2021).
Given all the above, any enthusiasms for the increased use of AI in education must address the growing concern among ecologically-minded commentators that it might not be desirable (and perhaps not even possible) to justify the development and use of AI technologies in the medium to long term. On the one hand, this requires proponents of educational AI to explore how the continued use of AI in schools and universities might be aligned with ‘green-tech’ principles and perhaps make a positive contribution to forms of eco-growth. In this sense, there is certainly a pressing need to explore the extent to which educational AI might be oriented toward emerging developments in areas such as ‘carbon-responsive computing’ and ‘green’ forms of machine learning. This implies, for example, developing different forms of AI built around small datasets and refined processing techniques, and moving beyond ‘brute force’ computational approaches (Nafus et al., 2021).
On the other hand, however, we also need to give serious consideration to the idea
that AI is ultimately an irredeemable addition to education, and needs to be rejected
outright. Strong arguments are being made that the environmental and ecological
harms arising from AI use cannot be offset by efforts to instigate ‘greener’ forms of
carbon-neutral digital technology and ‘cleaner’ forms of renewable energy. As such,
educationalists would do well to be open to the possibility that most – if not all –
forms of AI technology “are intrinsically incompatible with a habitable earth” (Crary,
2022, n.p.). If this is the case, then it makes little sense – in an era of climate crisis and environmental breakdown – to continue pushing for education to be reframed around these technologies. From this perspective, then, AI is nothing more than a
dangerous distraction from much more pressing and threatening planetary issues.
AI and education – some ways forward
The main challenge now facing educators is to avoid getting mired in the considerable
hype that will continue to surround AI in the months (and perhaps years) ahead. At the moment, the emergence of AI is prompting a familiar response that has regularly
accompanied educational discussions of previous ‘new’ technologies over the past
40 years or so. In short, this has involved the sudden appearance of ‘common-sense’
arguments that: (i) the increased incursion of AI tools into classrooms is inevitable;
(ii) that teachers quickly need to upskill (become ‘AI literate’) in order to make best
use of these technologies, and (iii) that we need to seriously rethink how traditional
educational forms and practices might need to change and adapt to the affordances
of AI. In this framing, educators are positioned as having little control over the nature, pace and direction of this technological change. Existing forms of schools and schooling are positioned as providing impediments and barriers to the smooth use of the technology, and teachers are positioned as being in deficit. The underpinning logic here is simple – education needs to change quickly in order to ‘catch up’ with this seismic technological change that has the potential to radically transform all aspects of what it means to educate and be educated.
In contrast, this paper has attempted to recast the imperatives of AI and education
in a substantially different light. Above all, it has stressed the need for educators
to take control and work to proactively shape the agendas that are continuing to
form around what AI might mean for schools, and how we might see AI playing a
constructive role (if at all) in the future classroom. This means getting actively involved
in the conversations and debates that are currently swirling around the topic of AI
and education, led largely by voices with little or no direct expertise in schooling and
education. Education experts need more confidence in speaking up and leading these debates. One key area of discussion concerns exactly what ‘added value’ AI technology can be said to offer. Here, educators are in a key position to push back against vague claims of AI radically relieving teachers’ workloads or acting as a ‘one-to-one tutor for the world.’ More immediately, perhaps, educators are also in a key position to demonstrate the limited outcomes that result from limited educational AI technologies. At the same time, it is also important for educators to speak up about the other forms of AI technology that we might collectively believe to be capable of genuine educational benefit.
In all these ways, then, education communities should be looking to play a key role
in providing a collective counter-balance to the hyperbole that has engulfed recent
debates around AI and education. This requires challenging IT industry-led visions of how education might be best reorganised and/or disassembled, as well as the associated
surrender of public education interests to the economic and political interests that
continue to push AI into education. This also requires pointing to the disadvantages
and harms that are now being noted as key aspects of education become increasingly
reliant on AI technologies – from concerns over AI-led administrative violence and
algorithmic discrimination through to the diminished quality of educational provision
and support. Above all, this requires moving away from portraying AI in education
as a technical object, and instead framing AI as a system that is bound up with the
messy realities of education systems, economic systems, political systems and other
social systems. Finally, amidst these clarifications, counter-arguments and critiques, there is also a need for educators to talk more about possible alternative forms of AI that might better fit education – i.e. ways in which AI might be genuinely useful as part of a response to educational needs. As Nick Couldry reasons, making criticisms of the recent AI turn does not necessarily denote a wholesale rejection of AI technology altogether:
We are not objecting to the use of AI tools to solve specific problems within clear parameters that are set and monitored by actual social communities. We are objecting to the rhetoric and expansionist practice of offering AI as the solution for everything. (Couldry, 2023, n.p.)
In this spirit, then, it falls to the education community to begin working out
how to shape a new wave of discussions around AI in education that are framed in
more emancipatory, fair, or perhaps simply kinder ways than the brut(ish) forms of
corporate algorithmic control currently on offer. Indeed, there are some burgeoning
examples of how this might be done. On the one hand, we are beginning to see
some radical calls for feminist, queer, decolonised and indigenous reimagining of what AI might be (e.g. Adams, 2021; Klipphahn-Karge et al., 2023; Munn, 2023; Toupin, 2023). On the other hand, a few mainstream public education agencies and organisations are also beginning to make a decent start in calling for new forms of AI that emphasize human elements of learning and teaching, that are sympathetic to education contexts, that involve educators in their conception, development and implementation, and that are based around values of trust and care and align with shared education visions. For example, as the US Office of Educational Technology (2023, p. 10) recently contended:
Use of AI systems and tools must be safe and effective for students. They must
include algorithmic discrimination protections, protect data privacy, provide notice
and explanation, and provide a recourse to humans when problems arise. The
people most affected by the use of AI in education must be part of the development
of the AI model, system, or tool, even if this slows the pace of adoption.
Conclusions
All told, this paper has begun to outline the case for slowing down, scaling back and
recalibrating current discussions around AI and education. While this might not feel like
an easy task, the urgency of current conversations around AI and education is clearly
unproductive in the long run. It makes good sense for educators to try to disconnect
themselves from the apparent imperatives of AI-driven educational ‘transformation,’
and instead work to slow down discussions around AI and education, and introduce
an element of reection and nuance. Given the technical and social complexity of
AI, it behoves us to try to develop forms of public debate that engage with these
complexities rather than descend to overly-simplistic caricatures and fears. Given the
clear inequalities and injustices already arising from AI technologies, it also behoves us to pay closer attention to “the oppressive use of AI technology against vulnerable groups in society” (Birhane & van Dijk, 2020, n.p.). Moreover, all of the concerns raised in this paper point to key questions of power – i.e. whoever gets to decide which AI tools are implemented in education will inevitably wield considerable influence over what goes on in that education setting. As Dan McQuillan (2023, n.p.) argues:
From this perspective, AI is not a way of representing the world but an intervention
that helps to produce the world that it claims to represent. Setting it up in one way
or another changes what becomes naturalised and what becomes problematised.
Who gets to set up the AI becomes a crucial question of power.
Seen in this light, then, it seems crucial that educators and the wider education
community become more involved in debates and decision-making around who
gets to ‘set up’ AI and education. The future of AI and education is not a foregone
conclusion that we simply need to adapt to. Instead, the incursion of AI into education
is denitely something that can be resisted and reimagined.
Acknowledgements
This paper arises from research supported by funding from the Australian Research
Council (DP240100111).
Author biography
Neil Selwyn has been researching and writing about digital education since the mid-
1990s. He is currently a professor at Monash University, Melbourne. Recent books
include: Should Robots Replace Teachers? AI and the Future of Education (Polity 2019),
Critical Data Literacies (MIT Press 2023, with Luci Pangrazio), and the third edition
of Education and Technology: Key Issues and Debates (Bloomsbury 2021).
References
Adams, R. (2021). Can artificial intelligence be decolonized? Interdisciplinary Science Reviews, 46(1–2), 176–197.
Bender, E., Gebru, T., McMillan-Major, A., & Mitchell, M. (2021). On the dangers of stochastic parrots. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623).
Birhane, A., & van Dijk, J. (2020, February). Robot rights? Let’s talk about human welfare instead. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 207–213). Association for Computing Machinery.
Brevini, B. (2021). Is AI good for the planet? Polity.
Broussard, M.(2021, 22 April). [Tweet]. Twitter. https://twitter.com/merbroussard/status/1384934004030418945
Caird, S., Lane, A., Swithenby, E., Roy, R., & Potter, S. (2015). Design of higher education teaching models and carbon impacts. International Journal of Sustainability in Higher Education.
Christl, W. (2023). Surveillance and algorithmic control in the call centre. Cracked Labs. https://crackedlabs.org/en/data-work/publications/callcenter/
Couldry, N. (2023, 11 April). AI as colonial knowledge production. University World News. https://www.
universityworldnews.com/post.php?story=2023041014520289
Crary, J. (2022). Scorched earth. Verso.
Feathers, T. (2020, 2 December). Facial recognition company lied to school district about its racist tech. Vice Motherboard. https://www.vice.com/en/article/qjpkmx/fac-recognition-company-lied-to-school-district-about-its-racist-tech
Giannini, S. (2023). Generative AI and the future of education. UNESCO. https://unesdoc.unesco.org/ark:/48223/
pf0000385877
Goulden, M. (2018). [Tweet]. Twitter. https://twitter.com/murraygoulden/status/1038338924270297094
Hao, K. (2020, 2 April). AI can’t predict how a child’s life will turn out even with a ton of data. MIT Technology Review.
Høvsgaard, L. (2019). Adapting to the test. Discourse: Studies in the Cultural Politics of Education, 40(1), 78–92.
Klipphahn-Karge, M., Koster, A., & Bruss, S. (Eds.). (2023). Queer reections on AI. Routledge.
Li, P., Yang, J., Islam, M., & Ren, S. (2023). Making AI less ‘thirsty’: Uncovering and addressing the secret water footprint of AI models. arXiv. https://doi.org/10.48550/arXiv.2304.03271
Mason, H. (2018, 3 July). [Tweet]. Twitter. https://twitter.com/hmason/status/1014180606496968704.
McQuillan, D. (2022). Resisting AI. Policy Press.
McQuillan, D. (2023, 6 June). Predicted benefits, proven harms. The Sociological Review: Magazine. https://thesociologicalreview.org/magazine/june-2023/artificial-intelligence/predicted-benefits-proven-harms
Mitchell, M. (2019). Artificial intelligence: A guide for thinking humans. Farrar, Straus and Giroux.
Munn, L. (2023). The five tests: Designing and evaluating AI according to Indigenous Māori principles. AI & Society. https://doi.org/10.1007/s00146-023-01636-x
Nafus, D., Schooler, E., & Burch, K. (2021). Carbon-responsive computing. Energies, 14(21), 6917.
NAO. (2019). Investigation into the response to cheating in English language tests. National Audit Office. https://www.nao.org.uk/wp-content/uploads/2019/05/Investigation-into-the-response-to-cheating-in-English-language-tests.pdf
Pretz, K. (2021, 31 March). Stop calling everything AI, machine-learning pioneer says. IEEE Spectrum. https://
spectrum.ieee.org/stop-calling-everything-ai-machinelearning-pioneer-says
Reuters. (2023, 18 May). AI threatens humanity’s future, 61% of Americans say: Reuters/Ipsos poll. www.reuters.
com/technology/ai-threatens-humanitys-future-61-americans-say-reutersipsos-2023-05-17/
Salganik, M., Lundberg, I., Kindel, A., Ahearn, C., Al-Ghoneim, K., Almaatouq, A., & Altschul, D. (2020). Measuring the predictability of life outcomes with a scientific mass collaboration. Proceedings of the National Academy of Sciences. www.pnas.org/content/117/15/8398
Sample, I. (2023, 10 July). Programs to detect AI discriminate against non-native English speakers, shows
study. Guardian. www.theguardian.com/technology/2023/jul/10/programs-to-detect-ai-discriminate-
against-non-native-english-speakers-shows-study
Shelby, R., Rismani, S., Henne, K., Moon, A., Rostamzadeh, N., Nicholas, P., Yilla, N., Gallegos, J., Smart, A., Garcia, E., & Virk, G. (2022). Sociotechnical harms: Scoping a taxonomy for harm reduction. arXiv. https://doi.org/10.48550/arXiv.2210.05791
Shew, A. (2020). Ableism, technoableism, and future AI. IEEE Technology and Society Magazine, 39(1), 40–85.
Siddarth, D., Acemoglu, D., Allen, D., Crawford, K., Evans, J., Jordan, M., & Weyl, G. (2021, 1 December). How AI fails us. https://ethics.harvard.edu/files/center-for-ethics/files/howai_fails_us_2.pdf?m=1638369605
Tennant, C., & Stilgoe, J. (2021). The attachments of ‘autonomous’ vehicles. Social Studies of Science, 51(6),
846–870.
Thompson, N., Greenewald, K., Lee, K., & Manso, G. (2021, 24 September). Deep learning’s diminishing returns. IEEE Spectrum. https://spectrum.ieee.org/deep-learning-computational-cost
Toupin, S. (2023). Shaping feminist artificial intelligence. New Media & Society, 14614448221150776.
Tucker, E. (2022, 17 March). Artifice and intelligence. Tech Policy Press. https://techpolicy.press/artifice-and-intelligence/
U.S. Office of Educational Technology. (2023). Artificial intelligence and the future of teaching and learning. Washington, DC: U.S. Department of Education.
Versteijlen, M., Salgado, F., Groesbeek, M., & Counotte, A. (2017). Pros and cons of online education as a measure to reduce carbon emissions in higher education in the Netherlands. Current Opinion in Environmental Sustainability, 28, 80–89.
Wajcman, J. (2019). How Silicon Valley sets time. New Media & Society, 21(6), 1272–1289.
Winner, L. (1978). Autonomous technology. MIT Press.
... McKnight and Shipp, 2024). Others caution against rushing into AI integration due to harms associated with the 'datification' of education and the environment impost that generative AI tools impose (Selwyn, 2024). ...
... This has caused a dichotomy in recommended responses to AI, from leaping forward (e.g. Luckin et al., 2022) to exercising caution (Selwyn, 2024), and everywhere in-between (Crawford et al., 2023). Correspondingly, education systems and schools have taken quite alternative approaches, from banning generative AI to embracing its use. ...
Article
Full-text available
Free access to powerful generative Artificial Intelligence (AI) in schools has left educators and system leaders grappling with how to responsibly respond to the consequent challenges and opportunities that this new technology poses. This paper examines the priorities and challenges that senior Australian educational leaders identify with relation to responsible and ethical use of generative AI in school education, and the reasons for their beliefs. Members of the Australian generative Artificial Intelligence in Education working group as well as other senior policymakers throughout Australia participated in a two-phase data collection process involving survey responses and focus group discussions. Ranking activities revealed a large number of priorities and systemic challenges, with no unilateral consensuses emerging. The highest priorities for senior policymakers related to managing risks, educating teachers, and educating system leaders, while the main systemic and environmental challenges related to the pace of change, teacher capabilities and professional learning, and equitable access to the technology. Throughout the analysis, meta themes emerged that characterised the policy-setting environment as one involving urgency, uncertainty, interconnectedness, contextuality, and complexity, with the pivotal role of teachers highlighted throughout. Reflections on responsible and ethical policy-setting in response to rapid technological change are provided, including with relation to anticipatory and networked governance and the inter-relationship with the broader policy context. Recommendations for further research and practice are also proposed.
... Moreover, we should not forget that multiple aspects of teaching and learning, as well as numerous complexities related to the cultural and social circumstances in a particular class or student cohort are challenging or, perhaps, even impossible to capture in a data form in a representative, reliable and quantifiable way. However advanced and developed these systems may become, they will only ever result in an approximation of real-life situations (Selwyn, 2024). This highlights the importance of the critical and informed educators' role. ...
... This highlights the importance of the critical and informed educators' role. We hope that this special issue can not only contribute to the ongoing international debate on how efficiently support educators' and students' digital literacy development in a tertiary education context but also inspire educators (and students) themselves in taking an active position in discussions on the role of AI in education, its potential, limitations and possible outcomes (Selwyn, 2024 ...
Article
Full-text available
The field of assessment in tertiary education is currently experiencing a significant shift mostly due to the technological advancements. Technology is, in fact, both threatening assessment practices and empowering them when integrated in the assessment process. This special issue discusses the evolution of technology-enhanced assessment in tertiary education focusing on the use of digital resources such as gamification and artificial intelligence to improve learning and evaluation methods. Research findings showcased in this issue delve on the ability of technology-enhanced assessments to promote effective personalised learning experiences while also addressing the ethical dilemmas these innovations bring. The blend of conversations about assessment with AI driven evaluations, immersive simulations and gamified platforms explores the benefits and challenges for teachers and learners alike, not forgetting the importance of sound methodological approaches for both summative and formative assessment. This editorial summarises contributors’ perspectives on how these innovations are changing assessment methods and emphasises the importance of integrating them thoughtfully. At the same time, questions are raised about the evolving role of educators in a technology-driven educational landscape.
... Research has focused on significant ethical issues (Crawford et al., 2023;Holmes et al., 2022), including algorithms bias, fairness, and transparency (Baker & Hawn, 2022); data privacy and security (Huang, 2023); and copyright, intellectual property, authorship, and ownership (Bozkurt, 2024). Research has also explored the actual and potential social harms and environmental impacts of AI (Selwyn, 2022(Selwyn, , 2024, including the role big data increasingly plays in supporting automated and AI-powered decision-making in education, reinforcing the importance of unpacking the black box of AI (Gallini et al., 2023;Bearman & Ajjawi, 2023) and the necessity of AI literacies for students as well as academic and professional staff in higher education. ...
Article
Full-text available
Advances in artificial intelligence (AI) are undoubtedly changing the practice and profession of learning design. While the full impact is yet to be realised, learning designers grapple daily with the challenges, risk, and opportunities these technologies represent for changing how students learn, how faculty teach, and how we design. So, what knowledge, skills, and mindsets do learning designers need to survive and thrive in a post-AI higher education sector? This paper reports on a project to co-design an AI literacies framework for and with a team of learning designers. Using the world café method, we conducted an online workshop with a group of 18 learning designers, drawing on our collective experience and expertise to ideate and refine the essential elements of an AI literacies framework. The data generated was then coded and thematically analysed to develop a practical framework comprising four domains and 16 specific elements, each elaborated to describe the knowledge, skills, and mindsets required for post-AI learning design. This framework informs the development of training programs and professional learning opportunities for learning designers.
... Each of these ideas about the use of AI tools in education should include a natural discussion about their application, thus fostering debates inherent to the use of these applications, such as the limited ways in which educational processes can be modelled, the different ways in which AI technology risks perpetuating social harm for students at risk of exclusion, or the ecological and environmental costs of data-intensive AI forms and devices (Selwyn, 2024); as well as other ethical aspects such as the mis-use of information (partial or biased), the creation of fake information (deepfake) or the sharing of personal data and the lack of legislation in this regard. ...
Chapter
Full-text available
Artificial Intelligence has the necessary potential to transform education, but careful planning of its implementation is essential, knowing what we are doing and why we are doing it. The opportunities for AI to personalize learning, foster connections between subjects, deepen concepts and put learning into practice are enormous. However, as these technologies advance, it is important to rethink our relationship with them, i.e., to assess digital well-being. It is essential to develop a formative, healthy and safe relationship with technol- ogy, with the main objective of finding a balance in digital life by developing skills and competences that allow minimizing the risks without losing the ben- efits. This chapter addresses some of the challenges that need to be considered when implementing AI in education, while respecting the digital well-being of teachers and students, and offers recommendations that may inspire those who wish to start working with this technology in the classroom.
Technical Report
inDigiMOB has worked towards digital inclusion in remote communities since 2016. In the first three years it was largely focused on building skills through employment and mentoring models through community-based partners. Years 2 and 3 of the project were evaluated by Batchelor Institute (Guenther 2019, 2020). In 2022, there was a shift away from this model to a more direct delivery model, which was intended to be largely facilitated with online support. While this model generated quite a bit of activity in its early days, staff changes both within inDigiMOB and with potential partner organisations resulted in impact being diluted. The outcomes of these activities are discussed, along with recommendations in the interim evaluation report, prepared in April 2024 (Guenther & Holmes, 2024). By this time all of the original project team had moved on to different jobs and very few of the project partners were available for interview. The evaluation team then proposed a forward-looking evaluation that would identify what stakeholder thought success would look like for inDigiMOB into the future, and what outcomes should be (or should have been) achieved. For the revised final evaluation, the Batchelor team identified and interviewed 31 stakeholders who had an interest in digital inclusion or a future inDigiMOB program. Three of these were First Nations Media Australia staff, nine were previous end users and 19 were representatives of organisations with an interest in remote digital inclusion, with a focus on Northern Territory, Western Australia and South Australia. Interviews with these stakeholders were conducted face to face or online using Teams or Zoom. In response to the first question about success the themes which emerged focused on 1) successful models; 2) addressing digital inclusion issues; 3) maintaining First Nations language and culture; 4) ensuring access to technology; 5) improving digital literacy skills and 6) prioritising cyber safety. The second question about outcomes pointed to 1) the need to address policy and equity issues with advocacy and education; 2) addressing cost and funding constraints; 3) focusing on economic participation and training; 4) building and maintaining partnerships; 5) engaging in future technologies and 6) addressing health, wellbeing and safety concerns.
Article
Full-text available
The article sheds light on what competences are required for communication graduates who will work in the communications industry in the future. In particular it discusses the relationship between practical job-specific skills, analytical-theoretical skills, and personal competence. Through a survey among just over 1000 members of The Norwegian Communication Association, we see that the industry believes the new graduates are qualified for professional life. This is especially true regarding practical skills, although they believe that language skills need to improve. At the same time, the industry wants students to become stronger in analytical-theoretical skills, for example related to social understanding. Although many believe Artificial Intelligence will influence the industry going forward, they do not express fear about this, nor do they mention specifically that students need to be trained in the use of AI-based tools during their studies
Presentation
Full-text available
This study analyses the design and implementation process of two courses developed by the Centre for Innovation in Technology and Pedagogy (Citep) of the University of Buenos Aires, focused on the integration of Generative Artificial Intelligence (GenAI) into teaching and assessment practices in higher education. The iterative process of design and implementation of these courses is systematised and documented through the Design-Based Research methodology. The results underscore the importance of a flexible, dialogic and multidimensional approach that fosters critical reflection on the integration of GenAI into teaching. Key design dimensions such as participatory co-design, pedagogical isomorphism, tutor moderation and the opportunity to experiment and create are identified. The conclusions suggest there is a need to prioritise continuous teachers' professional development programs centred on the design of new teaching strategies and opportunities for critical reflection on the academic and ethical challenges posed by GenAI in higher education.
Article
Full-text available
Lex Social, Revista de Derechos Sociales es una revista de periodicidad semestral que tiene como finalidad contribuir al debate científico sobre los derechos sociales y las necesidades jurídicas de nuestra sociedad en torno a los mismos.
Chapter
Service learning, a critical pedagogy combining community service with academic goals, fosters civic engagement, practical skill development, and social responsibility. This chapter explores how integrating Gen AI can enhance project-based service learning to address real-world social challenges in higher education (HE). Despite its rapid growth in HE, Gen AI still faces limitations, particularly the lack of a human-centred design method that reflects real-world contexts and the risk of oversimplifying complex tasks, leading to a more automated, less humanised learning process. The DEEP method, structured upon four co-designing phases—direction, education, event, and project—offers a hybrid solution, creating an adaptable and scalable framework for constructing a project with Gen AI for co-designed, social, and personalised learning. The chapter also illustrates the LivePBL project, piloted to train pre-service teachers in China, demonstrating how the method can effectively integrate Gen AI with hands-on co-design to enhance the learning environment and societal exchange.
Preprint
Full-text available
The growing carbon footprint of artificial intelligence (AI) models, especially large ones such as GPT-3 and GPT-4, has been undergoing public scrutiny. Unfortunately, however, the equally important and enormous water footprint of AI models has remained under the radar. For example, training GPT-3 in Microsoft's state-of-the-art U.S. data centers can directly consume 700,000 liters of clean freshwater (enough for producing 370 BMW cars or 320 Tesla electric vehicles) and the water consumption would have been tripled if training were done in Microsoft's Asian data centers, but such information has been kept as a secret. This is extremely concerning, as freshwater scarcity has become one of the most pressing challenges shared by all of us in the wake of the rapidly growing population, depleting water resources, and aging water infrastructures. To respond to the global water challenges, AI models can, and also should, take social responsibility and lead by example by addressing their own water footprint. In this paper, we provide a principled methodology to estimate fine-grained water footprint of AI models, and also discuss the unique spatial-temporal diversities of AI models' runtime water efficiency. Finally, we highlight the necessity of holistically addressing water footprint along with carbon footprint to enable truly sustainable AI.
Article
Full-text available
As AI technologies are increasingly deployed in work, welfare, healthcare, and other domains, there is a growing realization not only of their power but of their problems. AI has the capacity to reinforce historical injustice, to amplify labor precarity, and to cement forms of racial and gendered inequality. An alternate set of values, paradigms, and priorities are urgently needed. How might we design and evaluate AI from an indigenous perspective? This article draws upon the five Tests developed by Māori scholar Sir Hirini Moko Mead. This framework, informed by Māori knowledge and concepts, provides a method for assessing contentious issues and developing a Māori position. This paper takes up these tests, considers how each test might be applied to data-driven systems, and provides a number of concrete examples. This intervention challenges the priorities that currently underpin contemporary AI technologies but also offers a rubric for designing and evaluating AI according to an indigenous knowledge system.
Preprint
Understanding the landscape of potential harms from algorithmic systems enables practitioners to better anticipate the consequences of the systems they build. It also supports the prospect of incorporating controls to help minimize harms that emerge from the interplay of technologies and social and cultural dynamics. A growing body of scholarship has identified a wide range of harms across different algorithmic technologies. However, computing researchers and practitioners lack a high-level, synthesized overview of harms from algorithmic systems arising at the micro-, meso-, and macro-levels of society. We present an applied taxonomy of sociotechnical harms to support more systematic surfacing of potential harms in algorithmic systems. Based on a scoping review of computing research (n=172), we identified five major themes of sociotechnical harm (representational, allocative, quality-of-service, interpersonal, and social system/societal harms) along with their sub-themes. We describe these categories and conclude with a discussion of challenges and opportunities for future research.
Article
The ideal of the self-driving car replaces an error-prone human with an infallible, artificially intelligent driver. This narrative of autonomy promises liberation from the downsides of automobility, even if that means taking control away from autonomous, free-moving individuals. We look behind this narrative to understand the attachments that so-called ‘autonomous’ vehicles (AVs) are likely to have to the world. Drawing on 50 interviews with AV developers, researchers and other stakeholders, we explore the social and technological attachments that stakeholders see inside the vehicle, on the road and with the wider world. These range from software and hardware to the behaviours of other road users and the material, social and economic infrastructure that supports driving and self-driving. We describe how innovators understand, engage with or seek to escape from these attachments in three categories: ‘brute force’, which sees attachments as problems to be solved with more data; ‘solve the world one place at a time’, which sees attachments as limits on the technology’s reach; and ‘reduce the complexity of the space’, which sees attachments as solutions to the problems encountered by technology developers. Understanding attachments provides a powerful way to anticipate various possible constitutions for the technology.
Article
AI is altering not only local and global society, but what it means to be human, or, to be counted as such. In the midst of concerns about the ethics of AI, calls are emerging for AI to be decolonized. What does the decolonization of AI imply? This article explores this question, writing from the post-colony of South Africa where the imbrications of race, colonialism and technology have been experienced and debated in ways that hold global meaning and relevance for this discussion. Proceeding in two parts, this article explores the notion of de/coloniality and its emphasis on undoing legacies of colonialism and logics of race, before critiquing two major discontents of AI today: ethics as a colonial rationality and racializing dividing practices. This article develops a critical basis from which to articulate a question that sits exterior to current AI practice and its critical discourses: can AI be decolonized?
Article
Ableism (discrimination in favor of nondisabled people and against disabled people) impacts technological imagination. Like sexism, racism, and other types of bigotry, ableism works in insidious ways: by shaping our expectations, it shapes how and what we design (given these expectations), and therefore the infrastructure all around us. And ableism shapes more than just the physical environment. It also shapes our digital and technological imaginations: notions of who will "benefit" from the development of Artificial Intelligence (AI) and the ways that those systems are designed and implemented are a product of how we envision the "proper" functioning of bodies and minds.
Article
Purpose – This research aims to examine the main findings of the SusTEACH study of the carbon-based environmental impacts of 30 higher education (HE) courses in 15 UK institutions, based on an analysis of the likely energy consumption and carbon emissions of a range of face-to-face, distance, online and information and communication technology (ICT)-enhanced blended teaching models. Design/methodology/approach – An environmental assessment of 19 campus-based and 11 distance-based HE courses was conducted using questionnaire surveys to gather data from students and lecturers on course-related travel, the purchase and use of ICTs and paper materials, residential energy consumption and campus site operations. Results were converted into average energy consumption and CO2 emissions, normalised per student per 100 study hours, and then classified by the primary teaching model used by lecturers. Findings – The main sources of HE course carbon emissions were travel, residential energy consumption and campus site operations. Distance-based HE models (distance, online and ICT-enhanced teaching models) reduced energy consumption by 88 per cent and achieved significant carbon reductions of 83 per cent when compared with campus-based HE models (face-to-face and ICT-enhanced teaching models). The online teaching model achieved the lowest energy consumption and carbon emissions, although there were potential rebound effects associated with increased ICT-related energy consumption and paper used for printing. Practical implications – New pedagogical designs using online and distance-based teaching methods can achieve carbon reductions by reducing student travel and residential and campus accommodation. Originality/value – Few studies have examined the environmental performance of HE teaching models. A new classification of HE traditional, online and blended teaching models is used to examine the role of ICTs and the likely carbon impacts.
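For readers unfamiliar with the normalisation described above, a minimal sketch of the calculation (kg CO2 per student per 100 study hours) is given below; the course records are invented placeholder values, not SusTEACH data, and serve only to show how the comparison across teaching models is made.

```python
# Minimal sketch of the normalisation described above: total course-related
# emissions converted to kg CO2 per student per 100 study hours, so that
# campus-based and distance-based teaching models can be compared on a common
# basis. The course records are invented placeholders, not SusTEACH data.

def normalised_emissions(total_kg_co2: float, students: int, study_hours: float) -> float:
    """kg CO2 per student, normalised to 100 study hours."""
    return total_kg_co2 / students / study_hours * 100


courses = [
    # (teaching model, total kg CO2, enrolled students, study hours per student)
    ("face-to-face", 12_000, 80, 150),
    ("online",        1_900, 80, 150),
]

for model, kg, n, hours in courses:
    print(f"{model:>12}: {normalised_emissions(kg, n, hours):.1f} kg CO2 / student / 100 h")
```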
Article
This article examines the historical and contemporary shaping of feminist artificial intelligence (FAI). It begins by looking at the microhistory of FAI through the writings of Alison Adam and her graduate students to enrich the plural histories of AI and to write back feminist history into AI. Then, to explore contemporary examples of how FAI is being shaped today and how it deploys a multiplicity of meanings, I provide the following typology: FAI (1) as model, (2) as design, (3) as policy, (4) as culture, (5) as discourse, and (6) as science. This typology sheds light on the following questions: What does the term FAI mean? How has FAI been shaped over time?
Article
Digital calendars are logistical media, part of the infrastructure that configures arrangements among people and things. Calendars increasingly play a fundamental role in establishing our everyday rhythms, shaping our consciousness of temporality. Drawing on interviews with Silicon Valley calendar designers, this article explores how the conceptualization and production of scheduling applications codify contemporary ideals about efficient time management. I argue that these ideals reflect the driving cultural imperative for accelerated time handling in order to optimize productivity and minimize time wasting. Such mechanistic approaches treat time as a quantitative, individualistic resource, obscuring the politics of time embedded in what can and cannot be graphically represented on the grid interface. I conclude that electronic calendars are emblematic of a long-standing but mistaken belief, hegemonic in Silicon Valley, that automation will deliver us more time.