Eur J Educ. 2022;00:1–6. wileyonlinelibrary.com/journal/ejed
© 2022 John Wiley & Sons Ltd.
DOI: 10.1111/ejed.12530
EDITORIAL
Charting the futures of artificial intelligence in education
1 | INTRODUCTION
In less than half a decade, Artificial Intelligence (AI) has become a major topic in the daily news flow and in policy debates. Technological advances have been rapid, and AI has been claimed to transform occupations and work tasks. It has been argued that everyone should have at least a basic understanding of what AI is in order to make sense of what is happening and to survive in this new environment, and that, to remain relevant, we have to realise that "our education systems will need a radical change in their purpose, form and content" (Šucha & Gammel, 2021, p. 38).
Sense-making, indeed, is necessary and useful. One starting point for the thematic first part of this journal issue was an internal European Commission Joint Research Centre (EC-JRC) pilot project that in 2019 sketched a process to create an AI Handbook with and for Teachers. A key assumption in that project was that educators, administrators, and education policymakers will soon face strong pressures to adopt AI-based systems in education practices. To be able to assess claims about Artificial Intelligence in education (AIED), to avoid overloading teachers with unnecessary initiatives, and to find concrete opportunities for AIED, we thought it would be useful to engage teachers in a joint sense-making process in which the potential and challenges of AIED would be contextualised in actual pedagogic settings. In somewhat impolite terms, at the time there was so much hyperbole and nonsense about AI that we were afraid that both the potential advantages of using AI in education and the actual purposes of education were being lost in much of the talk about AIED.
2 | IN THIS ISSUE
In this issue, we try to move beyond the hyperbole and make some sense of this potentially very important technology that, indeed, is already beginning to have a major impact on education. There have been massive investments in AI technology around the world, as well as high-profile policy statements about the need to promote and regulate this emerging technology (see the articles in this issue by Blikstein et al., Holmes and Tuomi, and Selwyn). The Big Tech giants, including Alphabet (Google), Amazon, and Apple, are heavily involved in AI-infused educational technology. Meanwhile, there are now more than thirty multi-million-dollar funded AIED firms, and the market is expected to be worth more than US$20 billion within five years (GMI, 2022). Commercial interests are frequently translated into claims about the transformative power of Artificial Intelligence systems in education (e.g., OECD, 2021). Yet, the technical complexity of AI and AIED systems can make it difficult for practitioners and policymakers to question such claims or assess their relevance. An important motivation for making this issue was to help educators do this. In particular, the comprehensive introductory article by Holmes and Tuomi provides a detailed overview of AIED-related concepts and the current state of the art.
Beyond highly technical academic research, almost all claims about Artificial Intelligence and AIED are claims about the future. Statements and opinions about how AI will change education and learning are now omnipresent,
but efforts to systematically study the futures of AI in education and learning, building on established futures studies methods, have been scarce. Expert discussions have generated some narrative fragments and examples of possible AI-enabled futures (Pelletier, 2021; Roschelle et al., 2020; Vuorikari et al., 2020); more often, however, AIED researchers have painted dualistic images of dystopian and utopian futures of education (e.g., Aiken & Epstein, 2000; Pinkwart, 2016; Schiff, 2021), or have focused on highlighting technological trends and the potential of future AIED for transforming education or for solving existing problems in education (e.g., Baker, 2021; Woolf et al., 2013). Ethical challenges and the impact of commercialisation have received increased attention in recent years in critical AIED studies (e.g., Blikstein & Blikstein, 2021; Buckingham Shum & Luckin, 2019; Holmes & Porayska-Pomsta, 2023; Nemorin et al., 2022; Perrotta et al., 2021; Selwyn, 2019; Williamson & Eynon, 2020), but, beyond computer science teacher communities (e.g., AI4K12, 2022) and a few experiments (e.g., Pihlajamaa & Rantapero-Laine, 2020), educators and policymakers have rarely been actively involved in AIED system development, research, or technology articulation.
The thematic first part of this issue opens with a perspective paper by Riel Miller and Ilkka Tuomi, Making the futures of AI in education: Why and how imagining the futures matters. Miller and Tuomi reflect on the future of AIED from the point of view of futures studies. As Selwyn also notes in the fifth article in this issue, there are two very different kinds of AIED. One is the actual AI in use. The other, much more common, is the AIED of imagined futures. Paradoxically, both rest on the foundation of anticipatory models that we use to make sense of the present and the future. In general, anticipatory models, as explained in the perspective paper, shape the futures we can imagine. These anticipations, therefore, largely determine what AI is and can be in those imagined futures. Understanding AIED therefore requires that we take a closer look at different articulations of futures. This brings us to the field of futures studies. Building on earlier work on futures literacy and the theory of anticipation (Fuller, 2017; Miller, 2007, 2018; Poli, 2017; Tuomi, 2019), Miller and Tuomi reflect on how different futures of AIED emerge as potential futures are used in different ways. AI, as a meaningful technology with real social, economic, and cultural consequences, can only be understood as a product of our imagined futures. Our capabilities to imagine futures and to understand different ways of using them therefore determine, in important ways, what we can make of AI.
The first article in the thematic first part of this issue, on The state of the art and practice in AI in education, is by Wayne Holmes and Ilkka Tuomi. Holmes and Tuomi identify some of the key concepts for Artificial Intelligence in education and provide an overview of the AIED field. The article outlines some of the historical background and introduces different ways in which AIED is currently being used, noting that different practices need to be considered both collectively and separately. To organise the different types of AIED, an updated typology of AIED applications is elaborated (see also Holmes et al., 2019). An earlier version of the typology has previously informed a number of reports (e.g., Holmes et al., 2022; Miao & Holmes, 2021; Vuorikari et al., 2020). The article also introduces some key roadblocks: obstacles that need to be addressed before AIED can be integrated into education practices, ranging from a lack of engagement with the ethics of AIED and with evidence of impact, to the commercialisation of education and the use of AI as a tool of colonisation.
The second article, Ceci n'est pas une école: Discourses of artificial intelligence in education through the lens of semiotic analytics, by Paulo Blikstein, Yipu Zheng and Karen Zhuqian Zhou, is an empirical study that focuses on the commercial discourses of Artificial Intelligence in education. Using text mining to analyse AIED vendor websites, the article shows how the words used by AIED vendors shape the understanding not only of AI in education but of education itself. Through rhetorical moves that juxtapose 'old-fashioned' teachers and 'advanced technology' that works at the speed of light and is able to 'bring students into the 21st century', a comparison is constructed in which technology is portrayed as unequivocally superior. In a highly thought-provoking way, the article points out that the realisation of AIED comes with a new discourse, and that the narrative part of AIED has important consequences for education practices and policy.
The third article, Still w(AI)ting for the automation of teaching: An exploration of machine learning in Swedish primary education using Actor-Network Theory, is by Katarina Sperling, Linnéa Stenliden, Jörgen Nissen and Fredrik Heintz.
The article documents an attempt to use a data-driven AIED system to support mathematics teaching in primary classrooms in Sweden. Following the tradition of science and technology studies, it provides an ethnography-informed description of the successes and failures in appropriating new technological functionality in actual teaching practices. Although the realisation of the planned empirical study was in part limited by the Covid pandemic, the article provides a useful insider view of what using AIED can be like in practice. To describe the complex dynamics of AIED adoption, the article frames the process using actor-network theory, and shows the importance of thick descriptions of AIED in real educational settings.
The fourth article, Artificial intelligence, 21st century competences, and socio-emotional learning in education: More than high-risk?, by Ilkka Tuomi, focuses on the possible consequences of a collision of three highly influential education policy discourses: namely, (1) the use of AI in education, (2) supporting the development of 21st century competences, and (3) measuring social and emotional learning. Reviewing existing research on 'soft skills' and their relation to personality traits, abilities, and interest structures, the article shows that the use of AI to analyse and support the development of 21st century competences and non-epistemic learning may have important social consequences that require careful consideration. Machine learning models developed using data on 21st century competences may have fundamental implications for the ways in which societies organise themselves. The article therefore suggests a moratorium on the use of data on non-epistemic competence components until researchers and policymakers better understand these implications.
The fifth article, The future of AI and education: Some cautionary notes, by Neil Selwyn, reflects on how to avoid some major pitfalls in future discussions about AIED. Aligned with the empirical study by Blikstein et al. in this issue, it highlights the importance of avoiding exaggerated claims about AIED and suggests a more critical stance on the potential and challenges of AIED. For example, it asks for a clear distinction between 'actually existing AI' and 'speculative AI'. It also calls for a more refined understanding of what aspects of education can actually be quantified and represented as data. For current policy debates about digital education, this is a fundamental question, both theoretically and practically. Addressing it would require rethinking some common beliefs about the nature of data and digital computation (Rosen, 1978, 1987; Tuomi, 2000). The article argues that, instead of optimistic hyperbole, we need a balanced view that acknowledges the potential negative impacts of AIED, including social and environmental harms, as well as its ideological and political nature. This is what Facer and Selwyn (2021) have called 'non-stupid' optimism.
The sixth and final article of the thematic part of this issue, Towards hybrid human-AI learning technologies, by Inge Molenaar, builds on her earlier research (Molenaar, 2021) and suggests elements of a new conceptual language for thinking about AIED. Proposing a detect-diagnose-act framework for understanding student-facing AIED and a six levels of automation model as a way to classify AIED systems, the article shows how different AIED systems require different ways of dividing control and tasks among teachers and technology. A starting point for the article is that AIED needs to be understood as a tool that facilitates learning and augments teacher and student capabilities. This is in contrast to the automation perspective that has commonly been associated with AI. The augmentation perspective on AI leads to a view where human cognition and AI form a hybrid, which Molenaar explores in her article using self-regulated learning as an example. Molenaar develops the point that a shared language is necessary for the effective involvement of key stakeholders in AIED development and adoption. The article contributes by articulating concepts and language for this purpose.
AIED is often viewed as a technological speciality that can be competently addressed only by computer scientists specialised in machine learning and AI. However, as Holmes and Tuomi explain, we need to acknowledge that AIED comprises a complex variety of technologies that therefore require multiple understandings. Further, as the articles by Blikstein et al., Sperling et al., and Selwyn in this issue illustrate, narrative elements are key parts of technology and enable us to make sense of it. Part of the process of technology development, therefore, is the creation of new languages, an effort Molenaar engages with in her article. The Miller and Tuomi article, in turn, addresses the foundations of the meaningful reality that organises our societies and policy debates, showing
that anticipatory models make possible expectations, explanations and justifications, the key elements of the stories we tell, as well as action. Futures and AIED, therefore, are deeply linked conceptually, and an improved capability to use futures also makes possible new forms of AI, AIED, and education.
3 | PART II ARTICLES
Part II opens with an article, Third space workers in higher education in times of dislocated complexity, by Kay Livingston and Lorraine Ling, that examines the changing nature of the higher education workforce, specifically with reference to the increasing influence and importance of third space workers such as e-learning developers, partnership managers and learning technology specialists. Livingston and Ling deploy Giddens' Theory of Structuration to analyse two cases of how the higher education workforce is changing, one drawing from a study in Scotland and the other from Australia. Third space workers are forging new identities, crossing traditional boundaries and facilitating change internally within the university and externally through partnerships. The conclusion identifies complex features of ongoing changes in higher education and highlights the need for structural and policy changes; in particular, a need to recognise and legitimise the role of third space workers.
The second article, The transition from higher education to first employment in Spain, is by Encarnación Cordón-Lagares, Félix García-Ordaz and Juan José García del Hoyo. Cordón-Lagares and colleagues report on findings from a secondary analysis of survey data on the time it takes higher education graduates to obtain their first job in Spain. The statistical analysis draws on parametric and nonparametric duration models to estimate the exit rate to employment of university graduates. The results show that graduates with prior work experience, graduates from private universities, and men have a comparative advantage in the transition to employment. Additional factors discussed include the subject field studied, Information and Communication Technology (ICT) skills, international experience, and the timing of job searches.
The final article, Partnership of schools and civil society organisations to support education of students of varied linguistic backgrounds—The situation in the Czech Republic, Italy and Spain, is by Janet Wolf, Raquel Casado-Muñoz and Francesca Pedone. The article reports on a study in which the authors surveyed 34 non-profit organisations in the Czech Republic, Spain, and Italy and interviewed 15 teachers. Specifically, the study examined how non-profit organisations and teachers viewed collaboration between schools and non-profits as a potential resource for teachers supporting students with a different mother tongue (L2 students). The authors highlight the potential of non-profit organisations as partners in education for addressing themes of intercultural education. Obstacles identified by the non-profit organisations included a lack of communication, funding, and coordination on the part of public administration authorities.
Ilkka Tuomi1
Wayne Holmes2
Riel Miller3
1Meaning Processing Ltd., Helsinki, Finland
2Knowledge Lab, Institute of Education (IOE), Faculty of Education and Society, University College London (UCL), London, UK
3J. Herbert Smith Centre, University of New Brunswick, Canada
Correspondence
Ilkka Tuomi, Meaning Processing Ltd., Arkadiankatu 20 A 20, 00100 Helsinki, Finland.
Email: ilkka.tuomi@meaningprocessing.com
ORCID
Ilkka Tuomi https://orcid.org/0000-0002-4179-7103
Wayne Holmes https://orcid.org/0000-0002-8352-1594
Riel Miller https://orcid.org/0000-0001-6329-5983
REFERENCES
AI4K12. (2022). AI4K12.org. https://ai4k12.org
Aiken, R. M., & Epstein, R. G. (2000). Ethical guidelines for AI in education: Starting a conversation. International Journal of Artificial Intelligence in Education, 11(2), 163–176.
Baker, R. S. (2021). Artificial intelligence in education: Bringing it all together. In S. Vincent-Lancrin (Ed.), Digital education outlook: Pushing the frontiers with AI, blockchain, and robots (pp. 43–54). OECD. https://doi.org/10.1787/f54ea644-en
Blikstein, P., & Blikstein, I. (2021). Do educational technologies have politics? A semiotic analysis of the discourse of educational technologies and artificial intelligence in education. In Algorithmic rights and protections for children. https://doi.org/10.1162/ba67f642.646d0673
Buckingham Shum, S., & Luckin, R. (2019). Learning analytics and AI: Politics, pedagogy and practices. British Journal of Educational Technology, 50(6), 2785–2793. https://doi.org/10.1111/bjet.12880
Facer, K., & Selwyn, N. (2021). Digital technology and the futures of education—Towards 'non-stupid' optimism. ED-2020/FoE-BP/27. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000377071
Fuller, T. (2017). Anxious relationships: The unmarked futures for post-normal scenarios in anticipatory systems. Technological Forecasting and Social Change, 124, 41–50. https://doi.org/10.1016/j.techfore.2016.07.045
GMI. (2022). AI in education market size & share, growth forecast 2022–2030. Report. Global Market Insights Inc. https://www.gminsights.com/industry-analysis/artificial-intelligence-ai-in-education-market
Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching & learning. The Center for Curriculum Redesign.
Holmes, W., Persson, J., Chounta, I.-A., Wasson, B., & Dimitrova, V. (2022). Artificial intelligence and education. A critical view through the lens of human rights, democracy, and the rule of law. Council of Europe.
Holmes, W., & Porayska-Pomsta, K. (Eds.). (2023). Ethics of artificial intelligence in education. Taylor & Francis.
Miao, F., & Holmes, W. (2021). AI and education: Guidance for policy-makers. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000376709
Miller, R. (2007). Futures literacy: A hybrid strategic scenario method. Futures, 39(4), 341–362. https://doi.org/10.1016/j.futures.2006.12.001
Miller, R. (Ed.). (2018). Transforming the future: Anticipation in the 21st century. Routledge.
Molenaar, I. (2021). Personalisation of learning: Towards hybrid AI-human learning technologies. In S. Vincent-Lancrin (Ed.), Digital education outlook: Pushing the frontiers with AI, blockchain, and robots (pp. 57–77). OECD. https://doi.org/10.1787/589b283f-en
Nemorin, S., Vlachidis, A., Ayerakwa, H. M., & Andriotis, P. (2022). AI hyped? A horizon scan of discourse on artificial intelligence in education (AIED) and development. Learning, Media and Technology (ahead of print). https://doi.org/10.1080/17439884.2022.2095568
OECD. (2021). OECD digital education outlook 2021: Pushing the frontiers with artificial intelligence, blockchain and robots. OECD. https://doi.org/10.1787/589b283f-en
Pelletier, K. (2021). 2021 EDUCAUSE Horizon Report: Teaching and Learning Edition. EDUCAUSE. https://library.educause.edu/-/media/files/library/2021/4/2021hrteachinglearning.pdf
Perrotta, C., Gulson, K. N., Williamson, B., & Witzenberger, K. (2021). Automation, APIs and the distributed labour of platform pedagogies in Google Classroom. Critical Studies in Education, 62(1), 97–113. https://doi.org/10.1080/17508487.2020.1855597
Pihlajamaa, J., & Rantapero-Laine, A. (2020). School as an innovation platform—A unique model for co-creation. The Finnish Smart Learning Environments for the Future project. European EdTech Network. https://eetn.eu/case-study/detail/School-as-an-innovation-platform---a-unique-model-for-co-creation.-The-Finnish-Smart-Learning-Environments-for-the-Future-project
Pinkwart, N. (2016). Another 25 years of AIED? Challenges and opportunities for intelligent educational technologies of the future. International Journal of Artificial Intelligence in Education, 26(2), 771–783. https://doi.org/10.1007/s40593-016-0099-7
Poli, R. (2017). Introduction to anticipation studies. Springer International Publishing.
Roschelle, J., Lester, J., & Fusco, J. (2020). AI and the future of learning: Expert panel report. CIRCLS, Center for Integrative Research in Computing and Learning Sciences. https://circls.org/wp-content/uploads/2020/11/CIRCLS-AI-Report-Nov2020.pdf
Rosen, R. (1978). Fundamentals of measurement and representation of natural systems. North-Holland.
Rosen, R. (1987). On the scope of syntactics in mathematics and science: The machine metaphor. In J. L. Casti & A. Karlqvist (Eds.), Real brains, artificial minds (pp. 1–23). North-Holland.
Schiff, D. (2021). Out of the laboratory and into the classroom: The future of artificial intelligence in education. AI & Society, 36(1), 331–348. https://doi.org/10.1007/s00146-020-01033-8
Selwyn, N. (2019). Should robots replace teachers?: AI and the future of education. Polity.
Šucha, V., & Gammel, J.-P. (2021). Humans and societies in the age of artificial intelligence. European Commission, Directorate-General for Education, Youth, Sport and Culture.
Tuomi, I. (2000). Data is more than knowledge: Implications of the reversed knowledge hierarchy to knowledge management and organizational memory. Journal of Management Information Systems, 16(3), 103–117. https://doi.org/10.1080/07421222.1999.11518258
Tuomi, I. (2019). Chronotopes of foresight: Models of time-space in probabilistic, possibilistic and constructivist futures. Futures & Foresight Science, 1(2), 1–15. https://doi.org/10.1002/ffo2.11
Vuorikari, R., Punie, Y., & Cabrera, M. (2020). Emerging technologies and the teaching profession: Ethical and pedagogical considerations based on near-future scenarios. Publications Office of the European Union. https://doi.org/10.2760/46933
Williamson, B., & Eynon, R. (2020). Historical threads, missing links, and future directions in AI in education. Learning, Media and Technology, 45(3), 223–235. https://doi.org/10.1080/17439884.2020.1798995
Woolf, B. P., Lane, H. C., Chaudhri, V. K., & Kolodner, J. L. (2013). AI grand challenges for education. AI Magazine, 34(4), 66–84. https://doi.org/10.1609/aimag.v34i4.2490