The Australian Educational Researcher
https://doi.org/10.1007/s13384-025-00801-z
What generative Artificial Intelligence priorities and challenges do senior Australian educational policy makers identify (and why)?
Matt Bower1 · Michael Henderson2 · Christine Slade3 · Erica Southgate4 · Kalervo Gulson5 · Jason Lodge3
Received: 8 March 2024 / Accepted: 1 January 2025
© The Author(s) 2025
Abstract
Free access to powerful generative Artificial Intelligence (AI) in schools has left
educators and system leaders grappling with how to responsibly respond to the con-
sequent challenges and opportunities that this new technology poses. This paper
examines the priorities and challenges that senior Australian educational leaders
identify in relation to the responsible and ethical use of generative AI in school edu-
cation, and the reasons for their beliefs. Members of the Australian generative Arti-
ficial Intelligence in Education working group as well as other senior policymakers
throughout Australia participated in a two-phase data collection process involving
survey responses and focus group discussions. Ranking activities revealed a large number of priorities and systemic challenges, with no unanimous consensus emerging. The highest priorities for senior policymakers related to managing risks, educat-
ing teachers, and educating system leaders, while the main systemic and environ-
mental challenges related to the pace of change, teacher capabilities and professional
learning, and equitable access to the technology. Throughout the analysis, meta
themes emerged that characterised the policy-setting environment as one involving
urgency, uncertainty, interconnectedness, contextuality, and complexity, with the
pivotal role of teachers highlighted throughout. Reflections on responsible and ethi-
cal policy-setting in response to rapid technological change are provided, including
in relation to anticipatory and networked governance and the inter-relationship
with the broader policy context. Recommendations for further research and practice
are also proposed.
Keywords Artificial Intelligence · Education · Policymaking · Governance · Priorities · Challenges
Extended author information available on the last page of the article
Introduction
The release of ChatGPT by OpenAI in November 2022 marked a watershed moment for the education sector, ushering in a new era where learning and teach-
ing would be infiltrated by powerful and readily available generative Artificial Intel-
ligence. Garnering significant public interest and media coverage (e.g., Mollman,
2023; Roose, 2023; Wingard, 2023), ChatGPT demonstrated the capacity to gener-
ate extended text responses to a broad spectrum of natural language prompts, often
imitating intelligent and knowledgeable human interaction. Artificial Intelligence
(AI) had already found a foothold in everyday life, through its integration in tech-
nologies like personal assistants (e.g., Siri, Alexa), weather prediction, facial iden-
tification, medical diagnosis, legal assistance, and more (Holmes, Bialik and Fadel,
2019; Ibrahim et al., 2023; OECD, 2019). The potential application of AI in education had previously been recognised, for instance, in bespoke learning platforms,
adaptive assessment systems, intelligent predictive analytics, and the application of
narrow-form conversational agents (Akgun and Greenhow, 2021; OECD, 2022).
Yet, the capacity of generative AI to produce human-like responses to a large vari-
ety of requests constituted a significant disruption to many traditional educational
processes and left educational leaders and policy-makers challenged with how to
respond in ways that are ethical and responsible.
The use of generative AI can potentially benefit education in a number of ways,
for instance by directing students to personalised curriculum pathways, producing
customised learning content, providing information about student progress, inform-
ing teacher decision making, contributing to learning design processes, and assisting
in assessment and feedback (Bower et al., 2024; Chiu et al., 2023a, 2023b; Ibrahim et al., 2023). Yet the implicit biases, inaccuracies, lack of transparency, and potential disruption to traditional learning and teaching posed by AI in general mean that
understanding how to set responsible and ethical policy is not necessarily obvious
(Akgun and Greenhow, 2021). The effective use of generative AI systems in edu-
cation is constrained by teacher capacity and student access, which raises a raft of
equity issues that policymakers need to consider. At a higher level, there is very
little research that examines the real-time decision-making processes and priorities
of policy-makers and educational leaders as they attempt to respond responsibly and
ethically to technological disruption. The perspectives of policy-makers and educa-
tional leaders are vital, because their decisions have the potential to influence educa-
tional processes across an entire sector. Their operation at multiple layers of the edu-
cation system means that decisions about how, when and if to use AI in education
have procedural and ethical impact across multiple scales, from schools to education
systems to national level responses.
Understanding the priorities and challenges of generative AI as identified by
policymakers provides a useful referent for other stakeholders (principals, teachers,
researchers) as they attempt to apply generative AI in responsible and ethical ways.
Systematic analysis of senior policymaker perceptions enables advancement beyond
the individual and rhetorical reports in the media, to determine which issues are
believed to be most important. Investigating the reasons underpinning the priorities
and challenges that policymakers identify also provides rare insight into the realities
of policy-making more generally. Consequently, this study examined the perspec-
tives of senior policy makers and educational system leaders in Australia as they
attempt to responsibly and ethically establish policy for the use of generative AI in
schooling. Specifically, the research question for this study is as follows:
Research Question: What are the main priorities and challenges of responsibly
and ethically implementing generative AI in schools as seen by policy makers
and educational system leaders, and their reasons for them?
Answering this question enables responsible and ethical policy-making, imple-
mentation and research relating to generative AI in education in other settings to
be based on evidence, at the same time as it illuminates more generally the nature
of policy-setting in response to educational disruption. For instance, knowing the
priorities and challenges of senior educational leaders as they attempt to set educa-
tional policy relating to AI may result in changes to future policy-setting processes,
in terms of timing, membership, and guidance offered. Understanding the range of
priorities and degree of policymaker consensus may provide insights into the rea-
sons for policy outcomes.
The paper begins by outlining current issues and debates in research literature
on AI and education, including the responsible and ethical considerations. The con-
text of the study (the Australian school education sector) is described, along with
the mixed methods methodology, participants and analytic approach. The major
priorities and challenges are then identified, including the reasons for them. The
nature of responsible and ethical policy-setting in response to educational disrup-
tion is discussed, and implications are drawn for the responsible use of generative
AI in schools and policy-setting more broadly. The paper concludes by proposing an
agenda for future research.
Literature review and background
The field of AI in Education dates back to the 1970s, with advances in intelligent
tutoring systems, the integration of pedagogical agents, data mining and learning
analytics and smart classrooms all receiving significant research and development
attention. The acceleration of advances in machine learning has led to applications,
systems and platforms used for learning that have powerful recommender, predictive
and adaptive features. Substantial ethical and philosophical debates regarding AI’s
role in schooling have been circulating within the academic and policy realms (Perrotta and Selwyn, 2020; Southgate et al., 2019; Williamson et al., 2023). Touretzky et al. (2019) argue that merely knowing how to use AI tools is insufficient, instead
advocating for AI to become a mandatory subject integrated across the curriculum
so that all students can fully grasp the workings of AI. To this end, UNESCO has outlined how AI should form part of the K-12 school curriculum (UNESCO,
2022). The question of whether students should be able to draw upon AI assistance
in their work has been likened to the debate over students’ use of spell-checkers and
calculators (Popenici and Kerr, 2017). However, the release of recent generative AI
tools like ChatGPT represents a significant advancement in technology’s potential
to provide cognitive assistance, including the risk of students merely submitting AI-
generated content that has in turn been based on the work of others, making the ethi-
cal usage of generative AI paramount.
The recent and unprecedented release of publicly accessible, powerful and easy
to use generative AI tools such as ChatGPT has meant that education systems do not
have empirical data upon which to base their policy responses. This is particularly
problematic in light of increasing expectations that schools and school systems adopt
evidence-based practices. Several reviews have categorised various AI applications
within the classroom (e.g., Celik et al., 2022; Chen et al., 2020; Chiu et al., 2023a, 2023b; Xu and Ouyang, 2022; Zawacki-Richter et al., 2019; Zhai et al., 2021), though these place little emphasis on the widespread application of generative AI across entire education systems because of its newness. The lack of available
evidence about the impact of generative AI means that schools and school systems
inevitably base their policies and practices on intuition, or derive policy from other
areas of AI application such as health or recruitment. Many jurisdictions initially
chose to ban generative AI outright, so that they could investigate the safety of the
technology and consider the implications for releasing it into schools. However,
other jurisdictions and schools have chosen to embrace generative Artificial Intel-
ligence within the curriculum, identifying the knowledge, skills and dispositions
associated with use of generative AI as being critical for student success into the
future. Either way, the inclusion of generative AI in the curriculum seems inevitable,
with major productivity platform providers such as Microsoft (through Copilot) and
Google (via Gemini) integrating AI support into their suite of applications. These
actions mean that it will be extremely difficult to prevent students accessing genera-
tive AI tools.
These generative AI tools pose a number of risks to education beyond copy-paste
behaviours by students. One major risk relates to dissemination of misinformation
and bias (see Borji, 2023; Krügel et al., 2023). Racial and gender bias in generative
AI models can be fundamentally entrenched, because the models have been built
using data that contains social stereotypes (Buolamwini and Gebru, 2018; Worrell,
2024). Additionally, risks associated with general and analytic forms of AI may be
amplified in generative AI platforms. For instance, sending student information via
the open web may compromise data privacy and security, differing access to tools
may amplify inequality gaps for certain sub-populations (e.g. low socio-economic, female, and Indigenous students), and the outputs of generative AI systems may lack explainability and accountability (Akgun and Greenhow, 2021; Celik et al., 2022). Complicating matters, the ethics of using the models at all are called into question, when
the data that has been used to create them has often been sourced without permis-
sion or from indeterminate sources (Kirova et al., 2023; Smits and Borghuis, 2022).
There are some who broadly question the reliability and efficacy of generative AI at
a fundamental level, because of its potential for inaccuracies and mis-directions (e.g.
McKnight and Shipp, 2024). Others caution against rushing into AI integration due
to harms associated with the ‘datafication’ of education and the environmental burden that generative AI tools impose (Selwyn, 2024).
A systematic review of AI risks in school education highlighted privacy and
autonomy as the most frequently identified risks, followed by AI biases, accuracy
and functionality, deepfake and FATE risks (Karan and Angadi, 2023). In response
to such risks, academics have highlighted the importance of applying Fairness,
Accountability, Transparency and Ethics (FATE) principles when using AI in edu-
cation, encouraging the use of eXplainable AI (XAI) whereby the reasons for deci-
sions made by AI are transparent and available (Khosravi et al., 2022). Transpar-
ency may also be enacted by being clear to students, parents and other stakeholders
when, how and why AI is being used. Southgate et al. (2019) argue that human-rights-based principles such as participation, accountability, non-discrimination and
empowerment should underpin the ethical design, use and implementation of AI in
education, especially in schools, where protecting vulnerable children is paramount. Whether and how ethical principles are applied by different stakeholders in
response to generative AI platforms is an open question.
At the centre of deciding how to implement generative AI in education are edu-
cational policymakers, who are appointed to determine the parameters, risks and
protocols surrounding the application of generative AI in schools and education
systems. They have the challenging role of considering and balancing the needs of
multiple stakeholders, including students, teachers, principals, parents, and society
more broadly. Policymakers have responsibility for providing adequate guidance
about the use of generative AI in schools, which in turn may inform decisions about
procurement and application at school and systems levels. They also need to balance
responsible and ethical principles with the pragmatics of implementation. Without
proper guidance about the responsible use of generative AI in schools, these deci-
sions may be made in an ad-hoc manner or may prevent opportunities to develop
and use generative AI from being realised.
There are two important dimensions to developing policy to guide the develop-
ment and responsible use of generative AI in education, relating to the nature of
emerging technologies and networked governance. In terms of the first dimension,
the nature of emerging technologies such as generative AI means that there is a lag
time between the introduction of the technology and policy development (UNESCO,
2023b). This lag time leads to a largely reactive rather than anticipatory approach
to policy-making. That is, generative AI technology has in some cases been intro-
duced before there are adequate policies for guiding productive use and mitigating
risks in education. Policy reactivity is an area in which other emerging technology fields, such as nanotechnology, have also struggled (Guston, 2014). One option to address
issues relating to reactive policy-making has been the introduction of anticipatory
governance. This approach to policy-making both anticipates and aims to manage
the effects of new technology, and involves acting “on a variety of inputs to manage
emerging knowledge-based technologies while such management is still possible”
(Guston, 2008, p. 29).
The second important policy dimension for responsible application of generative
AI in schools relates to networked governance. This involves a change in educa-
tion policy-making and governance that moves beyond governments to involve a
diverse set of actors and organisations. The key is that contemporary policy-mak-
ing networks comprise actors with varied expertise that are inside and outside of
government (Ball, 2009; Lingard and Sellar, 2013). In the technology and educa-
tion space, this can incorporate non-government organisations, academics and uni-
versities, think tanks, philanthropies, and technology companies (Gulson and Sellar,
2019; Gulson et al., 2022). With respect to generative AI and schools, networked
governance could also include teachers, principals, parents, and minority group
leaders, to ensure diverse representation within policy-making processes.
The broader regulatory context necessarily influences how policies such as those
relating to responsible use of generative AI in schooling are set. There are well over
100 international policies on AI that attempt to harness the potential and manage
the harms of AI, including in education (Holmes et al., 2021; Nemorin et al., 2023).
Two notable examples include the AI Act in the European Union (European Com-
mission, 2021) and the US Blueprint for an AI Bill of Rights (White House Office
of Science and Technology Policy, 2022). Both the EU’s AI Act and the US Blueprint for an AI Bill of Rights involved extensive consultation and were developed over years, in an effort to apply more anticipatory governance that would remain relevant
and applicable well into the future. Yet the fast-changing nature of AI technologies
meant that neither policy fully accounted for the increased functionality or rapid
uptake of generative Artificial Intelligence.
In an analysis of European and international policy documents, specifically
related to AI in education, Linderoth et al. (2024) note three major sociotechnical
themes emerging: AI reshaping education, the surveillance society and AI’s inevita-
ble disruptiveness. These concurrent truths, which can at the same time have posi-
tive and negative implications, highlight the challenging nature of setting policy
relating to AI in education. Adding to the complexity of setting AI in education
policies is that their scope is often multi-scalar, spanning a number of domain areas. Depending on the jurisdiction, there are often national-level policies
sitting alongside numerous policies at state, education system and school levels, all
aiming to guide the use of AI technologies in schools. These policies are often dis-
parate and disconnected; there is no specific global database that both tracks and
provides analysis of these policies. Moreover, many of these policies do not
relate to education specifically, and few relate to generative AI. This makes it diffi-
cult for policy makers and educational leaders to guide the responsible development
and use of generative AI in education. The Australian case illustrates this.
Context ofthis study
The Australian regulatory and legal context for AI in general is evolving and hence
somewhat disjointed. While all legislation, whether online or offline, applies to AI,
at the time of conducting this study there was no specific legislation to govern AI
and the Privacy Act 1988 had been under review for several years. Prior to our
study, the Australian Government Department of Industry, Science and Resources
(n.d.) released a national, voluntary set of “Artificial Intelligence (AI) Ethics Principles” and produced discussion papers for consultation on “Positioning Australia
as a leader in digital economy regulation (automated decision making and AI regu-
lation)” (March–May 2022) and “Supporting responsible AI” (June–August 2023).
There had been several national and state-based commissions that provided guidance on the use and governance of AI, such as the Office of the Australian Information Commissioner and the Australian Human Rights Commission. As well, there had
been work undertaken by the Tertiary Education Quality and Standards Agency
(TEQSA), for instance, their guidance note relating to assessment reform in the age
of AI (Lodge et al., 2023). There had also been general guides released by international bodies such as the United Nations Educational, Scientific and Cultural Organization (Miao and Holmes, 2023) that provided general guidance about the use
of generative AI at all levels of education and for research. However, the lack of a
national regulatory framework or guidelines for Australian schools when ChatGPT
was released meant that much of the initial governance work surrounding the use of
generative AI in schools fell to state-based education departments for public school-
ing, diocese-based education administrators for Catholic schools, and principals in
private schools. In summary, there was a clear need for education leaders and the
education sector more broadly to be able to respond to the rapidly changing AI land-
scape in a coordinated and transparent way.
Therefore, in February 2023 the Australian Education Ministers (national, state
and territory government ministers responsible for education) organised an AI in
Education working group to help develop an evidence-based, best practice national
framework that could guide the effective, ethical and responsible use of generative
AI in the school sector. The purpose of developing this framework was to miti-
gate the risks associated with generative AI, as well as take advantage of emerg-
ing opportunities through the application of these technologies. The approximately
thirty working group members included senior jurisdictional and national agency
representatives, who worked with ten highly credentialed independent experts from
universities across Australia. The authors of this paper were all academics who con-
tributed to the working group. The working group members participated in a two-
day workshop in March 2023 to derive founding principles for a national framework
for generative AI use in schools. Over several months, the workshop outcomes were
then refined, released for consultation, and ultimately endorsed by the Australian
Education Ministers in November 2023 as the Australian Framework for Generative
Artificial Intelligence in Schools (Australian Government Department of Education,
2023).
During the workshop, the authors of this paper recognised that there was a need
for not only the development of a national Framework, but also to better understand
the lived and anticipated priorities, risks and challenges experienced by policymak-
ers and education leaders as they attempted to create policy for the responsible use
of generative AI in schools. The circumstances provided the opportunity to conduct
a naturalistic case study exploring the real-time thinking of policymakers, to pro-
vide insight, not only into the perceived priorities and challenges, but also into the
nature of responsive policy-making itself. Accordingly, the research team embarked
on an additional stream of activities to capture the priorities, challenges and char-
acteristics of setting responsible generative AI policy in the education sector. The
team was able to collect all data for this study during a critical period of uncertainty
after widespread awareness of generative AI tools by the public emerged (approxi-
mately January 2023) and before the Australian Framework for Generative AI in
Schools was released (November 2023), shedding light on the nature of responsive
policy-making.
Method
Data collection
This qualitative study was conducted in two phases. The first phase involved survey-
ing 22 senior education leaders in June of 2023 to reveal their key concerns regard-
ing the integration of generative AI in their schools and departments. This phase
was used to inform the design of a second phase, involving online focus groups of
the same 22 senior leaders in July of 2023 to rank priorities and explore their per-
ceptions in more depth. All aspects of the study were approved by the Monash University Research Ethics Committee (Project ID 38535). Participation was voluntary
and a result of informed consent, which included agreement that data would be de-
identified prior to publication.
Due to the broad range of contexts, roles and responsibilities of the policymakers
and educational leaders, an online qualitative survey was used in phase one to ascer-
tain the breadth of understanding, and range of concerns, priorities and other issues
experienced by the participants. The strength of qualitative surveys for this purpose
has been noted by Braun et al. (2021). The survey included items relating to demo-
graphic details, such as leadership role, but also items asking participants to detail
(a) their perceived gaps in knowledge (what do they feel they need to learn about AI
and education), (b) their major priorities for integrating AI in school education, (c)
the major risks that they identify for use of AI in education, and (d) the major sys-
temic and environmental challenges to responsible and effective integration.
The phase one survey results informed the design of the subsequent focus group
questions and stimuli that were used in phase two. The two two-hour focus groups were
identical in content and structure, but held at separate times to enable more policy-
makers and educational leaders to participate. In the online focus groups, the par-
ticipants were presented with a synthesis of the survey data in relation to the key
priorities, risks and challenges. This included two online ranking tasks in which the
participants were asked to separately rank the priorities as well as the systemic and
environmental challenges that they had collectively identified in the phase one sur-
vey. The results of the ranking exercises were then revealed to the participants in the
focus groups, and provided a catalyst for further focus group discussion about the
issues. The research team took a purely faciliatory (non-participatory) role during
the focus groups, carefully avoiding the sharing of their own thoughts about genera-
tive AI, so as not to unduly influence the perceptions of participants. In the focus
group discussions, participants were asked to explain the reasons underpinning their
ranking of the priorities and challenges, as part of a questioning and non-judgemen-
tal open conversation. Participants were also asked to elaborate about the different
types of risks that they believed were posed by generative AI, and also the sort of
research that they felt would be most useful to support educational practice. The
phase two online focus groups were recorded and transcribed, and along with the
phase two ranking task responses, provided the corpus of data that was analysed and
reported in this study.
Participants
There were 22 senior education decision makers and stakeholders who participated
in both phases of this research study. Participants were recruited via (a) direct email
to the participants of the national working group, (b) direct approach to state and
national offices inviting participation, (c) email to senior education leaders known
to the researchers, and (d) referrals to other senior policymakers and educational leaders from people who had already been invited to the focus groups. Eight of the 22 partici-
pants were from the original AI in Education working group, noting that the focus
group meetings provided working group members with an extended opportunity to
elaborate on the reasons for their perceptions. The participants were drawn from
a wide range of contexts: The Australian Government Department of Education;
State/Territory Departments of Education (Victoria, South Australia, New South
Wales, Queensland, Australian Capital Territory); Independent Schools (SA, ACT);
Teacher Union (NSW); Ministerially funded Australian Education Research Organi-
sation; and a national digital service and content provider. The participants also rep-
resented a wide range of senior roles providing a broad understanding of student,
teacher, administration, school and wider system needs. The senior educational lead-
ership roles ranged from chief information officers and directors of strategy, policy
and assessment through to copyright advisors and serving school principals who were also members of representative bodies, such as those for independent schools.
Analysis
The transcript analysis adopted a combination of inductive and deductive approaches
in the vein of Fereday and Muir-Cochrane (2006). First, the two focus group sessions
were transcribed into a corpus of 23,291 words, excluding initial presentation and
scene setting by the researchers. The transcripts were then imported into the NVivo
1.7.1 Qualitative Data Analysis System, in accordance with Bazeley (2020). The
reasons for participants’ priorities and challenges were initially categorised deductively according to the a priori themes that had been identified in the initial survey.
A second layer of emergent inductive thematic coding was used to characterise the
nature of the policy-making context. The use of both inductive and deductive cod-
ing enables the analysis to be grounded in the known context while remaining open to unanticipated outcomes (Cohen et al., 2017). Trustworthiness of
interpretation was strengthened by triangulation of the two phases of data collection,
in which the focus groups served as a form of member checking and elaboration
for the priorities and challenges that were identified during the survey (as recom-
mended by Wellington, 2015). The results of this first analysis were then discussed
by the whole research team, with further refinements to coding and interpretation to
strengthen the robustness of meaning-making. The original data sources were then revisited on an ongoing basis, to ensure that the reported findings accurately and comprehensively represented participant perceptions.
Reporting
Participant responses to the priority area rankings and their corresponding ration-
ales are presented first, followed by rankings and rationales for the key environmen-
tal and systemic challenges relating to generative AI in schools. The reporting of
results makes extensive use of primary data (quotes), as rich sources of insight into
the thinking of these senior educational policymakers (vis-à-vis Cohen et al., 2017).
For ease of interpretation, the a priori thematic codes are shown in bold throughout
the text, and the inductive themes that characterise the nature of the policy-making
process are presented in italics. The Discussion section interprets the sorts of priori-
ties and challenges that were expressed by the policymakers and educational leaders
with respect to the responsible and critical application of generative AI in schools,
and also provides reflections on policy-making process with respect to responsive
and networked governance.
Results
Priorities
Of the 22 senior policymakers and educational leaders that attended the focus
groups, 18 chose to complete the generative AI in education priorities ranking activ-
ity. The results are shown in Table 1. The rankings, shown in descending order,
demonstrate that managing risks, education of teachers, system leaders and students,
as well as equitable access are seen as the highest priorities.
Managing risks such as misinformation and privacy was rated the overall high-
est priority by the senior policymakers. The uncertainty of how generative AI might
be used, and the fact that it was already being used with little guidance or under-
standing, meant that senior policymakers perceived a sense of urgency to act, as
explained by one participant:
The horse has really bolted. That term about an emergency situation is really
an interesting one. There was a bit of fear to start with, there’s a bit of excitement,
a bit of anticipation about the opportunity. I mean, if you’re on LinkedIn,
you’ve seen it go absolutely wild in regard to teachers sharing resources and
opportunities of how it writes assessments and plans lessons. So what I
put first was risk management, managing the risk of how it’s being used, who’s
using it, and looking at it being an educative process, because we really now
need to sort of pull people back and teach them what AI is, how it can be used
effectively, how it can be used safely. You know, the fact that AI is designed to
give you an answer, even though it might be wrong, you know, those sorts of
things, people just don’t know, they’re just using the tool without any education
behind it.
Across the corpus of responses, a number of pressing risks relating to gen-
erative AI use were identified, including risks of inaccurate or misleading data,
biased information, data security and academic integrity. There was a sense that
some risks, for instance relating to privacy and copyright issues, were important,
but would largely be managed through regulation and legislation. On the other
hand, other risks were more ephemeral and difficult to address, for instance how
to adjust assessment processes and how to uphold the professionalism and auton-
omy of teachers in a world with increasingly powerful generative AI.
The lack of understanding about how AI tools worked and could be safely used
meant that policymakers identified educating teachers and educating students
as urgent priorities. The complexity of the situation, and the interconnectedness
of issues such as managing risks, educating teachers, and educating students,
were often evident, for instance:
if we had all the time in the world … you set out a logical sequence of all of
these things… but talking to my children and my colleagues on the ground
in classrooms, it’s a little bit of an emergency response that I believe we
need… We need to be looking at the teachers’ capacity and their workload,
and then moving through preparing the students for what we’re actually
doing now. So what can we do right now and then making sure we don’t fall
afoul of some of the other aspects like the ethical and the risks, but really
focusing on what’s happening right now because kids are using it, teachers
are using it, and they don’t know what they’re doing.
Table 1 Senior policymakers’ ranking of AI in education priorities
Item Mean Min Max
Managing risks (misinformation, privacy etc.) 5.0 1 17
Educating teachers 6.0 1 15
Educating system leaders 6.5 1 17
Equitable access 6.6 1 17
Educating students 7.2 2 16
Ethical integration 7.8 1 18
Understanding student use 9.0 1 15
Updating assessment 9.2 3 18
Supporting disadvantaged 9.4 1 15
National collaboration 9.9 1 18
Updating curriculum 10.4 1 17
Supporting teacher workload 10.5 2 17
Educating parents 10.6 3 18
Guidelines for EdTech companies 10.7 1 18
Training AI on our datasets 12.0 2 18
Improving wellbeing 12.6 2 18
Upholding teacher roles 12.8 1 18
Cost effectiveness 15.1 7 18
Educating system leaders was also seen as a high priority, because of its inter-
connectedness with teacher practice and effective policy response in general, as
detailed by one participant:
…if a system leader, whether that’s a minister, Director General, doesn’t
understand what they’re dealing with, then they will say, Oh, no, we don’t need
to worry about that. No funding for that. Chuck it on, the teachers can deal
with that. Or, oh, well, I’ve been told that it’s, it’s not that bad. So it’s all fine.
So I think that the educating system leaders, in terms of what we’re actually
dealing with is probably one of the most important things so that the rest can
actually follow.
National collaboration was seen as important, due to the uncertainty, complexity
and urgency of the situation, so as to be able to best support teachers in consistent
ways, such as:
… educational technology has forever been a very slow uptake for teachers.
And this doesn’t seem like business as usual. This is new territory. … So look-
ing at it like that as an opportunity and recognising that the teachers and the
students and the parents are going to continue on this journey with or without
us at their own speed, and being able to use the national kind of conversation.
So we can at least be on the same page… [states shouldn’t spend time arguing
about] AI because it just muddies the waters and it makes things take way too
long. So having some consistency… the teachers are standing in front of the
students right now. We have students that are bypassing our network
filtering, not to do awful things. They’re doing it to access ChatGPT because
they’re curious about it. So being there to support the teachers is what I think
is critical.
National collaboration was also identified as interconnected with advancing other
priority areas, for instance ethical issues such as equitable access and addressing
student disadvantage, as outlined below:
equitable access, student disadvantage and national collaboration. In the
space, we’re working on developing resources that are available free for
schools, we know that often the most appreciative are those teachers who don’t
necessarily get access to resources because of either the size of their state and
therefore the budget of their state and territory. So I think to ensure that there
is equity of access and that students aren’t disadvantaged, there does need to
be a national approach.
Through the conversations it was apparent that the context of the senior poli-
cymaker within the system they were representing influenced their priorities. For
instance, one senior technologist prioritised working with edtech providers and
managing risks, because these were the key responsibilities of their role, stating:
So therefore, as a technologist and a provider of tech to schools, the main chal-
lenges and certainly the tone of my responses really centre on how do we cre-
ate the right safeguards, guardrails, technicals and limitations to ensure that
schools can consume artificial intelligent products in ways that we know where
their data is, and know who’s accessing it and those sorts of things… [my]
context is very much about the creation of the right platform to then enable
schools to be innovative.
Amongst the uncertainty and urgency relating to updating assessment
approaches, generative AI was also seen as a potential catalyst for educational
transformation.
…fear at the senior secondary level. So teachers are really starting to talk
about well, what does this mean for assessment at the senior secondary level?
And how do we revisit assessment, but those conversations are starting to look
like opportunities, I think teachers are going to even be a bit relieved that we
might be moving into that space where we’re looking at doing that differently.
Systemic andenvironmental challenges
All 22 of the senior policymakers and educational leaders who attended the focus
groups chose to rank the systemic and environmental challenges that they had previ-
ously and collectively identified in the survey. The rankings attributed by the sen-
ior policymakers and leaders are shown in descending order in Table 2 below. Pace
of change, teacher capabilities and professional learning for all staff, and equitable
access to technology were ranked as the most pressing challenges by the participants.
Pace of change was seen as a cause of complexity, in terms of being intercon-
nected with other sources of uncertainty and contextual challenges, as explained by
one participant:
…the pace of change that in the last two, three years with COVID, it’s not
just about the AI, but if you consider that there has been COVID, we’ve had
significant flooding, we’ve introduced a whole new system. For us, the pace
of change of AI has almost come as a disruption on top of a whole lot of other
changes that have happened over the last two or three years. So it doesn’t actu-
ally sit in isolation just by itself. So if we consider it as a systemic challenge
for us in terms of to carry on working with our environment, we know there’s a
certain amount of fatigue that actually exists already.
The pace of change was also seen as interconnected with other systemic and
environmental challenges, such as teacher burnout and subsequent teacher short-
ages, due to the risk of generative AI policy and procedures overloading teachers
with too much change in the wake of the COVID-19 pandemic, for example.
…we’ve kind of got this risk… when I was back in schools, I was managing
a faculty during COVID. And we lost a lot of teachers because the pace of
change was too fast. And they suddenly had to move to teaching online. And it
would just be like, I’m retirement age, I’m out of here.
Comments relating to teacher capability and teacher preparedness high-
lighted the important role of teachers as key agents in determining the success of
generative AI use in the classroom. There was particular concern amongst the senior
policymakers that teachers develop deep and flexible understanding of generative
AI so that their capabilities would be resilient to change, as detailed in the following
scenario:
running a training program around this particular product, I don’t know that
we can really afford to do that anymore, I think we’ve really got to build peo-
ple’s capability… you know, that people can actually see what’s ticking under
the hood and understand how that works. And it can be explained in a way that
people can understand it. So I think that building of capability and focus on
people is probably more important than, you know, trying to keep up with the
individual bits of tech.
The senior policymakers expressed numerous concerns about equitable access
to technology, and how the specific context of individual students and schools may
lead to disadvantage for students with the greatest need, such as:
I see the possibility in the future that there might be this divergence between
highly resourced individuals or highly resourced schools that have access to
more and more advanced versions of AI models and those that don’t.
Table 2 Senior policymaker rankings of AI in education systemic and environmental challenges
Item Mean Min Max
Pace of change 4.4 1 4
Teacher capability and preparedness 5.2 1 15
Equitable access to technology 6.9 2 19
Need for professional development all staff 7.2 1 19
Development of policies and procedures 9.1 1 18
Resourcing for schools 9.1 2 22
Academic integrity for high stakes assessment 9.2 1 18
Teacher fatigue and burnout 10.1 1 20
Bias 11.3 3 21
Risk averse environment 11.7 2 22
Student perceptions 12.2 4 22
Adopt new platforms/models without trialling 12.3 2 22
Exposure of schools to ethical & legal challenges 12.7 2 22
Teacher shortages 13.0 2 21
Impact on existing school structures & processes 13.0 3 20
Disconnect high level decisions & ‘on the ground’ 13.0 4 21
Data breaches 13.2 1 22
Governance of ICT in schools not by executives 14.4 2 22
Parents perceptions 15.1 4 21
Copyright law 15.5 1 22
Community perceptions 16.6 5 22
Media perceptions 17.8 1 22
Access was seen as potentially amplifying disparity in a system that was already
inequitable.
We already have a bifurcated system, where the outcomes of student learn-
ing are heavily determined by socio-economic status, where we see increasing
technological advantage, we will see increasing demarcation along socio-eco-
nomic status lines.
The senior leaders identified the development of policies and procedures
and disconnect between high level decisions & ‘on the ground’ as major sys-
temic challenges, because of the complexity and contextuality of designing poli-
cies that would suit all circumstances and be embraced by teachers. One participant
explained:
…we do have this often disconnect. And obviously, in central office, in Depart-
ment of Education, we come up with a good idea. But it can often be very dif-
ferent from what happens on the ground and what supports on the ground. And
I think AI because of some of the blackbox nature of it, is more likely to facili-
tate those kinds of situations where the center says this is great, it will help
everybody, there’s no risks here. And we hit the button to say everybody should
use this. And then you know, a small school, geographically different location,
finds it much more challenging with their kids to actually make it work, than a
rich school with a lot of resources.
Academic integrity for high stakes assessment was seen as an urgent challenge,
because it was interconnected with whether students actually learnt.
the nature of our students is if they can get away with handing in AI for an
assignment, then they won’t bother to learn anything.
The integrity of assessment was also seen as critical because of its relationship
to the integrity of the entire school system, particularly in senior years, as explained
below:
high stakes assessment, as a sort of outcome, for the learning for all of our
students, obviously has a significant impact on people’s confidence in educa-
tion as a whole, even though we just think of it as sort of the students at the
end of year 12. It’s what hopefully we will feel confident in. If people start to
worry that AI is undermining that, then it is undermining not just the exam
results for that year.
Discussion
The senior policymakers identified a number of priorities that directly related to
the responsible and ethical application of generative AI in schools, including, in
order of priority, managing risks (misinformation, privacy etc.), educating teachers,
educating system leaders, equitable access, educating students, ethical integration,
understanding student use, updating assessment, supporting disadvantaged, national
collaboration, updating curriculum, supporting teacher workload, educating parents,
guidelines for edtech companies, training AI on our datasets, improving wellbe-
ing, upholding teacher roles, and operating in cost effective ways. While many of
these priorities have featured in various reports and recommendations relating to the
responsible and ethical use of AI in education (for instance, UNESCO’s Guidelines
for Generative AI in Education and Research, 2023b), a contribution of this study is
to provide an indication of their perceived importance based on empirical evidence
from senior policymakers in Australia. For instance, the senior policymakers in our
study indicated that educating teachers, system leaders, students and parents was a
much higher priority than had been identified by Karan and Angadi’s (2023) system-
atic literature review.
Additionally, while the purpose of this study was not to judge whether the priori-
ties of the policymakers were right or wrong, the high number of priorities and the
wide range of min–max rankings for almost every item highlights the fraught nature of
educational leaders trying to agree on generative AI policy priorities, which may
in turn make it difficult to formulate policies that are directed in nature and satisfy
a wide range of stakeholders. Part of this difficulty may be anchored in the way that
the use of AI technologies can have positive and negative implications, depending
on how and why they are used (Crawford et al., 2023). This has caused a dichotomy
in recommended responses to AI, from leaping forward (e.g. Luckin et al., 2022)
to exercising caution (Selwyn, 2024), and everywhere in-between (Crawford et al.,
2023). Correspondingly, education systems and schools have taken quite different
approaches, from banning generative AI to embracing its use.
Linderoth et al. (2024) note that policy framings of AI in education tend to be
either dystopian or utopian in nature, with some emphasising the risks of replacing
human thought with algorithms while others barely address AI issues at all.
In Australia, this incongruity was recently reflected in the dichotomous framing of
the Parliamentary inquiry into the use of Generative Artificial Intelligence in the
Australian Education system (House of Representatives Standing Committee on
Employment, Education and Training, 2024), which asserted the positive potential
of AI to enhance education, compared to the subsequent Senate Select Committee
on Adopting Artificial Intelligence Report (Australian Senate, 2024), which was
generally much more cautionary about the use of AI. In this light, it may come as
no surprise that there is a lack of consensus about priorities amongst policymakers,
when there is such a wide variety of valences amongst scholars, education systems
and other policy documents. In turn, this lack of socio-technical consensus com-
bined with the fast changing nature of AI technologies makes setting AI in educa-
tion policy extremely challenging, let alone undertaking any form of anticipatory or
networked policy-setting.
Many of the environmental and systemic challenges that featured in our analysis
have also featured in other reports (e.g. Akgun and Greenhow, 2021; Celik et al.,
2022); however, asking senior policymakers to rank challenges with relation to
generative AI added an additional layer of insight. For instance, the pace of change
emerged as the most widely agreed upon challenge with respect to generative AI in
education, with all policymakers ranking it between 1 and 4 out of a list of 22 chal-
lenges. This was not an issue that was typically emphasised in other reports, which
tended to focus more on the principled rather than practical challenges. Capability
development of teachers and of all staff were ranked as the second and fourth most
pressing challenges respectively, with equitable access ranked third. While capability develop-
ment did feature in the review papers (Akgun and Greenhow, 2021; Celik et al.,
2022; Karan and Angadi, 2023), only Karan and Angadi provided any frequency-
based ordering, where social knowledge and skill-building was ranked second least
prevalent. Once again, this illustrates that understanding the situated priorities and
challenges of educational policymakers cannot necessarily be gleaned from a desk
review of prevailing literature.
During the conversations, meta-themes of urgency, uncertainty, contextuality,
interconnectedness and complexity emerged, characterising the nature
of educational policy-setting in response to generative AI. While there has been
research that has investigated the use of specific frameworks to support policy-mak-
ing, such as systems thinking in the health sector (Haynes et al., 2020) and design
thinking in the public sector more broadly (Lewis et al., 2020), we were not able to
find any research that examined the on-the-ground experiences of senior educational
policymakers as they attempted to respond to a pressing and emergent policy chal-
lenge. To this extent, our paper makes an important contribution to understanding
the nature of policy-making processes in reaction to emerging technologies (build-
ing on Guston, 2008, 2014).
Implications forpolicy andpolicy‑making
The most direct implications of this research are that if education systems are to
responsibly respond to the challenges imposed by generative AI, then there are a
large number of complex priorities and issues that need to be addressed. According
to the senior policymakers in this study, education systems need to manage a wide range of
risks (inaccurate data, biased information, data security, academic integrity, privacy
breaches, copyright infringements, professionalism of teachers) at the same time as
providing comprehensive training for teachers, system leaders and students on how
to use AI effectively and responsibly. Ensuring that there is equitable access to gen-
erative AI technologies was also seen as being of paramount importance (aligning
with other key referents such as Pedro et al., 2019; Southgate et al., 2019). The
concerns generally were similar in nature to those previously identified by teachers and
researchers (e.g. Akgun and Greenhow, 2021; Celik et al., 2022).
The fact that both teacher education and equitable access are seen as top priori-
ties and major challenges illustrates the enormity of the task facing policymakers.
Teaching is becoming an increasingly demanding profession, with the work of teachers
involving burgeoning complexity and workload (Thompson et al., 2024). How
teachers can undertake significant professional learning to equip them
and their students with the AI skills that they need, without overburdening them,
is unclear. Similarly, there are massive inequities in access to digital technologies
throughout Australia (Thomas et al., 2023), which mirrors global trends (UNESCO,
2023a). How education systems can provide all students with access to the
benefits that AI might provide, at the same time as safeguarding them against the
risks is uncertain. The issue of upholding teacher professionalism with relation to
responsible use of AI is of particular concern (Loble and Stephens, 2024) so that
teachers feel empowered rather than threatened by these new and powerful technolo-
gies. Addressing each of these issues is a significant undertaking, hence, we cannot
assume that responsible integration of generative AI into schools can simply be sub-
sumed into business as usual; substantial resourcing and leadership will be required
to effectively and responsibly deploy generative AI in schools.
There are partial solutions emerging. Teachers themselves suggest that they need
wide-ranging professional learning relating to how AI works, use of AI to sup-
port learning, assessment redesign, management of plagiarism, use of AI to sup-
port administration, ethical and legal issues, and would like professional learning
in a variety of forms including workshops, resources, and communities of prac-
tice (Bower et al., under review). To help achieve these ends, Luckin et al. (2022)
suggest a contextual framework for developing teacher AI readiness that aims to
recognise the diversity of educational systems and workplaces, based on engaging
teachers in an iterative process of identifying opportunities, applying ethical AI
techniques, disseminating learning and iteratively improving. While such models
offer general guidance, the helpfulness of such frameworks within specific schools
or education systems is yet to be empirically tested. Similarly, to address the digital
equity gap, Loble and Stephens (2024) recommend providing free or low-cost access
to digital technologies for disadvantaged students and setting equity and inclusion as
core expectations of educational technology use in Australia. While these are worthy
endeavours, the political process to realise these aspirations is not immediately clear.
This study also revealed policy-making in response to the technological disrup-
tion of generative AI as an urgent, uncertain, interconnected process, characterised
by contextuality and complexity. The formation of a national working group and
framework in response to the challenges of generative AI was novel for the Aus-
tralian context, constituting a significant shift towards more networked governance.
A commendable variety of policymakers and educational leaders were directly
involved in the consultation and formulation process, as part of an emergent policy
network. The urgent and responsive nature of the policy-making process may have
meant it was difficult to have all of the right people involved in policy formulation,
a phenomenon that has been previously observed (Guston, 2014). The absence of
different groups such as teachers or those from historically marginalised groups also
points to the challenges of anticipatory governance of technologies like AI. The
aspiration of anticipatory governance is to be ahead of the technology, to manage its
development and use, while this is still possible (Guston, 2008). Due to the urgent
need to provide a response, however, a timely reaction to key issues may take prec-
edence over representation of stakeholder groups. What can be learnt from our study
is that in the next iteration of policy-making on AI use in education it is desirable
to include an even greater variety of stakeholders, for instance, representatives of
people from minority groups, teachers, students and parents in policy formulation, so
that they can inform the foundational design of policy, not just provide feedback on it.
The fact that professional learning for teachers was seen as both an utmost prior-
ity but also a major challenge should not be underestimated. A study by Nazaretsky
et al. (2022) highlighted teachers’ (mis)trust of using AI in their practice and
how a professional development intervention improved their trust in, and willingness
to adopt, AI-enabled technology. Further complicating this situation is that the
nature of professionalisation, judgement and expertise in education will change with
the use of AI (Pasquale, 2019). Teachers provide the critical translation of educa-
tional policy into classroom practice, and all issues relating to managing risks, edu-
cating students, ethical integration, and so on, depend on their AI literacies, includ-
ing how AI works and how it should be responsibly and effectively used. At the
same time, teachers often have limited capacity, interest or support to apply AI in
their teaching (Celik et al., 2022; Chiu et al., 2023; Wang and Cheng, 2021). To
this extent, major professional learning and pre-service teacher education initiatives
should be undertaken to help teachers learn about the what, how and why of genera-
tive AI. This aligns with other advice and observations from the sector (e.g. Chounta
et al., 2022; Luckin et al., 2022), and accords with research evidence that the teacher
can play a critical role in the effectiveness of AI use in the classroom (Chiu et al.,
2023; Wang et al., 2023).
Limitations andfuture research
While this paper provided rare insight into the thinking processes of policymakers as
they attempted to responsibly and ethically respond to the technological disruption
posed by generative Artificial Intelligence, there were limitations to the research
as presented. First, every disruption and policymaking context is different. The sam-
ple of participants, while an esteemed and senior group of policymakers, did not
represent all policymakers responsible for setting national generative AI educational
policy. As well, given the rapid pace of technological change and that societal and
educational values often shift, the issues experienced by this specific group of
policymakers in response to the emergence of generative AI tools may be quite different to those
experienced by other policymakers in response to other disruptions. Further
research would be required to understand the generalisability of the results observed
in this study.
Observations were based on responses of senior policymakers and educational
leaders during focus groups. More sustained data collection based on observations
of all stakeholders as they actually formulated policy may have resulted in further
findings and insights. As well, while every effort has been made to provide accu-
rate and comprehensive representation of the senior policymakers and educational
leaders (through triangulation, extensive use of primary data in reporting, repeated
revisitation to the transcripts, and collaborative scrutiny by the research team), like
any qualitative research, it is possible that the reporting contains unintended biases.
For instance, it is possible that researcher involvement in the national working group
on AI in Education may have biased the views of the research team, and that other
research teams may have arrived at different conclusions.
In terms of future research, the rapid emergence of generative AI and the disruption
it poses to learning and teaching raises a number of pressing research ques-
tions that warrant investigation, for instance:
1. Can teachers develop students’ capacity to use generative AI in safe, responsible
and effective ways?
2. Can generative AI be used by teachers to improve the quality and personalisation
of their teaching?
3. Can generative AI be utilised by teachers in assessment processes to provide
students with more accurate evaluation and immediate feedback?
4. Can generative AI be used in ways that reduce equity gaps (Indigenous, low-SES,
remote and regional) rather than increasing them?
5. How can professional learning for teachers and pre-service teacher education
programs be best designed and implemented to change teacher practice in ways
that empower them to ask critical and ethical questions relating to the technology
and enhance student learning?
6. How can we iteratively refine AI in education policy and systems to be respon-
sive to research and technological innovation in ways that improve educational
outcomes for all students?
Each of these questions constitutes a substantial program of research, which will
presumably become an increasing focus of the educational research field in future.
Conclusion
While concerns relating to generative AI in education have been widely reported
in the media, this study has provided a systematic analysis and prioritisation of the
concerns of senior policymakers, as they attempt to grapple with the challenges at
hand. This study reveals that policy response to the disruptions caused by genera-
tive AI was characterised by urgency, interconnectedness, uncertainty, contextuality
and hence complexity. The senior policymakers identified a large number of policy
priorities in response to generative AI, foremost of which were managing risks,
educating teachers, educating system leaders and equitable access. The main systemic
and environmental challenges they identified related to the pace of change, teacher
capabilities and professional learning, and equitable access to the technology. However, the rankings of
the priorities and challenges by policymakers were in all cases wide-ranging, with
the only real consensus being that the pace of change was the number one chal-
lenge. This reveals policymaking in response to generative AI, and potentially to
educational disruption more generally, as inherently problematic and an exigent
undertaking. The fact that teacher education and equitable access were seen as both
top priorities and major challenges points to the enormity of fully and responsibly
responding to the disruption caused by generative AI.
At a higher level, this is one of the first studies to provide first-hand insights into
the thinking processes of policymakers and educational leaders as they respond
to technological disruptions. Noble intentions towards more anticipatory and net-
worked governance were balanced by the pragmatic need to provide a timely
response to the immediate risks and challenges posed by generative AI. Much
of the urgency and uncertainty of this response was contextual, based on a lack of
previous policy and regulation relating to AI in education, and an education system
composed of jurisdictions that did not typically work together. The foundations laid
through the networked formulation of generative AI policy, and the findings from
this research study, provide touchpoints for future policy responses to generative AI
in education and technological disruption more generally.
Acknowledgements Thanks are extended to all of the senior policymakers and educational leaders who
participated in this research.
Funding Open Access funding enabled and organized by CAUL and its Member Institutions. No funding
was received for conducting this study.
Declarations
Competing interests The authors do not have any financial or non-financial interests that are directly or
indirectly related to the work submitted for publication.
Ethical approval statement The study received ethics approval as Monash University Project ID 38535, as stated in the first paragraph of the Methodology.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License,
which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long
as you give appropriate credit to the original author(s) and the source, provide a link to the Creative
Commons licence, and indicate if changes were made. The images or other third party material in this
article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line
to the material. If material is not included in the article’s Creative Commons licence and your intended
use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permis-
sion directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/
licenses/by/4.0/.
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.
Matt Bower is a Professor in the School of Education at Macquarie University, who specialises in the
innovative use of technology for learning purposes. His research focuses on computing education, teacher
education, online learning, and more recently, Artificial Intelligence. He is fascinated by the way that dif-
ferent uses of technologies can influence student learning outcomes and experiences.
Michael Henderson is a Professor of Digital Futures in the Faculty of Education at Monash University.
His research on effective use of technology in internet-enabled teaching and learning spans early child-
hood, schools, universities and professional learning contexts. Areas of speciality include assessment and
feedback, wellbeing and creativity, and effective learning and teaching using online technologies.
Christine Slade is an Associate Professor at the University of Queensland. Her primary areas of exper-
tise are assessment, academic integrity, and more recently, artificial intelligence in education. Christine
is also involved in several initiatives and projects aimed at promoting academic integrity and supporting
schools in Queensland and nationally with their artificial intelligence priorities.
Erica Southgate, Associate Professor of Emerging Technologies for Education, is a technology ethicist,
VR for learning expert and a maker of computer games for literacy learning. Her VR School Study is the
longest running research on virtual reality for school education, internationally. Erica believes that all stu-
dents, regardless of their socio-economic, cultural or geographic background, should have access to the
best, ethical technology for learning.
Kalervo Gulson is a Professor at The University of Sydney. His research investigates whether new
knowledge, methods and technologies from life and computing sciences, with a specific focus on Arti-
ficial Intelligence, will substantively alter education policy and governance. Kalervo is interested in the
ways education will grapple with and form responses to these changes, both in the academy and in public
debates.
Jason Lodge is a Professor of Educational Psychology in the School of Education and a Deputy Associ-
ate Dean (Academic) in the Faculty of Humanities and Social Sciences at The University of Queensland.
His research in the Learning, Instruction, and Technology Lab focuses on the cognitive, metacognitive,
and emotional mechanisms of learning in education. His work with the lab primarily emphasises self-
regulated learning with technology. Jason is a lead editor of Australasian Journal of Educational Technol-
ogy and an editor of Student Success.
Authors and Affiliations
Matt Bower1 · Michael Henderson2 · Christine Slade3 · Erica Southgate4 · Kalervo Gulson5 · Jason Lodge3
* Matt Bower
matt.bower@mq.edu.au
Michael Henderson
michael.henderson@monash.edu
Christine Slade
c.slade@uq.edu.au
Erica Southgate
erica.southgate@newcastle.edu.au
Kalervo Gulson
kalervo.gulson@sydney.edu.au
Jason Lodge
jason.lodge@uq.edu.au
1 Macquarie University, Macquarie Park, Sydney, NSW 2109, Australia
2 Monash University, Clayton, VIC 3800, Australia
3 University of Queensland, St Lucia, Brisbane, QLD 4067, Australia
4 University of Newcastle, Callaghan, NSW 2308, Australia
5 University of Sydney, Camperdown, Sydney, NSW 2050, Australia