Generative AI in Higher Education: Guiding Principles for Teaching and Learning: Volume 1
Integration of ChatGPT into higher education requires assessing university educators' perspectives on this novel technology. This study aimed to validate a survey instrument specifically tailored to assess ChatGPT usability and acceptability among university educators, based on the Technology Acceptance Model (TAM). The survey instrument comprised 40 TAM-based items in addition to items assessing participants' demographics. We used exploratory factor analysis (EFA) with principal component analysis (PCA) to assess construct validity and Cronbach's α for reliability. The final sample comprised 236 university educators, 72% of whom had heard of ChatGPT before the study (n = 169); of these, 76 (45.0%) had already used ChatGPT. The EFA showed a significant Bartlett's test of sphericity (p < 0.001) and an adequate Kaiser-Meyer-Olkin measure (KMO = 0.698). The six constructs inferred through EFA explained a total of 64% of the variance in the educators' attitudes toward ChatGPT. These constructs comprised 31 items classified into: (1) "Effectiveness" (α = 0.845), (2) "Anxiety" (α = 0.862), (3) "Technology readiness" (α = 0.885), (4) "Perceived usefulness" (α = 0.848), (5) "Social influence" (α = 0.803), and (6) "Perceived risk" (α = 0.796). The study identified six key constructs that can be exploited for a comprehensive understanding of university educators' attitudes toward ChatGPT. The novel survey instrument, herein termed "Ed-TAME-ChatGPT", involved positive influencing factors such as perceived usefulness and effectiveness, positive attitude to technology, and social influence, in addition to negative factors comprising anxiety and perceived risk.
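The reliability statistic reported above, Cronbach's α, is computed from an item-score matrix; a minimal sketch of the standard formula is shown below (the item data are illustrative only, not from the study):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative 5-point Likert responses for a hypothetical 4-item construct
items = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
], dtype=float)
print(round(cronbach_alpha(items), 3))
```

Values of α above roughly 0.7 are conventionally read as acceptable internal consistency, which is why the six constructs above (α = 0.796 to 0.885) were retained.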
As part of the 4th Industrial Revolution, the emergence of Artificial Intelligence will change almost all economic activities, and it will create enormous social and economic opportunities. It will also pose major challenges, accompanied by ethical dilemmas.
The present study focuses on the perceptions of current employees, predominantly from the IT area, on the development of AI. The aim is to capture the attitudes they have towards the emergence and development of AI, and the impact that it might have on certain sectors of social life and on people in general. We required the 280 online-surveyed subjects to have been employed for at least six months, assuming that being already anchored in their professional lives might reduce their bias. The working methodology allowed us to process and interpret data both quantitatively and qualitatively. The results of the study could be used to predict possible changes that could occur in the future as an effect of the development of Artificial Intelligence, and also to reduce the negative impact that it could have.
Advances in generative artificial intelligence (AI) have enabled new forms of human-AI interaction. In this work, we explored the utility of using generative AI, specifically OpenAI’s ChatGPT (Chat Generative Pre-trained Transformer) 3.5, to support the design thinking process to identify user needs, ideate, and refine solutions. We examined how 17 students and professionals from a design program engaged in reflective design practices and self-regulated learning (SRL), as they used generative AI to brainstorm ideas. We further explored how participants considered, elaborated upon, and integrated the AI-generated ideas into their design artifacts. Analyses involved qualitative coding of the brainstorming sessions and Ordered Network Analysis, which visualized the co-occurrences between reflective design practices and SRL as indicators of multifaceted learning engagement. Findings illuminate the importance of iterative evaluation and planning of AI-generated ideas, in conjunction with reflection on design moves, to improve design quality. We discuss the importance of reflective practices and SRL in AI-integrated learning.
Purpose
A gripping keyword emerged in the dynamic world of 2022: GPT, marking the advent of Generative Artificial Intelligence (GAI), embodied at its forefront by ChatGPT. This technological marvel had been quietly developing in the background for just over five years before it suddenly emerged onto the scene, capturing the public's attention and quickly becoming one of the most widely adopted inventions in history. This narrative review therefore explores the impact of generative AI and ChatGPT on lifelong learning and the upskilling of students in higher education, and addresses the opportunities and challenges posed by Artificial Intelligence from a global perspective.
Design/methodology/approach
This review has been conducted using a narrative literature review approach. For in-depth identification of research gaps, 105 relevant articles were included from scholarly databases such as Scopus, Web of Science, ERIC and Google Scholar. Seven major themes emerged from the literature to answer the targeted research questions that describe the use of AI, the impact of generative AI and ChatGPT on students, the challenges and opportunities of using AI in education and mitigating strategies to cope with the challenges associated with the integration of ChatGPT and generative AI in education.
Findings
The review of the literature shows that generative AI and ChatGPT have gained considerable recognition among students and have revolutionized educational settings. The findings suggest that there are some contexts in which adult education research and teaching can benefit from the use of chatbots and generative AI technologies like ChatGPT. The literature does, however, also highlight the necessity of carefully weighing the benefits and drawbacks of these technologies in order to prevent restricting or distorting the educational process or endangering academic integrity. In addition, the literature raises ethical questions about data security, privacy, and cheating by students or researchers. To these, we add our own ethical concerns about intellectual property, such as the fact that, once we enter ideas or research results into a generative chatbot, we no longer have control over how they are used.
Practical implications
This review is helpful for educators and policymakers to design the curriculum and policies that encourage students to use generative AI ethically while taking academic integrity into account. Also, this review article identifies the major gaps that are associated with the impact of AI and ChatGPT on the lifelong learning skills of students.
Originality/value
This review of the literature is unique because it explains the challenges and opportunities of using generative AI and ChatGPT, also defining its impact on lifelong learning and upskilling of students.
This paper presents a novel framework, artificial intelligence-enabled intelligent assistant (AIIA), for personalized and adaptive learning in higher education. The AIIA system leverages advanced AI and natural language processing (NLP) techniques to create an interactive and engaging learning platform. This platform is engineered to reduce cognitive load on learners by providing easy access to information, facilitating knowledge assessment, and delivering personalized learning support tailored to individual needs and learning styles. The AIIA’s capabilities include understanding and responding to student inquiries, generating quizzes and flashcards, and offering personalized learning pathways. The research findings have the potential to significantly impact the design, implementation, and evaluation of AI-enabled virtual teaching assistants (VTAs) in higher education, informing the development of innovative educational tools that can enhance student learning outcomes, engagement, and satisfaction. The paper presents the methodology, system architecture, intelligent services, and integration with learning management systems (LMSs) while discussing the challenges, limitations, and future directions for the development of AI-enabled intelligent assistants in education.
Generative artificial intelligence (GenAI) has been advancing with many notable achievements like ChatGPT and Bard. The deep generative model (DGM) is a branch of GenAI, which is preeminent in generating raster data such as image and sound due to the strong role of deep neural networks (DNNs) in inference and recognition. The built-in inference mechanism of DNN, which simulates and aims at synaptic plasticity of the human neuron network, fosters the generation ability of DGM, which produces surprising results with the support of statistical flexibility. Two popular approaches in DGM are the variational autoencoder (VAE) and generative adversarial network (GAN). Both VAE and GAN have their own strong points although they share and imply the underlying theory of statistics as well as significant complex via hidden layers of DNN when DNN becomes effective encoding/decoding functions without concrete specifications. This research unifies VAE and GAN into a consistent and consolidated model called the adversarial variational autoencoder (AVA) in which the VAE and GAN complement each other; for instance, the VAE is a good data generator by encoding data via the excellent ideology of Kullback–Leibler divergence and the GAN is a significantly important method to assess the reliability of data as to whether it is real or fake. In other words, the AVA aims to improve the accuracy of generative models; besides, the AVA extends the function of simple generative models. In methodology, this research focuses on the combination of applied mathematical concepts and skillful techniques of computer programming in order to implement and solve complicated problems as simply as possible.
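The VAE component described above regularises its encoder with the Kullback–Leibler divergence between the approximate posterior N(μ, σ²) and a standard normal prior. For a diagonal Gaussian this term has a well-known closed form, sketched below (array shapes and names are illustrative, not taken from the AVA paper):

```python
import numpy as np

def kl_to_standard_normal(mu: np.ndarray, log_var: np.ndarray) -> float:
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ),
    summed over the latent dimensions:
        0.5 * sum( exp(log_var) + mu^2 - 1 - log_var )
    """
    return 0.5 * float(np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var))

# When mu = 0 and log_var = 0, the posterior equals the prior, so the KL is zero.
print(kl_to_standard_normal(np.zeros(8), np.zeros(8)))  # prints 0.0
```

In a full VAE this KL term is added to the reconstruction loss; in the AVA model described above, the GAN discriminator then supplies an additional signal assessing whether generated samples look real.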
With Generative AI’s (GenAI) rapid development and the ability to generate sophisticated human-like text, it has evolved as a powerful technology in various domains. However, its application in the education domain was initially met with resistance due to concerns about disrupting traditional learning and assessment methods, raising questions about academic integrity, and provoking ethical dilemmas related to data privacy and bias. Many schools, higher educational institutions, and governments initially chose to ban the use of GenAI tools due to the disruptions they caused to learning and teaching practices, only to rescind their bans later. This study conducts a literature review to investigate GenAI tools from the perspectives of key stakeholders in the educational domain—students, educators, and administrators—highlighting their benefits while identifying challenges and limitations. The review found several benefits of using GenAI, such as personalised learning, immediate support, language support, and reduced administrative workload. This paper also provides usage guidelines for stakeholders and outlines future research areas to support GenAI adoption in higher education. Our findings indicate that most studies involving students had a positive view of using GenAI. There is a noticeable gap in research focusing on administrators, highlighting the need for further investigation.
This research focuses on the possibility of utilizing generative AI in developing learning content to suit each learner's requirements. The study evaluates the outcomes of using AI-created content to enhance students' interest, motivation, and performance in contrast to conventional learning resources. A qualitative study involved interviewing educators and AI developers to understand their experience and perceptions of generative AI in education, while the quantitative study analysed student performance data to determine the effectiveness of AI-generated content. The paper also explores the ethical and privacy issues that come with the integration of AI in learning and offers solutions to these issues. To compare the efficiency of AI-generated learning material with traditional material, the study used a quantitative comparative design on the performance of students in an Object-Oriented Programming (OOP) course. The course was split into two equal independent assessments; for each class, the professors uploaded AI-generated content such as the title of the lesson, the content taught, and the expected learning outcomes. The LMS was integrated with the OpenAI API to generate content in line with the previously defined learning objectives. Student performance data from the two evaluations was used to determine the effect of AI-generated content on students' learning. The results indicate that although students' test scores and grades increased after applying AI-created study materials, some students did not benefit from them. These effects show that it is essential to consider aspects like students' interest and their prior know-how.
The growing use of generative AI tools built on large language models (LLMs) calls the sustainability of traditional assessment practices into question. Tools like OpenAI's ChatGPT can generate eloquent essays on any topic and in any language, write code in various programming languages, and ace most standardized tests, all within seconds. We conducted an international survey of educators and students in higher education to understand and compare their perspectives on the impact of generative AI across various assessment scenarios, building on an established framework for examining the quality of online assessments along six dimensions. Across three universities, 680 students and 87 educators, who moderately use generative AI, consider essay and coding assessments to be most impacted. Educators strongly prefer assessments that are adapted to assume the use of AI and encourage critical thinking, while students' reactions are mixed, in part due to concerns about a loss of creativity. The findings show the importance of engaging educators and students in assessment reform efforts to focus on the process of learning over its outputs, alongside higher-order thinking and authentic applications.
Background
As the use of artificial intelligence (AI) becomes more prevalent in academic settings, there is a growing concern about maintaining a culture of integrity.
Method
This article explores the role of academic institutions and programs in fostering a culture of integrity in relation to AI.
Results
By implementing specific policies, integrating tools, and utilizing software for AI detection, academic institutions can establish a culture of integrity in relation to AI. These collective efforts foster an environment where ethical AI practices are upheld and reinforce the importance of academic honesty, particularly in the nursing profession.
Conclusion
Academic institutions have the capacity to establish integrity-focused policies and integrate anti-AI agent tools in courses to mitigate unethical AI usage, while software advancements assist faculty in identifying AI presence during assessments. Emphasizing the interplay between academic and professional integrity strengthens nurses' dedication to academic honesty. [J Nurs Educ. 2024;63(X):XXX–XXX.]
This article explores the implications of Generative AI in higher education institutions, focusing on its impact on academic integrity and educational policy. The study utilises qualitative methods and desk-based research to investigate the adoption of Generative Pre-Trained Transformer and similar programs within academic settings. While some institutions have implemented bans on Generative AI due to concerns about plagiarism and ethical implications, others have embraced its potential to enhance educational practices under ethical guidelines. However, such prohibitions may overlook the advantages of Generative AI and ignore students' inevitable engagement with the technology. The article addresses these challenges by proposing guiding principles for the ethical and efficient application of Generative AI in UK universities, particularly in the realms of employability, teaching, and learning. The article is structured into three main sections: a review of existing literature on Generative AI and an exploration of its benefits and challenges; the formulation of guiding principles for its implementation; and recommendations for future research and practical implementation. Through this analysis, the article aims to contribute to the ongoing discourse surrounding Generative AI in higher education, providing insights into its implications for educational policy and practice.
At the crossroads of advanced technology and pedagogy, Generative Artificial Intelligence (GenAI) is, at the very least, prompting a reassessment of traditional educational paradigms. Following a frenetic year in the advancement of GenAI, particularly after the emergence of ChatGPT, there is an intent to explore the impact of GenAI on the educational sector, analysed from the perspectives of four key groups: teachers, students, decision-makers, and software engineers. Throughout 2023 and into 2024, literature reviews, interviews, surveys, training sessions, and direct observations have been conducted to gauge how GenAI is perceived by individuals representing these groups within the educational context. It is highlighted how GenAI offers unprecedented opportunities for, among other things, personalising learning, enhancing the quality of educational resources, and optimising administrative and assessment processes. However, the application of GenAI in education also has a less favourable aspect related to reservations and mistrust, often due to a lack of literacy in issues related to AI in general, but also well-founded in some cases due to gaps in legislative, ethical, security, or environmental impact aspects. This analysis reveals that, although GenAI has the potential to transform education significantly, its successful implementation requires a collaborative and cross-sectional approach involving all actors in the educational ecosystem. As we explore this new horizon, it is imperative to consider the ethical implications and ensure that technology is used to benefit society at large without overlooking the risks and challenges that already exist or will inevitably arise with the accelerated development of these extremely powerful technologies.
AI has become the poster child for a certain kind of thinking which holds that some technologies can become objective, independent, and emergent entities which can evolve beyond the control of their creators. This thinking is not new, however. It is a product of certain philosophical ideas, such as materialism, a common-sense world of objective and independent objects, and a correspondence theory of truth, which are centered around the pre-eminence of science, epistemology, and logical reasoning, among others, as the supremely valid modes of engaging experience. This paper aims to critically examine the synthesis of the development program for AI to be found in the writings of computer scientist John McCarthy (1927–2011). McCarthy has been called one of the founding fathers of AI. First, some of the main themes which recur in his writings, from as early as the mid-nineteen-fifties in the Dartmouth proposal up to his late writings, are considered. A discussion of the goals that such a program implies follows, in terms of society's relation with AI-based technology and the nature and purpose of AI itself. These implications are arguably related to social control and are also moral. Ultimately, McCarthy's idea that AI is and can be built upon a paradigm of reasoning and logic, so as to simulate a perfectly rational abstraction of human thinking, is shown to be motivated by his view that AI systems should be controlled, as servants. This understanding of human thinking is questionable, however, given current research on human irrationality, as well as on the argumentative role of reasoning.
And yet, the process of development toward the unachievable goal of controllable, rational, logical AI systems, has handily come to serve as a blind for increasing techno-corporate control of society, with techno-corporate interests displacing blame for devaluations and harms caused by their push to develop the technology to AI systems which they can conveniently ‘lose control of.’
This chapter discusses the question of whether we will ever have an Artificial General Superintelligence (AGSI) and how it will affect our species if it does so. First, it explores various proposed definitions of AGSI and the potential implications of its emergence, including the possibility of collaboration or conflict with humans, its impact on our daily lives, and its potential for increased creativity and wisdom. The concept of the Singularity, which refers to the hypothetical future emergence of superintelligent machines that will take control of the world, is also introduced and discussed, along with criticisms of this concept. Second, it is considered the possibility of mind uploading (MU) and whether such MU would be a suitable means to achieve (true) immortality in this world—the ultimate goal of the proponents of this approach. It is argued that the technological possibility of achieving something like this is very remote, and that, even if it were ever achieved, serious problems would remain, such as the preservation of personal identity. Third, the chapter concludes arguing that the future we create will depend largely on how well we manage the development of AI. It is essential to develop governance of AI to ensure that critical decisions are not left in the hands of automated decision systems or those who create them. The importance of such governance lies not only in avoiding the dystopian scenarios of a future AGSI but also in ensuring that AI is developed in a way that benefits humanity.
Higher education is crucial for producing ethical citizens and professionals globally. The introduction of generative AI (GenAI), such as ChatGPT, has posed opportunities and challenges to the traditional model of education. However, the current conversations primarily focus on policy development and assessment, with limited research on the future of higher education. GenAI's impact on learning outcomes, pedagogy, and assessment is crucial for reforming and advancing the workforce. This qualitative study aims to investigate student perspectives on GenAI's impact on higher education. The study uses an initial conceptual framework, driven by a systematic literature review, to investigate the opportunities and challenges of AI in education; this framework serves as the initial data collection and analysis framework. A sample of 51 students from three research-intensive universities was selected for this study. Thematic analysis identified three themes and 10 subthemes. The findings suggest that future higher education should be transformed to train students to be future-ready for employment in a society powered by GenAI. They suggest new learning outcomes (skills in learning and teaching with GenAI; AI literacy) and emphasize the significance of interdisciplinarity and maker learning, with assessment focusing on in-class and hands-on activities. They recommend six future research directions: competence for the future workforce and its self-assessment measures, AI literacy or competency measures, new literacies and their relationships, interdisciplinary teaching, innovative pedagogies and their evaluation, and new assessment and its acceptance.
This study examines the relationship between student perceptions and their intention to use generative artificial intelligence (GenAI) in higher education. With a sample of 405 students participating in the study, their knowledge, perceived value, and perceived cost of using the technology were measured by an Expectancy-Value Theory (EVT) instrument. The scales were first validated and the correlations between the different components were subsequently estimated. The results indicate a strong positive correlation between perceived value and intention to use generative AI, and a weak negative correlation between perceived cost and intention to use. As we continue to explore the implications of GenAI in education and other domains, it is crucial to carefully consider the potential long-term consequences and the ethical dilemmas that may arise from widespread adoption.
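The correlations reported above between perceived value, perceived cost, and intention to use are standard bivariate correlation coefficients between scale scores; a minimal sketch using Pearson's r (the scores below are made up for illustration, not the study's data):

```python
import numpy as np

# Hypothetical mean scale scores for six respondents (illustrative only)
perceived_value = np.array([3.2, 4.1, 2.8, 4.6, 3.9, 2.5])
intention_to_use = np.array([3.0, 4.4, 2.6, 4.8, 4.0, 2.2])

# Pearson correlation: off-diagonal entry of the 2x2 correlation matrix
r = np.corrcoef(perceived_value, intention_to_use)[0, 1]
print(round(r, 3))
```

A coefficient near +1 would correspond to the "strong positive correlation" the study reports between perceived value and intention to use, while the weak negative cost relationship would appear as a small negative r.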
Universities are increasingly concerned with the impact of Generative AI, such as ChatGPT, on cheating and other violations of students' academic integrity. However, research is scarce regarding universities' responses to this issue. In addition, the increase in studies on GenAI invites a systematic review of themes and trends to update researchers who wish to embark on this emerging research area. This paper reviews 37 articles on academic integrity in the age of GenAI and presents the approaches of the top 20 global universities to mitigate the impact of artificial intelligence tools on students' intellectual integrity and learning. The results showed three themes in both the systematic review of the literature and the content analysis of the policies of the top 20 global universities: enforcement of academic integrity, education of faculty and students on ways to avoid academic misconduct, and encouragement of using GenAI tools in the academe and the workplace for productivity. This paper proposes a 3E Model for higher education institutions to start a discussion on creating a roadmap to ensure academic integrity, explore ways to improve classroom assessment practices, and encourage exploration of evolving GenAI tools. In addition, the categories found in this study may be used by universities in updating their research agenda on Generative AI.
The neoliberal transformation of higher education in the UK, and an intertwined focus on the productive efficiency and prestige value of universities, has led to an epidemic of overwork and precarity among academics. Many are found to be struggling with lofty performance expectations and an insistence that all dimensions of their work consistently achieve positional gains despite ferocious competition and the omnipresent threat of failure. Working under the audit culture now pervasive across education, academics are thus found to overwork, or commit to accelerated labour, as pre-emptive compensation for the habitual inclemency of peer review and the vagaries of student evaluation, to accommodate the copiousness of 'invisible' tasks, and to elude the myriad crevasses of their precarious labour. The proliferation of generative artificial intelligence (GAI) tools, and more specifically large language models (LLMs) like ChatGPT, offers potential relief for academics and a means to offset intensive demands and discover more of a work-based equilibrium. Through a recent survey of n = 284 UK academics and their use of GAI, we discover, however, that the digitalisation of higher education through GAI tools does not so much alleviate the dysfunctions of neoliberal logic as extend them, deepening academia's malaise. Notwithstanding, we argue that the proliferating use of GAI tools by academics may be harnessed as a source of positive disruption to the industrialisation of their labour and a catalyst of (re)engagement with scholarly craftsmanship.
This study explores the capability of academic staff, assisted by the Turnitin Artificial Intelligence (AI) detection tool, to identify the use of AI-generated content in university assessments. Twenty-two different experimental submissions were produced using OpenAI's ChatGPT tool, with prompting techniques used to reduce the likelihood of AI detectors identifying AI-generated content. These submissions were marked by 15 academic staff members alongside genuine student submissions. Although the AI detection tool identified 91% of the experimental submissions as containing AI-generated content, only 54.8% of the content was identified as AI-generated, underscoring the challenges of detecting AI content when advanced prompting techniques are used. When academic staff members marked the experimental submissions, only 54.5% were reported to the academic misconduct process, emphasising the need for greater awareness of how the results of AI detectors may be interpreted. Similar grades were obtained for student submissions and AI-generated content (AI mean grade: 52.3; student mean grade: 54.4), showing the capability of AI tools to produce human-like responses in real-life assessment situations. Recommendations include adjusting overall strategies for assessing university students in light of the availability of new Generative AI tools. This may include reducing the overall reliance on assessments where AI tools may be used to mimic human writing, or using AI-inclusive assessments. Comprehensive training must be provided for both academic staff and students so that academic integrity may be preserved.
AI use in higher education raises ethical concerns that must be addressed. Biased algorithms pose a significant threat, especially if used in admission or grading processes, as they could have devastating effects on students. Another issue is the displacement of human educators by AI systems, and there are concerns about transparency and accountability as AI becomes more integrated into decision-making processes. This paper examined three AI issues related to higher education: biased algorithms, AI and decision-making, and human displacement. Discourse analysis of seven AI ethics policies was conducted, including those from UNESCO, China, the European Commission, Google, MIT, Stanford HAI, and Carnegie Mellon. The findings indicate that stakeholders must work together to address these challenges and ensure responsible AI deployment in higher education while maximizing its benefits. Fair use and protecting individuals, especially those with vulnerable characteristics, are crucial. Gender bias must be avoided in algorithm development, learning data sets, and AI decision-making. Data collection, labeling, and algorithm documentation must be of the highest quality to ensure traceability and openness. Universities must study the ethical, social, and policy implications of AI to ensure responsible development and deployment. The AI ethics policies stress responsible AI development and deployment, with a focus on transparency and accountability. Making AI systems more transparent and answerable may reduce the adverse effects of displacement. In conclusion, AI must be considered ethically in higher education, and stakeholders must ensure that AI is used responsibly, fairly, and in a way that maximizes its benefits while minimizing its risks.
Classic explanations of the “group polarization phenomenon” emphasize interpersonal processes such as informational influence and social comparison (Myers & Lamm, 1976). Based on earlier research, we hypothesized that at least part of the polarization observed during group discussion might be due to repeated attitude expression. Two studies provide support for this hypothesis. In Study 1, we manipulated how often each group member talked about an issue and how often he or she heard other group members talk about the issue. We found that repeated expression produced a reliable shift in extremity. A detailed coding of the groups' discussions showed that the effect of repeated expression on attitude polarization was enhanced in groups where the group members repeated each other's arguments and used them in their own line of reasoning. Study 2 tested for this effect experimentally. The results showed that the effect of repeated expression was augmented in groups where subjects were instructed to use each other's arguments compared to groups where instructions were given to avoid such repetitions. Taken together, these studies show that repeated expression accounts for at least part of the attitude polarization observed in the typical studies on group polarization and that this effect is augmented by social interaction, i.e., it occurs particularly in an environment where group members repeat and validate each other's ideas.
Generative Artificial Intelligence has rapidly expanded its footprint of use in educational institutions. It has been embraced by students, faculty, and staff alike. The technology is capable of carrying out a sustained sequence of interactive dialogs and creating reasonably meaningful text. Not surprisingly, it seems to be routinely used by faculty to generate questions and assignments, by students to submit assignments and aid in self-learning, and by administrators to create manuals, memoranda, and policy documents. With its potential to lead to significant social innovation, teetering on the verge of becoming a disruptive technology, it seems most unlikely that it will fade away without being fully enfolded into almost all aspects of academic and pedagogical activity. While it is too early to predict the exact place of this technology in education, we present thoughts to aid deliberations and give a brief review of the opportunities and challenges.
ChatGPT is an artificial intelligence chatbot that utilizes advanced natural language processing technologies, including large language models, to produce human‐like responses to user queries spanning a wide range of topics from programming to mathematics. As an emerging generative artificial intelligence (GAI) tool, it presents novel opportunities and challenges to the ongoing digital transformation of education. This article employs a systematic review approach to summarize the viewpoints of Chinese scholars and experts regarding the implementation of GAI in education. The research findings indicate that a majority of Chinese scholars support the cautious integration of GAI into education as it serves as a learning tool that offers personalized educational experiences for students. However, it also raises concerns related to academic integrity and the potential hindrance to students' critical thinking skills. Consequently, a framework called DATS, which outlines an optimization path for future GAI applications in schools, is proposed. The framework takes into account the perspectives of four key stakeholders: developers, administrators, teachers, and students.
Introduction
This study explores the effects of Artificial Intelligence (AI) chatbots, with a particular focus on OpenAI’s ChatGPT, on Higher Education Institutions (HEIs). With the rapid advancement of AI, understanding its implications in the educational sector becomes paramount.
Methods
Utilizing databases like PubMed, IEEE Xplore, and Google Scholar, we systematically searched for literature on AI chatbots’ impact on HEIs. Our criteria prioritized peer-reviewed articles, prominent media outlets, and English publications, excluding tangential AI chatbot mentions. After selection, data extraction focused on authors, study design, and primary findings. The analysis combined descriptive and thematic approaches, emphasizing patterns and applications of AI chatbots in HEIs.
Results
The literature review revealed diverse perspectives on ChatGPT’s potential in education. Notable benefits include research support, automated grading, and enhanced human-computer interaction. However, concerns such as online testing security, plagiarism, and broader societal and economic impacts like job displacement, the digital literacy gap, and AI-induced anxiety were identified. The study also underscored the transformative architecture of ChatGPT and its versatile applications in the educational sector. Furthermore, potential advantages like streamlined enrollment, improved student services, teaching enhancements, research aid, and increased student retention were highlighted. Conversely, risks such as privacy breaches, misuse, bias, misinformation, decreased human interaction, and accessibility issues were identified.
Discussion
While AI’s global expansion is undeniable, there is a pressing need for balanced regulation in its application within HEIs. Faculty members are encouraged to utilize AI tools like ChatGPT proactively and ethically to mitigate risks, especially academic fraud. Despite the study’s limitations, including an incomplete representation of AI’s overall effect on education and the absence of concrete integration guidelines, it is evident that AI technologies like ChatGPT present both significant benefits and risks. The study advocates for a thoughtful and responsible integration of such technologies within HEIs.
ChatGPT is revolutionizing the field of higher education by leveraging deep learning models to generate human-like content. However, its integration into academic settings raises concerns regarding academic integrity, plagiarism detection, and the potential impact on critical thinking skills. This article presents a study that adopts a thing ethnography approach to understand ChatGPT’s perspective on the challenges and opportunities it represents for higher education. The research explores the potential benefits and limitations of ChatGPT, as well as mitigation strategies for addressing the identified challenges. Findings emphasize the urgent need for clear policies, guidelines, and frameworks to responsibly integrate ChatGPT in higher education. It also highlights the need for empirical research to understand user experiences and perceptions. The findings provide insights that can guide future research efforts in understanding the implications of ChatGPT and similar Artificial Intelligence (AI) systems in higher education. The study concludes by highlighting the importance of thing ethnography as an innovative approach for engaging with intelligent AI systems and calls for further research to explore best practices and strategies in utilizing Generative AI for educational purposes.
This study examines the role of ChatGPT as a writing assistant in academia through a systematic literature review of the 30 most relevant articles. Since its release in November 2022, ChatGPT has become the most debated topic among scholars and is also being used by many users from different fields. Many articles, reviews, blogs, and opinion essays have been published in which the potential role of ChatGPT as a writing assistant is discussed. For this systematic review, 550 articles published in the six months after ChatGPT’s release (December 2022 to May 2023) were collected based on specific keywords, and the final 30 most relevant articles were selected through the PRISMA flowchart. The analyzed literature identifies different opinions and scenarios associated with using ChatGPT as a writing assistant and how to interact with it. Findings show that artificial intelligence (AI) in education is part of an ongoing development process, and the latest chatbot, ChatGPT, is part of it. Therefore, the education process, particularly academic writing, faces both opportunities and challenges in adopting ChatGPT as a writing assistant. What is needed is an understanding of its role as an aid and facilitator for both learners and instructors, as chatbots are useful tools that facilitate, ease, and support the academic process. However, academia should revisit and update training for students and teachers, policies, and assessment methods in writing courses to protect academic integrity and originality, addressing issues such as plagiarism, AI-generated assignments, online/home-based exams, and auto-correction challenges.
It is increasingly common to interact with products that seem “intelligent”, although the label “artificial intelligence” may have been replaced by other euphemisms. Since November 2022, with the emergence of the ChatGPT tool, there has been an exponential increase in the use of artificial intelligence in all areas. Although ChatGPT is just one of many generative artificial intelligence technologies, its impact on teaching and learning processes has been significant. This article reflects on the advantages, disadvantages, potentials, limits, and challenges of generative artificial intelligence technologies in education to avoid the biases inherent in extremist positions. To this end, we conducted a systematic review of both the tools and the scientific production that have emerged in the six months since the appearance of ChatGPT. Generative artificial intelligence is extremely powerful and improving at an accelerated pace, but it is based on large language models with a probabilistic basis, which means that they have no capacity for reasoning or comprehension and are therefore susceptible to containing errors that need to be verified. On the other hand, many of the problems associated with these technologies in educational contexts already existed before their appearance, but now, due to their power, we cannot ignore them, and we must decide how quickly we will respond in analysing and incorporating these tools into our teaching practice.
Educational technology innovations leveraging large language models (LLMs) have shown the potential to automate the laborious process of generating and analysing textual content. While various innovations have been developed to automate a range of educational tasks (eg, question generation, feedback provision, and essay grading), there are concerns regarding the practicality and ethicality of these innovations. Such concerns may hinder future research and the adoption of LLMs‐based innovations in authentic educational contexts. To address this, we conducted a systematic scoping review of 118 peer‐reviewed papers published since 2017 to pinpoint the current state of research on using LLMs to automate and support educational tasks. The findings revealed 53 use cases for LLMs in automating education tasks, categorised into nine main categories: profiling/labelling, detection, grading, teaching support, prediction, knowledge representation, feedback, content generation, and recommendation. Additionally, we also identified several practical and ethical challenges, including low technological readiness, lack of replicability and transparency and insufficient privacy and beneficence considerations. The findings were summarised into three recommendations for future studies, including updating existing innovations with state‐of‐the‐art models (eg, GPT‐3/4), embracing the initiative of open‐sourcing models/systems, and adopting a human‐centred approach throughout the developmental process. As the intersection of AI and education is continuously evolving, the findings of this study can serve as an essential reference point for researchers, allowing them to leverage the strengths, learn from the limitations, and uncover potential research opportunities enabled by ChatGPT and other generative AI models.
Practitioner notes
What is currently known about this topic
Generating and analysing text‐based content are time‐consuming and laborious tasks.
Large language models are capable of efficiently analysing an unprecedented amount of textual content and completing complex natural language processing and generation tasks.
Large language models have been increasingly used to develop educational technologies that aim to automate the generation and analysis of textual content, such as automated question generation and essay scoring.
What this paper adds
A comprehensive list of different educational tasks that could potentially benefit from LLMs‐based innovations through automation.
A structured assessment of the practicality and ethicality of existing LLMs‐based innovations from seven important aspects using established frameworks.
Three recommendations that could potentially support future studies to develop LLMs‐based innovations that are practical and ethical to implement in authentic educational contexts.
Implications for practice and/or policy
Updating existing innovations with state‐of‐the‐art models may further reduce the amount of manual effort required for adapting existing models to different educational tasks.
The reporting standards of empirical research that aims to develop educational technologies using large language models need to be improved.
Adopting a human‐centred approach throughout the developmental process could contribute to resolving the practical and ethical challenges of large language models in education.
This study explores university students’ perceptions of generative AI (GenAI) technologies, such as ChatGPT, in higher education, focusing on familiarity, their willingness to engage, potential benefits and challenges, and effective integration. A survey of 399 undergraduate and postgraduate students from various disciplines in Hong Kong revealed a generally positive attitude towards GenAI in teaching and learning. Students recognized the potential for personalized learning support, writing and brainstorming assistance, and research and analysis capabilities. However, concerns about accuracy, privacy, ethical issues, and the impact on personal development, career prospects, and societal values were also expressed. According to John Biggs’ 3P model, student perceptions significantly influence learning approaches and outcomes. By understanding students’ perceptions, educators and policymakers can tailor GenAI technologies to address needs and concerns while promoting effective learning outcomes. Insights from this study can inform policy development around the integration of GenAI technologies into higher education. By understanding students’ perceptions and addressing their concerns, policymakers can create well-informed guidelines and strategies for the responsible and effective implementation of GenAI tools, ultimately enhancing teaching and learning experiences in higher education.
The “Psychology of Artificial Intelligence” looks at the different kinds of human intelligence and asks if intelligence is really one thing or many. It then looks at progress in AI from its earliest beginnings through to the most recent “deep” neural networks and large language models. The book argues that AIs should be seen as genuinely intelligent but not yet capturing all aspects of human intelligence. The potential for AI to surpass human intelligence is seen as both a risk and an opportunity to advance human intelligence and to improve our understanding of ourselves.
This single-author book, first published in July 2024, is intended for the general reader interested in artificial intelligence and its similarities to human intelligence. More information at https://tonyjprescott.com/2024/11/23/the-psychology-of-artificial-intelligence/
Mathematical writing (MW) can support students’ mathematical learning and is common in mathematics assessment. However, MW is known to be particularly challenging for students with learning disabilities. While the use of model compositions of both high- and low-quality writing and the act of revision are evidence-based practices in writing instruction, models of MW are not readily available in the curriculum, and many teachers struggle to compose high-quality MW themselves. Artificial intelligence (AI) chatbots are increasingly accessible for teachers and provide one avenue by which MW models can be readily generated. This column guides educators on utilizing AI chatbots to produce MW models to support MW instruction for students with learning disabilities.
An accessible introduction to an exciting new area in computation, explaining such topics as qubits, entanglement, and quantum teleportation for the general reader.
Quantum computing is a beautiful fusion of quantum physics and computer science, incorporating some of the most stunning ideas from twentieth-century physics into an entirely new way of thinking about computation. In this book, Chris Bernhardt offers an introduction to quantum computing that is accessible to anyone who is comfortable with high school mathematics. He explains qubits, entanglement, quantum teleportation, quantum algorithms, and other quantum-related topics as clearly as possible for the general reader. Bernhardt, a mathematician himself, simplifies the mathematics as much as he can and provides elementary examples that illustrate both how the math works and what it means.
Bernhardt introduces the basic unit of quantum computing, the qubit, and explains how the qubit can be measured; discusses entanglement—which, he says, is easier to describe mathematically than verbally—and what it means when two qubits are entangled (citing Einstein's characterization of what happens when the measurement of one entangled qubit affects the second as “spooky action at a distance”); and introduces quantum cryptography. He recaps standard topics in classical computing—bits, gates, and logic—and describes Edward Fredkin's ingenious billiard ball computer. He defines quantum gates, considers the speed of quantum algorithms, and describes the building of quantum computers. By the end of the book, readers understand that quantum computing and classical computing are not two distinct disciplines, and that quantum computing is the fundamental form of computing. The basic unit of computation is the qubit, not the bit.
AI chatbots have recently fuelled debate regarding education practices in higher education institutions worldwide. Focusing on Generative AI and ChatGPT in particular, our study examines how AI chatbots impact university teachers' assessment practices, exploring teachers' perceptions about how ChatGPT performs in response to home examination prompts in undergraduate contexts. University teachers (n = 24) from four different departments in humanities and social sciences participated in Turing Test-inspired experiments, where they blindly assessed student and ChatGPT-written responses to home examination questions. Additionally, we conducted semi-structured interviews in focus groups with the same teachers examining their reflections about the quality of the texts they assessed. Regarding chatbot-generated texts, we found a passing rate range across the cohort (37.5–85.7%) and a chatbot-written suspicion range (14–23%). Regarding the student-written texts, we identified patterns of downgrading, suggesting that teachers were more critical when grading student-written texts. Drawing on post-phenomenology and mediation theory, we discuss AI chatbots as a potentially disruptive technology in higher education practices.