
Report on AI Chatbots' Impact on Writing Centers

Authors:
Nathan Lindberg, Amanda Domingues, Thari Zweers, Selin Goktas
Cornell’s English Language Support Office
ChatGPT’s rapid rise has surprised educators. Conversations about its impact are seemingly
ubiquitous at conferences, on social media, and on listservs. Some educators worry that students
are using ChatGPT to complete their assignments, and teachers have no reliable way of
determining authorship. However, some argue educators must promote the use of artificial
intelligence (AI) chatbots, such as ChatGPT, because students who do not use them are at a
disadvantage compared to those who do.
Undoubtedly, AI chatbots will have, or already are having, an effect on writing centers.
Potentially, AI chatbots could replace tutoring/consulting sessions, and writing centers could see
a drop in appointments made. Tutor/consultant training may need to include working effectively
with AI chatbots as well as raising awareness of ethical issues, such as using AI chatbots without
acknowledgment. Understanding the current sentiments of writing centers' interested parties can
help with assessing the situation and planning for it. However, so far, few studies have focused
on such sentiments.
This report provides perspectives on how writing centers are being affected by AI chatbots. It
presents data from an IRB-approved survey, distributed on professional listservs and social
media groups as well as through personal contacts. To ensure the report's timely release, it has
not gone through peer review. Later, the data from this study will be considered with findings
from an ongoing study, which is focused on students’ perspectives. The result will be a research
article intended for publication.
From this current study, the main findings are as follows:
- Writing center administrators feel generally negative about the impacts of AI chatbots.
- Generally, writing center administrators do not personally use AI chatbots or use them only a little.
- Writing center administrators predict AI chatbots will change the way writing centers work with clients and how tutors/consultants are trained; however, the changes will not be major.
- Institutions are still developing policies for AI chatbots. Most have policies for divisions (e.g., programs, departments), or professors have created their own, but not university-wide policies.
- Institutions are providing support for faculty to understand and work with AI chatbots.
- Writing center clients are using AI chatbots for a variety of reasons, none of which is dominant.
This report continues with a brief overview of germane literature, followed by the survey
methods and findings. At the end of the report, the Conclusions and Discussions section includes
some ideas on how writing center administrators can approach working with AI chatbots. These
ideas will be expanded later in the article written for publication. In the meantime, we welcome
any suggestions about changes or additions to this report that could be made for the article.
Please, send them to Nathan Lindberg at nwl26@cornell.edu.
Relevant Literature
Invented in the 1960s, AI chatbots are computer programs that use natural language to
communicate with users (Shawar & Atwell, 2007). Recently, AI chatbots have exploded in
popularity due to the success of ChatGPT, a large language model chatbot. Launched in
November 2022, ChatGPT reached a million users in five days, a feat that took the social media
site Instagram 15 times longer (Chartr, 2022). While a variety of other AI chatbots have been and
are being introduced (e.g., Google’s Bard), currently, ChatGPT is the most popular.
ChatGPT has had a significant impact on education, particularly on writing. Students use it to come
up with ideas, create outlines, and even write content. In reaction, some schools have banned
using ChatGPT (Yadava, 2023), but, as of yet, teachers have no reliable way of detecting AI-
generated text (Terry, 2023). Trends indicate that AI chatbots are only going to become more
effective and ubiquitous writing tools.
Using AI chatbots to assist with writing can be problematic. The texts they generate can look like
academic writing, but contain mistakes or even fabricated information (Ali & Singh, 2023).
Beyond inaccuracies, teachers and students have expressed concerns that AI chatbots are
negatively impacting education and replacing humans (Shoufan, 2023). Additional concerns are
that they could create a divide between those who can afford the technology and those who
cannot (Yan, 2023) and that they may further solidify the use of colonial languages (e.g.,
Standard English) and, thus, the coloniality of power (Madianou, 2021).
Despite these challenges, using AI chatbots for writing has advantages. AI chatbots can function
like a personal tutor, accessible any time for any question (Rahman & Watanobe, 2023). Having
such a tutor can create a more equitable environment for those who have difficulty adjusting
their written accent, such as an English as an additional language (EAL) student writing in North
American academic English (Teubner et al., 2023).
Though there are fears that AI chatbots might replace writing instructors, we might instead look
at them as a powerful tool that we can enable our students to use. In fact, it has been argued that
teachers are ethically obligated to teach the use of AI chatbots, or they risk disadvantaging their
students (Jeon & Lee, 2023).
Methods of this Study
Data for this study were gathered using an IRB-approved survey. The survey was sent out in
August 2023 on several writing center professional listservs and posted on a writing center
Facebook group. There were 98 responses, but 23 were deleted because either no input was
given or only the first item was addressed (i.e., "What is your role with the writing center(s) at
your institution?"). Nine others were only partially completed; however, they contained relevant
responses, so they are included in this study. The other 66 responses were complete.
Survey participants primarily identified as writing center administrators (n=62). In addition, nine
identified as consultants or tutors, and three identified as interested parties (e.g., writing faculty).
The participants are affiliated with 65 institutions, five of which were listed twice. However, the
responses from duplicated universities were often different, even for questions that have a
definitive answer, such as Does the Institution You Are Affiliated with Have Policies on Using AI
Chatbots? We felt the answers reflected opinions and perceptions, not definitive truths, so we
included all responses from duplicate institutions.
Participants were asked to identify the primary and non-primary clients of the writing centers
they are affiliated with. Most primarily serve undergraduate students (n=55) and, second,
graduate and professional students (n=13). Their non-primary clients are graduates and
professionals (n=38), postdocs (n=23), faculty (n=23), other (n=17), none (n=14), and
undergraduates (n=7).
Results
Participants' Perceptions and Experiences with AI Chatbots
Participants were asked how they generally felt about the impact of ChatGPT and other AI
chatbots on the writing center. As seen in Figure 1, most sentiments were negative (n=36),
though 18 were neutral. Only 12 felt positive. None of the participants chose “there is no
impact.”
Figure 1
Question: Generally, How Do You Feel About the Impact of ChatGPT and Other AI Chatbots on
Writing Centers?
Participants were asked to elaborate. A little more than half of the 47 who responded believe that
AI chatbots are useful and can allow writing centers to focus on different tasks, such as spending
less time on looking for “mistakes” and more time on using language creatively. Approximately
half of the others are not as optimistic. These respondents are worried AI chatbots will negatively
impact the learning process and fear that students will increasingly rely on AI chatbots to do their
work. One respondent even argued that the reliance on these kinds of technologies may diminish
people's cognitive skills.
Participants were asked if they used ChatGPT or other AI chatbots personally. The majority do
not or only use them a little (n=55). Only four reported using them a lot (Figure 2).
[Figure 1 data: Very negative = 10; A little negative = 26; Neutral = 18; A little positive = 9; Very positive = 3; There is no impact = 0]

Figure 2
Question: Do You Personally Use ChatGPT or Other AI Chatbots?
Respondents were asked to elaborate. Some indicated that they use ChatGPT to learn how
it works. Some stated they use it to brainstorm and/or help with designing classes or creating
training materials. Concerning those who do not use AI chatbots, most declared they do not have
a need to. However, two raised ethical implications and others stated AI chatbots are too limited
in output and reliability. Two others raised issues of privacy and were reluctant to create an
account.
Institutions' Approach to AI Chatbots
Participants were asked if the institution they are affiliated with has policies for using AI
chatbots. While the most common answer was yes (n=24), the policies were only for specific
"parts" (e.g., programs, individual professors). Thirteen indicated that there are no policies, but 11
thought policies were being developed. Only seven indicated their institution has university-wide
policies (Figure 3).
Figure 3
Question: Does the Institution You Are Affiliated with Have Any Policy/Policies About Using AI
Chatbots?
Participants were invited to elaborate. Of the 46 who responded, most stated that the institution
they are affiliated with allows individual faculty members to decide their own policies. Some
stated that administrators have developed policies both in favor of and against the use of AI
chatbots, highlighting ethical concerns and the need for transparency. Most stated that the
policies being developed, or already in practice, go hand in hand with existing policies, such as
those on plagiarism.

[Figure 2 data: No = 23; Yes, a little = 32; Yes, sometimes = 8; Yes, a lot = 4]

[Figure 3 data: No = 13; No, but being developed = 11; Yes, but only for parts = 24; Yes, university-wide = 7; I'm not sure = 1; Other = 12]
Even though policies at institutions may not have been fully developed, 42 participants indicated
that their institution provides support, while only eight indicated theirs does not (Figure 4).
Figure 4
Question: At the Institution That You Are Affiliated With, Is There Support (e.g., Workshops,
Handouts, Lists of Resources) for Using ChatGPT or Other AI Chatbots?
Respondents were asked to elaborate. Of the 46 who responded, most mentioned support from
teaching centers (e.g., center for teaching innovation), and some from IT departments. A few
mentioned that the library is involved. The most common form of support seems to be
workshops, but some respondents mentioned informal conversations with faculty members.
Clients Using AI Chatbots
Participants were asked if their writing center clients were using AI chatbots. Fifteen indicated
that they were not, 37 thought they were, and 20 were not sure. That so many were not sure is
notable, indicating that administrators had either not explored the issue or that they had and their
findings were inconclusive.
The 37 respondents who thought clients were using ChatGPT were asked what they were using it
for. The most popular answer was coming up with ideas (e.g., brainstorming topics, outlining,
drafting text) (n=28), followed by adjusting for second language issues (e.g., non-native
phrasing/vocabulary, non-native syntax) (n=26), and then writing low-stakes texts (e.g.,
emails, minor class assignments) (n=25). Generally, however, no single answer was dominant
(Figure 5).
Figure 5
Question: What Do You Think Clients Are Using Them for? (Choose as Many as Apply.)
[Figure 4 data: No = 8; No, but support is being developed = 11; Yes = 42; I'm not sure = 6]
Note: Choices not discussed above and their parentheticals are as follows:
"proofreading" (e.g., checking for mistakes),
"tone or style" (e.g., formal/informal, academic/general audience),
"high-stakes texts" (e.g., major class assignments, job applications, research articles, grant
proposals), and
"reading" (i.e., summarizing/explaining texts).
Participants were asked what evidence they based their answers on, and 12 indicated it was
anecdotal, i.e., from conversation with faculty, clients, and/or tutors/consultants. Only four
indicated they had surveyed clients.
Impacts of AI Chatbots on Writing Center Appointments
If AI chatbots can be used as a personal writing assistant (Rahman & Watanobe, 2023), then it
stands to reason that clients might be using an AI chatbot instead of going to a
tutoring/consulting session. Thus, the number of appointments made would decline. However,
this idea was only partially supported. For the academic year 2022-2023, 23 participants reported
a decrease in appointments made, while 24 reported an increase, and 22 indicated the number of
appointments was about the same.
Those who saw a decline were asked how much. Rates ranged from 1% to 70%, with an average
decline of 24%. Participants were asked if the decline was caused by clients using ChatGPT
instead of tutoring/consulting; five indicated no, six indicated yes, and 12 were not sure. When
asked to further explain declines, 12 indicated they were due to the pandemic (e.g., burnout,
disruption, services slow to recover). Other reasons included a decrease in student enrollment
and budget cuts, both of which led to fewer appointments being offered.
Those who saw an increase in appointments made were asked to estimate how much. Rates
ranged from 3% to 41%, with an average of 16%. Eleven participants surmised the cause of the
increase was again primarily pandemic related (e.g., services were rebounding when physical
spaces were opened). Other reasons included increased budgets and student enrollment.
[Figure 5 data: Proofreading = 20; Adjusting for second language issues = 26; Adjusting tone or style = 15; Ideas = 28; Writing low-stakes content = 25; Writing high-stakes content = 21; Reading = 19; Other = 3; I'm not sure = 8]
Determining the cause of an increase or decrease in appointments is difficult because there are
many variables involved, including some outside the writing center (e.g., school enrollment,
budgets). For future studies, rewording the survey item could help eliminate some variables. For
example, asking if occupancy compared to appointments offered declined or rose might yield
more germane data.
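To make this suggested measure concrete, here is a minimal sketch in Python, using invented appointment counts rather than survey data (the function name and figures are hypothetical, for illustration only). It shows how occupancy can separate client demand from changes in offered capacity:

```python
# Hypothetical illustration: raw appointment counts vs. occupancy
# (appointments filled / appointments offered) across two academic years.
# All numbers below are invented for the example, not survey data.

def occupancy_rate(filled, offered):
    """Share of offered appointment slots that were actually booked."""
    if offered <= 0:
        raise ValueError("offered must be positive")
    return filled / offered

# Invented scenario: a center cut its offered slots (e.g., budget cuts).
y2022 = {"filled": 900, "offered": 1000}
y2023 = {"filled": 720, "offered": 800}

raw_change = (y2023["filled"] - y2022["filled"]) / y2022["filled"]
occ_2022 = occupancy_rate(**y2022)
occ_2023 = occupancy_rate(**y2023)

# Raw appointments fell 20%, yet occupancy held steady at 90%: the
# decline reflects fewer offered slots, not clients turning away.
print(f"raw change: {raw_change:.0%}")
print(f"occupancy: {occ_2022:.0%} -> {occ_2023:.0%}")
```

In this invented case, a survey item about appointments made would register a 20% drop, while an occupancy item would correctly show no change in demand.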
AI Chatbots' Impact on Working with Clients and Training Tutors/Consultants
Participants' answers to two survey questions indicate that they believe AI chatbots will change
aspects of the writing center. In the first question, participants were asked if they thought AI
chatbots would change the way writing centers work with clients. Fifty-three indicated they did,
16 were not sure, and only two reported they did not (Figure 6). Again, it is notable that so many
were not sure, implying they had not investigated the matter or that they had and the results were
inconclusive.
Figure 6
Question: Do You Think ChatGPT or Other AI Chatbots Will Change the Way Writing Centers
Work with Clients?
Participants were asked to elaborate. Of the 49 who responded, almost all mentioned that
tutors/consultants will need to discuss with clients the ethics of using AI chatbots, specifically,
being transparent about using them and university policies on the matter. In terms of the writing
center, there seems to be a consensus that AI chatbots will significantly change the way things
are done, especially on how tutors/consultants are trained and the feedback that clients expect or
need. For example, there will be more need for feedback on ideas and structure and less for line-
by-line editing. The consensus also seems to be that writing centers need to make students aware
of the limitations of the AI chatbots and teach clients how to use them critically. The most
pessimistic sentiments were from two respondents who thought that AI chatbots will bring the
end of writing centers. Another particularly notable response was, "It's not just AI itself that
changes the situation; it's how faculty react to it and what policies institutions adopt about it.”
In the second question, participants were asked if they thought AI chatbots would change the
way writing center tutors/consultants are trained. Fifty-five indicated yes, 11 were not sure, and
three indicated that they would not (Figure 7). Notably, those who were unsure represent about
16% of the respondents.
Figure 7
Question: Do You Think ChatGPT or Other AI Chatbots Will Change the Way Writing Center
Consultants/Tutors Are Trained?
Participants were asked to elaborate. Of the 47 who responded, some highlighted the need to
understand the ethical implications of AI and make sure tutors are following the policy of the
university. Others highlighted the need for tutors to understand the strengths of AI and its
limitations. Some emphasized the need for the development of guidelines for tutors and possibly
spending more time on invention and structure than on grammar.
Though participants felt AI chatbots will change the writing center, the change may not be
substantial. When asked how much AI chatbots will change the way tutors are trained, the
majority indicated "some"; only three indicated "a lot" or "everything will change" (Figure 8).
Figure 8
Question: How Much Will AI Chatbots Change the Way Tutors/Consultants Are Trained?
Conclusions and Discussions
When this study was conducted, ChatGPT had only been released nine months prior, so
situations will most likely change. However, based on the data gathered, it appears that writing
center administrators are personally not widely using AI chatbots, but they recognize that their
clients are. They also feel that AI chatbots will have an effect on the writing center, specifically,
how tutors/consultants are trained and how they work with clients. However, perhaps these
changes will not be major.
So far, evidence is lacking that AI chatbots are replacing writing center appointments. It might
be too early to ascertain. When participants who witnessed a decline in appointments were asked
what caused it, the majority were unsure. Future studies should probably word survey items
differently. This study asked if there was a decline or increase in appointments made in the
academic year 2022-2023. However, appointments can vary due to budgets, school enrollment,
and other factors outside the writing center. Participants instead might be asked if writing centers
experienced a decline in occupancy compared to appointments offered, which could provide
more relevant data. However, as noted, there are multiple variables to consider.
Moving forward, this study provides evidence that writing centers will most likely need to make
adjustments for the use of AI chatbots. What adjustments should be made? The article we will
write for publication following this report will attempt to address this question by considering
clients' perspectives (e.g., how they use AI chatbots). Here, though, we can put forth one notion:
if we believe that AI chatbots will change the way writing centers work, but we are not using AI
chatbots ourselves, we face a dilemma. How can we guide writing centers if we do not know
where to go?
One notion is that writing center administrators will need to become more familiar with AI
chatbots, even using them in their own work. (In fact, the writers of this report consulted
ChatGPT 4 for style suggestions and proofreading.) Another notion is that if clients are readily
using AI chatbots, we might look to them for guidance. What are they using them for? What
using AI chatbots, we might look to them for guidance. What are they using them for? What
techniques have they developed? Which techniques can we pass on to others? Asking these
questions can lead to knowledge that can be shared with tutors/consultants and clients. We may
even learn for ourselves.
About the Researchers
Dr. Nathan Lindberg is a senior lecturer for the English Language Support Office (ELSO) at
Cornell University and Director of ELSO's Writing & Presenting Tutoring Service.
Amanda Domingues is a PhD candidate in Science and Technology Studies at Cornell University
and a former tutor for the English Language Support Office (ELSO).
Thari Zweers (MA, M. Ed.) is a PhD candidate in the Medieval Studies Program and a tutor for
the English Language Support Office (ELSO) at Cornell University.
Selin Goktas is a PhD candidate in Psychology and a tutor at the English Language Support
Office (ELSO) at Cornell University.
References
Ali, M. J., & Singh, S. (2023). ChatGPT and scientific abstract writing: Pitfalls and caution.
Graefe's Archive for Clinical and Experimental Ophthalmology.
https://doi.org/10.1007/s00417-023-06123-z
Chartr. (2022, December). ChatGPT: The AI bot taking the tech world by storm.
https://www.chartr.co/stories/2022-12-09-1-chatgpt-taking-the-tech-world-by-
storm#:~:text=Built%20on%20the%20architecture%20of,spits%20something%20back%
20to%20you
Fang, T., Yang, S., Lan, K., Wong, D., Hu, J., Chao, L. S., & Zhang, Y. (2023). Is ChatGPT a
highly fluent grammatical error correction system? A comprehensive evaluation. ArXiv.
https://doi.org/10.48550/arXiv.2304.01746
Heaven, W. D. (2023). The education of ChatGPT. MIT Technology Review, 126(3), 42–47.
ISSN: 2749-649X
Jeon, J., & Lee, S. (2023). Large language models in education: A focus on the complementary
relationship between human teachers and ChatGPT. Education and Information
Technologies. https://doi.org/10.1007/s10639-023-11834-1
Madianou, M. (2021). Nonhuman humanitarianism: When ‘AI for good’ can be harmful.
Information, Communication & Society, 24(6), 850–868.
https://doi.org/10.1080/1369118X.2021.1909100
Rahman, M., & Watanobe, Y. (2023). ChatGPT for education and research: Opportunities,
threats, and strategies. Applied Sciences, 13(9). https://doi.org/10.3390/app13095783
Shawar, B. A., & Atwell, E. (2007). Chatbots: Are they really useful? Journal for Language
Technology and Computational Linguistics, 22(1), 29–49. ISSN 0175-1336
Shoufan, A. (2023). Exploring students' perceptions of ChatGPT: Thematic analysis and follow-
up survey. IEEE Access, 11, 38805-38818.
Terry, O. K. (2023). I'm a student. You have no idea how much we're using ChatGPT: No
professor or software could ever pick up on it. Big Bot on Campus: The Perils and
Potential of ChatGPT and other AI. The Chronicle of Higher Education.
Teubner, T., Flath, C. M., Weinhardt, C., Van Der Aalst, W., & Hinz, O. (2023). Welcome to the
era of ChatGPT et al.: The prospects of large language models. Business & Information
Systems Engineering, 65(2), 95–101. https://doi.org/10.1007/s12599-023-00795-x
Yadava, O. P. (2023). ChatGPT – a foe or an ally? Indian Journal of Thoracic and
Cardiovascular Surgery, 39, 217–221. https://doi.org/10.1007/s12055-023-01507-6
Yan, D. (2023). Impact of ChatGPT on learners in a L2 writing practicum: An exploratory
investigation. Education and Information Technologies, 4(1),1.
https://doi.org/10.1007/s10639-023-11742-4
... If we are to ensure GenAI writing tools are used effectively and ethically, these tools must be part of our services, and, indeed, some directors are already incorporating these tools. In the spring of 2024, a co-researcher and I distributed an IRBapproved survey to writing center administrators (Lindberg & Domingues, 2024). Of the 81 respondents who completed the survey, 63 felt GenAI tools were changing the way writing centers work with clients (5 did not and 13 were not sure), and 69 felt that GenAI writing tools are changing the way tutors/consultants are trained (2 did not and 10 were not sure). ...
... My own service experienced a 60% decline in occupancy since the fall of 2022 when ChatGPT 3.5 was introduced. Our research (Lindberg & Domingues, 2024) has shown that a large part of this decline was due to our clients using GenAI writing tools instead of our service to adjust their written accents. So, while I believe that such tools should be advocated, ironically, the same tools are negatively impacting my service. ...
... We too have had difficulty determining a nomenclature. Last year, we used the term GAI tools, but it proved too general (Lindberg et al., 2023). This year, we consulted an expert-ChatGPT-which told us AI writing tools is currently most widely used (confirmed by a Google search), so we've adopted the term in this report. ...
Preprint
Full-text available
In May 2024, we surveyed writing center administrators and interested parties (e.g., tutors, faculty) about the effects of AI writing tools (e.g., ChatGPT, Gemini, Copilot) on the writing center. By comparing the results to our 2023 report (Lindberg et al.), we show how these tools are becoming accepted and are affecting writing centers, including the way tutors are trained and how writing centers work with clients.
Chapter
As Generative AI (GAI) chatbots become increasingly prevalent in academia, research, and writing, Writing Centers face the challenge of adapting to this technological shift. GAI chatbots are powerful tools capable of generating large volumes of text on various topics, assisting with revision and editing tasks, and providing round-the-clock accessibility. These capabilities pose significant competition for Writing Center consultants and professionals, potentially leading to feelings of insecurity and demoralization. Whether the number of consultations in your Writing Centers has begun to decline or not yet, it is time to act. This chapter explores how Writing Centers can address these challenges by adopting the A3 approach: Assess, Adapt, and Apply. This methodology not only helps restore a healthy Writing Center environment but also enhances assessment practices, thereby increasing the center's value and resilience.
Chapter
This chapter briefly addresses LLM scholarship. Next, a systematic analysis outlines WC AI policies for the reader, utilizing the frame set by the IWCA Task Force Initial Findings (2024), with an emphasis on the ways students are positioned within this documentation (as users, owners, violators, learners, plagiarizers?). The chapter then shares narratives written by the authors, both undergraduate and graduate consultants from various disciplines, as they indicate how they have experienced these shifts in educational practice in real time. Finally, the chapter provides a heuristic for writing centers, developed for and by writing consultants, in order to demonstrate how to design and develop more accessible, open, and collaboratively driven writing center curricula, materials, and services in light of the developing research and the often times conflicting policies and positionalities encountered by academic writers in the uses of LLMs.
Article
Full-text available
In recent years, the rise of advanced artificial intelligence technologies has had a profound impact on many fields, including education and research. One such technology is ChatGPT, a powerful large language model developed by OpenAI. This technology offers exciting opportunities for students and educators, including personalized feedback, increased accessibility, interactive conversations, lesson preparation, evaluation, and new ways to teach complex concepts. However, ChatGPT poses different threats to the traditional education and research system, including the possibility of cheating on online exams, human-like text generation, diminished critical thinking skills, and difficulties in evaluating information generated by ChatGPT. This study explores the potential opportunities and threats that ChatGPT poses to overall education from the perspective of students and educators. Furthermore, for programming learning, we explore how ChatGPT helps students improve their programming skills. To demonstrate this, we conducted different coding-related experiments with ChatGPT, including code generation from problem descriptions, pseudocode generation of algorithms from texts, and code correction. The generated codes are validated with an online judge system to evaluate their accuracy. In addition, we conducted several surveys with students and teachers to find out how ChatGPT supports programming learning and teaching. Finally, we present the survey results and analysis.
Article
Full-text available
Artificial Intelligence (AI) is developing in a manner that blurs the boundaries between specific areas of application and expands its capability to be used in a wide range of applications. The public release of ChatGPT, a generative AI chatbot powered by a large language model (LLM), represents a significant step forward in this direction. Accordingly, professionals predict that this technology will affect education, including the role of teachers. However, despite some assumptions regarding its influence on education, how teachers may actually use the technology and the nature of its relationship with teachers remain under-investigated. Thus, in this study, the relationship between ChatGPT and teachers was explored with a particular focus on identifying the complementary roles of each in education. Eleven language teachers were asked to use ChatGPT for their instruction during a period of two weeks. They then participated in individual interviews regarding their experiences and provided interaction logs produced during their use of the technology. Through qualitative analysis of the data, four ChatGPT roles (interlocutor, content provider, teaching assistant, and evaluator) and three teacher roles (orchestrating different resources with quality pedagogical decisions, making students active investigators, and raising AI ethical awareness) were identified. Based on the findings, an in-depth discussion of teacher-AI collaboration is presented, highlighting the importance of teachers’ pedagogical expertise when using AI tools. Implications regarding the future use of LLM-powered chatbots in education are also provided.
Article
Full-text available
Article
Full-text available
ChatGPT has sparked both excitement and skepticism in education. To analyze its impact on teaching and learning it is crucial to understand how students perceive ChatGPT and assess its potential and challenges. Toward this, we conducted a two-stage study with senior students in a computer engineering program (n=56). In the first stage, we asked the students to evaluate ChatGPT using their own words after they used it to complete one learning activity. The returned responses (3136 words) were analyzed by coding and theme building (36 codes and 15 themes). In the second stage, we used the derived codes and themes to create a 27-item questionnaire. The students responded to this questionnaire three weeks later after completing other activities with the help of ChatGPT. The results show that the students admire the capabilities of ChatGPT and find it interesting, motivating, and helpful for study and work. They find it easy to use and appreciate its human-like interface that provides well-structured responses and good explanations. However, many students feel that ChatGPT's answers are not always accurate and most of them believe that it requires good background knowledge to work with since it does not replace human intelligence. So, most students think that ChatGPT needs to be improved but are optimistic that this will happen soon. When it comes to the negative impact of ChatGPT on learning, academic integrity, jobs, and life, the students are divided. We conclude that ChatGPT can and should be used for learning. However, students should be aware of its limitations. Educators should try using ChatGPT and guide students on effective prompting techniques and how to assess generated responses. The developers should improve their models to enhance the accuracy of given answers. The study provides insights into the capabilities and limitations of ChatGPT in education and informs future research and development.
Preprint
Full-text available
ChatGPT, a large-scale language model based on the advanced GPT-3.5 architecture, has shown remarkable potential in various Natural Language Processing (NLP) tasks. However, there is currently a dearth of comprehensive studies exploring its potential in the area of Grammatical Error Correction (GEC). To showcase its capabilities in GEC, we design zero-shot chain-of-thought (CoT) and few-shot CoT settings using in-context learning for ChatGPT. Our evaluation involves assessing ChatGPT's performance on five official test sets in three different languages, along with three document-level GEC test sets in English. Our experimental results and human evaluations demonstrate that ChatGPT has excellent error detection capabilities and can freely correct errors to make the corrected sentences very fluent, possibly due to its over-correction tendencies and its not adhering to the principle of minimal edits. Additionally, its performance in non-English and low-resource settings highlights its potential in multilingual GEC tasks. However, further analysis of various types of errors at the document level has shown that ChatGPT cannot effectively correct agreement, coreference, and tense errors across sentences, or cross-sentence boundary errors.
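The zero-shot and few-shot CoT settings described in this abstract can be illustrated with a minimal prompt-construction sketch. The wording of the instructions and the example sentence pairs below are illustrative assumptions, not the prompts actually used in the study:

```python
# Sketch of assembling zero-shot and few-shot chain-of-thought (CoT)
# prompts for grammatical error correction (GEC). The prompt wording and
# demonstration pairs are hypothetical, for illustration only.

def zero_shot_cot_prompt(sentence: str) -> str:
    """Zero-shot CoT: instruct the model to reason step by step, then correct."""
    return (
        "Correct any grammatical errors in the sentence below. "
        "Think step by step, then give the corrected sentence.\n"
        f"Sentence: {sentence}"
    )

def few_shot_cot_prompt(examples: list, sentence: str) -> str:
    """Few-shot CoT: prepend worked (erroneous, corrected) pairs as in-context demonstrations."""
    demos = "\n".join(
        f"Sentence: {err}\nCorrected: {fix}" for err, fix in examples
    )
    return (
        "Correct any grammatical errors. Think step by step.\n"
        f"{demos}\nSentence: {sentence}\nCorrected:"
    )

prompt = few_shot_cot_prompt(
    [("She go to school.", "She goes to school.")],
    "He have two cat.",
)
```

In-context learning here means the model receives the demonstration pairs inside the prompt itself; no fine-tuning is involved.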
Article
Full-text available
Technology-enhanced language learning has exerted positive effects on the performance and engagement of L2 learners. Since the advent of tools based on recent advancement in artificial intelligence (AI), educators have made major strides in applying state-of-the-art technologies to writing classrooms. In November 2022, an AI-powered chatbot named ChatGPT capable of automatic text generation was introduced to the public. The study tried to apply ChatGPT’s text generation feature in a one-week L2 writing practicum. The study adopted a qualitative approach to investigate students’ behaviors and reflections in their exposure to ChatGPT in writing classrooms. The developmental features in learning activities and reflective perceptions were triangulated for the piloting evaluation of the impact of ChatGPT on L2 writing learners. The findings revealed the affordance and potential applicability of the tool in L2 writing pedagogy. Additionally, the tool also showcased an automatic workflow that could maximize the efficiency in composing writing. However, participants generally expressed their concern with its threats to academic honesty and educational equity. The study impelled the reconceptualization of plagiarism in the new era, development of regulatory policies and pedagogical guidance to regulate proper utilization of the tool. Being a pioneering effort, the study accentuated future research directions for more insights into the application of ChatGPT in L2 learning, and the establishment of corresponding pedagogical adjustments.
Article
Full-text available
Artificial intelligence (AI) applications have been introduced in humanitarian operations in order to help with the significant challenges the sector is facing. This article focuses on chatbots which have been proposed as an efficient method to improve communication with, and accountability to affected communities. Chatbots, together with other humanitarian AI applications such as biometrics, satellite imaging, predictive modelling and data visualisations, are often understood as part of the wider phenomenon of ‘AI for social good’. The article develops a decolonial critique of humanitarianism and critical algorithm studies which focuses on the power asymmetries underpinning both humanitarianism and AI. The article asks whether chatbots, as exemplars of ‘AI for good’, reproduce inequalities in the global context. Drawing on a mixed methods study that includes interviews with seven groups of stakeholders, the analysis observes that humanitarian chatbots do not fulfil claims such as ‘intelligence’. Yet AI applications still have powerful consequences. Apart from the risks associated with misinformation and data safeguarding, chatbots reduce communication to its barest instrumental forms which creates disconnects between affected communities and aid agencies. This disconnect is compounded by the extraction of value from data and experimentation with untested technologies. By reflecting the values of their designers and by asserting Eurocentric values in their programmed interactions, chatbots reproduce the coloniality of power. The article concludes that ‘AI for good’ is an ‘enchantment of technology’ that reworks the colonial legacies of humanitarianism whilst also occluding the power dynamics at play.
Article
Full-text available
Chatbots are computer programs that interact with users using natural languages. This technology started in the 1960s; the aim was to see if chatbot systems could fool users into believing they were real humans. However, chatbot systems are not built only to mimic human conversation and entertain users. In this paper, we investigate other applications where chatbots could be useful, such as education, information retrieval, business, and e-commerce. A range of chatbots with useful applications, including several based on the ALICE/AIML architecture, are presented in this paper.
Ali, M. J., & Singh, S. (2023). ChatGPT and scientific abstract writing: Pitfalls and caution. Graefe's Archive for Clinical and Experimental Ophthalmology. https://doi.org/10.1007/s00417-023-06123-z