The Next Word: A Framework for Imagining the Benefits and Harms of Generative AI as a Resource for Learning to Write

Sarah W. Beck, New York University, New York, New York, USA
Sarah Levine, Stanford University, Stanford, California, USA

Abstract

In Parable of the Sower, Octavia Butler (1993) wrote: "Any Change may bear seeds of benefit. Seek them out. Any Change may bear seeds of harm. Beware" (p. 116). In this paper, we apply this command to a speculative examination of the consequences of text-based generative AI (GAI) for adolescent writers, framing this examination within a socially situated "Writers-in-Community" model of writing (Graham, 2018), which considers writing both as an act of individual cognition and as situated within concentric circles representing nested social, material, and cultural contexts for writing. Through the lens of this model, we discuss representations of language-related technologies in works by several well-known authors of 20th-century speculative fiction and contrast these speculative scenarios with examples from our recent research into student writers' use of ChatGPT and other GAI tools. Finally, we discuss (a) the limitations of these tools as lacking the ability to set goals and use these goals to compose a written work, which is a key component of an effective writing process, and (b) what would be required to support students to write agentively in collaboration with these tools, despite these limitations. This discussion focuses on three principles: (1) centering human writers in collaborations with GAI; (2) setting writer goals to address historical, political, institutional, and social influences; and (3) critical agency in literacy with GAI.
Reading Research Quarterly, 0(0), 1–10 | doi:10.1002/rrq.567
© 2024 International Literacy Association.
Almost 300 years ago, Jonathan Swift envisioned educators' current nightmares about ways that ChatGPT and other generative AI (GAI) might decouple students' written language from thought. In Gulliver's Travels (1726/2023), Gulliver encounters an illustrious, pompous professor who has created the Engine, a writing machine made of wood and bits of paper, on which were written "all the words of their language, in their several moods, tenses, and declensions, but without any order" (p. 45). Figure 1 depicts an illustration of this machine from the 1726 edition of the book.

With the turn of 40 handles, the machine combined words at random to create readable sentences, which meant that "the most ignorant person, at a reasonable charge, and with a little bodily labour, might write books in philosophy, poetry, politics, laws, mathematics, and theology, without the least assistance from genius or study" (p. 44). The human who turns the handles on this engine loses the power, and seemingly the will, to exercise this essentially human capacity.

Swift's view of the effect of machines on writing is clearly a dim one, but a more recent speculative fiction writer, Octavia Butler, encourages us to consider innovations from a balanced perspective. In Parable of the Sower (1993), she writes: "Any Change may bear seeds of benefit. Seek them out. Any Change may bear seeds of harm. Beware" (p. 116). We take up her commands to seek and beware by considering the potential harms and benefits that text-based GAI may bring to writers and writing.
GAI seems just as close to science fiction as it does to our current reality. To consider its harms and benefits, therefore, we turn not only to traditional research methods but also to speculative fiction, which intersects with the origins of GAI. A recent study by Dillon and Schaffer-Goddard (2023) found that AI researchers drew on speculative fiction for ethical perspectives and occasionally even for modeling possible futures that could ensue from their scientific work. We see fiction as an especially useful tool for speculation because, as cognitive scientist Mark Turner says, "Narrative imagining—story—is the fundamental instrument of thought. Rational capacities depend upon it. It is our chief means of looking into the future, of predicting, of planning, of explaining" (1996, pp. 4–5). Using fictional sources for the purpose of speculation is thus aligned with the larger project of speculative education, which Garcia and Mirra (2023) define as the imagination of "visionary and future-oriented approaches to teaching and learning" (p. 4).

In this essay, we look to speculative fiction to imagine what might happen if seeds of harm were to flourish, and humans were to surrender written language and the writing process to machines. Heeding Garcia and Mirra's (2023) reminder that "the speculative is also now and here" (p. 12), we also draw from current writing models and current research with high school students who used ChatGPT as a writing support, seeking evidence of seeds of benefit from these sources. This hybrid, interdisciplinary research approach helps us envision ways that human writers can maintain their imagination and agency to build and communicate ideas when writing with generative AI.
We organize these observations and interpretations through the lens of Graham's (2018) "Writers-within-Community" model of writing (Figure 2). Graham's expansive heuristic suits our purpose because it frames writing not only as an act of individual cognition, but also as situated within concentric social circles that include tools such as paper, smartboards, or writing applications; human collaborators; writing communities; and political, social, cultural, institutional, and historical influences. The inclusion of historical, institutional, and political dimensions of influence is useful for considering how writing with GAI can represent a form of speculative education known as "speculative civic literacies," which Garcia and Mirra (2023) characterize as "decenter[ing] the state" (p. 4).

We see GAI as interacting with and across each of these concentric circles, with the potential for both harms and benefits, depending on the degree of constraint that each circle exerts on the writer. We explore how it may be necessary to rethink this model to accommodate the influence of GAI and the billions of instances of prior language use—in the form of Large Language Models (LLMs) eerily akin to Swift's Engine—that it is built upon.
Seeds of Harm in Speculative Fiction About Writing and Technology

The definition of speculative fiction has changed over time, and its relationship to other genres, such as science fiction, is the subject of spirited debate (Thomas, 2013).

FIGURE 1
Illustration of Swift's Engine (1726/2023)

FIGURE 2
Graham's (2018) Basic Components of a Writing Community
We find broad definitions such as Toliver and Miller's (2019) most useful for informing speculative education projects. They define speculative fiction as any depiction of fictional worlds beyond our current "realistic and historical boundaries" (p. 53). Authors of fiction use the written word to create worlds. Thus, it is not surprising that many authors of speculative fiction focus on the written word when imagining what humanity might become, as in classic depictions like the universal book burning in Ray Bradbury's Fahrenheit 451, or Orwell's "memory hole," into which the state deposits written evidence of facts or opinions that run counter to the state's agenda in 1984. The Encyclopedia of Science Fiction (SFE: SF Encyclopedia, n.d.) offers an entry on "wordmills" that includes over 40 novels and short stories in which humans wrestle with technology for control of writing.

For this paper, we selected four pieces of fiction that speculate about the power of writing and the powerful influence of machines on writing. Our selections include Gulliver's Travels (1726), a foundational work of speculative fiction from the English literary canon; The Penultimate Truth (1964), a lesser-known work by mid-century American writer Philip K. Dick; The Parable of the Sower (1993) by Octavia Butler, who is famous for using speculative fiction to address themes of social, racial, and environmental injustice; and a genre-bending work—a speculative fictional essay—called "Catching Crumbs from the Table" (2000) by Ted Chiang, a relative newcomer to the speculative fiction universe. Together, these authors represent a historical, cultural, and imaginative range of speculative fiction.
Seeds of Benefit in Research on Students Writing with Technology in the Here and Now

We balance fictional speculations about potential seeds of harm with actual, and more hopeful, examples from our recent study of student writers' use of ChatGPT and other AI tools (Levine et al., in press). A brief summary of the scope and purpose of our research follows.

In the spring of 2023, just months after the emergence of ChatGPT, we partnered with a teacher at a California charter high school, Sunrise High (a pseudonym), to explore the affordances and limitations of GAI as a support for student writing. Sunrise serves low-income, Black, Latinx, and AAPI students who are the first in their families to go to college. It is important to note that Sunrise had the resources to offer students an unusual level of academic support, which meant that our participants' interactions with ChatGPT may not represent those of the typical high school student. On the other hand, the fact that their activities and strategies occurred with no prompting from us or their teacher provides compelling evidence to support our speculations about seeds of benefit in the radical change that GAI represents.

We recruited 12 students to work with us in four separate after-school sessions to write with ChatGPT, using Chromebooks to access OpenAI's ChatGPT 3.5 and to write in Google Docs. We instructed them to use ChatGPT however they wished, with one exception: they could not use ChatGPT to do all their writing for them. Our examples come from Session 1, when they worked in pairs and wrote arguments proposing a new mascot for their school, and Session 2, when they individually wrote superhero stories to pitch to a fictional movie company.

Using a screencasting application, we recorded all student talk and writing, including their written interactions with ChatGPT (their input to ChatGPT and ChatGPT's responses) and the texts they drafted in Google Docs. After students completed their composing sessions, we also interviewed a subset of them (8 out of 12) about their experiences with the tool. (To read a more detailed report on this study, see Levine et al., in press.)
Imagining GAI's Place in the Writers-in-Community Model

To present the harms and benefits that we observed in these data examples and in the speculative fiction examples we interpret, we proceed through the three layers of the Writers-in-Community model: Writers and their Collaborators; Members and Purposes of the Writing Community; and Social, Cultural, Historical, Institutional, and Political Influences.
Writers and their Collaborators

In The Penultimate Truth, Philip K. Dick (1964) features a collaborative writing aid, also presciently similar to ChatGPT, called a rhetorizer. The rhetorizer works as a writing partner by constructing complete sentences in response to human prompting. Joseph Adams, a human speechwriter suffering from writer's block, invests a significant sum in this device, but it yields only frustration. The rhetorizer produces only "miserable metaphor[s]" in response to his prompts (p. 8). And yet Adams says, "I don't think honestly I could do it, in my own words, without this machine; I'm hooked on it now" (2012, p. 8). This example illustrates the harms that can come from using GAI as a collaborator: First and most obvious, the collaborator takes over, and the writer ceases to learn or grow through writing. Second, the collaborator defaults to the most generic prose, and the writer loses the pleasure of creating original sentences or fresh metaphors.

In our research, we saw instances of students collaborating with ChatGPT to learn and grow as writers.
For example, one pair of students was stuck on finding a synonym for the word "suitable." They asked ChatGPT for alternatives. Figure 3 shows their dialog with their collaborator.

FIGURE 3
Screenshot of Students Learning about the Word "Felicitous" from ChatGPT

After some discussion, the students determined that "felicitous" was the word they wanted and drafted a sentence: "We believe a husky would be a felicitous fit." In this example, the students managed the bot's contributions through precise prompting, setting the goals that shape the collaborator's influence, rather than the other way around. This exchange offers a promising vision of what GAI could be for students: a collaborator who helps students develop their ideas and knowledge, with the students setting the goals that determine the kind and degree of their assistance and evaluating that assistance with a critical eye.

We also saw a rebuke to Dick's pessimistic depiction of the dampening effect that mechanical intervention can have on writing. For example, while writing a superhero story to pitch to a movie production company, one student decided to write about a team of superheroes called the "Dorito Squad" (based on a name his friend group used to refer to themselves).
Without any support from ChatGPT, he wrote:

Dorito Squad, the superheroes who can fly, walk, crawl, and swim (no, of course they can't swim). Their weakness? Water man, it's that simple. Together they fight, but Dorito Squad always ends up in the pond in the end.

After completing a story draft, this curious student prompted ChatGPT to write its own story about the Dorito Squad. In its offering, ChatGPT generated a villain called "Dr. Zest":

One day, the Dorito Squad discovers that their old arch-nemesis, Dr. Zest, has returned to the city with an army of highly trained villains and a new plan: rule the world with the power of his spicy Doritos.

Our screencasts recorded the student as he called his friends over from their computers to read ChatGPT's story. The students read the story out loud to one another, laughing and repeating "Dr. Zest!" in various diabolical voices. Like the rhetorizer, ChatGPT produced figurative language. Drawing from LLMs, the bot identified "zest" as a word that typically collocates with "Doritos," and likewise identified villains as common characters in superhero stories. But in this case, the bot's algorithmic output triggered very human laughter and pleasure in the student writers, and the students further animated and amplified the text through their performative reading. This event suggests that human interaction with GAI can create conditions for writers and readers to experience authentic delight—a feeling mostly absent in the speculative depictions of humans interacting with language technologies.
Members, Purposes, and History of the Writing Community

Graham (2018) notes that "Writing is simultaneously shaped by the community in which it takes place and the cognitive capabilities and resources of community members who create it" (p. 271). In a fictional speculative essay published in the scientific journal Nature, Chiang (2000) plays with this idea by imagining a change in the function of writing for a scientific community of writers. Chiang writes this essay in the voice of future editors of the scientific journal Nature. The editors announce to their readers that all new knowledge will now be created by "metahumans" and communicated via "digital neural transfer." This new way of gathering and communicating knowledge effectively renders scientists and writers obsolete, doomed to "never make an original contribution to science again" (p. 517).

The fictional editors reassure their writing community of scientists that their new role—interpreting metahuman discoveries—is "a legitimate method of scientific inquiry and increases the body of human knowledge just as original research did" (p. 517). Chiang's stance toward these editors is ambiguous, but one part of his story is clear: Writing has been decoupled from knowledge creation. For writers in communities outside of science, the implication is that we should closely guard the capacity to use writing for knowledge creation, whether that is communal knowledge or personal knowledge.

In interviews, we asked students whether they would be likely to use ChatGPT for the purpose of supporting their writing in the future. About half said no. One said, "I just wouldn't use it, because it's just like a personal thing. Like, I just prefer planning out and actually taking the time." Another explained that writing is "a big part of my identity. I think that writing really just helps me when I'm trying to express my emotions." The bot could not fulfill that purpose. This student, who spoke AAVE, also noted that ChatGPT sounded like a robot, which meant that it could not express her ideas. "When I write [to express my emotions], it's basically just how I talk. It's not like in an informational way, or like a professional way." In terms of Graham's model, this student perceived ChatGPT's writing as issuing from a writing community with linguistic norms that did not suit her communicative needs. In terms of Chiang's speculative world, these students valued writing as a way of creating personal knowledge.
Similarly, in Butler's Parable series, including Parable of the Sower and Parable of the Talents, protagonist Lauren Oya Olamina uses writing to create and preserve knowledge. The daughter of pastors, she writes to learn, keeping a daily record of her thoughts and observations in a notebook: "All I do is observe and take notes, trying to put things down in ways that are as powerful, as simple, and as direct as I feel them…" and in doing this, she reflects that "every time I understand a little more, I wonder why it's taken me so long—why there was ever a time when I didn't understand a thing so obvious and real and true" (p. 78). She also writes for a larger purpose, to "put together the scattered verses that I've been writing about God since I was twelve" (p. 24), to create a common text that will act as a foundation of a new religion, one that will create a refuge from the anarchy that surrounds her community. Writing is represented here as a fundamentally human activity that brings the community together.
We found that students perceived writing as a way to connect with their community, and further perceived ChatGPT to be an outsider to their community. For instance, before the pair of students wrote their first lines about Rocky the husky, one student said, "Oh! We could ask [ChatGPT] how to begin." Her partner said, "I feel like we could figure that part out ourselves. It's like, we're using the connection with Rocky, and ChatGPT wouldn't know about Rocky." Unlike the metahumans in Chiang's speculative piece, who had come to dominate the scientific community, ChatGPT could not even join this school community.
These students seemed to recognize that members of a community were best positioned to represent that community, and were disinclined to cede this position to a bot. Like Lauren Olamina, they connected with their community through writing.
Social, Cultural, Institutional, Historical, and Political Influences Interacting with Writing

Butler, Chiang, Dick, and Swift, along with many other authors, imagine dystopian worlds in which state forces have destroyed humanity's writing as a means of burying human history or preventing humans from having agency over their own ideas and voices. In Butler's Parable of the Sower (1993), protagonist Lauren Oya Olamina maintains this agency by remaining one of the few literate citizens, writing to restore humanity through the creation of a new religion that will enable humans to weather the political, social, and environmental disintegration of their world.

Other writers go further, imagining ways that the state can implicate humans in writing their own destruction. In Dick's The Penultimate Truth, speechwriter Joseph Adams uses another mechanical writing tool in addition to the rhetorizer: the Megavac 6-V. Adams feeds his speeches into this machine, and it transforms them into a holograph of the state leader. Then, "out of the sim's mouth would come the utterance… transformed; the simple word would be given that fine, corroborative detail to supply verisimilitude to…an otherwise incredibly bald and unconvincing narrative" (p. 44). The written text of the speech, generated with the rhetorizer's aid, is flat and uninspired; another technology must be enlisted to animate it. This is an especially vivid example of discourse that centers the state and enlists humans to support that centering. As such, it is a powerful example of the kind of discourse that a civically oriented form of speculative literacy seeks to decenter (Garcia & Mirra, 2023).
The danger of ChatGPT is that the bot, fed by billions of historical instances of standard academic written language, might exert the same destructive power on students' individual creativity and collective humanity. And in our observations, we saw instances in which ChatGPT exerted such influence on students. For instance, the students' first assignment was to draft an argument in favor of a new school mascot. Students could argue for a redwood tree, a seal, or a lion, or they could choose their own mascot. One pair of students rejected the tree and the other options and decided upon a dog—a husky—as an appropriate school mascot, because one of their teachers often brought her husky to class. They drafted an opening paragraph that began:

Just one of the reasons that Sunrise is special is its dog-friendly campus, we believe that this is now a part of Sunrise culture and it should be reflected in the mascot. Most notably the dog who has been here the longest, Rocky the husky.

They asked ChatGPT to edit the paragraph for "grammar and coherency." ChatGPT's output began:

One of the things that sets Sunrise apart is its dog-friendly campus, and we believe that this should be reflected in the school's mascot. For many of us, the longest standing dog on campus is Rocky, the husky.
When the students read ChatGPT's revision, one said, "Dang! This is way better than ours!" They then agreed to copy and paste ChatGPT's first paragraph wholesale into their final draft. At no point did they explore why they thought ChatGPT's version was better. In Philip Dick's terms, their collaboration took the form of surrender to the rhetorizer. In terms of Graham's model, ChatGPT became more than a fellow writing collaborator: through it, students had to contend with powerful historical and cultural influences on written discourse. It is not surprising that they would accept ChatGPT's edits without question. Thus, the outer rings of Graham's model exert more control on the individual writer than they would if GAI were not in the picture.
Importantly, though, unquestioning acceptance of ChatGPT's output was not the norm among our participants. For example, the students drafted a second paragraph in favor of the husky as a mascot. In this paragraph, they focused on rejecting other options for a mascot. They asked ChatGPT to edit that draft (Figure 4).

FIGURE 4
Screenshot of Students' Exchange with ChatGPT Including Output they Rejected

When the students read ChatGPT's edit of their second paragraph, they audibly groaned at what their AI collaborator had done to their work. One student said, "It [ChatGPT] took out all of our personality!" The other said, "Did it take out our hippie? It took our hippie!" They rejected this entire paragraph. In their follow-up interview, one of the students reflected, "if you were to ask ChatGPT for outline ideas, or help with structuring, it could be helpful. But if you want to write like ChatGPT, I think that's terrible, because it would make you write like a robot." In this way, students resisted the flat, mechanical rewriting offered by the bot. They also resisted the force of a vast bank of homogenizing culture and history. They maintained their language and their voice.
Discussion and Implications

Because they are powered by Large Language Models, text-based GAI technologies are entirely based on historical data. Thus, as AI researcher Yann LeCun noted in an interview with Lex Fridman, they do not have the capacity to set goals and work toward them (Fridman, 2024, March 7). They cannot see forward, except to the next word.
But seeing beyond the next word is a key criterion for human intelligence and also a characteristic of the writer in the Writers-in-Community model, located in the innermost ring ("goals") and implied in its first tenet: "Writing is simultaneously shaped by the community in which it takes place and the cognitive capabilities and resources of community members who create it" (Graham, 2018, p. 271). How, as educators, can we work to ensure that these goals and plans involve more than recycling and replication of historical linguistic and discursive patterns?
A complete model of writing now needs to account for AI's capacity to burden the writing process with historical usages and to make things seem true that aren't. Further, a complete model of writing instruction now needs to account for understanding and appreciating what a human writer is beyond an aggregator of linguistic, cultural, political, and social influences. Garcia and Mirra (2021) offer guidance toward that model with their focus on "imagination and agency" (p. 647) as central to the literacy practices that speculative education entails. With an enhanced model of writing that centers human imagination and agency, we now turn to a discussion of pedagogical and research implications for working with, and against, ChatGPT, using what we learned from our study of fictional and real writers.
Centering Human Writers in Collaboration with GAI

In responding to our prompts, students chose to draw on local knowledge and inside jokes, which helped to make them aware of what ChatGPT "knew" and did not "know"—such as the existence of Rocky the husky as the original dog on campus. To push back against the very real dangers of a rhetorizer, teachers will need to design assignments that help students recognize their agency in writing, and appreciate how their essentially human, experiential contributions cannot be—or should not be—outsourced. Our exploratory work with students in a school-adjacent context shows how AI collaborations can foster the emotional engagement—the spontaneous pleasure and humor—that is a central element of humanizing writing instruction (Johnson & Sullivan, 2020). Students can be encouraged to exercise creativity through prompting unlikely juxtapositions of language and content—for example, asking a GAI tool to provide an explanation of nasal polyp surgery in the voice of a medieval knight (A.J. Laird, personal communication, September 18, 2023). Teachers can offer students the opportunity to use these tools as collaborators in creative, playful genres (superhero stories being one example) to do things that students might not yet be able to do on their own, and to do things that transcend school discourses and academic norms—for example, drawing on everyday literacy practices such as fandom literacies (reading, writing, and multimodal engagement around an aesthetic production such as a book, film, TV series, or band) (Jones & Storm, 2022). At the same time, teachers can help students investigate ways that GAI can be a harmful or biased collaborator. For instance, to what extent do GAI tools "perpetuate harmful conceptions of race, class, and language hierarchies" (Jones & Storm, 2022, p. 461) when recruited for fandom literacies? Students can also investigate whether it is possible to engage in affect-driven meaning-making with a non-human agent who does not think or feel.
Setting Writer Goals to Address Historical, Political, Institutional, and Social Influences

Because texts generated for political purposes, or with political significance, eventually become part of the historical record, they inform the LLMs that power GAI. Recently, books such as travel guides authored by ChatGPT have begun appearing on the Amazon website (Kugel & Hiltner, 2023). A future in which human-authored texts compete with bot-authored texts seems imaginable, and maybe even not very distant. Teachers can lead the way in developing a humanizing remedy for this unsettling scenario by shifting the focus of their writing assignments away from writing in canonical, standardized forms—such as the literary analysis essay—and toward writing that serves documentary purposes. In doing so, they can attend to youths' social and political concerns, while also bolstering the supply of human-authored records of important events and issues of a historical moment. Perhaps most importantly, teachers can encourage students to engage agentively in what Garcia and Mirra (2021) call composing against, or what Toliver (2020) refers to as counterstories, particularly in digital form, as these student-authored texts will then shape the language models that inform future interactions of text-based GAI. In this way, the pathway of influence on writing will not be unidirectional, from the outermost circle in Graham's model to the innermost, but bidirectional.
Critical Agency in Literacy with GAI

A key contribution of sociocultural perspectives on writing is the premise that reading and writing are not separate activities but interrelated literacy practices. GAI tools give this premise urgent new implications, in that reading and evaluating texts (the GAI output) now becomes a more prominent part of the writing process than it has been for writers merely working with source texts. Furthermore, through the influence of the LLMs and the corpuses that comprise them, this output imports social, political, historical, and institutional influences on language that had previously occupied a peripheral status—as represented in Graham's model—into the intimate, innermost sphere of a writer's work.
Thus, a critical sociocultural perspective is even more necessary, to remind writers of their position as "an agent of the written word" rather than a "recipient" (Kwok et al., 2016, p. 260). Agency should be included along with tools, goals, and actions in the innermost sphere of a complete model of writing.

Interacting with ChatGPT and each other, some of our student participants attributed human cognition to the bot (i.e., what ChatGPT "knew" and did not "know"), and some referred to its output as "facts." Attributing human cognition to an entity that lacks imagination, agency, or the ability to plan could leave us in the predicament of Chiang's scientists, who have surrendered their role to "metahumans." It is not hard to imagine how a student's misconception that a GAI tool "knows" could evolve into a belief that it knows more than they do. One recent study of adult participants working with AI text summarization tools found participants typically unwilling to question or revise AI output, particularly when the topic of the text was unfamiliar to them (Cheng et al., 2022). Critical AI literacy requires a vigilant, skeptical stance regarding this output, even while recognizing that it can serve human goals. This stance is aesthetic as well as informational: in order to maintain a discerning ear for "miserable metaphors," students will need to maintain a practice of reading human-authored texts in a range of styles, genres, and discourses.

To work toward this objective, teachers need to address the ethical issues implicated in the creation and dissemination of text enabled by GAI. Along with sourcing (Wineburg & Reisman, 2015) and lateral reading (Breakstone et al., 2021) as means of detecting propaganda and bias, the capabilities of GAI require additional attention to authenticity. Teachers need to teach students to interrogate the facts that GAI produces and to be on the lookout for so-called hallucinations (propositions that are not factually true) that occur when the LLMs driving the GAI tool create texts based on probabilities of co-occurrence rather than factual referents. Alongside these pedagogical imperatives lie important questions for research to investigate, such as how best to teach students about the construction and operation of these LLMs, particularly with regard to supporting students in developing a critical perspective on what the tools produce.
Conclusion

Our study participants' intuitive sensitivity to the distinction between human and non-human contributions to writing offsets the grimmer predictions of the dystopian examples we have curated here and gives us reasons for hope. In his fictional essay written from the point of view of journal editors, Chiang (2000) exhorts us to "not be intimidated by the accomplishments of metahuman science," explaining that "we should always remember that the technologies that made metahumans possible were originally invented by humans, and they were no smarter than we" (p. 197). One of our research participants articulated a similar idea in their interview: "ChatGPT just repeats words from other people. We still want to think for ourselves."

While speculative fiction is almost always solely authored, speculative education, and the literacy practices associated with it, are collective and participatory (Garcia & Mirra, 2023). As educators, teacher educators, and literacy researchers, we should heed the speculative imperative to work together to imagine possibilities for using GAI, and to look to youth for inspiration in doing so. We conclude by revisiting Butler's seeds of change in one of Lauren's verses from the Parable series (1998):

Alter the speed
Or the direction of Change.
Vary the scope of Change.
Recombine the seeds of Change.
Transmute the impact of Change.
Seize Change.
Use it.
Adapt and grow. (p. 110)
REFERENCES
Breakstone, J., Smith, M., Connors, P., Ortega, T., Kerr, D., & Wineburg, S. (2021). Lateral reading: College students learn to critically evaluate internet sources in an online course. Harvard Kennedy School (HKS) Misinformation Review. https://doi.org/10.37016/mr-2020-56
Butler, O. (1993). Parable of the sower: A powerful tale of a dark and dystopian future. Warner Books.
Butler, O. E. (1998). Parable of the talents. Open Road Integrated Media, Inc.
Cheng, R., Smith-Renner, A., Zhang, K., Tetreault, J., & Jaimes-Larrarte, A. (2022). Mapping the design space of human-AI interaction in text summarization. Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 431–455. https://doi.org/10.18653/v1/2022.naacl-main.33
Chiang, T. (2000). Catching crumbs from the table. Nature, 405, 517.
Dillon, S., & Schaffer-Goddard, J. (2023). What AI researchers read: The role of literature in artificial intelligence research. Interdisciplinary Science Reviews, 48(1), 15–42. https://doi.org/10.1080/03080188.2022.2079214
Fridman, L. (2024, March 7). Yann LeCun: Meta AI, open source, limits of LLMs, AGI & the future of AI (No. 416) [Audio podcast episode]. In Lex Fridman Podcast. https://www.youtube.com/watch?v=5t1vTLU7s40
Garcia, A., & Mirra, N. (2021). Writing toward justice: Youth speculative civic literacies in online policy discourse. Urban Education, 56(4), 640–669. https://doi.org/10.1177/0042085920953881
Garcia, A., & Mirra, N. (2023). Other suns: Designing for racial equity through speculative education. Journal of the Learning Sciences, 32(1), 1–20. https://doi.org/10.1080/10508406.2023.2166764
Graham, S. (2018). A revised writer(s)-within-community model of writing. Educational Psychologist, 53(4), 258–279. https://doi.org/10.1080/00461520.2018.1481406
Jones, K., & Storm, S. (2022). Sustaining textual passions: Teaching with texts youth love. Journal of Literacy Research, 54(4), 458–479. https://doi.org/10.1177/1086296X221141393
Kugel, S., & Hiltner, S. (2023). A new frontier for travel scammers: A.I.-generated guidebooks. The New York Times. https://www.nytimes.com/2023/08/05/travel/amazon-guidebooks-artificial-intelligence.html
Kwok, M., Ganding, E., Hull, G., & Moje, E. (2016). Sociocultural approaches to high school writing instruction. In S. Graham, C. MacArthur, & J. Fitzgerald (Eds.), Handbook of writing research (2nd ed., pp. 257–271). Guilford.
Levine, S., Beck, S., Mah, C., Phalen, L., & Pittman, J. (in press). How do students use ChatGPT as a writing support? Journal of Adolescent and Adult Literacy.
Ong, W. J. (1992). Writing is a technology that restructures thought. In The linguistics of literacy (p. 293). https://www.jbe-platform.com/content/books/9789027277183-tsl.21.22ong
SFE: SF Encyclopedia. (n.d.). Retrieved March 13, 2024, from https://sf-encyclopedia.com/
Swift, J. (2023). Gulliver's travels. Project Gutenberg. https://www.gutenberg.org/files/829/829-h/829-h.htm (Original work published 1726)
Thomas, P. L. (2013). Science fiction and speculative fiction: Challenging genres. Springer Science & Business Media.
Toliver, S. (2020). Can I get a witness? Speculative fiction as testimony and counterstory. Journal of Literacy Research, 52(4), 507–529.
Toliver, S. R., & Miller, K. (2019). (Re)writing reality: Using science fiction to analyze the world. English Journal, 108(3), 51–59.
Turner, M. (1996). The literary mind: The origins of thought and language. Oxford University Press, Inc.
Wineburg, S., & Reisman, A. (2015). Disciplinary literacy in history: A toolkit for digital citizenship. Journal of Adolescent & Adult Literacy, 58(8), 636–639.
Submitted October 2, 2023
Final revision received June 21, 2024
Accepted July 2, 2024
Sarah W. Beck is an Xxxx at New York University, New York, New York, USA; email: sarah.beck@nyu.edu.
Sarah Levine is an Xxxx at Stanford University, Stanford, California, USA; email: srlevine@stanford.edu.