
The Potential Benefits and Risks of AI For all Learners


Abstract

This is the transcript of the eighth episode of the second season of DiveIn. You can find the episode at https://divein.alitu.com/1?order=newest. In this episode, we tackle a pressing and timely issue: the growing role of AI in education. With Mary Rice, Joaquin Arguello, and Richard Carter Jr., I explore AI technologies' potential benefits and hidden risks. Can AI help address long-standing inequities faced by students with disabilities and other marginalized learners—or could it deepen existing disparities? We examine questions of transparency, standardization, and the evolving relationship between AI, teachers, and students. Beyond critique, we also imagine what AI could be: a tool for transforming schools into more equitable, just, and inclusive learning spaces.
Federico Waitoller: From the Division of Research of the Council for Exceptional Children, this is
Dive In. I am your host, Federico Waitoller, professor at the University of Illinois at Chicago.
Welcome, welcome, welcome, welcome to the eighth episode of our second season of Dive In.
Today we have another timely topic, a topic that creates both excitement and hope, while also anxiety
and fear.
Today we're going to talk about artificial intelligence, AI, and the application of these tools for students
with disabilities and education at large.
We'll take a critical look, examining not only its benefits but also many of its potential risks. Can AI remediate historical inequities for students with disabilities and other marginalized students, or does it have the potential to increase such inequities?
We also talk today about issues of transparency and standardization, as well as the relationship of AI with teachers and students.
And for all that, I do not have one guest, but I have three of them.
First, I have Richard Allen Carter, Jr. He is the holder of the Dr. Ted Hasselbring Chair in Special
Education Technology at Indiana University, Bloomington.
Dr. Carter researches the education of students with disabilities in modern learning environments with
a focus on technology enabled personalized learning, virtual learning, and blended learning
environments. He's the co-author, with Ling Zhang, Yu Ting Yu, and Peng Peng, of a recent article published in Review of Educational Research titled Let's Chat about Artificial Intelligence for Students with Disabilities: A Systematic Literature Review and Meta-Analysis.
Also today we have Mary Rice. She's an associate professor at the University of New Mexico. She
used to be a junior high school teacher and taught English language arts and TESOL classes.
Her research focus is at the intersections of literacies, identities and agencies in digitalized settings.
She's also the author of another literature review, recently published in Computers in the Schools. The title of the review is The Use of Artificial Intelligence with Students with Identified Disabilities: A Systematic Review. I will post the links to both of these reviews in the description of the podcast.
Finally, we have Joaquin Arguello de Jesus. He's a Dominicano-riqueño, decolonial, anti-racist, bilingual, community, clinical, and school social worker. He was raised on the Manito Trail. He's
also a PhD student in the Department of Language, Literacy and Sociocultural Studies at the
University of New Mexico.
Joaquin is currently engaged in various research projects through indigenous pedagogies, land and
water, traditional practices, and online and AI technologies.
So are you ready? Let's not waste any more time and dive into the conversation about AI.
Well, thank you so much, Mary, Joaquin, and Richard, for being here with us. We're going to get started. This is a topic that creates a lot of excitement, but a lot of concern as well.
So I'm glad we're exploring this. And I want to start with a broad question: what are some ways that AI has shown promise for students with disabilities?
Richard Carter: I'll jump in. Sorry. So I think, thinking about AI as a tool, as a process, it's been around for a long time. We've seen AI be instrumental in some aspects of accessibility with students.
For instance, if we think back to a very broad, general definition of AI, we're thinking about speech-to-text, text-to-speech, those types of items, which really have been integral in how we've served students with disabilities for years.
And thinking about how good that's been and how ubiquitous it is in schools, the excitement and the challenges you spoke of earlier have really shown up when it comes to generative AI.
So I think that space is really interesting to play in right now. And I'm glad we're going to talk with some great minds today about how that might shake out.
Mary Rice: A lot of the things right now that people are excited about in my realm have to do with writing. Historically, we haven't done writing instruction in school; we assign writing rather than teach it.
And so then this tool comes along, or people call it a tool, or sometimes people call it an agent, and we say, oh, this thing can give us some sort of feedback on writing that is more than what a teacher would do.
And everyone is saying how great that is. Like, oh, this thing will tell you where to put your headings in, or where you need more descriptive words, or something like that. And because for the most part people haven't had access to that kind of thing, people are very excited about it, and we can problematize that, because that's actually what I'm apt to do.
But at least at the initial grasp, there's something about kids who usually just get red marks all over their paper, or are told that they didn't write enough, or who maybe don't get any attention at all as writers. Or there's a long-standing notion that students who have been identified with disabilities can't think of ideas, or can't take an outline and make it into a draft, and that this is going to be, you know, some kind of saving thing.
And it could be that. But is it that, or is it that that body annoys you or frustrates you in some way, and this is a way to ease the tension around that body that frustrates you?
Richard Carter: Yeah.
Joaquin Arguello: And I would say, thinking about it from a student standpoint, myself being a dyslexic learner, even as an adult learner, right, I also like to think about what are healthy ways we can even frame what learning differences are.
Richard Carter: Right.
Joaquin Arguello: Because the categorization and the viewpoint of somebody with a disability is a deficit discourse, which automatically sets students in a place of having to prove their worth to even be with their peers, to be in a classroom. And then, if there are extra tools being offered to supposedly make them equal to others, that's a different place of learning than if they were seen as learning like everybody else, maybe having an alternative way of processing cognitively, or even physically, with any differences that they have.
And so for me, one of the things that I'm hoping this technology can do is help get over some of the current limitations. For example, I know Professor Rice has asked me to support her in some publications around speech-to-text programs and how they don't even work for students with accents.
Right. And so you have a population that's marginalized by a school system that's English dominant, that doesn't recognize how that pushes kids out, and then further marginalized because, well, now this software doesn't even help you, because it can't recognize your accent. So maybe, and I'm not a tech expert, but maybe AI can help address some of that.
However, I'm still worried that AI will be a tool used in a way where students feel they have another task to learn on top of everything, being categorized as disabled.
Right. So not only are they not seen as normal, they're a population within a population that has an extra task to learn, an extra tool to supposedly make them feel normal, equal to peer learning, when that's not even clearly defined.
So I'm thinking about the ways we're setting students up for success, or whether we're giving them a longer road with more obstacles to feel like they're learning at levels that aren't even clearly expressed by schools or teachers.
Federico Waitoller: That's very interesting. What I'm hearing you all saying is that these tools have the potential to provide the kinds of support that let students feel successful or achieve some sort of proficiency in writing or reading.
But at the same time, as you say, Joaquin, it could be just one more thing that they need to do to get there, and maybe they don't. It may even be difficult for them to learn the technology.
Richard Carter: Right.
Federico Waitoller: So now you have, like, two different tasks in front of them that position them as deficient. Right. So that's interesting to think about. But I know both of you wrote very interesting literature reviews on the topic, and I have read both of them, and I will put them in the description of the podcast for this week. So from those literature reviews, can you share some concrete examples of tools that the studies found showing some success with students with learning differences?
Richard Carter: Yeah, I'll go first.
So in our lit review with Dr. Ling Zhang, we went back and used that broad definition of AI, to include text-to-speech, for instance. And Joaquin, I think, made a wonderful point, because having a southern accent, as I do, it doesn't always pick me up well either, you know. So I totally understand that point.
But we looked at that. We also looked at robots that were being used to support individuals with autism. We looked at many of those kinds of elements to just step back and say, all right, using a broad definition, what are the tools we have used that we can, as educators and researchers, latch onto to say, this was AI this whole time?
Because AI, again, really didn't pick up steam, or concern, or challenges, as you mentioned earlier, until 2022 with the advent of ChatGPT. So what we did was step back to show that special education has been working on this for a long time, through intelligent tutoring systems as well.
Those elements say, hey, we've been a leader in this space, and maybe we should also be a leader moving forward, instead of someone else taking this, taking the lead, and starting to run, which I feel is kind of where we are right now. So our piece was about what has worked in the past, as well as to show, hey, we're actually the people that can move this forward. So let's take the reins.
Mary Rice: Yeah. Well, a lot of those studies that I was looking at, done before there was this massive explosion, were still sort of in the positive discourse, the positive realm: oh, AI can solve this. And they were really around a number of issues around identification of students.
So saying, here's how we're going to use AI to sort children. And that was an issue that I actually brought up as potentially being problematic. So it works, but at what cost, and to whom? And do we really want to think about this?
Federico Waitoller: Sorry to interrupt, Mary. How were they using the tool to identify students?
Mary Rice: Well, with speech. So they were saying: we're going to get a big database of speech, and then we're going to let the AI decide what constitutes acceptable speech. And then whoever doesn't meet that threshold, that's who we're going to identify as having a speech disability.
And you can see how that opens up a lot of potential issues. And then, of course, they're going to find, oh, the AI worked for that.
And I think we have to be careful whenever we look at studies about AI, because people want to run out and do this right now.
I was the managing editor of Online Learning for a bit, and I'm the current editor-in-chief of the Journal of K12 Online Learning Research. And people do studies, and they do intervention work, and they say: we didn't do anything for these people, and we did something for these people, and we found that when we did something, it helped.
And so then we have to think about, well, when we changed something, something changed, and what that means.
And I was actually looking at a lot of stuff with a former grad student about language support, too. And these people doing ESL and EFL work right now say, well, we used something like Duolingo to help people learn language, and it helped people learn language.
Well, yeah, I would imagine that if you gave somebody nothing and gave somebody else something, the something would help.
And so we should be interpreting research with caution and be really careful, when we say, oh, these were successful, about what we really mean by that, and speak with some clarity. And then also, I did this review back in the day, in 2023, which seems like eons ago now, to say: let's think about the ins and outs of what this really means, in terms of putting a big bunch of stuff into a database, letting the database tell us where the cut score is, and then using that to put children in boxes and put labels on them, and what the implications of that are for, maybe, the duration of their lives, but at least for their schooling.
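To make that cut-score mechanism concrete, here is a minimal hypothetical sketch in Python. None of it comes from the studies under discussion; the scores, the 1.5-standard-deviation rule, and the student names are all invented for illustration:

    # Hypothetical sketch of a database-derived cut score for "acceptable speech".
    from statistics import mean, stdev

    def derive_cut_score(reference_scores):
        # Let the database "decide": flag anyone more than 1.5 standard
        # deviations below the reference mean (1.5 is an arbitrary choice).
        return mean(reference_scores) - 1.5 * stdev(reference_scores)

    def flag_students(student_scores, cut_score):
        # Return the students who fall below the derived threshold.
        return [name for name, score in student_scores.items() if score < cut_score]

    # If the reference database over-represents one dialect or accent, the
    # threshold encodes that group's norms, and speakers of other varieties
    # are disproportionately flagged.
    reference = [0.82, 0.88, 0.91, 0.85, 0.87, 0.90, 0.84, 0.89]
    students = {"student_a": 0.86, "student_b": 0.74}
    print(flag_students(students, derive_cut_score(reference)))  # ['student_b']

Every consequential choice in a pipeline like this, what counts as a score, which voices populate the reference set, where the multiplier sits, is made by a person, not by the database.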
Federico Waitoller: Yeah, so we're doing that with or without the machine too. But I understand your
point.
Joaquin Arguello: Yeah.
I was just thinking, as a full-time school social worker at specific schools, but also through the district-wide programs I've developed, there's an interesting reality that I found difficult even in doing lit reviews: what is the emotional process of learning? Right.
What are students feeling even before the lesson plan starts? What are the dynamics in the environment, either with peers or teachers or related to a subject matter, that have students either being open, or closed down, or reactive and defensive?
And I would say most related to this would be a program we started where we got grant funding to use manipulatives, you know, fidgets, things that students can play with to help them self-soothe. Right. Which are supposed to help them, through another sense, listen and pay attention and participate more.
And I'm not sure if AI can be programmed right, or if there's even an interest, because we're not really talking about the emotions, even today, right? In schooling, in this podcast, in our research, and in our field. Emotions are everything in human interaction.
While we're sleeping, while we're awake, whatever we're doing, the emotions lead what we're able to do and what's being asked of us. Yet in education, emotion is only attended to when it's in the way, right? When it's problematized. Unless we're talking about gifted classes and students who are labeled as so smart that we have to give them extra resources to keep up with their brains.
But that means everybody else, right? Everybody else in special education, or everybody else with any learning difference, is seen as the opposite. So I'm wondering how literature reviews can also be more holistic and say, well, let's not just deal with the emotions of the 10%, even though there are way more students in education who aren't passive learners.
But what are the emotions of teachers, of teaching staff? Because I think we're also assuming every classroom will have a highly certified special ed teacher in it, when in reality most classes function through underpaid, undertrained educational assistants, and volunteers or long-term subs who are contracted.
What are the emotions they're bringing, the fear of teaching populations they may not feel comfortable with? And how does that cause students to react with emotions that AI can't even be programmed to understand or complement?
Mary Rice: Well, AI is actually supposed to do that. When you're talking about something like autism spectrum disorder, it's supposed to train you, quote unquote, to have emotions, ostensibly because you don't have a person in the classroom who can help you with that.
Federico Waitoller: That takes me to one of my questions, about AI doing the work of teachers. They've been saying that in a year we're going to have this super AI, this generative AI, that's going to be as powerful as the best scientist in any area.
Right. Do you see a risk in the future that AI may take over the roles of teachers, whether general or special educators?
Richard Carter: A couple of things with that. First, what Joaquin just mentioned, which I really like: in theory we have, and by law we should have, certified teachers. And that, of course, is not the case all the time.
Mainly what I see right now, in both the literature and the funding opportunities, is AI tools being delivered for teachers to access in order to deliver instruction to students. I think that's where we're comfortable in our development right now. And that means a couple of things. We take advantage of this database of evidence-based practices, or whatever you want to put in there: the teacher asks questions of the AI, gets recommendations, then applies recommendations to the students.
I think that's most of what I see. An example would be using AI to write strong IEP goals, or using AI to determine what evidence-based practice works for students.
There are some challenges there, for a couple of reasons. One is that if you just put in academic data to ask for recommendations as to what intervention might support a student, well, academic data is but one-hundredth of what it means to be a human and a student, right? It's really difficult to make really strong decisions about what intervention will support that student just based off that data.
The second problem is that whatever the recommendation is, it assumes a couple of things. It assumes that the teacher has access to the resources, that they have the knowledge, and that they have the support from administration, or whomever, to actually deliver that intervention.
So it would be like saying, the student performed at this level, and the AI says, well, all you need is this reading program. Well, this reading program is more of a school-wide initiative. I think that's kind of where I'm seeing this right now.
I think I know what you're talking about, that 2026 kind of mark, where they say, hey, we've got access to this new giant, this new tool that is going to revolutionize everything. I'd probably push back on that a bit, because where we started with this were AI tools developed for students to use. Where we are right now is AI developed for teachers to access, to then deliver instruction to students. And then I think we will revert back in some time, it might not be 2026, it might be later, to using AI developed for students to use on their own. So it's kind of like we're seeing waves of AI: wave one, which we talked about earlier; wave two, where we are right now, these databases.
But that is the human-in-the-loop idea, right? We've got AI, but we're going to have someone access that AI to then apply it to the student. And therefore we do have some control over the AI, because, again, assuming that we have the skills necessary, we can make the distinction: is the AI producing something that we believe in, that is void of the bias that we're going to talk about, no doubt, a little bit later?
Mary Rice: But I also think there's some design in making sure that AI is supposed to be doing this work, so that even if teachers are here, their work can be replaced.
I mean, if you think about right now, there is a deliberate effort to take people who are in positions and remove them from those positions so that AI can do work in government sectors.
And I think that when we talk about AI discourse, one thing that's frustrating to me is that we kind of say, oh, AI took your job, or AI placed you in special education, or AI took your services away. But AI didn't do those things; a person did. We can say that AI did it all we want, and that it was the result of a database making these decisions, but it sort of helps us remove responsibility from whoever the powers that be are. And I think it's designed to do that; it's designed to be a responsibility-removing tool.
But there is a deliberate effort to funnel public money into private hands, and AI happens to be a tool that is rapidly going to assist in that. And Richard, whom I've known since probably 2014, has said for a long time that it's a tool, and people are going to look at the tool and try to decide how to do a thing that they've wanted to do for a while, and then try to find new problems. And Neil Selwyn talks about that too: technology people look around for problems that weren't problems until tech could come in and do surveillance and different things. But we have to make sure that we're clear-eyed about that too.
Yeah, there's definitely a special education teacher shortage, and it's more acute in some areas of the country than others. So there's some nuance to it: where there's only a little bit of a shortage, it's actually a different market than where there are very bad shortages, which I think is interesting. And the shortages in New Mexico are different because of the linguistic landscape of New Mexico versus other landscapes. But people aren't going to care about that when they just come through and say, I've got the solution.
Federico Waitoller: Yeah, I think one of the things you're saying, Mary, is that one of the problems, the dangers, of this tool right now is the particular context where it's going to be implemented, the particular context where it's going to be expanded.
Because, as you say, we're in a very particular cultural and historical context in the U.S., in which they're dismantling civil rights protections for students with disabilities. Our last show was about the Texas v. Becerra case, about 16 states asking to declare Section 504 unconstitutional.
It's happening in a context where we're trying to expand vouchers for students with disabilities, when we know that to go to a private school they're pretty much giving up all their IDEA rights.
So I think one of the interesting things coming from our conversation is not so much about the tool, but about when the tool is going to be expanded and implemented in this very tumultuous time, where the tool may end up being used for some of the purposes you were mentioning, Mary, that are quite problematic. I know Joaquin wanted to say something, but I just wanted to jump in and clarify that.
Joaquin Arguello: Yeah. That's really how I was thinking, along those lines. We're having a wonderful conversation about the ethics, not just the ethics of school or the ethics of teachers in a teacher role.
But I want us to be clear: a lot of learning isn't isolated to a dynamic between one teaching person and one student. It's not a didactic, one-directional process.
Right. And a lot of teaching improvements, I would say, have to do with the ethics in that dynamic. Take an IEP meeting, for example. A lot of times those are dynamics where somebody who has a license to assure things are ethical and healthy on behalf of a student is faced with a team of people who only want to talk about a product or a score or a number.
However, there's more learning for these individuals in hallway conversations: is what you did and said in that meeting, or in the classroom, really ethical? And how do we overcome the detriment it had on a student?
Right. So my question then becomes: can AI have an ethical component, or recognize when it's being used unethically in place of certified teaching staff, or being used as a tool for harnessing power and dynamics? Right?
Because classrooms are very territorial; they're a space of power dynamics. Because, again, schooling, historically and today, has always been seen as a place of control for passive learning. And anything different than that is usually met with eventual discipline, public, external discipline, but initially with energy, personality, charisma, and language dynamics of power and control, to coerce, all the way to demand, that students passively agree with how lesson plans are being implemented.
Because let's be clear, right? Curriculum and lesson plans really tend to operationalize any existing conscious or unconscious bias of teachers. Now, if we're saying AI is going to be involved, and AI is an instrument that will exponentially carry forth whatever the teacher is doing, and the teacher doesn't even include ethics or the holistic learning needs of a student, then what are we saying the product is going to be, the product a student will ultimately be held responsible for, right?
The student's grade, the student's work, now with some unknown algorithmic software, is going to determine whether or not they're seen as a good student, based on whether the teacher even considers ethics or the holistic needs of a student, and now has an algorithm replacing some of what they're doing. Right.
Federico Waitoller: Richard?
Richard Carter: Yeah. And just building off what Joaquin just mentioned: Mary and I met in 2014. We were working in the Center on Online Learning and Students with Disabilities at the University of Kansas. And it's really interesting how these two kind of parallel each other.
Online learning had been happening for a long time, and it was exciting, until Covid hit, and then it was not as exciting anymore. However, it is something to think about, because I think there is some traction for this national voucher program that will potentially move forward, and when that happens, the traditional classroom really is going to be disrupted across the country, offering lots of different ways that students access instruction. And what I loved about what Joaquin said, because it's something that Mary and I would talk about ad nauseam, was really those moments: if AI is actually a major source of your instruction, how can it replace seeing your teacher in a grocery store? What do you lose? What do you lose in a hallway conversation with your friends? Those types of things.
And again, here's the thing: there really is no expert in AI right now. If you meet someone who says they're the expert, I would just go the other direction. It's all conjecture. We're all learning; we're all at the same point.
But mainly, think about that element Joaquin mentioned, because what can happen is, families make decisions about where they want their students to learn. If this goes through, there will be multiple options for them too, but you're going to lose that piece. That is the piece that AI cannot replicate.
So, thinking about moving forward, that is going to be the part that we really need to be mindful of. AI can do a great job, an amazing job, of delivering instruction right where you need it, answering any question you may have. But when it comes to that fabric of society, that's not something that AI is going to be able to do.
And I'll be quiet right after this. Our best line from online learning, and now with AI, was this: you know, John Dewey said that school was to produce democratic individuals ready for democracy, right? So the joke was, okay, so the best way to do that is to remove them from the democratic society. The best way to prepare an individual to be a citizen is to remove them from the only mandated one. I think that is super challenging, and something that could continue to happen.
Federico Waitoller: It seems like that relational aspect is, so far, irreplaceable. Now I want to touch on a related question, about the transparency of these tools. They seem like they may be a black box, and the question is, who controls the black box? What's in that black box making decisions, and how can that lack of transparency affect their usefulness for educators, students, and parents? We mentioned already that the databases behind these machines can be biased. There are a lot of algorithms, and the data on which those algorithms operate may not be completely known to teachers, administrators, or parents, and they may be making a lot of decisions about students' lives.
Mary Rice: Yeah, I would say that transparency is actually pretty low, but that's part of the power structure of it.
There have actually been some studies of young people, Gen Zers and below, showing that when you tell them more about what AI is and what it does, they're less interested in using it. So then you may not have folks who'll use it.
So in the article that I just published, I was talking to these kids about very basic things, because I do have linguistic training, but I'm not an engineer, about what AI is from a language standpoint, and very basic things about databases, helping them understand very large numbers and things like that. Then they made different decisions about what they wanted to put into ChatGPT and what kinds of feedback they were asking for. And I found myself a lot of times saying to them, well, you're the writer.
When I was a writing teacher, and I'm still a writing teacher, I guess, in some ways, my students are just quite a bit bigger, I would give them feedback sometimes, and I would think, well, I'm the teacher, so they should take my feedback. And then down the line, as I progressed emotionally and ethically, I would tell them more like, well, you don't have to do this; this is just my response rather than feedback, and they're the writer. But a lot of times as teachers we think, you should just do this because I marked it. So I was helping them understand that AI is not something that is universally correct or right.
But when people make these things, that's not how they want you to think about it. They want you to think about it as indispensable, as the only way to consider the problem. They want you to make it a partner in all of your decision making. And they actually want to make you think that it is somehow at least partially sentient and that it can comfort you.
And, you know, I have students write things to me all the time about different things that have happened to them. This one story I got: a student was going to miss school because their grandmother was repatriating to Mexico to marry the man she couldn't marry 60 years ago, even though she was pregnant, because the family didn't approve. So she came to the United States and married somebody else. And then the secret all unraveled, and they found each other again, and she repatriated and married the father of her actual oldest child.
And I can't understand how you'd read a story like that from a child and be like, well, you've got to put some headings in here, and maybe it needs some more details, those sorts of things. And even if ChatGPT or one of those things can tell you, oh, that's a very compelling story, or, oh, that sounds like a good plot for a novella, it's not the same as if you really engage with a child.
And I need people to understand I'm not nostalgic or naive about the fact that adults on the landscape are not always great to kids, because I go to schools all the time and I see a very wide range of adults in their ability and interest in engaging with them.
So there is this transparency piece about what this is, what this does, and who we're going to share it with. And states have taken this up differently. Kentucky is very much like, oh my gosh, we need to tell the parents all this stuff. But a lot of other states have not come out with that strong of language.
Joaquin Arguello: I think that's a wonderful transition. Right. And I think of the nostalgia of U.S. history, right, of always making reference to Manifest Destiny, the Wild West, where we're exploring new lands. I mean, let's be real: the genocide of indigenous people wasn't complete, right? And the land-grant universities and the lands public schools are on are still considered stolen land by people who still live in the area.
Right. So there's this colonial reality still playing out in schools that I think is not even being considered. So if we want to talk about transparency, there's transparency between a learner, or what could be identified as a learner, and a teacher, which again is a very didactic, one-directional framing that I don't really think is the truth.
But on a bigger level, I would ask: can AI, and do schools, administrations, or policy even want AI to, address whether families even understand schools and schooling?
Right. Think about families who show up to parent-teacher conferences or IEPs, or who just receive communications about how their student is doing throughout a semester. Do schools even ethically take on the responsibility of making sure families understand what learning curricula are?
Right. And there's the reality of students having to go to different classrooms. In early childhood education, which was my last job at the district, it's mainly one teacher with some supplemental adults, but the rest of education is students being in several locations and having to learn to manage several different adults, personalities, temperaments, and learning styles. Is that part of the transparency we're going to ask AI to translate for families and students, or is AI just going to replicate and compound it?
So now there's not just several adult spaces, but several versions of AI that students will have to manage. But the teachers are just gung ho, saying it's the Wild West: let's see what this new contraption is going to do on top of students, like we've done on top of indigenous land and on top of indigenous people who still exist.
So on the transparency piece, I would love to know that an algorithm is being programmed to really support full transparency for communities, families, and students to better understand learning. But that's going to get into some very delicate spaces if teachers think it's being used to monitor them.
Richard Carter: Right.
Joaquin Arguello: Their performance, their ability to carry out policies that come from the federal or state level, whether or not they agree with them.
Federico Waitoller: Yeah, I mean, one of the things I think you're mentioning is the issue of transparency and understanding schooling and the context of schooling. In the example you were mentioning, the colonization and erasure of indigenous people from certain lands: if we haven't addressed that until now, why do we expect AI is going to solve that problem? It's being implemented in that particular context. It's just a tool. I don't see it coming as a redemption, because we haven't done it for hundreds of years.
So why would AI come along and suddenly fix this?
Joaquin Arguello: That's a wonderful reality, right, that we have to name: AI then is becoming part of colonial schooling. Right. Because schooling in the U.S. was made to deculturize, delanguage, assimilate, and attack the spirit of not just indigenous people but also Mexicano and Latino people, through mission schools.
And the policy and the values of colonial oppression are within the policies, within the power dynamic, of what schooling is. Unless a student happens to be in some very different, small learning environment where all of who they are is encouraged to flourish, however that learning needs to take place.
Right.
Mary Rice: And it requires natural resources, which further denudes the land, which I have written about.
Federico Waitoller: So, you know, I have a few questions remaining. We've talked a little bit about some of the potentials and some of the limitations of AI. Now I want us to spend some time dreaming, thinking about the possibilities. I mean, how would you like this movement, or AI, to be implemented or used in the future?
Mary Rice: Right.
Federico Waitoller: So let's think in maybe transformative and redemptive ways to change some of the current problems that you all mentioned. I'm going to start with Richard and we'll move around.
Let's dream about how this can become a tool for actual transformation, rather than just reproducing the same inequities that we've been dealing with for decades and decades.
Richard Carter: Thank you.
So I think a big piece is to demystify AI. These tools, these products, are increasingly easy to create, and so it could be that in your local context, individuals create their own, support their own, whatever your needs might be. Right now we feel like we need venture capital to come in and invest $2 million to create something that we're going to use across the country.
And I think that's really the wild-west space that I'm seeing: individuals realizing just how much revenue can be generated from education, which I think is absolutely the wrong way to look at this, but that's really where that space is.
And ultimately, here is an opportunity to leverage AI to empower the students within that context to, one, address their own challenges; and two, really empower them to think about what their lives may be like moving forward, and maybe create opportunities for them and for the teachers and for the community to collaborate around: all right, what tools are important to us? How can we do this?
And again, you've got some entrepreneurs who are trying to go in and generate revenue out of education, and you've got some entrepreneurs who are really saying, AI is powerful, so let's create canvases that anyone can use. And that does put in a layer where families, students, teachers, whoever, could really address those outputs that we've talked about over and over today, as we should be talking about. That kind of layer of critique really does have the potential to support students.
So I would say, maybe taking that under what we've known about the maker space: empowering students and families and community members to work together to create the tools that they need.
Mary Rice: There's lots to think about in terms of dreaming, and also in terms of making sure that people have information they can use to make the decisions that they want to make. Which in some cases means that the use of AI in education would be very minimal, and in some cases it may not mean that.
If you go talk to some people who have been doing engineering work, they've had AI for a long time, many decades. And it was unimaginable that you would just take it and bring it into an educational space, because it was supposed to be for things like, well, my brother-in-law is an astrodynamicist, and he went down and worked for NASA for a little bit, and his job was to recalibrate this satellite with a bunch of different mirrors on it, to make sure all the mirrors were at the exact right angle to take the picture. That's what AI was for. Or let AI find your cancer.
So it's not like there weren't good uses of AI, and there weren't supposed to be. But we have to think about: what are the really good uses of AI for people?
And there was a question about book recommendations, and Brian Merchant wrote this great book that we talked about in one of the classes that Joaquin and I did together. It's about the Luddite movement, and the Luddites aren't people who hate technology. It's about making sure that the benefits are distributed and the harms are reduced, especially for the people who have always been paying the price.
I went down to a conference in Chile a summer ago, and people just kept saying, well, this is the price of progress, this is the price of progress. And I'm not the person who normally pays the price of progress, but I have to go into schools and work, or I get to go into schools and work, among people who have been paying the price of progress for 500 years. And it's insulting that the same people keep paying the price of progress.
So my dreaming is all about figuring out the right ways to use this stuff: ways that are going to reduce the harms and expand the benefits and make sure that people have the most information possible, so that they can do the things with it that they are wanting to do.
When I went and looked at the 50 largest school districts in the United States to see how they were using AI, even within states you have one school district that says, oh, we had an existing computer science initiative, so all of our students get computer science and we just folded AI into there. And the neighboring school district says, we're going to use AI to body-scan students when they come in the door. And that's not okay.
And my fear is that there are some groups of students who are going to get to use AI to send rockets to the moon, to do fancy stuff, right? I go work with students who are doing all kinds of things looking at space; right now we're designing weather balloons and things like that. And then for other students, it's going to be a tool for doing the same thing in education that we usually do, but more of it, and harsher.
And who's that going to land on?
Joaquin Arguello: Yeah, I really like the examples you're giving, right, the contrasting examples. And I remember at one point I was in a really small charter school, and they barely had resources for Chromebooks, I believe it was. And we ended up finding a sixth grader who was hacking into government websites at lunch. I mean, that was his brilliance, right? He could code in ways I didn't even understand.
So when we dream, how do we find healthy ways that technology can unleash solutions and realities that benefit everybody? Now, I didn't see the hacking as the benefit. What I'm saying is, once we had a robotics program and that student was able to harness and channel that coding, they went to regionals and state, and they won in ways that nobody imagined, right, the first year they were there.
And so for me, if we want to dream about technology and AI, I think: can we get over our own fear as adults and say, well, maybe the students should be the ones, in a healthy way, programming AI? And we might actually see a level of scaffolding, a level of varied teaching and meeting of students' needs. Not because AI will do it, but because maybe they need to teach AI to teach the teachers what they need holistically, right? Not in an algorithmic way that nobody understands, but laid out in very clear ways for teaching staff: this student, out of all of you, will need these various things. And make that concrete.
Because let's be honest, student voices in the history of schooling have never been a formal part of learning, right? It's never been a multi-directional process focused on relationality. We need technology to highlight the relationship that should exist, the negotiation between teacher and learner, and the idea that the teacher probably should be learning more than the students, all the time, about how to be a better teacher, through their own cognitive algorithm.
I could dream that maybe AI will help point that out, and then sound an alarm when the teacher gets lazy and is just spitting out the same stuff to the same class every time.
Mary Rice: Right?
Joaquin Arguello: Or when there's over-disciplining, AI will sound an alarm, or AI will raise its hand, you know, like in the 1980s robot movies, and say, little Johnny needs X, Y, and Z, when the teacher is purposefully, or maybe unconsciously, overlooking those things.
Right.
Federico Waitoller: Wow, those are great dreams. Well, thank you, the three of you, for delighting me and enriching our audience's understanding. It's a complex topic, and there's so much more; I have more questions and many things to say, and we'll keep people thinking about it and debating.
Thank you so much, Mary, Richard, and Joaquin, for being here with me today.
Richard Carter: Thank you, thank you, thank you.
Federico Waitoller: Thank you for listening to Dive In. I hope you learned from this episode as much
as I did. Please help us to spread the word about the show.
See you next time.