Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
I think we're delegating and agreeing to give up
our agency,
our identity,
our accountability.
And those three things are really important.
And we're also then potentially harming our
students,
especially those who are most marginalized by this.
(00:21):
This is Educating to be Human,
and I'm your host,
Lisa Petrides,
founder of the Institute for the Study of Knowledge
Management in Education.
In each episode,
I sit down with ordinary people creating
extraordinary impact; people who are challenging
notions of how we learn,
why we learn,
(00:41):
and who controls what we learn.
Today on Educating to be
Human,
I'm joined by Maha Bali,
a professor of practice at the Center for Learning
and Teaching at the American University in Cairo.
(01:01):
As a leading voice in digital pedagogy and open
education,
her work invites us to think about digital literacy
not just as a technical skill but as something
deeply connected to identity,
power,
and relationships.
And at a time when our digital culture can often
feel dehumanizing,
her perspective challenges us to ask (01:22):
what values
do we embed in the digital tools we use?
And what kinds of relationships do they foster or
erode?
Through projects like Equity Unbound,
which you'll hear more about in this episode,
Maha is helping to shape global communities that
(01:42):
model critical reflection and cross-cultural learning.
In our conversation today,
we explore what's at stake in the digital world
and why education must equip us not only to use
technology but to question it,
especially with artificial intelligence,
or AI.
(02:02):
I think the urgent task before us is not to
simply take a position for or against it,
but to ask important questions about equity and
agency,
and what it means to be human in digital spaces.
Thank you and welcome.
Thank you so much for having me.
I'm so excited to dive deep into this,
(02:24):
especially with you.
Excellent.
So I have a question about what brought you to this work.
I know there's probably a long history of that,
but you've done so much work on how you've shaped
communities.
And I know we'll talk more about Equity Unbound
and what that is.
But I'm just really wanting to have an understanding
(02:44):
of what brought you to this work in this way.
That's such an interesting story.
So first of all,
I mean,
it's an interesting question.
The story is I started out as a computer
scientist,
and partway through studying that I was like,
ah,
I don't really know if this is what I want to
do.
And where's the sociality and relationality in this?
(03:04):
And I found myself interested in so much more.
And I finished my computer science degree with a
thesis that used neural networks.
And in the process,
I was like,
oh wait,
I'm interested in psychology more than I'm
interested in computer science.
I want to study educational psychology.
And I shifted.
After I graduated,
I worked in IT for a while and didn't find myself there.
I went back to do my master's in education.
(03:27):
So it was like somewhere between technology and
education.
And then I did a PhD in education.
And while I was doing my PhD,
I was working at the Center for Learning and
Teaching where I am now.
But early in my role,
my work was to support people with using technology
and to help them with things like learning outcomes and alignment.
And something started to feel off about all of
(03:48):
this.
Like I could see some people embracing this and
some people resisting it.
And at first,
of course,
the narrative is like,
oh,
why are people resisting technology?
This could be so good.
And I started saying nope,
it's not always good for all people.
And then,
when I was doing my PhD,
my PhD was about critical thinking.
And I kept coming across critical pedagogy.
And I kept saying,
(04:09):
no,
no,
no,
that's not what I want.
Leave me alone,
I'm trying to do critical thinking.
And then I'm like,
okay,
I'll read about it.
And I'm like,
oh my God,
this is so good.
This is talking about power and talking about
social justice.
And I went so deep into that.
And then started to question,
oh my God,
what are we doing with technology?
We're not approaching technology from a social
justice perspective.
(04:30):
We're not looking at the power dynamics that are
shifted when we use technology.
And I learned about alternative approaches to
curriculum that were not product oriented,
that were not neoliberal and based on outcomes and
measurement,
and all those things that we were teaching faculty
in.
And I was like,
wow,
we have to do things differently.
(04:50):
And I started teaching and realizing that I could
not,
as a teacher,
just follow a lesson plan and say,
these are my outcomes.
These are my assessments.
These are my rubrics.
And that works for everybody.
This is gonna work,
and we're gonna be good.
Every time I taught,
I had different students, so I taught differently.
And I started to think that a lot of what I was
reading about this kind of thing wasn't working for
(05:12):
me anymore.
So I sought out more critical perspectives, learned about digital
literacies, and approached the digital world in a more critical way.
And this changed my faculty development practice,
too.
Yeah,
that's so interesting that you come to it from the
computer science perspective in that way.
I think so many people start thinking about digital
literacy and identity,
(05:33):
and they're coming at it from the other direction,
and then trying to apply technology.
And you've done it in the opposite direction,
which helps me understand the depth that you
actually have around this concept of
digital identity and why that matters today in the
world.
(05:54):
I'm wondering kind of what's the difference when
you're thinking about this between just the using
technology and then engaging with it critically and
ethically?
Yeah,
so I mean, first of all, one of the things I like
to say about digital skills versus digital
literacies is that it's like telling someone that a
(06:15):
pen helps them write.
Learning to use a pen, you can just draw or even
put words together,
but that's not quality writing.
And I think it's the same with the digital.
It's like if you give someone a keyboard,
they can type,
but it doesn't mean they can write a novel.
And so with digital literacies,
it's about not just knowing how to do the thing,
but it's about knowing how to use it in the best
(06:38):
possible way to convey a message, to achieve a purpose, and
recognizing the limitations of the thing that you
have.
Like, you know, with the pen and pencil example:
is this better to do with a pen or a pencil,
and do you need an eraser for this,
or is it okay to do it in pen?
You know,
that kind of thing.
And I think a lot of people just use technology
(06:59):
because it's there.
And this goes back to the problem with young
people,
right?
We think young people have digital literacies
because they know how to use the tool.
It doesn't mean that they're hyper aware of the
kind of risks they're taking when they post things
online just because they know how to post things
online.
And these are the kinds of things we need to
develop ourselves also because the tools are new.
(07:20):
We need to learn how they work.
We need to learn what could go wrong.
We need to understand who created these tools.
What were their goals?
What are the hidden curricula of all the digital
tools?
So the hidden curriculum is part of the critical
perspective on pedagogy,
right?
That we don't always talk about as educational
developers,
like myself.
And we need to look at all these technologies.
(07:41):
And the basic one is like turnitin.com,
right?
So turnitin.com is a tool that's existed for maybe 20 years
now,
where teachers create a class and they invite their
students to submit all their essays onto it.
And then the tool will figure out if your student
has copy-pasted something exactly or with very poor
paraphrasing,
from either other students on their database or
(08:03):
public internet or some actual subscription databases
like ProQuest.
And it will just highlight the parts of your
student's paper that were taken from somewhere else.
And then you could show that to your students if
you wanted and ask them to rewrite the paper,
or you could just punish them and have a case
against them or whatever.
And what happens when we approach our students and
(08:25):
start by doing this on the first day of class and
we say we use Turnitin,
we're doing two things (08:30):
we're telling them we don't
trust them as a rule,
and we're teaching them about plagiarism in the negative
sense, and teaching them technically what it looks
like and what this tool can detect rather than
teaching them about scholarship and referencing and
the importance of acknowledging where we got our
(08:50):
ideas from and how to mix your original ideas with
the ideas of other people and how that makes your
work better quality.
It becomes a punitive thing rather than a growth
thing,
right?
And then there's the perpetual use of turnitin.com.
Even if you are gonna use it in a different way,
people say,
oh,
I use it pedagogically to help students learn about
paraphrasing.
(09:10):
And I say,
but everybody else is not doing that.
And in the end,
you're feeding your students' data to this machine.
We're paying this company to give it our data so that
it can do its work better.
It's ridiculous.
I'm wondering what you're seeing now.
You've talked before about people who are
(09:31):
resisting AI in their practice.
What do you think is at stake when we're basically
delegating parts of teaching and learning to AI?
So much is at stake.
So first of all,
with the talk of embracing and delegating to
AI,
I think: what kind of education are we talking
about in the first place that looks like this,
that an AI can do it?
(09:51):
So that's problematic.
I think we're delegating and agreeing to give up
our agency, our identity,
our accountability.
And those three things are really important,
and we're also then potentially harming our students,
especially those who are most marginalized by this.
And I'll explain all of that.
So I'll talk about particular ways that people are
(10:12):
saying they're using AI and why I have a problem
with each of them.
So educators saying,
oh,
we'll use AI to write a lesson plan.
And I'm thinking,
wait a second,
you know your students.
Every time we write a lesson plan or we plan what
we're gonna do in our classes,
we know why we're doing it.
We know what we did the day before.
We know what our students have experienced before.
We've met our students.
(10:34):
I do things differently every semester depending on
which students I have.
I think no matter how much I try to summarize to
an AI tool who my students are and ask it to
give me something,
it doesn't have a holistic view of who these
people are.
If I explain they're Egyptians,
first of all,
AI tools are so trained on Western data they have
no idea.
They'll think they're ancient Egyptians at this
point.
So it's not gonna understand my students.
(10:56):
When it gives me ideas,
I won't know where these ideas came from.
I learn a lot from the open internet; I learn
from a lot of other people.
I look at syllabi of other people who teach similar things,
but I can go and say,
oh,
this is Lisa.
She's based in the US.
She teaches this.
I can get in touch with her and ask her,
why are you doing it this way?
Why are you doing it that way?
I can try to look for people who are in a region
(11:17):
closer to me.
I'll see who's doing what in Morocco,
or Lebanon,
or South Africa.
And I'll say,
oh,
that context is more similar to mine.
So maybe I'll take something from there.
And whenever I make a decision about what I'm gonna do in
my class,
I have a justification of why I'm doing it that
way.
AI will not give me that justification.
If it does,
it's not really necessarily connected.
I hate it when people say I'm gonna use AI to
(11:39):
give feedback to my students.
I think if students are writing,
they're writing for an audience,
their work needs to be read by an audience.
And the feedback has to be about how their writing
has touched me,
how the writing has influenced me,
how it connects to what we've done,
and how it connects to what's happening in the
world.
And then using AI to do that is just giving them
a generic sort of,
(12:00):
yes,
it's gonna look different each time,
but it's really,
really very generic.
And I've heard,
like,
using AI for grading and things like that,
that AI tools tend to normalize things to
the average.
So if someone's writing is,
like,
very creative or they're a non-native speaker,
so they write in a very different way,
or something like that,
it's gonna try to bring everything back to
(12:20):
the middle,
and so that's very problematic.
But I also think about the use of AI in things like
when they talk about personalization
of learning, for example,
and I'm thinking,
why are we doing this?
Let the student be exposed to a lot of things and
the chaos and the messiness,
and then let them choose their pathway.
If you overdo the personalization,
(12:41):
you keep putting people into boxes based on the
data points you have about them,
which are behavioral data points and don't really
express what's inside a person.
There's a lot of people who will make the same
choice of what book to buy,
but their reasoning for why they're buying that
book is different.
If you keep recommending similar books based on
what other people are doing,
that's not necessarily gonna work for you. Even the
(13:03):
very simple algorithm that Amazon uses doesn't
work very well for me,
for example,
and AI is even more complex,
and we don't even know the black box where it's
coming from.
And then things like learning analytics are so
problematic,
I think, and whether or not they use AI,
we're using a tool that looks at,
again,
data points related to people,
and then say,
oh,
this person is at risk of failing,
(13:24):
or this person is whatever.
So it's labeling people again based on things you
can see,
whereas there might be really important things that
are not visible to the algorithm.
And it's kind of like,
oh,
the teacher now doesn't have to care about the
students anymore.
Like the teachers should get to know the students.
And when we talk about,
oh,
but there are large classes,
well,
we shouldn't be having those huge classes.
(13:45):
We should have teaching assistants.
Like we should think about human solutions to these
problems rather than say,
oh,
we have an algorithm for that,
and then give that responsibility to the algorithm,
and then nobody's accountable for what could go
wrong or how someone could get harmed by this?
Yeah,
you're raising the most important questions of
today,
I think.
I mean,
because we're in this age where platforms and the
(14:08):
use of platforms
are shaping how we see ourselves and how we
see each other.
And it's troubling to think about this context.
So you're there,
working with students,
educating students and educators to critically
navigate these kinds of things,
like bias in our algorithms and the politics around
(14:29):
that.
How are you doing that?
And are you giving students alternative ways
to be able to use
AI,
for example?
So,
first of all I think one very important thing to
keep in mind is that not everyone talks about
issues like bias and social justice on a regular
basis.
And if you start to talk on the level of the
(14:50):
problematic biases,
and inequities and ethical issues in AI to someone
who doesn't really truly understand bias,
you're not gonna get anywhere.
Like even trying to explain to people that AI has
implicit biases,
a lot of people themselves are not aware of their
own unconscious biases or don't even know the
concept of unconscious bias.
And they think that if you prompt the AI,
(15:10):
don't be biased,
then it's gonna be okay. I'm like,
no,
it doesn't work like that.
So the first thing I do with my own
students,
I teach one course on digital literacies and
intercultural learning.
I start with the intercultural,
talking about identity and talking about hybrid
identities,
right?
And talking about bias and then the systemic level
of it as othering.
We talk about different levels of oppression and
(15:32):
inequities and so on.
And then they actually have an intercultural
experience,
and it's done online.
So it's digital as well.
And then we talk about the digital world,
and then they can apply everything they've learned
about these things into the digital world.
But they need to,
first of all,
reflect on their own biases and the ways they
interact in the world,
the kinds of inequities that they're part of,
(15:53):
how they might be complicit in the oppression of
other people,
and the ways these things show up in the
world, which are easier and more tangible,
they can touch it,
right?
And then they see it with AI.
And coming from where I come from in Egypt,
where I have Muslim students,
Arab students,
the biases in AI are more obvious.
I think like I come across the biases in AI like
50% of the time; I think a white man in America
(16:14):
maybe sees it like 5% of the time.
It depends on what you prompt it with,
right?
Because my students have learned about implicit bias
they know how to prompt AI to see and bring out
these implicit biases.
And then they see it.
And what I do with my students is that I allow
them to use AI because I think the best way to
stop someone from using AI is to let them use it
and reach its limits and then realize,
(16:34):
oh my God,
this is a waste of my time or this is biased or
this doesn't help me that much.
But at the same time,
I wanna recognize that there will be students who
will benefit from it for grammar checking and some
who really haven't been taught how to brainstorm
and it'll help them with brainstorming.
And if I'm not teaching them the things,
if I'm not gonna teach them how to improve their writing,
and the writing center doesn't have enough staff,
(16:55):
and they don't have enough help with this.
Then,
yeah,
you can use AI to help you with that.
I even teach them about the research tools that
use AI, and then explain why they might not
wanna use them anyway,
even though they exist and give real
references, because they don't really tell you what's
there,
and you haven't gone in depth,
and you haven't really learned.
And the more they do things in the world that are
(17:15):
experiential and authentic in my class,
they start to realize the AI wasn't there with us
when we had that experience.
The AI can't help me reflect on this. I'm gonna
do something with a real community here.
AI doesn't understand this; AI can't help me in
this.
And they start to see that,
even though there's a lot of discourses around them
about how AI is gonna,
(17:36):
I don't know,
transform everything,
they realize it's not really that transformative.
I actually also have them go interview people in
industry and professors in their discipline because
it's an interdisciplinary course.
I have two liberal arts things,
so people are coming from different disciplines. I have
engineers and historians and business students and
everything. And then they find out what people are
actually doing in real life,
(17:57):
rather than the hype around it.
And so there's that.
With faculty members,
it's trickier.
With faculty members,
there's part of it where,
first of all,
I don't want people to be scared of it,
and I don't want people to shame each other about
it.
I understand that in some disciplines,
I'm not talking about generative AI,
(18:18):
but like AI tools in general,
have been useful in the past.
Even back 20 years ago,
when I was an undergrad,
the use of AI to detect cancer from imaging was a
big deal.
And this is a very specific AI that does this one
thing,
and you give it millions of scans that no human
being would have had a chance to see,
(18:39):
and it learns this one specific thing and it's
been tested that it's really good at this one
specific thing.
It doesn't hallucinate as much as the generative AI
does.
And there's still gonna be a doctor there who's
gonna also look at it and do the treatment and
talk to the patient.
It hasn't replaced the doctor,
it's just sped up the work of the
radiologist a little bit,
but there's still a human being doing the scans
and there's still a human being talking to the
(19:00):
patient before and after and during and all of
that.
So I wanna respect that these things happen and
they could be helpful, even revolutionary, in areas like
agriculture.
People who teach architecture say if you have a
foundation of architectural literacy,
then you can use AI and discard the things that
don't work,
and it helps you move.
So I wanna respect that.
And at the same time,
(19:21):
I don't want people to overhype it for the things
that it doesn't help with.
And I wanna also respect the people who recognize
the issues of power and the way AI has been used
for surveillance and the way AI has reproduced
inequities before.
Or people who just say, my priority is to teach students
this thing;
why do you want me to waste my time learning AI?
(19:43):
The only thing is students do have access to AI.
So I don't think AI is inevitable,
I don't think it's inevitably gonna transform education.
I don't think it's inevitably good.
I do think students all have access to it,
and they know how to use it.
And the first semester it was out,
not all of them were using it. Now they all use
it.
All educators need to know what's possible with AI
so that they can make sure that they design their
(20:03):
assessments in such a way that if it's an
important assessment,
you need to know that your students have done it.
But I would never ever tell someone you have to
allow your students to use AI,
or you have to embrace AI,
or AI is definitely gonna be something that you
integrate into your classes.
And I'm so careful in all of our communication
never to go out and make it sound like we're
always encouraging AI use.
(20:25):
I was recently at a conference called the AI for
Good Global Summit in Geneva that's organized by
the ITU. And almost every use of AI there that was
supposed to be for good didn't sound like it was
for good. But also,
there was someone who was talking about,
for example,
lawyers.
And I was very shocked that lawyers and barristers
were encouraged to use AI in their work for
efficiency.
(20:46):
But then what happened is they had multiple levels
of AI hallucination,
right?
It can hallucinate cases that don't exist.
But what's worse is it can take cases that exist
but hallucinate the summary of it,
and it can hallucinate some of the inner details.
And to get a 200-page something that's produced by
AI and have to revise that,
(21:06):
and then make sure that you submit the right thing
to the judge.
And if you make a mistake,
you can get jailed.
So what's the point?
And there are so many papers coming out now about
this,
these efficiency gains from AI. You're just making
people do the awful work of revising the outputs of
the AI rather than just doing the hard work
themselves,
which would have been rewarding.
It would have been more useful to have actually read
(21:27):
the full cases and written your own summary than
to get the AI one and then have to revise it and
make sure it hasn't missed anything or made a
mistake or all that.
And of course,
in the legal system,
you also have the issue of deepfaked evidence now.
So that's going to be a huge issue.
And why are we using AI for this?
Shouldn't lawyers be accountable for the evidence
they bring?
(21:48):
Shouldn't judges be accountable for the decisions
they make?
Shouldn't they be able to justify why they're doing
what they're doing?
Yeah,
I think the thing is we haven't seen what that
research looks like when people are doing that.
Everybody is just rushing because there's money in
AI.
I can tell you here in the US around education,
if you're not talking about AI,
(22:10):
or if you're talking disparagingly about AI,
you're literally not invited to conversations.
So it's a really,
I think we find ourselves in a very difficult
situation in this way. When you're teaching, you
talk about this concept of digital well-being.
What is the connection between how we're automating and
(22:33):
generating through AI and digital well-being?
Obviously,
it feels like there are some pretty big risks to
our humanity in these spaces,
like you talked about mental health and identity.
How do we begin to do this?
I talk to my students about well-being in
general, because I care about their well-being in general.
So I show them a video by Mays Imad where we
(22:56):
talked about trauma-informed pedagogy during the
pandemic,
and we talk about what helps you through the day,
what helps you during stressful times,
and we talk about all of that.
Then we also talk about online,
like what do you do online that helps with your
stress levels and what makes it worse,
and how do you deal with it.
And we talk about how algorithms can keep showing
you the kind of thing that you're seeing and go
(23:18):
down,
you know how it goes with YouTube; it gives you
more extreme content.
So if you go more and more in one political
direction,
it will take you farther and more
extreme in that direction, and vice versa.
But also the way social media works.
I remember when Facebook did that experiment where
it controlled your well-being by showing you things
that would make you happy or upset.
(23:39):
And I remember I was one of the people who was
shown the things that would make me happy.
But you know what the problem was?
It was during a time that Egypt was going through
a lot of political problems,
and I was not seeing any of my Egyptian friends.
I was only seeing my Western friends' posts about
work things.
And so I wasn't feeling as upset as everyone else
(23:59):
around me,
which is very problematic.
But also the point is we get so into engaging
with ideas and with people and so on online that
you may not be looking around you and engaging
with what's happening around you in real life.
And that's affecting our well-being.
And when you think about even the simplest things
like GPS,
(24:20):
which is so important and so useful,
right?
And maps and all that,
right?
But sometimes you're so focused on your phone that
you're not looking around you,
enjoying nature, or noticing that the building
you want is actually right in front of you,
and you're not enjoying the moment.
Sometimes we're so into digitizing the moment that
we're not actually living the moment,
we're just documenting the moment. But I also want to
(24:43):
acknowledge that a lot of my well-being is due to
having friends digitally.
So,
when my daughter was very young,
I was finishing my PhD.
I had a lot of friends based outside Egypt who
were on a different time zone,
so that when she's asleep at midnight,
I could actually find someone to talk to
to encourage me while I was trying to finish
writing.
My best friend is Mia Zamora,
who co-created Equity Unbound with me,
(25:04):
and she's a huge part of my well-being.
And Equity Unbound itself,
as a community of global educators,
we come together.
Just right now,
we had a conversation with Chris Gilliard about
resisting AI,
and we're like we need to get together and make
plans and share how we're resisting AI in our
institutions.
And sometimes,
when I do these things with Equity Unbound,
they allow me to do public things that then help
(25:27):
me find the local folks who believe the same
things.
Sometimes,
when you do it locally in your formal position in
the hierarchy,
you can't really be as free.
Well,
I wanted to ask you,
actually,
you went right into my next question. I wanted to
ask you about Equity Unbound,
which is how I first came to know the work of
you and Mia as well,
the amazing work that you do.
(25:49):
And I understand some of its beginnings were about
connecting learners across geographies and contexts.
And I have to say, from where I sit
here, is there anything more important today than
trying to bridge and connect people across borders?
So I would love to hear a little bit about how
(26:10):
Equity Unbound came to be and then what you're
doing today around this global learning and
of course the importance of it.
Yeah,
so I want to say that our motto is from an
article by a Lebanese author called Dina Munsur,
and it's the only way to make borders meaningless
is to keep insisting on crossing them.
So just what you were saying there.
(26:31):
So first of all,
the prerequisite to Equity Unbound was something
called Virtually Connecting. When my daughter
was young, I couldn't travel to conferences,
and I wanted to talk to my friends who were
there.
And so we would have hallway conversations during
breaks and lunchtime so that virtual folks who
couldn't be at a conference could connect with
keynote speakers and just others who were there.
And we'd have conversations.
(26:52):
And what happened there is that we realized we
were challenging the academic gatekeeping of
conferences that doesn't allow certain people from
the Global South,
or young mothers,
or early career scholars,
or adjuncts to attend these conferences,
but allowed us to have a voice in conferences
and stay up to date on what was happening.
The content wasn't important; it was those
conversations,
the social conversations,
(27:13):
that were missing.
And we made a point of having these hybrid
conversations where some people were virtual,
some people were in person doing that.
With Equity Unbound,
it was Mia and myself and Catherine Cronin from
Ireland,
also the three of us together connecting our
students.
And we said,
you know,
we're not going to just do it between our
students.
Let's make it open and other people can join in.
(27:33):
What ended up happening the first two years we did
this,
or two semesters,
is that educators came to these things and
educators found them helpful.
And then,
more educators wanted to do it.
And then,
when COVID happened,
we’re like educators need us right now. What's the
thing that people in our community need right now
that we can do?
'Continuity with Care' was the term that we used
(27:54):
for the beginning of the pandemic,
right?
So we crowdsourced how people were going about it.
We had conversations about it.
And we created a small Twitter DM of a core group of people
who wanted to keep supporting each other from all
over the world, which still lives
on now but moved to Signal because nobody wants to
be on Twitter anymore.
And then at some point two or three things happened
(28:16):
actually in a row.
One of them was the murder of George Floyd.
And so many,
so many others, right around the same time in the US.
And one of my African American colleagues here at
AUC,
she said, "Maha,
this is something you should do something about.
You're the person" And together,
she,
myself,
and me as Mia Zamora,
we created Socially Just Academia.
And we brought together educators to talk about
(28:38):
what's the role of academia in this.
We got a grant from the Hewlett Foundation and we
did several sessions on what's the role of academia
in promoting social justice.
And at the same time this was happening,
this was like the summer of 2020, we were
approaching the beginning of a semester where
everybody around the world was going to be online
for the first time without ever having seen their
students. And we're like, oh wait a minute. Now we're going
(29:01):
to meet students for the first time online. And
people were like, I figured out how to teach online,
but I have no idea how to do this building
community thing. So we're like, OK, we're going to do
community-building resources. We know how to do this.
We've been doing it with Virtually Connecting. We
have this thing called intentionally equitable
hospitality. It's like, how do you make these
conversations equitable? How do you host them as an
educator in that way?
And so,
converting it from hybrid to fully online was easy.
(29:23):
And we would record videos,
Mia and I,
in our living room,
with our kids coming on and other people coming up
to pretend to be students,
so you could experience the way of building community and reflect
on whether you would be able to use this in your
class,
and how you'd adapt it.
And the important thing was that you could adapt all these resources.
And we would give you ideas for adapting them
because people didn't always have the imagination.
(29:44):
Oh well,
if I don't have Zoom and breakout rooms,
how can I do this?
Or if I had to be asynchronous,
how could I do that?
Or if my students won't turn on their cameras,
how can I do this?
So we had that.
And then the most recent thing we've done was
Mid-Year Festival,
which we've done now for four years.
It's our fourth year.
And this is also funded by Hewlett.
And the Mid-Year Festival,
the community building resources,
(30:04):
by the way,
were funded by a UK organization called 1HE,
which is for educators around the world.
The Mid-Year Festival came about as soon as we started to
go back face to face.
People started to say,
wait a minute,
I don't have time for online conferences anymore.
I can't stay online for a whole day anymore.
And we're like, yeah,
if you're not traveling,
why does it have to be a three-day online
(30:25):
conference?
Can we have it over three months (30:25):
June,
July,
August,
the middle of the year,
and just three or four sessions a week?
And then you can pop in and out of whatever you
want.
And let's make the themes about well-being and
about social justice,
the things that people weren't talking about enough.
And themes also related to intergenerational
learning,
where my child and somebody else's children got
(30:47):
together and played Minecraft.
And we would do reader's theater,
where we'd go and read books— not books,
but plays,
actually— together and pretend and make voices,
but also have deep conversations about social
justice and critical pedagogy and well-being and
grief and all of the things that we needed to
talk about in community.
And you'd have three months to work through it
(31:08):
together in and out,
and you'd get to know people really deeply.
And what happened after that first year,
we had like 13 organizers from the Continuity with
Care group,
almost starting from there and then creating this
thing.
And then people who came often would come up with
ideas.
So we were like emergent,
the whole idea of emergence and not the
pre-planning of everything,
(31:30):
right?
Not pre-planning your outcomes. Seeing what your
community needs and adapting intentionally,
it's called 'intentional adaptation',
as Adrienne Maree Brown calls it,
to what the community needs.
And then the next year,
people who were participants became organizers.
One of them was an undergraduate blind student in
my institution,
whom I met on Twitter,
and then he joined MyFest.
(31:51):
And then he came back the next year as an organizer,
and now he's an intern with us.
And we've had people from all over the world
joining and becoming organizers,
and contributing and helping with organizing,
and bringing people they know that I wouldn't have
known on my own.
And the community has been growing.
It's been four years now that we do this.
(32:12):
I do want to say
that in the Equity Unbound and MyFest community, when
AI came about, I was in a conversation with Anna
Mills,
who's someone that most people should know by now,
who's done a lot of good work on AI.
And we got together before I even gave a workshop
in my own institution.
Anna Mills and I did something through Equity
Unbound.
We're like,
come on,
let's test your assignments on AI and let's show
(32:32):
you how to use this thing.
It was practical and hands-on.
There were a lot of people doing things about AI,
but they were very top-down rather than workshoppy.
People really,
really benefited from that.
And there were a lot of people from my
institution
who came to that session,
and that helped me design the local things.
So,
a lot of times,
I'll experiment in the public spaces and then say,
okay,
this works.
Let's bring it into the institution
(32:53):
now that I know that it works.
I'm wondering if you have strategies for how people
who are not necessarily ready or aware enough about
these issues can bring in this,
I'm going to call it an alternative perspective,
compared to what most educators are getting in more of
(33:13):
our mainstream education media.
What are some strategies or suggestions,
or maybe there are things you're doing for us to
bring those conversations in?
Yeah,
if
we're talking about critical perspectives on AI,
I'm going to suggest Audrey Watters.
Her newsletter is called Second Breakfast.
And then if you follow the links there,
you'll find a lot of wonderful people like Helen
(33:36):
Beetham,
and she'll talk about Tressie McMillan Cottom.
She'll talk about Chris Gilliard.
She'll talk about a lot of other people too,
and then you can follow that.
And there's a lot of books that give you the
critical perspectives on AI and things like that.
But in a practical way,
like what are you going to do in your classes?
How are you going to do this?
First of all,
yeah,
we offer a lot of things through Equity Unbound,
the Mid-Year Festival,
(33:56):
MyFest. And sometimes there will be a
price for the Mid-Year Festival,
but free is always an option.
Everything we've ever done is either free or the
possibility of free exists,
or there are different prices.
So that's always going to be possible.
The other thing is check people out on social
media.
Even if you're not going to be interacting with
them,
you will find people on social media,
not on Twitter anymore,
(34:16):
but on Blue Sky.
You might find them on LinkedIn,
you might follow these people,
and then you'll find their blogs, or, like, I blog.
There's a lot of other people who blog,
and there's a lot of people like Leon Furze,
for example,
who gives useful advice,
but he's also quite critical because he focuses on
the ethics part.
And then the other aspect is think about how you
can build allyships locally.
(34:39):
Because sometimes it's not safe to build it in a
public way,
but you can build it in a private way.
So just contact someone privately rather than
publicly if that helps.
This kind of work is hard, and, as
Mia and I talk about in one of our latest
papers,
you have regressions.
You start feeling like you're making progress,
(34:59):
and then something happens and someone threatens
you,
or you get passed up for promotion because
of your work on this.
And I've been hearing of people who quit their jobs
because they're afraid of losing their jobs.
And,
of course,
with the issues happening in Palestine right now,
there's a lot of people who have lost their jobs
and been threatened.
And it happened before,
but it's happening even more now.
(35:21):
And so,
you sometimes need to do this work in private.
It might be someone outside your institution who
can support you,
but you definitely need someone, or else you're
going to die of thirst from feeling like you're
completely alone and there's nobody else like you
in the world.
But that's never the case.
There's always someone like you in the world.
They're just not necessarily right in front of you
or they don't feel safe expressing themselves to
(35:45):
you.
So something like,
for example,
ungrading.
So many of us have felt uncomfortable with the
practice of grading for years,
but we didn't know what to do about it.
We didn't know there were other ways of doing it.
When I first started doing ungrading,
I wasn't telling anyone about it.
Could you tell us about what ungrading is?
Right.
So ungrading is a broad term where you want to do
(36:06):
something other than grading.
So,
where you think there's something wrong with
assigning numbers and letters to humans and comparing them with each
other.
There's a lot of reasons why you might want to
resist it,
but there are so many different ways to do
ungrading.
Some people do it like Asao Inoue,
who is a wonderful,
wonderful American educator,
(36:27):
Japanese American.
He talks about labor-based grading.
So,
he agrees with students (36:32):
you do these tasks,
you get that grade; you do more,
you get that grade; you do more,
you get that grade.
And then,
whatever you do,
I'm going to help you make it better.
So,
the quality,
that's kind of the point.
I think our aim as teachers is that everybody
should be able to get an A if I give them a
chance to get better.
(36:53):
So,
you do it badly,
I'll give you feedback; you improve it until you
reach a certain point.
There's things like that.
What I do in my class is I have students do
self-assessment and I tell them you need to be
able to be good judges of your own efforts and
your own work.
Because in life you need to know if what you've
done is good enough to submit to the journal or
to submit to your boss or to go on TV and talk
about it or whatever your job is at a minimum
(37:15):
level before you show it to someone else.
But you realize that so many people want to do
this and they just don't know how to do it within
the restrictions of their institution and they need
to have a community of people to talk through it.
I wanted to go back to another question when we
were talking about how people who for the first
(37:35):
time are thinking about AI and some of the
problems with it, and they don't
know where to turn for help.
And you gave some fabulous suggestions.
I'm wondering if you've been thinking about or had
experience in terms of policies that are being
created by governments or by institutions of
(37:57):
education around this that are not in fact
reflecting the things that we've been talking about
here today.
And if you have,
what are some examples of things that you've done
there?
That's interesting.
Most of the policies I've seen have been relatively
flexible and open,
allowing different instructors
to do whatever they want,
(38:17):
which I think is a good call.
When you're in an uncertain space,
when you're uncertain,
trying to introduce certainty is fake,
and it doesn't work.
So,
trying to ban AI would never work anyway.
Trying to open it up completely isn't fair to
certain instructors.
So,
Lance Eaton had collected from people very early on
(38:38):
what their policies are at the classroom level,
and Bryan Alexander had collected from people what's
happening at the institutional level.
So,
I can tell you what we did in my institution,
and I actually have not seen policies that are
very different than ours,
to be honest.
We crowdsourced it with faculty and students.
So,
what we have as our policy is this (38:57):
So,
for instructors,
decide what you want to allow students to do with
AI and whatnot,
and be clear about it in your syllabi and in your
assignments,
and tell them how you want them to cite it if
you want them to cite it.
And if you don't want them to use AI,
you need to explain to them why and try to make
sure your assignment is difficult to do with AI,
because just telling them not to do it will work
(39:19):
for some people but not everyone,
and then that's not fair to everybody else.
So,
that's why we tell the instructors and we tell the
students (39:25):
never use AI unless you check with your
instructor first that it's okay with them and then
if you use AI,
make sure that you verify what you found somewhere
else because AI can give you incorrect information,
and eventually they'll figure out it gives too much
incorrect information to use it regularly.
The point is this thing of you have to do the
work yourself still,
like don't let it think for you and short-circuit
(39:47):
your work and things like that.
But most of the policies I've seen give instructors
that freedom,
and I always say it's very different if you're
teaching first years than seniors,
if you're teaching writing versus engineering.
And it also makes a difference what your field is.
We were just talking about this in another
conversation: business faculty tend to be more
(40:08):
willing to embrace AI because they know that in
the business world people are using AI,
and so it makes sense for them that students need
to learn to use it in university so that they're
ready for it in the job market.
And I think when students use it,
if we're also teaching them critical thinking and
all the important judgment skills,
and if we're also giving them a foundation of
(40:29):
social justice and all of that,
then eventually if they're going to be forced to
use AI,
they're going to be very careful about how they
use it.
And they won't do the stupid thing that the lawyers
were doing,
which is submit cases that you're not sure even
exist,
kind of thing.
Yeah,
that's the best case scenario,
and that's a great template I think for others to
use in that way.
We've talked a lot about what the challenges are
(40:51):
around digital learning and digital spaces.
From your perspective,
what are some of the most powerful opportunities?
What are the things that you're excited about as
you see emerging in this digital world,
whether it's through AI or not?
I'm just asking that question more generally.
Yeah,
I'm always excited about anything that can enhance
accessibility.
(41:12):
I think definitely there have been a lot of
advances with voice to text and image to text that
help people who are visually impaired access things
that previously we hadn't made accessible to them.
I worry,
of course,
that it can still hallucinate and make things up,
and make them think that they're seeing something
that they're not seeing. And this is again very
culturally biased sometimes.
(41:32):
And I also am concerned about using it in ways
that might reduce human responsibility for other
humans by delegating it to a technology.
It's like,
why should I put alternative text on my images if
AI can do that for me?
And you miss out on the nuance of context of I
put that image there to show you a particular
(41:53):
thing.
AI doesn't know that it's gonna give you so many
details.
It's gonna drive you crazy.
We actually had a session about this recently at
MyFest with a literature professor in the US,
and he hosted it with a blind student,
an undergrad student who's a disability advocate.
I get excited about that.
I'm also, honestly, I think I'm gonna start to be a
little bit optimistic that, because most professors
(42:15):
should be critical thinkers at the core,
they will eventually start to see that the
hype around AI is problematic. Just like I think
some people who were using Turnitin.com,
which we talked about earlier,
which also uses AI,
by the way, but old AI,
eventually stopped using it and started to realize
the harms that it can cause.
I do think we need to talk about the harms a lot
more.
(42:36):
And other than the ethical issues I've talked
about,
there are all kinds of other harms related to
human labor exploitation,
copyright violation,
and environmental costs.
But at the core,
the reasons we're using AI sometimes are wrong in
the first place; like we shouldn't be using AI in
these things that require human care and judgment,
(42:56):
and wisdom and decency and responsibility.
And so I'm hoping that as people start to use it
more,
they'll start to notice these limitations more and
stop using it so much,
and stop the hype.
Great, well,
thank you very much.
I know we're at time here.
I wanna just give a shout out to the work you've
done with Equity Unbound.
(43:16):
There'll be links to some of the various references
that you've mentioned.
It's such an inflection point in how AI has sort
of taken over what we're thinking about digitally.
And I've really appreciated how you've sort of
drawn us back from that down to the very core
about the future of learning and the role of us
(43:38):
humans in that.
So thank you again. Before we finish,
I always like to leave space for one final
question, something I ask all my guests:
can you make up the title of the book that you
wish more people would read?
(43:58):
Oh,
I'm gonna say something a little bit weird.
Go for it.
I'm gonna say Us.
That's it.
Okay,
love it.
Thank you.
Thank you everybody for listening to the show this
week.
This has been Lisa Petrides with Educating to be
Human.
(44:19):
If you enjoy our show,
please rate and review us on Apple,
Spotify,
or wherever you listen to your podcasts.
You can access our show notes for links and information
on our guests.
And don't forget to follow us on Instagram,
Blue Sky at Edu to be Human.
That is E D U to be Human.
This podcast was created by Lisa Petrides and
(44:42):
produced by Helene Theros. Educating to be Human is
recorded by Nathan Sherman and edited by Ty Mayer,
with music by Orestes Koletsos.