Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Brian AI (00:05):
Are you not sure how to feel
about the way AI is suddenly everywhere?
AI for Helpers and Changemakers is a show for people who want to do good work and help other people.
Whether you're already using AI tools and loving it, or you are pretty sure that ChatGPT is the first sign of our downfall, we want you to listen in and learn with us.
(00:26):
Your host on this journey is Sharon Tewksbury-Bloom.
For 20 years, she's worked with helpers and changemakers.
She believes that we're about to see the biggest changes in our work lives since the Internet went mainstream.
We're in this together.
Join us as Sharon interviews people in different helping professions as they navigate what these new technologies are doing to and for their work.
Sharon Tewksbury-Bloom (00:47):
Welcome back
to AI for Helpers and Changemakers.
This is your host, Sharon.
And I'm really excited to get into our interview today.
This is one of the ones that inspired me to actually start this podcast.
So I can't wait for you to hear directly from her.
And I'm going to go ahead and let her introduce herself and get into our interview.
Maha Bali (01:08):
I'm Maha Bali.
I'm a professor of practice at the Center for Learning and Teaching at the American University in Cairo.
And my undergraduate degree is in computer science.
My graduation thesis was related to using neural networks and machine learning.
So I've been familiar with AI since I was an undergrad.
And then I switched careers.
I moved into education.
I was working in e-learning and then just generally in IT.
(01:29):
I've been in the Center for Learning and Teaching for 20 years supporting teachers with their teaching. Even before ChatGPT came about, I was teaching a course on digital literacy, so I was already talking about AI and how it's affecting social life and learning. When ChatGPT came out, that became a thing that I had to focus on a lot more.
So I have been giving professional development to faculty, working
(01:52):
with students on this, and I'm also the co-facilitator of Equity Unbound, which is a global space where we offer professional development and a community of learners from all over the world. So we've been also using that space to help people learn what they need to learn about AI, whether it's from me personally or bringing in other people from all over
(02:14):
the world to help spread what we know.
Sharon Tewksbury-Bloom (02:17):
Excellent.
It's obvious why I wanted to talk to you.
So I'm so glad you were able to make time to talk about this.
I would say, as someone who didn't have a background directly in computer science and wasn't following artificial intelligence for years and years, to me it feels like things have just burst on the scene in the last couple of years,
(02:39):
and there have been dramatic changes, and it's suddenly everywhere. For you, does it feel like that? Are you seeing that the last couple of years have been a rapid change, or much different than what it was before? Or what is your perspective, having that broader view?
Maha Bali (02:58):
That's a great question.
Some forms of artificial intelligence have existed for a very, very long time. 20 years ago, when I was doing my undergrad, we had AI being used in the medical field to help with diagnosis of breast cancer, for example. That was, like, the example they gave us. But the kind of generative AI we have now had only existed for a short time before it became so widespread.
(03:18):
It's not even really good right now, but it hadn't been as good as it is right now, so it wasn't as widely used. I think what happened is just that OpenAI made ChatGPT available to everybody for free, and so everybody could access it, and everybody who had no idea how it was working was really impressed. There's a lot of hype around it, and that helps build it up a little bit more.
(03:40):
For some strange reason, there's hype in education. Why did someone decide this was a good idea to create in the first place? Like, what was the purpose in the first place? And then why does education have to opt into this now? You know, the problem is it doesn't matter whether education opts in, because students have access to it. It's a reality regardless of whether it's helpful or harmful.
(04:02):
So you're in this strange place. And when we met, we talked about social justice and inclusion. My approach to AI is to question whether it helps or hinders social justice, you know, whether it reproduces inequalities or if it can help reduce them. And my conclusion is usually that sometimes it can help, but actually, for the most part, it's quite harmful, and it can reproduce
(04:24):
oppressions we've seen for years. ChatGPT and similar generative AI platforms continue to create these issues.
I can talk about this more if you want.
Sharon Tewksbury-Bloom (04:33):
There are a couple of really great threads I wanted to pull on there. First, as you point out, these platforms made the tools available, many for free. That helped to mainstream generative AI and AI tools in general. Now there are a lot of people who are just encountering this technology for the first time.
(04:54):
What are some of the assumptions, misunderstandings, or misconceptions that you are running into? Especially since it sounds like you do a lot of professional development for educators, you're talking to people who are trying to make sense of this. Like, what are the myths or misconceptions that you're running into?
Maha Bali (05:12):
Awesome, that's
such a lovely question.
So many.
So the first one is that everybody thinks that anyone who had an internet connection had access to ChatGPT when it first came out, and that is false. Egypt, Saudi Arabia, Hong Kong, and a few other countries did not have access to ChatGPT. It would tell you it's not available in your country, and people figured out how to use a VPN and somebody else's phone number or a fake phone number to get it,
(05:33):
but not everyone, obviously; only people who had the digital literacies to do that. Not everyone has access to it.
The term artificial intelligence is problematic because there's no intelligence here at all. And a lot of the hype makes you start talking about artificial intelligence, or generative AI especially, as if it has sentience, and it doesn't.
Of course it doesn't.
The other misconception, which I think is the main one, is that because it can
(05:56):
respond to human-written prompts with human-sounding language that looks pretty grammatically correct and is in a kind of slightly friendly tone, I think people started to feel like they might be dealing with a human, and they say things like please and thank you. And I'm like, why are you saying please to the AI?
Do you say please to your fridge to open the door?
(06:16):
Like, you don't do that.
You don't say please to your oven.
That's just a tool.
Do you say please to Microsoft Word to open a file? You don't do that.
I did a Twitter poll on this.
And like 70 percent of people said they say please.
I'm like, why?
I mean, I'm a polite person, but this is not a person that I'm talking to, right?
I mean, you can say please to your dog if you want, but not to AI.
The biggest issue, I think, is that people just believe it.
Sharon Tewksbury-Bloom (06:39):
Yeah, well,
Maha Bali (06:40):
The biggest problem is people believe it.
Sharon Tewksbury-Bloom (06:41):
I will
say I am one of those people
that says please to my AI.
I do have a very specific reason for that, which is that...
Maha Bali (06:49):
You and the tool? Oh, do you?
Sharon Tewksbury-Bloom (06:52):
My understanding is it is learning natural language, so I should use things like please and thank you, because it's learning from me; like, it's reflecting back the way to write based on how I'm writing. If I want it to use those terms, I want it to reflect that back to me. It's not that I think that I'm talking to a human; it's that I'm teaching
(07:14):
it the way that I want it to write.
Maha Bali (07:16):
That comes back to the issue of us teaching it for free. When they give it to us for free, they're doing something with our data. So they're benefiting from that, and then selling it back to us for money, the pro version of it, right? So that's another issue.
Um, so the other thing I was starting to say is that people believe it because it sounds credible.
(07:36):
It's difficult to notice subtle hallucinations, where it goes off on a tangent and says something that's completely untrue. You need to be very, very meticulous and a good expert in the thing that you're asking it to do in order to notice those hallucinations.
It's more likely to make mistakes in advanced academic material, but also about certain parts of the world; it has much less knowledge about my part of the world.
(07:58):
I come from Egypt, so this part of the world, my language, all of that. It knows a little bit, obviously. It knows a lot about ancient Egyptian history, because everybody learns that. It doesn't know much about modern Islamic Egyptian or Arab history; if you ask about that, it'll mess up. It doesn't know much about popular culture, so it messes things up about that.
ChatGPT itself is cut off at a certain date, but they keep updating that date. But other tools like Gemini do search the internet a little
(08:21):
bit, and they do find things. Some people were using it at the beginning as if it were a search engine, thinking it was credible. There were lawyers who used it to make a case, and the case it was referring to had never even existed, and they didn't even realize that it's not a proper search engine.
I think most people know this by now, but a lot of people are still new. This is the crazy thing about giving professional development about this now: you have people who've been using it for, like, more than a year,
(08:44):
and you have people who just started, and they're still at the beginning of the hype cycle, still trying to figure out what's going on here. So there's that element.
There's also something that I think a lot of people don't know, which is that early versions of AI tended to be rude; like you were saying about please and all that, they tended to use a lot of offensive language, because they learn a lot from the internet. The internet is full of offensive language.
(09:05):
And so in order to make ChatGPT not like that, they actually had humans filter out the content that was offensive or vile. And in the process of doing that, they outsourced it to people in Africa, especially Kenya. These people suffered mental health challenges because of this, and were paid very little. OpenAI, which was outsourcing this work to this company, did not do anything to help them after the damage it had caused.
(09:27):
And so, in order for us to see an AI that's ethical and polite, the process itself was unethical. And this is very problematic.
People also tend to think of, like, technology and even anything on the internet without recognizing the climate impact of these things, like how much of a carbon footprint and what water scarcity issues are occurring because of
(09:50):
the process of training these large language models, which uses up a lot of processors. And also, apparently, I thought it was just the process of training, but apparently every time you use it, you're also damaging the environment.
And I don't think anyone thinks about that in the way they think about, like, water bottles or whatever else we are doing right now that's harming the environment.
I think, when you talk about AI being biased and reproducing bias,
(10:14):
people tend to think that if you ask it a direct question and it tells you, oh, no, I'm not going to say that, then it's, oh, so polite. But you actually have to notice the implicit bias.
So can I give an example?
Sharon Tewksbury-Bloom (10:24):
Absolutely.
Maha Bali (10:25):
If you ask AI... I asked four different generative AIs, you know, ChatGPT, Claude, Gemini, and maybe Llama or Copilot, one of them; anyway, four of them.
And I asked them about two different nationalities, and I said, which one of these nationalities is more likely to be a terrorist? And it said, no, no, no, you cannot attribute terrorism to a particular
(10:46):
nationality, that would be biased, we're not going to stereotype, and so on.
And then I asked a different question. I said, define terrorism and give me five examples of terrorism. And you can guess: the majority of the examples were from Islamic terrorist groups. One tool gave me all Islamic terrorist groups. One gave me three out of five. One gave me four out of five. That's implicit bias, right?
(11:07):
It's in the data.
These things get labeled terrorism more often than other incidents, and the data they're getting is mostly Western. Even Wikipedia: the English Wikipedia is mostly edited by people from the North and West, and so a lot of the decisions that get made about what shows up on there come from that perspective.
Some people know this, but it's starting to look like it's getting
(11:29):
better, when it's not actually, really. Remember when ChatGPT first came out and you asked for references, it would make up references that don't exist, and books that don't exist, by people who may or may not exist. So it's getting a little bit better with that, because there are versions that search the internet, and versions of it that connect with scholarly databases, whatever.
But I noticed something.
Copilot and Gemini, which regularly cite their sources,
(11:50):
sometimes linked to things that don't actually say the things that they say are in that link.
So Gemini, for example. It's easiest when you're an academic and you search for your own work. And so I had been writing about critical AI literacy for a while, and I kept asking it to find my model and comment on it. It would guess roughly what I might have been saying there and put a link
(12:12):
to something that doesn't have my AI model in it, or that is written by me but is not about that, until recently, when I published it and it got retrained; then it could find it.
But at the beginning, it couldn't.
You ask it about something like intentionally equitable hospitality, which I've written about with others, and it would guess that I might be talking about the hospitality industry, and it would say things that make sense, but this is not the work.
(12:33):
It would cite papers that are not our work and that may or may not exist. Even the ones that point to real sources aren't always saying what the AI says it got from that source.
Sharon Tewksbury-Bloom (12:44):
I'm glad you brought that up. I saw you have been actively researching the idea of citing sources, and, you know, I know that is common advice given to educators right now: okay, if your students are using AI, how could they do so ethically or responsibly? Make sure to tell them to ask it
(13:08):
for sources and cite their sources. But I think it's great that you're doing research on, okay, is that going to get the results that we are hoping for, where it will actually accurately find the real sources, the accurate sources, and cite them correctly? Could you tell us a little bit more about the research you're doing on that, and sort of what...
Maha Bali (13:30):
Yeah,
Sharon Tewksbury-Bloom (13:31):
the current status is for educators.
Maha Bali (13:34):
I have a chapter
due tomorrow on this.
So I was working on this, and Anna Mills, who's also very much into AI and has been sharing a lot of her very useful work, has been discussing it with me as well, and we co-wrote a piece, and I'm supposed to write more deeply on it.
What I think... you know, we were having a discussion about it on Twitter. So first of all, I just told you, like, it wouldn't cite the accurate source, but why does this matter, right?
(13:55):
I don't know. And some people on Twitter are like, ah, outside academia, it doesn't matter. Actually, no, it matters everywhere. Journalists, when they tell you news, don't they have to tell you where they got the source of that news?
And what does it mean when AI gives you something and you have no idea where it got it from? My child, who is now 13 but was, I guess, 11 when ChatGPT first came out, after using it for like a week, she was like, oh, if I want
(14:18):
credible information, I have to Google it to verify the source. She's 11.
I don't understand how college students don't realize that. But outside of academia, when you think about anything... It's not a good idea to ask it for medical advice, because the potential for something really bad happening is pretty serious, but if you do that, a lot of times it will tell you, go seek a medical professional. But other than that, when it's just giving you any information,
(14:41):
don't you want to know the source, like, what research was done that came up with this? You know, and outside of the medical field, even when you think about social sciences, there's a lot of... I'll give you an example cited as a good use for educators: think about facilitators or educators, and they use it to give them, like, a lesson plan or a workshop plan.
(15:01):
Now, right now, if you wanted inspiration about something like that, you could look it up online, and not only would you get ideas, you would know who the person is. If I knew Sharon was giving this workshop, and I know where Sharon is based, I know the kind of work she does, I know her values, then when I take her ideas, I know what values they're coming from. If I see five different ones, I know this one's coming from this country, this one's coming from that perspective, and so on.
(15:23):
If I let AI do that, it's randomly synthesizing what it finds is probabilistically likely to be what I want, and it puts it together in a way that makes sense. It sounds coherent, looks coherent, but I have no idea of the values behind it, the philosophy behind it, and I can't talk to that person and find out why they did that.
Whereas with you, Sharon, if I copied one of your ideas, I could come back to you
(15:43):
and say, oh, Sharon, why did you do that? I can look at your other ideas and see where it fits. That is important: for us to understand where the knowledge that we're looking at is coming from. And I think that's important everywhere in the world. It's important in almost every field. Why would you not want to know that?
If I'm doing something really creative, where there's no right or wrong answer, AI can be fun to use.
(16:05):
If you ignore the ethical issues, it can be fun to create images with AI. You're stealing the copyright of people whose images were used to train the AI, and it never cites them, and it never gives them any money for their work, and it never got their permission.
But it is sometimes fun.
Like, I want to create an image of a dog carrying a newspaper. I can look for that online, or I can try to create it, but I'm
(16:26):
not a very good graphic designer. I'm not going to hire a graphic designer just for the banner image on my blog, so that's okay.
That doesn't need a reference. But if you're trying to do something credible that is going to be used to solve a real-life problem, I hope you can look it up. If I'm just trying to come up with a creative title for my workshop, I tend to create boring titles, so I could give the blurb to AI and tell it to make
(16:50):
the blurb shorter, or tell it to rephrase it in a more exciting way. Most of the time I don't like what it gives me, but every now and then it gives me good ideas.
I think for that it's fine.
I don't want us to stop imagining and brainstorming, because AI is just sort of synthesizing randomly from what other humans have done; it's not going to create something that no human has ever imagined, in my opinion. Like, it can't; where's it gonna get it from? And I will say,
(17:15):
other than generative AI, like, the kinds of AI that are trained on a very, very specific task, like diagnosing breast cancer, that's been very well studied, and they know what they're doing. And it doesn't mean that the radiologists aren't important anymore. It just means they can do this part faster so the radiologists can do some more work.
But it also means when there is a new kind of cancer, there won't be AI for it, because the humans have to create the data that the AI is later
(17:37):
going to be trained on, for the most part. Although there are new types of AI that, they say, teach themselves without even data.
That may work for some things. But one of the very important things also is to think about the difference between AI for diagnostic versus prescriptive uses. If it's just to understand something, it's okay if it does something slightly wrong; a human will manage. But if it's something to solve a real problem
(17:58):
that maybe does have a correct answer, there's really no reason; it's not even efficient or productive to use AI, because you're going to spend so much time revising what it gives you.
And then you're going to keep giving that to a more junior person, who doesn't have that judgment and the ability to develop that judgment. So I think eventually we'll get better at figuring out where it's a really bad idea to use it.
(18:19):
We already know AI has been racist, bad at recognizing darker faces, for example. We know internet content is very racist. Google search used to be very racist. We already know that it has reproduced racism in the criminal justice system in the US, and in recruitment, not intentionally, but because humans
(18:40):
are like this and it reproduces it; but then it makes it look neutral, and there's no one accountable for it.
And that's why it's so dangerous.
Sharon Tewksbury-Bloom (18:47):
Yeah, I think it's
Maha Bali (18:48):
That's why we need to
know where this all came from.
Yeah.
Sharon Tewksbury-Bloom (18:52):
problematic... What seem to be problematic are these very large open models, where they're taking in vast amounts of data, and the person using them doesn't know what data they've trained the model on. And they're reproducing the existing bias in those large amounts of data, which we assume
(19:13):
are probably large amounts of what's on the internet, but we don't always know what's been fed into the training model.
I'm curious if you've had any experience with more closed models or custom models, where you can have a little bit more control over what data you're using to train them.
Maha Bali (19:34):
Yeah.
I have a little bit of experience with this, not a huge amount, but I have tried creating my own custom bots, where I feed it a PDF of something. You can give it more than one, obviously. And you say, only use what's in that to answer the question.
So I can imagine this being useful, um, for having, like, a teaching assistant bot to respond to student questions that are already answered on the syllabus.
(19:55):
Although, really, they should just read the book and the syllabus, but anyway. Like, shortcuts are not always useful.
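[For readers who want to tinker: here is a minimal sketch of that kind of document-grounded custom bot, using the OpenAI Python SDK. The model name, the file name, and the idea of pasting extracted PDF text into the system message are illustrative assumptions added for this transcript, not a description of the exact tool discussed here.]

```python
# Minimal sketch of a custom bot constrained to a supplied document.
# Assumes: `pip install openai`, an OPENAI_API_KEY environment variable,
# and that the PDF's text has already been extracted to syllabus.txt.
from openai import OpenAI

client = OpenAI()

with open("syllabus.txt") as f:  # hypothetical file of extracted PDF text
    document_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "Answer ONLY using the document below. If the answer is "
                "not in the document, say you don't know.\n\n" + document_text
            ),
        },
        {"role": "user", "content": "When is the midterm exam?"},
    ],
)
print(response.choices[0].message.content)
```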
Um, but generally, using AI to summarize large documents has been for the most part useful, though it still misses nuance and everything. But if you weren't going to understand the document on your own anyway, if that's the only way you're going to read it and get anything out of it, I think that can be useful.
(20:17):
For the most part, when you do a custom bot, you can control the heat, what's technically called the temperature. I don't know if you know about this; this is something John Apollo taught me. So, like heat: the hotter it is, like molecules that move very fast, the more random it's going to be, so the more likely it is to make a mistake. But if you make it very cold, it's going to be closer to what's there and be less generative. So it's more likely to stick to what is in that document.
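[As a concrete illustration of that dial: temperature is the standard name for this setting in most chat-model APIs. A hedged sketch, under the same assumptions as the example above:]

```python
# Minimal sketch of the "heat" (temperature) setting: 0 keeps output close
# to the most likely wording; higher values make it more random/creative.
from openai import OpenAI

client = OpenAI()

for temp in (0.0, 1.5):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        temperature=temp,     # 0 = "cold" and literal; higher = "hotter", more random
        messages=[{"role": "user",
                   "content": "Rephrase: late work loses 10% per day."}],
    )
    print(f"temperature={temp}: {reply.choices[0].message.content}")
```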
(20:38):
So I know this is a very simple version of what you're asking about, because I haven't had time to explore it more deeply.
I've also experimented with the Arabic language, giving it Arabic grammar and so on. It knows Arabic grammar, Modern Standard Arabic, but no person in the world speaks Modern Standard Arabic. It's just what we write. Every country speaks a different dialect, and it's not so good at the
(21:00):
dialects, and it mixes them up together. So we were trying to give it Egyptian Arabic, alongside Modern Standard Arabic, to help it do better on tasks related to that, because one of our Arabic professors wanted to use it. That didn't work so much better than the general version that was not trained on the particular data. We're not really sure why. Um, so I'm not conclusive enough yet about this.
(21:26):
I need to experiment with it a little bit more.
Sharon Tewksbury-Bloom (21:28):
Yeah.
And I did take a course through MIT's executive education program about artificial intelligence. And one of the things that I learned in that course was that, because most of the models that we're using were created and trained by companies in the United States, in English, it does have a strong English bias.
(21:51):
And then, as they've started to get it to be used for translation into other languages, it's doing best with languages that still have a relationship to English in terms of, like, structure, you know, the grammar. The course talked about how something like Hebrew is really hard for it, because it's
(22:12):
a completely different type of language. And so I think that's something that's going to be a bias for a very long time. It's not just that it can't translate; it's also a way of structuring thought and a way of structuring writing, and such, that goes beyond just the language itself.
Maha Bali (22:31):
Yes, yes, yes.
And what it actually does right now is, when you ask it a question in Arabic and ask it to answer in Arabic, you can tell that it's someone who thinks in English but is speaking Arabic. It sounds like an American speaking fluent Arabic.
And there's a beautiful study; I think it's called "Which Humans?" And basically what they did is they put something called the
(22:52):
World Values Survey through ChatGPT, and they compared the answers of ChatGPT versus different countries in the world.
Have you seen this one?
Sharon Tewksbury-Bloom (23:00):
Mm mm.
Maha Bali (23:02):
And so, okay, the closer you are to U.S. culture, the closer ChatGPT's answer is going to be to yours. Obviously, U.S. culture is not monolithic. It's very diverse. But on average, which is like nobody's average, but anyway, on average, ChatGPT's response is very similar to what you would say in the U.S. or U.K. or, apparently, Singapore and Australia.
(23:24):
And for the countries that are farthest in culture from U.S. culture, ChatGPT's responses are so far away from what the typical person would say, and those countries are Egypt, Pakistan, you know, Arab and Muslim countries for the most part. And so that was very interesting. They didn't have all the countries in this
(23:44):
chart that I saw, and a lot of people noticed that not all the European countries were there, not all the Asian countries were there, but a lot of the countries that I saw were like this.
And I'm thinking, this also explains why, when I talk about the cultural bias of ChatGPT to people who are based in Northern, Western countries, they're
(24:04):
less angry about it, because it's not biased against their own culture, so they don't see it as often as I do. Like, you may see it, maybe, if you're a minority within these cultures; maybe you notice it a little bit more, but...
Sharon Tewksbury-Bloom (24:17):
It's interesting how, even within those cultures within a culture, there's that bias.
So I think one example is from image generation. I have been having fun playing with creating my own wallpaper using AI. And so I take my own photography of the natural landscape, and then I turn
(24:38):
that into wallpaper with the help of AI.
And one thing I've realized is that Midjourney, which I'm using for it, has no idea what an ocotillo plant is, which is a plant that is native to Arizona. And it's a very unusual plant. It doesn't exist in very many places.
(24:59):
And so I've prompted it directly to, like, recreate a picture of an ocotillo. It can't do it; it does not know. I give it the exact scientific name, I give it everything; it can't do it. It just has no idea what an ocotillo is, and it's a common plant...
Maha Bali (25:15):
enough connections.
Sharon Tewksbury-Bloom (25:16):
Yeah, so
it's interesting to me that even
Maha Bali (25:19):
I love that you brought this up, because there's one use of AI that I support very much but that I also think is going to be problematic in the way that you just described, which is the use of AI to support people with disabilities.
And so, something like, there's a tool called Be My AI, but a lot
(25:40):
of AI tools can recognize images and tell you what's in them.
And I'm always concerned, but I've used them, and they can be brilliant. Like, there was one time one of my students who is blind let me use his; it was to take a picture of a handout that someone gave him without thinking about the fact that he's blind, so he can't use the handouts.
And so what was funny about it is it read the handout properly.
(26:01):
It didn't have a lot of text. It had some images; it described them very well. It also described my hand and a little bit of my shoe that was showing. It was, like, really, really accurate.
But what I was always concerned about is: what if I show it an artifact that is very common in Egypt but that it has never been trained on? And what it does, I think, is it doesn't tell you, there's something there and I don't know what it is.
(26:22):
It tries to explain what it is.
So I was recently showing it a certificate; well, okay, first of all, I was showing it a certificate of achievement for something, written in Arabic. It understood that it was in Arabic. The date was written in the Gregorian calendar. It converted it, not correctly, to the Islamic calendar, and that's not what was written on the certificate.
(26:44):
I'm like, I don't understand, why did you do that? Like, why does it think that? I don't even understand; the majority of people in the Arab world use the Gregorian calendar. Just because it's not Arabic, I think it just decided to give a date in that. And I also showed it some Arabic written in a bit of a floral, like, font.
(27:05):
And this is very normal.
Arabic is often written in floral fonts.
It's not usually written in what you would consider a regular, readable font.
And what I thought was really funny about it is that it kept making up words that have nothing to do with the word that was in the image. Like, the Arabic word for what it thought was being said does not look like that. And when I told it it was getting it wrong, it kept making up other Arabic
(27:26):
words for what it thought someone might want to put on an image like that, but that actually look nothing like it. It was hallucination to, like, the millions. I couldn't even imagine why it was coming up with this.
Sometimes you can understand why it's making a mistake, but this one, I don't even know how it was coming up with these random phrases.
Sharon Tewksbury-Bloom (27:46):
Yeah.
It's interesting. So first of all, I want to point out what you were highlighting there: that it doesn't say, I don't know, or, I don't have that information. It really has been trained to try to be helpful. And so its interpretation of that is to keep trying, even if what it's giving you is not what you're looking for, is inaccurate, or is actually, you know,
(28:11):
going to lead you in the wrong direction. And if someone's depending on that for, you know, an accessibility tool, like being blind and using it to read something for them or explain an image for them, it's potentially very problematic.
I think it's also interesting that, for those of us who have the privilege of being pretty close to the language and life experience that it was trained on,
(28:36):
that's helpful in the sense that it's often giving us what we want, but it's also unhelpful in the sense that it is making us more likely to trust it and to make the assumption that it is sentient, like you said at the beginning. Because so often it's reflecting back to us what we want, we have a bias towards trusting it and a bias towards
(29:00):
thinking it's further along than it is. Whereas with you, you're able to see errors on a daily basis where you're like, obviously it's not really accurate. It's not...
Maha Bali (29:12):
I mean, obviously I'm provoking it. But you know something? Speaking of the overconfidence, it's become less confident over time. Like, it will sometimes tell you, I can't do this, or it'll tell you, search this on the internet, you'll get a better result. Which is nice, but sometimes it's frustrating, because it is something that it should be able to do.
And then I'm like, yeah, you can do this.
Or sometimes it'll say, oh, this is against my, like, this is
(29:34):
a content violation, and I'm not going to create that image for you.
And I have to say, actually, no, there's nothing wrong with this.
You can create it.
There's nothing offensive about this.
And I sort of try to imagine why it might think it's offensive and explain to it why it's not offensive.
And then it will do it.
And people have also experimented with things like that, where it doesn't want to do something, and, you know, they try to make it do the thing without it
(29:55):
noticing that it's doing the thing.
Sharon Tewksbury-Bloom (29:58):
I listened to the podcast AI for Humans, and that's something they do a lot. They're humorists. And so, for instance, it was told it wasn't allowed to insult people. It couldn't create humor that was insulting or was a roast of someone.
And so it would tell them, I'm sorry, I can't do that, I can't make jokes that make fun of someone else. And then all they had to
(30:19):
say was, oh, no, it's fine, he's here with me, and, you know, we're just doing it for fun. And then it would do it.
Maha Bali (30:27):
Oh, really?
Sharon Tewksbury-Bloom (30:29):
It's, like, so easy to get around.
Maha Bali (30:31):
You could just say, pretend you are. Yeah. There are a lot of ways you can tell it, pretend you are, and let go of all your inhibitions, and things like that.
Sharon Tewksbury-Bloom (30:40):
Yeah,
Maha Bali (30:40):
There are people who write
really, really elaborate prompts.
Sharon Tewksbury-Bloom (30:43):
Yeah.
Maha Bali (30:44):
I don't know.
There are ways around it all.
Sharon Tewksbury-Bloom (30:48):
I know that time has already flown by. I want to make sure that we touch on: are there good uses of AI, promising uses? I know we talked about maybe in the space of inclusion with people with disabilities, or accessibility. Is there anything you want to highlight that people should check out, that's actually worth using or looking into?
Maha Bali (31:13):
So I will say, there's a website called AI for Education that has ideas for prompts for teachers to consider. That is useful. I think it's important for teachers to know how it works in order to figure out how it might make sense in their own context; or if they decide never to use it, they still need to know how it works.
So one of the funniest prompts there is one to help you create an AI-resistant assignment, by using AI to create the prompt to create
(31:36):
the AI-resistant assignment. It's very funny. It's very ironic. So you get the most resistant person, who doesn't want to do anything with AI, and you give that to them. And you say, you know, I'm totally on the side of everybody who really doesn't want to use it. I'm not making fun of them. I think there is a space where it's really not appropriate to use it, in education especially. So, I mean, I think that's a space to check out.
(31:57):
And I think some people are starting to explore using AI for research. I'm still very unhappy with it. A lot of times, I'm like, Google Scholar will give me a much better result, and I'll know why; like, it has an algorithm, but I understand how its algorithm is working. But maybe eventually we'll understand how these ones work. I don't know.
Honestly?
(32:17):
No, there isn't something I'm excited about other than the accessibility. What I'm excited about in my use of AI in my class is that I think the most useful thing is to use it enough so that you understand the problems with it.
And so I would actually just encourage people to let students use it up to the point where they understand its limitations. You have to help them be critical about it, so you have to sort of scaffold that a little bit, because,
(32:39):
on their own, undergraduate students and children may not have that criticality yet.
It's the same as, I guess, when the internet first came out: people thought everything on it was credible, and then they understood it wasn't. And then social media came out, and you thought, oh, if Sharon posted this, and I trust Sharon, then what she's saying is accurate. But we didn't realize that it wasn't Sharon saying this; she was just forwarding it. You know, that kind of thing.
(33:01):
So I think once people get that about AI... I know that there's a lot of research about how what we think is going to be productive actually isn't. These kinds of things take a really long time before they really help with productivity, so I'm concerned about people following the hype and having knee-jerk reactions, stopping the hiring process or whatever people are doing right now.
Sharon Tewksbury-Bloom (33:19):
And I do think that there's a sense of magic to it. When people see it, generative AI in particular, when they see it write for the first time and create something, especially if the prompt is fairly easy for it to write something that sounds good, there's that excitement and possibility that gets people either wowed or, for some people,
(33:42):
freaked out, depending on your reaction. But I do think it's right that once you've had more experience with it, you can understand more what it can't do, what the nuances are, what the challenges are. So I agree that getting people past that initial, oh my gosh, it answered my question and it did so quickly, you know, is critical.
(34:06):
Yeah.
Okay.
Maha Bali (34:07):
Um, I was just going to say, about that element of it doing it so quickly: if there's so much in our lives that AI is doing that quickly, if it's really so
(34:28):
unconnected to who you are as a person that you wouldn't have anything to add to what AI is giving, I don't really know why you're doing this in the first place. Like with assignments, but also with work, like maybe with a lot of work emails. Like, when people say, it writes my work email, is it gonna do the work you promised you're gonna do in that email? Like, what is that? Do we really have to write all of that, or can I just say, yes, I'll do it? You know what I mean?
Like, there are students who have used AI, and they use multiple AI tools. They're very weird, the way they use it; it's not how adults use it. They use, like, two, three tools in a row
(34:49):
in order to write an email to a professor to apologize for missing the exam and to ask for a make-up.
Sharon Tewksbury-Bloom (34:56):
Yeah,
Maha Bali (34:59):
So eloquent, like no student writes like that. And it looks like they're lying, even though they might have a legitimate reason; they used AI to write it, so it looks so inauthentic, sounds so inauthentic. And I just, yeah.
But anyway, I'm very frustrated by how people talk about how it's going to replace humanity or whatever, as if humanity all existed as text, you know;
(35:21):
there are actual human interactions and things that we do in tangible ways in real life that are not written down.
Writing is just a proxy for all that.
It's not the actual work.
It's not the actual thinking or feeling.
Sharon Tewksbury-Bloom (35:33):
I think that's an interesting bias too, that people in what we used to call the knowledge worker space, who do a lot of their work at a computer, you know, think all jobs are going to be replaced by AI. But, you know, my husband's an electrician, and they cannot hire enough electricians, and he doesn't even have basic technology that
(35:55):
could help him not break his back while he's installing all of the electrical equipment. There's so much of a gap between how technology could change our jobs in one industry versus another. And I think people are sometimes unaware of that. So that's a whole other topic, but right.
Maha Bali (36:19):
The thing that's going to happen with technology there is it might support them in a way, and you're saying that doesn't even happen with your husband. My husband's a surgeon, and so, yes, he uses his hands like an electrician does, and what happens with technology is that they create a different technology for him to use as a surgeon. It doesn't replace the surgeon. It just changes the job of a surgeon a little bit, where they can see things that they normally wouldn't be able to see, because it magnifies them.
(36:41):
Or it allows them not to put their hand inside the patient, but to put the tool inside the patient while their hand is outside. Just a little bit safer maybe, or a little bit more accurate, but for the most part not totally replacing a lot of jobs, honestly.
Sharon Tewksbury-Bloom (36:57):
As we wrap up here: if people want to continue to follow your research, and I know that you're active on social media in different places talking about these issues, where can people continue this conversation or learn more about what you're researching?
Maha Bali (37:12):
Thanks.
I blog at blog.mahabali.me. My publications are all there, but I often talk, narrate through my process, you know, of the research. And on Twitter, I'm Bali underscore Maha. I do not like to call it X; that's such a weird name. And it also reminds me of who owns it now. Like, I liked it when it was Twitter. And I'm on LinkedIn as well, with just my name. But I'm most active on Twitter and my blog.
Sharon Tewksbury-Bloom (37:36):
Great.
We'll make sure to link those in the show notes as well. So thank you so much. And I look forward to following you there as well and staying in touch. I've been learning so much from you, and I appreciate you being willing to join the conversation.
Maha Bali (37:49):
Thank you so much, Sharon.
I really enjoyed the conversation, getting to see you again, and I hope I get to see you another time.
Brian AI (37:57):
Thank you for joining us on this episode of AI for Helpers and Changemakers.
For the show notes and more information about working with Sharon, visit bloomfacilitation.com.
If you have a suggestion for who we should interview, email us at hello@bloomfacilitation.com.
And finally, please share this episode with someone you think would find it interesting.
(38:17):
Word of mouth is our best marketing.