
November 4, 2024 59 mins

In this episode, we dive into emerging tech with Marsha Maxwell, co-founder of If These Lands Could Talk and Head of Innovation at Atlanta International School. Marsha shares insights on empowering indigenous and underserved communities through AI and VR, the ethical challenges of integrating AI, and the importance of digital inclusion. We discuss the impact of AI on knowledge, culture, and education and examine how to responsibly bridge gaps in tech access worldwide. 

In this episode we cover: 

  • Exploring AI and VR for indigenous and underserved communities 
  • Bridging digital divides: Tech access for all 
  • Ethical challenges in AI and identity 
  • How to navigate digital authenticity in the age of deepfakes 
  • The future of AI in creative and cultural spaces 
  • Practical strategies for blending AI with education and learning 

Tune in for a compelling look at the intersection of technology, education, and culture. Don’t forget to like, subscribe, and share to stay updated with our latest episodes! 

#ArtificialIntelligence #EmergingTech #DigitalInclusion #CyberSecurity #DataProtection #AIinSecurity 


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Joshua Schmidt (00:04):
Welcome to The Audit, presented by IT Audit Labs. I'm your co-host and producer, Joshua Schmidt. Today we're joined by Managing Director Eric Brown, our associate Bill Harris, and our illustrious guest, Marsha Maxwell. Marsha has quite the track record. Just a couple of highlights here that I pulled from your LinkedIn: you're the co-founder of If These Lands Could Talk, a

(00:26):
mission of empowering native and indigenous communities globally through digital technology. That's how we found you. I saw your TED Talk, which was great. It's called Stories for the Future; I highly suggest our listeners check that out. So we have lots to talk about. Today we're going to be diving into AI and VR, your take on that, passing ideas around, and maybe talking about

(00:46):
the future and where that's going to lead us. So, without further ado, Marsha, maybe you could give us a little background on yourself and tell us what you've been working on lately.

Marsha Maxwell (00:55):
Yes, so I'm working on several projects. One is with If These Lands Could Talk, which is an organization that I co-founded with my partner, Natasha Rapsad, and we are trying to introduce indigenous and native peoples around the world to emerging tech. Another nonprofit that I have is Liminal Spaces, which is

(01:15):
doing the same kind of work but with underserved communities, mostly in America. In the meantime, I'm also the Head of Innovation, Research and Technology at Atlanta International School, which has a mission to spread education around the world and help people develop a more global mindset as we enter some very interesting times

(01:37):
that are coming up for all of us, not just in America but around the world.

Joshua Schmidt (01:40):
Absolutely, looking forward to hearing more about that. So in this part of the world we're bracing for the winter that's coming: the cold weather here in Minnesota, and Bill's out east in Maryland. So we're dreaming about warmer climates, maybe a winter vacation. Do you have any travels coming up that you have planned?

Marsha Maxwell (01:58):
Yeah, I'm going from hot to hotter. So I'm going to Brazil in a couple of weeks to work on a project, and later in the year, in December, I'm going to Ghana to work on another project, actually revolving around AI and identity.

Eric Brown (02:13):
Oh, that's really interesting. Marsha, what sort of work are you doing that involves Ghana in this kind of emerging technology?

Marsha Maxwell (02:23):
Yeah, so I'm working in partnership with Google. Google has an AI for Africa hub in Accra, which is the capital of Ghana. So we're going over there, looking at identity for peoples that have moved away from Ghana, and we're doing kind of a mini documentary that

(02:43):
involves people who have Ghanaian ancestry. We're doing that here in America and we're taking it back to Ghana to showcase, and hopefully there are some interesting stories coming out of this. We're working in conjunction, as I said, with Google. We're just trying to introduce AI to a continent that's been really innovative over the years, but, you know,

(03:06):
people have a different perception of Africa and what African thought leaders are doing. So this is just to show that, yes, the future has not left certain continents behind. Everybody has a role to play in this digital future that we're creating.

Eric Brown (03:27):
As you're talking through that: as we're on this podcast here, I'm in a southern part of Pennsylvania at this cabin that's way out in the woods, and I'm on Starlink right now. That's how I'm getting to the internet and being able to work and do things like this podcast. How have you seen technologies like Starlink open up access for

(03:48):
other areas of the world, or maybe not? Have you seen these types of emerging technologies influence any of the projects that you're working on?

Marsha Maxwell (04:03):
Yeah, well, I work on a different side of it. For me, the penny dropped during the pandemic, where, as a technology leader in a school (my school is fairly well-funded), we pivoted really quickly, gave children devices, went home, and never once thought about somebody not

(04:26):
having internet access. Then, as we went through the pandemic, we were hearing from different parts of the community that you can give somebody a device, but they don't have any method of actually using that device. So that kick-started a lot of these ideas. Programs like Starlink are very important to be able to get connectivity, or at least the ability to connect, to

(04:50):
people who are not in the general hub where we have everything connected. But I also think it shines a light on a problem: we think of this as a problem of developing countries, not a problem actually in our own backyard. And then you see the reports from many of

(05:10):
the areas where there are indigenous peoples who sometimes don't even have electricity. So we're thinking about these really high-level problems while there are some very low-level problems, like connectivity and actual electricity, that people are still struggling with. So these kinds of projects, going to Africa, going to Asia, going to South America, I think, keep us globally

(05:35):
alert, or aware of the problems that are around, and not just thinking, okay, this is a really cool widget or gadget, what can it do for me? But really, how do we bring everybody along on the journey and not leave anybody out? Because there's so much that needs to be done and very few people that have the ability to

(05:55):
do it, especially with the new AI stuff that's coming out.

Eric Brown (05:59):
So, as you are working to bring emerging technology to underserved communities or indigenous communities, what are those emerging technologies that you're bringing, and how are you integrating that into some of these communities that may not have things like electricity?

Marsha Maxwell (06:19):
Yeah, well, the ones without electricity, I have to admit that I haven't been working with those communities so much. It's usually working with entities that are also involved in the community. One thing that I've learned over the past couple of years, working with If These Lands Could Talk and really getting into the indigenous communities, is that there has been a history of

(06:42):
appropriation and definitely of distrust. So the main part of the work, or the biggest part of the work, is establishing that trust and allowing people to determine how they're going to interact with the technology, and also what parts of their communities they are going to allow to be

(07:03):
affected or shown by the technology. That's something that I had to learn. I didn't have to learn it the hard way, because I'm a good listener, but through empathy interviews and things like that, I'm hearing things that I would never have thought about. And you think about it, you know. There are many people who get, like, Maori tattoos, and it doesn't mean anything to certain people, but when you go

(07:23):
talk to a Maori person, it means a lot, right? In the same way, especially when the NFT boom was going, people would be appropriating different symbols that were either sacred or very important to certain peoples, and just using them without permission.

(07:44):
So when working with some indigenous communities, it's really important that you understand how they want to use the materials, what things they allow to be shown, what things they don't allow to be shown, and follow along like that. Working in Africa has been really different, and that's been mainly with a lot of the students that I work with there. They've heard of stuff, right? They've heard

(08:07):
of VR, they've heard of AI, but they haven't actually interacted with it. So when I talk about AI and identity, it's really about how not to forget who you are, but how do you use AI to augment who you are, and how do you use AI to connect with people who have different experiences around the world and different perspectives than yours?

Joshua Schmidt (08:32):
Just from the small amount of generative AI that I've been using to create text or images, I could really see how that would homogenize a lot of the things that people are creating. So when you're working with generative AI or VR, what kind of recommendations do you make to your pupils, and how do they maintain a voice, stay authentic, and have that

(08:55):
personality shine through the technology?

Marsha Maxwell (08:57):
Yeah, I think one of the things I talk about is knowing who you are and understanding who you are, knowing what your ethics are, knowing what your values are, and understanding where you come from. All of us come from somewhere, you know. I think in America, we think of "ethnic" and we think of people of color, but every single person in America has an

(09:18):
ethnicity, has a background, has a family history. So we start sometimes by saying, okay, tell somebody a story, a memory of your childhood. Some people's memories revolve around food; others maybe revolve around a certain holiday or visiting a certain place, and there are certain feelings that that evokes.

(09:38):
And so, after you think about this offline, I do an exercise where I say, well, how would you explain that to somebody from fill in the blank, who has a different perspective? What were the commonalities? And then using AI to help you bridge those gaps. At the same time, in the

(09:59):
beginning, when everybody was hopping on ChatGPT, it was okay, you know, but it was spitting out very homogeneous stuff. Right, it wasn't really differentiated. But the better the prompt, the better the response. So helping them with prompt engineering so that they can manufacture a better response, and then giving it to

(10:20):
the person of that background and having them read it and see how close it is: better or worse than the first iteration of what the person had written freehand, from their own thoughts? Sometimes it was better, and we had some laughs about what the AI spat out. But really it's not necessarily about the tool; it's a

(10:43):
tool to help people start thinking about what they're putting out there and how it's being perceived, and also using the AI, like I said, to augment that, finding more ways and better words in order to actually meet other people where they are.

Eric Brown (10:59):
I'm glad you brought that up, Marsha, about AI being yet another tool, or an arrow in our quiver, that we can use. It reminds me of a story about a colleague of mine recently. He's going through some education, he's taking some college classes, and one of the classes that he's taking is

(11:21):
around religion, and one of the assignments he was given was to research a religion that's not his own and then to write a 12- or 15-page children's story explaining the religion. And his teacher said, you know, you can't use AI to do this,

(11:47):
which is kind of a perfect use case for AI, if you wanted to just take a shortcut and do it. So we sat around and chatted through what this really means: how are we using these tools at our disposal, versus what is the assignment attempting to have us learn, and

(12:11):
how is that going to enrich us? You could really just have AI spit that out and probably do it in 20 minutes, versus really learning about that other religion, learning it enough to be able to write about it in a way that you could explain it to someone else at a very basic level. So we were just chatting through some ways where you

(12:32):
could achieve that same thing but use the technology that's available to us today. I always enjoy having those conversations around, you know, how do we incorporate the technology at our disposal? When I was growing up, it was, oh, you can't use

(12:53):
calculators when you're in school. It's like, well, calculators are ubiquitous. I'm never going to be on a desert island where I need to do long division on paper. It's just not going to happen.

(13:21):
So is it better that I know how to use the tool well to achieve the outcomes that I want, or do I pretend it doesn't exist? Do we continue to evolve our education and our learning, but incorporate those things that are advancing with us as an overall society?

Marsha Maxwell (13:36):
I think one thing that I hear a lot in educational circles (and I believe in using AI, so when I say what I'm going to say, keep that in the back of your mind) is that it's not important that kids know things, because AI will; they can just plug it in and figure it out, and they don't really have to think. And I think that's 100% wrong.

(13:58):
I never want a doctor who's going to be asking ChatGPT how to fix me when I'm on the operating table, right? So I think the combination is important. I think having that knowledge is really, really important. Otherwise, you're not going to be able to test what's being spat out by the AI. So if I don't know anything about this other religion, or whatever I'm comparing, then whatever AI tells me, I'm going

(14:20):
to say, okay, that's right. Whereas if I know what the beliefs are, and I know what the processes are, and I know how to do long division, then I can do spot checks and see how much is valid and how much isn't. So this idea where we're going to cede all

(14:41):
responsibility and all thought to AI, I think that's very, very dangerous. So I encourage students to use it to make what they're doing better and to give them ideas. Maybe it spits out a couple of ideas that they can then riff off of and do something great. And also as a test, to see where they can maybe make

(15:06):
something better or worse, but not a hundred percent just AI. In the beginning, people were using it and it was spitting out fake bibliographies that looked okay but were fake. So if you didn't know that such-and-such was not a real person, then you would just turn it in,

(15:27):
right. But if you know, okay, this is not a leading scientist in this field, or you at least do a double check of the references to make sure. So I think it's yes and yes: we should use AI, and we should also learn the fundamentals and some of the harder stuff about the different topics.

Eric Brown (15:50):
I really love the way you're blending the technology with the philosophy of education and learning. When ChatGPT was coming out, as you said, a couple of years ago, and I think this still holds true today: if you get, say, a paragraph from

(16:12):
ChatGPT, or you ask it what the first page of War and Peace is, and it spits it out, then if you ask the generative AI how many words were in the content it just sent you, and you ask it five times, it's going to give you five different answers, because it doesn't have that concept. Or at least, I haven't checked this in a couple

(16:33):
of months, but it didn't have a way of accurately counting how many words, or maybe it didn't even really understand that concept, because as a large language model, it's really just a great predictive tool; it's just predicting that next word. So we kind of blend that artificial intelligence and

(16:54):
just think of it as a magic black box, but really it's just a really great way of guessing what the next word is that humans would use in language. And Bill's done a few talks on some futuristic computing things, quantum being one of them.

(17:14):
Another one on storage: storage using crystals, holograms, and DNA, stuff that's on the forefront of science fiction, right? And we're all headed there, hopefully in our lifetimes, where we're going to see some of these things come out of the lab and into practice.

(17:37):
AI, or, you know, quote-unquote "AI" and large language models, will just become what today we would consider a calculator, compared to how far we're going to have come in the future. It's really quite a lot to stay on top of for those of us who

(17:59):
are in technology day in and day out. So for those that are on the fringes, or don't interact with it much at all, it could really sweep past a few generations of people quite quickly.

Marsha Maxwell (18:13):
Yeah, and that's scary too, and I think that's one of the reasons why I started working in this area, especially with these other communities: it moves so quickly. We talked about how OpenAI is just a couple of years old, maybe two and a half or something like that, in the public consciousness anyway, and in that time there have been a million and one other

(18:37):
OpenAIs that have started. Everything that we do has an AI component to it, whether it's a chat or something, or helping you with your emails or whatever. And so if you don't have the luxury of having context or having the ability, I can't imagine what's going to happen in ten years, right, or five years.

(18:58):
You're going to be so far behind. A couple of years ago we were talking about the digital divide. That was a big thing, and it was just having access to computers. But now computers are kind of passe. I mean, it's like, okay, next, there's something else. Everything is compounding, and so I kind of

(19:19):
feel that pressure that if you miss this wave, you're really going to be lost. You think about our parents. I think about my parents, and they wouldn't know what to do with AI. I think it would freak them out, or it kind of freaks them out, right? The ability for it to be so human-like. And I think most of the AIs now are quite good. I mean, they can write a decent email or letter, and

(19:41):
you have the ones that are calling you, the robocalls, and they sound like a human being. I mean, you can tell when it's not, but it kind of sounds like a human being and it seems really realistic, and as time goes on, that's going to get better and better. So, yeah, I think we have a really interesting future, near future.

(20:02):
I'm excited about it. We'll see what happens. I hope it turns out well. I think sometimes people don't get the right lesson from the movies. You're not meant to be doing certain things, and like, that wasn't the point. Which goes back to education again, right? So maybe they didn't understand: good guy, bad guy, you know, the moral of the story, that kind of thing.

(20:24):

Joshua Schmidt (20:24):
So yeah, I'd just like to jump in, since we have an educational expert here, and I want to get your take on this, Marsha. And then, Bill, I know you're a father as well, so I'm going to bring it here for a second. One thing I've been thinking about with my kids is, you know, knowledge is easily acquired these days, whether it's through ChatGPT or Google or what have you.

(20:45):
But wisdom? I was reading a book called Breaking the Habit of Being Yourself by Joe Dispenza. He's a little out there, kind of a healer, alternative guy. But one of his quotes that I really enjoyed is: knowledge without experience is merely philosophy; experience without knowledge is ignorance. So it really takes that personal experience,

(21:06):
combined with knowledge, to create wisdom. It's easy for me to throw my kids in front of PBS Kids or put a tablet up at the restaurant, but you're really missing that piece of the wisdom that you get by being bored, or that creativity that's sparked by just having your mind wander. So I'm wondering what your approach is to that.

(21:29):
I'd like to rope in Bill, because I know he's a father as well. How do you navigate that experience with your children and devices, when to bring in certain things? When do you give them a tablet? When do you give them a cell phone? When do you give them some freedom?

Marsha Maxwell (21:47):
Well, I would say that for me, I've started this kind of low-tech, no-tech type of thing, which is kind of weird, right? Because I am super emerging tech, love it, but I'm also thinking that I'm old enough to appreciate it. In a way, I feel very, very lucky to have been born at a time when we had a lot of manual stuff, especially like

(22:11):
photography. You know, there was a film camera, so you couldn't do what kids do now, where they just push the button and hopefully one of the 300 pictures they take in two seconds is gonna look good, right? When you have a film camera with 24 or 12 shots, you're really gonna pick

(22:32):
your shots. Film's expensive; you're not going to be wasting those shots. So now I've bought a film camera for the kids. I even bought the Polaroids, you know, the ones you have to kind of shake. So now it's like, okay, what am I going to take a picture of?

(22:54):
Because I only have 10. That's all I have; I do not have another.
So it forces them to slow down, to think more, to compose the shots better. And again with writing: I have a million notepads and I love pens. I do a lot of writing, no computer, just write it, you know, re-engaging, so you have to think. The next thing I'm going to buy is a typewriter.

(23:16):
But I remember being in school and we had to type our reports. You'd type the whole thing, and then in the very last paragraph you'd make a mistake. And if you didn't have the, you know, whiteout thingy, or

(23:37):
the whiteout would look really bad and you wouldn't want to turn it in because it looked so bad, you'd have to scrap it and type the whole page over again. But that teaches you to be careful about what you do. I think a lot of times now, kids aren't really careful about what they do, and this kind of easy access has made a lot of people, this is going to sound really bad, but sloppy and lazy, right?
So they just one-shot it. You know, I'll write something,

(23:58):
I'll turn in my first draft. Or I'll turn in the paper and it looks really bad, but I don't really care; I fold it, whatever, I give it to the teacher. So I'm a stickler for: don't give me something crumpled or stained. Have pride in your work and think about what you're doing. So the first draft isn't it; go write it again.

(24:21):
Second and third drafts are always better, no matter how perfect the first one seems to be.

Bill Harris (24:26):
Well, I really like that answer that you provided, Marsha, about making sure that people take pride in what they do and how they think. So the question I've got, related to that: I was at a conference earlier this week where the keynote speaker was talking about the impact of influence operations on our culture today,

(24:49):
and it made me think that you've got this confluence here of a huge amount of data, social media, and now artificial intelligence. So, going back to Josh's comment about having the wisdom to navigate these things, do you feel that AI will either help, hinder, or have no impact on

(25:13):
how these things come together, so that one can differentiate fact from fiction, can see the nuances in the data that we have, and apply some of that wisdom to this enormous landscape before them and the tools that they have?

Marsha Maxwell (25:29):
Yeah, I think that's a hard question, because I want to say that AI is going to help, but I think in reality, especially right now, it's going to hurt a little bit, because people already don't know how to distinguish fact from fiction,

(25:49):
and it's always relied on being able to trust the people that they're listening to or seeing. But now... I had the pleasure of making a deepfake. I gave a talk at a school, and I made a deepfake of their principal saying something that he would never say. And when they were first watching it, he started

(26:14):
off fine, and then he kind of went off the script, you know. And I got his permission to do it; I didn't just do it. But some of the kids, you could tell that some of them were really happy with what he said, because it was something they wanted to do. I don't know what it was, but maybe you can

(26:36):
run around the school or something like that, and they're really happy. Other kids kind of cocked their heads a little bit, and they're kind of like, hmm, is that Mr...? They were kind of confused, because that went against what they knew about that person. So in a context like that, if it's somebody that you know, and you can trust what they would or would not say, then

(26:59):
that's fine. But in this world of social media, you're halfway around the world; you have influence all around the world, right? So if somebody on social media says something, I don't know if this is something they would actually say or not. And then, especially the younger students, or young, young people,

(27:19):
they're going to run off and do what this person said. So imagine if you made a deepfake of Taylor Swift. She has a billion Swifties, and now, all of a sudden, all these 10-year-olds are going off and doing something because Taylor Swift told them to do it. So that worries me a lot. Right now, I don't know how people will distinguish right, or truth,

(27:45):
from fiction. I personally think that's a problem. Yeah, I really think that's a problem, and I don't know how to solve that problem. I'm sure somebody smarter than me will come up with a way to solve it, but I think for me that's the biggest problem now: that you can't actually trust what you see. For the first time, I think, in human history,

(28:07):
you can't trust what you're actually seeing. And now, with the advent of maybe better holographic technology, which I think is the next thing, it's going to be really, really hard to distinguish fact from fiction. I'm hoping that it doesn't spark a backlash where

(28:30):
technology is banned or something like that so people can catch up. But right now, I think the bad actors are going to be acting faster than the good actors can to stem whatever is coming.

Eric Brown (28:45):
In your school, Marsha, are the students able to carry their cell phones with them throughout the day? The reason I ask: in Minnesota, there are some schools that have adopted a no-cell-phone policy, because the students were just so distracted that it was impacting the education and learning of other

(29:09):
students. So now they've taken the stance of no phones in the classroom; they have to put them away in a locker or something like that. So I'm just wondering how your school deals with that and what you might have seen as an educator.

Marsha Maxwell (29:25):
Yeah, right now we're dealing with that. So in primary, it's not allowed at all. In middle school, it's in your locker; you can't have it out during the school day. And in high school, they're allowed to use it, but not during class. The phone thing is huge. I think a lot of kids are addicted to their phones.

(29:45):
I remember I was working in Turkey and I took a phone from a kid once. I'm a big no-phone person in class. I mean, I'm getting paid to teach you, right? So I want you to pay attention to what I'm talking about. And I did the draconian take-away-your-phone-for-a-week. She about had a heart attack, you know, because she couldn't

(30:10):
have her phone. And as a person who has phones that I don't pay that much attention to, I'm not really on social media, you know. But that, I think, was the worst punishment you could ever give somebody: to take their phone away. So I think a lot of schools are now grappling with it. I saw a couple of things in the news fairly recently about kids who've gotten into fights with a teacher if

(30:32):
they take their phone or threaten to take their phone. So that feels to me almost like an addictive behavior. You know, it's weird, the attachment that kids have, and the same with tablets. I've seen a lot of babies; people give their really young children these tablets and

(30:53):
they're just on the tablet. And what you notice with the phones, especially in middle school, more so in middle school than high school, is that you have a bunch of kids together. We had a screening of Black Panther 2 a couple of years ago, and it was fun; kids got to stay in, kind of like a sleepover at

(31:15):
school type thing. And in an audience of about 50 kids, every single kid was on their phone during the movie. They weren't talking; I don't know what they were doing, but they were on the phone, kind of paying attention to the movie. And then, for maybe a couple of scenes in the movie, everybody would look up, watch the scene, and then they would go back to whatever they were doing.

(31:35):
So they're in community, they're together, but they're alone, right? They have a buddy nearby, so they can see another human being while they're interacting with whoever or whatever is in their hand. And I think that's strange. It's going to be a very interesting world, I think, in

(31:57):
the next couple of years, ofpeople who don't know how to be
actual interacting with oneanother unless we do something
about it one of our locallegends here, uh prince.

Joshua Schmidt (32:08):
When he was alive, he was known for making
people lock up their phonesbefore going into his, his shows
at paisley park and elsewhere,which I, as a musician.
I think that's a great ideabecause, like you say, people
are so distracted, uh, on theirphones and it really takes you
out of them, out of the moment,and it comes back to that wisdom
of of uh felt experience.

(32:29):
I'd like to back up just a little bit. Since we are a cybersecurity firm, I would like to explore... you mentioned deepfakes, right, and I was wondering if we could chew on the topic of how AI could threaten security in the near future. Deepfakes, we've probably all seen them on

(32:50):
Instagram or Facebook, and there are some obvious ways that could manifest as a threat. We've also heard of the voice-generated phishing calls and things like that, but are there any other areas that you all see that could be an emerging threat in the cybersecurity

(33:13):
domain?

Marsha Maxwell (33:17):
Well, I think it really depends: people not having good passwords, or people not following two-factor. There are ways you can kind of cheat with two-factor, right? So, instead of having fingerprints or something, you can just do names, your parents' name or the street

(33:37):
you grew up on, all kinds of stuff. So I think as the technology advances and AI gets better and better, it can kind of figure out maybe what Marsha's password would be. Or it can search through my stuff and figure out, you know, a likely password. It could pretend to be me better. If I'm looking, you know, we get these calls or these

(34:00):
emails. It will know the tricks, because it's reading the information. So it's kind of like having a spy in the works. Right, it knows what the cybersecurity protocols are. So it could, I guess, get around them better than a human being who has to figure out what those protocols are, things

(34:23):
like that.
And there's been a lot of money, of course, put into trying to figure out how to hack systems and things like that. There's tons of money behind that. So if attackers are using AI in order to make themselves better, that'll be interesting for the side of good, right?

(34:46):
So both sides, I guess, will be trying to win the AI war. Whoever has the best algorithms will get ahead, things like that. I don't know. I think it's worrisome for me. I'll put that out there.
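[Editor's note: a toy sketch of the password-guessing risk Marsha describes, assuming an attacker has scraped a few personal facts. All of the names, facts, and the password here are made up, and real credential-guessing tooling is far more sophisticated; the point is only that guesses derived from public personal details fall to a naive combinator.]

```python
import hashlib
from itertools import product

# Facts an attacker might scrape from social media (hypothetical examples):
# a first name, a street name, a birth year.
facts = ["marsha", "maple", "1985"]
suffixes = ["", "!", "123"]

def candidates(facts, suffixes):
    """Yield simple password guesses built from personal facts."""
    for fact, suffix in product(facts, suffixes):
        yield fact + suffix
        yield fact.capitalize() + suffix

# The victim's weak password, stored as a SHA-256 hash.
stored = hashlib.sha256(b"Maple123").hexdigest()

# A brute-force pass over the tiny candidate list recovers it.
cracked = [c for c in candidates(facts, suffixes)
           if hashlib.sha256(c.encode()).hexdigest() == stored]
print(cracked)  # the guess built from the street name matches
```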

Eric Brown (35:00):
It is worrisome for me, too, what the next couple of years will be like. Six months ago, I took a lot of content that I had generated, a lot of written content, business emails, and ran them through a language model, and then had that language model respond as me to some other emails that I

(35:24):
needed to respond to, because I was thinking, well, can I take a shortcut here? Because a lot of the time I'm saying the same thing, and if it's just kind of a B-tier email, where I don't need to really do anything but I need to respond somehow, can I leverage the model to respond and sound very similar to me? And it was actually pretty scary how close it did sound to me. So I don't think we're far away from having an integrated email

(35:51):
client that knows the history of years of email that may have been generated by you before, and it could respond on your behalf. We're probably only maybe 18 months, two years away from that, where you could have auto-generated replies. Instead of

(36:11):
predictive text now, where it might finish your sentence, maybe it'll start by saying, do you want to say this in response to the email? So on one hand I could see people totally embracing that. But that goes down a rabbit hole pretty quickly, and it's probably an area that we don't want to be in, because now we're having technology generate content. Maybe our opinions or

(36:35):
our values or what have you shift over time as we learn and educate, but if you're not re-influencing that model, it's staying the same.
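[Editor's note: a minimal sketch of the shortcut Eric describes: pair a handful of past received emails with the replies you actually sent, and hand them to a chat model as few-shot examples so its draft sounds like you. The emails below are made-up placeholders, and the actual model call is only indicated in a comment, not executed.]

```python
def build_reply_messages(past_emails, incoming):
    """Assemble a few-shot chat prompt from (received, sent_reply) pairs."""
    messages = [{
        "role": "system",
        "content": "You draft email replies that match the user's past replies "
                   "in tone, length, and sign-off.",
    }]
    # Each past exchange becomes one user/assistant example pair.
    for received, sent_reply in past_emails:
        messages.append({"role": "user", "content": received})
        messages.append({"role": "assistant", "content": sent_reply})
    # The new email the model should answer comes last.
    messages.append({"role": "user", "content": incoming})
    return messages

# Hypothetical past exchanges pulled from a sent folder.
past = [
    ("Can we move Tuesday's call?", "Sure, Tuesday is flexible. Send a time. -E"),
    ("Budget doc attached for review.", "Got it, will review by Friday. -E"),
]
msgs = build_reply_messages(past, "Any update on the audit schedule?")
# msgs would then be passed to a chat-completions API, e.g.
# client.chat.completions.create(model="gpt-4o", messages=msgs)
```

The drift Eric warns about falls out of this design: the examples are frozen at whatever was in the sent folder, so unless you keep refreshing `past`, the drafts keep sounding like the person you used to be.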

Marsha Maxwell (36:43):
Yeah, and as you're talking, I was thinking about when I was creating the deepfake, and I'm no wonderful deepfake creator or anything, but the fact that I could do it, both imitating the voice and manipulating the image, for me was really scary, because it took me like 20 minutes or something to do it, right?

(37:05):
So if I was really motivated to, I could have a Zoom call, like we're having right now, some kind of video conferencing, with a made-up me (I am real, by the way), and just be typing my responses, right, and my mouth is

(37:28):
doing the right things, my hands are doing the right things. So there would really be no way to tell. There might be some glitches if I'm not very good at it, but if I'm really talented at this, and I'm sure there are people around the world who are really talented at this, you could do it, right? And you could say, yeah, I spoke to Marsha and she told me, and this is the evidence, this is what she told

(37:52):
me to do, so I did it. And I'm on a beach somewhere, not even aware that this is happening. So identity, I think, is going to be really, really important. And if we choose to, like you said, you chose to use the bot or whatever to do this, so how much are we buying into it for convenience, and where is

(38:14):
that going to come back to kind of bite us in the end? There has to be a lot more thinking going on about how we use AI, what AI is allowed to do, and of course there are going to be people who get around this. So for the general public, then: how are we going to use it, what's going to be allowed? And I'm sure the whole legal

(38:37):
profession is going to change, not change, but have something to do with it, or policies, governments, because it's getting better faster than we can keep up with it regulation-wise.

Eric Brown (38:54):
On that regulation topic, and Bill, you might have something to add here as well, because I know you do a lot of regulatory work around auditing and things like that. Just rewinding a little bit, Marsha, where we were talking about AI and how we might not want AI providing our medical direction: there might be areas where AI would be better

(39:18):
applied, maybe like a radiology review, where today it takes time for that image to be reviewed, then go to a doctor for review, and then back to you, so it could be maybe a week before you get that image reviewed and your results. Potentially a good use for AI could be running that

(39:44):
image through an AI model that has seen not just thousands of these images but millions, and has access to a large database of images, and may just have a finer-tuned way of ascertaining

(40:04):
what is actually in that image, providing a better diagnosis based on the image than a practitioner could. And we may not be there today, but in 10 years we might be. But I think our legal and regulatory system has not caught up to that, where today, I believe, in order for radiologists to review

(40:28):
your image, not only do they have to be licensed in the US, but they also have to be licensed in that state. So I think during COVID there were some times where physicians were moving out of state, but yet they were VPNing in, and they were able to view the images that way, because technically, if

(40:51):
they're VPNing in, they're in state. But to me that just seemed a little bit backwards. If we could take advantage of radiologists that were in a time zone 12 hours opposite of ours, so you get an image at 8 pm in the ER, that image could be read essentially by somebody in daytime somewhere.

(41:13):
Maybe they're not in your state, but they're half a world away, and they're just as educated, and maybe all they do is review these images, and they're going to provide a great reading. But that wasn't medically legal to do, because they're not necessarily licensed in the state. But now we can take that one step further. Are we going to be licensing different AI

(41:38):
technologies, and will those AI technologies have to have a medical license in order to provide a diagnosis? It's just kind of where legal and efficacy and outcomes don't all seem aligned right now.

Marsha Maxwell (41:56):
Right. Yeah, I think there have been studies done where AI can spot something that doctors haven't spotted, right? But I think you're right about the whole regulatory issue. It's going faster than we humans can comfortably process

(42:20):
and come up with something that makes us feel comfortable. And also, I think some industries might feel more threatened than others. I'm going to take legal, for instance. A lot of, I guess, the bite in legal is they have this body of knowledge, they have read all the things, they've seen all the things, and know what law applies to what. Whereas if the

(42:42):
AI can do the same thing, then it's going to be like, hey, why am I going to pay this person a million dollars when I could take five minutes on ChatGPT and it spits out the same advice, right? So there are some protectionist things that I think are going to be going on, and people are going to have to figure out what they are going to do to

(43:02):
stay relevant. So I think there's going to be some pushback, and that'll be in all of the knowledge-intensive or information-intensive jobs, even programming. Programmers: now you can ask it to write code for something in any language. You don't need to know the language yourself. Write a Python script that will do XYZ.

(43:23):
It can do that for you.
So, yeah, I think it's going to be really interesting. Again, I think a lot of issues are going to be happening right now, especially for people in college right now. What are they going to study? What should they study? Is their job going to be obsolete when they graduate? If the AI systems are getting so good, what should we

(43:49):
do as humans? There's that WALL-E movie, where all the people are just huge and doing nothing, I guess vacationing for the rest of their lives while the bots do everything. I mean, is that the fate we need to figure out, as a people, a global people? What do we want our future to look like? What do we want AI to do for us? Why are we doing these particular things with AI, as

(44:13):
opposed to having it help us solve some other types of problems? Is the social structure going to change? For the past hundred and some years, it's been industry producing stuff, and then it went to knowledge workers. Now knowledge is being outsourced to AI. Is it going to flip back to the physical and to people?

(44:35):
You know, AI can't unclog a toilet. So is it going to flip? I don't know. It'll be interesting. Again, I think we live in really, really fascinating times. We're lucky and unlucky at the same time.

Joshua Schmidt (44:51):
Yeah, I want to kick it to Bill really quick.
Just say you know I want to gethis take on how you see that
ethical concern dovetail intocybersecurity in your experience
.
Have you come across anythinglike that that raises any flags
or anything in your experience,bill?

Bill Harris (45:06):
Yeah, so from a cybersecurity perspective, we work pretty closely with compliance and with legal, and some of the concerns that we've seen are around AI and its usage of intellectual property: potential violations of copyright, ethical concerns about plagiarism, ethical concerns about what AI could do to put human life at risk in

(45:26):
some unwitting fashion. Certainly a lot of that. And I guess I'm wondering if you've seen any of that in some of your work. And as a follow-up to that, you just spoke, I think, very sagely, about how regulations aren't keeping up with any of this. But I was also wondering, in your domestic and international

(45:49):
experience, are you seeing any cultures or governments who are at least trying to approach it more aggressively or more innovatively than we might otherwise see from our positions?

Marsha Maxwell (46:03):
Okay, so I think China has been at the forefront for a long time. Not a long time, but it seems like a lot of work is coming out of China around AI. I don't read Chinese, unfortunately, so I can't keep up with the Chinese literature, but I think a lot of work is being done in China around AI.

(46:24):
Of course, they have a billion-plus people, right, so they need some systems, and they have so much data that they can work with, because they don't have the same data protection or privacy laws that we do here. So I think a lot of interesting things might come out of China, just because they have access to so much data.

(46:44):
As far as I forgot, part ofyour question was about, oh,
copyright.
I think copyright is a reallyinteresting thing because, like
you can use, in the beginning,when the kids got, you know,
heard about AI, then everybodywas doing their papers.

(47:04):
You know, write me an essay onXYZ and, okay, it spits it out
there.
But now, as people are gettingmore sophisticated, you can say,
write me a paper on XYZ andwrite in the style of William
Faulkner, or write this in thestyle of blah, blah, blah.
So you could say is that orthey don't?

(47:25):
When you're doing a paper ordoing some work and it's, you
know, trawling the net to findinformation, you don't really
know whose ideas it has pickedup out of the billions or
hundreds of millions of work, soyou can't accurately cite
anything or give credit toanything.
It just gives you thecompilation.

(47:47):
Unless you are again back tothe prompt engineering, you can
ask it to annotate and do X, y,z.
But I think copyright is a bigone.
Even music. When those tools were coming out, I know they had a couple, I think Jay-Z was one, Taylor Swift was another, where

(48:08):
they would say, right, Drake is another, sing a song that sounds like this person, in this style. And in one of the workshops I did, I used some of that music and had people rate it, and in the beginning it was easier, because everybody now knows AI can do a lot of things. I said, what if I told you this was written by a computer?

(48:30):
You know, it's written by AI. And then people started seeing things that were wrong with it. But before you told them that it was AI-generated, it was fine, it was, you know, this nuance and that and whatever. But once you said, oh, this is AI-generated, then they started to find issues with it.
So I think, yeah, if I were a creative... there's also, I guess, not so much copyright, but, I don't know the legal term, copying an artist. Is that copyright? So I can say, draw in the style of Picasso, or of somebody living. I don't need artists to draw things for me anymore, really. I can ask AI to create an illustration.

(49:16):
And then you have the issue of who that illustration belongs to. I mean, I guess it's mine, because I asked it to generate it. But then, can someone else use it or not? And if they do, do they have to...? It's really complicated. Like I said, our laws currently,

(49:40):
in my mind, have no reference to what's being done, because the people who made the laws 50, 100 years ago were not thinking that one day artificial intelligence could do this kind of work.

Eric Brown (50:04):
Yeah, and that ownership that you mentioned, where you prompted it to create that particular work of art. Well, did somebody else? Or what if you used ChatGPT to create a Midjourney prompt to create the art? Well then, how do you rewind that? And it kind of goes back to the digital art that we were talking about at the beginning of the

(50:24):
episode, where art maybe was borrowed or taken from different cultures without people really understanding the meaning, and then they're creating these, essentially, maybe digital representations of that and monetizing it during that kind of boom of

(50:47):
digital art technology. So that does really get into a gray area pretty quickly.

Joshua Schmidt (50:55):
Here's what I've been seeing. I'm currently composing music for NBC and ABC, and the one benefit we have at the moment, job security, I should say, is that they won't touch anything that can't be copyrighted, because the last thing they want to do is get into any legal hot water over something that's on air.
So, as it stands right now, they won't touch anything AI-

(51:18):
generated, because you're not able to copyright it. So we have some sort of protection there. However, I also think it's interesting. Maybe you've come across this too, Marsha, with your students. The younger kids, Gen Z, Gen Alpha, are starting to be able to identify, for now, AI-generated music, and

(51:41):
sometimes I can too, as a seasoned musician. There are little nuances or little glitches, artifacts, if you will, still embedded in that stuff that are kind of undeniably AI. But yeah, you're right, there's a public domain, and it will be interesting to see. We're still using a ton of this baby boomer music

(52:04):
in our commercials, in our culture, from the Beatles to Elvis to Chuck Berry to Little Richard. That stuff still gets a lot of traction in the advertising world, and that's soon going to be in the public domain. I think it's 75 years. I think they just had this happen with Disney, where the

(52:28):
original Mickey Mouse is now public domain. So all sorts of crazy things are happening with that. So yeah, you're right, we're on a new frontier, and it'll be interesting to see how fast the lawyers and the ethics of our culture can catch up to what's happening here.

Marsha Maxwell (52:46):
I think that's the beautiful thing, though. I mean, going back to what I said in the beginning, I think it's great that you can tell the difference. Well, right now I feel like I can tell the difference between AI-generated and real, whatever that means. But I think it's the combination of both that's really great, right? So you can have AI start something, or you could have AI

(53:09):
finish something. It doesn't have to be a hundred percent. I'm thinking about music. I think they did a thing where they had like 50 country songs, and it was all the same song, just maybe a little bit higher or lower. So if you have a melody, AI can maybe complete it or help you go to that next stage, and it could be

(53:31):
a mashup between human intelligence and artificial. I think, for me, right now, that's where the beauty is. You have an idea, you write something, AI can help you figure out, okay, this is where I want the story to go, or this is where I want the art to go, and I can erase it without losing everything, or I can archive it or save it or have a

(53:54):
hundred iterations. So it helps you, again, to augment your own thinking, and I think that should be copyrightable, I'm hoping, because it is a combination of your work and the generative work, which is what real musicians, and I'm not a musician, so, Joshua, you can shut me down, but what real

(54:15):
musicians do. Nothing is from nothing, right? So you are reminded of something, or you might see a link back to a composer or a folk tune or whatever in what you're doing, and I think AI is kind of helping you do that, but in a different way.

Joshua Schmidt (54:32):
Well, absolutely. I mean, all American music is appropriated from different cultures. We're a melting pot here, and the dumbed-down version of it is that Western music, or American music, is Western classical mixed with the intricacies of African rhythms, and blending those two things together is where we come

(54:54):
up with blues and jazz and folk music, and that's a beautiful thing. So when you have that melting pot, I hope AI can contribute to it in a positive way. And then also, to your point, make creativity more accessible to people that might not have the means to buy a baby grand piano or a Fender Stratocaster.

(55:15):
Maybe it opens it up. That comes with its own ramifications of just clutter and all those things that can happen when everyone's able to create. But hopefully it's largely for the better, and a positive thing for humanity, I would hope.

Marsha Maxwell (55:32):
I think that's true.
I'm thinking about all the art that I generate now. I love art-filled presentations, with lots of art in them, and before it would be almost impossible. I mean, I would do it, but it would be really long hours trying to get the thing right, or find a photograph that matches what you want, or, if you could afford it, go

(55:54):
on Fiverr and have somebody try to replicate what you want. But now, between Midjourney and even Canva, it's really simple. You can generate what you want and tweak it, take this part, ask it to do this kind of f-stop and whatever. You can really incorporate what you know about

(56:17):
real art in real life and have it manufacture unique things that suit you, and I'm so thankful that AI exists for that. If there's just one thing it could do, it's create art for me. And the translations, just being

(56:37):
able to translate whatever into whatever language I want. So now you can reach so many more people. The world literally is at your feet, and I think that's a beautiful thing.

Joshua Schmidt (56:49):
I think that's a great place to end it today, on a positive note. We're at an hour, unless anyone else has something they want to get in before we wrap.

Eric Brown (56:58):
Josh, I just wanted to ask Marsha: if people are interested in learning more about what you do, or getting involved in working in communities outside of their own, maybe outside of the domestic communities here. It sounds like you've got some interesting travel; you're seeing things all around the world.

(57:19):
Where should people go to learn more and maybe participate locally?

Marsha Maxwell (57:25):
Yes, so we have a website. You can reach us at events@iftheselandscouldtalk.com. It's a really horrible URL, we've got to work on that. But go to iftheselandscouldtalk.org and you can see all the things that we're actively doing, and you can also just email me; my email is on there as well.

(57:47):
So, events@iftheselandscouldtalk.org.

Eric Brown (57:52):
Great, thank you.

Joshua Schmidt (57:53):
Wonderful. Well, you've been listening to The Audit, presented by IT Audit Labs. Today our guest was Marsha Maxwell. You can find her online by searching her name. We've also had Eric Brown on today, and Bill Harris from IT Audit Labs. My name is Joshua Schmidt, your co-host and producer. You can catch us every other week; we publish on Mondays, and you can find us on Spotify, Apple,

(58:14):
and wherever you get your podcasts. Please tell a friend, like and subscribe, and we'll see you in two weeks.

Eric Brown (58:20):
You have been listening to The Audit, presented by IT Audit Labs. We are experts at assessing risk and compliance, while providing administrative and technical controls to improve our clients' data security. Our threat assessments find the soft spots before the bad guys do, identifying likelihood and impact.

(58:41):
Our security control assessments rank the level of maturity relative to the size of your organization. Thanks to our devoted listeners and followers, as well as our producer, Joshua J. Schmidt, and our audio-video editor, Cameron Hill. You can stay up to date on the latest cybersecurity topics by giving us a like and a follow on our socials, and subscribing to this podcast on Apple, Spotify, or wherever you

(59:05):
source your security content.