
August 6, 2024 • 45 mins

Simplify the world of AI with our guests, Marissa Sadler-Holder and Ephraim Lerner! Learn the key differences between traditional AI and generative AI such as ChatGPT, and understand why this technology is becoming more accessible and relevant for everyone, regardless of technical expertise.

We discuss the importance of being aware of how our data is used and the need for opting out of data sharing when necessary. Special attention is given to educating children about data privacy, especially with AI tools integrated into common apps, to ensure responsible usage and understanding.

Finally, explore the transformative impact of AI across various industries, from healthcare to education. Learn about the critical need for future generations to develop AI skills. We also highlight the importance of familiarizing oneself with AI tools to support personal development and innovation. Packed with valuable insights, this episode is an essential listen for anyone interested in the future of AI and its societal impact.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Hi, this is Stephanie Schaefer, and you're listening
to the North Star Narrative, a podcast from North Star Academy.
I want to thank you for joining us.
I hope you're encouraged, challenged and motivated by what
you learn today.
Enjoy the story.
Hey everyone, welcome to the episode.
This week I have two people that have just been on the show, and

(00:24):
now I've got them both together.
So I'm really, really excited to have Marissa Sadler-Holder back
with us and Ephraim Lerner back with us. And so, if you've been
listening, then you've heard Marissa on episode 222, and then
Ephraim was on 224.
So both of them just recently.
Both of them love education, educators, AI, all things

(00:47):
technology, and they're really diving into it and doing some
good, deep work.
And so, if you haven't listened to those episodes, go back and
you'll hear an introduction about them and get to know them a
little bit more, and then hear their heart and where, yeah,
their deep work has taken them.
But today I thought we'd have both of them, because actually
Marissa is the one that introduced me to Ephraim.

(01:09):
So we're going to all hang out together today and just go over
some more, maybe general, questions about AI.
We'll just see where this episode takes us.
So thank you so much, Marissa and Ephraim, for spending more time
with me on the North Star Narrative today.
Thank you so much.

(01:31):
We're happy to be here.
It's such a pleasure. It's so good to be hanging out again.
Okay, so we're just going to jump right in.
Maybe someone's listening and doesn't really understand AI, or
what it is, or the fact that they are definitely using it on
a daily basis and maybe just doesn't know.
So can you just explain, in really simple terms, what
artificial intelligence is, how it works, and then give us some

(01:52):
examples of these everyday applications that we're using?

Speaker 2 (01:55):
Marissa, do you want to start with it?
Yeah, sure.

Speaker 3 (01:59):
Let's just break this down super simplistically.
Basically, it is a machine that mimics human
intelligence, right? So, like, the ability to learn, the ability
to problem-solve, that kind of thing. But what it uses is
algorithms and data to be able to do this. And ultimately,

(02:22):
when you're using something like generative AI, it is making
predictions, really, and it doesn't really understand
anything.
It doesn't understand what it's putting out. But based off all
the data that it's collected,
it is predicting the answer, and most of the time it's correct,

(02:45):
but there are some limitations, as we discussed on previous
podcasts here.
As for examples, I know that Ephraim probably has a couple, but I'm
thinking not so much generative AI, but AI as an umbrella.
We have things that we use every day in our life, like Siri,
Netflix, Amazon. They all use algorithms and AI to kind of

(03:09):
predict the behaviors of what people would maybe enjoy
doing or using. Like Amazon, for example: if you find
that you are buying the same cat food every, you know, couple of
months or whatever, they'll start sending you stuff saying,
hey, it might be time to purchase.

(03:29):
You want to add to that, Ephraim?

Speaker 2 (03:34):
Yeah, I think you've covered it perfectly.
I would maybe say one other key piece, which is so useful for
people who may not have a background in technology or
computing or coding: the difference between artificial
intelligence, for example, going on Google and searching
something, versus using something like ChatGPT, which is

(03:57):
generative AI, like Marissa was mentioning. The
way it's built is not requiring someone to have deep
technological skills or knowledge, but rather the
critical skills and the natural skills to be able to dissect and
get the output that they want.

(04:19):
And, putting all that technical jargon aside, what that
practically means is that we can use natural language, how we as human
beings normally communicate, to communicate with a machine or a computer.
That's allowing us to be able to access this deep
intelligence that's going on behind the scenes. And, like you
mentioned, there's a history of this being there, but what's

(04:42):
amazing is it's just rapidly evolving.
So what was behind the scenes, maybe 50 years ago or 40 years
ago, when Garry Kasparov played, you know, in the chess
tournament and played against the computer and lost.
This is kind of the next step, the next stage, where every

(05:03):
person, every human being who has a language but isn't
able to maybe use coding, can now use their own natural
language, which they feel comfortable with, to be able to
communicate with this artificial intelligence.
I know that's kind of skirting away from the main
question, which was: what is artificial intelligence?
But I think there's a key piece there that, when we

(05:25):
think of these incrediblecomputing tools and and
technologies, that it could goon in the background without us
knowing about it, but we don'thave access to that, and I think
that's a key, a key reason whyso much of what's been going on
recently has been an ongoingconversation for every single
person, not just those intechnology alone, and that's why
it's relevant to the personlistening to this that it's not

(05:48):
something that's just for computer geeks or people who are
in the technology field, but rather for everyone.

Speaker 1 (05:54):
Yeah, and so Google is an example of AI we've been
using for a long time.
Right, a basic example. But tell us the difference between
something like Google and generative AI?

Speaker 3 (06:07):
So the idea is that something like Google is based off
of data.
It has been taught that it can only reproduce based off of
these certain inputs and outputs, and it is inflexible, right?
And so when you have something with generative AI, what you're

(06:28):
doing is, perhaps with the same data, you're creating something
new.
So I guess one of the things that I share in my learning
journeys for teachers is: think of it as Google, kind of like on
steroids, right?
So, for an example, you can go in, and if you go to Google, you

(06:49):
say, give me the top cities for restaurants in France, and what
it will do is it'll procure a list of different sites that
maybe cover the topic of the best restaurants or foods in
Paris, or whatever.
But what you can do with generative AI is you can go in

(07:10):
and be very specific and say: create a list of the top-rated
restaurants in all of France and the cities that are associated
with them.
Right? And so you're creating this brand-new list.
It's pulling from all these different resources and it's now
creating a list that didn't exist before, using all of the

(07:33):
resources that maybe Google would have pulled up.
So it's a little bit different in that you are actually
creating something new that didn't exist.
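The search-versus-generation distinction Marissa describes can be sketched in a few lines of code. This is only a toy illustration (neither Google search nor generative AI works this way internally, and the documents and function names are made up for the example), but it shows the difference between retrieving existing items and combining them into a new artifact:

```python
# Toy "web": a handful of existing documents.
documents = [
    "Best restaurants in Paris",
    "Top foods in Lyon",
    "Museums in Paris",
]

def search(query):
    """Search: return existing documents that match, like a search engine."""
    return [doc for doc in documents if query.lower() in doc.lower()]

def generate_summary(query):
    """Generation: combine the matches into a new text that was never
    itself a document, loosely like generative AI producing a fresh list."""
    hits = search(query)
    return f"{len(hits)} results about '{query}': " + "; ".join(hits)

print(search("paris"))           # only returns what already exists
print(generate_summary("paris")) # produces a sentence that didn't exist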

Speaker 1 (07:50):
Yeah. And is it true that AI is learning from
everyone that's putting in prompts, questions and
everything?
Can you explain a little bit about that and how that is going
to adapt over time?

Speaker 2 (07:59):
You know, yeah, I think
it will depend on the model itself and the platform
that you're using. Because, for example, you mentioned Google:
if I were to just use an analogy for what Marissa was
sharing, how I see it is like a filing cabinet, and
Google's pulling out from the filing cabinet and identifying

(08:21):
something that you've searched for.
If it's not there, it's not there. Whereas what AI is doing
is it's finding those gaps in that vast amount of data that
you've got there and predicting what that next step might be.
So if you say "the cat on the...", what's likely to be put in
is "mat", because nine out of ten times that's
going to be there. But there's also the challenge that, when we

(08:42):
think about when we're putting in our data or putting in our
content: is it training on that data?
So some models, for example Google's, will only train on their
own training data, so they won't use the data that you're
inputting into them, through their NotebookLM or through
their AI Studio that they've, you know, just

(09:02):
released to the public.
They've said very clearly that you can put it in your Google
Drive, and it will only use the information that you've got
there; it will build the AI not by training on the content
that you're putting in there, but by keeping it in
your own local environment.
The difficulty and the challenge becomes when OpenAI,

(09:23):
or companies like that. OpenAI is the parent company of ChatGPT.
They're the ones that are saying: we're using your data, and we're
constantly using what you're putting in to train off that.
And they say, if you're not being charged for the product, then
you become the product.
That's sometimes what we're saying. And the difficulty is,

(09:44):
when we're speaking about accessibility and free-of-charge
content that's available through AI:
what is it using?
It's sometimes using your content.
So just check beforehand where it's pulling its training
from.
Is it using your data and your content that you're
innocently putting through? Or is it

(10:04):
something more subtle, going on behind the
scenes, where it's, you know, being trained on its own servers
and using its own algorithms, because it's pulling from a
vast range of data itself?
Sometimes including as much information as possible can also
affect the reliability and the success of the output.
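The "predict the next word from patterns in data" idea that comes up here can be sketched in a few lines. This is strictly a toy (real models like ChatGPT use neural networks over enormous datasets, not word counts), but it shows how a system can complete text purely from what it has seen, without understanding any of it:

```python
from collections import Counter, defaultdict

# Toy "training data": the model only knows what it has seen.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the mat . "
    "the cat sat on the sofa ."
).split()

# Count, for each word, which word follows it and how often.
following = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    following[word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("on"))  # prints "the": every "on" in the corpus is followed by "the"
```

There is no comprehension anywhere in this sketch, only counting, which is the spirit of the point being made: the output can look right while the system understands nothing.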

Speaker 1 (10:23):
So what should we be worried about?
Anything? Like, what should we put in?
What should we not give it?

Speaker 3 (10:32):
Well, definitely anything with personal
information.
I think that is the most important thing.
We want to practice safely using AI.
And I will say, just to kind of piggyback off of what he said:
you can opt out of some of these things. For example, Facebook,

(10:54):
any of the Meta products, right?
They all use whatever you're using on Facebook to
train, but you can opt out of that.
And so I, for example, opted out of the Facebook one, because, you
know, they're using pictures and stuff to actually train their
image generation, and so you have to be careful about that.

(11:15):
I think it is our responsibility, just as individuals.
I mean, I know that, like, AI is a very complicated thing, and
we don't have a lot of time in our personal lives to, you know,
kind of chase after everything new that is happening.
But I think it is important that we do just kind of make
ourselves aware of how these are being trained, how the data is

(11:36):
being used, how your data, your personal data, is being used
with these models. And it isn't just AI.
I think this kind of goes into the bigger scope of how
data is being used, even with your Amazon, how it's being used
with anything, really, any technological piece.
Just be aware of the data that they are collecting

(12:01):
and, if you can, opt out.
I like to opt out. That's just me, but a lot of times you can.

Speaker 2 (12:08):
OpenAI itself, I use that example, but the GPTs,
that's created by a lot of the users.
They've given the option sometimes to say, we don't want
this to be public, but not necessarily are they saying, we
want this to be used for training data.
So even being able to opt in and opt out, being aware of what
we're choosing to opt in and out of. It might be subtle, but

(12:31):
those are important. And if you don't feel comfortable using
something like social media to put that type of information on,
then before putting it into an AI bot, you need to be careful where
that information is going to go.

Speaker 3 (12:41):
I do want to just say one more thing, because I think it's
important, because our listeners, a lot of times, have
children, right? And so there's this idea that, you know, if your child is
using AI, they should also be taught:
be careful about what you're putting in, stay safe.
Make sure your personal information is not in

(13:02):
there.
A lot of times we don't even realize, and they're getting to
the point where it's well integrated and, you know, we
didn't ask for this. Although, I mean, I love AI in one aspect,
there's also a safety concern, and I think a
learning curve with that safety concern. And so, you know, you can

(13:24):
look in.
If I were a parent, I would sit down and maybe look at apps that
my child is using, whether it's Snapchat or whatever. If you have
an older child, they have AI bots on there, and so you have
to kind of be careful of that as well,
as just in general. Like, I bought a new computer the other
day, a PC, and it now has AI on it, and there is something that

(13:49):
you can integrate with your computer.
So it seems really cool that, you know, it can help you analyze
XYZ on your computer screen, but it's also accessing all of
what is on your computer.
So it's kind of one of those things where you really just have
to be careful.
While it sounds really cool, maybe opting out, and using it

(14:12):
with a certain LLM or a certain chatbot you feel more confident
with, might be the route. But just kind of making yourself
aware is very important.

Speaker 1 (14:21):
Yeah, no, those are good, true safety concerns. But
I know there's also a lot of common misconceptions and
ethical considerations.
So what are some of those common misconceptions about AI,
and what ethical dilemmas do you foresee as AI continues to
evolve in various fields?

Speaker 3 (14:44):
I think you can jump in here anytime.
But, like, I think one big misconception that people have
is, because it's technology, we kind of have an inherent...
our relationship with technology is kind of like,
whatever it produces is more correct than what a human would

(15:05):
produce. Even though it's built by humans, at the same time, we
kind of trust it to be correct because, like a calculator,
right, in the sense that you calculate something and it's
correct, right?
And so I think we have this feeling that what AI might
produce is more correct than what a human would.

(15:26):
But the thing is that it's not, you know, it's not
always correct, and there are issues and there are limitations
to it, and it does have biases in it. And so we kind of just
have to be careful about that.
So we can't just trust it, and a lot of people do, just because
it is technology.

(15:47):
And another thing is, it doesn't really understand.
And I know I touched on this earlier, but when we're talking
to it... this is what's so complicated, because we're using
language like "we're talking to AI", "we're working with AI". Those
are things that we do with other humans, right? But the
thing is that it doesn't understand what we're saying.

(16:10):
It doesn't understand the context.
It is not a person; it is just something mimicking the
intelligence of a person.
So when I say it doesn't understand, I mean it's just
predicting what the correct answer will be.
So I think that's kind of important to understand, that,

(16:31):
like, you know, it's not human, it doesn't understand, it
doesn't have feelings.
I know it seems silly to state this, but it's not sentient.
It just isn't right now.
Let's hope that it never will be.

Speaker 2 (16:46):
With misconceptions around AI, I think there's this view where
we're for AI, where we support AI and we support the ideas behind
AI, and we support, you know, where it's going, and it's going
to be the future and it's going to answer our questions. Or the
opposite extreme, where AI is, you know, a
dangerous tool, and we need to be careful not to use it, and, you
know, it's going to take all our jobs, et cetera.
And I think having a more balanced view of that and of

(17:09):
these misconceptions, seeing the nuance where it is, and digging
into what Marissa was saying, seeing nuance in the role of AI.
So I kind of see it in terms of a Swiss army knife: certain AIs
are trained to do very specific tasks.

(17:29):
So if we get an AI that creates music to create a PowerPoint
presentation, it will do a terrible job, just because that's
not how it's trained and, you know, made to work. And the
same way with a system that's built on mathematical, you know,
which is a weakness that a lot of people have

(17:50):
noticed with AI, because it's using predictive analytics.
It's not a yes or a no, it's not a matter of facts, this or that.
With the first models, with ChatGPT and early AI, the
maths was a really sore point. And the reason why is because
people were using it assuming that it's going to do all their
thinking for them, and the way it was functioning was it was

(18:13):
choosing an option, and sometimes, because it needed to vary it
up, it would use the wrong option.
So it didn't work in that environment.
So expecting an AI to do everything, I think, is a bit
far-fetched, and I think it's knowing the role that
it plays and also knowing the context which it's pulling from.
And then the opposite extreme is: AI can be dangerous.

(18:36):
It can be a dangerous tool, but if we're conscious of what we're
using and how we're using it and, like we were saying before,
the data that we're providing it, the content we're providing
it, that's something that I think could be quite important.
And I think, when we're thinking about the other side, which is
the ethical side: when we consider

(18:58):
people who might be using it in a very localized setting, it
means that it can create more extreme versions of it.
So, for example, years ago they had these AI tools from Google that
were talking to each other, and I think they shut it
down quite quickly, because it started speaking in its own
language. And it was, you know, feeding off everything

(19:21):
from the internet, and some of the worst stuff that we
can see in humanity can sometimes be, you know, found
in particular parts of the internet. And it was being,
you know, racist and bigoted, in the
sense that it was repeating some of the worst things. And if
we're not careful, if we're using AI in a very localized

(19:43):
setting, in a very small setting, and we're expecting it to give
us broader results, it won't be able to.
And I think that's important as well: to know that it will only
be as good as the data it's pulling from.

Speaker 1 (19:59):
All right.
Talking about our jobs, let's look at some different
industries and see where AI is transforming them, such as
healthcare, transportation, creative arts.
Let's talk through some of those.
What examples would you have to give?

Speaker 3 (20:12):
Well, I think a lot, in particular healthcare.
You know, I haven't dived in as much, because I've been so
focused on education. But I know, with healthcare, because of
its ability to predict so well, if you do input, you know,
different things with it.
Like, if the doctor would sit down and put in all the

(20:33):
different things that they're seeing here, or
characteristics of somebody's illness or whatever, it can
predict what is the best route or the best path for this person,
that type of thing.
But what's interesting about that is, while that prediction
is probably very good, there's that human aspect that

(20:57):
only doctors can actually bring in, in that they understand far
more context surrounding the person and their needs and their
current environment.
And so I think without that, it is kind of just a formulaic
prediction.
But with the combination of the doctor and this, you know, pretty

(21:21):
good prediction from AI, you're going to create something that
is hopefully one of the best plans for an individual's health,
or pathway for their health needs.
Ephraim, did you want to add on to anything?

Speaker 2 (21:40):
Yeah, I think that's a great example.
And I've got a friend who works in this space.
He's got a company in this space, and he speaks quite a lot
on podcasts internationally with some of the biggest names,
especially in America, on healthcare. His name is
mental earl and wine, and he speaks about value-based care

(22:02):
and the importance of understanding, you know,
the patient, and being able to sort of support them. Not
only do we think of medicine, for example, in terms of being
reactive, but actually being proactive and preventative, creating an
environment where they feel they're being looked after.
You know, and that for me has been a really eye-opening

(22:24):
experience. Because when you've got that in a secure way, and
you're able to get deep insight as to, maybe, a diet or health
care (and we can see this in sports and
other industries, where they're getting that insight), what it can
provide is this deep support and understanding of what might
be going on. But they'll only be as good as the context, and

(22:45):
therefore the role of the healthcare practitioner is so
much more crucial than ever, and the same for other industries
as well.
When you're thinking about sports, for example, sport
science: the understanding, the emotional connection with that
person, understanding what they're going to give you and
what you're able to see. Sometimes we can't put our

(23:07):
finger on it, but there's something there, and being able
to articulate that brings the best of the qualitative
sciences and the quantitative.
So you've got both the human and the scientific data
that's going on behind the scenes. And again, just
reinforcing this idea that obviously artificial
intelligence is an incredibly powerful tool, but if the data

(23:29):
is missing, then it will set you down a path that will be a lot
further away from where you started.
So these gaps are going to be a lot more significant if you're
using AI.
And where that can be interesting is, for example,
when there's people's lives at stake: in healthcare,
there's a massive risk there.
In education, there's a big risk.

(23:51):
In psychology, there's a big risk.
Transportation: if our entire systems are built around
AI... take, you know, the Tesla, and it's using
artificial intelligence to be able to drive or park.
What happens when it goes the wrong way or does something?
It's not always perfect, and obviously it's incredibly
intelligent with how it's been built. But I saw last week on

(24:18):
Twitter, or X as it's called now, this story of a car
that had gone on its own.
It was on its own, it didn't have a driver in it, and it had
gone through oncoming traffic and was parked, and the police
officer pulled it over and there's no one there.
Obviously these are extreme examples, but
the greater the potential, the greater the risk, and we

(24:41):
need to be careful with how we do that.
And, for example, if we take another industry which I think
there's a lot of hype around, from the creative side, which is,
you know, Hollywood, Netflix: you see actors who are
up in arms because people are being replicated, and it
touches on a very delicate and sensitive issue in

(25:03):
society, which is creative ownership and creativity for
human beings.
Yeah, and I will say, in regards to
affecting industries: without a doubt, this is going to affect
jobs and industries, for sure. But I also, at the same token, I

(25:26):
would say that it will also create new job potential and
impact industries.

Speaker 3 (25:37):
That way, I think it will.
I think what we're dealing with is it's going to change what
we kind of do already, change the focus on what is really
important, the skills that are really required of us.
I think that we've always been expected to kind of just
do it all, right? And now we can kind of focus on the human

(26:03):
traits that are that only humanscan do Right.
So it really kind of and I'mkind of think that's lovely
right, because it really isforcing us to kind of redefine
what it is to be human, and Ithink we're going to be seeing
that now.
So in our, you know, just in our jobs in the future, I mean,

(26:25):
it's happening already.
There are people who say that they won't hire
somebody who doesn't know how to leverage AI skills.
There are kids coming out of graduate school sitting down
with, you know, these coaches and career coaches, saying, hey,
your resume needs to have AI written all over it if you want

(26:46):
to have some kind of cutting edge, ahead of the pack. And so
it is definitely something that we're going to have to focus on.
But, as a whole, I think we just need to prepare for change.
And in that preparation for change is doing, like, what we're
doing now: we're talking about it, you're learning about it,

(27:08):
you're, you know, listening to this podcast about
it.
That is the first step in kind of preparing yourself for this
change.

Speaker 1 (27:16):
No, that's really, really good, because that was my
next question, about how AI is going to affect the job market.
And I think you even answered it before, when you said it's not going
to be able to take over all of healthcare.
You have to have the human aspect in all of these. And then,
like you've touched on, it's not always correct, so someone's
analyzing it.
But, yeah, we do have to be teaching our children, teaching

(27:38):
the next generation, about it,
because, even though it might not take all the jobs, it is
going to change the jobs.
So that's a really good point.
So we have to keep on learning.
All right, what about AI and decision making?
How is AI being used in decision-making processes across different
industries?

(28:00):
So, we know it's going to change the jobs, but what are
some risks that people might be looking at?

Speaker 3 (28:07):
I would start with just saying that all decisions
need to have some kind of human input to them.
So there's something called "human in the loop", a
term that's been coined and is being used. And the idea is that,
whatever AI is producing, there are constantly human eyes on

(28:29):
it, to kind of ensure that this is definitely correct, or
somebody is being critical of the output.
And when we do that,
if we're using it to help us make a decision, it is keeping

(28:49):
it in check, you know. It can predict, it can give you great
suggestions, but we can't put all of our trust into it.
Like I talked about at the very beginning, when I said something
about, you know, we have this weird connection with technology
where we just kind of trust it, because we kind of think, you
know, it knows what it's doing, it's trained to do this.

(29:09):
This is why it was built, and we kind of just use it.
But we need to be very careful about that, and all decisions
definitely need to have some kind of human in the loop.
But there are a lot of benefits too, in that maybe
it's going to create solutions or help us make a decision and

(29:36):
provide context that we didn't think about before.
You know, I think there's that capacity too. And so I think
it's kind of both of those things: just understanding
its limitations, and knowing that you have to be a part of that
decision, but at the same time allowing it to give you some
kind of inspiration or different ideas that maybe we wouldn't

(29:59):
have thought of on our own.
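The "human in the loop" pattern described here can be sketched as a simple review gate: the AI only drafts, and a person must approve before anything is acted on. Everything below is hypothetical and illustrative; `ai_suggest` and `human_review` are stand-ins invented for the sketch, not a real model API:

```python
def ai_suggest(prompt: str) -> str:
    """Stand-in for a call to a generative model (hypothetical)."""
    return f"Draft answer for: {prompt}"

def human_review(draft: str) -> bool:
    """Stand-in for a person reading the draft; approval is simulated here.
    In a real workflow, a human would judge correctness and context."""
    return len(draft) > 0

def decide(prompt: str) -> str:
    """The AI drafts, a human approves; the output is never used unchecked."""
    draft = ai_suggest(prompt)
    if human_review(draft):
        return draft
    return "Escalated to a human decision-maker"

print(decide("What is the best treatment plan?"))
```

The design point is the control flow, not the functions: the model's output is treated as a proposal, and a person sits between the proposal and the action.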

Speaker 1 (30:02):
I kind of like the idea of brainstorming with it,
collaborating with it.

Speaker 3 (30:06):
Yes, that's a great way to take it.

Speaker 1 (30:10):
Like this whiteboard behind me.
I mean, my brain can only do so much brainstorming by myself,
but you like to have somebody else in the room. So sometimes,
yeah, it can help you think of cooler ideas or more creative
ideas or something different.
So I love that.
All right, so, kind of thinking along those lines:
innovation. What are some things you're excited about, that
you're hopeful about, that might be coming?

Speaker 2 (30:30):
It's right to jump in here, I think, just on that
last point.
I think this ties in quite well with innovation and the last
point you were making, which is brainstorming.
I think there's this assumption and this frustration now with
AI being the solution that's going to give us these amazing
responses or amazing results, without us really working on

(30:52):
where it's pulling its information from, or how it's being
used in the process; we see it as results-focused. And
I'm to blame as well for this, because I'm sometimes
guilty of it myself, thinking about how it
can cause me, in my use of it, to be lazy.
It can allow me to just think of it as a way to get an easier

(31:13):
response. Whereas I think the value with AI is not just in the
results alone, but also in the process, in the
build-up to making a decision. And when we think about
innovation as well,
I think it's difficult to really imagine a world where AI

(31:34):
can be playing in and shaping those parts of our life in a
way that can really help us, as opposed to it being used in the
way that we might use Google:
only when we're looking for something or
we're trying to get something quick. But actually embedding
it into our processes, and embedding it into how we think and
operate with people and with others and with systems.

(31:57):
There's something really powerful there. And, from an
innovation point of view, I think what it could allow us to
do (and this is something that we discussed in the last episode
that we had together) was this possibility that, if we think
about humanity, if you think about society: we've often,
since the Industrial Revolution, been giving roles and

(32:18):
responsibilities to individuals that fit a very specific target
or goal that we had before we met them or we brought them into
that role.
I think what AI could be is this potential where we're able
to capture the individual as a whole, and know them and learn
them, and learn what their strengths are and areas that

(32:40):
they might struggle with, and see how they could fit into our
organizations.
For me, although that's not a specific innovation, I think
that's something that, as a society, could be really
powerful.
It could be a tool that we could use in amazing ways. And I
guess they say that with Netflix, you know, it could be that
(I think you mentioned this last time, Stephanie) you could

(33:00):
choose to star in your own movie.
You could choose to sign.
You know, in the depiction of,you know an experience that that
you're going to go watch onnetflix and you could choose the
actors.
You could choose, you know theactresses, you could choose
different people that you'regoing to have in this setting
and you can really personalizethat.
And whether it's a story fromthe Bible or a story that
they're going to, you know thatthey've got this genre that they

(33:22):
really enjoy.
Bringing that into thatperson's life.
It could be quite powerful andthe possibilities for those who
might struggle or suffer, thosewho might, for example, have an
anxiety to leaving home, to beable to pick that in their
experiences, to be able to feelmore confident to leave their

(33:43):
environment, or someone who might be anxious before an exam, or a medical practitioner who wants to practice a surgery. These are tools with which AI, alongside other emerging technologies, can be incredibly powerful and can allow us to really get to the heart of humanity, to think about some of those challenges that we have, and to really rethink

(34:05):
diseases as well, thinking about innovative ways that we have never been able to approach before, whereas a tool as simple as ChatGPT might be able to just bring all that information together, get clarity, and give insight that we might not have been able to get before.

Speaker 1 (34:20):
Yeah, lots of exciting possibilities. Marissa, I just wondered, do you have a story where you've seen AI really help someone with special needs, in the special education realm, anything in that area? I don't think we talked about that on the last podcast.

Speaker 3 (34:39):
I went to a conference yesterday. It's MTSS, and that's multi-tiered system of supports; this is in regards to students and how we can kind of get in there and help students at these different levels. And yesterday what they ended up doing was having a triangle effect in their framework, where you work with a counselor or a

(35:03):
teacher and you work with the student, but then you also work with AI, and you work collaboratively. So you ask questions to the student, and the student responds how they want to respond.
For example, they were looking at different potential

(35:23):
career pathways, or interest in different careers, and goal setting. You sit down with the student and the AI. You ask the student: What are you passionate about? What are your interests? Is there anything, you know, historically that makes you

(35:47):
lean more towards something? You're just getting that kind of information, and then, putting it into the AI, the AI will come up with potential career paths, and the student can look at it and say, okay, tell me more about, I don't know, travel agent or something like that, because they really enjoy traveling, right?
So you look into that, and then you can say, okay, well, let's

(36:08):
set a goal for this student; let's sit down and see if this is something we can do to kind of help prepare you for that pathway. And it will generate these long-term goals and short-term goals, and the student can go in and say, I don't really want to do that, I want to do this, and it shifts, and it really creates this pathway for the student that is

(36:30):
achievable. And so we're creating these new ways to use AI and integrate AI into education and seeing how we can actually support these kids. This can be used in special education and just for setting goals with learning.
It doesn't have to be just with career pathways, and I think it

(36:51):
is that collaborative partner. And because it is technology, going back to that piece, it doesn't have feelings, and we can kind of tell it yes or no, and it isn't coming just from the adult in the room. It gives the student confidence and

(37:17):
doesn't feel like there's that imbalance of power, because the AI is balancing everything out, and it's incredible to watch. So I definitely think innovation in how we do things is going to kind of be the new thing, and I, for one, am very excited about that.

Speaker 1 (37:33):
Yeah, I'm glad I asked that question, because that's a super exciting example. Yeah, possibilities and really practical ways of helping students, helping us learn.

Speaker 3 (37:47):
Parents can do this too at home with their child, right? Because there's that balance of power between parent and child, you can bring that third party in and have that conversation using AI, almost like a mediator, to kind of come up with an idea that everybody kind of agrees upon, and it's

(38:08):
incredible. I mean, it doesn't have to be just in the classroom or in the school setting. You could be using this at home.

Speaker 1 (38:16):
Yeah, that's really cool. All right. So, thinking of parents, students, anybody that might be listening, and something's triggered, like, yeah, I want to know more about that. I want to know more about the job industry, I want to know more about safety concerns. What are the first steps for people to take to learn more? Where does someone go when this podcast ends?

Speaker 3 (38:37):
I mean, you're more than welcome to follow me on LinkedIn; I do post a lot on there, of course, and the same with Ephraim. But a lot of it is just kind of giving yourself the time to search and research and pull from different sources. I think that's incredibly powerful.

(38:59):
But also, just sit down and give yourself five minutes to pick one tool to play with, because you can read a whole lot, but by actually going in and practicing, you can kind of see where this is going. You'll have a deeper understanding. So I think Ethan Mollick is an interesting

(39:23):
person to follow as far as the broader aspects, not just in education. He is constantly being critical but also kind of seeing the potential as well. Do you want to hop in and say something, just kind of what you think somebody might want to turn to?

Speaker 2 (39:45):
Yeah, I think there's this assumption as well that AI is already there, that the final version of it is already there, and, like in other industries, that those producing it are so much further ahead than those who are using it. One of the fascinating pieces around it is that not only is this

(40:14):
evolving at rapid speed, but we, the ones using it, are at pretty much the same level of knowledge (not in understanding the AI itself, but in the tools that are currently available) as those who are using it in the larger companies. So although it is a bit of a learning curve getting used to it,

(40:34):
the benefit that someone would have, if they're listening now and they want to get involved, is that there's not much of a gap between those who are using the latest tools, for example, and where they might be at right now. And what do I mean by that? I mean that if a person decided to go and invest their time and

(40:58):
energy into the latest tools, they would have the same amount of knowledge from when that tool was launched as someone else who's been using it from when it launched. The companies don't have a six-month extra window with that tool. Usually these tools are released very soon after they're created, and there's a benefit to that, being accessible to everyone else. So I think that's the positive side.

(41:20):
On the other side, on the side of being able to learn and think about it: like Marissa was saying, being able to play with it is quite useful, being able to feel comfortable with it. I think students and younger people, who have been around technology for longer, might be able to benefit from the technical side of it, whereas I think someone who's

(41:41):
older, who might have more of a critical understanding of ideas and concepts, might have that benefit when they're using it, to be able to be more critically aware of where it can and can't be used. So it depends on who's using it and how they're using it.
And I think one other point that you mentioned, with regards to special education needs (I think this is

(42:04):
sort of a more blanket thing for learning in general, about AI), is that we've focused on one source of information to multiple people for a long time. That could be an individual teacher, or it could be a source of knowledge, a source of information from a book, and for that to be adapted to

(42:25):
the needs of the learner, of the one who's the recipient, has taken a huge amount of time and energy. One of the benefits of AI is that it can allow the learner (the person who's learning, and that could be learning about AI itself or just learning in general) to think about what their strengths are in learning, some of the areas that they might struggle with, and to have the material

(42:46):
adapted to their needs, so that information isn't just teacher-led or knowledge-led from one source of information, but that source is actually being adapted to them, where they're at. And I think it could be really beneficial as well for the student, when they're learning about AI, to use some of those AI tools to help them with that, like you were saying with brainstorming: to use AI to

(43:07):
teach them.

Speaker 3 (43:07):
Maybe a bit more about AI. Can I just add on one more thing? Well, actually two more. I think what we tend to forget is, like, that kind of question, right, Stephanie? Honestly, if you're sitting down with, whether it's ChatGPT or another tool, whatever thought you are

(43:30):
exploring for the first time, you can, as a parent or a listener, go into that chat and ask that question: I want to learn about AI for very simplistic reasons, and I want to get started. Where should I start? And it's going to give you something that you can focus on. Just remember, if you're unsure whether AI can do it, just ask it.

(43:53):
But I think that is a great first start to not only building your own AI literacy but also just kind of having a lighthouse already that can answer those questions for you. And I did, Stephanie, I talked to Ethan about this: we're thinking about holding a parent session where we

(44:13):
kind of go over a couple of things about AI, but really also for them to explore how AI could be useful for themselves, whether it's in parenting or just in their own jobs, just kind of an exploratory session where we come and we play. We go over a couple of little details, but we play. And we're

(44:35):
thinking about giving your listeners a coupon code and allowing them to have a discount on that if they are interested.

Speaker 1 (44:43):
Yeah, that's awesome. Great. Yeah, let's move forward; we can definitely advertise that. And thank you for helping us continue to think through some of the main points, some of the things to be hopeful about, and some of the things to make sure we're being wise about in our decisions and how we're using it. So, lots of exciting things that can help many of us.

(45:07):
So thanks for encouraging us in that and keeping us informed. I really appreciate your work, and I'm looking forward to what you can do to help parents and students; I know you're already doing that deep work. So thanks for sharing again with us today.

Thank you for having us. We appreciate it so much.

Thank you so much for listening today.

(45:28):
If you have any questions for our guests, or would like information about North Star, please email us at podcast at nsaschool. We love having guests on our show and getting to hear their stories. If you have anyone in mind that you think would be a great guest to feature, please email us and let us know. And don't forget to subscribe so you don't miss out on

(45:50):
upcoming stories.