Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Fonz (00:30):
Hello everybody and welcome to another great episode of My EdTech Life. Thank you so much for joining us on this wonderful day, wherever it is that you're joining us from around the world.
Thank you, as always, for all of your likes, your shares, your follows. Thank you so much for just interacting with our content. Thank you so much for all the great feedback. We really appreciate it because, as you know, we do what we do
(00:52):
for you to bring you some amazing conversations that'll help nurture our education space. And today I am really excited to have on a wonderful guest that I have followed and then connected with on LinkedIn. She is putting out some amazing things, things that will really get you at least stopping and thinking and kind of meditating, because sometimes, you know, we move too
(01:15):
fast, we break things, and then we're like, oh, if I would have just maybe thought about that a little bit more. So today I would love to welcome to the show Angeline Corvalia, who is joining us today. Angeline, how are you doing today?
Angeline (01:29):
I'm doing great, thank
you.
How are you?
Fonz (01:31):
I am doing excellent, and thank you so much for connecting with me. Like I mentioned earlier, I know Saturday mornings are very precious, but I do appreciate you taking the time out of your day to just share your passion, share your world with us, and really share what you have been amplifying on LinkedIn and through,
(01:51):
you know, all your various posts. And we're going to be talking about an amazing conference that you will be speaking at as well. So there's a lot to cover. But before we get into the meat of things, Angeline, if you can give us a little background introduction, just what your context is within the digital safety space, education space,
(02:13):
and even parent space, just so our audience members can get to know you a little bit more.
Angeline (02:19):
Okay, thank you so much for having me. There are some things that are always worth giving our time for, and spreading messages, your own and others', is definitely worth the time. So, I have a very eclectic background; it would take me a while to tell the whole thing, so I'll just start with the online safety space. It actually found me, because when my daughter was born, she's
(02:43):
nine now, when she was born I was a CFO at a financial institution, where I had worked for 15 years total.
After that, you know, I never wanted that to be my future, so I quit, and I went to work for a software provider in sales. They put me in sales; that wasn't the best decision, but I did that for around three
(03:06):
years, and then I said, I have to do my own thing. My intention was actually to do digital transformation, and a little bit of helping kids on the side, because that was the part of the CFO role that I liked.
But then I noticed people on LinkedIn. They were posting things for kids, and I felt like the medium wasn't something that kids were going to listen to, or it was
(03:30):
actually long articles, for example. So I'm thinking, oh, I'll just have some fun and create a video. So I started asking these experts, can I create the videos? And they're like, yeah, sure. And I got a very big positive reaction to these videos.
And then, this was a year and a half ago, about a year and a quarter, I realized, you know, there's so much that needs to be
(03:54):
done. After spending around three months mainly focused on that, I realized, I have to do this, I have to do this full time, I have to help with that.
And AI is one thing I'm especially interested in, and the reason I first got interested in it is actually
(04:14):
from my time at the software provider. They were doing work for customers, and AI was the thing for the developers, the thing for the tech experts, and I wasn't a tech expert. I was the bridge between tech and business, and I felt, you know, this was when ChatGPT had just come out, and the
(04:35):
world was changing, and people who weren't tech didn't feel like they were part of it. So I was like, I need to help these people understand. So that's how it started.
And then I had two characters. My activity is called Data Girl and Friends, and I had Data Girl, and then someone suggested,
(04:55):
why don't you have AI? So I have Isla AI Girl too, and Isla talks about AI, and Data Girl talks about online safety and all the other privacy aspects.
Fonz (05:08):
That is wonderful. And what I love most about talking to guests, and about amplifying people's voices and their work, is just hearing the background that they're coming from, what they're seeing, and how they either saw something and tried to find a solution for it, or are working along with it, in
(05:30):
this case, a company. Like you say, you're making videos, and then all of a sudden seeing, hey, there's a need for this now, because now you're seeing some things, and this is, I think, fantastic.
And Data Girl and Friends is something that I know I would love to share with my parents, and so that's why I'm thankful that you are willing to be here today, so we can learn a little bit more about that. Because, as part of my job, I do get to
(05:52):
work with parents on a monthly basis. We talk, and we have these conversations about data and data privacy. The most recent conversation that I did with them, I posted on LinkedIn.
We were doing one on sharenting, where parents are just oversharing, you know, pictures and so on. And then I talked to them also about these AI platforms that now can take
(06:12):
some of those pictures and, you know, basically undress those pictures, and then there's the extortion aspect of it. And so we go deep into those conversations because, although it's a tough topic, it's important to inform parents, letting them know the dangers and also talking to them about AI and chatbots.
So, kind of going from that information into, kind of, that,
(06:37):
I guess, path into the conversation: I know that you have spoken very much about AI, and you're very vocal about it. But I want to ask you, you know, as far as AI literacy is concerned, I know that AI is moving at a very rapid pace, and it just seems like every day or every second there's a
(06:57):
new app, a new company, something new, and all these models are coming out that are reasoning and all that good stuff.
But I just want to ask you: with this moving as fast as it is, do we need to focus more on that AI literacy side, or do we need to focus more on implementing more robust AI safety regulations?
Angeline (07:21):
I'm going to say the AI literacy, because I no longer trust government or industry to solve it. I don't like to make political statements, so I don't want to get political, because that often distracts from the message. But any government, any government. Because if we wait,
(07:42):
we do need regulation, we need responsible industry. But, as you said, it's moving very fast, and if we wait, then we're going to wait. There are enough examples of parents who have been fighting. I know one parent, Jesper Graugaard, in Denmark, who's been fighting for the privacy of his kids for five years.
(08:03):
He's clearly in the right, but he can't really move very fast. And there are other countries, like Norway, where they managed to actually get change from the government, but it took a year and a half for the government to actually make rules about privacy for kids in schools. In a year and a half, how much has happened?
(08:25):
So I think the literacy is most important. Long answer, short question. And the issue with this literacy is exactly that it's moving so fast. And I was just thinking, when you were talking about working with parents: one whole aspect of my concept, my way of working, is that
(08:48):
parents need to understand that they're not going to know. Their kids are going to know; they're not going to know. So the way to keep kids safe, what I try to bring about with the short videos, is that parents and kids should watch them together, and talk about them together, and teach each other.
Because there needs to be trust, because the kids are
(09:08):
going to know; they're going to know parental controls. There's always a way around the parental controls, and their friends are going to tell them how they're using AI, and they're going to try it out. So there needs to be trust, and the parents just aren't going to figure it out. So that's kind of my way of seeing it.
Fonz (09:27):
Yeah, and you know, I love that you brought it back to the parent aspect of it. Because with my work with parents, and I'm coming into education actually from business and marketing, I know that there's a term that is used quite often: oh yeah, you know, our learning community, our learning community. And sometimes what I feel like
(09:50):
is that we don't include the parents as much in that learning community. It just seems like it's the upper management, and then, of course, the mid-level, and then the teachers, and then students. But I love that you touched on the fact that parents need to know the students are already using it. The students are already, obviously, because of their
(10:10):
friends, and because they see things on, you know, social media and things of that sort, they are already familiar with a lot of the apps, but the parents aren't. And so I love the way that you bring that together, saying these short videos are for parents and their, you know, children to watch together and have those conversations.
And that's really the job that I get to do with our parent
(10:33):
liaison specialist, or our parent engagement specialist I should say. The goal is, we tell them, we're having these conversations, but I'm giving you these resources as well, both in English and Spanish, because those are the two predominant languages here where I live, along the border. But these are resources to have those conversations with your
(10:54):
son or daughter, just to at least get them to think for 0.5 seconds, you know, before they click send, or whatever it is that they're going to do or share, because of the long-term consequences that might happen later on. And also even talking to parents about that too.
Like, hey, when you're posting something about your child, is this something that you would like posted about yourself?
(11:16):
Because later on, you know, with students and with AI, like I said, there's even more of a danger now, I think, or at least it's heightened, because of what can be done with these apps. So I love the work that you're doing in bridging that gap between parent and student, or child in this case,
(11:37):
and bringing that together.
So let's talk a little bit more about that parent side, because I would love to pick your brain and learn more, and see how I may also share what you're doing with parents as well. I know you've spoken about AI-powered predators and chatbots and the automating of child grooming.
(11:57):
Can you walk us through, like, an example of what are some of the flags, some of the things to look for, when this might be happening?
Angeline (12:16):
Well, obviously it's about change in behavior, right. And just before I go into more detail, there's one thing that I really want to mention in terms of how, in my view, what I'm trying to achieve needs to be different: parents need to be more able to admit that they don't know things, that they don't have the answers. It's a societal thing, right? If you're in a meeting at work, who's going to be the one to
(12:38):
say, I don't understand what you're talking about, can you please explain it in a simpler way? It's hard, because in general, in all the societies that I've lived in, and I've lived in six different countries, it was always like you're supposed to know, and asking questions, meaning you don't know, is hard. But with the tech world and kids and parents, we have to admit we don't know. Because part of the problem is, kids think they know better, especially in terms of privacy.
So, yeah, on to the signs: I would say the first sign is openness. I just recently have been speaking a lot with Megan Garcia, who recently lost her son, Sewell, and she's going to speak at
(13:22):
the conference, which we're going to talk about later. We've been talking about her experience, and one of the things that she noticed was a change in behavior, in the sense that he was talking to her less, and was less honest, less open. It's a first sign, you know, that something is wrong.
And another thing is just if they want to be alone with their
(13:47):
device. You know, it's tempting to let them be in the bedroom or alone, but I've heard a lot of experts say the worst things happen in the bedroom, even on all these, you know, I talk a lot about the online world, but I don't spend much time on it, like Discord and things where the kids can watch,
(14:08):
you know, Roblox, where they can have the games, and you think it's not dangerous, but actually it can be. And it's good, especially if they're younger kids, to have them always in the room with you. Yeah, so those are the signs, basically just a change in behavior.
Fonz (14:29):
And you know, that's very interesting, because that's something that does come up in the talk that I have with parents. Many times they may think, well, you know, it's just puberty, it's just the age, they're in that awkward stage, and that's why they start isolating themselves. And I always just say, you know, if there's a sudden
(14:49):
change, you know, that is something that should be noted, and you should start asking, and just doing, you know, the parent thing: observing, is everybody okay, are you okay? Noticing some of those behaviors. Because, like you mentioned, and you mentioned Megan Garcia, that's something that I did bring up with our parents when we had our
(15:10):
meeting this past year, I think it was the November meeting, talking about how easy it is to access, you know, these chatbots on your devices, on computers, and how easy it is to even open up an account. And so I played a clip from when Megan was getting interviewed, where she mentioned that move
(15:31):
fast and break things should not be something that is done when it deals with students, and especially the lives of children.
So going into that, you know, through your work and what you've been doing: if you could say, hey, this is what needs to change, what would be some of those things that you would ask to be
(16:10):
changed?
Angeline (16:12):
I would ask them to have their products looked at, and created together with, experts like psychologists and psychiatrists, behavioral experts, even teachers, because they're largely left out of the discussion. And this would already be a big step forward, right?
(16:33):
I mean, I recently learned about the existence of grief bots. When I found out about this, I was speechless for 20 minutes. For people who don't know, these are AI chatbots that are actually created as a copy of a person who's passed away, and they are apparently for grief.
(16:54):
But the psychologists that I know, they're like, obviously we weren't involved in this, because this is extremely dangerous and risky, right, the way it's being done, especially towards kids. So this is what I would ask: can you just get non-technical experts to assess your product for whether it's safe or risky?
Fonz (17:21):
This just needs to be done more across different industries and expertise levels. It was mentioned that it was so
(17:48):
important for her that you have that co-creation of these applications, with not just, I guess, your end goal in mind of obviously getting people on the app and keeping people on the app at any age level, but also, if it's something that's supposed to be used by young adults, or children for that matter, that they do get that feedback.
And so, for me, what I see many times is there is the
(18:11):
influencer market. You know, you get people that have a heavy following. They get used: hey, we'll give you our product, or we'll pay you this much to promote our product. And really, sometimes it's like, well, are they even, you know, taking into account the privacy, the data, the dangers that might occur? Or is this just simply a paycheck for them?
(18:33):
And, I'm just going to put it out there, without any regard to their own personal beliefs or views or anything, it's just like, hey, this is what I do, I'll just share it out there.
But I do believe that there is something very important there, and that's making sure that everybody is at the table, because it brings it back to the ethics of it, as
(18:55):
far as ethical use of AI, and, you know, going into the different biases and the outputs and the uncertainty of those things. I mean, just getting more people involved in giving that feedback, I think that's something that's fantastic.
And obviously we talk a lot about guardrails. Now, my big viewpoint has always been: how can you put
(19:15):
a guardrail on something that you don't own? Because a lot of these applications are plugging into a system, you know, that large language model; they're pulling that data from there. So, if you don't own that, many companies say, oh well, we're putting in guardrails and these safety rails, and I'll hear it in all the education platforms: well, we've got guardrails in
place.
I was like, but how, if youdon't own this, is it just
somebody putting in code thatsays, if this, then don't do
this and that's your, if this,then don't do this and that's
your guardrail, and I don'tthink that that's very safe at
all whatsoever or ethical.
On that, what are your thoughts on AI ethics? And, in this case, what could these companies do
(20:00):
better to improve that?
Angeline (20:03):
Well, I think that, exactly as you said, these companies overestimate their ability to control things. And giving them the benefit of the doubt, that they honestly believe that what they're putting out there can be controlled, then they need to
(20:28):
trust, you know, that there are people on the other side. And I think part of the problem is actually that, obviously, the industry, the AI industry, the creators, are a lot in a little clique. And, I don't know them, so I'm just saying, they probably live in their own
(20:51):
little world in San Francisco or something, and honestly have no idea of, you know, their kind of distorted reality. I hear them talking about creating new beings or some strange things or religions, and so, yeah, I would tell
(21:12):
them: talk to normal people, see normal people, spend some time out of Silicon Valley. And I do believe, going back to something you said before, that in the end, and I don't know how long this end is going to be, and sometimes it's hard to keep believing this.
But in the end, the winner will be the one that puts the most people at the table, because their AI is going to be the most
(21:36):
intelligent; the more information it has, the more useful it's going to be.
I work with a lot of people from Africa, and I have yet to see an AI system, I would love someone to show me one, that can produce a non-biased image of an African. I mean, you know, it's even gone so far that I had to
(22:00):
ask my African partner, I'm like, can you just send me pictures of Africans? Because I can't trust that any system I use is not biased. And how useful is it for students, for example, who need to learn about
(22:27):
Africa, if it's not been fed with proper information about the continent and the countries in the continent? So the winner is going to be the one that figures out: I have to bring the most people to the table, so my system is really fair and useful for more people.
Fonz (22:42):
And I agree with that; what you said really resonates. There's an advocate of AI I know, and she's out there also spreading the word, and we did a presentation
together because here in thestate of Texas they are slowly
(23:08):
rolling out the use of AI forgrading constructive responses
or shortened little essays, asopposed to using manpower to
read through these essays.
Obviously, it would take a lotof time to do that if you're
doing it in person with morepeople, but now they're just
saying, okay, we're going to doa small percentage time to do
that if you're doing it inperson with more people, but now
they're just saying, okay,we're going to do a small
percentage, just to kind of testit out.
And going back to what you were saying, so, for example, an AI
(23:33):
model being used in Africa and an AI model being used here: I know that even today, when I've gotten into some of the image generators, and you put in, you know, show me just a janitor, you get a certain look. Then, for doctors, you get a certain look, for, you know, a lot of things. And I'm like, wait a minute, this is very
(23:55):
unusual, this is very weird. And so in other countries, you know, now it's like, how are they perceiving us? Like, if they put in there an American, you know, what does that look like to them, too?
So going back to that: that information, is it accurate information? And that's kind of very scary too, because even when you use
(24:17):
an image generator, where there'll be like, hey, you know, put yourself in here, or put in a prompt. And I describe myself, and I'll put there, you know, Hispanic male, and every single output that I get, it always gives me a beard or a mustache, and it makes me look, well, I mean, it makes me look a
(24:39):
little bit bigger, filled out. Oh really? Yes, yes, a little bit more, you know, filled out, a little bit more robust.
And so I'm like, this is very interesting. You know, as you're putting in these prompts, there still needs to be a lot of work done on this. But, you know, the fact that people around the world, educators especially,
(25:01):
are like, oh my gosh, this is the greatest thing in the world, because now we can do this quickly, now I'm able to do this in 20 seconds. But my biggest concern is: yes, it can do it in 20 seconds, but how accurate is it if it's just statistically predicting the next word?
The other thing is that the knowledge cutoff date is
(25:22):
something that we brought up there at that conference too, because there are a lot of applications that teachers are using, and that are being purchased for their teachers, and in the terms of service it'll tell you the knowledge cutoff date is 2023. We are already well into 2025.
(25:48):
So how accurate is this going to be if the data there stops at 2023, and now the state standards have, you know, been updated for a lot of our content areas, here in Texas at least? So those are a lot of the things that I know many people don't look into, and maybe they just want to turn a blind eye, because they're like, oh, the magic, this is the shiny object that's going to, you know, create my lesson for me and
(26:09):
I'm done. And that's what really concerns me too.
So, kind of touching on that a little bit: I know that you've talked about, like what we were discussing a little earlier, those that bring more people to the table. So it's almost like we're comparing it to an AI race, and it's definitely a competition, you know, without anybody.
(26:32):
Just really, it's like all hands on deck, everybody just go, go, go. From your perception, in your experience, and from the lens of the world that you live in, which is, you know, Data Girl and Friends and all the amazing people that you're connected with in your network: how do you envision it? Do you envision AI as a force for good, or do you envision it as a
(26:56):
force for good maybe 10 years from now, or are there many more pitfalls coming that we should be worried about?
Angeline (27:06):
I try to be positive. I need to be positive; I need to believe that it's possible, that AI can be a force for good. It can. It can be used well. It doesn't look like it's necessarily going in that
(27:28):
direction right now, because of exactly the massive problems, you know, we were discussing before, with the image generators, the ones that create pornography. Kids are obviously, you know, interested in this, so they use it, they create it. They don't understand the weight of what they're doing. So, all sorts of things. And also these AI relationship
(27:49):
chatbots, they're all completely, you know, overwhelming and influencing, especially if you give them to young kids. I was talking to, I think it was Megan, who said that she met, you know, someone whose daughter had had her first relationship with an abusive AI chatbot boyfriend at 13.
(28:10):
So this was this person's first relationship. I mean, this is the influence on a whole world going forward. So there's a lot of reason to be negative about it, right.
But, on the other hand, the world I'm trying to create is one where all of the tech connects us all over the whole
(28:32):
world in a way that we've never been connected. They figured it out: they make one product, and it's sold in the entire world. What we haven't figured out is the other side of it. Right, so we can take this connection that tech gives us and push together for responsible tech. Right, because AI can help in really a lot
(28:54):
of ways. It can help us to be very efficient, and it can help us to be more creative.
It can help us to know each other better. Because, in the
(29:14):
moment, and I need to call out Bill Schmarzo, because he's the first person I heard say this, AI can help us to be more human. And some people hate that statement; some people like that statement.
I like it, I think, because there are things that we can do as humans that AI probably, I don't want to say
(29:35):
probably, won't be able to do: be sentient, understand emotions, understand context, all of this real context, life experience. And if you have AI, if you use AI, then you understand which parts of you are uniquely you. And kids can learn that from a younger age.
(29:57):
Right, they actually have to. They should understand: who am I, what makes me unique, what kind of person am I? Because if you're using AI, and they are, and you don't know who you are, then you can more easily be influenced. And this is something that kids can learn earlier, and then you're actually going stronger into the world,
(30:18):
because you know yourself better.
So that can be a positive output of AI. But we have to be more intentional with it, and we have to kind of force that use, because, as you say, the tech companies obviously have billions in funding that they have to get a return on, so they're going to go for the money first.
Fonz (30:40):
Yeah, no, absolutely. So I want to just turn the conversation over now to talking about the Global Online Safety Conference.
This is something that I did see recently posted on LinkedIn. I have already signed up for it, too, and, just looking at the list of speakers, this is going to be an interesting conference. So can you tell us a little bit more about this conference?
(31:01):
Well, first of all, if you can, give us some background: how did this conference idea come about? And then tell us a little bit about what the goal of this conference is and why people should sign up for it.
Angeline (31:14):
So, the idea came about just after a year of being in this space. I met some amazing people, a lot of amazing people. This online safety, this responsible AI community that somehow I have built on LinkedIn is so amazing, and it's full of,
(31:37):
I call them individual warriors, really passionate people. A lot of them are individuals or small companies, small organizations, fighting to survive, making a real difference. And I was thinking, these people could actually achieve a lot
(31:59):
more if they were working together, if they knew each other more.
So I said, let's do a conference. And I talked to a few nonprofits that I work with: will you support me to do this conference? This was in November, and I felt it was urgent. So I said, I'm going to do it in three months; in three months we're going to do this conference. And
(32:19):
we talked about it with the partners, and also one person, Andy Briarcliff, who's been a lot of support as well. He's been in the space for a lot longer.
You know, how are we going to define it? We'll just be very general: we're going to call it an online safety conference. It has to be global, because, as I said before, we
(32:41):
have to work together more. And we just put it out there to see what comes back: what are people interested in, you know, who wants to talk? And we got this massive response, so many people, so much energy came back. I was just putting out the message: we're stronger together, stronger together.
We have to know each other. And it was like, every day
(33:06):
something would come in, and I'd say, I can't believe this person is speaking, I can't believe this person wants to speak. Like, ever since I heard of the existence of AI data labelers, I always wanted to meet an AI data labeler or a content moderator. And there was a Facebook content moderator from South
(33:28):
Africa who contacted us and wanted to speak, and I'm like, yes, that's exactly it. So all sorts of people from 16 countries, of different ages, contacted us, and people across the spectrum of different topics and experiences are going to talk.
And what's important is that we did not go for any
(33:53):
influencers. Like you said, we intentionally don't have, you know, the keynote speaker who is going to bring in the audience. No, we want to hear from the people who need to be heard, and it's quite unique. And we also made the conference like 12 hours a day, so that
people from all over the worldcan speak in their time zone.
And we made it free because andonline, fully online because
then the barriers to actuallyattending are gone.
Um, because a lot of obviously,university students, people in
(34:37):
poorer regions or people like meI'm actually in europe, so it's
Saturday afternoon and I wouldlove to attend conferences in
the US, but it's a really longand really expensive.
So we're like no, we want tohave the voices, he wants to
speak and we want anybody to beable to listen.
(34:57):
So that's how it came about.
Fonz (35:02):
Well, that's wonderful. And, you know, looking at the lineup, there are definitely some amazing, amazing speakers, and people that I actually follow on LinkedIn too. Like I said, I'm a follower of your work and everything that you're doing, because I love what you're doing and your mission, and so this is definitely something that's going to be worth the view,
(35:24):
and, like I said, I've signed up for it.
So I'm really excited to just gain some more knowledge and different perspectives and different lenses from people in other countries who may be like-minded but are seeing things differently.
And, like I said, for me it's always about looking for something different, something to think about that may change my perception on many things.
And so this is an exciting opportunity for everybody to sign up, and I know the conference starts February 19th,
(35:48):
so there's still time to sign up, correct?
Angeline (35:51):
Yes, you can absolutely sign up, yeah.
Fonz (35:53):
Perfect, excellent.
So for all our audience members checking this episode out, please make sure that you check out the link in the show notes.
It'll be there.
We'll definitely be posting it on LinkedIn, too, and all our socials, to make sure that you sign up for this, because this would be a great event for you to learn more and see things from different perspectives and different lenses and, of course,
(36:15):
like I said, it's only gonna nurture our growth within this space and show how, as a collective, we can improve this space as well.
So thank you for sharing that.
Now, Angeline, before we wrap up, I just want to ask you: as far as your projects go, what are some of the things that are in store in the future,
(36:38):
maybe for Data Girl and Friends?
Angeline (36:41):
Well, Data Girl and Friends, as I said, I create the content.
I realized early on that sales is not my thing; if I were to actually try to sell my content, then I don't have the creative juices anymore.
So I'm really building out partnerships with amazing organizations who maybe have the ideas
(37:04):
and the knowledge, but maybe not the medium to bring it about.
Or they have schools, classes, parents who could use the content.
So that's what I'm building out with Data Girl.
That was the original idea for the conference, to help that, and it actually has grown into much more, luckily.
(37:25):
And also, I'm working on some online courses that will be ready soon.
Basically, the whole concept we didn't really speak about is that I think that kids and teens should have clear knowledge, something like: you can't drive a car without a
(37:46):
driver's license, so you actually shouldn't be using a device without basic safety knowledge.
Just really basic, like you've learned this, you've heard this at least once.
So I'm working on some online courses on this that will be ready soon, and I'm really excited for when those are ready.
(38:07):
And also, the SHIELD conference is actually just a kickoff.
We are going to do a yearly conference, but it's really intended to build collaboration.
We're also going to do working groups and smaller meetups and conferences. The idea is it will be the platform
(38:30):
to help people come together.
So it's another reason to come, even if you can't be there, because I know I picked a week where a lot of people are on vacation.
I didn't know; when I left the US there was no vacation in February.
So, yeah, just sign up, and you can listen to the recordings
(38:51):
afterwards and be a part of the movement and the community going forward, because there will be other meetups and other more specialized conferences.
Wow.
Fonz (39:00):
Well, that is fantastic, Angeline, and thank you so much for joining me today and taking a little bit of time out of your day to really share, you know, your passion, your mission, your vision, and definitely getting people excited about the SHIELD Conference as well.
So, again, for all our audience members, make sure that you check out the episode notes, because all of the links will be
(39:21):
there, and the conference is coming up really quick.
It'll be February 19th, so if you're watching this, please make sure you click on that link, sign up, and check out all the amazing speakers, just to help us all learn more.
And obviously, now we're hearing from Angeline that this is a community that's going to continue to grow.
Maybe in the near future, around your area, there will
(39:42):
be a meetup or there'll be a conference, but it's just something great to be part of, something where you can find like-minded individuals and folks coming together, like I mentioned, just to continue to nurture these conversations and continue to grow together.
So, Angeline, thank you so much.
I really appreciate your time being here, but before we wrap up, we always end the show with our final three questions, and I
(40:06):
know I always give my guests those ahead of time, so hopefully you're ready to answer some of these questions, or at least had a little bit of time to think about them.
So, question number one: every superhero has a weakness.
For example, with Superman, kryptonite just kind of weakened him a little bit, or it was a pain point for him.
(40:27):
So I want to ask you, Angeline, in the current state of, I guess we'll say, AI, or it could be education, it doesn't matter: what would you say is your current kryptonite?
Angeline (40:40):
My current kryptonite.
I'm a creator and I am not good at selling my creations, in the sense of selling or even getting them out there and approaching people with them.
That is my biggest pain point because, as you say, I have a lot of ideas and a lot of creative juices, and I create things that a lot of people say are nice, but I'm not good at
(41:02):
getting them out there, which is a big problem, obviously.
So I think it gets out there slowly through other people, but it could be a lot faster, more efficient, and more useful if I were better at that.
Fonz (41:19):
There you go, all right, that's a perfectly great answer.
Just like we were talking about earlier, maybe I guess we'd say a little bit of that imposter syndrome, because I suffer from it too as well.
You know, having great ideas, but just to kind of get them out there seems a little bit difficult many times.
But yeah, that's a great answer.
Thank you so much for that.
All right, so here's question number two: if you could have
(41:40):
a billboard with anything on it, what would it be and why?
Angeline (41:47):
My billboard would be: every individual can make change, but the individuals need to work together.
In the sense that it's an individual thing, but it's about standing together.
That is what it would be, and the why is simply because we
(42:09):
often feel powerless for all sorts of reasons.
I mean, there are tech billionaires.
Recently, they were all standing behind the American president when he was being sworn in, and then, you know, jetting off to Europe and managing to get the regulations changed
(42:31):
overnight.
You can feel powerless, yeah, but we're not powerless.
And if you look back in history, this tech world, which has all sorts of dangers, is new.
It's new; it's only been around a short time.
A short time ago, our world was different.
We can insist: no, it doesn't have to go in that direction,
(42:56):
and there's going to be individuals making individual decisions that can make that happen.
But, that said, if you find other individuals who are on the same path, then you have more inner power and inner strength.
Fonz (43:16):
Great answer.
Thank you, Angeline.
And the last question to wrap up our wonderful conversation is: if you could trade places with one person for a single day, and it could be anybody, who would it be and why?
Angeline (43:32):
I don't know if I would really want to do this to myself, but I would like to put myself in the throes of those Silicon Valley conversations for one day and kind of figure it out.
Maybe some of these things that I hear them say would make more sense if I spent a day there, understanding what
(43:55):
they were doing, how they were spending their day.
I probably wouldn't understand many of their conversations, but at least I'd kind of get a feel for it.
So I think that would be it.
Yeah, just to really understand better this whole other side of the AI tech universe.
Fonz (44:11):
Very cool.
That's a great answer.
Well, Angeline, thank you so much again for spending a little bit of time with me on this wonderful day, and thank you so much for all you shared.
And for our audience members, like I mentioned, there's a conference coming up, the SHIELD Conference, so please make sure that you check our show notes for that link; that way you can go ahead and sign up, and also find these wonderful speakers on LinkedIn as well.
(44:34):
Follow them on socials, because they are putting up some amazing things that really help you learn more, but also help you kind of stop and think, and that's the wonderful part about it.
It's not all about, you know, going fast and breaking things.
You can go fast, but then also take a pause and really reflect on some of those things that maybe we're already
(44:55):
coming in with our own perceptions about, but this would be a great way to continue to learn.
Don't forget to check out our other 312 episodes, where I promise that you will find something just for you that you can sprinkle onto what you are already doing great.
So make sure you go over to our website at myedtechlife
(45:16):
and, if you haven't done so yet, make sure you follow us on all socials.
That way you can keep in touch with us and also see all the wonderful guests that are coming on through the show, and get a little glimpse of the amazing, amazing work that is being done.
So thank you so much and, asalways, my friends, don't forget
(45:39):
stay techie.
Thank you.