Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
So let me set up this conversation in a particular way.
My younger kid loves computer games, has always loved this stuff,
likes to talk about what's involved in making computer games
and how big a project it is. And he's long
been interested in that kind of thing. And I know
that doesn't make him unique. There are lots and lots
of kids who, you know, see this world of these
(00:22):
these huge games, and you see these billion dollar numbers,
and they want to be
Speaker 2 (00:25):
Part of it.
Speaker 1 (00:26):
But just in the past couple of years or so,
for my kid, as he's thinking about it, he sees AI
coming around and thinks there are not going to be nearly
as many jobs available in that kind of work.
And I think there's a generalizable sort of concept that
AI is going to change a lot of things and
(00:48):
it's going to kill a lot of jobs.
Speaker 2 (00:49):
It'll probably also create a lot of jobs.
Speaker 1 (00:51):
Anyway, that's much too long an introduction. Joining us to
talk about why it's so important for students, for kids,
to understand how to use technology and what's coming, especially AI,
is Ed Kim, who is vice president of Education and
Training at Code Ninjas, and that's their website, code ninjas
(01:13):
dot com. Ed, welcome to KOA. Thanks for being.
Speaker 2 (01:15):
here. Thanks for having me. It's a pleasure to be here.
Speaker 1 (01:19):
So why don't you just pick up on I didn't
ask you a question, but just pick up on what
I said and tell me where you want to take that.
Speaker 2 (01:29):
I think it's something you said in the middle of
your opening statement about how there will be new jobs created, right?
Excuse me. So with any technology disruption, let's go back
even to search engines: when they first came out, they
killed a lot of jobs, but also created a lot
of jobs, right? This whole field around SEO, for example,
blossomed around, you know, Google search and online search. So
(01:51):
I think with AI, it is definitely going to get
rid of a lot of jobs. However, the limitation with
AI that we are seeing today is that the applications
are so myriad that being able to continually develop and
apply AI in each particular niche in our society to
replace jobs, I think, is still yet to be seen.
(02:12):
There will be jobs, I think, that change, but there
are more jobs that will be created, new jobs that
we can't even project right now.
I think the main thing to keep in mind for
the kids when we educate them.
Speaker 3 (02:23):
Is to make them power users of AI, not dependent
on AI entirely. I think one of the challenges that
we face is the same thing we saw when Google
first came out, when search engines first came out:
there was a portion of the population of kids that
really learned how to use it effectively and really leveraged
it for their needs, and there's a portion of the
population of kids who never learned that. Right? I think
(02:45):
we have to avoid that same challenge, or that same missed.
Speaker 2 (02:49):
Opportunity, and make sure all the kids know how to
use AI in the right ways so they can achieve
their goals every day. And it's going to apply in
every kind of job. But I don't think AI is going
to replace humans entirely to that extent, right? So I am.
Speaker 1 (03:05):
I am of the age of people who were kind
of the first people ever to be able to
have personal computers, really, at any kind of affordable price
at home. And we were talking about this briefly
on the show before, Ed. So I bought
my first computer in nineteen eighty one, okay, an Apple
(03:25):
II Plus, and I don't have it anymore. I wish
I did.
Speaker 2 (03:29):
And through this whole time.
Speaker 1 (03:30):
I've been pretty much an early adopter of a
lot of things, but I have never seen the pace
of change as fast as it is now, even during
the rise of the interwebs. It is changing
like every month; AI can do stuff that it couldn't
do a month before. So, as someone who's involved in teaching,
(03:52):
what do you teach kids that is something
that lasts them, that has value to them for more
than a month?
Speaker 2 (04:03):
Yeah, that's a great question. I'll use the context of
one of our new programs we've released. Code Ninjas just recently
put out a new program called AI Academy. It's about
one hundred and twenty hours of learning content for kids
eight to fourteen. But our focus in that academy is
not so much teaching kids how to program AI; it's
how do you manage AI? When do you use which
(04:25):
tools in which scenarios? So, for example, we'll teach kids
how to use Midjourney's AI tools to do
graphic design or creative design, you know, objectives. They
might hop over to ChatGPT to learn how
to use general AI search,
or they might hop over to Grammarly and learn how
(04:46):
to use AI to improve their writing. The challenge
that we're facing, what we want to teach kids, is
that we don't want them to be so dependent on
AI that they don't know the base skills behind writing, behind conceptualizing,
behind storyboarding, behind drawing, things like that. Right? So our
goal in educating kids is saying, hey, this tool is
there, it makes your life significantly easier, but if you become
(05:07):
too reliant on it, this is where you're going to
get stuck. So, you know, we're teaching kids
how to bridge that gap better than we did with
earlier technology.
Speaker 1 (05:16):
We're talking with Ed Kim from Code Ninjas; website, code
ninjas dot com. One of the things every parent of
a student worries about is their using AI to cheat,
or something less than cheating, but just kind of having
it do some work and some thinking for them so
that they just don't develop the ability to
(05:37):
think as well as they otherwise might. How do you
edumicate kids to, you know, not do that?
Speaker 2 (05:46):
So I have a two-part response to that, actually.
The first part is that we just have to
do the best we can with monitoring their use. It's
like when YouTube first came out, Google first came out,
and kids started getting online and there weren't parental controls
developed in the beginning. As parents and as educators,
we have to do the best job we can every day,
just keeping an eye on how they use
(06:06):
AI until education guardrails are built. Now, word around the
fence is that the big AI companies are finally diving into
the applications in education, so they're starting to think
about the guardrails that we need in this technology. So
one thing you'll see with any technological innovation is that
it takes a number of years before the education side
of it gets addressed, right? They think of the guardrails,
(06:27):
think of the policies, the protective tools, things like that
for the students. So there's that piece coming up. I
think the second part of my response is also me,
as an expert, saying that I'm not sure what the
answer is. That is the challenge we're facing now: in
college admissions, for example, kids are using AI to
write their college essays, and I think there is software
(06:47):
being developed to detect whether AI is being used. It's
still not really accurate, but it's getting there. I think
the only solution is time. As fast as
AI is developing, we also need to give these
developers time to think about the guardrails, think about the tools,
think about enhancements that will improve the user experience in the
right way.
Speaker 1 (07:08):
All right, one last question, and you might not even
have an answer to this, but as you think about
this hypothetically, and this is really a question of your
imagination as much as anything else, give me an example
of a kind of job that you think might be
created by AI. Because we spend so much time talking
about jobs that are going to be destroyed. Do you
have an idea of a job that might be created
(07:29):
by the existence of AI?
Speaker 2 (07:31):
I do. That's a great question, but I do, because
I've been thinking about this recently as well. The
title is going to sound weird, but I get the
feeling there'll be something called, like, a validity engineer of sorts,
because one of the biggest challenges facing AI is the
hallucination effect. Right? If it pulls from the wrong data source,
you need human engineers to correct how it's pulling data
(07:52):
and where it's pulling it from. Right? So you probably
will have a new job of just managing how
correct it is, or which data it uses. It's data
management mixed with verifying that the data
Speaker 1 (08:06):
is accurate. Yeah, a high-tech fact checker. That's an interesting answer,
and we're about out of time here, so let me
just give you one quick comment on that, and
that is that at some point we're not going
to need it anymore. AI is going to get smart
enough to fact check itself. And the other thing is,
as long as we do need people like that, people
(08:28):
are not going to rely on AI as much
as they otherwise might, because people see AI as unreliable,
which is a huge challenge, which is why they need
to train it to fact check itself. Ed Kim is
vice president of Education and Training at Code Ninjas. That's
code ninjas dot com. That was a great conversation. Thanks so
much for your time, Ed.
Speaker 2 (08:47):
Thanks love