
November 27, 2025 7 mins

Bill, co-founder and Chief Executive of Gradient Institute, called in to chat with Lisa & Russell about future trends in AI. His biggest takeaway? If you work in a white-collar job, get ahead now and start learning how to use AI. Lisa asked for practical tips and Russell couldn’t resist wondering aloud whether he should be fearful of the future.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Wrapping up our deep dive this week into AI. Bill Simpson-Young, co-founder and CEO of the Gradient Institute, a nonprofit focused on ethical AI systems, is joining us to chat about where AI is headed and the everyday implications.

Speaker 2 (00:15):
Good morning, Bill. Morning, Bill. Hi. Hi, Russell. Thank you.

Speaker 1 (00:19):
So much for joining us. So how do you see AI tools evolving in personal use over the next few years? What's coming?

Speaker 3 (00:30):
Well, I mean, you've got to think about how they're improving continually. There are new models coming out every week; there were probably four major models released just last week with major new capabilities, and the AI is just getting more and more powerful. So, I mean, I don't know if you've tried the likes of Nano Banana Pro.

Speaker 2 (00:48):
We heard about that yesterday, talked about it twice, too.

Speaker 3 (00:54):
It's phenomenal, and you can create images and video really simply. And it's not the only one; all the major models have come out in the last couple of weeks. You know, there's a new one from Anthropic, the new version of Claude is amazing, and the new version of ChatGPT is incredible. And relevant to the last song you were playing, "Everybody Wants to Rule the World," these companies are all racing to be the most intelligent.

Speaker 2 (01:18):
With so many of them, they have to come up with better names than Nano Banana, which they are doing themselves.

Speaker 1 (01:24):
But is it too many? I mean, can you even see the wood for the trees? It's like every week there are new things.

Speaker 3 (01:32):
So what's happening is, as I think one of you said earlier, I think Sarah said during the week, these models are trained on data. But it turns out something was discovered about eight years ago: if you use more data and more of what's called compute, more cycles of a processor like one from Nvidia, the more you do it, the

(01:54):
smarter they get. You don't have to design the smarts; you just make the models bigger and faster and give them more data. And so what's happening is these companies are pumping in more chips, more GPUs from Nvidia, they're putting in more data, and the models are just getting smarter and gaining more capabilities. Now, there's other stuff happening as well, but that change is making them get more and

(02:16):
more intelligent. So now you've got models that will pass PhD-level biology, chemistry and mathematics, solving problems at a really sophisticated level across a whole bunch of fields. And now these companies are trying to get to what's called artificial general intelligence, where you've got, you know, an AI that can do a whole bunch of things

(02:37):
across lots of different areas, as good as or better than human-level capability. And that's sort of starting to happen. But all of them are fairly dodgy as well. There's a problem called spiky capabilities, where any model you have will do some things really amazingly and then

(02:58):
do something really stupid, and it makes you realize they're not like humans. The ways they go wrong are often different from the ways that humans go wrong, so you have to always understand that even if they're smart, they're still pretty stupid in some ways.

Speaker 1 (03:11):
Bill, are governments and big companies preparing for the possible risks of AI?

Speaker 3 (03:15):
Of AI? Actually, it's a great question, and this week is a big week for that, because on Tuesday, I don't know if you've seen this, but the Australian government announced that they're going to set up an AI safety institute. Now, this is a fantastic thing, I think. The UK have done it, the US have done it to a lesser extent, and there's a bunch of other countries

(03:37):
that have similarly done it as well, where you've got sort of technical people working on the scientific and technical issues with models, looking for how they can go wrong and looking at how you can fix them. I'm actually really pleased that's coming; it's not here yet. Then there are also various other things coming, including some new guidance on how to use AI well, and so on.

Speaker 2 (04:00):
All of that, and going back and watching RoboCop a few times just to see where it can go wrong.

Speaker 3 (04:06):
Yeah, exactly.

Speaker 2 (04:08):
So one of the big issues around AI for the average person is what it's going to do to jobs, to employment. Is there going to be a whole layer affected? Obviously, one of the big areas is white-collar employment and the change it's going to make there. In your opinion, your expert opinion, where do you see

(04:31):
that taking us in the next, I don't know, five years, probably even less?

Speaker 3 (04:36):
Yeah, I think it will have a massive impact on all white-collar jobs. I think that pretty much every job, or most white-collar jobs, will become AI supervision jobs. So some will go away, but many will stay. But the responsibilities will change: there will be tasks that you would typically have done yourself previously, but those

(04:58):
tasks will be done by AI. Your job as a supervisor, like any supervisor, is just to make sure they're being done well. And given those spiky capabilities I've talked about, you'll need to know the basics of how they work and how they go wrong, so that you can actually pick them up when they're doing it wrong, and just be a good manager, basically. So, you know, if you're in

(05:18):
a white-collar job, you need to make sure you're on top of AI. You need to get experience using it, but you also need to learn where it's going to get things wrong. And there will always be problems, right? It's not the case that tomorrow we'll wake up and there'll be a new model that just does everything perfectly. That's not going to happen. For the next five years, there will be problems, and there will need to be really good human supervision.

Speaker 1 (05:40):
Everyone's worried about the job situation. But will it create
new jobs?

Speaker 3 (05:45):
Yeah, I think it will, but things will also change. There'll be new skills that everybody needs, and there will be new jobs. I mean, one of the big jobs, coming back to your RoboCop: there will be jobs that are just about that. There'll be an over-reliance on AI.

(06:06):
I think with a lot of businesses using it, it's going to go wrong in unexpected ways, and there are going to be people out there trying to find the problems, stop the problems, and correct things when problems happen. And I think there'll be a lot of jobs in that area, sort of troubleshooting, exactly.

Speaker 1 (06:29):
So your one bit of advice, I imagine, Bill, you've already answered this. I was going to say, what is the one thing, if you could recommend one thing that we should all do to prepare for the future of AI? You would say, get your head around it, learn about it.

Speaker 3 (06:41):
Well, that's one thing. But another thing, actually, you know: in a lot of jobs in the past, we've sort of spent a long time trying to make our jobs more mechanical, trying to make humans more like machines. I think we need to step back and say, look, actually, humans are not machines. We've got machines to be machines. Let's get the humans really good at being human. Let's be really good at working with other people. Let's understand what people are good at, understand

(07:02):
human values, and then sit back. Well, I mean not sit back, but make sure the machines are doing what we want them to do and not something else. So yeah, I think two things: be really good at being a human, and learn about AI and how to use it.

Speaker 1 (07:17):
Yeah, so that we remain in charge. Exactly.

Speaker 2 (07:21):
We need to know who's boss.

Speaker 1 (07:24):
Yeah, Bill, thank you so much for joining us this morning.

Speaker 3 (07:28):
No worries.

Speaker 2 (07:29):
He's allayed some of our fears. Well, a little bit. He's put it in perspective, apart from the RoboCop part.