
October 19, 2025 18 mins

Osher's time at the recent SXSW resulted in a lot of new thoughts (and wonders and worries) around AI. He recounts them, along with reliving some of the best moments from his recent chat around AI with Dr Matt Agnew (find the full chat in the feed).

For tickets to Story Club, Osher's new book So What, Now What? and more, head here



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Yeah, thanks for listening to the show. This is Better
than Yesterday: useful tools and useful conversations to help make
your day-to-day better than yesterday, every episode, every
week since twenty thirteen. My name is Osher Günsberg and I'm very
glad you're here. I hope you had a wonderful week.
I went to South by Southwest. I went there on
Wednesday with my friend Campbell, who illustrated the book So
What, Now What?, which you can get in the show notes.

(00:22):
The day I went there, I went to work,
working for a big international marketing company, to talk about
artificial intelligence and what the creative industry looks like in
a time when Sora 2 exists. With Veo 3, you
can just type in a complete ad campaign and it'll come
out the other end of the machine. And I also

(00:45):
saw this really brilliant, brilliant conversation with an AI safety
researcher who didn't mince words when he was basically saying, look,
one of the big AI models, Claude, uses their own
model to type seventy percent of all the code that
goes into their product. And at the world benchmark rankings

(01:07):
of the best coder in the world. AI has been
the best chess player in the world for a long time.
Less than a year ago, AI was the one hundred
and seventy fifth best coder in the world, so there's
one hundred and seventy four humans better than AI. AI
is now the sixth best coder in the world, and
that's in less than six months that it got to

(01:28):
do that and be that good. So pretty soon a
large language model will be able to write code better
than any human, and therefore any human might not even
be able to know what else is going on in there.

Speaker 2 (01:40):
So the idea that we're.

Speaker 1 (01:44):
Old mate at the very end of Cast Away losing
the rope and the raft getting away from us, that is happening.
It's scary shit, but it's okay because we also need
to keep our eyes on the ball. I wanted to
kind of talk a bit about AI today, so we're
going to go and listen to a little bit of
Matt Agnew, doctor Matt Agnew, who has a master's degree

(02:04):
in AI research. He's an incredibly talented man. He's an astrophysicist,
he's a PhD, and he was once The Bachelor.

Speaker 2 (02:12):
Which is really lovely.

Speaker 1 (02:13):
So we're going to talk a bit about AI today,
and also I want to tell you about the most
valuable thing I learned that we can do ourselves as
these machines show up and do all of our jobs
better than us. So here's just a moment from Matt
from our conversation from episode five hundred and fifty.
The whole episode's fantastic, but in this bit, Matt talks
about how artificial intelligence is fundamentally different from other innovations

(02:36):
in technology, and he also kind of explores AI's
ability to replace mental labor and how, because it's doing that,
it could lead to a paradigm shift in how we
even view work and view productivity.

Speaker 3 (02:51):
We've had machines replace labor many, many times over in
the past. We've never really had machines replace mental labor
in the same way that AI does. And so I
think there's this potential for us to really change the
paradigm of what making a living means. And that's kind

(03:12):
of been a fairly constant idea, I'd say, for a while,
you know, you work your forty hour week, you sleep
for however many hours, and then you've got a bit
of time to yourself, and I think there's a way
that we could change that.

Speaker 2 (03:25):
I think work is always important.

Speaker 3 (03:26):
I think that sort of purpose is fairly critical to humans,
but we could kind of, you know, can we
dial that down a bit and can we re-amplify
some of the things.

Speaker 2 (03:36):
that humanity, or humans,

Speaker 3 (03:38):
really get a boost from, which largely I would
say is hobbies, passions, and relationships. And so I think
the idea of okay, well let's change rather than the
five day working week and forty hours a week and
work-life balance, it's like, what if making a living?
What if life? We can change that in quite a

(03:59):
significant way, a bit of an overhaul and you know,
let's have a bit of a reset.

Speaker 1 (04:04):
All.

Speaker 3 (04:04):
A lot of this is kind of rollover legacy stuff
that we've implemented over the years as we've had more
and more developments, and this potentially is an opportunity to
reset that and go, let's start again, because at some
point someone just made stuff up, right, Someone just made
up the forty-hour work week, someone just made

Speaker 2 (04:22):
Up how you spend your time.

Speaker 3 (04:24):
All these things were made up, and it's just kind
of rolled over and become the norm. But we can renormalize,
and so I think that's probably the biggest upside of AI,
the ability to do that, and I think that
could be really interesting.

Speaker 2 (04:38):
It could be done really

Speaker 3 (04:38):
wrong if it's not careful and considered. But I think
I'd like to have faith in human ingenuity in that
we will tackle these problems and we can solve them
in a clever way that ultimately is beneficial for everyone.

Speaker 1 (04:58):
So it's very clear that this technology has the potential
to redefine the way we live and the way we work.
When I was at South by Southwest, I asked the
two people I was on stage with, the global CEO
and the head of Asia-Pacific, APAC or whatever, mate,
what are the skills that we should probably have up
our sleeves as we move forward. And they said, the

(05:21):
ability to learn, unlearn and relearn something is probably the
most valuable thing you can have. For example, I don't
know, before penicillin, if you showed up to the
doctor with syphilis, they would go, oh geez, tough weekend,
here's a leech, because the best thing we had
was leeches. A week later, you show up with your
syphilis and you're worried about the leeches, and no, no, no, no,

(05:41):
it's fine. We have penicillin now, so from one week
to the next, the way we did the job is
completely different. We're still trying to heal the person, but
the way we do it is completely different. The job
we're doing is still the job we're doing, but the
way we do it we might need to relearn how
to do it very, very quickly, and to unlearn an old

Speaker 2 (05:58):
way of doing it.

Speaker 1 (06:00):
Another thing I spoke to Matt Agnew about, this is
episode five fifty, by the way, if you want
to go back and listen to it, just scroll on back,
is exploring how AI is working in education, how it's already
influencing education, and how kids are often way more adept
at understanding tech than grown-ups. He also
explores a bit of the ethical considerations of AI and
the implications that has on coming generations.

Speaker 3 (06:23):
Children are moving into an increasingly AI driven world and
education is power, and being adequately prepared is, I guess,
the responsibility of our generation to make sure the
children are ready. And I think tied in with
that as well is, I think we've spoken about it before,
the kind of decline in STEM in boys and girls,

(06:45):
disproportionately more in girls, both in performance
but, more concerningly, participation. And so I think again, creating
science and scientific literacy and discourse that children engage with
is really important. So yeah, there's kind of a
piece of, let's keep fostering a love of science in children,

(07:07):
but then also let's adequately prepare children for this AI
driven world that can be scary if they're not prepared,
and the kind of, I guess, the secondary part of
that is that there's a lot of parents, or
adults, who aren't across it, and I think some
of them are finding it quite challenging to talk to
their children about AI, or even to know, do I

(07:27):
need to be concerned?

Speaker 2 (07:29):
Are they being exposed to AI in dangerous ways?

Speaker 1 (07:31):
Kids could handle this kind of stuff. There's a misconception
that it's too complicated, that we can't tell kids. This
is Reuben's big thing, I bring him up again, but it's Reuben's
big thing. He has all these models made of molecules, right,
and his thing is, we teach them the ABCs,
we teach them numbers, but we don't teach them the numbers
and the ABCs of the actual physical world that they inhabit.

(07:51):
Gravity, pressure, that sort of stuff, I don't know, basic shit. And
he gives these kids these molecules and they build, they
build things in front of him. And he sent
me a photo one day and he said, this is
a grade eleven physics exam. It was completed by a seven
year old.

Speaker 2 (08:08):
Yeah.

Speaker 1 (08:08):
Right, and this kid's not some sort of genius they're
making a movie about, they're just someone who's had it
explained to them and they can figure it out.
But for some reason, we don't tell these kids this
stuff until they're sixteen, which is ridiculous.

Speaker 2 (08:20):
It's too hard for you, you can't deal with that, that's it.

Speaker 3 (08:22):
Yeah, I think the current education system needs
a bit of an overhaul. And that's not a
dig at teachers. I think they're tasked with this
nearly impossible task of, you need to teach thirty children
this particular topic, where they all have very unique learning
styles, in a limited period of time. Yeah, I think

(08:42):
that there's a lot of relics of an outdated education
system in place, and yeah, this is one of those.
It's like, it's the pacing, right, there's certainly concepts that
children can grasp much more quickly than that.

Speaker 2 (08:54):
Probably a few, like.

Speaker 1 (08:55):
It's common knowledge, if you speak another language to a
kid before the age of twelve, they will learn it
like that. Like, tell me this is any different. It's no different.

Speaker 3 (09:03):
Yeah, they're very, very robust and adaptive, they're sponges, right.
So yeah, and kind of almost counterintuitively, with the book
I've written about AI, the age group I've targeted are
probably already much more across it than the parents. And
so it's funny that, you know, I've written it for
eight to twelve-year-olds, and certainly there'll be a group within

(09:24):
that cohort that find it.

Speaker 2 (09:26):
You know, it's all new, but a lot of them
are already.

Speaker 3 (09:27):
They're doing coding, they're doing AI-adjacent subjects, and so
with a lot of the stuff they're probably, oh yeah, I
kind of already knew about this, but this is a little
more detail for me to understand what the implications are
and what the ethical considerations are.

Speaker 1 (09:43):
So yeah, it's very important that we prepare both children
and adults for the future, which is literally a month
from now. This is not ten years from now,
this is not Beyond 2000, this is not twenty
thirty-two, this is not an Olympic year. This is before
my next birthday. This shit is happening, okay, if not already.
It's so, so, so, so important. The ethics around this

(10:06):
stuff is already bolted out of the gate. I did
actually make a joke on stage with these two marketers: I said,
if there's one thing the advertising industry is known for,
it's ethics.

Speaker 2 (10:15):
I got a good laugh.

Speaker 1 (10:16):
I was pretty happy with that line, because my book's
been stolen by the LibGen hack, my first book Back,
After the Break. That and many, many other books were
used to train all these AI models, which are now
making squillions of bucks based off stolen data. And no
one's going to stop it. I can't stop it. No
one can stop it. And so the ethics around this
stuff is really, really full on. There was one more

(10:39):
thing that we kind of discovered about the best thing
we can do when it comes to making sure we
keep our humanity in the time when we're using all
these machines. But I do have to take a commercial break.
We'll be back with that and a little more from
Matt Agnew in a moment. Thanks for listening to the show.

(11:02):
We're just kind of recapping a little bit with
Matt Agnew and our conversation about AI, after I went
to South by Southwest this week and my head kind
of got exploded by how terrifying and groundbreaking the next
six months of the future are. It's coming so fast.
I don't know if people are really aware of how
quickly things are changing. One of the lectures I saw

(11:24):
that day, there was a quadrupedal robotic flamethrowing dog that
is autonomous and runs on AI, and you can buy
it for nine grand to defend your home, in America obviously.
But do I want a flamethrowing dog based on AI
that I don't know if it's going to be able
to tell the difference between my neighbor and a baddie?

(11:44):
And even then, do I want to barbecue a baddie
on my property?

Speaker 2 (11:46):
Probably...

Speaker 1 (11:47):
not. So, so weird. I don't want AI-driven death robots.
I don't want that, but they're happening. But never mind.
There was one more thing that Matt Agnew and I
talked about, and it goes beyond what's happening at the moment,
but it's coming way quicker than you or I can
possibly know, and that is artificial general intelligence, and we

(12:11):
talk about what that actually is, talking about how
does artificial general intelligence, AGI, differ from current AI, and
why it's called the holy grail of AI research. We
also talk a little bit about the implications of AGI
achieving sentience and surpassing human intelligence. It's a lovely listen. Enjoy.

Speaker 3 (12:31):
I would say that for most big tech companies playing
in the AI space, AGI, artificial
general intelligence, is seen as the sort of holy grail.
And what I mean by general intelligence is,

Speaker 2 (12:47):
ChatGPT,

Speaker 3 (12:48):
while it seems human-like in its conversation, it's
the illusion of conversation, right. It's the illusion of thinking
and the illusion of intelligence.

Speaker 2 (12:57):
It is algorithmically driven.

Speaker 3 (13:01):
But that also means that it's got a very narrow
kind of application or use case, and that use case
is creating human-like conversation. If you ask it to do
something more general, like drive my car from A to B,
it can't do that. It's got nothing programmed in at
all that relates to being able to navigate from A

(13:21):
to B, just like you can't ask Google Maps to
tell you, you know, what's two times two. These
things have very specific use cases and they're quite narrow
in function. General intelligence is the idea that, like us, you
can give it any sort of problem and it can
solve it the same way we solve it. If
I say, how do you get from A to B,
you can tell me, and in some way you can solve

(13:43):
that problem. If I give you some algebraic problem I
want you to solve, again, you could solve that. AGI, I
think, is seen as the holy grail because it's saying,
all right, not only do we have something that
you can interact with in a human-like fashion, like
ChatGPT, but it can solve things really like humans do.
And I think that's probably, arguably, the biggest milestone. And

(14:05):
I think that's sort of seen as the first step
on the way to what's called the singularity, which is
a whole other kettle of fish, being when that general
intelligence approaches the point of human-like intelligence and ultimately blows

Speaker 1 (14:20):
Past it, and that's where it gets identified as sentient?

Speaker 3 (14:24):
Yes, yes. I mean, sentience is still very hard to define.
You know, you ask ten different scientists to define
sentience and you get ten different answers. But yeah, if
you had something that is mimicking human intelligence,
it's very hard to argue against sentience at that point.
I think anything before that, okay, yeah, you know,
how do you... it's a bit more nebulous, right. But

(14:46):
as soon as something is as smart as you, it's very,
very difficult to say it's not sentient.

Speaker 2 (14:50):
But it's not.

Speaker 1 (14:52):
So I don't know. The singularity is both super exciting
and terrifying, isn't it? I mean, good gosh. If
you're not already having ethical conversations with your family about
using this stuff, it's important that you do. And if
you're not already speaking to your MP about the ethical
use of these tools, it's important that you do, because
they're happening so, so fast. And like I said, it's

(15:15):
not five years, ten years, twenty years, it's like within
a year. Shit is changing so fast, we're going to
have to get on top of it. But there is
one thing that we did talk about on stage,
because if art is a conversation, and all creativity is
just one idea and another idea that have never been
combined yet combining, and that is a new idea. If

(15:38):
the ideas we are being inspired by are being generated
by AI, and then we type into another AI, hey,
I want to do this thing, we are unwillingly just
a warm meat bag, a part of the AI
slop loop. And that's no good if the only inspiration
we're getting is just slop, you know, and we are

(16:01):
a part of that. And so, when we were on stage,
we were talking about how important it
is to put your fucking phone down and go out
and touch the world, feel the world, feel tactile things
that a machine can't experience. Something human, the emotion, the
connection, the thing that connects you to another person, particularly when it

(16:22):
comes to, like, and these are my thoughts now, when
it comes to connecting with your friends. I mean, even
since Sora 2 came out in the last, like, the
last two weeks, I scroll through my feed and I
don't even know if it's real or not. And I'm so
reluctant to even look at my freaking phone. But if
I'm not looking at my phone, am I connecting with
people that I can't see with my own eyes? So

(16:42):
I'm thinking more and more it's really important to get
out there and actually look at another human being and
see what the light does to their eyeballs when they
look at you, and see how the wind flicks through
their hair, and listen to the tone of their voice,
as they tell you about the sandwich they ate yesterday,
and actually connect with another human being, because
that is what makes us human, and it will be

(17:02):
the thing that we cannot have taken away from us,
and it will be the thing that allows us to
use these tools to create things and do stuff that
other people are not doing if we maintain our humanity
and maintain our perspective on things. Thank you so much
for listening. I really appreciate it. I haven't done one
of these little episodes in a while. Thank you very

(17:22):
much to add a bunch of for chopping it together.
If you want to hear more of Doctor Matt Agnew's
insights into AI, it's brilliant stuff. Look, he's a super
brilliant man. He has a PhD in astrophysics and a master's
in AI, and he was also once The Bachelor, and
he's a delight, he's a delightful man. Episode five
fifty is the show. Just scroll on back

(17:43):
in your feed and you'll grab it there. Thanks so
much for listening. And if you're still listening now,
you want to come to Story Club on the ninth
of November: Mark Humphries, Harley Breen, Nina Oyama, Beck Wilson,
and myself, and a new guest we are yet to announce,
but it's probably maybe the live show I most want to
get to do for the year. It'll definitely sell out.
Get your tickets. The link is in the show notes. Thank

(18:05):
you so much for listening. I'll see you Wednesday.


© 2025 iHeartMedia, Inc.