Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome back everybody to another deep dive.
(00:02):
And this time we're tackling something
I think is especially intriguing.
I mean, you sent me this whole stack of articles,
notes, forum threads, all about OpenAI's Dev Day in London.
Right.
And it's really a puzzle, isn't it?
We're talking like secret code names
and AI is doing these amazing things out in the real world.
And some people are even saying, you know,
this is the start of something really big,
(00:25):
a real tech revolution.
So are you ready to sort of dive into all this
and try to decode it with me?
Yeah, absolutely.
It seems like OpenAI decided to skip the usual, you know,
the hype and the teasers this time.
And instead they just kind of threw everyone
a bunch of curve balls.
Right.
It's like they wanted to shake things up a little bit.
And speaking of curve balls, let's
start with the biggest one, this O1 thing.
I mean, everyone was expecting GPT-4.5, right?
(00:46):
Maybe even GPT-5.
But no, instead OpenAI drops this O1 on us.
So what is it?
Is it a model?
Is it some kind of project?
Some sort of philosophical statement about AI?
Even the experts in these articles you sent are kind of
stumped.
What do you make of this mysterious O1 label?
Yeah, it's like they're speaking in code. O1.
I mean, 0 and 1, that's the basis of binary code, right?
Like a starting point.
(01:07):
So maybe it's a hint that this isn't just a small step
forward, but a whole new beginning, a whole new way
of thinking about AI.
A whole new beginning.
OK, so we've got this mysterious name.
And then to make it even more intriguing,
they show O1 controlling a drone in real time.
Yeah.
Not just simple commands, but navigating obstacles.
(01:28):
Yeah.
Adjusting to them while writing the code
to do it all on the fly.
I mean, can you imagine sitting down and trying
to code that yourself?
It would take you, what, a week at least.
Oh, yeah, at least.
And probably a lot longer and with a lot more
mistakes than the AI had.
So you know, this demo, it wasn't just showing off
their technical skills.
It was a statement.
AI isn't just stuck in the digital world anymore.
(01:49):
It's reaching out into our physical world
and interacting with it in ways we're only
starting to understand.
You know what?
I found this little anecdote buried in one of these forum
posts you sent.
And it's kind of unbelievable.
Apparently, they also used O1 to order pies
from a bakery near the event.
But here's the kicker.
It negotiated the price during the phone call.
The AI was haggling over pastries.
(02:10):
Yeah.
It's like something out of a sitcom.
I know it sounds funny.
But think about the technology behind it for a second.
We have an AI that understands real time information
about pie shops, their menus, even maybe customer reviews.
And then it uses all that to negotiate a price, which
means it understands value, scarcity.
Right.
Maybe even a bit of psychology to persuade the baker.
(02:30):
So from drones to pie deals, O1 is already
out there in the real world trying things out,
testing its boundaries.
But while everybody was focused on these demos,
something really interesting happened during Sam Altman's
virtual Q&A. Did you catch what he said about AI agents?
Yeah.
His comments were fascinating.
He wasn't talking about simple chat bots or to-do list apps.
He described these AI agents as being
(02:52):
like really smart senior coworkers who could
handle complex multi-day tasks.
So this isn't just about automating simple things.
It's about AI taking on real responsibility
that requires judgment, learning, maybe even creativity.
That's a big claim.
He even mentioned AI tackling the two sigma problem
in education, that idea that one-on-one tutoring is
(03:12):
super effective but impossible to do for everyone.
Are we talking about AI tutors that
could adapt to each student's needs,
give personalized feedback, maybe even tell a few jokes
to keep them engaged?
That seems to be the direction OpenAI is moving in,
a future where AI isn't just a tool but a partner, a mentor,
maybe even a friend.
But before we get too carried away with these utopian ideas,
(03:33):
there's another twist, a rumor floating around the internet
about a secret weapon OpenAI might be hiding.
You mean besides this pie-ordering, drone-flying AI that's
already blowing everyone's mind?
What could possibly top that?
People are whispering about a new unnamed model that's
supposedly crushing all the AI benchmarks,
scoring higher than anything we've seen before,
a model so advanced that OpenAI might be hesitant to even
(03:55):
reveal it yet.
OK, now they're just showing off.
But I have to admit, I'm hooked.
This Dev Day wasn't just an event.
It was a master class in generating intrigue.
What do you think OpenAI is trying to do
with all this secrecy?
Maybe they're being careful, testing the waters
before they release something really powerful.
Or maybe it's strategic, keeping their competitors guessing
(04:18):
while they stay ahead of the game.
Yeah, it's a high stakes game for sure.
But OpenAI isn't just about building these powerful AIs.
They're also thinking about how these models could
change the world.
Remember that question about AI generated content
flooding the internet and how that could lead
to even more misinformation?
Yeah, that's a big one.
It's already hard enough to tell what's real and what's fake
online.
(04:38):
If you add a bunch of AI content creators to the mix,
things could get really messy.
Right.
Altman actually addressed that during the event.
He didn't shy away from the risks,
but he also talked about a pretty interesting possibility.
What if we could use AI to fight back against AI generated
misinformation?
Like some kind of AI powered immune system for the internet.
Exactly.
(04:58):
Algorithms that can analyze text, images, even videos,
and they could flag content that seems misleading or fake.
They could show you sources, point out inconsistencies,
maybe even create counter arguments based
on real verified information.
Wow, that would be incredibly helpful.
But going back to these AI agents for a minute,
Altman's comparison to really smart senior coworkers
(05:19):
really got me thinking.
If we're talking about delegating important tasks
to AI, wouldn't that require a level of trust
that goes beyond just asking Alexa to play a song?
Trust is a key issue here.
To work effectively with these AI agents,
we need to understand them.
Their strengths, their weaknesses,
how they make decisions.
It's like any working relationship really.
You need communication, transparency,
(05:40):
and some level of understanding.
But what if these AI agents get too smart?
Altman talked about them handling tasks over days,
learning and adapting.
Could they eventually become something
more than just assistants?
That's a question that's been around for decades
in science fiction, but now it's becoming a real concern.
The line between tool and entity is getting blurry,
(06:01):
and it's not just about intelligence.
Altman also talked about AI amplifying human capabilities,
helping us overcome our own limitations.
Like that example he gave about booking a restaurant.
A person might call a few places,
but an AI could contact hundreds of restaurants
at the same time.
Find the best option based on your preferences,
your diet, even the noise level of the restaurant.
(06:22):
Exactly, and that's a pretty basic example.
Imagine using that kind of power for scientific research,
financial modeling, even creative work.
What if an AI could help you explore thousands of designs,
test hundreds of theories, analyze huge amounts of data
to find hidden patterns?
It's like having a team of super-powered interns
working 24-7 trying everything.
It's exciting and a little scary at the same time.
(06:44):
It's a window into a future where humans and AI
could work together to achieve amazing things.
But it also raises some big questions
about the nature of work, the value of human expertise,
even the definition of intelligence itself.
And just when you thought it couldn't get any weirder,
we've got this mysterious unnamed AI model
hiding in the shadows.
(07:04):
Right, the rumors are spreading.
Some people think it's a multimodal model,
so it can process not just text, but images, audio, video.
Others say it's been trained on a massive amount of data,
more than anything we've seen before.
It's like they're building the ultimate AI Swiss army knife,
something that can do anything.
And they're keeping it secret, maybe to improve it,
(07:24):
or maybe because they're afraid of how powerful it is.
Or maybe they're waiting for the perfect moment to reveal it,
to really shake things up in the AI world.
It's a game of strategy, anticipation,
and maybe a little bit of showmanship.
It's funny, with all this talk about super-advanced AI,
OpenAI seems to be really interested in something
pretty basic.
Education.
(07:45):
Remember when Altman brought up that two sigma problem?
You know that thing about one-on-one tutoring
being super effective, but impossible to scale up?
Yeah, it's like the perfect solution
is personalized teaching, but there just
aren't enough teachers and time to do it for everyone.
Right, so maybe AI could fill that gap.
Imagine an AI tutor that learns how you learn,
what you're good at, what you struggle with.
(08:08):
And it adjusts the lessons, the pace, the difficulty.
And it explains things in a way that makes sense to you.
It'd be like having your own personal Socrates,
just explaining everything.
That's a really cool idea.
And think about it for students who maybe can't afford a tutor
or don't do well in a regular classroom.
This could be huge.
It could make learning more fun, more effective,
and available to anyone.
Yeah, it could really change things.
(08:29):
But does that mean teachers would be out of a job?
No, not at all.
It's about helping teachers not replacing them.
Imagine a teacher who has an AI assistant that
helps grade papers, plan lessons, even figure out
which students need extra help.
That frees up the teacher to do what they do best:
inspire their students, be a mentor,
and help them love learning.
Yeah, so it's not about getting rid of teachers.
(08:50):
It's about giving them superpowers.
And it's not just education.
Think about health care.
You sent me that article about AI assisting with surgery.
I mean, it's mind blowing.
We're already seeing AI used for diagnosis, drug discovery, even
robotic surgery.
Imagine a world where AI helps surgeons
with those really complicated procedures,
gives them feedback in real time,
(09:11):
points out potential problems, even
does things that require superhuman precision.
It's like having an extra pair of eyes, a super steady hand,
and a medical encyclopedia all in one.
But all this progress, all this potential,
it comes with some risks.
We've all seen those sci-fi movies
where the AI takes over, right?
That's a common fear.
And it's not something we should ignore.
As AI gets smarter, we need to be really careful
(09:33):
about the ethics.
We need to make sure these technologies are used for good,
that they align with our values.
And we need safeguards to prevent things from going wrong.
It's like we're exploring a whole new world,
this unknown continent of AI.
We have to be careful, wise, and maybe a little humble.
Absolutely.
OpenAI's Dev Day wasn't just about showing off
new technology.
It was like a wake-up call, a reminder
(09:55):
that we're at a really important point in history.
The choices we make today, the questions we ask,
the rules we set for AI, they're going to affect all of us
in the future.
So as we wrap up this deep dive, what's
the one thing you want to leave our listener with?
The future of AI is still being written.
It's a story full of possibilities, both good and bad.
And we all have a part in writing that story.
(10:15):
So stay curious, stay engaged, and stay informed,
because the future is coming, ready or not.
Well said.
And to our listeners out there, thanks for joining us
on this deep dive.
It's a journey we're all taking together,
and it's just beginning.
So keep exploring, keep asking questions,
and keep an open mind about the amazing and maybe
a little scary future of AI.
Thank you.