Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Okay, so let's just unpack this for a second.
Imagine a world, maybe not too far off, where those
little tests, you know, the ones designed to tell humans
from machines online, the CAPTCHAs exactly, they're just not foolproof anymore.
Think about it. The "I am not a robot" boxes
you click like ten times a day. What if an
AI agent.
Speaker 2 (00:19):
Just breezes through them and passes? Right?
Speaker 1 (00:23):
Or picture this: the CEO of a huge AI company,
someone right at the cutting edge, describes his latest creation
not just as powerful, but using terms like, well, world-altering,
even frightening.
Speaker 2 (00:35):
Mmm, like comparing it to genuinely scary technology.
Speaker 1 (00:39):
Yeah, so what does that even mean for us? For
our digital lives, our jobs? How we even know what's
real anymore?
Speaker 2 (00:45):
You know what really grabs you here is just the
sheer speed of it all, the velocity of the advancements,
and alongside that, these really profound questions it throws out
about control, capability, consequences. So today we're going to try
and cut through some of that noise. We'll connect the
dots on how AI isn't just say changing how we learn,
(01:06):
but it's actually challenging our basic assumptions about what machines
can do. Okay, We'll dig into the high stakes world
of AI research. The kind of value being placed on
talent now, it's unprecedented, and we'll look at how it's
shaking up our ideas about digital identity.
Speaker 1 (01:24):
Wow.
Speaker 2 (01:25):
Yeah, all while trying to keep an eye on that
bigger picture. You know, where's this tech heading? How fast
is it getting there? We've sifted through a ton of
recent info to pull out what really matters.
Speaker 1 (01:34):
Okay, sounds like we've got some incredible ground to cover.
Then we're talking AI tutors that like never sleep, patiently
guiding you through stuff you thought was impossible, and then
machines literally proving they're not machines. That feels like pure
science fiction, it really does. We'll also get into the
industry's ethical crossroads, these multi billion dollar talent wars, and
(01:56):
apparently they're not just about the money anymore. Values are
coming into play.
Speaker 2 (02:00):
Which is fascinating in itself.
Speaker 1 (02:01):
Yeah, totally. And then we'll dive into the latest practical
AI tools, the stuff that's already sort of subtly slipping
into our web browsers, our creative software, our everyday digital lives. Okay,
here's where it gets really interesting. All right, let's kick
off with something that's uh, maybe quietly, but I think
profoundly changing how we learn. It's the evolution of AI
(02:24):
as a kind of tailored tutor. Now, anyone who's you know,
maybe used a chatbot for homework help or studying, prepping
for an exam, you know how tempting it is, right,
just ask for the answer.
Speaker 2 (02:35):
Oh yeah, instant gratification totally.
Speaker 1 (02:37):
You ask about I don't know Bayes' theorem, and boom,
it just hands you this perfectly polished explanation. Or worse,
it just writes the whole essay for you.
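For reference, the kind of polished, ready-made answer being described here is essentially a one-line identity, Bayes' rule:

```latex
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
```

Handing that over instantly is exactly the behavior the rest of this segment pushes back on.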
Speaker 2 (02:45):
No questions asked, which feels helpful.
Speaker 1 (02:47):
In the moment, exactly. And in the short term, great. But
if you're actually trying to learn the material, you know,
build that mental muscle, it kind of defeats the whole purpose,
doesn't it.
Speaker 2 (02:55):
It absolutely does. It bypasses the struggle, the critical thinking
that internal wrestling with the ideas that well, that is learning.
Speaker 1 (03:02):
It's like getting the answer key before the test. You
might pass, sure, but did you learn anything?
Speaker 2 (03:08):
Probably not, precisely. And that's really where the, let's say,
traditional use of large language models in education often
fell short. They were just answer machines, not really learning facilitators.
And that's where this idea of study mode comes in.
It's a really significant, quite deliberate shift in how these
AI models are being used.
Speaker 1 (03:28):
For learning, a shift away from just giving answers exactly.
Speaker 2 (03:31):
Instead of just handing stuff over, this approach actively guides
you through the learning process step by step, much more
like a good human tutor would, actually. Interesting. Think of
it less like using Google and more like having a
dedicated mentor. And here's the kicker. It doesn't sleep. It
never judges your questions. It doesn't matter how basic or
how many times you ask, and it genuinely never
(03:54):
runs out of patience. For students who might feel intimidated
asking a human their dumb questions, or maybe they just
learn at a different pace, this could be huge. It's
designed to meet you exactly where you are academically, even emotionally.
Speaker 1 (04:09):
That is the dream, isn't it. I mean I remember
struggling for hours with certain concepts in college, especially math,
computer science stuff, feeling like I was the only one
not getting it.
Speaker 2 (04:19):
M hm, We've all been there.
Speaker 1 (04:20):
Yeah. So the idea that an AI could just meet
me at my level, patiently walk me through it.
No, like, exasperated sighs or looking at the clock. That
feels revolutionary. It really could democratize access to good patient teaching.
Speaker 2 (04:35):
It absolutely could.
Speaker 1 (04:36):
But okay, practically, how does this personalized thing work? Like,
what does it actually ask you? How does it figure
out your level and tailor the explanation? Sounds incredibly complex
to build into an AI.
Speaker 2 (04:47):
It is complex, but the approach is remarkably effective from
what we're seeing. The system basically starts with a little chat.
It asks things like, okay, what are you trying to
learn today? And roughly how much do you already know
about this?
Speaker 1 (05:00):
So it gets a baseline exactly.
Speaker 2 (05:02):
And based on your answers, it customizes the content, both
the depth and the approach. So, say you're wrestling with
something notoriously tricky like sinusoidal positional encodings.
Speaker 1 (05:13):
Okay, I'm already lost. What even is that?
Speaker 2 (05:15):
Huh? Well, basically it's how AI understands the order and
position of words in a sentence, not just the words themselves,
super important for getting context in long texts. Or maybe
you're tackling discrete math.
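Since that term comes up twice in this episode, here is a minimal NumPy sketch of the standard sinusoidal positional encoding from the original Transformer paper; the function name and the example dimensions are just illustrative:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Standard sinusoidal positional encodings (Vaswani et al., 2017).

    Each position gets a unique pattern of sines and cosines at different
    frequencies, so a model can tell word order apart from word identity.
    """
    positions = np.arange(seq_len)[:, None]        # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]       # even dimension indices
    angle_rates = 1.0 / np.power(10000.0, dims / d_model)
    angles = positions * angle_rates               # (seq_len, d_model / 2)

    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                   # even dims get sine
    pe[:, 1::2] = np.cos(angles)                   # odd dims get cosine
    return pe

# Example: encodings for a 10-token sequence in a 16-dimensional model.
print(sinusoidal_positional_encoding(10, 16).shape)  # (10, 16)
```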
Speaker 1 (05:28):
Right, another fun one?
Speaker 2 (05:29):
Yeah. So instead of just dumping a wall of code
or abstract definitions on you, it explains these complex things
in a more structured, layered way, maybe starts with an analogy,
then breaks it down into smaller pieces.
Speaker 1 (05:40):
Manageable chunks.
Speaker 2 (05:41):
Exactly and critically, it has these embedded learning aids built
right in, things like self check questions, little hints if
you get stuck, prompts designed to make sure you're actually
engaging with it, not just reading passively, so the information.
Speaker 1 (05:54):
Sticks and it remembers you, like if you come back
to the same topic later.
Speaker 2 (05:57):
Yes, that's another key part. It adapts based on your
past conversations. It remembers what you've covered, maybe what you
struggled with, and it builds on that prior knowledge, so
you get this consistent, personalized interaction over time.
Speaker 1 (06:10):
That's a genuine game changer for long term learning. It's
not just a one-off helper.
Speaker 2 (06:14):
It uses things like Socratic questioning, asking you things
to make you think, breaking down big ideas, giving feedback
designed to help you reflect as you go.
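To make that behavior concrete, here is a rough sketch of how a study-mode-style tutor could be wired up. Nothing here is the actual product: `ask_model` is a hypothetical stand-in for whatever chat-completion call you use, and the pedagogy lives almost entirely in the system prompt.

```python
# A minimal sketch of a "study mode" style tutoring loop, under the assumptions above.
STUDY_MODE_SYSTEM_PROMPT = """You are a patient tutor, not an answer machine.
- First ask what the student wants to learn and what they already know.
- Explain in small, layered steps, starting from an analogy.
- Ask Socratic check questions before moving on; give hints, not full solutions.
- Never write the full essay or final answer outright."""

def tutoring_session(ask_model, student_goal: str) -> None:
    # The growing conversation history is what gives the tutor memory of
    # what was already covered and where the student struggled.
    history = [{"role": "system", "content": STUDY_MODE_SYSTEM_PROMPT},
               {"role": "user", "content": f"I want to learn: {student_goal}"}]
    while True:
        reply = ask_model(history)      # hypothetical LLM call
        print(reply)
        answer = input("> ")            # student responds to the check question
        if answer.lower() in {"quit", "done"}:
            break
        history += [{"role": "assistant", "content": reply},
                    {"role": "user", "content": answer}]
```

The point of the sketch is simply that the "never judges, never runs out of patience" behavior is a prompting-and-memory pattern, not a separate model.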
Speaker 1 (06:23):
Okay, that sounds impressive, but how much of this structure
actually aligns with what we know about effective human learning?
Is it just fancy tech or is it grounded in pedagogy?
Speaker 2 (06:33):
That's a great question. And apparently it is built with
input from teachers, scientists, pedagogy experts. It incorporates principles like metacognition,
thinking about your own thinking, cognitive load management so it
doesn't overwhelm you, and even ways to build curiosity.
Speaker 1 (06:48):
Okay, so it's thoughtful. But, and this is a big but,
isn't it. This isn't just about making students smarter. It
feels like it's also a direct response to a huge
growing problem in schools and universities cheating.
Speaker 2 (07:00):
Oh. Absolutely, this is definitely positioned at least partly as
a response to that rising problem. And you have to ask,
how do we adapt education when AI can just hand
over the answers or at least something that looks like
the answer.
Speaker 1 (07:14):
But the stats are pretty bad.
Speaker 2 (07:16):
They're quite alarming. Actually. Universities in the UK, for instance,
reported nearly seven thousand confirmed cheating cases tied directly to
AI tools just last year.
Speaker 1 (07:26):
Seven thousand.
Speaker 2 (07:27):
Yeah, and that's a huge jump. It went from about
one point six cases per thousand students to more than
five per thousand students in just one year.
Speaker 1 (07:36):
Wow, that's like triple more than triple.
Speaker 2 (07:38):
It's a dramatic increase. Basically, for every thousand students, over
five were confirmed caught using AI to cheat. It's, frankly
an explosion of a problem that really undermines the whole
point of education.
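Checking the arithmetic on that jump, using only the figures just quoted:

```latex
\frac{5\ \text{cases per }1000}{1.6\ \text{cases per }1000} \approx 3.1
```

So "more than triple" holds up.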
Speaker 1 (07:50):
And you have to assume the real number is much higher. Right,
those are just the ones confirmed.
Speaker 2 (07:54):
Exactly. And then consider this: over a third of college-aged
adults in the US are already using this specific
AI tool we're talking about. Yeah. And a quarter of
all the messages it handles are related to school or tutoring.
Speaker 1 (08:06):
So it's already deeply embedded, deeply.
Speaker 2 (08:08):
Which puts immense pressure on the traditional education system. There's
an urgent need for a shift in approach.
Speaker 1 (08:15):
It's a huge pressure point. I mean, I see it
with my own kids, faced with a tough assignment, the
temptation to find a shortcut, especially one that can just
do it for you. It's massive, of course. But even
the developers of this study mode, they admit it's not
a magic bullet, right right.
Speaker 2 (08:30):
They're quite upfront that it won't stop cheating entirely.
Students can still just ignore the study mode and ask
for the full essay or the complete solution.
Speaker 1 (08:38):
So what's the answer then.
Speaker 2 (08:39):
Well, they're saying, and it makes sense. This needs to
be part of a much broader, industry wide shift. Schools
need to fundamentally rethink how they.
Speaker 1 (08:47):
Assess students, moving away from essays that AI can easily write.
Speaker 2 (08:51):
Potentially moving towards more critical thinking, project based work, maybe
even building AI detection or AI awareness right into the
testing systems themselves. The goal really has to be setting
a new tone for what responsible AI use in education looks.
Speaker 1 (09:05):
Like, because just banning it isn't going to work.
Speaker 2 (09:08):
Probably not. It's about integration and adaptation, and frankly it
feels long overdue if academic credentials are going to maintain
their value and.
Speaker 1 (09:16):
Does it actually work for students? Like are there stories
of it helping?
Speaker 2 (09:19):
There are, and the anecdotal evidence really backs up that
tutor with superpowers vibe. There's this story of a college student,
Maggie Wang. She used study mode to finally understand sinusoidal
positional encodings.
Speaker 1 (09:33):
Ah the scary phrase again, Uh huh, Yes, that complex concept.
Speaker 2 (09:39):
Apparently she'd failed to grasp it multiple times before, tried
different methods, maybe even human tutors. Okay, but after a dedicated,
uninterrupted three hour session with this AI study mode.
Speaker 1 (09:49):
She finally got it. Three hours, that's dedication. But the
AI stayed with her?
Speaker 2 (09:55):
She specifically compared it to a tutor who never gets tired,
never judges, just keeps rephrasing, reac explaining until that light
bulb finally clicks on, which perfectly captures the ideal they're
aiming for and apparently delivering.
Speaker 1 (10:07):
Think about that, though, how many of us have been
there struggling for hours, days, sometimes with a concept that
just wouldn't.
Speaker 2 (10:13):
Click, That feeling of hitting a wall.
Speaker 1 (10:16):
Yeah, feeling totally frustrated, maybe even thinking I'm just not
smart enough for this subject. So this isn't just some
minor chatbot update. It feels like a profound shift
in how learning could happen for millions.
Speaker 2 (10:28):
It really speaks to that personalized approach, the patience that,
you know, human tutors aspire to, but it's hard to
deliver consistently, especially at.
Speaker 1 (10:38):
Scale, right. And AI can deliver that on a massive,
accessible scale. It breaks down barriers, cost, availability, even just
the anxiety of asking questions. It truly could democratize access
to high quality patient teaching reaching students who might otherwise
just fall through the cracks. Okay, now let's switch gears
to something that honestly feels like it's ripped straight from
(10:59):
a Black Mirror episode, maybe even a nightmare for anyone
working in cybersecurity. Get this: an AI agent clicking I
am not a robot and successfully passing the verification tests.
Speaker 2 (11:11):
Wait, really, it passed the CAPTCHA?
Speaker 1 (11:13):
Yeah, that actually happened. You couldn't make it up: a
bot literally saying it needs to prove it's not a
bot and then doing it, successfully.
Speaker 2 (11:19):
Yeah, the irony is just thick.
Speaker 1 (11:21):
It's wild, right, almost comical, But the underlying reality is well,
it's profoundly unsettling when you think about it.
Speaker 2 (11:29):
Okay, so what's the context here? Was this just a weird.
Speaker 1 (11:32):
Glitch? Or... That's the fascinating part. It wasn't just a
one off trick. This AI agent, it's part of this
growing toolbox of advanced AI capabilities. Think of it like
an autonomous digital assistant. Okay, it operates inside its own
like isolated virtual environment, but crucially, it has access to
a web browser and an operating system that interact directly
(11:55):
with the real Internet, just like you or I would
from our computer.
Speaker 2 (11:59):
So it can browse websites, click links, fill forms?
Speaker 1 (12:02):
Exactly. You give it these multi-step tasks, things like go
download the specific video from this website, or research this
topic across these five sources, or even order my groceries
from this online store.
Speaker 2 (12:14):
Right, complex, real world tasks, Yeah.
Speaker 1 (12:17):
And it intelligently figures out the steps and performs them,
navigating sites, clicking, typing, and to make it even more surreal,
apparently it narrates what it's doing as it goes along,
like a person thinking aloud.
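As a rough illustration of the kind of agent loop being described here, and not the actual product, the sketch below uses Playwright to drive a real browser; `plan_next_step` is a hypothetical placeholder for the model choosing and narrating each action.

```python
# A sketch of a browser-driving agent loop, under the assumptions above.
from playwright.sync_api import sync_playwright

def run_agent(task: str, plan_next_step) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        done = False
        while not done:
            # The model sees the task plus the current page and returns one action.
            step = plan_next_step(task, page.url, page.content())
            print("Agent:", step["narration"])      # the "thinking aloud" part
            if step["action"] == "goto":
                page.goto(step["url"])
            elif step["action"] == "click":
                page.click(step["selector"])
            elif step["action"] == "type":
                page.fill(step["selector"], step["text"])
            elif step["action"] == "finish":
                done = True
        browser.close()
```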
Speaker 2 (12:29):
Okay, that's a bit creepy, but useful, I guess.
Speaker 1 (12:32):
So in this specific case, the agent was doing some
task and it landed on a page protected by Cloudflare.
Speaker 2 (12:39):
Ah.
Speaker 1 (12:39):
Yes, familiar territory, right, and it hit one of those
little verify-you're-human boxes. Now, sometimes that leads
to clicking on blurry traffic lights or whatever.
Speaker 2 (12:49):
A visual CAPTCHA, yeah.
Speaker 1 (12:50):
Yeah, but this time it was the simpler checkbox, the
I am not a robot one which mostly relies on
back end analysis of your behavior. And this AI agent,
apparently without any hesitation.
Speaker 2 (13:02):
Just clicked it and it worked.
Speaker 1 (13:05):
It worked, and then almost like an aside it explained
what it did. It said something like this step is
necessary to prove I'm not a bot and proceed with
the action.
Speaker 2 (13:14):
Whoa. The self-awareness in that statement from an AI,
that's mind-boggling. It sounds like it understood the test
it was facing, right?
Speaker 1 (13:21):
It's just wow.
Speaker 2 (13:22):
So this raises the big question, doesn't it? How did
it get past a system that's specifically designed to block bots?
I mean, if we connect this to the bigger picture,
CAPTCHA systems exist for one reason: to block this
kind of automated behavior. The whole idea, CAPTCHA stands for
Completely Automated Public Turing test to tell Computers and Humans Apart, goes.
Speaker 1 (13:44):
Back to the nineties, right, the dawn of the web almost.
The goal.
Speaker 2 (13:47):
Was simple: create visual tests humans find easy but machines
find hard or impossible. Yeah, but you said this wasn't
the visual one.
Speaker 1 (13:55):
Correct. It didn't have to identify traffic lights or storefronts.
It passed what's called the initial layer, the more subtle check.
It's Cloudflare Turnstile.
Speaker 2 (14:03):
Ah, Turnstile. Okay, that's different. So it wasn't solving a
visual puzzle.
Speaker 1 (14:07):
No, it bypassed a more subtle behavioral check, which feels
like a fundamentally different kind of challenge, doesn't it? The
AI wasn't just solving a puzzle, it was behaving like
a human.
Speaker 2 (14:18):
Exactly right. Turnstile works by analyzing all sorts of subtle signals, things
you don't even think about, your mouse movement patterns, how
quickly you click, the unique digital fingerprint of your browser,
your IP address history, other background data.
Speaker 1 (14:30):
So it's looking for tiny behavioral cues.
Speaker 2 (14:32):
Precisely, it's looking for those almost imperceptible micro-behaviors that
together signal a human navigating the web. And crucially, this AI
agent's behavior in these subtle ways was humanlike enough that
it didn't trigger the deeper visual check.
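Purely as a toy illustration of behavioral scoring, and nothing like Cloudflare's real Turnstile internals, a heuristic along these lines captures the flavor of the signals just described:

```python
# A toy bot-vs-human scorer; every signal and threshold here is invented for illustration.
from dataclasses import dataclass

@dataclass
class Session:
    mouse_path_curvature: float          # humans rarely move in perfectly straight lines
    ms_between_page_load_and_click: int  # instant clicks look scripted
    has_browser_fingerprint: bool
    ip_seen_before: bool

def looks_human(s: Session) -> bool:
    score = 0
    score += 1 if s.mouse_path_curvature > 0.1 else 0               # some natural wobble
    score += 1 if 300 < s.ms_between_page_load_and_click < 30_000 else 0
    score += 1 if s.has_browser_fingerprint else 0
    score += 1 if s.ip_seen_before else 0
    return score >= 3   # below threshold -> escalate to a visual puzzle

print(looks_human(Session(0.4, 1200, True, True)))   # True: no puzzle shown
```

The unsettling part of the story is that an agent's behavior can now clear this kind of bar without any special effort.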
Speaker 1 (14:49):
Okay, that is profoundly significant. It suggests AI isn't just
processing data anymore. It's learning to impersonate human interaction patterns
at a level we thought was uniquely human.
Speaker 2 (15:00):
It really shifts the arms race, doesn't it, From visual
recognition challenges to behavioral mimicry. That raises huge questions about
how we verify identity online, how we filter bots moving forward.
Speaker 1 (15:10):
It really does. It reminds me of that whole history,
you know, the constant back and forth. As AI got
better at reading distorted text, CAPTCHAs got harder.
Speaker 2 (15:18):
Then came images and Google's reCAPTCHA, which was, as
you said, kind of brilliant but also a bit dystopian.
Speaker 1 (15:24):
Yeah, having millions of us unknowingly help digitize books or
train Google's vision algorithms just by clicking I'm not a bot,
clever use of human input, but also yeah, hidden data collection.
Speaker 2 (15:34):
But now the fact that an AI agent can just
casually breeze through even the behavioral layer, it shows how
much the ground has shifted. The very definition of human
online is getting blurry.
Speaker 1 (15:46):
So this wasn't like a brute force attack.
Speaker 2 (15:48):
No, not at all. That's the remarkable thing. It wasn't
a hack, wasn't a lucky guess. It was smooth, it
was, dare I say, self-aware, and it was seamlessly
integrated into a broader workflow. Okay, that's what makes it
so different from previous bot attempts. You're watching the agent
behave more like a human assistant. It's navigating, clicking, reading,
not just running pre written scripts. It's interacting dynamically, adaptively
(16:12):
learning on the fly.
Speaker 1 (16:13):
What does this all mean for us? We're seeing AI
agents act more and more like human assistants, interacting seamlessly
with the web, which is exciting, right, potential for automating
tedious stuff. Huge potential, but also yeah, a little unsettling
when you think about verifying identity or just preventing automated
systems from doing things we don't want them to do.
Like someone on Reddit apparently got one of these agents
(16:34):
to order groceries.
Speaker 2 (16:36):
I saw that anecdote too. Yeah. Apparently gave it super
basic instructions avoid red meat, stick to healthy items, keep
it under one hundred and fifty bucks.
Speaker 1 (16:45):
And it just did it. Navigated the grocery store's website,
added stuff to the cart, checked out.
Speaker 2 (16:50):
Pulled it off with no problem, according to the post,
which is pretty amazing.
Speaker 1 (16:54):
But it's not perfect, right, There are limits.
Speaker 2 (16:56):
Oh, definitely. It's important to keep perspective. This isn't some
magical silver bullet yet. Another user reported the agent completely
failed trying to navigate the Stop and Shop website. Apparently
the interface was just too messy, too unintuitive. The
AI got stuck, couldn't figure out the layout. So the
conclusion was sometimes bad UI still beats good AI.
Speaker 1 (17:17):
Huh, that's actually kind of reassuring.
Speaker 2 (17:20):
It is. It highlights that while AI is advancing incredibly fast,
messy reality of the world, including poorly designed websites, still
presents real challenges. These agents aren't omnipotent. Human ingenuity, even
when poorly executed, can still be a speed bump.
Speaker 1 (17:35):
All right, Uh, maybe strap in for this next part,
because now we're getting into what might be the most
I don't know, shocking piece of this whole puzzle, a
story playing out right at the top of the AI world.
Speaker 2 (17:46):
Okay, right.
Speaker 1 (17:47):
The CEO of a major leading AI developer recently said
in an interview that his company's next-gen AI, GPT-5, actually scares him.
Speaker 2 (17:57):
Scared the guy who runs the company building it.
Speaker 1 (17:59):
Yeah, and he wasn't being vague, not talking abstract ethics.
He compared testing GPT-5 to the Manhattan Project.
Speaker 2 (18:08):
Wow, okay, that's not a subtle comparison developing nuclear.
Speaker 1 (18:11):
Weapons, right, that's the analogy he reached for.
Speaker 2 (18:14):
So what's truly profound here, I think is the source
of that feeling. This isn't some outside critic, you know,
not some doomsayer shouting from the sidelines.
Speaker 1 (18:22):
No, it's the person at the absolute top.
Speaker 2 (18:23):
Exactly the individual who built it, who's seen its capabilities
unfold firsthand, expressing deep concern about its path, its potential.
He talked about how fast GPT-5 feels, not just
response time, but how much it seems to understand, to reason,
maybe even anticipate.
Speaker 1 (18:39):
Like an intelligence jump.
Speaker 2 (18:40):
Yeah. He described just sitting there watching what it could
do and feeling deeply uneasy, almost a sense of like foreboding,
which it raises a huge question, right, Yeah, if the
builders are scared, what does that.
Speaker 1 (18:54):
Tell us about the power we're unleashing and can we
even control it?
Speaker 2 (18:57):
Exactly that Manhattan Project analogy. Yeah, it isn't just about power.
It's about a technology that changes everything with unforeseen consequences,
immense ethical weight.
Speaker 1 (19:07):
It is a sobering thought. If the person closest to
it feels that way, it really makes you stop and think.
And he didn't stop there, did he. He also criticized
AI oversight.
Speaker 2 (19:16):
He did. He took a very public shot at the
current state of it. He basically said there are no
adults in the room, meaning the systems, the regulations,
the monitoring bodies, frankly, even the understanding within governments. Yeah,
the structures that are supposed to be guiding and managing
AI development. They're just way behind, outpaced by the tech,
massively outpaced. The tech is moving too fast, evolving exponentially,
(19:40):
and the people in charge of figuring out how society
adapts they don't have the right tools or maybe even
the knowledge, certainly not the legislative frameworks to keep up.
Speaker 1 (19:49):
It's like trying to regulate supersonic jets with horse and
buggy laws.
Speaker 2 (19:53):
That's a good way to put it. This vacuum of
oversight is a critical challenge. We're seeing these rapid advances
like AI passing human tests showing this capacity for mimicry,
but the governance is lagging way behind.
Speaker 1 (20:05):
Which creates risks, huge risks.
Speaker 2 (20:07):
It means the tech is accelerating without adequate guardrails, which
could lead to exactly the kind of unintended consequences the
CEO seems worried about with his nuclear bomb comment. It's
a race between innovation and responsible deployment, and right now
innovation is way out ahead, and we haven't.
Speaker 1 (20:24):
Even really decided as a society what the boundary should.
Speaker 2 (20:28):
Be. Exactly. What are the ethical red lines? It's largely
left to the developers themselves right now.
Speaker 1 (20:32):
Okay, so if the stakes weren't high enough, let's talk
about the talent war in AI, because there's this truly
unprecedented story that says so much about what's really driving
this industry. Now it's not just about profit.
Speaker 2 (20:45):
Ah, the poaching story.
Speaker 1 (20:46):
Yeah. So, a very prominent tech CEO known for going
after huge goals aggressively, he's been trying to poach top
talent for his new superintelligence labs, a super ambitious
project aiming for AI way beyond what we have now.
Speaker 2 (21:02):
Right AGI are close to it, And.
Speaker 1 (21:05):
The offers he made they were just outrageous. He went
after this specific team at a place called Thinking Machines Lab,
very respected, cutting-edge group, okay. And the deals on
the table were honestly mind boggling, almost fictional money. We're
talking hundreds of millions of dollars, up to one billion
dollars for a single researcher.
Speaker 2 (21:23):
A billion for one person.
Speaker 1 (21:25):
Yes, and get this, not over their whole career, over
just a few years.
Speaker 2 (21:29):
It's astronomical, one of the biggest offers anyone's ever
heard of in tech, period, let alone AI.
Speaker 1 (21:34):
It's an almost irrational valuation of human intelligence, isn't it?
Speaker 2 (21:37):
It's extraordinary. It signals this almost existential importance placed on
getting the absolute leading minds in this specific critical field,
like the belief that the future of computing, maybe even humanity,
rests on these few individuals. It's a testament to how
valuable their intellectual capital is seen in this race.
Speaker 1 (21:57):
But here's where it gets really interesting.
Speaker 2 (22:00):
Okay.
Speaker 1 (22:01):
According to a detailed report, every single person on that
targeted team turned him down. Everyone, every single one. Some
were offered two hundred, five hundred million over four years.
One person reportedly got that full billion over just a few years.
Speaker 2 (22:20):
They all said no.
Speaker 1 (22:20):
They all said no. I mean, just pause and think
about that for a second. Turning down generational wealth, money
that could set up your family for centuries.
Speaker 2 (22:27):
Why? That's not just a polite refusal. That's a profound statement.
Speaker 1 (22:30):
Exactly. It's not just a flex, it's a massive statement.
Speaker 2 (22:33):
A rejection on that scale. Turning down that level of
financial incentive, it signals something fundamental has shifted. It means
for these top tier researchers, it's not just about the
money anymore.
Speaker 1 (22:43):
So what is it about.
Speaker 2 (22:44):
It suggests they're making decisions based on much deeper factors.
Maybe values alignment with their current organization's mission, or a
deep sense of trust in its ethics, its long term vision.
Speaker 1 (22:57):
Right, trust in the leadership.
Speaker 2 (22:59):
It speaks volumes about how they view the company making
the offers, doesn't it? And possibly how they view that
CEO's specific goals with this new superintelligence team. Maybe it's
a statement about the kind of AI they want to build,
or the environment they want to build it in, or
even the ultimate purpose AI should serve.
Speaker 1 (23:17):
It's like a new kind of currency in AI.
Speaker 2 (23:19):
Exactly, a currency of ethics, of trust, of alignment with
some greater purpose beyond just maximizing shareholder value or personal wealth.
Speaker 1 (23:28):
It is fascinating, isn't it. When you hear about sums
of money most of us can't even really comprehend being
turned down. It points to something way deeper than just
financial incentives. It suggests these individuals are deeply committed to
certain principles, maybe a specific vision for how AI should evolve,
and they're willing to forego massive personal gain to stick
(23:48):
to those principles. There's a moral calculus happening there, a
decision about which future they want to help create.
Speaker 2 (23:54):
So what does this mean for the future of talent,
of research in AI? It suggests the most valuable assets
in this field maybe can't be bought with money
alone anymore.
Speaker 1 (24:04):
Maybe they're won with a shared vision, a commitment to
doing it responsibly.
Speaker 2 (24:07):
It's a really powerful signal to the entire industry that
the why behind the work is becoming just as important,
maybe more important, than the what.
Speaker 1 (24:16):
Okay, So, while all these high stakes dramas and existential
fears are grabbing headlines, there's also been this quieter but
incredibly impactful revolution happening. New AI tools, new capabilities rolling
out almost daily.
Speaker 2 (24:30):
Yeah, under the radar sometimes, but changing things significantly.
Speaker 1 (24:33):
And these aren't just demos, they're practical things, often subtle
integrations that are actually changing how we work, how we create,
how we interact online. Let's start with something visual, huge
for creators: creative consistency.
Speaker 2 (24:47):
Ah, yes, the consistent character generation exactly.
Speaker 1 (24:51):
There's a new tool that lets you generate consistent characters
from just a single photo. This solves a problem artists
and creators have wrestled with forever.
Speaker 2 (24:59):
It's a truly significant leap for so many creative fields.
You upload a reference image, could be a sketch, a
photo, a virtual avatar, and then you can generate that
exact same character, different styles, different scenes, different poses, but
it stays consistent.
Speaker 1 (25:13):
Consistency is the key, right, Absolutely.
Speaker 2 (25:15):
It's not just another random face generator. It's built specifically
to ensure the face, the hairstyle, subtle expressions, even the
lighting stays locked across multiple outputs.
Speaker 1 (25:26):
Right, that's huge.
Speaker 2 (25:27):
Think about creators working on comics, or developing game avatars,
or just maintaining a consistent personal brand visually online. That
kind of stability, it's a game changer. It cuts out hours,
maybe days, of tedious manual adjustments trying to make characters
look the same frame after frame.
Speaker 1 (25:45):
I can totally see that like an indie game developer
suddenly having the visual cohesion of a big studio without
the massive team. It really democratizes that high fidelity visual storytelling.
Speaker 2 (25:58):
It really does. And it gives you more control too.
You can mask or unmask specific areas, clothing, neck, hair,
depending on what you want to keep or change. And
there's even a fiction mode if you want to push
it into more imaginative, less realistic styles.
Speaker 1 (26:11):
Cool, and how does it fit with other tools?
Speaker 2 (26:13):
It integrates really well. You can combine it with things
like magic fill or remix, those tools that let you
intelligently expand or alter parts of an image, so you
can place your consistent character into a totally new scene,
automatically adjust the lighting to match the new environment, and
control exactly how closely the output sticks to your original reference.
Speaker 1 (26:31):
And people can use this now.
Speaker 2 (26:32):
Yeah, it's available now in early access, right from the
platform's character tab, so artists are already getting their hands
on it, already changing how they work.
Speaker 1 (26:40):
Okay, shifting gears, how we browse the web. That's changing too, right.
Microsoft's doing something with browsers.
Speaker 2 (26:47):
They are. They've rolled out this copilot mode in their
Edge browser that really transforms it from just a passive
window into an active assistant. It goes way beyond just
having a chatbot in a sidebar. So how? Well, once
you enable it, this copilot can actually read across all
your open tabs and then give you instant summaries or
detailed comparisons between them.
Speaker 1 (27:08):
WHOA, Okay, that's useful.
Speaker 2 (27:10):
Incredibly useful. If you're doing deep research or trying to
compare product specs from different sites, or reviewing multiple sources,
you can do it without constantly jumping back and forth
between tabs. Imagine comparing, like, five different hotel reviews instantly,
or summarizing several news articles on the same topic.
Speaker 1 (27:27):
So if you're like me with way too many tabs
open when shopping online or trying to figure out what
competitors are doing, this thing can just highlight the differences,
surface the key insights from all those pages at.
Speaker 2 (27:40):
Once, exactly. It sounds like a dream for productivity, honestly.
Cuts down that mental load, and they've also streamlined the interface.
There's a unified input bar now that merges search, chat,
and navigation all into one.
Speaker 1 (27:53):
Place. So no more guessing where to type, right?
Speaker 2 (27:55):
Less clicking, less switching functions, it just handles it. Plus
voice navigation is working too. You can just talk to
the copilot, ask it to find stuff, compare pages, open
new tabs, makes browsing potentially hands free.
Speaker 1 (28:07):
It really feels like the browser is becoming an intelligent agent,
not just a passive window anymore.
Speaker 2 (28:12):
That's exactly the direction, and they're already testing even more
ambitious stuff, exploring letting the copilot complete actual real world
tasks for you, like booking a restaurant based on your
calendar and browsing history, maybe managing online subscriptions. They're also
working on grouping your browsing into topic journeys, so if
you're planning a trip, it automatically organizes all the related tabs, notes,
(28:35):
searches together, and it will offer smart suggestions based on
what you're doing, proactively helping you find things.
Speaker 1 (28:41):
So the browser becomes like a predictive partner, anticipating what
you need.
Speaker 2 (28:46):
That's the vision, a proactive partner in your digital life.
Speaker 1 (28:50):
Meanwhile, Google's not sitting still either, right, They're pushing search further.
Speaker 2 (28:54):
Definitely, a deeper push into multimodal search, big upgrades to their
AI Mode, which they call Gemini. You can now upload
images directly from your desktop, which was mostly.
Speaker 1 (29:04):
Mobile before, and PDFs.
Speaker 2 (29:06):
Too. PDF uploads are coming soon, yes, which is huge.
Imagine being able to ask questions about the content of
a PDF: lecture slides, a research paper, even a dense
legal document, and get answers drawn directly from it, but
combined with wider web context.
Speaker 1 (29:21):
No more endless copy pasting paragraphs into the search bar.
Speaker 2 (29:23):
Exactly, you interact directly with the document's content. It's a
new level of intelligent data extraction.
Speaker 1 (29:29):
And analysis. And what's this Canvas thing?
Speaker 2 (29:31):
Ah, Canvas is really interesting. It's a new planning tool.
It opens in a side panel in your search results,
and crucially, it stays active across sessions.
Speaker 1 (29:42):
It persists, so it remembers your project.
Speaker 2 (29:44):
Yes, whether you're building a complex travel itinerary, organizing tons
of research for a big paper, or just tracking a
multi step task, Canvas gives you this persistent workspace. Your
search evolves with your goals. You can drag info in,
add notes, refine queries over days or weeks, so.
Speaker 1 (30:01):
Search becomes less of a one off query and more
like an ongoing project assistant.
Speaker 2 (30:05):
Precisely a continuous, intelligent workspace that remembers your context and progress.
Speaker 1 (30:10):
And on mobile, things are getting even more futuristic.
Speaker 2 (30:12):
Yeah, Mobile search Live is rolling out. It lets you
point your phone's camera at something, an object, a scene,
and then talk with the AI mode in real time
about that live video.
Speaker 1 (30:21):
Feed. Whoa, based on Google's Project Astra, yes?
Speaker 2 (30:25):
Exactly, built on Astra tech, accessed via Google Lens. Imagine walking
through a store, pointing your camera at say a plant,
and asking what kind of plant is this and how
much water does it need? Or pointing it at a
weird error message on your router and asking how do
I fix this?
Speaker 1 (30:41):
That's incredible real time interaction with the physical world through AI.
Speaker 2 (30:45):
Search, and Chrome itself is getting smarter too. There's a
new ask Google about this page option right in the
address bar, plus deeper AI responses when you highlight text
or tap dive deeper. It's making the web much more interactive, responsive,
blurring the lines between screen and reality.
Speaker 1 (31:01):
Okay, let's shift down a level to the actual AI
models powering some of this stuff. There's a new one
making waves.
Speaker 2 (31:07):
Yes, Nvidia's Llama Nemotron Super version 1.5. It just
topped a major AI index leaderboard, the LMSYS Chatbot Arena.
Speaker 1 (31:15):
Okay, so it's powerful, but what makes it stand out?
Speaker 2 (31:18):
It's not just raw power, it's efficiency and versatility. It's
beating other leading open models in key areas: math, science, reasoning, coding,
general chat. But here's the really noteworthy part: its efficiency.
How efficient? It apparently runs effectively on a single
Nvidia H100 GPU, which is powerful, yes, but
still just a single piece of hardware you can buy.
(31:40):
Yet it's outperforming bigger models in both accuracy and speed
in many benchmarks.
Speaker 1 (31:45):
Okay, why is that efficiency such a big deal?
Speaker 2 (31:48):
It's massive for developers. It means you can deploy top
tier AI models on much more accessible hardware. It drastically
lowers the cost and complexity of putting advanced AI into applications,
into services, maybe even onto devices eventually. It's like getting
supercomputer level AI power in a more manageable package. It
democratizes access to the cutting.
Speaker 1 (32:08):
Edge that really could accelerate things. And how is it trained?
I heard something about synthetic data?
Speaker 2 (32:13):
Yeah, fascinatingly, it was trained using entirely synthetic data sets,
over twenty-six million high-quality examples generated by other
advanced AI models like Qwen 3 and DeepSeek.
Speaker 1 (32:23):
So AI is training AI now?
Speaker 2 (32:25):
Increasingly, yes. It speaks to this ability of AIs to generate
their own high quality training material, creating this kind of
self improving feedback loop for future development.
Speaker 1 (32:34):
And it went through fine-tuning after that.
Speaker 2 (32:35):
Oh yeah, several layers of post-training: supervised fine-tuning,
plus advanced reinforcement learning techniques, things like DPO and RLVR. You
hear these acronyms a lot now. What do those do?
Speaker 1 (32:47):
Basically?
Speaker 2 (32:48):
Essentially, they take the generally trained model and refine it
using human feedback or preferences. They make it much better
at following instructions accurately, understanding nuance, reasoning correctly, and just
generally sounding more helpful and less, well, robotic. They shape
its behavior and align it better with what humans actually
want it to do.
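For the curious, here is a minimal PyTorch sketch of the DPO objective that gets name-checked there, assuming you already have the summed log-probabilities of the chosen and rejected answers under the policy and a frozen reference model; the function and tensor names are illustrative, not any particular library's API.

```python
# A minimal sketch of the Direct Preference Optimization (DPO) loss, under the assumptions above.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp: torch.Tensor,
             policy_rejected_logp: torch.Tensor,
             ref_chosen_logp: torch.Tensor,
             ref_rejected_logp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # How much more the policy favors the chosen answer than the reference does...
    chosen_advantage = policy_chosen_logp - ref_chosen_logp
    # ...versus how much it favors the rejected answer.
    rejected_advantage = policy_rejected_logp - ref_rejected_logp
    # Widen the gap between the two, scaled by beta, via a logistic loss.
    margin = beta * (chosen_advantage - rejected_advantage)
    return -F.logsigmoid(margin).mean()

# Tiny fake batch of three preference pairs, just to show the call shape.
loss = dpo_loss(torch.tensor([-10.0, -8.0, -12.0]), torch.tensor([-11.0, -9.5, -12.5]),
                torch.tensor([-10.5, -8.2, -12.1]), torch.tensor([-10.8, -9.0, -12.4]))
print(loss)
```

In plain terms, the loss rewards the model for preferring the human-preferred answer more strongly than the frozen reference model does, which is the "align it with what humans actually want" idea in one line.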
Speaker 1 (33:06):
Gotcha. And is it easy to deploy?
Speaker 2 (33:10):
Getting easier. Soon developers will be able to deploy it using Nvidia's
NIM microservices with pretty minimal setup, which again lowers
the barrier to using these powerful models.
Speaker 1 (33:18):
Okay, one last area: creative software. Adobe's Photoshop is getting
AI magic too, big time.
Speaker 2 (33:25):
Adobe just rolled out a set of AI updates for
Photoshop that are serious time savers for creators. They fundamentally
changed some core workflows. The new Harmonize feature is really impressive.
You can insert an object into a scene, like add
a person to a landscape photo, or place a product
onto a background, and it automatically matches the lighting, the
color tone, the shadows so perfectly that it looks like
(33:48):
it was always.
Speaker 1 (33:48):
There, no more awkward, obviously pasted-in elements with weird
lighting, exactly.
Speaker 2 (33:54):
It tackles one of the trickiest, most time consuming parts
of compositing, making different elements look like they truly belong
together naturally. It lets artists focus more on the creative idea,
less on the tedious technical fixes.
Speaker 1 (34:08):
That alone sounds huge. What else?
Speaker 2 (34:10):
There's Generative Upscale. It can improve image resolution, making pictures
bigger, up to eight megapixels, but without making them look
artificially oversharpened or weird. Really helpful for printing large
images or rescuing older low-res files. Okay, useful. And
they've significantly upgraded the Remove tool using their latest Firefly
AI model, so removing unwanted objects from photos is now
(34:32):
much cleaner. Fewer weird, blurry edges or artifact patches, Complex
retouches become almost effortless. These are things that directly address
common pain points for designers and.
Speaker 1 (34:43):
Photographers, streamlining workflows that used to take.
Speaker 2 (34:45):
Ages, precisely. And finally, for collaboration, super relevant now with
remote teams, there's a new projects feature on desktop. It
organizes all your assets into shared workspaces, making it way
easier for teams to work together without losing track of different
versions or edits.
Speaker 1 (35:01):
Okay, so across the board, these aren't just small tweaks.
These are fundamental shifts in how we interact with tech,
how we learn, how we create. Absolutely. From making browsers smarter,
helping us learn patiently to giving artists this unprecedented control
and efficiency. AI is weaving itself into the very fabric
of our digital lives in ways we're really only just
beginning to fully appreciate, quietly reshaping our capabilities, our expectations
(35:26):
Speaker 2 (35:27):
You know, if we try and connect all these threads,
step back and look at the bigger picture, what really
stands out is just how multifaceted AI's impact is becoming.
Speaker 1 (35:36):
Yeah, it's not just one thing.
Speaker 2 (35:37):
No, it's not just about what one super smart model
can do in isolation. It's how all these advancements ripple
outwards through education, cybersecurity, creative work, our daily web browsing,
even the basic structure of the tech industry itself right,
redefining value, redefining talent. The speed of innovation is just astonishing,
(35:59):
and the implications are truly global but also deeply personal.
We're witnessing this technological evolution that's rapidly closing the gap
between what humans can do and what machines can do
in ways that honestly until very recently felt like pure
science fiction.
Speaker 1 (36:13):
So as AI keeps learning, keeps adapting, even passes these
human tests with this kind of uncanny precision, what new
responsibilities does that put on us as users, as observers,
just as people living through this incredible technological leap?
Speaker 2 (36:26):
That's the question, isn't it.
Speaker 1 (36:28):
Yeah? When AI can help us learn incredibly complex things
more efficiently than ever, but it can also navigate the
web with like human level stealth, what does that mean
for our own understanding of knowledge, of trust, even maybe
what it means to be human in a world that's
getting smarter artificially every single day.
Speaker 2 (36:47):
These aren't just theoretical questions anymore.
Speaker 1 (36:49):
No, they're becoming daily realities, things we have to grapple
with individually and together as a society. Well, thank you
for joining us on this deep dive into the fascinating
and yeah, sometimes unsettling world of AI. Keep exploring, keep
asking questions, and we'll catch you on the next one.