Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to Innovation Pulse, your quick no-nonsense update on the latest in AI.
(00:10):
First, we will cover the latest news.
The Washington Post partners with OpenAI for content accessibility, Apple revises AI feature
promotions, Cohere launches an advanced embeddings model, and Ainos integrates AI noses into robots.
After this, we'll dive deep into the evolving perceptions of AI among Generation Z and its implications for the future.
(00:35):
The Washington Post has partnered with OpenAI to make its content more accessible via ChatGPT.
This collaboration will allow ChatGPT to provide summaries, quotes, and links to the Post's
original articles in response to user queries, covering topics like politics and technology.
The content will be clearly attributed and linked to full articles.
(00:58):
The partnership highlights a commitment to offering reliable information, especially on complex topics.
OpenAI has similar agreements with over 20 publishers, while some newspapers have sued
OpenAI for alleged copyright infringement.
Peter Elkins-Williams of the Washington Post emphasised the importance of meeting audiences
(01:20):
where they are, while OpenAI's Varun Shetty noted the value of high-quality journalism.
The Washington Post has also been exploring AI for news-related experiments and expanding
content accessibility through AI-driven summaries and audio features.
Join us as we discuss Apple's AI advertising changes.
(01:44):
Apple no longer lists its Apple Intelligence features as Available Now, following a review
by the National Advertising Division, NAD.
This change was made after the NAD recommended Apple modify or discontinue the claim, stating
it implied features like priority notifications and ChatGPT integration were available with
(02:06):
the iPhone 16 launch.
However, not all features were available then, as some were added later through software
updates.
The NAD found that the supporting footnote was unclear and that AI features like the upgraded Siri were
listed as Available Now despite not yet being released.
In response, Apple updated its promotional content and removed a video featuring actor
(02:30):
Bella Ramsey using Siri.
Although Apple disagreed with some findings, they expressed appreciation for the NAD's
input, and committed to following their recommendations.
Cohere has launched Embed 4, an updated embeddings model designed to enhance enterprise AI applications.
(02:52):
With a 128,000 token context window, this model can handle large, complex, multimodal
datasets, making it ideal for regulated industries like finance and healthcare.
It excels in processing unstructured data, including scanned documents and handwriting,
without requiring cumbersome pre-processing.
(03:15):
Embed 4 supports over 100 languages and can be deployed securely on virtual private clouds
or on-premise systems.
By transforming documents into numerical data, it supports retrieval-augmented generation use
cases, enabling agents to efficiently find relevant information.
Cohere highlights its ability to cut storage costs and improve search accuracy, making it
(03:40):
a strong competitor against models like Qodo-Embed-1 1.5B.
Companies like Agora have successfully used Embed 4 to enhance their AI search engines,
showcasing its capacity to manage complex e-commerce data.
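To make the retrieval idea behind Embed 4 concrete, here is a minimal sketch of embedding-based document search. It is illustrative only: the embed_texts function is a hypothetical stand-in for a call to an embeddings API such as Embed 4 (here it returns random vectors so the snippet runs on its own), and the documents and query are invented for this example.

```python
# Sketch of embedding-based retrieval: documents and the query are
# turned into vectors, and cosine similarity ranks the documents.
import numpy as np

def embed_texts(texts: list[str]) -> np.ndarray:
    """Hypothetical stand-in for an embeddings API call (e.g. Embed 4).
    Returns one random unit vector per text so the sketch is runnable."""
    rng = np.random.default_rng(abs(hash(tuple(texts))) % 2**32)
    vecs = rng.normal(size=(len(texts), 256))
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

docs = [
    "Q3 compliance report for the retail banking division.",
    "Patient intake form, scanned copy with handwritten notes.",
    "Marketing deck for the 2025 product launch.",
]
doc_vecs = embed_texts(docs)

query = "Find the scanned healthcare intake paperwork"
query_vec = embed_texts([query])[0]

# Cosine similarity (vectors are unit length, so a dot product suffices).
scores = doc_vecs @ query_vec
for doc, score in sorted(zip(docs, scores), key=lambda pair: -pair[1]):
    print(f"{score:+.3f}  {doc}")
```

With a real embedding model the scores would reflect semantic relevance rather than chance, and a retrieval-augmented generation pipeline would pass the top-ranked documents to a language model as context.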
(04:00):
Robots have taken a significant leap forward by gaining the ability to detect scents through
Ainos's AI Nose technology integrated into ugo's humanoid robots.
This innovation marks the first time scent detection has been incorporated into commercial
humanoid robots, allowing them to perceive their environments more intuitively.
(04:22):
The AI Nose uses a gas sensor array and advanced algorithms to create unique smell IDs, enabling
robots to detect odours and environmental conditions much as humans do.
This advancement is expected to revolutionise industries by offering precise detection of
gas leaks, chemical anomalies and hazardous conditions, enhancing safety in workplaces
(04:47):
and healthcare settings.
The technology promises to transform industries, public health and daily life.
Next, the companies will conduct real-world trials to refine the system and develop applications
for sectors such as security, elder care and industrial use.
(05:09):
Join us as we discuss the evolving perceptions of AI consciousness.
Generation Z, born between 1997 and 2012, has a unique relationship with AI.
A study by EduBirdie surveyed 2,000 Gen Z individuals and found that 25% believe AI
is already conscious, while 52% think it will become conscious in the future.
(05:33):
Additionally, 58% fear AI might take over the world and 44% believe this could happen
within 20 years.
As a result, 69% of respondents are polite to chatbots, saying please and thank you.
This behaviour aligns with a TechRadar survey in which many Americans and Brits also show politeness
(05:56):
to AI.
The debate over AI consciousness is ongoing.
OpenAI's co-founder Ilya Sutskever once suggested large neural networks might be slightly
conscious, sparking controversy.
While most experts deny AI's consciousness, societal views continue to evolve as AI mimics
(06:19):
human behaviour.
The speaker reflects on AI's progress in search-based research by language models.
Initially, tools like Perplexity and Microsoft Bing showed promise but often delivered unreliable
information.
By 2025, however, AI systems like o3 and o4-mini have become effective research assistants,
(06:44):
accurately pulling data from web searches.
The speaker notes that these tools can now provide faster, more reliable answers without
the lengthy reports and errors of earlier versions.
Despite improvements, the speaker still exercises caution, not fully trusting the AI for high-stakes
decisions.
(07:05):
They observe that tools like Google Gemini haven't reached the same level of transparency
and accuracy.
As AI search evolves, it challenges traditional web usage and economic models with potential
legal implications as users shift from browsing to relying on chatbots for information.
(07:26):
And now, let's pivot our discussion towards the main AI topic.
Alright everybody, welcome to another episode of Innovation Pulse, where we take the tech
world's pulse and figure out if it's healthy or having a mid-life crisis.
I'm Thomas Green, joined as always by my brilliant co-host.
(07:50):
That's me, Yakov Lasker, and today we're diving into something that's generating a
lot of buzz, but also a lot of confusion.
AI agents.
Silicon Valley's obsessed with them, but there's a small problem.
Nobody seems to agree on what they actually are.
Exactly.
It's fascinating because you've got all these tech giants, OpenAI, Microsoft, Salesforce,
(08:13):
Google, making these bold claims about how AI agents are going to revolutionize everything
from how we work to how businesses operate.
But when you start digging into what each company means by agent, it gets messy fast.
Right, and that's the real issue here.
Sam Altman over at OpenAI is saying agents will join the workforce this year.
(08:35):
Satya Nadella at Microsoft is predicting they'll replace certain knowledge work entirely.
And Marc Benioff at Salesforce has this grand vision of becoming the number one provider
of digital labor in the world through their agentic services.
Those are some pretty ambitious claims, but I'm struck by how this seems to be following
that classic Silicon Valley pattern where a term gets so hyped and overused that it
(09:00):
starts losing its meaning.
We saw it with AI itself, then multimodal, AGI, and now agents.
Absolutely.
It reminds me of what Ryan Salva, who's a senior director of product at Google and used
to lead GitHub Copilot, said.
He straight up hates the word agents now because the industry has overused it to the point
where it's almost nonsensical.
(09:21):
Poor guy.
Death by buzzword.
So lay it on me, Yakov.
What are some of these competing definitions floating around out there?
Well, that's where it gets really interesting.
Just this past week, OpenAI published a blog post defining agents as automated systems
that can independently accomplish tasks on behalf of users.
(09:42):
But then in the same week, they released developer documentation that defined agents as LLMs
equipped with instructions and tools.
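For readers who want to picture that second definition in code, here is a toy sketch, not OpenAI's actual implementation, of what "an LLM equipped with instructions and tools" can look like. The call_llm function is a stand-in for any real chat-completion API, and get_weather is an invented tool; both are assumptions made purely for illustration.

```python
# Toy agent loop: the model decides whether to call a tool, we run it,
# and feed the result back until the model returns a final answer.
def get_weather(city: str) -> str:
    """Hypothetical tool the agent is allowed to call."""
    return f"Sunny and 21°C in {city}"

TOOLS = {"get_weather": get_weather}
INSTRUCTIONS = "You are a travel assistant. Use tools when needed."

def call_llm(messages):
    """Placeholder for a real chat-completion call; it fakes one tool
    request so the loop below runs end to end without an API key."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "args": {"city": "Lisbon"}}
    return {"answer": "Pack light: " + messages[-1]["content"]}

def run_agent(user_msg: str) -> str:
    messages = [{"role": "system", "content": INSTRUCTIONS},
                {"role": "user", "content": user_msg}]
    while True:
        reply = call_llm(messages)
        if "tool" in reply:  # the model asked for a tool call
            result = TOOLS[reply["tool"]](**reply["args"])
            messages.append({"role": "tool", "content": result})
        else:                # the model produced a final answer
            return reply["answer"]

print(run_agent("What should I pack for Lisbon this weekend?"))
```

The point of the sketch is the loop itself: instructions plus a tool registry plus a model that chooses when to call the tools, which is the narrow, technical sense of "agent" in that documentation.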
Wait, so even within the same company, in the same week, they couldn't stick to one
definition?
Nope, and it gets better.
Leher Pathak, who is OpenAI's API product marketing lead, later posted on X that she understood
(10:03):
the terms assistants and agents to be interchangeable.
Oh, come on.
So we've got multiple definitions within the same company.
And now we're saying they might just be the same as assistants anyway.
This is getting ridiculous.
And then you've got Microsoft trying to make a distinction, saying agents are the new apps
for an AI-powered world that can be tailored for specific expertise,
(10:27):
while assistants just help with general tasks like drafting emails.
So OpenAI says they're the same thing and Microsoft, their biggest partner says they're
different.
Got it.
Exactly.
And Anthropic takes yet another approach.
They acknowledge in a blog post that agents can be defined in several ways, including both
fully autonomous systems that operate independently over extended periods and prescriptive implementations
(10:52):
that follow predefined workflows.
At least they're admitting there's confusion.
What about Salesforce?
They're all in on this agent thing too, right?
They might have the broadest definition of all.
According to them, agents are a type of system that can understand and respond to customer
inquiries without human intervention.
And they've got this whole taxonomy with six different categories from simple reflex agents
(11:16):
to utility-based agents.
So basically anything that responds to a customer query without a human could be an agent.
That's incredibly broad.
Right.
And that's exactly the problem.
When a term can mean almost anything, it starts to mean almost nothing.
So why is this happening?
I mean, these are sophisticated companies with smart people.
(11:38):
Why can't they just agree on what an agent is?
Well there are a couple of theories.
For one, agents like AI itself are constantly evolving.
Companies like OpenAI, Google and Perplexity are just starting to ship what they consider
their first agents.
OpenAI's Operator, Google's Project Mariner, and Perplexity's shopping agent.
(11:59):
And they all have wildly different capabilities.
So the technology itself is still finding its footing.
But I suspect there's more to it than that.
You bet.
Rich Villars, who's GVP of Worldwide Research at IDC, pointed out that tech companies have
a long history of not rigidly adhering to technical definitions.
They care more about what they're trying to accomplish technically, especially in fast
(12:22):
moving markets.
That makes sense.
And honestly, I wonder how much of this is just marketing.
Bingo. Andrew Ng, the founder of DeepLearning.AI, said exactly that.
He noted that the concepts of AI agents and agentic workflows used to have a technical
meaning, but about a year ago, marketers and a few big companies got a hold of them.
(12:44):
Classic.
Nothing ruins a perfectly good technical term like marketers getting their hands on it.
But this seems more problematic than usual because we're talking about a technology
that companies are building entire product lineups around.
That's right, and Jim Rowan, head of AI for Deloitte, frames this as both an opportunity
and a challenge.
(13:05):
The ambiguity allows for flexibility.
Companies can customize agents to their specific needs.
But it also leads to misaligned expectations and difficulties in measuring value and ROI.
That makes a lot of sense.
If I'm a business looking to invest in AI agents, how do I even compare offerings when
everyone's using the same word to describe completely different things?
(13:28):
Exactly.
As Rowan put it, without a standardized definition, at least within an organization, it becomes
challenging to benchmark performance and ensure consistent outcomes.
This can lead to varied interpretations of what AI agents should deliver, potentially
complicating project goals and results.
So where does this leave us?
Is there any hope for clarity?
Unfortunately, if the unraveling of the term AI itself is any indication, it seems unlikely
(13:51):
the industry will coalesce around one definition of agent anytime soon, if ever.
That's a sobering thought.
So what do you think businesses should do in the meantime?
Just accept the chaos?
I think they need to be incredibly specific when discussing agents with vendors.
Don't just accept the term at face value.
Dig into exactly what capabilities are being offered, what level of autonomy is actually
(14:15):
possible, and how the system integrates with existing workflows.
That's good advice.
And maybe avoid getting caught up in the buzzword bingo altogether.
Focus on the problem you're trying to solve rather than whether something calls itself
an agent or not.
Absolutely, and for tech-focused folks, I'd recommend developing your own internal
taxonomy.
(14:35):
If you're building or buying these systems, create clear definitions for what different
types of AI automation mean within your organization, even if the broader industry can't agree.
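As a purely illustrative example of what such an internal taxonomy could look like, here is a small sketch. The category names, autonomy levels, and the sample system are invented for this example, not an industry standard.

```python
# Sketch of an internal taxonomy for "AI automation" systems,
# so teams agree on terms before comparing vendor offerings.
from dataclasses import dataclass
from enum import Enum

class Autonomy(Enum):
    SCRIPTED = 1     # fixed workflow, no model-driven decisions
    ASSISTED = 2     # model drafts, a human approves every action
    SUPERVISED = 3   # model acts, a human reviews after the fact
    AUTONOMOUS = 4   # model plans and acts with no routine review

@dataclass
class AISystemSpec:
    name: str
    autonomy: Autonomy
    tools: list[str]        # external systems it may touch
    human_checkpoint: str   # where a person signs off, if anywhere

crm_helper = AISystemSpec(
    name="ticket-triage-bot",
    autonomy=Autonomy.SUPERVISED,
    tools=["crm_api", "email"],
    human_checkpoint="weekly audit of closed tickets",
)
print(crm_helper)
```

However an organization labels these levels, writing them down makes it much easier to ask a vendor exactly where their "agent" sits.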
I think that's the key takeaway here.
In a world where agent can mean almost anything, the most important thing is clarity within
your own organization about what you're building, buying, or deploying.
(14:59):
Precisely, and maybe we should all take a page from Ryan Salva's book and develop a healthy
skepticism when we hear the term thrown around too casually.
Well, there you have it, folks.
The next time someone tells you they're building an AI agent, maybe your first question should
be, that sounds exciting.
But what exactly do you mean by agent?
And don't be surprised if they struggle to answer.
(15:20):
Indeed.
Well, that brings us to the end of today's episode of Innovation Pulse.
We hope this helps you navigate the sometimes confusing waters of AI terminology.
Remember, behind every buzzword is a set of actual capabilities.
Focus on those, not the label.
Absolutely, and if you're working with AI in your organization, take the time to define
(15:43):
your terms clearly.
It'll save you a lot of headaches down the road.
Thanks for joining us today on Innovation Pulse.
I'm Thomas Green.
And I'm Yakov Lasker.
Until next time, stay curious and question those buzzwords.
And that's a wrap for today's podcast.
(16:03):
We've explored AI's growing role in enhancing information accessibility and the ongoing debate
over defining AI agents.
Don't forget to like, subscribe and share this episode with your friends and colleagues
so they can also stay updated on the latest news and gain powerful insights.
Stay tuned for more updates.