Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
OK, let's unpack this.
We often talk about AI, you know, in really broad strokes, but it's rare we get a chance to actually look behind the curtain and see what's really changing, how businesses work, like at the ground level.
Speaker 2 (00:11):
Right, there's so
much noise out there.
Speaker 1 (00:12):
Exactly. The pace of AI change.
It feels like this tidal wave of information, doesn't it?
And it's genuinely hard sometimes to tell what's truly revolutionary versus what's just, well, hype.
Speaker 2 (00:26):
Separating the signal
from the noise.
Speaker 1 (00:28):
Precisely so.
Today we're diving into something that honestly looks like it could cut right through that noise, something that could deliver not just those aha moments but a proper understanding of a really groundbreaking shift.
Speaker 2 (00:40):
Okay, I'm intrigued.
Speaker 1 (00:41):
Yeah, think of this as a shortcut, a way for you to get properly informed on a topic that seems set to really redefine how enterprises operate, especially around efficiency.
Sounds important.
Speaker 2 (00:51):
What are we focusing
on?
Speaker 1 (00:53):
So our mission for
this deep dive is Emergence AI's
new platform.
It's called CRAFT, and the really big idea behind it is this concept of agents creating agents.
Speaker 2 (01:02):
Agents creating agents?
Speaker 1 (01:03):
Okay, and we're not
just framing this, as you know,
another cool AI tool.
It's positioned as a direct,really powerful solution to one
of the biggest, most persistentand, frankly, most expensive
problems in enterprise IT.
Speaker 2 (01:16):
Which is?
Speaker 1 (01:16):
The whole mess of outdated, manual data pipelines.
Seriously, this isn't some small niche issue.
It's a massive global challenge.
The sources we looked at estimate it costs businesses over $200 billion annually.
Speaker 2 (01:30):
Wow, that's a
staggering number, just on
managing data flows.
Speaker 1 (01:34):
Yep, and CRAFT comes in, promising immediate, real benefits.
Think dramatically faster insights from your data, getting rid of all that tedious busywork involved in data prep and, of course, massive cost savings that hit the bottom line.
Speaker 2 (01:48):
And how accessible is
it?
That's often the catch with powerful tech.
Speaker 1 (01:51):
Well, this is the kicker: it's all accessible with natural language, plain English.
You don't need a deep technical background, apparently, to tap into these really sophisticated AI capabilities.
Speaker 2 (02:01):
Potentially huge.
Speaker 1 (02:02):
Absolutely.
Speaker 2 (02:02):
Democratizing that
kind of power.
Speaker 1 (02:06):
Exactly, and one thing that really jumped out from the materials is that this isn't just about making things a bit faster.
Speaker 2 (02:10):
No.
Speaker 1 (02:10):
No, it's pitched as a
fundamental paradigm shift.
Speaker 2 (02:13):
Yeah.
Speaker 1 (02:13):
How companies manage
data, analyze it, get strategic
value from it.
It's moving away from that fragmented, often manual, reactive way of doing things.
Speaker 2 (02:24):
Towards something
autonomous, intelligent,
proactive.
Speaker 1 (02:28):
You got it.
That's the promise anyway.
Interesting.
Speaker 2 (02:31):
So where are we
getting this information from?
Speaker 1 (02:33):
Good question.
So for this deep dive, we've pulled insights from a pretty detailed Ask Me Anything session, an AMA, with Emergence AI's co-founders and some of their key scientists and engineers.
Speaker 2 (02:45):
Straight from the
source.
Speaker 1 (02:46):
Right, and we've cross-referenced that with some good reporting from VentureBeat, plus information directly from the Emergence AI website.
We've basically sifted through it all to pull out the really important nuggets for you.
Speaker 2 (02:56):
Great.
So the goal is to get everyone up to speed quickly on the CRAFT platform and the whole agents-creating-agents idea.
Speaker 1 (03:04):
Exactly, so let's dive in.
Let's start with that multi-billion-dollar problem CRAFT is aiming to fix, these data pipelines.
What's the actual issue?
Speaker 2 (03:16):
Okay, so think about
a large company.
Data isn't just in one neat place, right? It's scattered everywhere.
Speaker 1 (03:21):
Right: legacy systems, cloud apps, spreadsheets.
Speaker 2 (03:25):
Exactly. Databases here, data warehouses there, maybe some sensor data over here, customer data in a CRM.
It's fragmented, siloed; each piece is kind of stuck in its own corner.
Speaker 1 (03:37):
And getting it all
together for analysis is the
challenge.
Speaker 2 (03:40):
It's a massive
challenge.
The traditional way involves these complex, often manual and incredibly time-consuming processes just to bring that data together.
Think of it like plumbing, but really old, leaky, complicated plumbing.
Speaker 1 (03:52):
And you said manual.
Like people are actually doing this bit by bit.
Speaker 2 (03:55):
Often, yes. People are writing custom scripts for every little connection, which constantly break or need updating when data formats change.
It's not just inconvenient, it's a huge drain on resources: time, money, people.
Speaker 1 (04:06):
And that's where the
$200 billion figure comes in.
It sounds almost unbelievable.
Speaker 2 (04:10):
It does, but when you break it down it makes sense.
First, there's the sheer human effort: teams of people just wrangling data, moving it, cleaning it, trying to make sense of it.
Second, think about lost opportunities.
If it takes days or weeks to get the data ready for analysis, decisions get delayed.
By the time you have the insight, the market might have shifted or the opportunity is gone.
Speaker 1 (04:32):
Right, the data is
stale.
Speaker 2 (04:34):
Exactly.
And third, manual processes are error-prone.
Mistakes creep in when you're copying, pasting, transforming data by hand or with brittle scripts.
Flawed data leads to flawed analysis, which leads to bad strategies.
Speaker 1 (04:47):
So it's not just cost, it's risk and missed value too.
Speaker 2 (04:49):
Precisely, and it disproportionately affects the people you'd think would be doing the high-value work: the data scientists.
How so?
Well, the sources we looked at, including insights from Emergence AI's team, say data scientists can spend up to 90 percent of their time on this preparatory stuff.
Ninety percent.
That's almost their entire job: data wrangling, data cleaning, data migration.
They're dealing with incompatible formats, missing information, weird naming conventions, constant changes to how the data is structured.
Speaker 1 (05:19):
So they're experts in
analysis, but they're stuck
being data janitors.
Speaker 2 (05:23):
That's a blunt way to put it, but yeah. Often their skills aren't being used for deep analysis or building those cool predictive models.
Most of the day is just getting the data usable, which really begs the question: what if you could just flip that?
Speaker 1 (05:37):
Free them up to do
the actual science part.
Speaker 2 (05:39):
Exactly.
Let them focus on strategy, on insight, not on the plumbing.
Speaker 1 (05:43):
OK, so that sets the stage perfectly for CRAFT.
How does it propose to flip that script?
Speaker 2 (05:48):
Right.
So CRAFT, and the acronym stands for Create, Remember, Assemble, Fine-tune and Trust, is Emergence AI's first big push into this space.
It's a data-focused AI platform, and its core function sounds simple, but the implications are huge.
It lets anyone, and they really stress anyone, from analysts right up to executives, turn their business goals directly into these smart, self-verifying multi-agent systems.
Speaker 1 (06:15):
Okay, smart, self-verifying multi-agent systems.
Let's break that down.
Smart implies intelligence, multi-agent suggests multiple AIs working together, but self-verifying? What does that mean?
Speaker 2 (06:30):
That's a really crucial part.
It means the system doesn't just blindly execute a task.
It has built-in checks to validate its own work.
Speaker 1 (06:38):
How does that work in
practice?
Speaker 2 (06:39):
Imagine you ask it to pull data for a sales report.
A simple script might just grab the numbers.
A self-verifying agent, though, might automatically run consistency checks.
Does this number align with last quarter's trend?
Does it match data from the finance system?
It might cross-reference things, flag anomalies it finds.
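For readers who want to make that concrete, the kind of consistency check described above might look something like this minimal Python sketch. The function name, fields and thresholds here are all illustrative assumptions, not Emergence AI's actual API:

```python
# Hypothetical sketch of a self-verifying data pull. The data source is an
# in-memory dict standing in for real database and finance-system queries.

def fetch_quarterly_sales(source: dict, prior_quarter: float) -> float:
    """Fetch a sales figure and run consistency checks before returning it."""
    value = source["sales"]  # stand-in for a real database query

    anomalies = []
    # Check 1: does the number roughly align with last quarter's trend?
    if prior_quarter > 0 and abs(value - prior_quarter) / prior_quarter > 0.5:
        anomalies.append("deviates >50% from prior quarter")
    # Check 2: does it match the finance system's independent record?
    if abs(value - source["finance_system_sales"]) > 0.01:
        anomalies.append("mismatch with finance system")

    if anomalies:
        # A self-verifying agent flags the result instead of silently passing it on.
        raise ValueError(f"verification failed: {anomalies}")
    return value
```

The point is simply that verification runs before the result reaches you; a real agent would presumably cross-reference far richer signals.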
Speaker 1 (06:57):
Ah, so it's trying to
catch errors before they get to
me.
Speaker 2 (07:00):
Exactly.
Instead of just handing you potentially flawed data, it actively tries to ensure the output is reliable and trustworthy, which, you know, in an enterprise setting is absolutely critical.
You can't base million-dollar decisions on dodgy data.
Speaker 1 (07:13):
Absolutely. And the interaction model, you said plain English?
Speaker 2 (07:15):
Yeah, that's the other revolutionary part.
You interact with it using plain old English.
Just describe your goal, what you need, and apparently you get results back in minutes. Not days, not weeks.
Speaker 1 (07:27):
Minutes. So no complex coding, no query languages needed?
Speaker 2 (07:32):
That's the claim.
No deep technical background required.
It's about democratizing access to this kind of powerful automation.
Imagine, like you said, collapsing weeks of work into minutes.
Yeah, it's a massive shift, isn't it?
It really is.
It's like having this super-efficient data team just on call, ready instantly.
Speaker 1 (07:51):
Yeah, and it's not
just a concept.
It has concrete capabilities right out of the box.
Speaker 2 (07:56):
Like what can it
actually do from day one?
Speaker 1 (07:58):
Well, it can talk to data in your databases and data warehouses and understands different structures, different schemas.
SQL, NoSQL, data lakes, you name it.
It can retrieve and process that information.
Speaker 2 (08:08):
Okay, so it can fetch
the data.
Speaker 1 (08:09):
But more than that,
it can write code.
For instance, it can automate building those complex ETL
pipelines.
Extract, transform, load.
Speaker 2 (08:18):
Ah, the plumbing
itself.
Exactly. Those processes are notoriously fiddly and error-prone for humans: moving data, changing its format, loading it somewhere else.
CRAFT can automate large parts of that. Data ingestion, cleansing, transformation, a lot of that manual drudgery.
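For the curious, a toy extract-transform-load step like the ones described above might look like this in Python. The source rows and target structure are invented purely for illustration:

```python
# Hypothetical miniature of an extract-transform-load (ETL) pipeline step.

def extract(raw_rows):
    """Pull rows from a source system (here, an in-memory stand-in)."""
    return list(raw_rows)

def transform(rows):
    """Cleanse and reshape: drop incomplete rows, normalize field names."""
    cleaned = []
    for row in rows:
        if row.get("amount") is None:
            continue  # cleansing: skip records missing required fields
        cleaned.append({"customer": row["cust"].strip().title(),
                        "amount_usd": float(row["amount"])})
    return cleaned

def load(rows, warehouse):
    """Append the transformed rows into the target store."""
    warehouse.extend(rows)
    return warehouse

warehouse = load(transform(extract([
    {"cust": " alice ", "amount": "42.50"},
    {"cust": "bob", "amount": None},  # dropped during cleansing
])), [])
```

In practice it's exactly this kind of brittle glue code, multiplied across hundreds of connections, that the platform claims to generate and maintain automatically.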
Speaker 1 (08:36):
Which connects back
to that 90% figure for data
scientists.
Speaker 2 (08:39):
Precisely, and the co-founders are quite explicit: tasks that used to take days, weeks and months are now potentially doable in mere minutes.
Speaker 1 (08:47):
It's not just an incremental improvement, then; it's a whole different order of magnitude.
Speaker 2 (08:51):
Right.
It feels like it elevates AI to manage higher levels of abstraction.
It solves bigger chunks of the problem that people used to handle manually, and at a speed and scale that was just unthinkable before.
Speaker 1 (09:02):
Shifting the
bottleneck from human time to
compute time.
Speaker 2 (09:05):
Yeah, that's a good
way to think about it.
Speaker 1 (09:11):
Okay, so CRAFT is powerful, but you mentioned the really core innovation is this idea of agents creating agents.
That sounds kind of sci-fi.
What does it actually mean?
Speaker 2 (09:16):
Yeah, it does sound a
bit out there at first, but
before we get to the creating-agents part, let's make sure we're clear on what an AI agent is in this context.
Speaker 1 (09:25):
Good idea, lay the
groundwork.
Speaker 2 (09:26):
So traditional
software, as we sort of touched
on, is mostly static.
You give it commands, it follows them: step A, step B, step...
Speaker 1 (09:32):
C.
That's right, it executes a script.
Speaker 2 (09:33):
Exactly.
Agents are different.
They're designed to chase outcomes.
They adapt.
Ashish, one of the scientists at Emergence AI, gave a really neat definition.
He said an agent is an entity that can sense, reason, plan and then can act.
Speaker 1 (09:48):
Sense, reason, plan, act.
Speaker 2 (09:51):
Sense means observing its environment, maybe reading data from a database, looking at a web page, processing a document.
Reason means interpreting that information, understanding it in the context of its goal.
Plan means figuring out a strategy, a sequence of steps to achieve the goal.
And act, well, doing those steps.
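To make that sense-reason-plan-act loop concrete, here is a minimal, hypothetical Python sketch. The class and method names are invented for illustration; this is not CRAFT's actual implementation:

```python
# Illustrative sketch of the sense-reason-plan-act loop described above.

class Agent:
    def __init__(self, goal):
        self.goal = goal
        self.memory = []  # acquired knowledge persists across runs

    def sense(self, environment):
        """Observe the environment (e.g. read a database, parse a document)."""
        return environment.get("observation")

    def reason(self, observation):
        """Interpret the observation in the context of the goal."""
        return {"goal": self.goal, "seen": observation}

    def plan(self, context):
        """Derive a sequence of steps toward the goal."""
        return [f"handle {context['seen']}"]

    def act(self, steps, environment):
        """Execute the plan; acting changes the world *and* the agent itself."""
        for step in steps:
            environment["log"] = environment.get("log", []) + [step]
            self.memory.append(step)  # the agent updates its own state
        return environment

    def run(self, environment):
        obs = self.sense(environment)
        ctx = self.reason(obs)
        return self.act(self.plan(ctx), environment)
```

Note how acting updates both the environment and the agent's own memory, which is the self-modification point made just below.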
Speaker 1 (10:10):
So it's not just
following a pre-written recipe,
it's figuring out the recipe and cooking the meal.
Speaker 2 (10:15):
That's a great
analogy.
Yes, and importantly, when it acts, it can change the state of the world, like writing code, updating a database, sending a notification.
But, just as importantly, it can change its own state by acquiring knowledge.
Speaker 1 (10:26):
Meaning it learns.
Speaker 2 (10:27):
Exactly.
It learns from what worked, what didn't.
It updates its internal understanding, its strategies.
Speaker 1 (10:35):
That self-modification is key.
Okay, and Emergence AI says these agentic systems are good at tasks needing multiple reasoning steps and using multiple tools.
Speaker 2 (10:42):
Right, think about a complex request, maybe analyzing sales data.
An agent might need to first query the sales database (tool one), then use a statistical analysis library (tool two) on the results, then generate a chart using a visualization tool (tool three), and finally write a summary report (tool four).
An agent coordinates that whole workflow intelligently.
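That four-tool workflow can be sketched in a few lines of Python. The tool functions below are stand-ins invented for illustration; a real agent would call an actual database, stats library, charting tool and report generator:

```python
# Illustrative sketch of an agent sequencing four tools for sales analysis.

from statistics import mean

def query_sales_db():            # tool one: fetch raw data
    return [120, 135, 150, 160]

def analyze(rows):               # tool two: statistical analysis
    return {"mean": mean(rows), "growth": rows[-1] - rows[0]}

def make_chart(rows):            # tool three: visualization (stubbed as ASCII)
    return " ".join("#" * (r // 40) for r in rows)

def write_report(stats, chart):  # tool four: summary report
    return f"Mean sales {stats['mean']:.1f}, growth {stats['growth']}.\n{chart}"

def run_workflow():
    """The agent's contribution is sequencing the tools and passing results along."""
    rows = query_sales_db()
    stats = analyze(rows)
    chart = make_chart(rows)
    return write_report(stats, chart)
```

The intelligence in the real system lies in choosing and ordering those tools dynamically rather than having the sequence hard-coded as it is here.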
Speaker 1 (11:04):
Got it.
So they're like smart, autonomous project managers for specific tasks.
Speaker 2 (11:09):
Yeah, that's a fair way to think about it.
Speaker 1 (11:11):
Okay, now: agents creating agents.
How does that work?
This is where it gets really interesting for me.
Speaker 2 (11:16):
Right.
So this is the ACA, agents creating agents, breakthrough.
The idea is, when you give CRAFT a task, you, the user, don't need to know if there's already a perfect, pre-built agent for it.
You don't need to figure out how to chain different agents together.
Speaker 1 (11:30):
The system handles
that complexity.
Speaker 2 (11:32):
Exactly.
There's this orchestrator, which they describe as a kind of meta-agent, an agent overseeing other agents, and this orchestrator figures it all out.
It dynamically creates new agents on the fly if it needs to.
Speaker 1 (11:45):
Creating them from scratch?
Speaker 2 (11:47):
Well, it might be more accurate to say it composes them. It assesses the task, breaks it down.
Then it looks at its library of existing agents, the big, powerful ones.
But if there's a specific gap, a very niche requirement for this particular task, it builds something custom for that gap.
It composes existing agents and new agents to accomplish your task.
It might generate a small piece of code, a specific function, essentially a mini-agent, just for that one step.
Speaker 1 (12:14):
Okay, that makes more sense than building whole complex agents from zero every time.
Speaker 2 (12:18):
Right. Viv, who works on agent engineering there, used a really helpful metaphor.
He talks about existing agents as big rocks.
These are your general-purpose powerhouses.
Maybe a web automation agent, a complex data analysis agent.
Speaker 1 (12:31):
Okay, the standard
toolkit.
Speaker 2 (12:32):
Yeah, but the
dynamically created agents,
those are the little rocks.
They fill in the spaces for these really niche problems.
Speaker 1 (12:40):
Ah, like packing sand around the big rocks to make a solid structure.
Speaker 2 (12:43):
Exactly.
So say your big data agent needs one specific piece of info from some weird internal report format it's never seen.
The orchestrator might spin up a temporary little-rock agent just to parse that specific report format for that one piece of data.
Then it plugs that result back into the main workflow.
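The big-rocks-and-little-rocks composition can be sketched like this. The orchestrator, registry and report parser below are hypothetical stand-ins, not Emergence AI's actual components:

```python
# Illustrative sketch of composing a pre-built "big rock" agent with a
# dynamically created "little rock" helper.

BIG_ROCKS = {
    # A pre-built, general-purpose agent, stubbed as a simple function.
    "analyze_data": lambda payload: f"analysis({payload})",
}

def build_little_rock(task_description):
    """Generate a narrow, single-purpose helper for a gap in the toolkit."""
    def parse_custom_report(text):
        # e.g. pull one field out of an unfamiliar internal report format
        return text.split(":", 1)[1].strip()
    return parse_custom_report

def orchestrate(report_text):
    """Compose an existing agent with a dynamically created one."""
    parser = build_little_rock("parse custom report")  # little rock, made on the fly
    value = parser(report_text)
    return BIG_ROCKS["analyze_data"](value)            # plug result into the big rock
```

If the little rock proves useful, the real system reportedly saves and reuses it, which is the learning point discussed next.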
Speaker 1 (13:03):
Wow.
So the system adapts its own toolkit on the fly.
Speaker 2 (13:07):
That's the core of it, and that dynamic creation and composition is what makes it, in their words, truly agentic.
It's not just executing.
It's planning, reasoning, breaking down tasks, composing solutions and, crucially, verifying the results and improving itself along the way.
Speaker 1 (13:22):
And you mentioned
improvement.
These little-rock agents, if they work well, do they stick
around?
Speaker 2 (13:26):
Apparently yes.
If a dynamically created component or workflow proves effective and useful, it can be saved, remembered and reused.
So the system gets smarter and more efficient over time.
It's like that new-employee analogy again: they learn a new skill, they become more valuable.
Speaker 1 (13:42):
So the big takeaway here is this AI isn't just smart, it's adaptively smart.
It's constantly learning, growing, tailoring itself to my specific business needs.
Speaker 2 (13:53):
That seems to be the fundamental idea: a system that gets better with you, which is a massive edge in a world that changes so fast.
Speaker 1 (14:00):
That adaptability
brings us neatly to the next
point, this idea of self-improvement and memory.
Speaker 2 (14:07):
It sounds like it's baked right into the system.
Absolutely, it's not an add-on.
It's core to how these CRAFT agents operate.
They don't just run a task and forget about it.
They learn, they improve, they have self-improvement capabilities fueled by long-term memory.
Speaker 1 (14:18):
And how do they build
up this experience?
Just by doing the tasks we give them?
Speaker 2 (14:23):
That's part of it, yes. They learn through usage, seeing what works, getting feedback, implicitly or explicitly.
What's really interesting is they also learn through self-exploration.
Speaker 1 (14:33):
Self-exploration.
What does that mean?
Like they practice in their downtime?
Speaker 2 (14:37):
Kind of. It implies they might proactively test different ways to solve a problem they've encountered before, even when not actively tasked, or maybe explore adjacent data sources, look for patterns, actively seeking ways to optimize their own internal methods.
It's more proactive than just waiting for the next assignment.
Speaker 1 (14:57):
Okay, that's a subtle
but important distinction.
It's not just passive learning.
Speaker 2 (15:01):
Right, and Ashish broke down this self-improvement through memory into three key dimensions.
First is knowledge acquisition. Getting more facts?
More than just facts.
It's about learning specialized domain knowledge.
Think about industries like semiconductors or biotech.
They have incredibly specific jargon, complex processes, unique physics.
The agents learn this context.
Speaker 1 (15:23):
So they speak the
language of the business.
Speaker 2 (15:24):
Yes, and beyond that
formal knowledge.
Speaker 1 (15:30):
They also pick up tacit knowledge, the stuff that's usually not written down. Like office politics?
Speaker 2 (15:32):
Huh, maybe not quite that, but more like the unwritten rules of how things actually get done in a specific company.
Operational nuances, who the real expert on System X is, maybe even subtle approval workflows.
This stuff is critical for fitting smoothly into an organization's real processes.
Speaker 1 (15:49):
Okay, that makes
sense, embedding itself in the
culture almost.
What's the second dimension?
Speaker 2 (15:53):
Second is skill acquisition.
This is about process.
Once an agent figures out a really effective way to do something, maybe the fastest way to query a particular database or the best technique for cleaning a certain type of messy data, that knowledge, that skill, is persistent.
Speaker 1 (16:12):
It remembers the
how-to.
Speaker 2 (16:13):
Exactly.
It internalizes that optimal workflow and reuses it, so the system gets more efficient, more reliable over time.
Ashish compared it to a new employee picking up new skills: they get better, faster, more dependable at their job.
It's not just remembering facts, it's remembering how to do things well.
Speaker 1 (16:28):
Right, learning best practices. And the third dimension?
Speaker 2 (16:31):
The third one ties back to our earlier discussion: new agent creation and acquisition.
This is where the systemdoesn't just improve existing
agents or skills, it actually builds or acquires entirely new
capabilities, new agents, as it runs into new kinds of problems,
so its overall potential grows over time.
Precisely.
Its scope expands, its ability to tackle more diverse and
complex challenges increases dynamically.
It's this hierarchy, knowledge, skills, new capabilities, that
creates this compounding intelligence effect.
Speaker 1 (17:04):
Making it more
valuable the more you use it.
Speaker 2 (17:06):
That's the idea, and
they have a concrete example of
this in action.
They used CRAFT to help build itself.
Speaker 1 (17:11):
Right, the dogfooding example, eating your own dog food.
Speaker 2 (17:13):
Exactly.
They used CRAFT internally to automate writing code for connecting different internal systems needed for CRAFT's own development.
Speaker 1 (17:20):
Like connecting their analytics database to their CRM, you mentioned?
Speaker 2 (17:23):
Yeah, things like that.
Instead of engineers manually scripting those connections, CRAFT learned the requirements, learned the schemas and built those data bridges automatically.
Speaker 1 (17:36):
It saved their own team significant manual coding effort.
That's pretty compelling.
It shows immediate practical value, even for building the tool itself.
Speaker 2 (17:40):
It does.
It demonstrates that the self-improvement and automation aren't just theoretical.
They deliver real productivity gains right away.
Speaker 1 (17:47):
So the upshot of all this learning and memory: the system gets smarter, it adapts, it tailors itself to your specific business context.
That feels like a significant competitive advantage.
Speaker 2 (17:58):
It certainly seems
positioned that way, an
intelligence that evolves withyour enterprise.
Speaker 1 (18:02):
Okay, so we've got this powerful, learning, agent-creating platform.
Where is it actually making waves?
What kind of real-world impact are we seeing?
Speaker 2 (18:11):
Well, Emergence AI is pretty clear that CRAFT is purpose-built for data-heavy environments where rapid decision-making is critical.
Speaker 1 (18:19):
So industries
drowning in data.
Speaker 2 (18:21):
Exactly. Places where the sheer volume, velocity and variety of data make manual analysis incredibly slow, error-prone or just plain impossible.
They list several key industries already seeing impact.
Semiconductors is a big one, and we should definitely dig into that, but also oil and gas, telecommunications, healthcare, financial services, supply chain and logistics, e-commerce and even research environments.
Speaker 1 (18:46):
That's a pretty broad range, all data-intensive though.
Speaker 2 (18:49):
Very much so.
Places where delays mean lost money or missed opportunities, and where complexity is high.
Speaker 1 (18:54):
Okay, let's do that
deep dive into semiconductors.
You said it's a prime exampleof ROI.
Speaker 2 (18:59):
Absolutely.
The semiconductor industry is just incredibly data-intensive.
We're talking hundreds of gigabytes of data per product weekly, maybe even more now.
Speaker 1 (19:09):
Wow, where does all
that data come from?
Speaker 2 (19:11):
Every stage. Designing the chip, the actual fabrication in the fabs, those super-clean factories, then offshore assembly and testing facilities, plus data from the fabless companies themselves, like design specs, customer requirements, quality targets.
It's a deluge from multiple, disparate sources.
Speaker 1 (19:29):
And the challenge is
making sense of it all quickly.
Speaker 2 (19:31):
Precisely. A key problem is detecting subtle drops in manufacturing yield.
We're not just talking about a simple alert if yield falls below 90%.
It's often about spotting subtle downward trends hidden in noisy data, or complex anomalies that aren't just a single parameter going wrong.
Speaker 1 (19:49):
Things that are hard
for a human to spot just by
looking at dashboards.
Speaker 2 (19:53):
Very hard, especially when you need to correlate data across all those different sources: the fab, the assembly plant, the design data.
Maybe a tiny temperature fluctuation in one machine in the fab, combined with a specific material batch used weeks ago, is causing a slight increase in failures during final testing overseas.
Speaker 1 (20:11):
Connecting those dots
sounds incredibly difficult.
Speaker 2 (20:14):
It is.
It requires pulling data from multiple systems, often with different formats, different timestamps, different owners.
Doing that root cause analysis manually is described as almost impossible for humans to stay on top of efficiently.
The sources say analysis typically takes days.
Speaker 1 (20:28):
Days. And in semiconductor manufacturing, days mean potentially millions of dollars lost, right?
Speaker 2 (20:35):
Easily.
If a yield issue isn't caught and fixed quickly, you could be producing thousands of faulty chips.
That's wasted materials, lost production capacity, potential delays to customers.
It adds up incredibly fast.
Speaker 1 (20:48):
So how does CRAFT help here?
Speaker 2 (20:50):
It automates that whole detection and root cause analysis process.
Instead of days, the analysis time gets cut down to minutes.
They give an example of offline analysis taking maybe 20 or 30 minutes instead of several days.
Speaker 1 (21:03):
From days to minutes.
That's the game changer.
Speaker 2 (21:05):
It really is, yeah, and the value is crystal clear.
They talk about identifying issues that lead to millions in weekly savings, because companies can take immediate actions to counteract anomalies.
Catch it early, fix it fast, save a fortune.
Speaker 1 (21:17):
And crucially, what
about the engineers?
Are they being replaced?
Speaker 2 (21:20):
No, and this is a point they emphasize.
The agents complement the skill sets of the existing engineers.
Hardware engineers are experts in chip design, materials, physics.
Their core job isn't usually data science or AI programming.
Speaker 1 (21:34):
Right, they're domain
experts, not necessarily data
wranglers.
Speaker 2 (21:37):
Exactly.
So CRAFT frees them up from digging through log files and spreadsheets.
It gives them the insights they need quickly so they can focus on what they're best at: solving the underlying engineering problem, improving the design, tweaking the manufacturing process.
It augments their expertise.
Speaker 1 (21:54):
That makes sense, empowering the experts, not replacing them.
Are there other strong examples outside of semiconductors?
Speaker 2 (22:05):
Yes, several.
In telecom, for instance, they report a 70% reduction in the time spent on data governance tasks.
Speaker 1 (22:08):
Data governance. That sounds important, but maybe a bit dry.
Speaker 2 (22:11):
It's critical in regulated industries.
Think about ensuring compliance with GDPR or CCPA, tracking where data came from and who accessed it, data lineage, enforcing policies, classifying sensitive info.
It's hugely time-consuming and often manual.
Automating 70% of that is a massive efficiency win.
Speaker 1 (22:29):
Okay, yeah, I can see the value there. Less risk, less overhead.
Speaker 2 (22:32):
Then there's a fascinating case with a large online forum.
They used AI agents to help reduce the number of unsafe images being posted by 60 million per month.
Speaker 1 (22:42):
That's a staggering
scale.
Speaker 2 (22:44):
It is, and it's likely not just simple image filtering.
It probably involves understanding context, maybe analyzing text associated with the image, applying complex moderation rules at a scale humans just couldn't handle.
It lets human moderators focus on the really tricky, nuanced cases.
Speaker 1 (23:01):
Improving safety and
probably reducing moderator
burnout too.
Speaker 2 (23:05):
Very likely.
And another really interesting one is the integration with Nymerson, specifically their O-plus solution for complex manufacturing.
Speaker 1 (23:13):
What was the challenge there?
Speaker 2 (23:14):
Aaron Rousseau from Nymerson explained it well.
He said previously, manual analysis of production data was so slow that by the time analysis was complete, the next production run was already underway. Insights were always too late to actually influence the current process.
Speaker 1 (23:28):
So purely reactive.
Speaker 2 (23:29):
Exactly.
With CRAFT integrated, they get real-time insights aligned with complex production workflows.
This allows for improving yields and accelerating issue detection.
Imagine a sensor flags something subtle.
CRAFT analyzes it instantly, correlates it, maybe suggests an adjustment before defects start happening.
That's proactive control.
Speaker 1 (23:47):
That real-time aspect
seems key in dynamic
environments.
Speaker 2 (23:50):
Absolutely.
And beyond these industrial cases, the demos on the Emergence AI website show even more potential.
Research analysis: summarizing academic papers, pulling out key findings, even spotting conflicting results, and sending it all to Confluence.
Speaker 1 (24:05):
Saving researchers hours of reading time.
Speaker 2 (24:08):
For sure.
E-commerce competitive analysis, comparing platforms like Shopify and Wix on features, pricing, market positioning.
Health insurance insights, pulling investment and M&A data from dense financial reports.
Gene therapy market analysis from SEC filings.
Even summarizing the huge WHO global nutrition report.
Speaker 1 (24:27):
Wow. It seems applicable anywhere there's complex information or data that needs extracting, synthesizing or analyzing quickly.
Speaker 2 (24:34):
That's the impression.
It's acting like a highly efficient, intelligent knowledge worker across a very broad range of tasks.
Speaker 1 (24:40):
Okay, this is undeniably powerful stuff, which naturally leads to the big questions: trust, control, privacy, security, compliance.
How do you manage AI agents that are becoming increasingly autonomous?
That sounds like a major concern for any enterprise.
Speaker 2 (24:52):
It's a huge concern, and rightly so.
Robbie, the co-founder and CTO at Emergence AI, was very clear that governance isn't an add-on.
It's designed in as a core principle.
Speaker 1 (25:04):
Meaning it's built in
from the start, not bolted on
later.
Speaker 2 (25:07):
Exactly.
The idea is that enterprises themselves should define the rules, the policies.
They specify what agents can and cannot do.
For example, a policy might state that no agent can access personally identifiable information without specific anonymization steps, or that any financial transaction initiated by an agent requires sign-off from a specific manager.
Speaker 1 (25:28):
So the company sets
the boundaries and the system
enforces them.
Speaker 2 (25:32):
Right.
Mechanisms are built into CRAFT to enforce these policies: things like granular access controls, detailed audit logs tracking every action an agent takes, and limiting the action space or capabilities available to certain agents based on policy.
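A policy-enforcement layer like the one described above might be sketched like this. The policy names, action format and rules below are invented purely for illustration:

```python
# Hypothetical sketch of policy checks plus an audit log for agent actions.

POLICIES = {
    # No PII access unless the action is explicitly anonymized.
    "no_pii_access": lambda action: not (action["resource"] == "pii"
                                         and not action.get("anonymized")),
    # Financial actions require a recorded human sign-off.
    "finance_needs_signoff": lambda action: not (action["type"] == "financial"
                                                 and not action.get("approved_by")),
}

AUDIT_LOG = []

def execute(action):
    """Run an agent action only if every policy allows it; log everything."""
    violations = [name for name, allowed in POLICIES.items() if not allowed(action)]
    AUDIT_LOG.append({"action": action, "violations": violations})
    if violations:
        return f"blocked: {violations}"
    return "executed"
```

The second rule here is also a simple form of the human-in-the-loop idea discussed next: the system blocks the action until a person has signed off.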
Speaker 1 (25:45):
What about human
oversight?
Is it all just automated rules?
Speaker 2 (25:48):
No, and this is critical.
They stress the concept of human-in-the-loop.
While the goal is more autonomy, human oversight is still considered essential, especially for critical decisions or actions.
Speaker 1 (25:58):
So the system might
do the analysis, maybe even
propose an action, but a humangives the final OK.
Speaker 2 (26:05):
Precisely.
It's about ensuring the agent's actions always align with the company's principles, ethics and compliance needs.
It's collaborative intelligence, not handing over the keys completely.
The system flags things that need human review and provides the context for that decision.
Speaker 1 (26:21):
That makes sense.
It's about augmenting human judgment, not replacing it entirely.
Speaker 2 (26:26):
And this ties into
how workforce roles are likely
to change.
Satya, the CEO, used a really interesting analogy with
self-driving cars.
Speaker 1 (26:33):
Oh, yeah, how so.
Speaker 2 (26:35):
He pointed out that
even with level five autonomy,
fully self-driving, there's still likely a human in the loop somewhere, maybe overseeing a fleet from a control center, even if they aren't physically in the car.
He sees a parallel in the enterprise.
Speaker 1 (26:48):
So humans shift from
doing the task to managing the
system that does the task.
Speaker 2 (26:52):
Exactly.
The role evolves from doing all the coding or building the agents yourself to overseeing these systems, nudging these systems, ensuring they stay within guardrails and making sure the outputs meet the business requirements.
It's a supervisory, strategic role.
Speaker 1 (27:08):
That sounds like a
different skill set, though.
Speaker 2 (27:09):
It absolutely is.
Satya called them new skills, even metacognitive skills, that people have to be taught: things like understanding how agents reason, how to formulate problems effectively for an AI, how to interpret their outputs critically, and how to define and monitor those crucial guardrails.
Speaker 1 (27:27):
It's less about
execution and more about
orchestration and governance.
Speaker 2 (27:31):
Well put and
recognizing this need, Emergence
AI has actually partnered with Andela.
Speaker 1 (27:36):
Ah, Andela, the global tech talent company?
Speaker 2 (27:39):
That's the one.
They're working together to specifically train engineers and data scientists for this new reality, training them to go from having to build agents themselves to managing agents, governing agents and working with agents.
Speaker 1 (27:50):
That's proactive,
addressing the skills gap
head-on.
Speaker 2 (27:53):
It seems so.
The training focuses on things like advanced prompt engineering for guiding agents, debugging agent workflows, setting up monitoring, and defining those governance policies effectively.
Speaker 1 (28:03):
And the long-term
vision?
Is it just for techies?
Speaker 2 (28:07):
No, the ultimate
vision they articulate is that even business users with some analytical skills should be able to work with systems like this and derive value instantly.
So further democratization, empowering a broader range of people within the enterprise.
Speaker 1 (28:21):
Okay, let's dig into
the tech a bit more.
Interoperability is always a huge issue in enterprise IT.
How does CRAFT play with existing tools?
And what about all the different AI models out there: OpenAI, Anthropic, Meta? Is it locked into one?
Speaker 2 (28:37):
Great questions and,
according to Vive, interoperability was a first-class design goal right from the start.
They seem very committed to not creating another silo.
Speaker 1 (28:45):
How do they achieve
that?
What are the mechanisms?
Speaker 2 (28:47):
Several things.
They offer open API specifications so other systems can integrate with CRAFT.
They have an SDK, a software development kit, for developers who want to build custom integrations or extensions.
And, critically, they fully adopted something called MCP.
Speaker 1 (29:02):
MCP, Model Context Protocol?
What is that?
Speaker 2 (29:05):
Think of it like a
universal translator and capability directory for AI models and agents.
It allows different AI systems, even from competing vendors, to understand each other's requests, data formats, function calls and context.
CRAFT can act as both an MCP client, using other MCP-enabled tools, and an MCP server, allowing other tools to use CRAFT's capabilities.
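For a rough sense of what that shared protocol looks like on the wire: MCP messages follow JSON-RPC 2.0, and a client invokes a server-side tool with a "tools/call" request. The sketch below shows that message shape in Python; the tool name and arguments are invented for the example, not real CRAFT capabilities.

```python
import json


def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP-style JSON-RPC 2.0 'tools/call' request as a JSON string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })


# A client asking an MCP server to run a (hypothetical) pipeline tool:
request = make_tool_call(1, "run_data_pipeline", {"source": "sales_db"})
```

Because both sides agree on this envelope, a platform can consume another vendor's tools, and expose its own, without bespoke point-to-point integrations.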
Speaker 1 (29:26):
Okay, so MCP is key
for making different AI systems
talk to each other without custom hacks.
Speaker 2 (29:30):
That's the promise of it, yes.
It enables a more open, interoperable AI ecosystem within a company.
Speaker 1 (29:36):
And which AI models
does CRAFT actually support?
Speaker 2 (29:39):
They're explicitly
model agnostic.
They support a range of the big ones right now: OpenAI's GPT-4o and GPT-4.5, Anthropic's Claude 3.7 Sonnet, and Meta's Llama 3.3.
And they also support popular orchestration frameworks that developers might already be using, like LangChain, CrewAI and Microsoft AutoGen.
Speaker 1 (29:59):
So companies aren't
locked into one specific LLM
vendor.
Speaker 2 (30:02):
Correct.
They can choose the best models for the job or use models they already have access to, and Emergence AI runs continuous benchmarking to track how these models perform, ensuring results stay consistent, or deterministic, as they say, even as the underlying models get updated or drift over time.
Speaker 1 (30:17):
That model drift is a
real concern, isn't it?
An update to an LLM could break a workflow.
Speaker 2 (30:22):
It absolutely can, so
that continuous benchmarking and focus on determinism is really important for enterprise reliability.
It raises a broader question, though: how vital is this open, model-agnostic approach for the long-term success of enterprise AI?
It feels like a smart strategy in such a fast-moving field.
Speaker 1 (30:39):
It definitely
provides flexibility and
future-proofing.
Speaker 2 (30:44):
Okay, looking ahead,
what's on the roadmap for CRAFT?
They started with data pipelines, but where do they go next?
Speaker 1 (30:52):
Right.
They're clear that the current focus on the data space is strategic.
It's where they can deliver high accuracy, reliability, task completion and demonstrate that clear ROI we talked about.
Speaker 2 (31:03):
Focusing on tangible
value first.
Speaker 1 (31:05):
Exactly.
They acknowledge that the dream of a completely general, open-ended orchestrator, like a true AGI that can do anything, is still a way off and requires major research breakthroughs.
Speaker 2 (31:16):
So they're being
pragmatic about the current
scope.
Speaker 1 (31:17):
Yes, but they have
ambitious plans to expand.
On the base platform layer, they're planning more connectors, especially using MCP, to talk to even more tools and data sources.
They're building out agent deployment and scheduling, so you can run agents automatically on a schedule or trigger them based on events, like new data arriving.
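An event-triggered setup like the one just described can be pictured as a small registry mapping events to agent runs. This is purely a hypothetical sketch; the class, method and event names below are invented, not CRAFT's real scheduling API.

```python
from typing import Callable


class AgentScheduler:
    """Toy event-trigger registry: agents run when a named event fires."""

    def __init__(self) -> None:
        self._triggers: dict[str, list[Callable[[dict], str]]] = {}

    def on_event(self, event_name: str, agent: Callable[[dict], str]) -> None:
        """Register an agent to run whenever event_name fires."""
        self._triggers.setdefault(event_name, []).append(agent)

    def fire(self, event_name: str, payload: dict) -> list[str]:
        """Invoke every agent registered for this event, collecting results."""
        return [agent(payload) for agent in self._triggers.get(event_name, [])]


scheduler = AgentScheduler()
scheduler.on_event("new_data_arrived", lambda p: f"ingested {p['table']}")
results = scheduler.fire("new_data_arrived", {"table": "orders"})
```

The same registration pattern covers time-based schedules too, with a clock tick as just another event source.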
Speaker 2 (31:33):
Making them more
proactive and integrated into
workflows.
Speaker 1 (31:36):
Right and, interestingly, in the next three to six months they plan to add code viewing and editing capabilities.
Ah, so developers can see the code the agents generate and maybe tweak it.
Speaker 2 (31:47):
That seems to be the
idea.
More transparency, more control, better debugging.
Plus ongoing work on improving the human-in-the-loop experience, making that interaction smoother.
Speaker 1 (31:58):
Okay, that's the
platform layer.
What about the data capabilities themselves?
Speaker 2 (32:02):
They're expanding
there too, across three main areas.
First, more sophisticated agentic data analysis: agents doing more advanced stats, finding trends, maybe even generating hypotheses autonomously.
Speaker 1 (32:14):
Going beyond just
fetching and cleaning.
Speaker 2 (32:16):
Second, deeper
capabilities in data governance.
Things like automatically building data catalogs, assessing and enriching metadata, monitoring data quality continuously, automating more of that critical but often burdensome work.
Speaker 1 (32:29):
Automating the
compliance and quality checks.
Speaker 2 (32:31):
And third, enhancing
data engineering capabilities.
This includes automating pipeline creation and optimization, maybe even self-healing pipelines that can fix themselves if they encounter problems.
Speaker 1 (32:41):
Oh, it sounds like
they're aiming for comprehensive
agentic control over the whole enterprise data lifecycle.
Speaker 2 (32:47):
That seems to be the
direction.
Starting focused, then systematically expanding outwards to cover more and more ground.
Speaker 1 (32:55):
So it feels like
we're watching the early stages
of a really adaptable intelligence taking shape, starting with data but with potential far beyond.
Speaker 2 (33:05):
I think that's a fair
assessment.
Speaker 1 (33:06):
Okay, this is all
incredibly exciting for people listening who are thinking, I want to try this, or, my company needs this.
How can they get started with CRAFT?
Speaker 2 (33:14):
Right now, CRAFT is
available in a private preview.
This is mainly aimed at developers and integrators who want early access to test it out, build on it, provide feedback.
Speaker 1 (33:25):
So not quite general
release yet.
Speaker 2 (33:27):
Not quite.
Broader availability for general enterprise use is planned for later this summer.
Interested developers or companies can go to the Emergence AI website right now and sign up for that preview program.
Speaker 1 (33:38):
And what if a large
enterprise has a really specific, urgent need?
Speaker 2 (33:42):
They mentioned that
direct engagements are also possible on a case-by-case basis, so reaching out directly via the website would be the way to go.
And, as we mentioned, there are demos on the site too, which give a good feel for what the platform can do.
Speaker 1 (33:55):
Good to know.
And what about pricing?
Any idea how that will work?
Speaker 2 (33:59):
They've outlined
three planned tiers.
There'll be a free tier, which sounds great for individuals wanting to experiment and just kick the tires.
Speaker 1 (34:07):
Lowering the barrier
to entry.
Speaker 2 (34:08):
Then a pro tier,
which will be cost-based and offer more capabilities and higher usage limits, probably aimed at power users or small teams.
Speaker 1 (34:16):
Okay.
Speaker 2 (34:17):
And finally, an
enterprise tier.
This will have custom pricing, likely based on usage scale and specific needs.
It will include advanced governance features, maybe dedicated infrastructure support, and tailored solutions for bigger organizations.
Speaker 1 (34:32):
That tiered approach
makes sense.
Start free, scale up as needed.
Speaker 2 (34:35):
Seems designed to
encourage adoption and learning.
Speaker 1 (34:38):
All right.
As we start to wrap up this deep dive, it leaves me with a thought.
We live in this world that's just drowning in information, right?
Data everywhere, complex workflows slowing everyone down.
Speaker 2 (34:49):
The daily reality for
many businesses.
Speaker 1 (34:52):
So maybe the biggest
productivity unlock isn't just finding ways to do our current tasks a bit faster.
Maybe it's about getting smarter at designing the systems that do those tasks for us.
What if the real edge comes from creating and managing these evolving fleets of AI agents that learn your business, adapt to your needs and free up your people for the really high-impact strategic thinking?
Speaker 2 (35:14):
That's a powerful
idea and it raises a really important question, doesn't it?
As these agents get better at creating and managing other agents, essentially building layers of autonomous capability, what does
Speaker 1 (35:25):
that free us up to do?
Speaker 2 (35:26):
What new frontiers of
human creativity, innovation and strategic planning does that unlock?
When we're freed from the drudgery, how do our roles shift from task execution to, maybe, system conceptualization and orchestration, focusing on the truly human skills of innovation and complex problem solving at a higher level?
Speaker 1 (35:44):
That's definitely
something to ponder.
Wow, that was quite the journey into the world of agents creating agents with Emergence AI's CRAFT.
I hope everyone listening feels much more informed now, and maybe even a bit inspired to think differently about where AI is heading in the enterprise.
Speaker 2 (36:02):
Indeed, it's
fascinating stuff.
Knowledge is always most valuable when you can actually understand it and think about how to apply it.
Hopefully this conversation has provided some powerful insights for you to do just that.
Speaker 1 (36:14):
Couldn't agree more.
Thank you for joining us on this edition of the Deep Dive.
We look forward to exploring more fascinating topics with you again soon.
Speaker 2 (36:21):
Stay curious.
Speaker 1 (36:22):
And keep diving deep.