Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
As AI's breakout continues, generative AI feels like it's hit escape velocity in the enterprise, and yet many organizations still feel like they're spinning their wheels.
On today's show, we go inside AI Day, where leaders from WWT laid out what they're seeing in the field, the challenges, the success stories and what they call a practical path to AI
(00:21):
transformation.
We explore what it really takes to move from experimentation to execution, cutting through the hype, aligning stakeholders and building the infrastructure, data strategy and use case clarity that can turn AI promise into practical, business-driving results. What does it really take to move from experimental hype to practical outcomes?
(00:41):
This episode will be structured into three key sections: practical approaches to accelerating AI results from John Duren; agentic AI in action from Mark DeSantis and Harry Covey; and why data readiness is foundational to AI success, which will feature Jonathan Gassner and Bill Stanley.
This is the AI Proving Ground podcast from Worldwide
(01:02):
Technology.
I'm your host, Brian Felt.
Let's get started.
Let's start in an unlikely place: a public high school somewhere in the Midwest. Student discipline problems were on the rise, engagement was low, and then something happened.
Speaker 2 (01:24):
Sonny doesn't have a
desk and you won't find his
picture in the annual school yearbook. But what Sonny does have is time for these students.
Speaker 1 (01:33):
That was John Duren,
an AI and data solutions expert
from WWT.
Now Sonny isn't on the school payroll. He, or maybe it's a she, doesn't grade papers. That's because Sonny isn't a person. Sonny is an AI developed by Sonar Mental Health, a digital companion available 24/7, trained on a data set and under
(01:53):
the guidance of mental health professionals.
Speaker 2 (01:56):
Sonny has seen hundreds of students at this point, worked with hundreds of students. He's been able to recognize signs of loneliness, anxiety and even early stages of depression. And he does this silently, without judgment, without delay.
Speaker 1 (02:13):
Initially there was
skepticism, but as time passed,
so did that concern.
Speaker 2 (02:17):
But the results have
been incredible, to the point
that Sonny is now distributed across 50 school districts, not to replace the human connection but to ensure that no students fall through the cracks when they're looking for that human connection.
Speaker 1 (02:31):
This, John said, is practical AI.
Speaker 2 (02:35):
Not hypothetical, not
futuristic, just a real
approach to a real problem serving real people at scale.
Speaker 1 (02:43):
Sonny's story is powerful, heartwarming for sure, but for most organizations trying to implement AI, the journey is a little bit more complex. John said customers usually move through four key stages.
Speaker 2 (02:55):
Exploratory, experimental, operational, transformative.
Speaker 1 (02:58):
In the early stages
there's often a flurry of
experimentation, proofs of concept, hackathons, isolated pilots, but without a centralized strategy it's easy to stall. So what's the most common pitfall?
Speaker 2 (03:10):
First, and probably
the single most important thing
that we start every meeting with is understanding what your use cases are. This is crucially important. Without a great use case, AI projects are doomed to challenges.
Speaker 1 (03:26):
We're coming off a
year where companies are
investing billions of dollars into AI, often without a roadmap. A recent Gartner report warned that by the end of 2025, over 70% of Gen AI pilot projects will never make it into production. Why? Because the use case wasn't clear, or wasn't feasible, or didn't matter to the business.
(03:46):
Even when the use case is right, there's another challenge: change, which can make things uncomfortable.
Speaker 2 (03:52):
AI is evolving almost
every day, not just every week.
In the process of building many of the tools that you'll hear about today from Worldwide, where we've worked with AI, the technology has evolved underneath our product and development three and four times during the development cycle.
Speaker 1 (04:11):
And if your plans
can't flex with that, you're
toast.
Speaker 2 (04:15):
Design knowing that
there's going to be challenges.
Heck, design knowing there's going to be mistakes. Be willing to accept those, roll with the punches and move
on.
Speaker 1 (04:25):
Next comes the
question of talent.
Speaker 2 (04:27):
I'd be willing to bet
that every organization in this
room has already underestimated the talent skills gap in your
organization.
Speaker 1 (04:35):
Sound familiar? According to LinkedIn's 2025 Future of Work report, demand for AI-related skills has outpaced supply 6 to 1 in the enterprise sector.
John didn't mince words.
Speaker 2 (04:47):
Those who embrace AI
will definitely have a business-enabling competitive advantage. Those who don't, well, I'll leave that to you.
Speaker 1 (04:56):
His advice: invest now
and invest early.
Speaker 2 (05:00):
Focus on training
your folks on AI and how it's
going to impact your business.
Speaker 1 (05:04):
So you've got a good use case, you're adapting to change and you're investing in your people. You're on your way, right? Almost, but not if you leave out what might be the most overlooked pillar: security. Security is critically important at every stage: dealing with AI for security, security for AI, security of AI, security for
(05:24):
deepfakes. In other words, every organization deploying AI is also expanding its threat surface, and with deepfakes, hallucinations and prompt injection making headlines almost weekly this year, there's no such thing as waiting until phase three to think about this. So how do you know when it's all actually working?
Speaker 2 (05:44):
Across well over a hundred major projects with customers in AI over the last few years, we found that really there are three major milestones that have to be crossed before you can actually say I'm making an impact on my organization.
The first of those is when the company begins to make, or the organization begins to make, complementary investments in
(06:04):
both talent and the infrastructure. AI is different. It's different than anything we've done before. When you start deploying workloads, it requires a serious investment in what that's going to look like. What is the architecture needed? How do I feed those GPU cards that are so expensive? The second is when you really begin to leverage internal data.
(06:25):
We've seen this, and you'll hear a lot about this during the breakout session, where we talk about Atom AI. Mark will probably mention it shortly in his part of our presentation, but adding your internal and private data into your AI infrastructure is where you start really seeing the value. And you find the more data you add, the better the results become.
(06:46):
Lastly, and this is really the key metric: as companies begin to rapidly approach 25% of your business processes and your workflows being fundamentally impacted by AI, we see this stark turn up in the curve of value out of the AI investments and the momentum. The flywheel effect is taking hold and you're beginning to see
(07:09):
the AI projects are driving the next AI project, and so forth.
Speaker 1 (07:14):
There's a concept in
the AI world that explains part
of this: data gravity.
Speaker 2 (07:18):
The amount of data
required to truly feed the GPUs
is substantial, and that data absolutely has gravity. It's hard to move. You end up bringing your applications to the data instead of trying to move all your data to the applications. Dealing with that is fundamentally an issue we have to focus on, and that's why data readiness becomes a critical
(07:39):
part of the practical approach to AI.
Speaker 1 (07:42):
It makes sense, right? The more data you generate, the harder it is to move your stack. So your architecture and your AI need to come to the data. It's why data readiness is the foundational layer of practical
AI.
Speaker 2 (07:55):
Every AI project
we've been involved in was at
least 80% a data project.
Speaker 1 (08:00):
In the end, practical
AI isn't just a theme, it's a
mindset.
It's about asking not if you can do something, but why, how and to what end. As John said, putting AI into practice is only as effective as
(08:22):
the data that fuels it. And that's where many organizations hit a wall, because the real engine behind practical AI isn't just algorithms, it's data readiness. Clean, connected and accessible data is what turns an AI strategy from theoretical to transformational. Without it, even the most advanced models can't deliver real business value.
Speaker 3 (08:45):
Most AI projects
don't fail because the model
wasn't smart enough, but they fail because the data wasn't
ready for AI.
Speaker 1 (08:53):
That's Jonathan
Gassner, a technical solutions
architect here at WWT.
Speaker 3 (08:58):
And it's not your AI
model, but it's your data
supporting your AI model.
Speaker 1 (09:02):
And, more often than
not, that data is broken in ways
companies don't expect. Data silos: data managed by different parts of the organization.
Speaker 4 (09:12):
There's no data
sharing going on.
The data processes are ad hoc at best.
Speaker 1 (09:18):
It's not that the
metrics are wrong.
They're just calculated from different data, using different definitions, based on different sources.
Speaker 4 (09:25):
Turns out that all of
the metrics are correct,
they're all calculated correctly, but they're all using
different inputs.
They don't all have access to the same data variables to calculate that metric. You have to have that North Star, that guiding light, so that all of your activities, data and technology, are aligned
(09:45):
around the same objective.
Speaker 1 (09:47):
That alignment starts
with understanding how your
business is structured and which approach to data strategy makes sense for you. Bill Stanley laid out two options: data fabric and data mesh. Each one serves a different kind of business, and understanding the difference could be the key to unlocking real value. Let's look at two examples. First, imagine a regional manufacturing company.
Speaker 4 (10:09):
They do one thing,
and they do it really, really well. They've been around for 20, 25 years and they've grown organically, not through acquisition.
Speaker 1 (10:19):
Operations are
centralized, but their data is still fragmented, still hard to trust.
Speaker 4 (10:23):
This is where data fabric comes in: centralization, bringing everything together, the data management, the data governance. We're curating that data and serving it up in a metadata-driven fashion.
Speaker 1 (10:38):
In this model, the
organization pulls together all
their data across departments into a single source of truth.
Speaker 4 (10:45):
Governance, the
integration of those diverse
data sources.
They need to automate that, right? We want the data available when we need it. We don't want to have to wait for that. And then we want to wrap it in this warm blanket of data security and compliance.
Speaker 1 (10:59):
Data fabric allows this kind of company to align everything from HR to finance to operations under a common, trusted foundation. But what if your organization is different, say a global conglomerate?
Speaker 4 (11:11):
Here we have a large,
diversified company with a long
history, very distinct business units, and data is often siloed by business. They operate in distinct verticals like mobility, energy,
finance and construction.
Speaker 1 (11:25):
In that case, pulling everything into one place, as data fabric suggests, might be a little bit unrealistic.
Speaker 4 (11:31):
Can you imagine trying to bring all that data into one place from all those businesses and provide the management and the governance for that?
That's Herculean.
That's where data mesh comes in. Data mesh focuses on decentralization: decentralization of data ownership, data stewardship, federated governance.
Speaker 1 (11:52):
Each business unit
owns and governs its own data,
but shares it across the enterprise in a governed,
standardized way.
Speaker 4 (11:58):
Here we're going to
serve up data as a product.
The businesses will be responsible. They're the owners. They will serve that data as a product to the data mesh. It requires a holistic approach. It's a combination of tools, processes and cultural change.
It's not a product.
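To ground the "data as a product" idea, here is a minimal sketch, not from the episode, of what a data product contract might look like in Python; the product name, fields and tags are illustrative assumptions.

```python
# Illustrative sketch only. In a data mesh, a "data product" is owned by
# one business unit but published behind a common, governed contract that
# the rest of the enterprise can discover and consume.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class DataProduct:
    """A hypothetical contract each business unit publishes to the mesh."""
    name: str                        # e.g. "mobility.trip-events"
    owner: str                       # the accountable business unit
    schema: Dict[str, str]           # column name -> type: the governed interface
    fetch: Callable[[], List[dict]]  # how consumers actually pull rows
    tags: List[str] = field(default_factory=list)


# Each unit serves its own data; governance is federated, not centralized.
trip_events = DataProduct(
    name="mobility.trip-events",
    owner="Mobility BU",
    schema={"trip_id": "str", "started_at": "datetime", "fare": "float"},
    fetch=lambda: [{"trip_id": "t-001", "started_at": "2025-01-01T08:00", "fare": 12.5}],
    tags=["pii:none", "refresh:hourly"],
)

# A consumer anywhere in the enterprise reads through the contract,
# never through the producing unit's private systems.
for row in trip_events.fetch():
    print(row["trip_id"], row["fare"])
```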
Speaker 1 (12:14):
And that brings us to
the next piece of the puzzle: execution. Once your data strategy is clear, you need to build your
team.
Speaker 4 (12:21):
Maybe it's just a
team, maybe it is a COE, but
there are a couple key components you should have, right? You should have an executive sponsor, some sort of leader championing your efforts. And then the business. We have to have the business involved. We have to have representation across the business.
Are we going to ensure the adoption throughout the
(12:41):
organization? Now we're ready for execution. We've got our team in place. We have representation from the business. We can ideate on these business ideas and figure out where we have some synergies across business. Right, and we'll take our first look at the data as we're ideating through this and we'll figure out is the data available
(13:03):
to do what we want to do? Does it possess the quality that we need? Are there risks associated with using this data in the way we want to use it?
Speaker 1 (13:10):
From there you build
your roadmap, and that's when
data engineering enters the story.
Speaker 3 (13:15):
Overall, you might be
feeling like you're doing a bit
more of some data archaeology, you know, dusting around in your source systems looking for those rare data artifacts. The fix? The data engineer. In a nutshell, data engineering wraps their arms around the information and puts it in an easy-to-find place.
Speaker 1 (13:35):
Data engineers don't
just move data around.
They enhance it, augmenting it with context, standardizing formats and scrubbing unnecessary information.
Speaker 3 (13:44):
Our hero comes in, bundles the data up in a nice, neat little package and delivers it for our AI to consume, removing any unnecessary data points, making sure that our AI model stays lean and clean. Data engineering, Jonathan says, is built on three pillars. First, your data architecture: these are the plans, or your end-to-end picture, for your AI and, more importantly, this is
(14:07):
why the data will move through the plan. Secondly, data infrastructure: these are your Lego pieces, or your building blocks, that support the plan, that support your architecture, and this is how the data is moved throughout the thing. And lastly, and most importantly, data security. Just like Bill said, this is the warm fuzzy blanket that
(14:28):
wraps around everything.
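As a rough illustration of that bundling step, here is a minimal data-preparation sketch in Python with pandas (assuming pandas 2.0+ for the mixed date parsing); the columns and cleaning rules are hypothetical, not WWT's pipeline.

```python
# Illustrative sketch only. The data engineer "bundles the data up":
# standardizes formats, deduplicates, and scrubs fields the AI doesn't need.
import pandas as pd

# Hypothetical raw export from a source system.
raw = pd.DataFrame({
    "CustID": ["A1", "A2", "A2", "A3"],
    "signup_date": ["01/05/2024", "2024-02-17", "2024-02-17", "03/09/2024"],
    "notes": ["vip", None, None, "churn risk"],
    "internal_ssn": ["x", "x", "x", "x"],  # must never reach the model
})

prepared = (
    raw.drop(columns=["internal_ssn"])    # scrub sensitive, unnecessary data
       .drop_duplicates(subset="CustID")  # one record per entity
       .assign(signup_date=lambda d: pd.to_datetime(
           d["signup_date"], format="mixed"))  # standardize formats (pandas 2.x)
       .fillna({"notes": "none"})         # make gaps explicit
)

print(prepared)  # a lean, clean package, ready for the AI to consume
```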
Speaker 1 (14:30):
If there's a central
theme emerging here, it's this: AI doesn't work without the right data, and getting the data right means rethinking not just your tools, but your people, your processes and your purpose. Bill may say it best.
Speaker 4 (14:42):
The key takeaway is, for your data hero,
Speaker 3 (14:45):
it requires a holistic approach. And Jonathan: without data engineering, your AI will fail silently and confidently, and I'm willing to bet there's a few of us in this room that have experienced this.
Speaker 1 (15:07):
As more organizations begin to operationalize practical AI, automating the workflows, enhancing decision-making and improving efficiency, all fueled by a solid data foundation, a new frontier is already coming into focus, and it's called agentic AI. Unlike traditional AI models that require constant human prompting, agentic AI systems are designed to act with
(15:28):
autonomy, continuously learning, reasoning and even collaborating with other agents to achieve goals. It's a major leap forward and it's forcing enterprise leaders to rethink everything from architecture to accountability. Mark DeSantis opened the session with a story about Atom AI, WWT's internal AI assistant, and how it's evolved from a simple chatbot to something far more powerful.
Speaker 5 (15:51):
Partners by the end of this month, so I'm excited to say that. All right. Through our journey here, what we've done is created an AI assistant. This is actually the second iteration of our AI assistant, and the first iteration is probably similar to what you've seen, maybe at your own enterprise, which is a RAG-based
(16:12):
AI assistant. That's kind of what we started off with last year. That was, you know, what was available at the time, and RAG is great for many things, but it doesn't do everything that we want to do as an AI assistant. RAG makes a great AI chatbot, but it doesn't do tasks.
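To see why RAG answers questions but can't act, here is a minimal retrieve-then-generate sketch, not WWT's Atom AI code; the documents, toy retriever and generate() placeholder are illustrative assumptions.

```python
# Minimal RAG sketch (illustrative). RAG retrieves relevant text and asks
# the model to answer from it. Great for Q&A, but the model only ever
# returns text; it never acts, which is why RAG alone "doesn't do tasks."
from typing import List

DOCS = [
    "Atom AI is WWT's internal AI assistant.",
    "The AI Proving Ground lets customers test AI infrastructure.",
]

def retrieve(question: str, docs: List[str], k: int = 1) -> List[str]:
    # Toy retriever: rank by word overlap. Real systems use vector search.
    def score(doc: str) -> int:
        return len(set(question.lower().split()) & set(doc.lower().split()))
    return sorted(docs, key=score, reverse=True)[:k]

def generate(prompt: str) -> str:
    # Placeholder for an LLM call (e.g. a hosted or local model client).
    return f"[LLM answer grounded in prompt: {prompt[:60]}...]"

def rag_answer(question: str) -> str:
    context = "\n".join(retrieve(question, DOCS))
    return generate(f"Answer using only this context:\n{context}\n\nQ: {question}")

print(rag_answer("What is Atom AI?"))
```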
Speaker 1 (16:31):
So what does a real AI assistant need to do? For WWT, the answer came in four parts: reflection, tool use, planning and collaboration. Let's break that down a bit. First, reflection: as it may sound, that's the assistant's ability to self-correct.
Speaker 5 (16:47):
AI makes a lot of mistakes, and so reflection is this technique to use iterations to refine and self-correct.
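A minimal sketch of that reflection loop, assuming a placeholder llm() call rather than any real model client: draft, critique, revise, capped at a few iterations.

```python
# Reflection sketch (illustrative, not Atom AI's code): draft an answer,
# ask a critic pass whether it holds up, and revise until the critic is
# satisfied or we hit an iteration cap.
def llm(prompt: str) -> str:
    # Placeholder for a real model call.
    return "draft answer"

def reflect(task: str, max_iters: int = 3) -> str:
    answer = llm(f"Solve: {task}")
    for _ in range(max_iters):
        critique = llm(f"Critique this answer to '{task}': {answer}. "
                       "Reply OK if it is correct.")
        if critique.strip() == "OK":
            break  # the critic is satisfied; stop iterating
        answer = llm(f"Revise the answer to '{task}' using: {critique}")
    return answer

print(reflect("Summarize Q3 lab usage"))
```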
Speaker 1 (16:53):
Then comes tool use,
the heart of the agentic system.
This means AI can actually interact with software tools and not just return text. Want to summarize a report? Pull live data from Salesforce? The AI assistant needs to do more than just talk about it.
Speaker 5 (17:07):
The idea that you can
add tools into an agent to
extend the capabilities of that agent. So this could be to query live data, it could be to update data or create data, carry out a task, things like that. So these are the actions that LLMs can now help us to use
(17:28):
throughout our AI assistants.
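Here is a minimal sketch of the tool-use pattern he's describing, with a hypothetical query_sales tool standing in for a live system; in a real agent, the tool choice would come back from the model's function-calling interface rather than a stub.

```python
# Tool-use sketch (illustrative): the agent holds a registry of callable
# tools, and the model picks one by name with arguments, instead of just
# returning text. The tool and its data are hypothetical.
import json

def query_sales(region: str) -> dict:
    # Stand-in for a live Salesforce/CRM query.
    return {"region": region, "pipeline": 1_250_000}

TOOLS = {"query_sales": query_sales}

def llm_choose_tool(user_request: str) -> str:
    # A real LLM would return this JSON via function/tool calling.
    return json.dumps({"tool": "query_sales", "args": {"region": "midwest"}})

call = json.loads(llm_choose_tool("What's our pipeline in the Midwest?"))
result = TOOLS[call["tool"]](**call["args"])  # the agent acts, not just talks
print(result)
```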
Speaker 1 (17:29):
Third is planning: the ability to break a task into steps and execute them in the right order. And finally, multi-agent collaboration, where one agent can pass tasks to another. That's the magic trick that allows Atom to act more like a team of coworkers than a single AI assistant.
Behind the scenes, WWT has built over 1,150 agents and
(17:51):
counting, but they're not doing it alone.
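A minimal sketch of that planning plus multi-agent hand-off, not Atom AI's architecture: a planner produces ordered steps, and each step is routed to the specialist agent that owns it. The hard-coded plan stands in for what an LLM planner would generate.

```python
# Planning and multi-agent collaboration, sketched with hypothetical agents.
from typing import Callable, Dict, List, Tuple

def research_agent(task: str) -> str:
    return f"findings for: {task}"

def writer_agent(task: str) -> str:
    return f"report section on: {task}"

AGENTS: Dict[str, Callable[[str], str]] = {
    "research": research_agent,
    "write": writer_agent,
}

def plan(request: str) -> List[Tuple[str, str]]:
    # A real planner is an LLM; here the plan is fixed for illustration.
    return [("research", request), ("write", request)]

def run(request: str) -> List[str]:
    results = []
    for agent_name, task in plan(request):        # execute steps in order
        results.append(AGENTS[agent_name](task))  # hand off to the right agent
    return results

print(run("Q3 GPU market summary"))
```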
Speaker 5 (17:53):
We've established an
AI center of excellence team and
that team is really building the foundation but then enabling this as a capability throughout our organization.
Speaker 1 (18:04):
It's not just about
building AI.
It's about empowering others to build. If you've been following AI headlines in 2025, you've probably heard the term Model Context Protocol, or MCP. It's a standard that companies like Anthropic, OpenAI and Microsoft have started adopting to make agents interoperable across platforms.
(18:25):
And, yes, WWT is already thinking ahead on this.
Speaker 5 (18:28):
We've seen Microsoft Copilot say that it's going to support MCP as well, and so the idea there is that, you know, the agents. So, in our case, you know, I kind of showed you some of the agents that we've built. The agents can be, you know, your own, you know, custom-built framework, or it could be a third-party agent, each of them using whatever model works best for that agent.
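As a sketch of what exposing a capability over MCP can look like, here is a minimal server using the official mcp Python SDK's FastMCP helper; API details vary by SDK version, and the count_labs tool and its data are hypothetical.

```python
# Minimal MCP server sketch, assuming the official `mcp` Python SDK's
# FastMCP helper (treat as a sketch; SDK APIs change between versions).
# Any MCP-speaking agent or assistant could then discover and call this
# tool, regardless of which model it runs on.
from mcp.server.fastmcp import FastMCP

server = FastMCP("lab-inventory")

@server.tool()
def count_labs() -> int:
    """Return the number of labs on the platform (hypothetical data source)."""
    return 412  # placeholder; a real server would query the live system

if __name__ == "__main__":
    server.run()  # exposes the tool over the Model Context Protocol
```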
Speaker 1 (18:50):
In other words, a world where your legal agent runs on one model, your customer service agent on another, and they all speak the same language. It's a vision of modular, composable intelligence. But as with any new architecture, there are hard lessons too. Mark pointed out that earlier versions of the assistant sometimes failed, not because the model was wrong, but because the data was missing, incomplete or flat-out
(19:13):
inaccessible.
Speaker 5 (19:14):
AI is a magnifying
glass on top of your data issues, right? So if you have data that's old or data that is insecure and open to people that it shouldn't be open to, AI is going to help you find that really quickly.
Speaker 1 (19:28):
Remember what Bill and Jonathan told us earlier in the day: that no AI transformation can succeed without a foundation of trustworthy, accessible and governed data. This is that principle in action. When AI encounters friction, it almost always leads you back to the same root cause: the data itself. In the early days of deploying Atom AI, users would ask simple
(19:48):
questions like how many labs are on the platform, and the assistant couldn't answer, not because the question was unreasonable but because the legacy architecture had been designed for static documents, not dynamic live systems. RAG-based models would try to find an answer in the documentation, but if that data wasn't indexed or up-to-date, the AI was just stuck.
(20:13):
So WWT had to rethink how it exposed its APIs, how it structured its data layers and how it enabled real-time queries across internal systems. And that was just the beginning. Then came the security curveballs, the kind you only encounter when real users start experimenting. There were cases where people tried to jailbreak the system using Base64 encoding tricks, obscure translation loops and
(20:37):
adversarial prompts to get the assistant to reveal more than it should. In one case, a user asked Atom AI to translate system context from Italian back into English, essentially sidestepping access controls by speaking a different language.
Speaker 5 (20:50):
We saw one last week
where somebody was doing
something, asking it to translate the system context that we were providing, and in that example it did respond with the system context in Italian.
Speaker 1 (21:06):
To counter these types ofattacks, the team implemented a
moderation API, a guardrail thatscreens queries in real time,
especially for external users.
They also added a loopprotection to make sure that
assistants didn't spiral intoexpensive recursive calls
Because, yes, that happened too.
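A minimal sketch of those two guardrails, moderation screening plus a tool-call budget; the moderation check and agent-loop helpers here are hypothetical stand-ins, since the episode doesn't detail WWT's implementation.

```python
# Guardrail sketch (illustrative): screen inbound queries, then cap
# recursive tool calling so a hallucinating loop can't run up costs.
def moderation_flagged(text: str) -> bool:
    # Stand-in for a hosted moderation endpoint; a real check would call
    # a moderation API rather than match strings.
    return "ignore previous instructions" in text.lower()

def needs_tool(state: str) -> bool:
    # Hypothetical check: does the agent still want to call a tool?
    return state.startswith("TOOL:")

def run_next_tool(state: str) -> str:
    # Hypothetical tool execution; here it simply consumes one step.
    return state.removeprefix("TOOL:")  # Python 3.9+

MAX_TOOL_CALLS = 10  # loop protection: hard ceiling per conversation

def handle(query: str) -> str:
    if moderation_flagged(query):
        return "Request blocked by content guardrail."
    calls = 0
    while needs_tool(query):
        calls += 1
        if calls > MAX_TOOL_CALLS:
            return "Stopped: tool-call budget exceeded."
        query = run_next_tool(query)
    return query

print(handle("TOOL:TOOL:How many labs are on the platform?"))
```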
Speaker 5 (21:25):
You're issuing an LLM
prompt to get a tool call
response and, as it's responding back with the array of tools that you should use, it's hallucinating and giving you thousands of tools. So it's like, what happened there?
Speaker 1 (21:37):
The result? Conversations costing over $35 per prompt, a massive red flag for any system trying to scale responsibly. With Langfuse as their observability layer, they could visualize each agentic loop, trace tool calls, analyze costs per conversation and pinpoint the moment where something broke.
Speaker 5 (21:55):
So you can kind of go
through each step and
understand, like, how long it took, you know, for that step to complete, which tools were invoked, you know, what parameters were sent off into that tool, what the tool responded with. You know, trying to really give you a picture of the
conversation.
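Langfuse has its own SDK; as a library-agnostic sketch of the idea, here is what recording per-step timing, tool, parameters and cost for a conversation might look like.

```python
# Observability sketch (Langfuse's real SDK differs): record every step of
# an agentic loop so you can replay the conversation, see which tools ran
# with what parameters, and total the cost.
import time
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class Step:
    tool: str
    params: Dict[str, Any]
    response: Any
    seconds: float
    cost_usd: float

@dataclass
class Trace:
    conversation_id: str
    steps: List[Step] = field(default_factory=list)

    def record(self, tool, params, fn, cost_usd):
        start = time.perf_counter()
        response = fn(**params)  # invoke the tool while timing it
        self.steps.append(Step(tool, params, response,
                               time.perf_counter() - start, cost_usd))
        return response

    def total_cost(self) -> float:
        return sum(s.cost_usd for s in self.steps)

trace = Trace("conv-123")
trace.record("query_sales", {"region": "midwest"},
             lambda region: {"pipeline": 1_250_000}, cost_usd=0.04)
print(trace.total_cost())  # flag conversations that blow past budget
```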
Speaker 1 (22:13):
Every misfire, every
hallucination and every false
assumption.
All of it became fuel for improvement. The team used it to build better agents, sharpen the feedback loop and prioritize fixes based on what users were actually experiencing. So, yes, agentic AI is powerful, but it's also fragile, unless you build it with real-world feedback, real-world use cases
(22:33):
and real-world costs in mind.
So what did we learn today?
First, it all starts with use case clarity. If you don't know what problem you're solving, AI is not going to save you. And even if you do, be ready to adapt. The tech is evolving fast, and so must your approach. Second, none of this works without your people. Invest in talent, upskill your teams, because the biggest gap
(22:54):
in AI today isn't the tools, it's trust, capability and culture. And third, data is everything. Broken data breaks your AI, whether you're running in a centralized shop or a global enterprise. You need a data strategy that fits and a team that can make it real. Bottom line: the path to AI maturity is practical, it's purpose-driven
(23:15):
and it's powered by data, people and processes.