
June 18, 2025 42 mins

At Cisco Live 2025, the networking giant rolled out a sweeping agenda to make AI not just powerful, but practical — and secure. In this episode, we caught up with leaders from Cisco, NVIDIA and WWT to talk about what this year's announcements actually mean for enterprise teams tasked with building scalable, secure, AI-ready infrastructure. From the rise of the Cisco Secure AI Factory with NVIDIA to the reality of agentic workflows and persistent inference traffic, this episode unpacks the architectural shifts reshaping the modern data center.

Learn more about this week's guests:

Kevin Wollenweber is Cisco's Senior Vice President and General Manager of Data Center, Internet, and Cloud Infrastructure. In this role, he leads product strategy to enhance Cisco's infrastructure solutions for the data center, high-performance routing, and mobile networks. His leadership is pivotal in driving growth and developing cutting-edge solutions to meet the dynamic needs of businesses worldwide.

Kevin's top pick: About Cisco and WWT

Chris Marriott is the VP/GM of Enterprise Platforms at NVIDIA, where he has spent 14 years advancing enterprise solutions. With a background in engineering, including 10 years in ASIC development, Chris combines technical expertise with strategic insight to address the evolving tech landscape.

Chris's top pick: About NVIDIA and WWT

Neil Anderson has over 30 years of experience in Software Development, Wireless, Security, Networking, Data Center, Cloud and AI technologies. At WWT, Neil is a VP in our Global Solutions and Architecture team, with responsibility for over $16B in WWT's solutions portfolio across AI, Networking, Cloud, Data Center, and Automation. Neil advises many large organizations on their global architecture and IT strategy across Global Enterprise, Global Service Provider, and Public Sector. Neil is also on the advisory board of several high-tech companies and startups.

Neil's top pick: Building for Success: A CTO's Guide to Generative AI

The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.

Learn more about WWT's AI Proving Ground.

The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.

Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
This year's Cisco Live, held last week in San Diego, was called the most consequential Cisco Live in the past decade, marking a pivotal moment in steering enterprise infrastructure toward agentic AI and secure-ready networks. While Chuck Robbins and Jeetu Patel spotlighted Cisco's commitment to AI-ready data centers and a unified,

(00:20):
security-first approach to networking, Reuters highlighted Cisco's deepening role in the AI boom, and analysts called Cisco a hidden sovereign AI play, which may sound exclusive to governments, but it's the same infrastructure enterprises need to safely scale AI adoption. In this episode, we'll talk with Cisco Senior Vice President and General Manager of Data Center and Provider Connectivity,

(00:43):
Kevin Wollenweber; NVIDIA Vice President and GM of Enterprise Platform Solutions, Chris Marriott; and Neil Anderson, my colleague here at WWT, who leads our cloud infrastructure and AI solutions teams. Kevin, Chris and Neil will cut through the headlines from Cisco Live, and there were a lot of them, to reveal what really matters: operational simplicity, scalable infrastructure and

(01:06):
AI-native network and security stacks built for a future run by autonomous agents, because networks are fast becoming the connective tissue of enterprise AI and Cisco is laying the tracks. This is the AI Proving Ground podcast from World Wide Technology: everything AI, all in one place. Let's dive in.

(01:39):
Well, Neil, Chris, Kevin, thank you so much for joining the AI Proving Ground podcast today. I know your schedules are absolutely slammed out there at Cisco Live, so thank you for joining. Yeah, thanks for having us. Absolutely, great to be here. Kevin, it's your show here. Why don't you kick us off? Tell us, you know, give us an update on what are some of the announcements that are taking place there at Cisco Live and, you know, what's piqued your interest so far there on the

(02:00):
show floor.

Speaker 3 (02:02):
Yeah, I think the most interesting thing for me has been, if I kind of contrast this to last year: I think last year there were a lot of ideas about either things we wanted to do in the AI space or kind of announcements of potential futures, and what we're really starting to see now is actual real customers, real use cases, real adoption, and just the momentum that we're seeing behind everything we're doing in that AI space is exciting to see, and

(02:25):
so, you know, we announced new switches to enable us to drive higher bandwidths as we build out some of these AI fabrics. We announced a bunch of stuff with NVIDIA. We're kind of taking the reference architectures that they're building for AI factories and evolving those into secure AI factories with Cisco, and so, I don't know, it's

(02:46):
just, it's an exciting time to be here.

Speaker 1 (02:48):
Yeah, and Neil, I'll get to you here in a second. But, Chris, you know, what do you see from NVIDIA's perspective? What's getting you excited in terms of what's advancing the agenda from an AI perspective?

Speaker 2 (02:58):
Yeah, absolutely. You know, to Chuck's point, I think it might be the most important Cisco Live yet, right? And I think the undertones of the entire show, through everything from compute and software and networking, obviously were skewed heavily toward security, right? And I think, in the new age of AI, with that underpinning, every enterprise is trying to figure

(03:24):
out how they go deploy AI and how they deploy all these new workloads, and, you know, one of the things that could prevent that adoption is security. So I think all the security-related announcements that we saw at the show were fantastic. That was my takeaway for sure.

Speaker 1 (03:41):
Yeah. Neil, I mean, certainly lots of buzz going around at Cisco Live about AI and cyber. You know, certainly infrastructure plays a major part there, but, you know, what's actually resonating with you in terms of what it means for our clients' ability to push forward their AI journeys? What are you hearing that, you know, feels most relevant or urgent for our listeners to kind of understand or know about
(04:01):
right now?

Speaker 4 (04:03):
Yeah. First of all, I would say I have never seen the speed of announcements that Cisco has been doing. I've been working with Cisco, or for Cisco, for 25 years, and I've never seen anything like what we saw this morning, just the speed and the number of announcements. It's absolutely incredible. And, you know, Jeetu made a point of saying this is not stuff that's

(04:25):
futures.
This is now, right, which I think is urgent for our customers. And I do agree with the security piece. That can really stall an AI project in a real hurry if customers are not comfortable with the security governance they have over what they're trying to achieve. And the third thing I would say, Brian, is the

(04:45):
seamlessness with which Cisco and NVIDIA are working together. It almost feels like one company sometimes, like they can finish each other's sentences, and that level of collaboration is just helping both companies go faster in the market for our customers.

Speaker 3 (05:04):
Well, and actually, if I could tag onto that: the pace is there and we're talking about a lot of new technologies, but I think things like this are important because we're announcing so many things this year. I don't think we've had this many things to announce at any Cisco Live that I can remember, and in doing that, I want to make sure the message is understood and people kind of understand the why behind what we're doing and how we're

(05:24):
actually executing, especially as we look at, you know, partners like NVIDIA and partners like WWT as well.

Speaker 1 (05:29):
We'll dive deeper there, Kevin. What is that kind of singular message that, you know, it all boils down to?

Speaker 3 (05:35):
Yeah, it's about a lot more than just a singular piece of technology. You know, what we're really trying to do is enable the pace of innovation that we're seeing in AI to be consumed by a larger set of customers. We've all collectively been working with the big hyperscalers and the model builders and the people that are doing a lot of the training of these models, but the real kind

(05:56):
of future that we see is inference and usage of the models, and, as that moves deeper into sovereign networks and into enterprises, security, ease of use, and being able to turn these technologies up quickly and make sure customers are confident and comfortable with their safety and security is of the utmost importance to the enterprises we talk to.

Speaker 4 (06:17):
Yeah, I'd build on that, Kevin. You know, AI is not a product. It really takes an architecture and an ecosystem to pull it together. It's a rather complicated stack, with very robust capabilities in that stack, but I see Cisco and NVIDIA partnering to try to make it a lot simpler for customers to understand that

(06:38):
complicated stack. The NVIDIA software on top of that is a bonus, because now customers can actually accelerate what they're doing and get to a use case that's delivering outcomes for their business a lot faster.

Speaker 1 (06:55):
And so I've been impressed with the way that the architecture is coming together. Yeah. Well, Chris, just a few months ago, Neil, Kevin and I met with one of your colleagues, Kevin Deierling, while we were at NVIDIA GTC, which, of course, did not disappoint: so many great announcements, whether it's Rubin, Dynamo or Blackwell Ultra. But I am curious: between then and now,
(07:15):
what roadblocks still exist or stand between the vision we saw and heard about at NVIDIA GTC and the reality within the enterprise setting? How do you see Cisco Live building on what we heard there?

Speaker 2 (07:28):
Yeah, I would say a few challenges, right. So I think, as we start to move toward accelerated compute with all these new GPU platforms, obviously even with Blackwell, but going into Rubin, for enterprise I think the open question is the return on investment for AI, the

(07:48):
investment and how it's going to deliver business outcomes. There are so many places an enterprise can go do that. The one thing is, they've got to figure out, and, you know, Neil was teaching me about this just yesterday, right, they've got to go figure out the top one or two use cases and stay very focused on the ROI that that use case is going to

(08:09):
deliver number one, but thennumber two when they actually go
to bring in this infrastructure.
You know, a lot of on-prementerprise data centers aren't
necessarily architected for thepower or the scale of compute,
and so I think we see, you know,over the next five years I
think it was something like a50x increase between NCP, csp

(08:32):
and enterprise in the infrastructure and data centers that are going to be required. And so I think part of it is a data center build-out plan, part of it is improvements in enterprise data centers, and then the third part of it is potentially co-location centers, where, you know, partners or channel partners or anybody can go land that equipment, stand it up and have it be fully managed but still tied into the enterprise's data. And so I think that's a solvable barrier, but obviously, between all of us, we have to have a good plan in place before we get
there.

Speaker 3 (09:07):
What I love about some of this stuff, and you were talking about the ROI and kind of the use cases that customers are going to pick up, what I love about this is, when the use case clicks, light bulbs just go off. So, you know, when DJ did his demo of AI Canvas and kind of showed a real example of how we can take these agentic workflows, have a queryable assistant and actually do

(09:29):
troubleshooting workflows across networks, you look at it and you're like, oh, that's amazing. It's this ability to build composite applications from different sets of data that we never could really do before. And that's just one really small example. But to me, when it clicks, the light bulbs go off and it's an easy sell.

Speaker 4 (09:47):
Yeah, that use case, Kevin, I'm really excited about, because I just met with DJ and Anand a little bit ago, and I'm super excited about it because we've been delivering that as a bespoke use case for our clients: the idea of a NOC agent, a NOC co-pilot, with, you know, your two companies underneath the hood. We've

(10:08):
been building that out for some of our clients, and it's kind of challenging because we have to start almost at zero with every one of those, right, and they only work as well as the customer has data to power them. But this idea of then putting the Cisco-developed deep network model underneath that? Man, that's going to be really

(10:30):
cool.
That's going to make the NOC co-pilot kind of applications that we're delivering for customers, Kevin, so much better than they are today.

Speaker 1 (10:42):
Yeah. Neil, can you clarify a little bit why the use case drives so many considerations for the infrastructure, and why it's important to lead with the use case first, so that you know what you're building for?

Speaker 4 (10:54):
Yeah, I mean, use cases, I put them into two buckets. There are horizontal use cases, where everybody has the same problem, I don't care what vertical you're in, and then there are vertical use cases. And what we have found with several of our very large clients is this idea of a NOC co-pilot, something to make their NOC agents that much smarter, to be able to resolve problems much faster.

(11:16):
Just, you know, when we talk about that with customers, it really gets attention. Everybody can identify with that, everybody who has an IT network operations center, or even a security operations center, which they're also excited about for the future and what this could do for them. It's just a use case that really, really resonates with our

(11:36):
customers.
But what's been lacking there a little bit is that we're building those use cases on top of general models, the publicly available general models. They only have so much network smarts in them, right, and then we can ingest customer ticket and resolution data, which makes them smarter. But I'm excited about the idea of building that on a

(11:58):
purpose-built model from Cisco that has decades of network experience in it already. It's going to make the things we're building on top of it so much more accurate for those NOC co-pilots, so we're really excited about that one.

Speaker 1 (12:21):
Yeah. I do also want to get to the fact that there is a lot of work to be done in the data center. We hear a lot about AI-ready data centers this week at Cisco Live, and we also hear a lot about AI factories. I do feel like they can bleed into each other a little bit. So I was hoping, you know, Chris or Kevin, can you take a swing

(12:42):
at telling me the relationship between an AI factory and an AI-ready data center?

Speaker 2 (12:48):
Yeah, I mean, maybe we both can, but I can give you our perspective, because I think the term has been, let's just say, used frequently, maybe overloaded. All of our partners have AI factories. NVIDIA has an AI factory. What is an AI factory? I get asked quite a bit, and so we're starting to, you know, at

(13:08):
least talk about these things a little bit. And so I would say, what we build out, or what all of us build out with our partners, in CSPs and in NeoClouds and sovereign AI data centers, that aspect of it I think we now consider more AI infrastructure, and some of our CSP partners, and everybody, will bring their software stack and

(13:30):
their value-add, and we'll build those things even in the cloud and offer kind of AI factories even for on-prem customers in some cases. But we really believe that the enterprise is going to become like the AI factory, the idea being an enterprise can take, you know, an open model, a cutting-edge model, bring it into the enterprise, and fine-tune it with their business,

(13:50):
you know, critical data for that particular industry, their business, their use case. And now they have IP built into that model, and so now you have this flywheel of train a model, get it deployed into inference, get new data, fine-tune it every night or every other night, and you become this AI factory. And so I think there are, you know, plans even for large AI

(14:14):
factories. Really, you're going to have AI factories in every kind of vertical: in manufacturing, in retail, all these places where you need tokens. Some of these places will need their own AI factories to generate those tokens locally; some will use them from different areas. And I think that's really where Dynamo also can play a big part.

(14:35):
And so Dynamo is NVIDIA's open-source tool for essentially disaggregating inference workloads and steering them to available GPUs. You take the appropriate GPUs for the prefill stage, where you process the prompts, and you take other GPUs for the

(14:55):
decode phase. It takes what is potentially a very spiky workload at inference, and you're able to batch those processes to really take advantage of the usable compute. But I think that's the distinction.
In sovereign you have AI infrastructure; enterprises are going to be AI factories. And I love what Cisco has done with AI Defense, with the Secure AI Factory, and I

(15:16):
think that's going to resonate heavily in industry.
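[Editor's note: Chris's description of disaggregated inference is easier to see in code. The sketch below is not the NVIDIA Dynamo API; it is a minimal, hypothetical illustration of the pattern he describes, where prompts are batched onto a prefill GPU pool and the resulting KV caches are handed off to a separate decode pool. All names and structures are illustrative.]

```python
# Hypothetical sketch of disaggregated inference scheduling (NOT the Dynamo API):
# prefill (prompt processing) and decode (token generation) run on separate GPU
# pools so each phase can be batched to keep the compute busy.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Request:
    prompt: str
    kv_cache: str | None = None            # produced by prefill, consumed by decode
    tokens: list[str] = field(default_factory=list)

class DisaggregatedScheduler:
    def __init__(self, n_prefill_gpus: int, n_decode_gpus: int):
        self.n_prefill_gpus = n_prefill_gpus   # compute-bound pool
        self.n_decode_gpus = n_decode_gpus     # memory-bandwidth-bound pool
        self.prefill_queue: deque = deque()
        self.decode_queue: deque = deque()

    def submit(self, prompt: str) -> None:
        self.prefill_queue.append(Request(prompt))

    def step(self) -> None:
        # Absorb a spiky burst of prompts: one prefill per free GPU this tick.
        for _ in range(min(len(self.prefill_queue), self.n_prefill_gpus)):
            req = self.prefill_queue.popleft()
            req.kv_cache = f"kv[{req.prompt[:12]}]"  # stand-in for the real KV cache
            self.decode_queue.append(req)            # hand off across the fabric
        # Decode batches all in-flight requests, emitting one token each per tick.
        for req in self.decode_queue:
            req.tokens.append("<tok>")

sched = DisaggregatedScheduler(n_prefill_gpus=2, n_decode_gpus=8)
sched.submit("Why is my OSPF adjacency flapping?")
sched.step()
```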

Speaker 3 (15:20):
Yeah, I think that was exactly the point when we started talking about AI factories. As we see adoption from enterprise, one of the big things they need to figure out is: how do I integrate that into my existing data center and ecosystem in a simpler way and, realistically, how do I get visibility and understand the safety and security that sits both around the models and around that AI factory itself? And that's kind of where this concept of the secure

(15:42):
AI factory came from. We know that our enterprise customers are going to want to build these AI factories. We know that we have some pretty interesting technologies from a security perspective inside of Cisco, and we can take things like AI Defense and build a blueprint or an architecture that allows customers to deploy that simply, easily and securely in their networks.

Speaker 4 (16:02):
Yeah, and I think the other thing I would add in there, you know, sort of comparing the AI factory to the AI-ready data center, is: yes, of course there's tremendous energy in building AI factories, but customers also have an existing data center that they need to run and manage. This idea of Cisco being able to bring that together for customers and make it simple to run not only your AI workloads

(16:26):
but your traditional workloads, and have kind of a seamless backend to be able to manage that, that's what I think of as the AI-ready data center. That's what really compels me and, I think, compels clients.

Speaker 1 (16:37):
Yeah. Neil, I'll stick with you here. Whether AI factory or AI-ready data center, it all hinges on tightly integrated IT infrastructure and security operations. How can enterprises, or how are we advising clients to, efficiently scale that infrastructure to accommodate the demands, you know, that are just growing and growing with this new wave of AI innovation?

Speaker 4 (17:00):
Yeah, I think this is where it's really important to understand. You know, obviously there are hyperscale customers that are building AI factories, building tremendous net-new models and, you know, sort of constructing those. There are also these neocloud providers that are building tremendous data center investment around being able to offer, you know, kind of an AI factory as a service,

(17:23):
and those are tremendously important. But then you get to an enterprise data center, where our clients are asking us: okay, that's great, but because of the nature of my data, I need it very secure. It's my intellectual property. I want to run this in my own data center. That's a very different type of AI stack, I think, than either the hyperscalers or the neocloud providers.

(17:46):
Customers want to leverage their existing skill sets. I've got expertise in Cisco networking in my data center. I've got expertise in Ethernet in my data center. I want to use a storage vendor I'm familiar with and already have expertise in. How do I bring that together? And that's where I think the partnership between NVIDIA and

(18:06):
Cisco is super important, because Cisco is already trusted in those data centers. They're already supplying that, and so the ability to then bring the innovation with NVIDIA to those same clients is absolutely critical for Worldwide's clients.

Speaker 3 (18:22):
And over time, I actually expect to see this expand. So today we're talking about specific use cases and, you know, wanting to deploy small AI pods or a small AI factory inside their data center. But over time, and actually NVIDIA talks about this a lot as well, AI is not going to be a thing. AI is going to be a part of everything that we build, and so you need to be able to modernize the data center, get that

(18:44):
ecosystem ready to consume these technologies, and over time, I think it will become the most broadly deployed application across everything they build in a traditional data center.

Speaker 2 (18:54):
Yeah, for sure. And building on top of that, I think about the ease of deployment and the simplicity of monitoring and managing all those pieces of the AI factory. As I learn more about Cisco's offering with HyperFabric as well, to be able to give you kind of a single pane of glass almost

(19:17):
across all of that infrastructure, with the same tools tied in, even for Cisco networking that is already running in all of these enterprises, I think that really, you know, removes a barrier for IT when they're looking at either new infrastructure, new workloads, those kinds of things.

Speaker 3 (19:33):
Right. That's actually something I don't think a lot of people recognize: if you think about a traditional data center operator, most of the components we're bringing in with AI and AI infrastructure and secure AI factories are the same components. You know, you've got compute, you've got networking, you've got storage, but they operate together as a system, and so, you know, you have multiple networks.

(19:53):
You've got a front-end network and a back-end network, and so, even though the components are the same, the way you operate it is different. What we're trying to do is just find ways to simplify that: let them get infrastructure deployed and focus on the cool stuff that comes with all these AI applications, and not, you know, how do I tune RoCE parameters to build an efficient fabric. And I think we've really found something there.

Speaker 1 (20:15):
Yeah, certainly love to see the tight, you know, the blooming partnership between, you know, not just Cisco and NVIDIA, but WWT as well. Chris, you had mentioned Dynamo and HyperFabric. At GTC, I thought it was interesting, Jensen billed Dynamo as control software, as the operating system of the AI factory. So in regards to Dynamo and HyperFabric, where does NVIDIA's

(20:37):
Dynamo stop and Cisco's HyperFabric start, or is it all just intertwined at all times?

Speaker 2 (20:43):
Oh yeah, no, it's a great question, right? I think the way to think about it, and again, this is my understanding as well, is that once you have Cisco infrastructure at the base, you're going to build up that infrastructure with HyperFabric to be able to deploy, monitor, manage and, you know, configure all of that infrastructure, and really it's going to be

(21:04):
Dynamo running on top of that to deliver the inference workloads to available GPUs, kind of like an inference orchestrator. So they're very much intertwined between the two. And yeah, Dynamo, as we've seen, when you put it in, especially with Blackwell, now that we've moved to FP4, but even with Hopper, the speed-ups generation over

(21:25):
generation, once you add Dynamo into the mix for that AI factory, are pretty stunning, just from that simple piece of open software. So I think, with the two combined, it's going to be excellent.

Speaker 3 (21:40):
Yeah, I was about to say, what's really interesting to me is we've been building out a bunch of these fabrics and starting to see customers deploy this, but what I struggle with, or what I think a lot of enterprises struggle with, as they start to roll these out, is scheduling of workloads and how they manage those across GPUs. So I actually think that integrating something like Dynamo on top, to be able to efficiently schedule across

(22:00):
these clusters they're building, is one of the biggest gaps we've had in some of these enterprise deployments. And we have, I would call them, small clusters relative to the size of what the hyperscalers are building. But we've got, you know, a thousand or so GPUs in clusters inside of Cisco, and we end up having to carve out resources for teams and give them those clusters, and we don't know how well utilized

(22:21):
those clusters are. And so having a much more dynamic approach, especially in inference, where you're not running these, you know, days-, weeks-, months-long types of jobs, I think that's the critical missing piece, and it's going to be really interesting to see how we integrate that together with stuff like HyperFabric.

Speaker 1 (22:44):
Yeah, we're talking about integration here. That feels like your wheelhouse. What do you see from the integrations? What considerations do our clients or listeners need to think about as they're doing that?

Speaker 4 (22:54):
Yeah, I mean, it does take a complicated stack to pull this off, and there are a lot of different moving parts there. Things like HyperFabric are going to simplify that. Dynamo is going to make it more efficient. I do agree with both these gentlemen that those two working in concert are going to be super important, right, and I think it's going to bring a lot of value to clients.

(23:14):
But, you know, and Kevin, I think you hit on this, while it could be the same components, you're talking about network, compute, storage and accelerators and orchestration, at the end of the day, how you operate that is very different. You know, we have built some SuperPODs with NVIDIA, and you know, you're talking 8,000 cables in one of these.

(23:37):
And it is not your grandfather's Oldsmobile, right? It is a little bit of a different beast to build that. We know how to build it because we've been working with NVIDIA for quite a while. But that tight integration between, you know, compute, storage and network, and being able to bring that to life at the speeds we're talking about here, is super incredible.

(24:00):
The other thing I like about what Cisco is bringing to the table, not only AI Defense but also HyperShield, is this idea of being able to spread enforcement points out for scale, which I think is hugely important. You know, the idea of coming back, you know, bringing all your traffic back to some central point, is what I call a

(24:21):
choke-point firewall strategy. That's not going to scale here. You need to rethink the way security is deployed in the network, and that's why I'm a big fan of HyperShield, because I think it's the right architectural direction. Forget the Cisco product name for a minute; it's the right architecture for the security we're talking about at AI speeds.

(24:42):
I don't know of another way to architect that, and so I think those kinds of integration points are going to be huge for our clients, Brian.

Speaker 3 (24:50):
Yeah, I mean, I just love the idea of reforming the tools and using the AI tools to protect the AI infrastructure. To me, that's just an amazing concept. But, exactly to your point, we're used to managing users with access to resources in a data center, and now that you're getting into these agentic workflows, we're talking

(25:10):
about thousands, hundreds of thousands, millions, billions of agents at some point, you know, flying around all over the place. We can't actually scale the support and the security behind that in traditional ways, and so we've got to think about modernizing the tools we're going to use to protect these massively growing applications like agentic workflows.

Speaker 4 (25:29):
Yeah, and, Kevin, I was talking last night with a couple of Cisco Fellows. They may be bringing an idea to your desk that we were thinking about last night, around this idea of agent-to-agent communication and how you secure those workflows at scale. So you'll probably be hearing about that pretty soon.

Speaker 3 (25:44):
Oh, I love that. I mean, look at the number of agentic communication protocols and technologies we've seen enter the market in the last couple of months. And so I think this is the next frontier: as agentic workflows grow, making sure we have standardized ways for agents to communicate and then, honestly, standardized ways to make sure they don't misbehave. Because think about how expensive these GPU resources are, and if agents are

(26:07):
running unchecked and taking resources or doing malicious things with resources, that actually becomes another type of DoS attack, another attack vector in the network. And so, as amazed and excited as I am about agentic workflows and AI in general, we also have to make sure that all of the security and monitoring and other technologies are

(26:27):
keeping pace, because, you know, we're potentially creating new attack surfaces.
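[Editor's note: one concrete form of the "agents running unchecked" control Kevin describes is admission control in front of the inference fleet, a per-agent identity, token budget and rate limit, so a runaway agent becomes a throttling event rather than a denial of service. The sketch below is hypothetical; the identities, budgets and admission hook are illustrative, not a Cisco or NVIDIA API.]

```python
# Hypothetical sketch: per-agent token budgets enforced before requests reach GPUs.
import time

class AgentBudget:
    def __init__(self, max_tokens_per_min: int):
        self.max_tokens_per_min = max_tokens_per_min
        self.window_start = time.monotonic()
        self.used = 0

    def allow(self, requested_tokens: int) -> bool:
        now = time.monotonic()
        if now - self.window_start >= 60:        # roll the one-minute window
            self.window_start, self.used = now, 0
        if self.used + requested_tokens > self.max_tokens_per_min:
            return False                          # throttle: budget exhausted
        self.used += requested_tokens
        return True

BUDGETS = {"noc-copilot": AgentBudget(50_000), "report-writer": AgentBudget(5_000)}

def admit(agent_id: str, requested_tokens: int) -> bool:
    budget = BUDGETS.get(agent_id)
    if budget is None:
        return False   # unknown agents never reach the inference fleet
    return budget.allow(requested_tokens)

print(admit("noc-copilot", 2_000))   # True: registered agent within budget
print(admit("rogue-agent", 10))      # False: unregistered identity
```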

Speaker 2 (26:32):
Yeah, and I think your point about agentic controls, because once we start having these agents, you're almost going to have to be able to control each agent's access to information in the cluster as well, right? Like, you know, we've all heard of when customers first get started with AI and they've just pointed the entire company's

(26:54):
organization of data at a chatbot, and suddenly, you know, you can query HR files and things like this. So that access control for agents is going to be, yeah, very critical to securing enterprise data as well. That's how I explain this AI Defense stuff when I'm trying to really dumb it down and simplify it: just think about

(27:15):
these models as the world's smartest 10-year-old kid, who has access to every piece of information in the world and was always told, you know, don't take candy from strangers.

Speaker 3 (27:22):
And it's got its rules that it's going to follow, but as soon as you get it outside of those rules, it has access to everything and it can tell you everything about every piece of data that it has. And so I think there's a lot of investment that has to continue in this space and continue to evolve as we evolve these types of workflows.

Speaker 4 (27:38):
Yeah, there's this idea that, you know, models don't have any concept of RBAC, right. They don't have access control built into them. The model knows what the model knows, and if you have access to the model, you have access to all the data. So there's the need to build something on top of that to actually control that access a bit more. And I think we're also going to see, we're already seeing it, instead of this idea of a massive supermodel

(28:02):
that knows all kinds of things, I think you're going to see a shift toward much more purpose-built models so that, if for nothing else, you can control the access to the different data sources a little more granularly.
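[Editor's note: Neil's point that a model has no native concept of RBAC is why access control is typically enforced around the model, most commonly by filtering what the retrieval layer is allowed to hand the model in the first place. The sketch below is a minimal, hypothetical version of that pattern; the roles, sources and generate() callback are placeholders, not any specific product's API.]

```python
# Hypothetical sketch: enforce role-based access OUTSIDE the model by filtering
# retrieval results, so the prompt never contains data the caller may not see.
ROLE_ALLOWED_SOURCES = {
    "hr_admin": {"hr", "policies"},
    "network_engineer": {"runbooks", "tickets"},
}

DOCUMENTS = [
    {"source": "hr", "text": "Salary bands for 2025..."},
    {"source": "runbooks", "text": "To restart the fabric controller, first..."},
]

def retrieve(query: str, role: str) -> list[str]:
    allowed = ROLE_ALLOWED_SOURCES.get(role, set())
    # Only documents from sources this role may see become model context.
    # (Real retrieval would also rank by relevance to the query.)
    return [d["text"] for d in DOCUMENTS if d["source"] in allowed]

def answer(query: str, role: str, generate) -> str:
    context = retrieve(query, role)
    if not context:
        return "No sources accessible to this role."
    prompt = "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"
    return generate(prompt)   # the model only ever sees role-filtered data

# Example with a stub model: an hr_admin sees HR data; an engineer would not.
print(answer("What are the 2025 salary bands?", "hr_admin", lambda p: p[:60]))
```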

Speaker 3 (28:15):
Yeah, and the funny thing is, yes, that might actually mean less GPU resource for that particular model, but it means we can actually run a ton more models, and with the cost-effectiveness of AI, that ROI barrier we were talking about earlier, as that bar goes down, we can actually use it for everything. And that's why I think, in two years, we're not going to be talking about AI and how we, you know,

(28:38):
put this little pod in over here to run AI. It's just going to be ubiquitous and be part of everything we do.

Speaker 2 (28:43):
Yeah, and I think the initial reaction from everyone when DeepSeek, you know, first dropped was suddenly like, oh my God, test-time scaling. Suddenly, you know, between infrastructure, GPUs, everything it's going to take, you know, you can run this on your cell phone and it's going to crash the market. Actually, you know, between going from Hopper to

(29:04):
Blackwell to Rubin, we have orders-of-magnitude improvement in the number of tokens we can process. But I think once you reduce the cost of tokens and the cost of AI, that's really where the hockey stick starts and everything takes off, because then you don't really have to consider, oh, you know, I should only apply it here, I should only apply it there.

(29:25):
We want the cost of AI to drive down to the floor so everybody can use it in every application.

Speaker 4 (29:31):
It's the classic Jevons paradox, right? Yeah.

Speaker 3 (29:35):
Well, look at what we just launched with your RTX PRO in MGX, you know, PCIe-based platforms. Now we have a more cost-effective, power-efficient sort of scale-out for inference, because you're just going to be running these workloads on single GPUs, and we can bring that to a much larger section of the market.

Speaker 4 (29:54):
Yeah, I love what Jeetu showed on stage this morning too, Kevin. You know, if you think about chatbots, it was kind of this bursty traffic, right, in and out of the GPUs to do inference. But when you think about agentic AI, it is this persistent demand for tokens and persistent results coming out of the models: persistent inference.

(30:16):
I think we've already seen that in the agentic AI that we've built at Worldwide. We've seen exactly that. That slide was like, holy cow, I'm stealing that, because that's exactly how I need to explain it to customers, because we see it every day.

Speaker 3 (30:30):
Well, I would love to partner with you, and Chris, you guys as well, on just exactly how we're seeing this rollout across networks, because that's anecdotally what I've been saying, and I do think the networks that we build have to change in terms of how we operate them. I think we're going to see a lot more consistent traffic like that. It'd be great to get a real view of this, because for those that are sort of dipping their toe in the water or wanting to move toward it, when we talk about AI-ready data

(30:52):
centers, that's a lot of what I mean when I say AI-ready data centers. Let's prepare for this wave that's coming, even if it's not something you're deploying today, and I think that's a perfect proof point of it.

Speaker 2 (31:02):
Yeah, totally agree. For agentic workflows, now you have agents all over the cluster, potentially in different data centers, where you have to connect them at super high speeds with low latencies. And then, even if you take something like Dynamo, where you are creating the KV cache, a large KV cache, as part of

(31:24):
the prompt calculation, now you've got to send that KV cache to the other computers for the decode. You're shipping a vast amount of data across the network, not to forget the huge storage array of data that you have there, with all that compute going and hitting it and creating traffic as well. So the network is going to be critical.
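[Editor's note: the KV-cache traffic Chris describes is easy to put numbers on. Per token, the cache holds a key and a value vector for every layer, so it scales as 2 x layers x KV heads x head dimension x bytes per element. The model dimensions below are roughly those of a Llama-3-70B-class model and are assumptions for illustration, not a claim about any particular deployment.]

```python
# Back-of-envelope: how much KV cache moves from prefill to decode nodes.
def kv_cache_bytes(seq_len: int, n_layers: int = 80, n_kv_heads: int = 8,
                   head_dim: int = 128, dtype_bytes: int = 2) -> int:
    # Factor of 2 covers both the key and the value tensor per layer.
    return 2 * n_layers * n_kv_heads * head_dim * dtype_bytes * seq_len

for tokens in (1_024, 8_192, 32_768):
    print(f"{tokens:>6} prompt tokens -> ~{kv_cache_bytes(tokens) / 1e9:.1f} GB")

# ~0.33 MB per token with these dimensions: a single 32k-token prompt ships
# roughly 10 GB between prefill and decode nodes, for one request on one
# model replica, which is why the east-west network becomes the bottleneck.
```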

Speaker 1 (31:45):
Yeah. Well, obviously, you know, a ton to go over here, Neil, you know, so many moving parts. How are enterprise AI teams validating that their AI infrastructure choices are going to hold up under real-world conditions, knowing that there's just so much movement going on all at once?

Speaker 4 (32:04):
Yeah, what I tell customers is: look, you do not have to go this alone. We have a tremendous lab investment at Worldwide that we call our AI Proving Ground lab, where we have these architectures, including the Cisco Secure AI Factory with NVIDIA, built with HyperShield and with AI Defense from Cisco. These are already built.

(32:25):
You can come into that lab and we can start exploring the art of the possible together. What model do I want to use as a basis? How is this going to scale? How many GPUs do I need? Can I use a storage partner I already have experience with? There are all sorts of startups trying to tackle deepfake detection; which one works?

(32:45):
We're doing all those kinds of studies today with customers in the AI Proving Ground. They do not have to go it alone. We can help accelerate that journey: get your hands on it, and let's actually get to the outcomes faster together and get you on your way.

Speaker 1 (33:09):
That's our motto with the Proving Ground. Yeah, and Chris or Kevin, not to put you on the spot to make a pitch for the Proving Ground, but I am curious: what value do you see that type of composable lab environment offering the industry in general?

Speaker 2 (33:26):
Yeah, I can start. I mean, honestly, what WWT does with the AI Proving Ground becomes kind of like the tip of the spear for enterprise customers. The fact that they invest early in new architecture and have, like, Cisco's Secure AI Factory stood up and are able to bring customers in and test actual customer data on those things, it's like gold, because

(33:50):
without that, you know, you fall back into the typical six-to-nine-month buying cycle that enterprises have, not to mention the time ahead of that just to evaluate AI and figure out sizing, all of those kinds of things. So what WWT does in terms of that lab is just

Speaker 3 (34:11):
invaluable, in my opinion. Yeah, and for me, that's exactly what the AI Proving Ground is. If you think about it, go back to building networks and other things: we've been doing proofs of concept for years. But unfortunately, the cost of building out a large AI factory to do testing and validation, it's not something a lot of our customers can actually do. And so being able to do it in the AI Proving Ground on real

(34:31):
hardware, get real results and then make the business case to help with the ROI problem we talked about earlier, is one of the only ways we're going to get to do this. Because not everybody can go out and build a large AI factory just to say, all right, let's put this in the corner, try some things and hope it works. And so the ability to partner with WWT, Cisco, NVIDIA,

(34:52):
go in and actually test with real models, real GPUs and real data, and then identify what those use cases are that they want to go and drive, to me, this is the only way we're going to get this done.

Speaker 1 (35:00):
Yeah. Kevin, I'll stick with you here for just a moment. Beyond integration, beyond validation, a lot of times we'll hear from clients that they're looking to future-proof, at least to the best extent that they can, and that seems like an incredibly hard thing to do these days, considering how fast things are changing. From Cisco's point of view, we know it takes an ecosystem, but

(35:23):
how can we make this all work together so that clients aren't locked into a single solution for an unbearable amount of time?

Speaker 3 (35:32):
Well, I think Neil hit it on the head when he talked about it earlier. That's why a lot of enterprises are looking at: how do I integrate this in with technologies that sort of look and feel and operate like the technologies I'm deploying today? They have large fabrics today, they have compute deployed today, they have storage partners, and obviously, if there's a differentiation in a storage partner or an architecture, we'll suggest that.

(35:53):
But what they're really looking to do is extend their existing architectures as much as possible and let their operators use the tools and technologies they know. If they have Cisco fabrics deployed and they can just extend that out and add some of these AI technologies in, it really makes the expansion easier. It makes the business case easier and allows them to move a lot faster.

Speaker 1 (36:13):
Well, you know, we're running short on time here, and I appreciate the time that the three of you have given us on today's show. Any closing thoughts? I know Cisco Live isn't done yet, but what are the key takeaways from the time you have spent so far, and where should our listeners be focusing for the next three, six, 12 months?

Speaker 4 (36:35):
I would say something that we tell clients: you cannot afford to be on the sidelines of this. You've got to get started, because it's going to take learning, it's going to take investment, and it's going to take a little bit to get that flywheel effect going. You know, what I tell clients all the time is that first use case is going to be a bit painful.

(36:56):
It's going to be all new, right. You've got to figure out the security, the governance model, the investment to get going. But what we like to do at Worldwide is come in and help them build a flywheel, so the third, fourth, fifth use cases are much faster and actually a lot cheaper to get going, and before you know it,

(37:17):
you've got a couple dozen use cases that are in production. That's really our goal with customers. But what I think about is: you cannot afford to be on the sidelines waiting for something to happen. It is here, and you need to get going. Yeah, Chris, any parting thoughts here?

Speaker 2 (37:34):
Yeah, well, I think there are maybe two or three things, right. I think one thing that we left off, that I think is going to resonate in every enterprise, is delivering intelligence out of existing data that's in your enterprise,

(38:09):
which I think is a super valuable point that a lot of enterprises are going to pick up on. And the second, and I've heard it anecdotally a few times at the show, is, in addition to enterprises trying to figure out, well, what's my use case, should I get started, maybe, you know, to Neil's point, also not being worried about failing if it's not the first use case that you try.

(38:30):
But one of the pieces that I've heard is: well, we're not sure if we want to deploy on this generation; maybe we wait until the next generation. Because, you know, I'll say it, right, our roadmap moves very fast at the high end. In the enterprise range, I think we are a little bit slower, and enterprises should

(38:51):
not be scared of deploying with the technology that's there today, because they can take that same use case and, if they, say, bring in Blackwell or bring in Blackwell Ultra in a year or whatever it's going to be, it's simply a performance upgrade for part of those workflows, and that infrastructure will live for a long time in your data center. So I think, yeah, don't be afraid to get started, or to fail,

(39:13):
and don't let the architecture and the pace of innovation slow you down from getting started.

Speaker 1 (39:22):
Yeah. And Kevin, what are you seeing now, and what do you think we're going to be talking about, kind of, this time next year?

Speaker 3 (39:27):
The models themselves are also improving, and so, he was talking about, you know, let's not put artificial harnesses around things or try to fix things, within the model or

(39:48):
outside of the model, because the models get smarter and better every single day. And so if something is, you know, 80% effective today, tomorrow it's 90%, then 95%, and then, you know, it does everything we need in the future.
And so Neil's point about moving fast and, Chris, your point about let's just get started with what we have, I think, resonates with everything we're seeing. If I look at the pace of innovation, I think next year, you know, we talked about things like AI Defense and

(40:11):
we talked about HyperFabric, we talked about this AI Canvas. I think next year we're going to have so many real proof points, whether they're agentic workloads or just new uses of AI. I think we're going to see a lot more real customers. We definitely brought a few customers with us on the journey this time, and you're seeing them on stage, you're seeing them talk about it. But I think every customer we work with is

(40:31):
going to have proof points and use cases that show real value for them next year, so I'm excited for that.

Speaker 1 (40:38):
Yeah, no, absolutely. It's going to be an exciting 12 months until we connect again. And on that note, I'll let the three of you go. Thank you guys so much for taking the time out of a busy schedule. Hopefully you get some rest, though I know you probably won't out there; it's a busy week. But thanks again for sharing the insights and all the knowledge you have.

Speaker 4 (40:55):
Thanks, pleasure to be here.
Thank you, Brian.

Speaker 1 (40:57):
Okay. As the dust settles on Cisco Live 2025, one thing is clear: the age of theoretical AI is over, and the enterprise deployment era is here. From today's conversation, three key lessons stand out. First, AI use cases must lead infrastructure decisions. Whether it's a NOC co-pilot or an agent-driven troubleshooting

(41:17):
assistant, the most successful AI initiatives begin not with a tech selection but with a clear, outcome-focused use case. Second, AI infrastructure isn't just about GPUs; it's about integration. The Cisco, NVIDIA and WWT trifecta is pushing beyond hardware to offer composable, secure and manageable AI-ready

(41:38):
data centers.
And third, security must scale with AI adoption. As agentic workflows proliferate and inference demand becomes persistent, enterprises need security models that can handle billions of machine-to-machine interactions. In the end, the message from Cisco Live could not be clearer: don't get stuck at the starting line. Start now.

(41:58):
Start small and build with partners who understand the complexity of making AI real and safe at enterprise scale. If you liked this episode of the AI Proving Ground podcast, please consider sharing it with friends and colleagues and leaving a rating or a review. And don't forget to subscribe on your favorite podcast platform or watch on WWT.com.

(42:19):
This episode was co-produced by Naz Baker, Cara Kuhn, Mallory Schaffran and Stephanie Hammond, with special thanks to the teams at Cisco and NVIDIA, as well as Tori McLeod, Sarah Chiodini, Rod Flores and Diane Devery here at WWT. Our audio and video engineer is John Knobloch. My name is Brian Felt. We will see you next time.