May 21, 2025 | 40 mins

As AI pushes the limits of traditional IT infrastructure, enterprises are racing to modernize their data centers. In this episode, Mike Parham and Bruce Gray walk us through the behind-the-scenes decisions that matter — from power and cooling challenges to GPU readiness and sustainability. Whether you're modernizing or starting from scratch, this conversation is your blueprint for AI-ready infrastructure.

Support for this episode provided by: Vertiv

More about this week's guests:

Bruce Gray, a results-driven IT executive at World Wide Technology since 2007, brings 25+ years of experience in Building Automation, IT, and Telecom. As Practice Manager, he leads business development and execution for data center design and IT facilities infrastructure. With a background in architecture, programming, and electrical engineering, Bruce excels in strategy, project management, and vendor relations—always seeking challenges that drive impact.

Bruce's top pick: Data Center Priorities for 2025

Mike Parham is a Technical Solutions Architect at World Wide Technology, where he has been delivering innovative IT solutions since 2011. With a strong technical background and a focus on aligning technology with business goals, Mike helps clients design and implement scalable, efficient architectures. He is known for his collaborative approach, deep expertise, and commitment to driving successful outcomes.

Mike's top pick: AI and Data Priorities for 2025

The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.

Learn more about WWT's AI Proving Ground.

The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.

Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
In the rush to build the future of AI, there's a crisis quietly unfolding in the background. Our infrastructure isn't ready: power grids are straining, cooling systems are maxed out, and enterprise data centers built for a different era are now being asked to support workloads they were never designed for. On today's show, I'll talk with Mike Parham and Bruce Gray, two

(00:22):
experts on WWT's facilities infrastructure team who've been inside the rooms where those hard choices are made, from power density to GPU supply chains, to the geopolitical cost of inaction. They'll pull from personal experience and take us deep into the real-world challenges of building infrastructure in an AI-first future. This is the AI Proving Ground podcast from World Wide

(00:45):
Technology, everything AI all in one place, and today's episode isn't about hypotheticals. It's about whether your organization is ready or already behind. So let's get to it. Okay, Mike, Bruce, thanks so much for joining us on the AI

(01:08):
Proving Ground podcast today.

Speaker 3 (01:12):
How are you doing? Good, doing well, happy to be here. Doing great, thanks for having us.

Speaker 1 (01:17):
Hey, Bruce, I got to ask you, you know, when you hear that a single AI cabinet can now draw more power than a single US home, what are the infrastructure dominoes that fall just from that simple statement?

Speaker 3 (01:30):
Yeah, well, we work on six disciplines and that triggers four of them right away: power, cooling, space and cabling. So Mike and I get pretty excited about that.

Speaker 1 (01:41):
Mike, any reaction to that? I mean, what are the implications, I'm imagining? I'm going back a couple months ago to NVIDIA GTC, and NVIDIA is announcing new racks, Rubin, Rubin Ultra, that can go up to 600 kilowatts of power. I mean, are you sitting there on the sidelines with your jaw on the ground?

Speaker 2 (01:57):
Both here at Worldwide and before we joined Worldwide. So when I first got in the industry over 20 years ago, racks were probably in that 2 to 4 kW range, right, 2 to 4,000 watts. We could do that with single-phase, traditional 120 or 208 volt power. You know, it slowly started to migrate to three-phase and, you

(02:17):
know, up until recently, 20 kW and below was, you know, considered high density. Now, you know, some of these rack loads are 30 to 50 kW on the low side, you know, over a hundred kW and, like you just mentioned, 600 kW and beyond coming. So it's crazy how fast things have changed, because it took 20 years to get to that 20 kW mark and in a

(02:39):
matter of a year, year and a half, it's well, well, well surpassed that. So it's exciting, because our area, you know, I think has been slow to change. Right, data centers are built for 10, 15 years or longer, but that's not the case anymore with what we're seeing from these new NVIDIA chips and other AI loads.
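
For a rough sense of what that density curve does to capacity planning, here is a minimal sketch. The per-rack densities echo the figures Mike cites above; the 1 MW of usable IT power for a data hall is an assumed example, not a figure from the conversation.

```python
# Hypothetical sketch: how many racks a fixed facility power budget supports
# as per-rack density climbs. Densities follow the conversation above
# (2-4 kW legacy, ~20 kW "high density", 50-132 kW AI, 600 kW announced);
# the 1 MW usable IT budget is an assumed example.

FACILITY_IT_BUDGET_KW = 1_000  # assumed usable IT power for one data hall

rack_densities_kw = [4, 20, 50, 132, 600]

for density in rack_densities_kw:
    racks = FACILITY_IT_BUDGET_KW // density
    print(f"{density:>4} kW/rack -> about {racks} rack(s) on a {FACILITY_IT_BUDGET_KW} kW budget")
```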

Speaker 1 (03:02):
Yeah, and Bruce, talk to me a little bit about the implications that it has on the data center. Obviously, 600 might be a little bit off on the horizon, but it's coming. What does the 15 to 20, even the 60, what are the implications that this is having on the standard enterprise data center?

Speaker 3 (03:20):
Yeah, well, it's nice. IT always talks about disruption. Well, we're finally the disruptors, right? So it's going to disrupt the way you bring power to your data center.

Speaker 1 (03:28):
It's going to disrupt the way you code, obviously, and it's definitely disrupting things that people don't think about, by the way: the space and the cabling. The cabinets are

(03:57):
getting bigger, the square footage needs to increase, the aisle distance between cabinet and cabinet has to increase, so it's total disruption in everybody's space, even at the lower end. How many data centers can handle these AI workloads, if any, and how many are on a path to being able to support those AI strategies?

Speaker 2 (04:11):
Yeah, you know, I would say very few, right? So we see the traditional enterprise data centers, or customers' on-prem data center sites, and then you see the hyperscalers, right? The hyperscalers are building these facilities to be able to handle those loads. But it's going to be interesting to see how the traditional enterprise data center can support something like this, because, again, when Bruce and I go out there, it's

(04:35):
sometimes difficult to do 20, maybe 25 kW. Well, that might get you one AI box, right, or two. So I think it's going to be really challenging to see data centers that can accommodate that, you know, the on-prem load. So what we're really working with customers to do is, before you even start talking about the technology that's going to sit in the rack, stop, let's look at the facility, what it can and

(05:00):
cannot do. So we have assessment services to come out on site to understand, and Bruce has just mentioned it, the power coming into the building. Is there even enough power coming to the building? Do we need to go talk to the utility company? What inside of that power chain needs to possibly get upgraded? Right, so you've got power coming into the building. It needs to make it all the way down to the data center, and how

(05:20):
many devices along the way do we need to possibly change? So that's where a real thorough assessment is going to look at that, to understand what's the implication from the IT side, from the AI that's going to be on-prem. What can we do in the current data center? Maybe optimization and adding some newer technologies, but ultimately we might have to

(05:42):
increase the power coming to the building.

Speaker 1 (05:49):
Yeah, Bruce, and those don't sound like easy changes to make if you have to bring in more power or add cooling or raise a floor or do whatever it might be to make sure that you're able to support that AI strategy. But are these things that are long time coming, or do these things take time to make changes? And certainly there's probably a lot of costs associated with it as well.

Speaker 3 (06:06):
Yeah, you've said it all right there. It is a long time coming. One of the things Mike and I have been trying to educate on is that you need to design, or you need to, you know, plan for your facilities now, at the same time that you're planning for your AI. I've heard recently that some of the new chips are about a 36-week delivery. Well, generators, switchgear, CRAC units, any of the cooling,

(06:27):
that is longer than that in some cases. So if you wait too long to design and to do this facilities uplift, you're going to miss the boat on the timelines. And you mentioned cost, the cost impact obviously, right. I mean, and you asked the question earlier, Mike answered it, let me go back to that for a second: how many customers are

(06:49):
ready? I mean none, right? Because if we're changing the way we apply the power, that has a whole upgrade. So when they're looking at this, they're defining just a pocket of where they need AI and not the entire data hall, right? So we talk about legacy operating systems and how we migrate and make them efficient.

(07:09):
So that's our best asset: to go in and prepare the customer with their existing legacy so that they can move into the AI area that they need in a slower fashion.

Speaker 1 (07:24):
And are we talking only on-prem here, or are we also talking about there's facilities implications when we're talking about colo or hybrid or any other way you would run an AI workload, or are we strictly talking just on-prem deployment?

Speaker 2 (07:37):
Yeah, typically Bruce and I are getting involved for the on-prem, right. So there's other teams here at Worldwide to kind of help with that GPU as a service. Where am I going to put that load first, if it's not going to be in my data center? And I think that's going to be the case in most customers' step approach, where it's going to start off-prem first and then, when it does come on site, what

(07:58):
does that rack load look like? Is it a little bit different than what they were doing, training at a big GPU-as-a-service data center? When they bring that inference load on-prem, there's still probably going to be, you know, major changes that are going to be required. But we do have customers, whether it's in the education or manufacturing space, that are doing some of that training, some of that high-density load on-prem today.

(08:20):
And, like we've all been talking, there's no data centers out there, traditional data centers anyway, that are ready for these loads. So Bruce mentioned the power coming in. Not only do we have to possibly upgrade the power, but just the type of power that we're taking to these racks is different. Right, in the US we're traditionally 480 volt coming in

(08:40):
and it's typically getting stepped down, but now possibly we're sending that 480 volt directly to the rack and using 277, or we're doing some DC busway in the back of the rack. So there's just a lot of changes other than I just need more power and cooling.
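
To make the 480-to-277 relationship concrete, here is a minimal sketch of the arithmetic behind it. The line-to-neutral conversion is standard three-phase math; the breaker size, the 80% continuous derating and the unity power factor are assumptions for illustration, not figures from the conversation.

```python
# Sketch of the math behind "sending that 480 volt directly to the rack and using 277":
# in a three-phase wye system, line-to-neutral voltage = line-to-line voltage / sqrt(3).
# Breaker size, 80% continuous derating and unity power factor are assumed examples.
import math

v_line_to_line = 480.0
v_line_to_neutral = v_line_to_line / math.sqrt(3)   # ~277 V per phase

breaker_amps = 60.0        # assumed branch-circuit rating
continuous_derate = 0.8    # common practice: size continuous loads at 80% of the breaker

# Three-phase power one such circuit can feed a rack (kW), at unity power factor
kw_per_circuit = math.sqrt(3) * v_line_to_line * breaker_amps * continuous_derate / 1000

print(f"line-to-neutral voltage: {v_line_to_neutral:.0f} V")
print(f"one 480 V / {breaker_amps:.0f} A three-phase circuit, derated: ~{kw_per_circuit:.1f} kW")
```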

Speaker 3 (08:55):
Well, I was going to say, you know, Mike's talking about the colos, and you had asked the question who's prepared, where they're trying to prepare, because that's where many people are going to have to go. But if you think about it, almost all colos are at a tier two level, and so this is requiring a much higher tier

(09:18):
level. And so now they have to all retrofit and move to not only new power, new cooling, new layouts, but they also have to move to the tier levels, which is redundancy, for safety and battery.

Speaker 1 (09:30):
Yeah, well, we've talked a lot about power thus far, but there are three other areas that we have in the research that we've done here at WWT, talking about cabling, cooling and certainly space and physical area. Mike, can you just dive through, give us a little bit of a brief on why those things are important to consider and maybe a little bit about what a lot of us are not thinking about when

(09:51):
it comes to those areas in AI?

Speaker 2 (09:52):
Sure, yeah, I'll take, maybe, the cooling and the space and have Bruce help with the cabling. So, from a cooling perspective, if we just kind of rewind a little bit and look at a traditional data center, the way they've been designed for a long time is with what's called a raised-floor approach. That's where I have the air conditioners that sit up against

(10:14):
the wall. I'm pressurizing the raised floor, putting my cold air down there, and then that cold air is coming up from the raised floor in front of the racks. That's my cold aisle. That approach is a solid approach. It's been very reliable for a long time and we can optimize it. We can get fairly dense with that approach, and there's some newer technologies, whether that's fan wall or others out

(10:35):
there. We can get fairly dense. But probably the rule of thumb is let's stop around 15 to 20 kW with that approach. And if we need to go higher with air cooling, there's newer technologies. There's things called in-row or close-coupled air conditioning, where, instead of having those AC units sit 30, 40 feet away and hoping that my hot air migrates

(10:57):
over there, I can put those air conditioners right next to my IT cabinets so they're grabbing all that hot air as soon as it's produced. They're providing that rack cool air on the front side, and we can go higher density than that, right. So that'll probably get you maybe in that 30, 40 kW per rack. We can start doing rear door, which is the same concept as that row-based cooling, but now I just turn it sideways, I put it on

(11:17):
the back of the rack, so as all that heat that's generated by the GPUs, CPUs is expelled from the rack, it's kind of trapped and it has to go through that rear door. So we're going to neutralize it, and we can get fairly dense. With that approach we can get up to maybe 50 to 80 kW and perhaps beyond. So the reason we want to kind of start with air is, and I'll

(11:39):
hit liquid in a second, but the reason we want to start with air is we need to have that environment well-tuned and optimized, because air is not going away. Even when we start doing liquid cooling, oftentimes that's only going to cool maybe 80% of the load. So if we have a load right now, some of the racks out there are 132 kW. If I have to cool 15 to 20% of that with air and I have just a

(12:02):
traditional data center with raised floor, I'm still perhaps going to struggle, right. So that's why we like to come in and do a CFD analysis to truly understand what that facility can do. If we do start to do liquid cooling, right, there's a bunch of different options and there's a bunch of different ways to do it, but I think one of the common ones that we're seeing customers approach is called direct

(12:23):
liquid to the chip. So instead of immersion, instead of putting those devices in some type of fluid, I'm going to take fluid directly into the server chassis, directly to the GPU and CPU, and that fluid is going to remove my heat. Now I can do it positive pressure, negative pressure. That fluid could be what's called single-phase or two-phase. So again, there's several options out there, but that's how we're really going to get those rack densities.

(12:46):
And I know a lot of times, when you talk about liquid to the chip or liquid inside of the rack, it's a little scary, you know. The truth is there's always been water in the data center. Even when I have air conditioners that are up against the wall and 30 feet away from my IT equipment, those air conditioners have to do rehumidification, so I'm always taking water. It's all about how do you manage the risk.

(13:07):
In-row typically was always chilled water, rear door can be chilled water. So again, there's always water there. But now we're introducing it directly into the server, and we're doing that to really kind of capture the heat right as it's created. Those devices are getting so hot that just blowing air across them simply doesn't work anymore, right? So that's kind of the cooling. And, you know, Bruce had mentioned on the space, our aisles are

(13:28):
getting bigger. So traditionally, what was called your hot aisle, cold aisle, right, cold aisle in the front of the rack, hot aisle in the rear of the rack, I would say minimum four feet on both of those, right. That gives you enough space to open the doors, put equipment in there, comfortably work. There's ADA requirements as well that you have to consider. But now if we put a rear door on an IT cabinet, and oh, by the

(13:55):
way, that IT cabinet is probably getting wider and deeper, right, so a traditional rack was maybe 42 inches deep, now they're probably at least 48 inches deep and perhaps even going deeper. But behind that rack, if I put a rear door on it, that's probably going to add a foot, maybe a little bit more, a foot and a half perhaps. And if I have two rows back to back, both of those just got a rear door added to accommodate that 20% airflow

(14:16):
that I still need to cool. Well, I've just chewed up three feet of my hot aisle, right. So that's where, you know, when Bruce said earlier, some things that we might not think about: my aisles are going to have to get larger to accommodate the different types of cooling, perhaps, that I have to do in the data center.
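
A quick back-of-the-envelope sketch of that residual-air point: the 132 kW rack and the roughly 80% liquid-capture figure come from Mike's example above, while the four-rack row is an assumed illustration.

```python
# Residual air-side load when direct-to-chip liquid cooling captures ~80% of rack heat.
# The 132 kW rack and 80% capture are the figures cited above; the 4-rack row is assumed.

rack_kw = 132.0
liquid_capture = 0.80                              # fraction of heat removed by the liquid loop

air_kw_per_rack = rack_kw * (1 - liquid_capture)   # heat still handled by air
racks_in_row = 4                                   # assumed row size for illustration
air_kw_per_row = air_kw_per_rack * racks_in_row

print(f"air-side load per rack: ~{air_kw_per_rack:.0f} kW")
print(f"air-side load for a {racks_in_row}-rack row: ~{air_kw_per_row:.0f} kW")
# Note: ~26 kW of air load per rack already exceeds the 15-20 kW raised-floor
# rule of thumb mentioned earlier, which is why the air side still needs attention.
```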

Speaker 1 (14:33):
Yeah, Bruce, take us to that second half. Talk about cabling and the space considerations.

Speaker 3 (14:38):
Yeah, and let me, I'm going to jump on the space with Mike there. You know, he said it: the cabinets are 42 inches deep and the new hardware is 41. You can't close the door once you connect it, right? So they have to go to deeper cabinets. So back to that disruption: now you have to change your cabinets. You can't reuse those existing cabinets, and so that brings us into the cabling. So what about the cabling? Well, it's denser. There's a lot more of it required, and what's existing is

(15:02):
probably not capable of doing the speed and bandwidth and the latency that people are looking for, right. So you're looking at, most people probably don't have more than what they started at, 10 and 40 gig. They might have 100 gig. Some, maybe bigger customers, will go 400. Well, we need 800 gig speeds on this network, right, and we need to use OM4 and OM5 fiber in multimode so that we can pass

(15:26):
more light and we can pass it quicker, right? Those kinds of things are very important. You think about the massive amounts of data sets that are going to be passed from GPU to GPU or GPU to storage, right. And, you know, Mike and I don't touch on this very much because we're not the IT guys: what's the latency requirement there?

(15:48):
You know, can we spread these out to maximize power and cooling? And in most cases, because of what they're trying to do, we can't. So the latency is very important. So these workloads need to stay close together. So then, you know, there's the amount of cables that people don't consider. Like, go back to the colo conversation.

(16:08):
You buy a space and they already have the cable trays. Well, those cable trays are not wide and deep enough to handle the 400 to 800 cables you need per cabinet now, right? They're made for a couple hundred cables, and so the weight load, the density, the code requirements: you have to follow ANSI requirements of 40% fill ratios.

(16:30):
And that's a problem that customers are not thinking about.
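
Here is a minimal sketch of how that fill-ratio limit translates into cable counts. The 40% figure is the ratio Bruce cites; the tray dimensions and cable diameter are assumed examples, not values from a specific standard or product.

```python
# Back-of-the-envelope tray-fill sketch: usable tray capacity is cross-section
# times the allowed fill ratio, divided by the footprint of each cable.
# The 40% ratio is the figure cited above; tray size and cable OD are assumed.
import math

tray_width_in = 24.0           # assumed tray width
tray_depth_in = 4.0            # assumed usable tray depth
fill_ratio = 0.40              # maximum fill ratio cited in the conversation

cable_od_in = 0.19             # assumed outside diameter of one fiber cable
cable_area = math.pi * (cable_od_in / 2) ** 2

usable_area = tray_width_in * tray_depth_in * fill_ratio
max_cables = int(usable_area // cable_area)

print(f"usable cross-section: {usable_area:.1f} sq in")
print(f"roughly {max_cables} cables before the tray hits the fill limit")
```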

Speaker 1 (16:36):
Okay.
Well, Bruce, you talk about going from 400 cables to 800 cables, so that's adding space. You talked about the depth of these racks going from, you know, 40 to 41, 42 inches. What else is taking up space, and why is space such a concern? I guess it's just the fact that you have a certain amount of capacity within a data center. What else do people need to be asking themselves as it relates

(16:57):
to space constraints?

Speaker 3 (16:58):
Yes, yeah, well, Mike's talking about the advanced cooling. You know, he's talking liquid to the chip and all that. You think about that: if you're using an existing cabinet, which he said was two feet by 42 inches, and we're trying to use the same power, you need more PDUs, right? Each PDU can only handle a maximum load. So to get to those density levels, you've got to keep increasing the amount of PDUs.

(17:18):
So where do you put all these PDUs in a two-foot by four-foot cabinet, right? Then you talk about the liquid cooling. Where do you add the manifold that controls that fluid? How does that fit into this cabinet? And then back to the, you know, 400 to 800 cables. How do you manage that? How do you get all those into that cabinet as well? So all this becomes a problem. And if you don't manage the cable correctly, what you're

(17:40):
going to have is an airflow problem, which gives you a heat problem, and it's going to give you a maintenance problem, because you can't work on the gear if the cables aren't managed correctly. And on top of all that, what people really forget about is the labeling. Every cable needs a label. Well, think about adding that mess to the world if you don't

(18:06):
plan it.
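
To put rough numbers on the PDU question, here is a minimal sketch. The 100 kW target, the per-PDU rating and the A/B redundancy scheme are all assumed examples rather than figures from the conversation.

```python
# Sketch of the "how many PDUs fit in the cabinet" question raised above:
# divide the target rack load by what one PDU can deliver, then apply redundancy.
# PDU rating, target load and A/B feed scheme are assumed examples, not vendor specs.
import math

rack_load_kw = 100.0           # assumed target rack density
pdu_rating_kw = 17.3           # assumed rating, e.g. a 208 V / 48 A three-phase PDU
redundancy_factor = 2          # assumed A/B feed: every circuit duplicated

pdus_needed = math.ceil(rack_load_kw / pdu_rating_kw) * redundancy_factor
print(f"{rack_load_kw:.0f} kW rack needs about {pdus_needed} PDUs at "
      f"{pdu_rating_kw} kW each with A/B redundancy")
```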

Speaker 2 (18:06):
A cable nightmare, a label nightmare. This episode is supported by Vertiv. Vertiv offers critical infrastructure solutions to ensure data center reliability and efficiency.

Speaker 1 (18:14):
Keep your operations running smoothly with Vertiv's robust infrastructure products. Mike, take an average customer of ours. Are these questions that they're already asking themselves? Or are these kind of aha moments along their process, where they're like, oh, I didn't account for that, I didn't account for that, and now it's becoming more and more of a

(18:35):
bottleneck and delaying AI strategy?

Speaker 2 (18:38):
Yeah, I think a little bit of both, but probably more of a surprise to most, right, where they didn't realize how dense these racks truly are from a power perspective and what they're going to have to change. Again, if we're backing up outside of the data center, do we have to change the power coming into the data center, meaning I need more and more capacity? Once I'm in the data

(18:58):
center, we kind of reviewed different technologies, right. I'm certainly going to have to change the type of power, the redundancy level. Do I need a third UPS in some cases? Do I need to put in some overhead busway? Do I need to take different voltages to the rack? So there's a lot of changes that I think customers aren't expecting. Because, again, if we back up, you know, something

(19:25):
I had said when we first started discussing this: data centers typically had a life of 10, 15 years or longer, right? So that traditional space of being able to handle 20, 25 kW per rack, 208 volt to that rack, that's worked for a long time and it's not working anymore. So I think it's going to catch some certainly by surprise, and I think having that conversation at the very beginning is really, really important. If we're going to talk about any type of HPC or

(19:47):
AI on-prem in a traditional data center, let's stop and first do that assessment, first understand what can the space accommodate. Because, you know, I think Bruce had brought it up about lead times on devices. You know, also lead time on, do I call the power company and get more power to the building? What kind of lead time is that, right? So not only lead times on maybe new generators and ATSs and

(20:09):
switchgear and all that good stuff, but just getting the utility company to bring me more power. And we've had conversations with customers about, and Bruce does a great job in our AI day presentations about managing behind the meter, do I have to bring my own power? Do I have to put power on-prem, meaning some type of microgrid, because the utility just can't get any power fast enough?

Speaker 3 (20:33):
Yep. Go ahead, Bruce. Let me add to what Mike said. Mike and I do the, you know, we do the AI day presentation along with all the others, and ours is the scary presentation, right: is your data center the Achilles heel? And what we do is we scare them. We tell you that it's going to impact everything we're talking about so far, how it's going to impact power, how it's

(20:53):
going to impact the cooling, cabling and the space, right. And so the feedback we're receiving is exactly that. Those customers are not planning and thinking about that, especially the designing-these-in-parallel aspect. Many customers have come up to Mike and I after, and we've had a lot of follow-on workshops now talking about the preparation for AI and the AI readiness assessment that Mike's referenced a couple of times. And so, yeah, the customers

(21:15):
aren't really preparing, and they're being surprised by some of the concerns.

Speaker 1 (21:20):
Yeah. Well, recognizing that this could be a bottleneck and it could lead to hidden costs, which nobody likes, how would those that are determining AI strategy message to the board or to executive leaders that this facilities conversation is one that we need to have upfront and early and often? Should they just be saying, hey, this could significantly

(21:42):
set back our strategy if we don't account for it? Or have you found any success in clients that we do work with in terms of messaging that?

Speaker 2 (21:51):
Yeah, it's certainly going to need to happen, right. The messaging does need to come. And where does it come from? Because a lot of times you still see maybe different types of struggles inside of the customer, where you have the facilities camp and you have the IT camp, and sometimes, you know, one doesn't know what the other one is doing or, you know, doesn't understand the technologies on either side.

(22:13):
So if I'm strictly on the IT side and I don't understand that my traditional data center can't cool the 132 kilowatt rack I'm getting ready to slide in, right, I may not even bring that up. And then on the facility side, you know, if I'm just relying on the way we've done data centers for 20, 30 years and not knowing that what's coming from the IT space is now 20 times the amount

(22:35):
of power, you know, that's a big disconnect. So I think the answer is education, right? Education, and making sure to have these conversations very, very early. You know, and that's where we can do workshops, we can do that education track to make sure that both sides, both parties inside of that customer, truly understand how each is going to

(22:58):
impact the other.

Speaker 1 (23:00):
Yeah, and Bruce, these aren't lessons that we're just kind of, you know, making up here. We've actually learned some of these lessons ourselves. You know, we have created our own AI Proving Ground, which, you know, of course, is what our podcast here is named after. Walk me through the journey that we had with the AI Proving Ground and perhaps some of the lessons that we learned, either the easy way or the hard way, and how that would relate to

(23:20):
some of our client journeys as it relates to their AI strategy.

Speaker 3 (23:24):
Yeah, and they made a commitment to build the AI Proving Ground in the one data hall that we have, ATC One, and when they went to do this, the amount of power was not available, just like Mike explained earlier. So we're a customer and we want more power. So we go back to the utility and ask for more power, which is a simple solution until you find out the utility can't deliver

(23:46):
power for 18 to 24 months, right? How do I get my technology rollout without that? So what we had to do is shift workloads, right? They had to move things around. We mentioned, we touched on briefly, what are we doing with our legacy power and cooling models? How do we make them more efficient? And that's just what our team had to do over there. Mike and I aren't part of that team. The ATC team has their own group of people that make this

(24:08):
happen, but that's exactly what they had to do. They had to shift workloads. They had to consolidate, optimize, to be able to bring the additional power they needed to build out the AI program.

Speaker 2 (24:19):
Yep. And something to bring up there, and Bruce mentions this a lot when we do these AI roadshows, is what about building new? Is it going to be quicker and easier and better, perhaps, if I just kind of, not necessarily abandon my data center, maybe I'm going to use that for my low-density on-prem

(24:39):
loads that I have, but I'm just going to start from scratch? And we've seen a real kind of boom in that space with modular prefab data centers, where a customer can have a data center stood up maybe quicker and easier than constructing a whole new building. So there's partners out there that we work with, and I think it's a real value that Worldwide adds. We can certainly architect it and bring in the best-of-breed

(25:03):
technology from the IT side of the house, but then we can do that same thing with the facility side and get these modular prefab data centers completely integrated and turnkey for the customer, so now they can stand up this HPC AI environment a little bit quicker, as opposed to waiting for a new data center to be built.

Speaker 1 (25:21):
So is that taking kind of a greenfield approach there, then, with those, with that modular approach?

Speaker 3 (25:26):
Absolutely, yep, you bet. If you look at an analysis, if you do an AI readiness assessment, you've got to decide, the customers have to decide, what their goals are. Is it speed? Is it money? What is it that's the roadblock for them? To retrofit a data center definitely takes longer than building new. It's not cheaper.

(25:47):
Usually building new is more expensive, but it gets you to your end goal quicker. So if speed of deployment gets the customer the results they're looking for, then Mike's spot on there with modular and new builds.

Speaker 1 (26:01):
Well, whether you're retrofitting or going the greenfield route, you know, if data center life cycles were taking that 10 to 15 years, Mike, that you'd mentioned, well, the AI hardware landscape, you know, feels like it's cycling out, you know, every couple weeks, if not shorter. So how can you maintain flexibility in that environment, knowing that things are always going to be changing on the

(26:22):
horizon? Or is it just a game of catch-up for the time being?

Speaker 2 (26:27):
Yeah, great question. Because six or seven months ago there were questions about how do we handle these 130 kW or 200 kW racks, and in some cases the technology wasn't even available yet, right. So if we're going to now plan a data center to handle these AI loads, that, again, maybe 50, 60 kW on the low side, a little

(26:49):
over 100 on the high side, but knowing that 600 kW is coming, that's a real challenge, because maybe there's not even a solution out there. And I think there is, right. So there were OEMs showing some technology that handles 600 kW per rack. But how common is that going to be, to have customers have that 600 kW rack on-prem at their site? Is something like that maybe just going to live at

(27:13):
these hyperscale kind of colo environments, or is that going to be on-prem in a traditional data center? So I don't know the answer to some of that, and I don't know if our customers understand that as well. But yeah, to kind of answer your question, that 10 and 15 year lifecycle has been the way for decades, right. But now that AI is changing, the requirements put on the

(27:34):
facility are changing so dramatically. It's not just that it's doing it in a quick time frame. It's that the capacities are a drastic change, from 20 kW, 30 kW, maybe, traditionally, to now we're talking 130 kW, now up to 600 kW. That, I think, is a real challenge to plan for: life expectancy in my data center to go 5, 10, 15 years.

Speaker 1 (27:58):
Yeah, Bruce, anything to add from your end in terms of trying to strive towards that flexibility, knowing that things are changing all the time in the business landscape?

Speaker 3 (28:07):
Well, one of the interesting things is chipsets and GPUs are moving faster than the facilities, right. In every industry out there, there's a regulatory commission or a governing body that sets standards and parameters and laws and regulations, and that's not around yet. That has not happened. There's nobody. ASHRAE and others have not stepped up and said, well, this is how we're going to do liquid cooling, this is how we're going

(28:29):
to cool 600 kW. So the OEMs that are out there, they're still trying to decide what's next. So it is going to be a stumbling road for a little while until we get more standards. Let's just say, right, what kind of liquid are we going to use? What kind of connectors are we going to use? What kind of other technology that's not even developed are we

(28:49):
going to be able to use? So it's an interesting time right now.

Speaker 2 (28:54):
Yeah, and I think something else that might help with customers is, I don't know if we're anticipating the entire data center. Again, I'm not talking, you know, the hyperscale GPU-as-a-service kind of environment. I'm talking traditional on-prem. I don't think that's going to be the entire data center, right. So maybe the silver lining, the good news, is my entire data center

(29:14):
doesn't need to support these drastically different, increased loads. Maybe it's a few rows, a couple rows, one row perhaps, or just a few racks. So it's going to be interesting to see how that evolves and how quickly customers bring this technology into their data center, and how much of it.

Speaker 1 (29:40):
I did want to dive deeper into the liquid cooling aspect. You know, several months or a year or so ago that felt pretty exotic, I think, in most cases, but it's certainly becoming more and more common and we're seeing a lot more vendors hit the landscape. So I'm just wondering if you can kind of break down exactly, you know, what we're seeing in the liquid cooling space, and is there any more innovation coming down that line that'll help

(30:03):
maybe have a breakthrough for, you know, for handling some of these workloads?

Speaker 2 (30:09):
Yeah, so I, you know, I think the good news is there were a lot of options out there, and I think we're going to start to see some consolidation. So some of the manufacturers that we work with have already started that process of acquiring these technologies, and I think it was a little bit of a, hey, let's wait and see what's going to win the battle from a, do I go liquid

(30:31):
to the chip? Do I do immersion? Do I do single-phase or two-phase? Do I do positive pressure, negative pressure? So, to your point, a lot of this was exotic. It was really something that most customers weren't using, right. Even when Bruce and I would talk about, you know, more advanced air technologies, that's still in some cases not

(30:52):
common and a little exotic. When we talk about hot aisle containment and rear door heat exchangers, right, that's not common in most cases. So if we're introducing liquid into the mix right at the IT rack or inside the IT rack, that's certainly a little bit different. So I think the good news is, again, we're going to see some consolidation of some of these partners, some of the options. I think we're going to see some standards set on the

(31:16):
other side of the fence, meaning the IT folks, right. The GPU, the CPU makers are going to probably start to choose their favorites. What we've started to see most designs kind of settle on is direct liquid to the GPU and CPU, and it's got positive pressure. So that, I think, is going to be what we're going to see the

(31:36):
majority of, maybe for the next year or so. The advantage, maybe, if we're looking at different types of liquid cooling, so that immersion concept, where I'm putting the entire device into a tub of liquid, is I'm capturing all that heat, right. So we talked about it earlier: the reason why we want to maintain a well-optimized and tuned air environment is that I still need to cool perhaps up to

(31:59):
20% of my heat load that's not getting cooled currently today by direct liquid to the chip or GPU. Right, if I do immersion, I'm capturing all that heat. A little more exotic, right, I've got to have additional devices in there. My floor is now laid out a little bit different, because the rack is not standing up, it's laid down on its side.

(32:19):
So some customers are using that, you know, like Bitcoin kind of mining technology. We're seeing customers like that use it. I don't know if that's going to be very popular in the traditional enterprise data center space. Again, I think we're going to see more of that kind of DLC environment.

Speaker 3 (32:39):
Yeah, Bruce, any innovations taking place, whether it's in the liquid cooling or the power, the cabling, the space, that give you hope that we'll be on the right path here to tackle some of this stuff? The thing is, converting to DC is a possibility now with this,

(33:02):
you know, OCP rack and having this bus bar. I was just at a meeting this week where that was a hot topic, how they're going to convert, you know, the rest of the technology that's around to run DC, which is not in a traditional data center. I mean, that's used in the telco space.

Speaker 1 (33:25):
So that's one that I think is gaining a lot more traction right now. Let's say that, you know, a customer, an organization, is beyond the power, the cooling, the space and the concerns that we've brought up so far today. What are the challenges that lie beyond that? Is it a talent question? Is it a sustainability thing? What are the other concerns after the fact?

Speaker 3 (33:47):
There's a huge talent shortage, right? I mean, I think in my presentation I have one fact, stated by the governing bodies, that there's over 400,000 technician positions open in our area. I'm not talking about IT technicians, I'm talking about facilities technicians. So I can relate that, say, back to cabling: people don't value

(34:07):
cabling as highly as they should, and if you don't get a quality technician on the cable, you may be putting something in that doesn't get you what you need. Right, you need those quality technicians to come out and test and tune so that you get the bandwidth you think you have and you get that latency reduction and so forth. So there's definitely a technician shortage.

Speaker 1 (34:28):
Yeah, Mike, what about you? What do you think lies beyond those four main concerns that we've talked about?

Speaker 2 (34:33):
Yeah, you know, maybe something else that we haven't talked about is, are there going to be any new regulations set that are going to impact customers? So if I do decide to go greenfield, right, I've decided that I just cannot optimize what I have now, I need to start from scratch, do I have to comply with any new regulations for data center builds, right?

(34:53):
Is that going to become more of a topic, and are more municipalities and states going to require you to put in some type of on-prem generation of your own power, whether that's fuel cell or air or, I'm sorry, wind? You know, that could be something that's going to certainly be a challenge and, again, add to the complexity of designing this environment, and perhaps even lead to new legislation.

Speaker 3 (35:20):
It's a great one there, because we know that there's bills out in many of the states right now that are exactly requiring that. They're saying that when you permit a new data center, you're going to have to create X behind the meter, and that's where the innovation is going to come. Everybody knows about wind and solar, but what about nuclear or

(35:40):
hydrogen, or fusion, geothermal, even wave generation? Where are these data centers at, and what kind of power is going to be required from each government agency? So that's a great point, Mike.

Speaker 1 (35:55):
Well, moving forward, I'll give you another headline here. This just happened a couple of weeks ago. So Jensen Huang, NVIDIA CEO, as good of a North Star as we have these days in terms of understanding where the AI landscape will go, says that every company will need an AI factory moving forward. So how do you see that fitting into the future of data centers?

(36:16):
Is it just keep considering everything we've talked about, or is that going to push the envelope even further?

Speaker 2 (36:25):
You know, I think it's certainly going to have to account for everything we just talked about. And again, that AI factory, where is it going to live? Is it going to live on-prem or in an as-a-service type environment? Because that's perhaps the easy button for a customer, to get that AI factory: I'm going to do it off-prem, where I don't have to worry

(36:47):
about all those challenges and constraints of my own data center. If they are going to bring something like that on site, then absolutely we need to have these conversations, because there are going to be big impacts and we need to understand what they can and can't accommodate.

Speaker 3 (37:01):
Mike and I surely aren't going to say Jensen's not right. But I think it's going to be, I think you hit it, Mike, it's going to be shirt sizes, right? It's an AI factory: I'm Mr. Small Customer, I need a small AI factory because I want to do X, Y and Z, or I'm a large consumer and I need a bigger AI factory.

Speaker 1 (37:18):
That, I think, will happen in a minute. But is there any question that, you know, I didn't ask here today that you would ask of one another, in terms of helping our listeners' organizations out there understand how to advance their AI workloads? What should they be asking, or what questions would you ask of

(37:39):
yourselves that would really advance this conversation?

Speaker 3 (37:45):
I was going to say, Mike and I are the "how" people. You come to us and we figure out how to do it, right? So the questions will be: why are you doing it, what are you doing, and where are you doing it at, right? What are your goals, what are those things? So we've got to figure out exactly what you're trying to accomplish, right, what's the goal, what's the challenge, what's your business case, and then what you're doing it with.

(38:07):
Mike's already said a few times, is it a whole data center for AI as a service, or is it just, I need faster Google searches and I'm going to need a small AI factory? And then, where do you plan on doing it? Is it on site or somewhere else? Once we know the why, what and where, Mike and I get to the how. So that's always where we want to start, and Mike said it a couple of times earlier: the readiness assessment.

(38:27):
Mr. Customer, these are very important things today. We need to baseline, we need to find out what you have, and then we can apply the why, what and where to that. So that's what I would say.

Speaker 2 (38:38):
Yep, and that's what I was going to bring up too: the assessment, for sure, and making sure that the team, the customer's team, is fully communicating on both sides of the house. So facilities and IT need to come together to have these conversations and understand how each other is going to get impacted by some of these new technologies.

Speaker 1 (38:59):
Cool. Well, Mike, Bruce, thank you so much for joining us here on the show today. It was very helpful, very insightful, and hopefully helpful to those out there that may not be asking these types of questions within their own organizations. So thank you again. Thanks for all the work you do for WWT, and hopefully we'll have you on again soon. Sounds good, thank you, appreciate it. Okay, great conversation.

(39:20):
So let's talk about some of the things we learned. First, AI isn't just a software challenge. It's a physical one too. From power and cooling to real estate and network architecture, scaling AI means rethinking the entire foundation of your data infrastructure. Second, the organizations moving fastest aren't just adding GPUs.

(39:40):
They're planning holistically, anticipating density spikes, supply constraints and sustainability requirements before they become business blockers. And third, every enterprise has a choice to make: modernize with intent or fall behind by default. Because in an AI era, infrastructure is strategy. The question is no longer whether you need to modernize.

(40:02):
It's whether your infrastructure can keep up with your ambition. This episode of the AI Proving Ground podcast was co-produced by Naz Baker, Cara Kuhn, Mallory Schaffran and Stephanie Hammond. Our audio and video engineer is John Knobloch. My name is Brian Felt. We'll see you next time.