Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jim (00:06):
Welcome to Trading Tomorrow: Navigating Trends in Capital Markets, the podcast where we deep dive into technologies reshaping the world of capital markets. I'm your host, Jim Jockle, a veteran of the finance industry with a passion for the complexities of financial technologies and market trends. In each episode, we'll explore the cutting-edge trends, tools and strategies driving today's financial landscapes and paving
(00:29):
the way for the future.
With the finance industry at a pivotal point, influenced by groundbreaking innovations, it's more crucial than ever to understand how these technological advancements interact with market dynamics. Today, we're diving into a topic that is at the very
(00:53):
foundation of AI's rapid growth: data centers. As AI workloads intensify, so does the demand for energy and advanced computing infrastructure. How can the industry scale without overwhelming power grids, and how do sustainability and AI innovation intersect?
To answer these questions, we're joined by Yuval Bakar, the
(01:13):
founder and CEO of Edge Cloud Link, or ECL, a pioneering company revolutionizing AI data centers with sustainable off-grid solutions. Yuval's career spans leadership roles at Microsoft, LinkedIn, Facebook, Cisco and Juniper Networks. With a focus on sustainability, he's leading the charge in building carbon-free AI data centers.
(01:35):
Yuval is also a recognized leader in digital infrastructure innovation, with eight US patents and a deep understanding of data center efficiency, compute density and next-generation cooling solutions. Today, he's here to share his insights on how AI infrastructure is evolving to meet the technology's insatiable demands without burning through resources.
Guest (01:56):
Thank you for having me. It's a pleasure and an honor to be here.
Jim (02:00):
Well, you know, perhaps we could start by just explaining what makes AI data centers so different from traditional data centers in terms of energy consumption, infrastructure and, I'm sure, water cooling, the whole thing. I'd love to get your perspective.
Guest (02:15):
Yeah, I think when we refer to AI data centers, initially we are referring to data centers which are targeting the LLMs, the training part of it. There's a completely different discussion we should probably have about the inference data centers, but let's start with the LLM ones.
On the training data centers, the phenomenon we're seeing is a tremendous multiplier on the density of the racks and the
(02:37):
need to put a very large quantity of hardware into a very small space. That is challenging most of the data centers which are out there in the world today, because data centers up until two years ago in the US were averaging eight kilowatts per rack, and we've reached a point today that we are at 150.
(02:57):
And if you listen to Jensen from Monday, we are targeting 600 in the next two years, 600 kilowatts. So that's almost 100x on the average which was there just three years ago. That is the major challenge for any data center, to be able to do that.
The secondary challenge, beyond the density, is the ability to
(03:20):
operate with liquid in the data center. Liquid initially was actually used only for liquid-cooled air, but now we have a significant level of liquid cooling which requires direct-on-chip liquid cooling. Direct-on-chip liquid cooling forces us to bring a very high capacity of water at very high pressure and flow into the data
(03:43):
center to feed elements which do a very high level of cooling into the chips themselves, not the room, but the chip itself. That requires an infrastructure of water, which again does not exist in most traditional data centers and requires a specialty data center to be built like that.
The third thing, which is actually probably one of the
(04:03):
biggest problems that we have today, is not necessarily related to the data center itself; it's actually the surrounding infrastructure, since these are very large sites and require very large quantities of power. We just ran out of power, so we don't have power from the grid to actually feed those systems and data centers.
(04:26):
The length of time to actually increase the capacity of the grid and the availability of power for those data centers is between three and five, or 10 years in some cases, and the problem that we have today is that we have data center requirements to grow through the system very quickly and we
(04:50):
cannot have enough power in the grid. The secondary challenge that happened to us is data center build-out time, which traditionally was always between three and four years, and is not matching the technology build-out time. The technologies are changing every nine to 12 months.
(05:11):
NVIDIA is delivering a new platform every nine to 12 months, and our cycle to build used to be three to four years, which means that's about four to five cycles of technology. That means that if you start a project today, you cannot actually deliver something that will be addressing Gen 5 from now, because nobody knows what Gen 5 from now is going to look like.
(05:31):
So the cycle of data center build-out is actually an impeding aspect of what we do. What a lot of people do today is they over-design, basically. So they say, okay, I have no idea what I'm going to need, so I'm going to design the maximum I can today with today's technology and hope that in three or four years, when I deliver the data center, it will address the needs.
(05:52):
That is a huge risk to the people who build data centers right now, and it doesn't look like NVIDIA is slowing down. On the contrary, it looks like they're accelerating the processes, which means that we have to go through a very, very fast cycle right now, and all of this is a challenge for the AI data centers.
So, if I summarize: very high density, liquid to the data
(06:16):
center racks, which are pushing the 150 today and will be pushing hundreds of kilowatts per rack in the future; power delivery to those racks; power delivery to the data centers themselves; and being able to sustain all of this at high availability. All of this together creates a major challenge in the data centers and makes them special.
Jim (06:36):
Well, in terms of just the resources alone, we've seen Google making investments into small nuclear reactors. But just even the availability of water has to come into play in terms of where these data centers can be located.
100%.
Guest (06:53):
The water, while the complexity is to bring the water into the data center itself, there is a complexity in how many gallons of water you can use in the community you're around. Those communities in some cases do not comprehend how big the consumption of water can be. It can be hundreds of millions of gallons per month of water
(07:15):
consumed which is taken from the environment around you. So, unless you're in an environment where you're rich in water or there's excess water, you're actually taking the water from the communities around you, if you operate in a standard environment.
Jim (07:30):
So let me ask a question here, just because I don't know the answer, and I've wondered this: is it freshwater, or is saltwater something that can be utilized?
Guest (07:39):
You have to use freshwater. Saltwater is way too corrosive. So if you use saltwater you have to actually run desalination on it to actually deliver clear water with no salts in it. The salt is very corrosive to the systems and will not allow the system to operate.
Jim (07:59):
Given those kinds of constraints of power and water and the impact on communities, a lot of state regulators, at least here in the US, or countries, are probably imposing a lot of regulation as it relates to where these data centers can actually be built.

A hundred percent.
Guest (08:18):
We got to the point that we're running out of power in a lot of places and we just can't place anything over there. The communities are not willing to accept the data centers on multiple levels: one is the power consumption, the water consumption, the air pollution that they're creating, the noise pollution they're creating and the way they look. Data centers are not very appealing buildings. If you saw them, they look like one big brick of concrete with
(08:41):
no windows. Usually the data center does not go through some kind of architectural beautification cycle, so all of these are creating challenges in the communities.
The communities are trying to actually impose right now exactly what you're describing, and that's restrictions on how many data centers can come in, what kind of data center can
(09:04):
come in and what's the implication of a data center coming in. A lot of places now require you to bring your own power into the grid if you want to connect your data center to the grid, and that's very complex, because data center companies are just not power generators. They have not done power generation and they're looking for ways. Like you mentioned, Google and others are looking for other
(09:26):
ways to bring power into the grid to be able to enable the data center footprint to grow.
Jim (09:30):
So now your company, ECL,
is at the forefront of
sustainable data centertechnology.
What inspired you to createoff-grid hydrogen-powered AI
data centers?
Hydrogen-powered AI datacenters.
Guest (09:42):
Yeah. So, because of my history, I used to work for hyperscalers for a long time and build data centers both in a hyperscaler environment as well as in a co-location environment, and I got to the point that I saw that the drive in those large data center operators to create what's called carbon-neutral
(10:03):
data centers was not necessarily designing the data center better, but actually creating credits and offsetting the data center's impact on the environment, and for me, that was always the wrong way to address the problem. The right way, in my view, was to actually build an actual zero-emission data center which does not take water, and that led me
(10:25):
to actually say, okay, to do that and create a disruptive data center in the world, we have to actually build a different kind of data center, and that's how ECL was born.
ECL was born from a perspective of: let's prove, and let's build a business model that shows, that we can build a sustainable data center now, in 2024, 2025, and not in 2035 or
(10:47):
2040, like others were claiming before, and show that it actually has the right business model.
Now, this happened pre-AI. When AI came into play, we actually created a perfect match to the data center requirements of AI, because we had water in the data center and we're running off-grid and we're zero emission and zero water, and all those things actually fell like a perfect match into
(11:10):
what AI is requiring us to do. But the motivation was to create a sustainable data center with zero emission, zero water from the first moment and make it very high end, and that will enable people to actually operate at the level that the hyperscalers operate, even if you're not a hyperscaler.
Jim (11:29):
So what is MV1 in Mountain View, and why is it being called a groundbreaking facility?
Guest (11:35):
So MV1 was the first implementation of the sustainable data center. The ECL data center is a modular data center. So what does modular mean? We build fixed blocks of 1 to 1.5 megawatts. It's a structure, not a container or anything like that, which is running completely off-grid. So it's running self-generation of energy and
(11:58):
it's running at the same time a very high-end cooling system and delivery to the data center for high density. MV1 was the place where we actually developed all the hardware and delivered this platform, which is one of the fastest-growing platforms in the industry today, and that's the high-end data center. It's called AI factories in some places, it's called other
(12:20):
names, which enables very high density. And just an example for that: we just deployed the first two 150 kW liquid-cooled data centers eight weeks ago, and they're running in production right now with our customer. Very few data centers in the world can actually do that. That's a very complex problem.
(12:41):
What we did in MV1, the first step, is we said we're not going to attach ourselves to the grid over here. So we went and built self-generation of energy with liquid hydrogen delivered to the site, running hydrogen-based power generation units with fuel cells. That was a breakthrough in power generation. It was a breakthrough in creating stationary power based
(13:02):
on hydrogen, and it was a breakthrough in actually changing the hydrogen business, because the hydrogen business was never in the business of doing stationary power. It was always for petrochemical and other industries. It was never serving power systems.
We're the first ones who actually went and built a power system based on hydrogen and ran it for the last year. Like two
(13:24):
weeks ago, it was the one-year anniversary that the site did not go down running on hydrogen, and to do that, we did a lot of things in changing the architecture of the data center: removing the diesel generators from our footprint, removing the UPS systems from our footprint, creating a new power architecture which is more adequate for the future of AI
(13:46):
requirements, and delivering that in a breakthrough time. We built the site within less than a year, and that's the first one. When you build the first one and you do engineering development on it while you're building it, one year to deliver is without precedent.
Now, it is the first small block, right? So, the 1.5-meg
(14:06):
block. But the huge advantage is, when we scale to large sites, and we announced a one-gigawatt site in Texas in September, we just repeat the block many, many times. We're not doing any scale-up, we only scale out, which reduces dramatically the risk in the technology, because the
(14:27):
technology has been proven on one block and it's just repeated as many times as we need, so that the technology is not critical in the path.
The second thing is, because of that proof of concept that we have, we can accelerate the build-out, and we can build data centers in a matter of nine months from the moment we get a PO to the moment we give the key to the customer, at large scale,
(14:51):
hundreds of megawatts. And the reason we can do it, part of it, is the modularity that has been proven to be working in Mountain View.
Mountain View, the MV1 site, was supposed to be a development site for us, but because it was such a high-end data center and was actually performing very, very well, we have paying customers. Right now it's moved to production.
(15:12):
We didn't plan to move it to production, but it's moved to production and we have paying customers on site who are actually running their own business on the equipment in the data center, and that by itself is a huge achievement for us.
Jim (15:26):
That's incredible. You make me think, going back to the classic Tesla-type case study, right, the emerging technologies, but then again, now we need a whole network of gas stations or electric charging stations, which led to solar. What kind of technologies, as part of this, are emerging, by
(15:49):
yourself or with your partners, that are benefiting other industries in terms of taking advantage of everything that you're doing?
Guest (15:56):
Yeah, so the first step: we are making a significant change in the development of fuel cells for a stationary solution. Fuel cells have been in the market for a long time, but they never worked really well for stationary. So we're actually changing the way fuel cells are being designed right now with our large partners, some of which are on the automotive side, some of which are coming from the traditional fuel cell business.
The second thing: we're changing the hydrogen business models and the hydrogen operation of the suppliers of hydrogen. Today we are building around the pipeline, a pipeline which exists in the Houston area in Texas and exists to actually support refineries and petrochemicals, and suddenly we come and bring a green, clean
(16:41):
industry onto the pipeline. We are changing the number of pipelines that are being built. We are changing the quantity of hydrogen which is being built and the type of hydrogen which is being produced. Hydrogen right now is mostly gray hydrogen, which is a relatively high-carbon hydrogen. We are actually driving blue hydrogen, which is very low
(17:01):
carbon, and definitely driving more green hydrogen production. All of these businesses require an off-taker who is going to use the hydrogen they're producing. We are a very large off-taker, and we offer that not only in the Houston area; we'll offer it in other places. Another area where we have an impact: for a long time, we have made huge investments in sustainable
(17:25):
generation of energy, whether it's solar or wind.
The challenge with that energy is that it's unreliable. One day it's there, tomorrow it's not there. There are six or seven hours a day that it's actually active. We have to figure out how to move those into being a reliable solution. ECL developed a platform which is actually taking this unreliable power and converting it into a
(17:48):
reliable 24/7/365 solution, combining hydrogen, high-capacity storage, which is already built into our architecture, and the self-generation of power.
This is game-changing. This is a phase two for us, where we will build data centers based on that, and that will enable us to put a data center anywhere. Anywhere that there's a solar plant, anywhere there's a wind
(18:08):
plant, we can actually put a combination of hydrogen generation, hydrogen usage, direct feed behind the meter and a large-capacity storage solution for data centers at scale.
And that is something that, again, was very difficult for data centers to do because of the availability of solar or
(18:29):
wind power. With the high level of reliability that we need for data centers, we solved that problem, and we will continue to build next to those areas. So that's an impact across multiple industries. It's an impact across multiple developments.
And the last thing is the community integration that we talked about. ECL is coming into a development, into a community,
(18:50):
investing multiple billions of dollars into the data center and requesting nothing from the community, because we don't take the power from the grid and we do not take the water from the environment. When we generate energy with hydrogen, the byproduct is water, and we use that water to cool the data center and for every other water use that we
(19:10):
have on site. So we don't take water from the community around us; we just bring them development.
And that's again a game changer in the integration with communities, because you don't come to the community anymore and say, I'm going to take your resources, just accept that because I'm going to give you economic development. We said, no, we will give you the economic development and
(19:32):
we will not take any resources from you. And in some cases we even give you back water, because we're producing, in some cases, more water than we actually need, and that is again a game changer in how you actually interact with the community.
We also developed a community program, which is an education program that supports the communities that we go
(19:53):
into, to generate jobs which are data center related. The data center world right now has one big shortage on top of energy, and that's resources: humans who will actually work and operate and develop and build the data centers. We are creating programs to actually create this workforce for us locally, so we don't need to bring people from the outside. We want to create a local contribution to the economy and
(20:16):
to the people that live in the communities we come into.
Jim (20:18):
You talked about MV1, and perhaps you can talk a little bit about TerraSite, the TX1 project. How are they different?
Guest (20:26):
Yeah, so the TerraSite build-out, first of all, is in Texas, not in California, and the reason it's in Texas is that we have pipeline access, actually multiple pipeline accesses, into the site that we operate in. The site in Mountain View is a 1.25-megawatt block. The site in Texas is 1,000 of those.
(20:47):
So it's a very, very large scale, and we're capable of putting in 1,000. We build them in phases, of course; we're not going to build them in one shot, and that will actually be one of the biggest data centers in the US that are going to be built in the coming three or four years. And the reason we can scale that way is the modularity, as
(21:08):
well as the availability of hydrogen in our environment. We have hydrogen in Texas, and we are offsetting the hydrogen use that went into refineries to go into clean energy generation, which creates a huge partnership for us.
The second thing over there is we can address, with the site in Texas, very large customers.
(21:29):
So from our perspective, we can operate with very large customers that have a minimum requirement of 100, 150 megawatts per phase. That we couldn't do in Mountain View, because Mountain View is very small in size and limited in resources. That's another thing which is very different. But except for that, the blocks look exactly the same as MV1.
(21:50):
The blocks are just repeated.
The one interesting thing that we do with our customers is, since our phases are nine- to 12-month phases, we let them change the block on the fly. So they can change the block every nine to 12 months, and that's a huge difference from what other places and other companies can actually offer. So every nine to 12 months, our customers define a new spec,
(22:13):
because there's new technology that comes in. That's a very unique way to build a data center, because you build it in an incremental way, not building one large campus on the first day, and that's something that we can offer in Texas as well. We do have customers in Texas which are relatively large customers. We can't name them yet, but hopefully we'll be able to name them very soon in a press release.
Jim (22:35):
We'll be waiting for that, definitely. So, you know, obviously we're now at the point where quantum computing and AI are definitely pushing boundaries. I think if I sat here a year ago today, the world was completely different. But what do you see as the role of quantum computing in terms of
(22:57):
potentially reducing energy demands in future data centers?
Guest (23:01):
Quantum computing, we can talk for a whole hour just on quantum computing, but quantum computing by itself as a technology is definitely a game changer. The big question mark right now, and we see the first implementations of quantum computing, is: can
(23:21):
they actually scale into large-scale operations in production? Because everything that we've seen until now is experimental, and we're talking about three to five to seven years before we can actually see them in full production.
I think we can't really predict what will happen. It definitely will actually give us a much higher level of
(23:42):
compute capability and do all kinds of things that we couldn't dream about doing before. The question is, how quickly can you move it to production? And how quickly can you move it to production in traditional environments, so that it will scale? If you need a very special environment that requires tens of billions of dollars of investment just to make it work
(24:03):
for a single unit, then that will not scale properly.
We need to make sure that technology scales properly to be able to leverage what it does. But in my opinion, quantum computing is going to be the next wave after the GPU-style workload that we create right now. I'm sure NVIDIA is looking at it, on top of others which are working on that, even though NVIDIA is focusing right now
(24:25):
100% on high-density GPUs to actually deliver the AI platforms.
Jim (24:30):
So there's an ongoing debate about centralized versus decentralized AI computing. Do you think AI workloads will remain heavily tied to large data centers, or do you think we'll see more edge computing and distributed AI?
Guest (24:45):
So I think for the learning stage of the models, you will need centralized large environments, and that's where the reference to AI factories comes in; this is where you're going to create the models. For inference, which is where you use it, when you type a question in Grok, for example, that's inference, because Grok does not do learning from the questions,
(25:06):
it's actually just applying the model. That will actually be pushed to the edge, in my opinion, and will enable us to build smaller sites, like 10, 15 megawatts, which are closer to the endpoint, to enable giving faster responses to people. Not every application is sensitive to latency, but most
(25:26):
people actually have a high level of sensitivity to getting an answer quickly to the question they ask. They usually are not willing to wait.
So I think in the next five years we will see a higher and higher deployment of inference sites, which are smaller sites, all over the place, and again the limiting factor will be, like we
(25:46):
see right now, the availability of power, because in the major cities we see zero availability of power. So we don't even see a small level of availability, we see zero availability, and every implementation will try to go into those areas. There's got to be a solution. How do you actually solve 10, 15 sites like this in the middle
(26:06):
of Manhattan or Chicago or any other place which is a large metropolitan area? Where do you get the power from?
Jim (26:14):
Unfortunately, Yuval, we've made it to the final question of this podcast.
We call it the Trend Drop. It's like a desert island question: if you could only track or watch one trend in the development of AI and data centers, what would that be?
Guest (26:26):
The one trend that I would track, 100%, is this: how do we get power to enable the initial deployment of AI factories? How do we get the power for that, and how do we actually deploy it on time, at the level of performance that we expect from the chip manufacturers? That is something that is going to make or break the whole
(26:48):
AI phenomenon. If we can't build the infrastructure for the AI applications and track the fast, I would say lightning-fast, path at which the chip technology is actually growing at an exponential level, it will halt the effort completely.
(27:11):
The secondary thing associated with that is, how do the companies who actually leverage that AI have a business from it and make money? Because, bottom line, a lot of companies will not invest a lot of money if they can't see a business model that actually works and is profitable. And right now a lot of people are making speculative investments
(27:33):
in this market, buying a lot of equipment, building a lot of infrastructure, but only a handful of the companies are actually profitable. So we need to make sure that there is a path, both on the business side for profitability as well as on the infrastructure side, to actually support that level of profitability and that level of growth in this market.
Jim (27:51):
Yuval, thank you so much for your time and your insights. As we all look at AI, this is not an area we all tend to think about, so really, really illuminating.
Guest (28:02):
Thank you, it was my pleasure. Thank you very much for having me.
Jim (28:11):
Thanks so much for listening to today's episode, and if you're enjoying Trading Tomorrow: Navigating Trends in Capital Markets, be sure to like, subscribe and share, and we'll see you next time.