
April 10, 2024 39 mins


Embark on an enlightening expedition through the ever-evolving world of cloud computing with our special guest, Anil Murty from Akash Network. Our conversation navigates the transformative landscape of cloud infrastructure, marking the pivotal moments that have led to the birth of the decentralized cloud. Discover why Akash's innovative approach is shaking up the industry, offering cost benefits and an enticing alternative to the traditional, long-term financial commitments of big-name cloud providers. Anil illuminates how Akash's vision is revitalizing the cloud's original promise of eliminating hefty upfront capital expenditures and offering scalability, reshaping it into a competitive landscape where resources are accessible to everyone.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Anil Murty (00:00):
If you're somebody that wants to utilize a fleet of H100s or, soon, the GH200 platform, not only do you need to commit to spending a lot of money, but in many cases, if you go to the bigger cloud providers today, you need to actually commit to a year or three years of spend with them before you can even get access to these GPUs. And so this is sort of that inflection point, I think, where

(00:21):
a solution like Akash really shines. For example, if you were to go to akash.network/gpu right now, what you would find is that you can get an H100 with an SXM interface for as low as $1.50, which, I believe, is, you know, half or less than half of what you can get at many of the other places out there.

Host (00:59):
Hi there, welcome to Buidl Crypto Today.
I have with me Anil from Akash.
Welcome, Anil.

Anil Murty (01:07):
Hey, it's great to be here.
Thanks for having me.

Host (01:17):
Great to have you. So, Anil, could you talk a little bit about yourself?

Anil Murty (01:19):
Yeah, sure, I'll go into that a little bit. So, my background: my education in college was in electrical engineering, so I have an undergrad background in electrical engineering, and even though I have a graduate engineering degree in electrical engineering as well, I actually focused on computer networks, and so a

(01:42):
lot of my coursework there was computer science classes, with a little bit of electrical engineering as well. Then, for the first several years, probably about half of my career, which is about two decades at this point, I spent my time working on embedded software. This is typically

(02:03):
device drivers for embedded devices, and so I spent a bunch of years doing that at companies like Motorola, working on consumer electronic devices and writing essentially what's called the hardware abstraction layer for devices like that.
And then at some point along that journey I moved more and more

(02:25):
towards the customer-facing portions of the business, and that's where I realized that understanding the whole product lifecycle and identifying a problem in the existing customer base, or a gap in the market, was something that I found exciting, and that

(02:49):
sort of took me towards looking at product roles as a potential transition point in my career. Somewhere along there I also ended up going to business school while I was working, and, putting that together with the experience that I had as an engineer, I ultimately ended up moving into a product role,

(03:10):
initially working for mid-sized companies, then startups, and then back to mid-sized companies. Over the second half of my career as a product person, I got to work on everything from networking and hardware devices to cloud networking devices to pure cloud and then, ultimately, monitoring and

(03:33):
telemetry-type companies. So I spent some time at companies like New Relic and HashiCorp prior to coming to Akash.
And so, given that I had a whole bunch of background in building solutions for cloud-native products and cloud-native customers, when I was approached by the folks at

(03:57):
Akash, or at Overclock Labs, which is the parent company above Akash Network and the creator of Akash Network as well, the project really attracted me, because I looked at it as a significantly new way of imagining how infrastructure gets utilized. Just looking at how the clouds have evolved

(04:19):
over the last couple of decades, it became very clear to me that the original premise that the clouds were created for is, in many cases, no longer valid, and what Akash was doing seemed to be in the right direction in terms of seeing

(04:40):
where the industry was going to go in the next few years, and that's what got me excited about Akash and got me to join Overclock Labs.

Host (04:47):
I do remember Greg, in one of his interviews, saying that the story of tech is the story of the cloud. And rightly so; based on your experience, you would naturally gravitate towards the Akash version of the cloud.

(05:09):
So could you talk a little bit more about that?

Anil Murty (05:13):
Yeah, so, like you said, having been in the tech industry for the last couple of decades: for the first few years of my career, cloud was either non-existent or very nascent, and most companies were running all of their software on-prem and offering it as a service, or just

(05:35):
shipping software as binary images that their customers would utilize. This was the early to mid-2000s. Then the concept of a cloud was invented, back in, I don't know, 2006 or 2007, by Amazon, and it really started to gain traction probably four or five years after that, where

(05:57):
you saw this inflection point where, initially, cloud was primarily targeted towards the startups and the small and medium businesses, and then, eventually, the enterprises realized that this was something they could utilize as well. That transition probably happened somewhere around 2011, 2012, which is kind of the time you can consider the cloud going

(06:20):
mainstream and beginning enterprise adoption from then on. I think for the first few years after the cloud came up, the big draw was that it essentially leveled the playing field for startup companies. If you were a startup company around

(06:42):
the time the dot-com boom happened, or in the early 2000s, and you wanted to build a software service and offer it to your customers, you had a fairly large upfront investment to make, and the cloud essentially enabled startups to get to market much faster at a much

(07:07):
lower cost, by taking away that capital expenditure they needed to spend on infrastructure, on servers and storage and all of that, and also on the resources to manage all that infrastructure. And so that was great, and we've obviously had a really good run in terms of a lot of startups being able to test out products at a really low cost, find product-market fit or, in some cases, not find product-market fit and decide to

(07:30):
abandon the idea and do something else. So it really leveled the playing field for them and allowed a lot of startups to disrupt the status quo and ultimately bring value to customers.
But what's happened in the last few years, particularly as GPUs have really taken off thanks to all the demand from AI and

(07:50):
machine learning workloads, particularly since OpenAI's ChatGPT moment, is that we're sort of going back to the traditional ways. Given the scarcity in the availability of GPUs, particularly certain high-end models like the A100s and the H100s, and soon the GH200s and the B100s from NVIDIA,

(08:15):
not only are these significantly more expensive, but in many cases they're just really hard to get. And so if you're somebody that wants to utilize a fleet of H100s or, soon, the GH200 platform, not only do you need to commit to spending a lot of money, but in many cases, if

(08:39):
you go to the bigger cloud providers today, you need to actually commit to a year or three years of spend with them before you can even get access to these GPUs. So if you go back to the start of the cloud and compare that to where we are today, the whole premise of the cloud, which was to remove that capital expenditure, the upfront

(09:02):
expenditure that you've got to do, and give you the flexibility to scale up and scale down without having to take on the ongoing expense, sort of goes away if you have to commit to a year of cloud expenditure in order to get access to a certain piece of hardware. And so this is sort of that inflection point, I think, where a solution like Akash really shines, and that's what we've been seeing

(09:24):
with a lot of our customers and users as well.

Host (09:27):
Thanks for that, anil.
So to follow up, a follow-upquestion, for that is how Akash
makes it more accessible, theGPU accessibility, as you
rightly pointed out.
Right, it's harder to gethands-on on H100s and the
higher-ups now NVIDIA is comingout with.
First of all.
That's my first question and Iremember Craig also mentioning

(09:52):
about Akash being suitable alsofor the small language models.
So yeah, if you could expand onthat.

Anil Murty (10:01):
Yeah, absolutely Would love to dig into that.
So there's a few differentthings that are at play here and
we kind of saw this coming, youknow, a year, year and a half
ago, and we're just kind of whywe, you know specifically, you
know, focused our but reallydoubled down on that strategy
and it was kind of, I think,driven by two or three things,
if I can frame it that way.

(10:21):
The first was, you know, it wasvery clear that there was going
to be a huge amount of demandfor GPU workloads because of the
growth in the amount ofapplications that are going to

(10:43):
get built in the next few years.
That was very clear.
I think it became more and moreclear after the chat GPT
movement, but it was clear tomany people even before that
that there is going to be somepoint in time whether it was
going to be six months, one year, two years, there was going to
be some point in time where thiswas going to happen.
So that's kind of number one.
The second thing we realized was that, even

(11:09):
though initially, when the ChatGPT moment happened, it almost seemed for a little bit, maybe a month or two, that OpenAI was going to be the only game in town, that they were going to basically suck all the oxygen out of everything else and everybody was going to be just building on OpenAI, that sort of went back, again, to history repeating itself. If you've been around in tech for long

(11:32):
enough, or if you have read about technology history even if you've not been around for that long, what you have seen is that there have always been points in technology history where, even if a certain technology gets invented by a really big player and is initially only available through that specific

(11:55):
player, over time there are enough movements in communities around the world that lead to open source solutions. Arguably the biggest example of that, historically speaking, is the Linux operating system. Way back in the day, in the 90s, Windows was obviously the most dominant operating system out there, and

(12:16):
today, if you look at most server workloads, as well as a lot of consumer electronics and many other services that you access through SaaS, all of them underneath run Linux, and that's a result of communities that build in the open and are

(12:49):
able to come together and create something that is, overall, going to create a better world for the people that are building. It was pretty clear in our heads that that was going to be the case even with AI.
If you look back at Akash, Akash has always had a significant portion of its code base open source, right from day one.

(13:11):
But what we did approximately a year and a half ago is we decided to go 100% open source, and this was way before even the ChatGPT moment. And not only did we decide to go 100% open source, we also decided to go to an open development model.

(13:32):
So we essentially came up with this idea of building in the open, similar to what projects like the Kubernetes project do. They have the concept of special interest groups and working groups, where people are able to propose ideas, talk about them in the open as part of a community, vote on

(13:56):
things, and then work together to implement the things that make sense from a community-driven project perspective. So this is the switch that we made about a year and a half ago, and we, to this day, operate the same way. Literally every single decision that gets made, gets made in the open. It's documented in our GitHub repository.

(14:17):
All of our code base sits there as well, and so we made this transition. And then, sure enough, in the first few months after OpenAI released ChatGPT and that whole inflection point happened, you saw competing open source models being released for similar types of functions or

(14:39):
capabilities as what OpenAI was releasing. And since then, which is about a year or a year and three months now, we have seen a whole bunch of new open source models get released. Hugging Face has been an amazing repository for all those models, and for everything from image generation to large

(15:00):
language models to small language models and everything in between, you can now almost always find an equivalent open source version of a closed source model. So our strategy of being an open company aligns really well with that, and that's worked out really well for us. Now, taking those two things and marrying them with one

(15:22):
of the questions that you asked, which is how do small language models fit and how do large language models fit: essentially, the way things come together really nicely for us is that, given that we have been a crypto-native company, or a crypto-native project, we have a blockchain-based mechanism for matching supply with demand.

(15:42):
So, essentially, the way Akash works, for folks that are not familiar with it, is that we're essentially a two-sided marketplace. On the one side you have supply, which is compute supply, whether it's regular compute or accelerated compute in the form of GPUs. Now, all of this supply is available on the network in terms of individual providers. A single provider can have a

(16:05):
single server, they can have 10,000 servers, they can have 100,000 servers, any number, and each of these providers is an independent entity; no single person owns the entire infrastructure. So even though Overclock Labs is the company, or the organization, that created Akash Network, we don't own all the infrastructure. We own a teeny, tiny portion of it.

(16:27):
We're one of the providers on the network, and there are over 75 of those providers today. And then, on the other side of this, are people that want to deploy applications onto that compute infrastructure. The way the matching of these two sides happens is through a blockchain, and the reason we use a blockchain for that is because, number one, it lets us do this in a very

(16:52):
automated fashion: being able to easily create a smart contract between somebody that wants a certain resource and somebody that has that resource to give can be done very easily on blockchains, using smart contracts or programmatic ways. And so what we've implemented is a two-sided marketplace where you can get the best possible resource, in terms of price and performance, for the workload that you want to run.
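To make that flow concrete, here is a minimal sketch of the order/bid matching Anil describes, written in Python. All names and structures are illustrative, not Akash's actual on-chain types, and picking the lowest-price bid is just one reasonable policy; on the real network, the tenant chooses which bid to accept.

```python
from dataclasses import dataclass

@dataclass
class Order:
    """Demand side: a tenant's resource request (illustrative fields)."""
    cpu_units: float
    gpu_units: int
    max_price: float  # ceiling the tenant is willing to pay

@dataclass
class Bid:
    """Supply side: a provider's offer against an open order."""
    provider: str
    price: float

def match(order: Order, bids: list[Bid]) -> Bid | None:
    """Pick the cheapest bid at or under the tenant's price ceiling."""
    eligible = [b for b in bids if b.price <= order.max_price]
    return min(eligible, key=lambda b: b.price) if eligible else None

order = Order(cpu_units=6, gpu_units=1, max_price=100.0)
bids = [Bid("provider-a", 80.0), Bid("provider-b", 65.0), Bid("provider-c", 120.0)]
print(match(order, bids))  # Bid(provider='provider-b', price=65.0) -> becomes the lease
```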
And so, given that we have a crypto background, we have a

(17:14):
natural affinity, or we have a good portion of our community that consists of people that have had GPU mining equipment. If you look back to 2017, 2018, 2019 and 2020 as well: similar to how NVIDIA has seen a huge boost from AI workloads in the last one to two years, prior to

(17:41):
that, the prior inflection point that NVIDIA had was from GPU mining. Some of the people that were around in the GPU space then would probably remember that. So there's a whole bunch of GPU capacity sitting with miners even now, whether it was for Bitcoin mining or previously for Ethereum mining or any others, and a lot of these chains have either transitioned away from proof-of-work blockchains to proof-of-stake or, in the case of

(18:03):
Bitcoin, it's getting more and more expensive, with each halving, to mine Bitcoin. So there is, as a result, a lot of GPU capacity out there that people have already invested money into and that they would love to monetize. And while those GPUs may not be the latest and greatest, given that they've been around

(18:26):
for four, five or six years, in many cases they still serve as a really good platform for doing inference. So while you may not be able to train the largest of the models on these older GPUs, many of them work really great for inference.
For example, one of the most common GPUs that we get requests

(18:46):
for inference today is the RTX 4090, believe it or not, and what people have found is that the price-to-performance ratio of an RTX 4090 is really good when you're trying to do basic inference, whether it's running something like Llama or other LLMs for

(19:09):
language responses as a natural language processing engine, or wanting to do image generation using Stable Diffusion or any of the other image generation models out there. They work as a really good platform for that type of stuff. So that's where us being able to match all

(19:30):
of this supply from the crypto and mining communities with people that want to do small language models, or just pure inference on models with fewer parameters, works great.
Now, when you think about the higher-end GPUs, which is primarily people that want to be able to run models with tens of billions of parameters, or want to be able to do large-scale

(19:55):
training, what we have found is that we are able to actually bring in crypto-driven incentives. We have the concept of a community pool within our protocol that has several million dollars available for us to deploy as part of community incentives,

(20:17):
and so what we're able to do is actually source a lot of these high-end GPUs as well and offer them at a significantly competitive price, relative to anybody else that's out in the market. So, for example, if you were to go to akash.network/gpu right

(20:37):
now, what you would find is that you can get an H100 with an SXM interface for as low as a dollar fifty, which I believe is half, or less than half, of what you can get at many of the other places out there. So I hope that answers some of the questions that you have.

Host (20:50):
Yeah, yeah, that definitely answers my questions, and actually brings up more questions that I was thinking of while you were talking about this. So, I have used the Akash service in the past and it's amazing: I hosted a blog, so everything is kind of containerized, there are nice templates, and it's very easy to use. And I was going

(21:12):
to get to the ease of use for non-crypto-native users. That was like a year ago, and now things might have improved even more. So, any improvements there in terms of how the GPUs or the GPU marketplace work? Because that's relatively new, right?

Anil Murty (21:29):
Yeah, great question. So the GPU marketplace was launched... I mean, time flies, right? We actually launched the GPU marketplace in beta around May or June of last year, May 2023, and then GA'd it, I think, a month or two after that. So it's been around for about six or seven months now, and,

(21:49):
yeah, we're quickly coming up on almost a year. So, from the perspective of being able to use GPUs or request GPU resources on the network: the way we have implemented GPU support is to match exactly the way regular CPU resources work. So for any sort of deployments that you did on regular, non-accelerated compute, I don't know how long

(22:14):
ago that was, maybe a year or two ago, you'll find that the deployment workflow is exactly the same even with GPUs.
So, just like before, you can write this thing called a stack definition language file, or an SDL file as we refer to it, which is effectively like a Docker Compose file, for those that are infrastructure nerds listening to this. What you

(22:36):
do there is you basically say: hey, these are the services that I want to be running. A service could be a backend service, it could be a frontend service, it could be a machine learning workload, it could be an inference app, whatever you like, and you can have multiple of these services specified inside that file. Then, for each of the services, you specify something called a compute profile, which is

(22:57):
basically saying: these are the resources, or this is the amount of resources, that I think the service is going to need in order to operate. So the compute profile typically is: I need six CPUs, I need one GPU, I need a gigabyte of RAM, I need two gigabytes of storage. So you specify all these things and then submit this job onto

(23:19):
the network, and then what you get back is a whole bunch of bids from various providers.
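Since the SDL is just a structured file (YAML in practice), here is a rough sketch of a deployment along the lines Anil describes, built as a Python dict and dumped to YAML. The field names are modeled on Akash's published SDL format, but treat this as an approximation and check the current docs before using it.

```python
import yaml  # pip install pyyaml

# Approximate shape of an Akash SDL ("deploy.yaml"): one service plus the
# compute profile Anil describes (6 CPUs, 1 GPU, 1 Gi RAM, 2 Gi storage).
sdl = {
    "version": "2.0",
    "services": {
        "web": {
            "image": "nginx:latest",  # any container image
            "expose": [{"port": 80, "as": 80, "to": [{"global": True}]}],
        }
    },
    "profiles": {
        "compute": {
            "web": {
                "resources": {
                    "cpu": {"units": 6},
                    "gpu": {"units": 1},
                    "memory": {"size": "1Gi"},
                    "storage": {"size": "2Gi"},
                }
            }
        },
        "placement": {
            "akash": {"pricing": {"web": {"denom": "uakt", "amount": 1000}}}
        },
    },
    "deployment": {"web": {"akash": {"profile": "web", "count": 1}}},
}

print(yaml.safe_dump(sdl, sort_keys=False))
```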
Each bid typically consists of information about the provider: where it is located, what the uptime on the provider has been for the last seven days, what the name of the provider is and then, of course, what the cost is, i.e. what this provider is going to charge you for running this

(23:42):
workload for one month. So you get all these bids back, and then you go ahead and accept one of the bids, and the moment you do that, the workload gets deployed onto that specific provider. What you get back is an endpoint that gives you access into the running container instance, and if you expose certain ports, then, you know,

(24:04):
if one of those ports happens to be port 80 or 443, you essentially have an HTTP interface into that as well. So the entire workflow is exactly the same as what it was with CPUs. Nothing's changed, so it should be totally familiar if you go try it.
The other aspect that you asked about was how we make it easy for non-crypto people to access this,

(24:26):
and that's a really good question, because obviously a big share of the AI workloads today are being built and deployed by folks that are not crypto-native, right? To that end, there are a few things ongoing within Akash and within Overclock Labs. First and foremost, as you probably know from past

(24:50):
conversations and from following Akash, we have a fairly vibrant ecosystem and a fairly vibrant community. One aspect of our community is that there are a bunch of people actually building solutions on top of Akash. So, similar to how, when AWS and Azure and all of these services took off, you had a bunch of people building

(25:11):
monitoring solutions, building things like Heroku or Vercel, these kinds of things that utilize AWS compute underneath, there are a bunch of teams building similar solutions that utilize Akash compute underneath. In fact, one of those teams was named CloudMOS; they were called CloudMOS because they're essentially

(25:35):
built on the Cosmos, or they're part of the Cosmos ecosystem, and they were primarily targeting Akash compute as the platform that they would build on top of.
That team was actually acquired by Overclock Labs about seven or eight months ago, and they're actually part of our team now. They built a client that takes our basic

(25:59):
APIs and our CLI and implements essentially a UI on top of that, to make it easy to deploy. And now that those folks are part of our team, we've rebranded that to console.akash.network. If you go to console.akash.network, what you'll see is what looks like a simpler version of the AWS console, but specifically for

(26:20):
Akash. So that's already there; you can check out console.akash.network and see what that looks like.
What you will see in the next few months is us working on more curated workflows for AI (there's already a bunch of templates out there, but even better curated workflows) and also potentially offering a credit-card-based

(26:41):
interface, and not just a crypto- and wallet-driven interface. So that's one aspect of it. The second aspect is that there are other teams out there. There's a team called Spheron that has built a UI app that already has a credit-card-based interface that can be utilized for deployments. And then, separate from teams that are directly building on us,

(27:01):
we're also in partnership talks with certain Web2 companies. These are companies that have built essentially AI inference platforms, companies that have built a UI and API layer that allows people to utilize open source models and

(27:24):
abstracts away all the infrastructure components from that whole experience, right? So whether you're utilizing AWS underneath or Azure underneath or Akash underneath, all of that is abstracted away, and what you, as a data scientist or a machine learning engineer, get is an API interface or a UI interface where you can just say: hey, this is the model I'd like to run, I would like to

(27:47):
run it really fast, or medium, or slow, and either run the model right now and give me the outputs, or give me a programmatic interface where I can request that the model be run with this data set and these parameters, so I can tune the model as well. So we have several talks going on, with companies that you'll be hearing

(28:08):
about in the next few months, that are Web2 companies that have built these kinds of platforms and are going to be using Akash compute.

Host (28:16):
Yeah, so it looks like you're fully realizing the Sky Cloud concept with this, where you can define your compute parameters and then it does what it does in the background. And, as you described, with the fast, medium and slow options, it depends on the type of jobs you're running and the time of

(28:37):
day, so you can have that price selectivity as well. It sounds fantastic.

Anil Murty (28:44):
Yep, amazing. The flexibility of Akash, I think, is that it lets you not just choose the kinds of compute resources and make the tradeoffs between price and performance that are applicable to your specific application, but it also gives you the option of

(29:04):
choosing to be as decentralized or not decentralized as you want to be. So let's say you use Akash for a few months and you decide that these three of the 75 providers are the ones you like the most, and those are the only ones you want to be deploying to: you can programmatically set it up so that you always default to those providers. Or, if you're a

(29:25):
hardcore decentralization fan, you can choose a different provider every single day and build your application to do that. So that flexibility, in being able to decide what path you want to choose, is essentially what I think makes Akash really unique.
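Both policies Anil mentions here are client-side decisions made when choosing which bid to accept. A hedged sketch, with a hypothetical bid shape, of what "pin to three trusted providers" versus "rotate providers" might look like in your own deployment tooling:

```python
import random

# Hypothetical provider addresses and bid records -- on Akash this
# filtering would happen in your own tooling before you accept a bid.
PINNED = {"provider-a.example.com", "provider-b.example.com", "provider-c.example.com"}

def pick_pinned(bids: list[dict]) -> dict | None:
    """Only ever accept bids from an allowlist of trusted providers."""
    trusted = [b for b in bids if b["provider"] in PINNED]
    return min(trusted, key=lambda b: b["price"]) if trusted else None

def pick_rotating(bids: list[dict], recently_used: set[str]) -> dict:
    """Prefer a provider you haven't used lately, for maximum decentralization."""
    fresh = [b for b in bids if b["provider"] not in recently_used]
    return random.choice(fresh or bids)
```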

Host (29:48):
So this brings me to a bigger question. Akash was talking about decentralized cloud before all this DePIN narrative, right? I had my interview with Greg almost a year, two years ago, and we were talking about these things then. So where do you see the convergence now of AI and crypto, in the

(30:09):
broader scheme of things?

Anil Murty (30:11):
Yeah, that's been a really hot topic for the last few months, hasn't it? So you're absolutely right. Basically, what we have seen, at least in the last several months, is something that we have been passionate about for several years now, Greg and Adam way longer than I have: this idea of decentralizing the infrastructure, or the compute

(30:34):
infrastructure and the cloud, which, in many ways, is a public utility at this point. That's something that has really taken on a narrative for this specific crypto cycle coming up. And so, as with all narratives, similar to how, in the last crypto cycle, DeFi and NFTs and a few other things were

(30:55):
pretty hot and everybody wanted to jump on them, you now have a bunch of people trying to jump on this decentralized physical infrastructure narrative, or the DePIN narrative, and a bunch of people trying to claim that they are, quote unquote, decentralized compute marketplaces. What's been interesting to watch is that, in the absolute worst case, some of

(31:18):
these projects don't actually have a product underneath, and they're just talking about things that, in many ways, are just copied messages from projects like Akash and others that have been at this for several years now. That's the worst-case scenario: they haven't really built anything, but they're just talking about it. And then, in the best-case scenario, there are projects that have

(31:41):
legitimately built something, but they've not taken the effort to truly think about decentralization at the core.
So they may have gone and acquired compute from one, two or three sources and then offered that as a decentralized solution. That's not the true definition of decentralization. That's just taking the regular, good old approach of

(32:05):
going and sourcing compute, but sourcing it from multiple sources yourself, right? I'm not saying it's a bad solution; it works, and it's better than the first one, which is just claiming things when you don't really have anything, but it's also not really decentralized. What's also interesting about a lot of these solutions is that they're all closed source.

(32:29):
They definitely don't open up the source code for others to look at, similar to what Akash has done, but they don't even open up their metrics. In the case of Akash, you can actually go to a web page called stats.akash.network. What it is, basically, is it shows you all the statistics of things that are happening on Akash,

(32:50):
every single second, every single minute, every time a block is created. What you can see there is the total number of providers on the network. You can see the total amount of compute in terms of GPU, CPU, storage and memory. You can see the total number of leases being created; a lease is basically when one workload gets deployed onto one provider,

(33:10):
so it's like one application being deployed. And you can see the total amount of compute resources and money being spent on the network. All of these metrics are basically stored on the blockchain, so it's not something that we, as Overclock Labs, or anybody

(33:31):
in the community, can go and spoof or mess with or fake in any way, because anybody can query these parameters from the blockchain and prove us wrong if we tried to do that.
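In that "don't trust, verify" spirit, any Cosmos SDK chain, Akash included, exposes a REST (LCD) API you can query directly rather than trusting a stats page. A minimal sketch: the node URL is a placeholder, and the Akash-specific query paths for deployments and leases vary by module version, so this only hits the standard Tendermint block endpoint that Cosmos SDK chains serve.

```python
import requests

# Placeholder URL -- substitute a public Akash LCD/REST endpoint.
NODE = "https://your-akash-lcd-endpoint.example.com"

# Standard Cosmos SDK endpoint for the latest block; Akash-specific paths
# (deployments, leases, providers) are versioned, so check the chain docs.
resp = requests.get(f"{NODE}/cosmos/base/tendermint/v1beta1/blocks/latest", timeout=10)
resp.raise_for_status()
header = resp.json()["block"]["header"]
print("chain:", header["chain_id"], "height:", header["height"], "time:", header["time"])
```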
So, in that sense, I don't know of any other project out there, other than Akash, that is fully open source, fully decentralized

(33:53):
and exposes all of its statistics on a blockchain for anyone to query, within the compute DePIN marketplace. I don't know of any other compute DePIN project that is doing those three things, and if there is one, I would love to learn about it.

Host (34:10):
Yeah, well said. I've interviewed some folks competing with you guys, in healthy competition, and I haven't seen such clear statistics from anyone so far, so that's good to see. So, as far as the convergence of AI and crypto goes, I mean,

(34:35):
clearly there is one solid use case that Akash is building for, which is providing GPU accessibility, at the inference level as well. And with more efficient models coming, and inferencing getting better

(34:57):
on commodity hardware, you can definitely see the utilization of GPUs going even higher. I'm currently looking at the stats and I do see a lot of utilization happening there month over month, so that's good to see. Okay, one of the last questions, and

(35:21):
I've started doing this with my speakers: what advice would you give to somebody who comes next on my show? You could say, regarding something in the crypto sphere that you have learned so far.

Anil Murty (35:41):
Actually, just before I answer that: I just realized I didn't answer the previous question completely, so I'll quickly answer that as well. I talked a lot about the new projects coming up and how Akash potentially is different from those, but I didn't quite touch on the AI-across-crypto narrative itself, if you want to call it that.

(36:03):
That narrative makes complete sense to me, because one of the biggest things that people talk about in the non-crypto world today with regards to AI is how AI is being controlled by a handful of companies, right? There is this huge outcry among a lot of people that a few companies have enough compute capacity, and are

(36:25):
capable of acquiring a lot of compute capacity into the future, and these are the companies that are going to be able to train the best models, run the best models and all of that. And I think this is where crypto really makes sense to me, because it's the one way that we can build systems in the open, allow easy, programmatic aggregation of compute

(36:48):
capacity, the way Akash is doing, and crowdsource not just the development of models but also the accessibility of models, as well as compute, in an open fashion. Being able to do this with crypto is a lot easier to make programmatic, and a lot easier to

(37:11):
source from a community or crowdsource, than it is to do without crypto, and that's why that makes complete sense to me.
Now, to answer your last question, which is what advice I might have for the next person that comes along: I think the biggest advice I would give to someone, and this is coming from me as someone who was not in crypto before I

(37:33):
started working on Akash and joined Overclock Labs, is about how you think about a crypto project. I know there are a lot of people out there that build crypto projects with the pure intention of shilling a token, making a quick buck and calling it a day. I think it'd be really nice to see more people think about the real utility of crypto and how it can be applied specifically

(37:57):
to areas of our life that require decentralization, or that need an incentive mechanism to make them more like a public utility, without actually making them a public utility in the sense of a fixed cost or a fixed price and no competition.

(38:18):
I think what crypto is really good at is being able to sustain innovation while, at the same time, leveling the playing field and giving people access to technology that they otherwise would not have access to, while also allowing entrepreneurs to generate wealth, retain value or capture a certain portion of the value they

(38:40):
create, without getting democratized completely or commoditized completely. So I think that's my overall advice: think of solutions that could really be solved uniquely only with crypto, as opposed to by a Web2 solution, rather than doing crypto just for the sake of it.

Host (38:58):
Great. Thanks, Anil, for chatting with me. And folks here listening, where can they find you?

Anil Murty (39:07):
Yeah, so for Akash, you can find us at akash.network on the web. And then I'm on Twitter; my handle is underscore Anil underscore Murty underscore. And I'm also on LinkedIn and the usual places that you'll find someone.

Host (39:24):
Awesome, thank you.

Anil Murty (39:25):
Thank you.