
April 21, 2024 31 mins


When ChatGPT was launched in November 2022, few could have predicted the rapid transformation the world would undergo in the following months. It emerged as the fastest-growing consumer technology in history, catching many by surprise.

In the year since its launch, ChatGPT has revolutionized nearly every aspect of the tech industry. It enables computers to create articles, finish homework assignments, and generate art, fundamentally altering our understanding of work, creativity, and the very notion of 'search'.

As companies explore advanced AI, they often struggle with managing the necessary infrastructure, including the specialized chips that power these technologies, which can slow their progress. Brev, led by Nader Khalil, offers a strategic solution that simplifies the complex landscape for companies striving to leverage AI.

Brev’s platform is designed to allow companies to focus on leveraging AI for innovation without the overhead of managing hardware, ensuring they can harness these powerful technologies efficiently.

This GPU capacity shortage highlights the need for more efficient resource management. The lack of GPU availability is affecting not only the largest customers in the world, but also the startups and midsize companies that really need access to it.

Today, on Things Have Changed Podcast, we're diving deep with Nader Khalil, Co-Founder and CEO of Brev, into how they support the generative AI boom, ensuring that businesses can innovate freely with AI and reducing the trouble of managing high-demand GPUs like the A100 and H100.





Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jed Tabernero (00:03):
In February 2024 alone, ChatGPT had 1.6 billion visits to its website. And people have not stopped buzzing about generative AI.

(00:24):
Basically computers writing articles, doing homework, or creating art. It's kind of reshaping how we think about work, creativity, and even search.

(00:45):
As companies explore advanced AI, they often struggle with managing the necessary infrastructure, including the specialized chips that power these technologies, which can slow their progress. Brev, led by Nader Khalil, offers a strategic solution that simplifies the complex landscape for companies striving to

(01:08):
leverage AI.

Nader Khalil (01:09):
Brev is a dev tool that makes it really easy to use GPUs. We take the hardware requirements that are needed. We take the software that needs to get installed on top of that as well. And we can create essentially a one-click deploy button for any of these.

Today on Things Have Changed Podcast, we're going to talk to Nader, CEO and co-founder of Brev, about how he's planning on making GPUs

(01:33):
easy to use.
I think that there's a lot of people doing some really great work, and my perspective is, okay, how can we work with them and really just pave an epic road for some users.

Shikher Bhandary (02:20):
With this whole wave of AI, this wave of LLMs, and using a lot more specialized compute, you really need a good foundation to even begin. Companies and startups are so desperate for this hardware. The lack of GPUs, or the GPU availability out there in the world, is not only affecting the largest customers in the world,

(02:43):
but also startups and midsize companies that really need access to it. So today we are super excited to have Nader. He is one of the first five founders on Things Have Changed Podcast, and we've come a long way since then. And welcome back to Things Have Changed again.

Nader Khalil (02:58):
Yeah, thanks for having me. Compute is definitely a bottleneck. It used to be something where, when you finished your development process, you could then burst to the cloud to actually deploy your thing and serve it to users. But now you actually need to use the cloud at the earliest part of the development lifecycle. And not just that, but everyone's trying to figure out how to roll AI into their stack. And yeah, there's never been more of a bottleneck.

Shikher Bhandary (03:19):
Yeah, so this could be like a very brief overview for our audience, who might have heard about this whole AI boom. They've probably seen Nvidia, and CNBC is covering Nvidia 24 hours a day, right? So they probably know the stock is up 500,000%. But outside of all that, what have you seen over the past year, and how has that fed into what you are building right now?

Nader Khalil (03:42):
Yeah, I think generative AI was a catalyst for a lot of folks to try to figure out how to take advantage of the next wave of technology. Typically you see, with every new wave of technology, there's a new cohort of startups that enters the space. What's been very unique about now is there's pressure from the largest companies and the boards of the largest companies; everyone's wondering, what's

(04:02):
their AI strategy? What's their AI strategy? And this wave feels very unique in that everyone, startups, big companies, governments, are all trying to figure out what's going on. How do you leverage it? How do you take advantage of it? And yeah, it's been super exciting.

Jed Tabernero (04:15):
We're talking about apps like ChatGPT and Gemini. Gosh, my mother has even tried ChatGPT, dude. That's how universal generative AI has become. There are more than a hundred million people who use ChatGPT on a weekly basis. So when you talk about this wave and the changing priorities of companies, even at these massive tech startup levels, what got

(04:38):
you thinking about this solution of Brev, and how did that bridge the gap between your understanding of what was going on in the current space and how you could provide value to these folks, developers and engineers in your space specifically?

Nader Khalil (04:53):
I think coming up with ideas and trying to solve them typically doesn't work. Ideas are probably shit until the market gets to help shape them. What we had actually done initially with Brev, which happened around November 2020: when we were scaling our previous startup, we were dealing entirely with infrastructure issues, so we weren't able to talk to our stakeholders and build

(05:15):
things that they wanted. So the initial goal, the kind of naive goal for Brev, was: hey, let's build an infrastructure tool that essentially takes that burden away, because it didn't have anything to do with our stakeholders. And so we initially leaned into serverless. We said, hey, serverless had a lot of promise. What if we could build a serverless platform where you're not having to worry about this at all? What we learned in doing that is that if you go to serverless

(05:37):
because you don't want to worry about servers, or rather you don't want to manage servers, you quickly find yourself managing serverless and all the different constraints that come with that: the different runtimes, the timeouts, all these things that you're patching. And so we pivoted from there and said, hey, what if we built a platform that made it really easy to use servers? It's not serverless. It's just the easiest way to use servers.

(06:00):
And that found us accidentally building cloud dev environments. The idea was like, hey, you can just code in the cloud because it's now very simple to do. But that really lacked an inflection point; we were still struggling with go-to-market. We had some users using us. We didn't know why they were using us. And then one of our users was the CTO of an AI company. And he was just like, hey, I'm using you for all of my CPU development.

(06:20):
Could I use you for my GPU development as well? This happened in June or July 2022. And we looked into the problem. We were YC in our previous company, so I emailed every YC company that said they were AI. And this was before ChatGPT, so that wasn't as large a share of every cohort. So if a company had labeled themselves as AI, they were

(06:40):
probably training their own models, and that validated the problem space. And then we just leaned in, and we realized making a dev tool that makes it really easy to use GPUs actually makes more sense. The pain of utilizing one is much harder, the complexity is much harder, and especially now with ChatGPT being a big catalyst and everyone trying to figure out

(07:00):
their AI strategy, there are more people that don't have experience provisioning cloud infrastructure, application developers that are trying to figure out how to fine-tune and train models and use these cloud resources.

Jed Tabernero (07:12):
How has that been for startups? Because what I'm noticing these days is that obviously we have these massive applications like ChatGPT, who probably have reserved capacity years in advance or have special relationships that give them capacity prioritization that no other startup has. I feel like startups these days that are developing things in the AI and

(07:34):
ML space are having a tough time with it. Are you seeing a lot of those people come to you for help?

Nader Khalil (07:40):
Yeah, absolutely. And there are research institutions, universities, they'll come to us for their computational infrastructure. The way I look at it is, if a cloud has the capacity or the capability to build a large cluster, for example if Azure gets new GPUs, they will give them to OpenAI, which needs them. And so the way that startups are typically getting GPUs is by working with smaller data

(08:01):
centers and cloud providers that have healthy access to GPUs. It's not a cluster that OpenAI is going to get access to, but it's sufficient supply. And so there are a couple of companies doing great work in the space, and they're starting to service other larger startups as well. So there's Lambda GPU Cloud, Crusoe Cloud, which we work with. Dean Nelson from Cato Digital, they take GPUs and find excess capacity in data centers,

(08:25):
and they'll surface those as well. There are decentralized GPU marketplaces like Akash, which we're working on an integration for. There are ways for startups and teams to still get access to GPUs. It's just a little trickier.

Shikher Bhandary (08:38):
Got it. So that's where your platform comes in: the relationships that you're building with the different smaller data centers, as well as having the optionality for a customer to be able to tap into whatever AWS or GCP can potentially provide, but you also have a deeper pool by accessing all these different entities and not just the

(08:59):
hyperscalers.

Nader Khalil (09:01):
Yeah, absolutely. And from a high level, Brev is a dev tool that makes it really easy to use GPUs. So it's not just getting a GPU; it's then also making sure that you're using it properly, getting everything set up and using it efficiently. Like, you don't need an A100 or an H100. There are other GPUs that have a lot of GPU memory. There are things like the L40S, which we're working with Crusoe Cloud to surface. And so a lot of that know-how is something that's just built into

(09:21):
our product. And so you can start from one of our guides, like fine-tuning or training Mistral. And so what Brev does is we take the hardware requirements that are needed. We take the software that needs to get installed on top of that as well. And we can create essentially a one-click deploy button for any of these. And so that will go and provision the GPU from one of the data centers or clouds that we work with, and then set up all the software and just put you in it.

(09:42):
And so the idea is that you're not just avoiding worrying about the shortage, but also about getting the GPU itself set up, the CUDA drivers, or even packaging it and bundling it once you're ready for inference.
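To make the "one-click deploy" idea concrete, here is a minimal, hypothetical sketch in Python of the two halves Nader describes: a hardware requirement plus the software that has to be installed on top of whatever instance gets provisioned. This is not Brev's actual API; the HardwareSpec class and setup_script helper are invented for illustration, and the exact driver package names vary by image.

```python
# Hypothetical illustration only (not Brev's API): a "one-click deploy" pairs
# a hardware requirement with the software that must be installed on top of it.
from dataclasses import dataclass


@dataclass
class HardwareSpec:
    gpu_model: str        # e.g. "A100", "H100", "L40S"
    gpu_memory_gib: int   # minimum GPU memory the workload needs
    gpu_count: int = 1


def setup_script(python_packages: list[str]) -> str:
    """Compose a first-boot shell script a provisioner could run on the instance."""
    lines = [
        "#!/usr/bin/env bash",
        "set -euo pipefail",
        "# Install NVIDIA driver + CUDA toolkit (exact package names vary by image)",
        "sudo apt-get update && sudo apt-get install -y nvidia-driver-535 nvidia-cuda-toolkit",
        f"pip install {' '.join(python_packages)}",
    ]
    return "\n".join(lines)


if __name__ == "__main__":
    spec = HardwareSpec(gpu_model="A100", gpu_memory_gib=40)
    print(f"Requesting {spec.gpu_count}x {spec.gpu_model} with >= {spec.gpu_memory_gib} GiB")
    print(setup_script(["torch", "transformers", "datasets"]))
```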

Jed Tabernero (09:54):
How much time do people really spend setting up GPUs? How much time do they spend on infrastructure stuff? That might make it interesting for those of us outside the industry to see, okay, that's actually a huge value prop, right? Because all of a sudden you don't have to spend time thinking about how to set up the GPU. You can spend more time thinking about how to make your model better. You can spend time doing the stuff that you really want.

(10:17):
So if you could help us, the folks who aren't super familiar and not smart enough to understand this space, understand it a little better, that'd be great, man.

Nader Khalil (10:22):
Your CPU is essentially what's doing the computations on your computer. It's like the brain of your computer. GPUs are really unique in that they're very good at doing matrix math. They were specifically designed for graphics, for video game graphics. When you're moving pixels around on a screen, matrix math helps do that efficiently. And what was really great is that matrix math is also very useful for AI.

(10:43):
A few years ago, while AI wasn't the hottest thing, NVIDIA chips were already being used there. And that's why NVIDIA has such a lead on other companies: they had already been focusing on graphics, that turned out to be really useful for AI, and no one was really looking at the space yet. As far as getting the GPU set up, a lot of application developers are typically writing at a much higher level in the

(11:05):
code base. You're not dealing with actual hardware GPUs. You're compiling things down to CUDA kernels, you're putting things onto the hardware, and it's not so much that it's harder, the complexity is just very different. And so that's where, I think, as more people are getting into AI development, the pie of who's trying to use GPUs is essentially growing, and they don't typically have experience going

(11:27):
down to these lower levels, like the lower-level hardware. And so that's where a tool can work really well. There's also more to it than just setting up the GPU. It's also understanding which GPU you need. GPUs have GPU memory. That's typically the bottleneck. The GPU memory is essentially the amount of a model or the amount of your data that it can go through at any given time. And so if you have more GPU memory, you can do things

(11:47):
faster.
There are also different things like FP8, which certain GPUs have and some do not, and it's harder to really leverage these things. So there are different capabilities in different hardware. And that's where a tool like Brev becomes really handy, because we can essentially put together templates that leverage the capabilities of different GPUs without making a user actually have to think about what hardware

(12:08):
they're using, unless they want to, in which case you can just spin one up.
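As a rough illustration of the GPU-memory point, here is a back-of-the-envelope calculation of how much memory just the weights of a model take at different precisions. It only counts weights; activations, KV cache, and optimizer state (for training) add more on top, so treat the numbers as a floor, not a sizing guide.

```python
# Back-of-the-envelope: memory needed to hold a model's weights alone.
BYTES_PER_PARAM = {"fp32": 4.0, "fp16": 2.0, "bf16": 2.0, "fp8": 1.0, "int4": 0.5}


def weight_memory_gib(num_params: float, dtype: str) -> float:
    """GiB required to store `num_params` parameters at the given precision."""
    return num_params * BYTES_PER_PARAM[dtype] / 1024**3


if __name__ == "__main__":
    seven_b = 7e9  # a 7B-parameter model, roughly Mistral-7B / Llama-2-7B scale
    for dtype in ("fp32", "fp16", "fp8"):
        print(f"7B params @ {dtype}: ~{weight_memory_gib(seven_b, dtype):.0f} GiB")
    # ~26 GiB at fp32, ~13 GiB at fp16, ~7 GiB at fp8: one reason a 48 GiB L40S
    # (or even a 24 GiB card) can host a model that a naive fp32 estimate would
    # push toward an 80 GiB A100/H100.
```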

Jed Tabernero (12:11):
Makes a lot of sense. And I really liked the quote that you have: it enables builders to focus on what they're building rather than what they're building it on. I really liked that quote just because it helped me at least understand, okay, it's an optimization tool. You're optimizing for all of those things that we don't have to think about as builders. Right.

Nader Khalil (12:30):
Yeah. And also, I think something that's often forgotten about AI/ML workflows: if you're fine-tuning or training a model, it's possible to give it too much training data, and now it's overfitting to something that you trained it on. A lot of this stuff is actually like cooking, you're putting in a little bit of salt and some pepper and you're trying things out. And so the goal isn't that there's this one click that's going to make it work.

(12:51):
The goal is that you give someone a tool that makes it so that they can iterate very quickly. And so you're going to, as an organization, have data sets that you didn't think about training with, or data sets that you want to try to see if you get better results from a model. You might be able to take a much smaller model that's way cheaper, but give it a really good data set and get way better results than even GPT-4. And so the goal for Brev is, how
(13:11):
can you give someone a tool that gets them to be comfortable making these rapid iterations? Because that's the dev flow, right? How can you go from fine-tuning and trying something out to then actually running inference on that model, testing it out, seeing how it's performing? It's not a task that, once it's checked, it's done.
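A minimal sketch of the "iterate and watch for overfitting" loop described above: train on one split, evaluate on a held-out split, and stop as soon as validation loss stops improving. A toy model and random data stand in for a real fine-tune here; the early-stopping pattern is the point, not the model.

```python
# Toy early-stopping loop: the validation split tells you when more training
# (or more data) has stopped helping and the model is starting to overfit.
import torch
from torch import nn

torch.manual_seed(0)
X_train, y_train = torch.randn(512, 16), torch.randn(512, 1)
X_val, y_val = torch.randn(128, 16), torch.randn(128, 1)

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

best_val, patience, bad_epochs = float("inf"), 3, 0
for epoch in range(100):
    model.train()
    opt.zero_grad()
    loss_fn(model(X_train), y_train).backward()
    opt.step()

    model.eval()
    with torch.no_grad():
        val = loss_fn(model(X_val), y_val).item()
    if val < best_val - 1e-4:
        best_val, bad_epochs = val, 0  # still improving on held-out data
    else:
        bad_epochs += 1
    if bad_epochs >= patience:
        print(f"Stopping at epoch {epoch}: validation loss no longer improving")
        break
```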

Shikher Bhandary (13:29):
You mentioned iterations, Nader, and we've read the news: it's freaking expensive, because of the obvious supply-demand mismatch, to get access to any of this hardware, right? These really sophisticated, specialized GPUs and the compute. I know that's one of the big features within your platform,

(13:53):
where there is a greater focus on understanding the cost of the product that you will use, and not just understanding it but also helping you optimize it for your success, right? So can you talk a bit more about how that feature came through? Was it just using some GPUs from your different vendors and

(14:14):
realizing, hang on, I have to pay them like $15,000 for six hours of work? So how did that kind of come through to you?

Nader Khalil (14:21):
I think anyone who's spun up cloud resources has dealt with the pain of leaving them on and then seeing that bill. Usually, as a developer, you're not thinking about the work that you're doing being metered, but suddenly, when you're using cloud resources, it is: every second that you're in your code editor or in your Jupyter notebook, you're actually paying, and that doesn't feel as good.

(14:42):
And so we're working on a few things here to try to make this easier. We're soon going to be implementing automatic stopping of instances that are not being utilized. The first step is actually showing you your utilization. Another thing is that you might not be utilizing your compute. So even now with Brev, you can start on a cheaper GPU, or even a CPU, and just move into a GPU instance when you're ready. And so giving you flexibility on the compute for what you're

(15:05):
running is an important step. But there's definitely a lot more to come here. It's a hard problem, definitely one of the main discomforts with starting to burst out to the cloud. Something really funny, actually: Dean Nelson, the CEO of Cato Digital, I talked to him yesterday. To your point, it's expensive, and people might just want to explore and play around with a GPU, especially

(15:26):
in person, actually see it instead of just interacting with it through a cloud. So in our office, we have a garage. We're getting eight T4s that we're going to hook up in the garage. They're actually, we're picking them up right now. We have two members of the team going down to San Jose to get the server rack. We're going to plug them into the garage and invite folks over to just hack on it. You can use it for free. If you're coming over, just plug in and let it run.

(15:48):
So it'll be a very local compute cluster. The most local cloud, very limited, obviously, but that might also be a nice way for you to actually take a look at the GPUs that you're running and get more physical experience with this thing. Yeah.
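Since the automatic stopping of idle instances Nader mentions above wasn't shipped yet at the time of recording, here is a hypothetical sketch of the general idea, not Brev's implementation: poll GPU utilization and stop the machine after a sustained idle window. nvidia-smi is the only real tool assumed; stop_instance() is a placeholder for whichever provider API would actually stop the box.

```python
# Hypothetical idle-shutdown watcher (not Brev's implementation): sample GPU
# utilization via nvidia-smi and stop the instance after a long idle stretch.
import subprocess
import time

IDLE_THRESHOLD_PCT = 5      # below this, the GPU counts as idle
IDLE_WINDOW_SECONDS = 1800  # stop after 30 minutes of continuous idleness
POLL_SECONDS = 60


def gpu_utilization_pct() -> float:
    """Return the busiest GPU's utilization percentage on this machine."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return max(float(line) for line in out.splitlines() if line.strip())


def stop_instance() -> None:
    # Placeholder: call the cloud provider's stop-instance API here.
    print("Would stop this instance now")


def watch() -> None:
    idle_since = None
    while True:
        if gpu_utilization_pct() < IDLE_THRESHOLD_PCT:
            idle_since = idle_since or time.monotonic()
            if time.monotonic() - idle_since >= IDLE_WINDOW_SECONDS:
                stop_instance()
                return
        else:
            idle_since = None  # activity resets the idle clock
        time.sleep(POLL_SECONDS)


if __name__ == "__main__":
    watch()
```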

Shikher Bhandary (16:02):
Who are the customers that you are actually speaking to? Is it more the startups and the midsize companies that are now using Brev to get access to resources?

Nader Khalil (16:13):
So access is one issue, but most people aren't using us for access. They're using us more for the tool of: once you have access, what do you do? We definitely have access to compute, and Brev works with any compute source too. So you can actually connect your AWS account. If you have quota, you can connect your Azure or Google Cloud account and still use the same tool. And so it is a lot about getting someone into an instance that's

(16:35):
properly set up to do the task at hand. And so we're seeing kind of everything. And I think, as a seed-stage startup, that kind of makes it harder as we try to hone in on our ICP. We see researchers at institutions. I'm actually heading to Georgia Tech on Monday. We're going to be there all week. We're meeting with a bunch of labs. I don't know if you saw, but they just created a hacker space with NVIDIA. NVIDIA donated a bunch of GPUs to Atlanta.

Shikher Bhandary (16:56):
Interesting.
Yeah.
Yeah.

Nader Khalil (16:57):
We see the researchers at institutions. We see startups across the board, employees at larger startups. We see founding teams starting to use Brev. There have been large companies in Europe where a team of data scientists have started using Brev, CTOs of public companies kicking the tires and seeing if there's a tool that they can use internally. Yeah, we're seeing a lot of usage from a lot of different profiles.

Jed Tabernero (17:17):
It's not just the startups who are looking for space. Now you're telling me that, all across the board, folks are interested in using the product. Question for you: how do they know about it? How do they know about all of this stuff that you're putting out, all the cool things that you guys are doing? I've seen some really great stuff on YouTube. I've tried to follow, but again, I'm not in the space, so I don't understand shit, guys, but it seems like it's really

(17:40):
helpful stuff, right? It seems like it's really helpful stuff. We see you on X putting stuff out there. Are these the main ways you guys get customers? Your number is out there, so I wouldn't be surprised if people are calling you every day.

Nader Khalil (17:52):
Yeah, my phone's definitely always buzzing. And yeah, I have my phone number in the docs, so you can text us and I'll reply. But what we're doing, pretty much, is just trying to make the best tool possible. And so we keep talking to users and building new things that help us hone in on PMF. And as we build new things, we just make a quick video: hey, this is what we released.

(18:13):
This is why we think it's great. And so we put that out on Twitter/X and LinkedIn, and then we put out guides that show how to use Brev, how to fine-tune Mistral, like you said. It's funny. There are all these new models coming out, but one of our most popular guides is still, to this day, Llama 2. And I think it goes to show that while there are a lot of these new guides or new models coming out, it gets a little

(18:36):
noisy when everyone's freaking out and trying to run the model. But actually, if you think about the use cases, it's the quality of your data that's going to dictate the quality of your results if you're fine-tuning or training. And so that's pretty much it. We don't do much else for marketing. There's word of mouth.

Shikher Bhandary (18:52):
That's awesome. Nader, I wanted to ask, having been in the semiconductor industry: usually you have these massive gaps in supply, and then, ultimately, just because there's so much demand, the supply kind of meets it. So what happens, say, a few months from now, when people can get more access to H100s?

(19:14):
I think the product, you mentioned a ton of features on the product side, is still super relevant regardless of the compute they're running, correct?

Nader Khalil (19:24):
Yeah, absolutely. A lot of our users will also connect their own clouds and still use Brev. So Brev does two things. It's like a cloud orchestration tool and a consumer UI on top of it. The UI helps with actually using the compute that's available. The cloud orchestration tool helps provision the GPUs, or any compute; people use this for CPUs still as well,

(19:44):
especially if you're doing data processing or collection, you don't need a GPU running for that. So the goal is just to make a much simpler cloud experience for these AI/ML workflows.

Jed Tabernero (19:54):
One of the significant things that we saw, and what prompted us, at least, to reach out, is that acquisition you did recently of Agora Labs. Congrats on that, first of all.

Shikher Bhandary (20:04):
Hey, congrats.
Hey, congrats.

Jed Tabernero (20:05):
Secondly, dude, it seems like they're in the MLOps space. We did a little bit of research on Agora Labs in general. It seems like y'all get along. We saw the videos; the vibe seems immaculate for both teams. Talk to us a little about that, man. What was the decision-making around that? And how did it help you guys evolve as a company to have folks from Agora Labs within Brev?

Nader Khalil (20:28):
Yeah, it's funny, in that video, everyone on the Agora Labs team is above six feet. I'm six feet tall, and I look like a munchkin in that video.

Jed Tabernero (20:35):
That's the first thing I thought of.
I was like, this dude's already six-two.
How tall are these guys?

Nader Khalil (20:41):
Yeah, the team is a very tall team. But yeah, so I saw the CTO of Agora Labs make a Brev account. And I remember, as a first-time founder, I used to be very afraid of competition. You see someone make an account and it's like, oh, what are they doing? Why are they doing this? But actually, I think everyone should just get closer to their competition. They're working on a similar problem set as you. There are probably also ways for you to do some level of

(21:03):
competition where there are ways you work together while you're competitive in other regards. So we reached out, and the first thing I noticed was that the energy was just fantastic. These guys are really smart. They're really high energy. It was just a clear fit. Like, I wanted to be on a team with them. It was actually on the first call that I floated the idea by them. And they had been surfacing compute from that

(21:23):
decentralized cloud, Akash Network, where there's a really healthy supply of A100s and H100s available at some really great rates. And what we did initially was just say, hey, why don't you guys provide us the Akash integration as an SDK? And that was a really nice way for us to get a feel for what it was like working with them. And it was so funny, on the first call with them, two of our

(21:43):
team members, so we were a team of four, Carter and Tyler, both came up to me and said, hey, get these guys at whatever it takes, let's get these guys on. They're brilliant, and that's definitely been the experience. It's been really fun working with everybody. Anish, who's the one on the leftmost part of the video, he's been leading go-to-market with me. It's been really great being able to strategize with him, and both of us are hitting the ground running. Ishan is probably the smartest AI mind I've met.

(22:06):
It's been really great seeing him apply himself. There were times when we were stuck putting together some NVIDIA resources. It was taking us like two weeks of back and forth with their engineers. Everyone was stuck: their engineers were stuck, we were stuck. And then, on the first Saturday that Ishan came to SF, he finished it in half a day. I still don't understand how that happened. And then we were able to have the deliverable. Yeah, he's brilliant. And Tom just feels like we cloned Alec, our CTO.

(22:28):
Both of them have their desks right next to each other, and they're just going ham. So the entire team feels like it elevated. What I've noticed with teams: every time you shift, whether you add folks or remove folks, what you're really doing is expressing, with very direct action, what your team cares about. The folks that you bring on, you brought them on for particular reasons.

(22:49):
The folks that you let go, for particular reasons. And being able to be very clear, hey, these are qualities that our team views as important, it's really nice to see the entire team step up. Every time you shift your team, it's an opportunity to just raise the bar, and then everyone steps up to the bar. And so, yeah, it's been feeling really good. We have so many things planned, launches coming

(23:11):
and everything. So it's going to be really exciting, the next couple of months.

Jed Tabernero (23:15):
That actually leads me to my next point, which is what we talked about prior to the call, dude. I love the culture. I love your phone number being on the website. I know we discussed that a little bit, but people don't understand how much of a big deal that is, right? Because a lot of people who lead these organizations are too busy to be dealing with certain customer problems, but you put

(23:36):
yourself in front of that, right? By being the available person to say, hey, listen, if you have a problem with our product, you can call me. That's a really awesome culture to have. I want to ask a little bit about how you maintain that culture. And I'm interested in learning how you're thinking about that, and, when you're hiring people, how you think about how this person fits into Brev.

Nader Khalil (23:57):
There are a few things. So one, a lot of people are sometimes afraid of bad news. It's usually not the most fun to hear, but you can approach it from the perspective that there is no bad news; there's just the current state, the desired state, and the delta. And then you have to find the way to cross the delta. It's very easy for us to make things seem like larger tasks than they are, and dealing with bad news is one of them: just making

(24:19):
small tasks that we can just do without emotion or without making it personal. It's very natural and human to make it a very personal thing, like, I built this thing and it's not working. That's, I think, the first one. As we bring on new folks, I've seen this before, where, for example, technical founders will say, oh, I need to bring in a salesperson that's going to help us get sales.

(24:40):
It's this idea that by bringing talent in, you can start to take a backseat and go into a managerial role. But everyone needs to just do stuff: you're either building or you're selling, or probably both. And honestly, everyone on the team is technical. At the end of the day, people just mimic what they see, and if they see everyone fully applying themselves, as opposed

(25:00):
to taking a step back and doing high-level strategy and managerial work, then that encourages more people to really lean in and step in. And yeah, I feel like the office feels carbonated. Like, you open the front door and the energy is amazing. Everyone's just really fully applying themselves.

Shikher Bhandary (25:15):
That's awesome to hear.

Jed Tabernero (25:17):
Man, you probably think in terms of what's happening for the next launch. And so it becomes really nice to be able to focus on something. When you put your heads down, you've got your own space. Now, that's awesome that you have this team. I wanted to ask, what's next, man? What are you guys working towards as the next launch, the next step, the next big milestone that you guys are

(25:38):
working towards, now that you have a little bit of a larger team and you've maybe expanded with a little more expertise? The space is looking quite promising. What's next, dude?

Nader Khalil (25:48):
Yeah. The most immediate thing that we're focusing on right now is, one, connecting a few different compute sources, like Akash. So that'll go live probably next week. And the main one is essentially closing the dev loop so that we can make a really tight developer feedback loop, which allows for faster iterations. I won't share too much yet.

(26:08):
Really excited for that launch, and it should happen probably in the next two or three weeks. That's something that we're really excited about, because if we do get a tight dev loop, that ends up solving a really strong pain point. We've also been doing some great work with Nvidia. Probably in the next week or two, we're going to announce our partnership. Nvidia has a catalog of AI software, NGC. You can see containers, models, weights, a whole bunch of

(26:30):
stuff that you can run. And as I mentioned earlier, Brev makes these one-click deploys, where we can take the hardware that something needs to run on and properly set up the software for it. So we're working with Nvidia to make these one-click deploy buttons. They're actually live now, but we're going to formally launch it probably in the next week or two. You can use it now. And what's been great, too, is that Nvidia really wants to reduce

(26:52):
the friction of getting to run these. So they'll even cover the GPU costs for the first hour or two. So you can actually fine-tune Mistral on an A100 and get started for free. That's super exciting. We're going to announce that very soon.

Shikher Bhandary (27:03):
Great incentive.

Nader Khalil (27:05):
Yeah.
Yeah.

Jed Tabernero (27:07):
That's great, dude. That's awesome. It's a big step, number one. Number two is the development of your partnerships with a huge company like this. We've already mentioned it throughout the call, right? NVIDIA is one of the most important companies in this space, if not the most important company in the space, and you're working with them.

Nader Khalil (27:22):
I think partnerships are definitely a big angle for us. If you think about Brev being the easy button for running a lot of these services and tools, we don't want to build everything. We want to see what people want to do and just pave that path. We're working with Hugging Face. We're trying to work with a bunch of really great companies in the space, Anyscale, Replicate. I think there are a lot of people doing some really great work, and my perspective is, okay, how can we work with them and really

(27:45):
just pave an epic road for some users?

Jed Tabernero (27:49):
I want to give you this opportunity, man, because we typically do this at the end of the show, and I think you actually started this off: we give you guys a few minutes to say what you would like to communicate to our audience. Now, our audience has evolved since you were last on the show. A lot of the folks who take a listen are folks in the same space, they're founders as well, people who work in tech

(28:10):
startups, and so we'd love to just get your last thoughts. If you want to give a shout-out to your team, to your partners, to the people in the space, and maybe give a little plug about how people can get more involved, I think that'd be awesome.

Nader Khalil (28:24):
Yeah. Our goal is just to make the easy button, to make it really simple for folks to get started with fine-tuning, training, and deploying their AI models. And so if you have any idea of how to help: we have our roadmap, we have our hypotheses, and we have partners we're looking to, and have access to, where we're saying, hey, what can we do together? So if you have an idea of something we could do together, or something that we're missing,

(28:45):
something that would provide a lot of value to users, please reach out, message us in our Discord, shoot me a text. You'll see my phone number in our docs. Just get in touch in any way. We're constantly looking to see what we're overlooking, what we're missing, and what we can do for users. We welcome any ideas. The Discord is probably the best way to talk to us and our users directly.

Jed Tabernero (29:03):
Really appreciate you coming on the show again. I've learned so much just doing research on the stuff that you guys do. Honestly, it also forced me to talk to Karam, which I haven't done in a while. He's our friend who's also a software developer and is deep in the space now. And yeah, I think a lot of the stuff that we discussed today is going to be relevant for the next few years.

(29:24):
In general, we love seeing companies like you, dude, who are in this space, who are not afraid of these giants, who are even working with them and finding ways to optimize their workflows together. That's really dope. I think it's fascinating that the customer obsession is at that level, that the CEO cares about these problems. So dude, kudos to you. Kudos to the recent acquisition. Kudos to the growth, to the hard work that you've been putting

(29:47):
into this space. And yeah, we love to see you grow, dude. So congratulations. And hopefully we have you on the show again in the next few years, and we see where Brev has come to.

Nader Khalil (29:59):
Yeah.
No, I really appreciate all the kind words, and thank you. Thank you guys for supporting us and for having me on again. I look forward to talking to you in three years.

Shikher Bhandary (30:06):
Great.
Yeah.
Thanks a ton.
This was fun.
Thanks for tuning in to today's episode. We hope you found our discussion with Nader enlightening, and that it sparked some ideas about the impact of streamlined technology infrastructure on your own projects. For more insights and episodes, don't forget to subscribe to Things Have Changed Podcast on your favorite platform.

(30:29):
Until next time, stay curious.

Jed Tabernero (30:31):
The information and opinions expressed in this episode are for informational purposes only and are not intended as financial, investment, or professional advice. Always consult with a qualified professional before making any decisions based on the concepts provided. Neither the podcast nor its creators are responsible for any

(30:52):
actions taken as a result of listening to this episode.