Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Drop out from the Stanford MBA to go start your first company... My first job was working at Meta and Instagram as a product manager.
Speaker 2 (00:07):
A user types a sentence... I basically resigned as the CEO and we made the AI agent, Astra, the AI CEO. It's going to be a little bizarre for AI CEOs to manage a real human. AI can work 24/7. They speak 30-plus languages.
Speaker 1 (00:24):
It's my pleasure to welcome Shaoyin, who's a serial entrepreneur based in the Valley, but I am more excited to talk to her because, you know, in a recent startup she basically replaced herself with an AI, right? So we'll cover that and a lot more. So welcome, welcome to the pod, Shaoyin. How are you doing?
Speaker 2 (00:43):
Thank you so much for having me. I'm Shaoyin. I'm the founder and ex-CEO of HeyBoss AI. Really great to be here.
Speaker 1 (00:50):
Great. And so, you know, before you started, obviously you have been in the Valley, you have been in the tech ecosystem for, you know, many years. Would love to sort of understand your journey overview and what kind of inspired you to, you know, not just start up but actually drop out from the Stanford MBA to go start your first company, right? So we'd love to hear the journey.
Speaker 2 (01:14):
Yeah, so I was born and raised in China until I was 18. I went to the United States for college. I studied computer science and economics. After college, my first job was working at Meta and Instagram as a product manager. Actually, it was 2015, so it was like 10 years ago. I was an early product manager at Instagram, mostly focusing on growth and a lot of video. Back then, video was like a very cool thing.
(01:35):
It was like a really new space for Facebook. And then, after that, I went to Stanford for an MBA. Actually, after a year I dropped out, because during the summer of my MBA first year, you also had an MBA, so you know, during the first year most people would get an internship and work somewhere. I just chose to, you know, work on an idea. Back then it was a virtual events idea.
(01:56):
So it was 2019, before COVID. There were no virtual events. My mom was a doctor from China. She had to travel internationally for a conference in the US, and she doesn't speak English well, so it was a pain in the ass to come to the United States for this conference. So I thought, okay, what if we can digitalize the whole experience, make it a livestream and make it possible for other people to attend? And that was 2019.
(02:17):
We got some money from Andreessen to test out the idea, and that's when I dropped out. And then the month that we launched, the same exact month that we launched the first version, COVID hit. So all of the conferences got canceled, and our business was basically off to the races and exploded immediately. So the timing could not have been better. Right, it was great timing, it was great timing, but then
(02:38):
after COVID, it was the worst timing, because then all the conferences went back to in person. That's when, you know, our growth kind of stopped a little bit, and then we chose to sell the company in 2023. So that was my first venture-backed journey. We raised a lot of money and then we sold it. It was an okay outcome for me. So then, like end of 2023, early 2024, we started this
(03:00):
current company, HeyBoss. Initially it was not called HeyBoss. It was a gaming company for kids, and we were using those AI agents. Initially, Astra was our AI agent to help us build games for kids. It was an educational game studio, so we used AI agents to help us brainstorm game ideas. And then, obviously, we trained the AI to be better and smarter, and now the AI agent can build
(03:21):
those games.
In addition to brainstorming, they can develop those games. Then, around three or four months ago, obviously, as the models got better, our agent flow got better and our ability to train AI got better, and we realized the agent can not only make games but also websites and applications, and that is a 100x bigger market than the kids' gaming company. So that's when we decided to actually have our agent be a
(03:45):
separate product focusing on all kinds of development, for websites, for apps, for people, and that was like three, four months ago. We have this company called HeyBoss, which is the same exact company, just under a different name. I would say we started off as a co-pilot. It's all for non-technical people, so everyday people who want to go quickly from an idea to a website or an app. But it was initially a co-pilot, so people could use natural language to make the AI code for them.
(04:07):
That was like a few months ago, and last week the biggest announcement is that we have officially become an autopilot, meaning the entire process starts from the user typing a sentence. So the user just needs to type one sentence. There's an AI product manager, there's an AI designer, there's an AI developer, there's an AI content writer, an AI marketer. They will work together to give you a finished product in nine
(04:28):
minutes, so it's like everything will be done. So we officially made the entire process automated, which also makes my job disappear, because I realized it's much, much easier for the AI to manage all the AI employees than me managing all the AI employees. So, as a result, I basically resigned as the CEO and we made the AI agent Astra the AI CEO, and she's managing six AI
(04:50):
employees to deliver the entire process, entirely using AI.
Speaker 1 (04:53):
Super, super interesting, right. So, you know, while the world is sort of still trying to figure out what to do with agents or how to build agents, you have this company which is running with the help of these swarms of agents, right? So I'm very curious. I get it that you started Astra, or you launched Astra, as sort of an agent or a co-pilot for, let's say, these non-technical customers that you had. At what point, or, you know, what is the
(05:14):
evolution, and at what point did you kind of start thinking, okay, maybe, you know, we don't even need any humans in the company anymore and it can just be done by agents? So, just curious to understand the evolution, for you to really say, okay, my job, I'm replaced, in a way, right? So you're taking some confidence, right? How did you get there?
Speaker 2 (05:31):
Yeah, so I mean, right now we are the world's first AI-run dev agency. So, the type of things that we do: I'm not saying that we can build everything entirely without any human, but for our use cases, which is building websites and applications for your business, for your online business, for your personal website, web application, for your community, all of those
(05:51):
kind of stuff, historically you would hire an agency or a freelancer on Fiverr to build the entire thing. We can entirely replace that with AI. I still think my job at Facebook as a product manager, or as a software engineer at Meta, probably wouldn't be entirely replaced, but for our use case, we have officially replaced all
(06:11):
the humans in the loop. I think the biggest evolution comes from when we first realized we have to target, we're targeting, non-technical users. People always say that non-technical users mean no code, but we realized in the past few months that no code does not mean non-technical, because for a lot of the apps, you still need to understand what is front end, what is back end. For example,
(06:33):
Bolt or Lovable or, whatever you know, Replit, you still need to understand what a database is, and most of our users do not understand that. So we basically need to translate human language, as in "I want to build a website for my fitness coaching business", into what that means: you need a contact form, you need some backend for it. But the user doesn't want to know that. They just want to know that my website with a contact form
(06:53):
works, my coaching business with a payment works. So essentially, we realized we need more than just an AI engineer. We started off as an AI engineering co-pilot for people to build stuff without a technical background. We realized, no, no, no, an engineer is not enough. You also need an AI product manager, you also need an AI content writer, you also need an AI designer, so they can work together. So our user can just say, I want to build a website for my
(07:15):
coaching business, and then everything will be done. So that's when we realized, okay, first, we can't just stop at an AI agent for engineering. We have to also build the other roles: the product manager, the designer. Eventually we realized, okay, now we have all the roles that a dev agency has. We also need a way to manage all those people, right? Because how does the AI designer know what task to do? How does the AI designer know what's good enough or bad enough
(07:36):
? We need an AI CEO to lead those people, assigning tasks, evaluating their work and improving their work. So that's when we have the AI CEO that manages the team. So essentially, right now, the current flow is that you, as a user, just type one sentence: I want to make a website for blah blah blah. Then you can see an internal Slack group where all of those AI agents are going to have a conversation, brainstorming.
(07:57):
It's like, the AI CEO will be like, okay, let's make a website for this podcast, any ideas? The designer will be like, I think we should do the cyberpunk style, blah blah. The SEO person will be like, I think we need to validate those keywords. So you can see the entire discussion of the AI team, because we realized that is the only way to make sure we can deliver a good end result for our user that is not technical. So it comes from that evolution.
Speaker 1 (08:17):
Yeah, makes sense. So I get that some of these sort of vertically focused or functionally focused AIs, you said AI product manager, AI, you know, SEO content writer, or AI designer, they're sort of trained to, you know, have an understanding of that particular function, right. But when you talk about Astra, right, she is sort of managing
(08:39):
the, you know, Astra?
Speaker 2 (08:41):
Astra is the name of the AI CEO.
Speaker 1 (08:44):
Yes. But for Astra, who has to sort of understand this varied context, right, of, I don't know, half a dozen agents, right, how does she make decisions, right? And, um, of course, you know, there is like a judgment involved. It is subjective, right. As a human, of course, you were the CEO before her, right? So how did you sort of assign those, uh, skill sets
(09:06):
into an AI agent? I'm very curious to understand that.
Speaker 2 (09:08):
And yeah, so I would say, in terms of a job description for the AI CEO, Astra, there are three things that typically a human CEO would do. Number one is managing customers. Number two is managing employees. Number three is improving the product, improving your business, right? So I think, for her, she's doing all three, right. So, number one, in terms of managing customers, she can directly talk to every customer. So customers directly tell her their needs.
(09:30):
She speaks 30-plus languages, works 24/7, works in parallel. I cannot compete with that. And the second part is managing employees, right? So, like in that case, it's like all the AI agents that do everything. And then that means, like, if you have a task, if you have a high-level need, I want to build an app for my podcast followers, basically the product manager needs to turn that into
(09:52):
a product requirement document, right? And then it's like, this feature needs to be done by this designer, and that is the SEO-related thing. And so Astra is basically assigning those tasks to different people and seeing the performance of those people, right. And then they need to know, okay, are we good? Is the design good enough? Do we need to improve? So that kind of process is managed by Astra. Uh, and the third piece, more like now, the customers give us a
(10:13):
lot of feedback, because basically, right now, you state an idea, we build it end-to-end in nine minutes. Then the customer can choose: either they can have us host the entire thing and they're done, which we also do, we host it for them, or they can say, I want to change, I want to add a feature, or I'm not happy with this. So essentially, we have a lot of data on the feedback from the
(10:34):
customer. What do they like, what do they not like? We also have data on whether the customer eventually decided to publish the thing or let us host it. So we know if the end result is good or not. And then Astra can use that data to understand what's missing, right, for a certain thing. If customers want something and we're always bad at that, she can try to self-improve and make it better herself. Sometimes they may need a human to be involved.
(10:55):
For example, when it comes to data privacy and data transparency, we're very careful. The humans are very proactive there. For other things, sometimes we wait for Astra to escalate to us. So I would say that's kind of the general flow, um, but the most important thing for us is that we are delivering a final result, which is different from all the other AI coding companies. They are like IDEs, so you prompt, they deliver some tasks,
(11:17):
they finish a task. We do not finish a task, we deliver a final outcome. So that means, number one, users do not like to write prompts. Our users hate writing prompts. They just want to have one prompt and be done with it. So, essentially, Astra needs to write the right prompts to get it done. That's number one. Number two is that we need to deliver the final outcome. Are the customers happy with that? And different verticals have different success criteria.
(11:38):
If you're a podcaster making a website for your fans, what makes that website good? Maybe design is important, but fan engagement is also very important. It's very different from the success criteria of a restaurant that maybe wants to showcase their beautiful food. So the criteria of those websites, even though they're all websites, they have very different vertical knowledge and vertical success metrics.
(11:58):
So essentially, Astra and her team need to do the research to understand what makes that vertical convert better. So they actually have the ability to do some research and look at benchmarks to understand what they can do to optimize for that particular use case. So everything we do is personalized for those use cases, focusing on the final outcome. So then people will use us to host the product and we can make more money.
Speaker 1 (12:18):
Got it, makes sense. And, you know, if you look at the last few weeks, right, MCP came about, Google recently launched the A2A protocol, right, for various agents to talk to each other. But you have been doing this before all this sort of became a standard thing. And, as I mentioned, a very sizable chunk of the listener base of this podcast
(12:39):
are technical founders who are trying to figure out how to make some of these decisions. So, before we move on to some of the other aspects of building in AI, would love to get a sense of what are some, like, hard infrastructure and system design choices that you made to get this: multiple AI agents working together and, you know,
(13:01):
eventually delivering something. And I saw a bunch of your videos. It's a fantastic product, delivered in nine minutes, right. So it would have taken some very, you know, core sort of design choices. So would love to maybe get a broad overview, right. It could be helpful for the founders who listen to this.
Speaker 2 (13:18):
Well, so I think there are a few things. Number one is that we focus on the use case first. Right, so our use cases: I would say half of them are non-technical people. They're just like mom-and-pops who have some need for a website or apps. Today they hire a freelancer. We would entirely replace that. The other half are probably product managers and some engineers, some designers, and they're trying to build some
(13:38):
prototype really fast. They're typically zero-to-one prototypes, or they're typically adding a new feature. And so for that kind of case, essentially, they want a good outcome, which is different from, maybe, Cursor. You come up with a task, they deliver that task. We're less about that. We're more about, like, for a product manager, it's more like: okay, step one, step two, step three, here are the three screenshots.
(13:59):
At step two, people don't convert because there are too many buttons. And so, when you talk to Cursor, you have to say exactly what button to increase or decrease by what. For us, you just need to give us the three screenshots and say: at step two, people don't convert. We think there are too many buttons. Just use one button. And we'll figure out which button to keep
(14:20):
and which other buttons to remove. And that also means the design needs to change, the content needs to change, in order for that button to be removed. So we will figure those out. And that is very different from Cursor, which is more like: here's your task, finish the task. Ours is like: does it really solve the problem for the user to communicate better? So that's for the product manager. And for the mom-and-pop,
(14:42):
it's like: does this website work? Is it bug-free? Does it help them charge people? Is there a lead form? Can they see the customer form? So that's an even bigger scope in terms of the final result. So, because of that, we're always optimizing for quality over speed. Obviously, we're fast enough that we can do nine minutes and deliver a final outcome, but we're not as optimized on a task
(15:05):
basis. If you benchmark one problem, how long does it take us versus a competitor, because we do so many things within one problem, typically it's slightly longer than a task-based model like Cursor, because we're delivering the outcome. So I guess we're always going to focus on improving speed, but right now we've always been focusing on quality first over speed, because our users want to see a good outcome.
(15:26):
So that's number one. Number two is, we realized that a lot of those outcomes cannot be simply delivered based on engineering tasks. A lot of them are comprehensive tasks based on product management, based on content, based on design. So we've always been focusing on figuring out what are the job functions needed for this task to deliver a really good outcome, and so then we can loop in the right job function to get it
(15:48):
done. That is fundamentally very different from Cursor, where you're focusing on engineering tasks: okay, you know where the file needs to be changed. For us, there are so many people that need to be looped in, and there are so many evaluations. Like, the designer checks: is that good? The content writer says: is that clear? The product manager checks: is that a smooth user flow? So there are multiple checks and balances among different job
(16:09):
functions to make sure the outcome is delivered. So I think our eval is more complicated, because we're focusing on the final outcome versus a task-based model.
Speaker 1 (16:18):
It's very interesting, right, like delivering the final outcome in less than 10 minutes. You've sort of set a high bar for yourself. So I'm curious, like, how does the QA of it happen? Or is it sort of baked into the whole process, right? Because at some point, you know, you need to do thorough testing, right, before you roll out that product to the customer,
(16:39):
right? Nine minutes, or everything is happening sort of in parallel, right, not sequential. So is it fair to assume that at every point somebody, or maybe a separate agent, is checking that, okay, it is working, at least from a workflow perspective, at least from the key objective of this product perspective, it is
(16:59):
working fine? Of course, there could still be some, you know, subjective changes. For example, the customer wants a different color scheme, or, you know, what have you. But the key objective of the product is getting achieved. Is that sort of baked into the whole design process during those minutes?
Speaker 2 (17:10):
Yeah, that's correct. So, essentially, number one is, we basically need to identify what success looks like for this use case and make sure we're delivering on that. So that also means we basically need to use all the right tools. We actually have our own database. We have our own backend. We have our own integrations. If you want to build an AI app with OpenAI integrations, we directly pay OpenAI.
(17:31):
You just pay us. So you could imagine, obviously, we are basically like a black box, but we do everything, right? Versus, like, Cursor, where you have to get your own API key and pay multiple vendors to get it done. So I think, for us, obviously, it's more challenging that we have to figure out what to use at what time and make sure we deliver a good outcome. But, to some degree, the downside to that is that if the
(17:53):
non-technical user is asking for something that our current capability does not support, maybe they want to build some specialized fine-tuned model that we currently don't know how to do, then we're going to have trouble satisfying that customer. But because we're focusing on certain use cases, I think we're being pretty thorough when it comes to websites and the more common stuff; community-like websites or applications, we'd be
(18:15):
pretty, pretty good with that. But if you're building something very advanced, I think you're going to realize we have some trouble, because we probably haven't learned that skill yet. So I think there's definitely a trade-off with that. But the high level is, like, you know, for us, we basically need to loop in everything and make sure we deliver a final outcome. And that also means, when we face a bug, the AI needs to know how to solve those bugs.
(18:35):
Obviously, we may still have some bugs, but I think, compared to our competitors, you can see that we actually have a bug agent that solves bugs. So we're making sure that, when there's a bug, they actually know how to solve it, and they can proactively identify issues and solve them. I can't say we're 100%. Obviously, there are still things we can improve, but we want to deliver an outcome that's bug-free to our customers
(18:55):
. Rather than, you know, typically with those cloud tools, when you face an issue, you have to click a button called "fix issue", right? We realized, at least for our users, especially the non-technical ones, when they see a bug, they just turn away, because they're not used to seeing bugs in their life. They're used to hiring an agency.
(19:17):
The agency gives them the final result. So once we have a bug, we lose the customer, which is why we have to kind of auto-fix as well. So, essentially, we have a QA, a quality assurance person, and a bug-fixing person on that team as well.
Speaker 1 (19:24):
One last question on HeyBoss, and then we'd love to get broader views from you on AI, right? How do you manage Astra? How do you give her feedback? I mean, you're still on the board, right? So she reports to you, right? So what kind of systems do you have in place to say, okay, Astra, you need to improve on this or that?
Speaker 2 (19:39):
Yeah, so I think there are a few things. For certain factors, when it comes to data privacy, transparency, user trust, I am proactive.
And when it comes to execution, what does the user want, what are some parts of the product that we can improve, she's proactive. Because, you know, I really don't want her to drop the ball
(19:59):
when it comes to privacy and safety and user transparency. And I realized that there is a tendency for AI agents to focus in on the final outcome, ignoring the process. So, for example, you probably talked about it: you saw our internal Slack channel between the AI agents, where they have a discussion to show you what they're doing. That was not there at first. When Astra proactively thinks, she's always going to be like,
(20:20):
think the algorithm through and deliver the result. She's not going to think about how to make sure a human understands. So that is something where I proactively said: okay, we need to add that. We need to translate what they're doing in code into a way that the everyday human understands. So that's something I have to be proactive about. But when it comes to, here are the types of things that users are not happy with, that maybe the AI can learn more about, or maybe a human
(20:41):
can help, that is something where I think she has way more data and she can work way harder than me. She's just going to be proactive. So I think, for me, managing means: what are the aspects where I need to be proactive versus the AI? And then that's number one. Number two is, how do I make sure that the AI is getting the right information? So what kind of information does she have access to, and what
(21:02):
kind of information do I have access to, so that I can trust her input? Because I know that, with the right input, she can deliver the right output. But I need to make sure she has the right input.
Speaker 1 (21:12):
Yeah, makes sense. I mean, I think, in that way, Astra seems to be the most amazing direct report you can have in the world, right? You just have to give her the right input, and you are almost sure that, you know, you'll get the right outcome or the right output from her. Very interesting.
Speaker 2 (21:30):
Yeah, our company is an AI agent.
The core business does not require a human, which is why it's uniquely okay to have an AI CEO. But for other companies, if you're an offline business, you're serving customers in person, I think maybe the AI CEO is much, much harder. Because all our data is online. We have all the data, and the entire process is done by AI.
(21:54):
So I think I'm just in a better spot, that I can use an AI CEO that way.
Speaker 1 (21:58):
Exactly. So do you not have any human employees at all? Or is there somebody who's sort of doing the sales or bringing in these projects? Right, because if you're a dev agency, somebody needs to bring in these projects for you to work upon, right? So is there any human at all?
Speaker 2 (22:12):
Yeah, well, I mean, like, so we used to have human employees. We had engineers, we had designers. They were focusing on the customer. We have changed all of those roles, so no one is really developing or designing for a customer. We do have human advisors on the development and design side, making sure, you know, whatever the AI cannot solve, or whatever they're not doing the right way, we make sure they deliver right. We make sure they deliver in a way that builds human trust.
(22:33):
So we do have people on that. We're still debating, there is some heated debate within the company, do we need to have human support people? Because historically, obviously, we have all this knowledge base so the AI can just answer the questions. But sometimes people just want to hop on a call, talk to a real human. They're willing to pay way more for that. [no transcript] ...the first
(23:22):
version for free. So there's no test and trial. You just, like, input your idea, and you only pay us more if you like the result. So I think the marketing has basically changed, from a phone call with the customer to build trust so you can build the first version, to: hey, here's your first version for free. Do you want to continue working on it and pay us? So, from the customer's standpoint, I think the go-to-market is different, so we don't necessarily need to have a human hop on a call anymore. When the customer just gets a free version, they can decide.
Speaker 1 (23:46):
You know, yeah, makes sense. One commentary on the broader debate on this topic, right: how should AI be perceived in the age of, you know, knowledge work, right? Where, at least, you know, speaking personally as an investor, right, a lot of my time in the day is kind of earmarked for, you know,
(24:07):
researching something, reading something, all of which AI can sort of do much better than me, right? So, if you were to make a broader sort of comment on how we should plan, or, you know, look at AI, you know, maybe augmenting some roles, maybe replacing some roles altogether, right, how do you see that panning out? And how do we really, as a
(24:27):
human race, continue to benefit from it, rather than, you know, suddenly being in a situation where we don't even know, right, what we are supposed to do for, you know, maybe fulfillment, or, you know, at the very least, you know, to get incomes, right? It's a fundamental question, right, because whenever I ask that question in any forum, people sort of always draw parallels that, okay, the industrial revolution happened and all
(24:49):
that happened. But, frankly, I don't think all those previous sort of technology revolutions had this kind of impact.
Speaker 2 (24:57):
Right. Yeah, so I would say, like, you know, because of what we do, I'm seeing two very interesting trends happening at the same time. The first is that our company entirely replaced a human dev agency. So, on one end, we're seeing that job function by job function is being replaced by AI, including my job.
(25:17):
So that is happening right now, and they're doing a better job. AI can work 24/7. They speak 30-plus languages, we serve 100-plus countries, and they can always respond to you. There's no delay. There's no way we can compete against that.
Speaker 2 (25:32):
So when it comes to job functions, I think those functions
probably will be replaced, I mean, unless the product is so specialized in legal compliance knowledge or healthcare, something very, very difficult that AI hasn't learned how to do. And I think there will be fewer and fewer aspects like that, right, because the AI models are getting better and better. So that just means more and more of those jobs will just be replaced, for sure. And so that's what we see.
(25:54):
But, obviously, when it comes to signing contracts, people still want me to sign the contract. Even though they're paying Astra, they want me to sign the contract. They don't trust her signature. They still sometimes prefer me marketing the company. I realized in the past week, you know, when me and Astra post the same thing, I get more views, you know, because people still like to see a human. But, I don't know, will that change?
(26:16):
You know, right now, it's like, I don't think I'm more effective at marketing.
Speaker 1 (26:18):
I'm more than happy to do a part two of this podcast with Astra, to be honest.
Speaker 2 (26:25):
We do have that. We can give you a link. You can video call to interview her, actually, yeah.
But yeah, what I'm trying to say is, I think that, even though we post the same thing, I get more views. So maybe that's not because I'm more capable; it just means people like me more, just because I'm human, you know. Uh, but will that continue? Because, I mean, frankly, she is more fashionable than me. She has many different outfits she can change into. You know what I mean. So I don't know.
(26:46):
So that's on one hand. The other hand is that we're empowering a lot of people to build companies. Right now they're developing ideas, right? It takes them nine minutes from idea to launch. So a person is taking a shower, like, what if I can do this? Maybe I can be the next Zuckerberg. You know, sometimes, before, they would just write it down. Now they can make it happen. It's nine minutes from shower to Zuckerberg. Nine minutes.
(27:07):
So we are seeing a lot of people doing that, including people that are not technical. They don't know code. I mean, they're just random bakery owners, but they can now compete against engineers on ideas. So that's number one. Number two, we are seeing some engineers are very smart. They are making 100 ideas at the same time using our tool,
(27:30):
because, why not? Astra can work, you know, a million times in parallel. So they are saying, here are my 100 ideas. Or even: I have a general theme; GPT, help me figure out 100 ideas. Okay, GPT helps you figure out 100 ideas. Now copy-paste the 100 ideas in parallel and have HeyBoss develop all of them. And then you can, you know, run SEO for each of the ideas to see which one clicks, and then you double down on those. So I think the entire innovation process has changed. It has now become a numbers game too, right?
So that means more people without a technical background,
(27:52):
with just a general theme, can use AI to directly compete against tech companies. So there are also more opportunities as well. So I'm seeing both. You know, I'm seeing a lot of jobs being replaced, but, at the same time, my company, if you ask, does my company, do we have sin? We replace a lot of people's jobs. Maybe we're sinful in that, but we also create a lot of new
(28:13):
jobs. So I wish I could create more jobs than I replace.
Speaker 1 (28:17):
Absolutely. That's a good sort of segue, right. On one hand, the price to enter innovation, right, or the price to start a startup, has sort of gone down, right. You have products like HeyBoss for, let's say, non-techies to quickly iterate, you know, put an MVP together and test it out. On the more sort of advanced side, you probably have a Cursor
(28:40):
which, let's say, can help two 10x engineers to put together something which probably would have taken twenty 10x engineers to write, right. That is one. And second is that, even with all this, right, first of all, the price has gone down. Let's say you run these several experiments and
(29:00):
something sort of takes off, or has very strong indicators of taking off. It has, in a way, become transient, right, because, again, coming to the first thing, it is easier for anybody to copy, right, after a point. So, as founders, you know, as well as, you know, as investors, right, how should we think about, you know, innovation? Where does it lie? And also, you know, in a
(29:25):
world where to innovate on new things or to bring about new ideas has become such a low cost, how do we really think about creating differentiation? Where does that come from?
And we have been talking a lot about, okay, there are several companies that are scaling to 100-200 million ARR. That's good, but I fundamentally feel that a large
(29:48):
chunk of that growth is sort of what I call experimental ARR, where everybody's trying stuff, right. What is yet to be seen is, okay, do they get that renewal after the annual contract is over, and stuff like that. And I'm pretty sure a good chunk of them, if not all, will sort of not be able to hold on to that kind of growth, right. So that has been this push and pull for us as investors as well
as founders who are kind ofthinking about it.
So what would be your advice,recommendation or mental model?
You would say how do you lookfor opportunities, how do you
continue building or sustainingdifferentiation and how do you
scale or how do you go fromthere?
Speaker 2 (30:22):
Yeah, I mean, I think there are a few things. One is that, even for us, you know, we're seeing many different use cases, right. We have to choose which use case to focus on. I think there are intrinsic differences in use cases. Sometimes it's one of those things where people need a lot of trust to start; once they choose you, they don't churn. There are other things where they like to experiment; they're also, you know, very likely to churn away.
(30:44):
So I think there are fundamental differences in ideas when it comes to the initial barrier to entry, whether it's trust-based or product-based, and then, basically, the transition cost: if I were to stop using you, how costly would it be, whether it's trust or compliance or product. Right, I do think there are going to be a lot of those, like, consumer-app-like trends.
(31:04):
There's going to be one thing that's popular; then, okay, it's no longer popular anymore. There's going to be some tool that says, oh, I can do this, and then everybody's going to copy it, for sure. There's going to be a lot of churn in those types of use cases. I do think, for a lot of the maybe larger enterprise ones, like trust-based things where the user really needs to build affinity with you, those are still where you have a special distribution
(31:25):
advantage, where you have a special trust advantage, where you have a special taste advantage. Those sorts of things, I think, are still going to win. So that's number one. Number two is that I think it's also important, for AI, when it comes to building a tech company, even for us: our AIs are constantly learning new things. Not only do we teach them new things, but they can also learn themselves, right. So it's more like having the infrastructure where the AI can
(31:47):
proactively learn and improve based on user feedback. And now they can do all kinds of research; they can add all the MCPs. How do you make sure your AI is always learning faster than other people's AI? I think that's also really, really important, but it requires some technical skills, for sure, and also, like, technical infrastructure, but also, I think, proactively
(32:07):
designed to make sure your AI can improve faster.
Speaker 1 (32:09):
You know, makes sense, makes sense. So what I'm also taking away, which you kind of did not say out loud, is that consumer products will sort of become transient by design. I mean, it has sort of been happening over decades, if you see, right.
Speaker 2 (32:22):
I mean, if you have a network effect, maybe that's different, right, because then, sure, they could...
Speaker 1 (32:25):
Yeah, for example, I do think that something like a Facebook or an Amazon would still maintain their scale, because it's not just the product at the end of the day, right. And that's where, you know, maybe a strong offline aspect of the business could create that differentiation, right. Amazon is fundamentally a logistics company with all this network design. Of course, tech is a big enabler of that, right.
(32:45):
But what you also sort of said is that, on the enterprise side, you can find more stickiness and, you know, sort of differentiation, because that's where the whole trust thing is, and, okay, the enterprise needs to know that, okay, you're not going to kind of, you know, leave them tomorrow, and there's going to be some kind of sustainability there in terms of TG, right. In fact, that's what our thesis has been.
(33:09):
We obviously know that several AI consumer companies will come about. Very difficult to underwrite which ones those are going to be. Versus having an enterprise getting served by AI companies, you can underwrite it better from a risk perspective, basically, right. Because, uh, there is, you know, a whole trust angle, and if you get in once in an enterprise, unless, like,
(33:32):
you're screwing up, you're not likely to be replaced, right, even if, let's say, they get something 20% cheaper or something like that, right? So that has been, uh, our sort of thinking so far. Let's see how that evolves. But look, this has been, I mean-
Speaker 2 (33:44):
I mean, I do think, for consumer, though, there is one nuanced thing, which is called taste. And, actually, I mean, even if you have the perfect AI, there are still a lot of points where, I think, someone needs to hold the line for, like, what does the taste look like. You know, if you think about Instagram, they were not the first company with filters. Taste, right. So I think that is probably going to be very, very important. Makes sense.
(34:04):
Yeah, even the product manager, I think. When it comes to product managers, it's like, okay, we used to write PRDs, but now ChatGPT can write the PRD, right. So it's like, okay, maybe now it's a competition to see which product manager has the best taste. And it's a little fuzzy, right, to define what is good taste. Yeah, yeah, it's really, really fuzzy, the fuzziness of the differentiation. I think that should be the motto, right. But I do think
(34:25):
there are ways that you can evaluate taste, good or bad. It's hard to describe it, but I think you can evaluate it. So, you know, I do think that is important. Yeah, makes sense.
Speaker 1 (34:36):
This has been fantastic. Thanks a lot. Really appreciate you taking the time. Thank you for having me. It was great fun, and we are hoping for a part two of this with Astra.
Speaker 3 (34:46):
Right, okay, sounds great. Want to know when new episodes are available? Just search for Prime Venture
(35:07):
Partners Podcast in Apple Podcasts, Spotify, Castbox, or however you get your podcasts, then hit subscribe. And if you have enjoyed the show, we would be really grateful if you leave us a review on Apple Podcasts. To read the full transcript, find the link in the show notes.