Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
And what I started to realize was most people are really
shitty at delegation, that's just the reality.
They're horrible at sharing context.
They're horrible at communicating their thoughts.
They don't write, or they don't write very well,
so they don't organize their thoughts very well.
They don't know how to bring people along.
And so if you're pretty bad at delegating something to another
(00:21):
human, you're probably going to be even worse at delegating it
to AI. Welcome back to Chain of Thought
everyone. If you are watching on YouTube,
you have already figured out we are not in our homes recording
today. We are in fact live in San
Francisco. I'm delighted to be at the
offices of Growth X with their founder and CEO Marcel Santilli.
(00:46):
You may also know Marcel as the former CMO of Deepgram and
the former CMO of Scale AI, which, if you pay
attention to the news at all, was just purchased by Meta.
Well, 49% of it was, for about $15 billion.
So he knows a few things about scaling and building AI
businesses. Not to mention Growth X is
(01:06):
having incredible success. In just the first six months,
they've already raised $12 million from Madrona Ventures,
and they've already scaled to $7 million in revenue
run rate, an incredible amount of growth.
They have a unique perspective on how they're building their
business with customers like Reddit, Ramp, Webflow, Abnormal
(01:30):
Security, and Galileo. You're fresh off a $12 million
raise. You are having so much success
here. Welcome to the show, Marcel, it's
great to see you. Yeah, thanks.
Awesome having you. This is our new headquarters, I
guess, you know, so it's always awesome to hang out.
Yeah, yeah, it's, it's a ton of fun.
We're really glad you had us down here and always great to
(01:52):
see you and catch up and hear about some of the incredible
ideas that are percolating for you because you obviously have
years of success, years of experience building businesses
within the AI space, building businesses in general.
I'd love for you to kind of tell our audience, what does it take to
build an AI native business? Yeah, I guess maybe I can go
back a little bit and give a little background, right, and
(02:14):
kind of what has informed my thinking.
Pretty early on I was at HashiCorp; we scaled that
business open source. And so I saw community
building from the ground up and saw the scalability of that
kind of model. Fast forward a little bit to when I was at Scale AI,
obviously I got this awesome behind the scenes view into all
(02:38):
these AI labs before things kind of started to blow up in AI, and
kind of seeing what data they were using and whatnot.
And Scale is just a very unique business for
different reasons; I won't get into that.
But then at Deepgram, I was in this very unique position
as well, because at the time, at least, there were not that
many companies that, you know, had their own data
(03:01):
centers. They were training their own
models. They had their own in-house data
labeling operations, their own research teams, and they were serving
their own models, having models as a service via an API.
So it's a very, very unique business as well.
And we built a startup program as part of that.
And so I was, my team was running that startup program and
(03:22):
I just kind of got this view into things.
And so from my years at Scale and then Deepgram, seeing
all of that, one of the things I sort of realized was there's
this kind of big disconnect.
It felt like all the money, all the attention was going into
these frontier models, training them with more data, more
compute and everything else. But from my experience doing
(03:46):
conferences where we're interviewing a lot of these
researchers and what not, you realize some of them actually
never had a job. And what I mean by that is they
never actually had to deliver on a revenue target or built an
actual product. You know, these are researchers,
these are amazing, super smart, smarter than you can imagine
people, but they haven't actually applied it and done the
(04:06):
actual job. And so, you know, as an
operator, you know, over the years, I think what makes good
operators is having empathy for what it takes to do a
good job, right? And what it takes to run a
good company as well. And so it just felt like
a lot of what was happening in AI was just building models,
(04:26):
right? And so disconnected from
actually applying that to real businesses.
And here I am, I saw all this context, and
then at Deepgram it felt like it was closer to, you know,
applying it to increasing the productivity of companies and
helping companies grow more efficiently.
But then I think where things started to connect for me was
(04:47):
when I just started building myself, you know, so I said, all
right, let me just start stitching together.
How do I go about doing great work?
And I like to call it kind of the messy middle.
How do I go about researching my audience?
And I go and I research and ask these questions.
What is the mental model that I have when I'm asking these
questions? What are the sources or
(05:09):
places I go for information? And then how do you process that
information? How do you plan your work?
How do you go execute against that plan?
How do you iterate on that execution and do a bunch of
other things? And so what I started to realize,
as I was doing that, was that I was playing with the inputs.
I was, you know, prompting it differently.
I was, you know, adding more steps, changing how we were
(05:31):
doing the plans and whatnot. And there's kind of this aha
moment where I realized, well, you get to a point where
actually what made me good at what I did could be pretty
reproducible, right? But it still requires a lot of
work to apply these models in a way that can actually help
you reproduce these plans, right?
(05:52):
And so for me, the biggest disconnect was: you see
all these frontier models, they're great, but it just felt
like you're doing so much work to get them applied to
more complex knowledge work, right?
And then here I was, after doing a lot of this
work, seeing how it could be applied. And so that, you know, long
(06:14):
story short, kind of led to Growth X, where at first I was
just teaching people how to build these workflows and what
not. And I quickly realized there's
a different way you can build companies, you know?
And we can get, I'm sure we'll
get into some of that. And so when I think about
AI native, the way I kind of think about it is completely
(06:34):
different. To me there are two
principles and two skills, if you will, that are the most
critical ones in AI native
businesses. The first one is first principles thinking:
how do you decompose things down to the very core
foundation, the essence, and then rebuild it, but rebuild it
(06:56):
in a different way. And then the second one is
delegation. And what I started to realize
was most people are really shitty at delegation, that
that's just the reality. They're horrible at sharing
contacts. They're horrible at
communicating their thoughts. They don't write or they don't
write very well. So they don't organize their
thoughts very well. They don't know how to bring
(07:17):
people along. And so if you're pretty bad at
delegating something to another human, you're probably going to
be even worse at delegating it to AI, right?
And so first principles thinking
and knowing how to delegate, that is literally how
to build an AI native business, if you're good at those two
things, right? There is the infrastructure,
(07:37):
there is the stitching everything together, but it's
just completely rethinking every function in the company, you
know, and we can go into examples of how we're doing
recruiting differently, how we're doing every function in
the company differently, but it comes down to those two
things, I think. There are so many great threads
to pull on from what you just said.
I love that. I'm like, OK, do we have more
(07:58):
time? First of all, I love this
insight of, look, it's great to build a great model.
That's fantastic. It has a lot of value in the
moment, but there's always a new frontier model coming.
You're always chasing the dragon, whereas if you build a
great business, you may be able to have more durability.
Whether or not you are always using your own frontier model,
(08:20):
there are other models you can leverage and you can build a
great workflow. You can actually understand your
customers in depth and build a great business around that.
And that seems like that's the approach that you're taking here
at Growth X today. The other interesting thing you
bring up here, which I think maybe relates to this point, is
this idea of like, look, we have brilliant people who are
(08:42):
enabling the AI revolution. We have brilliant researchers.
Some of them have come on the show, some of them are coming on the
show soon, but they may not always be the best folks to
build these AI businesses. They are crucial to doing so.
But you also need great delegators.
You need people who are incredible people managers, who
can then apply those skills to, hey, how do I manage all these
(09:02):
different AI workflows, AI agents? How are you approaching
recruiting and scaling Growth X differently than you would in
another business? Yeah.
So I think we're set up quite differently.
So we started with services, and the main reason we started
with services is because services are a forcing function
to figure out what does it take to do a great job and what does
(09:26):
it take to deliver high quality work outputs at the end of the
day, right? So if someone is actually paying
you for a work output, not for the tool that maybe can do
the work, but the actual output of that work, and hopefully
that is informed by good strategy, then that is what is
actually valuable to businesses, right?
(09:46):
And so the process of doing that, what do you do?
You hire a service provider, an agency, a consultant to help
deliver that work output. And that person, or
company or agency, they're normally the experts, right?
And so for us, the reason we started with services is because
we want to understand what is that messy middle to deliver
(10:10):
great work products. And so just to give a very
concrete example. So let's say you go to a
Michelin star restaurant, and you have this Michelin star chef
deliver this amazing dish right in front of you.
And you just look at it and it's like, this is amazing.
And you take all the videos necessary, all the
pictures and everything, and even come up with the perfect
(10:31):
ingredient list, and it's all right there.
And then you bring someone and you say, here are all the pictures
of the final product and here are all the ingredients.
Help me replicate this. You would not be able to do
that, right? So why is that?
Like it's not about the quality of the ingredients.
It's not even about how much detail you have in the final
output. The final output is a snapshot
(10:53):
of something, right? And so in order to replicate
greatness, you need that messy middle, you need the trial and
error. And there's not only one way to
achieve greatness. And so knowledge work is that,
and I think what we started to realize was just
how important that messy middle is,
(11:16):
and there's no easy, non-messy way to capture that. You're not going to deal with
this UI that some designer and product team is, you know, off
in the corner designing and building for six months, and then
they launch, and then it goes out, and all of a sudden, like, a new
frontier model or something, some paradigm, changes, and it's
like you're six months behind, right, already.
And so instead, when you think about like, forget the UI,
(11:40):
forget the form factor by which you're delivering it.
Just sign up to deliver the thing and then figure out what
does it take to deliver in a more AI native way, but not in a
way that's like, I want to replace humans or I only want to
use AI or I don't want to use AI.
It's more about, like, it doesn't matter to us.
It really doesn't matter. The way we think about it is like, where
(12:00):
are the places that you absolutely need expert
interventions? Where are the places where you
need to shape how to do the work?
And then what are the places where you do need the humans in
order to get higher and better quality input from your
customers, you know, in order to deliver better work products?
And over time, that mix is going to change.
And I think, you know, we have a lot in the works, but I think
(12:23):
the current form factor, at least for
knowledge work, is not there. And I think
there needs to be a paradigm shift.
If you think about what's happening with coding agents,
you know, Cursor, Windsurf, Augment Code, and a lot of
these companies, like, you know, in the end, just
(12:45):
look at the day-to-day of our engineers and how critical these
tools are; they're truly, like, professional tools.
They're making engineers and builders more productive.
And then you compare what's happening in knowledge work:
copying and pasting context over to ChatGPT or Claude.
And, you know, it just kind of feels like you're
working for a chat window. Like, I work for this
(13:08):
chat window now. And I'm constantly trying
to wrangle and then wrestle it to do the thing I needed.
And I have to constantly tell it the same thing.
It's like my wife, like, did you not hear what I just told you
yesterday? It's like that.
I feel like that's me, you know, so I have more empathy for
her when she says that to me, you know, I'm joking, but oh, man.
And I think there needs to be some kind of paradigm
(13:30):
shift there.
Well, first of all, I'll say, listeners, you heard it here,
but when I, in the future, steal this great comparison to a
Michelin star restaurant in order to describe how knowledge
work and the messy middle works, just pretend you
didn't hear Marcel say it first, because I'm going
to borrow that for sure. I'll start there.
(13:53):
But secondly, I just really love that you are taking us back
to, as you said earlier, first principles thinking.
And in this case, yes, you're using services and deep
engagements to understand and then build products that can
help solve these problems. But more than that, you're
taking on first principles business thinking of saying
(14:14):
we're going to be customer obsessed.
We're going to understand our customer, we're going to get
close to the bare metal, we're going to understand what it
takes to ship for them. And we've been doing that a bit
with Galileo where we, you know,you may know us as an
observability and evaluations platform and you're probably
hearing us use the phrase reliability a lot more.
A lot of us are starting to use this phrase reliability because
(14:34):
we've realized that observability evaluations, while
crucial, are means to what we'reactually solving for customers,
which is that reliability challenge.
And I see you doing exactly that for knowledge work and exactly
that for growth teams: saying, look, we're going to come in and
figure out what your problem is.
(14:55):
We're going to actually solve it for you.
And then we're going to break that down and say, how can we
solve it better? And that is such a first
principles approach, as you put it, and it clearly opens up a
bevy of pathways for you in the future.
What do you see as kind of the next stage of Growth X's
development as you continue to dive deeper into AI enabled
(15:16):
workflows, products and much more?
Yeah. So I can share a little bit of
kind of how we're thinking. And I think a lot
of these things will connect better over the next few
months as we share more. But so, think of it as: what are
we trying to do? I worked at HashiCorp, and
maybe some of the audience are familiar with Terraform,
(15:38):
right? So when you think about
infrastructure as code and Terraform, it's almost like a
config plan, right, that you're applying.
And we're almost thinking of it similarly.
And so when we say workflows, I don't even know if workflows is
the right word to use anymore. But for us, it's like, how do
you codify what your best people do, right?
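The Terraform comparison suggests one way to picture this: a workflow captured as declarative data that a small runtime can plan and then apply, the way terraform plan precedes terraform apply. A minimal sketch; the step names and structure here are illustrative, not Growth X's actual format:

```python
# A content workflow codified as data, infrastructure-as-code style:
# declare steps and their dependencies, then let a tiny runtime
# "plan" (resolve the order) and "apply" (execute the steps).
WORKFLOW = {
    "research_audience": {"needs": [],
                          "run": lambda ctx: {**ctx, "audience": "developers"}},
    "draft_outline":     {"needs": ["research_audience"],
                          "run": lambda ctx: {**ctx, "outline": f"outline for {ctx['audience']}"}},
    "write_article":     {"needs": ["draft_outline"],
                          "run": lambda ctx: {**ctx, "article": ctx["outline"] + " -> article"}},
}

def plan(workflow):
    """Topologically sort the steps so each runs after its dependencies."""
    ordered, seen = [], set()
    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in workflow[name]["needs"]:
            visit(dep)
        ordered.append(name)
    for name in workflow:
        visit(name)
    return ordered

def apply_plan(workflow, ctx=None):
    """Execute the planned order, threading shared context through each step."""
    ctx = ctx or {}
    for name in plan(workflow):
        ctx = workflow[name]["run"](ctx)
    return ctx
```

Calling `apply_plan(WORKFLOW)` threads one shared context through research, outline, and article in dependency order; the declaration, not the code, is what an expert would edit.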
(15:59):
And in our case, our best people are the people delivering,
ultimately accountable, being the drivers of delivering
a work product, right? That work product in our case
can be an article, it can be a landing page, it can be copy, it
can be research on a topic, right, for a customer.
And so when you think about that, we're
(16:22):
essentially building the engine that helps build those
plans. So just to kind of, you know,
give an example here, what does it take to do a really good
piece of content, right? For us, as we start to break it
down, it's like, first I need to understand all the context
needed. So we call those artifacts,
right? So then we're starting to
(16:45):
say, hey, what does Galileo do?
What audiences do you serve? What do your competitors do?
Like, how are you positioned in the market?
What do people say about you? For the audience we want to
serve, let's go understand them better.
What are their top concerns? You know, what's their
day-to-day like? And all those things.
Then you're generating a lot of these artifacts and those
(17:06):
artifacts are always kind of being regenerated, right?
And so then as we went through that, we saw
that there are workflows we could build to make generating
these artifacts better and better.
Right, now you have the context. Now you've got to go figure out
where do I fetch the right inputs?
So that can be retrieval based on a knowledge base we already
built for Galileo, or that can be, hey, let me go and not just
(17:28):
do a Google search, but maybe I'm going to use Perplexity's
API, and not just take Perplexity's answer, but I'm
going to take the 100 citations it gives me.
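That citation-by-citation pass can be sketched as a small filter. The response shape below (a list of cited sources with snippets) and the keyword-overlap scorer are assumptions for illustration; in practice the relevance check would itself likely be an LLM call over the gathered context:

```python
def is_relevant(citation: dict, job_context: str) -> bool:
    """Stand-in relevance check: keyword overlap between the citation
    snippet and the job at hand. A real system would likely ask an LLM."""
    keywords = job_context.lower().split()
    return any(word in citation["snippet"].lower() for word in keywords)

def filter_citations(citations: list, job_context: str) -> list:
    """Go citation by citation, keeping only sources relevant to the job,
    rather than trusting the search engine's single synthesized answer."""
    return [c for c in citations if is_relevant(c, job_context)]

# Hypothetical shape of a search-API response that includes citations:
citations = [
    {"url": "https://example.com/a", "snippet": "Evaluating LLM reliability in production"},
    {"url": "https://example.com/b", "snippet": "A recipe for sourdough bread"},
]
kept = filter_citations(citations, "LLM reliability evaluation")  # keeps only the first
```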
And I'm going to go citation by citation and see if it's
relevant based on the context I have and based on the job at
hand, right? And so as you go through that
and you formulate a plan and you're actually going to execute
that plan, what we started to realize is that this is way
(17:51):
better done in code, right? And coding agents just got a
lot, a lot better. And the reason coding agents got
better is because they have all the messy middle completely out
in the open. So you look at a repo;
there are open source projects where every, you know, pull request,
every comment, every piece of documentation is actually there.
(18:12):
Like, I cannot name a single one. If you take one of the best
open source projects out there and you look at how clean the
documentation is for some of them, and then you compare that to
internal knowledge bases inside of companies for how things get
done. I dare you to go to a sales
department and find a document that actually describes how the
sales process works that's even remotely as good as the docs for an
(18:35):
open source project. And good luck finding the
pull request for that. Or how you describe how you go
about doing good work. What are all the learnings?
What are all the back and forth, the messy middle, right?
Yeah. And so for us, what we
started doing is we started with the service, then we built
a coding agent that builds workflows in code, and then a
runtime layer that can run them. And so there are steps where we
(18:58):
might be running hundreds of things in parallel, right?
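Fanning one workflow step out across hundreds of inputs is the kind of thing that runtime layer handles. A bare-bones sketch with a thread pool; the real runtime is surely more involved, with retries, rate limits, and model calls inside each step:

```python
from concurrent.futures import ThreadPoolExecutor

def run_step(item: str) -> str:
    """One unit of workflow work; a real step would call models, APIs, etc."""
    return f"processed:{item}"

def fan_out(items, max_workers: int = 8):
    """Run one workflow step across many inputs in parallel,
    preserving input order in the results."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_step, items))
```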
Last month we did over half a million runs of workflows, and
each one of those runs can be, you know, 10 hours of human
work. But that's not what we're
focused on. What we're focused on is if
you're an expert in a topic, I don't want you to spend the next
100 hours of your time reading 100 protocols on a
subject, knowing that 89 of those might not be relevant,
(19:20):
right? I want you to ingest that
information. And then I want to figure
out where the right place for human intervention is,
right? Or expert intervention, however
you want to think about it.
And so for us, that's one of the things we're building right
now: an expert hub, if you will, or an orchestrator
layer that takes these workflows that are running in code
(19:43):
and figures out where the interventions should be and what kind of
intervention each should be. Is it an open-ended thing,
edit this? Is it a comment on this, approve or reject?
Is it a multiple choice? Is it a ranking thing?
You know, is it more of, like, a reinforcement learning task,
or what is it? Right?
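Routing each checkpoint to the right intervention type, open-ended edit, comment, approve/reject, multiple choice, or ranking, could be modeled as a small dispatcher. The thresholds below are made up for illustration; the actual orchestrator's policy isn't described here:

```python
from dataclasses import dataclass, field

@dataclass
class Checkpoint:
    step: str
    risk: float                                   # how costly a mistake here would be
    options: list = field(default_factory=list)   # candidate outputs, if several exist

def choose_intervention(cp: Checkpoint) -> str:
    """Pick the kind of human/expert intervention a checkpoint needs.
    Illustrative heuristics only."""
    if len(cp.options) > 2:
        return "ranking"             # several candidates: ask the expert to rank them
    if len(cp.options) == 2:
        return "multiple_choice"     # A versus B
    if cp.risk >= 0.8:
        return "open_ended_edit"     # high stakes: let the expert rewrite freely
    if cp.risk >= 0.4:
        return "approve_or_reject"
    return "comment"                 # low stakes: a lightweight note is enough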
But then the really interesting thing is, as you
start to learn from that, you can start using LLM-as-a-judge
(20:07):
in the loop to figure out whether those interventions are
good. But not only that, you can
actually teach the people doing the interventions to do a better
job. So let's say I give you an
outline of an article that is for Abnormal Security, right?
One of our customers in the security space, and it's very
technical, right? And the person doing the
intervention, maybe they know something about security.
(20:29):
They might not be the ultimate expert, but you can
say, hey, by the way, here's some reminders, here's some
guidelines, here's some things you should watch for as you
review this, right? So now AI is
actually helping the humans do better interventions, right,
and add more value, and connect the dots for them, or
remind them how to do it. And the opposite is also true,
(20:52):
right? The right human that you've
calibrated, and that you know is an expert in
something, can look at something and go, A versus B, right?
And you do enough of those, and now you have enough data
to start to fine-tune what is now a workflow into
essentially task-specific, domain-specific, company-specific models,
or do a mixture-of-experts type of approach, right?
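Those calibrated A-versus-B judgments accumulate into exactly the record shape that preference-tuning methods (DPO and similar) consume. A sketch of the collection side only; the field names are illustrative:

```python
def record_preference(prompt: str, option_a: str, option_b: str, expert_pick: str) -> dict:
    """Turn one expert A-versus-B judgment into a chosen/rejected pair,
    the training-record shape preference-tuning methods typically expect."""
    assert expert_pick in ("a", "b")
    chosen, rejected = (option_a, option_b) if expert_pick == "a" else (option_b, option_a)
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

# Accumulate enough of these and you have a fine-tuning dataset:
dataset = [
    record_preference("Outline an article on email security", "draft A", "draft B", "b"),
]
```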
(21:14):
But the opposite doesn't really work.
So today, if you go into some of these models that have a lot of
chain of thought, you know, and they do a lot of thinking, and
you start to see them describing the thinking they're going
through, you just want to hit stop, right? In Claude Code,
you can do that; you can use Escape, right?
And so those are some of the layers we're
(21:35):
thinking about. And then the final one is kind
of the learning engine that kind of layers on top of the
whole thing. I love it, Marcel.
I'm so excited to see what comes out of Growth X next.
You have such an incredible team you've built here, and I've had a
sneak peek at some of the products you're doing as well,
and it's really exciting. So definitely make sure you're
checking out Growth X for all your growth needs.
(21:56):
We're big fans of them, and you can also follow Marcel on
LinkedIn, where he puts out incredible content about things
like the rise of voice as a modality in AI and so much more:
excellent advice for folks who are building and scaling AI
businesses. I also just want to call out
before we end here, like I love that you brought up this coding
example because as you point out, there's great
documentation, there's a great data set out there with all
(22:18):
these different repos. And we had the co-founders of
Poolside on the podcast a few weeks back, and they talked about
this. They're like, we think we can
solve coding first because, you know, the English language is
messy. It's fun, it's useful, but
it is not as clear. Whereas, hey, look, we have a ton
of incredible code that is clear.
We have good examples, we have bad examples.
We have clear best practices.
(22:39):
There's such an opportunity though, to go beyond that with
knowledge work. You know, coding is just the
start of where we are going to enable people around the world
to build faster, build more, and build better.
And I'm incredibly excited to see what Growth X does on that.
Thank you so much for coming on the podcast today.
It's great to have you on Chain of Thought.
Yeah, thanks for having me.
Yeah, it's our pleasure. We're going to have to swing
back around, do another recording here.
Once you move to your new offices, after, you know,
(23:00):
you hit that $100 million ARR mark, we'll come back and do
another recording. So.
That sounds good. And for folks who are listening,
make sure that you are watching this episode on YouTube, because
you're not going to get the full experience without it.
I have to say, Marcel, I... You still have to have your
Charlie cameo, though, which you almost forgot to do.
This is Charlie. Do we just, should we just
leave it as a mystery? We'll just leave it as a
(23:21):
mystery. Yeah. Maybe we'll do a behind the
scenes telling people. I like that, the
story behind Charlie, yeah. Well, thank you all.
Thank you everyone for listening.
We'll see you again soon, and make sure that you have
subscribed to the podcast on your platform of choice.
And if you can, leave a review, leave a comment.
We really appreciate it. It helps folks find the show.
Marcel, thank you for coming on.
Thanks.