Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Andreas Welsch (00:00):
Today, I have a special guest who helps share more about what the fundamentals are when you want to take your product further and scale: Leah Tharin. Leah, thank you so much for joining.
Leah Tharin (00:12):
Thank you for
having me.
It's very cool to be here.
Andreas Welsch (00:14):
Why don't you tell our audience a little bit about yourself, who you are and what you do.
Leah Tharin (00:18):
Hi, my name is Leah. I have a self-inflated ego on LinkedIn. That means about 60,000 followers that I have built up with a lot of sweat by pretending to be really smart about product-led growth. I'm on stages internationally, at least on planet Earth, to talk about this quite a lot. I've been a product manager myself, and I've been with
(00:38):
companies like Smallpdf, and that's probably the best-known one, and Microsoft as well. But that was way, way back in the day. But I talk a lot about product-led growth, and I try to talk about everything that I do very specifically and clearly, without jargon and with some hot takes sometimes.
Yeah.
Andreas Welsch (00:58):
Wonderful.
In preparation, I said I've been following your content for a while, and I think you share it with an edge. I think that's very refreshing and unique in a sea of people who claim to be influencers, who claim to be knowledgeable, but are oftentimes just regurgitating things. So that's why I'm super excited to have you on. Thank you for making the time.
Leah Tharin (01:17):
Thank you for
having me.
Andreas Welsch (01:18):
Leah, what did you say? Should we play a little game to kick things off?
Leah Tharin (01:21):
I think we
absolutely should.
Hit the buzzer!
Andreas Welsch (01:24):
Perfect.
So, when I hit the buzzer, you'll see a sentence, and I'd like you to answer with the first thing that comes to mind and why. And to make it a little more interesting, you only have 60 seconds for your answer. For those of you who are watching us live, drop your answer in the chat as well, and why. So, are you ready for What's the Buzz, Leah?
Leah Tharin (01:44):
I am super ready.
Andreas Welsch (01:45):
Alright, good.
Here we go.
So if AI were a color, what would it be?
60 seconds on the clock.
Go.
Leah Tharin (01:57):
Should I give you
the correct answer?
Because there's only one correct answer for this one.
Andreas Welsch (02:01):
Absolutely.
There's only one.
Leah Tharin (02:03):
Any color that LSD can also produce. So that means all the colors of the rainbow, because whenever leaders are talking about AI, they're starting to hallucinate as well. So, I've been around raising money and AI pitch decks; I get a lot of pitch decks with AI in them as well. So that's why I'm saying all the colors, probably.
(02:23):
I would say that's pretty accurate, so.
Andreas Welsch (02:25):
I love that.
That's awesome.
Yes.
It sometimes feels like you're on a trip. You just never know if it's a good one or a bad one.
Leah Tharin (02:33):
Yeah.
And it's never the same color that it was a second ago.
Andreas Welsch (02:37):
That too. Just don't stare into the sunrise with your eyes open. Yeah, it can be blinding. Now, you already mentioned that, right? You get a lot of pitch decks. You talk to a lot of startups, and I see a lot of startups that are jumping onto this agentic AI hype train. What do you think: is it just opportunistic?
(02:57):
Is it truly game-changing? Does the world need more AI agents?
Leah Tharin (03:03):
So it's always funny, because I think I'm the only person on the internet who does not know what an AI agent really is, because everybody has slightly different definitions of what the term is, and we're really good at inventing new terms in this industry. You know that, right? Oh yeah. But okay, if we say that, about three years or two
(03:27):
years ago, I was talking about this with Kieran Flanagan, where we talked about how AI will get to a point where it does not need an interface anymore to operate a product. And I think that's a fair summary of what an agent can do. I want to say move the mouse; metaphorically speaking, it could move the mouse, and it can take over some of your tasks. Now, that's already the case with some LLMs.
(03:47):
But I think the point here is that it can navigate interfaces or systems in some way that automates something that has not been automated before. So without going too much into detail, I think we all have a rough idea that we're talking about the same kind of entity here.
So I think we are still in a very early stage.
(04:07):
I think that's not a very hot take in that sense. But what's important is that whenever you have a technology jump like this, and this one is probably a little bit unprecedented, you usually have an early segment and then a late segment, right? The longer-term segment and then the earlier segment, and then there are people who are selling shovels.
(04:28):
In any gold rush, right? People are selling shovels, and then everybody's saying, oh, what's the mantra on the stock market? Sell on the news, or whatever. I'm not good with the stock market, but I think it goes a little bit in the same direction. Yeah. So right now we have a lot of early hype, which is not very sustainable.
(04:48):
And you can see this when you're looking at investors and how they think about the investments that they make in these companies. So I'll give you a specific example. Let's say you have an AI agent that says, hey, you can replace a BDR, an SDR, with whatever agent we have. It can make the calls, it can handle all the outgoing email conversations, and so forth.
(05:10):
What this is missing, as usual, is that these are very siloed solutions, right? So it's, oh, we're trying to replace a salesperson. We're trying to replace a designer. We're trying to replace my wife when she's making a restaurant reservation. Something like this.
(05:32):
And this is going about it the wrong way, but it's a very natural thing, because you're always trying to replace what has been there before in one kind of person. So, to finally answer your question, I think it is a hype when it comes to the long-term value, still. But there are some really good integrations that are also gonna
(05:52):
come. And I do think it's gonna change the space dramatically, but as AI as a whole. And I do not know whether we actually need the term AI agents, but for the purpose of this, I do think so, yeah. Yeah. So, yes and no.
Andreas Welsch (06:04):
And sometimes that's the answer, too. Yeah. That just speaks to the complexity and the newness of things as well. I really like what you said about how we look at something that we know, that we can grasp, and we say, let's see if or how we can make it better, faster, cheaper with the help of a new technology like AI.
What I also think we're missing, if we primarily take
(06:26):
that approach, is the much, much bigger potential of how we can actually do this. If we actually sit down and start to rethink: is this not only a way that we can automate, but is it even the best way to begin with? And I think many organizations, whether it's the ones adopting the technology or the ones building it, are struggling
(06:47):
there as well, because it's so big and so broad on one hand, but also so difficult to conceive, and then to implement, if it's such a radical change as well.
Leah Tharin (06:58):
You know what's interesting about this? Let's say you're having this agent again that is like an automated BDR, and let's just say you have an agent that can do this, right? It is satisfactory. Good, right? To whom would you sell this agent? You would usually sell it to a VP of Sales, right? Or a CRO. But let's say now you really have this fantastic idea that
(07:19):
you have an AI startup that makes selling a product, so not just the sales process, but selling a product, so much better that it not only covers what the SDR does, but also the usage metrics from the product and everything else that you need. To whom do you need to sell this agent now? Now you need to sell it to the CEO.
(07:41):
Yes.
So now you have changed the audience that you're actually selling this entire thing to, and the complexity of what it is, and your entire thing is now starting to fall apart a little bit. So I think one of the challenges we have is this incredibly difficult shift of, okay, there's one thing about something being possible, but then the second thing is, how do we sell it to the market?
(08:02):
Yeah.
Because that's the thing: if it's hard to explain, you cannot do a PLG solution anymore. So, product-led growth, right? That is just having a freemium and, hey, just try it out a little bit. Because people do not understand what the context of it is.
Andreas Welsch (08:14):
Yes.
And I think then, also to your point, who do you sell it to? If you sell it to the VP of Sales, but the solution largely makes them redundant as well, or makes a large portion of the team redundant, what's the incentive to do this if you are losing half your kingdom and your castle, right? So there's always that argument.
Leah Tharin (08:34):
Yeah.
It's like, we get you out of a job.
Andreas Welsch (08:37):
That's the scary side of things for many people as well, I think. Is this going to come? Is this going to replace me? But the part that you mentioned about packaging, pricing, who do you sell this to: I was recently teaching a course on maven.com about how you actually price your AI solutions, and I'm running another cohort early in June to build the monetization model,
(09:01):
because I think, to your point, many organizations, many startups are approaching it as, here's a task and this can be replaced with AI, or here's a human equivalent that can do similar things.
(09:23):
Ideally, and we need to caveat this always: ideally, I can give you something, some technology, that delivers a predictable outcome or predictable quality. So from transactional pricing or user-based pricing to outcomes, the range is that wide. But I'm wondering, what are you seeing? What are companies missing today when they do price their AI
(09:44):
solutions, their agentic AI solutions?
Leah Tharin:
So what you're asking is, in some ways, where do companies leave money on the table, right? That's one way to look at the problem. And the other one is, how do you set yourself up for long-term success? On the second question, I have no idea, because nobody knows where this market is going. That's the first thing. The second thing is, I do not believe in any pricing models
(10:06):
or monetization strategies, let's put it that way, where what you charge the customer is inhibiting their customer success in some way. You said it already, but let me give you an example.
Let's say OpenAI, or ChatGPT, whatever you're using
(10:27):
as your favorite LLM, is not costing you $20 per month, or whatever the subscription price now is, but per chat you're gonna pay 50 cents. Just 50 cents. What this does is it encourages you to do some really shitty practices. You know that; that's why you get messages internally from the companies as well: hey, write everything into one prompt.
(10:50):
Don't use it for your personal shit, and so forth, right? So we start to design systems around the monetization models of the products that we actually have. That is problematic, though, because, as I said already, you're actually stopping the customer from being successful. Now, a customer is very glad to pay you for something like a
(11:11):
share, as we are doing with sales as well, if the outcome is proven at the end of it.
But this is a challenge that is quite hard to crack as well, because we also need to be honest, right? I have a couple of people who always come with some evergreen ideas, like, hey, I wanna make a platform where we facilitate
(11:33):
the selling of cars. This always works until the moment where people can start to abuse the system, go outside of the system, and you're not getting a sales commission anymore. This is a challenge that has been there forever. So in theory, it is correct that you should charge, if possible, toward the outcome. So if a deal is closed, then you also get a share.
But how do you make sure this is actually happening? Ironically, this is not that big of a problem in B2B companies, because in B2B companies, the fraud, it's just not there; they're not gonna shortchange you too much. So in a way, I'm thinking that the more up-market you go, try to be really on the outcome as much as you can. And on the lower end, try to keep it simple.
(12:15):
That's it, just keep it simple. Do an average. Define some borders where you just say, hey, look, if you fire off a hundred thousand chats per day, then you're not gonna use our stuff anymore, and so forth. But keep it simple on the lower market. And if you go up-market on the B2B side, try to target it
(12:35):
towards the outcome.
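As a rough sketch of that split, keep-it-simple borders on the low end versus outcome-based pricing up-market, here is a minimal Python illustration. All the numbers (flat fee, daily cap, revenue share) are invented for the example, not real rates:

```python
def low_end_charge(chats_today: int, flat_fee: float = 20.0,
                   daily_cap: int = 100_000):
    """Down-market: keep it simple. A flat fee with a generous border;
    past the cap the service just stops, rather than metering usage."""
    return flat_fee if chats_today <= daily_cap else None  # None = cut off

def up_market_charge(deal_value: float, closed: bool,
                     share: float = 0.05) -> float:
    """Up-market B2B: charge toward the outcome, e.g. a share of a
    closed deal, so pricing aligns with the customer's success."""
    return deal_value * share if closed else 0.0

print(low_end_charge(500))            # 20.0: inside the border
print(low_end_charge(200_000))        # None: over the border, cut off
print(up_market_charge(10_000, True)) # 500.0: share of a closed deal
```

The point of the sketch is only the shape of the two models: a border instead of metering at the low end, and a success-aligned share at the high end.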
The other thing you need to understand, for enterprise solutions, and I've seen some of them already coming that are quite exciting: let's say what I said before is true, that it is a solution you probably need to explain to these customers as well. So it's not always as easy as, hey, we're gonna replace your salespeople. Let's say you're having Salesforce and you're a huge
(12:56):
company, and we're now bringing this entire stuff into that. What that means is, if I'm Salesforce and I'm trying to sell you an agent that is helping to sell on top, you understand that, oh shit, that means we're even more locked in to Salesforce now. So be careful that with the monetization, the packaging that you have, you're not, and it sounds like a nice thing,
(13:18):
right, locking in your customers. Don't lock them in just because you can; only if you really have to, to get to the value of whatever your agent solution is. And what I mean by that is that it should behave more or less independently, and not just run off your own product platform. I hope that makes sense.
Andreas Welsch:
Yeah, it is. It is a very confusing space.
(13:40):
It certainly is, right? And it's very nascent; it's just emerging. Different companies are probably throwing spaghetti at the wall and seeing what sticks at this point; that's the impression that I get as well. And there was so much good information, so much meat in there, the last couple minutes, what you shared. And I remember, about two months ago, almost?
(14:01):
At the beginning of March, Sam Altman sent out this tweet: hey folks, hello world, we're thinking about changing the monetization model of ChatGPT from $20 a month to credit-based. So you buy some credits and then you use them, exactly what you shared. And I think that sends a very troublesome signal to the market and to your customers.
(14:21):
Probably OpenAI went in with a $20 price point because, in most of the Western world, $20 a month is something that you can pay, that is reasonable, that doesn't break the bank or your budget. But then when you look at the additional features that OpenAI has added, models that are more capable, that are probably more energy-intensive and resource-intensive in research,
(14:42):
and this and that, then $20 per user per month might not be the ideal price point anymore if your costs exceed it and if the usage exceeds it. So it can get tricky if the usage is hard to forecast as well. If you have a good product...
Leah Tharin (14:57):
I can give you a pricing model that I think is quite clever in this regard. A company that does this really well is Buzzsprout. Buzzsprout is not an AI company in any way, but it is a podcasting platform where you upload your files and then they're distributed into other stuff. What you're getting with a subscription is a specific limit. If you go over it, you are not blocking the users at all.
(15:17):
Then you're gonna get charged a specific fee per gigabyte that you go over. And I think that's good practice. We do not wanna stop the users from doing something, but with the subscription you get a very hefty base quota that you can go with.
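To make the base-quota-plus-overage model just described concrete, here is a minimal Python sketch. The fee, quota, and per-gigabyte rate are made-up illustrative numbers, not any vendor's actual pricing:

```python
def monthly_charge(gb_used: float, base_fee: float = 12.0,
                   quota_gb: float = 3.0, overage_per_gb: float = 2.0) -> float:
    """Subscription with a hefty base quota; usage past the quota is
    billed per gigabyte instead of the user being blocked."""
    overage_gb = max(0.0, gb_used - quota_gb)
    return base_fee + overage_gb * overage_per_gb

# Under quota: just the subscription fee.
print(monthly_charge(2.5))  # 12.0
# Over quota: billed for the 2 GB past the limit, never cut off.
print(monthly_charge(5.0))  # 16.0
```

The key design choice is that going over the limit changes the bill, not the service.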
The problem is, as we said before: what is a successful chat with an LLM?
(15:37):
Are we gonna charge you even when it's hallucinating data? How would you know that something is good or not? So the challenges are gonna remain for sure, but I think keeping it simple is probably an art going forward.
Just one final point on this. Probably there is no one in the world right now who thinks that any of the solutions we have right now are not gonna fundamentally change.
(15:59):
So all of this is in a big flux, and for a lot of stuff we don't even have any good solutions so far. Do not try to lock in any customers with your early-stage solution with year-long contracts. It's not gonna work, because your customers are not stupid. Nobody's gonna commit to a two-year plan on an agent where
(16:20):
they're like, hey, you know what? I don't know whether in three months OpenAI is coming out with another model that is gonna beat you. That's an important one as well.
Andreas Welsch (16:28):
Plus, if you're a vendor and you need to change your pricing model, you have some customers that are grandfathered in with their plan, and you need to figure out: how do I migrate them? Do they want to keep it? If it's a three-year, five-year-long contract, that can get pretty difficult as well, before you move them to something that's more unknown at that point. Yeah.
(16:49):
Let's say you've built your AI product, agent, AI product, what have you, and you ask your sales rep to pitch it and sell it. What's the next step? Where does that vision of changing the world with agents keep falling flat for most companies? What are you seeing? Where are we on the maturity curve, and is it more like the internet of the nineties, or is it completely
(17:11):
different?
Leah Tharin (17:12):
So I think there are two major points in this one. The first one is that in almost, maybe even in every consulting client that I have ever worked with (and I'm working mostly on product-led growth; AI is also a topic that's coming up a little bit, but I'm not an AI leader specifically), the principle is always the same. The biggest opportunities that we have in companies nowadays
(17:35):
are by and large cross-functional, meaning that if you need to optimize something, it's not a problem that we need to solve inside of sales or marketing or product. It's usually cross-functional in the sense of, hey, sales is not talking to product. They're not doing this. Sales is using their own tools, right? Every silo.
(17:56):
I almost want to start swearing right now. Every silo, every f-ing silo, is using its own tools, right? Sorry, sales is using HubSpot for their data stack. Marketing, Google Analytics; I don't know why in 2025, but hey, I've seen it too. Product is using Amplitude or whatever; everybody has their
(18:16):
own tools. And the problem is that the customer doesn't care about this, and they're landing in all of these individual tools at a different time, because the workflows are different. So if the opportunities are cross-functional and you're going out there and selling silo solutions, as we said before, right, hey, we're replacing your salesperson, we're replacing this: this cannot be a winning strategy going forward.
(18:39):
I'm gonna pause here just for a second, but I think that's the very first, most important point: you have to make sure that you're showing the value of the cross-functional journey. Because if this automated SDR makes it harder for someone from support to call the customer up, right, and figure out, hey, what did we actually talk about in
(19:01):
this particular call? Hey, what did you actually promise them? Then you're starting to create cross-functional problems rather than solving them.
Andreas Welsch (19:08):
It reminds you of the time before a lot of that technology emerged, when people were talking to each other, had spreadsheets, and these things. And in a way, it's similar.
Leah Tharin (19:18):
Yeah, no, 100%.
And maybe to your second question, on where we are in terms of, is this the same as the two thousands or 1990 or whatever: I'm a big fan of understanding the jobs to be done from people. What is the job that someone wants to do with whatever they're doing in their life? And there's always this one job in every sales conversation, in
(19:42):
everything, when you're talking about, hey, we want to get a new product into our life, whether this is in business or in consumer life. And that is: if you make someone look stupid, or they think they could start to look stupid because of something that you're selling them, they're not gonna adopt it. They're not gonna pay money for it, and they're not gonna use
(20:02):
it.
For example, if you say, hey, you can click this button and I'm gonna automatically send Andreas an email where I'm just saying thank you: I wanna see that email. I'm not trusting my AI yet to just do it without me having any oversight. And that's a very important one. You need to make sure that, whatever flows you're building,
(20:24):
there is a human element of control in terms of quality. It doesn't matter how good your model is, and that's a very important point. It's not about whether your model is good enough; it is. Can you overcome the distrust that people still have, rightfully, by the way, to automate 80% of the flow while giving them 20% of
(20:45):
excellent control over the output? Because that's what this is about, right? You do not wanna make someone look stupid, because otherwise your pitch is gonna fail. And I think that has been the same in the two thousands, in 1990, right? So if you're starting to introduce more stuff, you always need to make sure that the people who are working with it also get to appear professional, and so forth.
(21:06):
And I would say, on this entire curve, we have the same challenge that we had back then: enterprise data is not ready. So whatever you're thinking about with your AI and what it's gonna do, it's gonna take us another 10 years to get the data infrastructure in place for all these B2B companies to use your agents. And an agent without access to data is useless anyway.
Andreas Welsch (21:28):
Lots of powerful things. Definitely, the part that AI needs data, and that your data still isn't where it needs to be, is one that I keep coming back to as well. And to me it's mind-boggling that, again, to your point, we've been talking about this since the early two thousands or what have you, and yet there were always more important things to do, or shinier projects, shinier objects
(21:49):
to pursue, than getting your fundamentals in place and getting your foundation in place. Where I see the opportunity now, though, is that AI can be that hook, if you will, to say we need to do a phase zero, which is getting our data in order, to do phase one. That takes courage, and it takes budget, and it takes expertise
(22:11):
as well.
Leah Tharin (22:11):
I'm 100% with you. I just think that people really underestimate how hard it is to get your stuff under control, specifically the data. The big difference to back then is: let's say you have a sales department of 300 people. So you have 300 people who are selling your stuff out to the market. That means you need to have a very specific infrastructure
(22:34):
that is capable of servicing these people. Now, these people do not talk to each other that well. They may be in the same team; you have an entire organization managing this entire stuff. If you do have an agent solution for something like this, the throughput that you have doesn't matter, right? Whether an AI can make a phone call, or write an email, is not dependent on how much you're using it,
(22:56):
right? The bandwidth is just more compute power rather than anything else.
So there's a big difference in this, and there is a cost argument in there, in that I think existing companies have data stacks built for humans and their limitations. For instance, access control: why do we have master branches and forks and everything?
(23:18):
Because people are just messing with this shit constantly, right? And that's going to change in ways where I'm not sure whether existing companies can really fix, so to speak, their existing data stack to make it compatible with AI agents, or whatever the future is going to be, because the requirements are gonna be so different that it's probably
(23:38):
easier to build it up new from the ground.
Andreas Welsch (23:41):
I think that's a very important and powerful message, right? The systems that were built for humans might not be the same ones that we need to operate AI and agentic solutions. But I'm wondering there, specifically from your experience working with a lot of founders, working with a lot of product leaders: what's the one thing that they need to do right
(24:01):
now when it comes to AI and when it comes to agents?
Leah Tharin (24:04):
I'm gonna start with a very basic mistake that Canva is making right now. If you go to Canva's pricing page, for instance, and Canva is touting that they're a very forward-thinking company: in the HTML, you cannot see all the features, right? So if an agent or an LLM is going in there to research their pricing page, it has to expand the boxes. That looks nice to
(24:27):
the human eye, but it's not very helpful for a bot to go through.
Now you can say, oh yeah, but maybe it's gonna find it in the code eventually, right? Or it's gonna expand the boxes, and so forth. But you need to start to expose your stuff not just to human users. And we're not just talking about an API; we're talking about actual websites, because what agents are doing is
(24:48):
imitating human behavior, as we just said before. So that's the very first one, right? We will have some kind of, I don't wanna call it search engine optimization, but it's gonna be like agent optimization. There are also gonna be some shitty behaviors that we see already, right? An LLM cannot differentiate between
(25:08):
a review site that says that Andreas is awesome and whether that's a true one or a false one, right? So we start to have programmatic false review pages come up, and so forth. That's not a tip, by the way. Don't do that. But I'm just saying, you need to expose your stuff to agents, right? You need to make sure that it can be used there.
(25:28):
And then the other thing that we have, also a little bit on the pricing side: make sure that you have fail-safes. What you definitely do not want, for instance if you have usage-based pricing, is that there are no spend controls, where companies can decide, for instance, hey, let's treat this like an ad budget. Don't spend more than a hundred dollars, and give me a warning
(25:49):
rather than just spending over this. So this also needs to be baked in. And realize that for people who want to use AI agents for themselves, which I highly recommend if you're selling some of them as well: try to figure out an organizational structure first, and your data infrastructure, what that actually means, rather than just going to companies and hoping that they can figure it
(26:11):
out themselves.
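The ad-budget-style fail-safe described above, a spend cap with an early warning instead of silent overspending, could be sketched like this in Python. The cap, the warning threshold, and the status strings are all hypothetical choices for illustration:

```python
from dataclasses import dataclass

@dataclass
class UsageBudget:
    """Ad-budget-style fail-safe for usage-based pricing: warn when
    spend nears the cap, refuse calls at the cap, never bill past it."""
    cap: float                 # e.g. the customer's $100 limit
    warn_ratio: float = 0.8    # warn once 80% of the cap is spent
    spent: float = 0.0

    def record(self, cost: float) -> str:
        if self.spent + cost > self.cap:
            return "blocked"   # refuse the call instead of overspending
        self.spent += cost
        if self.spent >= self.cap * self.warn_ratio:
            return "warning"   # tell the account owner they're close
        return "ok"

budget = UsageBudget(cap=100.0)
print(budget.record(50.0))  # ok
print(budget.record(35.0))  # warning (85% of the cap)
print(budget.record(20.0))  # blocked (would exceed $100)
```

In a real metering system the same check would sit in front of each billable agent call, but the shape of the control is what matters here.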
So some AI agent solutions that we see right now have to include help in their offering with setting up the data infrastructure, which sometimes also means warm bodies: people, engineers who are helping these companies facilitate this so they can see the value. But leaving that just to the customer and their data team is
(26:32):
usually a very bad approach.
Andreas Welsch (26:35):
So I think what
I'm hearing a little bit of is
eat your own dog food.
Coming back to that as well.
Leah Tharin (26:39):
100%.
Yeah, 100%.
It's mind-boggling to me how people just think, oh my God, my idea is the shit. It's important that founders are thinking that, don't get me wrong. But in order to really understand what the challenges are, you also need to drag them out into the light, if you will. There's a good book about this that has nothing to do with
(27:00):
AI, and I just forgot the title. It's a sales book. I'll figure it out in a second, but maybe we can also add it. The point is this: in a good negotiation, you wanna highlight the negative points before the customer comes to this objection by themselves. And that's what you should do with AI agents as well.
Andreas Welsch (27:19):
It sounds like, on one hand, there's a lot of opportunity and promise. On the other hand, everybody's still figuring out: what does it actually mean for me and my business? But it's important that you do start to figure it out and work with your customers, and also see and learn what resonates with them. What is it that they actually need? How are they planning on using it? So it informs your product, your marketing, your sales strategy.
(27:41):
Now, Leah, we're getting close to the end of the show, and I was wondering if you can summarize the three key takeaways for our audience today.
Leah Tharin (27:48):
You are not just selling a product; I think you're selling a solution into an existing data stack that is probably a mess. I would say that's the first one. Second, monetization will hurt, but make sure that you are aligning yourself as much as you can with the customer's success.
(28:09):
A good monetization model is one where you maximize your own gain but also maximize the gain of your customer. That's a very classical ICP thing that we say, ideal customer profile. It never works if that's in an imbalance. So don't try to fleece your customers in that way.
And the other thing is that one of the reasons why so many of
(28:29):
these AI agents, or the smaller companies, are not getting investments is because they're focused too much on the early, on the silo solutions. So try to think cross-functionally as much as you can, within the constraints that your customers have, and you have a really good chance in the market.
Andreas Welsch (28:45):
I love that.
That's awesome.
Super actionable and practical advice. Leah, thank you so much for joining us and for sharing your expertise with us today.
Leah Tharin (28:52):
Thank you for
having me.