December 17, 2024 · 47 mins

If we were to rank the top paradigm shifts in product in the last 10 years, I'd reckon AI and product-led growth would sit at the top of that list. Both concepts really come down to amplifying efficiency, doing more with less. But the real cheat code is bringing them together and using the time-saving power of LLMs to enhance your PLG strategy.

In a recent panel event, How to Use AI to Supercharge Product-Led Growth, we brought together three awesome PLG experts — Ramli John, the founder of Delight Path, Dani Grant, the CEO of Jam.dev, and Anuj Adhiya, the author of Growth Hacking for Dummies. We got the three of them talking about the ways product teams can leverage AI technology right now to boost every phase of the user journey.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Hannah Clark (00:01):
If we were to rank the top paradigm shifts in product in the last 10 years, I'd reckon AI and product-led growth would sit at the top of that list. Both concepts really come down to amplifying efficiency, doing more with less. But the real cheat code is bringing them together and using the time-saving power of LLMs to enhance your PLG strategy. In a recent panel event,

(00:23):
How to Use AI to Supercharge Product-Led Growth, we brought together three awesome PLG experts — Ramli John, the founder of Delight Path, Dani Grant, the CEO of Jam.dev, and Anuj Adhiya, the author of Growth Hacking for Dummies. We got the three of them talking about the ways product teams can leverage AI technology right now to boost every phase of the user journey. What I loved about this event was how each panelist's expertise came together in such a complementary way,

(00:45):
offering lots of practical ways to adapt and adjust your growth strategy using AI tools. And they were just super nice people. Let's jump in. So today's session is going to be focusing on how to use AI to supercharge product-led growth. And we'll be speaking with some amazing voices in the space. We have a really exciting lineup today. I'm really excited to introduce them.

(01:05):
We've got Ramli John, who's the author of Product-Led Onboarding. He's also a renowned expert in PLG strategy. Ramli brings a deep understanding of how to turn prospects into passionate users. We've got a little Jeopardy question for Ramli today. So Ramli, you've been called the onboarding wizard by some of the biggest names in SaaS, and your book Product-Led Onboarding has been a game changer for countless product teams. We're really just pumping your tires right now.

(01:25):
If you could wave a magic wand and instantly fix the most common onboarding mistake you've seen, what would it be?

Ramli John (01:30):
Thanks so much, Hannah. I would say it's actually not related to the product. Often the biggest problem around onboarding is internal friction, not product friction. And what I mean by that is product does their product thing inside of the product, marketing does their onboarding emails, and then customer success does their customer onboarding thing. And they don't talk to each other. This is what happened while we were working at an

(01:53):
onboarding company called Appcues, which is a product adoption software, and we had that same issue. So I think if I could wave a magic wand, it would be to get the teams to talk to each other more and agree on what success looks like for the user, which is a hard problem. That's where I would start.

Hannah Clark (02:09):
Man, siloing is just one of those perennial issues. Well, thanks for that. We also have Dani Grant joining us. She's the founder of Jam.dev and a former product lead. Dani is known for using product data to design seamless, high-impact user experiences, and she's also just such an awesome person. So, Dani, thanks so much for joining us. Question to pass to you. So, Jam has skyrocketed to 150,000 users at 32 different

(02:31):
Fortune 100 companies, which is incredible. But you're also speaking at events and conferences, and you're active daily on LinkedIn. Honestly, I'm jealous of how you manage to make all this work. So what's your secret to getting all these things done and staying sane?

Dani Grant (02:44):
First, everyone here should read Ramli's book. It is so good. My co-founder at our company read it first and then told me, you have to read it. I read it. Now it's required reading for our growth product team. And the thing it will completely change your mind on is what onboarding is. We all, as product managers, think about onboarding all the time. But we all think about it as from when you sign up to

(03:06):
when you've used the product. And Ramli's book will show you that onboarding starts a lot earlier and ends a lot later, and focusing on onboarding in that way will change the outcomes of your product like it has for us. Anyway, Ramli's brilliant. Read his book. As far as speaking, we are so lucky. All of our users are builders. They are out in the world trying to change the

(03:26):
world by using software. And so we're over here building our company, and our users are over there building their companies. And so our job is to share what we're learning building Jam with everyone else building their things. And so we end up posting learning lessons online, joining things like this. It's such a privilege and an honor. So thank you for having me.

Hannah Clark (03:44):
I appreciate you taking the time to pick up another panelist. That's so awesome. See, I told you guys, she's a great person. We also have Anuj Adhiya joining us today. So Anuj is a growth expert and the author of Growth Hacking for Dummies. He's got writing on productled.com, and he's also got deep experience guiding SaaS companies on implementing product-led growth strategies. So really honored to have you here with us today, Anuj. During our pre-call, you shared that you're planning

(04:05):
on setting a Guinness World Record by gathering, and I think this is just so cool, the largest number of people wearing party hats in Boston. I think we need some more context here. Can you tell us more about what you're doing and how people can get involved? Where do we send the party hats?

Anuj Adhiya (04:21):
Firstly, thanks for having me. This is such a fun group to be part of. And yes, on the surface it feels like it has nothing to do with product-led growth, but it really does, because it's a large experiment. So there's this party invite app that I'm consulting with called PartyClick. If you want to go check it out, it's the easiest way to set up an event. And we were just trying to think of what are ways we can put

(04:43):
this in front of more people? And a lot of ideas came around, especially things like, oh, we should do our own celebrity lookalike thing, like the Timothee Chalamet thing that just happened. And I'm like, yeah, that's okay, but you know, what else can we do? And then somebody on the team was like, don't they have world records for largest gatherings? Yes. That's what we should do. So I went and looked up the Guinness book, and sure enough,

(05:05):
there's a world record for people with party hats. I'm like, great. This goes with the name of the product: party hats, PartyClick. Great. We should just do this. So literally this week, I'm in the process of getting through the application with the Guinness Book and begging and pleading with the city of Boston to let me have 2,500 people in Boston Common. So, let's see how close I get to achieving that goal.

(05:27):
And the real reason I'm saying this publicly is more to hold myself accountable, and to shame me if I don't make this happen.

Hannah Clark (05:33):
So if you're in the Boston area, bring your party hats. What was the date again in December?

Anuj Adhiya (05:37):
We're thinking December 22nd, so.

Hannah Clark (05:40):
Okay, well now you got it.

Anuj Adhiya (05:41):
What was in the area, we thought about it, and if I don't make it happen, you know who to throw the brick back at.

Hannah Clark (05:47):
So moving on to setting the stage: the intersection of AI and PLG. This question is for Ramli. As a general question, has PLG become essential for the SaaS industry? We see it as a buzzword. Where are we at with that right now?

Ramli John (05:59):
I would say it depends. Yeah, it depends. That sucks. But it depends on how you define product-led. If you say product-led means having a free trial or freemium, is PLG necessary? I don't think so. I actually suggest startups start with high touch and get close to your customers. If you mean by product-led removing any unnecessary friction and creating a great experience for users,

(06:22):
then I do think, if that is the definition of PLG that we're talking about, yes. The standard and bar for what people expect from a product experience has grown exponentially, a lot more than 10 years ago, where something annoying was, oh, that's normal. But now it's, if this is annoying, I have a hundred other options that I can jump into.

(06:43):
There's a ton more products out there. And I really do believe that creating a great experience for end users, for the users who are going to be using the product, is going to be critical in terms of retention and activation and all the other things that we're going to be talking about later, when we touch on how AI will affect the rest of the funnel. And if that's the definition of product-led, then yes,

(07:05):
I do think it's essential. If you mean a free trial and freemium? Then no, I don't think so.

Hannah Clark (07:11):
Wow. Nuanced answer. Okay. Does anyone have anything they wanted to add about the relevance or state of PLG today?

Dani Grant (07:16):
It's so powerful.

Ramli John (07:18):
I'll just add that I see a lot of founders actually asking the wrong question, because the question they seem to ask is, how can I be more like Slack, or how can I be more like... when really the question they should be asking is, how can I use my product better to serve the end needs of my customer or my user, right?

(07:39):
And that can take any and many forms. And that opens up far more powerful avenues for implementing a product-led approach than just trying to copy something that's just never going to work for you.

Hannah Clark (07:51):
So true. So let's move into where AI is entering the space. This one is for Dani. How would you say AI is poised to supercharge PLG strategies? I'm not sure if you've got any stories about how you guys have leveraged it at Jam.

Dani Grant (08:02):
We're about to go into all the tactics, but at a broad level, in PLG your product does the selling, and selling is something that is more effective when it's personalized; it takes a lot of content, it takes a lot of convincing. And so this is actually something that AI is really good at. Companies have been trying to do this forever, like machine learning for personalization, super segmentation

(08:24):
with lots of content. It's now easier than ever to do this really well. You have to understand large swaths of unorganized data. And so I'm really excited to dive into the tactics; this just feels like something AI is really good at. One last thing I'll say is that the best PLG strategy is just to have such a great product that people can't help but talk about it to others. And with AI, you can now build even more powerful

(08:45):
features for your users. And so that's also a really big part of the puzzle, I think.

Hannah Clark (08:49):
Okay, so we'll move on to building the foundation: AI's role across the user journey. I think this is going to be the section that takes up the bulk of the session, so it's definitely a must-listen. Now is the time to put away the phones. Let's walk through each stage of PLG and show how AI can support each one. We'll start with acquire. Dani, did you want to chime in with how your team is using AI to power that acquire stage?

Dani Grant (09:10):
There are two things to mention. The first is maybe the most obvious, which is that in PLG, your customers find your product because the product has sold itself in a way, and they onboard on their own. And one place where you have to generate a ton of content is SEO, and AI can be really helpful. Unfortunately, all of us have seen what this looks like when it's bad.

(09:30):
It just looks like AI mumbo jumbo and it doesn't actually rank. But AI is actually really helpful in SEO. So here's, tactically, how we use it. AI writes the outline using SEO best practices: you literally take an article about SEO best practices, you give it to the LLM, and you say, write me an outline that would follow all these best practices. Then AI edits your SEO or gives you recommendations

(09:53):
for best practices. We use Perplexity to search for the tools that you're going to mention: get me all the prices of all these tools, whatever. So it's very helpful in that way. But beyond that, we're always thinking about how we build content that engages our audience online. Our audience are developers, and in web development there's so much going on; the space is moving really quickly.

(10:14):
And so we wanted to be able to do content around that for our audience. So we thought, okay, let's create a newsletter and a podcast about what's happening this week in JavaScript. If that's interesting to you, by the way, you can go to thisweekinJavaScript.com and, every week in four minutes or less, get the news of JavaScript for the week. And we're a tiny startup team.

(10:34):
We can't record a podcast each week, and AI is really helpful for us; it allows us to write great content. We have over 50 percent open rates on these newsletters, and the podcast is really well listened to, and AI really helps us do that. So those are two ways that we use AI for the acquire step of PLG.
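
To make that outline tactic concrete, here is a minimal sketch of the prompt flow, assuming the OpenAI Python SDK; the file name, topic, and model are hypothetical, since the panel didn't share their actual setup.

```python
# Hypothetical sketch: have an LLM draft an SEO outline that follows a best-practices article.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

best_practices = open("seo_best_practices.txt").read()  # any article on SEO best practices
topic = "how to write better bug reports"               # illustrative target topic

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are an SEO editor."},
        {"role": "user", "content": (
            f"Here is an article about SEO best practices:\n\n{best_practices}\n\n"
            f"Write an outline for a post on '{topic}' that follows all of these best practices. "
            "Return only H2/H3 headings."
        )},
    ],
)
print(response.choices[0].message.content)
```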

Hannah Clark (10:51):
Super cool.

Ramli John (10:52):
I think the other place I've been using AI, in terms of acquire, is repurposing content. So taking a podcast episode and then plugging it into Castmagic, which is a tool for summaries, and it outputs blog posts and Twitter posts and LinkedIn posts that you can share, and newsletter posts.

(11:12):
It needs a little bit of massaging; it's not perfect. But I think that's another area in content where I've seen AI work in terms of acquisition: how do you squeeze more juice out of the lemon? Is that the saying? I'm not entirely sure. How do you get more bang out of your content, essentially. That's one way I've seen AI be used for acquisition.

Dani Grant (11:34):
We actually do the exact same thing. So one of our core beliefs is that founder-led sales in 2024 is happening largely online. It used to be the case that you'd have to get a warm intro, and then you'd have a first meeting, and the founder introduces themselves to the customer. That's happening passively online as people are scrolling LinkedIn today. And so you're trying to constantly share how you see the world and what changes you're trying to make in

(11:55):
the world, passively, online. And so we use AI for part of this, and it's just repurposing content. What we'll do is, if someone on the team goes on a podcast somewhere, we grab the YouTube link, we'll put it into a tool like OpusClip, which will clip up the YouTube link, and it either gives us video clips to share online, or it

(12:15):
pulls out video clips of things that were aha moments. And that gives us nuggets of things to talk about and write about later. That's been really helpful in the acquire stuff too.

Hannah Clark (12:24):
I think that's a huge thing right now. I think both of you spoke to this: being able to repurpose content as much as possible and, yeah, get as much juice out of the lemon, as Ramli put it. That's super awesome. Okay, I think we can move on to the activate stage then. So Ramli, you're the onboarding guy. What would you suggest as far as AI-enabled practices to support that activate stage?

Ramli John (12:43):
Yeah, this is actually something I found another company doing while we were in a conversation around how they're using AI in their onboarding. So one of the challenges with product-led, and it's an opportunity as well, is that you really have to have a strong user research or customer research muscle. With a sales-led approach, you're in direct contact with the customers;

(13:04):
you're hearing the objections. When your product is the one selling, you don't hear those objections; people just leave. So in that case, how do you get those valuable insights, the why that the quantitative data can't show? One way they're using AI, which I thought was cool, is they take all their sales calls, right? They have some sales; they have a hybrid approach.

(13:25):
They take their sales call recordings, and they also have a high-touch onboarding experience for enterprise companies. They take those recordings, and they plug them into ChatGPT to train the large language model. Once it's plugged in, they've started using it to create things. One of them, which I thought was cool, is creating a sales-to-customer-success handoff document:

(13:47):
here are the things that you should do. You can ask, can you help create a five-email onboarding sequence based on the pain points from the sales calls and customer onboarding calls? Since it's been trained on the objections that came up in the sales calls and the kinds of questions and confusion during the onboarding calls, it's more aware and more trained

(14:07):
as to what your customers are actually going through. I suggested that they also plug their most requested support tickets into that chatbot, that ChatGPT, and feed it information that is valuable to that activation phase, so that when you write that email sequence or you write that tour, it can really help out with that.

(14:29):
I think that's one way: plug it into ChatGPT. I've also seen more advanced AI tools where you plug in your data and it figures out retention metrics. So I'm seeing things like Amplitude or June.so, where you plug in your data and it tells you, hey, for most of your successful users,

(14:49):
if they do X, Y, and Z, 30 percent stick around. Which I'm going to take with a grain of salt, of course, but it is some valuable information to go through the data that you have, especially if you have a large enough user set. Whereas if you have 10 users and you plug it into this model, it's not going to have enough data to figure that information out.

(15:09):
So I would say that's a caveat to that kind of analysis: if you do have a large enough data set over time, then you can probably plug it in and figure out some of that information.
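
As a rough sketch of the mechanics Ramli describes (the company's actual setup wasn't shared), the "training" here is really just pasting call transcripts in as context; the file paths, model, and prompt below are assumptions.

```python
# Hypothetical sketch: draft a five-email onboarding sequence from call transcripts.
# Assumes the OpenAI Python SDK; folder name and model are illustrative.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Transcripts exported from your call-recording tool, dropped into a folder.
transcripts = "\n\n---\n\n".join(
    p.read_text() for p in Path("call_transcripts").glob("*.txt")
)

prompt = (
    "Below are transcripts of our sales calls and customer onboarding calls.\n\n"
    f"{transcripts}\n\n"
    "Based on the objections and points of confusion in these calls, "
    "draft a five-email onboarding sequence. For each email, give a subject line, "
    "the pain point it addresses, and a three-sentence body."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```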

Hannah Clark (15:20):
Yeah, makes sense. Does anyone else have any tips they'd want to chime in with on supporting the activation stage of PLG?

Dani Grant (15:27):
This is obvious, but once you figure out, okay, for good activation the user has to take the following steps, then you have to figure out a bunch of ideas to try to get those users to do those steps, right? So for us, we consider a user activated if they create four bug reports with Jam. And so we're always trying to think, how do we push more of our users to their third and fourth bug report?

(15:48):
How can we encourage them and show them the value? There's just a big brainstorm, lots-of-ideas moment there, and AI is actually just fine at coming up with a lot of bad ideas that trigger you to come up with the good ideas. And so that's just another obvious way to use AI in this step, maybe.

Hannah Clark (16:04):
I do like the idea of generating bad ideas that help you come to the conclusions of good ideas.
Dani Grant (16:08):
Here's the thing about humans.
It's really hard to havepure imagination to come
up with an idea from zero.
If you're ever tasked withbeing the first one to
write the first draft ofsomething, it's really hard.
But humans are really goodat reviewing and reacting
because that's just howour brains are wired.
We see something and thenit triggers other ideas.
And so as long as youhave one bad first draft
or one bad first idea.
You're suddenly really good atthe second draft, and so AI is

(16:29):
good at giving you the firstthing that triggers your brain
to think the way that it does.

Hannah Clark (16:33):
Love that advice. Sorry, I think I might have cut you off a little bit there.

Ramli John (16:36):
No, I was just adding that I think the great thing about this kind of large language model is that you can train it. So it's, oh, this is bad, and here's why, here's why I think it's bad, and you give it some kind of feedback. I heard this really great quote from a podcast episode with Nathan Barry. He's over at Kit, and he says you have to treat AI like a very early stage intern, like an intern who's a beginner.

(16:58):
And by giving it good feedback, it actually learns what is good and what is bad. So, as Dani mentioned, when it comes up with a bad idea, give it feedback, and it'll actually start producing better and better output and results based on that.

Hannah Clark (17:14):
Or maybe, at least, the bad version will be a baseline better, so your ideas are even better. I don't think it works that way, but maybe.

Anuj Adhiya (17:20):
Ramli, I've been calling it a drunk intern, but you know.

Ramli John (17:24):
Oh, that's a better thing.
Because it's very unusual.

Anuj Adhiya (17:29):
But the only other thing I will add here, and maybe this is a segue into the next thing we talk about, is that activation isn't really a separate step from a user's perspective, because once they're in the product, they're in the product, right? And I think what's helped me connect the dots downstream and upstream is just thinking of activation as

(17:50):
just short-term retention. Because once they're in the product, that's the game, right? How do you keep them around longer, how do you monetize them better, more, all of that, and connecting what they do initially, as Ramli and Dani were talking about, to what happens later. It's all happening along a continuum of retention. It's not a switch in a user's mind of, oh, I've

(18:11):
been activated, now I've been retained. No, it's all the same thing.

Hannah Clark (18:14):
Yeah. You have to be thinking about things not just from what you see as the stages, but from where the user sees themselves.

Dani Grant (18:19):
It's possible. It's like you're using the product, and you use it in the beginning stages and you use it at the end. I do think, not as a product person but as a user, there's a point at which I'm checking something out and there's a point at which I'm using it, and I think activation happens in the middle. So I do think they are somewhat different from the user's perspective.

Hannah Clark (18:36):
Anuj, what would you say are some of the possible use cases you've seen already, or some ideas that you have, around using AI for that retention stage? Now that we've got our user in the product, where are we going from here?

Anuj Adhiya (18:47):
Yeah, and some of this, I'll say up front, will sound like an expansion of what's already been said in many ways, because I think what's most fascinating to me about retention and PLG is how AI is completely transforming our ability to understand and act on user behavior patterns. And I'll share an example. So many teams that I work with, like I think everybody,

(19:10):
even potentially this audience, are just sitting on mountains of product usage data, right? Everybody's got Amplitude and Heap and Mixpanel and whatever, right? But for whatever reason, maybe the stage of the company, not enough data analysts, whatever it may be, they're struggling to extract enough of the right kind of actionable insights quickly enough. So I think what's changing the game is combining these

(19:32):
traditional analytics platforms, your Mixpanels, Amplitudes, Heaps, right, with these sort of modern AI analysis capabilities, right? And you can just see it, right? It's almost like people were operating in the dark and now the lights have been turned on. Because of the things that started to emerge for me, right? And I'm sure many of us have used these product analytics tools for eons,

(19:54):
and I thought I was reasonably competent at them, but when I started plugging some of this data into private versions of ChatGPT or things like that, it started identifying these very fascinating micro-cohorts

(20:14):
that I think would be almost impossible to spot manually, or based on your level of expertise with any of these tools, right? And so going off the examples that have already been told, right, I think you might discover that users who perform a specific sequence of actions in their first week become power users at three times the normal rate, right? I think there's one example from one customer where they

(20:34):
found that users who both export data and share it with teammates in their first few days are showing much stronger engagement patterns. This kind of thing is the kind of thing people are looking for, right? So I think what's really powerful here is that combination of analysis and automation, right? Because some of these companies are now starting to set up better behavior-triggered journeys, right?

(20:56):
Because people have always wanted to do this, right? When the system detects a particular user following some sort of high-value pattern, it can automatically shift the content or the experience to get them faster down that path, right? And in one case for me it's gone even beyond what's happening in the product, because high-potential users

(21:19):
in one case got invited to a beta program, or they got invited to a customer advisory board, based on their actual usage patterns, right? And that's the thing people want to do, to Ramli's earlier point about getting closer to the customer, but they just haven't been able to do it fast enough. Also, I think for me the real game changer is using all of these tools to connect the quantitative to the qualitative, right?

(21:39):
Because with machine learning, yes, you can analyze patterns across support and community and product feedback, but connecting those insights with usage data, right? So when you maybe combine that with something like Gong or your product analytics, right, you get, I think, a much deeper, 360-degree view.

(22:02):
So not just what they're doing, but why they're doing it, right? And that insight, and the speed of that insight, I think is invaluable, because you're relying less and less on assumptions and upping the percentage of decisions based on what's actually happening because of user patterns and needs.

Hannah Clark (22:15):
Okay. So what I'm curious about is, do you or any of the other panelists have anecdotes or stories of seeing some of this in action? Because it's a perennial issue, combining the quant and qual data and being able to tell that story more effectively.

Anuj Adhiya (22:29):
I'll jump in with another example from my end, right? For a lot of teams, I think these exercises are disconnected. There'll be a team that will run the product-market fit survey. Then there'll be a completely separate database that hosts all of the support tickets, and a completely separate database of sales conversations, right? And sometimes it's not clear even whose job it is to

(22:50):
put all of this together. And it's just that uncertainty and indecision, not because it's somebody's fault; it's just because nobody was told that this is their job. Because they're like, oh, I'm a user researcher, or I'm a salesperson, right? So I think there's power in assigning it to your, let's call it your AI employee, to take that decision

(23:11):
out of your hands and just collate all of this data for you and then present it to all the stakeholders involved. I think that's a trend I'm starting to see more of, where, I guess, the opportunity cost of waiting or indecision is just so high, right? This is the most MVP way of, let's just connect a few data sources. Or, even if you can't automatically do it, let's just

(23:33):
take some exports of the data, let's just throw multiple spreadsheets into this thing and let it elucidate these patterns for us. Yeah. It's the most MVP way of starting to at least speed up your rate of insights and being able to act on them.
Hannah Clark (23:52):
Web designers, this one's for you.
I've got 30 seconds totell you about Wix Studio,
the web platform foragencies and enterprises.
So here are four thingsyou can do in 30 seconds
or less on Studio.
Adapt your designs for everydevice with responsive AI.
Reuse assets like templates,widgets, sections, and design
libraries across sites andshare them with your team.
Add no code animationsand gradient backgrounds

(24:12):
right in the editor.
And export your designsfrom Sigma to Wix
Studio in just a click.
Time's up, but thelist keeps going.
Step into Wix Studioand see for yourself.
What I'm hearing here is usingAI as a way to break down some
of the silos between all thesedepartments are so are you
suggesting like generating across departmental, state of the

(24:33):
union report kind of thing tohelp people like give everybody
in different departments,like a bird's eye view of
what's going on across areas?

Anuj Adhiya (24:41):
Yeah, no, absolutely. And so I think it's not controversial to say that growth in general is a multiplayer sport, right? It's cross-functional by definition, and that's certainly true in the product-led world, because you're trying to bring together all of these pieces of content and education and community and product and sales, and all of these teams have to work together.

(25:01):
And if you have a growth team of any size, the entire purpose of that is to bring all of the key stakeholders together to understand: what's the current state of growth? Where are the problems? Where are the opportunities? And it's always a challenge for everybody to understand, not necessarily what is the role I play, but what is the

(25:22):
best, most impactful thing I could be doing right now, and then communicating that. Because I think being a person that leads growth is as much about, to your point, storytelling and communicating insights, rather than here's what the data says, so that people understand why they should care about this thing. Yeah. Don't rely only on your own abilities, right? Get help from an AI to help you understand, how do I communicate

(25:44):
this data point better to sales versus DevRel versus product, whatever that may be. And so I think its power in breaking down those silos and getting everybody on the same page is highly underrated.

Hannah Clark (25:56):
I really like this tactic. I want to use this tactic. We had a podcast episode with Michele Ronsen, who is a well-known UX researcher. She recommended a very similar kind of process that was a little bit more analog; it wasn't necessarily AI-enabled. I like this as an AI-enabled counterpart. But her recommendation was more in terms of UX research: when you've conducted all this research, that's great, but then how is it usable to all of the different

(26:19):
departments that it connects to? And her recommendation was to frame it department by department: what are these learnings informing as far as the next action items for each department? So I see this as an equivalent to that, where you're using the AI to translate: what are the findings that we're seeing across departments, and what are the action items that we can derive to help us work together better as a team?

(26:39):
Because yes, you're right. I think this is an issue that we struggle with in every startup and every company of any size: how do we work better cross-functionally? I love this discussion point. Does anyone else have any notes on using AI to work better cross-functionally? I think that's a whole other area that we haven't explored.
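
As a small illustration of that translation step, here is a hypothetical sketch that asks an LLM to reframe one set of findings as action items per department; the department list, file name, prompt, and model are assumptions rather than anything the panel specified.

```python
# Hypothetical sketch: reframe shared research findings as action items per department.
# Assumes the OpenAI Python SDK and a combined findings document.
from openai import OpenAI

client = OpenAI()
findings = open("research_findings.md").read()  # e.g. survey themes + support-ticket themes

for team in ["Sales", "Customer Success", "Product", "DevRel"]:  # assumed department list
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[{
            "role": "user",
            "content": (
                f"Here are this quarter's research findings:\n\n{findings}\n\n"
                f"Rewrite them for the {team} team: three bullets on why they should care, "
                "and three concrete action items for them."
            ),
        }],
    )
    print(f"--- {team} ---\n{response.choices[0].message.content}\n")
```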

Dani Grant (26:54):
This is not AI for cross-functional work, but look, why do companies hire product managers? As everyone on this call knows, it's because you want one point person who is going to get to the bottom of things, who owns the whole problem and owns the success of the product, and who is going to do literally whatever it takes to make the product succeed and the project go well. And so what Anuj is saying, where something

(27:16):
is not someone's problem, where someone understood it at a spreadsheet level but not at a user level, it's so true. But that's the beauty of PMs, right? There's no PM in the world that says, that's not my problem. They're just there to make the product succeed. So, not AI related, but one tactic that's been very helpful for us, and maybe is helpful for you all: sometimes it can be a leap to go from a spreadsheet level to a user

(27:39):
level if you don't have user calls set up anyway. And so the best thing you can have is a drumbeat where you're talking to customers all the time, and then you just plug in a few questions that you have. But that's tricky. How do you do that? And so we have a little bit of automation around this that I recommend for everyone, which is: every single week, the top 100 users from the week before that haven't heard from us recently get an email from a Jam co-founder

(28:01):
asking how things are going. And if they reply, and their reply is very thoughtful, we'll say, oh, we'd love to learn more, would you be up to hop on a call? And so we have this drumbeat of calls. And what that means is that if a PM is looking at something and saying, this doesn't make sense to me, or I'm curious to learn more, there is something easy to join, instead of having to invent a new motion of customer calls from scratch.

(28:22):
So anyway, one tactic, not AI related, but highly recommend.
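
Dani only described the automation at a high level, so this is a hypothetical sketch of the weekly drumbeat, assuming a usage.csv export, an invented 60-day "recently contacted" window, and a placeholder send_email helper wired to whatever email tool you actually use.

```python
# Hypothetical sketch of the weekly founder-email drumbeat.
# Assumes usage.csv with user_id, email, events_last_week, last_outreach_date columns.
from datetime import datetime, timedelta
import pandas as pd

def send_email(to, subject, body):
    """Placeholder: swap in your actual email tool or API here."""
    print(f"Would email {to}: {subject}")

users = pd.read_csv("usage.csv", parse_dates=["last_outreach_date"])

cutoff = datetime.now() - timedelta(days=60)  # "haven't heard from us recently" (assumed window)
quiet = users["last_outreach_date"].isna() | (users["last_outreach_date"] < cutoff)
top_100 = users[quiet].sort_values("events_last_week", ascending=False).head(100)

for _, user in top_100.iterrows():
    send_email(
        to=user["email"],
        subject="How's it going?",
        body="Hi! I'm one of the founders. Curious how things are going so far. Any feedback?",
    )
```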

Hannah Clark (28:25):
Awesome. Yeah, yeah. It doesn't have to be AI; we're just looking to grow. All right. So we'll move on to the expand stage here. So Anuj, if you want to take the lead on this one as well, is there anything you wanted to lead with on expansion? Then we can move into Dani's tactic as well.

Anuj Adhiya (28:38):
Sure. Like I said, everything's a bit of a continuum for me, right? But if you think about the traditional expansion playbook, how does it work? You wait for usage indicators; maybe you have a customer success team do some quarterly reviews. Quite frankly, I've worked with some teams that are just hoping to catch expansion signals. But I'll give you a practical example. There's this one SaaS company that's using Gong, right,

(29:01):
to analyze customer interactions across every touchpoint, right? Sales calls, support tickets, QBRs, everything. And the key is they're trying to train the system to look for very specific expansion indicators. So it's not just picking up on very obvious phrases like, we need more licenses. I think it can start to identify patterns around discussions of, say, adjacent use

(29:24):
cases, a mention of a new team member or a department, or a conversation about a pain point that could be solved with some additional capability. So I think when you start to combine that sort of conversational data with product usage analytics, that's what's allowing this team to create, let's just call them, expansion intent scores based on multiple signals, right?

(29:47):
So then when customers start to hit certain usage thresholds, right, like in this one case they were approaching an API limit, or they were attempting to access some premium features, right, that sort of thing correlates with very specific conversation patterns that they've also analyzed. And that started to create a really clear signal

(30:07):
for expansion opportunities. All of this to say, it's not just about identifying expansion opportunities, but also, I think, about making the process feel more natural and value-aligned for the customer.
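
The scoring approach Anuj describes wasn't spelled out, so here is a hypothetical sketch of what a crude expansion-intent score might look like; the signal names, weights, and threshold are invented for illustration.

```python
# Hypothetical sketch: a crude expansion-intent score combining product usage signals
# with conversation signals (e.g. tagged by your call-analysis tool). Weights are invented.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    api_usage_pct_of_limit: float      # from product analytics
    tried_premium_feature: bool        # from product analytics
    mentioned_new_team_or_dept: bool   # from call/ticket analysis
    mentioned_adjacent_use_case: bool  # from call/ticket analysis

def expansion_intent_score(s: AccountSignals) -> float:
    score = 0.0
    if s.api_usage_pct_of_limit >= 0.8:
        score += 0.35
    if s.tried_premium_feature:
        score += 0.25
    if s.mentioned_new_team_or_dept:
        score += 0.25
    if s.mentioned_adjacent_use_case:
        score += 0.15
    return score

# Example: flag accounts above a threshold for a human follow-up.
acct = AccountSignals(0.9, True, False, True)
if expansion_intent_score(acct) >= 0.6:
    print("Route to customer success for an expansion conversation")
```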

Hannah Clark (30:18):
Fair enough. Yeah. Dani has a process that she was going to walk us through, as far as using call transcripts to generate first-draft follow-up emails, which I alluded to earlier. Did you want to walk us through that process, Dani?

Dani Grant (30:29):
I mean, it's pretty self-explanatory. I'll say that as product people, we really, at least I really, believe the product should do all of the heavy lifting. But really, to change people's behavior, you also want a face that you can trust, and so a lot of the expand step is very human-led versus product-led. In a really well-running motion, you have a lot of back-to-back calls helping teams expand their usage.

(30:50):
You want to follow up from those calls with something thoughtful. And again, it's hard to start from no draft; it's a lot easier to start from a first draft. Taking a call transcript, giving it to Claude and saying, what were the three main points this person communicated to me, and then using that to draft, here's what we heard, is just a lot easier than a blank slate. Especially if you have eight back-to-back calls and you're

(31:11):
doing this at the end of the day, you don't fully remember, or your brain is a little fried.
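
A minimal sketch of that follow-up workflow, assuming the Anthropic Python SDK; the prompt wording and model name are illustrative, not Jam's actual setup.

```python
# Hypothetical sketch: turn a call transcript into three key points plus a follow-up draft.
import anthropic

client = anthropic.Anthropic()  # uses ANTHROPIC_API_KEY from the environment
transcript = open("call_transcript.txt").read()

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model choice
    max_tokens=800,
    messages=[{
        "role": "user",
        "content": (
            f"Here is a customer call transcript:\n\n{transcript}\n\n"
            "1) List the three main points this person communicated to us.\n"
            "2) Draft a short follow-up email that opens with 'Here's what we heard' "
            "and proposes next steps."
        ),
    }],
)
print(message.content[0].text)
```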

Hannah Clark (31:15):
I appreciate that. Yeah, anything to save the old brain at the end of the week. Okay, we'll move on to section three here, which is from vision to execution. So this is all about building an AI-enhanced PLG strategy. I'm actually going to throw it to Ramli, because we haven't heard from him for a couple of minutes here. I would love it if you could take us through some of the common pitfalls and misconceptions around PLG and around using AI to support it.

(31:36):
What are some of the big or most detrimental pitfalls that you've seen in your work?

Ramli John (31:40):
I would say, in terms of AI itself, the output is only as good as the input. So if you feed it nothing, or garbage information, then you're going to get garbage output as well. And I think, based on the conversation we're hearing and what Dani is saying: don't start with a blank page. You really have to feed it the right kind of information, specifically around training it with your best

(32:03):
customer experiences and the best calls that you have, and giving it the right kind of context, especially during the expansion example we just heard: here are the calls that we've had with them, and here are the documents that we have around them; give me the output around that. And give it the right kind of prompt in terms of, hey, we're trying to do this, here's what we're expecting,

(32:24):
give me three points. And this goes back to what Anuj said earlier; he called it treating it like a drunk intern. I would say treat it like an intern who's just starting the job. So be a little bit more verbose, a little bit more upfront at the beginning, and say, hey, here's what I'm looking for, here's the output I'm expecting, and here is the context

(32:46):
in terms of the information. I think that goes a long way, rather than being vague about it. I've also seen prompts like, pretend that you're the director of customer success, or the director of product, for this company. Here's the information about the company. Here's the LinkedIn profile of this person. What would I write? What are the three bullet points that this person might

(33:08):
care about based on the calls that we've had with them? I think being very clear and upfront, and being succinct about it as well, can be very helpful in terms of using AI for that.

Dani Grant (33:19):
I think one pitfall to add, especially with really junior people on your team: if you remember being a new person in the workforce or a new person in your role, one of the hardest skills to learn is what does done mean? And the more junior you are, the more done just means complete, versus understanding the impact that really good work can create. And so I think one of the pitfalls with using AI in

(33:40):
any of these processes is that for a junior person, done can just mean complete. So, oh, the AI ran some analysis. But a more senior person would say, and that has surfaced some new questions for us to dive into that are the next logical things to look at. Or, this actually doesn't quite make sense to me, this must be a hallucination. Or, this draft of an email reads like mumbo jumbo and obviously we shouldn't send it.

(34:00):
And so I think that's a big pitfall. And I think the way to avoid it is to actually just have a lot of conversations about what done means on this team, and what it means in this company, using AI or not.

Hannah Clark (34:10):
Great career-building advice in general is to be constantly asking that question, what does done mean, and having that conversation constantly.

Anuj Adhiya (34:17):
Sorry, can I just add a couple of things, one to what Dani said and one to what Ramli said. So, the point Ramli was making about garbage in, garbage out: a pro tip that I have now begun to implement is this. I've started to realize most people don't even know how to Google properly. And that's almost my way of getting an early indicator of

(34:38):
whether, if I'm going to hand off a task to somebody, they will even be able to prompt properly. Because prompting goes so far beyond that; it can get to the point of being super advanced. So I literally have people run what I think are basic Google searches, and just see how they respond to that. And that almost gives me a signal of, okay, maybe

(34:59):
this is a person that needs a little bit more training before I let them loose on this sort of system, versus, okay, this person inherently gets how to query a system, and I can let them into the system first. The other thing, about what Dani was talking about, where things might seem complete, right, or done: I think this is why I have found great value in one of the first

(35:20):
exercises I do, which is to have everybody on a team understand what the product's North Star metric is. Because it's critical for everybody to understand that this is how we deliver value to our users and customers, and everything we do is in the service of growing that value.

(35:41):
And that applies as much if you're going to prompt a system: that is the perspective the system needs, not the perspective of the specific task or the specific analysis. Is this in the service of that greater thing of growing value for our users? I think the common theme running through all of this is that there's a little bit more context

(36:01):
setting and background that whoever's going to interact with these systems should have, in terms of that greater user perspective, before they start interacting with these systems and extracting whatever they think is an insight.

Hannah Clark (36:15):
Yeah, there are definitely a few common themes emerging here, as well as challenges with regard to working together as different teams. We're just going to go through a little pre-close here. Normally we'd take this time to tell you about our next session. For those who are engaged with us month over month with these sessions, just so you know, we are not going to be doing a panel in December. We're taking a bit of a break for the holidays.

(36:37):
So if you see there's no session and wonder where we are: we are going to be back in January with basically more of a career-focused session. It'll be about transitioning into a career in product management; that'll be the focus of the panel. Registration for that will be starting in December, so we'll send out a link to our subscribers when the registration is open.

(36:58):
But it should be a really great session. For those who are here, it probably won't be as relevant, since you're all product managers already. But if you do know anyone who's interested in the career, who has been asking you a lot of questions and is curious about making that jump, please let them know. We'd love it if you could help us spread the word. We'll get right into the Q&A. Our most voted question is from DM.

(37:19):
It is: AI can exhibit bias in interpreting nuanced data, leading to misleading conclusions. In the context of PLG, how can we strategically leverage AI to enhance user experiences and drive growth while mitigating these risks? Anyone want to take the lead on that one?

Dani Grant (37:32):
The thing about AI is it doesn't solve human problems, and you really just have to use your judgment. So it's a great tool, but you need a skeptic's mind. Like in the pitfall of a junior person thinking something is done: just having an answer doesn't actually solve your problem. So when you're using AI, you have to be even more cognizant, really there, really present with your work, really thinking.

Ramli John (37:51):
Yeah, I totally agree. I think this goes into the whole ethics of AI: there needs to be human intervention. I really do believe that where there are biases, there needs to be a human to catch those kinds of biases. I think AI is just an input to your decision-making process, and you can have multiple inputs. You can have qualitative data, quantitative data, you have

(38:12):
the AI suggestion, you can ask your CEO, you can ask somebody from customer success, but these are all inputs. Eventually it needs to be a human making the decision; it needs to be somebody saying, yes, that's what we're doing, that's where we're going, based on all of this information, including the input from AI. And I would be very cautious if we let AI make the decision

(38:33):
for us and go from there. There are obviously some biases there that it might not catch on its own, so.

Anuj Adhiya (38:39):
So what I would say is, I think it's really important not to forget how things were done in the past. Let's just call them ground-truth baselines, right? How did we do things before AI, right? Just so that we can have a clear comparison point, right? Okay. Because if we don't know how we did it, how will you even

(38:59):
know whether the system is hallucinating, or even have the opportunity to ask the question of, should I poke at this a little bit more or not? And I think what's associated with that, right, is that it's important to have a triangulation approach. So let's pick a situation where, let's say, an AI system is going to flag some users as at

(39:20):
risk or something, right? It's like, okay, you can't just take that at face value, right? You've got to look at the raw product analytics data. You've got to look at feedback, look at whatever the customer success team is talking about, right, and see: are those signals manually also aligning? And that gives you a little bit more confidence in the system as well, right? A recent example I came across, and I haven't personally

(39:42):
used it, so disclaimer: I saw Heap Analytics has this great feature where, as they call it, they've explicitly designed for diversity in their training data. And in PLG, what that means is they've ensured that their AI models learn from users across different company sizes, industries, use cases,

(40:03):
things like that, right? So what that does is it helps you catch when your AI system might be over-indexing on a behavior pattern from a larger or more active customer base, right, while missing signals from smaller ones, right? So I think there are some tools that are catching on to this and trying to account for it as well, right? But just don't forget the way you've done it before, and you

(40:26):
can always manually verify.

Hannah Clark (40:27):
Good tips. So the next most upvoted question is: would you recommend using AI in place of traditional support or customer success teams? I feel like we've already answered this one passively, but we can maybe give some more context here. Those roles are typically seen as responsible for cohort retention, yet they are two areas where we're seeing explosive growth in agentic AI tools. This is true. This is interesting.

(40:48):
Okay.
Who wants to get into this one?

Dani Grant (40:50):
I think the most important thing in 2024 is your user's experience. Okay. Because there's more competition than ever, and it's easier than ever to switch tools. Because most companies have some sort of PLG strategy, if someone gets frustrated with your product, it's not that much later in the day that they sign up for a free trial of another product. And so user experience in 2024 is the most important thing.

(41:12):
And so I think that's the way to make that decision: for your users. If the user experience that you want to enable is, we've got a lot of documentation, we want to make it really easy to search, and natural language is a simpler way to search our docs, then having some sort of agentic customer support is actually great, right? Because it's just a nice UI for your docs. If the user experience that you want to provide is that we're

(41:34):
here for you 24/7, there's a person on the other side who cares about your problem and is right there with you for every step of the product journey, then that's probably not the experience to provide. It's possible that you want one for some of your users: your developer audience actually doesn't want to talk or get on a call, they just want a quick query language for information; and for your enterprise...

(41:56):
So anyway, I think the way to think about this is not through a tools lens, but rather through what's the user experience.

Ramli John (42:02):
I totally agree. I think, with the more and more rise of AI, people crave human connection. There's this quote, because I'm working on a new book, and it's from Josh Kaufman. He wrote the book The Personal MBA. He said there's a paradox of automation: the more efficient the automation is, the more crucial

(42:23):
the human experience is. And I think that's so true, because in talking about customer success and customer support and user experience, like what Dani was talking about, I think people are delighted when there's an actual human chatting with them, when it's face to face, like right now. And whenever I go to in-person events, they're just, oh, you're real.

(42:44):
You're not a chatbot that's just spitting out things. You're actually somebody who cares for me, and I feel valued as a customer. And I think that's going to go a long way, especially as companies automate more and more things through AI: support, marketing websites, and things like that. And even emails, I heard

(43:04):
KFC is introducing AI emails to get you to buy more fried chicken, which is crazy. But I think it's really going to be more important to have humans involved, especially in B2B products. I think the word customer, and face to face, does matter in that value.

Dani Grant (43:18):
I think at the same time, the bar for what a quality AI experience is, is getting higher. So imagine your own experiences using products out in the world. If someone introduces an AI feature or AI chat support, and it is a low-quality experience, you actually hate them even more than if it was just a normal feature or normal chat support. There is something about, oh my God, they're just on this buzz train.

(43:39):
It's just hype. It's just marketing. It's whatever. That just feels bad in your gut. And so I think that if you are going to introduce such core functionality of your product, like customer support, using AI, and the customer knows that it's AI, it better be really darn good. The bar is actually even higher to ship that than it would be otherwise.

Hannah Clark (43:55):
Our last question for the day: how do you balance personalization through AI with user privacy concerns?

Dani Grant (44:01):
Follow your privacy policy. If you're thinking, this is wrong, don't do it. Just don't do it. For example, if your privacy policy says we're allowed to look at top-level usage metrics in order to improve our product, then an AI can help you with that. If your privacy policy doesn't say we're allowed to look at all user data, then don't. Whatever, like,

(44:22):
user trust above all else.

Hannah Clark (44:24):
I don't think that there's much
else to say about that.

Ramli John (44:29):
There's definitely some nuance to the response. I've gotten emails like, oh, Ramli, I noticed you did X, Y, and Z in our product. And I think there is some value there, where it's, oh yeah, okay, I'm stuck, I need help. But if it's... I'm not sure. I would agree with Dani; I think it really is the privacy concern, where it's, oh, I feel like I'm being stalked. But I'm in your house, I'm in your product, so

(44:49):
maybe I do expect that a little bit, for you to see what I'm doing.

Anuj Adhiya (44:53):
I think this one goes to a question I've been pondering, right? Somebody, and I don't know who said this to me, called their attempt at this respectful personalization. And I think the key for them was to be really transparent about the value exchange. And I don't think people are not going to expect that we use AI to personalize experiences moving forward,

(45:14):
but I think it'd be useful to just follow simple rules. Okay, if you're going to collect a particular data point, internally we really should have a really clear, demonstrable benefit to the user, and have that as your guiding light first.

Hannah Clark (45:28):
These are all very insightful. Unfortunately, we are out of time. We always do this; we always leave ourselves not quite enough time to get through all the questions. Thank you, everybody, for such an engaged session. Thank you for participating in the chat, for all of your questions, and for joining us today. Really great of you guys to make the time. I hope the session was helpful for everybody who attended, and I want to give a warm thank you to our panelists: Ramli, Anuj, Dani,

(45:50):
you guys have been amazing. It's been a pleasure to have you around with us, and thank you, everybody. Have a great day. Thanks for listening in. For more great insights, how-to guides, and tool reviews, subscribe to our newsletter at theproductmanager.com/subscribe. You can hear more conversations like this by subscribing to The Product Manager, wherever you get your podcasts.