Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Welcome back, everyone.
This is our fourth episode of The AI-Powered Seller, and last time we got a little spicy. We talked about whether AI will replace sales enablement, and we got all sorts of feedback on that one. But today we're going down a different path: why so many teams are seeing zero value from rolling out
(00:27):
generative AI generically, right?
So we're going to talk about when it should be custom, when it doesn't need to be custom, and how to actually do it in a way where you get a return, because at the end of the day, none of this matters if it's not improving performance and moving the metrics. So we're going to talk about data, when to customize, how to customize, and how to do it at scale. This will be a good one today.
So every week, as you all know, we take DMs, and we're getting
(00:49):
these every single week. If you have questions, send them to us. If there are things you want us to go deep on, send those too, because this one was a really good one.
The DM question of the week was: what role does data quality play in customizing AI models? And how can businesses ensure they have the right data to do customization the right way?
Speaker 2 (01:09):
So there are a few different things I think about with this. It's funny, man, I talk to a lot of companies and it's always, well, the data, we've got to have this data. I want you to think about what data means for a sales org, which is what we talk about here. The data is the playbook, the data is how we show up for meetings, it's how we've best defined our ICP, our buyer
(01:31):
personas, our competitive battle cards. And for me, we're going to dive into this custom GPT world. When I think of data for sales leaders, you need to be thinking about the quality of that data, because when you start to create some of these custom models, the more precise your playbooks, your personas, your ICPs are,
(01:54):
the answers it returns for the sales team are exponentially better. Compare that to the generic data points you see from these tools: when you mention the customer, you get a three percent higher likelihood, whatever. Don't get me wrong, that's an interesting data point.
But instead, imagine a world, and this is what we're going to dive into, where you can say: hey, I've got a competitive deal with this
(02:14):
company. Here's the situation: it's a CFO, and we're competing against Zendesk. Tell me the top three things I should avoid, or the top three things I should double down on.
If your content and data are on point, that is actually what I think creates a better output.
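[To make that concrete, here is a rough sketch of the situation-specific prompt Jake describes, written as a small Python template. The persona and competitor come from the episode's example; the industry and other field values are hypothetical placeholders.]

```python
# Hypothetical sketch: templating the situation-specific ask described above.
# Replace the placeholder deal details with your own.
deal = {
    "competitor": "Zendesk",   # named in the episode as an example
    "persona": "CFO",
    "industry": "logistics",   # invented placeholder
}

prompt = (
    f"I've got a competitive deal against {deal['competitor']}. "
    f"The buyer is a {deal['persona']} in the {deal['industry']} industry. "
    "Using the battle cards and buyer personas in your knowledge base, "
    "give me the top three things to avoid and the top three things "
    "to double down on."
)

print(prompt)  # paste into a custom GPT grounded in your playbook content
```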
Speaker 1 (02:30):
So that's an interesting take, and it's the right take. I think when most people think data, they think literal metrics, versus what Jake just walked through, which is really the context, the content of what's happening. Context, that's a really good way to put it. It's more context than it is data, because you hear people say, oh, we have all this data, and it's like, cool, but you have no context for it.
That's right. It's like, here's the win rates of our last thousand deals. And that's
(03:00):
right. But if you don't tell it what to look for, it doesn't help. It's the personas, it's what's worked in the past, it's the insights you provide with the data that I think make this work well. I think that's right. Man, let's jump in here and talk custom AI models, because a lot of times when people hear that, they go, oh shit, what is it? How do I do this? I need all this data, I need an LLM.
(03:22):
What's an LLM? How do I set this up? How do we work through all of this? So what I think it's led to is a lot of people doing just plug and play. That's right.
Speaker 2 (03:30):
Copilot, right. Nobody got fired for implementing Copilot.
Speaker 1 (03:33):
It's the safe way to do it. That's right. And I wrote about this just this week on LinkedIn: OPP, or OPD, right, using other people's prompts or other people's data. I think when people do that, it takes them in a different direction. So when does it make sense to do custom versus more of a plug
(03:54):
and play AI model for a company?
Speaker 2 (03:56):
Yeah, that's a great question. Look, I think it's the things that are more universal for your company. Let's take Copilot: if you're going to install Copilot for your company and you want really easy access to product specifications, easy access to FAQs, things that are universally
(04:19):
known, where there aren't precise tactical applications involved, that's totally fine. A more generic model that you can just upload a bunch of your information to will pull it out: hey, what are the specs on this product again? What does this thing do versus that? So I think it's the more generic pieces where a more general tool that you're going to implement is relatively fine,
(04:42):
because you really just need the LLM to look up the information, and you're giving it your products, right? So it doesn't necessarily need to know how to customize to a situation. Now, you could obviously argue that if you could customize it to a situation, that would be better too.
But those are the applications. And I'd say ChatGPT, for example, if you're using that, like the Teams plan, is more general. It's about the level of precision
(05:07):
that you want, because ChatGPT out of the box, the paid version, is pretty damn good. And Perplexity is getting there, a lot of them are getting there. If it's more general Q&A, general research, I think all of those are fine.
Speaker 1 (05:25):
Yeah, I actually hadn't thought about it this way until you said it. It's a good guide: if the answer can be generic, it's okay to use a generic tool. But if it needs to be specific to your use case, a certain persona, a certain industry, that's where, to use the word I think we're going to keep drilling with people, the context will just be missing.
Speaker 2 (05:45):
That's exactly right.
Speaker 1 (05:46):
Right. Because we also have to think about resources. If you're early as a company, you may not have the resources; you may not even have the context to give. So actually, I want your take on this: I'm an early company. I don't have all these resources, I don't have a lot of data, I don't have a lot of context. Is it the right play to put a generic one in, or could that
(06:07):
actually take me down the wrong path, versus doing
Speaker 2 (06:12):
it the right way from the beginning?
Yeah, we're going to get into this, but the interesting thing is, we talked about this in the last episode about sales enablement and content: before, to build out all this documentation, it would have taken you 40, 50, 60 people-hours. ChatGPT can write your personas for you, it can write your
(06:32):
competitive battle cards, and it can get you 80 percent of the way there. Then you spend one person-hour on the customization.
So look, I think maybe it's about the level of precision. Do I need this to be directionally right 70 to 90 percent of the time, and that's good enough? Or is the extra mile toward the long tail worth it, a 10 percent increase
to get toward the long tail,you know, a 10 percent increase
in productivity, a 20 percent increase in productivity? I think what I would balance is: what's the risk-reward here?
So look, I would test. That's what I'd say: test the generic model. Hey, look, I work at this company, insert link. I sell this product, insert link.
(07:16):
I'm in a competitive deal with, insert link, and the persona is this persona in this industry. Then you try to run it, and you say, well, yeah, that's pretty dang good, that gets me there.
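[A rough sketch of that "test the generic model first" prompt, assembled in Python. The example.com URLs and the persona are placeholders for your own links and details.]

```python
# Hypothetical sketch of the test prompt described above.
# Swap the placeholder URLs and persona for your own.
test_prompt = "\n".join([
    "I work at this company: https://example.com",
    "I sell this product: https://example.com/product",
    "I'm in a competitive deal with: https://example.com/competitor",
    "The persona is a CFO in the logistics industry.",
    "Give me talking points for my next call.",
])

print(test_prompt)  # if a generic model handles this well, custom can wait
```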
So I would say it's a test thing.
Yeah, I think it'll be interesting to watch how companies do it, because you do have people rolling out completely generic programs and
(07:36):
then they're like, nope, nothing happened.
That's what we're seeing in sales orgs right now, man. I've got to tell you, because again, we work with a lot of companies, and we lost a deal maybe three or four weeks ago. I mean, we had buy-in from the CEO. We never lose deals at contract; if we're sending a contract, this is a done deal, right? And then somehow the COO was like, well, we need to pause
(07:58):
because we're thinking about our generative AI strategy across our org. And we've talked about this, man. So I emailed the CEO. Look, I don't get bitter when I lose deals. I'm okay losing deals.
Speaker 1 (08:09):
You realize that an opener like that disqualifies everything that comes after it.
Speaker 2 (08:13):
I don't get bitter, I promise. Normally I don't feel the need to respond; no problem, I'll follow up in a month and see how things are going. But I felt obliged to email the CEO. I'm like, I'm just going to tell you: you're thinking about this completely the wrong way.
You are going to implement a solution, and there is no
(08:35):
solution that works for finance and also works for your BDRs and also works for your biz ops team and your customer success team. Each of these is a personalized use case. So if you're going to pick a generative AI strategy or whatever, you're still developing custom applications for how departments are going to use this, to get that level of
(08:57):
specificity.
So the biggest problem I see is people implementing a generic AI like Copilot, which is the most popular because it's Microsoft, especially for bigger companies that are risk-averse, and thinking that a generic application is going to do it. Like I said, it works in those cases, product specs or more general things, but it just doesn't get specific enough for a lot of
(09:18):
these.
So I felt, Katie, like it was my duty to make sure the CEO knew.
He didn't respond.
No, he didn't. He'll respond; nobody wants to hear it.
Yeah, exactly, he'll respond in like two months.
Speaker 1 (09:29):
I was going to say two or three months. You're going to get a note on that. It's like, hey, you were right. I love those ones; I frame those ones.
Speaker 2 (09:46):
I say, hey, if this comes up in the next month or so, maybe we should pick the conversation back up. And everybody says yes, because I know what's going to happen in the next month, and then they come back and they're like, oh yeah, maybe we should talk about that. So that's it, man. I think when there's a precision of specificity that
(10:08):
is 10, 20 percent better, the catch is that we as humans actually suck at perceiving 10 to 20 percent better. Think about baseball, such a classic metrics game: the guy who hits .320 versus the guy who hits .280. If you watched both of them hit, you'd be like, I can't tell how much better one is. You're literally talking about a four percentage point difference, four hits per hundred at-bats.
(10:29):
We can't feel that. We as humans don't perceive small differences. We're talking more about sales best practices here, but hey, that's what we're here for. That's why reps don't understand the difference between a great script and a good script: you can't really feel the difference until you look at the data and say, actually, yeah, this did increase my conversion rate from stage three to stage four by eight
(10:51):
percent. Well, guess what, eight percent is meaningful. So I think that level of specificity is the answer for a lot of this, and I know we're going to get into why it's simple to start. I think that's a lot of it too. People say, well, this is good enough. And it's like, well, actually, it could be 10 or 20 percent better.
Speaker 1 (11:11):
Yeah, because we have really bad memories as humans, too. I'm experiencing this with my own team right now. In one of my orgs we've almost tripled production over the last nine months. Production, not just revenue, on a per-rep basis. We're almost at 3x right now, which also means I have reps
(11:32):
making two to three times more now, and they don't remember what it was like six, seven months ago. They only know what they're experiencing now. I'm still striving to get that next 10 percent, and the next 10, and the next 10, and there's actually a bit of a gap right now: well, Katie, we're already here, we're three times better. And it's like, yeah, but we could be four
(11:53):
times. It's those little steps people always forget, reps and business leaders alike: oh, that's what worked then. They don't remember the differences as they move forward. That's right, the small incremental wins.
Speaker 2 (12:03):
That stack, yes, over time. And I think that's the application here: can it get you five or ten percent better if you customize? Or are there general applications that are totally fine? So let me ask you this. I think the cool part is, your team has gone through it. We did a six-week accelerator, and we'll have
(12:27):
another one kicking off. We launched the last one in early January, and we'll have another one kicking off, I think, in late Feb, so we'll link to that. So talk about it: if you want to get into this custom model, custom GPT type of world, why is it simple to start, or how would you think about starting?
Speaker 1 (12:48):
Oh man, I think this is fun, because with my team, there's a quote I come back to all the time, one I read at least once a month. It's a leadership quote that says the strength of the leader can become the weakness of the team.
Sure.
I come back to this quote all the time to remind myself: where am I strong in a way that might be creating a weakness in my team?
(13:10):
And if I look at my previous company, I was on the forefront of AI; I was starting to learn a lot, but it never translated down to my team. And I think I created that gap. I was strong at it, so people were like, oh, Katie will figure this out. So at my current company, I finally had all my leaders go
(13:34):
through the course.
Speaker 2 (13:35):
So this is the first reason why this is easier: there's education out there. It's no longer just you and I talking over here.
Speaker 1 (13:39):
There's education; that's why it's simpler. But I took 100 percent novices, people with no context around AI, through that program, to now having managers building custom GPTs in my org six weeks later. So one, the education's out there. But then two, once you
(14:00):
know how to do this, you don't need that much to get going.
Speaker 2 (14:04):
No, not at all. It's not technical. I think a lot of times people hear custom AI, whatever, and they're like, bro, I don't even know how to use it in the first place, and they get overwhelmed. And it's like, it's an LLM.
Speaker 1 (14:18):
You just use words. You talk to it; you almost literally explain what you're looking to do. And this is also why it's simple: generally, a lot of the best use cases for custom GPTs are things you're already doing manually. So if you can walk it through the process of what you're
(14:38):
trying to automate, it's so much easier, right?
Speaker 2 (14:39):
Like we did with our scorecards.
Speaker 1 (14:40):
We already had the scorecard built, right? And by the way, there are also custom GPT scorecard builders that help you build the scorecard. But we took the scorecard and put in what we were looking to do. Actually, I'm going to rephrase this: we did not, Gullar did. I played zero role in this at all. Gullar took the scorecards, Gullar went through the program, Gullar built the custom GPT, Gullar tested it, Gullar
(15:03):
presented it and tweaked it, and now we have a GPT running for all of our calls, every single one of them being scored. And it was like 10 hours of work. I asked him, how long did it take you to actually do this? He said, probably 10, 12 hours total. I asked, how fast would it be now? He said, probably four to five hours.
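[A rough sketch of what the instructions for a call-scoring custom GPT like this might look like; the scorecard criteria below are invented examples, not the actual scorecard from the episode.]

```python
# Hypothetical sketch of custom instructions for a call-scoring GPT.
# Substitute your own scorecard criteria.
SCORECARD = [
    "Opened with a clear agenda",
    "Asked at least three discovery questions",
    "Confirmed next steps before ending the call",
]

instructions = (
    "You are a sales call grader. Score the transcript against each "
    "criterion below from 1 to 5. For every score, give a one-sentence "
    "reason that quotes the call. If a behavior is missing, say so "
    "explicitly instead of guessing.\n\nCriteria:\n"
    + "\n".join(f"- {c}" for c in SCORECARD)
)

print(instructions)  # paste into the Instructions field of a custom GPT
```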
Speaker 2 (15:21):
Think about that. How much time does that save him a month? And now multiply that across all your leaders. So actually, this is the other thing.
Speaker 1 (15:27):
I love how you brought up that we're not good at the small increments; we're also not good at big numbers. We don't understand them. Right now, for this team in particular, I have 20 active sellers, and this is my inbound org. They are handling anywhere from, God, 60 to 70 opportunities or demos a month
(15:52):
per rep. And now almost every single one of those is being scored by this GPT, where at best case my managers could have done two to three calls per rep every other week. It changes everything. We're talking about hundreds of hours, right, not only being saved but gained.
Speaker 2 (16:04):
Exactly. It's not just saving, it's like overproducing.
Speaker 1 (16:07):
We're getting almost everything scored now, and we're able to actually measure. Right before recording, I just got off a one-on-one with one of my managers, and that's what we were talking about. I said we need the reps leveraging this more, not just us to them, but them using it as well. And then it's also easier to update, right, which is the thing we didn't talk about in the first portion: you've got to keep improving it.
(16:28):
Even if you start generic, it should never stay generic. You have to continually feed it more information. So actually, I'll throw that back to you: when you look at these companies, how do they keep them up to date? Because with a lot of these generic ones, it's a flash. They launch it and it goes,
Speaker 2 (16:47):
Yeah, it's like, it's decent! Woo!
Speaker 1 (16:49):
And then it dies down. Like, how do you maintain it? How do you keep
Speaker 2 (16:53):
it going?
[no transcript]
And I go, okay, you've done it.
(17:15):
How many of you then kind of gave up for a little bit? And everyone keeps their hands up, right? Or how many of you never went back? So I think, leaders, you've just got to believe these numbers. As we think about the future of scaling teams, everybody is doing more with less, right?
And I was talking to one of our private equity partners on
(17:37):
Tuesday, and I said, look, your leadership team has never been asked to actually care about or look at rep productivity. Rep productivity isn't something that most people even track. And the reality is, the way most orgs know how to scale is: you add this number of people with this amount of quota. And that's been the play.
(17:58):
That's been the play for a long time. And so with the idea of rep productivity, look, me as a leader, I need to learn this shit. I need to be an expert. And again, most of them don't know rep productivity, so most aren't tracking it. That is wild, Katie.
I would guess 90 percent of sales leaders I talk to don't look at rep productivity.
(18:19):
That is insane. They don't look at it because of how they look at their day. It's simple and easy to just say: X amount of quota, waterfalled up to your number or down, with X amount of attrition, equals this. Look, I built a forecast model like that back in 2012, and
(18:40):
that's just how they've learned to do it. So part of this is understanding that you, as a leader, are responsible for finding ways to increase rep productivity across the org, and this is the answer: providing these custom solutions. I mean, just think about that.
(19:00):
Usually, when you're able to do more, the quality degrades at a massive level. Look at your outbound strategies, perfect example. This is the first technology where not only is it more, it's a higher quality output.
And so there has to be a belief at the top that we need to do this, that this is something we're going to stick with. This isn't something where, yeah, we're going to try it,
(19:24):
because, again, a lot of this is that it's our job now as CEOs, CROs, VPs to teach our people, and that means you have to learn it as well, that this is a new way to problem-solve. We've talked about this in past episodes, but there's a change management lift around getting the org rallied: we're going to continue to make this
(19:47):
investment and stick with it. And once you've done that, you've built the rhythm. As you think about this crawl, walk, run, it's like: great, this quarter, here are the biggest bottlenecks and time sucks. Is there a solution we could customize, and what's the potential impact of that? Okay, great. So we'll build this one and we'll maintain it.
(20:08):
Again, you don't need to be a programmer to do this. All you need to understand are some very basic things, and I'm sure Brian did a course on this. What we'll maybe do is link to the episode or course where Brian talks about how to write good custom instructions for custom GPTs.
So if we've got something, we'll link to it. But
(20:29):
the cost to maintain is really just the consistent updating, almost like a product. You're creating these little mini products, and it's a few hours a month, but again, you don't need a technical person to do it.
And the last thing I'll say is, like I mentioned, you don't need to try to do it all at once. You can just pick one or two. But I
(20:51):
think if you don't believe that you have to do it and that you have to maintain these things, you're going to be in trouble. And the other piece is that the technology is just getting so good, so fast. Even when you think you've mastered it, well then, ChatGPT added citing its sources. Okay, and Perplexity, this is a big one, about a month or three weeks ago.
(21:14):
No, I guess it would have been a few months now. Perplexity started adding source citations in their API, so they're evolving. So again, you as a leader have to take the time to understand what's possible. In sales, we're not used to moving in these fast cycles of rapid change, but you can make it
change, but you can make itsimple.
So again, getting started, Ithink, is relatively easy to
(21:35):
just get a basic education. And then to maintain, you can pick one or two use cases and keep this kind of consistent optimization rhythm. I think that's what a lot of people need.
Speaker 1 (21:46):
I want to read this. Actually, this is from Gullar, my manager, to one of my VPs, Jeff. We built another one for our other sales org to go through, right? He built it for another org, so we're going org by org. I just want to read it, because I'm so proud of my team right now; I just love this. So Jeff got it, they've been communicating on it, and then Gullar messaged: I told Jeff I can make it even better if he scores five to ten
(22:09):
calls with the scorecard and provides the reasons why he scored the way he did; we can feed that back into the knowledge base and make it even stronger. I was like, oh, that's all you have to do? It's such a good example. So he scores seven calls, gives feedback on why he scored the way he did, it gets fed back in, and it's even better.
(22:29):
And what's actually interesting, I was talking about this with them this week: even where it's not perfect, you mentioned like 80 to 90 percent, what's wild is it's almost never wrong on the misses. If a rep didn't do something well, it's almost never wrong. Just every once in a while it thinks the rep did something
(22:50):
well that it didn't. But that's the 10 percent, and because it gives the reason, we can see, oh wait, no, that wasn't good. It's wild: if you have a good scorecard in place, it catches when a rep did not do something. But sometimes it says, oh yeah, that was a great bucket question, and you're like, no, it wasn't, and here's why, and we can address it. That's right, we feed it back, so it's just ongoing.
(23:11):
But it's just wild what you can do. And what started generic has now become custom, right?
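[A rough sketch of the calibration loop described here: a manager re-scores a handful of calls with reasons, and those corrections go back into the GPT's knowledge base. The field names and values are illustrative, not from the episode.]

```python
# Hypothetical sketch of one manager correction in the feedback loop.
import json

correction = {
    "call_id": "demo-0142",                 # invented identifier
    "criterion": "Discovery questions",
    "gpt_score": 5,
    "manager_score": 3,
    "manager_reason": (
        "Questions were surface-level; the rep never asked about "
        "budget or timeline, so this should not score a 5."
    ),
}

# Collect five to ten of these, save them to a file, and upload it to the
# custom GPT's knowledge base so future scores anchor to real judgments.
print(json.dumps(correction, indent=2))
```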
Speaker 2 (23:17):
There are so many use cases here. Yeah, and again, even spending a little bit of time on this one. The other thing, think about the positive you now have. If I went and listened to this manager's team versus this manager's team versus this manager's team, how different would what they consider quality be?
(23:38):
And that's not good. There is probably a right and a wrong way to run a call at your company, and if you're letting four or five different managers each define it, you're not creating a center of excellence for what it means to be a great rep at your company. You have to be able to capture these best practices where everybody can have insight into them.
(23:59):
Look, we know that if people execute this, this is the right way to do it. So you're also creating a quality uniformity of proven processes, which I think is also important.
Speaker 1 (24:11):
I always love when people go down this road: but Jake, I'm not going to turn my people into robots, right? There are 10 different ways to do this. I need to let go. And I'm like, oh, have you ever been to a Michelin star restaurant, a five star restaurant?
Speaker 2 (24:25):
Yeah.
Speaker 1 (24:26):
Do they let their cooks all cook
Speaker 2 (24:29):
differently? I'm an artiste!
Speaker 1 (24:30):
I'm an artiste. No, the recipe is the recipe. They might add some flair, but the recipe is the recipe. And there are recipes hiding within orgs that they just don't realize. Like, Jake is our top performer; here's what he does differently from the rest. That's repeatable, right? The acronym I say a lot right now is WGLL, "wiggle": what
(24:52):
good looks like.
And someone actually asked me once, well, why isn't it what greatness looks like? Why is it what good looks like versus what great looks like?
Why is it what good looks likeversus what great looks like?
I actually have a reason forthat.
I believe everyone can be good,right, I don't necessarily
believe everyone can be great.
Agreed, there are some nuances,some natural abilities, some
behavioral that separates thattop 5%.
(25:16):
percent, that 1 percent. 100 percent agree. That's greatness, but I believe I can get everybody good.
Speaker 2 (25:20):
My saying is, I can create an army of B players. The extra is what makes you an A. But if you just do these things, you're not going to lose; you will succeed, you'll be good, and you can earn your way to greatness.
Speaker 1 (25:33):
But there's greatness hiding in every org that they just don't capture, out of fear of feeling prescriptive. I just had a message today: well, how do I do this without restricting my reps? I was like, this is not about restricting, this is about uplifting your reps. Here's what needs to be accomplished in discovery; how you accomplish it, cool.
Speaker 2 (25:55):
There's a framework: we've got to do this part first, this part second, and as long as we do this, it works. But I'd also stop people from skipping around. I was very fortunate: I was like 26, 27 when I went to a company that taught me this, and I didn't believe the science of the sale was a thing. I was a natural seller, I was very interested in psychology, and I was
(26:16):
struggling. I struggled my first few months; maybe I've told this story before. My boss pulls me in as the second-to-last person to have sold anything. I'm like, what the hell? I'm God's gift to sales. Why haven't I closed anything? And my director, my boss's boss, had listened to my calls, and he goes, Jake, why aren't you following the roadmap, the script? I go, the script? I am Jake Dunlap. He goes,
(26:38):
Jake, let me ask you this: do you think we're stupid? Do you think we'd train a thousand people on a process that does not work? And I go, hmm, probably not. I closed sixty-three thousand dollars in new business in the next month, and I was just like, holy crap, this is a thing. It was actually very freeing, because I had the best-practices roadmap.
(26:59):
I didn't have to think about my next question, I didn't have to think; I could just show up and execute. And I've got a similar analogy to yours: do you think the best movies in the world are made with no script and just a quality actor? So Anthony Hopkins reads the script, but you're better than Anthony Hopkins? You're better than a two-star Michelin restaurant? You are that damn good?
(27:19):
And so as we're talking about these things, all of them are important. That's why you should be documenting: if you haven't documented, you can't give the AI context.
Speaker 1 (27:30):
And we'll part with some ideas here. One, let AI interview you. If you're like, well, I don't have all this stuff, let it interview you: I want you to ask me questions to better understand my deal process. Such a good application. Just let it interview you. Get the context out of your head and down on paper, and now you have something to feed it.
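[A rough sketch of the "let AI interview you" prompt; the topic and question count are just examples of how you might phrase it.]

```python
# Hypothetical sketch of an interview-me prompt for documenting context.
interview_prompt = (
    "I want to document my deal process, but nothing is written down. "
    "Interview me: ask me ten questions, one at a time, to understand "
    "how I run deals from first call to close. When we're done, turn "
    "my answers into a one-page playbook draft."
)

print(interview_prompt)  # works voice to voice in the mobile app, too
```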
(27:51):
And if you're really taking this seriously: I'm in spreadsheet hell right now doing 2025 planning, I'm all up in it. The very first additional auxiliary hire I listed, I'm calling it a GTM engineer, or an internal AI lead. I just want to hire one person to be able to own this and drive
(28:13):
this, because there's so much that could be done. I'm creating a budget for this. I want this. Now, that's no excuse for me not to understand it; I'm still going to try to stay on top of it. But leaders out there, make budget for this. Get someone in, work with an agency, a consultant, someone who knows this, to bring it into your orgs. Otherwise you're going to miss, you just are.
Speaker 2 (28:33):
Well, that's it. You're doing exactly what we're talking about: you're staying up to speed on it, so you see the use cases that unlock, and I think that's what anybody should do. So for me, man, there are some applications for generic AI, but I think all these things are related. The quality of your context is going to help.
(28:55):
If you don't have context, and you really don't want to create the bandwidth to create context, well then, just keep using more generic tools. But if you see the opportunity, like what Katie did: hey, you're a CEO scaling early on, or you're at a bigger company, just say, interview me here. Ask me 10 questions in order to help me create a buyer persona document
(29:17):
and I will answer your questions. And now the cool part is, with the mobile app, you can do it all voice to voice, you can do it in your car, and you can use custom GPTs in the mobile app too, which is a lot of fun. So, all right, man. Closing thoughts.
Speaker 1 (29:31):
I mean, at the end of the day, create the space, create the budget for this, but get started. Don't let greatness prevent goodness. We'll be like, well, I need to do it perfectly. No, get started. If all it takes to get started is generic, start generic, but then build it to custom. Don't wait. I can't remember the exact quote,
(29:53):
right, like, don't let the pursuit of perfection get in the way of greatness. It's that feeling of, oh, if I don't do it perfectly, it's not worth it. Get started and then build towards custom, because this is not optional anymore.
Speaker 2 (30:05):
This is something that has to be done, and again, you can go do it. You heard this; we've got some education. In the show notes we'll put a link to some of the custom GPTs that we've built, and you can customize from there. We just closed our biggest deal with those as well: they've got a sales team of 24, and we're building out,
(30:27):
basically just taking these custom GPTs and customizing them to their business, their personas, et cetera. But I would encourage all of you to make your own too, so if we've got a link to that training, I'll drop it in there. Also, make sure everyone's subscribed; we've got an AI-Powered Seller newsletter, 100 percent free, just sign up. We've got one of our more technical people who pays attention to things like Perplexity adding
(30:48):
the citations. I didn't know that; that was Brian. So subscribe to the newsletter, check out some of the custom GPTs, and for anybody out there, if you have any questions, DM me or Katie, or drop a comment on YouTube if you're watching this there. And yeah, man, this is a good topic.
(31:09):
I'm looking forward to the next one, and we'll see you on the next episode, everybody.
Sounds good, man.