Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
All right, welcome
everyone.
Another episode of the AI-Powered Seller.
I am Jake Dunlap, CEO of Scaled, Journey AI, RevOptics, and the Dunlap household.
Well, I'm not really the CEO.
I'm more like the president there, or maybe I'm more like vice president.
Let's be honest, today's episode is going to be a shot in the gut for a lot of companies.
(00:20):
I'm going to talk about IT and how IT is ruining many companies' generative AI strategy and, hopefully, what you can do about it.
And it's not intentional, and maybe ruining is a little harsh, but I'm just seeing it more and more in the market, where what's happening is companies are trying to move forward.
(00:41):
They're like, hey, I know for these roles, I can be more productive if I just have this thing.
And then IT is like, no, we didn't sanction that, and it's killing productivity as a part of that.
And so if you're an IT professional listening to this, I just want to say that I love you and this isn't anything personal, but there is a big, big difference between
(01:02):
understanding what AI is and understanding the capabilities of what it can do.
So today I'm going to talk about the big ones, like what we're seeing in the market.
There are kind of five core issues that we're seeing IT push back on: data security, integration, lack of control, clear ROI, and ongoing support, right, and I'm going to try to
(01:23):
break down each one of these so you can break it down for your leadership team or, if you're in IT, maybe I can give you some comfort to move forward with some more aggressive solutions here, because the future is now.
You know, if you listened to last week's episode, the last episode of the pod, then you would have heard me talk
(01:43):
about what's already capable today.
In the last episode, I really broke down that, man, forget this general AI deployment, you could be creating this.
And so, you know, if you didn't listen to that episode, make sure to go take a listen.
Make sure, if you haven't already, to subscribe to the podcast so you get the latest and greatest.
If you're watching this live,
(02:04):
Thank you, I appreciate you.
Make sure to like the video, subscribe to the channel as well, and we're going to get into it.
So the first thing I'm going to talk about is data security and privacy.
Oh man, this is the IT calling card.
So let me tell you what happened.
There was an incident that happened in April of 2023.
(02:24):
So, literally, my friends, two full years ago, ChatGPT had only been around since, what, November?
November, December, January, February, March: five months.
It's a little baby, a little AI baby, and what happened in April 2023 is someone from Samsung decided to upload some of their board minutes to summarize them.
Well, somehow, somebody, and, by the way, this
(02:46):
is ChatGPT when it didn't, you know, it was still, like, old information only; there wasn't, like, a live internet connection, there were no custom GPTs.
The model was, like, the 3 or the 3.5, I can't remember what it was, and it literally said, like, right there, do not upload sensitive documents.
Like, ChatGPT is five months old.
It's like, look man, I'm reckless right now.
(03:11):
And somebody was able to actually search those board minutes and found some details.
It wasn't some, like, proprietary, you know, thing.
That incident created scar tissue across every company, like, not every, but a lot of companies, where they're like, oh, we can't share proprietary data, it's going to feed the model.
And like, yeah, you know what, you're not wrong; two years ago, like, that's what it was, but
(03:31):
now, if you fast forward to today, you know, OpenAI encrypts data, okay, this is nerd stuff, with AES-256 at rest and TLS 1.2 or higher in transit.
OpenAI is also SOC 2 compliant.
It literally says, and you know what I'm going to do, I'm going to literally drop the T's and C's into the show notes.
It literally says: no information.
(03:53):
If you use the Teams, the Enterprise, or the API, no information is shared back, the conversations aren't shared back, nothing's shared back to training, et cetera.
And so, when it comes to data security and privacy, ChatGPT, OpenAI, is just as safe as Gemini, as Copilot, as any of the other solutions, right, and it has superior models.
(04:16):
ChatGPT now has, they've got the 4, the 4o model, the 4.5 model, the o1 model, the 3.5 model.
Like, there are so many different models that are better for different use cases that, by not allowing your team... and, by the way, Perplexity is fantastic, Claude is also fantastic, and all three of those are superior to Gemini and Copilot.
(04:36):
And so, when it comes to data security and privacy, if you are making your generative AI decisions purely based on that, you are literally forcing your teams to use software that is inferior to what exists today.
So, look, IT's job is to be safe and compliant, et cetera.
I get that, but we've got to get rid of this mindset of
(04:59):
ChatGPT, April of 2023.
It has the best models and, again, any of these now are secure.
You're not hearing about any of these, you know, breaches.
You know, the worst thing I've heard recently is, you know, some Russian hackers, this was, shoot, maybe a few weeks ago, were able to get some custom GPTs that were public.
(05:21):
By the way, if you create the custom GPTs in your Teams or Enterprise environment, this is not applicable.
They were able to have it give up its custom instructions, like, hey, how did they write you, how did they do this?
Which, again, I don't think that that's a big deal, and it's certainly not applicable to a company that has a Teams environment.
Okay, so that's what's up.
So, again, if you're a seller out there, look, I'm not saying
(05:43):
go against your IT department, but I am saying that there is as close as possible to no risk that, if you put a call transcript in there, all of a sudden your competitor is going to go, ooh, let's see what Scaled is talking about.
(06:04):
Like, it's not happening.
Okay, this is all fantasy land, right?
So if you're worried about it, don't worry about it.
IT departments, I'm sorry, sorry, it's the truth.
So if you don't like it, go do your own homework and tell me I'm wrong.
Feel free to leave it in the comments, but leave me comments on how the Teams or Enterprise edition will share information back, and on the potential breaches in the last, you know, 12
(06:26):
months.
Okay, all right, so that's my high horse on data.
All right, next up, my friends: integration stuff.
Okay, OpenAI's API is super, super easy to use.
I mentioned it, maybe for the first time, along with Make and n8n, in last week's episode.
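To give you a sense of how little that integration involves, here is a minimal sketch of an API call. To be clear, the model name, the prompt, and the assumption that you have the official openai Python package installed with an OPENAI_API_KEY set are mine for illustration, not details from the episode.

```python
# Minimal sketch of calling OpenAI's API (assumes the official `openai` Python
# package is installed and OPENAI_API_KEY is set in your environment).
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name; use whatever model your plan provides
    messages=[
        {"role": "system", "content": "You are a research assistant for a sales rep."},
        {"role": "user", "content": "Summarize this account in three bullet points: ..."},
    ],
)

print(response.choices[0].message.content)
```

That handful of lines is roughly the whole "integration" on the technical side; the real work is deciding what each role should ask it.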
But more than anything, when I think about integration complexity, I want you to think about integrating it into the role
(06:49):
of the person, and that's for everybody out there.
This is how you should be thinking about it.
For my role, my wife even created a custom GPT.
I should be creating little agents for myself that prompt me so I don't have to prompt, and so, again, in last week's episode I broke down what goes into a custom agent.
But when I think integration, I say, okay, for this marketing
(07:10):
role.
How should this marketing role use it?
Okay, they're going to use generative AI.
They're going to have one little agent for this thing.
They're going to still need to prompt for this thing.
Great, this next role, how should they integrate it in there?
So it's really not this complex integration that we need to think about across every department, across every group.
(07:31):
It's about the role.
And I've said this 5,000 times.
It drives me crazy.
It's like, stop asking, what's our AI integration strategy?
That is literally like, I wonder if, in 1995, they had a chief internet officer who said, what's our internet strategy?
You're like, well, what was the answer then?
(07:51):
The answer then was, well, it depends.
And what does it depend on?
It depends on the role of the person using it.
That's how they're gonna use the internet.
That is also how they're gonna use generative AI.
So that's what's up, my friends, on integration stuff, okay.
Next is lack of control.
Okay, I can't control what it says.
Like, what if it gives misinformation?
(08:13):
This is a big one, right?
Let's be honest.
Back in the day, the word hallucination, well, hallucination used to mean something completely different.
But over the last few years, hallucination means AI will make shit up, right, and it will just kind of say, well, I think that this is interesting.
Let me give you a really good example.
This is from well over a year ago, right?
So now it's not like, well, Jake said it hallucinates.
(08:35):
I was using the paid version, probably a year and a half ago, and I was like, okay, I'm trying to write this type of content, help me get some stats around this thing.
It came back with some killer stats.
I was like, man, these are really, really good.
And it was also one of those things where I was like, this is too good, like, these stats are, like, too much, like, they're validating my opinion, and so I'm like, can you cite your
(09:01):
sources for me?
And it goes, oh no, no, those were just placeholder sources, and this is before, like, it had search turned on and a whole bunch of other things, and I was like, you just made these up?
Like, thank God I didn't go post these things.
So when it comes to lack of control, the hallucination, that part of it, is getting less and less and less of an issue again.
(09:21):
If you are, if you're not using the paid version of any of these tools, you absolutely should be, and they will all cite their sources for you.
So the whole idea of ChatGPT making up stats is gone.
It's gone.
ChatGPT will cite its sources.
You say, great, cite your source, and it's like, here's the link, here are the things I used as a part of it.
Journey AI: if you go use meetjourneyai, literally the very first thing
(09:44):
it returns on a search, it says, here are the five sources that I used.
If you use some of the advanced reasoning models, it will show you the sources.
So, you know, the lack of control over outputs.
The other point, so that's one point, is it already is citing sources.
So therefore it's pulling from information and making extrapolations not just based off general LLM thought, but also
(10:07):
off of sources.
So know that.
The other thing I think is very, very interesting around this: okay, this is going to be the gut punch, maybe, for a lot of you.
How often does your friend or your leader give bad advice?
How often do you ask a human about something, and they give you advice, and you're like, that's, like, 80% good?
(10:28):
So why do we?
And maybe, let me take it further: how often do you do a Google search and the results suck or are, like, mediocre?
So, guys, because it doesn't return the perfect answer, the perfect, controlled, exact thing every single time, who cares?
(10:50):
Neither does any other place that you get advice from; every place you get advice from gives mediocre advice at times, other times 10 out of 10.
So I want a lot of you to think about that.
Are you throwing the, you know...
One, does your prompting just suck, to where you're not getting good outcomes?
Two, sure, it might not be perfect, it might hallucinate a
(11:11):
little bit, but that's better than not using it, right?
The analogy that I always use is this: your people have the internet, okay, your people can go to websites.
Your people right now can go to OnlyFans, right, if that's their thing on the internet.
So is it better that they still have the internet and can use it, or is it better to shut everybody off because it might
(11:33):
not be, you know, like, because of one or two use cases?
Well, let me guess, the answer is it's better to have the internet, okay.
So we have to take that same approach to AI.
Yes, we don't have a hundred percent control of it, but you don't have a hundred percent control now over the quality of answers that people are getting.
So get over it as a part of it, okay.
Next up, I've got ROI concerns.
(11:54):
Okay, Jake, what's the ROI?
And I get this one.
This one, to me, is like, I totally get it.
You know, as someone who's been in sales for a long time, there are kind of some very basic sales arguments.
The first is, I can make you more money.
Make you more money is always the best sales argument, because then I can show this really positive lift.
(12:15):
Everyone likes to make more money.
Step two is, I can save you money.
Not quite as good; we struggle with that component.
The next is, I can get you a higher quality of, insert thing, right, so more higher-quality widgets.
And then the last is time savings.
Time savings is the one that people struggle to quantify the
(12:35):
most, and that is, I think, one of the biggest issues that I see in generative AI, because it's a time savings and also higher quality.
It's actually the last two which, candidly, can sometimes be the hardest to prove, and it's those two things that I think companies are struggling with, to say, what's the exact ROI on this?
Oh my gosh, like, what's the ROI, right?
And so I will tell you the stats that we are seeing here,
(13:00):
and you can do what you want.
About eight months ago, we did a survey.
We had about 300 sales reps and sales leaders respond; you know, for those, you had to be using ChatGPT, and the question was, how often are you using it?
And, at that point, I want to say it was like 28% of people who said five hours or more a week.
Holy crap, I mean, that should be eye-opening, guys.
(13:23):
We literally redid that survey.
Um, I think we only had a couple hundred respondents, um, like a month ago.
So six months later, 42% said they were using it five hours or more.
My friends, let me ask you, what is faster?
Going to Google?
Okay, I get a new meeting.
Oh, yay, I got a new meeting.
I go, and this is me typing.
(13:44):
Right, I'm typing.
Oh, let me go to their website.
Okay, I go to their website, click around, blah, blah, blah.
Then I'm like, okay, now let me go to their LinkedIn profile.
Okay, read, read, read, read, read, read.
Okay, fast forward 20, 30 minutes.
That's how people are doing it right now.
Or, you know, and again, with a custom GPT, you give it a PDF of
(14:16):
the person's LinkedIn profile.
Then it reads it all for you, and it says, great, Jake, here's your discovery call prep.
So then, obviously, you could prompt that.
You can say, great, hey, here's the link, here's what I think, you know, if you don't want to use a custom GPT.
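As one possible sketch of what a discovery-prep agent like that could look like, here is an illustrative set of custom GPT instructions. The wording and structure are assumptions on my part, not the actual GPT described in the episode.

```text
Name: Discovery Call Prep Assistant (illustrative sketch)

Instructions:
- When the user uploads a PDF of a prospect's LinkedIn profile (and, optionally,
  pastes the company website or recent news), read everything before answering.
- Return four things: (1) a three-sentence summary of the person and their likely
  priorities, (2) five tailored discovery questions, (3) two talking points that
  connect our offering to their role, and (4) anything in their background worth
  referencing on the call.
- If the user's meeting goal isn't stated, ask for it; never invent one.
```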
But what is faster, logically, like, what's the ROI of Google?
(14:39):
Like, what's the... whenever we went from going to the library to Google, who was the first person that was like, well, what's the ROI of using Google?
I can go to the library and get this information.
It's like, well, logically, I think this is going to save us a ton of time.
So it's like, I can just give something a link, give it the
(15:02):
thing, it does all the summary and analysis for me, then I can look at it and review it and see what I like.
Or I go and I search around the internet.
So, my friends, to implement this stuff, you're talking about 50, 60 bucks a month per user to have custom agents.
You know, you can get the paid version of this stuff for 20, 30 bucks a user, so the costs are not high.
The time savings, literally, you're talking about, you know, we're talking to a team that has 400 people.
They deployed Copilot, and they didn't deploy any of the agents
(15:24):
along with it that do all the things I'm mentioning.
And I'm like, guys, you have 400 reps, and you're still making them go and learn how to prompt and be an expert and do these things.
You're still requiring them to spend hours doing that.
If you have 400 reps and I can save them two hours, you know, or two and a half hours per rep per week, because they now are
(15:45):
getting prompted, that is a thousand hours of productivity in your sales team per week.
What are your quotas, okay?
Well, there you go.
I just gave you guys two free calendar months, or whatever the math is.
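To make that arithmetic concrete, here is the back-of-the-envelope version. The 400 reps and the two and a half hours come from the example above; the working weeks and the 40-hour week used for the rough conversion are placeholder assumptions.

```python
# Back-of-the-envelope time savings from the 400-rep example above.
reps = 400                          # team size from the example
hours_saved_per_rep_per_week = 2.5  # assumed savings per rep per week
working_weeks_per_year = 48         # placeholder assumption

weekly_hours_saved = reps * hours_saved_per_rep_per_week              # 1,000 hours/week
yearly_hours_saved = weekly_hours_saved * working_weeks_per_year      # 48,000 hours/year
rough_fte_equivalent = yearly_hours_saved / (40 * working_weeks_per_year)  # ~25 reps' worth of time

print(weekly_hours_saved, yearly_hours_saved, round(rough_fte_equivalent))
```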
So the ROI to me is pretty straightforward, right?
Now, yes, should we, over time, be able to show more meetings
(16:07):
booked?
Absolutely.
Should we be able to show decreases in sales cycles because we're having better, higher-quality conversations, more win rates?
Absolutely, we should be able to tie it to make more money, which is what everybody loves.
But guess what?
I can tie it to higher-quality insights and time savings pretty significantly.
It's a big, big deal.
(16:29):
So that's my ROI cost concerns.
It's not that expensive.
It's way cheaper than probably 80% of the sales tech you use.
There's a lot that I would give up as a part of that.
So, support and maintenance.
Okay, this is last but certainly not least, and again, I get this one in particular.
The issue we're seeing is that this stuff is just evolving so
(16:50):
quickly that, you know, in the last, you know, eight weeks, ChatGPT released the o1 Pro reasoning model, 10 out of 10.
Claude, Anthropic, raised $3.5 billion.
They just released the 4.5 model.
So this stuff is evolving quickly.
So, yeah, there is a little bit of staying up to speed.
(17:11):
It's required.
It's required for everybody, not just IT.
For you in sales, for you as sales leadership, you have to stay on top of this stuff too, right?
And so, you know, the beautiful part, the nice part about what's happening, is there are a lot of really good technologies that are doing the
(17:32):
development for you, and a lot of this is how you put it together.
The future will be, everyone can build an X agent, but are my instructions or my knowledge documents better than yours?
Can I program the agent with proprietary data sets better than you can?
And so, when you think about maintenance and, you know, keeping up to speed on some of your custom GPTs, yeah, they are.
(17:53):
Those are things that you're going to need to adapt every few months, right?
Oh, there's a new model out, or this thing happened.
So, yeah, I need to go and change this part of the instructions.
But the cool part about this is it's not highly technical; like, most of these changes are wording or phrasing, or how we're putting things together, or making the knowledge documents
(18:14):
better.
So that's the really interesting part: you know, we're entering this world where sellers, every single one of you, can learn how to create an automation, and it's not tough, right?
Lots of people on our team have trained themselves.
So, you know, when it comes to, you know, call it this idea of support and maintenance, the support and maintenance is very minimal, right, and it's no different.
You know, just think, the only thing you're really supporting
(18:36):
and maintaining is the proprietary knowledge sets, because all the technologies are automatically upgrading, and, yeah, sure, if one of their technologies goes down or something misfires, it happens, but that happens in technology all the time.
So this idea of support and maintenance being a blocker: it's so much easier to support than, like, almost any sales
(18:57):
technology tool, because you're just using words.
You're saying, hey, why isn't it giving me the same outcome, how do I go make those tweaks?
So it's easier; at least in my opinion, it's literally easier than ever.
So that's what I've got for you.
All right, data security and privacy, guys: April 2023, it's two years later.
Can we please move on?
Right, all of these things are secure.
(19:17):
You're not seeing it happen when you invest in the paid versions, et cetera.
It's time to move on.
There are superior models.
You are hamstringing, you know, hamstringing your team by not letting them use the best of the best, and forcing them to use, you know, technology that's
(19:37):
already been outpaced a year ago.
Okay, integration complexity: we're talking about integrating in the roles.
Stop with this big-picture stuff.
We'll focus on the roles.
Let's figure out here how we're going to integrate it here.
Pick one or two pilot use cases, right?
Lack of control: again, continuing to make the agents better, you can get your quality of information better, or the output better, and just understand these models are getting better and better.
AI is never gonna be worse than it is right now.
(19:58):
ROI: I don't know how to say it.
Like, what's the ROI of Salesforce?
I don't know, but I know it saves you a lot of time, and forecasting would be nearly impossible without it.
I don't know how to quantify that.
What's the ROI of having access to Google?
I don't know, but the ROI of having the internet is better than the ROI of not having the internet, right?
And support and maintenance: this stuff is really, really, really basic stuff.
(20:19):
This stuff is not overly technical.
It's not that your code is breaking; that's not how this works.
A lot of this is just keeping up to speed with, like, small changes in the model, and making sure we're supplying it with better information.
So, my IT friends, hopefully you got a lot of value out of this.
Maybe you're like, hey, I want to subscribe too.
Make sure to forward this episode over to your IT team.
(20:40):
Make sure to subscribe to the channel, like the video if you got value out of it, or, if you're listening, make sure to share this podcast with your team as well.
My hope for this episode is we kind of are lowering the temperature a little bit.
We're making it a little easier to be like, all right, cool, like, let's get going, let's not worry about the fear and
(21:00):
uncertainty and that stuff.
Like, we actually can get started much faster, and it's a lot less complex than we think, and it's actually getting less complex as time goes on.
So that's what I got for you, everybody.
Thank you for tuning in to another episode of the AI-Powered Seller.
We will see you on a new episode every two weeks, so, like I said, make sure to subscribe to the channel and share with your
(21:21):
teams.
Like I said, I think every CEO and CIO needs to listen to this episode to understand what is actually happening at the forefront of generative AI.
Thanks again, everyone.
We'll see you on the next one.