Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
It's been another eventful few weeks in the world of artificial intelligence. In just the last week alone, the governor of Tennessee has signed the Elvis Bill meant to protect musicians from AI-generated content, Congress has banned the use of Microsoft Copilot by all members and staff, and Amazon came out and admitted that their AI-powered cashierless stores were really
(00:20):
just offshore workers remotely monitoring shoppers via camera. Is it time to reset our expectations around AI? I sat down with my longtime business partner and friend, Evan Leonard, to talk about all that and more.
Speaker 2 (01:05):
This is the CrushBank AI for MSPs podcast.
So, David, the title of the webinar is Reset Your Expectations Around AI. What is it that really frustrates you around the expectations?
Speaker 1 (01:11):
So this is going to come off as a bit of a crazy way to start this webinar, especially when you consider the topic and what we do as a business. Our entire company is built on AI technology, but my single biggest frustration in talking to people and working with customers and giving lectures and holding sessions and
(01:31):
educating people is that people just plain assume it's better than it is. Functionally, they believe what happens behind the scenes, in the black box, as we call it for AI, is just pure magic, and I know that comes from a place of unfamiliarity and awe in how this stuff has just emerged recently.
(01:51):
I mean, it's only what now, 18 or so months since OpenAI released ChatGPT and sort of changed the world and opened the floodgates on generative AI, and people just assume this stuff is incredible and it works out of the box and it's, like I said, better than it is. The problem with that is that that causes all sorts of
(02:12):
downstream issues for an organization trying to embrace this technology. It means that you assume the output is correct when it isn't. That's my single biggest issue with AI: when you sort of let it loose in your organization and let your employees or your staff or your team or whoever work with it, if they're not experts in what they're asking AI to do, then
(02:34):
how can they vet it when there's no guarantee that the technology is right? It means you're not properly training those employees on how to create inputs, how to do prompts, how to prepare data; basically, how to get your organization ready for AI.
Speaker 2 (02:47):
David, you love to tell these stories around, you know, some things that hit headlines, some things that don't, you know. You're a wealth of just information gathering. But I remember the time you told me about the attorney who was using OpenAI and it came back with a fictitious case and all. It was like a prime example of how people who just
(03:09):
don't vet it out don't understand some of these responses.
Speaker 1 (03:12):
Yeah. So that's a great story which most people are probably familiar with, so I'll tell it really quickly. But I'm also going to tell another story which has made less headlines but is more impactful, I think, when talking about-
Speaker 2 (03:21):
You told me I wasn't.
Speaker 1 (03:23):
No, no, the attorney one is fun. Long story short, an attorney, actually based in New York, was trying to prepare a brief for a court filing and he went to ChatGPT and asked for precedents around a certain case. And ChatGPT spit out an answer that included a couple of cases that he could use as precedents for his filing. And he prepared the brief and he submitted it to the court and
(03:43):
everything was all fine and dandy until someone realized ChatGPT had made up the cases completely, hallucinated the precedents, because, quite frankly, they sounded like examples of cases that fit the narrative. And that's what large language models do: they create language that sounds like it should be accurate. It doesn't always have to be accurate. It's not always grounded in the truth, which I know is
(04:06):
something we'll talk about a little bit more as we go on. The other one that I find kind of interesting, which I love to tell, is not about hallucinations, it's about bias, which is another really big problem in AI in general, which, again, I think we'll talk a little bit more about. But years ago, three or four years ago, Amazon built a model, an AI model, to use to evaluate resumes from candidates, and in order to do that,
(04:27):
you need training data. Data is going to be a continuing theme of this webinar, so be prepared for it. So, to get that training data, what did they do? They used the resumes of the employees they'd hired over the years, and this was on the AWS side, on the technical side, and they graded them, so Evan's an A,
(04:47):
David's a B-plus, so-and-so, and they used that to train the model and evaluate resumes as they came in. The problem was, if you think about tech traditionally, and we know this as a couple of people that have owned a tech company, it's been a very male-dominated space. We went for years without finding any female tech employees because they just weren't out there.
(05:07):
Fortunately, that has changed, which is great, but that hadn't always been the case. Anyway, long story short, Amazon fed all of their resumes in and the model started to identify the fact that only men made good tech employees. So it immediately spit out and rejected any resume from a woman, any resume that looked like they worked for a female-focused company or went to a female-specific school, things
(05:29):
like that. So that's an example of how the bias gets baked into the model completely innocuously. We're going to talk about Google, I know, at some point, the things that happened with the Google model, with the images they were creating. It looked like someone maliciously tried to train the model to be woke. That's not how these things work. It just happens naturally, it happens organically, and it can
(05:52):
be a little bit disconcerting, obviously.
Speaker 2 (05:54):
Yeah, well, I mean, the humans always have some input in the training and, you know, a lot of people just may not have certain experiences or understandings and, like you said, it happens sometimes by accident, or a lot of times.
Speaker 1 (06:08):
Yeah, no, it's generally by accident, of course.
Speaker 2 (06:11):
So I know you've spoken about this, like there's limitations to AI. What are some of the limitations? I mean, everyone thinks that, like, AI is this magic and, you know, just always comes up and spits out answers and everything else, but they don't really understand the underpinnings of it. So what are some of the limitations here?
Speaker 1 (06:28):
So there's a couple of different ways we can talk about limitations around AI. First, I'll get a little bit technical here for a minute, and at the risk of boring people, I won't go on a computer science seminar lesson here. But the technology that drives all of this is what we call GPUs, graphical processing units, and they are computer chips. I'm going to oversimplify it.
(06:48):
They're computer chips that are very good at mathematical calculations, and that is what is required for AI. So the first limitation around AI is, quite frankly, we don't have enough GPUs. There are just not enough on the planet. It's why NVIDIA's stock is at 900 or whatever it is today.
(07:12):
It's why NVIDIA is one of the most powerful and valuable companies in the world and it's why everyone from Microsoft to OpenAI to Amazon and Google are trying to get into the space of creating chips. Right, you wouldn't think that all these born-in-the-cloud companies would become hardware manufacturers, but it's that critical for it. The other way we could talk about limitations, again, is what the actual software can do. So first and foremost, like I said, it has to be trained.
(07:35):
There is data, and you have to teach the system how to work and understand inside your own domain of knowledge. So, for example, and I don't want to make this a CrushBank ad by any stretch, but we spent hours and days and months training models around IT support understanding and knowledge, and we're able to build domain-specific models that understand that technology and that language, quite frankly.
(07:57):
So if you go and ask an untrained model, like, again, go to ChatGPT and ask it a question about a specific system, it may come back with an answer because it has that information that it pulled off of Google or off the web or wherever, but it's not trained to understand the actual how. Things like bias and hallucination and drift all get baked into the conversation, into the model.
(08:17):
There's this term we use. It's called the black box problem, where you look at a set of inputs and you look at a set of outputs and you don't
(08:39):
understand why the model, or why the AI, came back with those outputs. A great example of that is, again, from the early days of ChatGPT, when the New York Times writer basically got Bing to try to convince him to leave his wife and run away with it, and Microsoft and OpenAI couldn't explain why that happened. So I think it's critical, as an
(09:03):
organization, that you understand some of the limitations and how to overcome them and, more importantly, how to leverage them and how to really take advantage of the technology, rather than just, again, assume it's perfect and let it be.
Speaker 2 (09:16):
That's great. And you know, being in infrastructure with you for the last 30 years, you know, and I was never a technologist, I had to, you know, really understand infrastructure and applications and databases and all these things that I never thought I'd need to know, you know, growing up and everything. And now we've kind of switched gears here and now we're doing, you know, basically,
(09:38):
essentially, software development. And there's so many people that I've talked to, whether it's in YPO or other avenues, and they say, well, you know, I have my development team looking at the AI, right, and, well, they're developers. So I mean, why not, right? They should be able to handle AI or really fully understand this. And you and I both know that there's plenty of times that
(10:01):
people have called us up and said, oh, I won't say the word, oh my, you know, we don't know what we've done here. And, you know, have we exposed our data, all these other things? And, you know, I'm amazed at the different skill sets that we need to really implement AI as an application, or what people
(10:21):
need to do to implement it, to infuse it into their applications, or what they're trying to do with it. And, you know, help us understand, like, what kind of skill sets do you need if you're thinking about, you know, coming up with an AI solution or somehow using AI to help you with anything from data to information, to metrics and all
(10:42):
that kind of stuff?
Speaker 1 (10:44):
So first I'm going to talk a little bit about the problem you described and why I think it happens, and I think it's kind of interesting, and we will talk about the skill sets, because obviously they're numerous, the things that you need to have an expertise in. But I'm going to use an example, and I was actually just having this conversation with someone this morning. We were talking to a big financial firm. They're not a client, they were just someone we know.
(11:05):
They were actually someone that did some early investments in our company, and they do a bunch of internal development, and basically they described that they were having some issues getting the outputs, the performance, what they wanted, out of some of the AI solutions they were building internally, and they have a huge development shop. And the way I explained it to them, and I'll take it and I'll make it a little bit more personal: let's say I came to you. So we started this company
(11:27):
back in 2016, as we talked about, and we've been developing ever since now, for the last seven or eight years. Let's say three or four years ago, I came to you with an idea and I said, hey, I've got this great idea around mapping functionality, geospatial technology, that we should put into CrushBank. Ignore the fact that it has nothing to do with our platform and just go with me for this example.
(11:48):
So I came to you and said I have this great idea around mapping and we're going to do X and Y, and you say, I love it, let's get it into the pipeline, let's get it on the roadmap, let's get it deployed. What we would do is we would build a bunch of the front end code and we would throw a bunch of API calls over the fence to probably Google Maps or Garmin or wherever we're getting the GPS data
(12:09):
from, and we would get back answers and we would interpret the answers and we would display them inside the application, and we would have a ton of faith that that mapping information that we got back from, we'll just say Google for now, was accurate, right. So in other words, let's use an example at our old MSP. If we wanted to plot someone's path, a tech that was on the road
(12:30):
and he has to go to four different locations in a day, you know, we can maybe build an algorithm to optimize that with mapping data and we'd be confident. The last thing I need to worry about is the mapping data. The problem is that all of these companies think you can do the same thing with all these APIs that leverage large language models, and I'm not picking on them, but I'm going
(12:50):
to use OpenAI as an example. So people think that you can just throw an API call over the fence to OpenAI, give it a bunch of data, and you will get back an answer that enhances your application or is accurate or is valuable or is useful, and that's just simply not the case.
There are a suite of skills that you need to understand in order
(13:10):
to leverage this stuff that is way beyond what most organizations have today, and, quite frankly, it's not a surprise they don't have them, because this is new technology. Two years ago, no one was talking about this. If I was on a webinar two years ago and I said to everyone listening that they needed to hire prompt engineers, no one would know what I was talking about. Maybe I didn't even know what they would think of if I said
(13:32):
something like that, but it's as simple as that. It's as simple as having people that understand how prompts work. It's as simple as having, well, not as simple, but you need to have data scientists. You need to have people that understand data and how to manipulate it. You need to have subject matter experts. So we've built applications around delivering IT support better, and our entire company is made up of people that have
(13:55):
been or spent some time in managed services or in IT support. Because I know when I see a ticket and I have to create a summary of it, I can vet what that summary is. If you show me a medical record and say, write a summary of that, I'm completely out of my depth. So you need subject matter expertise. You need people that understand data science and how that works.
(14:17):
You need people that can make these prompts, understand how to tune these models, and you need to start with the foundation that these people don't exist in your organization today. Maybe there are people there that have the expertise or have the capacity or the capability, but they need to be trained, and just taking a developer and saying use these API calls is a recipe for disaster.
Speaker 2 (14:39):
And just from, again, our experience, we're on our fourth or fifth iteration of Watson, right? Just going from semantic service to generative, to everything that we can do with it today. You know, the upkeep of this stuff, the upkeep of the applications you're connecting to, the upkeep of the baseline
(15:00):
technology continually changes, right? So if you're not continually in that business, in that understanding of the technology, I mean, things go sideways pretty quickly, right?
Speaker 1 (15:12):
Yeah. So the other thing that changes is the results that come back from those API calls. So let's say, for example, you build something and you put some prompts together and you have a great solution that's bringing back valuable data, valuable answers. If this was a database or a spreadsheet or some sort of mathematical calculation, I know that every time I do select star from
(15:32):
customers where city equals New York, I'm going to get a list of the New York customers. That's a database reference for my non-technical friends on the call. I know if I'm in Excel and I say two plus two, I know I'm going to get four. If I use, we'll change it, if I use Claude from Anthropic as my
(15:54):
large language model and I send the same prompt over today and three months from now, I'm not guaranteed to get the same answer. There's going to be different data. The technology is made to generate different answers, and that's the thing that people need to understand. It's not only that they need to upkeep the applications. You need to monitor the answers you're getting back from these systems to make sure those things we talk about,
(16:14):
hallucinations, bias, drift, are not making their way into the system, and it's why, I don't want to steal your thunder, but it's why we talk about governance and why that's so important for what we do.
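A minimal sketch of the contrast described above, in Python (the call_llm() wrapper, the sample table and the logging approach are illustrative assumptions, not anything from the episode): the SQL query returns the same rows every time you run it over the same data, while the model call is not guaranteed to, so its answers get logged for the kind of monitoring mentioned here.

```python
import sqlite3

# Deterministic: the same query over the same data always returns the same rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, city TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [("Acme", "New York"), ("Globex", "Chicago"), ("Initech", "New York")])
ny_customers = conn.execute(
    "SELECT name FROM customers WHERE city = 'New York'").fetchall()
print(ny_customers)  # always the same answer for the same data

# Non-deterministic: the same prompt to a large language model is not guaranteed
# to come back the same way today and three months from now, so log and review it.
def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around whichever LLM API or model you actually use."""
    raise NotImplementedError("wire this to your model provider")

def monitored_llm_call(prompt: str, log: list) -> str:
    answer = call_llm(prompt)
    log.append({"prompt": prompt, "answer": answer})  # keep a record to audit for drift
    return answer
```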
Speaker 2 (16:26):
You know, I think this next question really kind of ties in nicely, about tuning the model, right? So I hear you talk about it all the time and, you know, to your frustration sometimes, because you're trying to teach, you know, from myself to the sales team, to marketing, to everyone, you know, the importance of prompt tuning, so they are tuning the model. So, you know, talk to us a little
(16:48):
bit about that and what the importance is.
Speaker 1 (16:51):
So this goes again back to what we talked a little bit about with subject matter experts and understanding your environment and your domain of knowledge and your industry. Large language models are, for simplicity's sake, a collection of a lot of data. They're used with mathematical representation, they're tokenized, they're plotted on a Cartesian plane.
(17:13):
There's a whole bunch of high-end mathematics that goes into it. But for simplification, it's just a lot of data, and in order to get optimum results, that data needs to be the data that is relevant to your use case, your example. So let's say again I want to build an IT-specific application using OpenAI, and I'll just say I use GPT-4, their latest and
(17:38):
greatest model. It's not custom built for what we do. So there's a few options of how you can attack that. You can build new large language models from scratch. Good luck with that. I mean, in this crazy world we're in, you could probably get someone to fund it if you have a good enough story. But we're talking millions and millions of dollars and
(18:00):
technology and time that you just don't have access to, right, things like the GPUs that I referenced. So let's just take building models off the table. You can fine tune a model. Again, this is expensive, right? Because, think about it, GPT-4, for example, has 1.7 trillion parameters in it. Do you know how much you have to add in tuning to that?
(18:23):
It's not fine tuning anymore. It's hundreds of millions of examples to move the needle on that. Like, 1.7 trillion is a lot. I know I laugh when I say that, but that's a lot of parameters to try and move the model on. The reason we choose IBM, and this is again, I love them and our partnership with them is crucial to what we do, and I'm a big believer, but this is not meant to be an IBM advertisement.
(18:45):
What we do with IBM is this technology called prompt tuning, where we essentially can tune a large model with a lot less data. So for our summaries, for example, for our summarization technology, where we look at tickets and create summaries of them, we did that with a couple of thousand examples, as opposed to having to put tens and hundreds of thousands of
(19:07):
examples and maybe even millions together. So prompt tuning, which was a technology that was really pioneered by MIT and IBM, is a really powerful way to do that, and it's important. But the other part of that, which I think is important for people to understand, which is again one of the key takeaways I'd like people to have from this, is that size doesn't matter. And I try not to laugh when I say that, but please don't take the double
(19:29):
entendre reference to mean anything.
Bigger isn't always better when it comes to large language models. Sure, if you want a large language model to help you create a novel or to write a play or even do some sort of marketing exercise with you, then sure, a lot of parameters is good, because you need to understand how people think, how they talk, how they act. But we got much better results out of a 3 billion parameter
(19:52):
model than we got out of a 20 billion parameter model, so one seventh of the size, because it was custom built and we were able to tune it much more acutely to what we were looking to accomplish. And I think that's important to remember, because when you work with, if you're building an AI solution and you're going to just use the OpenAI APIs, that's a mouthful,
(20:13):
they only have a few models, and that's not a criticism of OpenAI at all. They've got some of the best models on the planet. But we have multiple different models in our system for everything from summarization to resolutions, to chat, to identity, sorry, to recognition of identities, to classification. They all have different technology needs and they all
(20:38):
use different models. So I think it's crucial to understand that. That kind of goes back to your previous question about the skill sets. It's important to understand that it's not one model fits all. Go to Hugging Face, for example. Hugging Face is kind of the repository, the open source repository, for large language models. There's like 80,000 of them on there, some insane number.
(20:58):
I can assure you, people aren't making them because they're bored. They're making them because they all have different use cases and different needs. So I think it's really important, again, to understand what you're working with and how you can leverage these sort of different sets of language models.
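For anyone curious what prompt tuning looks like mechanically, here is a minimal PyTorch sketch of the general idea, not IBM's or CrushBank's actual implementation: the base model's weights stay frozen and only a small block of soft prompt vectors is trained, which is why a couple of thousand examples can be enough to move the needle. The tiny frozen network below is just a stand-in for a real language model.

```python
import torch
import torch.nn as nn

embed_dim, prompt_len, num_classes = 64, 8, 3

# Stand-in for a frozen pretrained language model (its weights are never updated).
frozen_model = nn.Sequential(nn.Linear(embed_dim, 128), nn.ReLU(), nn.Linear(128, num_classes))
for p in frozen_model.parameters():
    p.requires_grad = False

# The only trainable parameters: a short sequence of "soft prompt" embeddings.
soft_prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def forward(token_embeddings: torch.Tensor) -> torch.Tensor:
    # Prepend the learned prompt to the input embeddings, then mean-pool for this toy head.
    x = torch.cat([soft_prompt, token_embeddings], dim=0)
    return frozen_model(x.mean(dim=0, keepdim=True))

# Toy training step on a single (embeddings, label) example.
example = torch.randn(20, embed_dim)   # pretend these are a ticket's token embeddings
label = torch.tensor([1])              # pretend class, e.g. "password reset"
optimizer.zero_grad()
loss = loss_fn(forward(example), label)
loss.backward()
optimizer.step()
print(f"trainable parameters: {soft_prompt.numel()} (the base model stays frozen)")
```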
Speaker 2 (21:14):
You know, there's a yin and a yang to everything, right? So I think we talked about the yin just now and I think we're about to talk about the yang in certain respects, right? So that's data and the real importance of this aspect, right? So one of the things that we do is we create, you know, a private data lake for companies, and with everything you just
(21:35):
spoke about, right, how we train, how it's your own data and, you know, the importance of that, right? So, you know, we always have this expression over at CrushBank: it's all about the data. And, you know, when I hear that, I always kind of chuckle. And you and I, you know, growing up in the 80s and 90s, we watched all the same movies, we have all the same references to
(21:55):
movies and everything else, and one of the movies that was a little bit of a sleeper to me and to you was this movie called Sneakers that was out in 1992.
Speaker 1 (22:06):
It starred Robert Redford, a classic, and Ben Kingsley.
Speaker 2 (22:09):
There's so many great actors and actresses in that movie. You would be amazed. And if it's not on your list, put it on your watch list. But there's one dramatic scene where Ben Kingsley and Robert Redford, they used to be best friends, and you have to watch the movie, and they've, you know, not kept in touch for various reasons, and it's a dramatic scene towards
(22:30):
the end, and Ben Kingsley is, like, so frustrated and he says, it's all about the information, right? And he kind of says this to Robert Redford and, you know, if you know the plot of the movie, it's very dramatic, it's a very bold statement. And I kind of feel it's the same way here, right? So it's about the data and the difference between having the
(22:52):
data, you know, having good data versus data that's not so good, right? Because that can drive different types of results, right? We've seen it with our clients, where you have some that are just amazing, have amazing data, and they have tremendous results right out of the gate, or they're on this mission, this drive, to say, you know what the future is all about?
(23:16):
Data, right? This is our biggest asset. It's all about the intellectual property and what we can do here. The future could be really amazing and we have to drive this into our organization. And the ones that do it, they see tremendous results, whether it's in our business or, you know, some of our friends who are doing some other types of use cases. Like, that's where this is so vitally important, right?
(23:42):
So what does all this mean, and how does the AI make a huge impact, or, you know, not such a huge impact, around data? I know you're just like a wealth of information around this stuff, so give us some insight on it.
Speaker 1 (23:57):
So when I talk to people, I always try to give some advice, and one of the first things I always say is exactly that: it's all about your data. Right, you can't have AI initiatives in your organization if you can't get data to feed them. Now, there are cases where you can leverage data from somewhere else. Right, let's say you are in some sort of a business that is driven by weather patterns.
(24:18):
Right, you're a coffee shop and you sell more hot chocolate in the winter months than you do in the summer months, and you want to figure out how to order your hot chocolate from your manufacturer, from your distributor, so you can get a bunch of weather data from the Weather Channel and you can build some AI solutions around that. But that's still only half the story. The data really drives everything in your organization.
(24:42):
But it's not just the data, it's what you do with it, and I feel like at this point I'm turning into a storyteller. I should be wearing a cardigan sweater with suede patches and smoking a cigar, but I'll tell the story anyway. Part of this story didn't particularly age well, but you'll have to overlook that. You'll understand what I mean in a minute.
(25:12):
One of the best use cases I've ever seen or heard of for using your organization's data is House of Cards with Kevin Spacey. That's the part that didn't particularly age well, but we'll overlook what happened there and we'll just talk about the show, which was a great show before his inner demons came out to the public. I was actually at an IBM conference seven or eight years ago and he was the keynote speaker. It was fascinating, and he talked about how Netflix
(25:35):
leveraged their viewer data to essentially build House of Cards in a lab. So what I mean by that is they looked to see what type of television shows people were watching. Oh, political dramas, great. They looked to see who the actors were that people were interested in. Robin Wright Penn, Kevin Spacey, all the other people that were in that show. Check, check, check.
(25:56):
So they literally constructed this show out of data that they were able to glean from their viewers' habits and built entertainment around that. But they even took that one step further and they basically tailored the ads you saw for House of Cards to your viewing habits. So, in other words, if I had watched American Beauty half a dozen times, maybe I did, maybe I didn't, I would probably get an
(26:19):
ad that featured Kevin Spacey. If you had watched a show that featured Robin Wright Penn, you would probably get an ad that featured her. So they were taking all this data, and again, this was years ago, we've come much further than that, and they were using it to drive outcomes in their business. So I talk about AI and I say, listen, the first thing you need to do is get a handle on your data. Inventory it, catalog it, understand how to get access to
(26:42):
it, know who knows what it means. But just as important as that is to determine what outcomes you want to drive with this data. So I'll give you an example and I'll ask a question that brings it close to home. We owned an MSP for 30 plus years, we had so much data in that company, and we dissected it seven ways from Sunday, upwards, frontwards, backwards, sideways,
(27:04):
to figure out what it told us. If I told you that I was able to get a handle on all of the data inside of Chips when we were running the company, what type of outcomes could you have driven, as a business owner and a leader, with the breadth of data that we had inside that organization?
Speaker 2 (27:20):
Yeah, you know, it's funny, because you and I belong to a group called True Profit Group for many, many years and, you know, it drove us to be more conscientious of what our data really told us. And, like, where could it help our organization? So, just from, like, the IT support perspective, you know, we
(27:40):
learned that when, you know, a ticket was touched more than once, right, and if it was touched two or three times, forget about it, you know, we were in jeopardy of churning clients, right? They were very unhappy with our support level. And, you know, we were following this, we were trying to figure out, like, what's going on, that, you
(28:01):
know, really just getting the intelligence out of this information of how do we combat this churn. How do we combat? Because we also learned that you're going to lose about 12% of your clients or your revenue per year. Yeah, and that was, that was, there's really not a lot you can do about it.
(28:21):
That's at a good company, right? One percent a month. And it was because someone got bought, or someone went out of business, or whatever the case may be, or, you know, maybe you had some poor results, but you try and limit that as much as possible. And we were able to hone in on that information, which is also kind of what started to drive us into this business. But we started to figure out that, you know, if tickets had
(28:43):
too many touches, for example, we were in jeopardy of losing that client, that that was an unhappy client.
Speaker 1 (28:50):
Yeah, you know, it's funny, and that's why I say it's important to do this exercise and understand what data drives your business. Because, again, you can't have successful AI outcomes without data, and if you can't get what you need out of that data and understand how it affects your company, then you're honestly just wasting the time. But the funny part is, and I'll give a little bit more detail on this, this was a data exercise that we had done at Chips with a
(29:12):
bunch of other like-minded MSPs, and we determined that the magic number, for whatever reason, was 1.3. So at a client, when the average touches per ticket across the entire user base got above 1.3, it drove customer sat through the floor, which then drove customer churn through the roof.
(29:33):
I hope I said those right. Good, okay. But the funny part is, when we started that exercise, and I did it because I'm a nerd and I just needed something to do with all this data that we had, when we started this exercise we looked at every other metric. We looked at average cost, obviously. We looked at response time, we looked at resolution time, we looked at, you name it, against customer churn, until we
(29:54):
stumbled across the one that really fit the bill, which was, in this case, touches per ticket, or however you want to call that, escalations, handoffs, whatever the case may be. That's what frustrates clients and that would drive customer churn.
But I implore you, when you do this, first I implore you to go
(30:15):
take off on this data exercise to understand what data you have, what you have access to, and how it's valuable to your organization. But I equally implore you not to go in with any preconceived notions. I mean, it's fine to have them, but be open to changing your mind. Right? Don't assume you sell more hot chocolate because the weather's bad. Right, there's a funny old line in Cheers where Norm talks
(30:36):
about how he drinks beer in the summertime because it's hot, and then Cliff asks him why he's drinking in the winter. Well, what else are you going to do with it? What else are you going to drink? So it's not always black and white. The correlations aren't always where you think they are, but it's important, again, to do that exercise and do that investigation in your organization.
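As a concrete version of that data exercise, here is a small pandas sketch of the calculation described above, average touches per ticket per client against the 1.3 threshold; the client names and numbers are made up for illustration, not real Chips data.

```python
import pandas as pd

# Illustrative ticket export: one row per ticket with a count of touches/handoffs.
tickets = pd.DataFrame({
    "client":  ["Acme", "Acme", "Acme", "Globex", "Globex", "Initech", "Initech"],
    "ticket":  [101, 102, 103, 201, 202, 301, 302],
    "touches": [1, 2, 1, 1, 1, 2, 3],
})

CHURN_THRESHOLD = 1.3  # the "magic number" from the touches-per-ticket exercise

avg_touches = tickets.groupby("client")["touches"].mean().rename("avg_touches_per_ticket")
report = avg_touches.to_frame()
report["churn_risk"] = report["avg_touches_per_ticket"] > CHURN_THRESHOLD
print(report.sort_values("avg_touches_per_ticket", ascending=False))
```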
Speaker 2 (30:54):
Yeah, David, I'm going to take this in a slightly different slant here now, where, you know, we talk about, you know, AI and humans, and a lot of people come to us and say, hey, how can I get rid of people? Right, if you're a business owner, you know, some days you just sit there up at night going, how can I get rid of people? You know. But then you realize that people also drive your business and success and everything else. And, you know, what we found is,
(31:17):
you know, first of all, this stuff right now, how it is, it enhances your people. But the other part of it is, you know, we've seen it where it can kind of, you know, enhance your data. Right, we're going to take away some menial tasks so people can actually do some things that are a little more important, and where the AI can do some stuff that's a little bit more
(31:38):
accurate. And I know, before we sold Chips, that you would have innovation meetings with some of our clients on how they can better drive their data, drive information, how to get rid of some of these tasks that they had to do. Maybe you can give us a little bit of insight or discuss that a
(31:58):
little bit here too.
Speaker 1 (32:01):
Yeah, so I think we jumped the chasm way too quickly, from I need to figure out what AI is to I'm going to use it to replace half my people. It's not a good theory, but it makes sense in thought. In theory, yes, but in practice it's obviously completely unrealistic. But I think that we don't lend enough value to the human-AI
(32:26):
collaboration process. So what I mean by that is, it's a perfect example. Right, there are menial, and you know, menial is not the best word, but there are tasks that don't require a ton of high level intelligence and thought and processing from a person that can be automated by AI. Right, whether it is categorizing something,
(32:47):
classifying something, extracting something. Right, let's say I gave you a legal brief to read, or our contract, we'll use a contract, and said, can you please pull out the terms of this contract that our client needs to be worried about? It will take you some time and you have to have a legal mind to do it, and that's fine.
(33:07):
But you can very easily train an AI to do a really good job at something like that if you specifically train it for that. But ideally, you want it to work hand in hand with a human being. Right, because you want it to be able to essentially go back and forth, where the human can ask the AI questions. Right, what are the terms of this agreement? What's the indemnification clause, what is the non-compete,
(33:29):
what is the non-disclosure? And when it spits back an answer, you can also then ask it, well, does that term match up with state law in Delaware or something? I don't know, I'm just making this up. I don't have a legal background. But my point is the collaboration of that is important, and we went way too far to the kind of McDonald's
(33:49):
fast food model where we don't need people taking orders anymore, we can have machines do it and we could automate the whole process. Doesn't work for McDonald's and it certainly doesn't work in our businesses with what we call knowledge workers. Right, find the areas in your organization that you can automate and optimize with some sort of machine assistance and let your people focus on the things that they are good at in
(34:10):
a unique way, so managing clients, understanding relationships, selling, just human effort that requires that human touch that we have not been able to automate yet. Despite what Sam Altman says and thinks, that human touch is still critical to success in business, but it works better if they work hand in hand with a
(34:33):
machine, and, quite frankly, it's also critical that they vet and QA that machine, right? So again, that example, I'm not going to go back into it, but that attorney example would have been fine if he had read the cases and said, this is ridiculous, these are not precedents, I'm not going to turn this in. He could have fed the precedents into the system and said, turn this into a brief, right?
(34:53):
So that then goes from a major failure of someone just trying to shortcut their job and not qualifying the results that come out. You could have very easily flipped that around and said, look up these two cases in a specially trained model and help me write a brief based on these precedents. That would save him a ton of time, but it still requires the legal mind and legal expertise.
(35:14):
So it's a bit of a subtle difference there, but I think that we're undervaluing that human-AI collaboration. I know I've said that three times, but I really believe in that.
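A rough sketch of that human-AI collaboration loop, reusing the same hypothetical call_llm() wrapper as in the earlier example (the prompts and the review step are illustrative, not a real product workflow): the model handles extraction and drafting, but nothing gets filed until a person with the right expertise signs off.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around whichever LLM API or model you actually use."""
    raise NotImplementedError("wire this to your model provider")

def extract_terms(contract_text: str) -> str:
    # The AI does the tedious first pass: pull out the clauses a client should worry about.
    return call_llm(
        "List the indemnification, non-compete and non-disclosure terms "
        "in this contract, quoting the relevant sections:\n\n" + contract_text
    )

def draft_brief(precedents: list[str]) -> str:
    # Flip the failed-attorney example around: the human supplies verified precedents,
    # the model only helps with the drafting.
    return call_llm("Help me draft a brief based on these verified precedents:\n"
                    + "\n".join(precedents))

def human_approved(draft: str) -> bool:
    # Placeholder for the step that actually matters: a subject matter expert reads it.
    answer = input("Reviewed the draft and confirmed it is accurate? [y/N] ")
    return answer.strip().lower() == "y"

def produce_document(precedents: list[str]) -> str:
    draft = draft_brief(precedents)
    if not human_approved(draft):
        raise ValueError("Draft rejected by reviewer; do not file it.")
    return draft
```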
Speaker 2 (35:22):
Yeah, and you know, to your point, we've kind of seen this, right? When, you know, classifying tickets, bucketing tickets, right, those are things that are not easy for someone who's not technical, right, but someone who's, like, at level two, sure, it's pretty simple for them. But do you really want them spending the time on classifying tickets or bucketing tickets? You have to know, right? So if you could train the AI model, you can get that done
(35:47):
accurately, way more accurately than a human being. And by the way, the AI will do it every single time.
Speaker 1 (35:56):
So it's funny. I think that's interesting and I think you touched on a good point there. And again, I know not everyone in this webinar is in IT support or in an MSP, but it's something we know well, so it's a good conversation, a good talking point. A lot of companies that I've seen, us included, gave this task to someone non-technical because it seems like a low value task,
(36:17):
and maybe it is. But it's critical. I think classifying and bucketing tickets, again, in IT support, is crucial to the success of your support delivery organization. But I'll be honest, I don't want to pay someone $100,000 a year to read every ticket and say this is a Citrix issue, this is a password reset, this is a VPN. There's got to be a much better use of that time.
(36:38):
So what do we choose to do, to our own undoing sometimes? We give that to someone non-technical. The problem is they're not qualified to do that. So really, those are the areas that, if you think you can train some automation system to do that, you absolutely should, and focus on the areas that have high value to your organization.
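One lightweight way to sketch that kind of ticket bucketing is a plain scikit-learn text classifier trained on a handful of historical tickets; the example tickets and labels below are made up, and a production system would obviously need far more data and ongoing review.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up historical tickets with the categories a level-two tech would assign.
tickets = [
    "User cannot log into Citrix after the weekend reboot",
    "Please reset the password for jsmith, account locked",
    "VPN drops every ten minutes when working from home",
    "Citrix session freezes when opening the EHR application",
    "Forgot password again, needs reset before 9am meeting",
    "Cannot connect to VPN from hotel wifi",
]
labels = ["citrix", "password reset", "vpn", "citrix", "password reset", "vpn"]

# TF-IDF features plus logistic regression: enough to show the shape of the task.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(tickets, labels)

new_ticket = "New hire locked out, needs her password reset"
print(classifier.predict([new_ticket])[0])  # bucket the ticket before a human ever reads it
```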
Speaker 2 (36:59):
So, you know, generative AI has been obviously pretty big now for the last 18 months. You know, OpenAI gave some great exposure to this technology. Is there anything companies should be thinking about differently when deploying AI solutions, particularly generative AI, compared to traditional products and
(37:22):
software solutions?
Speaker 1 (37:23):
So I'm going to answer that question. I'm going to interpret that question my own way and answer it in a couple of different ways, so kind of like you used to do when you were in college, when you didn't know the answer to a test and you basically wrote your own question and re-answered it.
Speaker 2 (37:35):
I'm going to do that a little bit. What's that? I said, I did do that once.
Speaker 1 (37:43):
But I actually know the answer to this. But I'm going to answer your question, and I'm going to answer it in a couple of different ways. So you mentioned, obviously, that last year there was an explosion in this technology and a lot of excitement around what OpenAI was doing, and that's great, and I think 2023 will go down as the year of the awakening of AI, specifically generative AI, and where it really went mainstream. It got consumerized, it got easily consumable by users and
(38:03):
businesses and the like. I think the theme of 2024 is going to be guardrails. It is going to be exactly what we talked about, where understanding these outputs and making sure that they are grounded in truth and grounded in facts and that there is governance on top of them.
(38:24):
So, as you move forward this year, that is sort of the mantra I want you to keep in the back of your head: governance, control and guardrails on the data that you are letting into and out of your organization. So I think that is crucial to the success of these. Again, we're risking things like data breaches. We're risking inefficient workers, like bad information
(38:44):
making its way into your organization. People actually see now that that's possible, thank goodness. I mean, it took a year, but people are starting to see that that's possible, and I think guardrails are critical. And I think, again, as you go to vendors and talk about solutions, whether it's us or anyone else, like, I don't care if you've ever talked to us from a business standpoint, just call me to BS or just ask for some advice. I'm happy to talk about it.
(39:05):
When you go to your vendor and they talk about AI, ask them what they're using, ask them what guardrails are in place, what controls, what governance, how are they making sure and vetting the data? That's the first piece.
The other piece is more of an answer to your question, which is around rolling these systems out. And there's this gentleman who I think is one of the foremost,
(39:26):
in my mind, voices in generative AI. He's the dean of students at the NYU Stern Business School. His name's Conor Grennan, I believe it is, and he talks all the time about AI, specifically around generative AI, and he makes this really good analogy around deploying these systems inside your organization, and I'm going to steal it from him because I find it fascinating and I think it's something that
(39:47):
you should all be thinking about. He calls it the treadmill effect. So traditionally, when you deploy software inside your organization, you do a bunch of things. You make a bunch of wacky videos that say here's how you use the system, and you shut off access to the old system, and you do training and you give a bunch of metrics and say you need to do this, this and this, and there's this sort of
(40:08):
methodology for training people on how to use this new software and getting it deployed, and they, quite frankly, have no choice, because that's how they do their business. That is not what AI is in any way, shape or form. AI is a change of mindset and a change of behavior. And that's, again, think about a treadmill for a minute, right?
(40:28):
So let's say I want to lose weight and I think the way I want to do it is by running on a treadmill. Right, and I probably should more than I do, and that's neither here nor there. I know how to run on a treadmill. I don't need to watch a video, no one needs to train me, no one needs to teach me how to use it. I also can run on a treadmill for five minutes. I'm not going to lose any weight running for five minutes, and I'm going to stop after five minutes because it is mind-numbingly boring and I just
(40:51):
have not changed my mindset to make this a critical part of my day-to-day routine, my day-to-day activity. That is the way you need to think about AI software. It is a behavioral change for your organization. They have to start doing things differently. It's not just a different system, right? If you're using Microsoft Dynamics and you move to Salesforce, there's a different UI and you need to learn how it works and there's a bunch of things you need to go through
(41:12):
and understand, how workflows work and all that fun stuff. But at the end of the day, it's a CRM platform and you shut off the old one and you use the new one, you watch a video and you're good to go. If you put generative AI or any AI platform in front of your users and you don't make them change behavior, through things like shutting off access to legacy systems, requiring
(41:34):
searches or generations or whatever the case may be, you are never gonna get adoption. Humans are inherently lazy. That's just all there is to it. It's the reason I don't wake up in the morning and go to the gym every day. It's the reason I don't run on the treadmill for an hour. I want immediate results. I want immediate feedback. I am a lazy animal. If I don't change my behavior, I am never going to get the
(41:56):
results from that opportunity, which, in this case, is AI or, in my example, is exercise.
Speaker 2 (42:04):
Boy, I thought I was the only one. I'm glad to hear there's someone else who makes excuses in the morning not to exercise. We've also seen, in the last probably six to nine months, or maybe a little bit longer, some of these Fortune 500 companies put in these AI use policies, right? So they've blocked things like OpenAI because they were afraid
(42:29):
of, you know, some, you know, A, backlash, B, they can be liable for things that it's returning, you know, for several different reasons, and it may not be good for them. You know, they could leak customer data, all these types of things, right? So they put in all these AI use policies. Should every company create an internal AI use policy for their
(42:52):
employees to abide by?
Speaker 1 (42:54):
I think it's crucial. I think internal and external, quite frankly. So I think it's crucial that you put policies in place in your organization, just like you do with everything else, right? We have acceptable use policies for email. We have acceptable use policies for Internet access. Right, like when we had our company, when we were all under the same roof, we had to block streaming services because we had employees that were streaming the World Cup.
(43:15):
We had customers fire us because we had employees checking fantasy football scores on site at their offices. So we have to put these policies in place, and I think it's equally important with AI, again, both internally and externally. So you need to put these policies in place to say what people are allowed to do, how they're allowed to do it, what
(43:37):
platforms they're allowed to use, all of that. And I'm not a lawyer, I'm not giving legal advice. There are people out there that know a lot about this stuff. There are some templates available. You know, I'm a part of CompTIA's AI Advisory Council. We're putting some prescriptive guidance together for people. So you definitely should be doing that, but, at
(43:57):
the same token, you should be relying on that from your vendors as well. So if you have a vendor that has AI baked into their solution, you need their policies and their procedures as well. Like, the last thing you want to do is be liable for something that your system spits out because of a platform you bought from a third party vendor. When I do these live presentations where
(44:20):
we actually have slides, I talk about this really funny example. So there's this product called AI Lede, L-E-D-E, and the lede, for those of you that don't know journalism, is like the first line of a story. So if someone says don't bury the lede, it means put in the first line of the story what actually happened, as opposed to making someone dig into it. So this platform, AI Lede, is for small businesses, small
(44:42):
newspapers rather, that want to report news stories but don't have enough people, enough journalists, to write up the news story. So basically, you feed it a bunch of details and it spits out a story. It's interesting technology. It's pretty cool the way it works. Someone found an example, it wasn't me, I wish I could take credit for it, where this software, or this newspaper in Ohio somewhere, I think in Dayton, Ohio, went and reported on
(45:06):
a football game between two Ohio high schools, and the headline was something along the lines of, it was an athletic competition of the, a close encounter of the athletic kind. So basically, it was using generative AI, and it got too much Star Wars in its training, so it called this game between two high schools a close encounter of the athletic kind.
(45:28):
But what was funny about it was the person that originally found this went and Googled that phrase and found it in hundreds of other articles on small local newspapers writing about high school and college sports. So my point is, if you buy this product from someone, do you really want to have it generate the same headline for you that everyone else does, especially one that sounds as
(45:49):
ridiculous as that? Let's look at the MSP space, for example. Right, and I don't know of anyone that's doing this, so I can say this freely. If you go to work with a marketing vendor that specializes in MSPs and you say, help me write content for my website, and they're fully upfront and disclose to you that they use AI to write their content, well, what guarantees do
(46:11):
you have that the same content doesn't get repurposed and refed out into a bunch of different places, a bunch of different MSPs, especially if they're giving you the platform and not giving you the person? So I think that's important from policies and procedures. I've rambled off of the question now and I apologize for that, but yeah, it is certainly important to put that stuff into place. I think it's one of the first things you should be doing.
I think it's one of the firstthings you should be doing.
Speaker 2 (46:32):
So, talking about
expanding on a question, this
next one we can talk about forthe full hour if we wanted to,
but we probably only have aboutfive to seven minutes.
So it's real important.
You know, last year 2023, wesaw generative AI, you know,
really explode onto the sceneright with the commercialization
(46:52):
of open AI.
What do you see in 2024?
Speaker 1 (46:57):
Yeah. So I mean, I talked a bunch about this already. I think 2024 is a couple of things. The first one I'll reiterate is that it's the year of the guardrail, so very critical that governance starts to become an overlay on all of this. I would not do anything AI-based without putting some sort of governance in charge of it. The other piece I would like to talk about, or I would mention, is that I think 2024 is the year that AI becomes multimodal,
(47:18):
and what I mean by that is it's got to become a seamless transition between different modalities of communication. Whether it is, I ask you a question, you generate text. Well, what I would like you to do when I ask you a question is to generate a text response, but also maybe build a diagram with an image or some sort of an animation that shows how something works, right?
(47:39):
So if I have an AI model around auto repair and I have to replace a carburetor on my '67 Corvette, I may want an animated video of how that looks. I may want images of it. I just don't want line-by-line instructions, so to speak. So I really think that 2024 is the year that data becomes
(48:00):
multimodal and critical, and I think that a lot of the companies we see are starting to do that, right. So what OpenAI is doing with their Sora piece, where you can just give it a one-sentence explanation and it comes out with a one-minute video, that's cool, but I think it just needs to get a little bit more seamless. And again, like I said, AI governance is crucial.
(48:21):
All you need to do is look at what governments, rather, sorry, and legal bodies, legislative bodies, are doing to control this. So they're looking at everything from copyright. OpenAI is being sued by everyone from the New York Times to Sarah Silverman, because they train their model
(48:42):
on what those people consider copyrighted data. So really, transparency on the content of these models, governance on what gets output, and multimodal, where you can seamlessly jump back and forth between everything from text to speech, to video, and so on and so forth. Yeah.
Speaker 2 (49:00):
You know, it's funny. Today the governor of Tennessee signed into law how he's protecting musicians, right? No surprise, Tennessee. Protecting musicians from AI, right, and how their intellectual property is to be, you know, not absorbed by AI,
(49:21):
making it, you know, impossible for them to make future royalties and everything else. So you see this more and more, you know, from a local government. We've seen it in the EU, which is a little bit further along with AI governance, right. And, you know, people just don't want to be held liable for that black box, potentially.
(49:41):
They want to make sure that they are in complete compliance, and when you're kind of buying these platforms, databases, services, Amazon, whatever, you don't want to have to worry about the AI governance.
Speaker 1 (49:59):
And that's what we're kind of in the wild, wild west of. Fundamentally, that needs to be baked into the product, and the only way that's going to happen is if everyone starts pushing back on these vendors to ask questions about governance and how they are leveraging it. I do want to ask a question.
Speaker 2 (50:14):
Last thing: it sounds very similar to, like, you know, cybersecurity, where now you have to send out surveys to your vendors about cybersecurity. I'm sure the same thing is going to be coming down.
Speaker 1 (50:24):
Yeah. So I like to say, we joke about this, because back in the day when cyber breaches started to become a reality and these insurance companies started underwriting cybersecurity policies, you and I used to joke that these insurance companies had no idea what they were underwriting, what they were insuring, and, sure enough, they all got slaughtered on paying out cyber liability claims. I can assure you that's not going to happen again.
(50:46):
Right, they are putting these controls into your policies. They're going to ask questions about generative AI. They're going to ask what models you use, so these companies don't get raked across the coals again on claims, whether it's copyright infringement, to
(51:06):
liability claims, to whatever comes out of generative AI. But along the same lines, I know we have some other questions and we're wrapping up, but there's a Kushi, I hope I got that name right, I apologize if I did not, who asked a question in chat that I think is interesting. It's a point I want to touch on, and the question is, what potential societal impacts do you foresee as LLMs become more
(51:27):
ubiquitous in various industries and applications? And the first thing that jumps to mind for me, which I find very interesting, is that, as we sit here today, I think the estimate is something like 25% of all the content in the world was created by generative AI. Two years ago, that would have been less than 1%.
(51:48):
I forget the statistic I saw for when it's going to be like 50% or 75% or whatever. But the bottom line is, so much of the content we see today is created by generative AI. I see it every day in my life. I scroll my LinkedIn feed and, very obviously, you can tell when an image is generated by, you know, DeepMind or any sort of generative image creation.
(52:12):
If you start typing a post in LinkedIn, it asks you if you want generative AI to clean it up or to create it for you. Like, everything you do, this stuff is baked into, and what's happening is it's becoming a loop now, where we are using generative AI to train large language models, and that is a very dangerous precedent. I'm going to tell you why: because everything starts to
(52:35):
sort of flatline when that happens. One of the values of training these large language models on actual human generated content is the ebb and flow of it, the positives, the negatives, the intricacies, the weirdness. Like, all the outliers are actually really valuable for the large
(52:55):
language model, believe it or not. So when we start to flatline this, it kind of starts out here and sort of streamlines, and now everything is one flat line, and that concerns me significantly around the future of what these models look like. I think what will happen is that these models will start to essentially go
(53:15):
off the rails and become somewhat useless, and we'll have to take a few steps back and rethink them, which is why I think, personally, that these smaller models will be much more valuable in the long run than these huge large language models, because they will be custom built for specific tasks. I'll give you a great example. This week, Anthropic, who's one of the other really big players in the space,
(53:36):
announced that their Claude 3 model actually beat OpenAI's GPT-4 in a bunch of benchmarks. All right, great. They're both trillions of parameters. Who cares? What was more interesting, though, is that their smallest model, which is called something with a Q, I forget what it's called, their smallest model was outperforming their largest model in specific tasks.
(53:56):
My point is, and again, that circles back to what we said earlier, I think these smaller models that are custom built, with a lot fewer parameters, that aren't retrained on the content they generate, will start to become much more valuable, and until we rein those in and start to leverage those, we're going down a bit of a dangerous path.
Hope you enjoyed that conversation.
(54:17):
Thank you so much for listening and tuning in, as always. Check us out at crushbank.com, follow us on LinkedIn at CrushBank, and keep it tuned here for future episodes of our podcast. Thanks again. This has been the CrushBank AI for MSPs podcast. Thank you.