Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
Jimmy, welcome to the Evolved Radio podcast. Thanks
for having me on. So happy to be here. Appreciate it. Awesome.
So this is gonna be great. I think we'll just sort of jump right
into it. I think an interesting place to start is just
a reflection on the sort of modern version of
LLMs and AI. I mean AI has kind of existed forever in a
(00:24):
lot of ways like I remember very early on my computer
days, the SoundBlaster sound card that I got came with, like,
this parrot AI, or, like, a psychologist that you could talk to, and it would repeat your
questions and ask you how you feel about that. Sort of like what I imagine
were the very first iterations of this. But, like, the very
(00:45):
modern version of the LLMs that are really revolutionizing
everything right now have only kind of existed for about
2 years which I find kind of mind blowing, because of all the
things they're impacting. So, any thoughts on sort of,
like, this sort of first two-year stage and
how things are sort of progressing so quickly, just to open
(01:06):
up? Oh, I remember the first time I used
ChatGPT, it was, like, using the Internet for the first time or high
speed Internet for the first time or something like that. Like, it was just,
oh, this is gonna change everything we do.
And sitting back and watching it like magic, like, I couldn't believe that
it was doing what it was doing, and it could talk like a person and
(01:28):
it could answer things really quickly.
And, you know, seeing that definitely opened my eyes to
the future. And here I am, you know, less than 2 years
later starting a company, centered around the
thesis that, like, this is gonna change all areas of business,
and people need help to do it. Mhmm. But I I mean, I can't
(01:51):
believe that it's only been around this long either. Like,
it's funny working in AI. I've, you know,
publicly had this AI company for about a month now, and you
go to technically implement something, and by the time you're
done, some new version of it is out, and had you started it a
month later, you would have done it a different way. And that's how fast things
(02:12):
are moving in this space, and it's just incredible. I'm so grateful to
be a part of it because it's so fun. Yeah. It is wild how quickly
things are changing. I did wanna ask you, like, what
was the sort of the moment in time? Like, do you remember sort of a
particular point or moment or evening where you're like, you know
what? I'm gonna start my own AI company. What what was that moment? Do you
(02:33):
remember? You know, I don't know that
there's a particular moment, but there is
definitely a number of
interactions that I could remember where I would show AI to
people and I would see their reactions.
So for example, I, you know, I worked in cybersecurity and I did a lot
(02:56):
of, demos of using AI to hack
people. And the laughs that I would get, the
inquisitiveness that I would get, the questions that I would get, people stopping me afterwards.
It then became a hobby, just showing my friends. I guess one
moment where it was like, yes, I'm definitely gonna start a company on this was
(03:18):
I built a proof of concept, and I had my father-in-law test it. My father-in-law
is in his seventies, and he is the
executive secretary for his mosque. And he was doing a fundraising campaign and
there was a, a group of people who raised their hands saying that they would
donate to something, but he had to, you know, send follow-up communication to
actually get them to mail the check or whatever it was. And I
(03:40):
gave him a proof of concept of early Hats, before it was
Hats. And, you know, I told him, why don't you try
using it on this? And then he called me 2 days later and said,
Jimmy, Jimmy, Jimmy, what's that website that you
gave me, that AI of yours? I used it yesterday,
and it wrote me this really great email. I sent it out to everyone.
(04:03):
Yeah. I wanna use it again. I wanna give it to everyone else on the
team here. And I was like, woah. This is a
man who doesn't, like, use 2 computer screens, and he was able
(04:23):
to use this thing in, like, a number of minutes and put it immediately into
production. Like, I saw the life-changing way it
completely changed, you know, his nonprofit, like,
the business. And that was, like, okay. This is gonna
have really lasting change. And I think that was
definitely a big catalyst for me to take the jump and and go full time
(04:46):
at it. Okay. Very cool. Yeah. I mean, just quickly on
that. Like, it is wild, sort of, the adoption of
this. Like, I can't remember the stat. You may know this, but, like,
ChatGPT's, like, acceleration to, what was it, a
million users or something like that is sort of this benchmark that they've used for,
like, adoption. You know, Facebook, Instagram were all these sort of
(05:08):
meteoric rises and then absolutely eclipsed
by ChatGPT adoption. Do you remember that stat? Yeah.
It was, I think it was a million users in 2 months or something like
that, and then a 100 million users soon afterwards.
So, I mean, they did it, like,
an order of magnitude faster
(05:29):
each time that happened. And it was, like,
2 months or something where the previous was, like, you know, 12 months.
Yeah. Yeah. It's so wild. So I guess that leads
well to sort of the other thing I wanted to sort of dig into
here, which is, like, LLMs are all over the place, right? Like, you
know obviously everyone knows ChatGPT, Microsoft
(05:50):
Copilot is now kind of, like, rising up. And now,
you know, Google was caught a bit flat footed, largely
I understand, because of sort of their concern around safety and rollout.
So it was like these tools existed. They just hadn't made them public. Some people
argue as to whether or not they felt it might cannibalize the search business. I
think there's probably some validity to that. So they've now got Gemini,
(06:13):
rolling out. And then there's, you know, Meta's developing their own
internal one. There's even lots of open-source LLMs.
So I understand you are sort of, like, building kinda your own model,
and I'm interested in that. Like, why your own model versus just
sort of leveraging off the existing LLMs that are out there?
Yeah. No. That's a great question. Actually, what we're doing is we're
(06:36):
empowering users to tune or customize their
own models. So we aren't necessarily interested
in creating our own model. Maybe we will in the future.
But we aren't beholden to one model. So every
model that you've mentioned, Gemini, Llama 2, which
is Facebook's open source model,
(07:00):
GPT-3.5, GPT-4, even Claude,
by Anthropic, we're using internally.
And so there's different ways that you can customize
each of these models to do what you need them to do,
whether that be through context or through RAG or actually tuning the models.
(07:20):
And we're in the business of helping small businesses,
do that through their MSP. I see. Okay.
So, yeah, I misunderstood this. It's not that you built your own model.
It's kinda like a department store, maybe,
for different models for different purposes. Is that a good analogy?
Sort of. It's like,
(07:44):
there's a lot of work that goes into the customization,
rollout, and access to data of actually
using a model at scale inside your own business or
efficiently, whether that be business-process efficiency
or cost efficiency or both, and we're a platform that helps
you manage all of that end to end. Okay.
(08:06):
With that, like, maybe getting into some of the technical details, but, like,
if you do training on one of your models and you're doing something in sort
of this space, but you're using a different model for something else, are you able
to sort of port some of that over through your guys' platform?
Is that some of the benefit there? Yeah. So you can do that in
a couple different ways. You can change
(08:28):
the context. So, let's say you write a really good prompt, or
you have an application that's built around specific prompts that
you've built inside of our UI. So in our system, we
basically have a UI for prompt engineering for MSPs, and they can
publish apps for their end users. You can switch out the underlying
models very easily in doing it that
(08:50):
way. The only limitation is the context window. So
for example, like, Claude has a 100k
token context window. A token's like a syllable, so you can think of it as,
you know, whatever, 25,000 words or something,
like, a very, very long context window, where GPT-3.5,
you know, a lot of those were 16
(09:13):
k or 32k
tokens. So, you know, one-fourth the size. So that would be one
limitation of it. Another thing that we're doing in the future
is retrieval. So it's RAG,
where basically the LLM can look inside of a database
or some sort of dataset and then spit out factual information,
(09:37):
or reference that information. Those
are interchangeable as well, where you just have to have the vector database
configured properly. The instance where you can't
is actually tuning. So the difference between
training and tuning a model, basically,
(09:57):
OpenAI and, you know, Meta, these developers of these
foundational large language models, they get all this training data
in and they, you know, publish the model. And then afterwards, when you
feed it additional information that's generally called tuning, that's
very expensive to do and expensive to run, because you're not using
the foundational model anymore that could be on a shared server or shared
(10:19):
resource or something like that or even if it's read only.
So that gets, you know, you're paying for
GPU, like, processing power to actually tune
it, and then you're paying for access to it as well, whether that be through
our platform or another. So, in those three
(10:40):
scenarios, outside of tuning, you can generally switch the
underlying foundational model. And that's very important
because, say, you are an
MSP and you do tons and tons of work
creating an amazing dataset on how to run an MSP business,
or perhaps one of your customers is in the
(11:02):
widget factory business, and they come up with a huge,
phenomenal dataset or a set of prompts or whatever it is
on how to, you know, build widgets in the widget
factory. When a new foundational model comes out, if you
invested everything in tuning, you have to pay all of that
money again to retune the dataset. So, you know, say
(11:25):
you're doing it on Llama 2, then Llama 3 comes out, Llama 4. Whereas
with RAG or with, you know, cleverly using context
windows, you don't have to do that. So there's a
lot of trade-offs between speed, efficiency,
cost, complexity, future-proofing, and just using
LLMs internally.
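A minimal sketch of the portability Jimmy describes here: when prompts live outside the model, swapping the underlying foundational model is mostly a routing change, with the context window as the main constraint. The call_model dispatcher, the model names, and the rough token limits are illustrative assumptions, not hats.ai's actual API.

```python
# Rough context limits in tokens (the one real constraint on swapping models).
CONTEXT_LIMITS = {
    "gpt-3.5-turbo-16k": 16_000,
    "claude-2": 100_000,
    "llama-2-70b": 4_096,
}

def call_model(model_name: str, prompt: str) -> str:
    """Hypothetical dispatcher to whichever provider hosts the model."""
    raise NotImplementedError("wire up the OpenAI / Anthropic / Llama client here")

def run_prompt(prompt: str, model_name: str) -> str:
    est_tokens = len(prompt) // 4  # very rough heuristic: ~4 characters per token
    if est_tokens > CONTEXT_LIMITS[model_name]:
        raise ValueError(f"{model_name}: prompt likely exceeds context window")
    return call_model(model_name, prompt)

# The same carefully engineered prompt can be pointed at a new
# foundational model when one ships, with no retuning cost.
```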
(11:56):
Just for clarification, you mentioned RAG. I'm not familiar with the sort of
the terminology there. What does that refer to? It
stands for retrieval-augmented generation.
So you can use GPT-4 with RAG, and that is a
(12:18):
version of the model that's able to look inside of a database.
So, say you had all of
your SOPs stored in a database somewhere,
GPT-4 with RAG could go and look in there. So, say
you ask, hey, how do I
restart this data server or whatever?
(12:41):
Without that, it could just make something up based on its foundational knowledge and
and, you know, probably 80% of the time it would be right
because it's pretty good. Like, GPT-4 is pretty good. But with
RAG, it could actually look inside of a database and bring
back a PDF user manual and say based on, you
know, page 6 of, you know, this latest user
(13:03):
manual,
Yes. Depending on, yes, exactly. So
that's, like, and you get,
like, say you could get, like, 80% accuracy on complex tasks with just foundational
models, like, they're pretty good.
(13:32):
You'd get better results with RAG, and similar results for tuning,
for tuned models, maybe even worse results in some cases. Some of the studies
I'm seeing show that you're actually getting better results from RAG because, you know, it
can be more factual. So, you know,
like, there's technical trade-offs between them all.
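To make the RAG idea concrete, here is a minimal sketch: embed the question, rank stored documents by similarity (the vector-database role), and stuff the best matches into the prompt so the answer can cite a source. The embed and complete callables are hypothetical stand-ins for any embedding and LLM provider.

```python
import math

def cosine(a, b):
    """Similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def rag_answer(question, docs, embed, complete, k=2):
    # Rank the stored SOPs/manuals against the question (the vector DB's job)
    q_vec = embed(question)
    ranked = sorted(docs, key=lambda d: cosine(embed(d["text"]), q_vec), reverse=True)
    sources = "\n---\n".join(d["text"] for d in ranked[:k])
    prompt = (
        "Answer using ONLY the sources below, and say which one you used.\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )
    return complete(prompt)
```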
Okay. That's cool. I appreciate the background on that. It gets into some of the use case context
(13:53):
that I wanna get into as well. I do wanna call out, like, I
have this conversation with anyone whenever I'm talking about AI, because to
me, it's sort of like what I find the most fascinating
about this technology is like I kind of relate it
to like, it's a bit of a parlor trick. Like if you really
understand how it works like once I started digging into this and understanding the
(14:14):
technology, I was really blown away by sort of
how it actually works under the covers. So, I know I'm not
exactly an AI expert so I'll go through this and then you kinda correct me
if I get anything wrong here. But, I had my brother reach out to me
and he's like, hey. Do do you know much about sort of chat GPT and
all this stuff coming out? And I was like, yeah. Like, it's pretty amazing.
(14:34):
I can't remember how we got into this, but I, I
guess, sort of started telling him, like, this is kinda how it works. Like, it's
actually more fascinating. The way I describe this is, like, usually if you find how
a magic trick works, it kinda takes away the magic. You're like, oh, well, okay.
Well, it doesn't feel quite so special. Like, I don't wanna know how the
magic trick is done. I enjoy the magic. And this is sort of this exception
to that rule, in my mind, of, like, what ChatGPT and LLMs do
(14:58):
is amazing. Like, it's magical in a lot of ways. But it's more magical if
you understand how it actually works in that it's not smart at
all. It's purely a prediction engine, and it's really, really
good at just sort of guessing what's next in a sequence based on
sort of those tokens of, like, word groups and stuff. Right? So it's like
this new numerical valuation that it sort of figures out on
(15:20):
the fly. Right? And I've done some prompts and stuff like that where I've
actually sort of broken it open, and it gives you the VBScript window
where it starts actually generating. Like, it's not just the typewriter text where
it actually starts, like, filling in text and replacing text and going
all crazy as it sort of does this sort of multi-line
prediction. So I find this really mind blowing if you understand like
(15:41):
it doesn't understand what it's actually replying in a lot of ways, like,
the contextual awareness. It's really just sort of number sequencing
based on these groups or, like, numbers assigned to
words, which to me, like I said, is actually kinda more
incredible, that it's actually able to do what it does without being
intelligent at all. Right? Like, do I mostly understand that
(16:04):
correctly? And are you as fascinated by how that works as me?
No, no, you do. You are. And so it's
even like, so ChatGPT is a chat implementation of
text completion. So what
it actually is, is every time there's a
question response, it's just a longer set
(16:26):
of text completion being sent in. So there's a system
prompt for ChatGPT, and it literally
says, you are ChatGPT. You can do,
you know, these different things. You should answer users.
The questions and answers follow this format, and then it's
system, or it's user, and then it's Chat
(16:48):
GPT colon. Right? And then the user
sends the first response, and it says user colon.
And then it leaves a
blank spot for ChatGPT colon, and then ChatGPT fills in that text.
And then it just keeps sending the same thing back over and over and over
again. So, like, you're not, like, people think of it
(17:10):
as, like, oh, I'm sending off something, and then it's, you know, listening to
everything I'm doing and blah blah blah and sending it back. It's really just, you
know, one, it's like a text file that just keeps getting a little bigger, where
it's like user, response, user, response. Which, I
mean, once I saw that I was like, oh, really? Yeah. Yeah.
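A toy sketch of the "growing text file" Jimmy describes: the whole conversation is re-serialized into one completion prompt each turn, ending with a blank slot for the assistant to fill. The format below is illustrative, not OpenAI's exact system prompt.

```python
SYSTEM_PROMPT = "You are a helpful assistant. Answer the user's questions."

def build_prompt(history):
    """Serialize the whole conversation into one text-completion prompt."""
    lines = [SYSTEM_PROMPT]
    for role, text in history:  # e.g. ("user", "hi") or ("assistant", "hello")
        lines.append(f"{role}: {text}")
    lines.append("assistant:")  # the blank spot the model fills in
    return "\n".join(lines)

history = [("user", "How do I restart the data server?")]
print(build_prompt(history))
# After each reply, the (user, assistant) pair is appended to history and
# the whole, slightly longer text is sent again. The model never "remembers"
# anything; it just re-reads the file.
```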
It's wild. Like, like I said, it's just crazy how it works under
(17:32):
the covers. So, yeah, getting into the
nerdy weeds, but, you know, we're a technical group that generally listens to this
podcast, so I'm sure people will will also find this
somewhat fascinating. We'll switch to
a bit on, I guess, the practical use cases. Right? So, like,
you're building this specifically for MSPs. And, again,
(17:54):
like, I'd love you to just sort of expand on that. Like,
why a model or this platform
for MSPs in particular? What did you envision as being possible
with that? So it's actually for
MSPs to get in the AI business. So it's for MSPs to bring to their
customers and naturally use it themselves inside their own
(18:16):
business. The reason for that is I've seen
different megatrends in the past. You look at the move to cloud,
which in a lot of cases increased the revenue per
seat, not necessarily the total revenue, but the revenue
per managed user or managed device, by about
50% in additional revenue, through
(18:38):
that transition. And in many cases, a move to a managed
billing model or the introduction of recurring revenue.
And it took a while to get there,
because, you know, talking to someone, hey, we're gonna move
your Exchange server out of the closet into Office
365, like, it wasn't the easiest conversation to have. There's a lot
(19:01):
of businesses who were reluctant about this move from a capital
expenditure model to an operational expenditure model.
But, you know, MSPs were the only group of people capable of handling
that transition for small businesses, while big enterprises, you know,
hired large IT teams to do it internally.
Similar thing happened in cybersecurity, but it happened a little bit faster.
(19:24):
So, many MSPs again increased their per-seat
revenue by about 50%,
over maybe 5, 10 years, with the
introduction of more cybersecurity services. And I'm talking about
changing from, I just offer, you know, a
Webroot or, you know, Sophos, like, I just offer one antivirus
(19:47):
as part of your managed package in addition to your RMM, to, you know,
you're getting email protection, you're getting endpoint protection, you're getting SOC
services, MFA, like, the whole suite of it, all the
tools and services that MSPs have been adding.
Cybersecurity, hard sell. Very difficult. I've been in the
cybersecurity sales training business through, you know, working at Scout for a
(20:10):
while where, you know, you can make great cybersecurity products for small
businesses for MSPs to deliver, but you still have to help the MSPs with their
biggest problem, which is actually convincing people that they actually
need the damn thing. And AI
is just different. So it happened,
cybersecurity happened faster than the movement to cloud, and I think AI is
(20:32):
gonna be a similar scenario where small businesses are gonna need help.
They're all gonna need to integrate AI into their business, whether it's
through, you know, what we see today with interactions with large
language models and those use cases, which, you know, you might have been asking
about, like, writing job descriptions, doing SOPs,
documenting things, summarizing conversations, helping with customer service
(20:54):
workflows, lots of text-heavy tasks.
But I think it's gonna happen way faster because AI is show,
don't tell. And it's an operational
10x-er versus
a cost center. It's still, you know, it's still a cost, but it's
(21:14):
it's much easier. Hey, you know, this thing that's been taking
your employees, whatever, 3 days to
do? Here, with AI, they can do it in 20 minutes. Like,
who's gonna say no to that? So I think the explosion
with MSPs is gonna happen way faster than anyone's ready for.
That's interesting. So, like, I guess it's sort of dual purpose. Like, there's definitely some
(21:38):
things you can do to leverage the models
internally, like you said, like finding relevant SOPs for a particular issue, kind
of, you know, copilot-for-MSP
type of thing. But also, you know, how is that
being leveraged and utilized for the client base? Right? So
I I like, I had a conversation with someone recently on the podcast
(22:01):
and it had occurred to me, like, I was surprised
I hadn't thought of this earlier, that I think what you're sort of starting to
describe here is, like, the consulting opportunity around how you actually
roll out, implement, and leverage AI in in,
as an MSP in your client businesses is gonna become very
relevant very fast. Right?
(22:23):
Well, I think right now, if a small business needs help writing a prompt,
who can they go to for help? Yeah. Yeah. That's a good point.
I mean, MSPs are gonna get those questions eventually.
And I would bet your average level 1 technician
is better at writing a prompt than the average CFO of
a, you know, $10,000,000, whatever,
(22:46):
$5,000,000-a-year small business. Probably a safe
bet. Yep. But what about, like, you know,
them leveraging AI, I suppose, in their workflow,
even some training around like Copilot, like you're rolling out 365,
you know, hey, we're gonna add Copilot for you guys and, you know, some
training around how to utilize that. I think those are, you know, some
(23:08):
interesting use cases, very valuable to the client as well.
What about sort of, like, more complex integrations of
language models in, you know, workflows, right, like interactions with
their clients or, you know, improvements in workflows internally?
What about some of those more complex use cases? Have you put some
thought towards what those would look like, potentially? Yeah.
(23:31):
Yeah. So for what
we're building at Hats, and what we're releasing, I don't know, it might be out
at the time of the release of this episode, is
part of this. Just quickly on that, what's your release date?
March 1st is when our first product is going out. So
Perfect. Okay. So by the time this is live, this is very
(23:53):
likely to be live. So go right out and check it out. There you go. There
you go. Yeah. But what we have is an
AI app builder where you can do all the prompt
engineering work and build all the inputs into a dynamic
prompt, you know, an if-then type of thing, and then
publish that to the end users. The end users just upload a file, press
(24:15):
enter, or, you know, type in yes, no, maybe so.
So examples are marketing. Right? Marketing is an example because it's
synonymous with all small businesses, and small businesses generally struggle
with marketing. They don't have enough money to pay an outside firm, or they're paying
an outside firm that's not doing a great job.
Say, at the very least, every time you release a new
(24:37):
product, you wanna post about it on social media, and you don't have
anyone to write about it
on social media. So
your MSP, or someone internally, builds an app
that generates the social media posts, and it takes a specific input,
whether that's a PDF of the product documentation that you're releasing
(25:01):
or an email update or just a summary describing it. And
they add some additional system context about the business and who it is and the
tone that they like to use. And then the end user, all they're doing is
copy, paste, I want this to be specialized
for Facebook, for Instagram, for LinkedIn, I want it to be this
length, and press enter. Another use case
(25:23):
might be job descriptions. Right? That's a
one-to-many type scenario, an example where
an MSP could build an app
that generates a job description, where you input the title of the
job, the salary, whatever the expected inputs
are, and then it, you know, generates
(25:46):
the whole job description for them, and the MSP could populate that to
all of their different customers.
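A sketch of the one-to-many app-builder pattern described here: the MSP does the prompt engineering once, and end users only supply a few expected inputs. The field names and the complete callable are illustrative assumptions, not the actual product's schema.

```python
JOB_DESCRIPTION_APP = {
    "system": "You write clear job descriptions for small businesses.",
    "template": (
        "Write a full job description.\n"
        "Title: {title}\nSalary: {salary}\nKey duties: {duties}"
    ),
    "inputs": ["title", "salary", "duties"],
}

def run_app(app, complete, **user_inputs):
    """End users just fill in the expected inputs and press enter."""
    missing = [f for f in app["inputs"] if f not in user_inputs]
    if missing:
        raise ValueError(f"missing inputs: {missing}")
    prompt = app["template"].format(**user_inputs)
    return complete(app["system"], prompt)  # complete(system, prompt) is a stand-in

# The MSP publishes JOB_DESCRIPTION_APP once; every customer then reuses it:
# run_app(JOB_DESCRIPTION_APP, complete, title="Dispatcher", salary="$45k",
#         duties="Route service tickets")
```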
One interesting aspect of these models is, like, there's often people
talk about sort of future jobs being prompt engineering, and, like, maybe
that actually gets sort of, like, born out of this and
(26:08):
sort of becomes less prevalent than it currently is. But I don't think people really
understand how different and how much better
information you can get from these systems if you engineer the prompt
just right, like using sort of, like, language variables and things like
that. So, like, maybe just a quick bit on this,
like, kinda your perspective because, like, I think a lot of people understand this is
(26:30):
just, like, the ChatGPT interface. I ask some questions, I ask it to build me
a, you know, a job description and it comes back with
some pretty good stuff, like, better than what most people would write,
but the difference between that and having a really good
prompt that is designed in a way that actually outputs
something that is like a 100 times better than what it would just generically
(26:52):
spit out. I think it's something people don't quite understand. You wanna
expand on that? Yeah. So with
some of the newer models you can have a very large context window like I
said earlier. So you can provide 5 examples
of job descriptions and write-ups on them. You can
provide nuances and, you know, if it's like this, then do this,
(27:14):
and really spend time to make a very
good prompt. But the thing that's changing each
time is just, you know, the title and a couple words about the
summary of what the job is. So we're
actually releasing a course, and it should be
released, actually, as this is released. You can go to our
(27:36):
website, hats.ai, and get access to it,
on how to do basic prompt engineering and the basics around
it and how to do it well.
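As a sketch of the technique Jimmy outlines, a few-shot prompt packs several worked examples and the nuance rules into the large context window, so only the title and a one-line summary change per run. The examples and the rule below are placeholders.

```python
EXAMPLES = [  # in practice, five full, real job descriptions
    ("Level 1 Technician", "Handles tier-1 tickets, escalates per SOP, ..."),
    ("Office Manager", "Runs day-to-day operations, owns vendor relationships, ..."),
]

RULES = "If the role is technical, list required certifications."  # a nuance rule

def few_shot_prompt(title, summary):
    parts = ["Write job descriptions in the style of these examples.", RULES]
    for ex_title, ex_body in EXAMPLES:
        parts.append(f"Title: {ex_title}\nDescription: {ex_body}")
    # Only these two fields change between runs:
    parts.append(f"Title: {title}\nSummary: {summary}\nDescription:")
    return "\n\n".join(parts)
```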
Yeah. I mean, I think it's a skill that's extremely
relevant. You could think about how, when Google first came out, the people who could
get Google right away, a lot of those people own MSPs now.
(27:59):
Right? They could get the information they needed and other people would just type in
the wrong thing. So it's a similar skill.
I can relate to that actually because I used to work at an ISP way
back in the day, and we used to run these open houses where we, like,
sort of educated people on how to use the Internet and how to use search engines.
I used to challenge people in the room when I was doing these sessions of,
like, name something and I can, like, find relevant information
(28:22):
somewhere. And they're like, oh, okay, whatever. Like, how to build a box for
ferrets? And I'm like, alright, here you go. Like, it would spit it
up. Like, who's in the lead for this year's F1? And I'm like,
easy. Here you go. Right? And, like, people were kinda blown away by this because
they couldn't find that information. They'd really struggle. And it was sort
of, you're right, like, early prompt engineering is a good Google
search. Like, maybe it's, again, less prevalent now. But, you know, it's, I
(28:46):
think an interesting analogy. I guess, like, one of the other
sort of elephants around this is data security. Right? I think it's a really
important aspect of this, and I'm curious sort of how you guys are considering this
in the product that you're building, because, you know,
MSPs hold a lot of sensitive data. And,
you know, I guess another one of those things that people don't
(29:07):
quite understand about these models is, if you're using just, sort of, the
free version of ChatGPT, you're potentially providing information to
be fed back into the training model. Like,
this stuff is not as sort of private as maybe
people assume it would be. So, like, what are your thoughts about your product? How
you're building it for sensitivity and care around
(29:29):
potentially sensitive data that
MSPs will be holding and wanting to leverage in this, but are sort of
cautious around security implications? Yeah. I
think just with AI rollout as a whole, or the whole AI
transition, the biggest problem that we're seeing is
data readiness. So, for example, why not just turn
(29:52):
on, Copilot, right, on
an organization? Like, Microsoft makes you do a whole data readiness thing where
it's like, okay, this thing's gonna have access to all your PDFs, like, that
you have in these folders. Like, did you, you know... And
the concern is, right, like, I'm a marketing intern and I
say, how much budget should I allocate for,
(30:15):
Q2, for this program? And it says, well, the
CMO's salary is, you know, this much money,
based on this, you know, spreadsheet that I found.
So you need to be really careful there.
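One hedged sketch of the data-readiness concern: retrieval should be filtered by document permissions before anything reaches the model, so the intern's budget question can never surface the salary spreadsheet. The ACL fields and group names are illustrative.

```python
def visible_docs(docs, user_groups):
    """Retrieve only over documents the asking user could open anyway."""
    return [d for d in docs if d["allowed_groups"] & user_groups]

docs = [
    {"text": "Q2 marketing budget plan ...", "allowed_groups": {"marketing"}},
    {"text": "Executive salary spreadsheet ...", "allowed_groups": {"finance"}},
]

# A marketing intern's query is answered only from marketing-visible files;
# the salary spreadsheet never reaches the model's context.
print(visible_docs(docs, {"marketing"}))
```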
I think we set this platform up to be a
secure, safe alternative to sort of everything
(30:36):
out there where we're not trying to monetize your data. We're trying to put you
in the AI business, where you have very granular control.
And a big part of that, and, you know, it's a lot more
work on our end, is to keep the data separate from the
models, so you can very easily switch models
in the future. Right. Or as you iterate or as you build.
(30:59):
Another piece of it, I think, is just
the use cases. So people are very quick to make publicly
facing applications with generative AI, and that's how
you get problems like the GM dealership or
the Chevy dealership that, you know, like, people
are getting it to generate Python scripts and sell them Teslas
(31:22):
and Elon Musk is screenshotting it. And then all of a sudden GM's got a
big PR problem because some random, you know, Chevy dealership, I don't
know where, right, like, set up a chatbot on their public-facing
website. Like, you
need control over what users are doing,
and you need unification especially in, like, a customer
(31:44):
service environment. So that's why, you know, we build things one to
many where you can control things at the MSP level or at the
admin level and edit prompts as a whole. But
also, you know, educating users, like, this is your first
draft, you should review this before you send it publicly.
Start with use cases like marketing and maybe not, you know,
(32:06):
entering code into your, right, production,
Linux terminal that, you know, hasn't been tested or vetted.
So it's, you know, it's a way to do things faster and then a way
to do things a little better. But, like, we have one product that is
external facing, for example. It's a phone customer service agent that you
can call and it can take notes, transfer you, create a ticket,
(32:30):
you know, send email, whatever. And the amount
of guardrails we have to put in to, you know, end the
conversation if it starts drifting, like, it's a lot.
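A minimal sketch of the drift guardrail described for the phone agent: classify each caller message first and end the conversation when it leaves the allowed support topics. The topic list and the classify_topic call (which could itself be an LLM) are assumptions, not the product's actual safeguards.

```python
ALLOWED_TOPICS = {"billing", "ticket", "transfer", "hours"}

def guarded_reply(user_msg, classify_topic, complete):
    topic = classify_topic(user_msg)  # e.g. a cheap classifier or another LLM call
    if topic not in ALLOWED_TOPICS:
        # End the call instead of letting the conversation drift
        return "I can only help with support questions. Ending the call now."
    return complete(user_msg)
```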
And there's, like, AI, large language model security
is just beginning, because these tools were used
internally for so long. Right. So I think that there's gonna be,
(32:52):
like, there's gonna be browser plugins that pop up to prevent people from
putting social security numbers or company information into a ChatGPT.
Like, we've got a ways to go on all this.
Yeah. No. I suppose part of the issue of
such a fast-evolving field is, like, you know, this stuff takes time, and
the guardrails and safety have to evolve as it
(33:15):
evolves as well. Right? Yeah. I mean, it's like
imagine that we took, you know, like,
it's almost like, email
was not set up for security, and here we are still using the
same version of it. I mean, yeah, we've improved a
little bit. Right? Still, as evidenced by the
(33:38):
amount of phishing and, you know, user training that's still required. It's an
inherently unsafe system for sure. Yeah. Like, it
wasn't necessarily designed with bad actors in mind. And, to some extent,
I would argue that the initial large language models
weren't, you know, designed with that either. And now there's talk of,
you know, we need a sovereign AI,
(34:00):
like, or AI infrastructure should be
regulated at the national level, where you have, you know, say,
the large language models and the GPUs that run them that are,
you know, dictated by the federal government and, you know, maybe the Saudi
government has a different one. And things will probably go that
way, and I'm not, you know, like, I'm in no position to
(34:23):
comment. Maybe on, I don't know, like, running an AI company, but that's about it. I could
have an opinion, I guess. Yeah. Yeah. I should say I'm
not an expert in foreign policy and... Sure. Yep.
Like, civil liberties and
copyright and all of that. But one thing I know for sure
is the technology will
(34:46):
evolve faster than the regulations. Agreed. So it
almost doesn't matter. Yep. Like, OpenAI is
getting sued by the New York Times, but everybody's already using them. Right. I
think. Yeah. I mean, like, you know, like, Meta
and all of the social platforms have been around for
a long time. They still haven't figured out how to regulate them either. Right? So,
(35:07):
yeah, I don't think they're gonna be... I mean, honestly, maybe they're quicker with
AI. At least they're trying to work on some regulations around it,
but agreement on how that's actually gonna play out, I think, will be
pretty messy for a little while yet. I guess
that naturally leads to, you know, prediction time, right?
Predictions are terrible because, you know, if you get them wrong, then people may
(35:30):
remember. If you get them right, it'll seem obvious in
hindsight. But I would love, since you've put a lot of time
into this and you're passionate about the field, I'm curious about sort of
how you feel AI, writ
large, will impact and change the MSP
business model, sort of, in the next 5 to 10 years? That's
(35:52):
a... I think in 5 to 10 years,
MSPs will be managing, potentially
tuning, large language models and the
relevant infrastructure around them, like vector databases,
for the majority of their customers. And that
may be, like, every small business has
(36:14):
some version of a customized large language model that they
use inside their business. And the
MSPs are managing the infrastructure for that. I also think that there's going to
be a movement back to private cloud and on prem for some of this
stuff. And the most qualified
people I know to go set up a server room or a server
(36:37):
rack with a bunch of GPUs and virtualization and,
you know, manage the
ongoing maintenance of it, are MSPs. And I
also think that you'll start to see super winners in
(37:00):
in in in industry pop up. So I don't know if it'll
happen in MSPs, but say one of these really big MSP platforms
gets really smart about organizing their data, and
they create, you know, this thing that everybody's been trying
to sell. Like, the early AI people have been, you know, dreaming of
this AI that solves tickets for you automatically and can do your
(37:23):
job. Like, say somebody figures that out and then starts
licensing that out to all the other MSPs. And they're no longer in the MSP
business anymore. Right. They sell all that off. They're just in the data
business and own the dataset that, you know, can solve tickets. Like, that
stuff's going to happen in every industry,
I think. Yep. Yeah. I think there's
(37:44):
it'll be wild to sort of see. Like, there's sort of, like, some
of these things I think are somewhat predictable, and then there's gonna be
absolute wildcards where, in, like, 3 to 5 years especially, we're like,
oh, interesting, did not see that coming. Right? So
it would be a fun space to watch for sure.
Awesome. Well, this has been great, Jimmy. I appreciate your time and, best of luck
(38:06):
with this. I think it'll be really fascinating to see how this evolves
and, really, really happy to see you in particular as one of the
people sort of at the head of the spear
for the MSP industry in evolving the AI workload
for them. Thank you so much. Really appreciate it. Really
enjoyed the conversation today. Alright. Take care.